Merge /spare/repo/linux-2.6/

Jeff Garzik 2005-09-08 05:37:58 -04:00
commit 5a2cec83a9
886 changed files with 36125 additions and 23072 deletions


@@ -96,7 +96,7 @@
 <chapter id="pubfunctions">
 <title>Public Functions Provided</title>
-!Earch/i386/kernel/mca.c
+!Edrivers/mca/mca-legacy.c
 </chapter>
 <chapter id="dmafunctions">


@@ -605,12 +605,13 @@ is in the ipmi_poweroff module. When the system requests a powerdown,
 it will send the proper IPMI commands to do this. This is supported on
 several platforms.
 
-There is a module parameter named "poweroff_control" that may either be zero
-(do a power down) or 2 (do a power cycle, power the system off, then power
-it on in a few seconds). Setting ipmi_poweroff.poweroff_control=x will do
-the same thing on the kernel command line. The parameter is also available
-via the proc filesystem in /proc/ipmi/poweroff_control. Note that if the
-system does not support power cycling, it will always to the power off.
+There is a module parameter named "poweroff_powercycle" that may
+either be zero (do a power down) or non-zero (do a power cycle, power
+the system off, then power it on in a few seconds). Setting
+ipmi_poweroff.poweroff_control=x will do the same thing on the kernel
+command line. The parameter is also available via the proc filesystem
+in /proc/sys/dev/ipmi/poweroff_powercycle. Note that if the system
+does not support power cycling, it will always do the power off.
 
 Note that if you have ACPI enabled, the system will prefer using ACPI to
 power off.


@@ -0,0 +1,112 @@
Using RCU to Protect Dynamic NMI Handlers

Although RCU is usually used to protect read-mostly data structures,
it is possible to use RCU to provide dynamic non-maskable interrupt
handlers, as well as dynamic irq handlers. This document describes
how to do this, drawing loosely from Zwane Mwaikambo's NMI-timer
work in "arch/i386/oprofile/nmi_timer_int.c" and in
"arch/i386/kernel/traps.c".

The relevant pieces of code are listed below, each followed by a
brief explanation.

	static int dummy_nmi_callback(struct pt_regs *regs, int cpu)
	{
		return 0;
	}

The dummy_nmi_callback() function is a "dummy" NMI handler that does
nothing, but returns zero, thus saying that it did nothing, allowing
the NMI handler to take the default machine-specific action.

	static nmi_callback_t nmi_callback = dummy_nmi_callback;

This nmi_callback variable is a global function pointer to the current
NMI handler.

	fastcall void do_nmi(struct pt_regs * regs, long error_code)
	{
		int cpu;

		nmi_enter();

		cpu = smp_processor_id();
		++nmi_count(cpu);

		if (!rcu_dereference(nmi_callback)(regs, cpu))
			default_do_nmi(regs);

		nmi_exit();
	}

The do_nmi() function processes each NMI. It first disables preemption
in the same way that a hardware irq would, then increments the per-CPU
count of NMIs. It then invokes the NMI handler stored in the nmi_callback
function pointer. If this handler returns zero, do_nmi() invokes the
default_do_nmi() function to handle a machine-specific NMI. Finally,
preemption is restored.

Strictly speaking, rcu_dereference() is not needed, since this code runs
only on i386, which does not need rcu_dereference() anyway. However,
it is a good documentation aid, particularly for anyone attempting to
do something similar on Alpha.

Quick Quiz: Why might the rcu_dereference() be necessary on Alpha,
given that the code referenced by the pointer is read-only?

Back to the discussion of NMI and RCU...

	void set_nmi_callback(nmi_callback_t callback)
	{
		rcu_assign_pointer(nmi_callback, callback);
	}

The set_nmi_callback() function registers an NMI handler. Note that any
data that is to be used by the callback must be initialized up -before-
the call to set_nmi_callback(). On architectures that do not order
writes, the rcu_assign_pointer() ensures that the NMI handler sees the
initialized values.

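By way of illustration, the caller side of a registration might look
like the following sketch. The names my_nmi_data, my_nmi_handler,
init_my_nmi_data() and account_nmi_event() are hypothetical and used
only for this example; they are not part of the i386 code this
document describes.

	static struct my_nmi_data *my_nmi_data;	/* hypothetical state */

	static int my_nmi_handler(struct pt_regs *regs, int cpu)
	{
		/* Safe: my_nmi_data was fully initialized before
		 * set_nmi_callback() published the handler via
		 * rcu_assign_pointer(). */
		account_nmi_event(my_nmi_data, cpu);	/* hypothetical */
		return 1;	/* nonzero: we handled this NMI */
	}

	...
	my_nmi_data = init_my_nmi_data();	/* initialize -before- registering */
	set_nmi_callback(my_nmi_handler);
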
	void unset_nmi_callback(void)
	{
		rcu_assign_pointer(nmi_callback, dummy_nmi_callback);
	}

This function unregisters an NMI handler, restoring the original
dummy_nmi_callback(). However, there may well be an NMI handler
currently executing on some other CPU. We therefore cannot free
up any data structures used by the old NMI handler until execution
of it completes on all other CPUs.

One way to accomplish this is via synchronize_sched(), perhaps as
follows:

	unset_nmi_callback();
	synchronize_sched();
	kfree(my_nmi_data);

This works because synchronize_sched() blocks until all CPUs complete
any preemption-disabled segments of code that they were executing.
Since NMI handlers disable preemption, synchronize_sched() is guaranteed
not to return until all ongoing NMI handlers exit. It is therefore safe
to free up the handler's data as soon as synchronize_sched() returns.

Answer to Quick Quiz

	Why might the rcu_dereference() be necessary on Alpha, given
	that the code referenced by the pointer is read-only?

	Answer: The caller to set_nmi_callback() might well have
		initialized some data that is to be used by the
		new NMI handler. In this case, the rcu_dereference()
		would be needed, because otherwise a CPU that received
		an NMI just after the new handler was set might see
		the pointer to the new NMI handler, but the old
		pre-initialized version of the handler's data.

		More important, the rcu_dereference() makes it clear
		to someone reading the code that the pointer is being
		protected by RCU.


@@ -68,7 +68,8 @@ it a better device citizen. Further thanks to Joel Katz
 Porfiri Claudio <C.Porfiri@nisms.tei.ericsson.se> for patches
 to make the driver work with the older CDU-510/515 series, and
 Heiko Eissfeldt <heiko@colossus.escape.de> for pointing out that
-the verify_area() checks were ignoring the results of said checks.
+the verify_area() checks were ignoring the results of said checks
+(note: verify_area() has since been replaced by access_ok()).
 
 (Acknowledgments from Ron Jeppesen in the 0.3 release:)
 Thanks to Corey Minyard who wrote the original CDU-31A driver on which


@@ -60,6 +60,18 @@ all of the cpus in the system. This removes any overhead due to
 load balancing code trying to pull tasks outside of the cpu exclusive
 cpuset only to be prevented by the tasks' cpus_allowed mask.
 
+A cpuset that is mem_exclusive restricts kernel allocations for
+page, buffer and other data commonly shared by the kernel across
+multiple users. All cpusets, whether mem_exclusive or not, restrict
+allocations of memory for user space. This enables configuring a
+system so that several independent jobs can share common kernel
+data, such as file system pages, while isolating each jobs user
+allocation in its own cpuset. To do this, construct a large
+mem_exclusive cpuset to hold all the jobs, and construct child,
+non-mem_exclusive cpusets for each individual job. Only a small
+amount of typical kernel memory, such as requests from interrupt
+handlers, is allowed to be taken outside even a mem_exclusive cpuset.
+
 User level code may create and destroy cpusets by name in the cpuset
 virtual file system, manage the attributes and permissions of these
 cpusets and which CPUs and Memory Nodes are assigned to each cpuset,

Documentation/dcdbas.txt

@@ -0,0 +1,91 @@
Overview

The Dell Systems Management Base Driver provides a sysfs interface for
systems management software such as Dell OpenManage to perform system
management interrupts and host control actions (system power cycle or
power off after OS shutdown) on certain Dell systems.
Dell OpenManage requires this driver on the following Dell PowerEdge systems:
300, 1300, 1400, 400SC, 500SC, 1500SC, 1550, 600SC, 1600SC, 650, 1655MC,
700, and 750. Other Dell software such as the open source libsmbios project
is expected to make use of this driver, and it may include the use of this
driver on other Dell systems.
The Dell libsmbios project aims towards providing access to as much BIOS
information as possible. See http://linux.dell.com/libsmbios/main/ for
more information about the libsmbios project.

System Management Interrupt

On some Dell systems, systems management software must access certain
management information via a system management interrupt (SMI). The SMI data
buffer must reside in 32-bit address space, and the physical address of the
buffer is required for the SMI. The driver maintains the memory required for
the SMI and provides a way for the application to generate the SMI.
The driver creates the following sysfs entries for systems management
software to perform these system management interrupts:
/sys/devices/platform/dcdbas/smi_data
/sys/devices/platform/dcdbas/smi_data_buf_phys_addr
/sys/devices/platform/dcdbas/smi_data_buf_size
/sys/devices/platform/dcdbas/smi_request
Systems management software must perform the following steps to execute
a SMI using this driver:
1) Lock smi_data.
2) Write system management command to smi_data.
3) Write "1" to smi_request to generate a calling interface SMI or
"2" to generate a raw SMI.
4) Read system management command response from smi_data.
5) Unlock smi_data.
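
For illustration only, the sequence above could be driven from a small
C helper along these lines. The locking convention for step 1 is not
spelled out in this document, so the advisory flock() call is an
assumption of this sketch, and the contents of the command and
response buffers are whatever the systems management software defines:

	#include <fcntl.h>
	#include <unistd.h>
	#include <sys/file.h>

	static int do_calling_interface_smi(const void *cmd, size_t cmd_len,
					    void *resp, size_t resp_len)
	{
		int data = open("/sys/devices/platform/dcdbas/smi_data", O_RDWR);
		int req = open("/sys/devices/platform/dcdbas/smi_request", O_WRONLY);
		int ret = -1;

		if (data < 0 || req < 0)
			goto out;
		if (flock(data, LOCK_EX))	/* step 1: lock smi_data (assumed advisory lock) */
			goto out;
		if (write(data, cmd, cmd_len) != (ssize_t)cmd_len)	/* step 2 */
			goto unlock;
		if (write(req, "1", 1) != 1)	/* step 3: calling interface SMI */
			goto unlock;
		if (lseek(data, 0, SEEK_SET) == 0 &&
		    read(data, resp, resp_len) >= 0)	/* step 4: read response */
			ret = 0;
	unlock:
		flock(data, LOCK_UN);		/* step 5: unlock smi_data */
	out:
		if (data >= 0)
			close(data);
		if (req >= 0)
			close(req);
		return ret;
	}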

Host Control Action

Dell OpenManage supports a host control feature that allows the administrator
to perform a power cycle or power off of the system after the OS has finished
shutting down. On some Dell systems, this host control feature requires that
a driver perform a SMI after the OS has finished shutting down.
The driver creates the following sysfs entries for systems management software
to schedule the driver to perform a power cycle or power off host control
action after the system has finished shutting down:
/sys/devices/platform/dcdbas/host_control_action
/sys/devices/platform/dcdbas/host_control_smi_type
/sys/devices/platform/dcdbas/host_control_on_shutdown
Dell OpenManage performs the following steps to execute a power cycle or
power off host control action using this driver:
1) Write host control action to be performed to host_control_action.
2) Write type of SMI that driver needs to perform to host_control_smi_type.
3) Write "1" to host_control_on_shutdown to enable host control action.
4) Initiate OS shutdown.
(Driver will perform host control SMI when it is notified that the OS
has finished shutting down.)

Host Control SMI Type

The following table shows the value to write to host_control_smi_type to
perform a power cycle or power off host control action:
PowerEdge System Host Control SMI Type
---------------- ---------------------
300 HC_SMITYPE_TYPE1
1300 HC_SMITYPE_TYPE1
1400 HC_SMITYPE_TYPE2
500SC HC_SMITYPE_TYPE2
1500SC HC_SMITYPE_TYPE2
1550 HC_SMITYPE_TYPE2
600SC HC_SMITYPE_TYPE2
1600SC HC_SMITYPE_TYPE2
650 HC_SMITYPE_TYPE2
1655MC HC_SMITYPE_TYPE2
700 HC_SMITYPE_TYPE3
750 HC_SMITYPE_TYPE3


@@ -0,0 +1,74 @@
Purpose:
Demonstrate the usage of the new open sourced rbu (Remote BIOS Update) driver
for updating BIOS images on Dell servers and desktops.

Scope:
This document discusses the functionality of the rbu driver only.
It does not cover the support needed from applications to enable the BIOS to
update itself with the image downloaded into memory.

Overview:
This driver works with Dell OpenManage or Dell Update Packages for updating
the BIOS on Dell servers (starting from servers sold since 1999), desktops
and notebooks (starting from those sold in 2005).
Please go to http://support.dell.com and register; there you can find
information on OpenManage and Dell Update Packages (DUP).

The dell_rbu driver supports BIOS updates using either a monolithic image
or a packetized image. In the monolithic case the driver allocates a
contiguous chunk of physical pages to hold the BIOS image. In the
packetized case the application using the driver breaks the image into
packets of a fixed size and the driver places each packet in contiguous
physical memory. The driver also maintains a linked list of packets for
reading them back.

If the dell_rbu driver is unloaded, all the allocated memory is freed.

The rbu driver needs a companion application which will inform the BIOS to
enable the update on the next system reboot.

The user should not unload the rbu driver after downloading the BIOS image
or while updating.

Loading the driver creates the following entries under the /sys file
system:

/sys/class/firmware/dell_rbu/loading
/sys/class/firmware/dell_rbu/data
/sys/devices/platform/dell_rbu/image_type
/sys/devices/platform/dell_rbu/data

The driver supports two update mechanisms: monolithic and packetized.
Which mechanism can be used depends upon the BIOS currently running on
the system. Most Dell systems support a monolithic update, where the
BIOS image is copied to a single contiguous block of physical memory.
With the packet mechanism, that single block can instead be broken into
smaller chunks of contiguous memory and the BIOS image is scattered
across these packets.

By default the driver uses the monolithic update type. This can be
changed to packet mode at driver load time by specifying the load
parameter image_type=packet, or later as follows:

echo packet > /sys/devices/platform/dell_rbu/image_type

Do the steps below to download the BIOS image:

1) echo 1 > /sys/class/firmware/dell_rbu/loading
2) cp bios_image.hdr /sys/class/firmware/dell_rbu/data
3) echo 0 > /sys/class/firmware/dell_rbu/loading

The /sys/class/firmware/dell_rbu/ entries will remain until the
following is done:

echo -1 > /sys/class/firmware/dell_rbu/loading

Until this step is completed the driver cannot be unloaded.

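As a rough illustration, the same monolithic download sequence can be
driven from C instead of the shell; the sysfs paths are the ones listed
above, while the image path and error handling are purely illustrative:

	#include <stdio.h>

	#define RBU_LOADING "/sys/class/firmware/dell_rbu/loading"
	#define RBU_DATA    "/sys/class/firmware/dell_rbu/data"

	static int write_str(const char *path, const char *val)
	{
		FILE *f = fopen(path, "w");

		if (!f)
			return -1;
		fputs(val, f);
		return fclose(f);
	}

	/* Stream the BIOS image into the firmware-class data file (step 2). */
	static int copy_image(const char *image_path)
	{
		FILE *in = fopen(image_path, "rb");
		FILE *out = fopen(RBU_DATA, "wb");
		char buf[4096];
		size_t n;
		int ret = (in && out) ? 0 : -1;

		while (!ret && (n = fread(buf, 1, sizeof(buf), in)) > 0)
			if (fwrite(buf, 1, n, out) != n)
				ret = -1;
		if (in)
			fclose(in);
		if (out && fclose(out))
			ret = -1;
		return ret;
	}

	static int download_bios_image(const char *image_path)
	{
		if (write_str(RBU_LOADING, "1"))	/* step 1 */
			return -1;
		if (copy_image(image_path))		/* step 2: cp bios_image.hdr */
			return -1;
		return write_str(RBU_LOADING, "0");	/* step 3 */
		/* when completely finished: write_str(RBU_LOADING, "-1") */
	}
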
The driver also provides a read-only /sys/devices/platform/dell_rbu/data
file to read back the downloaded image. This is useful with the packet
update mechanism, where steps 1, 2 and 3 above are repeated for every
packet. By reading the /sys/devices/platform/dell_rbu/data file, all of
the downloaded packet data can be verified in a single file.
The packets are arranged in this file one after the other, in FIFO order.

NOTE:
This driver requires a patch for firmware_class.c which adds the
request_firmware_nowait_nohotplug function needed for it to work.
Also, after downloading the BIOS image, a user mode application needs to
execute code which sends the BIOS update request message to the BIOS, so
that on the next reboot the BIOS knows about the newly downloaded image
and updates itself.
Also, don't unload the rbu driver while the image still has to be applied.


@@ -16,7 +16,7 @@ Enable the following options:
 "Device drivers" => "Multimedia devices"
  => "Video For Linux" => "BT848 Video For Linux"
 "Device drivers" => "Multimedia devices" => "Digital Video Broadcasting Devices"
- => "DVB for Linux" "DVB Core Support" "Nebula/Pinnacle PCTV/TwinHan PCI Cards"
+ => "DVB for Linux" "DVB Core Support" "BT8xx based PCI cards"
 
 3) Loading Modules, described by two approaches
 ===============================================


@@ -7,7 +7,7 @@ To protect itself the kernel has to verify this address.
 In older versions of Linux this was done with the
 
 int verify_area(int type, const void * addr, unsigned long size)
 
-function.
+function (which has since been replaced by access_ok()).
 
 This function verified that the memory area starting at address
 addr and of size size was accessible for the operation specified


@@ -51,14 +51,6 @@ Who: Adrian Bunk <bunk@stusta.de>
 
 ---------------------------
 
-What: register_ioctl32_conversion() / unregister_ioctl32_conversion()
-When: April 2005
-Why: Replaced by ->compat_ioctl in file_operations and other method
-     vecors.
-Who: Andi Kleen <ak@muc.de>, Christoph Hellwig <hch@lst.de>
-
----------------------------
-
 What: RCU API moves to EXPORT_SYMBOL_GPL
 When: April 2006
 Files: include/linux/rcupdate.h, kernel/rcupdate.c
@@ -74,14 +66,6 @@ Who: Paul E. McKenney <paulmck@us.ibm.com>
 
 ---------------------------
 
-What: remove verify_area()
-When: July 2006
-Files: Various uaccess.h headers.
-Why: Deprecated and redundant. access_ok() should be used instead.
-Who: Jesper Juhl <juhl-lkml@dif.dk>
-
----------------------------
-
 What: IEEE1394 Audio and Music Data Transmission Protocol driver,
       Connection Management Procedures driver
 When: November 2005


@@ -0,0 +1,362 @@
relayfs - a high-speed data relay filesystem
============================================

relayfs is a filesystem designed to provide an efficient mechanism for
tools and facilities to relay large and potentially sustained streams
of data from kernel space to user space.

The main abstraction of relayfs is the 'channel'. A channel consists
of a set of per-cpu kernel buffers each represented by a file in the
relayfs filesystem. Kernel clients write into a channel using
efficient write functions which automatically log to the current cpu's
channel buffer. User space applications mmap() the per-cpu files and
retrieve the data as it becomes available.

The format of the data logged into the channel buffers is completely
up to the relayfs client; relayfs does however provide hooks which
allow clients to impose some structure on the buffer data. Nor does
relayfs implement any form of data filtering - this also is left to
the client. The purpose is to keep relayfs as simple as possible.
This document provides an overview of the relayfs API. The details of
the function parameters are documented along with the functions in the
filesystem code - please see that for details.
Semantics
=========
Each relayfs channel has one buffer per CPU, each buffer has one or
more sub-buffers. Messages are written to the first sub-buffer until
it is too full to contain a new message, in which case it is written
to the next (if available). Messages are never split across
sub-buffers. At this point, userspace can be notified so it empties
the first sub-buffer, while the kernel continues writing to the next.
When notified that a sub-buffer is full, the kernel knows how many
bytes of it are padding i.e. unused. Userspace can use this knowledge
to copy only valid data.
After copying it, userspace can notify the kernel that a sub-buffer
has been consumed.
relayfs can operate in a mode where it will overwrite data not yet
collected by userspace, and not wait for it to consume it.
relayfs itself does not provide for communication of such data between
userspace and kernel, allowing the kernel side to remain simple and not
impose a single interface on userspace. It does provide a separate
helper though, described below.
klog, relay-app & librelay
==========================
relayfs itself is ready to use, but to make things easier, two
additional systems are provided. klog is a simple wrapper to make
writing formatted text or raw data to a channel simpler, regardless of
whether a channel to write into exists or not, or whether relayfs is
compiled into the kernel or is configured as a module. relay-app is
the kernel counterpart of userspace librelay.c, combined these two
files provide glue to easily stream data to disk, without having to
bother with housekeeping. klog and relay-app can be used together,
with klog providing high-level logging functions to the kernel and
relay-app taking care of kernel-user control and disk-logging chores.
It is possible to use relayfs without relay-app & librelay, but you'll
have to implement communication between userspace and kernel, allowing
both to convey the state of buffers (full, empty, amount of padding).
klog, relay-app and librelay can be found in the relay-apps tarball on
http://relayfs.sourceforge.net
The relayfs user space API
==========================
relayfs implements basic file operations for user space access to
relayfs channel buffer data. Here are the file operations that are
available and some comments regarding their behavior:
open() enables user to open an _existing_ buffer.
mmap() results in channel buffer being mapped into the caller's
memory space. Note that you can't do a partial mmap - you must
map the entire file, which is NRBUF * SUBBUFSIZE.
read() read the contents of a channel buffer. The bytes read are
'consumed' by the reader i.e. they won't be available again
to subsequent reads. If the channel is being used in
no-overwrite mode (the default), it can be read at any time
even if there's an active kernel writer. If the channel is
being used in overwrite mode and there are active channel
writers, results may be unpredictable - users should make
sure that all logging to the channel has ended before using
read() with overwrite mode.
poll() POLLIN/POLLRDNORM/POLLERR supported. User applications are
notified when sub-buffer boundaries are crossed.
close() decrements the channel buffer's refcount. When the refcount
reaches 0 i.e. when no process or kernel client has the buffer
open, the channel buffer is freed.
In order for a user application to make use of relayfs files, the
relayfs filesystem must be mounted. For example,
mount -t relayfs relayfs /mnt/relay
NOTE: relayfs doesn't need to be mounted for kernel clients to create
or use channels - it only needs to be mounted when user space
applications need access to the buffer data.
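
As a rough illustration of the user space side, a minimal consumer that
drains one per-cpu buffer file using the consuming read() semantics
described above might look like this (the mount point /mnt/relay and
the file name example0 are assumptions for the example, not names
defined by relayfs):

	#include <stdio.h>
	#include <stdlib.h>
	#include <fcntl.h>
	#include <unistd.h>

	int main(void)
	{
		char buf[4096];
		ssize_t n;
		int fd = open("/mnt/relay/example0", O_RDONLY);	/* cpu 0's buffer file */

		if (fd < 0) {
			perror("open");
			return EXIT_FAILURE;
		}
		/* Each successful read() consumes the returned bytes;
		 * here they are simply copied to stdout. */
		while ((n = read(fd, buf, sizeof(buf))) > 0)
			fwrite(buf, 1, n, stdout);
		close(fd);
		return 0;
	}
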
The relayfs kernel API
======================
Here's a summary of the API relayfs provides to in-kernel clients:
channel management functions:
relay_open(base_filename, parent, subbuf_size, n_subbufs,
callbacks)
relay_close(chan)
relay_flush(chan)
relay_reset(chan)
relayfs_create_dir(name, parent)
relayfs_remove_dir(dentry)
channel management typically called on instigation of userspace:
relay_subbufs_consumed(chan, cpu, subbufs_consumed)
write functions:
relay_write(chan, data, length)
__relay_write(chan, data, length)
relay_reserve(chan, length)
callbacks:
subbuf_start(buf, subbuf, prev_subbuf, prev_padding)
buf_mapped(buf, filp)
buf_unmapped(buf, filp)
helper functions:
relay_buf_full(buf)
subbuf_start_reserve(buf, length)
Creating a channel
------------------
relay_open() is used to create a channel, along with its per-cpu
channel buffers. Each channel buffer will have an associated file
created for it in the relayfs filesystem, which can be opened and
mmapped from user space if desired. The files are named
basename0...basenameN-1 where N is the number of online cpus, and by
default will be created in the root of the filesystem. If you want a
directory structure to contain your relayfs files, you can create it
with relayfs_create_dir() and pass the parent directory to
relay_open(). Clients are responsible for cleaning up any directory
structure they create when the channel is closed - use
relayfs_remove_dir() for that.
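
For example, a kernel client might create a directory and a channel
along the following lines. This is only a sketch: the struct names
(rchan, rchan_callbacks), the directory name and the sub-buffer sizing
are assumptions based on the API summary above, and subbuf_start refers
to a callback like the ones shown later in this document:

	static struct rchan *my_chan;
	static struct dentry *my_dir;

	static struct rchan_callbacks my_callbacks = {
		.subbuf_start	= subbuf_start,
	};

	static int create_my_channel(void)
	{
		/* 8 sub-buffers of 64KiB per cpu, files named cpu0...cpuN-1
		 * inside a "my-app" directory. */
		my_dir = relayfs_create_dir("my-app", NULL);
		if (!my_dir)
			return -ENOMEM;

		my_chan = relay_open("cpu", my_dir, 65536, 8, &my_callbacks);
		if (!my_chan)
			return -ENOMEM;
		return 0;
	}

relay_close(my_chan) would tear this down again, and
relayfs_remove_dir(my_dir) removes the directory, as described under
'Closing a channel' below.
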
The total size of each per-cpu buffer is calculated by multiplying the
number of sub-buffers by the sub-buffer size passed into relay_open().
The idea behind sub-buffers is that they're basically an extension of
double-buffering to N buffers, and they also allow applications to
easily implement random-access-on-buffer-boundary schemes, which can
be important for some high-volume applications. The number and size
of sub-buffers is completely dependent on the application and even for
the same application, different conditions will warrant different
values for these parameters at different times. Typically, the right
values to use are best decided after some experimentation; in general,
though, it's safe to assume that having only 1 sub-buffer is a bad
idea - you're guaranteed to either overwrite data or lose events
depending on the channel mode being used.
Channel 'modes'
---------------
relayfs channels can be used in either of two modes - 'overwrite' or
'no-overwrite'. The mode is entirely determined by the implementation
of the subbuf_start() callback, as described below. In 'overwrite'
mode, also known as 'flight recorder' mode, writes continuously cycle
around the buffer and will never fail, but will unconditionally
overwrite old data regardless of whether it's actually been consumed.
In no-overwrite mode, writes will fail i.e. data will be lost, if the
number of unconsumed sub-buffers equals the total number of
sub-buffers in the channel. It should be clear that if there is no
consumer or if the consumer can't consume sub-buffers fast enough,
data will be lost in either case; the only difference is whether data
is lost from the beginning or the end of a buffer.
As explained above, a relayfs channel is made up of one or more
per-cpu channel buffers, each implemented as a circular buffer
subdivided into one or more sub-buffers. Messages are written into
the current sub-buffer of the channel's current per-cpu buffer via the
write functions described below. Whenever a message can't fit into
the current sub-buffer, because there's no room left for it, the
client is notified via the subbuf_start() callback that a switch to a
new sub-buffer is about to occur. The client uses this callback to 1)
initialize the next sub-buffer if appropriate 2) finalize the previous
sub-buffer if appropriate and 3) return a boolean value indicating
whether or not to actually go ahead with the sub-buffer switch.

To implement 'no-overwrite' mode, the kernel client would provide
an implementation of the subbuf_start() callback something like the
following:
static int subbuf_start(struct rchan_buf *buf,
void *subbuf,
void *prev_subbuf,
unsigned int prev_padding)
{
if (prev_subbuf)
*((unsigned *)prev_subbuf) = prev_padding;
if (relay_buf_full(buf))
return 0;
subbuf_start_reserve(buf, sizeof(unsigned int));
return 1;
}
If the current buffer is full i.e. all sub-buffers remain unconsumed,
the callback returns 0 to indicate that the buffer switch should not
occur yet i.e. until the consumer has had a chance to read the current
set of ready sub-buffers. For the relay_buf_full() function to make
sense, the consumer is responsible for notifying relayfs when
sub-buffers have been consumed via relay_subbufs_consumed(). Any
subsequent attempts to write into the buffer will again invoke the
subbuf_start() callback with the same parameters; only when the
consumer has consumed one or more of the ready sub-buffers will
relay_buf_full() return 0, in which case the buffer switch can
continue.
The implementation of the subbuf_start() callback for 'overwrite' mode
would be very similar:
static int subbuf_start(struct rchan_buf *buf,
void *subbuf,
void *prev_subbuf,
unsigned int prev_padding)
{
if (prev_subbuf)
*((unsigned *)prev_subbuf) = prev_padding;
subbuf_start_reserve(buf, sizeof(unsigned int));
return 1;
}
In this case, the relay_buf_full() check is meaningless and the
callback always returns 1, causing the buffer switch to occur
unconditionally. It's also meaningless for the client to use the
relay_subbufs_consumed() function in this mode, as it's never
consulted.
The default subbuf_start() implementation, used if the client doesn't
define any callbacks, or doesn't define the subbuf_start() callback,
implements the simplest possible 'no-overwrite' mode i.e. it does
nothing but return 0.
Header information can be reserved at the beginning of each sub-buffer
by calling the subbuf_start_reserve() helper function from within the
subbuf_start() callback. This reserved area can be used to store
whatever information the client wants. In the example above, room is
reserved in each sub-buffer to store the padding count for that
sub-buffer. This is filled in for the previous sub-buffer in the
subbuf_start() implementation; the padding value for the previous
sub-buffer is passed into the subbuf_start() callback along with a
pointer to the previous sub-buffer, since the padding value isn't
known until a sub-buffer is filled. The subbuf_start() callback is
also called for the first sub-buffer when the channel is opened, to
give the client a chance to reserve space in it. In this case the
previous sub-buffer pointer passed into the callback will be NULL, so
the client should check the value of the prev_subbuf pointer before
writing into the previous sub-buffer.
Writing to a channel
--------------------
kernel clients write data into the current cpu's channel buffer using
relay_write() or __relay_write(). relay_write() is the main logging
function - it uses local_irq_save() to protect the buffer and should be
used if you might be logging from interrupt context. If you know
you'll never be logging from interrupt context, you can use
__relay_write(), which only disables preemption. These functions
don't return a value, so you can't determine whether or not they
failed - the assumption is that you wouldn't want to check a return
value in the fast logging path anyway, and that they'll always succeed
unless the buffer is full and no-overwrite mode is being used, in
which case you can detect a failed write in the subbuf_start()
callback by calling the relay_buf_full() helper function.
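
Continuing the sketch above, logging a fixed-size event from
possibly-interrupt context then reduces to a relay_write() call (the
event structure and timestamp source are illustrative only):

	struct my_event {
		u64	timestamp;
		u32	id;
	};

	static void log_my_event(u32 id)
	{
		struct my_event ev = {
			.timestamp	= get_cycles(),	/* any timestamp source */
			.id		= id,
		};

		/* relay_write() protects the buffer against interrupts
		 * itself, so this is safe from interrupt context. */
		relay_write(my_chan, &ev, sizeof(ev));
	}
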
relay_reserve() is used to reserve a slot in a channel buffer which
can be written to later. This would typically be used in applications
that need to write directly into a channel buffer without having to
stage data in a temporary buffer beforehand. Because the actual write
may not happen immediately after the slot is reserved, applications
using relay_reserve() can keep a count of the number of bytes actually
written, either in space reserved in the sub-buffers themselves or as
a separate array. See the 'reserve' example in the relay-apps tarball
at http://relayfs.sourceforge.net for an example of how this can be
done. Because the write is under control of the client and is
separated from the reserve, relay_reserve() doesn't protect the buffer
at all - it's up to the client to provide the appropriate
synchronization when using relay_reserve().
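
A hedged sketch of the reserve-then-fill pattern described above,
assuming (as the API summary suggests) that relay_reserve() hands back
a pointer into the current sub-buffer, or NULL if nothing could be
reserved:

	static void log_my_event_reserved(u32 id)
	{
		struct my_event *ev;

		ev = relay_reserve(my_chan, sizeof(*ev));
		if (!ev)
			return;		/* nothing reserved, e.g. buffer full */

		/* The caller, not relayfs, must serialize concurrent
		 * writers around this fill-in. */
		ev->timestamp = get_cycles();
		ev->id = id;
	}
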
Closing a channel
-----------------
The client calls relay_close() when it's finished using the channel.
The channel and its associated buffers are destroyed when there are no
longer any references to any of the channel buffers. relay_flush()
forces a sub-buffer switch on all the channel buffers, and can be used
to finalize and process the last sub-buffers before the channel is
closed.
Misc
----
Some applications may want to keep a channel around and re-use it
rather than open and close a new channel for each use. relay_reset()
can be used for this purpose - it resets a channel to its initial
state without reallocating channel buffer memory or destroying
existing mappings. It should however only be called when it's safe to
do so i.e. when the channel isn't currently being written to.
Finally, there are a couple of utility callbacks that can be used for
different purposes. buf_mapped() is called whenever a channel buffer
is mmapped from user space and buf_unmapped() is called when it's
unmapped. The client can use this notification to trigger actions
within the kernel application, such as enabling/disabling logging to
the channel.
Resources
=========
For news, example code, mailing list, etc. see the relayfs homepage:
http://relayfs.sourceforge.net
Credits
=======
The ideas and specs for relayfs came about as a result of discussions
on tracing involving the following:
Michel Dagenais <michel.dagenais@polymtl.ca>
Richard Moore <richardj_moore@uk.ibm.com>
Bob Wisniewski <bob@watson.ibm.com>
Karim Yaghmour <karim@opersys.com>
Tom Zanussi <zanussi@us.ibm.com>
Also thanks to Hubertus Franke for a lot of useful suggestions and bug
reports.


@@ -2,7 +2,7 @@
 ----------------------------
 
 	H. Peter Anvin <hpa@zytor.com>
-	Last update 2002-01-01
+	Last update 2005-09-02
 
 On the i386 platform, the Linux kernel uses a rather complicated boot
 convention. This has evolved partially due to historical aspects, as
@@ -34,6 +34,8 @@ Protocol 2.02: (Kernel 2.4.0-test3-pre3) New command line protocol.
 Protocol 2.03: (Kernel 2.4.18-pre1) Explicitly makes the highest possible
 	initrd address available to the bootloader.
 
+Protocol 2.04: (Kernel 2.6.14) Extend the syssize field to four bytes.
+
 **** MEMORY LAYOUT
@@ -103,10 +105,9 @@ The header looks like:
 Offset	Proto	Name		Meaning
 /Size
 
-01F1/1	ALL	setup_sects	The size of the setup in sectors
+01F1/1	ALL(1	setup_sects	The size of the setup in sectors
 01F2/2	ALL	root_flags	If set, the root is mounted readonly
-01F4/2	ALL	syssize		DO NOT USE - for bootsect.S use only
-01F6/2	ALL	swap_dev	DO NOT USE - obsolete
+01F4/4	2.04+(2	syssize		The size of the 32-bit code in 16-byte paras
 01F8/2	ALL	ram_size	DO NOT USE - for bootsect.S use only
 01FA/2	ALL	vid_mode	Video mode control
 01FC/2	ALL	root_dev	Default root device number
@@ -129,8 +130,12 @@ Offset	Proto	Name		Meaning
 0228/4	2.02+	cmd_line_ptr	32-bit pointer to the kernel command line
 022C/4	2.03+	initrd_addr_max	Highest legal initrd address
 
-For backwards compatibility, if the setup_sects field contains 0, the
-real value is 4.
+(1) For backwards compatibility, if the setup_sects field contains 0, the
+    real value is 4.
+
+(2) For boot protocol prior to 2.04, the upper two bytes of the syssize
+    field are unusable, which means the size of a bzImage kernel
+    cannot be determined.
 
 If the "HdrS" (0x53726448) magic number is not found at offset 0x202,
 the boot protocol version is "old". Loading an old kernel, the
@@ -230,12 +235,16 @@ loader to communicate with the kernel. Some of its options are also
 relevant to the boot loader itself, see "special command line options"
 below.
 
-The kernel command line is a null-terminated string up to 255
-characters long, plus the final null.
+The kernel command line is a null-terminated string currently up to
+255 characters long, plus the final null. A string that is too long
+will be automatically truncated by the kernel, a boot loader may allow
+a longer command line to be passed to permit future kernels to extend
+this limit.
 
 If the boot protocol version is 2.02 or later, the address of the
 kernel command line is given by the header field cmd_line_ptr (see
-above.)
+above.) This address can be anywhere between the end of the setup
+heap and 0xA0000.
 
 If the protocol version is *not* 2.02 or higher, the kernel
 command line is entered using the following protocol:
@@ -255,7 +264,7 @@ command line is entered using the following protocol:
 **** SAMPLE BOOT CONFIGURATION
 
 As a sample configuration, assume the following layout of the real
-mode segment:
+mode segment (this is a typical, and recommended layout):
 
 	0x0000-0x7FFF	Real mode kernel
 	0x8000-0x8FFF	Stack and heap
@@ -312,9 +321,9 @@ Such a boot loader should enter the following fields in the header:
 **** LOADING THE REST OF THE KERNEL
 
-The non-real-mode kernel starts at offset (setup_sects+1)*512 in the
-kernel file (again, if setup_sects == 0 the real value is 4.) It
-should be loaded at address 0x10000 for Image/zImage kernels and
+The 32-bit (non-real-mode) kernel starts at offset (setup_sects+1)*512
+in the kernel file (again, if setup_sects == 0 the real value is 4.)
+It should be loaded at address 0x10000 for Image/zImage kernels and
 0x100000 for bzImage kernels.
 
 The kernel is a bzImage kernel if the protocol >= 2.00 and the 0x01


@@ -1174,6 +1174,11 @@ running once the system is up.
 			New name for the ramdisk parameter.
 			See Documentation/ramdisk.txt.
 
+	rdinit=		[KNL]
+			Format: <full_path>
+			Run specified binary instead of /init from the ramdisk,
+			used for early userspace startup. See initrd.
+
 	reboot=		[BUGS=IA-32,BUGS=ARM,BUGS=IA-64] Rebooting mode
 			Format: <reboot_mode>[,<reboot_mode2>[,...]]
 			See arch/*/kernel/reboot.c.


@@ -1,22 +1,20 @@
-From kernel/suspend.c:
+Some warnings, first.
 
  * BIG FAT WARNING *********************************************************
  *
- * If you have unsupported (*) devices using DMA...
- *	...say goodbye to your data.
- *
  * If you touch anything on disk between suspend and resume...
  *	...kiss your data goodbye.
  *
- * If your disk driver does not support suspend... (IDE does)
- *	...you'd better find out how to get along
- *	without your data.
+ * If you do resume from initrd after your filesystems are mounted...
+ *	...bye bye root partition.
+ *	[this is actually same case as above]
  *
- * If you change kernel command line between suspend and resume...
- *	...prepare for nasty fsck or worse.
- *
- * If you change your hardware while system is suspended...
- *	...well, it was not good idea.
+ * If you have unsupported (*) devices using DMA, you may have some
+ * problems. If your disk driver does not support suspend... (IDE does),
+ * it may cause some problems, too. If you change kernel command line
+ * between suspend and resume, it may do something wrong. If you change
+ * your hardware while system is suspended... well, it was not good idea;
+ * but it will probably only crash.
  *
  * (*) suspend/resume support is needed to make it safe.
@@ -30,6 +28,13 @@ echo shutdown > /sys/power/disk; echo disk > /sys/power/state
 
 echo platform > /sys/power/disk; echo disk > /sys/power/state
 
+Encrypted suspend image:
+------------------------
+If you want to store your suspend image encrypted with a temporary
+key to prevent data gathering after resume you must compile
+crypto and the aes algorithm into the kernel - modules won't work
+as they cannot be loaded at resume time.
+
 Article about goals and implementation of Software Suspend for Linux
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -85,11 +90,6 @@ resume.
 You have your server on UPS. Power died, and UPS is indicating 30
 seconds to failure. What do you do? Suspend to disk.
 
-Ethernet card in your server died. You want to replace it. Your
-server is not hotplug capable. What do you do? Suspend to disk,
-replace ethernet card, resume. If you are fast your users will not
-even see broken connections.
-
 Q: Maybe I'm missing something, but why don't the regular I/O paths work?
@@ -117,31 +117,6 @@ Q: Does linux support ACPI S4?
 
 A: Yes. That's what echo platform > /sys/power/disk does.
 
-Q: My machine doesn't work with ACPI. How can I use swsusp than ?
-
-A: Do a reboot() syscall with right parameters. Warning: glibc gets in
-its way, so check with strace:
-
-reboot(LINUX_REBOOT_MAGIC1, LINUX_REBOOT_MAGIC2, 0xd000fce2)
-
-(Thanks to Peter Osterlund:)
-
-#include <unistd.h>
-#include <syscall.h>
-
-#define LINUX_REBOOT_MAGIC1	0xfee1dead
-#define LINUX_REBOOT_MAGIC2	672274793
-#define LINUX_REBOOT_CMD_SW_SUSPEND	0xD000FCE2
-
-int main()
-{
-	syscall(SYS_reboot, LINUX_REBOOT_MAGIC1, LINUX_REBOOT_MAGIC2,
-		LINUX_REBOOT_CMD_SW_SUSPEND, 0);
-	return 0;
-}
-
-Also /sys/ interface should be still present.
-
 Q: What is 'suspend2'?
 
 A: suspend2 is 'Software Suspend 2', a forked implementation of
@@ -312,9 +287,45 @@ system is shut down or suspended. Additionally use the encrypted
 suspend image to prevent sensitive data from being stolen after
 resume.
 
-Q: Why we cannot suspend to a swap file?
+Q: Why can't we suspend to a swap file?
 
 A: Because accessing swap file needs the filesystem mounted, and
 filesystem might do something wrong (like replaying the journal)
-during mount. [Probably could be solved by modifying every filesystem
-to support some kind of "really read-only!" option. Patches welcome.]
+during mount.
+
+There are few ways to get that fixed:
+
+1) Probably could be solved by modifying every filesystem to support
+some kind of "really read-only!" option. Patches welcome.
+
+2) suspend2 gets around that by storing absolute positions in on-disk
+image (and blocksize), with resume parameter pointing directly to
+suspend header.
+
+Q: Is there a maximum system RAM size that is supported by swsusp?
+
+A: It should work okay with highmem.
+
+Q: Does swsusp (to disk) use only one swap partition or can it use
+multiple swap partitions (aggregate them into one logical space)?
+
+A: Only one swap partition, sorry.
+
+Q: If my application(s) causes lots of memory & swap space to be used
+(over half of the total system RAM), is it correct that it is likely
+to be useless to try to suspend to disk while that app is running?
+
+A: No, it should work okay, as long as your app does not mlock()
+it. Just prepare big enough swap partition.
+
+Q: What information is usefull for debugging suspend-to-disk problems?
+
+A: Well, last messages on the screen are always useful. If something
+is broken, it is usually some kernel driver, therefore trying with as
+little as possible modules loaded helps a lot. I also prefer people to
+suspend from console, preferably without X running. Booting with
+init=/bin/bash, then swapon and starting suspend sequence manually
+usually does the trick. Then it is good idea to try with latest
+vanilla kernel.


@@ -120,6 +120,7 @@ IBM ThinkPad T42p (2373-GTG)		s3_bios (2)
 IBM TP X20			??? (*)
 IBM TP X30			s3_bios (2)
 IBM TP X31 / Type 2672-XXH	none (1), use radeontool (http://fdd.com/software/radeon/) to turn off backlight.
+IBM TP X32			none (1), but backlight is on and video is trashed after long suspend
 IBM Thinkpad X40 Type 2371-7JG	s3_bios,s3_mode (4)
 Medion MD4220			??? (*)
 Samsung P35			vbetool needed (6)


@@ -1,5 +1,5 @@
 ====================================================================
-= Adaptec Aic7xxx Fast -> Ultra160 Family Manager Set v6.2.28 =
+= Adaptec Aic7xxx Fast -> Ultra160 Family Manager Set v7.0 =
 = README for =
 = The Linux Operating System =
 ====================================================================
@@ -131,6 +131,10 @@ The following information is available in this file:
    SCSI "stub" effects.
 
 2. Version History
+   7.0 (4th August, 2005)
+	- Updated driver to use SCSI transport class infrastructure
+	- Upported sequencer and core fixes from last adaptec released
+	  version of the driver.
    6.2.36 (June 3rd, 2003)
 	- Correct code that disables PCI parity error checking.
 	- Correct and simplify handling of the ignore wide residue


@@ -373,13 +373,11 @@ Summary:
    scsi_activate_tcq - turn on tag command queueing
    scsi_add_device - creates new scsi device (lu) instance
    scsi_add_host - perform sysfs registration and SCSI bus scan.
-   scsi_add_timer - (re-)start timer on a SCSI command.
    scsi_adjust_queue_depth - change the queue depth on a SCSI device
    scsi_assign_lock - replace default host_lock with given lock
    scsi_bios_ptable - return copy of block device's partition table
    scsi_block_requests - prevent further commands being queued to given host
    scsi_deactivate_tcq - turn off tag command queueing
-   scsi_delete_timer - cancel timer on a SCSI command.
    scsi_host_alloc - return a new scsi_host instance whose refcount==1
    scsi_host_get - increments Scsi_Host instance's refcount
    scsi_host_put - decrements Scsi_Host instance's refcount (free if 0)
@@ -457,27 +455,6 @@ struct scsi_device * scsi_add_device(struct Scsi_Host *shost,
 int scsi_add_host(struct Scsi_Host *shost, struct device * dev)
 
-/**
- * scsi_add_timer - (re-)start timer on a SCSI command.
- * @scmd: pointer to scsi command instance
- * @timeout: duration of timeout in "jiffies"
- * @complete: pointer to function to call if timeout expires
- *
- * Returns nothing
- *
- * Might block: no
- *
- * Notes: Each scsi command has its own timer, and as it is added
- *        to the queue, we set up the timer. When the command completes,
- *        we cancel the timer. An LLD can use this function to change
- *        the existing timeout value.
- *
- * Defined in: drivers/scsi/scsi_error.c
- **/
-void scsi_add_timer(struct scsi_cmnd *scmd, int timeout,
-                    void (*complete)(struct scsi_cmnd *))
-
 /**
  * scsi_adjust_queue_depth - allow LLD to change queue depth on a SCSI device
  * @sdev: pointer to SCSI device to change queue depth on
@@ -565,24 +542,6 @@ void scsi_block_requests(struct Scsi_Host * shost)
 void scsi_deactivate_tcq(struct scsi_device *sdev, int depth)
 
-/**
- * scsi_delete_timer - cancel timer on a SCSI command.
- * @scmd: pointer to scsi command instance
- *
- * Returns 1 if able to cancel timer else 0 (i.e. too late or already
- *         cancelled).
- *
- * Might block: no [may in the future if it invokes del_timer_sync()]
- *
- * Notes: All commands issued by upper levels already have a timeout
- *        associated with them. An LLD can use this function to cancel the
- *        timer.
- *
- * Defined in: drivers/scsi/scsi_error.c
- **/
-int scsi_delete_timer(struct scsi_cmnd *scmd)
-
 /**
  * scsi_host_alloc - create a scsi host adapter instance and perform basic
  *                   initialization.


@@ -99,6 +99,7 @@ statically linked into the kernel). Those options are:
 			SONYPI_MEYE_MASK		0x0400
 			SONYPI_MEMORYSTICK_MASK		0x0800
 			SONYPI_BATTERY_MASK		0x1000
+			SONYPI_WIRELESS_MASK		0x2000
 
 	useinput:	if set (which is the default) two input devices are
 			created, one which interprets the jogdial events as
@@ -137,6 +138,15 @@ Bugs:
 	  speed handling etc). Use ACPI instead of APM if it works on your
 	  laptop.
 
+	- sonypi lacks the ability to distinguish between certain key
+	  events on some models.
+
+	- some models with the nvidia card (geforce go 6200 tc) uses a
+	  different way to adjust the backlighting of the screen. There
+	  is a userspace utility to adjust the brightness on those models,
+	  which can be downloaded from
+	  http://www.acc.umu.se/~erikw/program/smartdimmer-0.1.tar.bz2
+
 	- since all development was done by reverse engineering, there is
 	  _absolutely no guarantee_ that this driver will not crash your
 	  laptop. Permanently.


@@ -202,13 +202,6 @@ P: Colin Leroy
 M: colin@colino.net
 S: Maintained
 
-ADVANSYS SCSI DRIVER
-P: Bob Frey
-M: linux@advansys.com
-W: http://www.advansys.com/linux.html
-L: linux-scsi@vger.kernel.org
-S: Maintained
-
 AEDSP16 DRIVER
 P: Riccardo Facchetti
 M: fizban@tin.it
@@ -696,6 +689,11 @@ M: dz@debian.org
 W: http://www.debian.org/~dz/i8k/
 S: Maintained
 
+DELL SYSTEMS MANAGEMENT BASE DRIVER (dcdbas)
+P: Doug Warzecha
+M: Douglas_Warzecha@dell.com
+S: Maintained
+
 DEVICE-MAPPER
 P: Alasdair Kergon
 L: dm-devel@redhat.com
@@ -824,6 +822,13 @@ L: emu10k1-devel@lists.sourceforge.net
 W: http://sourceforge.net/projects/emu10k1/
 S: Maintained
 
+EMULEX LPFC FC SCSI DRIVER
+P: James Smart
+M: james.smart@emulex.com
+L: linux-scsi@vger.kernel.org
+W: http://sourceforge.net/projects/lpfcxxxx
+S: Supported
+
 EPSON 1355 FRAMEBUFFER DRIVER
 P: Christopher Hoover
 M: ch@murgatroid.com, ch@hpl.hp.com
@@ -879,7 +884,7 @@ S: Maintained
 
 FILESYSTEMS (VFS and infrastructure)
 P: Alexander Viro
-M: viro@parcelfarce.linux.theplanet.co.uk
+M: viro@zeniv.linux.org.uk
 S: Maintained
 
 FIRMWARE LOADER (request_firmware)
@@ -1967,7 +1972,6 @@ S: Supported
 
 ROCKETPORT DRIVER
 P: Comtrol Corp.
-M: support@comtrol.com
 W: http://www.comtrol.com
 S: Maintained


@@ -479,6 +479,9 @@ config EISA
 	depends on ALPHA_GENERIC || ALPHA_JENSEN || ALPHA_ALCOR || ALPHA_MIKASA || ALPHA_SABLE || ALPHA_LYNX || ALPHA_NORITAKE || ALPHA_RAWHIDE
 	default y
 
+config ARCH_MAY_HAVE_PC_FDC
+	def_bool y
+
 config SMP
 	bool "Symmetric multi-processing support"
 	depends on ALPHA_SABLE || ALPHA_LYNX || ALPHA_RAWHIDE || ALPHA_DP264 || ALPHA_WILDFIRE || ALPHA_TITAN || ALPHA_GENERIC || ALPHA_SHARK || ALPHA_MARVEL


@@ -149,7 +149,7 @@ irqreturn_t timer_interrupt(int irq, void *dev, struct pt_regs * regs)
 	 * CMOS clock accordingly every ~11 minutes. Set_rtc_mmss() has to be
 	 * called as close as possible to 500 ms before the new second starts.
 	 */
-	if ((time_status & STA_UNSYNC) == 0
+	if (ntp_synced()
 	    && xtime.tv_sec > state.last_rtc_update + 660
 	    && xtime.tv_nsec >= 500000 - ((unsigned) TICK_SIZE) / 2
 	    && xtime.tv_nsec <= 500000 + ((unsigned) TICK_SIZE) / 2) {
@@ -502,10 +502,7 @@ do_settimeofday(struct timespec *tv)
 	set_normalized_timespec(&xtime, sec, nsec);
 	set_normalized_timespec(&wall_to_monotonic, wtm_sec, wtm_nsec);
 
-	time_adjust = 0;	/* stop active adjtime() */
-	time_status |= STA_UNSYNC;
-	time_maxerror = NTP_PHASE_LIMIT;
-	time_esterror = NTP_PHASE_LIMIT;
+	ntp_clear();
 
 	write_sequnlock_irq(&xtime_lock);
 	clock_was_set();


@@ -64,6 +64,9 @@ config GENERIC_CALIBRATE_DELAY
 config GENERIC_BUST_SPINLOCK
 	bool
 
+config ARCH_MAY_HAVE_PC_FDC
+	bool
+
 config GENERIC_ISA_DMA
 	bool
 
@@ -150,6 +153,7 @@ config ARCH_RPC
 	select ARCH_ACORN
 	select FIQ
 	select TIMER_ACORN
+	select ARCH_MAY_HAVE_PC_FDC
 	help
 	  On the Acorn Risc-PC, Linux can support the internal IDE disk and
 	  CD-ROM interface, serial and parallel port, and the floppy drive.


@ -7,7 +7,8 @@
* so we have to figure out the machine for ourselves... * so we have to figure out the machine for ourselves...
* *
* Support for Poodle, Corgi (SL-C700), Shepherd (SL-C750) * Support for Poodle, Corgi (SL-C700), Shepherd (SL-C750)
* and Husky (SL-C760). * Husky (SL-C760), Tosa (SL-C6000), Spitz (SL-C3000),
* Akita (SL-C1000) and Borzoi (SL-C3100).
* *
*/ */
@ -23,6 +24,22 @@
__SharpSL_start: __SharpSL_start:
/* Check for TC6393 - if found we have a Tosa */
ldr r7, .TOSAID
mov r1, #0x10000000 @ Base address of TC6393 chip
mov r6, #0x03
ldrh r3, [r1, #8] @ Load TC6393XB Revison: This is 0x0003
cmp r6, r3
beq .SHARPEND @ Success -> tosa
/* Check for pxa270 - if found, branch */
mrc p15, 0, r4, c0, c0 @ Get Processor ID
and r4, r4, #0xffffff00
ldr r3, .PXA270ID
cmp r4, r3
beq .PXA270
/* Check for w100 - if not found we have a Poodle */
ldr r1, .W100ADDR @ Base address of w100 chip + regs offset ldr r1, .W100ADDR @ Base address of w100 chip + regs offset
mov r6, #0x31 @ Load Magic Init value mov r6, #0x31 @ Load Magic Init value
@ -30,7 +47,7 @@ __SharpSL_start:
mov r5, #0x3000 mov r5, #0x3000
.W100LOOP: .W100LOOP:
subs r5, r5, #1 subs r5, r5, #1
bne .W100LOOP bne .W100LOOP
mov r6, #0x30 @ Load 2nd Magic Init value mov r6, #0x30 @ Load 2nd Magic Init value
str r6, [r1, #0x280] @ to SCRATCH_UMSK str r6, [r1, #0x280] @ to SCRATCH_UMSK
@ -40,45 +57,52 @@ __SharpSL_start:
cmp r6, r3 cmp r6, r3
bne .SHARPEND @ We have no w100 - Poodle bne .SHARPEND @ We have no w100 - Poodle
mrc p15, 0, r6, c0, c0 @ Get Processor ID /* Check for pxa250 - if found we have a Corgi */
and r6, r6, #0xffffff00
ldr r7, .CORGIID ldr r7, .CORGIID
ldr r3, .PXA255ID ldr r3, .PXA255ID
cmp r6, r3 cmp r4, r3
blo .SHARPEND @ We have a PXA250 - Corgi blo .SHARPEND @ We have a PXA250 - Corgi
mov r1, #0x0c000000 @ Base address of NAND chip /* Check for 64MiB flash - if found we have a Shepherd */
ldrb r3, [r1, #24] @ Load FLASHCTL bl get_flash_ids
bic r3, r3, #0x11 @ SET NCE
orr r3, r3, #0x0a @ SET CLR + FLWP
strb r3, [r1, #24] @ Save to FLASHCTL
mov r2, #0x90 @ Command "readid"
strb r2, [r1, #20] @ Save to FLASHIO
bic r3, r3, #2 @ CLR CLE
orr r3, r3, #4 @ SET ALE
strb r3, [r1, #24] @ Save to FLASHCTL
mov r2, #0 @ Address 0x00
strb r2, [r1, #20] @ Save to FLASHIO
bic r3, r3, #4 @ CLR ALE
strb r3, [r1, #24] @ Save to FLASHCTL
.SHARP1:
ldrb r3, [r1, #24] @ Load FLASHCTL
tst r3, #32 @ Is chip ready?
beq .SHARP1
ldrb r2, [r1, #20] @ NAND Manufacturer ID
ldrb r3, [r1, #20] @ NAND Chip ID
ldr r7, .SHEPHERDID ldr r7, .SHEPHERDID
cmp r3, #0x76 @ 64MiB flash cmp r3, #0x76 @ 64MiB flash
beq .SHARPEND @ We have Shepherd beq .SHARPEND @ We have Shepherd
/* Must be a Husky */
ldr r7, .HUSKYID @ Must be Husky ldr r7, .HUSKYID @ Must be Husky
b .SHARPEND b .SHARPEND
.PXA270:
/* Check for 16MiB flash - if found we have Spitz */
bl get_flash_ids
ldr r7, .SPITZID
cmp r3, #0x73 @ 16MiB flash
beq .SHARPEND @ We have Spitz
/* Check for a second SCOOP chip - if found we have Borzoi */
ldr r1, .SCOOP2ADDR
ldr r7, .BORZOIID
mov r6, #0x0140
strh r6, [r1]
ldrh r6, [r1]
cmp r6, #0x0140
beq .SHARPEND @ We have Borzoi
/* Must be Akita */
ldr r7, .AKITAID
b .SHARPEND @ We have Akita
.PXA255ID: .PXA255ID:
.word 0x69052d00 @ PXA255 Processor ID .word 0x69052d00 @ PXA255 Processor ID
.PXA270ID:
.word 0x69054100 @ PXA270 Processor ID
.W100ID: .W100ID:
.word 0x57411002 @ w100 Chip ID .word 0x57411002 @ w100 Chip ID
.W100ADDR: .W100ADDR:
.word 0x08010000 @ w100 Chip ID Reg Address .word 0x08010000 @ w100 Chip ID Reg Address
.SCOOP2ADDR:
.word 0x08800040
.POODLEID: .POODLEID:
.word MACH_TYPE_POODLE .word MACH_TYPE_POODLE
.CORGIID: .CORGIID:
@ -87,6 +111,41 @@ __SharpSL_start:
.word MACH_TYPE_SHEPHERD .word MACH_TYPE_SHEPHERD
.HUSKYID: .HUSKYID:
.word MACH_TYPE_HUSKY .word MACH_TYPE_HUSKY
.TOSAID:
.word MACH_TYPE_TOSA
.SPITZID:
.word MACH_TYPE_SPITZ
.AKITAID:
.word MACH_TYPE_AKITA
.BORZOIID:
.word MACH_TYPE_BORZOI
/*
* Return: r2 - NAND Manufacturer ID
* r3 - NAND Chip ID
* Corrupts: r1
*/
get_flash_ids:
mov r1, #0x0c000000 @ Base address of NAND chip
ldrb r3, [r1, #24] @ Load FLASHCTL
bic r3, r3, #0x11 @ SET NCE
orr r3, r3, #0x0a @ SET CLR + FLWP
strb r3, [r1, #24] @ Save to FLASHCTL
mov r2, #0x90 @ Command "readid"
strb r2, [r1, #20] @ Save to FLASHIO
bic r3, r3, #2 @ CLR CLE
orr r3, r3, #4 @ SET ALE
strb r3, [r1, #24] @ Save to FLASHCTL
mov r2, #0 @ Address 0x00
strb r2, [r1, #20] @ Save to FLASHIO
bic r3, r3, #4 @ CLR ALE
strb r3, [r1, #24] @ Save to FLASHCTL
.fids1:
ldrb r3, [r1, #24] @ Load FLASHCTL
tst r3, #32 @ Is chip ready?
beq .fids1
ldrb r2, [r1, #20] @ NAND Manufacturer ID
ldrb r3, [r1, #20] @ NAND Chip ID
mov pc, lr
.SHARPEND: .SHARPEND:
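
For readability, here is the same detection logic rendered as C. This is only a paraphrase of the assembly above; helpers such as read_cpu_id() or probe_second_scoop() are invented names for illustration, not functions that exist in the tree:

static int sharpsl_detect_machine(void)
{
	if (read_tc6393_revision() == 0x0003)		/* TC6393XB present */
		return MACH_TYPE_TOSA;

	if ((read_cpu_id() & 0xffffff00) == 0x69054100) {	/* PXA270 */
		if (read_nand_chip_id() == 0x73)	/* 16MiB flash */
			return MACH_TYPE_SPITZ;
		if (probe_second_scoop())		/* SCOOP2 readback matches */
			return MACH_TYPE_BORZOI;
		return MACH_TYPE_AKITA;
	}

	if (!probe_w100())				/* no ATI w100 */
		return MACH_TYPE_POODLE;
	if ((read_cpu_id() & 0xffffff00) < 0x69052d00)	/* below PXA255 */
		return MACH_TYPE_CORGI;
	if (read_nand_chip_id() == 0x76)		/* 64MiB flash */
		return MACH_TYPE_SHEPHERD;
	return MACH_TYPE_HUSKY;				/* what is left */
}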

View File

@ -1,7 +1,7 @@
# #
# Automatically generated make config: don't edit # Automatically generated make config: don't edit
# Linux kernel version: 2.6.13-rc2 # Linux kernel version: 2.6.13
# Fri Jul 8 04:49:34 2005 # Mon Sep 5 18:07:12 2005
# #
CONFIG_ARM=y CONFIG_ARM=y
CONFIG_MMU=y CONFIG_MMU=y
@ -102,9 +102,11 @@ CONFIG_OMAP_MUX_WARNINGS=y
# CONFIG_OMAP_MPU_TIMER is not set # CONFIG_OMAP_MPU_TIMER is not set
CONFIG_OMAP_32K_TIMER=y CONFIG_OMAP_32K_TIMER=y
CONFIG_OMAP_32K_TIMER_HZ=128 CONFIG_OMAP_32K_TIMER_HZ=128
# CONFIG_OMAP_DM_TIMER is not set
CONFIG_OMAP_LL_DEBUG_UART1=y CONFIG_OMAP_LL_DEBUG_UART1=y
# CONFIG_OMAP_LL_DEBUG_UART2 is not set # CONFIG_OMAP_LL_DEBUG_UART2 is not set
# CONFIG_OMAP_LL_DEBUG_UART3 is not set # CONFIG_OMAP_LL_DEBUG_UART3 is not set
CONFIG_OMAP_SERIAL_WAKE=y
# #
# OMAP Core Type # OMAP Core Type
@ -166,7 +168,6 @@ CONFIG_ISA_DMA_API=y
# #
# Kernel Features # Kernel Features
# #
# CONFIG_SMP is not set
CONFIG_PREEMPT=y CONFIG_PREEMPT=y
CONFIG_NO_IDLE_HZ=y CONFIG_NO_IDLE_HZ=y
# CONFIG_ARCH_DISCONTIGMEM_ENABLE is not set # CONFIG_ARCH_DISCONTIGMEM_ENABLE is not set
@ -229,6 +230,68 @@ CONFIG_BINFMT_AOUT=y
CONFIG_PM=y CONFIG_PM=y
# CONFIG_APM is not set # CONFIG_APM is not set
#
# Networking
#
CONFIG_NET=y
#
# Networking options
#
CONFIG_PACKET=y
# CONFIG_PACKET_MMAP is not set
CONFIG_UNIX=y
# CONFIG_NET_KEY is not set
CONFIG_INET=y
# CONFIG_IP_MULTICAST is not set
# CONFIG_IP_ADVANCED_ROUTER is not set
CONFIG_IP_FIB_HASH=y
CONFIG_IP_PNP=y
CONFIG_IP_PNP_DHCP=y
CONFIG_IP_PNP_BOOTP=y
# CONFIG_IP_PNP_RARP is not set
# CONFIG_NET_IPIP is not set
# CONFIG_NET_IPGRE is not set
# CONFIG_ARPD is not set
# CONFIG_SYN_COOKIES is not set
# CONFIG_INET_AH is not set
# CONFIG_INET_ESP is not set
# CONFIG_INET_IPCOMP is not set
# CONFIG_INET_TUNNEL is not set
CONFIG_IP_TCPDIAG=y
# CONFIG_IP_TCPDIAG_IPV6 is not set
# CONFIG_TCP_CONG_ADVANCED is not set
CONFIG_TCP_CONG_BIC=y
# CONFIG_IPV6 is not set
# CONFIG_NETFILTER is not set
#
# SCTP Configuration (EXPERIMENTAL)
#
# CONFIG_IP_SCTP is not set
# CONFIG_ATM is not set
# CONFIG_BRIDGE is not set
# CONFIG_VLAN_8021Q is not set
# CONFIG_DECNET is not set
# CONFIG_LLC2 is not set
# CONFIG_IPX is not set
# CONFIG_ATALK is not set
# CONFIG_X25 is not set
# CONFIG_LAPB is not set
# CONFIG_NET_DIVERT is not set
# CONFIG_ECONET is not set
# CONFIG_WAN_ROUTER is not set
# CONFIG_NET_SCHED is not set
# CONFIG_NET_CLS_ROUTE is not set
#
# Network testing
#
# CONFIG_NET_PKTGEN is not set
# CONFIG_HAMRADIO is not set
# CONFIG_IRDA is not set
# CONFIG_BT is not set
# #
# Device Drivers # Device Drivers
# #
@ -243,78 +306,7 @@ CONFIG_PREVENT_FIRMWARE_BUILD=y
# #
# Memory Technology Devices (MTD) # Memory Technology Devices (MTD)
# #
CONFIG_MTD=y # CONFIG_MTD is not set
CONFIG_MTD_DEBUG=y
CONFIG_MTD_DEBUG_VERBOSE=3
# CONFIG_MTD_CONCAT is not set
CONFIG_MTD_PARTITIONS=y
# CONFIG_MTD_REDBOOT_PARTS is not set
CONFIG_MTD_CMDLINE_PARTS=y
# CONFIG_MTD_AFS_PARTS is not set
#
# User Modules And Translation Layers
#
CONFIG_MTD_CHAR=y
CONFIG_MTD_BLOCK=y
# CONFIG_FTL is not set
# CONFIG_NFTL is not set
# CONFIG_INFTL is not set
#
# RAM/ROM/Flash chip drivers
#
CONFIG_MTD_CFI=y
# CONFIG_MTD_JEDECPROBE is not set
CONFIG_MTD_GEN_PROBE=y
# CONFIG_MTD_CFI_ADV_OPTIONS is not set
CONFIG_MTD_MAP_BANK_WIDTH_1=y
CONFIG_MTD_MAP_BANK_WIDTH_2=y
CONFIG_MTD_MAP_BANK_WIDTH_4=y
# CONFIG_MTD_MAP_BANK_WIDTH_8 is not set
# CONFIG_MTD_MAP_BANK_WIDTH_16 is not set
# CONFIG_MTD_MAP_BANK_WIDTH_32 is not set
CONFIG_MTD_CFI_I1=y
CONFIG_MTD_CFI_I2=y
# CONFIG_MTD_CFI_I4 is not set
# CONFIG_MTD_CFI_I8 is not set
CONFIG_MTD_CFI_INTELEXT=y
# CONFIG_MTD_CFI_AMDSTD is not set
# CONFIG_MTD_CFI_STAA is not set
CONFIG_MTD_CFI_UTIL=y
# CONFIG_MTD_RAM is not set
# CONFIG_MTD_ROM is not set
# CONFIG_MTD_ABSENT is not set
# CONFIG_MTD_XIP is not set
#
# Mapping drivers for chip access
#
# CONFIG_MTD_COMPLEX_MAPPINGS is not set
# CONFIG_MTD_PHYSMAP is not set
# CONFIG_MTD_ARM_INTEGRATOR is not set
# CONFIG_MTD_EDB7312 is not set
#
# Self-contained MTD device drivers
#
# CONFIG_MTD_SLRAM is not set
# CONFIG_MTD_PHRAM is not set
# CONFIG_MTD_MTDRAM is not set
# CONFIG_MTD_BLKMTD is not set
# CONFIG_MTD_BLOCK2MTD is not set
#
# Disk-On-Chip Device Drivers
#
# CONFIG_MTD_DOC2000 is not set
# CONFIG_MTD_DOC2001 is not set
# CONFIG_MTD_DOC2001PLUS is not set
#
# NAND Flash Device Drivers
#
# CONFIG_MTD_NAND is not set
# #
# Parallel port support # Parallel port support
@ -403,72 +395,8 @@ CONFIG_SCSI_PROC_FS=y
# #
# #
# Networking support # Network device support
# #
CONFIG_NET=y
#
# Networking options
#
CONFIG_PACKET=y
# CONFIG_PACKET_MMAP is not set
CONFIG_UNIX=y
# CONFIG_NET_KEY is not set
CONFIG_INET=y
# CONFIG_IP_MULTICAST is not set
# CONFIG_IP_ADVANCED_ROUTER is not set
CONFIG_IP_FIB_HASH=y
CONFIG_IP_PNP=y
CONFIG_IP_PNP_DHCP=y
CONFIG_IP_PNP_BOOTP=y
# CONFIG_IP_PNP_RARP is not set
# CONFIG_NET_IPIP is not set
# CONFIG_NET_IPGRE is not set
# CONFIG_ARPD is not set
# CONFIG_SYN_COOKIES is not set
# CONFIG_INET_AH is not set
# CONFIG_INET_ESP is not set
# CONFIG_INET_IPCOMP is not set
# CONFIG_INET_TUNNEL is not set
CONFIG_IP_TCPDIAG=y
# CONFIG_IP_TCPDIAG_IPV6 is not set
# CONFIG_TCP_CONG_ADVANCED is not set
CONFIG_TCP_CONG_BIC=y
# CONFIG_IPV6 is not set
# CONFIG_NETFILTER is not set
#
# SCTP Configuration (EXPERIMENTAL)
#
# CONFIG_IP_SCTP is not set
# CONFIG_ATM is not set
# CONFIG_BRIDGE is not set
# CONFIG_VLAN_8021Q is not set
# CONFIG_DECNET is not set
# CONFIG_LLC2 is not set
# CONFIG_IPX is not set
# CONFIG_ATALK is not set
# CONFIG_X25 is not set
# CONFIG_LAPB is not set
# CONFIG_NET_DIVERT is not set
# CONFIG_ECONET is not set
# CONFIG_WAN_ROUTER is not set
#
# QoS and/or fair queueing
#
# CONFIG_NET_SCHED is not set
# CONFIG_NET_CLS_ROUTE is not set
#
# Network testing
#
# CONFIG_NET_PKTGEN is not set
# CONFIG_NETPOLL is not set
# CONFIG_NET_POLL_CONTROLLER is not set
# CONFIG_HAMRADIO is not set
# CONFIG_IRDA is not set
# CONFIG_BT is not set
CONFIG_NETDEVICES=y CONFIG_NETDEVICES=y
# CONFIG_DUMMY is not set # CONFIG_DUMMY is not set
# CONFIG_BONDING is not set # CONFIG_BONDING is not set
@ -518,6 +446,8 @@ CONFIG_SLIP_COMPRESSED=y
# CONFIG_SLIP_MODE_SLIP6 is not set # CONFIG_SLIP_MODE_SLIP6 is not set
# CONFIG_SHAPER is not set # CONFIG_SHAPER is not set
# CONFIG_NETCONSOLE is not set # CONFIG_NETCONSOLE is not set
# CONFIG_NETPOLL is not set
# CONFIG_NET_POLL_CONTROLLER is not set
# #
# ISDN subsystem # ISDN subsystem
@ -615,77 +545,15 @@ CONFIG_WATCHDOG_NOWAYOUT=y
# #
# I2C support # I2C support
# #
CONFIG_I2C=y # CONFIG_I2C is not set
CONFIG_I2C_CHARDEV=y
#
# I2C Algorithms
#
# CONFIG_I2C_ALGOBIT is not set
# CONFIG_I2C_ALGOPCF is not set
# CONFIG_I2C_ALGOPCA is not set
#
# I2C Hardware Bus support
#
# CONFIG_I2C_ISA is not set
# CONFIG_I2C_PARPORT_LIGHT is not set
# CONFIG_I2C_STUB is not set
# CONFIG_I2C_PCA_ISA is not set
#
# Hardware Sensors Chip support
#
# CONFIG_I2C_SENSOR is not set # CONFIG_I2C_SENSOR is not set
# CONFIG_SENSORS_ADM1021 is not set CONFIG_ISP1301_OMAP=y
# CONFIG_SENSORS_ADM1025 is not set
# CONFIG_SENSORS_ADM1026 is not set
# CONFIG_SENSORS_ADM1031 is not set
# CONFIG_SENSORS_ADM9240 is not set
# CONFIG_SENSORS_ASB100 is not set
# CONFIG_SENSORS_ATXP1 is not set
# CONFIG_SENSORS_DS1621 is not set
# CONFIG_SENSORS_FSCHER is not set
# CONFIG_SENSORS_FSCPOS is not set
# CONFIG_SENSORS_GL518SM is not set
# CONFIG_SENSORS_GL520SM is not set
# CONFIG_SENSORS_IT87 is not set
# CONFIG_SENSORS_LM63 is not set
# CONFIG_SENSORS_LM75 is not set
# CONFIG_SENSORS_LM77 is not set
# CONFIG_SENSORS_LM78 is not set
# CONFIG_SENSORS_LM80 is not set
# CONFIG_SENSORS_LM83 is not set
# CONFIG_SENSORS_LM85 is not set
# CONFIG_SENSORS_LM87 is not set
# CONFIG_SENSORS_LM90 is not set
# CONFIG_SENSORS_LM92 is not set
# CONFIG_SENSORS_MAX1619 is not set
# CONFIG_SENSORS_PC87360 is not set
# CONFIG_SENSORS_SMSC47B397 is not set
# CONFIG_SENSORS_SMSC47M1 is not set
# CONFIG_SENSORS_W83781D is not set
# CONFIG_SENSORS_W83L785TS is not set
# CONFIG_SENSORS_W83627HF is not set
# CONFIG_SENSORS_W83627EHF is not set
# #
# Other I2C Chip support # Hardware Monitoring support
# #
# CONFIG_SENSORS_DS1337 is not set CONFIG_HWMON=y
# CONFIG_SENSORS_DS1374 is not set # CONFIG_HWMON_DEBUG_CHIP is not set
# CONFIG_SENSORS_EEPROM is not set
# CONFIG_SENSORS_PCF8574 is not set
# CONFIG_SENSORS_PCA9539 is not set
# CONFIG_SENSORS_PCF8591 is not set
# CONFIG_SENSORS_RTC8564 is not set
CONFIG_ISP1301_OMAP=y
CONFIG_TPS65010=y
# CONFIG_SENSORS_MAX6875 is not set
# CONFIG_I2C_DEBUG_CORE is not set
# CONFIG_I2C_DEBUG_ALGO is not set
# CONFIG_I2C_DEBUG_BUS is not set
# CONFIG_I2C_DEBUG_CHIP is not set
# #
# Misc devices # Misc devices
@ -756,15 +624,9 @@ CONFIG_SOUND=y
# Open Sound System # Open Sound System
# #
CONFIG_SOUND_PRIME=y CONFIG_SOUND_PRIME=y
# CONFIG_SOUND_BT878 is not set
# CONFIG_SOUND_FUSION is not set
# CONFIG_SOUND_CS4281 is not set
# CONFIG_SOUND_SONICVIBES is not set
# CONFIG_SOUND_TRIDENT is not set
# CONFIG_SOUND_MSNDCLAS is not set # CONFIG_SOUND_MSNDCLAS is not set
# CONFIG_SOUND_MSNDPIN is not set # CONFIG_SOUND_MSNDPIN is not set
# CONFIG_SOUND_OSS is not set # CONFIG_SOUND_OSS is not set
# CONFIG_SOUND_TVMIXER is not set
# CONFIG_SOUND_AD1980 is not set # CONFIG_SOUND_AD1980 is not set
# #
@ -810,6 +672,7 @@ CONFIG_EXT2_FS=y
# CONFIG_JBD is not set # CONFIG_JBD is not set
# CONFIG_REISERFS_FS is not set # CONFIG_REISERFS_FS is not set
# CONFIG_JFS_FS is not set # CONFIG_JFS_FS is not set
# CONFIG_FS_POSIX_ACL is not set
# #
# XFS support # XFS support
@ -817,6 +680,7 @@ CONFIG_EXT2_FS=y
# CONFIG_XFS_FS is not set # CONFIG_XFS_FS is not set
# CONFIG_MINIX_FS is not set # CONFIG_MINIX_FS is not set
CONFIG_ROMFS_FS=y CONFIG_ROMFS_FS=y
CONFIG_INOTIFY=y
# CONFIG_QUOTA is not set # CONFIG_QUOTA is not set
CONFIG_DNOTIFY=y CONFIG_DNOTIFY=y
# CONFIG_AUTOFS_FS is not set # CONFIG_AUTOFS_FS is not set
@ -857,15 +721,6 @@ CONFIG_RAMFS=y
# CONFIG_BEFS_FS is not set # CONFIG_BEFS_FS is not set
# CONFIG_BFS_FS is not set # CONFIG_BFS_FS is not set
# CONFIG_EFS_FS is not set # CONFIG_EFS_FS is not set
# CONFIG_JFFS_FS is not set
CONFIG_JFFS2_FS=y
CONFIG_JFFS2_FS_DEBUG=2
# CONFIG_JFFS2_FS_NAND is not set
# CONFIG_JFFS2_FS_NOR_ECC is not set
# CONFIG_JFFS2_COMPRESSION_OPTIONS is not set
CONFIG_JFFS2_ZLIB=y
CONFIG_JFFS2_RTIME=y
# CONFIG_JFFS2_RUBIN is not set
CONFIG_CRAMFS=y CONFIG_CRAMFS=y
# CONFIG_VXFS_FS is not set # CONFIG_VXFS_FS is not set
# CONFIG_HPFS_FS is not set # CONFIG_HPFS_FS is not set
@ -1007,4 +862,3 @@ CONFIG_CRYPTO_DES=y
CONFIG_CRC32=y CONFIG_CRC32=y
# CONFIG_LIBCRC32C is not set # CONFIG_LIBCRC32C is not set
CONFIG_ZLIB_INFLATE=y CONFIG_ZLIB_INFLATE=y
CONFIG_ZLIB_DEFLATE=y

View File

@ -102,7 +102,7 @@ static unsigned long next_rtc_update;
*/ */
static inline void do_set_rtc(void) static inline void do_set_rtc(void)
{ {
if (time_status & STA_UNSYNC || set_rtc == NULL) if (!ntp_synced() || set_rtc == NULL)
return; return;
if (next_rtc_update && if (next_rtc_update &&
@ -292,10 +292,7 @@ int do_settimeofday(struct timespec *tv)
set_normalized_timespec(&xtime, sec, nsec); set_normalized_timespec(&xtime, sec, nsec);
set_normalized_timespec(&wall_to_monotonic, wtm_sec, wtm_nsec); set_normalized_timespec(&wall_to_monotonic, wtm_sec, wtm_nsec);
time_adjust = 0; /* stop active adjtime() */ ntp_clear();
time_status |= STA_UNSYNC;
time_maxerror = NTP_PHASE_LIMIT;
time_esterror = NTP_PHASE_LIMIT;
write_sequnlock_irq(&xtime_lock); write_sequnlock_irq(&xtime_lock);
clock_was_set(); clock_was_set();
return 0; return 0;

View File

@ -87,6 +87,7 @@ config FOOTBRIDGE_ADDIN
# EBSA285 board in either host or addin mode # EBSA285 board in either host or addin mode
config ARCH_EBSA285 config ARCH_EBSA285
select ARCH_MAY_HAVE_PC_FDC
bool bool
endif endif

View File

@ -60,7 +60,7 @@ static unsigned long iop321_gettimeoffset(void)
/* /*
* Now convert them to usec. * Now convert them to usec.
*/ */
usec = (unsigned long)(elapsed * (tick_nsec / 1000)) / LATCH; usec = (unsigned long)(elapsed / (CLOCK_TICK_RATE/1000000));
return usec; return usec;
} }
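
This hunk (and the identical iop331 change below) stops scaling by tick_nsec/LATCH and instead divides the elapsed hardware count by ticks-per-microsecond. A standalone illustration of the corrected arithmetic, with a made-up 200 MHz rate standing in for the platform's CLOCK_TICK_RATE:

#include <stdio.h>

int main(void)
{
	unsigned long clock_tick_rate = 200000000UL;	/* assumed rate, Hz */
	unsigned long elapsed = 1500000UL;		/* hardware ticks since last jiffy */

	/* ticks divided by ticks-per-microsecond gives microseconds */
	unsigned long usec = elapsed / (clock_tick_rate / 1000000UL);

	printf("%lu ticks -> %lu us\n", elapsed, usec);	/* prints 7500 us */
	return 0;
}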

View File

@ -58,7 +58,7 @@ static unsigned long iop331_gettimeoffset(void)
/* /*
* Now convert them to usec. * Now convert them to usec.
*/ */
usec = (unsigned long)(elapsed * (tick_nsec / 1000)) / LATCH; usec = (unsigned long)(elapsed / (CLOCK_TICK_RATE/1000000));
return usec; return usec;
} }

View File

@ -382,7 +382,7 @@ static void ixp2000_GPIO_irq_unmask(unsigned int irq)
static struct irqchip ixp2000_GPIO_irq_chip = { static struct irqchip ixp2000_GPIO_irq_chip = {
.ack = ixp2000_GPIO_irq_mask_ack, .ack = ixp2000_GPIO_irq_mask_ack,
.mask = ixp2000_GPIO_irq_mask, .mask = ixp2000_GPIO_irq_mask,
.unmask = ixp2000_GPIO_irq_unmask .unmask = ixp2000_GPIO_irq_unmask,
.set_type = ixp2000_GPIO_irq_type, .set_type = ixp2000_GPIO_irq_type,
}; };

View File

@ -179,17 +179,17 @@ static void ixp4xx_irq_level_unmask(unsigned int irq)
} }
static struct irqchip ixp4xx_irq_level_chip = { static struct irqchip ixp4xx_irq_level_chip = {
.ack = ixp4xx_irq_mask, .ack = ixp4xx_irq_mask,
.mask = ixp4xx_irq_mask, .mask = ixp4xx_irq_mask,
.unmask = ixp4xx_irq_level_unmask, .unmask = ixp4xx_irq_level_unmask,
.type = ixp4xx_set_irq_type .set_type = ixp4xx_set_irq_type,
}; };
static struct irqchip ixp4xx_irq_edge_chip = { static struct irqchip ixp4xx_irq_edge_chip = {
.ack = ixp4xx_irq_ack, .ack = ixp4xx_irq_ack,
.mask = ixp4xx_irq_mask, .mask = ixp4xx_irq_mask,
.unmask = ixp4xx_irq_unmask, .unmask = ixp4xx_irq_unmask,
.type = ixp4xx_set_irq_type .set_type = ixp4xx_set_irq_type,
}; };
static void ixp4xx_config_irq(unsigned irq, enum ixp4xx_irq_type type) static void ixp4xx_config_irq(unsigned irq, enum ixp4xx_irq_type type)
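
The ixp2000 hunk adds the comma that was missing after .unmask (without it the initializer no longer parses once .set_type follows), and the ixp4xx hunk renames the stale .type member to .set_type. A minimal sketch of a well-formed struct irqchip initializer of this era, with the callback signatures assumed from the handlers named above:

static void demo_ack(unsigned int irq)    { /* acknowledge the source */ }
static void demo_mask(unsigned int irq)   { /* mask it at the controller */ }
static void demo_unmask(unsigned int irq) { /* unmask it again */ }
static int  demo_set_type(unsigned int irq, unsigned int type) { return 0; }

static struct irqchip demo_chip = {
	.ack      = demo_ack,
	.mask     = demo_mask,
	.unmask   = demo_unmask,	/* trailing comma keeps later additions safe */
	.set_type = demo_set_type,	/* note: .set_type, not .type */
};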

View File

@ -165,10 +165,10 @@ static struct omap_irq_bank omap1610_irq_banks[] = {
#endif #endif
static struct irqchip omap_irq_chip = { static struct irqchip omap_irq_chip = {
.ack = omap_mask_ack_irq, .ack = omap_mask_ack_irq,
.mask = omap_mask_irq, .mask = omap_mask_irq,
.unmask = omap_unmask_irq, .unmask = omap_unmask_irq,
.wake = omap_wake_irq, .set_wake = omap_wake_irq,
}; };
void __init omap_init_irq(void) void __init omap_init_irq(void)

View File

@ -11,7 +11,7 @@ obj-$(CONFIG_PXA27x) += pxa27x.o
obj-$(CONFIG_ARCH_LUBBOCK) += lubbock.o obj-$(CONFIG_ARCH_LUBBOCK) += lubbock.o
obj-$(CONFIG_MACH_MAINSTONE) += mainstone.o obj-$(CONFIG_MACH_MAINSTONE) += mainstone.o
obj-$(CONFIG_ARCH_PXA_IDP) += idp.o obj-$(CONFIG_ARCH_PXA_IDP) += idp.o
obj-$(CONFIG_PXA_SHARP_C7xx) += corgi.o corgi_ssp.o ssp.o obj-$(CONFIG_PXA_SHARP_C7xx) += corgi.o corgi_ssp.o corgi_lcd.o ssp.o
obj-$(CONFIG_MACH_POODLE) += poodle.o obj-$(CONFIG_MACH_POODLE) += poodle.o
# Support for blinky lights # Support for blinky lights

View File

@ -39,7 +39,6 @@
#include <asm/mach/sharpsl_param.h> #include <asm/mach/sharpsl_param.h>
#include <asm/hardware/scoop.h> #include <asm/hardware/scoop.h>
#include <video/w100fb.h>
#include "generic.h" #include "generic.h"
@ -87,7 +86,7 @@ struct platform_device corgiscoop_device = {
* also use scoop functions and this makes the power up/down order * also use scoop functions and this makes the power up/down order
* work correctly. * work correctly.
*/ */
static struct platform_device corgissp_device = { struct platform_device corgissp_device = {
.name = "corgi-ssp", .name = "corgi-ssp",
.dev = { .dev = {
.parent = &corgiscoop_device.dev, .parent = &corgiscoop_device.dev,
@ -96,35 +95,6 @@ static struct platform_device corgissp_device = {
}; };
/*
* Corgi w100 Frame Buffer Device
*/
static struct w100fb_mach_info corgi_fb_info = {
.w100fb_ssp_send = corgi_ssp_lcdtg_send,
.comadj = -1,
.phadadj = -1,
};
static struct resource corgi_fb_resources[] = {
[0] = {
.start = 0x08000000,
.end = 0x08ffffff,
.flags = IORESOURCE_MEM,
},
};
static struct platform_device corgifb_device = {
.name = "w100fb",
.id = -1,
.dev = {
.platform_data = &corgi_fb_info,
.parent = &corgissp_device.dev,
},
.num_resources = ARRAY_SIZE(corgi_fb_resources),
.resource = corgi_fb_resources,
};
/* /*
* Corgi Backlight Device * Corgi Backlight Device
*/ */
@ -137,6 +107,27 @@ static struct platform_device corgibl_device = {
}; };
/*
* Corgi Keyboard Device
*/
static struct platform_device corgikbd_device = {
.name = "corgi-keyboard",
.id = -1,
};
/*
* Corgi Touch Screen Device
*/
static struct platform_device corgits_device = {
.name = "corgi-ts",
.dev = {
.parent = &corgissp_device.dev,
},
.id = -1,
};
/* /*
* MMC/SD Device * MMC/SD Device
* *
@ -199,6 +190,11 @@ static void corgi_mci_setpower(struct device *dev, unsigned int vdd)
} }
} }
static int corgi_mci_get_ro(struct device *dev)
{
return GPLR(CORGI_GPIO_nSD_WP) & GPIO_bit(CORGI_GPIO_nSD_WP);
}
static void corgi_mci_exit(struct device *dev, void *data) static void corgi_mci_exit(struct device *dev, void *data)
{ {
free_irq(CORGI_IRQ_GPIO_nSD_DETECT, data); free_irq(CORGI_IRQ_GPIO_nSD_DETECT, data);
@ -208,11 +204,13 @@ static void corgi_mci_exit(struct device *dev, void *data)
static struct pxamci_platform_data corgi_mci_platform_data = { static struct pxamci_platform_data corgi_mci_platform_data = {
.ocr_mask = MMC_VDD_32_33|MMC_VDD_33_34, .ocr_mask = MMC_VDD_32_33|MMC_VDD_33_34,
.init = corgi_mci_init, .init = corgi_mci_init,
.get_ro = corgi_mci_get_ro,
.setpower = corgi_mci_setpower, .setpower = corgi_mci_setpower,
.exit = corgi_mci_exit, .exit = corgi_mci_exit,
}; };
/* /*
* USB Device Controller * USB Device Controller
*/ */
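
The new get_ro hook above simply samples the card's write-protect GPIO and hands the raw level back to the driver; a non-zero return is taken to mean the card is write-protected. A standalone model of that bit test (the GPIO number and register value here are invented for illustration):

#include <stdio.h>

#define GPIO_BIT(x)	(1u << ((x) & 31))

int main(void)
{
	unsigned int gplr = 0x00400000;	/* pretend GPIO level register */
	unsigned int wp_gpio = 22;	/* hypothetical write-protect line */

	/* non-zero means the write-protect switch reads as asserted */
	printf("read-only: %s\n", (gplr & GPIO_BIT(wp_gpio)) ? "yes" : "no");
	return 0;
}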
@ -238,14 +236,13 @@ static struct platform_device *devices[] __initdata = {
&corgiscoop_device, &corgiscoop_device,
&corgissp_device, &corgissp_device,
&corgifb_device, &corgifb_device,
&corgikbd_device,
&corgibl_device, &corgibl_device,
&corgits_device,
}; };
static void __init corgi_init(void) static void __init corgi_init(void)
{ {
corgi_fb_info.comadj=sharpsl_param.comadj;
corgi_fb_info.phadadj=sharpsl_param.phadadj;
pxa_gpio_mode(CORGI_GPIO_USB_PULLUP | GPIO_OUT); pxa_gpio_mode(CORGI_GPIO_USB_PULLUP | GPIO_OUT);
pxa_set_udc_info(&udc_info); pxa_set_udc_info(&udc_info);
pxa_set_mci_info(&corgi_mci_platform_data); pxa_set_mci_info(&corgi_mci_platform_data);
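
With the framebuffer moved out to corgi_lcd.c, corgi.c keeps the devices[] table (now including the keyboard and touch-screen entries) and shares corgissp_device/corgifb_device across the two files. The table itself is presumably registered in one call further down in corgi_init(), outside the hunks shown; the usual pattern looks like this sketch:

/* Assumed registration pattern -- not quoted from the file. */
static void __init example_register_board_devices(void)
{
	platform_add_devices(devices, ARRAY_SIZE(devices));
}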

View File

@ -0,0 +1,396 @@
/*
* linux/drivers/video/w100fb.c
*
* Corgi LCD Specific Code for ATI Imageon w100 (Wallaby)
*
* Copyright (C) 2005 Richard Purdie
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
*/
#include <linux/delay.h>
#include <linux/kernel.h>
#include <linux/device.h>
#include <asm/arch/corgi.h>
#include <asm/mach/sharpsl_param.h>
#include <video/w100fb.h>
/* Register Addresses */
#define RESCTL_ADRS 0x00
#define PHACTRL_ADRS 0x01
#define DUTYCTRL_ADRS 0x02
#define POWERREG0_ADRS 0x03
#define POWERREG1_ADRS 0x04
#define GPOR3_ADRS 0x05
#define PICTRL_ADRS 0x06
#define POLCTRL_ADRS 0x07
/* Register Bit Definitions */
#define RESCTL_QVGA 0x01
#define RESCTL_VGA 0x00
#define POWER1_VW_ON 0x01 /* VW Supply FET ON */
#define POWER1_GVSS_ON 0x02 /* GVSS(-8V) Power Supply ON */
#define POWER1_VDD_ON 0x04 /* VDD(8V),SVSS(-4V) Power Supply ON */
#define POWER1_VW_OFF 0x00 /* VW Supply FET OFF */
#define POWER1_GVSS_OFF 0x00 /* GVSS(-8V) Power Supply OFF */
#define POWER1_VDD_OFF 0x00 /* VDD(8V),SVSS(-4V) Power Supply OFF */
#define POWER0_COM_DCLK 0x01 /* COM Voltage DC Bias DAC Serial Data Clock */
#define POWER0_COM_DOUT 0x02 /* COM Voltage DC Bias DAC Serial Data Out */
#define POWER0_DAC_ON 0x04 /* DAC Power Supply ON */
#define POWER0_COM_ON 0x08 /* COM Power Supply ON */
#define POWER0_VCC5_ON 0x10 /* VCC5 Power Supply ON */
#define POWER0_DAC_OFF 0x00 /* DAC Power Supply OFF */
#define POWER0_COM_OFF 0x00 /* COM Power Supply OFF */
#define POWER0_VCC5_OFF 0x00 /* VCC5 Power Supply OFF */
#define PICTRL_INIT_STATE 0x01
#define PICTRL_INIOFF 0x02
#define PICTRL_POWER_DOWN 0x04
#define PICTRL_COM_SIGNAL_OFF 0x08
#define PICTRL_DAC_SIGNAL_OFF 0x10
#define POLCTRL_SYNC_POL_FALL 0x01
#define POLCTRL_EN_POL_FALL 0x02
#define POLCTRL_DATA_POL_FALL 0x04
#define POLCTRL_SYNC_ACT_H 0x08
#define POLCTRL_EN_ACT_L 0x10
#define POLCTRL_SYNC_POL_RISE 0x00
#define POLCTRL_EN_POL_RISE 0x00
#define POLCTRL_DATA_POL_RISE 0x00
#define POLCTRL_SYNC_ACT_L 0x00
#define POLCTRL_EN_ACT_H 0x00
#define PHACTRL_PHASE_MANUAL 0x01
#define DEFAULT_PHAD_QVGA (9)
#define DEFAULT_COMADJ (125)
/*
* This is only a pseudo I2C interface. We can't use the standard kernel
* routines as the interface is write only. We just assume the data is acked...
*/
static void lcdtg_ssp_i2c_send(u8 data)
{
corgi_ssp_lcdtg_send(POWERREG0_ADRS, data);
udelay(10);
}
static void lcdtg_i2c_send_bit(u8 data)
{
lcdtg_ssp_i2c_send(data);
lcdtg_ssp_i2c_send(data | POWER0_COM_DCLK);
lcdtg_ssp_i2c_send(data);
}
static void lcdtg_i2c_send_start(u8 base)
{
lcdtg_ssp_i2c_send(base | POWER0_COM_DCLK | POWER0_COM_DOUT);
lcdtg_ssp_i2c_send(base | POWER0_COM_DCLK);
lcdtg_ssp_i2c_send(base);
}
static void lcdtg_i2c_send_stop(u8 base)
{
lcdtg_ssp_i2c_send(base);
lcdtg_ssp_i2c_send(base | POWER0_COM_DCLK);
lcdtg_ssp_i2c_send(base | POWER0_COM_DCLK | POWER0_COM_DOUT);
}
static void lcdtg_i2c_send_byte(u8 base, u8 data)
{
int i;
for (i = 0; i < 8; i++) {
if (data & 0x80)
lcdtg_i2c_send_bit(base | POWER0_COM_DOUT);
else
lcdtg_i2c_send_bit(base);
data <<= 1;
}
}
static void lcdtg_i2c_wait_ack(u8 base)
{
lcdtg_i2c_send_bit(base);
}
static void lcdtg_set_common_voltage(u8 base_data, u8 data)
{
/* Set Common Voltage to M62332FP via I2C */
lcdtg_i2c_send_start(base_data);
lcdtg_i2c_send_byte(base_data, 0x9c);
lcdtg_i2c_wait_ack(base_data);
lcdtg_i2c_send_byte(base_data, 0x00);
lcdtg_i2c_wait_ack(base_data);
lcdtg_i2c_send_byte(base_data, data);
lcdtg_i2c_wait_ack(base_data);
lcdtg_i2c_send_stop(base_data);
}
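
lcdtg_set_common_voltage() hand-rolls a single I2C write to the M62332FP DAC over the SSP link: start condition, device address byte 0x9c, sub-address 0x00, the data byte, then stop, with every byte clocked out MSB first and a dummy clock standing in for the ACK. A standalone illustration of the MSB-first serialization used by lcdtg_i2c_send_byte():

#include <stdio.h>

static void send_byte(unsigned char data)
{
	int i;

	for (i = 0; i < 8; i++) {
		printf("%d", (data & 0x80) ? 1 : 0);	/* most significant bit first */
		data <<= 1;
	}
	printf("\n");
}

int main(void)
{
	send_byte(0x9c);	/* device address byte -> 10011100 */
	send_byte(0x00);	/* sub-address         -> 00000000 */
	send_byte(0x7d);	/* example data (125)  -> 01111101 */
	return 0;
}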
/* Set Phase Adjust */
static void lcdtg_set_phadadj(struct w100fb_par *par)
{
int adj;
switch(par->xres) {
case 480:
case 640:
/* Setting for VGA */
adj = sharpsl_param.phadadj;
if (adj < 0) {
adj = PHACTRL_PHASE_MANUAL;
} else {
adj = ((adj & 0x0f) << 1) | PHACTRL_PHASE_MANUAL;
}
break;
case 240:
case 320:
default:
/* Setting for QVGA */
adj = (DEFAULT_PHAD_QVGA << 1) | PHACTRL_PHASE_MANUAL;
break;
}
corgi_ssp_lcdtg_send(PHACTRL_ADRS, adj);
}
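
For the VGA case the stored phase value is folded into the PHACTRL register as ((adj & 0x0f) << 1) | PHACTRL_PHASE_MANUAL. A quick standalone check of that encoding:

#include <stdio.h>

#define PHACTRL_PHASE_MANUAL	0x01

int main(void)
{
	int adj = 9;	/* e.g. a stored phadadj value, or DEFAULT_PHAD_QVGA */
	int reg = ((adj & 0x0f) << 1) | PHACTRL_PHASE_MANUAL;

	printf("phadadj %d -> PHACTRL 0x%02x\n", adj, reg);	/* prints 0x13 */
	return 0;
}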
static int lcd_inited;
static void lcdtg_hw_init(struct w100fb_par *par)
{
if (!lcd_inited) {
int comadj;
/* Initialize Internal Logic & Port */
corgi_ssp_lcdtg_send(PICTRL_ADRS, PICTRL_POWER_DOWN | PICTRL_INIOFF | PICTRL_INIT_STATE
| PICTRL_COM_SIGNAL_OFF | PICTRL_DAC_SIGNAL_OFF);
corgi_ssp_lcdtg_send(POWERREG0_ADRS, POWER0_COM_DCLK | POWER0_COM_DOUT | POWER0_DAC_OFF
| POWER0_COM_OFF | POWER0_VCC5_OFF);
corgi_ssp_lcdtg_send(POWERREG1_ADRS, POWER1_VW_OFF | POWER1_GVSS_OFF | POWER1_VDD_OFF);
/* VDD(+8V), SVSS(-4V) ON */
corgi_ssp_lcdtg_send(POWERREG1_ADRS, POWER1_VW_OFF | POWER1_GVSS_OFF | POWER1_VDD_ON);
mdelay(3);
/* DAC ON */
corgi_ssp_lcdtg_send(POWERREG0_ADRS, POWER0_COM_DCLK | POWER0_COM_DOUT | POWER0_DAC_ON
| POWER0_COM_OFF | POWER0_VCC5_OFF);
/* INIB = H, INI = L */
/* PICTL[0] = H , PICTL[1] = PICTL[2] = PICTL[4] = L */
corgi_ssp_lcdtg_send(PICTRL_ADRS, PICTRL_INIT_STATE | PICTRL_COM_SIGNAL_OFF);
/* Set Common Voltage */
comadj = sharpsl_param.comadj;
if (comadj < 0)
comadj = DEFAULT_COMADJ;
lcdtg_set_common_voltage((POWER0_DAC_ON | POWER0_COM_OFF | POWER0_VCC5_OFF), comadj);
/* VCC5 ON, DAC ON */
corgi_ssp_lcdtg_send(POWERREG0_ADRS, POWER0_COM_DCLK | POWER0_COM_DOUT | POWER0_DAC_ON |
POWER0_COM_OFF | POWER0_VCC5_ON);
/* GVSS(-8V) ON, VDD ON */
corgi_ssp_lcdtg_send(POWERREG1_ADRS, POWER1_VW_OFF | POWER1_GVSS_ON | POWER1_VDD_ON);
mdelay(2);
/* COM SIGNAL ON (PICTL[3] = L) */
corgi_ssp_lcdtg_send(PICTRL_ADRS, PICTRL_INIT_STATE);
/* COM ON, DAC ON, VCC5_ON */
corgi_ssp_lcdtg_send(POWERREG0_ADRS, POWER0_COM_DCLK | POWER0_COM_DOUT | POWER0_DAC_ON
| POWER0_COM_ON | POWER0_VCC5_ON);
/* VW ON, GVSS ON, VDD ON */
corgi_ssp_lcdtg_send(POWERREG1_ADRS, POWER1_VW_ON | POWER1_GVSS_ON | POWER1_VDD_ON);
/* Signals output enable */
corgi_ssp_lcdtg_send(PICTRL_ADRS, 0);
/* Set Phase Adjust */
lcdtg_set_phadadj(par);
/* Initialize for Input Signals from ATI */
corgi_ssp_lcdtg_send(POLCTRL_ADRS, POLCTRL_SYNC_POL_RISE | POLCTRL_EN_POL_RISE
| POLCTRL_DATA_POL_RISE | POLCTRL_SYNC_ACT_L | POLCTRL_EN_ACT_H);
udelay(1000);
lcd_inited=1;
} else {
lcdtg_set_phadadj(par);
}
switch(par->xres) {
case 480:
case 640:
/* Set Lcd Resolution (VGA) */
corgi_ssp_lcdtg_send(RESCTL_ADRS, RESCTL_VGA);
break;
case 240:
case 320:
default:
/* Set Lcd Resolution (QVGA) */
corgi_ssp_lcdtg_send(RESCTL_ADRS, RESCTL_QVGA);
break;
}
}
static void lcdtg_suspend(struct w100fb_par *par)
{
/* 60Hz x 2 frame = 16.7msec x 2 = 33.4 msec */
mdelay(34);
/* (1)VW OFF */
corgi_ssp_lcdtg_send(POWERREG1_ADRS, POWER1_VW_OFF | POWER1_GVSS_ON | POWER1_VDD_ON);
/* (2)COM OFF */
corgi_ssp_lcdtg_send(PICTRL_ADRS, PICTRL_COM_SIGNAL_OFF);
corgi_ssp_lcdtg_send(POWERREG0_ADRS, POWER0_DAC_ON | POWER0_COM_OFF | POWER0_VCC5_ON);
/* (3)Set Common Voltage Bias 0V */
lcdtg_set_common_voltage(POWER0_DAC_ON | POWER0_COM_OFF | POWER0_VCC5_ON, 0);
/* (4)GVSS OFF */
corgi_ssp_lcdtg_send(POWERREG1_ADRS, POWER1_VW_OFF | POWER1_GVSS_OFF | POWER1_VDD_ON);
/* (5)VCC5 OFF */
corgi_ssp_lcdtg_send(POWERREG0_ADRS, POWER0_DAC_ON | POWER0_COM_OFF | POWER0_VCC5_OFF);
/* (6)Set PDWN, INIOFF, DACOFF */
corgi_ssp_lcdtg_send(PICTRL_ADRS, PICTRL_INIOFF | PICTRL_DAC_SIGNAL_OFF |
PICTRL_POWER_DOWN | PICTRL_COM_SIGNAL_OFF);
/* (7)DAC OFF */
corgi_ssp_lcdtg_send(POWERREG0_ADRS, POWER0_DAC_OFF | POWER0_COM_OFF | POWER0_VCC5_OFF);
/* (8)VDD OFF */
corgi_ssp_lcdtg_send(POWERREG1_ADRS, POWER1_VW_OFF | POWER1_GVSS_OFF | POWER1_VDD_OFF);
lcd_inited = 0;
}
static struct w100_tg_info corgi_lcdtg_info = {
.change=lcdtg_hw_init,
.suspend=lcdtg_suspend,
.resume=lcdtg_hw_init,
};
/*
* Corgi w100 Frame Buffer Device
*/
static struct w100_mem_info corgi_fb_mem = {
.ext_cntl = 0x00040003,
.sdram_mode_reg = 0x00650021,
.ext_timing_cntl = 0x10002a4a,
.io_cntl = 0x7ff87012,
.size = 0x1fffff,
};
static struct w100_gen_regs corgi_fb_regs = {
.lcd_format = 0x00000003,
.lcdd_cntl1 = 0x01CC0000,
.lcdd_cntl2 = 0x0003FFFF,
.genlcd_cntl1 = 0x00FFFF0D,
.genlcd_cntl2 = 0x003F3003,
.genlcd_cntl3 = 0x000102aa,
};
static struct w100_gpio_regs corgi_fb_gpio = {
.init_data1 = 0x000000bf,
.init_data2 = 0x00000000,
.gpio_dir1 = 0x00000000,
.gpio_oe1 = 0x03c0feff,
.gpio_dir2 = 0x00000000,
.gpio_oe2 = 0x00000000,
};
static struct w100_mode corgi_fb_modes[] = {
{
.xres = 480,
.yres = 640,
.left_margin = 0x56,
.right_margin = 0x55,
.upper_margin = 0x03,
.lower_margin = 0x00,
.crtc_ss = 0x82360056,
.crtc_ls = 0xA0280000,
.crtc_gs = 0x80280028,
.crtc_vpos_gs = 0x02830002,
.crtc_rev = 0x00400008,
.crtc_dclk = 0xA0000000,
.crtc_gclk = 0x8015010F,
.crtc_goe = 0x80100110,
.crtc_ps1_active = 0x41060010,
.pll_freq = 75,
.fast_pll_freq = 100,
.sysclk_src = CLK_SRC_PLL,
.sysclk_divider = 0,
.pixclk_src = CLK_SRC_PLL,
.pixclk_divider = 2,
.pixclk_divider_rotated = 6,
},{
.xres = 240,
.yres = 320,
.left_margin = 0x27,
.right_margin = 0x2e,
.upper_margin = 0x01,
.lower_margin = 0x00,
.crtc_ss = 0x81170027,
.crtc_ls = 0xA0140000,
.crtc_gs = 0xC0140014,
.crtc_vpos_gs = 0x00010141,
.crtc_rev = 0x00400008,
.crtc_dclk = 0xA0000000,
.crtc_gclk = 0x8015010F,
.crtc_goe = 0x80100110,
.crtc_ps1_active = 0x41060010,
.pll_freq = 0,
.fast_pll_freq = 0,
.sysclk_src = CLK_SRC_XTAL,
.sysclk_divider = 0,
.pixclk_src = CLK_SRC_XTAL,
.pixclk_divider = 1,
.pixclk_divider_rotated = 1,
},
};
static struct w100fb_mach_info corgi_fb_info = {
.tg = &corgi_lcdtg_info,
.init_mode = INIT_MODE_ROTATED,
.mem = &corgi_fb_mem,
.regs = &corgi_fb_regs,
.modelist = &corgi_fb_modes[0],
.num_modes = 2,
.gpio = &corgi_fb_gpio,
.xtal_freq = 12500000,
.xtal_dbl = 0,
};
static struct resource corgi_fb_resources[] = {
[0] = {
.start = 0x08000000,
.end = 0x08ffffff,
.flags = IORESOURCE_MEM,
},
};
struct platform_device corgifb_device = {
.name = "w100fb",
.id = -1,
.num_resources = ARRAY_SIZE(corgi_fb_resources),
.resource = corgi_fb_resources,
.dev = {
.platform_data = &corgi_fb_info,
.parent = &corgissp_device.dev,
},
};

View File

@ -2,6 +2,13 @@ if ARCH_S3C2410
menu "S3C24XX Implementations" menu "S3C24XX Implementations"
config MACH_ANUBIS
bool "Simtec Electronics ANUBIS"
select CPU_S3C2440
help
Say Y here if you are using the Simtec Electronics ANUBIS
development system
config ARCH_BAST config ARCH_BAST
bool "Simtec Electronics BAST (EB2410ITX)" bool "Simtec Electronics BAST (EB2410ITX)"
select CPU_S3C2410 select CPU_S3C2410
@ -11,6 +18,14 @@ config ARCH_BAST
Product page: <http://www.simtec.co.uk/products/EB2410ITX/>. Product page: <http://www.simtec.co.uk/products/EB2410ITX/>.
config BAST_PC104_IRQ
bool "BAST PC104 IRQ support"
depends on ARCH_BAST
default y
help
Say Y here to enable the PC104 IRQ routing on the
Simtec BAST (EB2410ITX)
config ARCH_H1940 config ARCH_H1940
bool "IPAQ H1940" bool "IPAQ H1940"
select CPU_S3C2410 select CPU_S3C2410

View File

@ -26,8 +26,13 @@ obj-$(CONFIG_CPU_S3C2440) += s3c2440.o s3c2440-dsc.o
obj-$(CONFIG_CPU_S3C2440) += s3c2440-irq.o obj-$(CONFIG_CPU_S3C2440) += s3c2440-irq.o
obj-$(CONFIG_CPU_S3C2440) += s3c2440-clock.o obj-$(CONFIG_CPU_S3C2440) += s3c2440-clock.o
# bast extras
obj-$(CONFIG_BAST_PC104_IRQ) += bast-irq.o
# machine specific support # machine specific support
obj-$(CONFIG_MACH_ANUBIS) += mach-anubis.o
obj-$(CONFIG_ARCH_BAST) += mach-bast.o usb-simtec.o obj-$(CONFIG_ARCH_BAST) += mach-bast.o usb-simtec.o
obj-$(CONFIG_ARCH_H1940) += mach-h1940.o obj-$(CONFIG_ARCH_H1940) += mach-h1940.o
obj-$(CONFIG_MACH_N30) += mach-n30.o obj-$(CONFIG_MACH_N30) += mach-n30.o

View File

@ -1,6 +1,6 @@
/* linux/arch/arm/mach-s3c2410/bast-irq.c /* linux/arch/arm/mach-s3c2410/bast-irq.c
* *
* Copyright (c) 2004 Simtec Electronics * Copyright (c) 2003,2005 Simtec Electronics
* Ben Dooks <ben@simtec.co.uk> * Ben Dooks <ben@simtec.co.uk>
* *
* http://www.simtec.co.uk/products/EB2410ITX/ * http://www.simtec.co.uk/products/EB2410ITX/
@ -21,7 +21,8 @@
* *
* Modifications: * Modifications:
* 08-Jan-2003 BJD Moved from central IRQ code * 08-Jan-2003 BJD Moved from central IRQ code
*/ * 21-Aug-2005 BJD Fixed missing code and compile errors
*/
#include <linux/init.h> #include <linux/init.h>
@ -30,12 +31,19 @@
#include <linux/ptrace.h> #include <linux/ptrace.h>
#include <linux/sysdev.h> #include <linux/sysdev.h>
#include <asm/mach-types.h>
#include <asm/hardware.h> #include <asm/hardware.h>
#include <asm/irq.h> #include <asm/irq.h>
#include <asm/io.h> #include <asm/io.h>
#include <asm/mach/irq.h> #include <asm/mach/irq.h>
#include <asm/hardware/s3c2410/irq.h>
#include <asm/arch/regs-irq.h>
#include <asm/arch/bast-map.h>
#include <asm/arch/bast-irq.h>
#include "irq.h"
#if 0 #if 0
#include <asm/debug-ll.h> #include <asm/debug-ll.h>
@ -79,15 +87,15 @@ bast_pc104_mask(unsigned int irqno)
temp = __raw_readb(BAST_VA_PC104_IRQMASK); temp = __raw_readb(BAST_VA_PC104_IRQMASK);
temp &= ~bast_pc104_irqmasks[irqno]; temp &= ~bast_pc104_irqmasks[irqno];
__raw_writeb(temp, BAST_VA_PC104_IRQMASK); __raw_writeb(temp, BAST_VA_PC104_IRQMASK);
if (temp == 0)
bast_extint_mask(IRQ_ISA);
} }
static void static void
bast_pc104_ack(unsigned int irqno) bast_pc104_maskack(unsigned int irqno)
{ {
bast_extint_ack(IRQ_ISA); struct irqdesc *desc = irq_desc + IRQ_ISA;
bast_pc104_mask(irqno);
desc->chip->ack(IRQ_ISA);
} }
static void static void
@ -98,14 +106,12 @@ bast_pc104_unmask(unsigned int irqno)
temp = __raw_readb(BAST_VA_PC104_IRQMASK); temp = __raw_readb(BAST_VA_PC104_IRQMASK);
temp |= bast_pc104_irqmasks[irqno]; temp |= bast_pc104_irqmasks[irqno];
__raw_writeb(temp, BAST_VA_PC104_IRQMASK); __raw_writeb(temp, BAST_VA_PC104_IRQMASK);
bast_extint_unmask(IRQ_ISA);
} }
static struct bast_pc104_chip = { static struct irqchip bast_pc104_chip = {
.mask = bast_pc104_mask, .mask = bast_pc104_mask,
.unmask = bast_pc104_unmask, .unmask = bast_pc104_unmask,
.ack = bast_pc104_ack .ack = bast_pc104_maskack
}; };
static void static void
@ -119,14 +125,49 @@ bast_irq_pc104_demux(unsigned int irq,
stat = __raw_readb(BAST_VA_PC104_IRQREQ) & 0xf; stat = __raw_readb(BAST_VA_PC104_IRQREQ) & 0xf;
for (i = 0; i < 4 && stat != 0; i++) { if (unlikely(stat == 0)) {
if (stat & 1) { /* ack if we get an irq with nothing (ie, startup) */
irqno = bast_pc104_irqs[i];
desc = irq_desc + irqno;
desc_handle_irq(irqno, desc, regs); desc = irq_desc + IRQ_ISA;
desc->chip->ack(IRQ_ISA);
} else {
/* handle the IRQ */
for (i = 0; stat != 0; i++, stat >>= 1) {
if (stat & 1) {
irqno = bast_pc104_irqs[i];
desc_handle_irq(irqno, irq_desc + irqno, regs);
}
} }
stat >>= 1;
} }
} }
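
The rewritten demux handles the empty case by just acking the parent IRQ, and otherwise walks the 4-bit status value one bit per iteration, dispatching the matching PC104 interrupt for every set bit. A standalone model of that loop:

#include <stdio.h>

int main(void)
{
	unsigned int stat = 0x5;	/* bits 0 and 2 pending */
	unsigned int i;

	if (stat == 0) {
		printf("nothing pending: ack the parent only\n");
		return 0;
	}

	for (i = 0; stat != 0; i++, stat >>= 1)
		if (stat & 1)
			printf("dispatch PC104 interrupt index %u\n", i);

	return 0;	/* prints indices 0 and 2 */
}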
static __init int bast_irq_init(void)
{
unsigned int i;
if (machine_is_bast()) {
printk(KERN_INFO "BAST PC104 IRQ routing, (c) 2005 Simtec Electronics\n");
/* zap all the IRQs */
__raw_writeb(0x0, BAST_VA_PC104_IRQMASK);
set_irq_chained_handler(IRQ_ISA, bast_irq_pc104_demux);
/* register our IRQs */
for (i = 0; i < 4; i++) {
unsigned int irqno = bast_pc104_irqs[i];
set_irq_chip(irqno, &bast_pc104_chip);
set_irq_handler(irqno, do_level_IRQ);
set_irq_flags(irqno, IRQF_VALID);
}
}
return 0;
}
arch_initcall(bast_irq_init);

View File

@ -0,0 +1,270 @@
/* linux/arch/arm/mach-s3c2410/mach-anubis.c
*
* Copyright (c) 2003-2005 Simtec Electronics
* http://armlinux.simtec.co.uk/
* Ben Dooks <ben@simtec.co.uk>
*
*
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
* Modifications:
* 02-May-2005 BJD Copied from mach-bast.c
*/
#include <linux/kernel.h>
#include <linux/types.h>
#include <linux/interrupt.h>
#include <linux/list.h>
#include <linux/timer.h>
#include <linux/init.h>
#include <linux/device.h>
#include <asm/mach/arch.h>
#include <asm/mach/map.h>
#include <asm/mach/irq.h>
#include <asm/arch/anubis-map.h>
#include <asm/arch/anubis-irq.h>
#include <asm/arch/anubis-cpld.h>
#include <asm/hardware.h>
#include <asm/io.h>
#include <asm/irq.h>
#include <asm/mach-types.h>
#include <asm/arch/regs-serial.h>
#include <asm/arch/regs-gpio.h>
#include <asm/arch/regs-mem.h>
#include <asm/arch/regs-lcd.h>
#include <asm/arch/nand.h>
#include <linux/mtd/mtd.h>
#include <linux/mtd/nand.h>
#include <linux/mtd/nand_ecc.h>
#include <linux/mtd/partitions.h>
#include "clock.h"
#include "devs.h"
#include "cpu.h"
#define COPYRIGHT ", (c) 2005 Simtec Electronics"
static struct map_desc anubis_iodesc[] __initdata = {
/* ISA IO areas */
{ (u32)S3C24XX_VA_ISA_BYTE, 0x0, SZ_16M, MT_DEVICE },
{ (u32)S3C24XX_VA_ISA_WORD, 0x0, SZ_16M, MT_DEVICE },
/* we could possibly compress the next set down into a set of smaller
* pagetables, but that would mean using an L2 section, and it still means
* we cannot actually feed the same register to an LDR due to 16K spacing
*/
/* CPLD control registers */
{ (u32)ANUBIS_VA_CTRL1, ANUBIS_PA_CTRL1, SZ_4K, MT_DEVICE },
{ (u32)ANUBIS_VA_CTRL2, ANUBIS_PA_CTRL2, SZ_4K, MT_DEVICE },
/* IDE drives */
{ (u32)ANUBIS_IDEPRI, S3C2410_CS3, SZ_1M, MT_DEVICE },
{ (u32)ANUBIS_IDEPRIAUX, S3C2410_CS3+(1<<26), SZ_1M, MT_DEVICE },
{ (u32)ANUBIS_IDESEC, S3C2410_CS4, SZ_1M, MT_DEVICE },
{ (u32)ANUBIS_IDESECAUX, S3C2410_CS4+(1<<26), SZ_1M, MT_DEVICE },
};
#define UCON S3C2410_UCON_DEFAULT | S3C2410_UCON_UCLK
#define ULCON S3C2410_LCON_CS8 | S3C2410_LCON_PNONE | S3C2410_LCON_STOPB
#define UFCON S3C2410_UFCON_RXTRIG8 | S3C2410_UFCON_FIFOMODE
static struct s3c24xx_uart_clksrc anubis_serial_clocks[] = {
[0] = {
.name = "uclk",
.divisor = 1,
.min_baud = 0,
.max_baud = 0,
},
[1] = {
.name = "pclk",
.divisor = 1,
.min_baud = 0,
.max_baud = 0,
}
};
static struct s3c2410_uartcfg anubis_uartcfgs[] = {
[0] = {
.hwport = 0,
.flags = 0,
.ucon = UCON,
.ulcon = ULCON,
.ufcon = UFCON,
.clocks = anubis_serial_clocks,
.clocks_size = ARRAY_SIZE(anubis_serial_clocks)
},
[1] = {
.hwport = 2,
.flags = 0,
.ucon = UCON,
.ulcon = ULCON,
.ufcon = UFCON,
.clocks = anubis_serial_clocks,
.clocks_size = ARRAY_SIZE(anubis_serial_clocks)
},
};
/* NAND Flash on Anubis board */
static int external_map[] = { 2 };
static int chip0_map[] = { 0 };
static int chip1_map[] = { 1 };
struct mtd_partition anubis_default_nand_part[] = {
[0] = {
.name = "Boot Agent",
.size = SZ_16K,
.offset = 0
},
[1] = {
.name = "/boot",
.size = SZ_4M - SZ_16K,
.offset = SZ_16K,
},
[2] = {
.name = "user1",
.offset = SZ_4M,
.size = SZ_32M - SZ_4M,
},
[3] = {
.name = "user2",
.offset = SZ_32M,
.size = MTDPART_SIZ_FULL,
}
};
/* the Anubis has 3 selectable slots for nand-flash, the two
* on-board chip areas, as well as the external slot.
*
* Note, there is no current hot-plug support for the External
* socket.
*/
static struct s3c2410_nand_set anubis_nand_sets[] = {
[1] = {
.name = "External",
.nr_chips = 1,
.nr_map = external_map,
.nr_partitions = ARRAY_SIZE(anubis_default_nand_part),
.partitions = anubis_default_nand_part
},
[0] = {
.name = "chip0",
.nr_chips = 1,
.nr_map = chip0_map,
.nr_partitions = ARRAY_SIZE(anubis_default_nand_part),
.partitions = anubis_default_nand_part
},
[2] = {
.name = "chip1",
.nr_chips = 1,
.nr_map = chip1_map,
.nr_partitions = ARRAY_SIZE(anubis_default_nand_part),
.partitions = anubis_default_nand_part
},
};
static void anubis_nand_select(struct s3c2410_nand_set *set, int slot)
{
unsigned int tmp;
slot = set->nr_map[slot] & 3;
pr_debug("anubis_nand: selecting slot %d (set %p,%p)\n",
slot, set, set->nr_map);
tmp = __raw_readb(ANUBIS_VA_CTRL1);
tmp &= ~ANUBIS_CTRL1_NANDSEL;
tmp |= slot;
pr_debug("anubis_nand: ctrl1 now %02x\n", tmp);
__raw_writeb(tmp, ANUBIS_VA_CTRL1);
}
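
Slot selection is a read-modify-write of the CPLD CTRL1 register: clear the NANDSEL field, then OR in the 2-bit slot number taken from the set's nr_map. A standalone model, assuming (as the code implies) that ANUBIS_CTRL1_NANDSEL covers the low two bits:

#include <stdio.h>

#define CTRL1_NANDSEL	0x3	/* assumed field mask */

int main(void)
{
	unsigned int ctrl1 = 0xa5;	/* pretend current CTRL1 contents */
	unsigned int slot = 2;		/* target chip select, already masked to 3 */

	ctrl1 = (ctrl1 & ~CTRL1_NANDSEL) | slot;
	printf("ctrl1 now %02x\n", ctrl1);	/* prints a6 */
	return 0;
}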
static struct s3c2410_platform_nand anubis_nand_info = {
.tacls = 25,
.twrph0 = 80,
.twrph1 = 80,
.nr_sets = ARRAY_SIZE(anubis_nand_sets),
.sets = anubis_nand_sets,
.select_chip = anubis_nand_select,
};
/* Standard Anubis devices */
static struct platform_device *anubis_devices[] __initdata = {
&s3c_device_usb,
&s3c_device_wdt,
&s3c_device_adc,
&s3c_device_i2c,
&s3c_device_rtc,
&s3c_device_nand,
};
static struct clk *anubis_clocks[] = {
&s3c24xx_dclk0,
&s3c24xx_dclk1,
&s3c24xx_clkout0,
&s3c24xx_clkout1,
&s3c24xx_uclk,
};
static struct s3c24xx_board anubis_board __initdata = {
.devices = anubis_devices,
.devices_count = ARRAY_SIZE(anubis_devices),
.clocks = anubis_clocks,
.clocks_count = ARRAY_SIZE(anubis_clocks)
};
void __init anubis_map_io(void)
{
/* initialise the clocks */
s3c24xx_dclk0.parent = NULL;
s3c24xx_dclk0.rate = 12*1000*1000;
s3c24xx_dclk1.parent = NULL;
s3c24xx_dclk1.rate = 24*1000*1000;
s3c24xx_clkout0.parent = &s3c24xx_dclk0;
s3c24xx_clkout1.parent = &s3c24xx_dclk1;
s3c24xx_uclk.parent = &s3c24xx_clkout1;
s3c_device_nand.dev.platform_data = &anubis_nand_info;
s3c24xx_init_io(anubis_iodesc, ARRAY_SIZE(anubis_iodesc));
s3c24xx_init_clocks(0);
s3c24xx_init_uarts(anubis_uartcfgs, ARRAY_SIZE(anubis_uartcfgs));
s3c24xx_set_board(&anubis_board);
/* ensure that the GPIO is setup */
s3c2410_gpio_setpin(S3C2410_GPA0, 1);
}
MACHINE_START(ANUBIS, "Simtec-Anubis")
/* Maintainer: Ben Dooks <ben@simtec.co.uk> */
.phys_ram = S3C2410_SDRAM_PA,
.phys_io = S3C2410_PA_UART,
.io_pg_offst = (((u32)S3C24XX_VA_UART) >> 18) & 0xfffc,
.boot_params = S3C2410_SDRAM_PA + 0x100,
.map_io = anubis_map_io,
.init_irq = s3c24xx_init_irq,
.timer = &s3c24xx_timer,
MACHINE_END

View File

@ -48,7 +48,7 @@ static __init int pm_simtec_init(void)
/* check which machine we are running on */ /* check which machine we are running on */
if (!machine_is_bast() && !machine_is_vr1000()) if (!machine_is_bast() && !machine_is_vr1000() && !machine_is_anubis())
return 0; return 0;
printk(KERN_INFO "Simtec Board Power Management" COPYRIGHT "\n"); printk(KERN_INFO "Simtec Board Power Management" COPYRIGHT "\n");

View File

@ -164,7 +164,7 @@ static void s3c2410_timer_setup (void)
/* configure the system for whichever machine is in use */ /* configure the system for whichever machine is in use */
if (machine_is_bast() || machine_is_vr1000()) { if (machine_is_bast() || machine_is_vr1000() || machine_is_anubis()) {
/* timer is at 12MHz, scaler is 1 */ /* timer is at 12MHz, scaler is 1 */
timer_usec_ticks = timer_mask_usec_ticks(1, 12000000); timer_usec_ticks = timer_mask_usec_ticks(1, 12000000);
tcnt = 12000000 / HZ; tcnt = 12000000 / HZ;

View File

@ -91,6 +91,13 @@ config OMAP_32K_TIMER_HZ
Kernel internal timer frequency should be a divisor of 32768, Kernel internal timer frequency should be a divisor of 32768,
such as 64 or 128. such as 64 or 128.
config OMAP_DM_TIMER
bool "Use dual-mode timer"
default n
depends on ARCH_OMAP16XX
help
Select this option if you want to use OMAP Dual-Mode timers.
choice choice
prompt "Low-level debug console UART" prompt "Low-level debug console UART"
depends on ARCH_OMAP depends on ARCH_OMAP
@ -107,6 +114,15 @@ config OMAP_LL_DEBUG_UART3
endchoice endchoice
config OMAP_SERIAL_WAKE
bool "Enable wake-up events for serial ports"
depends OMAP_MUX
default y
help
Select this option if you want to have your system wake up
to data on the serial RX line. This allows you to wake the
system from the serial console.
endmenu endmenu
endif endif

View File

@ -3,7 +3,7 @@
# #
# Common support # Common support
obj-y := common.o dma.o clock.o mux.o gpio.o mcbsp.o usb.o obj-y := common.o sram.o sram-fn.o clock.o dma.o mux.o gpio.o mcbsp.o usb.o
obj-m := obj-m :=
obj-n := obj-n :=
obj- := obj- :=
@ -15,3 +15,5 @@ obj-$(CONFIG_ARCH_OMAP16XX) += ocpi.o
obj-$(CONFIG_PM) += pm.o sleep.o obj-$(CONFIG_PM) += pm.o sleep.o
obj-$(CONFIG_CPU_FREQ) += cpu-omap.o obj-$(CONFIG_CPU_FREQ) += cpu-omap.o
obj-$(CONFIG_OMAP_DM_TIMER) += dmtimer.o

View File

@ -21,6 +21,7 @@
#include <asm/arch/usb.h> #include <asm/arch/usb.h>
#include "clock.h" #include "clock.h"
#include "sram.h"
static LIST_HEAD(clocks); static LIST_HEAD(clocks);
static DECLARE_MUTEX(clocks_sem); static DECLARE_MUTEX(clocks_sem);
@ -141,7 +142,7 @@ static struct clk arm_ck = {
static struct clk armper_ck = { static struct clk armper_ck = {
.name = "armper_ck", .name = "armper_ck",
.parent = &ck_dpll1, .parent = &ck_dpll1,
.flags = CLOCK_IN_OMAP730 | CLOCK_IN_OMAP1510 | CLOCK_IN_OMAP16XX | .flags = CLOCK_IN_OMAP1510 | CLOCK_IN_OMAP16XX |
RATE_CKCTL, RATE_CKCTL,
.enable_reg = ARM_IDLECT2, .enable_reg = ARM_IDLECT2,
.enable_bit = EN_PERCK, .enable_bit = EN_PERCK,
@ -385,7 +386,8 @@ static struct clk uart2_ck = {
.name = "uart2_ck", .name = "uart2_ck",
/* Direct from ULPD, no parent */ /* Direct from ULPD, no parent */
.rate = 12000000, .rate = 12000000,
.flags = CLOCK_IN_OMAP1510 | CLOCK_IN_OMAP16XX | ENABLE_REG_32BIT, .flags = CLOCK_IN_OMAP1510 | CLOCK_IN_OMAP16XX | ENABLE_REG_32BIT |
ALWAYS_ENABLED,
.enable_reg = MOD_CONF_CTRL_0, .enable_reg = MOD_CONF_CTRL_0,
.enable_bit = 30, /* Chooses between 12MHz and 48MHz */ .enable_bit = 30, /* Chooses between 12MHz and 48MHz */
.set_rate = &set_uart_rate, .set_rate = &set_uart_rate,
@ -443,6 +445,15 @@ static struct clk usb_hhc_ck16xx = {
.enable_bit = 8 /* UHOST_EN */, .enable_bit = 8 /* UHOST_EN */,
}; };
static struct clk usb_dc_ck = {
.name = "usb_dc_ck",
/* Direct from ULPD, no parent */
.rate = 48000000,
.flags = CLOCK_IN_OMAP16XX | RATE_FIXED,
.enable_reg = SOFT_REQ_REG,
.enable_bit = 4,
};
static struct clk mclk_1510 = { static struct clk mclk_1510 = {
.name = "mclk", .name = "mclk",
/* Direct from ULPD, no parent. May be enabled by ext hardware. */ /* Direct from ULPD, no parent. May be enabled by ext hardware. */
@ -552,6 +563,7 @@ static struct clk * onchip_clks[] = {
&uart3_16xx, &uart3_16xx,
&usb_clko, &usb_clko,
&usb_hhc_ck1510, &usb_hhc_ck16xx, &usb_hhc_ck1510, &usb_hhc_ck16xx,
&usb_dc_ck,
&mclk_1510, &mclk_16xx, &mclk_1510, &mclk_16xx,
&bclk_1510, &bclk_16xx, &bclk_1510, &bclk_16xx,
&mmc1_ck, &mmc1_ck,
@ -946,14 +958,13 @@ static int select_table_rate(struct clk * clk, unsigned long rate)
if (!ptr->rate) if (!ptr->rate)
return -EINVAL; return -EINVAL;
if (!ptr->rate) /*
return -EINVAL; * In most cases we should not need to reprogram DPLL.
* Reprogramming the DPLL is tricky, it must be done from SRAM.
*/
omap_sram_reprogram_clock(ptr->dpllctl_val, ptr->ckctl_val);
if (unlikely(ck_dpll1.rate == 0)) { ck_dpll1.rate = ptr->pll_rate;
omap_writew(ptr->dpllctl_val, DPLL_CTL);
ck_dpll1.rate = ptr->pll_rate;
}
omap_writew(ptr->ckctl_val, ARM_CKCTL);
propagate_rate(&ck_dpll1); propagate_rate(&ck_dpll1);
return 0; return 0;
} }
@ -1224,9 +1235,11 @@ int __init clk_init(void)
#endif #endif
/* Cache rates for clocks connected to ck_ref (not dpll1) */ /* Cache rates for clocks connected to ck_ref (not dpll1) */
propagate_rate(&ck_ref); propagate_rate(&ck_ref);
printk(KERN_INFO "Clocking rate (xtal/DPLL1/MPU): %ld.%01ld/%ld/%ld MHz\n", printk(KERN_INFO "Clocking rate (xtal/DPLL1/MPU): "
"%ld.%01ld/%ld.%01ld/%ld.%01ld MHz\n",
ck_ref.rate / 1000000, (ck_ref.rate / 100000) % 10, ck_ref.rate / 1000000, (ck_ref.rate / 100000) % 10,
ck_dpll1.rate, arm_ck.rate); ck_dpll1.rate / 1000000, (ck_dpll1.rate / 100000) % 10,
arm_ck.rate / 1000000, (arm_ck.rate / 100000) % 10);
#ifdef CONFIG_MACH_OMAP_PERSEUS2 #ifdef CONFIG_MACH_OMAP_PERSEUS2
/* Select slicer output as OMAP input clock */ /* Select slicer output as OMAP input clock */
@ -1271,7 +1284,9 @@ static int __init omap_late_clk_reset(void)
struct clk *p; struct clk *p;
__u32 regval32; __u32 regval32;
omap_writew(0, SOFT_REQ_REG); /* USB_REQ_EN will be disabled later if necessary (usb_dc_ck) */
regval32 = omap_readw(SOFT_REQ_REG) & (1 << 4);
omap_writew(regval32, SOFT_REQ_REG);
omap_writew(0, SOFT_REQ_REG2); omap_writew(0, SOFT_REQ_REG2);
list_for_each_entry(p, &clocks, node) { list_for_each_entry(p, &clocks, node) {

View File

@ -26,6 +26,7 @@
#include <asm/hardware/clock.h> #include <asm/hardware/clock.h>
#include <asm/io.h> #include <asm/io.h>
#include <asm/mach-types.h> #include <asm/mach-types.h>
#include <asm/setup.h>
#include <asm/arch/board.h> #include <asm/arch/board.h>
#include <asm/arch/mux.h> #include <asm/arch/mux.h>
@ -35,11 +36,11 @@
#define NO_LENGTH_CHECK 0xffffffff #define NO_LENGTH_CHECK 0xffffffff
extern int omap_bootloader_tag_len; unsigned char omap_bootloader_tag[512];
extern u8 omap_bootloader_tag[]; int omap_bootloader_tag_len;
struct omap_board_config_kernel *omap_board_config; struct omap_board_config_kernel *omap_board_config;
int omap_board_config_size = 0; int omap_board_config_size;
static const void *get_config(u16 tag, size_t len, int skip, size_t *len_out) static const void *get_config(u16 tag, size_t len, int skip, size_t *len_out)
{ {

View File

@ -425,7 +425,7 @@ static int dma_handle_ch(int ch)
dma_chan[ch + 6].saved_csr = csr >> 7; dma_chan[ch + 6].saved_csr = csr >> 7;
csr &= 0x7f; csr &= 0x7f;
} }
if (!csr) if ((csr & 0x3f) == 0)
return 0; return 0;
if (unlikely(dma_chan[ch].dev_id == -1)) { if (unlikely(dma_chan[ch].dev_id == -1)) {
printk(KERN_WARNING "Spurious interrupt from DMA channel %d (CSR %04x)\n", printk(KERN_WARNING "Spurious interrupt from DMA channel %d (CSR %04x)\n",
@ -890,11 +890,11 @@ void omap_enable_lcd_dma(void)
w |= 1 << 8; w |= 1 << 8;
omap_writew(w, OMAP1610_DMA_LCD_CTRL); omap_writew(w, OMAP1610_DMA_LCD_CTRL);
lcd_dma.active = 1;
w = omap_readw(OMAP1610_DMA_LCD_CCR); w = omap_readw(OMAP1610_DMA_LCD_CCR);
w |= 1 << 7; w |= 1 << 7;
omap_writew(w, OMAP1610_DMA_LCD_CCR); omap_writew(w, OMAP1610_DMA_LCD_CCR);
lcd_dma.active = 1;
} }
void omap_setup_lcd_dma(void) void omap_setup_lcd_dma(void)
@ -965,8 +965,8 @@ void omap_clear_dma(int lch)
*/ */
dma_addr_t omap_get_dma_src_pos(int lch) dma_addr_t omap_get_dma_src_pos(int lch)
{ {
return (dma_addr_t) (OMAP_DMA_CSSA_L(lch) | return (dma_addr_t) (omap_readw(OMAP_DMA_CSSA_L(lch)) |
(OMAP_DMA_CSSA_U(lch) << 16)); (omap_readw(OMAP_DMA_CSSA_U(lch)) << 16));
} }
/* /*
@ -979,8 +979,18 @@ dma_addr_t omap_get_dma_src_pos(int lch)
*/ */
dma_addr_t omap_get_dma_dst_pos(int lch) dma_addr_t omap_get_dma_dst_pos(int lch)
{ {
return (dma_addr_t) (OMAP_DMA_CDSA_L(lch) | return (dma_addr_t) (omap_readw(OMAP_DMA_CDSA_L(lch)) |
(OMAP_DMA_CDSA_U(lch) << 16)); (omap_readw(OMAP_DMA_CDSA_U(lch)) << 16));
}
/*
* Returns current source transfer counting for the given DMA channel.
* Can be used to monitor the progress of a transfer inside a block.
* It must be called with disabled interrupts.
*/
int omap_get_dma_src_addr_counter(int lch)
{
return (dma_addr_t) omap_readw(OMAP_DMA_CSAC(lch));
} }
int omap_dma_running(void) int omap_dma_running(void)
@ -1076,6 +1086,7 @@ arch_initcall(omap_init_dma);
EXPORT_SYMBOL(omap_get_dma_src_pos); EXPORT_SYMBOL(omap_get_dma_src_pos);
EXPORT_SYMBOL(omap_get_dma_dst_pos); EXPORT_SYMBOL(omap_get_dma_dst_pos);
EXPORT_SYMBOL(omap_get_dma_src_addr_counter);
EXPORT_SYMBOL(omap_clear_dma); EXPORT_SYMBOL(omap_clear_dma);
EXPORT_SYMBOL(omap_set_dma_priority); EXPORT_SYMBOL(omap_set_dma_priority);
EXPORT_SYMBOL(omap_request_dma); EXPORT_SYMBOL(omap_request_dma);
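
The position helpers above used to return the register addresses themselves; the fix reads the 16-bit low and high halves with omap_readw() and folds them into a 32-bit bus address. A standalone illustration of the combine, with invented register contents:

#include <stdio.h>

int main(void)
{
	unsigned short cssa_l = 0x1000;	/* low half, as omap_readw() would return */
	unsigned short cssa_u = 0x2001;	/* high half */

	unsigned long pos = (unsigned long)cssa_l | ((unsigned long)cssa_u << 16);

	printf("current source position: 0x%08lx\n", pos);	/* 0x20011000 */
	return 0;
}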

View File

@ -0,0 +1,260 @@
/*
* linux/arch/arm/plat-omap/dmtimer.c
*
* OMAP Dual-Mode Timers
*
* Copyright (C) 2005 Nokia Corporation
* Author: Lauri Leukkunen <lauri.leukkunen@nokia.com>
*
* This program is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License as published by the
* Free Software Foundation; either version 2 of the License, or (at your
* option) any later version.
*
* THIS SOFTWARE IS PROVIDED ``AS IS'' AND ANY EXPRESS OR IMPLIED
* WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
* MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN
* NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
* INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
* ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
* THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*
* You should have received a copy of the GNU General Public License along
* with this program; if not, write to the Free Software Foundation, Inc.,
* 675 Mass Ave, Cambridge, MA 02139, USA.
*/
#include <linux/init.h>
#include <asm/arch/hardware.h>
#include <asm/arch/dmtimer.h>
#include <asm/io.h>
#include <asm/arch/irqs.h>
#include <linux/spinlock.h>
#include <linux/list.h>
#define OMAP_TIMER_COUNT 8
#define OMAP_TIMER_ID_REG 0x00
#define OMAP_TIMER_OCP_CFG_REG 0x10
#define OMAP_TIMER_SYS_STAT_REG 0x14
#define OMAP_TIMER_STAT_REG 0x18
#define OMAP_TIMER_INT_EN_REG 0x1c
#define OMAP_TIMER_WAKEUP_EN_REG 0x20
#define OMAP_TIMER_CTRL_REG 0x24
#define OMAP_TIMER_COUNTER_REG 0x28
#define OMAP_TIMER_LOAD_REG 0x2c
#define OMAP_TIMER_TRIGGER_REG 0x30
#define OMAP_TIMER_WRITE_PEND_REG 0x34
#define OMAP_TIMER_MATCH_REG 0x38
#define OMAP_TIMER_CAPTURE_REG 0x3c
#define OMAP_TIMER_IF_CTRL_REG 0x40
static struct dmtimer_info_struct {
struct list_head unused_timers;
struct list_head reserved_timers;
} dm_timer_info;
static struct omap_dm_timer dm_timers[] = {
{ .base=0xfffb1400, .irq=INT_1610_GPTIMER1 },
{ .base=0xfffb1c00, .irq=INT_1610_GPTIMER2 },
{ .base=0xfffb2400, .irq=INT_1610_GPTIMER3 },
{ .base=0xfffb2c00, .irq=INT_1610_GPTIMER4 },
{ .base=0xfffb3400, .irq=INT_1610_GPTIMER5 },
{ .base=0xfffb3c00, .irq=INT_1610_GPTIMER6 },
{ .base=0xfffb4400, .irq=INT_1610_GPTIMER7 },
{ .base=0xfffb4c00, .irq=INT_1610_GPTIMER8 },
{ .base=0x0 },
};
static spinlock_t dm_timer_lock;
inline void omap_dm_timer_write_reg(struct omap_dm_timer *timer, int reg, u32 value)
{
omap_writel(value, timer->base + reg);
while (omap_dm_timer_read_reg(timer, OMAP_TIMER_WRITE_PEND_REG))
;
}
u32 omap_dm_timer_read_reg(struct omap_dm_timer *timer, int reg)
{
return omap_readl(timer->base + reg);
}
int omap_dm_timers_active(void)
{
struct omap_dm_timer *timer;
for (timer = &dm_timers[0]; timer->base; ++timer)
if (omap_dm_timer_read_reg(timer, OMAP_TIMER_CTRL_REG) &
OMAP_TIMER_CTRL_ST)
return 1;
return 0;
}
void omap_dm_timer_set_source(struct omap_dm_timer *timer, int source)
{
int n = (timer - dm_timers) << 1;
u32 l;
l = omap_readl(MOD_CONF_CTRL_1) & ~(0x03 << n);
l |= source << n;
omap_writel(l, MOD_CONF_CTRL_1);
}
static void omap_dm_timer_reset(struct omap_dm_timer *timer)
{
/* Reset and set posted mode */
omap_dm_timer_write_reg(timer, OMAP_TIMER_IF_CTRL_REG, 0x06);
omap_dm_timer_write_reg(timer, OMAP_TIMER_OCP_CFG_REG, 0x02);
omap_dm_timer_set_source(timer, OMAP_TIMER_SRC_ARMXOR);
}
struct omap_dm_timer * omap_dm_timer_request(void)
{
struct omap_dm_timer *timer = NULL;
unsigned long flags;
spin_lock_irqsave(&dm_timer_lock, flags);
if (!list_empty(&dm_timer_info.unused_timers)) {
timer = (struct omap_dm_timer *)
dm_timer_info.unused_timers.next;
list_move_tail((struct list_head *)timer,
&dm_timer_info.reserved_timers);
}
spin_unlock_irqrestore(&dm_timer_lock, flags);
return timer;
}
void omap_dm_timer_free(struct omap_dm_timer *timer)
{
unsigned long flags;
omap_dm_timer_reset(timer);
spin_lock_irqsave(&dm_timer_lock, flags);
list_move_tail((struct list_head *)timer, &dm_timer_info.unused_timers);
spin_unlock_irqrestore(&dm_timer_lock, flags);
}
void omap_dm_timer_set_int_enable(struct omap_dm_timer *timer,
unsigned int value)
{
omap_dm_timer_write_reg(timer, OMAP_TIMER_INT_EN_REG, value);
}
unsigned int omap_dm_timer_read_status(struct omap_dm_timer *timer)
{
return omap_dm_timer_read_reg(timer, OMAP_TIMER_STAT_REG);
}
void omap_dm_timer_write_status(struct omap_dm_timer *timer, unsigned int value)
{
omap_dm_timer_write_reg(timer, OMAP_TIMER_STAT_REG, value);
}
void omap_dm_timer_enable_autoreload(struct omap_dm_timer *timer)
{
u32 l;
l = omap_dm_timer_read_reg(timer, OMAP_TIMER_CTRL_REG);
l |= OMAP_TIMER_CTRL_AR;
omap_dm_timer_write_reg(timer, OMAP_TIMER_CTRL_REG, l);
}
void omap_dm_timer_trigger(struct omap_dm_timer *timer)
{
omap_dm_timer_write_reg(timer, OMAP_TIMER_TRIGGER_REG, 1);
}
void omap_dm_timer_set_trigger(struct omap_dm_timer *timer, unsigned int value)
{
u32 l;
l = omap_dm_timer_read_reg(timer, OMAP_TIMER_CTRL_REG);
l |= value & 0x3;
omap_dm_timer_write_reg(timer, OMAP_TIMER_CTRL_REG, l);
}
void omap_dm_timer_start(struct omap_dm_timer *timer)
{
u32 l;
l = omap_dm_timer_read_reg(timer, OMAP_TIMER_CTRL_REG);
l |= OMAP_TIMER_CTRL_ST;
omap_dm_timer_write_reg(timer, OMAP_TIMER_CTRL_REG, l);
}
void omap_dm_timer_stop(struct omap_dm_timer *timer)
{
u32 l;
l = omap_dm_timer_read_reg(timer, OMAP_TIMER_CTRL_REG);
l &= ~0x1;
omap_dm_timer_write_reg(timer, OMAP_TIMER_CTRL_REG, l);
}
unsigned int omap_dm_timer_read_counter(struct omap_dm_timer *timer)
{
return omap_dm_timer_read_reg(timer, OMAP_TIMER_COUNTER_REG);
}
void omap_dm_timer_reset_counter(struct omap_dm_timer *timer)
{
omap_dm_timer_write_reg(timer, OMAP_TIMER_COUNTER_REG, 0);
}
void omap_dm_timer_set_load(struct omap_dm_timer *timer, unsigned int load)
{
omap_dm_timer_write_reg(timer, OMAP_TIMER_LOAD_REG, load);
}
void omap_dm_timer_set_match(struct omap_dm_timer *timer, unsigned int match)
{
omap_dm_timer_write_reg(timer, OMAP_TIMER_MATCH_REG, match);
}
void omap_dm_timer_enable_compare(struct omap_dm_timer *timer)
{
u32 l;
l = omap_dm_timer_read_reg(timer, OMAP_TIMER_CTRL_REG);
l |= OMAP_TIMER_CTRL_CE;
omap_dm_timer_write_reg(timer, OMAP_TIMER_CTRL_REG, l);
}
static inline void __dm_timer_init(void)
{
struct omap_dm_timer *timer;
spin_lock_init(&dm_timer_lock);
INIT_LIST_HEAD(&dm_timer_info.unused_timers);
INIT_LIST_HEAD(&dm_timer_info.reserved_timers);
timer = &dm_timers[0];
while (timer->base) {
list_add_tail((struct list_head *)timer, &dm_timer_info.unused_timers);
omap_dm_timer_reset(timer);
timer++;
}
}
static int __init omap_dm_timer_init(void)
{
if (cpu_is_omap16xx())
__dm_timer_init();
return 0;
}
arch_initcall(omap_dm_timer_init);
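For orientation, here is a minimal usage sketch of the dual-mode timer API added above. It is not part of the commit; it assumes the functions are made visible to board or driver code through a platform header, and the consumer name and reload value are arbitrary.

static struct omap_dm_timer *example_timer;	/* hypothetical consumer */

static int __init example_dm_timer_setup(void)
{
	example_timer = omap_dm_timer_request();
	if (example_timer == NULL)
		return -EBUSY;			/* all eight timers already reserved */

	omap_dm_timer_set_source(example_timer, OMAP_TIMER_SRC_ARMXOR);
	omap_dm_timer_set_load(example_timer, 0xffffff00);	/* arbitrary reload value */
	omap_dm_timer_enable_autoreload(example_timer);
	omap_dm_timer_start(example_timer);
	return 0;
}

static void example_dm_timer_teardown(void)
{
	omap_dm_timer_stop(example_timer);
	omap_dm_timer_free(example_timer);
}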

View File

@ -3,7 +3,7 @@
* *
* Support functions for OMAP GPIO * Support functions for OMAP GPIO
* *
* Copyright (C) 2003 Nokia Corporation * Copyright (C) 2003-2005 Nokia Corporation
* Written by Juha Yrjölä <juha.yrjola@nokia.com> * Written by Juha Yrjölä <juha.yrjola@nokia.com>
* *
* This program is free software; you can redistribute it and/or modify * This program is free software; you can redistribute it and/or modify
@ -17,8 +17,11 @@
#include <linux/sched.h> #include <linux/sched.h>
#include <linux/interrupt.h> #include <linux/interrupt.h>
#include <linux/ptrace.h> #include <linux/ptrace.h>
#include <linux/sysdev.h>
#include <linux/err.h>
#include <asm/hardware.h> #include <asm/hardware.h>
#include <asm/hardware/clock.h>
#include <asm/irq.h> #include <asm/irq.h>
#include <asm/arch/irqs.h> #include <asm/arch/irqs.h>
#include <asm/arch/gpio.h> #include <asm/arch/gpio.h>
@ -29,7 +32,7 @@
/* /*
* OMAP1510 GPIO registers * OMAP1510 GPIO registers
*/ */
#define OMAP1510_GPIO_BASE 0xfffce000 #define OMAP1510_GPIO_BASE (void __iomem *)0xfffce000
#define OMAP1510_GPIO_DATA_INPUT 0x00 #define OMAP1510_GPIO_DATA_INPUT 0x00
#define OMAP1510_GPIO_DATA_OUTPUT 0x04 #define OMAP1510_GPIO_DATA_OUTPUT 0x04
#define OMAP1510_GPIO_DIR_CONTROL 0x08 #define OMAP1510_GPIO_DIR_CONTROL 0x08
@ -43,34 +46,37 @@
/* /*
* OMAP1610 specific GPIO registers * OMAP1610 specific GPIO registers
*/ */
#define OMAP1610_GPIO1_BASE 0xfffbe400 #define OMAP1610_GPIO1_BASE (void __iomem *)0xfffbe400
#define OMAP1610_GPIO2_BASE 0xfffbec00 #define OMAP1610_GPIO2_BASE (void __iomem *)0xfffbec00
#define OMAP1610_GPIO3_BASE 0xfffbb400 #define OMAP1610_GPIO3_BASE (void __iomem *)0xfffbb400
#define OMAP1610_GPIO4_BASE 0xfffbbc00 #define OMAP1610_GPIO4_BASE (void __iomem *)0xfffbbc00
#define OMAP1610_GPIO_REVISION 0x0000 #define OMAP1610_GPIO_REVISION 0x0000
#define OMAP1610_GPIO_SYSCONFIG 0x0010 #define OMAP1610_GPIO_SYSCONFIG 0x0010
#define OMAP1610_GPIO_SYSSTATUS 0x0014 #define OMAP1610_GPIO_SYSSTATUS 0x0014
#define OMAP1610_GPIO_IRQSTATUS1 0x0018 #define OMAP1610_GPIO_IRQSTATUS1 0x0018
#define OMAP1610_GPIO_IRQENABLE1 0x001c #define OMAP1610_GPIO_IRQENABLE1 0x001c
#define OMAP1610_GPIO_WAKEUPENABLE 0x0028
#define OMAP1610_GPIO_DATAIN 0x002c #define OMAP1610_GPIO_DATAIN 0x002c
#define OMAP1610_GPIO_DATAOUT 0x0030 #define OMAP1610_GPIO_DATAOUT 0x0030
#define OMAP1610_GPIO_DIRECTION 0x0034 #define OMAP1610_GPIO_DIRECTION 0x0034
#define OMAP1610_GPIO_EDGE_CTRL1 0x0038 #define OMAP1610_GPIO_EDGE_CTRL1 0x0038
#define OMAP1610_GPIO_EDGE_CTRL2 0x003c #define OMAP1610_GPIO_EDGE_CTRL2 0x003c
#define OMAP1610_GPIO_CLEAR_IRQENABLE1 0x009c #define OMAP1610_GPIO_CLEAR_IRQENABLE1 0x009c
#define OMAP1610_GPIO_CLEAR_WAKEUPENA 0x00a8
#define OMAP1610_GPIO_CLEAR_DATAOUT 0x00b0 #define OMAP1610_GPIO_CLEAR_DATAOUT 0x00b0
#define OMAP1610_GPIO_SET_IRQENABLE1 0x00dc #define OMAP1610_GPIO_SET_IRQENABLE1 0x00dc
#define OMAP1610_GPIO_SET_WAKEUPENA 0x00e8
#define OMAP1610_GPIO_SET_DATAOUT 0x00f0 #define OMAP1610_GPIO_SET_DATAOUT 0x00f0
/* /*
* OMAP730 specific GPIO registers * OMAP730 specific GPIO registers
*/ */
#define OMAP730_GPIO1_BASE 0xfffbc000 #define OMAP730_GPIO1_BASE (void __iomem *)0xfffbc000
#define OMAP730_GPIO2_BASE 0xfffbc800 #define OMAP730_GPIO2_BASE (void __iomem *)0xfffbc800
#define OMAP730_GPIO3_BASE 0xfffbd000 #define OMAP730_GPIO3_BASE (void __iomem *)0xfffbd000
#define OMAP730_GPIO4_BASE 0xfffbd800 #define OMAP730_GPIO4_BASE (void __iomem *)0xfffbd800
#define OMAP730_GPIO5_BASE 0xfffbe000 #define OMAP730_GPIO5_BASE (void __iomem *)0xfffbe000
#define OMAP730_GPIO6_BASE 0xfffbe800 #define OMAP730_GPIO6_BASE (void __iomem *)0xfffbe800
#define OMAP730_GPIO_DATA_INPUT 0x00 #define OMAP730_GPIO_DATA_INPUT 0x00
#define OMAP730_GPIO_DATA_OUTPUT 0x04 #define OMAP730_GPIO_DATA_OUTPUT 0x04
#define OMAP730_GPIO_DIR_CONTROL 0x08 #define OMAP730_GPIO_DIR_CONTROL 0x08
@ -78,14 +84,43 @@
#define OMAP730_GPIO_INT_MASK 0x10 #define OMAP730_GPIO_INT_MASK 0x10
#define OMAP730_GPIO_INT_STATUS 0x14 #define OMAP730_GPIO_INT_STATUS 0x14
/*
* omap24xx specific GPIO registers
*/
#define OMAP24XX_GPIO1_BASE (void __iomem *)0x48018000
#define OMAP24XX_GPIO2_BASE (void __iomem *)0x4801a000
#define OMAP24XX_GPIO3_BASE (void __iomem *)0x4801c000
#define OMAP24XX_GPIO4_BASE (void __iomem *)0x4801e000
#define OMAP24XX_GPIO_REVISION 0x0000
#define OMAP24XX_GPIO_SYSCONFIG 0x0010
#define OMAP24XX_GPIO_SYSSTATUS 0x0014
#define OMAP24XX_GPIO_IRQSTATUS1 0x0018
#define OMAP24XX_GPIO_IRQENABLE1 0x001c
#define OMAP24XX_GPIO_CTRL 0x0030
#define OMAP24XX_GPIO_OE 0x0034
#define OMAP24XX_GPIO_DATAIN 0x0038
#define OMAP24XX_GPIO_DATAOUT 0x003c
#define OMAP24XX_GPIO_LEVELDETECT0 0x0040
#define OMAP24XX_GPIO_LEVELDETECT1 0x0044
#define OMAP24XX_GPIO_RISINGDETECT 0x0048
#define OMAP24XX_GPIO_FALLINGDETECT 0x004c
#define OMAP24XX_GPIO_CLEARIRQENABLE1 0x0060
#define OMAP24XX_GPIO_SETIRQENABLE1 0x0064
#define OMAP24XX_GPIO_CLEARWKUENA 0x0080
#define OMAP24XX_GPIO_SETWKUENA 0x0084
#define OMAP24XX_GPIO_CLEARDATAOUT 0x0090
#define OMAP24XX_GPIO_SETDATAOUT 0x0094
#define OMAP_MPUIO_MASK (~OMAP_MAX_GPIO_LINES & 0xff) #define OMAP_MPUIO_MASK (~OMAP_MAX_GPIO_LINES & 0xff)
struct gpio_bank { struct gpio_bank {
u32 base; void __iomem *base;
u16 irq; u16 irq;
u16 virtual_irq_start; u16 virtual_irq_start;
u8 method; int method;
u32 reserved_map; u32 reserved_map;
u32 suspend_wakeup;
u32 saved_wakeup;
spinlock_t lock; spinlock_t lock;
}; };
@ -93,8 +128,9 @@ struct gpio_bank {
#define METHOD_GPIO_1510 1 #define METHOD_GPIO_1510 1
#define METHOD_GPIO_1610 2 #define METHOD_GPIO_1610 2
#define METHOD_GPIO_730 3 #define METHOD_GPIO_730 3
#define METHOD_GPIO_24XX 4
#if defined(CONFIG_ARCH_OMAP16XX) #ifdef CONFIG_ARCH_OMAP16XX
static struct gpio_bank gpio_bank_1610[5] = { static struct gpio_bank gpio_bank_1610[5] = {
{ OMAP_MPUIO_BASE, INT_MPUIO, IH_MPUIO_BASE, METHOD_MPUIO}, { OMAP_MPUIO_BASE, INT_MPUIO, IH_MPUIO_BASE, METHOD_MPUIO},
{ OMAP1610_GPIO1_BASE, INT_GPIO_BANK1, IH_GPIO_BASE, METHOD_GPIO_1610 }, { OMAP1610_GPIO1_BASE, INT_GPIO_BANK1, IH_GPIO_BASE, METHOD_GPIO_1610 },
@ -123,6 +159,15 @@ static struct gpio_bank gpio_bank_730[7] = {
}; };
#endif #endif
#ifdef CONFIG_ARCH_OMAP24XX
static struct gpio_bank gpio_bank_24xx[4] = {
{ OMAP24XX_GPIO1_BASE, INT_24XX_GPIO_BANK1, IH_GPIO_BASE, METHOD_GPIO_24XX },
{ OMAP24XX_GPIO2_BASE, INT_24XX_GPIO_BANK2, IH_GPIO_BASE + 32, METHOD_GPIO_24XX },
{ OMAP24XX_GPIO3_BASE, INT_24XX_GPIO_BANK3, IH_GPIO_BASE + 64, METHOD_GPIO_24XX },
{ OMAP24XX_GPIO4_BASE, INT_24XX_GPIO_BANK4, IH_GPIO_BASE + 96, METHOD_GPIO_24XX },
};
#endif
static struct gpio_bank *gpio_bank; static struct gpio_bank *gpio_bank;
static int gpio_bank_count; static int gpio_bank_count;
@ -149,14 +194,23 @@ static inline struct gpio_bank *get_gpio_bank(int gpio)
return &gpio_bank[1 + (gpio >> 5)]; return &gpio_bank[1 + (gpio >> 5)];
} }
#endif #endif
#ifdef CONFIG_ARCH_OMAP24XX
if (cpu_is_omap24xx())
return &gpio_bank[gpio >> 5];
#endif
} }
static inline int get_gpio_index(int gpio) static inline int get_gpio_index(int gpio)
{ {
#ifdef CONFIG_ARCH_OMAP730
if (cpu_is_omap730()) if (cpu_is_omap730())
return gpio & 0x1f; return gpio & 0x1f;
else #endif
return gpio & 0x0f; #ifdef CONFIG_ARCH_OMAP24XX
if (cpu_is_omap24xx())
return gpio & 0x1f;
#endif
return gpio & 0x0f;
} }
static inline int gpio_valid(int gpio) static inline int gpio_valid(int gpio)
@ -179,6 +233,10 @@ static inline int gpio_valid(int gpio)
#ifdef CONFIG_ARCH_OMAP730 #ifdef CONFIG_ARCH_OMAP730
if (cpu_is_omap730() && gpio < 192) if (cpu_is_omap730() && gpio < 192)
return 0; return 0;
#endif
#ifdef CONFIG_ARCH_OMAP24XX
if (cpu_is_omap24xx() && gpio < 128)
return 0;
#endif #endif
return -1; return -1;
} }
@ -195,7 +253,7 @@ static int check_gpio(int gpio)
static void _set_gpio_direction(struct gpio_bank *bank, int gpio, int is_input) static void _set_gpio_direction(struct gpio_bank *bank, int gpio, int is_input)
{ {
u32 reg = bank->base; void __iomem *reg = bank->base;
u32 l; u32 l;
switch (bank->method) { switch (bank->method) {
@ -211,6 +269,9 @@ static void _set_gpio_direction(struct gpio_bank *bank, int gpio, int is_input)
case METHOD_GPIO_730: case METHOD_GPIO_730:
reg += OMAP730_GPIO_DIR_CONTROL; reg += OMAP730_GPIO_DIR_CONTROL;
break; break;
case METHOD_GPIO_24XX:
reg += OMAP24XX_GPIO_OE;
break;
} }
l = __raw_readl(reg); l = __raw_readl(reg);
if (is_input) if (is_input)
@ -234,7 +295,7 @@ void omap_set_gpio_direction(int gpio, int is_input)
static void _set_gpio_dataout(struct gpio_bank *bank, int gpio, int enable) static void _set_gpio_dataout(struct gpio_bank *bank, int gpio, int enable)
{ {
u32 reg = bank->base; void __iomem *reg = bank->base;
u32 l = 0; u32 l = 0;
switch (bank->method) { switch (bank->method) {
@ -269,6 +330,13 @@ static void _set_gpio_dataout(struct gpio_bank *bank, int gpio, int enable)
else else
l &= ~(1 << gpio); l &= ~(1 << gpio);
break; break;
case METHOD_GPIO_24XX:
if (enable)
reg += OMAP24XX_GPIO_SETDATAOUT;
else
reg += OMAP24XX_GPIO_CLEARDATAOUT;
l = 1 << gpio;
break;
default: default:
BUG(); BUG();
return; return;
@ -291,7 +359,7 @@ void omap_set_gpio_dataout(int gpio, int enable)
int omap_get_gpio_datain(int gpio) int omap_get_gpio_datain(int gpio)
{ {
struct gpio_bank *bank; struct gpio_bank *bank;
u32 reg; void __iomem *reg;
if (check_gpio(gpio) < 0) if (check_gpio(gpio) < 0)
return -1; return -1;
@ -310,109 +378,132 @@ int omap_get_gpio_datain(int gpio)
case METHOD_GPIO_730: case METHOD_GPIO_730:
reg += OMAP730_GPIO_DATA_INPUT; reg += OMAP730_GPIO_DATA_INPUT;
break; break;
case METHOD_GPIO_24XX:
reg += OMAP24XX_GPIO_DATAIN;
break;
default: default:
BUG(); BUG();
return -1; return -1;
} }
return (__raw_readl(reg) & (1 << get_gpio_index(gpio))) != 0; return (__raw_readl(reg)
& (1 << get_gpio_index(gpio))) != 0;
} }
static void _set_gpio_edge_ctrl(struct gpio_bank *bank, int gpio, int edge) #define MOD_REG_BIT(reg, bit_mask, set) \
do { \
int l = __raw_readl(base + reg); \
if (set) l |= bit_mask; \
else l &= ~bit_mask; \
__raw_writel(l, base + reg); \
} while(0)
static inline void set_24xx_gpio_triggering(void __iomem *base, int gpio, int trigger)
{ {
u32 reg = bank->base; u32 gpio_bit = 1 << gpio;
u32 l;
MOD_REG_BIT(OMAP24XX_GPIO_LEVELDETECT0, gpio_bit,
trigger & IRQT_LOW);
MOD_REG_BIT(OMAP24XX_GPIO_LEVELDETECT1, gpio_bit,
trigger & IRQT_HIGH);
MOD_REG_BIT(OMAP24XX_GPIO_RISINGDETECT, gpio_bit,
trigger & IRQT_RISING);
MOD_REG_BIT(OMAP24XX_GPIO_FALLINGDETECT, gpio_bit,
trigger & IRQT_FALLING);
/* FIXME: Possibly do 'set_irq_handler(j, do_level_IRQ)' if only level
* triggering requested. */
}
static int _set_gpio_triggering(struct gpio_bank *bank, int gpio, int trigger)
{
void __iomem *reg = bank->base;
u32 l = 0;
switch (bank->method) { switch (bank->method) {
case METHOD_MPUIO: case METHOD_MPUIO:
reg += OMAP_MPUIO_GPIO_INT_EDGE; reg += OMAP_MPUIO_GPIO_INT_EDGE;
l = __raw_readl(reg); l = __raw_readl(reg);
if (edge == OMAP_GPIO_RISING_EDGE) if (trigger == IRQT_RISING)
l |= 1 << gpio; l |= 1 << gpio;
else else if (trigger == IRQT_FALLING)
l &= ~(1 << gpio); l &= ~(1 << gpio);
__raw_writel(l, reg); else
goto bad;
break; break;
case METHOD_GPIO_1510: case METHOD_GPIO_1510:
reg += OMAP1510_GPIO_INT_CONTROL; reg += OMAP1510_GPIO_INT_CONTROL;
l = __raw_readl(reg); l = __raw_readl(reg);
if (edge == OMAP_GPIO_RISING_EDGE) if (trigger == IRQT_RISING)
l |= 1 << gpio; l |= 1 << gpio;
else else if (trigger == IRQT_FALLING)
l &= ~(1 << gpio); l &= ~(1 << gpio);
__raw_writel(l, reg); else
goto bad;
break; break;
case METHOD_GPIO_1610: case METHOD_GPIO_1610:
edge &= 0x03;
if (gpio & 0x08) if (gpio & 0x08)
reg += OMAP1610_GPIO_EDGE_CTRL2; reg += OMAP1610_GPIO_EDGE_CTRL2;
else else
reg += OMAP1610_GPIO_EDGE_CTRL1; reg += OMAP1610_GPIO_EDGE_CTRL1;
gpio &= 0x07; gpio &= 0x07;
/* We allow only edge triggering, i.e. two lowest bits */
if (trigger & ~IRQT_BOTHEDGE)
BUG();
/* NOTE: knows __IRQT_{FAL,RIS}EDGE match OMAP hardware */
trigger &= 0x03;
l = __raw_readl(reg); l = __raw_readl(reg);
l &= ~(3 << (gpio << 1)); l &= ~(3 << (gpio << 1));
l |= edge << (gpio << 1); l |= trigger << (gpio << 1);
__raw_writel(l, reg);
break; break;
case METHOD_GPIO_730: case METHOD_GPIO_730:
reg += OMAP730_GPIO_INT_CONTROL; reg += OMAP730_GPIO_INT_CONTROL;
l = __raw_readl(reg); l = __raw_readl(reg);
if (edge == OMAP_GPIO_RISING_EDGE) if (trigger == IRQT_RISING)
l |= 1 << gpio; l |= 1 << gpio;
else else if (trigger == IRQT_FALLING)
l &= ~(1 << gpio); l &= ~(1 << gpio);
__raw_writel(l, reg); else
goto bad;
break;
case METHOD_GPIO_24XX:
set_24xx_gpio_triggering(reg, gpio, trigger);
break; break;
default: default:
BUG(); BUG();
return; goto bad;
} }
__raw_writel(l, reg);
return 0;
bad:
return -EINVAL;
} }
void omap_set_gpio_edge_ctrl(int gpio, int edge) static int gpio_irq_type(unsigned irq, unsigned type)
{ {
struct gpio_bank *bank; struct gpio_bank *bank;
unsigned gpio;
int retval;
if (irq > IH_MPUIO_BASE)
gpio = OMAP_MPUIO(irq - IH_MPUIO_BASE);
else
gpio = irq - IH_GPIO_BASE;
if (check_gpio(gpio) < 0) if (check_gpio(gpio) < 0)
return; return -EINVAL;
if (type & (__IRQT_LOWLVL|__IRQT_HIGHLVL|IRQT_PROBE))
return -EINVAL;
bank = get_gpio_bank(gpio); bank = get_gpio_bank(gpio);
spin_lock(&bank->lock); spin_lock(&bank->lock);
_set_gpio_edge_ctrl(bank, get_gpio_index(gpio), edge); retval = _set_gpio_triggering(bank, get_gpio_index(gpio), type);
spin_unlock(&bank->lock); spin_unlock(&bank->lock);
} return retval;
static int _get_gpio_edge_ctrl(struct gpio_bank *bank, int gpio)
{
u32 reg = bank->base, l;
switch (bank->method) {
case METHOD_MPUIO:
l = __raw_readl(reg + OMAP_MPUIO_GPIO_INT_EDGE);
return (l & (1 << gpio)) ?
OMAP_GPIO_RISING_EDGE : OMAP_GPIO_FALLING_EDGE;
case METHOD_GPIO_1510:
l = __raw_readl(reg + OMAP1510_GPIO_INT_CONTROL);
return (l & (1 << gpio)) ?
OMAP_GPIO_RISING_EDGE : OMAP_GPIO_FALLING_EDGE;
case METHOD_GPIO_1610:
if (gpio & 0x08)
reg += OMAP1610_GPIO_EDGE_CTRL2;
else
reg += OMAP1610_GPIO_EDGE_CTRL1;
return (__raw_readl(reg) >> ((gpio & 0x07) << 1)) & 0x03;
case METHOD_GPIO_730:
l = __raw_readl(reg + OMAP730_GPIO_INT_CONTROL);
return (l & (1 << gpio)) ?
OMAP_GPIO_RISING_EDGE : OMAP_GPIO_FALLING_EDGE;
default:
BUG();
return -1;
}
} }
static void _clear_gpio_irqbank(struct gpio_bank *bank, int gpio_mask) static void _clear_gpio_irqbank(struct gpio_bank *bank, int gpio_mask)
{ {
u32 reg = bank->base; void __iomem *reg = bank->base;
switch (bank->method) { switch (bank->method) {
case METHOD_MPUIO: case METHOD_MPUIO:
@ -428,6 +519,9 @@ static void _clear_gpio_irqbank(struct gpio_bank *bank, int gpio_mask)
case METHOD_GPIO_730: case METHOD_GPIO_730:
reg += OMAP730_GPIO_INT_STATUS; reg += OMAP730_GPIO_INT_STATUS;
break; break;
case METHOD_GPIO_24XX:
reg += OMAP24XX_GPIO_IRQSTATUS1;
break;
default: default:
BUG(); BUG();
return; return;
@ -442,7 +536,7 @@ static inline void _clear_gpio_irqstatus(struct gpio_bank *bank, int gpio)
static void _enable_gpio_irqbank(struct gpio_bank *bank, int gpio_mask, int enable) static void _enable_gpio_irqbank(struct gpio_bank *bank, int gpio_mask, int enable)
{ {
u32 reg = bank->base; void __iomem *reg = bank->base;
u32 l; u32 l;
switch (bank->method) { switch (bank->method) {
@ -477,6 +571,13 @@ static void _enable_gpio_irqbank(struct gpio_bank *bank, int gpio_mask, int enab
else else
l |= gpio_mask; l |= gpio_mask;
break; break;
case METHOD_GPIO_24XX:
if (enable)
reg += OMAP24XX_GPIO_SETIRQENABLE1;
else
reg += OMAP24XX_GPIO_CLEARIRQENABLE1;
l = gpio_mask;
break;
default: default:
BUG(); BUG();
return; return;
@ -489,6 +590,50 @@ static inline void _set_gpio_irqenable(struct gpio_bank *bank, int gpio, int ena
_enable_gpio_irqbank(bank, 1 << get_gpio_index(gpio), enable); _enable_gpio_irqbank(bank, 1 << get_gpio_index(gpio), enable);
} }
/*
* Note that ENAWAKEUP needs to be enabled in GPIO_SYSCONFIG register.
 * 1510 does not seem to have a wake-up register. If JTAG is connected
 * to the target, the system always wakes up on GPIO events. While the
 * system is running, all registered GPIO interrupts need to have wake-up
 * enabled. When the system is suspended, only selected GPIO interrupts
 * need to have wake-up enabled.
*/
static int _set_gpio_wakeup(struct gpio_bank *bank, int gpio, int enable)
{
switch (bank->method) {
case METHOD_GPIO_1610:
case METHOD_GPIO_24XX:
spin_lock(&bank->lock);
if (enable)
bank->suspend_wakeup |= (1 << gpio);
else
bank->suspend_wakeup &= ~(1 << gpio);
spin_unlock(&bank->lock);
return 0;
default:
printk(KERN_ERR "Can't enable GPIO wakeup for method %i\n",
bank->method);
return -EINVAL;
}
}
/* Use disable_irq_wake() and enable_irq_wake() functions from drivers */
static int gpio_wake_enable(unsigned int irq, unsigned int enable)
{
unsigned int gpio = irq - IH_GPIO_BASE;
struct gpio_bank *bank;
int retval;
if (check_gpio(gpio) < 0)
return -ENODEV;
bank = get_gpio_bank(gpio);
spin_lock(&bank->lock);
retval = _set_gpio_wakeup(bank, get_gpio_index(gpio), enable);
spin_unlock(&bank->lock);
return retval;
}
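/*
 * Illustration only, not part of this patch: the driver-side call pattern
 * the comment above refers to.  GPIO 16 is an arbitrary example,
 * IH_GPIO_BASE + gpio is the virtual IRQ mapping this file itself uses,
 * and request_irq() for the same IRQ is omitted for brevity.
 */
static int __init example_wake_source_setup(void)
{
	int gpio = 16;
	unsigned int irq = IH_GPIO_BASE + gpio;

	if (omap_request_gpio(gpio) < 0)
		return -EBUSY;
	omap_set_gpio_direction(gpio, 1);	/* 1 = input */
	set_irq_type(irq, IRQT_FALLING);	/* routed to gpio_irq_type() */
	enable_irq_wake(irq);			/* routed to gpio_wake_enable() */
	return 0;
}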
int omap_request_gpio(int gpio) int omap_request_gpio(int gpio)
{ {
struct gpio_bank *bank; struct gpio_bank *bank;
@ -505,14 +650,32 @@ int omap_request_gpio(int gpio)
return -1; return -1;
} }
bank->reserved_map |= (1 << get_gpio_index(gpio)); bank->reserved_map |= (1 << get_gpio_index(gpio));
/* Set trigger to none. You need to enable the trigger after request_irq */
_set_gpio_triggering(bank, get_gpio_index(gpio), IRQT_NOEDGE);
#ifdef CONFIG_ARCH_OMAP1510 #ifdef CONFIG_ARCH_OMAP1510
if (bank->method == METHOD_GPIO_1510) { if (bank->method == METHOD_GPIO_1510) {
u32 reg; void __iomem *reg;
/* Claim the pin for the ARM */ /* Claim the pin for MPU */
reg = bank->base + OMAP1510_GPIO_PIN_CONTROL; reg = bank->base + OMAP1510_GPIO_PIN_CONTROL;
__raw_writel(__raw_readl(reg) | (1 << get_gpio_index(gpio)), reg); __raw_writel(__raw_readl(reg) | (1 << get_gpio_index(gpio)), reg);
} }
#endif
#ifdef CONFIG_ARCH_OMAP16XX
if (bank->method == METHOD_GPIO_1610) {
/* Enable wake-up during idle for dynamic tick */
void __iomem *reg = bank->base + OMAP1610_GPIO_SET_WAKEUPENA;
__raw_writel(1 << get_gpio_index(gpio), reg);
}
#endif
#ifdef CONFIG_ARCH_OMAP24XX
if (bank->method == METHOD_GPIO_24XX) {
/* Enable wake-up during idle for dynamic tick */
void __iomem *reg = bank->base + OMAP24XX_GPIO_SETWKUENA;
__raw_writel(1 << get_gpio_index(gpio), reg);
}
#endif #endif
spin_unlock(&bank->lock); spin_unlock(&bank->lock);
@ -533,6 +696,20 @@ void omap_free_gpio(int gpio)
spin_unlock(&bank->lock); spin_unlock(&bank->lock);
return; return;
} }
#ifdef CONFIG_ARCH_OMAP16XX
if (bank->method == METHOD_GPIO_1610) {
/* Disable wake-up during idle for dynamic tick */
void __iomem *reg = bank->base + OMAP1610_GPIO_CLEAR_WAKEUPENA;
__raw_writel(1 << get_gpio_index(gpio), reg);
}
#endif
#ifdef CONFIG_ARCH_OMAP24XX
if (bank->method == METHOD_GPIO_24XX) {
/* Disable wake-up during idle for dynamic tick */
void __iomem *reg = bank->base + OMAP24XX_GPIO_CLEARWKUENA;
__raw_writel(1 << get_gpio_index(gpio), reg);
}
#endif
bank->reserved_map &= ~(1 << get_gpio_index(gpio)); bank->reserved_map &= ~(1 << get_gpio_index(gpio));
_set_gpio_direction(bank, get_gpio_index(gpio), 1); _set_gpio_direction(bank, get_gpio_index(gpio), 1);
_set_gpio_irqenable(bank, gpio, 0); _set_gpio_irqenable(bank, gpio, 0);
@ -552,7 +729,7 @@ void omap_free_gpio(int gpio)
static void gpio_irq_handler(unsigned int irq, struct irqdesc *desc, static void gpio_irq_handler(unsigned int irq, struct irqdesc *desc,
struct pt_regs *regs) struct pt_regs *regs)
{ {
u32 isr_reg = 0; void __iomem *isr_reg = NULL;
u32 isr; u32 isr;
unsigned int gpio_irq; unsigned int gpio_irq;
struct gpio_bank *bank; struct gpio_bank *bank;
@ -574,24 +751,30 @@ static void gpio_irq_handler(unsigned int irq, struct irqdesc *desc,
if (bank->method == METHOD_GPIO_730) if (bank->method == METHOD_GPIO_730)
isr_reg = bank->base + OMAP730_GPIO_INT_STATUS; isr_reg = bank->base + OMAP730_GPIO_INT_STATUS;
#endif #endif
#ifdef CONFIG_ARCH_OMAP24XX
if (bank->method == METHOD_GPIO_24XX)
isr_reg = bank->base + OMAP24XX_GPIO_IRQSTATUS1;
#endif
isr = __raw_readl(isr_reg); while(1) {
_enable_gpio_irqbank(bank, isr, 0); isr = __raw_readl(isr_reg);
_clear_gpio_irqbank(bank, isr); _enable_gpio_irqbank(bank, isr, 0);
_enable_gpio_irqbank(bank, isr, 1); _clear_gpio_irqbank(bank, isr);
desc->chip->unmask(irq); _enable_gpio_irqbank(bank, isr, 1);
desc->chip->unmask(irq);
if (unlikely(!isr)) if (!isr)
return; break;
gpio_irq = bank->virtual_irq_start; gpio_irq = bank->virtual_irq_start;
for (; isr != 0; isr >>= 1, gpio_irq++) { for (; isr != 0; isr >>= 1, gpio_irq++) {
struct irqdesc *d; struct irqdesc *d;
if (!(isr & 1)) if (!(isr & 1))
continue; continue;
d = irq_desc + gpio_irq; d = irq_desc + gpio_irq;
desc_handle_irq(gpio_irq, d, regs); desc_handle_irq(gpio_irq, d, regs);
} }
}
} }
static void gpio_ack_irq(unsigned int irq) static void gpio_ack_irq(unsigned int irq)
@ -613,14 +796,10 @@ static void gpio_mask_irq(unsigned int irq)
static void gpio_unmask_irq(unsigned int irq) static void gpio_unmask_irq(unsigned int irq)
{ {
unsigned int gpio = irq - IH_GPIO_BASE; unsigned int gpio = irq - IH_GPIO_BASE;
unsigned int gpio_idx = get_gpio_index(gpio);
struct gpio_bank *bank = get_gpio_bank(gpio); struct gpio_bank *bank = get_gpio_bank(gpio);
if (_get_gpio_edge_ctrl(bank, get_gpio_index(gpio)) == OMAP_GPIO_NO_EDGE) { _set_gpio_irqenable(bank, gpio_idx, 1);
printk(KERN_ERR "OMAP GPIO %d: trying to enable GPIO IRQ while no edge is set\n",
gpio);
_set_gpio_edge_ctrl(bank, get_gpio_index(gpio), OMAP_GPIO_RISING_EDGE);
}
_set_gpio_irqenable(bank, gpio, 1);
} }
static void mpuio_ack_irq(unsigned int irq) static void mpuio_ack_irq(unsigned int irq)
@ -645,9 +824,11 @@ static void mpuio_unmask_irq(unsigned int irq)
} }
static struct irqchip gpio_irq_chip = { static struct irqchip gpio_irq_chip = {
.ack = gpio_ack_irq, .ack = gpio_ack_irq,
.mask = gpio_mask_irq, .mask = gpio_mask_irq,
.unmask = gpio_unmask_irq, .unmask = gpio_unmask_irq,
.set_type = gpio_irq_type,
.set_wake = gpio_wake_enable,
}; };
static struct irqchip mpuio_irq_chip = { static struct irqchip mpuio_irq_chip = {
@ -657,6 +838,7 @@ static struct irqchip mpuio_irq_chip = {
}; };
static int initialized = 0; static int initialized = 0;
static struct clk * gpio_ck = NULL;
static int __init _omap_gpio_init(void) static int __init _omap_gpio_init(void)
{ {
@ -665,6 +847,14 @@ static int __init _omap_gpio_init(void)
initialized = 1; initialized = 1;
if (cpu_is_omap1510()) {
gpio_ck = clk_get(NULL, "arm_gpio_ck");
if (IS_ERR(gpio_ck))
printk("Could not get arm_gpio_ck\n");
else
clk_use(gpio_ck);
}
#ifdef CONFIG_ARCH_OMAP1510 #ifdef CONFIG_ARCH_OMAP1510
if (cpu_is_omap1510()) { if (cpu_is_omap1510()) {
printk(KERN_INFO "OMAP1510 GPIO hardware\n"); printk(KERN_INFO "OMAP1510 GPIO hardware\n");
@ -674,7 +864,7 @@ static int __init _omap_gpio_init(void)
#endif #endif
#if defined(CONFIG_ARCH_OMAP16XX) #if defined(CONFIG_ARCH_OMAP16XX)
if (cpu_is_omap16xx()) { if (cpu_is_omap16xx()) {
int rev; u32 rev;
gpio_bank_count = 5; gpio_bank_count = 5;
gpio_bank = gpio_bank_1610; gpio_bank = gpio_bank_1610;
@ -689,6 +879,17 @@ static int __init _omap_gpio_init(void)
gpio_bank_count = 7; gpio_bank_count = 7;
gpio_bank = gpio_bank_730; gpio_bank = gpio_bank_730;
} }
#endif
#ifdef CONFIG_ARCH_OMAP24XX
if (cpu_is_omap24xx()) {
int rev;
gpio_bank_count = 4;
gpio_bank = gpio_bank_24xx;
rev = omap_readl(gpio_bank[0].base + OMAP24XX_GPIO_REVISION);
printk(KERN_INFO "OMAP24xx GPIO hardware version %d.%d\n",
(rev >> 4) & 0x0f, rev & 0x0f);
}
#endif #endif
for (i = 0; i < gpio_bank_count; i++) { for (i = 0; i < gpio_bank_count; i++) {
int j, gpio_count = 16; int j, gpio_count = 16;
@ -710,6 +911,7 @@ static int __init _omap_gpio_init(void)
if (bank->method == METHOD_GPIO_1610) { if (bank->method == METHOD_GPIO_1610) {
__raw_writew(0x0000, bank->base + OMAP1610_GPIO_IRQENABLE1); __raw_writew(0x0000, bank->base + OMAP1610_GPIO_IRQENABLE1);
__raw_writew(0xffff, bank->base + OMAP1610_GPIO_IRQSTATUS1); __raw_writew(0xffff, bank->base + OMAP1610_GPIO_IRQSTATUS1);
__raw_writew(0x0014, bank->base + OMAP1610_GPIO_SYSCONFIG);
} }
#endif #endif
#ifdef CONFIG_ARCH_OMAP730 #ifdef CONFIG_ARCH_OMAP730
@ -719,6 +921,14 @@ static int __init _omap_gpio_init(void)
gpio_count = 32; /* 730 has 32-bit GPIOs */ gpio_count = 32; /* 730 has 32-bit GPIOs */
} }
#endif
#ifdef CONFIG_ARCH_OMAP24XX
if (bank->method == METHOD_GPIO_24XX) {
__raw_writel(0x00000000, bank->base + OMAP24XX_GPIO_IRQENABLE1);
__raw_writel(0xffffffff, bank->base + OMAP24XX_GPIO_IRQSTATUS1);
gpio_count = 32;
}
#endif #endif
for (j = bank->virtual_irq_start; for (j = bank->virtual_irq_start;
j < bank->virtual_irq_start + gpio_count; j++) { j < bank->virtual_irq_start + gpio_count; j++) {
@ -735,12 +945,97 @@ static int __init _omap_gpio_init(void)
/* Enable system clock for GPIO module. /* Enable system clock for GPIO module.
* The CAM_CLK_CTRL *is* really the right place. */ * The CAM_CLK_CTRL *is* really the right place. */
if (cpu_is_omap1610() || cpu_is_omap1710()) if (cpu_is_omap16xx())
omap_writel(omap_readl(ULPD_CAM_CLK_CTRL) | 0x04, ULPD_CAM_CLK_CTRL); omap_writel(omap_readl(ULPD_CAM_CLK_CTRL) | 0x04, ULPD_CAM_CLK_CTRL);
return 0; return 0;
} }
#if defined (CONFIG_ARCH_OMAP16XX) || defined (CONFIG_ARCH_OMAP24XX)
static int omap_gpio_suspend(struct sys_device *dev, pm_message_t mesg)
{
int i;
if (!cpu_is_omap24xx() && !cpu_is_omap16xx())
return 0;
for (i = 0; i < gpio_bank_count; i++) {
struct gpio_bank *bank = &gpio_bank[i];
void __iomem *wake_status;
void __iomem *wake_clear;
void __iomem *wake_set;
switch (bank->method) {
case METHOD_GPIO_1610:
wake_status = bank->base + OMAP1610_GPIO_WAKEUPENABLE;
wake_clear = bank->base + OMAP1610_GPIO_CLEAR_WAKEUPENA;
wake_set = bank->base + OMAP1610_GPIO_SET_WAKEUPENA;
break;
case METHOD_GPIO_24XX:
wake_status = bank->base + OMAP24XX_GPIO_SETWKUENA;
wake_clear = bank->base + OMAP24XX_GPIO_CLEARWKUENA;
wake_set = bank->base + OMAP24XX_GPIO_SETWKUENA;
break;
default:
continue;
}
spin_lock(&bank->lock);
bank->saved_wakeup = __raw_readl(wake_status);
__raw_writel(0xffffffff, wake_clear);
__raw_writel(bank->suspend_wakeup, wake_set);
spin_unlock(&bank->lock);
}
return 0;
}
static int omap_gpio_resume(struct sys_device *dev)
{
int i;
if (!cpu_is_omap24xx() && !cpu_is_omap16xx())
return 0;
for (i = 0; i < gpio_bank_count; i++) {
struct gpio_bank *bank = &gpio_bank[i];
void __iomem *wake_clear;
void __iomem *wake_set;
switch (bank->method) {
case METHOD_GPIO_1610:
wake_clear = bank->base + OMAP1610_GPIO_CLEAR_WAKEUPENA;
wake_set = bank->base + OMAP1610_GPIO_SET_WAKEUPENA;
break;
case METHOD_GPIO_24XX:
			wake_clear = bank->base + OMAP24XX_GPIO_CLEARWKUENA;
			wake_set = bank->base + OMAP24XX_GPIO_SETWKUENA;
break;
default:
continue;
}
spin_lock(&bank->lock);
__raw_writel(0xffffffff, wake_clear);
__raw_writel(bank->saved_wakeup, wake_set);
spin_unlock(&bank->lock);
}
return 0;
}
static struct sysdev_class omap_gpio_sysclass = {
set_kset_name("gpio"),
.suspend = omap_gpio_suspend,
.resume = omap_gpio_resume,
};
static struct sys_device omap_gpio_device = {
.id = 0,
.cls = &omap_gpio_sysclass,
};
#endif
/* /*
* This may get called early from board specific init * This may get called early from board specific init
*/ */
@ -752,11 +1047,30 @@ int omap_gpio_init(void)
return 0; return 0;
} }
static int __init omap_gpio_sysinit(void)
{
int ret = 0;
if (!initialized)
ret = _omap_gpio_init();
#if defined(CONFIG_ARCH_OMAP16XX) || defined(CONFIG_ARCH_OMAP24XX)
if (cpu_is_omap16xx() || cpu_is_omap24xx()) {
if (ret == 0) {
ret = sysdev_class_register(&omap_gpio_sysclass);
if (ret == 0)
ret = sysdev_register(&omap_gpio_device);
}
}
#endif
return ret;
}
EXPORT_SYMBOL(omap_request_gpio); EXPORT_SYMBOL(omap_request_gpio);
EXPORT_SYMBOL(omap_free_gpio); EXPORT_SYMBOL(omap_free_gpio);
EXPORT_SYMBOL(omap_set_gpio_direction); EXPORT_SYMBOL(omap_set_gpio_direction);
EXPORT_SYMBOL(omap_set_gpio_dataout); EXPORT_SYMBOL(omap_set_gpio_dataout);
EXPORT_SYMBOL(omap_get_gpio_datain); EXPORT_SYMBOL(omap_get_gpio_datain);
EXPORT_SYMBOL(omap_set_gpio_edge_ctrl);
arch_initcall(omap_gpio_init); arch_initcall(omap_gpio_sysinit);
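For reference, a minimal sketch of how board code might use the exported GPIO calls above. This is illustration only, not part of the commit; GPIO number 3 and the printed message are arbitrary.

static int __init example_board_gpio(void)
{
	int level;

	if (omap_request_gpio(3) < 0)		/* reserve GPIO 3 */
		return -EBUSY;

	omap_set_gpio_direction(3, 0);		/* 0 = output */
	omap_set_gpio_dataout(3, 1);		/* drive the line high */

	omap_set_gpio_direction(3, 1);		/* 1 = input */
	level = omap_get_gpio_datain(3);	/* sample the pin */
	printk(KERN_INFO "GPIO 3 reads %d\n", level);

	omap_free_gpio(3);
	return 0;
}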

View File

@ -27,6 +27,7 @@
#include <asm/arch/dma.h> #include <asm/arch/dma.h>
#include <asm/arch/mux.h> #include <asm/arch/mux.h>
#include <asm/arch/irqs.h> #include <asm/arch/irqs.h>
#include <asm/arch/dsp_common.h>
#include <asm/arch/mcbsp.h> #include <asm/arch/mcbsp.h>
#include <asm/hardware/clock.h> #include <asm/hardware/clock.h>
@ -187,9 +188,6 @@ static int omap_mcbsp_check(unsigned int id)
return -1; return -1;
} }
#define EN_XORPCK 1
#define DSP_RSTCT2 0xe1008014
static void omap_mcbsp_dsp_request(void) static void omap_mcbsp_dsp_request(void)
{ {
if (cpu_is_omap1510() || cpu_is_omap16xx()) { if (cpu_is_omap1510() || cpu_is_omap16xx()) {
@ -198,6 +196,11 @@ static void omap_mcbsp_dsp_request(void)
/* enable 12MHz clock to mcbsp 1 & 3 */ /* enable 12MHz clock to mcbsp 1 & 3 */
clk_use(mcbsp_dspxor_ck); clk_use(mcbsp_dspxor_ck);
/*
* DSP external peripheral reset
* FIXME: This should be moved to dsp code
*/
__raw_writew(__raw_readw(DSP_RSTCT2) | 1 | 1 << 1, __raw_writew(__raw_readw(DSP_RSTCT2) | 1 | 1 << 1,
DSP_RSTCT2); DSP_RSTCT2);
} }

View File

@ -48,6 +48,9 @@ omap_cfg_reg(const reg_cfg_t reg_cfg)
pull_orig = 0, pull = 0; pull_orig = 0, pull = 0;
unsigned int mask, warn = 0; unsigned int mask, warn = 0;
if (cpu_is_omap7xx())
return 0;
if (reg_cfg > ARRAY_SIZE(reg_cfg_table)) { if (reg_cfg > ARRAY_SIZE(reg_cfg_table)) {
printk(KERN_ERR "MUX: reg_cfg %d\n", reg_cfg); printk(KERN_ERR "MUX: reg_cfg %d\n", reg_cfg);
return -EINVAL; return -EINVAL;

View File

@ -25,6 +25,7 @@
#include <linux/config.h> #include <linux/config.h>
#include <linux/module.h> #include <linux/module.h>
#include <linux/version.h>
#include <linux/types.h> #include <linux/types.h>
#include <linux/errno.h> #include <linux/errno.h>
#include <linux/kernel.h> #include <linux/kernel.h>

View File

@ -39,24 +39,32 @@
#include <linux/sched.h> #include <linux/sched.h>
#include <linux/proc_fs.h> #include <linux/proc_fs.h>
#include <linux/pm.h> #include <linux/pm.h>
#include <linux/interrupt.h>
#include <asm/io.h> #include <asm/io.h>
#include <asm/irq.h>
#include <asm/mach/time.h> #include <asm/mach/time.h>
#include <asm/mach-types.h> #include <asm/mach/irq.h>
#include <asm/arch/omap16xx.h> #include <asm/mach-types.h>
#include <asm/arch/irqs.h>
#include <asm/arch/tc.h>
#include <asm/arch/pm.h> #include <asm/arch/pm.h>
#include <asm/arch/mux.h> #include <asm/arch/mux.h>
#include <asm/arch/tc.h>
#include <asm/arch/tps65010.h> #include <asm/arch/tps65010.h>
#include <asm/arch/dsp_common.h>
#include "clock.h" #include "clock.h"
#include "sram.h"
static unsigned int arm_sleep_save[ARM_SLEEP_SAVE_SIZE]; static unsigned int arm_sleep_save[ARM_SLEEP_SAVE_SIZE];
static unsigned short ulpd_sleep_save[ULPD_SLEEP_SAVE_SIZE]; static unsigned short ulpd_sleep_save[ULPD_SLEEP_SAVE_SIZE];
static unsigned int mpui1510_sleep_save[MPUI1510_SLEEP_SAVE_SIZE]; static unsigned int mpui1510_sleep_save[MPUI1510_SLEEP_SAVE_SIZE];
static unsigned int mpui1610_sleep_save[MPUI1610_SLEEP_SAVE_SIZE]; static unsigned int mpui1610_sleep_save[MPUI1610_SLEEP_SAVE_SIZE];
static void (*omap_sram_idle)(void) = NULL;
static void (*omap_sram_suspend)(unsigned long r0, unsigned long r1) = NULL;
/* /*
* Let's power down on idle, but only if we are really * Let's power down on idle, but only if we are really
* idle, because once we start down the path of * idle, because once we start down the path of
@ -65,7 +73,6 @@ static unsigned int mpui1610_sleep_save[MPUI1610_SLEEP_SAVE_SIZE];
*/ */
void omap_pm_idle(void) void omap_pm_idle(void)
{ {
int (*func_ptr)(void) = 0;
unsigned int mask32 = 0; unsigned int mask32 = 0;
/* /*
@ -83,6 +90,13 @@ void omap_pm_idle(void)
} }
mask32 = omap_readl(ARM_SYSST); mask32 = omap_readl(ARM_SYSST);
/*
* Prevent the ULPD from entering low power state by setting
* POWER_CTRL_REG:4 = 0
*/
omap_writew(omap_readw(ULPD_POWER_CTRL) &
~ULPD_DEEP_SLEEP_TRANSITION_EN, ULPD_POWER_CTRL);
/* /*
* Since an interrupt may set up a timer, we don't want to * Since an interrupt may set up a timer, we don't want to
* reprogram the hardware timer with interrupts enabled. * reprogram the hardware timer with interrupts enabled.
@ -92,18 +106,9 @@ void omap_pm_idle(void)
if ((mask32 & DSP_IDLE) == 0) { if ((mask32 & DSP_IDLE) == 0) {
__asm__ volatile ("mcr p15, 0, r0, c7, c0, 4"); __asm__ volatile ("mcr p15, 0, r0, c7, c0, 4");
} else { } else
omap_sram_idle();
if (cpu_is_omap1510()) {
func_ptr = (void *)(OMAP1510_SRAM_IDLE_SUSPEND);
} else if (cpu_is_omap1610() || cpu_is_omap1710()) {
func_ptr = (void *)(OMAP1610_SRAM_IDLE_SUSPEND);
} else if (cpu_is_omap5912()) {
func_ptr = (void *)(OMAP5912_SRAM_IDLE_SUSPEND);
}
func_ptr();
}
local_fiq_enable(); local_fiq_enable();
local_irq_enable(); local_irq_enable();
} }
@ -115,58 +120,55 @@ void omap_pm_idle(void)
*/ */
static void omap_pm_wakeup_setup(void) static void omap_pm_wakeup_setup(void)
{ {
/* u32 level1_wake = OMAP_IRQ_BIT(INT_IH2_IRQ);
* Enable ARM XOR clock and release peripheral from reset by u32 level2_wake = OMAP_IRQ_BIT(INT_UART2) | OMAP_IRQ_BIT(INT_KEYBOARD);
* writing 1 to PER_EN bit in ARM_RSTCT2, this is required
* for UART configuration to use UART2 to wake up.
*/
omap_writel(omap_readl(ARM_IDLECT2) | ENABLE_XORCLK, ARM_IDLECT2);
omap_writel(omap_readl(ARM_RSTCT2) | PER_EN, ARM_RSTCT2);
omap_writew(MODEM_32K_EN, ULPD_CLOCK_CTRL);
/* /*
* Turn off all interrupts except L1-2nd level cascade, * Turn off all interrupts except GPIO bank 1, L1-2nd level cascade,
* and the L2 wakeup interrupts: keypad and UART2. * and the L2 wakeup interrupts: keypad and UART2. Note that the
* drivers must still separately call omap_set_gpio_wakeup() to
* wake up to a GPIO interrupt.
*/ */
if (cpu_is_omap1510() || cpu_is_omap16xx())
level1_wake |= OMAP_IRQ_BIT(INT_GPIO_BANK1);
else if (cpu_is_omap730())
level1_wake |= OMAP_IRQ_BIT(INT_730_GPIO_BANK1);
omap_writel(~IRQ_LEVEL2, OMAP_IH1_MIR); omap_writel(~level1_wake, OMAP_IH1_MIR);
if (cpu_is_omap1510()) { if (cpu_is_omap1510())
omap_writel(~(IRQ_UART2 | IRQ_KEYBOARD), OMAP_IH2_MIR); omap_writel(~level2_wake, OMAP_IH2_MIR);
}
/* INT_1610_WAKE_UP_REQ is needed for GPIO wakeup... */
if (cpu_is_omap16xx()) { if (cpu_is_omap16xx()) {
omap_writel(~(IRQ_UART2 | IRQ_KEYBOARD), OMAP_IH2_0_MIR); omap_writel(~level2_wake, OMAP_IH2_0_MIR);
omap_writel(~OMAP_IRQ_BIT(INT_1610_WAKE_UP_REQ), OMAP_IH2_1_MIR);
omap_writel(~0x0, OMAP_IH2_1_MIR);
omap_writel(~0x0, OMAP_IH2_2_MIR); omap_writel(~0x0, OMAP_IH2_2_MIR);
omap_writel(~0x0, OMAP_IH2_3_MIR); omap_writel(~0x0, OMAP_IH2_3_MIR);
} }
/* New IRQ agreement */ /* New IRQ agreement, recalculate in cascade order */
omap_writel(1, OMAP_IH2_CONTROL);
omap_writel(1, OMAP_IH1_CONTROL); omap_writel(1, OMAP_IH1_CONTROL);
/* external PULL to down, bit 22 = 0 */
omap_writel(omap_readl(PULL_DWN_CTRL_2) & ~(1<<22), PULL_DWN_CTRL_2);
} }
void omap_pm_suspend(void) void omap_pm_suspend(void)
{ {
unsigned int mask32 = 0;
unsigned long arg0 = 0, arg1 = 0; unsigned long arg0 = 0, arg1 = 0;
int (*func_ptr)(unsigned short, unsigned short) = 0;
unsigned short save_dsp_idlect2;
printk("PM: OMAP%x is entering deep sleep now ...\n", system_rev); printk("PM: OMAP%x is trying to enter deep sleep...\n", system_rev);
omap_serial_wake_trigger(1);
if (machine_is_omap_osk()) { if (machine_is_omap_osk()) {
/* Stop LED1 (D9) blink */ /* Stop LED1 (D9) blink */
tps65010_set_led(LED1, OFF); tps65010_set_led(LED1, OFF);
} }
omap_writew(0xffff, ULPD_SOFT_DISABLE_REQ_REG);
/* /*
* Step 1: turn off interrupts * Step 1: turn off interrupts (FIXME: NOTE: already disabled)
*/ */
local_irq_disable(); local_irq_disable();
@ -207,6 +209,8 @@ void omap_pm_suspend(void)
ARM_SAVE(ARM_CKCTL); ARM_SAVE(ARM_CKCTL);
ARM_SAVE(ARM_IDLECT1); ARM_SAVE(ARM_IDLECT1);
ARM_SAVE(ARM_IDLECT2); ARM_SAVE(ARM_IDLECT2);
if (!(cpu_is_omap1510()))
ARM_SAVE(ARM_IDLECT3);
ARM_SAVE(ARM_EWUPCT); ARM_SAVE(ARM_EWUPCT);
ARM_SAVE(ARM_RSTCT1); ARM_SAVE(ARM_RSTCT1);
ARM_SAVE(ARM_RSTCT2); ARM_SAVE(ARM_RSTCT2);
@ -214,42 +218,12 @@ void omap_pm_suspend(void)
ULPD_SAVE(ULPD_CLOCK_CTRL); ULPD_SAVE(ULPD_CLOCK_CTRL);
ULPD_SAVE(ULPD_STATUS_REQ); ULPD_SAVE(ULPD_STATUS_REQ);
/* /* (Step 3 removed - we now allow deep sleep by default) */
* Step 3: LOW_PWR signal enabling
*
* Allow the LOW_PWR signal to be visible on MPUIO5 ball.
*/
if (cpu_is_omap1510()) {
/* POWER_CTRL_REG = 0x1 (LOW_POWER is available) */
omap_writew(omap_readw(ULPD_POWER_CTRL) |
OMAP1510_ULPD_LOW_POWER_REQ, ULPD_POWER_CTRL);
} else if (cpu_is_omap16xx()) {
/* POWER_CTRL_REG = 0x1 (LOW_POWER is available) */
omap_writew(omap_readw(ULPD_POWER_CTRL) |
OMAP1610_ULPD_LOW_POWER_REQ, ULPD_POWER_CTRL);
}
/* configure LOW_PWR pin */
omap_cfg_reg(T20_1610_LOW_PWR);
/* /*
* Step 4: OMAP DSP Shutdown * Step 4: OMAP DSP Shutdown
*/ */
/* Set DSP_RST = 1 and DSP_EN = 0, put DSP block into reset */
omap_writel((omap_readl(ARM_RSTCT1) | DSP_RST) & ~DSP_ENABLE,
ARM_RSTCT1);
/* Set DSP boot mode to DSP-IDLE, DSP_BOOT_MODE = 0x2 */
omap_writel(DSP_IDLE_MODE, MPUI_DSP_BOOT_CONFIG);
/* Set EN_DSPCK = 0, stop DSP block clock */
omap_writel(omap_readl(ARM_CKCTL) & ~DSP_CLOCK_ENABLE, ARM_CKCTL);
/* Stop any DSP domain clocks */
omap_writel(omap_readl(ARM_IDLECT2) | (1<<EN_APICK), ARM_IDLECT2);
save_dsp_idlect2 = __raw_readw(DSP_IDLECT2);
__raw_writew(0, DSP_IDLECT2);
/* /*
* Step 5: Wakeup Event Setup * Step 5: Wakeup Event Setup
@ -258,24 +232,9 @@ void omap_pm_suspend(void)
omap_pm_wakeup_setup(); omap_pm_wakeup_setup();
/* /*
* Step 6a: ARM and Traffic controller shutdown * Step 6: ARM and Traffic controller shutdown
*
* Step 6 starts here with clock and watchdog disable
*/ */
/* stop clocks */
mask32 = omap_readl(ARM_IDLECT2);
mask32 &= ~(1<<EN_WDTCK); /* bit 0 -> 0 (WDT clock) */
mask32 |= (1<<EN_XORPCK); /* bit 1 -> 1 (XORPCK clock) */
mask32 &= ~(1<<EN_PERCK); /* bit 2 -> 0 (MPUPER_CK clock) */
mask32 &= ~(1<<EN_LCDCK); /* bit 3 -> 0 (LCDC clock) */
mask32 &= ~(1<<EN_LBCK); /* bit 4 -> 0 (local bus clock) */
mask32 |= (1<<EN_APICK); /* bit 6 -> 1 (MPUI clock) */
mask32 &= ~(1<<EN_TIMCK); /* bit 7 -> 0 (MPU timer clock) */
mask32 &= ~(1<<DMACK_REQ); /* bit 8 -> 0 (DMAC clock) */
mask32 &= ~(1<<EN_GPIOCK); /* bit 9 -> 0 (GPIO clock) */
omap_writel(mask32, ARM_IDLECT2);
/* disable ARM watchdog */ /* disable ARM watchdog */
omap_writel(0x00F5, OMAP_WDT_TIMER_MODE); omap_writel(0x00F5, OMAP_WDT_TIMER_MODE);
omap_writel(0x00A0, OMAP_WDT_TIMER_MODE); omap_writel(0x00A0, OMAP_WDT_TIMER_MODE);
@ -295,47 +254,24 @@ void omap_pm_suspend(void)
arg0 = arm_sleep_save[ARM_SLEEP_SAVE_ARM_IDLECT1]; arg0 = arm_sleep_save[ARM_SLEEP_SAVE_ARM_IDLECT1];
arg1 = arm_sleep_save[ARM_SLEEP_SAVE_ARM_IDLECT2]; arg1 = arm_sleep_save[ARM_SLEEP_SAVE_ARM_IDLECT2];
if (cpu_is_omap1510()) {
func_ptr = (void *)(OMAP1510_SRAM_API_SUSPEND);
} else if (cpu_is_omap1610() || cpu_is_omap1710()) {
func_ptr = (void *)(OMAP1610_SRAM_API_SUSPEND);
} else if (cpu_is_omap5912()) {
func_ptr = (void *)(OMAP5912_SRAM_API_SUSPEND);
}
/* /*
* Step 6c: ARM and Traffic controller shutdown * Step 6c: ARM and Traffic controller shutdown
* *
* Jump to assembly code. The processor will stay there * Jump to assembly code. The processor will stay there
* until wake up. * until wake up.
*/ */
omap_sram_suspend(arg0, arg1);
func_ptr(arg0, arg1);
/* /*
* If we are here, processor is woken up! * If we are here, processor is woken up!
*/ */
if (cpu_is_omap1510()) {
/* POWER_CTRL_REG = 0x0 (LOW_POWER is disabled) */
omap_writew(omap_readw(ULPD_POWER_CTRL) &
~OMAP1510_ULPD_LOW_POWER_REQ, ULPD_POWER_CTRL);
} else if (cpu_is_omap16xx()) {
/* POWER_CTRL_REG = 0x0 (LOW_POWER is disabled) */
omap_writew(omap_readw(ULPD_POWER_CTRL) &
~OMAP1610_ULPD_LOW_POWER_REQ, ULPD_POWER_CTRL);
}
/* Restore DSP clocks */
omap_writel(omap_readl(ARM_IDLECT2) | (1<<EN_APICK), ARM_IDLECT2);
__raw_writew(save_dsp_idlect2, DSP_IDLECT2);
ARM_RESTORE(ARM_IDLECT2);
/* /*
* Restore ARM state, except ARM_IDLECT1/2 which omap_cpu_suspend did * Restore ARM state, except ARM_IDLECT1/2 which omap_cpu_suspend did
*/ */
if (!(cpu_is_omap1510()))
ARM_RESTORE(ARM_IDLECT3);
ARM_RESTORE(ARM_CKCTL); ARM_RESTORE(ARM_CKCTL);
ARM_RESTORE(ARM_EWUPCT); ARM_RESTORE(ARM_EWUPCT);
ARM_RESTORE(ARM_RSTCT1); ARM_RESTORE(ARM_RSTCT1);
@ -366,6 +302,8 @@ void omap_pm_suspend(void)
MPUI1610_RESTORE(OMAP_IH2_3_MIR); MPUI1610_RESTORE(OMAP_IH2_3_MIR);
} }
omap_writew(0, ULPD_SOFT_DISABLE_REQ_REG);
/* /*
* Reenable interrupts * Reenable interrupts
*/ */
@ -373,6 +311,8 @@ void omap_pm_suspend(void)
local_irq_enable(); local_irq_enable();
local_fiq_enable(); local_fiq_enable();
omap_serial_wake_trigger(0);
printk("PM: OMAP%x is re-starting from deep sleep...\n", system_rev); printk("PM: OMAP%x is re-starting from deep sleep...\n", system_rev);
if (machine_is_omap_osk()) { if (machine_is_omap_osk()) {
@ -401,6 +341,8 @@ static int omap_pm_read_proc(
ARM_SAVE(ARM_CKCTL); ARM_SAVE(ARM_CKCTL);
ARM_SAVE(ARM_IDLECT1); ARM_SAVE(ARM_IDLECT1);
ARM_SAVE(ARM_IDLECT2); ARM_SAVE(ARM_IDLECT2);
if (!(cpu_is_omap1510()))
ARM_SAVE(ARM_IDLECT3);
ARM_SAVE(ARM_EWUPCT); ARM_SAVE(ARM_EWUPCT);
ARM_SAVE(ARM_RSTCT1); ARM_SAVE(ARM_RSTCT1);
ARM_SAVE(ARM_RSTCT2); ARM_SAVE(ARM_RSTCT2);
@ -436,6 +378,7 @@ static int omap_pm_read_proc(
"ARM_CKCTL_REG: 0x%-8x \n" "ARM_CKCTL_REG: 0x%-8x \n"
"ARM_IDLECT1_REG: 0x%-8x \n" "ARM_IDLECT1_REG: 0x%-8x \n"
"ARM_IDLECT2_REG: 0x%-8x \n" "ARM_IDLECT2_REG: 0x%-8x \n"
"ARM_IDLECT3_REG: 0x%-8x \n"
"ARM_EWUPCT_REG: 0x%-8x \n" "ARM_EWUPCT_REG: 0x%-8x \n"
"ARM_RSTCT1_REG: 0x%-8x \n" "ARM_RSTCT1_REG: 0x%-8x \n"
"ARM_RSTCT2_REG: 0x%-8x \n" "ARM_RSTCT2_REG: 0x%-8x \n"
@ -449,6 +392,7 @@ static int omap_pm_read_proc(
ARM_SHOW(ARM_CKCTL), ARM_SHOW(ARM_CKCTL),
ARM_SHOW(ARM_IDLECT1), ARM_SHOW(ARM_IDLECT1),
ARM_SHOW(ARM_IDLECT2), ARM_SHOW(ARM_IDLECT2),
ARM_SHOW(ARM_IDLECT3),
ARM_SHOW(ARM_EWUPCT), ARM_SHOW(ARM_EWUPCT),
ARM_SHOW(ARM_RSTCT1), ARM_SHOW(ARM_RSTCT1),
ARM_SHOW(ARM_RSTCT2), ARM_SHOW(ARM_RSTCT2),
@ -507,7 +451,7 @@ static void omap_pm_init_proc(void)
entry = create_proc_read_entry("driver/omap_pm", entry = create_proc_read_entry("driver/omap_pm",
S_IWUSR | S_IRUGO, NULL, S_IWUSR | S_IRUGO, NULL,
omap_pm_read_proc, 0); omap_pm_read_proc, NULL);
} }
#endif /* DEBUG && CONFIG_PROC_FS */ #endif /* DEBUG && CONFIG_PROC_FS */
@ -580,7 +524,21 @@ static int omap_pm_finish(suspend_state_t state)
} }
struct pm_ops omap_pm_ops ={ static irqreturn_t omap_wakeup_interrupt(int irq, void * dev,
struct pt_regs * regs)
{
return IRQ_HANDLED;
}
static struct irqaction omap_wakeup_irq = {
.name = "peripheral wakeup",
.flags = SA_INTERRUPT,
.handler = omap_wakeup_interrupt
};
static struct pm_ops omap_pm_ops ={
.pm_disk_mode = 0, .pm_disk_mode = 0,
.prepare = omap_pm_prepare, .prepare = omap_pm_prepare,
.enter = omap_pm_enter, .enter = omap_pm_enter,
@ -590,42 +548,61 @@ struct pm_ops omap_pm_ops ={
static int __init omap_pm_init(void) static int __init omap_pm_init(void)
{ {
printk("Power Management for TI OMAP.\n"); printk("Power Management for TI OMAP.\n");
pm_idle = omap_pm_idle;
/* /*
* We copy the assembler sleep/wakeup routines to SRAM. * We copy the assembler sleep/wakeup routines to SRAM.
* These routines need to be in SRAM as that's the only * These routines need to be in SRAM as that's the only
* memory the MPU can see when it wakes up. * memory the MPU can see when it wakes up.
*/ */
#ifdef CONFIG_ARCH_OMAP1510
if (cpu_is_omap1510()) { if (cpu_is_omap1510()) {
memcpy((void *)OMAP1510_SRAM_IDLE_SUSPEND, omap_sram_idle = omap_sram_push(omap1510_idle_loop_suspend,
omap1510_idle_loop_suspend, omap1510_idle_loop_suspend_sz);
omap1510_idle_loop_suspend_sz); omap_sram_suspend = omap_sram_push(omap1510_cpu_suspend,
memcpy((void *)OMAP1510_SRAM_API_SUSPEND, omap1510_cpu_suspend, omap1510_cpu_suspend_sz);
omap1510_cpu_suspend_sz); } else if (cpu_is_omap16xx()) {
} else omap_sram_idle = omap_sram_push(omap1610_idle_loop_suspend,
#endif omap1610_idle_loop_suspend_sz);
if (cpu_is_omap1610() || cpu_is_omap1710()) { omap_sram_suspend = omap_sram_push(omap1610_cpu_suspend,
memcpy((void *)OMAP1610_SRAM_IDLE_SUSPEND, omap1610_cpu_suspend_sz);
omap1610_idle_loop_suspend,
omap1610_idle_loop_suspend_sz);
memcpy((void *)OMAP1610_SRAM_API_SUSPEND, omap1610_cpu_suspend,
omap1610_cpu_suspend_sz);
} else if (cpu_is_omap5912()) {
memcpy((void *)OMAP5912_SRAM_IDLE_SUSPEND,
omap1610_idle_loop_suspend,
omap1610_idle_loop_suspend_sz);
memcpy((void *)OMAP5912_SRAM_API_SUSPEND, omap1610_cpu_suspend,
omap1610_cpu_suspend_sz);
} }
if (omap_sram_idle == NULL || omap_sram_suspend == NULL) {
printk(KERN_ERR "PM not initialized: Missing SRAM support\n");
return -ENODEV;
}
pm_idle = omap_pm_idle;
setup_irq(INT_1610_WAKE_UP_REQ, &omap_wakeup_irq);
#if 0
/* --- BEGIN BOARD-DEPENDENT CODE --- */
/* Sleepx mask direction */
omap_writew((omap_readw(0xfffb5008) & ~2), 0xfffb5008);
/* Unmask sleepx signal */
omap_writew((omap_readw(0xfffb5004) & ~2), 0xfffb5004);
/* --- END BOARD-DEPENDENT CODE --- */
#endif
/* Program new power ramp-up time
* (0 for most boards since we don't lower voltage when in deep sleep)
*/
omap_writew(ULPD_SETUP_ANALOG_CELL_3_VAL, ULPD_SETUP_ANALOG_CELL_3);
/* Setup ULPD POWER_CTRL_REG - enter deep sleep whenever possible */
omap_writew(ULPD_POWER_CTRL_REG_VAL, ULPD_POWER_CTRL);
/* Configure IDLECT3 */
if (cpu_is_omap16xx())
omap_writel(OMAP1610_IDLECT3_VAL, OMAP1610_IDLECT3);
pm_set_ops(&omap_pm_ops); pm_set_ops(&omap_pm_ops);
#if defined(DEBUG) && defined(CONFIG_PROC_FS) #if defined(DEBUG) && defined(CONFIG_PROC_FS)
omap_pm_init_proc(); omap_pm_init_proc();
#endif #endif
/* configure LOW_PWR pin */
omap_cfg_reg(T20_1610_LOW_PWR);
return 0; return 0;
} }
__initcall(omap_pm_init); __initcall(omap_pm_init);

View File

@ -66,7 +66,7 @@ ENTRY(omap1510_idle_loop_suspend)
@ get ARM_IDLECT2 into r2 @ get ARM_IDLECT2 into r2
ldrh r2, [r4, #ARM_IDLECT2_ASM_OFFSET & 0xff] ldrh r2, [r4, #ARM_IDLECT2_ASM_OFFSET & 0xff]
mov r5, #OMAP1510_IDLE_CLOCK_DOMAINS & 0xff mov r5, #OMAP1510_IDLE_CLOCK_DOMAINS & 0xff
orr r5,r5, #OMAP1510_IDLE_CLOCK_DOMAINS & 0xff00 orr r5, r5, #OMAP1510_IDLE_CLOCK_DOMAINS & 0xff00
strh r5, [r4, #ARM_IDLECT2_ASM_OFFSET & 0xff] strh r5, [r4, #ARM_IDLECT2_ASM_OFFSET & 0xff]
@ request ARM idle @ request ARM idle
@ -76,7 +76,7 @@ ENTRY(omap1510_idle_loop_suspend)
strh r3, [r4, #ARM_IDLECT1_ASM_OFFSET & 0xff] strh r3, [r4, #ARM_IDLECT1_ASM_OFFSET & 0xff]
mov r5, #IDLE_WAIT_CYCLES & 0xff mov r5, #IDLE_WAIT_CYCLES & 0xff
orr r5, r5, #IDLE_WAIT_CYCLES & 0xff00 orr r5, r5, #IDLE_WAIT_CYCLES & 0xff00
l_1510: subs r5, r5, #1 l_1510: subs r5, r5, #1
bne l_1510 bne l_1510
/* /*
@ -96,7 +96,7 @@ l_1510: subs r5, r5, #1
strh r2, [r4, #ARM_IDLECT2_ASM_OFFSET & 0xff] strh r2, [r4, #ARM_IDLECT2_ASM_OFFSET & 0xff]
strh r1, [r4, #ARM_IDLECT1_ASM_OFFSET & 0xff] strh r1, [r4, #ARM_IDLECT1_ASM_OFFSET & 0xff]
ldmfd sp!, {r0 - r12, pc} @ restore regs and return ldmfd sp!, {r0 - r12, pc} @ restore regs and return
ENTRY(omap1510_idle_loop_suspend_sz) ENTRY(omap1510_idle_loop_suspend_sz)
.word . - omap1510_idle_loop_suspend .word . - omap1510_idle_loop_suspend
@ -115,8 +115,8 @@ ENTRY(omap1610_idle_loop_suspend)
@ turn off clock domains @ turn off clock domains
@ get ARM_IDLECT2 into r2 @ get ARM_IDLECT2 into r2
ldrh r2, [r4, #ARM_IDLECT2_ASM_OFFSET & 0xff] ldrh r2, [r4, #ARM_IDLECT2_ASM_OFFSET & 0xff]
mov r5, #OMAP1610_IDLE_CLOCK_DOMAINS & 0xff mov r5, #OMAP1610_IDLECT2_SLEEP_VAL & 0xff
orr r5,r5, #OMAP1610_IDLE_CLOCK_DOMAINS & 0xff00 orr r5, r5, #OMAP1610_IDLECT2_SLEEP_VAL & 0xff00
strh r5, [r4, #ARM_IDLECT2_ASM_OFFSET & 0xff] strh r5, [r4, #ARM_IDLECT2_ASM_OFFSET & 0xff]
@ request ARM idle @ request ARM idle
@ -126,7 +126,7 @@ ENTRY(omap1610_idle_loop_suspend)
strh r3, [r4, #ARM_IDLECT1_ASM_OFFSET & 0xff] strh r3, [r4, #ARM_IDLECT1_ASM_OFFSET & 0xff]
mov r5, #IDLE_WAIT_CYCLES & 0xff mov r5, #IDLE_WAIT_CYCLES & 0xff
orr r5, r5, #IDLE_WAIT_CYCLES & 0xff00 orr r5, r5, #IDLE_WAIT_CYCLES & 0xff00
l_1610: subs r5, r5, #1 l_1610: subs r5, r5, #1
bne l_1610 bne l_1610
/* /*
@ -146,7 +146,7 @@ l_1610: subs r5, r5, #1
strh r2, [r4, #ARM_IDLECT2_ASM_OFFSET & 0xff] strh r2, [r4, #ARM_IDLECT2_ASM_OFFSET & 0xff]
strh r1, [r4, #ARM_IDLECT1_ASM_OFFSET & 0xff] strh r1, [r4, #ARM_IDLECT1_ASM_OFFSET & 0xff]
ldmfd sp!, {r0 - r12, pc} @ restore regs and return ldmfd sp!, {r0 - r12, pc} @ restore regs and return
ENTRY(omap1610_idle_loop_suspend_sz) ENTRY(omap1610_idle_loop_suspend_sz)
.word . - omap1610_idle_loop_suspend .word . - omap1610_idle_loop_suspend
@ -208,7 +208,7 @@ ENTRY(omap1510_cpu_suspend)
@ turn off clock domains @ turn off clock domains
mov r5, #OMAP1510_IDLE_CLOCK_DOMAINS & 0xff mov r5, #OMAP1510_IDLE_CLOCK_DOMAINS & 0xff
orr r5,r5, #OMAP1510_IDLE_CLOCK_DOMAINS & 0xff00 orr r5, r5, #OMAP1510_IDLE_CLOCK_DOMAINS & 0xff00
strh r5, [r4, #ARM_IDLECT2_ASM_OFFSET & 0xff] strh r5, [r4, #ARM_IDLECT2_ASM_OFFSET & 0xff]
@ request ARM idle @ request ARM idle
@ -217,7 +217,7 @@ ENTRY(omap1510_cpu_suspend)
strh r3, [r4, #ARM_IDLECT1_ASM_OFFSET & 0xff] strh r3, [r4, #ARM_IDLECT1_ASM_OFFSET & 0xff]
mov r5, #IDLE_WAIT_CYCLES & 0xff mov r5, #IDLE_WAIT_CYCLES & 0xff
orr r5, r5, #IDLE_WAIT_CYCLES & 0xff00 orr r5, r5, #IDLE_WAIT_CYCLES & 0xff00
l_1510_2: l_1510_2:
subs r5, r5, #1 subs r5, r5, #1
bne l_1510_2 bne l_1510_2
@ -237,7 +237,7 @@ l_1510_2:
strh r0, [r4, #ARM_IDLECT1_ASM_OFFSET & 0xff] strh r0, [r4, #ARM_IDLECT1_ASM_OFFSET & 0xff]
@ restore regs and return @ restore regs and return
ldmfd sp!, {r0 - r12, pc} ldmfd sp!, {r0 - r12, pc}
ENTRY(omap1510_cpu_suspend_sz) ENTRY(omap1510_cpu_suspend_sz)
.word . - omap1510_cpu_suspend .word . - omap1510_cpu_suspend
@ -249,21 +249,26 @@ ENTRY(omap1610_cpu_suspend)
@ save registers on stack @ save registers on stack
stmfd sp!, {r0 - r12, lr} stmfd sp!, {r0 - r12, lr}
@ Drain write cache
mov r4, #0
mcr p15, 0, r0, c7, c10, 4
nop
@ load base address of Traffic Controller @ load base address of Traffic Controller
mov r4, #TCMIF_ASM_BASE & 0xff000000 mov r6, #TCMIF_ASM_BASE & 0xff000000
orr r4, r4, #TCMIF_ASM_BASE & 0x00ff0000 orr r6, r6, #TCMIF_ASM_BASE & 0x00ff0000
orr r4, r4, #TCMIF_ASM_BASE & 0x0000ff00 orr r6, r6, #TCMIF_ASM_BASE & 0x0000ff00
@ prepare to put SDRAM into self-refresh manually @ prepare to put SDRAM into self-refresh manually
ldr r5, [r4, #EMIFF_SDRAM_CONFIG_ASM_OFFSET & 0xff] ldr r7, [r6, #EMIFF_SDRAM_CONFIG_ASM_OFFSET & 0xff]
orr r5, r5, #SELF_REFRESH_MODE & 0xff000000 orr r9, r7, #SELF_REFRESH_MODE & 0xff000000
orr r5, r5, #SELF_REFRESH_MODE & 0x000000ff orr r9, r9, #SELF_REFRESH_MODE & 0x000000ff
str r5, [r4, #EMIFF_SDRAM_CONFIG_ASM_OFFSET & 0xff] str r9, [r6, #EMIFF_SDRAM_CONFIG_ASM_OFFSET & 0xff]
@ prepare to put EMIFS to Sleep @ prepare to put EMIFS to Sleep
ldr r5, [r4, #EMIFS_CONFIG_ASM_OFFSET & 0xff] ldr r8, [r6, #EMIFS_CONFIG_ASM_OFFSET & 0xff]
orr r5, r5, #IDLE_EMIFS_REQUEST & 0xff orr r9, r8, #IDLE_EMIFS_REQUEST & 0xff
str r5, [r4, #EMIFS_CONFIG_ASM_OFFSET & 0xff] str r9, [r6, #EMIFS_CONFIG_ASM_OFFSET & 0xff]
@ load base address of ARM_IDLECT1 and ARM_IDLECT2 @ load base address of ARM_IDLECT1 and ARM_IDLECT2
mov r4, #CLKGEN_REG_ASM_BASE & 0xff000000 mov r4, #CLKGEN_REG_ASM_BASE & 0xff000000
@ -271,26 +276,22 @@ ENTRY(omap1610_cpu_suspend)
orr r4, r4, #CLKGEN_REG_ASM_BASE & 0x0000ff00 orr r4, r4, #CLKGEN_REG_ASM_BASE & 0x0000ff00
@ turn off clock domains @ turn off clock domains
mov r5, #OMAP1610_IDLE_CLOCK_DOMAINS & 0xff @ do not disable PERCK (0x04)
orr r5,r5, #OMAP1610_IDLE_CLOCK_DOMAINS & 0xff00 mov r5, #OMAP1610_IDLECT2_SLEEP_VAL & 0xff
strh r5, [r4, #ARM_IDLECT2_ASM_OFFSET & 0xff] orr r5, r5, #OMAP1610_IDLECT2_SLEEP_VAL & 0xff00
@ work around errata of OMAP1610/5912. Enable (!) peripheral
@ clock to let the chip go into deep sleep
ldrh r5, [r4, #ARM_IDLECT2_ASM_OFFSET & 0xff]
orr r5,r5, #EN_PERCK_BIT & 0xff
strh r5, [r4, #ARM_IDLECT2_ASM_OFFSET & 0xff] strh r5, [r4, #ARM_IDLECT2_ASM_OFFSET & 0xff]
@ request ARM idle @ request ARM idle
mov r3, #OMAP1610_DEEP_SLEEP_REQUEST & 0xff mov r3, #OMAP1610_IDLECT1_SLEEP_VAL & 0xff
orr r3, r3, #OMAP1610_DEEP_SLEEP_REQUEST & 0xff00 orr r3, r3, #OMAP1610_IDLECT1_SLEEP_VAL & 0xff00
strh r3, [r4, #ARM_IDLECT1_ASM_OFFSET & 0xff] strh r3, [r4, #ARM_IDLECT1_ASM_OFFSET & 0xff]
mov r5, #IDLE_WAIT_CYCLES & 0xff @ disable instruction cache
orr r5, r5, #IDLE_WAIT_CYCLES & 0xff00 mrc p15, 0, r9, c1, c0, 0
l_1610_2: bic r2, r9, #0x1000
subs r5, r5, #1 mcr p15, 0, r2, c1, c0, 0
bne l_1610_2 nop
/* /*
* Let's wait for the next wake up event to wake us up. r0 can't be * Let's wait for the next wake up event to wake us up. r0 can't be
* used here because r0 holds ARM_IDLECT1 * used here because r0 holds ARM_IDLECT1
@ -301,13 +302,21 @@ l_1610_2:
* omap1610_cpu_suspend()'s resume point. * omap1610_cpu_suspend()'s resume point.
* *
* It will just start executing here, so we'll restore stuff from the * It will just start executing here, so we'll restore stuff from the
* stack, reset the ARM_IDLECT1 and ARM_IDLECT2. * stack.
*/ */
@ re-enable Icache
mcr p15, 0, r9, c1, c0, 0
@ reset the ARM_IDLECT1 and ARM_IDLECT2.
strh r1, [r4, #ARM_IDLECT2_ASM_OFFSET & 0xff] strh r1, [r4, #ARM_IDLECT2_ASM_OFFSET & 0xff]
strh r0, [r4, #ARM_IDLECT1_ASM_OFFSET & 0xff] strh r0, [r4, #ARM_IDLECT1_ASM_OFFSET & 0xff]
@ Restore EMIFF controls
str r7, [r6, #EMIFF_SDRAM_CONFIG_ASM_OFFSET & 0xff]
str r8, [r6, #EMIFS_CONFIG_ASM_OFFSET & 0xff]
@ restore regs and return @ restore regs and return
ldmfd sp!, {r0 - r12, pc} ldmfd sp!, {r0 - r12, pc}
ENTRY(omap1610_cpu_suspend_sz) ENTRY(omap1610_cpu_suspend_sz)
.word . - omap1610_cpu_suspend .word . - omap1610_cpu_suspend

View File

@ -0,0 +1,58 @@
/*
* linux/arch/arm/plat-omap/sram.S
*
* Functions that need to be run in internal SRAM
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include <linux/config.h>
#include <linux/linkage.h>
#include <asm/assembler.h>
#include <asm/arch/io.h>
#include <asm/arch/hardware.h>
.text
/*
* Reprograms ULPD and CKCTL.
*/
ENTRY(sram_reprogram_clock)
stmfd sp!, {r0 - r12, lr} @ save registers on stack
mov r2, #IO_ADDRESS(DPLL_CTL) & 0xff000000
orr r2, r2, #IO_ADDRESS(DPLL_CTL) & 0x00ff0000
orr r2, r2, #IO_ADDRESS(DPLL_CTL) & 0x0000ff00
mov r3, #IO_ADDRESS(ARM_CKCTL) & 0xff000000
orr r3, r3, #IO_ADDRESS(ARM_CKCTL) & 0x00ff0000
orr r3, r3, #IO_ADDRESS(ARM_CKCTL) & 0x0000ff00
tst r0, #1 << 4 @ want lock mode?
beq newck @ nope
bic r0, r0, #1 << 4 @ else clear lock bit
strh r0, [r2] @ set dpll into bypass mode
orr r0, r0, #1 << 4 @ set lock bit again
newck:
strh r1, [r3] @ write new ckctl value
strh r0, [r2] @ write new dpll value
mov r4, #0x0700 @ let the clocks settle
orr r4, r4, #0x00ff
delay: sub r4, r4, #1
cmp r4, #0
bne delay
lock: ldrh r4, [r2], #0 @ read back dpll value
tst r0, #1 << 4 @ want lock mode?
beq out @ nope
tst r4, #1 << 0 @ dpll rate locked?
beq lock @ try again
out:
ldmfd sp!, {r0 - r12, pc} @ restore regs and return
ENTRY(sram_reprogram_clock_sz)
.word . - sram_reprogram_clock

116
arch/arm/plat-omap/sram.c Normal file
View File

@ -0,0 +1,116 @@
/*
* linux/arch/arm/plat-omap/sram.c
*
* OMAP SRAM detection and management
*
* Copyright (C) 2005 Nokia Corporation
* Written by Tony Lindgren <tony@atomide.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include <linux/config.h>
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/init.h>
#include <asm/mach/map.h>
#include <asm/io.h>
#include <asm/cacheflush.h>
#include "sram.h"
#define OMAP1_SRAM_BASE 0xd0000000
#define OMAP1_SRAM_START 0x20000000
#define SRAM_BOOTLOADER_SZ 0x80
static unsigned long omap_sram_base;
static unsigned long omap_sram_size;
static unsigned long omap_sram_ceil;
/*
* The amount of SRAM depends on the core type:
* 730 = 200K, 1510 = 512K, 5912 = 256K, 1610 = 16K, 1710 = 16K
* Note that we cannot try to test for SRAM here because writes
* to secure SRAM will hang the system. Also the SRAM is not
* yet mapped at this point.
*/
void __init omap_detect_sram(void)
{
omap_sram_base = OMAP1_SRAM_BASE;
if (cpu_is_omap730())
omap_sram_size = 0x32000;
else if (cpu_is_omap1510())
omap_sram_size = 0x80000;
else if (cpu_is_omap1610() || cpu_is_omap1621() || cpu_is_omap1710())
omap_sram_size = 0x4000;
else if (cpu_is_omap1611())
omap_sram_size = 0x3e800;
else {
printk(KERN_ERR "Could not detect SRAM size\n");
omap_sram_size = 0x4000;
}
printk(KERN_INFO "SRAM size: 0x%lx\n", omap_sram_size);
omap_sram_ceil = omap_sram_base + omap_sram_size;
}
static struct map_desc omap_sram_io_desc[] __initdata = {
{ OMAP1_SRAM_BASE, OMAP1_SRAM_START, 0, MT_DEVICE }
};
/*
 * In order to use the last 2kB of SRAM on 1611b, we must round the size
 * up to a multiple of PAGE_SIZE. We cannot use ioremap for SRAM, as
* clock init needs SRAM early.
*/
void __init omap_map_sram(void)
{
if (omap_sram_size == 0)
return;
omap_sram_io_desc[0].length = (omap_sram_size + PAGE_SIZE-1)/PAGE_SIZE;
omap_sram_io_desc[0].length *= PAGE_SIZE;
iotable_init(omap_sram_io_desc, ARRAY_SIZE(omap_sram_io_desc));
/*
* Looks like we need to preserve some bootloader code at the
* beginning of SRAM for jumping to flash for reboot to work...
*/
memset((void *)omap_sram_base + SRAM_BOOTLOADER_SZ, 0,
omap_sram_size - SRAM_BOOTLOADER_SZ);
}
static void (*_omap_sram_reprogram_clock)(u32 dpllctl, u32 ckctl) = NULL;
void omap_sram_reprogram_clock(u32 dpllctl, u32 ckctl)
{
if (_omap_sram_reprogram_clock == NULL)
panic("Cannot use SRAM");
return _omap_sram_reprogram_clock(dpllctl, ckctl);
}
void * omap_sram_push(void * start, unsigned long size)
{
if (size > (omap_sram_ceil - (omap_sram_base + SRAM_BOOTLOADER_SZ))) {
printk(KERN_ERR "Not enough space in SRAM\n");
return NULL;
}
omap_sram_ceil -= size;
omap_sram_ceil &= ~0x3;
memcpy((void *)omap_sram_ceil, start, size);
return (void *)omap_sram_ceil;
}
void __init omap_sram_init(void)
{
omap_detect_sram();
omap_map_sram();
_omap_sram_reprogram_clock = omap_sram_push(sram_reprogram_clock,
sram_reprogram_clock_sz);
}
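The two routines above show the intended pattern for all SRAM-resident code: the assembly function is copied into SRAM once by omap_sram_push(), and callers only ever go through the returned pointer. A minimal sketch of how a further SRAM routine would be wired up the same way (the *_do_foo names are purely illustrative and not part of this merge):

	/* declarations mirroring sram.h (illustrative names) */
	extern void sram_do_foo(u32 arg);		/* built into the image, copied out */
	extern unsigned long sram_do_foo_sz;		/* size label: .word . - sram_do_foo */

	static void (*_omap_sram_do_foo)(u32 arg);

	void omap_sram_do_foo(u32 arg)
	{
		if (_omap_sram_do_foo == NULL)
			panic("Cannot use SRAM");
		_omap_sram_do_foo(arg);		/* always call the SRAM copy */
	}

	void __init omap_sram_foo_init(void)
	{
		/* copy the routine into SRAM and remember where it landed */
		_omap_sram_do_foo = omap_sram_push(sram_do_foo, sram_do_foo_sz);
	}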
arch/arm/plat-omap/sram.h (new file)
@ -0,0 +1,21 @@
/*
* linux/arch/arm/plat-omap/sram.h
*
* Interface for functions that need to be run in internal SRAM
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#ifndef __ARCH_ARM_OMAP_SRAM_H
#define __ARCH_ARM_OMAP_SRAM_H
extern void * omap_sram_push(void * start, unsigned long size);
extern void omap_sram_reprogram_clock(u32 dpllctl, u32 ckctl);
/* Do not use these */
extern void sram_reprogram_clock(u32 ckctl, u32 dpllctl);
extern unsigned long sram_reprogram_clock_sz;
#endif
@ -41,6 +41,7 @@
/* These routines should handle the standard chip-specific modes /* These routines should handle the standard chip-specific modes
* for usb0/1/2 ports, covering basic mux and transceiver setup. * for usb0/1/2 ports, covering basic mux and transceiver setup.
* Call omap_usb_init() once, from INIT_MACHINE().
* *
* Some board-*.c files will need to set up additional mux options, * Some board-*.c files will need to set up additional mux options,
* like for suspend handling, vbus sensing, GPIOs, and the D+ pullup. * like for suspend handling, vbus sensing, GPIOs, and the D+ pullup.
@ -55,6 +55,10 @@ config GENERIC_BUST_SPINLOCK
config GENERIC_ISA_DMA config GENERIC_ISA_DMA
bool bool
config ARCH_MAY_HAVE_PC_FDC
bool
default y
source "init/Kconfig" source "init/Kconfig"
@ -17,10 +17,6 @@ ifeq ($(CONFIG_FRAME_POINTER),y)
CFLAGS +=-fno-omit-frame-pointer -mno-sched-prolog CFLAGS +=-fno-omit-frame-pointer -mno-sched-prolog
endif endif
ifeq ($(CONFIG_DEBUG_INFO),y)
CFLAGS +=-g
endif
CFLAGS_BOOT :=-mapcs-26 -mcpu=arm3 -msoft-float -Uarm CFLAGS_BOOT :=-mapcs-26 -mcpu=arm3 -msoft-float -Uarm
CFLAGS +=-mapcs-26 -mcpu=arm3 -msoft-float -Uarm CFLAGS +=-mapcs-26 -mcpu=arm3 -msoft-float -Uarm
AFLAGS +=-mapcs-26 -mcpu=arm3 -msoft-float AFLAGS +=-mapcs-26 -mcpu=arm3 -msoft-float
@ -114,7 +114,7 @@ static unsigned long next_rtc_update;
*/ */
static inline void do_set_rtc(void) static inline void do_set_rtc(void)
{ {
if (time_status & STA_UNSYNC || set_rtc == NULL) if (!ntp_synced() || set_rtc == NULL)
return; return;
//FIXME - timespec.tv_sec is a time_t not unsigned long //FIXME - timespec.tv_sec is a time_t not unsigned long
@ -189,10 +189,7 @@ int do_settimeofday(struct timespec *tv)
xtime.tv_sec = tv->tv_sec; xtime.tv_sec = tv->tv_sec;
xtime.tv_nsec = tv->tv_nsec; xtime.tv_nsec = tv->tv_nsec;
time_adjust = 0; /* stop active adjtime() */ ntp_clear();
time_status |= STA_UNSYNC;
time_maxerror = NTP_PHASE_LIMIT;
time_esterror = NTP_PHASE_LIMIT;
write_sequnlock_irq(&xtime_lock); write_sequnlock_irq(&xtime_lock);
clock_was_set(); clock_was_set();
return 0; return 0;
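This hunk and several later ones make the same conversion in each architecture's time code: the hand-rolled NTP state manipulation is replaced by shared helpers. A condensed before/after sketch of the pattern (locking and surrounding code unchanged; the RTC call is a stand-in for the arch-specific routine):

	/* before: do_settimeofday() reset the NTP state by hand */
	time_adjust = 0;		/* stop active adjtime() */
	time_status |= STA_UNSYNC;
	time_maxerror = NTP_PHASE_LIMIT;
	time_esterror = NTP_PHASE_LIMIT;

	/* after: one helper marks the clock unsynchronized */
	ntp_clear();

	/* and the periodic RTC update tests sync state through a helper too,
	 * replacing the old (time_status & STA_UNSYNC) == 0 check */
	if (ntp_synced() && xtime.tv_sec > last_rtc_update + 660)
		update_the_rtc();	/* stands in for the arch-specific call */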
@ -240,7 +240,7 @@ timer_interrupt(int irq, void *dev_id, struct pt_regs *regs)
* The division here is not time critical since it will run once in * The division here is not time critical since it will run once in
* 11 minutes * 11 minutes
*/ */
if ((time_status & STA_UNSYNC) == 0 && if (ntp_synced() &&
xtime.tv_sec > last_rtc_update + 660 && xtime.tv_sec > last_rtc_update + 660 &&
(xtime.tv_nsec / 1000) >= 500000 - (tick_nsec / 1000) / 2 && (xtime.tv_nsec / 1000) >= 500000 - (tick_nsec / 1000) / 2 &&
(xtime.tv_nsec / 1000) <= 500000 + (tick_nsec / 1000) / 2) { (xtime.tv_nsec / 1000) <= 500000 + (tick_nsec / 1000) / 2) {
@ -114,10 +114,7 @@ int do_settimeofday(struct timespec *tv)
set_normalized_timespec(&xtime, sec, nsec); set_normalized_timespec(&xtime, sec, nsec);
set_normalized_timespec(&wall_to_monotonic, wtm_sec, wtm_nsec); set_normalized_timespec(&wall_to_monotonic, wtm_sec, wtm_nsec);
time_adjust = 0; /* stop active adjtime() */ ntp_clear();
time_status |= STA_UNSYNC;
time_maxerror = NTP_PHASE_LIMIT;
time_esterror = NTP_PHASE_LIMIT;
write_sequnlock_irq(&xtime_lock); write_sequnlock_irq(&xtime_lock);
clock_was_set(); clock_was_set();
return 0; return 0;
@ -85,7 +85,7 @@ static irqreturn_t timer_interrupt(int irq, void *dummy, struct pt_regs * regs)
* CMOS clock accordingly every ~11 minutes. Set_rtc_mmss() has to be * CMOS clock accordingly every ~11 minutes. Set_rtc_mmss() has to be
* called as close as possible to 500 ms before the new second starts. * called as close as possible to 500 ms before the new second starts.
*/ */
if ((time_status & STA_UNSYNC) == 0 && if (ntp_synced() &&
xtime.tv_sec > last_rtc_update + 660 && xtime.tv_sec > last_rtc_update + 660 &&
(xtime.tv_nsec / 1000) >= 500000 - ((unsigned) TICK_SIZE) / 2 && (xtime.tv_nsec / 1000) >= 500000 - ((unsigned) TICK_SIZE) / 2 &&
(xtime.tv_nsec / 1000) <= 500000 + ((unsigned) TICK_SIZE) / 2 (xtime.tv_nsec / 1000) <= 500000 + ((unsigned) TICK_SIZE) / 2
@ -216,10 +216,7 @@ int do_settimeofday(struct timespec *tv)
set_normalized_timespec(&xtime, sec, nsec); set_normalized_timespec(&xtime, sec, nsec);
set_normalized_timespec(&wall_to_monotonic, wtm_sec, wtm_nsec); set_normalized_timespec(&wall_to_monotonic, wtm_sec, wtm_nsec);
time_adjust = 0; /* stop active adjtime() */ ntp_clear();
time_status |= STA_UNSYNC;
time_maxerror = NTP_PHASE_LIMIT;
time_esterror = NTP_PHASE_LIMIT;
write_sequnlock_irq(&xtime_lock); write_sequnlock_irq(&xtime_lock);
clock_was_set(); clock_was_set();
return 0; return 0;
@ -116,10 +116,7 @@ int do_settimeofday(struct timespec *tv)
xtime.tv_sec = tv->tv_sec; xtime.tv_sec = tv->tv_sec;
xtime.tv_nsec = tv->tv_nsec; xtime.tv_nsec = tv->tv_nsec;
time_adjust = 0; /* stop active adjtime() */ ntp_clear();
time_status |= STA_UNSYNC;
time_maxerror = NTP_PHASE_LIMIT;
time_esterror = NTP_PHASE_LIMIT;
write_sequnlock_irq(&xtime_lock); write_sequnlock_irq(&xtime_lock);
clock_was_set(); clock_was_set();
return 0; return 0;
@ -37,6 +37,10 @@ config GENERIC_IOMAP
bool bool
default y default y
config ARCH_MAY_HAVE_PC_FDC
bool
default y
source "init/Kconfig" source "init/Kconfig"
menu "Processor type and features" menu "Processor type and features"
@ -1318,6 +1322,11 @@ config GENERIC_IRQ_PROBE
bool bool
default y default y
config GENERIC_PENDING_IRQ
bool
depends on GENERIC_HARDIRQS && SMP
default y
config X86_SMP config X86_SMP
bool bool
depends on SMP && !X86_VOYAGER depends on SMP && !X86_VOYAGER
@ -82,7 +82,7 @@ start:
# This is the setup header, and it must start at %cs:2 (old 0x9020:2) # This is the setup header, and it must start at %cs:2 (old 0x9020:2)
.ascii "HdrS" # header signature .ascii "HdrS" # header signature
.word 0x0203 # header version number (>= 0x0105) .word 0x0204 # header version number (>= 0x0105)
# or else old loadlin-1.5 will fail) # or else old loadlin-1.5 will fail)
realmode_swtch: .word 0, 0 # default_switch, SETUPSEG realmode_swtch: .word 0, 0 # default_switch, SETUPSEG
start_sys_seg: .word SYSSEG start_sys_seg: .word SYSSEG
@ -177,7 +177,9 @@ int main(int argc, char ** argv)
die("Output: seek failed"); die("Output: seek failed");
buf[0] = (sys_size & 0xff); buf[0] = (sys_size & 0xff);
buf[1] = ((sys_size >> 8) & 0xff); buf[1] = ((sys_size >> 8) & 0xff);
if (write(1, buf, 2) != 2) buf[2] = ((sys_size >> 16) & 0xff);
buf[3] = ((sys_size >> 24) & 0xff);
if (write(1, buf, 4) != 4)
die("Write of image length failed"); die("Write of image length failed");
return 0; /* Everything is OK */ return 0; /* Everything is OK */
@ -6,32 +6,28 @@
#include <linux/bootmem.h> #include <linux/bootmem.h>
struct dmi_header {
u8 type;
u8 length;
u16 handle;
};
#undef DMI_DEBUG
#ifdef DMI_DEBUG
#define dmi_printk(x) printk x
#else
#define dmi_printk(x)
#endif
static char * __init dmi_string(struct dmi_header *dm, u8 s) static char * __init dmi_string(struct dmi_header *dm, u8 s)
{ {
u8 *bp = ((u8 *) dm) + dm->length; u8 *bp = ((u8 *) dm) + dm->length;
char *str = "";
if (!s) if (s) {
return "";
s--;
while (s > 0 && *bp) {
bp += strlen(bp) + 1;
s--; s--;
} while (s > 0 && *bp) {
return bp; bp += strlen(bp) + 1;
s--;
}
if (*bp != 0) {
str = alloc_bootmem(strlen(bp) + 1);
if (str != NULL)
strcpy(str, bp);
else
printk(KERN_ERR "dmi_string: out of memory.\n");
}
}
return str;
} }
/* /*
@ -84,7 +80,111 @@ static int __init dmi_checksum(u8 *buf)
return sum == 0; return sum == 0;
} }
static int __init dmi_iterate(void (*decode)(struct dmi_header *)) static char *dmi_ident[DMI_STRING_MAX];
static LIST_HEAD(dmi_devices);
/*
* Save a DMI string
*/
static void __init dmi_save_ident(struct dmi_header *dm, int slot, int string)
{
char *p, *d = (char*) dm;
if (dmi_ident[slot])
return;
p = dmi_string(dm, d[string]);
if (p == NULL)
return;
dmi_ident[slot] = p;
}
static void __init dmi_save_devices(struct dmi_header *dm)
{
int i, count = (dm->length - sizeof(struct dmi_header)) / 2;
struct dmi_device *dev;
for (i = 0; i < count; i++) {
char *d = ((char *) dm) + (i * 2);
/* Skip disabled device */
if ((*d & 0x80) == 0)
continue;
dev = alloc_bootmem(sizeof(*dev));
if (!dev) {
printk(KERN_ERR "dmi_save_devices: out of memory.\n");
break;
}
dev->type = *d++ & 0x7f;
dev->name = dmi_string(dm, *d);
dev->device_data = NULL;
list_add(&dev->list, &dmi_devices);
}
}
static void __init dmi_save_ipmi_device(struct dmi_header *dm)
{
struct dmi_device *dev;
void * data;
data = alloc_bootmem(dm->length);
if (data == NULL) {
printk(KERN_ERR "dmi_save_ipmi_device: out of memory.\n");
return;
}
memcpy(data, dm, dm->length);
dev = alloc_bootmem(sizeof(*dev));
if (!dev) {
printk(KERN_ERR "dmi_save_ipmi_device: out of memory.\n");
return;
}
dev->type = DMI_DEV_TYPE_IPMI;
dev->name = "IPMI controller";
dev->device_data = data;
list_add(&dev->list, &dmi_devices);
}
/*
* Process a DMI table entry. Right now all we care about are the BIOS
* and machine entries. For 2.5 we should pull the smbus controller info
* out of here.
*/
static void __init dmi_decode(struct dmi_header *dm)
{
switch(dm->type) {
case 0: /* BIOS Information */
dmi_save_ident(dm, DMI_BIOS_VENDOR, 4);
dmi_save_ident(dm, DMI_BIOS_VERSION, 5);
dmi_save_ident(dm, DMI_BIOS_DATE, 8);
break;
case 1: /* System Information */
dmi_save_ident(dm, DMI_SYS_VENDOR, 4);
dmi_save_ident(dm, DMI_PRODUCT_NAME, 5);
dmi_save_ident(dm, DMI_PRODUCT_VERSION, 6);
dmi_save_ident(dm, DMI_PRODUCT_SERIAL, 7);
break;
case 2: /* Base Board Information */
dmi_save_ident(dm, DMI_BOARD_VENDOR, 4);
dmi_save_ident(dm, DMI_BOARD_NAME, 5);
dmi_save_ident(dm, DMI_BOARD_VERSION, 6);
break;
case 10: /* Onboard Devices Information */
dmi_save_devices(dm);
break;
case 38: /* IPMI Device Information */
dmi_save_ipmi_device(dm);
}
}
void __init dmi_scan_machine(void)
{ {
u8 buf[15]; u8 buf[15];
char __iomem *p, *q; char __iomem *p, *q;
@ -96,7 +196,7 @@ static int __init dmi_iterate(void (*decode)(struct dmi_header *))
*/ */
p = ioremap(0xF0000, 0x10000); p = ioremap(0xF0000, 0x10000);
if (p == NULL) if (p == NULL)
return -1; goto out;
for (q = p; q < p + 0x10000; q += 16) { for (q = p; q < p + 0x10000; q += 16) {
memcpy_fromio(buf, q, 15); memcpy_fromio(buf, q, 15);
@ -116,82 +216,12 @@ static int __init dmi_iterate(void (*decode)(struct dmi_header *))
else else
printk(KERN_INFO "DMI present.\n"); printk(KERN_INFO "DMI present.\n");
dmi_printk((KERN_INFO "%d structures occupying %d bytes.\n", if (dmi_table(base,len, num, dmi_decode) == 0)
num, len)); return;
dmi_printk((KERN_INFO "DMI table at 0x%08X.\n", base));
if (dmi_table(base,len, num, decode) == 0)
return 0;
} }
} }
return -1;
}
static char *dmi_ident[DMI_STRING_MAX]; out: printk(KERN_INFO "DMI not present.\n");
/*
* Save a DMI string
*/
static void __init dmi_save_ident(struct dmi_header *dm, int slot, int string)
{
char *d = (char*)dm;
char *p = dmi_string(dm, d[string]);
if (p == NULL || *p == 0)
return;
if (dmi_ident[slot])
return;
dmi_ident[slot] = alloc_bootmem(strlen(p) + 1);
if(dmi_ident[slot])
strcpy(dmi_ident[slot], p);
else
printk(KERN_ERR "dmi_save_ident: out of memory.\n");
}
/*
* Process a DMI table entry. Right now all we care about are the BIOS
* and machine entries. For 2.5 we should pull the smbus controller info
* out of here.
*/
static void __init dmi_decode(struct dmi_header *dm)
{
u8 *data __attribute__((__unused__)) = (u8 *)dm;
switch(dm->type) {
case 0:
dmi_printk(("BIOS Vendor: %s\n", dmi_string(dm, data[4])));
dmi_save_ident(dm, DMI_BIOS_VENDOR, 4);
dmi_printk(("BIOS Version: %s\n", dmi_string(dm, data[5])));
dmi_save_ident(dm, DMI_BIOS_VERSION, 5);
dmi_printk(("BIOS Release: %s\n", dmi_string(dm, data[8])));
dmi_save_ident(dm, DMI_BIOS_DATE, 8);
break;
case 1:
dmi_printk(("System Vendor: %s\n", dmi_string(dm, data[4])));
dmi_save_ident(dm, DMI_SYS_VENDOR, 4);
dmi_printk(("Product Name: %s\n", dmi_string(dm, data[5])));
dmi_save_ident(dm, DMI_PRODUCT_NAME, 5);
dmi_printk(("Version: %s\n", dmi_string(dm, data[6])));
dmi_save_ident(dm, DMI_PRODUCT_VERSION, 6);
dmi_printk(("Serial Number: %s\n", dmi_string(dm, data[7])));
dmi_save_ident(dm, DMI_PRODUCT_SERIAL, 7);
break;
case 2:
dmi_printk(("Board Vendor: %s\n", dmi_string(dm, data[4])));
dmi_save_ident(dm, DMI_BOARD_VENDOR, 4);
dmi_printk(("Board Name: %s\n", dmi_string(dm, data[5])));
dmi_save_ident(dm, DMI_BOARD_NAME, 5);
dmi_printk(("Board Version: %s\n", dmi_string(dm, data[6])));
dmi_save_ident(dm, DMI_BOARD_VERSION, 6);
break;
}
}
void __init dmi_scan_machine(void)
{
if (dmi_iterate(dmi_decode))
printk(KERN_INFO "DMI not present.\n");
} }
@ -218,9 +248,9 @@ int dmi_check_system(struct dmi_system_id *list)
/* No match */ /* No match */
goto fail; goto fail;
} }
count++;
if (d->callback && d->callback(d)) if (d->callback && d->callback(d))
break; break;
count++;
fail: d++; fail: d++;
} }
@ -240,3 +270,32 @@ char *dmi_get_system_info(int field)
return dmi_ident[field]; return dmi_ident[field];
} }
EXPORT_SYMBOL(dmi_get_system_info); EXPORT_SYMBOL(dmi_get_system_info);
/**
* dmi_find_device - find onboard device by type/name
* @type: device type or %DMI_DEV_TYPE_ANY to match all device types
* @desc: device name string or %NULL to match all
* @from: previous device found in search, or %NULL for new search.
*
* Iterates through the list of known onboard devices. If a device is
 * found with a matching @type and @name, a pointer to its device
* structure is returned. Otherwise, %NULL is returned.
* A new search is initiated by passing %NULL to the @from argument.
* If @from is not %NULL, searches continue from next device.
*/
struct dmi_device * dmi_find_device(int type, const char *name,
struct dmi_device *from)
{
struct list_head *d, *head = from ? &from->list : &dmi_devices;
for(d = head->next; d != &dmi_devices; d = d->next) {
struct dmi_device *dev = list_entry(d, struct dmi_device, list);
if (((type == DMI_DEV_TYPE_ANY) || (dev->type == type)) &&
((name == NULL) || (strcmp(dev->name, name) == 0)))
return dev;
}
return NULL;
}
EXPORT_SYMBOL(dmi_find_device);
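A short usage sketch for the new iterator (a hypothetical caller; DMI_DEV_TYPE_IPMI and the dmi_device fields are the ones filled in by dmi_save_ipmi_device() above):

	struct dmi_device *dev = NULL;

	/* walk every IPMI controller described in the DMI/SMBIOS table */
	while ((dev = dmi_find_device(DMI_DEV_TYPE_IPMI, NULL, dev)) != NULL)
		printk(KERN_INFO "DMI IPMI device: %s, record copy at %p\n",
		       dev->name, dev->device_data);

Passing the previous return value back in as @from is what continues the scan; starting with NULL begins a new one.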
@ -507,7 +507,7 @@ label: \
pushl $__KERNEL_CS; \ pushl $__KERNEL_CS; \
pushl $sysenter_past_esp pushl $sysenter_past_esp
ENTRY(debug) KPROBE_ENTRY(debug)
cmpl $sysenter_entry,(%esp) cmpl $sysenter_entry,(%esp)
jne debug_stack_correct jne debug_stack_correct
FIX_STACK(12, debug_stack_correct, debug_esp_fix_insn) FIX_STACK(12, debug_stack_correct, debug_esp_fix_insn)
@ -518,7 +518,7 @@ debug_stack_correct:
movl %esp,%eax # pt_regs pointer movl %esp,%eax # pt_regs pointer
call do_debug call do_debug
jmp ret_from_exception jmp ret_from_exception
.previous .text
/* /*
* NMI is doubly nasty. It can happen _while_ we're handling * NMI is doubly nasty. It can happen _while_ we're handling
* a debug fault, and the debug fault hasn't yet been able to * a debug fault, and the debug fault hasn't yet been able to
@ -591,13 +591,14 @@ nmi_16bit_stack:
.long 1b,iret_exc .long 1b,iret_exc
.previous .previous
ENTRY(int3) KPROBE_ENTRY(int3)
pushl $-1 # mark this as an int pushl $-1 # mark this as an int
SAVE_ALL SAVE_ALL
xorl %edx,%edx # zero error code xorl %edx,%edx # zero error code
movl %esp,%eax # pt_regs pointer movl %esp,%eax # pt_regs pointer
call do_int3 call do_int3
jmp ret_from_exception jmp ret_from_exception
.previous .text
ENTRY(overflow) ENTRY(overflow)
pushl $0 pushl $0
@ -631,17 +632,19 @@ ENTRY(stack_segment)
pushl $do_stack_segment pushl $do_stack_segment
jmp error_code jmp error_code
ENTRY(general_protection) KPROBE_ENTRY(general_protection)
pushl $do_general_protection pushl $do_general_protection
jmp error_code jmp error_code
.previous .text
ENTRY(alignment_check) ENTRY(alignment_check)
pushl $do_alignment_check pushl $do_alignment_check
jmp error_code jmp error_code
ENTRY(page_fault) KPROBE_ENTRY(page_fault)
pushl $do_page_fault pushl $do_page_fault
jmp error_code jmp error_code
.previous .text
#ifdef CONFIG_X86_MCE #ifdef CONFIG_X86_MCE
ENTRY(machine_check) ENTRY(machine_check)
@ -33,6 +33,7 @@
#include <linux/acpi.h> #include <linux/acpi.h>
#include <linux/module.h> #include <linux/module.h>
#include <linux/sysdev.h> #include <linux/sysdev.h>
#include <asm/io.h> #include <asm/io.h>
#include <asm/smp.h> #include <asm/smp.h>
#include <asm/desc.h> #include <asm/desc.h>
@ -77,7 +78,7 @@ static struct irq_pin_list {
int apic, pin, next; int apic, pin, next;
} irq_2_pin[PIN_MAP_SIZE]; } irq_2_pin[PIN_MAP_SIZE];
int vector_irq[NR_VECTORS] = { [0 ... NR_VECTORS - 1] = -1}; int vector_irq[NR_VECTORS] __read_mostly = { [0 ... NR_VECTORS - 1] = -1};
#ifdef CONFIG_PCI_MSI #ifdef CONFIG_PCI_MSI
#define vector_to_irq(vector) \ #define vector_to_irq(vector) \
(platform_legacy_irq(vector) ? vector : vector_irq[vector]) (platform_legacy_irq(vector) ? vector : vector_irq[vector])
@ -222,13 +223,21 @@ static void clear_IO_APIC (void)
clear_IO_APIC_pin(apic, pin); clear_IO_APIC_pin(apic, pin);
} }
#ifdef CONFIG_SMP
static void set_ioapic_affinity_irq(unsigned int irq, cpumask_t cpumask) static void set_ioapic_affinity_irq(unsigned int irq, cpumask_t cpumask)
{ {
unsigned long flags; unsigned long flags;
int pin; int pin;
struct irq_pin_list *entry = irq_2_pin + irq; struct irq_pin_list *entry = irq_2_pin + irq;
unsigned int apicid_value; unsigned int apicid_value;
cpumask_t tmp;
cpus_and(tmp, cpumask, cpu_online_map);
if (cpus_empty(tmp))
tmp = TARGET_CPUS;
cpus_and(cpumask, tmp, CPU_MASK_ALL);
apicid_value = cpu_mask_to_apicid(cpumask); apicid_value = cpu_mask_to_apicid(cpumask);
/* Prepare to do the io_apic_write */ /* Prepare to do the io_apic_write */
apicid_value = apicid_value << 24; apicid_value = apicid_value << 24;
@ -242,6 +251,7 @@ static void set_ioapic_affinity_irq(unsigned int irq, cpumask_t cpumask)
break; break;
entry = irq_2_pin + entry->next; entry = irq_2_pin + entry->next;
} }
set_irq_info(irq, cpumask);
spin_unlock_irqrestore(&ioapic_lock, flags); spin_unlock_irqrestore(&ioapic_lock, flags);
} }
@ -259,7 +269,6 @@ static void set_ioapic_affinity_irq(unsigned int irq, cpumask_t cpumask)
# define Dprintk(x...) # define Dprintk(x...)
# endif # endif
cpumask_t __cacheline_aligned pending_irq_balance_cpumask[NR_IRQS];
#define IRQBALANCE_CHECK_ARCH -999 #define IRQBALANCE_CHECK_ARCH -999
static int irqbalance_disabled = IRQBALANCE_CHECK_ARCH; static int irqbalance_disabled = IRQBALANCE_CHECK_ARCH;
@ -328,12 +337,7 @@ static inline void balance_irq(int cpu, int irq)
cpus_and(allowed_mask, cpu_online_map, irq_affinity[irq]); cpus_and(allowed_mask, cpu_online_map, irq_affinity[irq]);
new_cpu = move(cpu, allowed_mask, now, 1); new_cpu = move(cpu, allowed_mask, now, 1);
if (cpu != new_cpu) { if (cpu != new_cpu) {
irq_desc_t *desc = irq_desc + irq; set_pending_irq(irq, cpumask_of_cpu(new_cpu));
unsigned long flags;
spin_lock_irqsave(&desc->lock, flags);
pending_irq_balance_cpumask[irq] = cpumask_of_cpu(new_cpu);
spin_unlock_irqrestore(&desc->lock, flags);
} }
} }
@ -528,16 +532,12 @@ tryanotherirq:
cpus_and(tmp, target_cpu_mask, allowed_mask); cpus_and(tmp, target_cpu_mask, allowed_mask);
if (!cpus_empty(tmp)) { if (!cpus_empty(tmp)) {
irq_desc_t *desc = irq_desc + selected_irq;
unsigned long flags;
Dprintk("irq = %d moved to cpu = %d\n", Dprintk("irq = %d moved to cpu = %d\n",
selected_irq, min_loaded); selected_irq, min_loaded);
/* mark for change destination */ /* mark for change destination */
spin_lock_irqsave(&desc->lock, flags); set_pending_irq(selected_irq, cpumask_of_cpu(min_loaded));
pending_irq_balance_cpumask[selected_irq] =
cpumask_of_cpu(min_loaded);
spin_unlock_irqrestore(&desc->lock, flags);
/* Since we made a change, come back sooner to /* Since we made a change, come back sooner to
* check for more variation. * check for more variation.
*/ */
@ -568,7 +568,8 @@ static int balanced_irq(void *unused)
/* push everything to CPU 0 to give us a starting point. */ /* push everything to CPU 0 to give us a starting point. */
for (i = 0 ; i < NR_IRQS ; i++) { for (i = 0 ; i < NR_IRQS ; i++) {
pending_irq_balance_cpumask[i] = cpumask_of_cpu(0); pending_irq_cpumask[i] = cpumask_of_cpu(0);
set_pending_irq(i, cpumask_of_cpu(0));
} }
for ( ; ; ) { for ( ; ; ) {
@ -647,20 +648,9 @@ int __init irqbalance_disable(char *str)
__setup("noirqbalance", irqbalance_disable); __setup("noirqbalance", irqbalance_disable);
static inline void move_irq(int irq)
{
/* note - we hold the desc->lock */
if (unlikely(!cpus_empty(pending_irq_balance_cpumask[irq]))) {
set_ioapic_affinity_irq(irq, pending_irq_balance_cpumask[irq]);
cpus_clear(pending_irq_balance_cpumask[irq]);
}
}
late_initcall(balanced_irq_init); late_initcall(balanced_irq_init);
#else /* !CONFIG_IRQBALANCE */
static inline void move_irq(int irq) { }
#endif /* CONFIG_IRQBALANCE */ #endif /* CONFIG_IRQBALANCE */
#endif /* CONFIG_SMP */
#ifndef CONFIG_SMP #ifndef CONFIG_SMP
void fastcall send_IPI_self(int vector) void fastcall send_IPI_self(int vector)
@ -820,6 +810,7 @@ EXPORT_SYMBOL(IO_APIC_get_PCI_irq_vector);
* we need to reprogram the ioredtbls to cater for the cpus which have come online * we need to reprogram the ioredtbls to cater for the cpus which have come online
* so mask in all cases should simply be TARGET_CPUS * so mask in all cases should simply be TARGET_CPUS
*/ */
#ifdef CONFIG_SMP
void __init setup_ioapic_dest(void) void __init setup_ioapic_dest(void)
{ {
int pin, ioapic, irq, irq_entry; int pin, ioapic, irq, irq_entry;
@ -838,6 +829,7 @@ void __init setup_ioapic_dest(void)
} }
} }
#endif
/* /*
* EISA Edge/Level control register, ELCR * EISA Edge/Level control register, ELCR
@ -1127,7 +1119,7 @@ static inline int IO_APIC_irq_trigger(int irq)
} }
/* irq_vectors is indexed by the sum of all RTEs in all I/O APICs. */ /* irq_vectors is indexed by the sum of all RTEs in all I/O APICs. */
u8 irq_vector[NR_IRQ_VECTORS] = { FIRST_DEVICE_VECTOR , 0 }; u8 irq_vector[NR_IRQ_VECTORS] __read_mostly = { FIRST_DEVICE_VECTOR , 0 };
int assign_irq_vector(int irq) int assign_irq_vector(int irq)
{ {
@ -1249,6 +1241,7 @@ static void __init setup_IO_APIC_irqs(void)
spin_lock_irqsave(&ioapic_lock, flags); spin_lock_irqsave(&ioapic_lock, flags);
io_apic_write(apic, 0x11+2*pin, *(((int *)&entry)+1)); io_apic_write(apic, 0x11+2*pin, *(((int *)&entry)+1));
io_apic_write(apic, 0x10+2*pin, *(((int *)&entry)+0)); io_apic_write(apic, 0x10+2*pin, *(((int *)&entry)+0));
set_native_irq_info(irq, TARGET_CPUS);
spin_unlock_irqrestore(&ioapic_lock, flags); spin_unlock_irqrestore(&ioapic_lock, flags);
} }
} }
@ -1944,6 +1937,7 @@ static void ack_edge_ioapic_vector(unsigned int vector)
{ {
int irq = vector_to_irq(vector); int irq = vector_to_irq(vector);
move_irq(vector);
ack_edge_ioapic_irq(irq); ack_edge_ioapic_irq(irq);
} }
@ -1958,6 +1952,7 @@ static void end_level_ioapic_vector (unsigned int vector)
{ {
int irq = vector_to_irq(vector); int irq = vector_to_irq(vector);
move_irq(vector);
end_level_ioapic_irq(irq); end_level_ioapic_irq(irq);
} }
@ -1975,14 +1970,17 @@ static void unmask_IO_APIC_vector (unsigned int vector)
unmask_IO_APIC_irq(irq); unmask_IO_APIC_irq(irq);
} }
#ifdef CONFIG_SMP
static void set_ioapic_affinity_vector (unsigned int vector, static void set_ioapic_affinity_vector (unsigned int vector,
cpumask_t cpu_mask) cpumask_t cpu_mask)
{ {
int irq = vector_to_irq(vector); int irq = vector_to_irq(vector);
set_native_irq_info(vector, cpu_mask);
set_ioapic_affinity_irq(irq, cpu_mask); set_ioapic_affinity_irq(irq, cpu_mask);
} }
#endif #endif
#endif
/* /*
* Level and edge triggered IO-APIC interrupts need different handling, * Level and edge triggered IO-APIC interrupts need different handling,
@ -1992,7 +1990,7 @@ static void set_ioapic_affinity_vector (unsigned int vector,
* edge-triggered handler, without risking IRQ storms and other ugly * edge-triggered handler, without risking IRQ storms and other ugly
* races. * races.
*/ */
static struct hw_interrupt_type ioapic_edge_type = { static struct hw_interrupt_type ioapic_edge_type __read_mostly = {
.typename = "IO-APIC-edge", .typename = "IO-APIC-edge",
.startup = startup_edge_ioapic, .startup = startup_edge_ioapic,
.shutdown = shutdown_edge_ioapic, .shutdown = shutdown_edge_ioapic,
@ -2000,10 +1998,12 @@ static struct hw_interrupt_type ioapic_edge_type = {
.disable = disable_edge_ioapic, .disable = disable_edge_ioapic,
.ack = ack_edge_ioapic, .ack = ack_edge_ioapic,
.end = end_edge_ioapic, .end = end_edge_ioapic,
#ifdef CONFIG_SMP
.set_affinity = set_ioapic_affinity, .set_affinity = set_ioapic_affinity,
#endif
}; };
static struct hw_interrupt_type ioapic_level_type = { static struct hw_interrupt_type ioapic_level_type __read_mostly = {
.typename = "IO-APIC-level", .typename = "IO-APIC-level",
.startup = startup_level_ioapic, .startup = startup_level_ioapic,
.shutdown = shutdown_level_ioapic, .shutdown = shutdown_level_ioapic,
@ -2011,7 +2011,9 @@ static struct hw_interrupt_type ioapic_level_type = {
.disable = disable_level_ioapic, .disable = disable_level_ioapic,
.ack = mask_and_ack_level_ioapic, .ack = mask_and_ack_level_ioapic,
.end = end_level_ioapic, .end = end_level_ioapic,
#ifdef CONFIG_SMP
.set_affinity = set_ioapic_affinity, .set_affinity = set_ioapic_affinity,
#endif
}; };
static inline void init_IO_APIC_traps(void) static inline void init_IO_APIC_traps(void)
@ -2074,7 +2076,7 @@ static void ack_lapic_irq (unsigned int irq)
static void end_lapic_irq (unsigned int i) { /* nothing */ } static void end_lapic_irq (unsigned int i) { /* nothing */ }
static struct hw_interrupt_type lapic_irq_type = { static struct hw_interrupt_type lapic_irq_type __read_mostly = {
.typename = "local-APIC-edge", .typename = "local-APIC-edge",
.startup = NULL, /* startup_irq() not used for IRQ0 */ .startup = NULL, /* startup_irq() not used for IRQ0 */
.shutdown = NULL, /* shutdown_irq() not used for IRQ0 */ .shutdown = NULL, /* shutdown_irq() not used for IRQ0 */
@ -2569,6 +2571,7 @@ int io_apic_set_pci_routing (int ioapic, int pin, int irq, int edge_level, int a
spin_lock_irqsave(&ioapic_lock, flags); spin_lock_irqsave(&ioapic_lock, flags);
io_apic_write(ioapic, 0x11+2*pin, *(((int *)&entry)+1)); io_apic_write(ioapic, 0x11+2*pin, *(((int *)&entry)+1));
io_apic_write(ioapic, 0x10+2*pin, *(((int *)&entry)+0)); io_apic_write(ioapic, 0x10+2*pin, *(((int *)&entry)+0));
set_native_irq_info(use_pci_vector() ? entry.vector : irq, TARGET_CPUS);
spin_unlock_irqrestore(&ioapic_lock, flags); spin_unlock_irqrestore(&ioapic_lock, flags);
return 0; return 0;
@ -62,32 +62,32 @@ static inline int is_IF_modifier(kprobe_opcode_t opcode)
return 0; return 0;
} }
int arch_prepare_kprobe(struct kprobe *p) int __kprobes arch_prepare_kprobe(struct kprobe *p)
{ {
return 0; return 0;
} }
void arch_copy_kprobe(struct kprobe *p) void __kprobes arch_copy_kprobe(struct kprobe *p)
{ {
memcpy(p->ainsn.insn, p->addr, MAX_INSN_SIZE * sizeof(kprobe_opcode_t)); memcpy(p->ainsn.insn, p->addr, MAX_INSN_SIZE * sizeof(kprobe_opcode_t));
p->opcode = *p->addr; p->opcode = *p->addr;
} }
void arch_arm_kprobe(struct kprobe *p) void __kprobes arch_arm_kprobe(struct kprobe *p)
{ {
*p->addr = BREAKPOINT_INSTRUCTION; *p->addr = BREAKPOINT_INSTRUCTION;
flush_icache_range((unsigned long) p->addr, flush_icache_range((unsigned long) p->addr,
(unsigned long) p->addr + sizeof(kprobe_opcode_t)); (unsigned long) p->addr + sizeof(kprobe_opcode_t));
} }
void arch_disarm_kprobe(struct kprobe *p) void __kprobes arch_disarm_kprobe(struct kprobe *p)
{ {
*p->addr = p->opcode; *p->addr = p->opcode;
flush_icache_range((unsigned long) p->addr, flush_icache_range((unsigned long) p->addr,
(unsigned long) p->addr + sizeof(kprobe_opcode_t)); (unsigned long) p->addr + sizeof(kprobe_opcode_t));
} }
void arch_remove_kprobe(struct kprobe *p) void __kprobes arch_remove_kprobe(struct kprobe *p)
{ {
} }
@ -127,7 +127,8 @@ static inline void prepare_singlestep(struct kprobe *p, struct pt_regs *regs)
regs->eip = (unsigned long)&p->ainsn.insn; regs->eip = (unsigned long)&p->ainsn.insn;
} }
void arch_prepare_kretprobe(struct kretprobe *rp, struct pt_regs *regs) void __kprobes arch_prepare_kretprobe(struct kretprobe *rp,
struct pt_regs *regs)
{ {
unsigned long *sara = (unsigned long *)&regs->esp; unsigned long *sara = (unsigned long *)&regs->esp;
struct kretprobe_instance *ri; struct kretprobe_instance *ri;
@ -150,7 +151,7 @@ void arch_prepare_kretprobe(struct kretprobe *rp, struct pt_regs *regs)
* Interrupts are disabled on entry as trap3 is an interrupt gate and they * Interrupts are disabled on entry as trap3 is an interrupt gate and they
* remain disabled thorough out this function. * remain disabled thorough out this function.
*/ */
static int kprobe_handler(struct pt_regs *regs) static int __kprobes kprobe_handler(struct pt_regs *regs)
{ {
struct kprobe *p; struct kprobe *p;
int ret = 0; int ret = 0;
@ -176,7 +177,8 @@ static int kprobe_handler(struct pt_regs *regs)
Disarm the probe we just hit, and ignore it. */ Disarm the probe we just hit, and ignore it. */
p = get_kprobe(addr); p = get_kprobe(addr);
if (p) { if (p) {
if (kprobe_status == KPROBE_HIT_SS) { if (kprobe_status == KPROBE_HIT_SS &&
*p->ainsn.insn == BREAKPOINT_INSTRUCTION) {
regs->eflags &= ~TF_MASK; regs->eflags &= ~TF_MASK;
regs->eflags |= kprobe_saved_eflags; regs->eflags |= kprobe_saved_eflags;
unlock_kprobes(); unlock_kprobes();
@ -220,7 +222,10 @@ static int kprobe_handler(struct pt_regs *regs)
* either a probepoint or a debugger breakpoint * either a probepoint or a debugger breakpoint
* at this address. In either case, no further * at this address. In either case, no further
* handling of this interrupt is appropriate. * handling of this interrupt is appropriate.
* Back up over the (now missing) int3 and run
* the original instruction.
*/ */
regs->eip -= sizeof(kprobe_opcode_t);
ret = 1; ret = 1;
} }
/* Not one of ours: let kernel handle it */ /* Not one of ours: let kernel handle it */
@ -259,7 +264,7 @@ no_kprobe:
/* /*
* Called when we hit the probe point at kretprobe_trampoline * Called when we hit the probe point at kretprobe_trampoline
*/ */
int trampoline_probe_handler(struct kprobe *p, struct pt_regs *regs) int __kprobes trampoline_probe_handler(struct kprobe *p, struct pt_regs *regs)
{ {
struct kretprobe_instance *ri = NULL; struct kretprobe_instance *ri = NULL;
struct hlist_head *head; struct hlist_head *head;
@ -338,7 +343,7 @@ int trampoline_probe_handler(struct kprobe *p, struct pt_regs *regs)
* that is atop the stack is the address following the copied instruction. * that is atop the stack is the address following the copied instruction.
* We need to make it the address following the original instruction. * We need to make it the address following the original instruction.
*/ */
static void resume_execution(struct kprobe *p, struct pt_regs *regs) static void __kprobes resume_execution(struct kprobe *p, struct pt_regs *regs)
{ {
unsigned long *tos = (unsigned long *)&regs->esp; unsigned long *tos = (unsigned long *)&regs->esp;
unsigned long next_eip = 0; unsigned long next_eip = 0;
@ -444,8 +449,8 @@ static inline int kprobe_fault_handler(struct pt_regs *regs, int trapnr)
/* /*
 * Wrapper routine for handling exceptions. * Wrapper routine for handling exceptions.
*/ */
int kprobe_exceptions_notify(struct notifier_block *self, unsigned long val, int __kprobes kprobe_exceptions_notify(struct notifier_block *self,
void *data) unsigned long val, void *data)
{ {
struct die_args *args = (struct die_args *)data; struct die_args *args = (struct die_args *)data;
switch (val) { switch (val) {
@ -473,7 +478,7 @@ int kprobe_exceptions_notify(struct notifier_block *self, unsigned long val,
return NOTIFY_DONE; return NOTIFY_DONE;
} }
int setjmp_pre_handler(struct kprobe *p, struct pt_regs *regs) int __kprobes setjmp_pre_handler(struct kprobe *p, struct pt_regs *regs)
{ {
struct jprobe *jp = container_of(p, struct jprobe, kp); struct jprobe *jp = container_of(p, struct jprobe, kp);
unsigned long addr; unsigned long addr;
@ -495,7 +500,7 @@ int setjmp_pre_handler(struct kprobe *p, struct pt_regs *regs)
return 1; return 1;
} }
void jprobe_return(void) void __kprobes jprobe_return(void)
{ {
preempt_enable_no_resched(); preempt_enable_no_resched();
asm volatile (" xchgl %%ebx,%%esp \n" asm volatile (" xchgl %%ebx,%%esp \n"
@ -506,7 +511,7 @@ void jprobe_return(void)
(jprobe_saved_esp):"memory"); (jprobe_saved_esp):"memory");
} }
int longjmp_break_handler(struct kprobe *p, struct pt_regs *regs) int __kprobes longjmp_break_handler(struct kprobe *p, struct pt_regs *regs)
{ {
u8 *addr = (u8 *) (regs->eip - 1); u8 *addr = (u8 *) (regs->eip - 1);
unsigned long stack_addr = (unsigned long)jprobe_saved_esp; unsigned long stack_addr = (unsigned long)jprobe_saved_esp;
@ -478,6 +478,11 @@ void touch_nmi_watchdog (void)
*/ */
for (i = 0; i < NR_CPUS; i++) for (i = 0; i < NR_CPUS; i++)
alert_counter[i] = 0; alert_counter[i] = 0;
/*
* Tickle the softlockup detector too:
*/
touch_softlockup_watchdog();
} }
extern void die_nmi(struct pt_regs *, const char *msg); extern void die_nmi(struct pt_regs *, const char *msg);
@ -82,7 +82,7 @@ EXPORT_SYMBOL(efi_enabled);
/* cpu data as detected by the assembly code in head.S */ /* cpu data as detected by the assembly code in head.S */
struct cpuinfo_x86 new_cpu_data __initdata = { 0, 0, 0, 0, -1, 1, 0, 0, -1 }; struct cpuinfo_x86 new_cpu_data __initdata = { 0, 0, 0, 0, -1, 1, 0, 0, -1 };
/* common cpu data for all cpus */ /* common cpu data for all cpus */
struct cpuinfo_x86 boot_cpu_data = { 0, 0, 0, 0, -1, 1, 0, 0, -1 }; struct cpuinfo_x86 boot_cpu_data __read_mostly = { 0, 0, 0, 0, -1, 1, 0, 0, -1 };
EXPORT_SYMBOL(boot_cpu_data); EXPORT_SYMBOL(boot_cpu_data);
unsigned long mmu_cr4_features; unsigned long mmu_cr4_features;
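Several hunks in this merge (boot_cpu_data here, the I/O-APIC tables and NUMA node maps elsewhere) only add the __read_mostly annotation. A sketch of the intent with a hypothetical variable: the marker collects the object into a separate read-mostly data section so hot-path read-only data does not share cache lines with frequently written data on SMP.

	/* read on hot paths, written essentially only at boot */
	static int example_flag __read_mostly = 1;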
@ -194,10 +194,7 @@ int do_settimeofday(struct timespec *tv)
set_normalized_timespec(&xtime, sec, nsec); set_normalized_timespec(&xtime, sec, nsec);
set_normalized_timespec(&wall_to_monotonic, wtm_sec, wtm_nsec); set_normalized_timespec(&wall_to_monotonic, wtm_sec, wtm_nsec);
time_adjust = 0; /* stop active adjtime() */ ntp_clear();
time_status |= STA_UNSYNC;
time_maxerror = NTP_PHASE_LIMIT;
time_esterror = NTP_PHASE_LIMIT;
write_sequnlock_irq(&xtime_lock); write_sequnlock_irq(&xtime_lock);
clock_was_set(); clock_was_set();
return 0; return 0;
@ -252,8 +249,7 @@ EXPORT_SYMBOL(profile_pc);
* timer_interrupt() needs to keep up the real-time clock, * timer_interrupt() needs to keep up the real-time clock,
* as well as call the "do_timer()" routine every clocktick * as well as call the "do_timer()" routine every clocktick
*/ */
static inline void do_timer_interrupt(int irq, void *dev_id, static inline void do_timer_interrupt(int irq, struct pt_regs *regs)
struct pt_regs *regs)
{ {
#ifdef CONFIG_X86_IO_APIC #ifdef CONFIG_X86_IO_APIC
if (timer_ack) { if (timer_ack) {
@ -307,7 +303,7 @@ irqreturn_t timer_interrupt(int irq, void *dev_id, struct pt_regs *regs)
cur_timer->mark_offset(); cur_timer->mark_offset();
do_timer_interrupt(irq, NULL, regs); do_timer_interrupt(irq, regs);
write_sequnlock(&xtime_lock); write_sequnlock(&xtime_lock);
return IRQ_HANDLED; return IRQ_HANDLED;
@ -348,7 +344,7 @@ static void sync_cmos_clock(unsigned long dummy)
* This code is run on a timer. If the clock is set, that timer * This code is run on a timer. If the clock is set, that timer
* may not expire at the correct time. Thus, we adjust... * may not expire at the correct time. Thus, we adjust...
*/ */
if ((time_status & STA_UNSYNC) != 0) if (!ntp_synced())
/* /*
* Not synced, exit, do not restart a timer (if one is * Not synced, exit, do not restart a timer (if one is
* running, let it run out). * running, let it run out).
@ -422,6 +418,7 @@ static int timer_resume(struct sys_device *dev)
last_timer->resume(); last_timer->resume();
cur_timer = last_timer; cur_timer = last_timer;
last_timer = NULL; last_timer = NULL;
touch_softlockup_watchdog();
return 0; return 0;
} }
@ -18,8 +18,8 @@
#include "mach_timer.h" #include "mach_timer.h"
#include <asm/hpet.h> #include <asm/hpet.h>
static unsigned long __read_mostly hpet_usec_quotient; /* convert hpet clks to usec */ static unsigned long hpet_usec_quotient __read_mostly; /* convert hpet clks to usec */
static unsigned long tsc_hpet_quotient; /* convert tsc to hpet clks */ static unsigned long tsc_hpet_quotient __read_mostly; /* convert tsc to hpet clks */
static unsigned long hpet_last; /* hpet counter value at last tick*/ static unsigned long hpet_last; /* hpet counter value at last tick*/
static unsigned long last_tsc_low; /* lsb 32 bits of Time Stamp Counter */ static unsigned long last_tsc_low; /* lsb 32 bits of Time Stamp Counter */
static unsigned long last_tsc_high; /* msb 32 bits of Time Stamp Counter */ static unsigned long last_tsc_high; /* msb 32 bits of Time Stamp Counter */
@ -363,8 +363,9 @@ static inline void die_if_kernel(const char * str, struct pt_regs * regs, long e
die(str, regs, err); die(str, regs, err);
} }
static void do_trap(int trapnr, int signr, char *str, int vm86, static void __kprobes do_trap(int trapnr, int signr, char *str, int vm86,
struct pt_regs * regs, long error_code, siginfo_t *info) struct pt_regs * regs, long error_code,
siginfo_t *info)
{ {
struct task_struct *tsk = current; struct task_struct *tsk = current;
tsk->thread.error_code = error_code; tsk->thread.error_code = error_code;
@ -460,7 +461,8 @@ DO_ERROR(12, SIGBUS, "stack segment", stack_segment)
DO_ERROR_INFO(17, SIGBUS, "alignment check", alignment_check, BUS_ADRALN, 0) DO_ERROR_INFO(17, SIGBUS, "alignment check", alignment_check, BUS_ADRALN, 0)
DO_ERROR_INFO(32, SIGSEGV, "iret exception", iret_error, ILL_BADSTK, 0) DO_ERROR_INFO(32, SIGSEGV, "iret exception", iret_error, ILL_BADSTK, 0)
fastcall void do_general_protection(struct pt_regs * regs, long error_code) fastcall void __kprobes do_general_protection(struct pt_regs * regs,
long error_code)
{ {
int cpu = get_cpu(); int cpu = get_cpu();
struct tss_struct *tss = &per_cpu(init_tss, cpu); struct tss_struct *tss = &per_cpu(init_tss, cpu);
@ -657,7 +659,7 @@ fastcall void do_nmi(struct pt_regs * regs, long error_code)
++nmi_count(cpu); ++nmi_count(cpu);
if (!nmi_callback(regs, cpu)) if (!rcu_dereference(nmi_callback)(regs, cpu))
default_do_nmi(regs); default_do_nmi(regs);
nmi_exit(); nmi_exit();
@ -665,7 +667,7 @@ fastcall void do_nmi(struct pt_regs * regs, long error_code)
void set_nmi_callback(nmi_callback_t callback) void set_nmi_callback(nmi_callback_t callback)
{ {
nmi_callback = callback; rcu_assign_pointer(nmi_callback, callback);
} }
EXPORT_SYMBOL_GPL(set_nmi_callback); EXPORT_SYMBOL_GPL(set_nmi_callback);
@ -676,7 +678,7 @@ void unset_nmi_callback(void)
EXPORT_SYMBOL_GPL(unset_nmi_callback); EXPORT_SYMBOL_GPL(unset_nmi_callback);
#ifdef CONFIG_KPROBES #ifdef CONFIG_KPROBES
fastcall void do_int3(struct pt_regs *regs, long error_code) fastcall void __kprobes do_int3(struct pt_regs *regs, long error_code)
{ {
if (notify_die(DIE_INT3, "int3", regs, error_code, 3, SIGTRAP) if (notify_die(DIE_INT3, "int3", regs, error_code, 3, SIGTRAP)
== NOTIFY_STOP) == NOTIFY_STOP)
@ -710,7 +712,7 @@ fastcall void do_int3(struct pt_regs *regs, long error_code)
* find every occurrence of the TF bit that could be saved away even * find every occurrence of the TF bit that could be saved away even
* by user code) * by user code)
*/ */
fastcall void do_debug(struct pt_regs * regs, long error_code) fastcall void __kprobes do_debug(struct pt_regs * regs, long error_code)
{ {
unsigned int condition; unsigned int condition;
struct task_struct *tsk = current; struct task_struct *tsk = current;
@ -22,6 +22,7 @@ SECTIONS
*(.text) *(.text)
SCHED_TEXT SCHED_TEXT
LOCK_TEXT LOCK_TEXT
KPROBES_TEXT
*(.fixup) *(.fixup)
*(.gnu.warning) *(.gnu.warning)
} = 0x9090 } = 0x9090
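KPROBES_TEXT pairs with the __kprobes and KPROBE_ENTRY annotations earlier in this merge: functions the kprobes machinery itself depends on (the int3, debug, and page-fault paths) are gathered into one text region that the core can then refuse to probe. A hedged sketch of that check, assuming the start/end symbols emitted by the generic linker-script helper:

	extern char __kprobes_text_start[], __kprobes_text_end[];

	/* true if addr falls inside the .kprobes.text region collected above */
	static int in_kprobes_text(unsigned long addr)
	{
		return addr >= (unsigned long)__kprobes_text_start &&
		       addr <  (unsigned long)__kprobes_text_end;
	}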
@ -76,7 +76,7 @@ static int __init topology_init(void)
for_each_online_node(i) for_each_online_node(i)
arch_register_node(i); arch_register_node(i);
for_each_cpu(i) for_each_present_cpu(i)
arch_register_cpu(i); arch_register_cpu(i);
return 0; return 0;
} }
@ -87,7 +87,7 @@ static int __init topology_init(void)
{ {
int i; int i;
for_each_cpu(i) for_each_present_cpu(i)
arch_register_cpu(i); arch_register_cpu(i);
return 0; return 0;
} }
@ -37,7 +37,7 @@
#include <asm/mmzone.h> #include <asm/mmzone.h>
#include <bios_ebda.h> #include <bios_ebda.h>
struct pglist_data *node_data[MAX_NUMNODES]; struct pglist_data *node_data[MAX_NUMNODES] __read_mostly;
EXPORT_SYMBOL(node_data); EXPORT_SYMBOL(node_data);
bootmem_data_t node0_bdata; bootmem_data_t node0_bdata;
@ -49,8 +49,8 @@ bootmem_data_t node0_bdata;
* 2) node_start_pfn - the starting page frame number for a node * 2) node_start_pfn - the starting page frame number for a node
 * 3) node_end_pfn - the ending page frame number for a node * 3) node_end_pfn - the ending page frame number for a node
*/ */
unsigned long node_start_pfn[MAX_NUMNODES]; unsigned long node_start_pfn[MAX_NUMNODES] __read_mostly;
unsigned long node_end_pfn[MAX_NUMNODES]; unsigned long node_end_pfn[MAX_NUMNODES] __read_mostly;
#ifdef CONFIG_DISCONTIGMEM #ifdef CONFIG_DISCONTIGMEM
@ -66,7 +66,7 @@ unsigned long node_end_pfn[MAX_NUMNODES];
* physnode_map[4-7] = 1; * physnode_map[4-7] = 1;
* physnode_map[8- ] = -1; * physnode_map[8- ] = -1;
*/ */
s8 physnode_map[MAX_ELEMENTS] = { [0 ... (MAX_ELEMENTS - 1)] = -1}; s8 physnode_map[MAX_ELEMENTS] __read_mostly = { [0 ... (MAX_ELEMENTS - 1)] = -1};
EXPORT_SYMBOL(physnode_map); EXPORT_SYMBOL(physnode_map);
void memory_present(int nid, unsigned long start, unsigned long end) void memory_present(int nid, unsigned long start, unsigned long end)
@ -21,6 +21,7 @@
#include <linux/vt_kern.h> /* For unblank_screen() */ #include <linux/vt_kern.h> /* For unblank_screen() */
#include <linux/highmem.h> #include <linux/highmem.h>
#include <linux/module.h> #include <linux/module.h>
#include <linux/kprobes.h>
#include <asm/system.h> #include <asm/system.h>
#include <asm/uaccess.h> #include <asm/uaccess.h>
@ -223,7 +224,8 @@ fastcall void do_invalid_op(struct pt_regs *, unsigned long);
* bit 1 == 0 means read, 1 means write * bit 1 == 0 means read, 1 means write
* bit 2 == 0 means kernel, 1 means user-mode * bit 2 == 0 means kernel, 1 means user-mode
*/ */
fastcall void do_page_fault(struct pt_regs *regs, unsigned long error_code) fastcall void __kprobes do_page_fault(struct pt_regs *regs,
unsigned long error_code)
{ {
struct task_struct *tsk; struct task_struct *tsk;
struct mm_struct *mm; struct mm_struct *mm;
@ -393,7 +393,7 @@ void zap_low_mappings (void)
} }
static int disable_nx __initdata = 0; static int disable_nx __initdata = 0;
u64 __supported_pte_mask = ~_PAGE_NX; u64 __supported_pte_mask __read_mostly = ~_PAGE_NX;
/* /*
* noexec = on|off * noexec = on|off
@ -15,9 +15,9 @@
* with the NMI mode driver. * with the NMI mode driver.
*/ */
extern int nmi_init(struct oprofile_operations * ops); extern int op_nmi_init(struct oprofile_operations * ops);
extern int nmi_timer_init(struct oprofile_operations * ops); extern int op_nmi_timer_init(struct oprofile_operations * ops);
extern void nmi_exit(void); extern void op_nmi_exit(void);
extern void x86_backtrace(struct pt_regs * const regs, unsigned int depth); extern void x86_backtrace(struct pt_regs * const regs, unsigned int depth);
@ -28,11 +28,11 @@ int __init oprofile_arch_init(struct oprofile_operations * ops)
ret = -ENODEV; ret = -ENODEV;
#ifdef CONFIG_X86_LOCAL_APIC #ifdef CONFIG_X86_LOCAL_APIC
ret = nmi_init(ops); ret = op_nmi_init(ops);
#endif #endif
#ifdef CONFIG_X86_IO_APIC #ifdef CONFIG_X86_IO_APIC
if (ret < 0) if (ret < 0)
ret = nmi_timer_init(ops); ret = op_nmi_timer_init(ops);
#endif #endif
ops->backtrace = x86_backtrace; ops->backtrace = x86_backtrace;
@ -43,6 +43,6 @@ int __init oprofile_arch_init(struct oprofile_operations * ops)
void oprofile_arch_exit(void) void oprofile_arch_exit(void)
{ {
#ifdef CONFIG_X86_LOCAL_APIC #ifdef CONFIG_X86_LOCAL_APIC
nmi_exit(); op_nmi_exit();
#endif #endif
} }
@ -355,7 +355,7 @@ static int __init ppro_init(char ** cpu_type)
/* in order to get driverfs right */ /* in order to get driverfs right */
static int using_nmi; static int using_nmi;
int __init nmi_init(struct oprofile_operations *ops) int __init op_nmi_init(struct oprofile_operations *ops)
{ {
__u8 vendor = boot_cpu_data.x86_vendor; __u8 vendor = boot_cpu_data.x86_vendor;
__u8 family = boot_cpu_data.x86; __u8 family = boot_cpu_data.x86;
@ -420,7 +420,7 @@ int __init nmi_init(struct oprofile_operations *ops)
} }
void nmi_exit(void) void op_nmi_exit(void)
{ {
if (using_nmi) if (using_nmi)
exit_driverfs(); exit_driverfs();
@ -40,7 +40,7 @@ static void timer_stop(void)
} }
int __init nmi_timer_init(struct oprofile_operations * ops) int __init op_nmi_timer_init(struct oprofile_operations * ops)
{ {
extern int nmi_active; extern int nmi_active;
@ -434,6 +434,11 @@ config GENERIC_IRQ_PROBE
bool bool
default y default y
config GENERIC_PENDING_IRQ
bool
depends on GENERIC_HARDIRQS && SMP
default y
source "arch/ia64/hp/sim/Kconfig" source "arch/ia64/hp/sim/Kconfig"
source "arch/ia64/oprofile/Kconfig" source "arch/ia64/oprofile/Kconfig"
@ -130,7 +130,7 @@ static void rs_stop(struct tty_struct *tty)
static void rs_start(struct tty_struct *tty) static void rs_start(struct tty_struct *tty)
{ {
#if SIMSERIAL_DEBUG #ifdef SIMSERIAL_DEBUG
printk("rs_start: tty->stopped=%d tty->hw_stopped=%d tty->flow_stopped=%d\n", printk("rs_start: tty->stopped=%d tty->hw_stopped=%d tty->flow_stopped=%d\n",
tty->stopped, tty->hw_stopped, tty->flow_stopped); tty->stopped, tty->hw_stopped, tty->flow_stopped);
#endif #endif
@ -215,7 +215,7 @@ ia32_syscall_table:
data8 sys32_fork data8 sys32_fork
data8 sys_read data8 sys_read
data8 sys_write data8 sys_write
data8 sys32_open /* 5 */ data8 compat_sys_open /* 5 */
data8 sys_close data8 sys_close
data8 sys32_waitpid data8 sys32_waitpid
data8 sys_creat data8 sys_creat
@ -2359,37 +2359,6 @@ sys32_brk (unsigned int brk)
return ret; return ret;
} }
/*
* Exactly like fs/open.c:sys_open(), except that it doesn't set the O_LARGEFILE flag.
*/
asmlinkage long
sys32_open (const char __user * filename, int flags, int mode)
{
char * tmp;
int fd, error;
tmp = getname(filename);
fd = PTR_ERR(tmp);
if (!IS_ERR(tmp)) {
fd = get_unused_fd();
if (fd >= 0) {
struct file *f = filp_open(tmp, flags, mode);
error = PTR_ERR(f);
if (IS_ERR(f))
goto out_error;
fd_install(fd, f);
}
out:
putname(tmp);
}
return fd;
out_error:
put_unused_fd(fd);
fd = error;
goto out;
}
/* Structure for ia32 emulation on ia64 */ /* Structure for ia32 emulation on ia64 */
struct epoll_event32 struct epoll_event32
{ {
@ -16,7 +16,7 @@ obj-$(CONFIG_IA64_HP_ZX1_SWIOTLB) += acpi-ext.o
obj-$(CONFIG_IA64_PALINFO) += palinfo.o obj-$(CONFIG_IA64_PALINFO) += palinfo.o
obj-$(CONFIG_IOSAPIC) += iosapic.o obj-$(CONFIG_IOSAPIC) += iosapic.o
obj-$(CONFIG_MODULES) += module.o obj-$(CONFIG_MODULES) += module.o
obj-$(CONFIG_SMP) += smp.o smpboot.o domain.o obj-$(CONFIG_SMP) += smp.o smpboot.o
obj-$(CONFIG_NUMA) += numa.o obj-$(CONFIG_NUMA) += numa.o
obj-$(CONFIG_PERFMON) += perfmon_default_smpl.o obj-$(CONFIG_PERFMON) += perfmon_default_smpl.o
obj-$(CONFIG_IA64_CYCLONE) += cyclone.o obj-$(CONFIG_IA64_CYCLONE) += cyclone.o
@ -1,396 +0,0 @@
/*
* arch/ia64/kernel/domain.c
* Architecture specific sched-domains builder.
*
* Copyright (C) 2004 Jesse Barnes
* Copyright (C) 2004 Silicon Graphics, Inc.
*/
#include <linux/sched.h>
#include <linux/percpu.h>
#include <linux/slab.h>
#include <linux/cpumask.h>
#include <linux/init.h>
#include <linux/topology.h>
#include <linux/nodemask.h>
#define SD_NODES_PER_DOMAIN 16
#ifdef CONFIG_NUMA
/**
* find_next_best_node - find the next node to include in a sched_domain
* @node: node whose sched_domain we're building
* @used_nodes: nodes already in the sched_domain
*
* Find the next node to include in a given scheduling domain. Simply
* finds the closest node not already in the @used_nodes map.
*
* Should use nodemask_t.
*/
static int find_next_best_node(int node, unsigned long *used_nodes)
{
int i, n, val, min_val, best_node = 0;
min_val = INT_MAX;
for (i = 0; i < MAX_NUMNODES; i++) {
/* Start at @node */
n = (node + i) % MAX_NUMNODES;
if (!nr_cpus_node(n))
continue;
/* Skip already used nodes */
if (test_bit(n, used_nodes))
continue;
/* Simple min distance search */
val = node_distance(node, n);
if (val < min_val) {
min_val = val;
best_node = n;
}
}
set_bit(best_node, used_nodes);
return best_node;
}
/**
* sched_domain_node_span - get a cpumask for a node's sched_domain
* @node: node whose cpumask we're constructing
* @size: number of nodes to include in this span
*
* Given a node, construct a good cpumask for its sched_domain to span. It
* should be one that prevents unnecessary balancing, but also spreads tasks
* out optimally.
*/
static cpumask_t sched_domain_node_span(int node)
{
int i;
cpumask_t span, nodemask;
DECLARE_BITMAP(used_nodes, MAX_NUMNODES);
cpus_clear(span);
bitmap_zero(used_nodes, MAX_NUMNODES);
nodemask = node_to_cpumask(node);
cpus_or(span, span, nodemask);
set_bit(node, used_nodes);
for (i = 1; i < SD_NODES_PER_DOMAIN; i++) {
int next_node = find_next_best_node(node, used_nodes);
nodemask = node_to_cpumask(next_node);
cpus_or(span, span, nodemask);
}
return span;
}
#endif
/*
* At the moment, CONFIG_SCHED_SMT is never defined, but leave it in so we
* can switch it on easily if needed.
*/
#ifdef CONFIG_SCHED_SMT
static DEFINE_PER_CPU(struct sched_domain, cpu_domains);
static struct sched_group sched_group_cpus[NR_CPUS];
static int cpu_to_cpu_group(int cpu)
{
return cpu;
}
#endif
static DEFINE_PER_CPU(struct sched_domain, phys_domains);
static struct sched_group sched_group_phys[NR_CPUS];
static int cpu_to_phys_group(int cpu)
{
#ifdef CONFIG_SCHED_SMT
return first_cpu(cpu_sibling_map[cpu]);
#else
return cpu;
#endif
}
#ifdef CONFIG_NUMA
/*
* The init_sched_build_groups can't handle what we want to do with node
* groups, so roll our own. Now each node has its own list of groups which
* gets dynamically allocated.
*/
static DEFINE_PER_CPU(struct sched_domain, node_domains);
static struct sched_group *sched_group_nodes[MAX_NUMNODES];
static DEFINE_PER_CPU(struct sched_domain, allnodes_domains);
static struct sched_group sched_group_allnodes[MAX_NUMNODES];
static int cpu_to_allnodes_group(int cpu)
{
return cpu_to_node(cpu);
}
#endif
/*
* Build sched domains for a given set of cpus and attach the sched domains
* to the individual cpus
*/
void build_sched_domains(const cpumask_t *cpu_map)
{
int i;
/*
* Set up domains for cpus specified by the cpu_map.
*/
for_each_cpu_mask(i, *cpu_map) {
int group;
struct sched_domain *sd = NULL, *p;
cpumask_t nodemask = node_to_cpumask(cpu_to_node(i));
cpus_and(nodemask, nodemask, *cpu_map);
#ifdef CONFIG_NUMA
if (num_online_cpus()
> SD_NODES_PER_DOMAIN*cpus_weight(nodemask)) {
sd = &per_cpu(allnodes_domains, i);
*sd = SD_ALLNODES_INIT;
sd->span = *cpu_map;
group = cpu_to_allnodes_group(i);
sd->groups = &sched_group_allnodes[group];
p = sd;
} else
p = NULL;
sd = &per_cpu(node_domains, i);
*sd = SD_NODE_INIT;
sd->span = sched_domain_node_span(cpu_to_node(i));
sd->parent = p;
cpus_and(sd->span, sd->span, *cpu_map);
#endif
p = sd;
sd = &per_cpu(phys_domains, i);
group = cpu_to_phys_group(i);
*sd = SD_CPU_INIT;
sd->span = nodemask;
sd->parent = p;
sd->groups = &sched_group_phys[group];
#ifdef CONFIG_SCHED_SMT
p = sd;
sd = &per_cpu(cpu_domains, i);
group = cpu_to_cpu_group(i);
*sd = SD_SIBLING_INIT;
sd->span = cpu_sibling_map[i];
cpus_and(sd->span, sd->span, *cpu_map);
sd->parent = p;
sd->groups = &sched_group_cpus[group];
#endif
}
#ifdef CONFIG_SCHED_SMT
/* Set up CPU (sibling) groups */
for_each_cpu_mask(i, *cpu_map) {
cpumask_t this_sibling_map = cpu_sibling_map[i];
cpus_and(this_sibling_map, this_sibling_map, *cpu_map);
if (i != first_cpu(this_sibling_map))
continue;
init_sched_build_groups(sched_group_cpus, this_sibling_map,
&cpu_to_cpu_group);
}
#endif
/* Set up physical groups */
for (i = 0; i < MAX_NUMNODES; i++) {
cpumask_t nodemask = node_to_cpumask(i);
cpus_and(nodemask, nodemask, *cpu_map);
if (cpus_empty(nodemask))
continue;
init_sched_build_groups(sched_group_phys, nodemask,
&cpu_to_phys_group);
}
#ifdef CONFIG_NUMA
init_sched_build_groups(sched_group_allnodes, *cpu_map,
&cpu_to_allnodes_group);
for (i = 0; i < MAX_NUMNODES; i++) {
/* Set up node groups */
struct sched_group *sg, *prev;
cpumask_t nodemask = node_to_cpumask(i);
cpumask_t domainspan;
cpumask_t covered = CPU_MASK_NONE;
int j;
cpus_and(nodemask, nodemask, *cpu_map);
if (cpus_empty(nodemask))
continue;
domainspan = sched_domain_node_span(i);
cpus_and(domainspan, domainspan, *cpu_map);
sg = kmalloc(sizeof(struct sched_group), GFP_KERNEL);
sched_group_nodes[i] = sg;
for_each_cpu_mask(j, nodemask) {
struct sched_domain *sd;
sd = &per_cpu(node_domains, j);
sd->groups = sg;
if (sd->groups == NULL) {
/* Turn off balancing if we have no groups */
sd->flags = 0;
}
}
if (!sg) {
printk(KERN_WARNING
"Can not alloc domain group for node %d\n", i);
continue;
}
sg->cpu_power = 0;
sg->cpumask = nodemask;
cpus_or(covered, covered, nodemask);
prev = sg;
for (j = 0; j < MAX_NUMNODES; j++) {
cpumask_t tmp, notcovered;
int n = (i + j) % MAX_NUMNODES;
cpus_complement(notcovered, covered);
cpus_and(tmp, notcovered, *cpu_map);
cpus_and(tmp, tmp, domainspan);
if (cpus_empty(tmp))
break;
nodemask = node_to_cpumask(n);
cpus_and(tmp, tmp, nodemask);
if (cpus_empty(tmp))
continue;
sg = kmalloc(sizeof(struct sched_group), GFP_KERNEL);
if (!sg) {
printk(KERN_WARNING
"Can not alloc domain group for node %d\n", j);
break;
}
sg->cpu_power = 0;
sg->cpumask = tmp;
cpus_or(covered, covered, tmp);
prev->next = sg;
prev = sg;
}
prev->next = sched_group_nodes[i];
}
#endif
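
Editor's aside: the loop above closes each node's group list into a ring (prev->next points back at sched_group_nodes[i]), and both the power fix-up below and the teardown in arch_destroy_sched_domains() rely on that. A minimal traversal sketch, using only the struct sched_group fields seen in this file; the helper name is illustrative and not part of the patch:

static void walk_node_group_ring(struct sched_group *head)
{
        struct sched_group *sg = head;

        if (!head)
                return;
        do {
                /* visit one group: it spans sg->cpumask with power sg->cpu_power */
                sg = sg->next;
        } while (sg != head);
}
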
/* Calculate CPU power for physical packages and nodes */
for_each_cpu_mask(i, *cpu_map) {
int power;
struct sched_domain *sd;
#ifdef CONFIG_SCHED_SMT
sd = &per_cpu(cpu_domains, i);
power = SCHED_LOAD_SCALE;
sd->groups->cpu_power = power;
#endif
sd = &per_cpu(phys_domains, i);
power = SCHED_LOAD_SCALE + SCHED_LOAD_SCALE *
(cpus_weight(sd->groups->cpumask)-1) / 10;
sd->groups->cpu_power = power;
#ifdef CONFIG_NUMA
sd = &per_cpu(allnodes_domains, i);
if (sd->groups) {
power = SCHED_LOAD_SCALE + SCHED_LOAD_SCALE *
(cpus_weight(sd->groups->cpumask)-1) / 10;
sd->groups->cpu_power = power;
}
#endif
}
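
A quick worked example of the power formula above, assuming SCHED_LOAD_SCALE is 128 (its usual value; the constant's definition is not shown in this hunk):

/*
 * Worked example, assuming SCHED_LOAD_SCALE == 128:
 *   group of 2 cpus: 128 + 128 * (2 - 1) / 10 = 128 + 12 = 140
 *   group of 4 cpus: 128 + 128 * (4 - 1) / 10 = 128 + 38 = 166
 * i.e. each extra cpu in a group adds roughly a tenth of a full cpu
 * (integer division), so sibling/packaged cpus are deliberately not
 * counted as full cpus for load-balancing purposes.
 */
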
#ifdef CONFIG_NUMA
for (i = 0; i < MAX_NUMNODES; i++) {
struct sched_group *sg = sched_group_nodes[i];
int j;
if (sg == NULL)
continue;
next_sg:
for_each_cpu_mask(j, sg->cpumask) {
struct sched_domain *sd;
int power;
sd = &per_cpu(phys_domains, j);
if (j != first_cpu(sd->groups->cpumask)) {
/*
* Only add "power" once for each
* physical package.
*/
continue;
}
power = SCHED_LOAD_SCALE + SCHED_LOAD_SCALE *
(cpus_weight(sd->groups->cpumask)-1) / 10;
sg->cpu_power += power;
}
sg = sg->next;
if (sg != sched_group_nodes[i])
goto next_sg;
}
#endif
/* Attach the domains */
for_each_cpu_mask(i, *cpu_map) {
struct sched_domain *sd;
#ifdef CONFIG_SCHED_SMT
sd = &per_cpu(cpu_domains, i);
#else
sd = &per_cpu(phys_domains, i);
#endif
cpu_attach_domain(sd, i);
}
}
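
As a reading aid (editor's summary, not part of the patch), this is the parent chain that build_sched_domains() leaves behind for each cpu, bottom-up; which levels exist depends on the config options used above:

/*
 *   per_cpu(cpu_domains, i)            SMT siblings     (CONFIG_SCHED_SMT)
 *    -> per_cpu(phys_domains, i)       physical package
 *     -> per_cpu(node_domains, i)      NUMA node        (CONFIG_NUMA)
 *      -> per_cpu(allnodes_domains, i) all nodes        (CONFIG_NUMA, large
 *                                                        systems only)
 *
 * cpu_attach_domain() is handed the lowest level that was built.
 */
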
/*
* Set up scheduler domains and groups. Callers must hold the hotplug lock.
*/
void arch_init_sched_domains(const cpumask_t *cpu_map)
{
cpumask_t cpu_default_map;
/*
 * Set up the mask of cpus without special-case scheduling requirements.
 * For now this just excludes isolated cpus, but it could be used to
 * exclude other special cases in the future.
 */
cpus_andnot(cpu_default_map, *cpu_map, cpu_isolated_map);
build_sched_domains(&cpu_default_map);
}
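
Editor's example of the effect of the cpus_andnot() above, assuming cpu_isolated_map is filled by the isolcpus= boot parameter (as it is elsewhere in this file):

/*
 * On an 8-cpu machine booted with isolcpus=3, cpu_default_map ends up
 * containing cpus 0-2 and 4-7, so build_sched_domains() never attaches
 * a domain to cpu 3 and the load balancer leaves it alone.
 */
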
void arch_destroy_sched_domains(const cpumask_t *cpu_map)
{
#ifdef CONFIG_NUMA
int i;
for (i = 0; i < MAX_NUMNODES; i++) {
cpumask_t nodemask = node_to_cpumask(i);
struct sched_group *oldsg, *sg = sched_group_nodes[i];
cpus_and(nodemask, nodemask, *cpu_map);
if (cpus_empty(nodemask))
continue;
if (sg == NULL)
continue;
sg = sg->next;
next_sg:
oldsg = sg;
sg = sg->next;
kfree(oldsg);
if (oldsg != sched_group_nodes[i])
goto next_sg;
sched_group_nodes[i] = NULL;
}
#endif
}
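
The goto loop above walks the ring starting at head->next and frees every group, the head included, exactly once. An equivalent while-loop formulation, as a sketch only (the helper name is the editor's):

static void free_node_group_ring(struct sched_group *head)
{
        struct sched_group *sg = head->next;

        while (sg != head) {
                struct sched_group *next = sg->next;

                kfree(sg);
                sg = next;
        }
        kfree(head);
}
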

View File

@ -91,23 +91,8 @@ skip:
} }
#ifdef CONFIG_SMP #ifdef CONFIG_SMP
/*
* This is updated when the user sets irq affinity via /proc
*/
static cpumask_t __cacheline_aligned pending_irq_cpumask[NR_IRQS];
static unsigned long pending_irq_redir[BITS_TO_LONGS(NR_IRQS)];
static char irq_redir [NR_IRQS]; // = { [0 ... NR_IRQS-1] = 1 }; static char irq_redir [NR_IRQS]; // = { [0 ... NR_IRQS-1] = 1 };
/*
* Arch specific routine for deferred write to iosapic rte to reprogram
* intr destination.
*/
void proc_set_irq_affinity(unsigned int irq, cpumask_t mask_val)
{
pending_irq_cpumask[irq] = mask_val;
}
void set_irq_affinity_info (unsigned int irq, int hwid, int redir) void set_irq_affinity_info (unsigned int irq, int hwid, int redir)
{ {
cpumask_t mask = CPU_MASK_NONE; cpumask_t mask = CPU_MASK_NONE;
@ -116,32 +101,10 @@ void set_irq_affinity_info (unsigned int irq, int hwid, int redir)
if (irq < NR_IRQS) { if (irq < NR_IRQS) {
irq_affinity[irq] = mask; irq_affinity[irq] = mask;
set_irq_info(irq, mask);
irq_redir[irq] = (char) (redir & 0xff); irq_redir[irq] = (char) (redir & 0xff);
} }
} }
void move_irq(int irq)
{
/* note - we hold desc->lock */
cpumask_t tmp;
irq_desc_t *desc = irq_descp(irq);
int redir = test_bit(irq, pending_irq_redir);
if (unlikely(!desc->handler->set_affinity))
return;
if (!cpus_empty(pending_irq_cpumask[irq])) {
cpus_and(tmp, pending_irq_cpumask[irq], cpu_online_map);
if (unlikely(!cpus_empty(tmp))) {
desc->handler->set_affinity(irq | (redir ? IA64_IRQ_REDIRECTED : 0),
pending_irq_cpumask[irq]);
}
cpus_clear(pending_irq_cpumask[irq]);
}
}
#endif /* CONFIG_SMP */ #endif /* CONFIG_SMP */
#ifdef CONFIG_HOTPLUG_CPU #ifdef CONFIG_HOTPLUG_CPU
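
The routines removed above implemented the usual two-step, deferred affinity update for ia64: the /proc write only records the requested mask, and the hardware is reprogrammed later from a safe (interrupt) context. A rough sketch of that pattern, with illustrative names rather than the API that replaces it:

static cpumask_t pending_mask[NR_IRQS];         /* written from /proc */

static void record_affinity_request(unsigned int irq, cpumask_t mask)
{
        pending_mask[irq] = mask;               /* no hardware access here */
}

static void apply_pending_affinity(unsigned int irq, irq_desc_t *desc)
{
        cpumask_t online;                       /* caller holds desc->lock */

        if (!desc->handler->set_affinity)
                return;
        cpus_and(online, pending_mask[irq], cpu_online_map);
        if (!cpus_empty(online))
                desc->handler->set_affinity(irq, pending_mask[irq]);
        cpus_clear(pending_mask[irq]);
}
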

View File

@ -49,6 +49,7 @@
/* /*
* void jprobe_break(void) * void jprobe_break(void)
*/ */
.section .kprobes.text, "ax"
ENTRY(jprobe_break) ENTRY(jprobe_break)
break.m 0x80300 break.m 0x80300
END(jprobe_break) END(jprobe_break)

View File

@ -87,12 +87,25 @@ static enum instruction_type bundle_encoding[32][3] = {
* is IP relative instruction and update the kprobe * is IP relative instruction and update the kprobe
* inst flag accordingly * inst flag accordingly
*/ */
static void update_kprobe_inst_flag(uint template, uint slot, uint major_opcode, static void __kprobes update_kprobe_inst_flag(uint template, uint slot,
unsigned long kprobe_inst, struct kprobe *p) uint major_opcode,
unsigned long kprobe_inst,
struct kprobe *p)
{ {
p->ainsn.inst_flag = 0; p->ainsn.inst_flag = 0;
p->ainsn.target_br_reg = 0; p->ainsn.target_br_reg = 0;
/* Check for Break instruction
* Bits 37:40 Major opcode to be zero
* Bits 27:32 X6 to be zero
* Bits 32:35 X3 to be zero
*/
if ((!major_opcode) && (!((kprobe_inst >> 27) & 0x1FF)) ) {
/* is a break instruction */
p->ainsn.inst_flag |= INST_FLAG_BREAK_INST;
return;
}
if (bundle_encoding[template][slot] == B) { if (bundle_encoding[template][slot] == B) {
switch (major_opcode) { switch (major_opcode) {
case INDIRECT_CALL_OPCODE: case INDIRECT_CALL_OPCODE:
@ -126,8 +139,10 @@ static void update_kprobe_inst_flag(uint template, uint slot, uint major_opcode
* Returns 0 if supported * Returns 0 if supported
* Returns -EINVAL if unsupported * Returns -EINVAL if unsupported
*/ */
static int unsupported_inst(uint template, uint slot, uint major_opcode, static int __kprobes unsupported_inst(uint template, uint slot,
unsigned long kprobe_inst, struct kprobe *p) uint major_opcode,
unsigned long kprobe_inst,
struct kprobe *p)
{ {
unsigned long addr = (unsigned long)p->addr; unsigned long addr = (unsigned long)p->addr;
@ -168,8 +183,9 @@ static int unsupported_inst(uint template, uint slot, uint major_opcode,
* on which we are inserting kprobe is cmp instruction * on which we are inserting kprobe is cmp instruction
* with ctype as unc. * with ctype as unc.
*/ */
static uint is_cmp_ctype_unc_inst(uint template, uint slot, uint major_opcode, static uint __kprobes is_cmp_ctype_unc_inst(uint template, uint slot,
unsigned long kprobe_inst) uint major_opcode,
unsigned long kprobe_inst)
{ {
cmp_inst_t cmp_inst; cmp_inst_t cmp_inst;
uint ctype_unc = 0; uint ctype_unc = 0;
@ -201,8 +217,10 @@ out:
* In this function we override the bundle with * In this function we override the bundle with
* the break instruction at the given slot. * the break instruction at the given slot.
*/ */
static void prepare_break_inst(uint template, uint slot, uint major_opcode, static void __kprobes prepare_break_inst(uint template, uint slot,
unsigned long kprobe_inst, struct kprobe *p) uint major_opcode,
unsigned long kprobe_inst,
struct kprobe *p)
{ {
unsigned long break_inst = BREAK_INST; unsigned long break_inst = BREAK_INST;
bundle_t *bundle = &p->ainsn.insn.bundle; bundle_t *bundle = &p->ainsn.insn.bundle;
@ -271,7 +289,8 @@ static inline int in_ivt_functions(unsigned long addr)
&& addr < (unsigned long)__end_ivt_text); && addr < (unsigned long)__end_ivt_text);
} }
static int valid_kprobe_addr(int template, int slot, unsigned long addr) static int __kprobes valid_kprobe_addr(int template, int slot,
unsigned long addr)
{ {
if ((slot > 2) || ((bundle_encoding[template][1] == L) && slot > 1)) { if ((slot > 2) || ((bundle_encoding[template][1] == L) && slot > 1)) {
printk(KERN_WARNING "Attempting to insert unaligned kprobe " printk(KERN_WARNING "Attempting to insert unaligned kprobe "
@ -323,7 +342,7 @@ static void kretprobe_trampoline(void)
* - cleanup by marking the instance as unused * - cleanup by marking the instance as unused
* - long jump back to the original return address * - long jump back to the original return address
*/ */
int trampoline_probe_handler(struct kprobe *p, struct pt_regs *regs) int __kprobes trampoline_probe_handler(struct kprobe *p, struct pt_regs *regs)
{ {
struct kretprobe_instance *ri = NULL; struct kretprobe_instance *ri = NULL;
struct hlist_head *head; struct hlist_head *head;
@ -381,7 +400,8 @@ int trampoline_probe_handler(struct kprobe *p, struct pt_regs *regs)
return 1; return 1;
} }
void arch_prepare_kretprobe(struct kretprobe *rp, struct pt_regs *regs) void __kprobes arch_prepare_kretprobe(struct kretprobe *rp,
struct pt_regs *regs)
{ {
struct kretprobe_instance *ri; struct kretprobe_instance *ri;
@ -399,7 +419,7 @@ void arch_prepare_kretprobe(struct kretprobe *rp, struct pt_regs *regs)
} }
} }
int arch_prepare_kprobe(struct kprobe *p) int __kprobes arch_prepare_kprobe(struct kprobe *p)
{ {
unsigned long addr = (unsigned long) p->addr; unsigned long addr = (unsigned long) p->addr;
unsigned long *kprobe_addr = (unsigned long *)(addr & ~0xFULL); unsigned long *kprobe_addr = (unsigned long *)(addr & ~0xFULL);
@ -430,7 +450,7 @@ int arch_prepare_kprobe(struct kprobe *p)
return 0; return 0;
} }
void arch_arm_kprobe(struct kprobe *p) void __kprobes arch_arm_kprobe(struct kprobe *p)
{ {
unsigned long addr = (unsigned long)p->addr; unsigned long addr = (unsigned long)p->addr;
unsigned long arm_addr = addr & ~0xFULL; unsigned long arm_addr = addr & ~0xFULL;
@ -439,7 +459,7 @@ void arch_arm_kprobe(struct kprobe *p)
flush_icache_range(arm_addr, arm_addr + sizeof(bundle_t)); flush_icache_range(arm_addr, arm_addr + sizeof(bundle_t));
} }
void arch_disarm_kprobe(struct kprobe *p) void __kprobes arch_disarm_kprobe(struct kprobe *p)
{ {
unsigned long addr = (unsigned long)p->addr; unsigned long addr = (unsigned long)p->addr;
unsigned long arm_addr = addr & ~0xFULL; unsigned long arm_addr = addr & ~0xFULL;
@ -449,7 +469,7 @@ void arch_disarm_kprobe(struct kprobe *p)
flush_icache_range(arm_addr, arm_addr + sizeof(bundle_t)); flush_icache_range(arm_addr, arm_addr + sizeof(bundle_t));
} }
void arch_remove_kprobe(struct kprobe *p) void __kprobes arch_remove_kprobe(struct kprobe *p)
{ {
} }
@ -461,7 +481,7 @@ void arch_remove_kprobe(struct kprobe *p)
* to original stack address, handle the case where we need to fixup the * to original stack address, handle the case where we need to fixup the
* relative IP address and/or fixup branch register. * relative IP address and/or fixup branch register.
*/ */
static void resume_execution(struct kprobe *p, struct pt_regs *regs) static void __kprobes resume_execution(struct kprobe *p, struct pt_regs *regs)
{ {
unsigned long bundle_addr = ((unsigned long) (&p->opcode.bundle)) & ~0xFULL; unsigned long bundle_addr = ((unsigned long) (&p->opcode.bundle)) & ~0xFULL;
unsigned long resume_addr = (unsigned long)p->addr & ~0xFULL; unsigned long resume_addr = (unsigned long)p->addr & ~0xFULL;
@ -528,13 +548,16 @@ turn_ss_off:
ia64_psr(regs)->ss = 0; ia64_psr(regs)->ss = 0;
} }
static void prepare_ss(struct kprobe *p, struct pt_regs *regs) static void __kprobes prepare_ss(struct kprobe *p, struct pt_regs *regs)
{ {
unsigned long bundle_addr = (unsigned long) &p->opcode.bundle; unsigned long bundle_addr = (unsigned long) &p->opcode.bundle;
unsigned long slot = (unsigned long)p->addr & 0xf; unsigned long slot = (unsigned long)p->addr & 0xf;
/* Update instruction pointer (IIP) and slot number (IPSR.ri) */ /* single step inline if break instruction */
regs->cr_iip = bundle_addr & ~0xFULL; if (p->ainsn.inst_flag == INST_FLAG_BREAK_INST)
regs->cr_iip = (unsigned long)p->addr & ~0xFULL;
else
regs->cr_iip = bundle_addr & ~0xFULL;
if (slot > 2) if (slot > 2)
slot = 0; slot = 0;
@ -545,7 +568,39 @@ static void prepare_ss(struct kprobe *p, struct pt_regs *regs)
ia64_psr(regs)->ss = 1; ia64_psr(regs)->ss = 1;
} }
static int pre_kprobes_handler(struct die_args *args) static int __kprobes is_ia64_break_inst(struct pt_regs *regs)
{
unsigned int slot = ia64_psr(regs)->ri;
unsigned int template, major_opcode;
unsigned long kprobe_inst;
unsigned long *kprobe_addr = (unsigned long *)regs->cr_iip;
bundle_t bundle;
memcpy(&bundle, kprobe_addr, sizeof(bundle_t));
template = bundle.quad0.template;
/* Move to slot 2, if bundle is MLX type and kprobe slot is 1 */
if (slot == 1 && bundle_encoding[template][1] == L)
slot++;
/* Get Kprobe probe instruction at given slot*/
get_kprobe_inst(&bundle, slot, &kprobe_inst, &major_opcode);
/* For break instruction,
* Bits 37:40 Major opcode to be zero
* Bits 27:32 X6 to be zero
* Bits 32:35 X3 to be zero
*/
if (major_opcode || ((kprobe_inst >> 27) & 0x1FF) ) {
/* Not a break instruction */
return 0;
}
/* Is a break instruction */
return 1;
}
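
The test above folds the X6 and X3 fields into a single nine-bit mask. Spelled out as a tiny helper (editor's sketch, not part of the patch):

/*
 * 0x1FF is nine set bits, so (inst >> 27) & 0x1FF covers bits 27-35,
 * i.e. the X6 and X3 fields; the major opcode lives in bits 37-40.
 * An instruction is treated as a break only if all of them are zero.
 */
static inline int looks_like_break(unsigned int major_opcode,
                                   unsigned long inst)
{
        return !major_opcode && !((inst >> 27) & 0x1FF);
}
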
static int __kprobes pre_kprobes_handler(struct die_args *args)
{ {
struct kprobe *p; struct kprobe *p;
int ret = 0; int ret = 0;
@ -558,7 +613,9 @@ static int pre_kprobes_handler(struct die_args *args)
if (kprobe_running()) { if (kprobe_running()) {
p = get_kprobe(addr); p = get_kprobe(addr);
if (p) { if (p) {
if (kprobe_status == KPROBE_HIT_SS) { if ( (kprobe_status == KPROBE_HIT_SS) &&
(p->ainsn.inst_flag == INST_FLAG_BREAK_INST)) {
ia64_psr(regs)->ss = 0;
unlock_kprobes(); unlock_kprobes();
goto no_kprobe; goto no_kprobe;
} }
@ -592,6 +649,19 @@ static int pre_kprobes_handler(struct die_args *args)
p = get_kprobe(addr); p = get_kprobe(addr);
if (!p) { if (!p) {
unlock_kprobes(); unlock_kprobes();
if (!is_ia64_break_inst(regs)) {
/*
* The breakpoint instruction was removed right
* after we hit it. Another cpu has removed
* either a probepoint or a debugger breakpoint
* at this address. In either case, no further
* handling of this interrupt is appropriate.
*/
ret = 1;
}
/* Not one of our breakpoints, let the kernel handle it */
goto no_kprobe; goto no_kprobe;
} }
@ -616,7 +686,7 @@ no_kprobe:
return ret; return ret;
} }
static int post_kprobes_handler(struct pt_regs *regs) static int __kprobes post_kprobes_handler(struct pt_regs *regs)
{ {
if (!kprobe_running()) if (!kprobe_running())
return 0; return 0;
@ -641,7 +711,7 @@ out:
return 1; return 1;
} }
static int kprobes_fault_handler(struct pt_regs *regs, int trapnr) static int __kprobes kprobes_fault_handler(struct pt_regs *regs, int trapnr)
{ {
if (!kprobe_running()) if (!kprobe_running())
return 0; return 0;
@ -659,8 +729,8 @@ static int kprobes_fault_handler(struct pt_regs *regs, int trapnr)
return 0; return 0;
} }
int kprobe_exceptions_notify(struct notifier_block *self, unsigned long val, int __kprobes kprobe_exceptions_notify(struct notifier_block *self,
void *data) unsigned long val, void *data)
{ {
struct die_args *args = (struct die_args *)data; struct die_args *args = (struct die_args *)data;
switch(val) { switch(val) {
@ -681,7 +751,7 @@ int kprobe_exceptions_notify(struct notifier_block *self, unsigned long val,
return NOTIFY_DONE; return NOTIFY_DONE;
} }
int setjmp_pre_handler(struct kprobe *p, struct pt_regs *regs) int __kprobes setjmp_pre_handler(struct kprobe *p, struct pt_regs *regs)
{ {
struct jprobe *jp = container_of(p, struct jprobe, kp); struct jprobe *jp = container_of(p, struct jprobe, kp);
unsigned long addr = ((struct fnptr *)(jp->entry))->ip; unsigned long addr = ((struct fnptr *)(jp->entry))->ip;
@ -703,7 +773,7 @@ int setjmp_pre_handler(struct kprobe *p, struct pt_regs *regs)
return 1; return 1;
} }
int longjmp_break_handler(struct kprobe *p, struct pt_regs *regs) int __kprobes longjmp_break_handler(struct kprobe *p, struct pt_regs *regs)
{ {
*regs = jprobe_saved_regs; *regs = jprobe_saved_regs;
return 1; return 1;

View File

@ -15,6 +15,7 @@
#include <linux/vt_kern.h> /* For unblank_screen() */ #include <linux/vt_kern.h> /* For unblank_screen() */
#include <linux/module.h> /* for EXPORT_SYMBOL */ #include <linux/module.h> /* for EXPORT_SYMBOL */
#include <linux/hardirq.h> #include <linux/hardirq.h>
#include <linux/kprobes.h>
#include <asm/fpswa.h> #include <asm/fpswa.h>
#include <asm/ia32.h> #include <asm/ia32.h>
@ -122,7 +123,7 @@ die_if_kernel (char *str, struct pt_regs *regs, long err)
} }
void void
ia64_bad_break (unsigned long break_num, struct pt_regs *regs) __kprobes ia64_bad_break (unsigned long break_num, struct pt_regs *regs)
{ {
siginfo_t siginfo; siginfo_t siginfo;
int sig, code; int sig, code;
@ -444,7 +445,7 @@ ia64_illegal_op_fault (unsigned long ec, long arg1, long arg2, long arg3,
return rv; return rv;
} }
void void __kprobes
ia64_fault (unsigned long vector, unsigned long isr, unsigned long ifa, ia64_fault (unsigned long vector, unsigned long isr, unsigned long ifa,
unsigned long iim, unsigned long itir, long arg5, long arg6, unsigned long iim, unsigned long itir, long arg5, long arg6,
long arg7, struct pt_regs regs) long arg7, struct pt_regs regs)

View File

@ -48,6 +48,7 @@ SECTIONS
*(.text) *(.text)
SCHED_TEXT SCHED_TEXT
LOCK_TEXT LOCK_TEXT
KPROBES_TEXT
*(.gnu.linkonce.t*) *(.gnu.linkonce.t*)
} }
.text2 : AT(ADDR(.text2) - LOAD_OFFSET) .text2 : AT(ADDR(.text2) - LOAD_OFFSET)

View File

@ -20,6 +20,7 @@
* *
* Note: "in0" and "in1" are preserved for debugging purposes. * Note: "in0" and "in1" are preserved for debugging purposes.
*/ */
.section .kprobes.text,"ax"
GLOBAL_ENTRY(flush_icache_range) GLOBAL_ENTRY(flush_icache_range)
.prologue .prologue

View File

@ -9,6 +9,7 @@
#include <linux/mm.h> #include <linux/mm.h>
#include <linux/smp_lock.h> #include <linux/smp_lock.h>
#include <linux/interrupt.h> #include <linux/interrupt.h>
#include <linux/kprobes.h>
#include <asm/pgtable.h> #include <asm/pgtable.h>
#include <asm/processor.h> #include <asm/processor.h>
@ -76,7 +77,7 @@ mapped_kernel_page_is_present (unsigned long address)
return pte_present(pte); return pte_present(pte);
} }
void void __kprobes
ia64_do_page_fault (unsigned long address, unsigned long isr, struct pt_regs *regs) ia64_do_page_fault (unsigned long address, unsigned long isr, struct pt_regs *regs)
{ {
int signal = SIGSEGV, code = SEGV_MAPERR; int signal = SIGSEGV, code = SEGV_MAPERR;

View File

@ -431,7 +431,7 @@ void sn_bus_store_sysdata(struct pci_dev *dev)
{ {
struct sysdata_el *element; struct sysdata_el *element;
element = kcalloc(1, sizeof(struct sysdata_el), GFP_KERNEL); element = kzalloc(sizeof(struct sysdata_el), GFP_KERNEL);
if (!element) { if (!element) {
dev_dbg(dev, "%s: out of memory!\n", __FUNCTION__); dev_dbg(dev, "%s: out of memory!\n", __FUNCTION__);
return; return;
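
This and the following hunks replace one-element kcalloc() calls with the newer kzalloc(); for a single object the two are equivalent, since both return zeroed memory and kcalloc()'s extra n * size overflow check is moot when n is 1. A minimal equivalence sketch (illustrative only, not part of the patch):

#include <linux/slab.h>

static void *alloc_one_zeroed(size_t size, gfp_t flags)
{
        return kzalloc(size, flags);    /* was: kcalloc(1, size, flags) */
}
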

View File

@ -191,7 +191,7 @@ cx_device_register(nasid_t nasid, int part_num, int mfg_num,
{ {
struct cx_dev *cx_dev; struct cx_dev *cx_dev;
cx_dev = kcalloc(1, sizeof(struct cx_dev), GFP_KERNEL); cx_dev = kzalloc(sizeof(struct cx_dev), GFP_KERNEL);
DBG("cx_dev= 0x%p\n", cx_dev); DBG("cx_dev= 0x%p\n", cx_dev);
if (cx_dev == NULL) if (cx_dev == NULL)
return -ENOMEM; return -ENOMEM;

View File

@ -148,7 +148,7 @@ tioca_gart_init(struct tioca_kernel *tioca_kern)
tioca_kern->ca_pcigart_entries = tioca_kern->ca_pcigart_entries =
tioca_kern->ca_pciap_size / tioca_kern->ca_ap_pagesize; tioca_kern->ca_pciap_size / tioca_kern->ca_ap_pagesize;
tioca_kern->ca_pcigart_pagemap = tioca_kern->ca_pcigart_pagemap =
kcalloc(1, tioca_kern->ca_pcigart_entries / 8, GFP_KERNEL); kzalloc(tioca_kern->ca_pcigart_entries / 8, GFP_KERNEL);
if (!tioca_kern->ca_pcigart_pagemap) { if (!tioca_kern->ca_pcigart_pagemap) {
free_pages((unsigned long)tioca_kern->ca_gart, free_pages((unsigned long)tioca_kern->ca_gart,
get_order(tioca_kern->ca_gart_size)); get_order(tioca_kern->ca_gart_size));
@ -392,7 +392,7 @@ tioca_dma_mapped(struct pci_dev *pdev, uint64_t paddr, size_t req_size)
* allocate a map struct * allocate a map struct
*/ */
ca_dmamap = kcalloc(1, sizeof(struct tioca_dmamap), GFP_ATOMIC); ca_dmamap = kzalloc(sizeof(struct tioca_dmamap), GFP_ATOMIC);
if (!ca_dmamap) if (!ca_dmamap)
goto map_return; goto map_return;
@ -600,7 +600,7 @@ tioca_bus_fixup(struct pcibus_bussoft *prom_bussoft, struct pci_controller *cont
* Allocate kernel bus soft and copy from prom. * Allocate kernel bus soft and copy from prom.
*/ */
tioca_common = kcalloc(1, sizeof(struct tioca_common), GFP_KERNEL); tioca_common = kzalloc(sizeof(struct tioca_common), GFP_KERNEL);
if (!tioca_common) if (!tioca_common)
return NULL; return NULL;
@ -609,7 +609,7 @@ tioca_bus_fixup(struct pcibus_bussoft *prom_bussoft, struct pci_controller *cont
/* init kernel-private area */ /* init kernel-private area */
tioca_kern = kcalloc(1, sizeof(struct tioca_kernel), GFP_KERNEL); tioca_kern = kzalloc(sizeof(struct tioca_kernel), GFP_KERNEL);
if (!tioca_kern) { if (!tioca_kern) {
kfree(tioca_common); kfree(tioca_common);
return NULL; return NULL;

View File

@ -171,10 +171,7 @@ int do_settimeofday(struct timespec *tv)
set_normalized_timespec(&xtime, sec, nsec); set_normalized_timespec(&xtime, sec, nsec);
set_normalized_timespec(&wall_to_monotonic, wtm_sec, wtm_nsec); set_normalized_timespec(&wall_to_monotonic, wtm_sec, wtm_nsec);
time_adjust = 0; /* stop active adjtime() */ ntp_clear();
time_status |= STA_UNSYNC;
time_maxerror = NTP_PHASE_LIMIT;
time_esterror = NTP_PHASE_LIMIT;
write_sequnlock_irq(&xtime_lock); write_sequnlock_irq(&xtime_lock);
clock_was_set(); clock_was_set();
@ -221,7 +218,7 @@ irqreturn_t timer_interrupt(int irq, void *dev_id, struct pt_regs *regs)
* called as close as possible to 500 ms before the new second starts. * called as close as possible to 500 ms before the new second starts.
*/ */
write_seqlock(&xtime_lock); write_seqlock(&xtime_lock);
if ((time_status & STA_UNSYNC) == 0 if (ntp_synced()
&& xtime.tv_sec > last_rtc_update + 660 && xtime.tv_sec > last_rtc_update + 660
&& (xtime.tv_nsec / 1000) >= 500000 - ((unsigned)TICK_SIZE) / 2 && (xtime.tv_nsec / 1000) >= 500000 - ((unsigned)TICK_SIZE) / 2
&& (xtime.tv_nsec / 1000) <= 500000 + ((unsigned)TICK_SIZE) / 2) && (xtime.tv_nsec / 1000) <= 500000 + ((unsigned)TICK_SIZE) / 2)
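
This hunk moves the ia64 time code onto the generic NTP helpers: ntp_clear() replaces the open-coded reset of the NTP state in do_settimeofday(), and ntp_synced() replaces the explicit STA_UNSYNC test guarding the periodic RTC update. A rough sketch of what the synced check stands for (editor's approximation, not the kernel's actual definition):

/* Roughly: the clock counts as NTP-synchronized while STA_UNSYNC is clear. */
static inline int clock_is_ntp_synced(void)
{
        return !(time_status & STA_UNSYNC);
}
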
