Merge branch 'upstream'

Conflicts:

	drivers/scsi/libata-core.c
This commit is contained in:
Jeff Garzik 2006-03-24 12:29:39 -05:00
commit 4bbf7bc4c7
1282 changed files with 38304 additions and 22085 deletions

View File

@ -270,25 +270,6 @@ CPU B: spin_unlock_irqrestore(&dev_lock, flags)
</para>
</sect1>
<sect1>
<title>ISA legacy functions</title>
<para>
On older kernels (2.2 and earlier) the ISA bus could be read or
written with these functions and without ioremap being used. This is
no longer true in Linux 2.4. A set of equivalent functions exist for
easy legacy driver porting. The functions available are prefixed
with 'isa_' and are <function>isa_readb</function>,
<function>isa_writeb</function>, <function>isa_readw</function>,
<function>isa_writew</function>, <function>isa_readl</function>,
<function>isa_writel</function>, <function>isa_memcpy_fromio</function>
and <function>isa_memcpy_toio</function>
</para>
<para>
These functions should not be used in new drivers, and will
eventually be going away.
</para>
</sect1>
</chapter>
<chapter>

View File

@ -18,7 +18,8 @@ CONTENTS:
1.4 What are exclusive cpusets ?
1.5 What does notify_on_release do ?
1.6 What is memory_pressure ?
1.7 How do I use cpusets ?
1.7 What is memory spread ?
1.8 How do I use cpusets ?
2. Usage Examples and Syntax
2.1 Basic Usage
2.2 Adding/removing cpus
@ -317,7 +318,78 @@ the tasks in the cpuset, in units of reclaims attempted per second,
times 1000.
1.7 How do I use cpusets ?
1.7 What is memory spread ?
---------------------------
There are two boolean flag files per cpuset that control where the
kernel allocates pages for the file system buffers and related
in-kernel data structures. They are called 'memory_spread_page' and
'memory_spread_slab'.
If the per-cpuset boolean flag file 'memory_spread_page' is set, then
the kernel will spread the file system buffers (page cache) evenly
over all the nodes that the faulting task is allowed to use, instead
of preferring to put those pages on the node where the task is running.
If the per-cpuset boolean flag file 'memory_spread_slab' is set,
then the kernel will spread some file system related slab caches,
such as those for inodes and dentries, evenly over all the nodes that the
faulting task is allowed to use, instead of preferring to put those
pages on the node where the task is running.
The setting of these flags does not affect anonymous data segment or
stack segment pages of a task.
By default, both kinds of memory spreading are off, and memory
pages are allocated on the node local to where the task is running,
except perhaps as modified by the task's NUMA mempolicy or cpuset
configuration, so long as sufficient free memory pages are available.
When new cpusets are created, they inherit the memory spread settings
of their parent.
Setting memory spreading causes allocations for the affected page
or slab caches to ignore the task's NUMA mempolicy and be spread
instead. Tasks using mbind() or set_mempolicy() calls to set NUMA
mempolicies will not notice any change in these calls as a result of
their containing task's memory spread settings. If memory spreading
is turned off, then the currently specified NUMA mempolicy once again
applies to memory page allocations.
Both 'memory_spread_page' and 'memory_spread_slab' are boolean flag
files. By default they contain "0", meaning that the feature is off
for that cpuset. If a "1" is written to that file, then that turns
the named feature on.
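As a concrete illustration, turning a feature on is just a one-byte write to the flag file. The minimal C sketch below assumes the cpuset file system is mounted at /dev/cpuset and that a cpuset named 'bigjob' already exists; both names are examples, not requirements.

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

/* Sketch: enable 'memory_spread_page' for an example cpuset.  The mount
   point /dev/cpuset and the cpuset name 'bigjob' are assumptions. */
int main(void)
{
        int fd = open("/dev/cpuset/bigjob/memory_spread_page", O_WRONLY);

        if (fd < 0) {
                perror("open");
                return 1;
        }
        if (write(fd, "1", 1) != 1) {   /* "1" turns the feature on, "0" off */
                perror("write");
                close(fd);
                return 1;
        }
        close(fd);
        return 0;
}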
The implementation is simple.
Setting the flag 'memory_spread_page' turns on a per-process flag
PF_SPREAD_PAGE for each task that is in that cpuset or subsequently
joins that cpuset. The page allocation calls for the page cache
are modified to perform an inline check for this PF_SPREAD_PAGE task
flag, and if set, a call to a new routine cpuset_mem_spread_node()
returns the node to prefer for the allocation.
Similarly, setting 'memory_spread_slab' turns on the flag
PF_SPREAD_SLAB, and appropriately marked slab caches will allocate
pages from the node returned by cpuset_mem_spread_node().
The cpuset_mem_spread_node() routine is also simple. It uses the
value of a per-task rotor cpuset_mem_spread_rotor to select the next
node in the current task's mems_allowed to prefer for the allocation.
This memory placement policy is also known (in other contexts) as
round-robin or interleave.
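The rotor behaviour can be pictured with a few lines of plain C. The sketch below is not the kernel implementation; it merely models a per-task rotor walking a small mask of allowed nodes to show the round-robin effect.

#include <stdio.h>

#define MAX_NODES 8

/* Toy model of the per-task rotor: pick the next allowed node. */
struct task_model {
        unsigned int mems_allowed;      /* bit n set => node n allowed */
        unsigned int spread_rotor;      /* models cpuset_mem_spread_rotor */
};

static int mem_spread_node(struct task_model *t)
{
        unsigned int i;

        for (i = 1; i <= MAX_NODES; i++) {
                unsigned int node = (t->spread_rotor + i) % MAX_NODES;

                if (t->mems_allowed & (1u << node)) {
                        t->spread_rotor = node;
                        return node;
                }
        }
        return -1;                      /* no node allowed */
}

int main(void)
{
        struct task_model t = { .mems_allowed = 0x0b, .spread_rotor = 0 };
        int i;

        /* nodes 0, 1 and 3 allowed: successive allocations rotate over them */
        for (i = 0; i < 6; i++)
                printf("allocation %d prefers node %d\n", i, mem_spread_node(&t));
        return 0;
}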
This policy can provide substantial improvements for jobs that need
to place thread local data on the corresponding node, but that need
to access large file system data sets that need to be spread across
the several nodes in the job's cpuset in order to fit. Without this
policy, especially for jobs that might have one thread reading in the
data set, the memory allocation across the nodes in the job's cpuset
can become very uneven.
1.8 How do I use cpusets ?
--------------------------
In order to minimize the impact of cpusets on critical kernel

View File

@ -116,6 +116,17 @@ Who: Harald Welte <laforge@netfilter.org>
---------------------------
What: remove EXPORT_SYMBOL(kernel_thread)
When: August 2006
Files: arch/*/kernel/*_ksyms.c
Why: kernel_thread is a low-level implementation detail. Drivers should
use the <linux/kthread.h> API instead, which shields them from
implementation details and provides a higher-level interface that
prevents bugs and code duplication
Who: Christoph Hellwig <hch@lst.de>
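For reference, a minimal sketch of the recommended <linux/kthread.h> usage is shown below; the thread and symbol names are made up for illustration and error handling is trimmed.

#include <linux/module.h>
#include <linux/kthread.h>
#include <linux/delay.h>
#include <linux/err.h>

static struct task_struct *example_thread;

static int example_fn(void *data)
{
        while (!kthread_should_stop())
                msleep_interruptible(1000);     /* periodic work goes here */
        return 0;
}

static int __init example_init(void)
{
        example_thread = kthread_run(example_fn, NULL, "example_worker");
        return IS_ERR(example_thread) ? PTR_ERR(example_thread) : 0;
}

static void __exit example_exit(void)
{
        kthread_stop(example_thread);
}

module_init(example_init);
module_exit(example_exit);
MODULE_LICENSE("GPL");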
---------------------------
What: EXPORT_SYMBOL(lookup_hash)
When: January 2006
Why: Too low-level interface. Use lookup_one_len or lookup_create instead.
@ -158,13 +169,6 @@ Who: Adrian Bunk <bunk@stusta.de>
---------------------------
What: Legacy /proc/pci interface (PCI_LEGACY_PROC)
When: March 2006
Why: deprecated since 2.5.53 in favor of lspci(8)
Who: Adrian Bunk <bunk@stusta.de>
---------------------------
What: pci_module_init(driver)
When: January 2007
Why: Is replaced by pci_register_driver(pci_driver).
@ -181,6 +185,17 @@ Who: Jean Delvare <khali@linux-fr.org>
---------------------------
What: remove EXPORT_SYMBOL(tasklist_lock)
When: August 2006
Files: kernel/fork.c
Why: tasklist_lock protects the kernel internal task list. Modules have
no business looking at it, and all instances in drivers have been due
to use of too low-level APIs. Having this symbol exported prevents
moving to more scalable locking schemes for the task list.
Who: Christoph Hellwig <hch@lst.de>
---------------------------
What: mount/umount uevents
When: February 2007
Why: These events are not correct, and do not properly let userspace know

View File

@ -457,6 +457,11 @@ ChangeLog
Note, a technical ChangeLog aimed at kernel hackers is in fs/ntfs/ChangeLog.
2.1.27:
- Implement page migration support so the kernel can move memory used
by NTFS files and directories around for management purposes.
- Add support for writing to sparse files created with Windows XP SP2.
- Many minor improvements and bug fixes.
2.1.26:
- Implement support for sector sizes above 512 bytes (up to the maximum
supported by NTFS which is 4096 bytes).

View File

@ -18,6 +18,10 @@ Supported chips:
Prefix: 'w83637hf'
Addresses scanned: ISA address retrieved from Super I/O registers
Datasheet: http://www.winbond.com/PDF/sheet/w83637hf.pdf
* Winbond W83687THF
Prefix: 'w83687thf'
Addresses scanned: ISA address retrieved from Super I/O registers
Datasheet: Provided by Winbond on request
Authors:
Frodo Looijaard <frodol@dds.nl>,

View File

@ -36,6 +36,11 @@ Module parameters
Use 'init=0' to bypass initializing the chip.
Try this if your computer crashes when you load the module.
* reset int
(default 0)
The driver used to reset the chip on load, but no longer does. Use
'reset=1' to restore the old behavior. Report if you need to do this.
force_subclients=bus,caddr,saddr,saddr
This is used to force the i2c addresses for subclients of
a certain chip. Typical usage is `force_subclients=0,0x2d,0x4a,0x4b'
@ -123,6 +128,25 @@ When an alarm goes off, you can be warned by a beeping signal through
your computer speaker. It is possible to enable all beeping globally,
or only the beeping for some alarms.
Individual alarm and beep bits:
0x000001: in0
0x000002: in1
0x000004: in2
0x000008: in3
0x000010: temp1
0x000020: temp2 (+temp3 on W83781D)
0x000040: fan1
0x000080: fan2
0x000100: in4
0x000200: in5
0x000400: in6
0x000800: fan3
0x001000: chassis
0x002000: temp3 (W83782D and W83627HF only)
0x010000: in7 (W83782D and W83627HF only)
0x020000: in8 (W83782D and W83627HF only)
If an alarm triggers, it will remain triggered until the hardware register
is read at least once. This means that the cause for the alarm may
already have disappeared! Note that in the current implementation, all
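For illustration, decoding such an alarm value is a simple bit test against the table above; the tiny userland helper below mirrors that table (chip-specific bits kept in the names).

#include <stdio.h>

static const struct { unsigned int bit; const char *name; } alarm_bits[] = {
        { 0x000001, "in0" },   { 0x000002, "in1" },  { 0x000004, "in2" },
        { 0x000008, "in3" },   { 0x000010, "temp1" },
        { 0x000020, "temp2 (+temp3 on W83781D)" },
        { 0x000040, "fan1" },  { 0x000080, "fan2" },
        { 0x000100, "in4" },   { 0x000200, "in5" },  { 0x000400, "in6" },
        { 0x000800, "fan3" },  { 0x001000, "chassis" },
        { 0x002000, "temp3 (W83782D and W83627HF only)" },
        { 0x010000, "in7 (W83782D and W83627HF only)" },
        { 0x020000, "in8 (W83782D and W83627HF only)" },
};

int main(void)
{
        unsigned int alarms = 0x000041;         /* example value: in0 and fan1 */
        unsigned int i;

        for (i = 0; i < sizeof(alarm_bits) / sizeof(alarm_bits[0]); i++)
                if (alarms & alarm_bits[i].bit)
                        printf("alarm: %s\n", alarm_bits[i].name);
        return 0;
}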

View File

@ -4,7 +4,7 @@ Supported adapters:
* Intel 82371AB PIIX4 and PIIX4E
* Intel 82443MX (440MX)
Datasheet: Publicly available at the Intel website
* ServerWorks OSB4, CSB5 and CSB6 southbridges
* ServerWorks OSB4, CSB5, CSB6 and HT-1000 southbridges
Datasheet: Only available via NDA from ServerWorks
* Standard Microsystems (SMSC) SLC90E66 (Victory66) southbridge
Datasheet: Publicly available at the SMSC website http://www.smsc.com

View File

@ -6,9 +6,10 @@ Module Parameters
-----------------
* base: int
Base addresses for the ACCESS.bus controllers
Base addresses for the ACCESS.bus controllers on SCx200 and SC1100 devices
Description
-----------
Enable the use of the ACCESS.bus controllers of a SCx200 processor.
Enable the use of the ACCESS.bus controller on the Geode SCx200 and
SC1100 processors and the CS5535 and CS5536 Geode companion devices.

View File

@ -49,6 +49,7 @@ restrictions referred to are that the relevant option is valid if:
MCA MCA bus support is enabled.
MDA MDA console support is enabled.
MOUSE Appropriate mouse support is enabled.
MSI Message Signaled Interrupts (PCI).
MTD MTD support is enabled.
NET Appropriate network support is enabled.
NUMA NUMA support is enabled.
@ -1008,7 +1009,9 @@ running once the system is up.
noexec=on: enable non-executable mappings (default)
noexec=off: disable non-executable mappings
nofxsr [BUGS=IA-32]
nofxsr [BUGS=IA-32] Disables x86 floating point extended
register save and restore. The kernel will only save
legacy floating-point registers on task switch.
nohlt [BUGS=ARM]
@ -1053,6 +1056,8 @@ running once the system is up.
nosbagart [IA-64]
nosep [BUGS=IA-32] Disables x86 SYSENTER/SYSEXIT support.
nosmp [SMP] Tells an SMP kernel to act as a UP kernel.
nosync [HW,M68K] Disables sync negotiation for all devices.
@ -1122,6 +1127,11 @@ running once the system is up.
pas16= [HW,SCSI]
See header of drivers/scsi/pas16.c.
pause_on_oops=
Halt all CPUs after the first oops has been printed for
the specified number of seconds. This is to be used if
your oopses keep scrolling off the screen.
pcbit= [HW,ISDN]
pcd. [PARIDE]
@ -1143,6 +1153,9 @@ running once the system is up.
Mechanism 2.
nommconf [IA-32,X86_64] Disable use of MMCONFIG for PCI
Configuration
nomsi [MSI] If the PCI_MSI kernel config parameter is
enabled, this kernel boot option can be used to
disable the use of MSI interrupts system-wide.
nosort [IA-32] Don't sort PCI devices according to
order given by the PCI BIOS. This sorting is
done to get a device order compatible with

View File

@ -109,6 +109,22 @@ Examples:
cycle through the port range.
pgset "udp_dst_max 9" set UDP destination port max.
pgset "mpls 0001000a,0002000a,0000000a" set MPLS labels (in this example
outer label=16,middle label=32,
inner label=0 (IPv4 NULL)) Note that
there must be no spaces between the
arguments. Leading zeros are required.
Do not set the bottom of stack bit,
that's done automatically. If you do
set the bottom of stack bit, that
indicates that you want to randomly
generate that address and the flag
MPLS_RND will be turned on. You
can have any mix of random and fixed
labels in the label stack (see the example below).
pgset "mpls 0" turn off mpls (or any invalid argument works too!)
pgset stop aborts injection. Also, ^C aborts generator.
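To make the 32-bit label words above less opaque, here is a small illustrative helper (not part of pktgen) that packs a standard MPLS label stack entry -- label(20) | EXP(3) | bottom-of-stack(1) | TTL(8) -- with the bottom-of-stack bit left clear, and prints it in the zero-padded form pgset expects.

#include <stdio.h>

/* Pack one MPLS label stack entry; pktgen wants the S bit left clear. */
static unsigned int mpls_lse(unsigned int label, unsigned int exp,
                             unsigned int ttl)
{
        return ((label & 0xfffff) << 12) | ((exp & 7) << 9) | (ttl & 0xff);
}

int main(void)
{
        /* reproduces the documented example: labels 16, 32 and 0, TTL 10 */
        printf("pgset \"mpls %08x,%08x,%08x\"\n",
               mpls_lse(16, 0, 10), mpls_lse(32, 0, 10), mpls_lse(0, 0, 10));
        return 0;
}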
@ -167,6 +183,8 @@ pkt_size
min_pkt_size
max_pkt_size
mpls
udp_src_min
udp_src_max
@ -211,4 +229,4 @@ Grant Grundler for testing on IA-64 and parisc, Harald Welte, Lennert Buytenhek
Stephen Hemminger, Andi Kleen, Dave Miller and many others.
Good luck with the linux net-development.
Good luck with the linux net-development.

View File

@ -3,6 +3,7 @@ Mounting the root filesystem via NFS (nfsroot)
Written 1996 by Gero Kuhlmann <gero@gkminix.han.de>
Updated 1997 by Martin Mares <mj@atrey.karlin.mff.cuni.cz>
Updated 2006 by Nico Schottelius <nico-kernel-nfsroot@schottelius.org>
@ -168,7 +169,6 @@ depend on what facilities are available:
root. If it got a BOOTP answer the directory name in that answer
is used.
3.2) Using LILO
When using LILO you can specify all necessary command line
parameters with the 'append=' command in the LILO configuration
@ -177,7 +177,11 @@ depend on what facilities are available:
LILO and its 'append=' command please refer to the LILO
documentation.
3.3) Using loadlin
3.3) Using GRUB
When you use GRUB, you simply append the parameters after the kernel
specification: "kernel <kernel> <parameters>" (without the quotes).
3.4) Using loadlin
When you want to boot Linux from a DOS command prompt without
having a local hard disk to mount as root, you can use loadlin.
I was told that it works, but haven't used it myself yet. In
@ -185,7 +189,7 @@ depend on what facilities are available:
lar to how LILO is doing it. Please refer to the loadlin docu-
mentation for further information.
3.4) Using a boot ROM
3.5) Using a boot ROM
This is probably the most elegant way of booting a diskless
client. With a boot ROM the kernel gets loaded using the TFTP
protocol. As far as I know, no commercial boot ROMs yet
@ -194,6 +198,13 @@ depend on what facilities are available:
and its mirrors. They are called 'netboot-nfs' and 'etherboot'.
Both contain everything you need to boot a diskless Linux client.
3.6) Using pxelinux
Using pxelinux you specify the kernel you built with
"kernel <relative-path-below /tftpboot>". The nfsroot parameters
are passed to the kernel by adding them to the "append" line.
You may also want to fine-tune the console output,
see Documentation/serial-console.txt for serial console help.

View File

@ -17,6 +17,11 @@ Some warnings, first.
* but it will probably only crash.
*
* (*) suspend/resume support is needed to make it safe.
*
* If you have any filesystems on USB devices mounted before suspend,
* they won't be accessible after resume and you may lose data, as though
* you had unplugged the USB devices with mounted filesystems on them
* (see the FAQ below for details).
You need to append resume=/dev/your_swap_partition to kernel command
line. Then you suspend by
@ -27,19 +32,18 @@ echo shutdown > /sys/power/disk; echo disk > /sys/power/state
echo platform > /sys/power/disk; echo disk > /sys/power/state
. If you have SATA disks, you'll need recent kernels with SATA suspend
support. For suspend and resume to work, make sure your disk drivers
are built into the kernel -- not modules. [There's a way to make
suspend/resume work with modular disk drivers, see FAQ, but you
probably should not do that.]
If you want to limit the suspend image size to N bytes, do
echo N > /sys/power/image_size
before suspend (it is limited to 500 MB by default).
Encrypted suspend image:
------------------------
If you want to store your suspend image encrypted with a temporary
key to prevent data gathering after resume, you must compile
crypto and the aes algorithm into the kernel - modules won't work
as they cannot be loaded at resume time.
Article about goals and implementation of Software Suspend for Linux
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@ -333,4 +337,37 @@ init=/bin/bash, then swapon and starting suspend sequence manually
usually does the trick. Then it is a good idea to try with the latest
vanilla kernel.
Q: How can distributions ship a swsusp-supporting kernel with modular
disk drivers (especially SATA)?
A: Well, it can be done, load the drivers, then do echo into
/sys/power/disk/resume file from initrd. Be sure not to mount
anything, not even read-only mount, or you are going to lose your
data.
Q: How do I make suspend more verbose?
A: If you want to see any non-error kernel messages on the virtual
terminal the kernel switches to during suspend, you have to set the
kernel console loglevel to at least 5, for example by doing
echo 5 > /proc/sys/kernel/printk
Q: Is it true that if I have a mounted filesystem on a USB device and
I suspend to disk, I can lose data unless the filesystem has been mounted
with "sync"?
A: That's right. It depends on your hardware, and it could be true even for
suspend-to-RAM. In fact, even with "-o sync" you can lose data if your
programs have information in buffers they haven't written out to disk.
If you're lucky, your hardware will support low-power modes for USB
controllers while the system is asleep. Lots of hardware doesn't,
however. Shutting off the power to a USB controller is equivalent to
unplugging all the attached devices.
Remember that it's always a bad idea to unplug a disk drive containing a
mounted filesystem. With USB that's true even when your system is asleep!
The safest thing is to unmount all USB-based filesystems before suspending
and remount them after resuming.

View File

@ -0,0 +1,149 @@
Documentation for userland software suspend interface
(C) 2006 Rafael J. Wysocki <rjw@sisk.pl>
First, the warnings at the beginning of swsusp.txt still apply.
Second, you should read the FAQ in swsusp.txt _now_ if you have not
done it already.
Now, to use the userland interface for software suspend you need special
utilities that will read/write the system memory snapshot from/to the
kernel. Such utilities are available, for example, from
<http://www.sisk.pl/kernel/utilities/suspend>. You may want to have
a look at them if you are going to develop your own suspend/resume
utilities.
The interface consists of a character device providing the open(),
release(), read(), and write() operations as well as several ioctl()
commands defined in kernel/power/power.h. The major and minor
numbers of the device are, respectively, 10 and 231, and they can
be read from /sys/class/misc/snapshot/dev.
The device can be open either for reading or for writing. If open for
reading, it is considered to be in the suspend mode. Otherwise it is
assumed to be in the resume mode. The device cannot be open for reading
and writing. It is also impossible to have the device open more than once
at a time.
The ioctl() commands recognized by the device are:
SNAPSHOT_FREEZE - freeze user space processes (the current process is
not frozen); this is required for SNAPSHOT_ATOMIC_SNAPSHOT
and SNAPSHOT_ATOMIC_RESTORE to succeed
SNAPSHOT_UNFREEZE - thaw user space processes frozen by SNAPSHOT_FREEZE
SNAPSHOT_ATOMIC_SNAPSHOT - create a snapshot of the system memory; the
last argument of ioctl() should be a pointer to an int variable,
the value of which will indicate whether the call returned after
creating the snapshot (1) or after restoring the system memory state
from it (0) (after resume the system finds itself finishing the
SNAPSHOT_ATOMIC_SNAPSHOT ioctl() again); after the snapshot
has been created the read() operation can be used to transfer
it out of the kernel
SNAPSHOT_ATOMIC_RESTORE - restore the system memory state from the
uploaded snapshot image; before calling it you should transfer
the system memory snapshot back to the kernel using the write()
operation; this call will not succeed if the snapshot
image is not available to the kernel
SNAPSHOT_FREE - free memory allocated for the snapshot image
SNAPSHOT_SET_IMAGE_SIZE - set the preferred maximum size of the image
(the kernel will do its best to ensure the image size will not exceed
this number, but if it turns out to be impossible, the kernel will
create the smallest image possible)
SNAPSHOT_AVAIL_SWAP - return the amount of available swap in bytes (the last
argument should be a pointer to an unsigned int variable that will
contain the result if the call is successful).
SNAPSHOT_GET_SWAP_PAGE - allocate a swap page from the resume partition
(the last argument should be a pointer to a loff_t variable that
will contain the swap page offset if the call is successful)
SNAPSHOT_FREE_SWAP_PAGES - free all swap pages allocated with
SNAPSHOT_GET_SWAP_PAGE
SNAPSHOT_SET_SWAP_FILE - set the resume partition (the last ioctl() argument
should specify the device's major and minor numbers in the old
two-byte format, as returned by the stat() function in the .st_rdev
member of the stat structure); it is recommended to always use this
call, because the code to set the resume partition could be removed from
future kernels
The device's read() operation can be used to transfer the snapshot image from
the kernel. It has the following limitations:
- you cannot read() more than one virtual memory page at a time
- read()s across page boundaries are impossible (i.e. if you read() 1/2 of
a page in the previous call, you will only be able to read()
_at_ _most_ 1/2 of the page in the next call)
The device's write() operation is used for uploading the system memory snapshot
into the kernel. It has the same limitations as the read() operation.
The release() operation frees all memory allocated for the snapshot image
and all swap pages allocated with SNAPSHOT_GET_SWAP_PAGE (if any).
Thus it is not necessary to use either SNAPSHOT_FREE or
SNAPSHOT_FREE_SWAP_PAGES before closing the device (in fact it will also
unfreeze user space processes frozen by SNAPSHOT_UNFREEZE if they are
still frozen when the device is being closed).
Currently it is assumed that the userland utilities reading/writing the
snapshot image from/to the kernel will use a swap partition, called the resume
partition, as storage space. However, this is not really required, as they
can use, for example, a special (blank) suspend partition or a file on a partition
that is unmounted before SNAPSHOT_ATOMIC_SNAPSHOT and mounted afterwards.
These utilities SHOULD NOT make any assumptions regarding the ordering of
data within the snapshot image, except for the image header that MAY be
assumed to start with a swsusp_info structure, as specified in
kernel/power/power.h. This structure MAY be used by the userland utilities
to obtain some information about the snapshot image, such as the size
of the snapshot image, including the metadata and the header itself,
contained in the .size member of swsusp_info.
The snapshot image MUST be written to the kernel unaltered (i.e. all of the image
data, metadata and header MUST be written in _exactly_ the same amount, form
and order in which they have been read). Otherwise, the behavior of the
resumed system may be totally unpredictable.
While executing SNAPSHOT_ATOMIC_RESTORE the kernel checks if the
structure of the snapshot image is consistent with the information stored
in the image header. If any inconsistencies are detected,
SNAPSHOT_ATOMIC_RESTORE will not succeed. Still, this is not a fool-proof
mechanism and the userland utilities using the interface SHOULD use additional
means, such as checksums, to ensure the integrity of the snapshot image.
The suspending and resuming utilities MUST lock themselves in memory,
preferably using mlockall(), before calling SNAPSHOT_FREEZE.
The suspending utility MUST check the value stored by SNAPSHOT_ATOMIC_SNAPSHOT
in the memory location pointed to by the last argument of ioctl() and proceed
in accordance with it:
1. If the value is 1 (i.e. the system memory snapshot has just been
created and the system is ready for saving it):
(a) The suspending utility MUST NOT close the snapshot device
_unless_ the whole suspend procedure is to be cancelled, in
which case, if the snapshot image has already been saved, the
suspending utility SHOULD destroy it, preferably by zapping
its header. If the suspend is not to be cancelled, the
system MUST be powered off or rebooted after the snapshot
image has been saved.
(b) The suspending utility SHOULD NOT attempt to perform any
file system operations (including reads) on the file systems
that were mounted before SNAPSHOT_ATOMIC_SNAPSHOT has been
called. However, it MAY mount a file system that was not
mounted at that time and perform some operations on it (e.g.
use it for saving the image).
2. If the value is 0 (i.e. the system state has just been restored from
the snapshot image), the suspending utility MUST close the snapshot
device. Afterwards it will be treated as a regular userland process,
so it need not exit.
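Putting the pieces described so far together, a compressed sketch of the suspend side might look as follows. It assumes the SNAPSHOT_* ioctl definitions are taken from kernel/power/power.h and that /dev/snapshot is a device node created with the major and minor numbers quoted above (the node name itself is an assumption); a real utility would save each page to the resume partition instead of merely counting pages.

#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include "power.h"              /* SNAPSHOT_* ioctls (kernel/power/power.h) */

int main(void)
{
        char page[4096];        /* read() transfers at most one page at a time */
        int in_suspend = 0;
        long pages = 0;
        int dev;

        mlockall(MCL_CURRENT | MCL_FUTURE);     /* must be locked in memory */

        dev = open("/dev/snapshot", O_RDONLY);  /* read-only => suspend mode */
        if (dev < 0)
                return 1;

        if (ioctl(dev, SNAPSHOT_FREEZE, 0))     /* freeze user space first */
                goto out;

        if (ioctl(dev, SNAPSHOT_ATOMIC_SNAPSHOT, &in_suspend))
                goto thaw;

        if (!in_suspend) {
                /* 0: we are running again after a successful resume */
                close(dev);
                return 0;
        }

        /* 1: the snapshot was just created, transfer it out of the kernel */
        while (read(dev, page, sizeof(page)) > 0)
                pages++;
        fprintf(stderr, "snapshot image: %ld pages\n", pages);
        /*
         * A real utility would now save the image to the resume partition
         * and power the machine off; this sketch simply cancels the suspend,
         * so it thaws user space and closes the device instead.
         */
thaw:
        ioctl(dev, SNAPSHOT_UNFREEZE, 0);
out:
        close(dev);
        return 0;
}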
The resuming utility SHOULD NOT attempt to mount any file systems that could
be mounted before suspend and SHOULD NOT attempt to perform any operations
involving such file systems.
For details, please refer to the source code.

View File

@ -1,7 +1,7 @@
Video issues with S3 resume
~~~~~~~~~~~~~~~~~~~~~~~~~~~
2003-2005, Pavel Machek
2003-2006, Pavel Machek
During S3 resume, hardware needs to be reinitialized. For most
devices, this is easy, and kernel driver knows how to do
@ -15,6 +15,27 @@ run normally so video card is normally initialized. It should not be
problem for S1 standby, because hardware should retain its state over
that.
We either have to run video BIOS during early resume, or interpret it
using vbetool later, or maybe nothing is necessary on a particular
system because video state is preserved. Unfortunately different
methods work on different systems, and no known method suits all of
them.
A userland application called s2ram has been developed; it contains a
long whitelist of systems, and automatically selects a working method
for a given system. It can be downloaded from CVS at
www.sf.net/projects/suspend . If you get a system that is not in the
whitelist, please try to find a working solution, and submit a
whitelist entry so that the work does not need to be repeated.
Currently, the VBE_SAVE method (6 below) works on most
systems. Unfortunately, vbetool only runs after userland is resumed,
so it makes debugging of early resume problems
hard/impossible. Methods that do not rely on userland are preferable.
Details
~~~~~~~
There are a few types of systems where video works after S3 resume:
(1) systems where video state is preserved over S3.
@ -104,6 +125,7 @@ HP NX7000 ??? (*)
HP Pavilion ZD7000 vbetool post needed, need open-source nv driver for X
HP Omnibook XE3 athlon version none (1)
HP Omnibook XE3GC none (1), video is S3 Savage/IX-MV
HP Omnibook 5150 none (1), (S1 also works OK)
IBM TP T20, model 2647-44G none (1), video is S3 Inc. 86C270-294 Savage/IX-MV, vesafb gets "interesting" but X work.
IBM TP A31 / Type 2652-M5G s3_mode (3) [works ok with BIOS 1.04 2002-08-23, but not at all with BIOS 1.11 2004-11-05 :-(]
IBM TP R32 / Type 2658-MMG none (1)
@ -120,18 +142,24 @@ IBM ThinkPad T42p (2373-GTG) s3_bios (2)
IBM TP X20 ??? (*)
IBM TP X30 s3_bios (2)
IBM TP X31 / Type 2672-XXH none (1), use radeontool (http://fdd.com/software/radeon/) to turn off backlight.
IBM TP X32 none (1), but backlight is on and video is trashed after long suspend
IBM TP X32 none (1), but backlight is on and video is trashed after long suspend. s3_bios,s3_mode (4) works too. Perhaps that gets better results?
IBM Thinkpad X40 Type 2371-7JG s3_bios,s3_mode (4)
IBM TP 600e none(1), but a switch to console and back to X is needed
Medion MD4220 ??? (*)
Samsung P35 vbetool needed (6)
Sharp PC-AR10 (ATI rage) none (1)
Sharp PC-AR10 (ATI rage) none (1), backlight does not switch off
Sony Vaio PCG-C1VRX/K s3_bios (2)
Sony Vaio PCG-F403 ??? (*)
Sony Vaio PCG-GRT995MP none (1), works with 'nv' X driver
Sony Vaio PCG-GR7/K none (1), but needs radeonfb, use radeontool (http://fdd.com/software/radeon/) to turn off backlight.
Sony Vaio PCG-N505SN ??? (*)
Sony Vaio vgn-s260 X or boot-radeon can init it (5)
Sony Vaio vgn-S580BH vga=normal, but suspend from X. Console will be blank unless you return to X.
Sony Vaio vgn-FS115B s3_bios (2),s3_mode (4)
Toshiba Libretto L5 none (1)
Toshiba Satellite 4030CDT s3_mode (3)
Toshiba Satellite 4080XCDT s3_mode (3)
Toshiba Portege 3020CT s3_mode (3)
Toshiba Satellite 4030CDT s3_mode (3) (S1 also works OK)
Toshiba Satellite 4080XCDT s3_mode (3) (S1 also works OK)
Toshiba Satellite 4090XCDT ??? (*)
Toshiba Satellite P10-554 s3_bios,s3_mode (4)(****)
Toshiba M30 (2) xor X with nvidia driver using internal AGP
@ -151,39 +179,3 @@ Asus A7V8X nVidia RIVA TNT2 model 64 s3_bios,s3_mode (4)
(***) To be tested with a newer kernel.
(****) Not with SMP kernel, UP only.
VBEtool details
~~~~~~~~~~~~~~~
(with thanks to Carl-Daniel Hailfinger)
First, boot into X and run the following script ONCE:
#!/bin/bash
statedir=/root/s3/state
mkdir -p $statedir
chvt 2
sleep 1
vbetool vbestate save >$statedir/vbe
To suspend and resume properly, call the following script as root:
#!/bin/bash
statedir=/root/s3/state
curcons=`fgconsole`
fuser /dev/tty$curcons 2>/dev/null|xargs ps -o comm= -p|grep -q X && chvt 2
cat /dev/vcsa >$statedir/vcsa
sync
echo 3 >/proc/acpi/sleep
sync
vbetool post
vbetool vbestate restore <$statedir/vbe
cat $statedir/vcsa >/dev/vcsa
rckbd restart
chvt $[curcons%6+1]
chvt $curcons
Unless you change your graphics card or other hardware configuration,
the state once saved will be OK for every resume afterwards.
NOTE: The "rckbd restart" command may be different for your
distribution. Simply replace it with the command you would use to
set the fonts on screen.

View File

@ -1365,6 +1365,78 @@ platforms are moved over to use the flattened-device-tree model.
};
g) Freescale SOC SEC Security Engines
Required properties:
- device_type : Should be "crypto"
- model : Model of the device. Should be "SEC1" or "SEC2"
- compatible : Should be "talitos"
- reg : Offset and length of the register set for the device
- interrupts : <a b> where a is the interrupt number and b is a
field that represents an encoding of the sense and level
information for the interrupt. This should be encoded based on
the information in section 2) depending on the type of interrupt
controller you have.
- interrupt-parent : the phandle for the interrupt controller that
services interrupts for this device.
- num-channels : An integer representing the number of channels
available.
- channel-fifo-len : An integer representing the number of
descriptor pointers each channel fetch fifo can hold.
- exec-units-mask : The bitmask representing what execution units
(EUs) are available. It's a single 32 bit cell. EU information
should be encoded following the SEC's Descriptor Header Dword
EU_SEL0 field documentation, i.e. as follows:
bit 0 = reserved - should be 0
bit 1 = set if SEC has the ARC4 EU (AFEU)
bit 2 = set if SEC has the DES/3DES EU (DEU)
bit 3 = set if SEC has the message digest EU (MDEU)
bit 4 = set if SEC has the random number generator EU (RNG)
bit 5 = set if SEC has the public key EU (PKEU)
bit 6 = set if SEC has the AES EU (AESU)
bit 7 = set if SEC has the Kasumi EU (KEU)
bits 8 through 31 are reserved for future SEC EUs.
- descriptor-types-mask : The bitmask representing what descriptors
are available. It's a single 32 bit cell. Descriptor type
information should be encoded following the SEC's Descriptor
Header Dword DESC_TYPE field documentation, i.e. as follows:
bit 0 = set if SEC supports the aesu_ctr_nonsnoop desc. type
bit 1 = set if SEC supports the ipsec_esp descriptor type
bit 2 = set if SEC supports the common_nonsnoop desc. type
bit 3 = set if SEC supports the 802.11i AES ccmp desc. type
bit 4 = set if SEC supports the hmac_snoop_no_afeu desc. type
bit 5 = set if SEC supports the srtp descriptor type
bit 6 = set if SEC supports the non_hmac_snoop_no_afeu desc.type
bit 7 = set if SEC supports the pkeu_assemble descriptor type
bit 8 = set if SEC supports the aesu_key_expand_output desc.type
bit 9 = set if SEC supports the pkeu_ptmul descriptor type
bit 10 = set if SEC supports the common_nonsnoop_afeu desc. type
bit 11 = set if SEC supports the pkeu_ptadd_dbl descriptor type
..and so on and so forth.
Example:
/* MPC8548E */
crypto@30000 {
device_type = "crypto";
model = "SEC2";
compatible = "talitos";
reg = <30000 10000>;
interrupts = <1d 3>;
interrupt-parent = <40000>;
num-channels = <4>;
channel-fifo-len = <24>;
exec-units-mask = <000000fe>;
descriptor-types-mask = <073f1127>;
};
More devices will be defined as this spec matures.

View File

@ -121,7 +121,7 @@ accomplished.
EEH must be enabled in the PHB's very early during the boot process,
and if a PCI slot is hot-plugged. The former is performed by
eeh_init() in arch/ppc64/kernel/eeh.c, and the later by
eeh_init() in arch/powerpc/platforms/pseries/eeh.c, and the later by
drivers/pci/hotplug/pSeries_pci.c calling in to the eeh.c code.
EEH must be enabled before a PCI scan of the device can proceed.
Current Power5 hardware will not work unless EEH is enabled;
@ -133,7 +133,7 @@ error. Given an arbitrary address, the routine
pci_get_device_by_addr() will find the pci device associated
with that address (if any).
The default include/asm-ppc64/io.h macros readb(), inb(), insb(),
The default include/asm-powerpc/io.h macros readb(), inb(), insb(),
etc. include a check to see if the i/o read returned all-0xff's.
If so, these make a call to eeh_dn_check_failure(), which in turn
asks the firmware if the all-ff's value is the sign of a true EEH
@ -143,11 +143,12 @@ seen in /proc/ppc64/eeh (subject to change). Normally, almost
all of these occur during boot, when the PCI bus is scanned, where
a large number of 0xff reads are part of the bus scan procedure.
If a frozen slot is detected, code in arch/ppc64/kernel/eeh.c will
print a stack trace to syslog (/var/log/messages). This stack trace
has proven to be very useful to device-driver authors for finding
out at what point the EEH error was detected, as the error itself
usually occurs slightly beforehand.
If a frozen slot is detected, code in
arch/powerpc/platforms/pseries/eeh.c will print a stack trace to
syslog (/var/log/messages). This stack trace has proven to be very
useful to device-driver authors for finding out at what point the EEH
error was detected, as the error itself usually occurs slightly
beforehand.
Next, it uses the Linux kernel notifier chain/work queue mechanism to
allow any interested parties to find out about the failure. Device

View File

@ -558,9 +558,9 @@ partitions.
The proper channel for reporting bugs is either through the Linux OS
distribution company that provided your OS or by posting issues to the
ppc64 development mailing list at:
PowerPC development mailing list at:
linuxppc64-dev@lists.linuxppc.org
linuxppc-dev@ozlabs.org
This request is to provide a documented and searchable public exchange
of the problems and solutions surrounding this driver for the benefit of

View File

@ -16,10 +16,12 @@ devices/
- 0.0.0000/0.0.0815/
- 0.0.0001/0.0.4711/
- 0.0.0002/
- 0.1.0000/0.1.1234/
...
In this example, device 0815 is accessed via subchannel 0, device 4711 via
subchannel 1, and subchannel 2 is a non-I/O subchannel.
In this example, device 0815 is accessed via subchannel 0 in subchannel set 0,
device 4711 via subchannel 1 in subchannel set 0, and subchannel 2 is a non-I/O
subchannel. Device 1234 is accessed via subchannel 0 in subchannel set 1.
You should address a ccw device via its bus id (e.g. 0.0.4711); the device can
be found under bus/ccw/devices/.
@ -97,7 +99,7 @@ is not available to the device driver.
Each driver should declare in a MODULE_DEVICE_TABLE into which CU types/models
and/or device types/models it is interested. This information can later be found
found in the struct ccw_device_id fields:
in the struct ccw_device_id fields:
struct ccw_device_id {
__u16 match_flags;
@ -208,6 +210,11 @@ Each ccwgroup device also provides an 'ungroup' attribute to destroy the device
again (only when offline). This is a generic ccwgroup mechanism (the driver does
not need to implement anything beyond normal removal routines).
A ccw device which is a member of a ccwgroup device carries a pointer to the
ccwgroup device in the driver_data of its device struct. This field must not be
touched by the driver - it should use the ccwgroup device's driver_data for its
private data.
To implement a ccwgroup driver, please refer to include/asm/ccwgroup.h. Keep in
mind that most drivers will need to implement both a ccwgroup and a ccw driver
(unless you have a meta ccw driver, like cu3088 for lcs and ctc).
@ -230,6 +237,8 @@ status - Can be 'online' or 'offline'.
a channel path the user knows to be online, but the machine hasn't
created a machine check for.
type - The physical type of the channel path.
3. System devices
-----------------

View File

@ -0,0 +1,31 @@
Kernel driver ds2482
====================
Supported chips:
* Maxim DS2482-100, Maxim DS2482-800
Prefix: 'ds2482'
Addresses scanned: None
Datasheets:
http://pdfserv.maxim-ic.com/en/ds/DS2482-100-DS2482S-100.pdf
http://pdfserv.maxim-ic.com/en/ds/DS2482-800-DS2482S-800.pdf
Author: Ben Gardner <bgardner@wabtec.com>
Description
-----------
The Maxim/Dallas Semiconductor DS2482 is an I2C device that provides
one (DS2482-100) or eight (DS2482-800) 1-wire busses.
General Remarks
---------------
Valid addresses are 0x18, 0x19, 0x1a, and 0x1b.
However, the device cannot be detected without writing to the i2c bus, so no
detection is done.
You should force the device address.
$ modprobe ds2482 force=0,0x18

View File

@ -534,7 +534,7 @@ S: Supported
BROADBAND PROCESSOR ARCHITECTURE
P: Arnd Bergmann
M: arnd@arndb.de
L: linuxppc64-dev@ozlabs.org
L: linuxppc-dev@ozlabs.org
W: http://linuxppc64.org
S: Supported
@ -1624,7 +1624,7 @@ P: Anton Blanchard
M: anton@samba.org
M: anton@au.ibm.com
W: http://linuxppc64.org
L: linuxppc64-dev@ozlabs.org
L: linuxppc-dev@ozlabs.org
S: Supported
LINUX SECURITY MODULE (LSM) FRAMEWORK
@ -2488,6 +2488,13 @@ M: kristen.c.accardi@intel.com
L: pcihpd-discuss@lists.sourceforge.net
S: Maintained
SECURE DIGITAL HOST CONTROLLER INTERFACE DRIVER
P: Pierre Ossman
M: drzeus-sdhci@drzeus.cx
L: sdhci-devel@list.drzeus.cx
W: http://mmc.drzeus.cx/wiki/Linux/Drivers/sdhci
S: Maintained
SKGE, SKY2 10/100/1000 GIGABIT ETHERNET DRIVERS
P: Stephen Hemminger
M: shemminger@osdl.org

View File

@ -517,6 +517,10 @@ else
CFLAGS += -fomit-frame-pointer
endif
ifdef CONFIG_UNWIND_INFO
CFLAGS += -fasynchronous-unwind-tables
endif
ifdef CONFIG_DEBUG_INFO
CFLAGS += -g
endif

View File

@ -85,7 +85,7 @@ void mainstone_leds_event(led_event_t evt)
break;
case led_green_on:
hw_led_state |= D21;;
hw_led_state |= D21;
break;
case led_green_off:
@ -93,7 +93,7 @@ void mainstone_leds_event(led_event_t evt)
break;
case led_amber_on:
hw_led_state |= D22;;
hw_led_state |= D22;
break;
case led_amber_off:
@ -101,7 +101,7 @@ void mainstone_leds_event(led_event_t evt)
break;
case led_red_on:
hw_led_state |= D23;;
hw_led_state |= D23;
break;
case led_red_off:

View File

@ -146,7 +146,7 @@ void s3c24xx_set_board(struct s3c24xx_board *b)
board = b;
if (b->clocks_count != 0) {
struct clk **ptr = b->clocks;;
struct clk **ptr = b->clocks;
for (i = b->clocks_count; i > 0; i--, ptr++)
s3c24xx_register_clock(*ptr);

View File

@ -2944,7 +2944,7 @@ static int cryptocop_ioctl_process(struct inode *inode, struct file *filp, unsig
int spdl_err;
/* Mark output pages dirty. */
spdl_err = set_page_dirty_lock(outpages[i]);
DEBUG(if (spdl_err)printk("cryptocop_ioctl_process: set_page_dirty_lock returned %d\n", spdl_err));
DEBUG(if (spdl_err < 0)printk("cryptocop_ioctl_process: set_page_dirty_lock returned %d\n", spdl_err));
}
for (i = 0; i < nooutpages; i++){
put_page(outpages[i]);

View File

@ -52,9 +52,8 @@ int show_interrupts(struct seq_file *p, void *v)
if (i == 0) {
seq_printf(p, " ");
for (j=0; j<NR_CPUS; j++)
if (cpu_online(j))
seq_printf(p, "CPU%d ",j);
for_each_online_cpu(j)
seq_printf(p, "CPU%d ",j);
seq_putc(p, '\n');
}
@ -67,9 +66,8 @@ int show_interrupts(struct seq_file *p, void *v)
#ifndef CONFIG_SMP
seq_printf(p, "%10u ", kstat_irqs(i));
#else
for (j = 0; j < NR_CPUS; j++)
if (cpu_online(j))
seq_printf(p, "%10u ", kstat_cpu(j).irqs[i]);
for_each_online_cpu(j)
seq_printf(p, "%10u ", kstat_cpu(j).irqs[i]);
#endif
seq_printf(p, " %14s", irq_desc[i].handler->typename);
seq_printf(p, " %s", action->name);

View File

@ -116,6 +116,7 @@
#include <asm/pgtable.h>
#include <asm/uaccess.h>
#include <asm/irq.h>
#include <asm/system.h>
#include <linux/module.h>
#include <linux/spinlock.h>
#include <linux/fs_struct.h>
@ -194,8 +195,6 @@ EXPORT_SYMBOL(enable_hlt);
*/
void (*pm_idle)(void);
extern void default_idle(void);
/*
* The idle thread. There's no useful work to be
* done, so just try to conserve power and have a

View File

@ -1406,7 +1406,7 @@ void gdbstub(int sigval)
__debug_frame->psr |= PSR_S;
__debug_regs->brr = (__debug_frame->tbr & TBR_TT) << 12;
__debug_regs->brr |= BRR_EB;
sigval = SIGXCPU;;
sigval = SIGXCPU;
}
LEDS(0x5002);

View File

@ -75,9 +75,8 @@ int show_interrupts(struct seq_file *p, void *v)
switch (i) {
case 0:
seq_printf(p, " ");
for (j = 0; j < NR_CPUS; j++)
if (cpu_online(j))
seq_printf(p, "CPU%d ",j);
for_each_online_cpu(j)
seq_printf(p, "CPU%d ",j);
seq_putc(p, '\n');
break;
@ -100,9 +99,8 @@ int show_interrupts(struct seq_file *p, void *v)
#ifndef CONFIG_SMP
seq_printf(p, "%10u ", kstat_irqs(i));
#else
for (j = 0; j < NR_CPUS; j++)
if (cpu_online(j))
seq_printf(p, "%10u ", kstat_cpu(j).irqs[i - 1]);
for_each_online_cpu(j)
seq_printf(p, "%10u ", kstat_cpu(j).irqs[i - 1]);
#endif
level = group->sources[ix]->level - frv_irq_levels;

View File

@ -54,7 +54,7 @@ asmlinkage void ret_from_fork(void);
* The idle loop on an H8/300..
*/
#if !defined(CONFIG_H8300H_SIM) && !defined(CONFIG_H8S_SIM)
void default_idle(void)
static void default_idle(void)
{
local_irq_disable();
if (!need_resched()) {
@ -65,7 +65,7 @@ void default_idle(void)
local_irq_enable();
}
#else
void default_idle(void)
static void default_idle(void)
{
cpu_relax();
}

View File

@ -80,6 +80,7 @@ config X86_VOYAGER
config X86_NUMAQ
bool "NUMAQ (IBM/Sequent)"
select SMP
select NUMA
help
This option is used for getting Linux to run on a (IBM/Sequent) NUMA
@ -400,6 +401,7 @@ choice
config NOHIGHMEM
bool "off"
depends on !X86_NUMAQ
---help---
Linux can use up to 64 Gigabytes of physical memory on x86 systems.
However, the address space of 32-bit x86 processors is only 4
@ -436,6 +438,7 @@ config NOHIGHMEM
config HIGHMEM4G
bool "4GB"
depends on !X86_NUMAQ
help
Select this if you have a 32-bit processor and between 1 and 4
gigabytes of physical RAM.
@ -503,10 +506,6 @@ config NUMA
default n if X86_PC
default y if (X86_NUMAQ || X86_SUMMIT)
# Need comments to help the hapless user trying to turn on NUMA support
comment "NUMA (NUMA-Q) requires SMP, 64GB highmem support"
depends on X86_NUMAQ && (!HIGHMEM64G || !SMP)
comment "NUMA (Summit) requires SMP, 64GB highmem support, ACPI"
depends on X86_SUMMIT && (!HIGHMEM64G || !ACPI)
@ -660,13 +659,18 @@ config BOOT_IOREMAP
default y
config REGPARM
bool "Use register arguments (EXPERIMENTAL)"
depends on EXPERIMENTAL
default n
bool "Use register arguments"
default y
help
Compile the kernel with -mregparm=3. This uses a different ABI
and passes the first three arguments of a function call in registers.
This will probably break binary only modules.
Compile the kernel with -mregparm=3. This instructs gcc to use
a more efficient function call ABI which passes the first three
arguments of a function call via registers, which results in denser
and faster code.
If this option is disabled, then the default ABI of passing
arguments via the stack is used.
If unsure, say Y.
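The effect of the ABI change can be illustrated with the matching gcc function attribute. The toy program below is not kernel code and only behaves as described when built as 32-bit x86 code; it simply shows the first three integer arguments travelling in registers rather than on the stack.

#include <stdio.h>

/* With -mregparm=3 (or this per-function attribute on i386), the first
   three integer arguments arrive in %eax, %edx and %ecx. */
static int __attribute__((regparm(3))) add3(int a, int b, int c)
{
        return a + b + c;
}

int main(void)
{
        printf("%d\n", add3(1, 2, 3));
        return 0;
}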
config SECCOMP
bool "Enable seccomp to safely compute untrusted bytecode"

View File

@ -31,6 +31,15 @@ config DEBUG_STACK_USAGE
This option will slow down process creation somewhat.
config STACK_BACKTRACE_COLS
int "Stack backtraces per line" if DEBUG_KERNEL
range 1 3
default 2
help
Selects how many stack backtrace entries per line to display.
This can save screen space when displaying traces.
comment "Page alloc debug is incompatible with Software Suspend on i386"
depends on DEBUG_KERNEL && SOFTWARE_SUSPEND

View File

@ -7,7 +7,7 @@ extra-y := head.o init_task.o vmlinux.lds
obj-y := process.o semaphore.o signal.o entry.o traps.o irq.o \
ptrace.o time.o ioport.o ldt.o setup.o i8259.o sys_i386.o \
pci-dma.o i386_ksyms.o i387.o dmi_scan.o bootflag.o \
quirks.o i8237.o topology.o
quirks.o i8237.o topology.o alternative.o
obj-y += cpu/
obj-y += timers/

View File

@ -0,0 +1,321 @@
#include <linux/module.h>
#include <linux/spinlock.h>
#include <linux/list.h>
#include <asm/alternative.h>
#include <asm/sections.h>
#define DEBUG 0
#if DEBUG
# define DPRINTK(fmt, args...) printk(fmt, args)
#else
# define DPRINTK(fmt, args...)
#endif
/* Use inline assembly to define this because the nops are defined
as inline assembly strings in the include files and we cannot
get them easily into strings. */
asm("\t.data\nintelnops: "
GENERIC_NOP1 GENERIC_NOP2 GENERIC_NOP3 GENERIC_NOP4 GENERIC_NOP5 GENERIC_NOP6
GENERIC_NOP7 GENERIC_NOP8);
asm("\t.data\nk8nops: "
K8_NOP1 K8_NOP2 K8_NOP3 K8_NOP4 K8_NOP5 K8_NOP6
K8_NOP7 K8_NOP8);
asm("\t.data\nk7nops: "
K7_NOP1 K7_NOP2 K7_NOP3 K7_NOP4 K7_NOP5 K7_NOP6
K7_NOP7 K7_NOP8);
extern unsigned char intelnops[], k8nops[], k7nops[];
static unsigned char *intel_nops[ASM_NOP_MAX+1] = {
NULL,
intelnops,
intelnops + 1,
intelnops + 1 + 2,
intelnops + 1 + 2 + 3,
intelnops + 1 + 2 + 3 + 4,
intelnops + 1 + 2 + 3 + 4 + 5,
intelnops + 1 + 2 + 3 + 4 + 5 + 6,
intelnops + 1 + 2 + 3 + 4 + 5 + 6 + 7,
};
static unsigned char *k8_nops[ASM_NOP_MAX+1] = {
NULL,
k8nops,
k8nops + 1,
k8nops + 1 + 2,
k8nops + 1 + 2 + 3,
k8nops + 1 + 2 + 3 + 4,
k8nops + 1 + 2 + 3 + 4 + 5,
k8nops + 1 + 2 + 3 + 4 + 5 + 6,
k8nops + 1 + 2 + 3 + 4 + 5 + 6 + 7,
};
static unsigned char *k7_nops[ASM_NOP_MAX+1] = {
NULL,
k7nops,
k7nops + 1,
k7nops + 1 + 2,
k7nops + 1 + 2 + 3,
k7nops + 1 + 2 + 3 + 4,
k7nops + 1 + 2 + 3 + 4 + 5,
k7nops + 1 + 2 + 3 + 4 + 5 + 6,
k7nops + 1 + 2 + 3 + 4 + 5 + 6 + 7,
};
static struct nop {
int cpuid;
unsigned char **noptable;
} noptypes[] = {
{ X86_FEATURE_K8, k8_nops },
{ X86_FEATURE_K7, k7_nops },
{ -1, NULL }
};
extern struct alt_instr __alt_instructions[], __alt_instructions_end[];
extern struct alt_instr __smp_alt_instructions[], __smp_alt_instructions_end[];
extern u8 *__smp_locks[], *__smp_locks_end[];
extern u8 __smp_alt_begin[], __smp_alt_end[];
static unsigned char** find_nop_table(void)
{
unsigned char **noptable = intel_nops;
int i;
for (i = 0; noptypes[i].cpuid >= 0; i++) {
if (boot_cpu_has(noptypes[i].cpuid)) {
noptable = noptypes[i].noptable;
break;
}
}
return noptable;
}
/* Replace instructions with better alternatives for this CPU type.
This runs before SMP is initialized to avoid SMP problems with
self modifying code. This implies that assymetric systems where
APs have less capabilities than the boot processor are not handled.
Tough. Make sure you disable such features by hand. */
void apply_alternatives(struct alt_instr *start, struct alt_instr *end)
{
unsigned char **noptable = find_nop_table();
struct alt_instr *a;
int diff, i, k;
DPRINTK("%s: alt table %p -> %p\n", __FUNCTION__, start, end);
for (a = start; a < end; a++) {
BUG_ON(a->replacementlen > a->instrlen);
if (!boot_cpu_has(a->cpuid))
continue;
memcpy(a->instr, a->replacement, a->replacementlen);
diff = a->instrlen - a->replacementlen;
/* Pad the rest with nops */
for (i = a->replacementlen; diff > 0; diff -= k, i += k) {
k = diff;
if (k > ASM_NOP_MAX)
k = ASM_NOP_MAX;
memcpy(a->instr + i, noptable[k], k);
}
}
}
static void alternatives_smp_save(struct alt_instr *start, struct alt_instr *end)
{
struct alt_instr *a;
DPRINTK("%s: alt table %p-%p\n", __FUNCTION__, start, end);
for (a = start; a < end; a++) {
memcpy(a->replacement + a->replacementlen,
a->instr,
a->instrlen);
}
}
static void alternatives_smp_apply(struct alt_instr *start, struct alt_instr *end)
{
struct alt_instr *a;
for (a = start; a < end; a++) {
memcpy(a->instr,
a->replacement + a->replacementlen,
a->instrlen);
}
}
static void alternatives_smp_lock(u8 **start, u8 **end, u8 *text, u8 *text_end)
{
u8 **ptr;
for (ptr = start; ptr < end; ptr++) {
if (*ptr < text)
continue;
if (*ptr > text_end)
continue;
**ptr = 0xf0; /* lock prefix */
};
}
static void alternatives_smp_unlock(u8 **start, u8 **end, u8 *text, u8 *text_end)
{
unsigned char **noptable = find_nop_table();
u8 **ptr;
for (ptr = start; ptr < end; ptr++) {
if (*ptr < text)
continue;
if (*ptr > text_end)
continue;
**ptr = noptable[1][0];
};
}
struct smp_alt_module {
/* what is this ??? */
struct module *mod;
char *name;
/* ptrs to lock prefixes */
u8 **locks;
u8 **locks_end;
/* .text segment, needed to avoid patching init code ;) */
u8 *text;
u8 *text_end;
struct list_head next;
};
static LIST_HEAD(smp_alt_modules);
static DEFINE_SPINLOCK(smp_alt);
static int smp_alt_once = 0;
static int __init bootonly(char *str)
{
smp_alt_once = 1;
return 1;
}
__setup("smp-alt-boot", bootonly);
void alternatives_smp_module_add(struct module *mod, char *name,
void *locks, void *locks_end,
void *text, void *text_end)
{
struct smp_alt_module *smp;
unsigned long flags;
if (smp_alt_once) {
if (boot_cpu_has(X86_FEATURE_UP))
alternatives_smp_unlock(locks, locks_end,
text, text_end);
return;
}
smp = kzalloc(sizeof(*smp), GFP_KERNEL);
if (NULL == smp)
return; /* we'll run the (safe but slow) SMP code then ... */
smp->mod = mod;
smp->name = name;
smp->locks = locks;
smp->locks_end = locks_end;
smp->text = text;
smp->text_end = text_end;
DPRINTK("%s: locks %p -> %p, text %p -> %p, name %s\n",
__FUNCTION__, smp->locks, smp->locks_end,
smp->text, smp->text_end, smp->name);
spin_lock_irqsave(&smp_alt, flags);
list_add_tail(&smp->next, &smp_alt_modules);
if (boot_cpu_has(X86_FEATURE_UP))
alternatives_smp_unlock(smp->locks, smp->locks_end,
smp->text, smp->text_end);
spin_unlock_irqrestore(&smp_alt, flags);
}
void alternatives_smp_module_del(struct module *mod)
{
struct smp_alt_module *item;
unsigned long flags;
if (smp_alt_once)
return;
spin_lock_irqsave(&smp_alt, flags);
list_for_each_entry(item, &smp_alt_modules, next) {
if (mod != item->mod)
continue;
list_del(&item->next);
spin_unlock_irqrestore(&smp_alt, flags);
DPRINTK("%s: %s\n", __FUNCTION__, item->name);
kfree(item);
return;
}
spin_unlock_irqrestore(&smp_alt, flags);
}
void alternatives_smp_switch(int smp)
{
struct smp_alt_module *mod;
unsigned long flags;
if (smp_alt_once)
return;
BUG_ON(!smp && (num_online_cpus() > 1));
spin_lock_irqsave(&smp_alt, flags);
if (smp) {
printk(KERN_INFO "SMP alternatives: switching to SMP code\n");
clear_bit(X86_FEATURE_UP, boot_cpu_data.x86_capability);
clear_bit(X86_FEATURE_UP, cpu_data[0].x86_capability);
alternatives_smp_apply(__smp_alt_instructions,
__smp_alt_instructions_end);
list_for_each_entry(mod, &smp_alt_modules, next)
alternatives_smp_lock(mod->locks, mod->locks_end,
mod->text, mod->text_end);
} else {
printk(KERN_INFO "SMP alternatives: switching to UP code\n");
set_bit(X86_FEATURE_UP, boot_cpu_data.x86_capability);
set_bit(X86_FEATURE_UP, cpu_data[0].x86_capability);
apply_alternatives(__smp_alt_instructions,
__smp_alt_instructions_end);
list_for_each_entry(mod, &smp_alt_modules, next)
alternatives_smp_unlock(mod->locks, mod->locks_end,
mod->text, mod->text_end);
}
spin_unlock_irqrestore(&smp_alt, flags);
}
void __init alternative_instructions(void)
{
apply_alternatives(__alt_instructions, __alt_instructions_end);
/* switch to patch-once-at-boottime-only mode and free the
* tables in case we know the number of CPUs will never ever
* change */
#ifdef CONFIG_HOTPLUG_CPU
if (num_possible_cpus() < 2)
smp_alt_once = 1;
#else
smp_alt_once = 1;
#endif
if (smp_alt_once) {
if (1 == num_possible_cpus()) {
printk(KERN_INFO "SMP alternatives: switching to UP code\n");
set_bit(X86_FEATURE_UP, boot_cpu_data.x86_capability);
set_bit(X86_FEATURE_UP, cpu_data[0].x86_capability);
apply_alternatives(__smp_alt_instructions,
__smp_alt_instructions_end);
alternatives_smp_unlock(__smp_locks, __smp_locks_end,
_text, _etext);
}
free_init_pages("SMP alternatives",
(unsigned long)__smp_alt_begin,
(unsigned long)__smp_alt_end);
} else {
alternatives_smp_save(__smp_alt_instructions,
__smp_alt_instructions_end);
alternatives_smp_module_add(NULL, "core kernel",
__smp_locks, __smp_locks_end,
_text, _etext);
alternatives_smp_switch(0);
}
}

View File

@ -38,6 +38,7 @@
#include <asm/i8253.h>
#include <mach_apic.h>
#include <mach_apicdef.h>
#include <mach_ipi.h>
#include "io_ports.h"

View File

@ -824,8 +824,6 @@ static void apm_do_busy(void)
static void (*original_pm_idle)(void);
extern void default_idle(void);
/**
* apm_cpu_idle - cpu idling for APM capable Linux
*

View File

@ -4,6 +4,7 @@
#include <asm/processor.h>
#include <asm/msr.h>
#include <asm/e820.h>
#include <asm/mtrr.h>
#include "cpu.h"
#ifdef CONFIG_X86_OOSTORE

View File

@ -25,9 +25,10 @@ EXPORT_PER_CPU_SYMBOL(cpu_gdt_descr);
DEFINE_PER_CPU(unsigned char, cpu_16bit_stack[CPU_16BIT_STACK_SIZE]);
EXPORT_PER_CPU_SYMBOL(cpu_16bit_stack);
static int cachesize_override __devinitdata = -1;
static int disable_x86_fxsr __devinitdata = 0;
static int disable_x86_serial_nr __devinitdata = 1;
static int cachesize_override __cpuinitdata = -1;
static int disable_x86_fxsr __cpuinitdata;
static int disable_x86_serial_nr __cpuinitdata = 1;
static int disable_x86_sep __cpuinitdata;
struct cpu_dev * cpu_devs[X86_VENDOR_NUM] = {};
@ -59,7 +60,7 @@ static int __init cachesize_setup(char *str)
}
__setup("cachesize=", cachesize_setup);
int __devinit get_model_name(struct cpuinfo_x86 *c)
int __cpuinit get_model_name(struct cpuinfo_x86 *c)
{
unsigned int *v;
char *p, *q;
@ -89,7 +90,7 @@ int __devinit get_model_name(struct cpuinfo_x86 *c)
}
void __devinit display_cacheinfo(struct cpuinfo_x86 *c)
void __cpuinit display_cacheinfo(struct cpuinfo_x86 *c)
{
unsigned int n, dummy, ecx, edx, l2size;
@ -130,7 +131,7 @@ void __devinit display_cacheinfo(struct cpuinfo_x86 *c)
/* in particular, if CPUID levels 0x80000002..4 are supported, this isn't used */
/* Look up CPU names by table lookup. */
static char __devinit *table_lookup_model(struct cpuinfo_x86 *c)
static char __cpuinit *table_lookup_model(struct cpuinfo_x86 *c)
{
struct cpu_model_info *info;
@ -151,7 +152,7 @@ static char __devinit *table_lookup_model(struct cpuinfo_x86 *c)
}
static void __devinit get_cpu_vendor(struct cpuinfo_x86 *c, int early)
static void __cpuinit get_cpu_vendor(struct cpuinfo_x86 *c, int early)
{
char *v = c->x86_vendor_id;
int i;
@ -187,6 +188,14 @@ static int __init x86_fxsr_setup(char * s)
__setup("nofxsr", x86_fxsr_setup);
static int __init x86_sep_setup(char * s)
{
disable_x86_sep = 1;
return 1;
}
__setup("nosep", x86_sep_setup);
/* Standard macro to see if a specific flag is changeable */
static inline int flag_is_changeable_p(u32 flag)
{
@ -210,7 +219,7 @@ static inline int flag_is_changeable_p(u32 flag)
/* Probe for the CPUID instruction */
static int __devinit have_cpuid_p(void)
static int __cpuinit have_cpuid_p(void)
{
return flag_is_changeable_p(X86_EFLAGS_ID);
}
@ -254,7 +263,7 @@ static void __init early_cpu_detect(void)
}
}
void __devinit generic_identify(struct cpuinfo_x86 * c)
void __cpuinit generic_identify(struct cpuinfo_x86 * c)
{
u32 tfms, xlvl;
int junk;
@ -307,7 +316,7 @@ void __devinit generic_identify(struct cpuinfo_x86 * c)
#endif
}
static void __devinit squash_the_stupid_serial_number(struct cpuinfo_x86 *c)
static void __cpuinit squash_the_stupid_serial_number(struct cpuinfo_x86 *c)
{
if (cpu_has(c, X86_FEATURE_PN) && disable_x86_serial_nr ) {
/* Disable processor serial number */
@ -335,7 +344,7 @@ __setup("serialnumber", x86_serial_nr_setup);
/*
* This does the hard work of actually picking apart the CPU stuff...
*/
void __devinit identify_cpu(struct cpuinfo_x86 *c)
void __cpuinit identify_cpu(struct cpuinfo_x86 *c)
{
int i;
@ -405,6 +414,10 @@ void __devinit identify_cpu(struct cpuinfo_x86 *c)
clear_bit(X86_FEATURE_XMM, c->x86_capability);
}
/* SEP disabled? */
if (disable_x86_sep)
clear_bit(X86_FEATURE_SEP, c->x86_capability);
if (disable_pse)
clear_bit(X86_FEATURE_PSE, c->x86_capability);
@ -417,7 +430,7 @@ void __devinit identify_cpu(struct cpuinfo_x86 *c)
else
/* Last resort... */
sprintf(c->x86_model_id, "%02x/%02x",
c->x86_vendor, c->x86_model);
c->x86, c->x86_model);
}
/* Now the feature flags better reflect actual CPU features! */
@ -453,7 +466,7 @@ void __devinit identify_cpu(struct cpuinfo_x86 *c)
}
#ifdef CONFIG_X86_HT
void __devinit detect_ht(struct cpuinfo_x86 *c)
void __cpuinit detect_ht(struct cpuinfo_x86 *c)
{
u32 eax, ebx, ecx, edx;
int index_msb, core_bits;
@ -500,7 +513,7 @@ void __devinit detect_ht(struct cpuinfo_x86 *c)
}
#endif
void __devinit print_cpu_info(struct cpuinfo_x86 *c)
void __cpuinit print_cpu_info(struct cpuinfo_x86 *c)
{
char *vendor = NULL;
@ -523,7 +536,7 @@ void __devinit print_cpu_info(struct cpuinfo_x86 *c)
printk("\n");
}
cpumask_t cpu_initialized __devinitdata = CPU_MASK_NONE;
cpumask_t cpu_initialized __cpuinitdata = CPU_MASK_NONE;
/* This is hacky. :)
* We're emulating future behavior.
@ -570,7 +583,7 @@ void __init early_cpu_init(void)
* and IDT. We reload them nevertheless, this function acts as a
* 'CPU state barrier', nothing should get across.
*/
void __devinit cpu_init(void)
void __cpuinit cpu_init(void)
{
int cpu = smp_processor_id();
struct tss_struct * t = &per_cpu(init_tss, cpu);
@ -670,7 +683,7 @@ void __devinit cpu_init(void)
}
#ifdef CONFIG_HOTPLUG_CPU
void __devinit cpu_uninit(void)
void __cpuinit cpu_uninit(void)
{
int cpu = raw_smp_processor_id();
cpu_clear(cpu, cpu_initialized);


@ -1145,16 +1145,14 @@ static int __cpuinit powernowk8_init(void)
{
unsigned int i, supported_cpus = 0;
for (i=0; i<NR_CPUS; i++) {
if (!cpu_online(i))
continue;
for_each_online_cpu(i) {
if (check_supported_cpu(i))
supported_cpus++;
}
if (supported_cpus == num_online_cpus()) {
printk(KERN_INFO PFX "Found %d AMD Athlon 64 / Opteron processors (" VERSION ")\n",
supported_cpus);
printk(KERN_INFO PFX "Found %d AMD Athlon 64 / Opteron "
"processors (" VERSION ")\n", supported_cpus);
return cpufreq_register_driver(&cpufreq_amd64_driver);
}
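The conversion above (and several similar hunks later in this merge) replaces open-coded NR_CPUS loops with the cpumask iterators. A minimal sketch of the pattern, assuming the 2.6.16-era <linux/cpumask.h> API, where for_each_online_cpu() walks cpu_online_map and the bare for_each_cpu() of this era walks every possible CPU:

#include <linux/cpumask.h>

/* Illustrative only: count the CPUs that are currently online. */
static int count_online_cpus_sketch(void)
{
	int cpu, n = 0;

	for_each_online_cpu(cpu)
		n++;
	return n;		/* equivalent to num_online_cpus() */
}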


@ -29,7 +29,7 @@ extern int trap_init_f00f_bug(void);
struct movsl_mask movsl_mask __read_mostly;
#endif
void __devinit early_intel_workaround(struct cpuinfo_x86 *c)
void __cpuinit early_intel_workaround(struct cpuinfo_x86 *c)
{
if (c->x86_vendor != X86_VENDOR_INTEL)
return;
@ -44,7 +44,7 @@ void __devinit early_intel_workaround(struct cpuinfo_x86 *c)
* This is called before we do cpu ident work
*/
int __devinit ppro_with_ram_bug(void)
int __cpuinit ppro_with_ram_bug(void)
{
/* Uses data from early_cpu_detect now */
if (boot_cpu_data.x86_vendor == X86_VENDOR_INTEL &&
@ -62,7 +62,7 @@ int __devinit ppro_with_ram_bug(void)
* P4 Xeon errata 037 workaround.
* Hardware prefetcher may cause stale data to be loaded into the cache.
*/
static void __devinit Intel_errata_workarounds(struct cpuinfo_x86 *c)
static void __cpuinit Intel_errata_workarounds(struct cpuinfo_x86 *c)
{
unsigned long lo, hi;
@ -81,7 +81,7 @@ static void __devinit Intel_errata_workarounds(struct cpuinfo_x86 *c)
/*
* find out the number of processor cores on the die
*/
static int __devinit num_cpu_cores(struct cpuinfo_x86 *c)
static int __cpuinit num_cpu_cores(struct cpuinfo_x86 *c)
{
unsigned int eax, ebx, ecx, edx;
@ -96,7 +96,7 @@ static int __devinit num_cpu_cores(struct cpuinfo_x86 *c)
return 1;
}
static void __devinit init_intel(struct cpuinfo_x86 *c)
static void __cpuinit init_intel(struct cpuinfo_x86 *c)
{
unsigned int l2 = 0;
char *p = NULL;
@ -205,7 +205,7 @@ static unsigned int intel_size_cache(struct cpuinfo_x86 * c, unsigned int size)
return size;
}
static struct cpu_dev intel_cpu_dev __devinitdata = {
static struct cpu_dev intel_cpu_dev __cpuinitdata = {
.c_vendor = "Intel",
.c_ident = { "GenuineIntel" },
.c_models = {


@ -174,7 +174,7 @@ unsigned int __cpuinit init_intel_cacheinfo(struct cpuinfo_x86 *c)
unsigned int new_l1d = 0, new_l1i = 0; /* Cache sizes from cpuid(4) */
unsigned int new_l2 = 0, new_l3 = 0, i; /* Cache sizes from cpuid(4) */
if (c->cpuid_level > 4) {
if (c->cpuid_level > 3) {
static int is_initialized;
if (is_initialized == 0) {
@ -330,7 +330,7 @@ static void __cpuinit cache_shared_cpu_map_setup(unsigned int cpu, int index)
}
}
}
static void __devinit cache_remove_shared_cpu_map(unsigned int cpu, int index)
static void __cpuinit cache_remove_shared_cpu_map(unsigned int cpu, int index)
{
struct _cpuid4_info *this_leaf, *sibling_leaf;
int sibling;


@ -40,7 +40,7 @@ static int show_cpuinfo(struct seq_file *m, void *v)
/* Other (Linux-defined) */
"cxmmx", "k6_mtrr", "cyrix_arr", "centaur_mcr",
NULL, NULL, NULL, NULL,
"constant_tsc", NULL, NULL, NULL, NULL, NULL, NULL, NULL,
"constant_tsc", "up", NULL, NULL, NULL, NULL, NULL, NULL,
NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL,
NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL,


@ -105,7 +105,7 @@ static int crash_nmi_callback(struct pt_regs *regs, int cpu)
return 1;
local_irq_disable();
if (!user_mode(regs)) {
if (!user_mode_vm(regs)) {
crash_fixup_ss_esp(&fixed_regs, regs);
regs = &fixed_regs;
}
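Several hunks in this merge change kernel-entry checks from user_mode() to user_mode_vm(). The difference, sketched below on the assumption that the definitions match the i386 <asm/ptrace.h> of this era, is that user_mode() only tests the CS privilege level and so misses virtual-8086 mode, while user_mode_vm() also checks the VM bit in EFLAGS:

#include <asm/ptrace.h>		/* struct pt_regs */
#include <asm/vm86.h>		/* VM_MASK */

/* Illustrative sketches, not the real definitions. */
static inline int sketch_user_mode(struct pt_regs *regs)
{
	return (regs->xcs & 3) != 0;			/* CPL != 0 */
}

static inline int sketch_user_mode_vm(struct pt_regs *regs)
{
	return (regs->eflags & VM_MASK) || sketch_user_mode(regs);
}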


@ -543,7 +543,7 @@ efi_initialize_iomem_resources(struct resource *code_resource,
if ((md->phys_addr + (md->num_pages << EFI_PAGE_SHIFT)) >
0x100000000ULL)
continue;
res = alloc_bootmem_low(sizeof(struct resource));
res = kzalloc(sizeof(struct resource), GFP_ATOMIC);
switch (md->type) {
case EFI_RESERVED_TYPE:
res->name = "Reserved Memory";


@ -226,6 +226,10 @@ ENTRY(system_call)
pushl %eax # save orig_eax
SAVE_ALL
GET_THREAD_INFO(%ebp)
testl $TF_MASK,EFLAGS(%esp)
jz no_singlestep
orl $_TIF_SINGLESTEP,TI_flags(%ebp)
no_singlestep:
# system call tracing in operation / emulation
/* Note, _TIF_SECCOMP is bit number 8, and so it needs testw and not testb */
testw $(_TIF_SYSCALL_EMU|_TIF_SYSCALL_TRACE|_TIF_SECCOMP|_TIF_SYSCALL_AUDIT),TI_flags(%ebp)


@ -450,7 +450,6 @@ int_msg:
.globl boot_gdt_descr
.globl idt_descr
.globl cpu_gdt_descr
ALIGN
# early boot GDT descriptor (must use 1:1 address mapping)
@ -470,8 +469,6 @@ cpu_gdt_descr:
.word GDT_ENTRIES*8-1
.long cpu_gdt_table
.fill NR_CPUS-1,8,0 # space for the other GDT descriptors
/*
* The boot_gdt_table must mirror the equivalent in setup.S and is
* used only for booting.
@ -485,7 +482,7 @@ ENTRY(boot_gdt_table)
/*
* The Global Descriptor Table contains 28 quadwords, per-CPU.
*/
.align PAGE_SIZE_asm
.align L1_CACHE_BYTES
ENTRY(cpu_gdt_table)
.quad 0x0000000000000000 /* NULL descriptor */
.quad 0x0000000000000000 /* 0x0b reserved */


@ -351,8 +351,8 @@ static inline void rotate_irqs_among_cpus(unsigned long useful_load_threshold)
{
int i, j;
Dprintk("Rotating IRQs among CPUs.\n");
for (i = 0; i < NR_CPUS; i++) {
for (j = 0; cpu_online(i) && (j < NR_IRQS); j++) {
for_each_online_cpu(i) {
for (j = 0; j < NR_IRQS; j++) {
if (!irq_desc[j].action)
continue;
/* Is it a significant load ? */
@ -381,7 +381,7 @@ static void do_irq_balance(void)
unsigned long imbalance = 0;
cpumask_t allowed_mask, target_cpu_mask, tmp;
for (i = 0; i < NR_CPUS; i++) {
for_each_cpu(i) {
int package_index;
CPU_IRQ(i) = 0;
if (!cpu_online(i))
@ -422,9 +422,7 @@ static void do_irq_balance(void)
}
}
/* Find the least loaded processor package */
for (i = 0; i < NR_CPUS; i++) {
if (!cpu_online(i))
continue;
for_each_online_cpu(i) {
if (i != CPU_TO_PACKAGEINDEX(i))
continue;
if (min_cpu_irq > CPU_IRQ(i)) {
@ -441,9 +439,7 @@ tryanothercpu:
*/
tmp_cpu_irq = 0;
tmp_loaded = -1;
for (i = 0; i < NR_CPUS; i++) {
if (!cpu_online(i))
continue;
for_each_online_cpu(i) {
if (i != CPU_TO_PACKAGEINDEX(i))
continue;
if (max_cpu_irq <= CPU_IRQ(i))
@ -619,9 +615,7 @@ static int __init balanced_irq_init(void)
if (smp_num_siblings > 1 && !cpus_empty(tmp))
physical_balance = 1;
for (i = 0; i < NR_CPUS; i++) {
if (!cpu_online(i))
continue;
for_each_online_cpu(i) {
irq_cpu_data[i].irq_delta = kmalloc(sizeof(unsigned long) * NR_IRQS, GFP_KERNEL);
irq_cpu_data[i].last_irq = kmalloc(sizeof(unsigned long) * NR_IRQS, GFP_KERNEL);
if (irq_cpu_data[i].irq_delta == NULL || irq_cpu_data[i].last_irq == NULL) {
@ -638,9 +632,11 @@ static int __init balanced_irq_init(void)
else
printk(KERN_ERR "balanced_irq_init: failed to spawn balanced_irq");
failed:
for (i = 0; i < NR_CPUS; i++) {
for_each_cpu(i) {
kfree(irq_cpu_data[i].irq_delta);
irq_cpu_data[i].irq_delta = NULL;
kfree(irq_cpu_data[i].last_irq);
irq_cpu_data[i].last_irq = NULL;
}
return 0;
}
@ -1761,7 +1757,8 @@ static void __init setup_ioapic_ids_from_mpc(void)
* Don't check I/O APIC IDs for xAPIC systems. They have
* no meaning without the serial APIC bus.
*/
if (!(boot_cpu_data.x86_vendor == X86_VENDOR_INTEL && boot_cpu_data.x86 < 15))
if (!(boot_cpu_data.x86_vendor == X86_VENDOR_INTEL)
|| APIC_XAPIC(apic_version[boot_cpu_physical_apicid]))
return;
/*
* This is broken; anything with a real cpu count has to


@ -84,9 +84,9 @@ void __kprobes arch_disarm_kprobe(struct kprobe *p)
void __kprobes arch_remove_kprobe(struct kprobe *p)
{
down(&kprobe_mutex);
mutex_lock(&kprobe_mutex);
free_insn_slot(p->ainsn.insn);
up(&kprobe_mutex);
mutex_unlock(&kprobe_mutex);
}
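kprobe_mutex moves here from the semaphore API (down()/up()) to the then-new mutex API; a minimal usage sketch:

#include <linux/mutex.h>

static DEFINE_MUTEX(example_mutex);	/* hypothetical lock, for illustration */

static void example_critical_section(void)
{
	mutex_lock(&example_mutex);
	/* ... touch state protected by example_mutex ... */
	mutex_unlock(&example_mutex);
}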
static inline void save_previous_kprobe(struct kprobe_ctlblk *kcb)


@ -104,26 +104,38 @@ int apply_relocate_add(Elf32_Shdr *sechdrs,
return -ENOEXEC;
}
extern void apply_alternatives(void *start, void *end);
int module_finalize(const Elf_Ehdr *hdr,
const Elf_Shdr *sechdrs,
struct module *me)
{
const Elf_Shdr *s;
const Elf_Shdr *s, *text = NULL, *alt = NULL, *locks = NULL;
char *secstrings = (void *)hdr + sechdrs[hdr->e_shstrndx].sh_offset;
/* look for .altinstructions to patch */
for (s = sechdrs; s < sechdrs + hdr->e_shnum; s++) {
void *seg;
if (strcmp(".altinstructions", secstrings + s->sh_name))
continue;
seg = (void *)s->sh_addr;
apply_alternatives(seg, seg + s->sh_size);
}
if (!strcmp(".text", secstrings + s->sh_name))
text = s;
if (!strcmp(".altinstructions", secstrings + s->sh_name))
alt = s;
if (!strcmp(".smp_locks", secstrings + s->sh_name))
locks= s;
}
if (alt) {
/* patch .altinstructions */
void *aseg = (void *)alt->sh_addr;
apply_alternatives(aseg, aseg + alt->sh_size);
}
if (locks && text) {
void *lseg = (void *)locks->sh_addr;
void *tseg = (void *)text->sh_addr;
alternatives_smp_module_add(me, me->name,
lseg, lseg + locks->sh_size,
tseg, tseg + text->sh_size);
}
return 0;
}
void module_arch_cleanup(struct module *mod)
{
alternatives_smp_module_del(mod);
}
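The alternatives_smp_module_add()/alternatives_smp_module_del() calls come from the SMP-alternatives support merged here; the declarations below are an assumption reconstructed from these call sites, shown only to make the hunk easier to read in isolation:

/* Assumed prototypes, inferred from the call sites above. */
extern void alternatives_smp_module_add(struct module *mod, char *name,
					void *locks, void *locks_end,
					void *text, void *text_end);
extern void alternatives_smp_module_del(struct module *mod);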


@ -828,6 +828,8 @@ void __init find_smp_config (void)
smp_scan_config(address, 0x400);
}
int es7000_plat;
/* --------------------------------------------------------------------------
ACPI-based MP Configuration
-------------------------------------------------------------------------- */
@ -935,7 +937,8 @@ void __init mp_register_ioapic (
mp_ioapics[idx].mpc_apicaddr = address;
set_fixmap_nocache(FIX_IO_APIC_BASE_0 + idx, address);
if ((boot_cpu_data.x86_vendor == X86_VENDOR_INTEL) && (boot_cpu_data.x86 < 15))
if ((boot_cpu_data.x86_vendor == X86_VENDOR_INTEL)
&& !APIC_XAPIC(apic_version[boot_cpu_physical_apicid]))
tmpid = io_apic_get_unique_id(idx, id);
else
tmpid = id;
@ -1011,8 +1014,6 @@ void __init mp_override_legacy_irq (
return;
}
int es7000_plat;
void __init mp_config_acpi_legacy_irqs (void)
{
struct mpc_config_intsrc intsrc;


@ -143,7 +143,7 @@ static int __init check_nmi_watchdog(void)
local_irq_enable();
mdelay((10*1000)/nmi_hz); // wait 10 ticks
for (cpu = 0; cpu < NR_CPUS; cpu++) {
for_each_cpu(cpu) {
#ifdef CONFIG_SMP
/* Check cpu_callin_map here because that is set
after the timer is started. */
@ -510,7 +510,7 @@ void touch_nmi_watchdog (void)
* Just reset the alert counters, (other CPUs might be
* spinning on locks we hold):
*/
for (i = 0; i < NR_CPUS; i++)
for_each_cpu(i)
alert_counter[i] = 0;
/*
@ -543,7 +543,7 @@ void nmi_watchdog_tick (struct pt_regs * regs)
/*
* die_nmi will return ONLY if NOTIFY_STOP happens..
*/
die_nmi(regs, "NMI Watchdog detected LOCKUP");
die_nmi(regs, "BUG: NMI Watchdog detected LOCKUP");
} else {
last_irq_sums[cpu] = sum;
alert_counter[cpu] = 0;


@ -295,7 +295,7 @@ void show_regs(struct pt_regs * regs)
printk("EIP: %04x:[<%08lx>] CPU: %d\n",0xffff & regs->xcs,regs->eip, smp_processor_id());
print_symbol("EIP is at %s\n", regs->eip);
if (user_mode(regs))
if (user_mode_vm(regs))
printk(" ESP: %04x:%08lx",0xffff & regs->xss,regs->esp);
printk(" EFLAGS: %08lx %s (%s %.*s)\n",
regs->eflags, print_tainted(), system_utsname.release,


@ -34,10 +34,10 @@
/*
* Determines which flags the user has access to [1 = access, 0 = no access].
* Prohibits changing ID(21), VIP(20), VIF(19), VM(17), IOPL(12-13), IF(9).
* Prohibits changing ID(21), VIP(20), VIF(19), VM(17), NT(14), IOPL(12-13), IF(9).
* Also masks reserved bits (31-22, 15, 5, 3, 1).
*/
#define FLAG_MASK 0x00054dd5
#define FLAG_MASK 0x00050dd5
/* sets the trap flag. */
#define TRAP_FLAG 0x100
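The new mask follows directly from the standard EFLAGS bit layout: the old value 0x00054dd5 additionally allowed NT (bit 14, 0x4000), which is now prohibited. Rebuilt bit by bit (the macro name below is illustrative, not part of the patch):

/* user-writable flags: CF(0) PF(2) AF(4) ZF(6) SF(7) TF(8) DF(10)
 * OF(11) RF(16) AC(18) */
#define USER_EFLAGS	((1UL << 0) | (1UL << 2) | (1UL << 4) | (1UL << 6) | \
			 (1UL << 7) | (1UL << 8) | (1UL << 10) | (1UL << 11) | \
			 (1UL << 16) | (1UL << 18))	/* == 0x00050dd5 */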


@ -110,11 +110,11 @@ asm(
".align 4\n"
".globl __write_lock_failed\n"
"__write_lock_failed:\n\t"
LOCK "addl $" RW_LOCK_BIAS_STR ",(%eax)\n"
LOCK_PREFIX "addl $" RW_LOCK_BIAS_STR ",(%eax)\n"
"1: rep; nop\n\t"
"cmpl $" RW_LOCK_BIAS_STR ",(%eax)\n\t"
"jne 1b\n\t"
LOCK "subl $" RW_LOCK_BIAS_STR ",(%eax)\n\t"
LOCK_PREFIX "subl $" RW_LOCK_BIAS_STR ",(%eax)\n\t"
"jnz __write_lock_failed\n\t"
"ret"
);
@ -124,11 +124,11 @@ asm(
".align 4\n"
".globl __read_lock_failed\n"
"__read_lock_failed:\n\t"
LOCK "incl (%eax)\n"
LOCK_PREFIX "incl (%eax)\n"
"1: rep; nop\n\t"
"cmpl $1,(%eax)\n\t"
"js 1b\n\t"
LOCK "decl (%eax)\n\t"
LOCK_PREFIX "decl (%eax)\n\t"
"js __read_lock_failed\n\t"
"ret"
);
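LOCK_PREFIX replaces the plain LOCK macro as part of the SMP-alternatives work in this merge. The expansion below is an assumption modelled on the i386 <asm/alternative.h> introduced alongside it: on CONFIG_SMP it emits a lock prefix and records its address in the new .smp_locks section (see the vmlinux.lds.S and module.c hunks), so the prefix can be patched out when only one CPU is online; on UP builds it expands to nothing:

#ifdef CONFIG_SMP
#define LOCK_PREFIX \
		".section .smp_locks,\"a\"\n"	\
		"  .align 4\n"			\
		"  .long 661f\n" /* address of the lock prefix */ \
		".previous\n"			\
		"661:\n\tlock; "
#else
#define LOCK_PREFIX ""
#endif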


@ -1288,7 +1288,7 @@ legacy_init_iomem_resources(struct resource *code_resource, struct resource *dat
struct resource *res;
if (e820.map[i].addr + e820.map[i].size > 0x100000000ULL)
continue;
res = alloc_bootmem_low(sizeof(struct resource));
res = kzalloc(sizeof(struct resource), GFP_ATOMIC);
switch (e820.map[i].type) {
case E820_RAM: res->name = "System RAM"; break;
case E820_ACPI: res->name = "ACPI Tables"; break;
@ -1316,13 +1316,15 @@ legacy_init_iomem_resources(struct resource *code_resource, struct resource *dat
/*
* Request address space for all standard resources
*
* This is called just before pcibios_assign_resources(), which is also
* an fs_initcall, but is linked in later (in arch/i386/pci/i386.c).
*/
static void __init register_memory(void)
static int __init request_standard_resources(void)
{
unsigned long gapstart, gapsize, round;
unsigned long long last;
int i;
int i;
printk("Setting up standard PCI resources\n");
if (efi_enabled)
efi_initialize_iomem_resources(&code_resource, &data_resource);
else
@ -1334,6 +1336,16 @@ static void __init register_memory(void)
/* request I/O space for devices used on all i[345]86 PCs */
for (i = 0; i < STANDARD_IO_RESOURCES; i++)
request_resource(&ioport_resource, &standard_io_resources[i]);
return 0;
}
fs_initcall(request_standard_resources);
static void __init register_memory(void)
{
unsigned long gapstart, gapsize, round;
unsigned long long last;
int i;
/*
* Search for the biggest gap in the low 32 bits of the e820
@ -1377,101 +1389,6 @@ static void __init register_memory(void)
pci_mem_start, gapstart, gapsize);
}
/* Use inline assembly to define this because the nops are defined
as inline assembly strings in the include files and we cannot
get them easily into strings. */
asm("\t.data\nintelnops: "
GENERIC_NOP1 GENERIC_NOP2 GENERIC_NOP3 GENERIC_NOP4 GENERIC_NOP5 GENERIC_NOP6
GENERIC_NOP7 GENERIC_NOP8);
asm("\t.data\nk8nops: "
K8_NOP1 K8_NOP2 K8_NOP3 K8_NOP4 K8_NOP5 K8_NOP6
K8_NOP7 K8_NOP8);
asm("\t.data\nk7nops: "
K7_NOP1 K7_NOP2 K7_NOP3 K7_NOP4 K7_NOP5 K7_NOP6
K7_NOP7 K7_NOP8);
extern unsigned char intelnops[], k8nops[], k7nops[];
static unsigned char *intel_nops[ASM_NOP_MAX+1] = {
NULL,
intelnops,
intelnops + 1,
intelnops + 1 + 2,
intelnops + 1 + 2 + 3,
intelnops + 1 + 2 + 3 + 4,
intelnops + 1 + 2 + 3 + 4 + 5,
intelnops + 1 + 2 + 3 + 4 + 5 + 6,
intelnops + 1 + 2 + 3 + 4 + 5 + 6 + 7,
};
static unsigned char *k8_nops[ASM_NOP_MAX+1] = {
NULL,
k8nops,
k8nops + 1,
k8nops + 1 + 2,
k8nops + 1 + 2 + 3,
k8nops + 1 + 2 + 3 + 4,
k8nops + 1 + 2 + 3 + 4 + 5,
k8nops + 1 + 2 + 3 + 4 + 5 + 6,
k8nops + 1 + 2 + 3 + 4 + 5 + 6 + 7,
};
static unsigned char *k7_nops[ASM_NOP_MAX+1] = {
NULL,
k7nops,
k7nops + 1,
k7nops + 1 + 2,
k7nops + 1 + 2 + 3,
k7nops + 1 + 2 + 3 + 4,
k7nops + 1 + 2 + 3 + 4 + 5,
k7nops + 1 + 2 + 3 + 4 + 5 + 6,
k7nops + 1 + 2 + 3 + 4 + 5 + 6 + 7,
};
static struct nop {
int cpuid;
unsigned char **noptable;
} noptypes[] = {
{ X86_FEATURE_K8, k8_nops },
{ X86_FEATURE_K7, k7_nops },
{ -1, NULL }
};
/* Replace instructions with better alternatives for this CPU type.
This runs before SMP is initialized to avoid SMP problems with
self-modifying code. This implies that asymmetric systems where
APs have fewer capabilities than the boot processor are not handled.
Tough. Make sure you disable such features by hand. */
void apply_alternatives(void *start, void *end)
{
struct alt_instr *a;
int diff, i, k;
unsigned char **noptable = intel_nops;
for (i = 0; noptypes[i].cpuid >= 0; i++) {
if (boot_cpu_has(noptypes[i].cpuid)) {
noptable = noptypes[i].noptable;
break;
}
}
for (a = start; (void *)a < end; a++) {
if (!boot_cpu_has(a->cpuid))
continue;
BUG_ON(a->replacementlen > a->instrlen);
memcpy(a->instr, a->replacement, a->replacementlen);
diff = a->instrlen - a->replacementlen;
/* Pad the rest with nops */
for (i = a->replacementlen; diff > 0; diff -= k, i += k) {
k = diff;
if (k > ASM_NOP_MAX)
k = ASM_NOP_MAX;
memcpy(a->instr + i, noptable[k], k);
}
}
}
void __init alternative_instructions(void)
{
extern struct alt_instr __alt_instructions[], __alt_instructions_end[];
apply_alternatives(__alt_instructions, __alt_instructions_end);
}
static char * __init machine_specific_memory_setup(void);
#ifdef CONFIG_MCA
@ -1554,6 +1471,16 @@ void __init setup_arch(char **cmdline_p)
parse_cmdline_early(cmdline_p);
#ifdef CONFIG_EARLY_PRINTK
{
char *s = strstr(*cmdline_p, "earlyprintk=");
if (s) {
setup_early_printk(strchr(s, '=') + 1);
printk("early console enabled\n");
}
}
#endif
max_low_pfn = setup_memory();
/*
@ -1578,19 +1505,6 @@ void __init setup_arch(char **cmdline_p)
* NOTE: at this point the bootmem allocator is fully available.
*/
#ifdef CONFIG_EARLY_PRINTK
{
char *s = strstr(*cmdline_p, "earlyprintk=");
if (s) {
extern void setup_early_printk(char *);
setup_early_printk(strchr(s, '=') + 1);
printk("early console enabled\n");
}
}
#endif
dmi_scan_machine();
#ifdef CONFIG_X86_GENERICARCH


@ -123,7 +123,8 @@ restore_sigcontext(struct pt_regs *regs, struct sigcontext __user *sc, int *peax
err |= __get_user(tmp, &sc->seg); \
loadsegment(seg,tmp); }
#define FIX_EFLAGS (X86_EFLAGS_AC | X86_EFLAGS_OF | X86_EFLAGS_DF | \
#define FIX_EFLAGS (X86_EFLAGS_AC | X86_EFLAGS_RF | \
X86_EFLAGS_OF | X86_EFLAGS_DF | \
X86_EFLAGS_TF | X86_EFLAGS_SF | X86_EFLAGS_ZF | \
X86_EFLAGS_AF | X86_EFLAGS_PF | X86_EFLAGS_CF)
@ -582,9 +583,6 @@ static void fastcall do_signal(struct pt_regs *regs)
if (!user_mode(regs))
return;
if (try_to_freeze())
goto no_signal;
if (test_thread_flag(TIF_RESTORE_SIGMASK))
oldset = &current->saved_sigmask;
else
@ -613,7 +611,6 @@ static void fastcall do_signal(struct pt_regs *regs)
return;
}
no_signal:
/* Did we come from a system call? */
if (regs->orig_eax >= 0) {
/* Restart the system call - no handlers present */


@ -899,6 +899,7 @@ static int __devinit do_boot_cpu(int apicid, int cpu)
unsigned short nmi_high = 0, nmi_low = 0;
++cpucount;
alternatives_smp_switch(1);
/*
* We can't use kernel_thread since we must avoid to
@ -1368,6 +1369,8 @@ void __cpu_die(unsigned int cpu)
/* They ack this in play_dead by setting CPU_DEAD */
if (per_cpu(cpu_state, cpu) == CPU_DEAD) {
printk ("CPU %d is now offline\n", cpu);
if (1 == num_online_cpus())
alternatives_smp_switch(0);
return;
}
msleep(100);


@ -41,6 +41,15 @@ int arch_register_cpu(int num){
parent = &node_devices[node].node;
#endif /* CONFIG_NUMA */
/*
* CPU0 cannot be offlined due to several
* restrictions and assumptions in kernel. This basically
* doesn't add a control file, one cannot attempt to offline
* BSP.
*/
if (!num)
cpu_devices[num].cpu.no_control = 1;
return register_cpu(&cpu_devices[num].cpu, num, parent);
}


@ -99,6 +99,8 @@ int register_die_notifier(struct notifier_block *nb)
{
int err = 0;
unsigned long flags;
vmalloc_sync_all();
spin_lock_irqsave(&die_notifier_lock, flags);
err = notifier_chain_register(&i386die_chain, nb);
spin_unlock_irqrestore(&die_notifier_lock, flags);
@ -112,12 +114,30 @@ static inline int valid_stack_ptr(struct thread_info *tinfo, void *p)
p < (void *)tinfo + THREAD_SIZE - 3;
}
static void print_addr_and_symbol(unsigned long addr, char *log_lvl)
/*
* Print CONFIG_STACK_BACKTRACE_COLS address/symbol entries per line.
*/
static inline int print_addr_and_symbol(unsigned long addr, char *log_lvl,
int printed)
{
printk(log_lvl);
if (!printed)
printk(log_lvl);
#if CONFIG_STACK_BACKTRACE_COLS == 1
printk(" [<%08lx>] ", addr);
#else
printk(" <%08lx> ", addr);
#endif
print_symbol("%s", addr);
printk("\n");
printed = (printed + 1) % CONFIG_STACK_BACKTRACE_COLS;
if (printed)
printk(" ");
else
printk("\n");
return printed;
}
static inline unsigned long print_context_stack(struct thread_info *tinfo,
@ -125,20 +145,24 @@ static inline unsigned long print_context_stack(struct thread_info *tinfo,
char *log_lvl)
{
unsigned long addr;
int printed = 0; /* nr of entries already printed on current line */
#ifdef CONFIG_FRAME_POINTER
while (valid_stack_ptr(tinfo, (void *)ebp)) {
addr = *(unsigned long *)(ebp + 4);
print_addr_and_symbol(addr, log_lvl);
printed = print_addr_and_symbol(addr, log_lvl, printed);
ebp = *(unsigned long *)ebp;
}
#else
while (valid_stack_ptr(tinfo, stack)) {
addr = *stack++;
if (__kernel_text_address(addr))
print_addr_and_symbol(addr, log_lvl);
printed = print_addr_and_symbol(addr, log_lvl, printed);
}
#endif
if (printed)
printk("\n");
return ebp;
}
@ -166,8 +190,7 @@ static void show_trace_log_lvl(struct task_struct *task,
stack = (unsigned long*)context->previous_esp;
if (!stack)
break;
printk(log_lvl);
printk(" =======================\n");
printk("%s =======================\n", log_lvl);
}
}
@ -194,21 +217,17 @@ static void show_stack_log_lvl(struct task_struct *task, unsigned long *esp,
for(i = 0; i < kstack_depth_to_print; i++) {
if (kstack_end(stack))
break;
if (i && ((i % 8) == 0)) {
printk("\n");
printk(log_lvl);
printk(" ");
}
if (i && ((i % 8) == 0))
printk("\n%s ", log_lvl);
printk("%08lx ", *stack++);
}
printk("\n");
printk(log_lvl);
printk("Call Trace:\n");
printk("\n%sCall Trace:\n", log_lvl);
show_trace_log_lvl(task, esp, log_lvl);
}
void show_stack(struct task_struct *task, unsigned long *esp)
{
printk(" ");
show_stack_log_lvl(task, esp, "");
}
@ -233,7 +252,7 @@ void show_registers(struct pt_regs *regs)
esp = (unsigned long) (&regs->esp);
savesegment(ss, ss);
if (user_mode(regs)) {
if (user_mode_vm(regs)) {
in_kernel = 0;
esp = regs->esp;
ss = regs->xss & 0xffff;
@ -333,6 +352,8 @@ void die(const char * str, struct pt_regs * regs, long err)
static int die_counter;
unsigned long flags;
oops_enter();
if (die.lock_owner != raw_smp_processor_id()) {
console_verbose();
spin_lock_irqsave(&die.lock, flags);
@ -385,6 +406,7 @@ void die(const char * str, struct pt_regs * regs, long err)
ssleep(5);
panic("Fatal exception");
}
oops_exit();
do_exit(SIGSEGV);
}
@ -623,7 +645,7 @@ void die_nmi (struct pt_regs *regs, const char *msg)
/* If we are in kernel we are probably nested up pretty bad
* and might as well get out now while we still can.
*/
if (!user_mode(regs)) {
if (!user_mode_vm(regs)) {
current->thread.trap_no = 2;
crash_kexec(regs);
}
@ -694,6 +716,7 @@ fastcall void do_nmi(struct pt_regs * regs, long error_code)
void set_nmi_callback(nmi_callback_t callback)
{
vmalloc_sync_all();
rcu_assign_pointer(nmi_callback, callback);
}
EXPORT_SYMBOL_GPL(set_nmi_callback);


@ -68,6 +68,26 @@ SECTIONS
*(.data.init_task)
}
/* might get freed after init */
. = ALIGN(4096);
__smp_alt_begin = .;
__smp_alt_instructions = .;
.smp_altinstructions : AT(ADDR(.smp_altinstructions) - LOAD_OFFSET) {
*(.smp_altinstructions)
}
__smp_alt_instructions_end = .;
. = ALIGN(4);
__smp_locks = .;
.smp_locks : AT(ADDR(.smp_locks) - LOAD_OFFSET) {
*(.smp_locks)
}
__smp_locks_end = .;
.smp_altinstr_replacement : AT(ADDR(.smp_altinstr_replacement) - LOAD_OFFSET) {
*(.smp_altinstr_replacement)
}
. = ALIGN(4096);
__smp_alt_end = .;
/* will be freed after init */
. = ALIGN(4096); /* Init code and data */
__init_begin = .;


@ -21,6 +21,9 @@
* instruction clobbers %esp, the user's %esp won't even survive entry
* into the kernel. We store %esp in %ebp. Code in entry.S must fetch
* arg6 from the stack.
*
* You can not use this vsyscall for the clone() syscall because the
* three dwords on the parent stack do not get copied to the child.
*/
.text
.globl __kernel_vsyscall


@ -83,6 +83,7 @@ struct es7000_oem_table {
struct psai psai;
};
#ifdef CONFIG_ACPI
struct acpi_table_sdt {
unsigned long pa;
unsigned long count;
@ -99,6 +100,9 @@ struct oem_table {
u32 OEMTableSize;
};
extern int find_unisys_acpi_oem_table(unsigned long *oem_addr);
#endif
struct mip_reg {
unsigned long long off_0;
unsigned long long off_8;
@ -114,7 +118,6 @@ struct mip_reg {
#define MIP_FUNC(VALUE) (VALUE & 0xff)
extern int parse_unisys_oem (char *oemptr);
extern int find_unisys_acpi_oem_table(unsigned long *oem_addr);
extern void setup_unisys(void);
extern int es7000_start_cpu(int cpu, unsigned long eip);
extern void es7000_sw_apic(void);


@ -51,8 +51,6 @@ struct mip_reg *host_reg;
int mip_port;
unsigned long mip_addr, host_addr;
#if defined(CONFIG_X86_IO_APIC) && defined(CONFIG_ACPI)
/*
* GSI override for ES7000 platforms.
*/
@ -76,8 +74,6 @@ es7000_rename_gsi(int ioapic, int gsi)
return gsi;
}
#endif /* (CONFIG_X86_IO_APIC) && (CONFIG_ACPI) */
void __init
setup_unisys(void)
{
@ -160,6 +156,7 @@ parse_unisys_oem (char *oemptr)
return es7000_plat;
}
#ifdef CONFIG_ACPI
int __init
find_unisys_acpi_oem_table(unsigned long *oem_addr)
{
@ -212,6 +209,7 @@ find_unisys_acpi_oem_table(unsigned long *oem_addr)
}
return -1;
}
#endif
static void
es7000_spin(int n)


@ -1,7 +1,6 @@
#include <linux/module.h>
#include <linux/smp.h>
#include <linux/delay.h>
#include <linux/platform.h>
#include <asm/io.h>
#include "piix4.h"


@ -214,6 +214,68 @@ static noinline void force_sig_info_fault(int si_signo, int si_code,
fastcall void do_invalid_op(struct pt_regs *, unsigned long);
static inline pmd_t *vmalloc_sync_one(pgd_t *pgd, unsigned long address)
{
unsigned index = pgd_index(address);
pgd_t *pgd_k;
pud_t *pud, *pud_k;
pmd_t *pmd, *pmd_k;
pgd += index;
pgd_k = init_mm.pgd + index;
if (!pgd_present(*pgd_k))
return NULL;
/*
* set_pgd(pgd, *pgd_k); here would be useless on PAE
* and redundant with the set_pmd() on non-PAE. As would
* set_pud.
*/
pud = pud_offset(pgd, address);
pud_k = pud_offset(pgd_k, address);
if (!pud_present(*pud_k))
return NULL;
pmd = pmd_offset(pud, address);
pmd_k = pmd_offset(pud_k, address);
if (!pmd_present(*pmd_k))
return NULL;
if (!pmd_present(*pmd))
set_pmd(pmd, *pmd_k);
else
BUG_ON(pmd_page(*pmd) != pmd_page(*pmd_k));
return pmd_k;
}
/*
* Handle a fault on the vmalloc or module mapping area
*
* This assumes no large pages in there.
*/
static inline int vmalloc_fault(unsigned long address)
{
unsigned long pgd_paddr;
pmd_t *pmd_k;
pte_t *pte_k;
/*
* Synchronize this task's top level page-table
* with the 'reference' page table.
*
* Do _not_ use "current" here. We might be inside
* an interrupt in the middle of a task switch..
*/
pgd_paddr = read_cr3();
pmd_k = vmalloc_sync_one(__va(pgd_paddr), address);
if (!pmd_k)
return -1;
pte_k = pte_offset_kernel(pmd_k, address);
if (!pte_present(*pte_k))
return -1;
return 0;
}
/*
* This routine handles page faults. It determines the address,
* and the problem, and then passes it off to one of the appropriate
@ -223,6 +285,8 @@ fastcall void do_invalid_op(struct pt_regs *, unsigned long);
* bit 0 == 0 means no page found, 1 means protection fault
* bit 1 == 0 means read, 1 means write
* bit 2 == 0 means kernel, 1 means user-mode
* bit 3 == 1 means use of reserved bit detected
* bit 4 == 1 means fault was an instruction fetch
*/
fastcall void __kprobes do_page_fault(struct pt_regs *regs,
unsigned long error_code)
@ -237,13 +301,6 @@ fastcall void __kprobes do_page_fault(struct pt_regs *regs,
/* get the address */
address = read_cr2();
if (notify_die(DIE_PAGE_FAULT, "page fault", regs, error_code, 14,
SIGSEGV) == NOTIFY_STOP)
return;
/* It's safe to allow irq's after cr2 has been saved */
if (regs->eflags & (X86_EFLAGS_IF|VM_MASK))
local_irq_enable();
tsk = current;
si_code = SEGV_MAPERR;
@ -259,17 +316,29 @@ fastcall void __kprobes do_page_fault(struct pt_regs *regs,
*
* This verifies that the fault happens in kernel space
* (error_code & 4) == 0, and that the fault was not a
* protection error (error_code & 1) == 0.
* protection error (error_code & 9) == 0.
*/
if (unlikely(address >= TASK_SIZE)) {
if (!(error_code & 5))
goto vmalloc_fault;
/*
if (unlikely(address >= TASK_SIZE)) {
if (!(error_code & 0x0000000d) && vmalloc_fault(address) >= 0)
return;
if (notify_die(DIE_PAGE_FAULT, "page fault", regs, error_code, 14,
SIGSEGV) == NOTIFY_STOP)
return;
/*
* Don't take the mm semaphore here. If we fixup a prefetch
* fault we could otherwise deadlock.
*/
goto bad_area_nosemaphore;
}
}
if (notify_die(DIE_PAGE_FAULT, "page fault", regs, error_code, 14,
SIGSEGV) == NOTIFY_STOP)
return;
/* It's safe to allow irq's after cr2 has been saved and the vmalloc
fault has been handled. */
if (regs->eflags & (X86_EFLAGS_IF|VM_MASK))
local_irq_enable();
mm = tsk->mm;
@ -440,24 +509,31 @@ no_context:
bust_spinlocks(1);
#ifdef CONFIG_X86_PAE
if (error_code & 16) {
pte_t *pte = lookup_address(address);
if (oops_may_print()) {
#ifdef CONFIG_X86_PAE
if (error_code & 16) {
pte_t *pte = lookup_address(address);
if (pte && pte_present(*pte) && !pte_exec_kernel(*pte))
printk(KERN_CRIT "kernel tried to execute NX-protected page - exploit attempt? (uid: %d)\n", current->uid);
if (pte && pte_present(*pte) && !pte_exec_kernel(*pte))
printk(KERN_CRIT "kernel tried to execute "
"NX-protected page - exploit attempt? "
"(uid: %d)\n", current->uid);
}
#endif
if (address < PAGE_SIZE)
printk(KERN_ALERT "BUG: unable to handle kernel NULL "
"pointer dereference");
else
printk(KERN_ALERT "BUG: unable to handle kernel paging"
" request");
printk(" at virtual address %08lx\n",address);
printk(KERN_ALERT " printing eip:\n");
printk("%08lx\n", regs->eip);
}
#endif
if (address < PAGE_SIZE)
printk(KERN_ALERT "Unable to handle kernel NULL pointer dereference");
else
printk(KERN_ALERT "Unable to handle kernel paging request");
printk(" at virtual address %08lx\n",address);
printk(KERN_ALERT " printing eip:\n");
printk("%08lx\n", regs->eip);
page = read_cr3();
page = ((unsigned long *) __va(page))[address >> 22];
printk(KERN_ALERT "*pde = %08lx\n", page);
if (oops_may_print())
printk(KERN_ALERT "*pde = %08lx\n", page);
/*
* We must not directly access the pte in the highpte
* case, the page table might be allocated in highmem.
@ -465,7 +541,7 @@ no_context:
* it's allocated already.
*/
#ifndef CONFIG_HIGHPTE
if (page & 1) {
if ((page & 1) && oops_may_print()) {
page &= PAGE_MASK;
address &= 0x003ff000;
page = ((unsigned long *) __va(page))[address >> PAGE_SHIFT];
@ -510,51 +586,41 @@ do_sigbus:
tsk->thread.error_code = error_code;
tsk->thread.trap_no = 14;
force_sig_info_fault(SIGBUS, BUS_ADRERR, address, tsk);
return;
}
vmalloc_fault:
{
/*
* Synchronize this task's top level page-table
* with the 'reference' page table.
*
* Do _not_ use "tsk" here. We might be inside
* an interrupt in the middle of a task switch..
*/
int index = pgd_index(address);
unsigned long pgd_paddr;
pgd_t *pgd, *pgd_k;
pud_t *pud, *pud_k;
pmd_t *pmd, *pmd_k;
pte_t *pte_k;
#ifndef CONFIG_X86_PAE
void vmalloc_sync_all(void)
{
/*
* Note that races in the updates of insync and start aren't
* problematic: insync can only get set bits added, and updates to
* start are only improving performance (without affecting correctness
* if undone).
*/
static DECLARE_BITMAP(insync, PTRS_PER_PGD);
static unsigned long start = TASK_SIZE;
unsigned long address;
pgd_paddr = read_cr3();
pgd = index + (pgd_t *)__va(pgd_paddr);
pgd_k = init_mm.pgd + index;
BUILD_BUG_ON(TASK_SIZE & ~PGDIR_MASK);
for (address = start; address >= TASK_SIZE; address += PGDIR_SIZE) {
if (!test_bit(pgd_index(address), insync)) {
unsigned long flags;
struct page *page;
if (!pgd_present(*pgd_k))
goto no_context;
/*
* set_pgd(pgd, *pgd_k); here would be useless on PAE
* and redundant with the set_pmd() on non-PAE. As would
* set_pud.
*/
pud = pud_offset(pgd, address);
pud_k = pud_offset(pgd_k, address);
if (!pud_present(*pud_k))
goto no_context;
pmd = pmd_offset(pud, address);
pmd_k = pmd_offset(pud_k, address);
if (!pmd_present(*pmd_k))
goto no_context;
set_pmd(pmd, *pmd_k);
pte_k = pte_offset_kernel(pmd_k, address);
if (!pte_present(*pte_k))
goto no_context;
return;
spin_lock_irqsave(&pgd_lock, flags);
for (page = pgd_list; page; page =
(struct page *)page->index)
if (!vmalloc_sync_one(page_address(page),
address)) {
BUG_ON(page != pgd_list);
break;
}
spin_unlock_irqrestore(&pgd_lock, flags);
if (!page)
set_bit(pgd_index(address), insync);
}
if (address == start && test_bit(pgd_index(address), insync))
start = address + PGDIR_SIZE;
}
}
#endif
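For reference, the page-fault error-code bits documented in the comment near the top of do_page_fault() decode as sketched below (illustrative helper, not part of the patch); the vmalloc fast path added above runs only when (error_code & 0x0d) == 0, i.e. a not-present fault taken in kernel mode with no reserved-bit violation:

static inline void sketch_decode_pf_error(unsigned long error_code)
{
	int prot  = !!(error_code & 1);		/* 0: page not present, 1: protection */
	int write = !!(error_code & 2);		/* 1: write access */
	int user  = !!(error_code & 4);		/* 1: fault taken in user mode */
	int rsvd  = !!(error_code & 8);		/* 1: reserved bit set in a paging entry */
	int fetch = !!(error_code & 16);	/* 1: instruction fetch */

	(void)prot; (void)write; (void)user; (void)rsvd; (void)fetch;
}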


@ -720,21 +720,6 @@ static int noinline do_test_wp_bit(void)
return flag;
}
void free_initmem(void)
{
unsigned long addr;
addr = (unsigned long)(&__init_begin);
for (; addr < (unsigned long)(&__init_end); addr += PAGE_SIZE) {
ClearPageReserved(virt_to_page(addr));
init_page_count(virt_to_page(addr));
memset((void *)addr, 0xcc, PAGE_SIZE);
free_page(addr);
totalram_pages++;
}
printk (KERN_INFO "Freeing unused kernel memory: %dk freed\n", (__init_end - __init_begin) >> 10);
}
#ifdef CONFIG_DEBUG_RODATA
extern char __start_rodata, __end_rodata;
@ -758,17 +743,31 @@ void mark_rodata_ro(void)
}
#endif
void free_init_pages(char *what, unsigned long begin, unsigned long end)
{
unsigned long addr;
for (addr = begin; addr < end; addr += PAGE_SIZE) {
ClearPageReserved(virt_to_page(addr));
init_page_count(virt_to_page(addr));
memset((void *)addr, 0xcc, PAGE_SIZE);
free_page(addr);
totalram_pages++;
}
printk(KERN_INFO "Freeing %s: %ldk freed\n", what, (end - begin) >> 10);
}
void free_initmem(void)
{
free_init_pages("unused kernel memory",
(unsigned long)(&__init_begin),
(unsigned long)(&__init_end));
}
#ifdef CONFIG_BLK_DEV_INITRD
void free_initrd_mem(unsigned long start, unsigned long end)
{
if (start < end)
printk (KERN_INFO "Freeing initrd memory: %ldk freed\n", (end - start) >> 10);
for (; start < end; start += PAGE_SIZE) {
ClearPageReserved(virt_to_page(start));
init_page_count(virt_to_page(start));
free_page(start);
totalram_pages++;
}
free_init_pages("initrd memory", start, end);
}
#endif


@ -122,7 +122,7 @@ static void nmi_save_registers(void * dummy)
static void free_msrs(void)
{
int i;
for (i = 0; i < NR_CPUS; ++i) {
for_each_cpu(i) {
kfree(cpu_msrs[i].counters);
cpu_msrs[i].counters = NULL;
kfree(cpu_msrs[i].controls);
@ -138,10 +138,7 @@ static int allocate_msrs(void)
size_t counters_size = sizeof(struct op_msr) * model->num_counters;
int i;
for (i = 0; i < NR_CPUS; ++i) {
if (!cpu_online(i))
continue;
for_each_online_cpu(i) {
cpu_msrs[i].counters = kmalloc(counters_size, GFP_KERNEL);
if (!cpu_msrs[i].counters) {
success = 0;


@ -1,4 +1,4 @@
obj-y := i386.o
obj-y := i386.o init.o
obj-$(CONFIG_PCI_BIOS) += pcbios.o
obj-$(CONFIG_PCI_MMCONFIG) += mmconfig.o direct.o


@ -8,6 +8,7 @@
#include <linux/pci.h>
#include <linux/ioport.h>
#include <linux/init.h>
#include <linux/dmi.h>
#include <asm/acpi.h>
#include <asm/segment.h>
@ -120,11 +121,42 @@ void __devinit pcibios_fixup_bus(struct pci_bus *b)
pci_read_bridge_bases(b);
}
/*
* Enable renumbering of PCI bus# ranges to reach all PCI busses (Cardbus)
*/
#ifdef __i386__
static int __devinit assign_all_busses(struct dmi_system_id *d)
{
pci_probe |= PCI_ASSIGN_ALL_BUSSES;
printk(KERN_INFO "%s detected: enabling PCI bus# renumbering"
" (pci=assign-busses)\n", d->ident);
return 0;
}
#endif
/*
* Laptops which need pci=assign-busses to see Cardbus cards
*/
static struct dmi_system_id __devinitdata pciprobe_dmi_table[] = {
#ifdef __i386__
{
.callback = assign_all_busses,
.ident = "Samsung X20 Laptop",
.matches = {
DMI_MATCH(DMI_SYS_VENDOR, "Samsung Electronics"),
DMI_MATCH(DMI_PRODUCT_NAME, "SX20S"),
},
},
#endif /* __i386__ */
{}
};
struct pci_bus * __devinit pcibios_scan_root(int busnum)
{
struct pci_bus *bus = NULL;
dmi_check_system(pciprobe_dmi_table);
while ((bus = pci_find_next_bus(bus)) != NULL) {
if (bus->number == busnum) {
/* Already scanned */


@ -245,7 +245,7 @@ static int __init pci_check_type2(void)
return works;
}
static int __init pci_direct_init(void)
void __init pci_direct_init(void)
{
struct resource *region, *region2;
@ -258,16 +258,16 @@ static int __init pci_direct_init(void)
if (pci_check_type1()) {
printk(KERN_INFO "PCI: Using configuration type 1\n");
raw_pci_ops = &pci_direct_conf1;
return 0;
return;
}
release_resource(region);
type2:
if ((pci_probe & PCI_PROBE_CONF2) == 0)
goto out;
return;
region = request_region(0xCF8, 4, "PCI conf2");
if (!region)
goto out;
return;
region2 = request_region(0xC000, 0x1000, "PCI conf2");
if (!region2)
goto fail2;
@ -275,15 +275,10 @@ static int __init pci_direct_init(void)
if (pci_check_type2()) {
printk(KERN_INFO "PCI: Using configuration type 2\n");
raw_pci_ops = &pci_direct_conf2;
return 0;
return;
}
release_resource(region2);
fail2:
release_resource(region);
out:
return 0;
}
arch_initcall(pci_direct_init);

arch/i386/pci/init.c (new file, 25 lines)

@ -0,0 +1,25 @@
#include <linux/config.h>
#include <linux/pci.h>
#include <linux/init.h>
#include "pci.h"
/* arch_initcall has too random ordering, so call the initializers
in the right sequence from here. */
static __init int pci_access_init(void)
{
#ifdef CONFIG_PCI_MMCONFIG
pci_mmcfg_init();
#endif
if (raw_pci_ops)
return 0;
#ifdef CONFIG_PCI_BIOS
pci_pcbios_init();
#endif
if (raw_pci_ops)
return 0;
#ifdef CONFIG_PCI_DIRECT
pci_direct_init();
#endif
return 0;
}
arch_initcall(pci_access_init);


@ -172,25 +172,20 @@ static __init void unreachable_devices(void)
}
}
static int __init pci_mmcfg_init(void)
void __init pci_mmcfg_init(void)
{
if ((pci_probe & PCI_PROBE_MMCONF) == 0)
goto out;
return;
acpi_table_parse(ACPI_MCFG, acpi_parse_mcfg);
if ((pci_mmcfg_config_num == 0) ||
(pci_mmcfg_config == NULL) ||
(pci_mmcfg_config[0].base_address == 0))
goto out;
return;
printk(KERN_INFO "PCI: Using MMCONFIG\n");
raw_pci_ops = &pci_mmcfg;
pci_probe = (pci_probe & ~PCI_PROBE_MASK) | PCI_PROBE_MMCONF;
unreachable_devices();
out:
return 0;
}
arch_initcall(pci_mmcfg_init);


@ -476,14 +476,12 @@ int pcibios_set_irq_routing(struct pci_dev *dev, int pin, int irq)
}
EXPORT_SYMBOL(pcibios_set_irq_routing);
static int __init pci_pcbios_init(void)
void __init pci_pcbios_init(void)
{
if ((pci_probe & PCI_PROBE_BIOS)
&& ((raw_pci_ops = pci_find_bios()))) {
pci_probe |= PCI_BIOS_SORT;
pci_bios_present = 1;
}
return 0;
}
arch_initcall(pci_pcbios_init);


@ -80,4 +80,7 @@ extern int pci_conf1_write(unsigned int seg, unsigned int bus,
extern int pci_conf1_read(unsigned int seg, unsigned int bus,
unsigned int devfn, int reg, int len, u32 *value);
extern void pci_direct_init(void);
extern void pci_pcbios_init(void);
extern void pci_mmcfg_init(void);


@ -46,11 +46,6 @@
#define KEYBOARD_INTR 3 /* must match with simulator! */
#define NR_PORTS 1 /* only one port for now */
#define SERIAL_INLINE 1
#ifdef SERIAL_INLINE
#define _INLINE_ inline
#endif
#define IRQ_T(info) ((info->flags & ASYNC_SHARE_IRQ) ? SA_SHIRQ : SA_INTERRUPT)
@ -237,7 +232,7 @@ static void rs_put_char(struct tty_struct *tty, unsigned char ch)
local_irq_restore(flags);
}
static _INLINE_ void transmit_chars(struct async_struct *info, int *intr_done)
static void transmit_chars(struct async_struct *info, int *intr_done)
{
int count;
unsigned long flags;


@ -41,7 +41,6 @@
#include <linux/serial_core.h>
#include <linux/efi.h>
#include <linux/initrd.h>
#include <linux/platform.h>
#include <linux/pm.h>
#include <linux/cpufreq.h>


@ -36,7 +36,7 @@ static struct bteinfo_s *bte_if_on_node(nasid_t nasid, int interface)
nodepda_t *tmp_nodepda;
if (nasid_to_cnodeid(nasid) == -1)
return (struct bteinfo_s *)NULL;;
return (struct bteinfo_s *)NULL;
tmp_nodepda = NODEPDA(nasid_to_cnodeid(nasid));
return &tmp_nodepda->bte_if[interface];


@ -377,7 +377,7 @@ tioca_dma_mapped(struct pci_dev *pdev, u64 paddr, size_t req_size)
struct tioca_dmamap *ca_dmamap;
void *map;
unsigned long flags;
struct pcidev_info *pcidev_info = SN_PCIDEV_INFO(pdev);;
struct pcidev_info *pcidev_info = SN_PCIDEV_INFO(pdev);
tioca_common = (struct tioca_common *)pcidev_info->pdi_pcibus_info;
tioca_kern = (struct tioca_kernel *)tioca_common->ca_kernel_private;


@ -37,9 +37,8 @@ int show_interrupts(struct seq_file *p, void *v)
if (i == 0) {
seq_printf(p, " ");
for (j=0; j<NR_CPUS; j++)
if (cpu_online(j))
seq_printf(p, "CPU%d ",j);
for_each_online_cpu(j)
seq_printf(p, "CPU%d ",j);
seq_putc(p, '\n');
}
@ -52,9 +51,8 @@ int show_interrupts(struct seq_file *p, void *v)
#ifndef CONFIG_SMP
seq_printf(p, "%10u ", kstat_irqs(i));
#else
for (j = 0; j < NR_CPUS; j++)
if (cpu_online(j))
seq_printf(p, "%10u ", kstat_cpu(j).irqs[i]);
for_each_online_cpu(j)
seq_printf(p, "%10u ", kstat_cpu(j).irqs[i]);
#endif
seq_printf(p, " %14s", irq_desc[i].handler->typename);
seq_printf(p, " %s", action->name);


@ -18,6 +18,7 @@
#include <linux/module.h>
#include <linux/mc146818rtc.h> /* For struct rtc_time and ioctls, etc */
#include <linux/smp_lock.h>
#include <linux/bcd.h>
#include <asm/bvme6000hw.h>
#include <asm/io.h>
@ -32,9 +33,6 @@
* ioctls.
*/
#define BCD2BIN(val) (((val)&15) + ((val)>>4)*10)
#define BIN2BCD(val) ((((val)/10)<<4) + (val)%10)
static unsigned char days_in_mo[] =
{0, 31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31};
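The hunk above drops the driver-local BCD macros in favour of <linux/bcd.h>, which provides equivalent BCD2BIN()/BIN2BCD() macros; for example:

#include <linux/bcd.h>

/* Illustrative only: convert a BCD-encoded RTC seconds register. */
static unsigned int rtc_bcd_seconds_to_bin(unsigned char bcd)
{
	return BCD2BIN(bcd);	/* e.g. 0x59 -> 59 */
}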


@ -77,7 +77,7 @@ unsigned long thread_saved_pc(struct task_struct *tsk)
/*
* The idle loop on an m68k..
*/
void default_idle(void)
static void default_idle(void)
{
if (!need_resched())
#if defined(MACH_ATARI_ONLY) && !defined(CONFIG_HADES)

View File

@ -51,7 +51,7 @@ EXPORT_SYMBOL(pm_power_off);
/*
* The idle loop on an m68knommu..
*/
void default_idle(void)
static void default_idle(void)
{
local_irq_disable();
while (!need_resched()) {


@ -68,9 +68,8 @@ int show_interrupts(struct seq_file *p, void *v)
if (i == 0) {
seq_printf(p, " ");
for (j=0; j<NR_CPUS; j++)
if (cpu_online(j))
seq_printf(p, "CPU%d ",j);
for_each_online_cpu(j)
seq_printf(p, "CPU%d ",j);
seq_putc(p, '\n');
}
@ -83,9 +82,8 @@ int show_interrupts(struct seq_file *p, void *v)
#ifndef CONFIG_SMP
seq_printf(p, "%10u ", kstat_irqs(i));
#else
for (j = 0; j < NR_CPUS; j++)
if (cpu_online(j))
seq_printf(p, "%10u ", kstat_cpu(j).irqs[i]);
for_each_online_cpu(j)
seq_printf(p, "%10u ", kstat_cpu(j).irqs[i]);
#endif
seq_printf(p, " %14s", irq_desc[i].handler->typename);
seq_printf(p, " %s", action->name);


@ -167,8 +167,8 @@ int smp_call_function (void (*func) (void *info), void *info, int retry,
mb();
/* Send a message to all other CPUs and wait for them to respond */
for (i = 0; i < NR_CPUS; i++)
if (cpu_online(i) && i != cpu)
for_each_online_cpu(i)
if (i != cpu)
core_send_ipi(i, SMP_CALL_FUNCTION);
/* Wait for response */


@ -138,7 +138,7 @@ dma_addr_t dma_map_single(struct device *dev, void *ptr, size_t size,
BUG();
}
addr = virt_to_phys(ptr)&RAM_OFFSET_MASK;;
addr = virt_to_phys(ptr)&RAM_OFFSET_MASK;
if(dev == NULL)
addr+=CRIME_HI_MEM_BASE;
return (dma_addr_t)addr;
@ -179,7 +179,7 @@ int dma_map_sg(struct device *dev, struct scatterlist *sg, int nents,
addr = (unsigned long) page_address(sg->page)+sg->offset;
if (addr)
__dma_sync(addr, sg->length, direction);
addr = __pa(addr)&RAM_OFFSET_MASK;;
addr = __pa(addr)&RAM_OFFSET_MASK;
if(dev == NULL)
addr += CRIME_HI_MEM_BASE;
sg->dma_address = (dma_addr_t)addr;
@ -199,7 +199,7 @@ dma_addr_t dma_map_page(struct device *dev, struct page *page,
addr = (unsigned long) page_address(page) + offset;
dma_cache_wback_inv(addr, size);
addr = __pa(addr)&RAM_OFFSET_MASK;;
addr = __pa(addr)&RAM_OFFSET_MASK;
if(dev == NULL)
addr += CRIME_HI_MEM_BASE;


@ -88,12 +88,9 @@ static inline int find_level(cpuid_t *cpunum, int irq)
{
int cpu, i;
for (cpu = 0; cpu <= NR_CPUS; cpu++) {
for_each_online_cpu(cpu) {
struct slice_data *si = cpu_data[cpu].data;
if (!cpu_online(cpu))
continue;
for (i = BASE_PCI_IRQ; i < LEVELS_PER_SLICE; i++)
if (si->level_to_irq[i] == irq) {
*cpunum = cpu;


@ -54,11 +54,6 @@
#include <asm/uaccess.h>
#include <asm/unwind.h>
void default_idle(void)
{
barrier();
}
/*
* The idle thread. There's no useful work to be
* done, so just try to conserve power and have a


@ -298,8 +298,8 @@ send_IPI_allbutself(enum ipi_message_type op)
{
int i;
for (i = 0; i < NR_CPUS; i++) {
if (cpu_online(i) && i != smp_processor_id())
for_each_online_cpu(i) {
if (i != smp_processor_id())
send_IPI_single(i, op);
}
}
@ -643,14 +643,13 @@ int sys_cpus(int argc, char **argv)
if ( argc == 1 ){
#ifdef DUMP_MORE_STATE
for(i=0; i<NR_CPUS; i++) {
for_each_online_cpu(i) {
int cpus_per_line = 4;
if(cpu_online(i)) {
if (j++ % cpus_per_line)
printk(" %3d",i);
else
printk("\n %3d",i);
}
if (j++ % cpus_per_line)
printk(" %3d",i);
else
printk("\n %3d",i);
}
printk("\n");
#else
@ -659,9 +658,7 @@ int sys_cpus(int argc, char **argv)
} else if((argc==2) && !(strcmp(argv[1],"-l"))) {
printk("\nCPUSTATE TASK CPUNUM CPUID HARDCPU(HPA)\n");
#ifdef DUMP_MORE_STATE
for(i=0;i<NR_CPUS;i++) {
if (!cpu_online(i))
continue;
for_each_online_cpu(i) {
if (cpu_data[i].cpuid != NO_PROC_ID) {
switch(cpu_data[i].state) {
case STATE_RENDEZVOUS:
@ -695,9 +692,7 @@ int sys_cpus(int argc, char **argv)
} else if ((argc==2) && !(strcmp(argv[1],"-s"))) {
#ifdef DUMP_MORE_STATE
printk("\nCPUSTATE CPUID\n");
for (i=0;i<NR_CPUS;i++) {
if (!cpu_online(i))
continue;
for_each_online_cpu(i) {
if (cpu_data[i].cpuid != NO_PROC_ID) {
switch(cpu_data[i].state) {
case STATE_RENDEZVOUS:


@ -127,6 +127,12 @@ config PPC_83xx
select 83xx
select PPC_FPU
config PPC_85xx
bool "Freescale 85xx"
select E500
select FSL_SOC
select 85xx
config 40x
bool "AMCC 40x"
@ -139,8 +145,6 @@ config 8xx
config E200
bool "Freescale e200"
config E500
bool "Freescale e500"
endchoice
config POWER4_ONLY
@ -168,6 +172,13 @@ config 6xx
config 83xx
bool
# this is temp to handle compat with arch=ppc
config 85xx
bool
config E500
bool
config PPC_FPU
bool
default y if PPC64
@ -217,6 +228,7 @@ config ALTIVEC
config SPE
bool "SPE Support"
depends on E200 || E500
default y
---help---
This option enables kernel support for the Signal Processing
Extensions (SPE) to the PowerPC processor. The kernel currently
@ -238,6 +250,21 @@ config PPC_STD_MMU_32
def_bool y
depends on PPC_STD_MMU && PPC32
config VIRT_CPU_ACCOUNTING
bool "Deterministic task and CPU time accounting"
depends on PPC64
default y
help
Select this option to enable more accurate task and CPU time
accounting. This is done by reading a CPU counter on each
kernel entry and exit and on transitions within the kernel
between system, softirq and hardirq state, so there is a
small performance impact. This also enables accounting of
stolen time on logically-partitioned systems running on
IBM POWER5-based machines.
If in doubt, say Y here.
config SMP
depends on PPC_STD_MMU
bool "Symmetric multi-processing support"
@ -734,13 +761,12 @@ config GENERIC_ISA_DMA
config PPC_I8259
bool
default y if 85xx
default n
config PPC_INDIRECT_PCI
bool
depends on PCI
default y if 40x || 44x || 85xx
default y if 40x || 44x
default n
config EISA
@ -757,8 +783,8 @@ config MCA
bool
config PCI
bool "PCI support" if 40x || CPM2 || PPC_83xx || 85xx || PPC_MPC52xx || (EMBEDDED && PPC_ISERIES)
default y if !40x && !CPM2 && !8xx && !APUS && !PPC_83xx && !85xx
bool "PCI support" if 40x || CPM2 || PPC_83xx || PPC_85xx || PPC_MPC52xx || (EMBEDDED && PPC_ISERIES)
default y if !40x && !CPM2 && !8xx && !APUS && !PPC_83xx && !PPC_85xx
default PCI_PERMEDIA if !4xx && !CPM2 && !8xx && APUS
default PCI_QSPAN if !4xx && !CPM2 && 8xx
help


@ -148,7 +148,7 @@ all: $(KBUILD_IMAGE)
CPPFLAGS_vmlinux.lds := -Upowerpc
BOOT_TARGETS = zImage zImage.initrd znetboot znetboot.initrd vmlinux.sm uImage
BOOT_TARGETS = zImage zImage.initrd znetboot znetboot.initrd vmlinux.sm uImage vmlinux.bin
.PHONY: $(BOOT_TARGETS)


@ -1,7 +1,5 @@
#!/bin/sh
#
# arch/ppc64/boot/install.sh
#
# This file is subject to the terms and conditions of the GNU General Public
# License. See the file "COPYING" in the main directory of this archive
# for more details.


@ -152,7 +152,7 @@ static int is_elf64(void *hdr)
elf64ph = (Elf64_Phdr *)((unsigned long)elf64 +
(unsigned long)elf64->e_phoff);
for (i = 0; i < (unsigned int)elf64->e_phnum; i++, elf64ph++)
if (elf64ph->p_type == PT_LOAD && elf64ph->p_offset != 0)
if (elf64ph->p_type == PT_LOAD)
break;
if (i >= (unsigned int)elf64->e_phnum)
return 0;
@ -193,7 +193,7 @@ static int is_elf32(void *hdr)
elf32 = (Elf32_Ehdr *)elfheader;
elf32ph = (Elf32_Phdr *) ((unsigned long)elf32 + elf32->e_phoff);
for (i = 0; i < elf32->e_phnum; i++, elf32ph++)
if (elf32ph->p_type == PT_LOAD && elf32ph->p_offset != 0)
if (elf32ph->p_type == PT_LOAD)
break;
if (i >= elf32->e_phnum)
return 0;


@ -0,0 +1,721 @@
#
# Automatically generated make config: don't edit
# Linux kernel version:
# Sat Jan 14 15:57:54 2006
#
# CONFIG_PPC64 is not set
CONFIG_PPC32=y
CONFIG_PPC_MERGE=y
CONFIG_MMU=y
CONFIG_GENERIC_HARDIRQS=y
CONFIG_RWSEM_XCHGADD_ALGORITHM=y
CONFIG_GENERIC_CALIBRATE_DELAY=y
CONFIG_PPC=y
CONFIG_EARLY_PRINTK=y
CONFIG_GENERIC_NVRAM=y
CONFIG_SCHED_NO_NO_OMIT_FRAME_POINTER=y
CONFIG_ARCH_MAY_HAVE_PC_FDC=y
CONFIG_PPC_OF=y
CONFIG_PPC_UDBG_16550=y
# CONFIG_GENERIC_TBSYNC is not set
#
# Processor support
#
# CONFIG_CLASSIC32 is not set
# CONFIG_PPC_52xx is not set
# CONFIG_PPC_82xx is not set
# CONFIG_PPC_83xx is not set
CONFIG_PPC_85xx=y
# CONFIG_40x is not set
# CONFIG_44x is not set
# CONFIG_8xx is not set
# CONFIG_E200 is not set
CONFIG_85xx=y
CONFIG_E500=y
CONFIG_BOOKE=y
CONFIG_FSL_BOOKE=y
# CONFIG_PHYS_64BIT is not set
CONFIG_SPE=y
#
# Code maturity level options
#
CONFIG_EXPERIMENTAL=y
CONFIG_CLEAN_COMPILE=y
CONFIG_BROKEN_ON_SMP=y
CONFIG_INIT_ENV_ARG_LIMIT=32
#
# General setup
#
CONFIG_LOCALVERSION=""
CONFIG_LOCALVERSION_AUTO=y
CONFIG_SWAP=y
CONFIG_SYSVIPC=y
# CONFIG_POSIX_MQUEUE is not set
# CONFIG_BSD_PROCESS_ACCT is not set
CONFIG_SYSCTL=y
# CONFIG_AUDIT is not set
# CONFIG_IKCONFIG is not set
CONFIG_INITRAMFS_SOURCE=""
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
CONFIG_EMBEDDED=y
CONFIG_KALLSYMS=y
# CONFIG_KALLSYMS_ALL is not set
# CONFIG_KALLSYMS_EXTRA_PASS is not set
CONFIG_HOTPLUG=y
CONFIG_PRINTK=y
CONFIG_BUG=y
CONFIG_ELF_CORE=y
CONFIG_BASE_FULL=y
CONFIG_FUTEX=y
CONFIG_EPOLL=y
CONFIG_SHMEM=y
CONFIG_CC_ALIGN_FUNCTIONS=0
CONFIG_CC_ALIGN_LABELS=0
CONFIG_CC_ALIGN_LOOPS=0
CONFIG_CC_ALIGN_JUMPS=0
CONFIG_SLAB=y
# CONFIG_TINY_SHMEM is not set
CONFIG_BASE_SMALL=0
# CONFIG_SLOB is not set
#
# Loadable module support
#
# CONFIG_MODULES is not set
#
# Block layer
#
# CONFIG_LBD is not set
#
# IO Schedulers
#
CONFIG_IOSCHED_NOOP=y
CONFIG_IOSCHED_AS=y
CONFIG_IOSCHED_DEADLINE=y
CONFIG_IOSCHED_CFQ=y
CONFIG_DEFAULT_AS=y
# CONFIG_DEFAULT_DEADLINE is not set
# CONFIG_DEFAULT_CFQ is not set
# CONFIG_DEFAULT_NOOP is not set
CONFIG_DEFAULT_IOSCHED="anticipatory"
CONFIG_MPIC=y
# CONFIG_WANT_EARLY_SERIAL is not set
#
# Platform support
#
CONFIG_MPC8540_ADS=y
CONFIG_MPC8540=y
CONFIG_PPC_INDIRECT_PCI_BE=y
#
# Kernel options
#
# CONFIG_HIGHMEM is not set
# CONFIG_HZ_100 is not set
CONFIG_HZ_250=y
# CONFIG_HZ_1000 is not set
CONFIG_HZ=250
CONFIG_PREEMPT_NONE=y
# CONFIG_PREEMPT_VOLUNTARY is not set
# CONFIG_PREEMPT is not set
CONFIG_BINFMT_ELF=y
CONFIG_BINFMT_MISC=y
CONFIG_MATH_EMULATION=y
CONFIG_ARCH_FLATMEM_ENABLE=y
CONFIG_SELECT_MEMORY_MODEL=y
CONFIG_FLATMEM_MANUAL=y
# CONFIG_DISCONTIGMEM_MANUAL is not set
# CONFIG_SPARSEMEM_MANUAL is not set
CONFIG_FLATMEM=y
CONFIG_FLAT_NODE_MEM_MAP=y
# CONFIG_SPARSEMEM_STATIC is not set
CONFIG_SPLIT_PTLOCK_CPUS=4
CONFIG_PROC_DEVICETREE=y
# CONFIG_CMDLINE_BOOL is not set
# CONFIG_PM is not set
# CONFIG_SOFTWARE_SUSPEND is not set
# CONFIG_SECCOMP is not set
CONFIG_ISA_DMA_API=y
#
# Bus options
#
# CONFIG_PPC_I8259 is not set
CONFIG_PPC_INDIRECT_PCI=y
CONFIG_FSL_SOC=y
# CONFIG_PCI is not set
# CONFIG_PCI_DOMAINS is not set
#
# PCCARD (PCMCIA/CardBus) support
#
# CONFIG_PCCARD is not set
#
# PCI Hotplug Support
#
#
# Advanced setup
#
# CONFIG_ADVANCED_OPTIONS is not set
#
# Default settings for advanced configuration options are used
#
CONFIG_HIGHMEM_START=0xfe000000
CONFIG_LOWMEM_SIZE=0x30000000
CONFIG_KERNEL_START=0xc0000000
CONFIG_TASK_SIZE=0x80000000
CONFIG_BOOT_LOAD=0x00800000
#
# Networking
#
CONFIG_NET=y
#
# Networking options
#
CONFIG_PACKET=y
# CONFIG_PACKET_MMAP is not set
CONFIG_UNIX=y
# CONFIG_NET_KEY is not set
CONFIG_INET=y
CONFIG_IP_MULTICAST=y
# CONFIG_IP_ADVANCED_ROUTER is not set
CONFIG_IP_FIB_HASH=y
CONFIG_IP_PNP=y
CONFIG_IP_PNP_DHCP=y
CONFIG_IP_PNP_BOOTP=y
# CONFIG_IP_PNP_RARP is not set
# CONFIG_NET_IPIP is not set
# CONFIG_NET_IPGRE is not set
# CONFIG_IP_MROUTE is not set
# CONFIG_ARPD is not set
CONFIG_SYN_COOKIES=y
# CONFIG_INET_AH is not set
# CONFIG_INET_ESP is not set
# CONFIG_INET_IPCOMP is not set
# CONFIG_INET_TUNNEL is not set
CONFIG_INET_DIAG=y
CONFIG_INET_TCP_DIAG=y
# CONFIG_TCP_CONG_ADVANCED is not set
CONFIG_TCP_CONG_BIC=y
# CONFIG_IPV6 is not set
# CONFIG_NETFILTER is not set
#
# DCCP Configuration (EXPERIMENTAL)
#
# CONFIG_IP_DCCP is not set
#
# SCTP Configuration (EXPERIMENTAL)
#
# CONFIG_IP_SCTP is not set
# CONFIG_ATM is not set
# CONFIG_BRIDGE is not set
# CONFIG_VLAN_8021Q is not set
# CONFIG_DECNET is not set
# CONFIG_LLC2 is not set
# CONFIG_IPX is not set
# CONFIG_ATALK is not set
# CONFIG_X25 is not set
# CONFIG_LAPB is not set
#
# TIPC Configuration (EXPERIMENTAL)
#
# CONFIG_TIPC is not set
# CONFIG_NET_DIVERT is not set
# CONFIG_ECONET is not set
# CONFIG_WAN_ROUTER is not set
#
# QoS and/or fair queueing
#
# CONFIG_NET_SCHED is not set
#
# Network testing
#
# CONFIG_NET_PKTGEN is not set
# CONFIG_HAMRADIO is not set
# CONFIG_IRDA is not set
# CONFIG_BT is not set
# CONFIG_IEEE80211 is not set
#
# Device Drivers
#
#
# Generic Driver Options
#
CONFIG_STANDALONE=y
CONFIG_PREVENT_FIRMWARE_BUILD=y
# CONFIG_FW_LOADER is not set
# CONFIG_DEBUG_DRIVER is not set
#
# Connector - unified userspace <-> kernelspace linker
#
# CONFIG_CONNECTOR is not set
#
# Memory Technology Devices (MTD)
#
# CONFIG_MTD is not set
#
# Parallel port support
#
# CONFIG_PARPORT is not set
#
# Plug and Play support
#
#
# Block devices
#
# CONFIG_BLK_DEV_FD is not set
# CONFIG_BLK_DEV_COW_COMMON is not set
CONFIG_BLK_DEV_LOOP=y
# CONFIG_BLK_DEV_CRYPTOLOOP is not set
# CONFIG_BLK_DEV_NBD is not set
CONFIG_BLK_DEV_RAM=y
CONFIG_BLK_DEV_RAM_COUNT=16
CONFIG_BLK_DEV_RAM_SIZE=32768
CONFIG_BLK_DEV_INITRD=y
# CONFIG_CDROM_PKTCDVD is not set
# CONFIG_ATA_OVER_ETH is not set
#
# ATA/ATAPI/MFM/RLL support
#
# CONFIG_IDE is not set
#
# SCSI device support
#
# CONFIG_RAID_ATTRS is not set
# CONFIG_SCSI is not set
#
# Multi-device support (RAID and LVM)
#
# CONFIG_MD is not set
#
# Fusion MPT device support
#
# CONFIG_FUSION is not set
#
# IEEE 1394 (FireWire) support
#
#
# I2O device support
#
#
# Macintosh device drivers
#
# CONFIG_WINDFARM is not set
#
# Network device support
#
CONFIG_NETDEVICES=y
# CONFIG_DUMMY is not set
# CONFIG_BONDING is not set
# CONFIG_EQUALIZER is not set
# CONFIG_TUN is not set
#
# PHY device support
#
CONFIG_PHYLIB=y
#
# MII PHY device drivers
#
# CONFIG_MARVELL_PHY is not set
# CONFIG_DAVICOM_PHY is not set
# CONFIG_QSEMI_PHY is not set
# CONFIG_LXT_PHY is not set
# CONFIG_CICADA_PHY is not set
#
# Ethernet (10 or 100Mbit)
#
CONFIG_NET_ETHERNET=y
CONFIG_MII=y
#
# Ethernet (1000 Mbit)
#
CONFIG_GIANFAR=y
CONFIG_GFAR_NAPI=y
#
# Ethernet (10000 Mbit)
#
#
# Token Ring devices
#
#
# Wireless LAN (non-hamradio)
#
# CONFIG_NET_RADIO is not set
#
# Wan interfaces
#
# CONFIG_WAN is not set
# CONFIG_PPP is not set
# CONFIG_SLIP is not set
# CONFIG_SHAPER is not set
# CONFIG_NETCONSOLE is not set
# CONFIG_NETPOLL is not set
# CONFIG_NET_POLL_CONTROLLER is not set
#
# ISDN subsystem
#
# CONFIG_ISDN is not set
#
# Telephony Support
#
# CONFIG_PHONE is not set
#
# Input device support
#
CONFIG_INPUT=y
#
# Userland interfaces
#
# CONFIG_INPUT_MOUSEDEV is not set
# CONFIG_INPUT_JOYDEV is not set
# CONFIG_INPUT_TSDEV is not set
# CONFIG_INPUT_EVDEV is not set
# CONFIG_INPUT_EVBUG is not set
#
# Input Device Drivers
#
# CONFIG_INPUT_KEYBOARD is not set
# CONFIG_INPUT_MOUSE is not set
# CONFIG_INPUT_JOYSTICK is not set
# CONFIG_INPUT_TOUCHSCREEN is not set
# CONFIG_INPUT_MISC is not set
#
# Hardware I/O ports
#
# CONFIG_SERIO is not set
# CONFIG_GAMEPORT is not set
#
# Character devices
#
# CONFIG_VT is not set
# CONFIG_SERIAL_NONSTANDARD is not set
#
# Serial drivers
#
CONFIG_SERIAL_8250=y
CONFIG_SERIAL_8250_CONSOLE=y
CONFIG_SERIAL_8250_NR_UARTS=4
CONFIG_SERIAL_8250_RUNTIME_UARTS=4
# CONFIG_SERIAL_8250_EXTENDED is not set
#
# Non-8250 serial port support
#
CONFIG_SERIAL_CORE=y
CONFIG_SERIAL_CORE_CONSOLE=y
CONFIG_UNIX98_PTYS=y
CONFIG_LEGACY_PTYS=y
CONFIG_LEGACY_PTY_COUNT=256
#
# IPMI
#
# CONFIG_IPMI_HANDLER is not set
#
# Watchdog Cards
#
# CONFIG_WATCHDOG is not set
# CONFIG_NVRAM is not set
CONFIG_GEN_RTC=y
# CONFIG_GEN_RTC_X is not set
# CONFIG_DTLK is not set
# CONFIG_R3964 is not set
#
# Ftape, the floppy tape device driver
#
# CONFIG_AGP is not set
# CONFIG_RAW_DRIVER is not set
#
# TPM devices
#
# CONFIG_TCG_TPM is not set
# CONFIG_TELCLOCK is not set
#
# I2C support
#
# CONFIG_I2C is not set
#
# Dallas's 1-wire bus
#
# CONFIG_W1 is not set
#
# Hardware Monitoring support
#
CONFIG_HWMON=y
# CONFIG_HWMON_VID is not set
# CONFIG_HWMON_DEBUG_CHIP is not set
#
# Misc devices
#
#
# Multimedia Capabilities Port drivers
#
#
# Multimedia devices
#
# CONFIG_VIDEO_DEV is not set
#
# Digital Video Broadcasting Devices
#
# CONFIG_DVB is not set
#
# Graphics support
#
# CONFIG_FB is not set
#
# Sound
#
# CONFIG_SOUND is not set
#
# USB support
#
# CONFIG_USB_ARCH_HAS_HCD is not set
# CONFIG_USB_ARCH_HAS_OHCI is not set
#
# NOTE: USB_STORAGE enables SCSI, and 'SCSI disk support'
#
#
# USB Gadget Support
#
# CONFIG_USB_GADGET is not set
#
# MMC/SD Card support
#
# CONFIG_MMC is not set
#
# InfiniBand support
#
#
# SN Devices
#
#
# File systems
#
CONFIG_EXT2_FS=y
# CONFIG_EXT2_FS_XATTR is not set
# CONFIG_EXT2_FS_XIP is not set
CONFIG_EXT3_FS=y
CONFIG_EXT3_FS_XATTR=y
# CONFIG_EXT3_FS_POSIX_ACL is not set
# CONFIG_EXT3_FS_SECURITY is not set
CONFIG_JBD=y
# CONFIG_JBD_DEBUG is not set
CONFIG_FS_MBCACHE=y
# CONFIG_REISERFS_FS is not set
# CONFIG_JFS_FS is not set
# CONFIG_FS_POSIX_ACL is not set
# CONFIG_XFS_FS is not set
# CONFIG_OCFS2_FS is not set
# CONFIG_MINIX_FS is not set
# CONFIG_ROMFS_FS is not set
CONFIG_INOTIFY=y
# CONFIG_QUOTA is not set
CONFIG_DNOTIFY=y
# CONFIG_AUTOFS_FS is not set
# CONFIG_AUTOFS4_FS is not set
# CONFIG_FUSE_FS is not set
#
# CD-ROM/DVD Filesystems
#
# CONFIG_ISO9660_FS is not set
# CONFIG_UDF_FS is not set
#
# DOS/FAT/NT Filesystems
#
# CONFIG_MSDOS_FS is not set
# CONFIG_VFAT_FS is not set
# CONFIG_NTFS_FS is not set
#
# Pseudo filesystems
#
CONFIG_PROC_FS=y
CONFIG_PROC_KCORE=y
CONFIG_SYSFS=y
CONFIG_TMPFS=y
# CONFIG_HUGETLB_PAGE is not set
CONFIG_RAMFS=y
# CONFIG_RELAYFS_FS is not set
# CONFIG_CONFIGFS_FS is not set
#
# Miscellaneous filesystems
#
# CONFIG_ADFS_FS is not set
# CONFIG_AFFS_FS is not set
# CONFIG_HFS_FS is not set
# CONFIG_HFSPLUS_FS is not set
# CONFIG_BEFS_FS is not set
# CONFIG_BFS_FS is not set
# CONFIG_EFS_FS is not set
# CONFIG_CRAMFS is not set
# CONFIG_VXFS_FS is not set
# CONFIG_HPFS_FS is not set
# CONFIG_QNX4FS_FS is not set
# CONFIG_SYSV_FS is not set
# CONFIG_UFS_FS is not set
#
# Network File Systems
#
CONFIG_NFS_FS=y
# CONFIG_NFS_V3 is not set
# CONFIG_NFS_V4 is not set
# CONFIG_NFS_DIRECTIO is not set
# CONFIG_NFSD is not set
CONFIG_ROOT_NFS=y
CONFIG_LOCKD=y
CONFIG_NFS_COMMON=y
CONFIG_SUNRPC=y
# CONFIG_RPCSEC_GSS_KRB5 is not set
# CONFIG_RPCSEC_GSS_SPKM3 is not set
# CONFIG_SMB_FS is not set
# CONFIG_CIFS is not set
# CONFIG_NCP_FS is not set
# CONFIG_CODA_FS is not set
# CONFIG_AFS_FS is not set
# CONFIG_9P_FS is not set
#
# Partition Types
#
CONFIG_PARTITION_ADVANCED=y
# CONFIG_ACORN_PARTITION is not set
# CONFIG_OSF_PARTITION is not set
# CONFIG_AMIGA_PARTITION is not set
# CONFIG_ATARI_PARTITION is not set
# CONFIG_MAC_PARTITION is not set
# CONFIG_MSDOS_PARTITION is not set
# CONFIG_LDM_PARTITION is not set
# CONFIG_SGI_PARTITION is not set
# CONFIG_ULTRIX_PARTITION is not set
# CONFIG_SUN_PARTITION is not set
# CONFIG_EFI_PARTITION is not set
#
# Native Language Support
#
# CONFIG_NLS is not set
#
# Library routines
#
# CONFIG_CRC_CCITT is not set
# CONFIG_CRC16 is not set
CONFIG_CRC32=y
# CONFIG_LIBCRC32C is not set
#
# Instrumentation Support
#
# CONFIG_PROFILING is not set
#
# Kernel hacking
#
# CONFIG_PRINTK_TIME is not set
# CONFIG_MAGIC_SYSRQ is not set
CONFIG_DEBUG_KERNEL=y
CONFIG_LOG_BUF_SHIFT=14
CONFIG_DETECT_SOFTLOCKUP=y
# CONFIG_SCHEDSTATS is not set
# CONFIG_DEBUG_SLAB is not set
CONFIG_DEBUG_MUTEXES=y
# CONFIG_DEBUG_SPINLOCK is not set
# CONFIG_DEBUG_SPINLOCK_SLEEP is not set
# CONFIG_DEBUG_KOBJECT is not set
# CONFIG_DEBUG_INFO is not set
# CONFIG_DEBUG_FS is not set
# CONFIG_DEBUG_VM is not set
# CONFIG_RCU_TORTURE_TEST is not set
# CONFIG_DEBUGGER is not set
# CONFIG_BDI_SWITCH is not set
# CONFIG_BOOTX_TEXT is not set
# CONFIG_PPC_EARLY_DEBUG_LPAR is not set
# CONFIG_PPC_EARLY_DEBUG_G5 is not set
# CONFIG_PPC_EARLY_DEBUG_RTAS is not set
# CONFIG_PPC_EARLY_DEBUG_MAPLE is not set
# CONFIG_PPC_EARLY_DEBUG_ISERIES is not set
#
# Security options
#
# CONFIG_KEYS is not set
# CONFIG_SECURITY is not set
#
# Cryptographic options
#
# CONFIG_CRYPTO is not set
#
# Hardware crypto devices
#

View File

@ -136,6 +136,9 @@ int main(void)
DEFINE(PACAEMERGSP, offsetof(struct paca_struct, emergency_sp));
DEFINE(PACALPPACAPTR, offsetof(struct paca_struct, lppaca_ptr));
DEFINE(PACAHWCPUID, offsetof(struct paca_struct, hw_cpu_id));
DEFINE(PACA_STARTPURR, offsetof(struct paca_struct, startpurr));
DEFINE(PACA_USER_TIME, offsetof(struct paca_struct, user_time));
DEFINE(PACA_SYSTEM_TIME, offsetof(struct paca_struct, system_time));
DEFINE(LPPACASRR0, offsetof(struct lppaca, saved_srr0));
DEFINE(LPPACASRR1, offsetof(struct lppaca, saved_srr1));

View File

@ -894,7 +894,7 @@ struct cpu_spec cpu_specs[] = {
.platform = "ppc405",
},
{ /* Xilinx Virtex-II Pro */
.pvr_mask = 0xffff0000,
.pvr_mask = 0xfffff000,
.pvr_value = 0x20010000,
.cpu_name = "Virtex-II Pro",
.cpu_features = CPU_FTRS_40X,
@ -904,6 +904,16 @@ struct cpu_spec cpu_specs[] = {
.dcache_bsize = 32,
.platform = "ppc405",
},
{ /* Xilinx Virtex-4 FX */
.pvr_mask = 0xfffff000,
.pvr_value = 0x20011000,
.cpu_name = "Virtex-4 FX",
.cpu_features = CPU_FTRS_40X,
.cpu_user_features = PPC_FEATURE_32 |
PPC_FEATURE_HAS_MMU | PPC_FEATURE_HAS_4xxMAC,
.icache_bsize = 32,
.dcache_bsize = 32,
},
{ /* 405EP */
.pvr_mask = 0xffff0000,
.pvr_value = 0x51210000,

View File

@ -1,6 +1,4 @@
/*
* arch/ppc64/kernel/entry.S
*
* PowerPC version
* Copyright (C) 1995-1996 Gary Thomas (gdt@linuxppc.org)
* Rewritten by Cort Dougan (cort@cs.nmt.edu) for PReP
@ -63,6 +61,7 @@ system_call_common:
std r12,_MSR(r1)
std r0,GPR0(r1)
std r10,GPR1(r1)
ACCOUNT_CPU_USER_ENTRY(r10, r11)
std r2,GPR2(r1)
std r3,GPR3(r1)
std r4,GPR4(r1)
@ -170,8 +169,9 @@ syscall_error_cont:
stdcx. r0,0,r1 /* to clear the reservation */
andi. r6,r8,MSR_PR
ld r4,_LINK(r1)
beq- 1f /* only restore r13 if */
ld r13,GPR13(r1) /* returning to usermode */
beq- 1f
ACCOUNT_CPU_USER_EXIT(r11, r12)
ld r13,GPR13(r1) /* only restore r13 if returning to usermode */
1: ld r2,GPR2(r1)
li r12,MSR_RI
andc r11,r10,r12
@ -322,7 +322,7 @@ _GLOBAL(ret_from_fork)
* the fork code also.
*
* The code which creates the new task context is in 'copy_thread'
* in arch/ppc64/kernel/process.c
* in arch/powerpc/kernel/process.c
*/
.align 7
_GLOBAL(_switch)
@ -486,6 +486,7 @@ restore:
* userspace
*/
beq 1f
ACCOUNT_CPU_USER_EXIT(r3, r4)
REST_GPR(13, r1)
1:
ld r3,_CTR(r1)

View File

@ -18,28 +18,3 @@
#include <asm/firmware.h>
unsigned long ppc64_firmware_features;
#ifdef CONFIG_PPC_PSERIES
firmware_feature_t firmware_features_table[FIRMWARE_MAX_FEATURES] = {
{FW_FEATURE_PFT, "hcall-pft"},
{FW_FEATURE_TCE, "hcall-tce"},
{FW_FEATURE_SPRG0, "hcall-sprg0"},
{FW_FEATURE_DABR, "hcall-dabr"},
{FW_FEATURE_COPY, "hcall-copy"},
{FW_FEATURE_ASR, "hcall-asr"},
{FW_FEATURE_DEBUG, "hcall-debug"},
{FW_FEATURE_PERF, "hcall-perf"},
{FW_FEATURE_DUMP, "hcall-dump"},
{FW_FEATURE_INTERRUPT, "hcall-interrupt"},
{FW_FEATURE_MIGRATE, "hcall-migrate"},
{FW_FEATURE_PERFMON, "hcall-perfmon"},
{FW_FEATURE_CRQ, "hcall-crq"},
{FW_FEATURE_VIO, "hcall-vio"},
{FW_FEATURE_RDMA, "hcall-rdma"},
{FW_FEATURE_LLAN, "hcall-lLAN"},
{FW_FEATURE_BULK, "hcall-bulk"},
{FW_FEATURE_XDABR, "hcall-xdabr"},
{FW_FEATURE_MULTITCE, "hcall-multi-tce"},
{FW_FEATURE_SPLPAR, "hcall-splpar"},
};
#endif

View File

@ -1,6 +1,4 @@
/*
* arch/ppc/kernel/head_44x.S
*
* Kernel execution entry point code.
*
* Copyright (c) 1995-1996 Gary Thomas <gdt@linuxppc.org>

View File

@ -1,6 +1,4 @@
/*
* arch/ppc64/kernel/head.S
*
* PowerPC version
* Copyright (C) 1995-1996 Gary Thomas (gdt@linuxppc.org)
*
@ -279,6 +277,7 @@ exception_marker:
std r10,0(r1); /* make stack chain pointer */ \
std r0,GPR0(r1); /* save r0 in stackframe */ \
std r10,GPR1(r1); /* save r1 in stackframe */ \
ACCOUNT_CPU_USER_ENTRY(r9, r10); \
std r2,GPR2(r1); /* save r2 in stackframe */ \
SAVE_4GPRS(3, r1); /* save r3 - r6 in stackframe */ \
SAVE_2GPRS(7, r1); /* save r7, r8 in stackframe */ \
@ -846,6 +845,14 @@ fast_exception_return:
ld r11,_NIP(r1)
andi. r3,r12,MSR_RI /* check if RI is set */
beq- unrecov_fer
#ifdef CONFIG_VIRT_CPU_ACCOUNTING
andi. r3,r12,MSR_PR
beq 2f
ACCOUNT_CPU_USER_EXIT(r3, r4)
2:
#endif
ld r3,_CCR(r1)
ld r4,_LINK(r1)
ld r5,_CTR(r1)

View File

@ -1,6 +1,4 @@
/*
* arch/ppc/kernel/except_8xx.S
*
* PowerPC version
* Copyright (C) 1995-1996 Gary Thomas (gdt@linuxppc.org)
* Rewritten by Cort Dougan (cort@cs.nmt.edu) for PReP

View File

@ -0,0 +1,363 @@
#ifndef __HEAD_BOOKE_H__
#define __HEAD_BOOKE_H__
/*
* Macros used for common Book-e exception handling
*/
#define SET_IVOR(vector_number, vector_label) \
li r26,vector_label@l; \
mtspr SPRN_IVOR##vector_number,r26; \
sync
#define NORMAL_EXCEPTION_PROLOG \
mtspr SPRN_SPRG0,r10; /* save two registers to work with */\
mtspr SPRN_SPRG1,r11; \
mtspr SPRN_SPRG4W,r1; \
mfcr r10; /* save CR in r10 for now */\
mfspr r11,SPRN_SRR1; /* check whether user or kernel */\
andi. r11,r11,MSR_PR; \
beq 1f; \
mfspr r1,SPRN_SPRG3; /* if from user, start at top of */\
lwz r1,THREAD_INFO-THREAD(r1); /* this thread's kernel stack */\
addi r1,r1,THREAD_SIZE; \
1: subi r1,r1,INT_FRAME_SIZE; /* Allocate an exception frame */\
mr r11,r1; \
stw r10,_CCR(r11); /* save various registers */\
stw r12,GPR12(r11); \
stw r9,GPR9(r11); \
mfspr r10,SPRN_SPRG0; \
stw r10,GPR10(r11); \
mfspr r12,SPRN_SPRG1; \
stw r12,GPR11(r11); \
mflr r10; \
stw r10,_LINK(r11); \
mfspr r10,SPRN_SPRG4R; \
mfspr r12,SPRN_SRR0; \
stw r10,GPR1(r11); \
mfspr r9,SPRN_SRR1; \
stw r10,0(r11); \
rlwinm r9,r9,0,14,12; /* clear MSR_WE (necessary?) */\
stw r0,GPR0(r11); \
SAVE_4GPRS(3, r11); \
SAVE_2GPRS(7, r11)
/* To handle the additional exception priority levels on 40x and Book-E
* processors we allocate a 4k stack per additional priority level. The various
* head_xxx.S files allocate space (exception_stack_top) for each priority's
* stack times the number of CPUs
*
* On 40x critical is the only additional level
* On 44x/e500 we have critical and machine check
* On e200 we have critical and debug (machine check occurs via critical)
*
* Additionally we reserve a SPRG for each priority level so we can free up a
* GPR to use as the base for indirect access to the exception stacks. This
* is necessary since the MMU is always on, for Book-E parts, and the stacks
* are offset from KERNELBASE.
*
*/
#define BOOKE_EXCEPTION_STACK_SIZE (8192)
/* CRIT_SPRG only used in critical exception handling */
#define CRIT_SPRG SPRN_SPRG2
/* MCHECK_SPRG only used in machine check exception handling */
#define MCHECK_SPRG SPRN_SPRG6W
#define MCHECK_STACK_TOP (exception_stack_top - 4096)
#define CRIT_STACK_TOP (exception_stack_top)
/* only on e200 for now */
#define DEBUG_STACK_TOP (exception_stack_top - 4096)
#define DEBUG_SPRG SPRN_SPRG6W
#ifdef CONFIG_SMP
#define BOOKE_LOAD_EXC_LEVEL_STACK(level) \
mfspr r8,SPRN_PIR; \
mulli r8,r8,BOOKE_EXCEPTION_STACK_SIZE; \
neg r8,r8; \
addis r8,r8,level##_STACK_TOP@ha; \
addi r8,r8,level##_STACK_TOP@l
#else
#define BOOKE_LOAD_EXC_LEVEL_STACK(level) \
lis r8,level##_STACK_TOP@h; \
ori r8,r8,level##_STACK_TOP@l
#endif
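/*
 * Worked example (illustrative; the two-CPU figures are hypothetical): with
 * BOOKE_EXCEPTION_STACK_SIZE = 8192 the SMP variant above computes
 *     r8 = level##_STACK_TOP - (PIR * 8192)
 * so the CPU with PIR 0 uses the 8k block ending at CRIT_STACK_TOP and a CPU
 * with PIR 1 would use the 8k block directly below it.  Within each block the
 * critical stack occupies the upper 4k, and the machine check stack, whose
 * top (MCHECK_STACK_TOP) sits 4096 bytes lower, occupies the lower 4k.
 */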
/*
* Exception prolog for critical/machine check exceptions. This is a
* little different from the normal exception prolog above since a
* critical/machine check exception can potentially occur at any point
* during normal exception processing. Thus we cannot use the same SPRG
* registers as the normal prolog above. Instead we use a portion of the
* critical/machine check exception stack at low physical addresses.
*/
#define EXC_LEVEL_EXCEPTION_PROLOG(exc_level, exc_level_srr0, exc_level_srr1) \
mtspr exc_level##_SPRG,r8; \
BOOKE_LOAD_EXC_LEVEL_STACK(exc_level);/* r8 points to the exc_level stack*/ \
stw r10,GPR10-INT_FRAME_SIZE(r8); \
stw r11,GPR11-INT_FRAME_SIZE(r8); \
mfcr r10; /* save CR in r10 for now */\
mfspr r11,exc_level_srr1; /* check whether user or kernel */\
andi. r11,r11,MSR_PR; \
mr r11,r8; \
mfspr r8,exc_level##_SPRG; \
beq 1f; \
/* COMING FROM USER MODE */ \
mfspr r11,SPRN_SPRG3; /* if from user, start at top of */\
lwz r11,THREAD_INFO-THREAD(r11); /* this thread's kernel stack */\
addi r11,r11,THREAD_SIZE; \
1: subi r11,r11,INT_FRAME_SIZE; /* Allocate an exception frame */\
stw r10,_CCR(r11); /* save various registers */\
stw r12,GPR12(r11); \
stw r9,GPR9(r11); \
mflr r10; \
stw r10,_LINK(r11); \
mfspr r12,SPRN_DEAR; /* save DEAR and ESR in the frame */\
stw r12,_DEAR(r11); /* since they may have had stuff */\
mfspr r9,SPRN_ESR; /* in them at the point where the */\
stw r9,_ESR(r11); /* exception was taken */\
mfspr r12,exc_level_srr0; \
stw r1,GPR1(r11); \
mfspr r9,exc_level_srr1; \
stw r1,0(r11); \
mr r1,r11; \
rlwinm r9,r9,0,14,12; /* clear MSR_WE (necessary?) */\
stw r0,GPR0(r11); \
SAVE_4GPRS(3, r11); \
SAVE_2GPRS(7, r11)
#define CRITICAL_EXCEPTION_PROLOG \
EXC_LEVEL_EXCEPTION_PROLOG(CRIT, SPRN_CSRR0, SPRN_CSRR1)
#define DEBUG_EXCEPTION_PROLOG \
EXC_LEVEL_EXCEPTION_PROLOG(DEBUG, SPRN_DSRR0, SPRN_DSRR1)
#define MCHECK_EXCEPTION_PROLOG \
EXC_LEVEL_EXCEPTION_PROLOG(MCHECK, SPRN_MCSRR0, SPRN_MCSRR1)
/*
* Exception vectors.
*/
#define START_EXCEPTION(label) \
.align 5; \
label:
#define FINISH_EXCEPTION(func) \
bl transfer_to_handler_full; \
.long func; \
.long ret_from_except_full
#define EXCEPTION(n, label, hdlr, xfer) \
START_EXCEPTION(label); \
NORMAL_EXCEPTION_PROLOG; \
addi r3,r1,STACK_FRAME_OVERHEAD; \
xfer(n, hdlr)
#define CRITICAL_EXCEPTION(n, label, hdlr) \
START_EXCEPTION(label); \
CRITICAL_EXCEPTION_PROLOG; \
addi r3,r1,STACK_FRAME_OVERHEAD; \
EXC_XFER_TEMPLATE(hdlr, n+2, (MSR_KERNEL & ~(MSR_ME|MSR_DE|MSR_CE)), \
NOCOPY, crit_transfer_to_handler, \
ret_from_crit_exc)
#define MCHECK_EXCEPTION(n, label, hdlr) \
START_EXCEPTION(label); \
MCHECK_EXCEPTION_PROLOG; \
mfspr r5,SPRN_ESR; \
stw r5,_ESR(r11); \
addi r3,r1,STACK_FRAME_OVERHEAD; \
EXC_XFER_TEMPLATE(hdlr, n+2, (MSR_KERNEL & ~(MSR_ME|MSR_DE|MSR_CE)), \
NOCOPY, mcheck_transfer_to_handler, \
ret_from_mcheck_exc)
#define EXC_XFER_TEMPLATE(hdlr, trap, msr, copyee, tfer, ret) \
li r10,trap; \
stw r10,_TRAP(r11); \
lis r10,msr@h; \
ori r10,r10,msr@l; \
copyee(r10, r9); \
bl tfer; \
.long hdlr; \
.long ret
#define COPY_EE(d, s) rlwimi d,s,0,16,16
#define NOCOPY(d, s)
#define EXC_XFER_STD(n, hdlr) \
EXC_XFER_TEMPLATE(hdlr, n, MSR_KERNEL, NOCOPY, transfer_to_handler_full, \
ret_from_except_full)
#define EXC_XFER_LITE(n, hdlr) \
EXC_XFER_TEMPLATE(hdlr, n+1, MSR_KERNEL, NOCOPY, transfer_to_handler, \
ret_from_except)
#define EXC_XFER_EE(n, hdlr) \
EXC_XFER_TEMPLATE(hdlr, n, MSR_KERNEL, COPY_EE, transfer_to_handler_full, \
ret_from_except_full)
#define EXC_XFER_EE_LITE(n, hdlr) \
EXC_XFER_TEMPLATE(hdlr, n+1, MSR_KERNEL, COPY_EE, transfer_to_handler, \
ret_from_except)
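/*
 * Summary of the EXC_XFER variants defined above: the _STD and _EE forms
 * transfer via transfer_to_handler_full and pass the trap number n unchanged,
 * while the _LITE forms use the lighter transfer_to_handler path and mark
 * that by passing n+1.  The _EE forms additionally apply COPY_EE, which
 * copies the interrupted context's external interrupt enable bit (MSR_EE,
 * bit 16 of r9) into the MSR the handler will run with; NOCOPY leaves
 * MSR_KERNEL unchanged.
 */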
/* Check for a single step debug exception while in an exception
* handler before state has been saved. This is to catch the case
* where an instruction that we are trying to single step causes
* an exception (eg ITLB/DTLB miss) and thus the first instruction of
* the exception handler generates a single step debug exception.
*
* If we get a debug trap on the first instruction of an exception handler,
* we reset the MSR_DE in the _exception handler's_ MSR (the debug trap is
* a critical exception, so we are using SPRN_CSRR1 to manipulate the MSR).
* The exception handler was handling a non-critical interrupt, so it will
* save (and later restore) the MSR via SPRN_SRR1, which will still have
* the MSR_DE bit set.
*/
#ifdef CONFIG_E200
#define DEBUG_EXCEPTION \
START_EXCEPTION(Debug); \
DEBUG_EXCEPTION_PROLOG; \
\
/* \
* If there is a single step or branch-taken exception in an \
* exception entry sequence, it was probably meant to apply to \
* the code where the exception occurred (since exception entry \
* doesn't turn off DE automatically). We simulate the effect \
* of turning off DE on entry to an exception handler by turning \
* off DE in the CSRR1 value and clearing the debug status. \
*/ \
mfspr r10,SPRN_DBSR; /* check single-step/branch taken */ \
andis. r10,r10,DBSR_IC@h; \
beq+ 2f; \
\
lis r10,KERNELBASE@h; /* check if exception in vectors */ \
ori r10,r10,KERNELBASE@l; \
cmplw r12,r10; \
blt+ 2f; /* addr below exception vectors */ \
\
lis r10,Debug@h; \
ori r10,r10,Debug@l; \
cmplw r12,r10; \
bgt+ 2f; /* addr above exception vectors */ \
\
/* here it looks like we got an inappropriate debug exception. */ \
1: rlwinm r9,r9,0,~MSR_DE; /* clear DE in the DSRR1 value */ \
lis r10,DBSR_IC@h; /* clear the IC event */ \
mtspr SPRN_DBSR,r10; \
/* restore state and get out */ \
lwz r10,_CCR(r11); \
lwz r0,GPR0(r11); \
lwz r1,GPR1(r11); \
mtcrf 0x80,r10; \
mtspr SPRN_DSRR0,r12; \
mtspr SPRN_DSRR1,r9; \
lwz r9,GPR9(r11); \
lwz r12,GPR12(r11); \
mtspr DEBUG_SPRG,r8; \
BOOKE_LOAD_EXC_LEVEL_STACK(DEBUG); /* r8 points to the debug stack */ \
lwz r10,GPR10-INT_FRAME_SIZE(r8); \
lwz r11,GPR11-INT_FRAME_SIZE(r8); \
mfspr r8,DEBUG_SPRG; \
\
RFDI; \
b .; \
\
/* continue normal handling for a critical exception... */ \
2: mfspr r4,SPRN_DBSR; \
addi r3,r1,STACK_FRAME_OVERHEAD; \
EXC_XFER_TEMPLATE(DebugException, 0x2002, (MSR_KERNEL & ~(MSR_ME|MSR_DE|MSR_CE)), NOCOPY, debug_transfer_to_handler, ret_from_debug_exc)
#else
#define DEBUG_EXCEPTION \
START_EXCEPTION(Debug); \
CRITICAL_EXCEPTION_PROLOG; \
\
/* \
* If there is a single step or branch-taken exception in an \
* exception entry sequence, it was probably meant to apply to \
* the code where the exception occurred (since exception entry \
* doesn't turn off DE automatically). We simulate the effect \
* of turning off DE on entry to an exception handler by turning \
* off DE in the CSRR1 value and clearing the debug status. \
*/ \
mfspr r10,SPRN_DBSR; /* check single-step/branch taken */ \
andis. r10,r10,DBSR_IC@h; \
beq+ 2f; \
\
lis r10,KERNELBASE@h; /* check if exception in vectors */ \
ori r10,r10,KERNELBASE@l; \
cmplw r12,r10; \
blt+ 2f; /* addr below exception vectors */ \
\
lis r10,Debug@h; \
ori r10,r10,Debug@l; \
cmplw r12,r10; \
bgt+ 2f; /* addr above exception vectors */ \
\
/* here it looks like we got an inappropriate debug exception. */ \
1: rlwinm r9,r9,0,~MSR_DE; /* clear DE in the CSRR1 value */ \
lis r10,DBSR_IC@h; /* clear the IC event */ \
mtspr SPRN_DBSR,r10; \
/* restore state and get out */ \
lwz r10,_CCR(r11); \
lwz r0,GPR0(r11); \
lwz r1,GPR1(r11); \
mtcrf 0x80,r10; \
mtspr SPRN_CSRR0,r12; \
mtspr SPRN_CSRR1,r9; \
lwz r9,GPR9(r11); \
lwz r12,GPR12(r11); \
mtspr CRIT_SPRG,r8; \
BOOKE_LOAD_EXC_LEVEL_STACK(CRIT); /* r8 points to the crit stack */ \
lwz r10,GPR10-INT_FRAME_SIZE(r8); \
lwz r11,GPR11-INT_FRAME_SIZE(r8); \
mfspr r8,CRIT_SPRG; \
\
rfci; \
b .; \
\
/* continue normal handling for a critical exception... */ \
2: mfspr r4,SPRN_DBSR; \
addi r3,r1,STACK_FRAME_OVERHEAD; \
EXC_XFER_TEMPLATE(DebugException, 0x2002, (MSR_KERNEL & ~(MSR_ME|MSR_DE|MSR_CE)), NOCOPY, crit_transfer_to_handler, ret_from_crit_exc)
#endif
#define INSTRUCTION_STORAGE_EXCEPTION \
START_EXCEPTION(InstructionStorage) \
NORMAL_EXCEPTION_PROLOG; \
mfspr r5,SPRN_ESR; /* Grab the ESR and save it */ \
stw r5,_ESR(r11); \
mr r4,r12; /* Pass SRR0 as arg2 */ \
li r5,0; /* Pass zero as arg3 */ \
EXC_XFER_EE_LITE(0x0400, handle_page_fault)
#define ALIGNMENT_EXCEPTION \
START_EXCEPTION(Alignment) \
NORMAL_EXCEPTION_PROLOG; \
mfspr r4,SPRN_DEAR; /* Grab the DEAR and save it */ \
stw r4,_DEAR(r11); \
addi r3,r1,STACK_FRAME_OVERHEAD; \
EXC_XFER_EE(0x0600, alignment_exception)
#define PROGRAM_EXCEPTION \
START_EXCEPTION(Program) \
NORMAL_EXCEPTION_PROLOG; \
mfspr r4,SPRN_ESR; /* Grab the ESR and save it */ \
stw r4,_ESR(r11); \
addi r3,r1,STACK_FRAME_OVERHEAD; \
EXC_XFER_STD(0x0700, program_check_exception)
#define DECREMENTER_EXCEPTION \
START_EXCEPTION(Decrementer) \
NORMAL_EXCEPTION_PROLOG; \
lis r0,TSR_DIS@h; /* Setup the DEC interrupt mask */ \
mtspr SPRN_TSR,r0; /* Clear the DEC interrupt */ \
addi r3,r1,STACK_FRAME_OVERHEAD; \
EXC_XFER_LITE(0x0900, timer_interrupt)
#define FP_UNAVAILABLE_EXCEPTION \
START_EXCEPTION(FloatingPointUnavailable) \
NORMAL_EXCEPTION_PROLOG; \
bne load_up_fpu; /* if from user, just load it up */ \
addi r3,r1,STACK_FRAME_OVERHEAD; \
EXC_XFER_EE_LITE(0x800, kernel_fp_unavailable_exception)
#endif /* __HEAD_BOOKE_H__ */

Some files were not shown because too many files have changed in this diff