Merge branch 'master' of /home/davem/src/GIT/linux-2.6/

Conflicts:
	include/linux/mod_devicetable.h
	scripts/mod/file2alias.c
David S. Miller 2010-05-18 23:01:55 -07:00
commit 2ec8c6bb5d
577 changed files with 24623 additions and 14821 deletions

@@ -3,35 +3,79 @@ Using RCU's CPU Stall Detector
The CONFIG_RCU_CPU_STALL_DETECTOR kernel config parameter enables
RCU's CPU stall detector, which detects conditions that unduly delay
RCU grace periods. The stall detector's idea of what constitutes
-"unduly delayed" is controlled by a pair of C preprocessor macros:
+"unduly delayed" is controlled by a set of C preprocessor macros:
RCU_SECONDS_TILL_STALL_CHECK
This macro defines the period of time that RCU will wait from
the beginning of a grace period until it issues an RCU CPU
-stall warning. It is normally ten seconds.
+stall warning. This time period is normally ten seconds.
RCU_SECONDS_TILL_STALL_RECHECK
This macro defines the period of time that RCU will wait after
-issuing a stall warning until it issues another stall warning.
-It is normally set to thirty seconds.
+issuing a stall warning until it issues another stall warning
+for the same stall. This time period is normally set to thirty
+seconds.
RCU_STALL_RAT_DELAY
-The CPU stall detector tries to make the offending CPU rat on itself,
-as this often gives better-quality stack traces. However, if
-the offending CPU does not detect its own stall in the number
-of jiffies specified by RCU_STALL_RAT_DELAY, then other CPUs will
-complain. This is normally set to two jiffies.
+The CPU stall detector tries to make the offending CPU print its
+own warnings, as this often gives better-quality stack traces.
+However, if the offending CPU does not detect its own stall in
+the number of jiffies specified by RCU_STALL_RAT_DELAY, then
+some other CPU will complain. This delay is normally set to
+two jiffies.
-The following problems can result in an RCU CPU stall warning:
+When a CPU detects that it is stalling, it will print a message similar
+to the following:
+INFO: rcu_sched_state detected stall on CPU 5 (t=2500 jiffies)
+This message indicates that CPU 5 detected that it was causing a stall,
+and that the stall was affecting RCU-sched. This message will normally be
+followed by a stack dump of the offending CPU. On TREE_RCU kernel builds,
+RCU and RCU-sched are implemented by the same underlying mechanism,
+while on TREE_PREEMPT_RCU kernel builds, RCU is instead implemented
+by rcu_preempt_state.
+On the other hand, if the offending CPU fails to print out a stall-warning
+message quickly enough, some other CPU will print a message similar to
+the following:
+INFO: rcu_bh_state detected stalls on CPUs/tasks: { 3 5 } (detected by 2, 2502 jiffies)
+This message indicates that CPU 2 detected that CPUs 3 and 5 were both
+causing stalls, and that the stall was affecting RCU-bh. This message
+will normally be followed by stack dumps for each CPU. Please note that
+TREE_PREEMPT_RCU builds can be stalled by tasks as well as by CPUs,
+and that the tasks will be indicated by PID, for example, "P3421".
+It is even possible for a rcu_preempt_state stall to be caused by both
+CPUs -and- tasks, in which case the offending CPUs and tasks will all
+be called out in the list.
+Finally, if the grace period ends just as the stall warning starts
+printing, there will be a spurious stall-warning message:
+INFO: rcu_bh_state detected stalls on CPUs/tasks: { } (detected by 4, 2502 jiffies)
+This is rare, but does happen from time to time in real life.
+So your kernel printed an RCU CPU stall warning. The next question is
+"What caused it?" The following problems can result in RCU CPU stall
+warnings:
o A CPU looping in an RCU read-side critical section.
-o A CPU looping with interrupts disabled.
+o A CPU looping with interrupts disabled. This condition can
+result in RCU-sched and RCU-bh stalls.
-o A CPU looping with preemption disabled.
+o A CPU looping with preemption disabled. This condition can
+result in RCU-sched stalls and, if ksoftirqd is in use, RCU-bh
+stalls.
+o A CPU looping with bottom halves disabled. This condition can
+result in RCU-sched and RCU-bh stalls.
o For !CONFIG_PREEMPT kernels, a CPU looping anywhere in the kernel
without invoking schedule().
@@ -39,20 +83,24 @@ o For !CONFIG_PREEMPT kernels, a CPU looping anywhere in the kernel
o A bug in the RCU implementation.
o A hardware failure. This is quite unlikely, but has occurred
-at least once in a former life. A CPU failed in a running system,
+at least once in real life. A CPU failed in a running system,
becoming unresponsive, but not causing an immediate crash.
This resulted in a series of RCU CPU stall warnings, eventually
leading the realization that the CPU had failed.
-The RCU, RCU-sched, and RCU-bh implementations have CPU stall warning.
-SRCU does not do so directly, but its calls to synchronize_sched() will
-result in RCU-sched detecting any CPU stalls that might be occurring.
+The RCU, RCU-sched, and RCU-bh implementations have CPU stall
+warning. SRCU does not have its own CPU stall warnings, but its
+calls to synchronize_sched() will result in RCU-sched detecting
+RCU-sched-related CPU stalls. Please note that RCU only detects
+CPU stalls when there is a grace period in progress. No grace period,
+no CPU stall warnings.
-To diagnose the cause of the stall, inspect the stack traces. The offending
-function will usually be near the top of the stack. If you have a series
-of stall warnings from a single extended stall, comparing the stack traces
-can often help determine where the stall is occurring, which will usually
-be in the function nearest the top of the stack that stays the same from
-trace to trace.
+To diagnose the cause of the stall, inspect the stack traces.
+The offending function will usually be near the top of the stack.
+If you have a series of stall warnings from a single extended stall,
+comparing the stack traces can often help determine where the stall
+is occurring, which will usually be in the function nearest the top of
+that portion of the stack which remains the same from trace to trace.
+If you can reliably trigger the stall, ftrace can be quite helpful.
RCU bugs can often be debugged with the help of CONFIG_RCU_TRACE.
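
As a concrete illustration of the first item in the list above, here is a minimal C sketch (not part of this commit) of the kind of bug that produces these warnings: an RCU read-side critical section that never completes. The keep_spinning flag and the function name are hypothetical stand-ins for whatever condition buggy code might poll.

#include <linux/rcupdate.h>	/* rcu_read_lock()/rcu_read_unlock() */
#include <linux/sched.h>	/* cpu_relax() via asm/processor.h */

/* Hypothetical bug: polling a flag while inside an RCU read-side
 * critical section.  Because this CPU never leaves the critical
 * section, the grace period cannot end, and the stall detector
 * fires once RCU_SECONDS_TILL_STALL_CHECK elapses.
 */
static void buggy_poll(volatile int *keep_spinning)
{
	rcu_read_lock();
	while (*keep_spinning)
		cpu_relax();
	rcu_read_unlock();
}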

@@ -182,16 +182,6 @@ Similarly, sched_expedited RCU provides the following:
sched_expedited-torture: Reader Pipe: 12660320201 95875 0 0 0 0 0 0 0 0 0
sched_expedited-torture: Reader Batch: 12660424885 0 0 0 0 0 0 0 0 0 0
sched_expedited-torture: Free-Block Circulation: 1090795 1090795 1090794 1090793 1090792 1090791 1090790 1090789 1090788 1090787 0
-state: -1 / 0:0 3:0 4:0
-As before, the first four lines are similar to those for RCU.
-The last line shows the task-migration state. The first number is
--1 if synchronize_sched_expedited() is idle, -2 if in the process of
-posting wakeups to the migration kthreads, and N when waiting on CPU N.
-Each of the colon-separated fields following the "/" is a CPU:state pair.
-Valid states are "0" for idle, "1" for waiting for quiescent state,
-"2" for passed through quiescent state, and "3" when a race with a
-CPU-hotplug event forces use of the synchronize_sched() primitive.
USAGE

@@ -256,23 +256,23 @@ o Each element of the form "1/1 0:127 ^0" represents one struct
The output of "cat rcu/rcu_pending" looks as follows:
rcu_sched:
-0 np=255892 qsp=53936 cbr=0 cng=14417 gpc=10033 gps=24320 nf=6445 nn=146741
-1 np=261224 qsp=54638 cbr=0 cng=25723 gpc=16310 gps=2849 nf=5912 nn=155792
-2 np=237496 qsp=49664 cbr=0 cng=2762 gpc=45478 gps=1762 nf=1201 nn=136629
-3 np=236249 qsp=48766 cbr=0 cng=286 gpc=48049 gps=1218 nf=207 nn=137723
-4 np=221310 qsp=46850 cbr=0 cng=26 gpc=43161 gps=4634 nf=3529 nn=123110
-5 np=237332 qsp=48449 cbr=0 cng=54 gpc=47920 gps=3252 nf=201 nn=137456
-6 np=219995 qsp=46718 cbr=0 cng=50 gpc=42098 gps=6093 nf=4202 nn=120834
-7 np=249893 qsp=49390 cbr=0 cng=72 gpc=38400 gps=17102 nf=41 nn=144888
+0 np=255892 qsp=53936 rpq=85 cbr=0 cng=14417 gpc=10033 gps=24320 nf=6445 nn=146741
+1 np=261224 qsp=54638 rpq=33 cbr=0 cng=25723 gpc=16310 gps=2849 nf=5912 nn=155792
+2 np=237496 qsp=49664 rpq=23 cbr=0 cng=2762 gpc=45478 gps=1762 nf=1201 nn=136629
+3 np=236249 qsp=48766 rpq=98 cbr=0 cng=286 gpc=48049 gps=1218 nf=207 nn=137723
+4 np=221310 qsp=46850 rpq=7 cbr=0 cng=26 gpc=43161 gps=4634 nf=3529 nn=123110
+5 np=237332 qsp=48449 rpq=9 cbr=0 cng=54 gpc=47920 gps=3252 nf=201 nn=137456
+6 np=219995 qsp=46718 rpq=12 cbr=0 cng=50 gpc=42098 gps=6093 nf=4202 nn=120834
+7 np=249893 qsp=49390 rpq=42 cbr=0 cng=72 gpc=38400 gps=17102 nf=41 nn=144888
rcu_bh:
-0 np=146741 qsp=1419 cbr=0 cng=6 gpc=0 gps=0 nf=2 nn=145314
-1 np=155792 qsp=12597 cbr=0 cng=0 gpc=4 gps=8 nf=3 nn=143180
-2 np=136629 qsp=18680 cbr=0 cng=0 gpc=7 gps=6 nf=0 nn=117936
-3 np=137723 qsp=2843 cbr=0 cng=0 gpc=10 gps=7 nf=0 nn=134863
-4 np=123110 qsp=12433 cbr=0 cng=0 gpc=4 gps=2 nf=0 nn=110671
-5 np=137456 qsp=4210 cbr=0 cng=0 gpc=6 gps=5 nf=0 nn=133235
-6 np=120834 qsp=9902 cbr=0 cng=0 gpc=6 gps=3 nf=2 nn=110921
-7 np=144888 qsp=26336 cbr=0 cng=0 gpc=8 gps=2 nf=0 nn=118542
+0 np=146741 qsp=1419 rpq=6 cbr=0 cng=6 gpc=0 gps=0 nf=2 nn=145314
+1 np=155792 qsp=12597 rpq=3 cbr=0 cng=0 gpc=4 gps=8 nf=3 nn=143180
+2 np=136629 qsp=18680 rpq=1 cbr=0 cng=0 gpc=7 gps=6 nf=0 nn=117936
+3 np=137723 qsp=2843 rpq=0 cbr=0 cng=0 gpc=10 gps=7 nf=0 nn=134863
+4 np=123110 qsp=12433 rpq=0 cbr=0 cng=0 gpc=4 gps=2 nf=0 nn=110671
+5 np=137456 qsp=4210 rpq=1 cbr=0 cng=0 gpc=6 gps=5 nf=0 nn=133235
+6 np=120834 qsp=9902 rpq=2 cbr=0 cng=0 gpc=6 gps=3 nf=2 nn=110921
+7 np=144888 qsp=26336 rpq=0 cbr=0 cng=0 gpc=8 gps=2 nf=0 nn=118542
As always, this is once again split into "rcu_sched" and "rcu_bh"
portions, with CONFIG_TREE_PREEMPT_RCU kernels having an additional
@@ -284,6 +284,9 @@ o "np" is the number of times that __rcu_pending() has been invoked
o "qsp" is the number of times that the RCU was waiting for a
quiescent state from this CPU.
+o "rpq" is the number of times that the CPU had passed through
+a quiescent state, but not yet reported it to RCU.
o "cbr" is the number of times that this CPU had RCU callbacks
that had passed through a grace period, and were thus ready
to be invoked.

@@ -161,13 +161,15 @@ o In order to put a system into any of the sleep states after a TXT
has been restored, it will restore the TPM PCRs and then
transfer control back to the kernel's S3 resume vector.
In order to preserve system integrity across S3, the kernel
-provides tboot with a set of memory ranges (kernel
-code/data/bss, S3 resume code, and AP trampoline) that tboot
-will calculate a MAC (message authentication code) over and then
-seal with the TPM. On resume and once the measured environment
-has been re-established, tboot will re-calculate the MAC and
-verify it against the sealed value. Tboot's policy determines
-what happens if the verification fails.
+provides tboot with a set of memory ranges (RAM and RESERVED_KERN
+in the e820 table, but not any memory that BIOS might alter over
+the S3 transition) that tboot will calculate a MAC (message
+authentication code) over and then seal with the TPM. On resume
+and once the measured environment has been re-established, tboot
+will re-calculate the MAC and verify it against the sealed value.
+Tboot's policy determines what happens if the verification fails.
+Note that the c/s 194 of tboot which has the new MAC code supports
+this.
That's pretty much it for TXT support.

@@ -324,6 +324,8 @@ and is between 256 and 4096 characters. It is defined in the file
they are unmapped. Otherwise they are
flushed before they will be reused, which
is a lot of faster
+off - do not initialize any AMD IOMMU found in
+the system
amijoy.map= [HW,JOY] Amiga joystick support
Map of devices attached to JOY0DAT and JOY1DAT
@@ -784,8 +786,12 @@ and is between 256 and 4096 characters. It is defined in the file
as early as possible in order to facilitate early
boot debugging.
-ftrace_dump_on_oops
+ftrace_dump_on_oops[=orig_cpu]
[FTRACE] will dump the trace buffers on oops.
+If no parameter is passed, ftrace will dump
+buffers of all CPUs, but if you pass orig_cpu, it will
+dump only the buffer of the CPU that triggered the
+oops.
ftrace_filter=[function-list]
[FTRACE] Limit the functions traced by the function

@@ -165,8 +165,8 @@ the user entry_handler invocation is also skipped.
1.4 How Does Jump Optimization Work?
-If you configured your kernel with CONFIG_OPTPROBES=y (currently
-this option is supported on x86/x86-64, non-preemptive kernel) and
+If your kernel is built with CONFIG_OPTPROBES=y (currently this flag
+is automatically set 'y' on x86/x86-64, non-preemptive kernel) and
the "debug.kprobes_optimization" kernel parameter is set to 1 (see
sysctl(8)), Kprobes tries to reduce probe-hit overhead by using a jump
instruction instead of a breakpoint instruction at each probepoint.
@@ -271,8 +271,6 @@ tweak the kernel's execution path, you need to suppress optimization,
using one of the following techniques:
- Specify an empty function for the kprobe's post_handler or break_handler.
or
-- Config CONFIG_OPTPROBES=n.
-or
- Execute 'sysctl -w debug.kprobes_optimization=n'
2. Architectures Supported
@@ -307,10 +305,6 @@ it useful to "Compile the kernel with debug info" (CONFIG_DEBUG_INFO),
so you can use "objdump -d -l vmlinux" to see the source-to-object
code mapping.
-If you want to reduce probing overhead, set "Kprobes jump optimization
-support" (CONFIG_OPTPROBES) to "y". You can find this option under the
-"Kprobes" line.
4. API Reference
The Kprobes API includes a "register" function and an "unregister"

@@ -190,3 +190,61 @@ Example:
for (node = rb_first(&mytree); node; node = rb_next(node))
printk("key=%s\n", rb_entry(node, struct mytype, node)->keystring);
+Support for Augmented rbtrees
+-----------------------------
+Augmented rbtree is an rbtree with "some" additional data stored in each node.
+This data can be used to augment some new functionality to rbtree.
+Augmented rbtree is an optional feature built on top of basic rbtree
+infrastructure. rbtree user who wants this feature will have an augment
+callback function in rb_root initialized.
+This callback function will be called from rbtree core routines whenever
+a node has a change in one or both of its children. It is the responsibility
+of the callback function to recalculate the additional data that is in the
+rb node using new children information. Note that if this new additional
+data affects the parent node's additional data, then callback function has
+to handle it and do the recursive updates.
+Interval tree is an example of augmented rb tree. Reference -
+"Introduction to Algorithms" by Cormen, Leiserson, Rivest and Stein.
+More details about interval trees:
+Classical rbtree has a single key and it cannot be directly used to store
+interval ranges like [lo:hi] and do a quick lookup for any overlap with a new
+lo:hi or to find whether there is an exact match for a new lo:hi.
+However, rbtree can be augmented to store such interval ranges in a structured
+way making it possible to do efficient lookup and exact match.
+This "extra information" stored in each node is the maximum hi
+(max_hi) value among all the nodes that are its descendents. This
+information can be maintained at each node just be looking at the node
+and its immediate children. And this will be used in O(log n) lookup
+for lowest match (lowest start address among all possible matches)
+with something like:
+find_lowest_match(lo, hi, node)
+{
+lowest_match = NULL;
+while (node) {
+if (max_hi(node->left) > lo) {
+// Lowest overlap if any must be on left side
+node = node->left;
+} else if (overlap(lo, hi, node)) {
+lowest_match = node;
+break;
+} else if (lo > node->lo) {
+// Lowest overlap if any must be on right side
+node = node->right;
+} else {
+break;
+}
+}
+return lowest_match;
+}
+Finding exact match will be to first find lowest match and then to follow
+successor nodes looking for exact match, until the start of a node is beyond
+the hi value we are looking for.
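
To make the max_hi bookkeeping described above concrete, here is a small C sketch of the per-node update step. The structure layout and the helper name are illustrative assumptions; only rb_node, rb_entry() and the left/right child pointers come from the existing rbtree code.

#include <linux/rbtree.h>

/* Hypothetical interval-tree node: [lo, hi] is the range, max_hi is the
 * augmented data (the largest hi anywhere in this node's subtree). */
struct interval_node {
	struct rb_node rb;
	unsigned long lo, hi;
	unsigned long max_hi;
};

/* Recompute max_hi for one node from the node itself and its children.
 * An augment callback would call this on each node whose children
 * changed, and walk upward while the parent's max_hi is affected. */
static void update_max_hi(struct interval_node *n)
{
	unsigned long m = n->hi;

	if (n->rb.rb_left) {
		struct interval_node *l =
			rb_entry(n->rb.rb_left, struct interval_node, rb);
		if (l->max_hi > m)
			m = l->max_hi;
	}
	if (n->rb.rb_right) {
		struct interval_node *r =
			rb_entry(n->rb.rb_right, struct interval_node, rb);
		if (r->max_hi > m)
			m = r->max_hi;
	}
	n->max_hi = m;
}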

@@ -211,7 +211,7 @@ provide fair CPU time to each such task group. For example, it may be
desirable to first provide fair CPU time to each user on the system and then to
each task belonging to a user.
-CONFIG_GROUP_SCHED strives to achieve exactly that. It lets tasks to be
+CONFIG_CGROUP_SCHED strives to achieve exactly that. It lets tasks to be
grouped and divides CPU time fairly among such groups.
CONFIG_RT_GROUP_SCHED permits to group real-time (i.e., SCHED_FIFO and
@@ -220,38 +220,11 @@ SCHED_RR) tasks.
CONFIG_FAIR_GROUP_SCHED permits to group CFS (i.e., SCHED_NORMAL and
SCHED_BATCH) tasks.
-At present, there are two (mutually exclusive) mechanisms to group tasks for
-CPU bandwidth control purposes:
-- Based on user id (CONFIG_USER_SCHED)
-With this option, tasks are grouped according to their user id.
-- Based on "cgroup" pseudo filesystem (CONFIG_CGROUP_SCHED)
-This options needs CONFIG_CGROUPS to be defined, and lets the administrator
+These options need CONFIG_CGROUPS to be defined, and let the administrator
create arbitrary groups of tasks, using the "cgroup" pseudo filesystem. See
Documentation/cgroups/cgroups.txt for more information about this filesystem.
-Only one of these options to group tasks can be chosen and not both.
-When CONFIG_USER_SCHED is defined, a directory is created in sysfs for each new
-user and a "cpu_share" file is added in that directory.
-# cd /sys/kernel/uids
-# cat 512/cpu_share # Display user 512's CPU share
-1024
-# echo 2048 > 512/cpu_share # Modify user 512's CPU share
-# cat 512/cpu_share # Display user 512's CPU share
-2048
-#
-CPU bandwidth between two users is divided in the ratio of their CPU shares.
-For example: if you would like user "root" to get twice the bandwidth of user
-"guest," then set the cpu_share for both the users such that "root"'s cpu_share
-is twice "guest"'s cpu_share.
-When CONFIG_CGROUP_SCHED is defined, a "cpu.shares" file is created for each
+When CONFIG_FAIR_GROUP_SCHED is defined, a "cpu.shares" file is created for each
group created using the pseudo filesystem. See example steps below to create
task groups and modify their CPU share using the "cgroups" pseudo filesystem.
@@ -273,24 +246,3 @@ task groups and modify their CPU share using the "cgroups" pseudo filesystem.
# #Launch gmplayer (or your favourite movie player)
# echo <movie_player_pid> > multimedia/tasks
-8. Implementation note: user namespaces
-User namespaces are intended to be hierarchical. But they are currently
-only partially implemented. Each of those has ramifications for CFS.
-First, since user namespaces are hierarchical, the /sys/kernel/uids
-presentation is inadequate. Eventually we will likely want to use sysfs
-tagging to provide private views of /sys/kernel/uids within each user
-namespace.
-Second, the hierarchical nature is intended to support completely
-unprivileged use of user namespaces. So if using user groups, then
-we want the users in a user namespace to be children of the user
-who created it.
-That is currently unimplemented. So instead, every user in a new
-user namespace will receive 1024 shares just like any user in the
-initial user namespace. Note that at the moment creation of a new
-user namespace requires each of CAP_SYS_ADMIN, CAP_SETUID, and
-CAP_SETGID.

@@ -126,23 +126,12 @@ priority!
2.3 Basis for grouping tasks
----------------------------
-There are two compile-time settings for allocating CPU bandwidth. These are
-configured using the "Basis for grouping tasks" multiple choice menu under
-General setup > Group CPU Scheduler:
-a. CONFIG_USER_SCHED (aka "Basis for grouping tasks" = "user id")
-This lets you use the virtual files under
-"/sys/kernel/uids/<uid>/cpu_rt_runtime_us" to control he CPU time reserved for
-each user .
-The other option is:
-.o CONFIG_CGROUP_SCHED (aka "Basis for grouping tasks" = "Control groups")
+Enabling CONFIG_RT_GROUP_SCHED lets you explicitly allocate real
+CPU bandwidth to task groups.
This uses the /cgroup virtual file system and
"/cgroup/<cgroup>/cpu.rt_runtime_us" to control the CPU time reserved for each
-control group instead.
+control group.
For more information on working with control groups, you should read
Documentation/cgroups/cgroups.txt as well.
@@ -161,8 +150,7 @@ For now, this can be simplified to just the following (but see Future plans):
===============
There is work in progress to make the scheduling period for each group
-("/sys/kernel/uids/<uid>/cpu_rt_period_us" or
-"/cgroup/<cgroup>/cpu.rt_period_us" respectively) configurable as well.
+("/cgroup/<cgroup>/cpu.rt_period_us") configurable as well.
The constraint on the period is that a subgroup must have a smaller or
equal period to its parent. But realistically its not very useful _yet_

@@ -90,7 +90,8 @@ In order to facilitate early boot debugging, use boot option:
trace_event=[event-list]
-The format of this boot option is the same as described in section 2.1.
+event-list is a comma separated list of events. See section 2.1 for event
+format.
3. Defining an event-enabled tracepoint
=======================================

@@ -155,6 +155,9 @@ of ftrace. Here is a list of some of the key files:
to be traced. Echoing names of functions into this file
will limit the trace to only those functions.
+This interface also allows for commands to be used. See the
+"Filter commands" section for more details.
set_ftrace_notrace:
This has an effect opposite to that of
@@ -1337,12 +1340,14 @@ ftrace_dump_on_oops must be set. To set ftrace_dump_on_oops, one
can either use the sysctl function or set it via the proc system
interface.
-sysctl kernel.ftrace_dump_on_oops=1
+sysctl kernel.ftrace_dump_on_oops=n
or
-echo 1 > /proc/sys/kernel/ftrace_dump_on_oops
+echo n > /proc/sys/kernel/ftrace_dump_on_oops
+If n = 1, ftrace will dump buffers of all CPUs, if n = 2 ftrace will
+only dump the buffer of the CPU that triggered the oops.
Here's an example of such a dump after a null pointer
dereference in a kernel module:
@@ -1822,6 +1827,47 @@ this special filter via:
echo > set_graph_function
+Filter commands
+---------------
+A few commands are supported by the set_ftrace_filter interface.
+Trace commands have the following format:
+<function>:<command>:<parameter>
+The following commands are supported:
+- mod
+This command enables function filtering per module. The
+parameter defines the module. For example, if only the write*
+functions in the ext3 module are desired, run:
+echo 'write*:mod:ext3' > set_ftrace_filter
+This command interacts with the filter in the same way as
+filtering based on function names. Thus, adding more functions
+in a different module is accomplished by appending (>>) to the
+filter file. Remove specific module functions by prepending
+'!':
+echo '!writeback*:mod:ext3' >> set_ftrace_filter
+- traceon/traceoff
+These commands turn tracing on and off when the specified
+functions are hit. The parameter determines how many times the
+tracing system is turned on and off. If unspecified, there is
+no limit. For example, to disable tracing when a schedule bug
+is hit the first 5 times, run:
+echo '__schedule_bug:traceoff:5' > set_ftrace_filter
+These commands are cumulative whether or not they are appended
+to set_ftrace_filter. To remove a command, prepend it by '!'
+and drop the parameter:
+echo '!__schedule_bug:traceoff' > set_ftrace_filter
trace_pipe
----------

@@ -40,7 +40,9 @@ Synopsis of kprobe_events
$stack : Fetch stack address.
$retval : Fetch return value.(*)
+|-offs(FETCHARG) : Fetch memory at FETCHARG +|- offs address.(**)
-NAME=FETCHARG: Set NAME as the argument name of FETCHARG.
+NAME=FETCHARG : Set NAME as the argument name of FETCHARG.
+FETCHARG:TYPE : Set TYPE as the type of FETCHARG. Currently, basic types
+(u8/u16/u32/u64/s8/s16/s32/s64) are supported.
(*) only for return probe.
(**) this is useful for fetching a field of data structures.

@@ -2954,6 +2954,17 @@ S: Odd Fixes
F: Documentation/networking/README.ipw2200
F: drivers/net/wireless/ipw2x00/ipw2200.*
+INTEL(R) TRUSTED EXECUTION TECHNOLOGY (TXT)
+M: Joseph Cihula <joseph.cihula@intel.com>
+M: Shane Wang <shane.wang@intel.com>
+L: tboot-devel@lists.sourceforge.net
+W: http://tboot.sourceforge.net
+T: Mercurial http://www.bughost.org/repos.hg/tboot.hg
+S: Supported
+F: Documentation/intel_txt.txt
+F: include/linux/tboot.h
+F: arch/x86/kernel/tboot.c
INTEL WIRELESS WIMAX CONNECTION 2400
M: Inaky Perez-Gonzalez <inaky.perez-gonzalez@intel.com>
M: linux-wimax@intel.com
@@ -4165,6 +4176,7 @@ OPROFILE
M: Robert Richter <robert.richter@amd.com>
L: oprofile-list@lists.sf.net
S: Maintained
+F: arch/*/include/asm/oprofile*.h
F: arch/*/oprofile/
F: drivers/oprofile/
F: include/linux/oprofile.h
@@ -4353,13 +4365,13 @@ M: Paul Mackerras <paulus@samba.org>
M: Ingo Molnar <mingo@elte.hu>
M: Arnaldo Carvalho de Melo <acme@redhat.com>
S: Supported
-F: kernel/perf_event.c
+F: kernel/perf_event*.c
F: include/linux/perf_event.h
-F: arch/*/kernel/perf_event.c
-F: arch/*/kernel/*/perf_event.c
-F: arch/*/kernel/*/*/perf_event.c
+F: arch/*/kernel/perf_event*.c
+F: arch/*/kernel/*/perf_event*.c
+F: arch/*/kernel/*/*/perf_event*.c
F: arch/*/include/asm/perf_event.h
-F: arch/*/lib/perf_event.c
+F: arch/*/lib/perf_event*.c
F: arch/*/kernel/perf_callchain.c
F: tools/perf/
@@ -5493,7 +5505,7 @@ S: Maintained
F: drivers/mmc/host/tmio_mmc.*
TMPFS (SHMEM FILESYSTEM)
-M: Hugh Dickins <hugh.dickins@tiscali.co.uk>
+M: Hugh Dickins <hughd@google.com>
L: linux-mm@kvack.org
S: Maintained
F: include/linux/shmem_fs.h

@@ -1,7 +1,7 @@
VERSION = 2
PATCHLEVEL = 6
SUBLEVEL = 34
-EXTRAVERSION = -rc7
+EXTRAVERSION =
NAME = Sheep on Meth
# *DOCUMENTATION*

@@ -42,15 +42,10 @@ config KPROBES
If in doubt, say "N".
config OPTPROBES
-bool "Kprobes jump optimization support (EXPERIMENTAL)"
-default y
-depends on KPROBES
+def_bool y
+depends on KPROBES && HAVE_OPTPROBES
depends on !PREEMPT
-depends on HAVE_OPTPROBES
select KALLSYMS_ALL
-help
-This option will allow kprobes to optimize breakpoint to
-a jump for reducing its overhead.
config HAVE_EFFICIENT_UNALIGNED_ACCESS
bool
@@ -142,6 +137,17 @@ config HAVE_HW_BREAKPOINT
bool
depends on PERF_EVENTS
+config HAVE_MIXED_BREAKPOINTS_REGS
+bool
+depends on HAVE_HW_BREAKPOINT
+help
+Depending on the arch implementation of hardware breakpoints,
+some of them have separate registers for data and instruction
+breakpoints addresses, others have mixed registers to store
+them but define the access type in a control register.
+Select this option if your arch implements breakpoints under the
+latter fashion.
config HAVE_USER_RETURN_NOTIFIER
bool

@@ -17,8 +17,8 @@
#define ATOMIC_INIT(i) ( (atomic_t) { (i) } )
#define ATOMIC64_INIT(i) ( (atomic64_t) { (i) } )
-#define atomic_read(v) ((v)->counter + 0)
-#define atomic64_read(v) ((v)->counter + 0)
+#define atomic_read(v) (*(volatile int *)&(v)->counter)
+#define atomic64_read(v) (*(volatile long *)&(v)->counter)
#define atomic_set(v,i) ((v)->counter = (i))
#define atomic64_set(v,i) ((v)->counter = (i))
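
The atomic_read() change above, and the matching changes repeated for several other architectures below, all follow the same pattern: reading the counter through a volatile-qualified pointer forces the compiler to perform a real load on every call instead of caching the value in a register across a polling loop. A minimal user-space sketch of the difference follows; the type and macro names here are illustrative, not the kernel's.

#include <stdio.h>

typedef struct { int counter; } atomic_t;

/* Old form: an ordinary load that the optimizer may hoist out of a loop,
 * so a polling loop might never observe an update made by another CPU. */
#define atomic_read_plain(v)	((v)->counter)

/* New form: the volatile cast forces a fresh load from memory each time. */
#define atomic_read_volatile(v)	(*(volatile int *)&(v)->counter)

int main(void)
{
	atomic_t v = { 42 };

	printf("%d %d\n", atomic_read_plain(&v), atomic_read_volatile(&v));
	return 0;
}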

@@ -405,29 +405,31 @@ static inline int fls(int x)
#if defined(CONFIG_ALPHA_EV6) && defined(CONFIG_ALPHA_EV67)
/* Whee. EV67 can calculate it directly. */
-static inline unsigned long hweight64(unsigned long w)
+static inline unsigned long __arch_hweight64(unsigned long w)
{
return __kernel_ctpop(w);
}
-static inline unsigned int hweight32(unsigned int w)
+static inline unsigned int __arch_weight32(unsigned int w)
{
-return hweight64(w);
+return __arch_hweight64(w);
}
-static inline unsigned int hweight16(unsigned int w)
+static inline unsigned int __arch_hweight16(unsigned int w)
{
-return hweight64(w & 0xffff);
+return __arch_hweight64(w & 0xffff);
}
-static inline unsigned int hweight8(unsigned int w)
+static inline unsigned int __arch_hweight8(unsigned int w)
{
-return hweight64(w & 0xff);
+return __arch_hweight64(w & 0xff);
}
#else
-#include <asm-generic/bitops/hweight.h>
+#include <asm-generic/bitops/arch_hweight.h>
#endif
+#include <asm-generic/bitops/const_hweight.h>
#endif /* __KERNEL__ */
#include <asm-generic/bitops/find.h>

@@ -24,7 +24,7 @@
* strex/ldrex monitor on some implementations. The reason we can use it for
* atomic_set() is the clrex or dummy strex done on every exception return.
*/
-#define atomic_read(v) ((v)->counter)
+#define atomic_read(v) (*(volatile int *)&(v)->counter)
#define atomic_set(v,i) (((v)->counter) = (i))
#if __LINUX_ARM_ARCH__ >= 6

@@ -371,6 +371,10 @@ static inline void __flush_icache_all(void)
#ifdef CONFIG_ARM_ERRATA_411920
extern void v6_icache_inval_all(void);
v6_icache_inval_all();
+#elif defined(CONFIG_SMP) && __LINUX_ARM_ARCH__ >= 7
+asm("mcr p15, 0, %0, c7, c1, 0 @ invalidate I-cache inner shareable\n"
+:
+: "r" (0));
#else
asm("mcr p15, 0, %0, c7, c5, 0 @ invalidate I-cache\n"
:

@@ -1,6 +1,23 @@
#ifndef __ASMARM_SMP_TWD_H
#define __ASMARM_SMP_TWD_H
+#define TWD_TIMER_LOAD 0x00
+#define TWD_TIMER_COUNTER 0x04
+#define TWD_TIMER_CONTROL 0x08
+#define TWD_TIMER_INTSTAT 0x0C
+#define TWD_WDOG_LOAD 0x20
+#define TWD_WDOG_COUNTER 0x24
+#define TWD_WDOG_CONTROL 0x28
+#define TWD_WDOG_INTSTAT 0x2C
+#define TWD_WDOG_RESETSTAT 0x30
+#define TWD_WDOG_DISABLE 0x34
+#define TWD_TIMER_CONTROL_ENABLE (1 << 0)
+#define TWD_TIMER_CONTROL_ONESHOT (0 << 1)
+#define TWD_TIMER_CONTROL_PERIODIC (1 << 1)
+#define TWD_TIMER_CONTROL_IT_ENABLE (1 << 2)
struct clock_event_device;
extern void __iomem *twd_base;

@@ -46,6 +46,9 @@
#define TLB_V7_UIS_FULL (1 << 20)
#define TLB_V7_UIS_ASID (1 << 21)
+/* Inner Shareable BTB operation (ARMv7 MP extensions) */
+#define TLB_V7_IS_BTB (1 << 22)
#define TLB_L2CLEAN_FR (1 << 29) /* Feroceon */
#define TLB_DCLEAN (1 << 30)
#define TLB_WB (1 << 31)
@@ -183,7 +186,7 @@
#endif
#ifdef CONFIG_SMP
-#define v7wbi_tlb_flags (TLB_WB | TLB_DCLEAN | TLB_BTB | \
+#define v7wbi_tlb_flags (TLB_WB | TLB_DCLEAN | TLB_V7_IS_BTB | \
TLB_V7_UIS_FULL | TLB_V7_UIS_PAGE | TLB_V7_UIS_ASID)
#else
#define v7wbi_tlb_flags (TLB_WB | TLB_DCLEAN | TLB_BTB | \
@@ -339,6 +342,12 @@ static inline void local_flush_tlb_all(void)
dsb();
isb();
}
+if (tlb_flag(TLB_V7_IS_BTB)) {
+/* flush the branch target cache */
+asm("mcr p15, 0, %0, c7, c1, 6" : : "r" (zero) : "cc");
+dsb();
+isb();
+}
}
static inline void local_flush_tlb_mm(struct mm_struct *mm)
@@ -376,6 +385,12 @@ static inline void local_flush_tlb_mm(struct mm_struct *mm)
asm("mcr p15, 0, %0, c7, c5, 6" : : "r" (zero) : "cc");
dsb();
}
+if (tlb_flag(TLB_V7_IS_BTB)) {
+/* flush the branch target cache */
+asm("mcr p15, 0, %0, c7, c1, 6" : : "r" (zero) : "cc");
+dsb();
+isb();
+}
}
static inline void
@@ -416,6 +431,12 @@ local_flush_tlb_page(struct vm_area_struct *vma, unsigned long uaddr)
asm("mcr p15, 0, %0, c7, c5, 6" : : "r" (zero) : "cc");
dsb();
}
+if (tlb_flag(TLB_V7_IS_BTB)) {
+/* flush the branch target cache */
+asm("mcr p15, 0, %0, c7, c1, 6" : : "r" (zero) : "cc");
+dsb();
+isb();
+}
}
static inline void local_flush_tlb_kernel_page(unsigned long kaddr)
@@ -454,6 +475,12 @@ static inline void local_flush_tlb_kernel_page(unsigned long kaddr)
dsb();
isb();
}
+if (tlb_flag(TLB_V7_IS_BTB)) {
+/* flush the branch target cache */
+asm("mcr p15, 0, %0, c7, c1, 6" : : "r" (zero) : "cc");
+dsb();
+isb();
+}
}
/*

@@ -21,23 +21,6 @@
#include <asm/smp_twd.h>
#include <asm/hardware/gic.h>
-#define TWD_TIMER_LOAD 0x00
-#define TWD_TIMER_COUNTER 0x04
-#define TWD_TIMER_CONTROL 0x08
-#define TWD_TIMER_INTSTAT 0x0C
-#define TWD_WDOG_LOAD 0x20
-#define TWD_WDOG_COUNTER 0x24
-#define TWD_WDOG_CONTROL 0x28
-#define TWD_WDOG_INTSTAT 0x2C
-#define TWD_WDOG_RESETSTAT 0x30
-#define TWD_WDOG_DISABLE 0x34
-#define TWD_TIMER_CONTROL_ENABLE (1 << 0)
-#define TWD_TIMER_CONTROL_ONESHOT (0 << 1)
-#define TWD_TIMER_CONTROL_PERIODIC (1 << 1)
-#define TWD_TIMER_CONTROL_IT_ENABLE (1 << 2)
/* set up by the platform code */
void __iomem *twd_base;

@@ -45,6 +45,7 @@ USER( strnebt r2, [r0])
mov r0, #0
ldmfd sp!, {r1, pc}
ENDPROC(__clear_user)
+ENDPROC(__clear_user_std)
.pushsection .fixup,"ax"
.align 0

@@ -93,6 +93,7 @@ WEAK(__copy_to_user)
#include "copy_template.S"
ENDPROC(__copy_to_user)
+ENDPROC(__copy_to_user_std)
.pushsection .fixup,"ax"
.align 0

@@ -410,7 +410,7 @@ static struct clk_lookup da830_clks[] = {
CLK("davinci-mcasp.0", NULL, &mcasp0_clk),
CLK("davinci-mcasp.1", NULL, &mcasp1_clk),
CLK("davinci-mcasp.2", NULL, &mcasp2_clk),
-CLK("musb_hdrc", NULL, &usb20_clk),
+CLK(NULL, "usb20", &usb20_clk),
CLK(NULL, "aemif", &aemif_clk),
CLK(NULL, "aintc", &aintc_clk),
CLK(NULL, "secu_mgr", &secu_mgr_clk),

@@ -211,6 +211,9 @@ v6_dma_inv_range:
mcrne p15, 0, r1, c7, c15, 1 @ clean & invalidate unified line
#endif
1:
+#ifdef CONFIG_SMP
+str r0, [r0] @ write for ownership
+#endif
#ifdef HARVARD_CACHE
mcr p15, 0, r0, c7, c6, 1 @ invalidate D line
#else
@@ -231,6 +234,9 @@ v6_dma_inv_range:
v6_dma_clean_range:
bic r0, r0, #D_CACHE_LINE_SIZE - 1
1:
+#ifdef CONFIG_SMP
+ldr r2, [r0] @ read for ownership
+#endif
#ifdef HARVARD_CACHE
mcr p15, 0, r0, c7, c10, 1 @ clean D line
#else
@@ -251,6 +257,10 @@ v6_dma_clean_range:
ENTRY(v6_dma_flush_range)
bic r0, r0, #D_CACHE_LINE_SIZE - 1
1:
+#ifdef CONFIG_SMP
+ldr r2, [r0] @ read for ownership
+str r2, [r0] @ write for ownership
+#endif
#ifdef HARVARD_CACHE
mcr p15, 0, r0, c7, c14, 1 @ clean & invalidate D line
#else
@@ -273,7 +283,9 @@ ENTRY(v6_dma_map_area)
add r1, r1, r0
teq r2, #DMA_FROM_DEVICE
beq v6_dma_inv_range
-b v6_dma_clean_range
+teq r2, #DMA_TO_DEVICE
+beq v6_dma_clean_range
+b v6_dma_flush_range
ENDPROC(v6_dma_map_area)
/*
@@ -283,9 +295,6 @@ ENDPROC(v6_dma_map_area)
* - dir - DMA direction
*/
ENTRY(v6_dma_unmap_area)
-add r1, r1, r0
-teq r2, #DMA_TO_DEVICE
-bne v6_dma_inv_range
mov pc, lr
ENDPROC(v6_dma_unmap_area)

@@ -167,7 +167,11 @@ ENTRY(v7_coherent_user_range)
cmp r0, r1
blo 1b
mov r0, #0
+#ifdef CONFIG_SMP
+mcr p15, 0, r0, c7, c1, 6 @ invalidate BTB Inner Shareable
+#else
mcr p15, 0, r0, c7, c5, 6 @ invalidate BTB
+#endif
dsb
isb
mov pc, lr

@@ -65,6 +65,15 @@ void flush_dcache_page(struct page *page)
}
EXPORT_SYMBOL(flush_dcache_page);
+void copy_to_user_page(struct vm_area_struct *vma, struct page *page,
+unsigned long uaddr, void *dst, const void *src,
+unsigned long len)
+{
+memcpy(dst, src, len);
+if (vma->vm_flags & VM_EXEC)
+__cpuc_coherent_user_range(uaddr, uaddr + len);
+}
void __iomem *__arm_ioremap_pfn(unsigned long pfn, unsigned long offset,
size_t size, unsigned int mtype)
{
@@ -87,8 +96,8 @@ void __iomem *__arm_ioremap(unsigned long phys_addr, size_t size,
}
EXPORT_SYMBOL(__arm_ioremap);
-void __iomem *__arm_ioremap(unsigned long phys_addr, size_t size,
+void __iomem *__arm_ioremap_caller(unsigned long phys_addr, size_t size,
unsigned int mtype, void *caller)
{
return __arm_ioremap(phys_addr, size, mtype);
}

@@ -50,7 +50,11 @@ ENTRY(v7wbi_flush_user_tlb_range)
cmp r0, r1
blo 1b
mov ip, #0
+#ifdef CONFIG_SMP
+mcr p15, 0, ip, c7, c1, 6 @ flush BTAC/BTB Inner Shareable
+#else
mcr p15, 0, ip, c7, c5, 6 @ flush BTAC/BTB
+#endif
dsb
mov pc, lr
ENDPROC(v7wbi_flush_user_tlb_range)
@@ -79,7 +83,11 @@ ENTRY(v7wbi_flush_kern_tlb_range)
cmp r0, r1
blo 1b
mov r2, #0
+#ifdef CONFIG_SMP
+mcr p15, 0, r2, c7, c1, 6 @ flush BTAC/BTB Inner Shareable
+#else
mcr p15, 0, r2, c7, c5, 6 @ flush BTAC/BTB
+#endif
dsb
isb
mov pc, lr

@@ -19,7 +19,7 @@
#define ATOMIC_INIT(i) { (i) }
-#define atomic_read(v) ((v)->counter)
+#define atomic_read(v) (*(volatile int *)&(v)->counter)
#define atomic_set(v, i) (((v)->counter) = i)
/*

@@ -15,7 +15,7 @@
#define ATOMIC_INIT(i) { (i) }
-#define atomic_read(v) ((v)->counter)
+#define atomic_read(v) (*(volatile int *)&(v)->counter)
#define atomic_set(v,i) (((v)->counter) = (i))
/* These should be written in asm but we do it in C for now. */

@@ -36,7 +36,7 @@
#define smp_mb__after_atomic_inc() barrier()
#define ATOMIC_INIT(i) { (i) }
-#define atomic_read(v) ((v)->counter)
+#define atomic_read(v) (*(volatile int *)&(v)->counter)
#define atomic_set(v, i) (((v)->counter) = (i))
#ifndef CONFIG_FRV_OUTOFLINE_ATOMIC_OPS

@@ -10,7 +10,7 @@
#define ATOMIC_INIT(i) { (i) }
-#define atomic_read(v) ((v)->counter)
+#define atomic_read(v) (*(volatile int *)&(v)->counter)
#define atomic_set(v, i) (((v)->counter) = i)
#include <asm/system.h>

@@ -21,8 +21,8 @@
#define ATOMIC_INIT(i) ((atomic_t) { (i) })
#define ATOMIC64_INIT(i) ((atomic64_t) { (i) })
-#define atomic_read(v) ((v)->counter)
-#define atomic64_read(v) ((v)->counter)
+#define atomic_read(v) (*(volatile int *)&(v)->counter)
+#define atomic64_read(v) (*(volatile long *)&(v)->counter)
#define atomic_set(v,i) (((v)->counter) = (i))
#define atomic64_set(v,i) (((v)->counter) = (i))

@@ -437,17 +437,18 @@ __fls (unsigned long x)
* hweightN: returns the hamming weight (i.e. the number
* of bits set) of a N-bit word
*/
-static __inline__ unsigned long
-hweight64 (unsigned long x)
+static __inline__ unsigned long __arch_hweight64(unsigned long x)
{
unsigned long result;
result = ia64_popcnt(x);
return result;
}
-#define hweight32(x) (unsigned int) hweight64((x) & 0xfffffffful)
-#define hweight16(x) (unsigned int) hweight64((x) & 0xfffful)
-#define hweight8(x) (unsigned int) hweight64((x) & 0xfful)
+#define __arch_hweight32(x) ((unsigned int) __arch_hweight64((x) & 0xfffffffful))
+#define __arch_hweight16(x) ((unsigned int) __arch_hweight64((x) & 0xfffful))
+#define __arch_hweight8(x) ((unsigned int) __arch_hweight64((x) & 0xfful))
+#include <asm-generic/bitops/const_hweight.h>
#endif /* __KERNEL__ */
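
For reference, the hamming weight that these __arch_hweight*() helpers compute (via the ia64_popcnt() intrinsic above) is simply the count of set bits; a portable C fallback, shown here only as an illustration and not part of this commit, is:

#include <stdio.h>

static unsigned int hweight32_ref(unsigned int w)
{
	unsigned int count = 0;

	while (w) {
		w &= w - 1;	/* clear the lowest set bit */
		count++;
	}
	return count;
}

int main(void)
{
	printf("%u\n", hweight32_ref(0xf0f0));	/* prints 8 */
	return 0;
}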

@@ -785,6 +785,14 @@ int acpi_gsi_to_irq(u32 gsi, unsigned int *irq)
return 0;
}
+int acpi_isa_irq_to_gsi(unsigned isa_irq, u32 *gsi)
+{
+if (isa_irq >= 16)
+return -1;
+*gsi = isa_irq;
+return 0;
+}
/*
* ACPI based hotplug CPU support
*/

@@ -26,7 +26,7 @@
*
* Atomically reads the value of @v.
*/
-#define atomic_read(v) ((v)->counter)
+#define atomic_read(v) (*(volatile int *)&(v)->counter)
/**
* atomic_set - set atomic variable

@@ -2,6 +2,6 @@
# Makefile for Linux arch/m68k/amiga source directory
#
-obj-y := config.o amiints.o cia.o chipram.o amisound.o
+obj-y := config.o amiints.o cia.o chipram.o amisound.o platform.o
obj-$(CONFIG_AMIGA_PCMCIA) += pcmcia.o

@@ -0,0 +1,83 @@
+/*
+* Copyright (C) 2007-2009 Geert Uytterhoeven
+*
+* This file is subject to the terms and conditions of the GNU General Public
+* License. See the file COPYING in the main directory of this archive
+* for more details.
+*/
+#include <linux/init.h>
+#include <linux/platform_device.h>
+#include <linux/zorro.h>
+#include <asm/amigahw.h>
+#ifdef CONFIG_ZORRO
+static const struct resource zorro_resources[] __initconst = {
+/* Zorro II regions (on Zorro II/III) */
+{
+.name = "Zorro II exp",
+.start = 0x00e80000,
+.end = 0x00efffff,
+.flags = IORESOURCE_MEM,
+}, {
+.name = "Zorro II mem",
+.start = 0x00200000,
+.end = 0x009fffff,
+.flags = IORESOURCE_MEM,
+},
+/* Zorro III regions (on Zorro III only) */
+{
+.name = "Zorro III exp",
+.start = 0xff000000,
+.end = 0xffffffff,
+.flags = IORESOURCE_MEM,
+}, {
+.name = "Zorro III cfg",
+.start = 0x40000000,
+.end = 0x7fffffff,
+.flags = IORESOURCE_MEM,
+}
+};
+static int __init amiga_init_bus(void)
+{
+if (!MACH_IS_AMIGA || !AMIGAHW_PRESENT(ZORRO))
+return -ENODEV;
+platform_device_register_simple("amiga-zorro", -1, zorro_resources,
+AMIGAHW_PRESENT(ZORRO3) ? 4 : 2);
+return 0;
+}
+subsys_initcall(amiga_init_bus);
+#endif /* CONFIG_ZORRO */
+static int __init amiga_init_devices(void)
+{
+if (!MACH_IS_AMIGA)
+return -ENODEV;
+/* video hardware */
+if (AMIGAHW_PRESENT(AMI_VIDEO))
+platform_device_register_simple("amiga-video", -1, NULL, 0);
+/* sound hardware */
+if (AMIGAHW_PRESENT(AMI_AUDIO))
+platform_device_register_simple("amiga-audio", -1, NULL, 0);
+/* storage interfaces */
+if (AMIGAHW_PRESENT(AMI_FLOPPY))
+platform_device_register_simple("amiga-floppy", -1, NULL, 0);
+return 0;
+}
+device_initcall(amiga_init_devices);

@@ -9,7 +9,6 @@
#include <linux/types.h>
#include <linux/errno.h>
#include <linux/miscdevice.h>
-#include <linux/smp_lock.h>
#include <linux/ioport.h>
#include <linux/capability.h>
#include <linux/fcntl.h>
@@ -35,10 +34,9 @@
static unsigned char days_in_mo[] =
{0, 31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31};
-static char rtc_status;
+static atomic_t rtc_status = ATOMIC_INIT(1);
-static int rtc_ioctl(struct inode *inode, struct file *file, unsigned int cmd,
-unsigned long arg)
+static long rtc_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
{
volatile RtcPtr_t rtc = (RtcPtr_t)BVME_RTC_BASE;
unsigned char msr;
@@ -132,29 +130,20 @@ static int rtc_ioctl(struct inode *inode, struct file *file, unsigned int cmd,
}
/*
* We enforce only one user at a time here with the open/close.
-* Also clear the previous interrupt data on an open, and clean
-* up things on a close.
*/
static int rtc_open(struct inode *inode, struct file *file)
{
-lock_kernel();
-if(rtc_status) {
-unlock_kernel();
+if (!atomic_dec_and_test(&rtc_status)) {
+atomic_inc(&rtc_status);
return -EBUSY;
}
-rtc_status = 1;
-unlock_kernel();
return 0;
}
static int rtc_release(struct inode *inode, struct file *file)
{
-lock_kernel();
-rtc_status = 0;
-unlock_kernel();
+atomic_inc(&rtc_status);
return 0;
}
@@ -163,9 +152,9 @@ static int rtc_release(struct inode *inode, struct file *file)
*/
static const struct file_operations rtc_fops = {
-.ioctl = rtc_ioctl,
+.unlocked_ioctl = rtc_ioctl,
.open = rtc_open,
.release = rtc_release,
};
static struct miscdevice rtc_dev = {

@@ -1,4 +1,2 @@
extern void hp300_sched_init(irq_handler_t vector);
-extern unsigned long hp300_gettimeoffset (void);
+extern unsigned long hp300_gettimeoffset(void);

@@ -15,7 +15,7 @@
#define ATOMIC_INIT(i) { (i) }
-#define atomic_read(v) ((v)->counter)
+#define atomic_read(v) (*(volatile int *)&(v)->counter)
#define atomic_set(v, i) (((v)->counter) = i)
static inline void atomic_add(int i, atomic_t *v)

@@ -15,7 +15,7 @@
#define ATOMIC_INIT(i) { (i) }
-#define atomic_read(v) ((v)->counter)
+#define atomic_read(v) (*(volatile int *)&(v)->counter)
#define atomic_set(v, i) (((v)->counter) = i)
static __inline__ void atomic_add(int i, atomic_t *v)

@@ -365,6 +365,10 @@ static inline int minix_test_bit(int nr, const void *vaddr)
#define ext2_set_bit_atomic(lock, nr, addr) test_and_set_bit((nr) ^ 24, (unsigned long *)(addr))
#define ext2_clear_bit(nr, addr) __test_and_clear_bit((nr) ^ 24, (unsigned long *)(addr))
#define ext2_clear_bit_atomic(lock, nr, addr) test_and_clear_bit((nr) ^ 24, (unsigned long *)(addr))
+#define ext2_find_next_zero_bit(addr, size, offset) \
+generic_find_next_zero_le_bit((unsigned long *)addr, size, offset)
+#define ext2_find_next_bit(addr, size, offset) \
+generic_find_next_le_bit((unsigned long *)addr, size, offset)
static inline int ext2_test_bit(int nr, const void *vaddr)
{
@@ -394,10 +398,9 @@ static inline int ext2_find_first_zero_bit(const void *vaddr, unsigned size)
return (p - addr) * 32 + res;
}
-static inline int ext2_find_next_zero_bit(const void *vaddr, unsigned size,
-unsigned offset)
+static inline unsigned long generic_find_next_zero_le_bit(const unsigned long *addr,
+unsigned long size, unsigned long offset)
{
-const unsigned long *addr = vaddr;
const unsigned long *p = addr + (offset >> 5);
int bit = offset & 31UL, res;
@@ -437,10 +440,9 @@ static inline int ext2_find_first_bit(const void *vaddr, unsigned size)
return (p - addr) * 32 + res;
}
-static inline int ext2_find_next_bit(const void *vaddr, unsigned size,
-unsigned offset)
+static inline unsigned long generic_find_next_le_bit(const unsigned long *addr,
+unsigned long size, unsigned long offset)
{
-const unsigned long *addr = vaddr;
const unsigned long *p = addr + (offset >> 5);
int bit = offset & 31UL, res;

@@ -1,26 +1,12 @@
#ifndef _M68K_PARAM_H
#define _M68K_PARAM_H
-#ifdef __KERNEL__
-# define HZ CONFIG_HZ /* Internal kernel timer frequency */
-# define USER_HZ 100 /* .. some user interfaces are in "ticks" */
-# define CLOCKS_PER_SEC (USER_HZ) /* like times() */
-#endif
-#ifndef HZ
-#define HZ 100
-#endif
#ifdef __uClinux__
#define EXEC_PAGESIZE 4096
#else
#define EXEC_PAGESIZE 8192
#endif
-#ifndef NOGROUP
-#define NOGROUP (-1)
-#endif
-#define MAXHOSTNAMELEN 64 /* max length of hostname */
+#include <asm-generic/param.h>
#endif /* _M68K_PARAM_H */

@@ -455,7 +455,7 @@ static inline void access_error040(struct frame *fp)
if (do_page_fault(&fp->ptregs, addr, errorcode)) {
#ifdef DEBUG
-printk("do_page_fault() !=0 \n");
+printk("do_page_fault() !=0\n");
#endif
if (user_mode(&fp->ptregs)){
/* delay writebacks after signal delivery */

@@ -148,7 +148,7 @@ static void mac_cache_card_flush(int writeback)
void __init config_mac(void)
{
if (!MACH_IS_MAC)
-printk(KERN_ERR "ERROR: no Mac, but config_mac() called!! \n");
+printk(KERN_ERR "ERROR: no Mac, but config_mac() called!!\n");
mach_sched_init = mac_sched_init;
mach_init_IRQ = mac_init_IRQ;
@@ -867,7 +867,7 @@ static void __init mac_identify(void)
*/
iop_preinit();
-printk(KERN_INFO "Detected Macintosh model: %d \n", model);
+printk(KERN_INFO "Detected Macintosh model: %d\n", model);
/*
* Report booter data:
@@ -878,12 +878,12 @@ static void __init mac_identify(void)
mac_bi_data.videoaddr, mac_bi_data.videorow,
mac_bi_data.videodepth, mac_bi_data.dimensions & 0xFFFF,
mac_bi_data.dimensions >> 16);
-printk(KERN_DEBUG " Videological 0x%lx phys. 0x%lx, SCC at 0x%lx \n",
+printk(KERN_DEBUG " Videological 0x%lx phys. 0x%lx, SCC at 0x%lx\n",
mac_bi_data.videological, mac_orig_videoaddr,
mac_bi_data.sccbase);
-printk(KERN_DEBUG " Boottime: 0x%lx GMTBias: 0x%lx \n",
+printk(KERN_DEBUG " Boottime: 0x%lx GMTBias: 0x%lx\n",
mac_bi_data.boottime, mac_bi_data.gmtbias);
-printk(KERN_DEBUG " Machine ID: %ld CPUid: 0x%lx memory size: 0x%lx \n",
+printk(KERN_DEBUG " Machine ID: %ld CPUid: 0x%lx memory size: 0x%lx\n",
mac_bi_data.id, mac_bi_data.cpuid, mac_bi_data.memsize);
iop_init();


@ -154,7 +154,6 @@ good_area:
* the fault. * the fault.
*/ */
survive:
fault = handle_mm_fault(mm, vma, address, write ? FAULT_FLAG_WRITE : 0); fault = handle_mm_fault(mm, vma, address, write ? FAULT_FLAG_WRITE : 0);
#ifdef DEBUG #ifdef DEBUG
printk("handle_mm_fault returns %d\n",fault); printk("handle_mm_fault returns %d\n",fault);
@ -180,15 +179,10 @@ good_area:
*/ */
out_of_memory: out_of_memory:
up_read(&mm->mmap_sem); up_read(&mm->mmap_sem);
if (is_global_init(current)) { if (!user_mode(regs))
yield(); goto no_context;
down_read(&mm->mmap_sem); pagefault_out_of_memory();
goto survive; return 0;
}
printk("VM: killing process %s\n", current->comm);
if (user_mode(regs))
do_group_exit(SIGKILL);
no_context: no_context:
current->thread.signo = SIGBUS; current->thread.signo = SIGBUS;


@ -9,7 +9,6 @@
#include <linux/types.h> #include <linux/types.h>
#include <linux/errno.h> #include <linux/errno.h>
#include <linux/miscdevice.h> #include <linux/miscdevice.h>
#include <linux/smp_lock.h>
#include <linux/ioport.h> #include <linux/ioport.h>
#include <linux/capability.h> #include <linux/capability.h>
#include <linux/fcntl.h> #include <linux/fcntl.h>
@ -36,8 +35,7 @@ static const unsigned char days_in_mo[] =
static atomic_t rtc_ready = ATOMIC_INIT(1); static atomic_t rtc_ready = ATOMIC_INIT(1);
static int rtc_ioctl(struct inode *inode, struct file *file, unsigned int cmd, static long rtc_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
unsigned long arg)
{ {
volatile MK48T08ptr_t rtc = (MK48T08ptr_t)MVME_RTC_BASE; volatile MK48T08ptr_t rtc = (MK48T08ptr_t)MVME_RTC_BASE;
unsigned long flags; unsigned long flags;
@ -120,22 +118,15 @@ static int rtc_ioctl(struct inode *inode, struct file *file, unsigned int cmd,
} }
/* /*
* We enforce only one user at a time here with the open/close. * We enforce only one user at a time here with the open/close.
* Also clear the previous interrupt data on an open, and clean
* up things on a close.
*/ */
static int rtc_open(struct inode *inode, struct file *file) static int rtc_open(struct inode *inode, struct file *file)
{ {
lock_kernel();
if( !atomic_dec_and_test(&rtc_ready) ) if( !atomic_dec_and_test(&rtc_ready) )
{ {
atomic_inc( &rtc_ready ); atomic_inc( &rtc_ready );
unlock_kernel();
return -EBUSY; return -EBUSY;
} }
unlock_kernel();
return 0; return 0;
} }
@ -150,9 +141,9 @@ static int rtc_release(struct inode *inode, struct file *file)
*/ */
static const struct file_operations rtc_fops = { static const struct file_operations rtc_fops = {
.ioctl = rtc_ioctl, .unlocked_ioctl = rtc_ioctl,
.open = rtc_open, .open = rtc_open,
.release = rtc_release, .release = rtc_release,
}; };
static struct miscdevice rtc_dev= static struct miscdevice rtc_dev=


@ -126,7 +126,7 @@ static void q40_reset(void)
{ {
halted = 1; halted = 1;
printk("\n\n*******************************************\n" printk("\n\n*******************************************\n"
"Called q40_reset : press the RESET button!! \n" "Called q40_reset : press the RESET button!!\n"
"*******************************************\n"); "*******************************************\n");
Q40_LED_ON(); Q40_LED_ON();
while (1) while (1)


@ -182,6 +182,39 @@ extern long __user_bad(void);
* Returns zero on success, or -EFAULT on error. * Returns zero on success, or -EFAULT on error.
* On error, the variable @x is set to zero. * On error, the variable @x is set to zero.
*/ */
#define get_user(x, ptr) \
__get_user_check((x), (ptr), sizeof(*(ptr)))
#define __get_user_check(x, ptr, size) \
({ \
unsigned long __gu_val = 0; \
const typeof(*(ptr)) __user *__gu_addr = (ptr); \
int __gu_err = 0; \
\
if (access_ok(VERIFY_READ, __gu_addr, size)) { \
switch (size) { \
case 1: \
__get_user_asm("lbu", __gu_addr, __gu_val, \
__gu_err); \
break; \
case 2: \
__get_user_asm("lhu", __gu_addr, __gu_val, \
__gu_err); \
break; \
case 4: \
__get_user_asm("lw", __gu_addr, __gu_val, \
__gu_err); \
break; \
default: \
__gu_err = __user_bad(); \
break; \
} \
} else { \
__gu_err = -EFAULT; \
} \
x = (typeof(*(ptr)))__gu_val; \
__gu_err; \
})
#define __get_user(x, ptr) \ #define __get_user(x, ptr) \
({ \ ({ \
@ -206,12 +239,6 @@ extern long __user_bad(void);
}) })
#define get_user(x, ptr) \
({ \
access_ok(VERIFY_READ, (ptr), sizeof(*(ptr))) \
? __get_user((x), (ptr)) : -EFAULT; \
})
#define __put_user_asm(insn, __gu_ptr, __gu_val, __gu_err) \ #define __put_user_asm(insn, __gu_ptr, __gu_val, __gu_err) \
({ \ ({ \
__asm__ __volatile__ ( \ __asm__ __volatile__ ( \
@ -266,6 +293,42 @@ extern long __user_bad(void);
* *
* Returns zero on success, or -EFAULT on error. * Returns zero on success, or -EFAULT on error.
*/ */
#define put_user(x, ptr) \
__put_user_check((x), (ptr), sizeof(*(ptr)))
#define __put_user_check(x, ptr, size) \
({ \
typeof(*(ptr)) __pu_val; \
typeof(*(ptr)) __user *__pu_addr = (ptr); \
int __pu_err = 0; \
\
__pu_val = (x); \
if (access_ok(VERIFY_WRITE, __pu_addr, size)) { \
switch (size) { \
case 1: \
__put_user_asm("sb", __pu_addr, __pu_val, \
__pu_err); \
break; \
case 2: \
__put_user_asm("sh", __pu_addr, __pu_val, \
__pu_err); \
break; \
case 4: \
__put_user_asm("sw", __pu_addr, __pu_val, \
__pu_err); \
break; \
case 8: \
__put_user_asm_8(__pu_addr, __pu_val, __pu_err);\
break; \
default: \
__pu_err = __user_bad(); \
break; \
} \
} else { \
__pu_err = -EFAULT; \
} \
__pu_err; \
})
#define __put_user(x, ptr) \ #define __put_user(x, ptr) \
({ \ ({ \
@ -290,18 +353,6 @@ extern long __user_bad(void);
__gu_err; \ __gu_err; \
}) })
#ifndef CONFIG_MMU
#define put_user(x, ptr) __put_user((x), (ptr))
#else /* CONFIG_MMU */
#define put_user(x, ptr) \
({ \
access_ok(VERIFY_WRITE, (ptr), sizeof(*(ptr))) \
? __put_user((x), (ptr)) : -EFAULT; \
})
#endif /* CONFIG_MMU */
/* copy_to_from_user */ /* copy_to_from_user */
#define __copy_from_user(to, from, n) \ #define __copy_from_user(to, from, n) \
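
The rewritten get_user()/put_user() above fold the access_ok() check and the size dispatch into a single macro, so callers no longer pair __get_user() with a separate access_ok() themselves. As a rough, hedged sketch of what a caller sees (the handler, the command numbers and the file are invented for illustration, not taken from this patch):

#include <linux/fs.h>
#include <linux/errno.h>
#include <linux/uaccess.h>

/* Hypothetical ioctl-style handler; illustrative only. */
static long example_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
{
        int __user *uptr = (int __user *)arg;
        int val;

        switch (cmd) {
        case 0x100:     /* made-up "read one int from user" command */
                /* get_user() now expands to __get_user_check(): access_ok() + sized load */
                if (get_user(val, uptr))
                        return -EFAULT;
                return val < 0 ? -EINVAL : 0;
        case 0x101:     /* made-up "write one int to user" command */
                /* put_user() likewise checks access_ok() before the sized store */
                if (put_user(42, uptr))
                        return -EFAULT;
                return 0;
        default:
                return -ENOTTY;
        }
}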


@ -137,8 +137,9 @@ do { \
do { \ do { \
int step = -line_length; \ int step = -line_length; \
int align = ~(line_length - 1); \ int align = ~(line_length - 1); \
int count; \
end = ((end & align) == end) ? end - line_length : end & align; \ end = ((end & align) == end) ? end - line_length : end & align; \
int count = end - start; \ count = end - start; \
WARN_ON(count < 0); \ WARN_ON(count < 0); \
\ \
__asm__ __volatile__ (" 1: " #op " %0, %1; \ __asm__ __volatile__ (" 1: " #op " %0, %1; \
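
The only change in this cache macro is moving the declaration of count ahead of the first statement, presumably to avoid a "mixed declarations and code" warning: the old form declared count after end had already been assigned. A minimal stand-alone illustration of the rule (nothing here is from the patch):

void old_style(int start, int end, int line_length)
{
        int align = ~(line_length - 1);

        end = end & align;
        int count = end - start;        /* declaration after a statement: rejected by strict C90 */
        (void)count;
}

void new_style(int start, int end, int line_length)
{
        int align = ~(line_length - 1);
        int count;                      /* all declarations first ... */

        end = end & align;
        count = end - start;            /* ... assignment happens later */
        (void)count;
}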


@ -476,6 +476,8 @@ ENTRY(ret_from_fork)
nop nop
work_pending: work_pending:
enable_irq
andi r11, r19, _TIF_NEED_RESCHED andi r11, r19, _TIF_NEED_RESCHED
beqi r11, 1f beqi r11, 1f
bralid r15, schedule bralid r15, schedule


@ -52,3 +52,14 @@ EXPORT_SYMBOL_GPL(_ebss);
extern void _mcount(void); extern void _mcount(void);
EXPORT_SYMBOL(_mcount); EXPORT_SYMBOL(_mcount);
#endif #endif
/*
* Assembly functions that may be used (directly or indirectly) by modules
*/
EXPORT_SYMBOL(__copy_tofrom_user);
EXPORT_SYMBOL(__strncpy_user);
#ifdef CONFIG_OPT_LIB_ASM
EXPORT_SYMBOL(memcpy);
EXPORT_SYMBOL(memmove);
#endif


@ -16,6 +16,7 @@
#include <linux/string.h> #include <linux/string.h>
#include <asm/pgtable.h> #include <asm/pgtable.h>
#include <asm/cacheflush.h>
void *module_alloc(unsigned long size) void *module_alloc(unsigned long size)
{ {
@ -151,6 +152,7 @@ int apply_relocate_add(Elf32_Shdr *sechdrs, const char *strtab,
int module_finalize(const Elf32_Ehdr *hdr, const Elf_Shdr *sechdrs, int module_finalize(const Elf32_Ehdr *hdr, const Elf_Shdr *sechdrs,
struct module *module) struct module *module)
{ {
flush_dcache();
return 0; return 0;
} }


@ -47,6 +47,7 @@ unsigned long memory_start;
EXPORT_SYMBOL(memory_start); EXPORT_SYMBOL(memory_start);
unsigned long memory_end; /* due to mm/nommu.c */ unsigned long memory_end; /* due to mm/nommu.c */
unsigned long memory_size; unsigned long memory_size;
EXPORT_SYMBOL(memory_size);
/* /*
* paging_init() sets up the page tables - in fact we've already done this. * paging_init() sets up the page tables - in fact we've already done this.


@ -42,6 +42,7 @@
unsigned long ioremap_base; unsigned long ioremap_base;
unsigned long ioremap_bot; unsigned long ioremap_bot;
EXPORT_SYMBOL(ioremap_bot);
/* The maximum lowmem defaults to 768Mb, but this can be configured to /* The maximum lowmem defaults to 768Mb, but this can be configured to
* another value. * another value.


@ -1507,7 +1507,7 @@ void pcibios_finish_adding_to_bus(struct pci_bus *bus)
pci_bus_add_devices(bus); pci_bus_add_devices(bus);
/* Fixup EEH */ /* Fixup EEH */
eeh_add_device_tree_late(bus); /* eeh_add_device_tree_late(bus); */
} }
EXPORT_SYMBOL_GPL(pcibios_finish_adding_to_bus); EXPORT_SYMBOL_GPL(pcibios_finish_adding_to_bus);


@ -29,7 +29,7 @@
* *
* Atomically reads the value of @v. * Atomically reads the value of @v.
*/ */
#define atomic_read(v) ((v)->counter) #define atomic_read(v) (*(volatile int *)&(v)->counter)
/* /*
* atomic_set - set atomic variable * atomic_set - set atomic variable
@ -410,7 +410,7 @@ static __inline__ int atomic_add_unless(atomic_t *v, int a, int u)
* @v: pointer of type atomic64_t * @v: pointer of type atomic64_t
* *
*/ */
#define atomic64_read(v) ((v)->counter) #define atomic64_read(v) (*(volatile long *)&(v)->counter)
/* /*
* atomic64_set - set atomic variable * atomic64_set - set atomic variable
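
The atomic_read() change here, repeated for several other architectures later in this commit, reads the counter through a volatile-qualified pointer so the compiler must perform a fresh load on every call rather than caching the value in a register across a loop. A stand-alone sketch of the difference (the type and helpers are re-declared locally just for illustration):

/* Illustration only; not kernel code. */
typedef struct { int counter; } example_atomic_t;

#define example_read_old(v)     ((v)->counter)
#define example_read_new(v)     (*(volatile int *)&(v)->counter)

int wait_until_cleared(example_atomic_t *v)
{
        int spins = 0;

        /*
         * With example_read_old() the compiler may hoist the load out of the
         * loop and spin on a stale register copy; the volatile cast in
         * example_read_new() forces a reload on each iteration.
         */
        while (example_read_new(v) != 0)
                spins++;
        return spins;
}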


@ -12,7 +12,7 @@
#define PIT_CH0 0x40 #define PIT_CH0 0x40
#define PIT_CH2 0x42 #define PIT_CH2 0x42
extern spinlock_t i8253_lock; extern raw_spinlock_t i8253_lock;
extern void setup_pit_timer(void); extern void setup_pit_timer(void);


@ -134,6 +134,12 @@
#define FPU_CSR_COND6 0x40000000 /* $fcc6 */ #define FPU_CSR_COND6 0x40000000 /* $fcc6 */
#define FPU_CSR_COND7 0x80000000 /* $fcc7 */ #define FPU_CSR_COND7 0x80000000 /* $fcc7 */
/*
* Bits 18 - 20 of the FPU Status Register will be read as 0,
* and should be written as zero.
*/
#define FPU_CSR_RSVD 0x001c0000
/* /*
* X the exception cause indicator * X the exception cause indicator
* E the exception enable * E the exception enable
@ -161,7 +167,8 @@
#define FPU_CSR_UDF_S 0x00000008 #define FPU_CSR_UDF_S 0x00000008
#define FPU_CSR_INE_S 0x00000004 #define FPU_CSR_INE_S 0x00000004
/* rounding mode */ /* Bits 0 and 1 of FPU Status Register specify the rounding mode */
#define FPU_CSR_RM 0x00000003
#define FPU_CSR_RN 0x0 /* nearest */ #define FPU_CSR_RN 0x0 /* nearest */
#define FPU_CSR_RZ 0x1 /* towards zero */ #define FPU_CSR_RZ 0x1 /* towards zero */
#define FPU_CSR_RU 0x2 /* towards +Infinity */ #define FPU_CSR_RU 0x2 /* towards +Infinity */


@ -15,7 +15,7 @@
#include <asm/io.h> #include <asm/io.h>
#include <asm/time.h> #include <asm/time.h>
DEFINE_SPINLOCK(i8253_lock); DEFINE_RAW_SPINLOCK(i8253_lock);
EXPORT_SYMBOL(i8253_lock); EXPORT_SYMBOL(i8253_lock);
/* /*
@ -26,7 +26,7 @@ EXPORT_SYMBOL(i8253_lock);
static void init_pit_timer(enum clock_event_mode mode, static void init_pit_timer(enum clock_event_mode mode,
struct clock_event_device *evt) struct clock_event_device *evt)
{ {
spin_lock(&i8253_lock); raw_spin_lock(&i8253_lock);
switch(mode) { switch(mode) {
case CLOCK_EVT_MODE_PERIODIC: case CLOCK_EVT_MODE_PERIODIC:
@ -55,7 +55,7 @@ static void init_pit_timer(enum clock_event_mode mode,
/* Nothing to do here */ /* Nothing to do here */
break; break;
} }
spin_unlock(&i8253_lock); raw_spin_unlock(&i8253_lock);
} }
/* /*
@ -65,10 +65,10 @@ static void init_pit_timer(enum clock_event_mode mode,
*/ */
static int pit_next_event(unsigned long delta, struct clock_event_device *evt) static int pit_next_event(unsigned long delta, struct clock_event_device *evt)
{ {
spin_lock(&i8253_lock); raw_spin_lock(&i8253_lock);
outb_p(delta & 0xff , PIT_CH0); /* LSB */ outb_p(delta & 0xff , PIT_CH0); /* LSB */
outb(delta >> 8 , PIT_CH0); /* MSB */ outb(delta >> 8 , PIT_CH0); /* MSB */
spin_unlock(&i8253_lock); raw_spin_unlock(&i8253_lock);
return 0; return 0;
} }
@ -137,7 +137,7 @@ static cycle_t pit_read(struct clocksource *cs)
static int old_count; static int old_count;
static u32 old_jifs; static u32 old_jifs;
spin_lock_irqsave(&i8253_lock, flags); raw_spin_lock_irqsave(&i8253_lock, flags);
/* /*
* Although our caller may have the read side of xtime_lock, * Although our caller may have the read side of xtime_lock,
* this is now a seqlock, and we are cheating in this routine * this is now a seqlock, and we are cheating in this routine
@ -183,7 +183,7 @@ static cycle_t pit_read(struct clocksource *cs)
old_count = count; old_count = count;
old_jifs = jifs; old_jifs = jifs;
spin_unlock_irqrestore(&i8253_lock, flags); raw_spin_unlock_irqrestore(&i8253_lock, flags);
count = (LATCH - 1) - count; count = (LATCH - 1) - count;
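
The i8253 changes are a mechanical conversion of the PIT lock from spinlock_t to raw_spinlock_t, with every lock and unlock site following suit; the usual motivation is that a raw spinlock stays a true busy-wait lock even on kernels where ordinary spinlocks can turn into sleeping locks, which matters for code reached from low-level timer and interrupt paths. A hedged sketch of the same conversion applied to an invented lock:

#include <linux/spinlock.h>

/* Hypothetical hardware lock, mirroring the i8253_lock conversion above. */
static DEFINE_RAW_SPINLOCK(example_hw_lock);             /* was: DEFINE_SPINLOCK() */

static void example_program_hw(unsigned char val)
{
        unsigned long flags;

        raw_spin_lock_irqsave(&example_hw_lock, flags);   /* was: spin_lock_irqsave() */
        /* ... program the device registers here ... */
        raw_spin_unlock_irqrestore(&example_hw_lock, flags);
}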


@ -385,7 +385,7 @@ EXPORT(sysn32_call_table)
PTR sys_fchmodat PTR sys_fchmodat
PTR sys_faccessat PTR sys_faccessat
PTR compat_sys_pselect6 PTR compat_sys_pselect6
PTR sys_ppoll /* 6265 */ PTR compat_sys_ppoll /* 6265 */
PTR sys_unshare PTR sys_unshare
PTR sys_splice PTR sys_splice
PTR sys_sync_file_range PTR sys_sync_file_range


@ -78,6 +78,9 @@ DEFINE_PER_CPU(struct mips_fpu_emulator_stats, fpuemustats);
#define FPCREG_RID 0 /* $0 = revision id */ #define FPCREG_RID 0 /* $0 = revision id */
#define FPCREG_CSR 31 /* $31 = csr */ #define FPCREG_CSR 31 /* $31 = csr */
/* Determine rounding mode from the RM bits of the FCSR */
#define modeindex(v) ((v) & FPU_CSR_RM)
/* Convert Mips rounding mode (0..3) to IEEE library modes. */ /* Convert Mips rounding mode (0..3) to IEEE library modes. */
static const unsigned char ieee_rm[4] = { static const unsigned char ieee_rm[4] = {
[FPU_CSR_RN] = IEEE754_RN, [FPU_CSR_RN] = IEEE754_RN,
@ -384,10 +387,14 @@ static int cop1Emulate(struct pt_regs *xcp, struct mips_fpu_struct *ctx)
(void *) (xcp->cp0_epc), (void *) (xcp->cp0_epc),
MIPSInst_RT(ir), value); MIPSInst_RT(ir), value);
#endif #endif
value &= (FPU_CSR_FLUSH | FPU_CSR_ALL_E | FPU_CSR_ALL_S | 0x03);
ctx->fcr31 &= ~(FPU_CSR_FLUSH | FPU_CSR_ALL_E | FPU_CSR_ALL_S | 0x03); /*
/* convert to ieee library modes */ * Don't write reserved bits,
ctx->fcr31 |= (value & ~0x3) | ieee_rm[value & 0x3]; * and convert to ieee library modes
*/
ctx->fcr31 = (value &
~(FPU_CSR_RSVD | FPU_CSR_RM)) |
ieee_rm[modeindex(value)];
} }
if ((ctx->fcr31 >> 5) & ctx->fcr31 & FPU_CSR_ALL_E) { if ((ctx->fcr31 >> 5) & ctx->fcr31 & FPU_CSR_ALL_E) {
return SIGFPE; return SIGFPE;
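
The new CTC1 emulation path masks off both the reserved bits (FPU_CSR_RSVD, bits 18-20) and the rounding-mode field before OR-ing in the translated rounding mode, where the old code only stripped the low two bits. A small stand-alone calculation with the masks defined earlier in this commit (the ieee_rm values are stand-ins, used only to make the example compile):

#include <stdio.h>

#define FPU_CSR_RSVD    0x001c0000      /* bits 18-20 read as zero */
#define FPU_CSR_RM      0x00000003      /* rounding-mode field */
#define modeindex(v)    ((v) & FPU_CSR_RM)

int main(void)
{
        /* stand-in values; the kernel uses the IEEE754_RN/RZ/RU/RD constants */
        unsigned int ieee_rm[4] = { 0, 1, 2, 3 };
        /* pretend the program wrote rounding mode 2 plus some reserved bits */
        unsigned int value = 0x001c0002;
        unsigned int fcr31;

        fcr31 = (value & ~(FPU_CSR_RSVD | FPU_CSR_RM)) | ieee_rm[modeindex(value)];
        printf("fcr31 = 0x%08x\n", fcr31);      /* reserved bits stripped, RM remapped */
        return 0;
}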


@ -122,7 +122,7 @@ static irqreturn_t loongson2_perfcount_handler(int irq, void *dev_id)
*/ */
/* Check whether the irq belongs to me */ /* Check whether the irq belongs to me */
enabled = read_c0_perfcnt() & LOONGSON2_PERFCNT_INT_EN; enabled = read_c0_perfctrl() & LOONGSON2_PERFCNT_INT_EN;
if (!enabled) if (!enabled)
return IRQ_NONE; return IRQ_NONE;
enabled = reg.cnt1_enabled | reg.cnt2_enabled; enabled = reg.cnt1_enabled | reg.cnt2_enabled;


@ -31,7 +31,7 @@
* Atomically reads the value of @v. Note that the guaranteed * Atomically reads the value of @v. Note that the guaranteed
* useful range of an atomic_t is only 24 bits. * useful range of an atomic_t is only 24 bits.
*/ */
#define atomic_read(v) ((v)->counter) #define atomic_read(v) (*(volatile int *)&(v)->counter)
/** /**
* atomic_set - set atomic variable * atomic_set - set atomic variable


@ -189,7 +189,7 @@ static __inline__ void atomic_set(atomic_t *v, int i)
static __inline__ int atomic_read(const atomic_t *v) static __inline__ int atomic_read(const atomic_t *v)
{ {
return v->counter; return (*(volatile int *)&(v)->counter);
} }
/* exported interface */ /* exported interface */
@ -286,7 +286,7 @@ atomic64_set(atomic64_t *v, s64 i)
static __inline__ s64 static __inline__ s64
atomic64_read(const atomic64_t *v) atomic64_read(const atomic64_t *v)
{ {
return v->counter; return (*(volatile long *)&(v)->counter);
} }
#define atomic64_add(i,v) ((void)(__atomic64_add_return( ((s64)(i)),(v)))) #define atomic64_add(i,v) ((void)(__atomic64_add_return( ((s64)(i)),(v))))


@ -130,43 +130,5 @@ static inline int irqs_disabled_flags(unsigned long flags)
*/ */
struct irq_chip; struct irq_chip;
#ifdef CONFIG_PERF_EVENTS
#ifdef CONFIG_PPC64
static inline unsigned long test_perf_event_pending(void)
{
unsigned long x;
asm volatile("lbz %0,%1(13)"
: "=r" (x)
: "i" (offsetof(struct paca_struct, perf_event_pending)));
return x;
}
static inline void set_perf_event_pending(void)
{
asm volatile("stb %0,%1(13)" : :
"r" (1),
"i" (offsetof(struct paca_struct, perf_event_pending)));
}
static inline void clear_perf_event_pending(void)
{
asm volatile("stb %0,%1(13)" : :
"r" (0),
"i" (offsetof(struct paca_struct, perf_event_pending)));
}
#endif /* CONFIG_PPC64 */
#else /* CONFIG_PERF_EVENTS */
static inline unsigned long test_perf_event_pending(void)
{
return 0;
}
static inline void clear_perf_event_pending(void) {}
#endif /* CONFIG_PERF_EVENTS */
#endif /* __KERNEL__ */ #endif /* __KERNEL__ */
#endif /* _ASM_POWERPC_HW_IRQ_H */ #endif /* _ASM_POWERPC_HW_IRQ_H */


@ -133,7 +133,6 @@ int main(void)
DEFINE(PACAKMSR, offsetof(struct paca_struct, kernel_msr)); DEFINE(PACAKMSR, offsetof(struct paca_struct, kernel_msr));
DEFINE(PACASOFTIRQEN, offsetof(struct paca_struct, soft_enabled)); DEFINE(PACASOFTIRQEN, offsetof(struct paca_struct, soft_enabled));
DEFINE(PACAHARDIRQEN, offsetof(struct paca_struct, hard_enabled)); DEFINE(PACAHARDIRQEN, offsetof(struct paca_struct, hard_enabled));
DEFINE(PACAPERFPEND, offsetof(struct paca_struct, perf_event_pending));
DEFINE(PACACONTEXTID, offsetof(struct paca_struct, context.id)); DEFINE(PACACONTEXTID, offsetof(struct paca_struct, context.id));
#ifdef CONFIG_PPC_MM_SLICES #ifdef CONFIG_PPC_MM_SLICES
DEFINE(PACALOWSLICESPSIZE, offsetof(struct paca_struct, DEFINE(PACALOWSLICESPSIZE, offsetof(struct paca_struct,


@ -1,7 +1,8 @@
/* /*
* Contains routines needed to support swiotlb for ppc. * Contains routines needed to support swiotlb for ppc.
* *
* Copyright (C) 2009 Becky Bruce, Freescale Semiconductor * Copyright (C) 2009-2010 Freescale Semiconductor, Inc.
* Author: Becky Bruce
* *
* This program is free software; you can redistribute it and/or modify it * This program is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License as published by the * under the terms of the GNU General Public License as published by the
@ -70,7 +71,7 @@ static int ppc_swiotlb_bus_notify(struct notifier_block *nb,
sd->max_direct_dma_addr = 0; sd->max_direct_dma_addr = 0;
/* May need to bounce if the device can't address all of DRAM */ /* May need to bounce if the device can't address all of DRAM */
if (dma_get_mask(dev) < lmb_end_of_DRAM()) if ((dma_get_mask(dev) + 1) < lmb_end_of_DRAM())
set_dma_ops(dev, &swiotlb_dma_ops); set_dma_ops(dev, &swiotlb_dma_ops);
return NOTIFY_DONE; return NOTIFY_DONE;
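
The swiotlb notifier fix turns "mask < end of DRAM" into "mask + 1 < end of DRAM": a DMA mask is the highest byte address a device can reach, while lmb_end_of_DRAM() is an exclusive limit, so comparing the raw mask against it is off by one. The boundary case in miniature:

#include <stdio.h>

int main(void)
{
        unsigned long long mask = 0xffffffffULL;        /* device reaches 0 .. 4 GiB - 1 */
        unsigned long long end  = 0x100000000ULL;       /* machine has exactly 4 GiB of DRAM */

        printf("old test, mask < end:      %d\n", mask < end);        /* 1: bounce buffers, needlessly */
        printf("new test, mask + 1 < end:  %d\n", (mask + 1) < end);  /* 0: direct DMA is fine */
        return 0;
}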


@ -556,15 +556,6 @@ ALT_FW_FTR_SECTION_END_IFCLR(FW_FEATURE_ISERIES)
2: 2:
TRACE_AND_RESTORE_IRQ(r5); TRACE_AND_RESTORE_IRQ(r5);
#ifdef CONFIG_PERF_EVENTS
/* check paca->perf_event_pending if we're enabling ints */
lbz r3,PACAPERFPEND(r13)
and. r3,r3,r5
beq 27f
bl .perf_event_do_pending
27:
#endif /* CONFIG_PERF_EVENTS */
/* extract EE bit and use it to restore paca->hard_enabled */ /* extract EE bit and use it to restore paca->hard_enabled */
ld r3,_MSR(r1) ld r3,_MSR(r1)
rldicl r4,r3,49,63 /* r0 = (r3 >> 15) & 1 */ rldicl r4,r3,49,63 /* r0 = (r3 >> 15) & 1 */


@ -53,7 +53,6 @@
#include <linux/bootmem.h> #include <linux/bootmem.h>
#include <linux/pci.h> #include <linux/pci.h>
#include <linux/debugfs.h> #include <linux/debugfs.h>
#include <linux/perf_event.h>
#include <asm/uaccess.h> #include <asm/uaccess.h>
#include <asm/system.h> #include <asm/system.h>
@ -145,11 +144,6 @@ notrace void raw_local_irq_restore(unsigned long en)
} }
#endif /* CONFIG_PPC_STD_MMU_64 */ #endif /* CONFIG_PPC_STD_MMU_64 */
if (test_perf_event_pending()) {
clear_perf_event_pending();
perf_event_do_pending();
}
/* /*
* if (get_paca()->hard_enabled) return; * if (get_paca()->hard_enabled) return;
* But again we need to take care that gcc gets hard_enabled directly * But again we need to take care that gcc gets hard_enabled directly


@ -35,6 +35,9 @@ struct cpu_hw_events {
u64 alternatives[MAX_HWEVENTS][MAX_EVENT_ALTERNATIVES]; u64 alternatives[MAX_HWEVENTS][MAX_EVENT_ALTERNATIVES];
unsigned long amasks[MAX_HWEVENTS][MAX_EVENT_ALTERNATIVES]; unsigned long amasks[MAX_HWEVENTS][MAX_EVENT_ALTERNATIVES];
unsigned long avalues[MAX_HWEVENTS][MAX_EVENT_ALTERNATIVES]; unsigned long avalues[MAX_HWEVENTS][MAX_EVENT_ALTERNATIVES];
unsigned int group_flag;
int n_txn_start;
}; };
DEFINE_PER_CPU(struct cpu_hw_events, cpu_hw_events); DEFINE_PER_CPU(struct cpu_hw_events, cpu_hw_events);
@ -718,66 +721,6 @@ static int collect_events(struct perf_event *group, int max_count,
return n; return n;
} }
static void event_sched_in(struct perf_event *event)
{
event->state = PERF_EVENT_STATE_ACTIVE;
event->oncpu = smp_processor_id();
event->tstamp_running += event->ctx->time - event->tstamp_stopped;
if (is_software_event(event))
event->pmu->enable(event);
}
/*
* Called to enable a whole group of events.
* Returns 1 if the group was enabled, or -EAGAIN if it could not be.
* Assumes the caller has disabled interrupts and has
* frozen the PMU with hw_perf_save_disable.
*/
int hw_perf_group_sched_in(struct perf_event *group_leader,
struct perf_cpu_context *cpuctx,
struct perf_event_context *ctx)
{
struct cpu_hw_events *cpuhw;
long i, n, n0;
struct perf_event *sub;
if (!ppmu)
return 0;
cpuhw = &__get_cpu_var(cpu_hw_events);
n0 = cpuhw->n_events;
n = collect_events(group_leader, ppmu->n_counter - n0,
&cpuhw->event[n0], &cpuhw->events[n0],
&cpuhw->flags[n0]);
if (n < 0)
return -EAGAIN;
if (check_excludes(cpuhw->event, cpuhw->flags, n0, n))
return -EAGAIN;
i = power_check_constraints(cpuhw, cpuhw->events, cpuhw->flags, n + n0);
if (i < 0)
return -EAGAIN;
cpuhw->n_events = n0 + n;
cpuhw->n_added += n;
/*
* OK, this group can go on; update event states etc.,
* and enable any software events
*/
for (i = n0; i < n0 + n; ++i)
cpuhw->event[i]->hw.config = cpuhw->events[i];
cpuctx->active_oncpu += n;
n = 1;
event_sched_in(group_leader);
list_for_each_entry(sub, &group_leader->sibling_list, group_entry) {
if (sub->state != PERF_EVENT_STATE_OFF) {
event_sched_in(sub);
++n;
}
}
ctx->nr_active += n;
return 1;
}
/* /*
* Add a event to the PMU. * Add a event to the PMU.
* If all events are not already frozen, then we disable and * If all events are not already frozen, then we disable and
@ -805,12 +748,22 @@ static int power_pmu_enable(struct perf_event *event)
cpuhw->event[n0] = event; cpuhw->event[n0] = event;
cpuhw->events[n0] = event->hw.config; cpuhw->events[n0] = event->hw.config;
cpuhw->flags[n0] = event->hw.event_base; cpuhw->flags[n0] = event->hw.event_base;
/*
* If group events scheduling transaction was started,
* skip the schedulability test here, it will be performed
* at commit time (->commit_txn) as a whole
*/
if (cpuhw->group_flag & PERF_EVENT_TXN_STARTED)
goto nocheck;
if (check_excludes(cpuhw->event, cpuhw->flags, n0, 1)) if (check_excludes(cpuhw->event, cpuhw->flags, n0, 1))
goto out; goto out;
if (power_check_constraints(cpuhw, cpuhw->events, cpuhw->flags, n0 + 1)) if (power_check_constraints(cpuhw, cpuhw->events, cpuhw->flags, n0 + 1))
goto out; goto out;
event->hw.config = cpuhw->events[n0]; event->hw.config = cpuhw->events[n0];
nocheck:
++cpuhw->n_events; ++cpuhw->n_events;
++cpuhw->n_added; ++cpuhw->n_added;
@ -896,11 +849,65 @@ static void power_pmu_unthrottle(struct perf_event *event)
local_irq_restore(flags); local_irq_restore(flags);
} }
/*
* Start group events scheduling transaction
* Set the flag to make pmu::enable() not perform the
* schedulability test, it will be performed at commit time
*/
void power_pmu_start_txn(const struct pmu *pmu)
{
struct cpu_hw_events *cpuhw = &__get_cpu_var(cpu_hw_events);
cpuhw->group_flag |= PERF_EVENT_TXN_STARTED;
cpuhw->n_txn_start = cpuhw->n_events;
}
/*
* Stop group events scheduling transaction
* Clear the flag and pmu::enable() will perform the
* schedulability test.
*/
void power_pmu_cancel_txn(const struct pmu *pmu)
{
struct cpu_hw_events *cpuhw = &__get_cpu_var(cpu_hw_events);
cpuhw->group_flag &= ~PERF_EVENT_TXN_STARTED;
}
/*
* Commit group events scheduling transaction
* Perform the group schedulability test as a whole
* Return 0 if success
*/
int power_pmu_commit_txn(const struct pmu *pmu)
{
struct cpu_hw_events *cpuhw;
long i, n;
if (!ppmu)
return -EAGAIN;
cpuhw = &__get_cpu_var(cpu_hw_events);
n = cpuhw->n_events;
if (check_excludes(cpuhw->event, cpuhw->flags, 0, n))
return -EAGAIN;
i = power_check_constraints(cpuhw, cpuhw->events, cpuhw->flags, n);
if (i < 0)
return -EAGAIN;
for (i = cpuhw->n_txn_start; i < n; ++i)
cpuhw->event[i]->hw.config = cpuhw->events[i];
return 0;
}
struct pmu power_pmu = { struct pmu power_pmu = {
.enable = power_pmu_enable, .enable = power_pmu_enable,
.disable = power_pmu_disable, .disable = power_pmu_disable,
.read = power_pmu_read, .read = power_pmu_read,
.unthrottle = power_pmu_unthrottle, .unthrottle = power_pmu_unthrottle,
.start_txn = power_pmu_start_txn,
.cancel_txn = power_pmu_cancel_txn,
.commit_txn = power_pmu_commit_txn,
}; };
/* /*
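
The start_txn/commit_txn/cancel_txn trio added above replaces hw_perf_group_sched_in(): the generic perf core now opens a transaction, adds each group member through pmu->enable() (which skips the per-event schedulability test while the transaction flag is set), and then asks the architecture to validate the whole group in commit_txn(). A hedged sketch of that calling pattern, loosely modelled on the core code rather than copied from it:

/* Illustration of how the core drives the transaction hooks. */
static int example_group_sched_in(struct pmu *pmu, struct perf_event *leader)
{
        struct perf_event *sub;

        pmu->start_txn(pmu);                    /* defer the schedulability test */

        if (pmu->enable(leader))
                goto cancel;
        list_for_each_entry(sub, &leader->sibling_list, group_entry)
                if (pmu->enable(sub))
                        goto cancel;

        if (!pmu->commit_txn(pmu))              /* test the whole group at once */
                return 0;
cancel:
        pmu->cancel_txn(pmu);                   /* failure: drop the partial group */
        return -EAGAIN;
}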


@ -532,25 +532,60 @@ void __init iSeries_time_init_early(void)
} }
#endif /* CONFIG_PPC_ISERIES */ #endif /* CONFIG_PPC_ISERIES */
#if defined(CONFIG_PERF_EVENTS) && defined(CONFIG_PPC32) #ifdef CONFIG_PERF_EVENTS
DEFINE_PER_CPU(u8, perf_event_pending);
void set_perf_event_pending(void) /*
* 64-bit uses a byte in the PACA, 32-bit uses a per-cpu variable...
*/
#ifdef CONFIG_PPC64
static inline unsigned long test_perf_event_pending(void)
{ {
get_cpu_var(perf_event_pending) = 1; unsigned long x;
set_dec(1);
put_cpu_var(perf_event_pending); asm volatile("lbz %0,%1(13)"
: "=r" (x)
: "i" (offsetof(struct paca_struct, perf_event_pending)));
return x;
} }
static inline void set_perf_event_pending_flag(void)
{
asm volatile("stb %0,%1(13)" : :
"r" (1),
"i" (offsetof(struct paca_struct, perf_event_pending)));
}
static inline void clear_perf_event_pending(void)
{
asm volatile("stb %0,%1(13)" : :
"r" (0),
"i" (offsetof(struct paca_struct, perf_event_pending)));
}
#else /* 32-bit */
DEFINE_PER_CPU(u8, perf_event_pending);
#define set_perf_event_pending_flag() __get_cpu_var(perf_event_pending) = 1
#define test_perf_event_pending() __get_cpu_var(perf_event_pending) #define test_perf_event_pending() __get_cpu_var(perf_event_pending)
#define clear_perf_event_pending() __get_cpu_var(perf_event_pending) = 0 #define clear_perf_event_pending() __get_cpu_var(perf_event_pending) = 0
#else /* CONFIG_PERF_EVENTS && CONFIG_PPC32 */ #endif /* 32 vs 64 bit */
void set_perf_event_pending(void)
{
preempt_disable();
set_perf_event_pending_flag();
set_dec(1);
preempt_enable();
}
#else /* CONFIG_PERF_EVENTS */
#define test_perf_event_pending() 0 #define test_perf_event_pending() 0
#define clear_perf_event_pending() #define clear_perf_event_pending()
#endif /* CONFIG_PERF_EVENTS && CONFIG_PPC32 */ #endif /* CONFIG_PERF_EVENTS */
/* /*
* For iSeries shared processors, we have to let the hypervisor * For iSeries shared processors, we have to let the hypervisor
@ -582,10 +617,6 @@ void timer_interrupt(struct pt_regs * regs)
set_dec(DECREMENTER_MAX); set_dec(DECREMENTER_MAX);
#ifdef CONFIG_PPC32 #ifdef CONFIG_PPC32
if (test_perf_event_pending()) {
clear_perf_event_pending();
perf_event_do_pending();
}
if (atomic_read(&ppc_n_lost_interrupts) != 0) if (atomic_read(&ppc_n_lost_interrupts) != 0)
do_IRQ(regs); do_IRQ(regs);
#endif #endif
@ -604,6 +635,11 @@ void timer_interrupt(struct pt_regs * regs)
calculate_steal_time(); calculate_steal_time();
if (test_perf_event_pending()) {
clear_perf_event_pending();
perf_event_do_pending();
}
#ifdef CONFIG_PPC_ISERIES #ifdef CONFIG_PPC_ISERIES
if (firmware_has_feature(FW_FEATURE_ISERIES)) if (firmware_has_feature(FW_FEATURE_ISERIES))
get_lppaca()->int_dword.fields.decr_int = 0; get_lppaca()->int_dword.fields.decr_int = 0;


@ -440,7 +440,7 @@ int kvmppc_44x_emul_tlbwe(struct kvm_vcpu *vcpu, u8 ra, u8 rs, u8 ws)
unsigned int gtlb_index; unsigned int gtlb_index;
gtlb_index = kvmppc_get_gpr(vcpu, ra); gtlb_index = kvmppc_get_gpr(vcpu, ra);
if (gtlb_index > KVM44x_GUEST_TLB_SIZE) { if (gtlb_index >= KVM44x_GUEST_TLB_SIZE) {
printk("%s: index %d\n", __func__, gtlb_index); printk("%s: index %d\n", __func__, gtlb_index);
kvmppc_dump_vcpu(vcpu); kvmppc_dump_vcpu(vcpu);
return EMULATE_FAIL; return EMULATE_FAIL;
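
The guard above tightens > into >= because gtlb_index selects an entry in a zero-based array of KVM44x_GUEST_TLB_SIZE elements, so an index equal to the size is already one past the end. The same off-by-one in miniature:

#include <stdio.h>

#define GUEST_TLB_SIZE 4        /* illustrative size, not the real constant */

int main(void)
{
        int tlb[GUEST_TLB_SIZE];                /* valid indices are 0 .. 3 */
        unsigned int index = GUEST_TLB_SIZE;    /* the problematic boundary value */

        /* old check: (index > GUEST_TLB_SIZE) is false, so tlb[4] would be written */
        /* new check: (index >= GUEST_TLB_SIZE) rejects the out-of-range index */
        if (index >= GUEST_TLB_SIZE)
                printf("rejected out-of-range index %u\n", index);
        else
                tlb[index] = 0;
        return 0;
}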


@ -82,7 +82,7 @@ startup_continue:
_ehead: _ehead:
#ifdef CONFIG_SHARED_KERNEL #ifdef CONFIG_SHARED_KERNEL
.org 0x100000 .org 0x100000 - 0x11000 # head.o ends at 0x11000
#endif #endif
# #


@ -80,7 +80,7 @@ startup_continue:
_ehead: _ehead:
#ifdef CONFIG_SHARED_KERNEL #ifdef CONFIG_SHARED_KERNEL
.org 0x100000 .org 0x100000 - 0x11000 # head.o ends at 0x11000
#endif #endif
# #


@ -640,7 +640,7 @@ long compat_arch_ptrace(struct task_struct *child, compat_long_t request,
asmlinkage long do_syscall_trace_enter(struct pt_regs *regs) asmlinkage long do_syscall_trace_enter(struct pt_regs *regs)
{ {
long ret; long ret = 0;
/* Do the secure computing check first. */ /* Do the secure computing check first. */
secure_computing(regs->gprs[2]); secure_computing(regs->gprs[2]);
@ -649,7 +649,6 @@ asmlinkage long do_syscall_trace_enter(struct pt_regs *regs)
* The sysc_tracesys code in entry.S stored the system * The sysc_tracesys code in entry.S stored the system
* call number to gprs[2]. * call number to gprs[2].
*/ */
ret = regs->gprs[2];
if (test_thread_flag(TIF_SYSCALL_TRACE) && if (test_thread_flag(TIF_SYSCALL_TRACE) &&
(tracehook_report_syscall_entry(regs) || (tracehook_report_syscall_entry(regs) ||
regs->gprs[2] >= NR_syscalls)) { regs->gprs[2] >= NR_syscalls)) {
@ -671,7 +670,7 @@ asmlinkage long do_syscall_trace_enter(struct pt_regs *regs)
regs->gprs[2], regs->orig_gpr2, regs->gprs[2], regs->orig_gpr2,
regs->gprs[3], regs->gprs[4], regs->gprs[3], regs->gprs[4],
regs->gprs[5]); regs->gprs[5]);
return ret; return ret ?: regs->gprs[2];
} }
asmlinkage void do_syscall_trace_exit(struct pt_regs *regs) asmlinkage void do_syscall_trace_exit(struct pt_regs *regs)


@ -391,7 +391,6 @@ static void __init time_init_wq(void)
if (time_sync_wq) if (time_sync_wq)
return; return;
time_sync_wq = create_singlethread_workqueue("timesync"); time_sync_wq = create_singlethread_workqueue("timesync");
stop_machine_create();
} }
/* /*


@ -44,6 +44,7 @@ config SUPERH32
select HAVE_FUNCTION_GRAPH_TRACER select HAVE_FUNCTION_GRAPH_TRACER
select HAVE_ARCH_KGDB select HAVE_ARCH_KGDB
select HAVE_HW_BREAKPOINT select HAVE_HW_BREAKPOINT
select HAVE_MIXED_BREAKPOINTS_REGS
select PERF_EVENTS if HAVE_HW_BREAKPOINT select PERF_EVENTS if HAVE_HW_BREAKPOINT
select ARCH_HIBERNATION_POSSIBLE if MMU select ARCH_HIBERNATION_POSSIBLE if MMU


@ -13,7 +13,7 @@
#define ATOMIC_INIT(i) ( (atomic_t) { (i) } ) #define ATOMIC_INIT(i) ( (atomic_t) { (i) } )
#define atomic_read(v) ((v)->counter) #define atomic_read(v) (*(volatile int *)&(v)->counter)
#define atomic_set(v,i) ((v)->counter = (i)) #define atomic_set(v,i) ((v)->counter = (i))
#if defined(CONFIG_GUSA_RB) #if defined(CONFIG_GUSA_RB)


@ -46,10 +46,14 @@ struct pmu;
/* Maximum number of UBC channels */ /* Maximum number of UBC channels */
#define HBP_NUM 2 #define HBP_NUM 2
static inline int hw_breakpoint_slots(int type)
{
return HBP_NUM;
}
/* arch/sh/kernel/hw_breakpoint.c */ /* arch/sh/kernel/hw_breakpoint.c */
extern int arch_check_va_in_userspace(unsigned long va, u16 hbp_len); extern int arch_check_bp_in_kernelspace(struct perf_event *bp);
extern int arch_validate_hwbkpt_settings(struct perf_event *bp, extern int arch_validate_hwbkpt_settings(struct perf_event *bp);
struct task_struct *tsk);
extern int hw_breakpoint_exceptions_notify(struct notifier_block *unused, extern int hw_breakpoint_exceptions_notify(struct notifier_block *unused,
unsigned long val, void *data); unsigned long val, void *data);


@ -119,26 +119,17 @@ static int get_hbp_len(u16 hbp_len)
return len_in_bytes; return len_in_bytes;
} }
/*
* Check for virtual address in user space.
*/
int arch_check_va_in_userspace(unsigned long va, u16 hbp_len)
{
unsigned int len;
len = get_hbp_len(hbp_len);
return (va <= TASK_SIZE - len);
}
/* /*
* Check for virtual address in kernel space. * Check for virtual address in kernel space.
*/ */
static int arch_check_va_in_kernelspace(unsigned long va, u8 hbp_len) int arch_check_bp_in_kernelspace(struct perf_event *bp)
{ {
unsigned int len; unsigned int len;
unsigned long va;
struct arch_hw_breakpoint *info = counter_arch_bp(bp);
len = get_hbp_len(hbp_len); va = info->address;
len = get_hbp_len(info->len);
return (va >= TASK_SIZE) && ((va + len - 1) >= TASK_SIZE); return (va >= TASK_SIZE) && ((va + len - 1) >= TASK_SIZE);
} }
@ -226,8 +217,7 @@ static int arch_build_bp_info(struct perf_event *bp)
/* /*
* Validate the arch-specific HW Breakpoint register settings * Validate the arch-specific HW Breakpoint register settings
*/ */
int arch_validate_hwbkpt_settings(struct perf_event *bp, int arch_validate_hwbkpt_settings(struct perf_event *bp)
struct task_struct *tsk)
{ {
struct arch_hw_breakpoint *info = counter_arch_bp(bp); struct arch_hw_breakpoint *info = counter_arch_bp(bp);
unsigned int align; unsigned int align;
@ -270,15 +260,6 @@ int arch_validate_hwbkpt_settings(struct perf_event *bp,
if (info->address & align) if (info->address & align)
return -EINVAL; return -EINVAL;
/* Check that the virtual address is in the proper range */
if (tsk) {
if (!arch_check_va_in_userspace(info->address, info->len))
return -EFAULT;
} else {
if (!arch_check_va_in_kernelspace(info->address, info->len))
return -EFAULT;
}
return 0; return 0;
} }
@ -363,8 +344,7 @@ static int __kprobes hw_breakpoint_handler(struct die_args *args)
perf_bp_event(bp, args->regs); perf_bp_event(bp, args->regs);
/* Deliver the signal to userspace */ /* Deliver the signal to userspace */
if (arch_check_va_in_userspace(bp->attr.bp_addr, if (!arch_check_bp_in_kernelspace(bp)) {
bp->attr.bp_len)) {
siginfo_t info; siginfo_t info;
info.si_signo = args->signr; info.si_signo = args->signr;


@ -85,7 +85,7 @@ static int set_single_step(struct task_struct *tsk, unsigned long addr)
bp = thread->ptrace_bps[0]; bp = thread->ptrace_bps[0];
if (!bp) { if (!bp) {
hw_breakpoint_init(&attr); ptrace_breakpoint_init(&attr);
attr.bp_addr = addr; attr.bp_addr = addr;
attr.bp_len = HW_BREAKPOINT_LEN_2; attr.bp_len = HW_BREAKPOINT_LEN_2;


@ -25,7 +25,7 @@ extern int atomic_cmpxchg(atomic_t *, int, int);
extern int atomic_add_unless(atomic_t *, int, int); extern int atomic_add_unless(atomic_t *, int, int);
extern void atomic_set(atomic_t *, int); extern void atomic_set(atomic_t *, int);
#define atomic_read(v) ((v)->counter) #define atomic_read(v) (*(volatile int *)&(v)->counter)
#define atomic_add(i, v) ((void)__atomic_add_return( (int)(i), (v))) #define atomic_add(i, v) ((void)__atomic_add_return( (int)(i), (v)))
#define atomic_sub(i, v) ((void)__atomic_add_return(-(int)(i), (v))) #define atomic_sub(i, v) ((void)__atomic_add_return(-(int)(i), (v)))


@ -13,8 +13,8 @@
#define ATOMIC_INIT(i) { (i) } #define ATOMIC_INIT(i) { (i) }
#define ATOMIC64_INIT(i) { (i) } #define ATOMIC64_INIT(i) { (i) }
#define atomic_read(v) ((v)->counter) #define atomic_read(v) (*(volatile int *)&(v)->counter)
#define atomic64_read(v) ((v)->counter) #define atomic64_read(v) (*(volatile long *)&(v)->counter)
#define atomic_set(v, i) (((v)->counter) = i) #define atomic_set(v, i) (((v)->counter) = i)
#define atomic64_set(v, i) (((v)->counter) = i) #define atomic64_set(v, i) (((v)->counter) = i)


@ -44,7 +44,7 @@ extern void change_bit(unsigned long nr, volatile unsigned long *addr);
#ifdef ULTRA_HAS_POPULATION_COUNT #ifdef ULTRA_HAS_POPULATION_COUNT
static inline unsigned int hweight64(unsigned long w) static inline unsigned int __arch_hweight64(unsigned long w)
{ {
unsigned int res; unsigned int res;
@ -52,7 +52,7 @@ static inline unsigned int hweight64(unsigned long w)
return res; return res;
} }
static inline unsigned int hweight32(unsigned int w) static inline unsigned int __arch_hweight32(unsigned int w)
{ {
unsigned int res; unsigned int res;
@ -60,7 +60,7 @@ static inline unsigned int hweight32(unsigned int w)
return res; return res;
} }
static inline unsigned int hweight16(unsigned int w) static inline unsigned int __arch_hweight16(unsigned int w)
{ {
unsigned int res; unsigned int res;
@ -68,7 +68,7 @@ static inline unsigned int hweight16(unsigned int w)
return res; return res;
} }
static inline unsigned int hweight8(unsigned int w) static inline unsigned int __arch_hweight8(unsigned int w)
{ {
unsigned int res; unsigned int res;
@ -78,9 +78,10 @@ static inline unsigned int hweight8(unsigned int w)
#else #else
#include <asm-generic/bitops/hweight.h> #include <asm-generic/bitops/arch_hweight.h>
#endif #endif
#include <asm-generic/bitops/const_hweight.h>
#include <asm-generic/bitops/lock.h> #include <asm-generic/bitops/lock.h>
#endif /* __KERNEL__ */ #endif /* __KERNEL__ */


@ -53,11 +53,15 @@ config X86
select HAVE_KERNEL_LZMA select HAVE_KERNEL_LZMA
select HAVE_KERNEL_LZO select HAVE_KERNEL_LZO
select HAVE_HW_BREAKPOINT select HAVE_HW_BREAKPOINT
select HAVE_MIXED_BREAKPOINTS_REGS
select PERF_EVENTS select PERF_EVENTS
select ANON_INODES select ANON_INODES
select HAVE_ARCH_KMEMCHECK select HAVE_ARCH_KMEMCHECK
select HAVE_USER_RETURN_NOTIFIER select HAVE_USER_RETURN_NOTIFIER
config INSTRUCTION_DECODER
def_bool (KPROBES || PERF_EVENTS)
config OUTPUT_FORMAT config OUTPUT_FORMAT
string string
default "elf32-i386" if X86_32 default "elf32-i386" if X86_32
@ -197,20 +201,17 @@ config HAVE_INTEL_TXT
# Use the generic interrupt handling code in kernel/irq/: # Use the generic interrupt handling code in kernel/irq/:
config GENERIC_HARDIRQS config GENERIC_HARDIRQS
bool def_bool y
default y
config GENERIC_HARDIRQS_NO__DO_IRQ config GENERIC_HARDIRQS_NO__DO_IRQ
def_bool y def_bool y
config GENERIC_IRQ_PROBE config GENERIC_IRQ_PROBE
bool def_bool y
default y
config GENERIC_PENDING_IRQ config GENERIC_PENDING_IRQ
bool def_bool y
depends on GENERIC_HARDIRQS && SMP depends on GENERIC_HARDIRQS && SMP
default y
config USE_GENERIC_SMP_HELPERS config USE_GENERIC_SMP_HELPERS
def_bool y def_bool y
@ -225,19 +226,22 @@ config X86_64_SMP
depends on X86_64 && SMP depends on X86_64 && SMP
config X86_HT config X86_HT
bool def_bool y
depends on SMP depends on SMP
default y
config X86_TRAMPOLINE config X86_TRAMPOLINE
bool def_bool y
depends on SMP || (64BIT && ACPI_SLEEP) depends on SMP || (64BIT && ACPI_SLEEP)
default y
config X86_32_LAZY_GS config X86_32_LAZY_GS
def_bool y def_bool y
depends on X86_32 && !CC_STACKPROTECTOR depends on X86_32 && !CC_STACKPROTECTOR
config ARCH_HWEIGHT_CFLAGS
string
default "-fcall-saved-ecx -fcall-saved-edx" if X86_32
default "-fcall-saved-rdi -fcall-saved-rsi -fcall-saved-rdx -fcall-saved-rcx -fcall-saved-r8 -fcall-saved-r9 -fcall-saved-r10 -fcall-saved-r11" if X86_64
config KTIME_SCALAR config KTIME_SCALAR
def_bool X86_32 def_bool X86_32
source "init/Kconfig" source "init/Kconfig"
@ -447,7 +451,7 @@ config X86_NUMAQ
firmware with - send email to <Martin.Bligh@us.ibm.com>. firmware with - send email to <Martin.Bligh@us.ibm.com>.
config X86_SUPPORTS_MEMORY_FAILURE config X86_SUPPORTS_MEMORY_FAILURE
bool def_bool y
# MCE code calls memory_failure(): # MCE code calls memory_failure():
depends on X86_MCE depends on X86_MCE
# On 32-bit this adds too big of NODES_SHIFT and we run out of page flags: # On 32-bit this adds too big of NODES_SHIFT and we run out of page flags:
@ -455,7 +459,6 @@ config X86_SUPPORTS_MEMORY_FAILURE
# On 32-bit SPARSEMEM adds too big of SECTIONS_WIDTH: # On 32-bit SPARSEMEM adds too big of SECTIONS_WIDTH:
depends on X86_64 || !SPARSEMEM depends on X86_64 || !SPARSEMEM
select ARCH_SUPPORTS_MEMORY_FAILURE select ARCH_SUPPORTS_MEMORY_FAILURE
default y
config X86_VISWS config X86_VISWS
bool "SGI 320/540 (Visual Workstation)" bool "SGI 320/540 (Visual Workstation)"
@ -570,7 +573,6 @@ config PARAVIRT_SPINLOCKS
config PARAVIRT_CLOCK config PARAVIRT_CLOCK
bool bool
default n
endif endif
@ -749,7 +751,6 @@ config MAXSMP
bool "Configure Maximum number of SMP Processors and NUMA Nodes" bool "Configure Maximum number of SMP Processors and NUMA Nodes"
depends on X86_64 && SMP && DEBUG_KERNEL && EXPERIMENTAL depends on X86_64 && SMP && DEBUG_KERNEL && EXPERIMENTAL
select CPUMASK_OFFSTACK select CPUMASK_OFFSTACK
default n
---help--- ---help---
Configure maximum number of CPUS and NUMA Nodes for this architecture. Configure maximum number of CPUS and NUMA Nodes for this architecture.
If unsure, say N. If unsure, say N.
@ -829,7 +830,6 @@ config X86_VISWS_APIC
config X86_REROUTE_FOR_BROKEN_BOOT_IRQS config X86_REROUTE_FOR_BROKEN_BOOT_IRQS
bool "Reroute for broken boot IRQs" bool "Reroute for broken boot IRQs"
default n
depends on X86_IO_APIC depends on X86_IO_APIC
---help--- ---help---
This option enables a workaround that fixes a source of This option enables a workaround that fixes a source of
@ -876,9 +876,8 @@ config X86_MCE_AMD
the DRAM Error Threshold. the DRAM Error Threshold.
config X86_ANCIENT_MCE config X86_ANCIENT_MCE
def_bool n bool "Support for old Pentium 5 / WinChip machine checks"
depends on X86_32 && X86_MCE depends on X86_32 && X86_MCE
prompt "Support for old Pentium 5 / WinChip machine checks"
---help--- ---help---
Include support for machine check handling on old Pentium 5 or WinChip Include support for machine check handling on old Pentium 5 or WinChip
systems. These typically need to be enabled explicitely on the command systems. These typically need to be enabled explicitely on the command
@ -886,8 +885,7 @@ config X86_ANCIENT_MCE
config X86_MCE_THRESHOLD config X86_MCE_THRESHOLD
depends on X86_MCE_AMD || X86_MCE_INTEL depends on X86_MCE_AMD || X86_MCE_INTEL
bool def_bool y
default y
config X86_MCE_INJECT config X86_MCE_INJECT
depends on X86_MCE depends on X86_MCE
@ -1026,8 +1024,8 @@ config X86_CPUID
choice choice
prompt "High Memory Support" prompt "High Memory Support"
default HIGHMEM4G if !X86_NUMAQ
default HIGHMEM64G if X86_NUMAQ default HIGHMEM64G if X86_NUMAQ
default HIGHMEM4G
depends on X86_32 depends on X86_32
config NOHIGHMEM config NOHIGHMEM
@ -1285,7 +1283,7 @@ source "mm/Kconfig"
config HIGHPTE config HIGHPTE
bool "Allocate 3rd-level pagetables from highmem" bool "Allocate 3rd-level pagetables from highmem"
depends on X86_32 && (HIGHMEM4G || HIGHMEM64G) depends on HIGHMEM
---help--- ---help---
The VM uses one page table entry for each page of physical memory. The VM uses one page table entry for each page of physical memory.
For systems with a lot of RAM, this can be wasteful of precious For systems with a lot of RAM, this can be wasteful of precious
@ -1369,8 +1367,7 @@ config MATH_EMULATION
kernel, it won't hurt. kernel, it won't hurt.
config MTRR config MTRR
bool def_bool y
default y
prompt "MTRR (Memory Type Range Register) support" if EMBEDDED prompt "MTRR (Memory Type Range Register) support" if EMBEDDED
---help--- ---help---
On Intel P6 family processors (Pentium Pro, Pentium II and later) On Intel P6 family processors (Pentium Pro, Pentium II and later)
@ -1436,8 +1433,7 @@ config MTRR_SANITIZER_SPARE_REG_NR_DEFAULT
mtrr_spare_reg_nr=N on the kernel command line. mtrr_spare_reg_nr=N on the kernel command line.
config X86_PAT config X86_PAT
bool def_bool y
default y
prompt "x86 PAT support" if EMBEDDED prompt "x86 PAT support" if EMBEDDED
depends on MTRR depends on MTRR
---help--- ---help---
@ -1605,8 +1601,7 @@ config X86_NEED_RELOCS
depends on X86_32 && RELOCATABLE depends on X86_32 && RELOCATABLE
config PHYSICAL_ALIGN config PHYSICAL_ALIGN
hex hex "Alignment value to which kernel should be aligned" if X86_32
prompt "Alignment value to which kernel should be aligned" if X86_32
default "0x1000000" default "0x1000000"
range 0x2000 0x1000000 range 0x2000 0x1000000
---help--- ---help---
@ -1653,7 +1648,6 @@ config COMPAT_VDSO
config CMDLINE_BOOL config CMDLINE_BOOL
bool "Built-in kernel command line" bool "Built-in kernel command line"
default n
---help--- ---help---
Allow for specifying boot arguments to the kernel at Allow for specifying boot arguments to the kernel at
build time. On some systems (e.g. embedded ones), it is build time. On some systems (e.g. embedded ones), it is
@ -1687,7 +1681,6 @@ config CMDLINE
config CMDLINE_OVERRIDE config CMDLINE_OVERRIDE
bool "Built-in command line overrides boot loader arguments" bool "Built-in command line overrides boot loader arguments"
default n
depends on CMDLINE_BOOL depends on CMDLINE_BOOL
---help--- ---help---
Set this option to 'Y' to have the kernel ignore the boot loader Set this option to 'Y' to have the kernel ignore the boot loader
@ -1723,8 +1716,7 @@ source "drivers/acpi/Kconfig"
source "drivers/sfi/Kconfig" source "drivers/sfi/Kconfig"
config X86_APM_BOOT config X86_APM_BOOT
bool def_bool y
default y
depends on APM || APM_MODULE depends on APM || APM_MODULE
menuconfig APM menuconfig APM
@ -1953,8 +1945,7 @@ config DMAR_DEFAULT_ON
experimental. experimental.
config DMAR_BROKEN_GFX_WA config DMAR_BROKEN_GFX_WA
def_bool n bool "Workaround broken graphics drivers (going away soon)"
prompt "Workaround broken graphics drivers (going away soon)"
depends on DMAR && BROKEN depends on DMAR && BROKEN
---help--- ---help---
Current Graphics drivers tend to use physical address Current Graphics drivers tend to use physical address
@ -2052,7 +2043,6 @@ config SCx200HR_TIMER
config OLPC config OLPC
bool "One Laptop Per Child support" bool "One Laptop Per Child support"
select GPIOLIB select GPIOLIB
default n
---help--- ---help---
Add support for detecting the unique features of the OLPC Add support for detecting the unique features of the OLPC
XO hardware. XO hardware.


@ -338,6 +338,10 @@ config X86_F00F_BUG
def_bool y def_bool y
depends on M586MMX || M586TSC || M586 || M486 || M386 depends on M586MMX || M586TSC || M586 || M486 || M386
config X86_INVD_BUG
def_bool y
depends on M486 || M386
config X86_WP_WORKS_OK config X86_WP_WORKS_OK
def_bool y def_bool y
depends on !M386 depends on !M386
@ -502,23 +506,3 @@ config CPU_SUP_UMC_32
CPU might render the kernel unbootable. CPU might render the kernel unbootable.
If unsure, say N. If unsure, say N.
config X86_DS
def_bool X86_PTRACE_BTS
depends on X86_DEBUGCTLMSR
select HAVE_HW_BRANCH_TRACER
config X86_PTRACE_BTS
bool "Branch Trace Store"
default y
depends on X86_DEBUGCTLMSR
depends on BROKEN
---help---
This adds a ptrace interface to the hardware's branch trace store.
Debuggers may use it to collect an execution trace of the debugged
application in order to answer the question 'how did I get here?'.
Debuggers may trace user mode as well as kernel mode.
Say Y unless there is no application development on this machine
and you want to save a small amount of code size.


@ -45,7 +45,6 @@ config EARLY_PRINTK
config EARLY_PRINTK_DBGP config EARLY_PRINTK_DBGP
bool "Early printk via EHCI debug port" bool "Early printk via EHCI debug port"
default n
depends on EARLY_PRINTK && PCI depends on EARLY_PRINTK && PCI
---help--- ---help---
Write kernel log output directly into the EHCI debug port. Write kernel log output directly into the EHCI debug port.
@ -76,7 +75,6 @@ config DEBUG_PER_CPU_MAPS
bool "Debug access to per_cpu maps" bool "Debug access to per_cpu maps"
depends on DEBUG_KERNEL depends on DEBUG_KERNEL
depends on SMP depends on SMP
default n
---help--- ---help---
Say Y to verify that the per_cpu map being accessed has Say Y to verify that the per_cpu map being accessed has
been setup. Adds a fair amount of code to kernel memory been setup. Adds a fair amount of code to kernel memory
@ -174,15 +172,6 @@ config IOMMU_LEAK
Add a simple leak tracer to the IOMMU code. This is useful when you Add a simple leak tracer to the IOMMU code. This is useful when you
are debugging a buggy device driver that leaks IOMMU mappings. are debugging a buggy device driver that leaks IOMMU mappings.
config X86_DS_SELFTEST
bool "DS selftest"
default y
depends on DEBUG_KERNEL
depends on X86_DS
---help---
Perform Debug Store selftests at boot time.
If in doubt, say "N".
config HAVE_MMIOTRACE_SUPPORT config HAVE_MMIOTRACE_SUPPORT
def_bool y def_bool y


@ -95,8 +95,9 @@ sp-$(CONFIG_X86_64) := rsp
cfi := $(call as-instr,.cfi_startproc\n.cfi_rel_offset $(sp-y)$(comma)0\n.cfi_endproc,-DCONFIG_AS_CFI=1) cfi := $(call as-instr,.cfi_startproc\n.cfi_rel_offset $(sp-y)$(comma)0\n.cfi_endproc,-DCONFIG_AS_CFI=1)
# is .cfi_signal_frame supported too? # is .cfi_signal_frame supported too?
cfi-sigframe := $(call as-instr,.cfi_startproc\n.cfi_signal_frame\n.cfi_endproc,-DCONFIG_AS_CFI_SIGNAL_FRAME=1) cfi-sigframe := $(call as-instr,.cfi_startproc\n.cfi_signal_frame\n.cfi_endproc,-DCONFIG_AS_CFI_SIGNAL_FRAME=1)
KBUILD_AFLAGS += $(cfi) $(cfi-sigframe) cfi-sections := $(call as-instr,.cfi_sections .debug_frame,-DCONFIG_AS_CFI_SECTIONS=1)
KBUILD_CFLAGS += $(cfi) $(cfi-sigframe) KBUILD_AFLAGS += $(cfi) $(cfi-sigframe) $(cfi-sections)
KBUILD_CFLAGS += $(cfi) $(cfi-sigframe) $(cfi-sections)
LDFLAGS := -m elf_$(UTS_MACHINE) LDFLAGS := -m elf_$(UTS_MACHINE)


@ -6,8 +6,8 @@
.macro LOCK_PREFIX .macro LOCK_PREFIX
1: lock 1: lock
.section .smp_locks,"a" .section .smp_locks,"a"
_ASM_ALIGN .balign 4
_ASM_PTR 1b .long 1b - .
.previous .previous
.endm .endm
#else #else


@ -28,20 +28,20 @@
*/ */
#ifdef CONFIG_SMP #ifdef CONFIG_SMP
#define LOCK_PREFIX \ #define LOCK_PREFIX_HERE \
".section .smp_locks,\"a\"\n" \ ".section .smp_locks,\"a\"\n" \
_ASM_ALIGN "\n" \ ".balign 4\n" \
_ASM_PTR "661f\n" /* address */ \ ".long 671f - .\n" /* offset */ \
".previous\n" \ ".previous\n" \
"661:\n\tlock; " "671:"
#define LOCK_PREFIX LOCK_PREFIX_HERE "\n\tlock; "
#else /* ! CONFIG_SMP */ #else /* ! CONFIG_SMP */
#define LOCK_PREFIX_HERE ""
#define LOCK_PREFIX "" #define LOCK_PREFIX ""
#endif #endif
/* This must be included *after* the definition of LOCK_PREFIX */
#include <asm/cpufeature.h>
struct alt_instr { struct alt_instr {
u8 *instr; /* original instruction */ u8 *instr; /* original instruction */
u8 *replacement; u8 *replacement;
@ -95,6 +95,12 @@ static inline int alternatives_text_reserved(void *start, void *end)
"663:\n\t" newinstr "\n664:\n" /* replacement */ \ "663:\n\t" newinstr "\n664:\n" /* replacement */ \
".previous" ".previous"
/*
* This must be included *after* the definition of ALTERNATIVE due to
* <asm/arch_hweight.h>
*/
#include <asm/cpufeature.h>
/* /*
* Alternative instructions for different CPU types or capabilities. * Alternative instructions for different CPU types or capabilities.
* *
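
With this change the .smp_locks section records each lock prefix as a 32-bit offset relative to the entry itself ("671f - .") instead of an absolute pointer, which shrinks the table on 64-bit kernels and removes the need for relocations. Code that walks the table then has to add the entry's own address back to recover the location of the lock byte; a hedged sketch of that arithmetic (the helper is invented, but this is the usual way such relative entries are resolved):

/* Illustration only: resolve one relative smp_locks entry to a code address. */
static inline unsigned char *example_smp_lock_addr(const int *entry)
{
        /* *entry holds (address of the lock prefix) - (address of the entry) */
        return (unsigned char *)entry + *entry;
}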


@ -174,6 +174,40 @@
(~((1ULL << (12 + ((lvl) * 9))) - 1))) (~((1ULL << (12 + ((lvl) * 9))) - 1)))
#define PM_ALIGNED(lvl, addr) ((PM_MAP_MASK(lvl) & (addr)) == (addr)) #define PM_ALIGNED(lvl, addr) ((PM_MAP_MASK(lvl) & (addr)) == (addr))
/*
* Returns the page table level to use for a given page size
* Pagesize is expected to be a power-of-two
*/
#define PAGE_SIZE_LEVEL(pagesize) \
((__ffs(pagesize) - 12) / 9)
/*
* Returns the number of ptes to use for a given page size
* Pagesize is expected to be a power-of-two
*/
#define PAGE_SIZE_PTE_COUNT(pagesize) \
(1ULL << ((__ffs(pagesize) - 12) % 9))
/*
* Aligns a given io-virtual address to a given page size
* Pagesize is expected to be a power-of-two
*/
#define PAGE_SIZE_ALIGN(address, pagesize) \
((address) & ~((pagesize) - 1))
/*
* Creates an IOMMU PTE for an address and a given pagesize
* The PTE has no permission bits set
* Pagesize is expected to be a power-of-two larger than 4096
*/
#define PAGE_SIZE_PTE(address, pagesize) \
(((address) | ((pagesize) - 1)) & \
(~(pagesize >> 1)) & PM_ADDR_MASK)
/*
* Takes a PTE value with mode=0x07 and returns the page size it maps
*/
#define PTE_PAGE_SIZE(pte) \
(1ULL << (1 + ffz(((pte) | 0xfffULL))))
#define IOMMU_PTE_P (1ULL << 0) #define IOMMU_PTE_P (1ULL << 0)
#define IOMMU_PTE_TV (1ULL << 1) #define IOMMU_PTE_TV (1ULL << 1)
#define IOMMU_PTE_U (1ULL << 59) #define IOMMU_PTE_U (1ULL << 59)
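
The new AMD IOMMU macros derive the page-table level and the number of PTEs purely from the (power-of-two) page size. Plugging in a few sizes shows the intent; this stand-alone check uses GCC's __builtin_ctzll in place of the kernel's __ffs():

#include <stdio.h>

#define EX_PAGE_SIZE_LEVEL(ps)          ((__builtin_ctzll(ps) - 12) / 9)
#define EX_PAGE_SIZE_PTE_COUNT(ps)      (1ULL << ((__builtin_ctzll(ps) - 12) % 9))

int main(void)
{
        unsigned long long sizes[] = { 0x1000ULL, 0x10000ULL, 0x200000ULL, 0x40000000ULL };
        unsigned int i;

        for (i = 0; i < 4; i++)
                printf("pagesize 0x%llx -> level %d, ptes %llu\n",
                       sizes[i], EX_PAGE_SIZE_LEVEL(sizes[i]),
                       EX_PAGE_SIZE_PTE_COUNT(sizes[i]));
        /*
         * 4K -> level 0, 1 pte        64K -> level 0, 16 ptes
         * 2M -> level 1, 1 pte        1G  -> level 2, 1 pte
         */
        return 0;
}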


@ -373,6 +373,7 @@ extern atomic_t init_deasserted;
extern int wakeup_secondary_cpu_via_nmi(int apicid, unsigned long start_eip); extern int wakeup_secondary_cpu_via_nmi(int apicid, unsigned long start_eip);
#endif #endif
#ifdef CONFIG_X86_LOCAL_APIC
static inline u32 apic_read(u32 reg) static inline u32 apic_read(u32 reg)
{ {
return apic->read(reg); return apic->read(reg);
@ -403,10 +404,19 @@ static inline u32 safe_apic_wait_icr_idle(void)
return apic->safe_wait_icr_idle(); return apic->safe_wait_icr_idle();
} }
#else /* CONFIG_X86_LOCAL_APIC */
static inline u32 apic_read(u32 reg) { return 0; }
static inline void apic_write(u32 reg, u32 val) { }
static inline u64 apic_icr_read(void) { return 0; }
static inline void apic_icr_write(u32 low, u32 high) { }
static inline void apic_wait_icr_idle(void) { }
static inline u32 safe_apic_wait_icr_idle(void) { return 0; }
#endif /* CONFIG_X86_LOCAL_APIC */
static inline void ack_APIC_irq(void) static inline void ack_APIC_irq(void)
{ {
#ifdef CONFIG_X86_LOCAL_APIC
/* /*
* ack_APIC_irq() actually gets compiled as a single instruction * ack_APIC_irq() actually gets compiled as a single instruction
* ... yummie. * ... yummie.
@ -414,7 +424,6 @@ static inline void ack_APIC_irq(void)
/* Docs say use 0 for future compatibility */ /* Docs say use 0 for future compatibility */
apic_write(APIC_EOI, 0); apic_write(APIC_EOI, 0);
#endif
} }
static inline unsigned default_get_apic_id(unsigned long x) static inline unsigned default_get_apic_id(unsigned long x)


@ -0,0 +1,61 @@
#ifndef _ASM_X86_HWEIGHT_H
#define _ASM_X86_HWEIGHT_H
#ifdef CONFIG_64BIT
/* popcnt %edi, %eax -- redundant REX prefix for alignment */
#define POPCNT32 ".byte 0xf3,0x40,0x0f,0xb8,0xc7"
/* popcnt %rdi, %rax */
#define POPCNT64 ".byte 0xf3,0x48,0x0f,0xb8,0xc7"
#define REG_IN "D"
#define REG_OUT "a"
#else
/* popcnt %eax, %eax */
#define POPCNT32 ".byte 0xf3,0x0f,0xb8,0xc0"
#define REG_IN "a"
#define REG_OUT "a"
#endif
/*
* __sw_hweightXX are called from within the alternatives below
* and callee-clobbered registers need to be taken care of. See
* ARCH_HWEIGHT_CFLAGS in <arch/x86/Kconfig> for the respective
* compiler switches.
*/
static inline unsigned int __arch_hweight32(unsigned int w)
{
unsigned int res = 0;
asm (ALTERNATIVE("call __sw_hweight32", POPCNT32, X86_FEATURE_POPCNT)
: "="REG_OUT (res)
: REG_IN (w));
return res;
}
static inline unsigned int __arch_hweight16(unsigned int w)
{
return __arch_hweight32(w & 0xffff);
}
static inline unsigned int __arch_hweight8(unsigned int w)
{
return __arch_hweight32(w & 0xff);
}
static inline unsigned long __arch_hweight64(__u64 w)
{
unsigned long res = 0;
#ifdef CONFIG_X86_32
return __arch_hweight32((u32)w) +
__arch_hweight32((u32)(w >> 32));
#else
asm (ALTERNATIVE("call __sw_hweight64", POPCNT64, X86_FEATURE_POPCNT)
: "="REG_OUT (res)
: REG_IN (w));
#endif /* CONFIG_X86_32 */
return res;
}
#endif
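
The new x86 header implements __arch_hweight32/64() as an alternatives-patched call: plain CPUs call the out-of-line __sw_hweightXX helpers, while CPUs with X86_FEATURE_POPCNT execute a single popcnt instruction, and ARCH_HWEIGHT_CFLAGS from the Kconfig hunk earlier in this commit builds the helpers so they do not clobber the registers the call site leaves live. A small, hedged usage sketch with an invented helper:

#include <linux/types.h>
#include <linux/bitops.h>

/* Hypothetical helper; illustrative only. */
static inline unsigned int example_count_enabled_lanes(u64 lane_mask)
{
        /*
         * hweight64() resolves to __arch_hweight64() above; on CPUs with
         * X86_FEATURE_POPCNT the call is patched into a popcnt instruction.
         */
        return hweight64(lane_mask);
}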


@@ -22,7 +22,7 @@
 */
static inline int atomic_read(const atomic_t *v)
{
-	return v->counter;
+	return (*(volatile int *)&(v)->counter);
}
/**
@@ -246,6 +246,29 @@ static inline int atomic_add_unless(atomic_t *v, int a, int u)
#define atomic_inc_not_zero(v) atomic_add_unless((v), 1, 0)
+/*
+ * atomic_dec_if_positive - decrement by 1 if old value positive
+ * @v: pointer of type atomic_t
+ *
+ * The function returns the old value of *v minus 1, even if
+ * the atomic variable, v, was not decremented.
+ */
+static inline int atomic_dec_if_positive(atomic_t *v)
+{
+	int c, old, dec;
+	c = atomic_read(v);
+	for (;;) {
+		dec = c - 1;
+		if (unlikely(dec < 0))
+			break;
+		old = atomic_cmpxchg((v), c, dec);
+		if (likely(old == c))
+			break;
+		c = old;
+	}
+	return dec;
+}
/**
 * atomic_inc_short - increment of a short integer
 * @v: pointer to type int
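atomic_dec_if_positive() is useful for counters that must never drop below zero; a minimal usage sketch (take_token() and the tokens counter are hypothetical, not part of this patch):

    /* Hypothetical: consume one unit of budget unless it is already exhausted.
     * atomic_dec_if_positive() returns the old value minus one, so a negative
     * result means the counter was already zero and was left untouched. */
    static atomic_t tokens = ATOMIC_INIT(8);

    static bool take_token(void)
    {
            return atomic_dec_if_positive(&tokens) >= 0;
    }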


@@ -14,109 +14,193 @@ typedef struct {
#define ATOMIC64_INIT(val)	{ (val) }
-extern u64 atomic64_cmpxchg(atomic64_t *ptr, u64 old_val, u64 new_val);
+#ifdef CONFIG_X86_CMPXCHG64
+#define ATOMIC64_ALTERNATIVE_(f, g) "call atomic64_" #g "_cx8"
+#else
+#define ATOMIC64_ALTERNATIVE_(f, g) ALTERNATIVE("call atomic64_" #f "_386", "call atomic64_" #g "_cx8", X86_FEATURE_CX8)
+#endif
+#define ATOMIC64_ALTERNATIVE(f) ATOMIC64_ALTERNATIVE_(f, f)
+/**
+ * atomic64_cmpxchg - cmpxchg atomic64 variable
+ * @p: pointer to type atomic64_t
+ * @o: expected value
+ * @n: new value
+ *
+ * Atomically sets @v to @n if it was equal to @o and returns
+ * the old value.
+ */
+static inline long long atomic64_cmpxchg(atomic64_t *v, long long o, long long n)
+{
+	return cmpxchg64(&v->counter, o, n);
+}
/**
 * atomic64_xchg - xchg atomic64 variable
- * @ptr: pointer to type atomic64_t
- * @new_val: value to assign
+ * @v: pointer to type atomic64_t
+ * @n: value to assign
 *
- * Atomically xchgs the value of @ptr to @new_val and returns
+ * Atomically xchgs the value of @v to @n and returns
 * the old value.
 */
-extern u64 atomic64_xchg(atomic64_t *ptr, u64 new_val);
+static inline long long atomic64_xchg(atomic64_t *v, long long n)
+{
+	long long o;
+	unsigned high = (unsigned)(n >> 32);
+	unsigned low = (unsigned)n;
+	asm volatile(ATOMIC64_ALTERNATIVE(xchg)
+		     : "=A" (o), "+b" (low), "+c" (high)
+		     : "S" (v)
+		     : "memory"
+		     );
+	return o;
+}
/**
 * atomic64_set - set atomic64 variable
- * @ptr: pointer to type atomic64_t
- * @new_val: value to assign
+ * @v: pointer to type atomic64_t
+ * @n: value to assign
 *
- * Atomically sets the value of @ptr to @new_val.
 */
-extern void atomic64_set(atomic64_t *ptr, u64 new_val);
+ * Atomically sets the value of @v to @n.
+static inline void atomic64_set(atomic64_t *v, long long i)
+{
+	unsigned high = (unsigned)(i >> 32);
+	unsigned low = (unsigned)i;
+	asm volatile(ATOMIC64_ALTERNATIVE(set)
+		     : "+b" (low), "+c" (high)
+		     : "S" (v)
+		     : "eax", "edx", "memory"
+		     );
+}
/**
 * atomic64_read - read atomic64 variable
- * @ptr: pointer to type atomic64_t
+ * @v: pointer to type atomic64_t
 *
- * Atomically reads the value of @ptr and returns it.
+ * Atomically reads the value of @v and returns it.
 */
-static inline u64 atomic64_read(atomic64_t *ptr)
+static inline long long atomic64_read(atomic64_t *v)
{
-	u64 res;
-
-	/*
-	 * Note, we inline this atomic64_t primitive because
-	 * it only clobbers EAX/EDX and leaves the others
-	 * untouched. We also (somewhat subtly) rely on the
-	 * fact that cmpxchg8b returns the current 64-bit value
-	 * of the memory location we are touching:
-	 */
-	asm volatile(
-		"mov %%ebx, %%eax\n\t"
-		"mov %%ecx, %%edx\n\t"
-		LOCK_PREFIX "cmpxchg8b %1\n"
-		: "=&A" (res)
-		: "m" (*ptr)
-		);
-
-	return res;
-}
-
-extern u64 atomic64_read(atomic64_t *ptr);
+	long long r;
+	asm volatile(ATOMIC64_ALTERNATIVE(read)
+		     : "=A" (r), "+c" (v)
+		     : : "memory"
+		     );
+	return r;
+}
/**
 * atomic64_add_return - add and return
- * @delta: integer value to add
- * @ptr: pointer to type atomic64_t
+ * @i: integer value to add
+ * @v: pointer to type atomic64_t
 *
- * Atomically adds @delta to @ptr and returns @delta + *@ptr
+ * Atomically adds @i to @v and returns @i + *@v
 */
-extern u64 atomic64_add_return(u64 delta, atomic64_t *ptr);
+static inline long long atomic64_add_return(long long i, atomic64_t *v)
+{
+	asm volatile(ATOMIC64_ALTERNATIVE(add_return)
+		     : "+A" (i), "+c" (v)
+		     : : "memory"
+		     );
+	return i;
+}
/*
 * Other variants with different arithmetic operators:
 */
-extern u64 atomic64_sub_return(u64 delta, atomic64_t *ptr);
-extern u64 atomic64_inc_return(atomic64_t *ptr);
-extern u64 atomic64_dec_return(atomic64_t *ptr);
+static inline long long atomic64_sub_return(long long i, atomic64_t *v)
+{
+	asm volatile(ATOMIC64_ALTERNATIVE(sub_return)
+		     : "+A" (i), "+c" (v)
+		     : : "memory"
+		     );
+	return i;
+}
+static inline long long atomic64_inc_return(atomic64_t *v)
+{
+	long long a;
+	asm volatile(ATOMIC64_ALTERNATIVE(inc_return)
+		     : "=A" (a)
+		     : "S" (v)
+		     : "memory", "ecx"
+		     );
+	return a;
+}
+static inline long long atomic64_dec_return(atomic64_t *v)
+{
+	long long a;
+	asm volatile(ATOMIC64_ALTERNATIVE(dec_return)
+		     : "=A" (a)
+		     : "S" (v)
+		     : "memory", "ecx"
+		     );
+	return a;
+}
/**
 * atomic64_add - add integer to atomic64 variable
- * @delta: integer value to add
- * @ptr: pointer to type atomic64_t
+ * @i: integer value to add
+ * @v: pointer to type atomic64_t
 *
- * Atomically adds @delta to @ptr.
+ * Atomically adds @i to @v.
 */
-extern void atomic64_add(u64 delta, atomic64_t *ptr);
+static inline long long atomic64_add(long long i, atomic64_t *v)
+{
+	asm volatile(ATOMIC64_ALTERNATIVE_(add, add_return)
+		     : "+A" (i), "+c" (v)
+		     : : "memory"
+		     );
+	return i;
+}
/**
 * atomic64_sub - subtract the atomic64 variable
- * @delta: integer value to subtract
- * @ptr: pointer to type atomic64_t
+ * @i: integer value to subtract
+ * @v: pointer to type atomic64_t
 *
- * Atomically subtracts @delta from @ptr.
+ * Atomically subtracts @i from @v.
 */
-extern void atomic64_sub(u64 delta, atomic64_t *ptr);
+static inline long long atomic64_sub(long long i, atomic64_t *v)
+{
+	asm volatile(ATOMIC64_ALTERNATIVE_(sub, sub_return)
+		     : "+A" (i), "+c" (v)
+		     : : "memory"
+		     );
+	return i;
+}
/**
 * atomic64_sub_and_test - subtract value from variable and test result
- * @delta: integer value to subtract
- * @ptr: pointer to type atomic64_t
+ * @i: integer value to subtract
+ * @v: pointer to type atomic64_t
 *
- * Atomically subtracts @delta from @ptr and returns
+ * Atomically subtracts @i from @v and returns
 * true if the result is zero, or false for all
 * other cases.
 */
-extern int atomic64_sub_and_test(u64 delta, atomic64_t *ptr);
+static inline int atomic64_sub_and_test(long long i, atomic64_t *v)
+{
+	return atomic64_sub_return(i, v) == 0;
+}
/**
 * atomic64_inc - increment atomic64 variable
- * @ptr: pointer to type atomic64_t
+ * @v: pointer to type atomic64_t
 *
- * Atomically increments @ptr by 1.
+ * Atomically increments @v by 1.
 */
-extern void atomic64_inc(atomic64_t *ptr);
+static inline void atomic64_inc(atomic64_t *v)
+{
+	asm volatile(ATOMIC64_ALTERNATIVE_(inc, inc_return)
+		     : : "S" (v)
+		     : "memory", "eax", "ecx", "edx"
+		     );
+}
/**
 * atomic64_dec - decrement atomic64 variable
@@ -124,37 +208,97 @@ extern void atomic64_inc(atomic64_t *ptr);
 *
 * Atomically decrements @ptr by 1.
 */
-extern void atomic64_dec(atomic64_t *ptr);
+static inline void atomic64_dec(atomic64_t *v)
+{
+	asm volatile(ATOMIC64_ALTERNATIVE_(dec, dec_return)
+		     : : "S" (v)
+		     : "memory", "eax", "ecx", "edx"
+		     );
+}
/**
 * atomic64_dec_and_test - decrement and test
- * @ptr: pointer to type atomic64_t
+ * @v: pointer to type atomic64_t
 *
- * Atomically decrements @ptr by 1 and
+ * Atomically decrements @v by 1 and
 * returns true if the result is 0, or false for all other
 * cases.
 */
-extern int atomic64_dec_and_test(atomic64_t *ptr);
+static inline int atomic64_dec_and_test(atomic64_t *v)
+{
+	return atomic64_dec_return(v) == 0;
+}
/**
 * atomic64_inc_and_test - increment and test
- * @ptr: pointer to type atomic64_t
+ * @v: pointer to type atomic64_t
 *
- * Atomically increments @ptr by 1
+ * Atomically increments @v by 1
 * and returns true if the result is zero, or false for all
 * other cases.
 */
-extern int atomic64_inc_and_test(atomic64_t *ptr);
+static inline int atomic64_inc_and_test(atomic64_t *v)
+{
+	return atomic64_inc_return(v) == 0;
+}
/**
 * atomic64_add_negative - add and test if negative
- * @delta: integer value to add
- * @ptr: pointer to type atomic64_t
+ * @i: integer value to add
+ * @v: pointer to type atomic64_t
 *
- * Atomically adds @delta to @ptr and returns true
+ * Atomically adds @i to @v and returns true
 * if the result is negative, or false when
 * result is greater than or equal to zero.
 */
-extern int atomic64_add_negative(u64 delta, atomic64_t *ptr);
+static inline int atomic64_add_negative(long long i, atomic64_t *v)
+{
+	return atomic64_add_return(i, v) < 0;
+}
+/**
+ * atomic64_add_unless - add unless the number is a given value
+ * @v: pointer of type atomic64_t
+ * @a: the amount to add to v...
+ * @u: ...unless v is equal to u.
+ *
+ * Atomically adds @a to @v, so long as it was not @u.
+ * Returns non-zero if @v was not @u, and zero otherwise.
+ */
+static inline int atomic64_add_unless(atomic64_t *v, long long a, long long u)
+{
+	unsigned low = (unsigned)u;
+	unsigned high = (unsigned)(u >> 32);
+	asm volatile(ATOMIC64_ALTERNATIVE(add_unless) "\n\t"
+		     : "+A" (a), "+c" (v), "+S" (low), "+D" (high)
+		     : : "memory");
+	return (int)a;
+}
+static inline int atomic64_inc_not_zero(atomic64_t *v)
+{
+	int r;
+	asm volatile(ATOMIC64_ALTERNATIVE(inc_not_zero)
+		     : "=a" (r)
+		     : "S" (v)
+		     : "ecx", "edx", "memory"
+		     );
+	return r;
+}
+static inline long long atomic64_dec_if_positive(atomic64_t *v)
+{
+	long long r;
+	asm volatile(ATOMIC64_ALTERNATIVE(dec_if_positive)
+		     : "=A" (r)
+		     : "S" (v)
+		     : "ecx", "memory"
+		     );
+	return r;
+}
+#undef ATOMIC64_ALTERNATIVE
+#undef ATOMIC64_ALTERNATIVE_
#endif /* _ASM_X86_ATOMIC64_32_H */
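Because the 32-bit header now provides the full inline API rather than a handful of extern helpers, generic code can use atomic64_t the same way on 32-bit and 64-bit builds; an illustrative reference-count sketch (example_get()/example_put() are hypothetical, not from this diff):

    /* Illustrative 64-bit reference count on top of the interface above. */
    static atomic64_t example_refs = ATOMIC64_INIT(1);

    static int example_get(void)
    {
            /* Fails (returns 0) once the count has already reached zero. */
            return atomic64_inc_not_zero(&example_refs);
    }

    static void example_put(void)
    {
            if (atomic64_dec_and_test(&example_refs))
                    pr_debug("example: last reference dropped\n");
    }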


@@ -18,7 +18,7 @@
 */
static inline long atomic64_read(const atomic64_t *v)
{
-	return v->counter;
+	return (*(volatile long *)&(v)->counter);
}
/**
@@ -221,4 +221,27 @@ static inline int atomic64_add_unless(atomic64_t *v, long a, long u)
#define atomic64_inc_not_zero(v) atomic64_add_unless((v), 1, 0)
+/*
+ * atomic64_dec_if_positive - decrement by 1 if old value positive
+ * @v: pointer of type atomic_t
+ *
+ * The function returns the old value of *v minus 1, even if
+ * the atomic variable, v, was not decremented.
+ */
+static inline long atomic64_dec_if_positive(atomic64_t *v)
+{
+	long c, old, dec;
+	c = atomic64_read(v);
+	for (;;) {
+		dec = c - 1;
+		if (unlikely(dec < 0))
+			break;
+		old = atomic64_cmpxchg((v), c, dec);
+		if (likely(old == c))
+			break;
+		c = old;
+	}
+	return dec;
+}
#endif /* _ASM_X86_ATOMIC64_64_H */
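The new (*(volatile long *)&(v)->counter) form of atomic64_read() forces a fresh load from memory on every call, which is what busy-wait loops rely on; a minimal sketch (example_wait_for_zero() is hypothetical, cpu_relax() is the usual kernel busy-wait hint):

    /* Hypothetical: spin until another CPU drops the counter to zero.
     * The volatile cast in atomic64_read() keeps the compiler from
     * hoisting the load out of the loop. */
    static void example_wait_for_zero(atomic64_t *v)
    {
            while (atomic64_read(v) != 0)
                    cpu_relax();
    }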


@@ -444,7 +444,9 @@ static inline int fls(int x)
#define ARCH_HAS_FAST_MULTIPLIER 1
-#include <asm-generic/bitops/hweight.h>
+#include <asm/arch_hweight.h>
+#include <asm-generic/bitops/const_hweight.h>
#endif /* __KERNEL__ */

Some files were not shown because too many files have changed in this diff.