Merge branch 'for-linus' of git://git.linaro.org/people/rmk/linux-arm
Pull ARM updates from Russell King:
 "The major items included in here are:

   - MCPM, multi-cluster power management, part of the infrastructure
     required for ARMs big.LITTLE support.

   - A rework of the ARM KVM code to allow re-use by ARM64.

   - Error handling cleanups of the IS_ERR_OR_NULL() madness and fixes
     of that stuff for arch/arm

   - Preparatory patches for Cortex-M3 support from Uwe Kleine-König.

  There is also a set of three patches in here from Hugh/Catalin to
  address freeing of inappropriate page tables on LPAE.  You already
  have these from akpm, but they were already part of my tree at the
  time he sent them, so unfortunately they'll end up with duplicate
  commits"

* 'for-linus' of git://git.linaro.org/people/rmk/linux-arm: (77 commits)
  ARM: EXYNOS: remove unnecessary use of IS_ERR_VALUE()
  ARM: IMX: remove unnecessary use of IS_ERR_VALUE()
  ARM: OMAP: use consistent error checking
  ARM: cleanup: OMAP hwmod error checking
  ARM: 7709/1: mcpm: Add explicit AFLAGS to support v6/v7 multiplatform kernels
  ARM: 7700/2: Make cpu_init() notrace
  ARM: 7702/1: Set the page table freeing ceiling to TASK_SIZE
  ARM: 7701/1: mm: Allow arch code to control the user page table ceiling
  ARM: 7703/1: Disable preemption in broadcast_tlb*_a15_erratum()
  ARM: mcpm: provide an interface to set the SMP ops at run time
  ARM: mcpm: generic SMP secondary bringup and hotplug support
  ARM: mcpm_head.S: vlock-based first man election
  ARM: mcpm: Add baremetal voting mutexes
  ARM: mcpm: introduce helpers for platform coherency exit/setup
  ARM: mcpm: introduce the CPU/cluster power API
  ARM: multi-cluster PM: secondary kernel entry code
  ARM: cacheflush: add synchronization helpers for mixed cache state accesses
  ARM: cpu hotplug: remove majority of cache flushing from platforms
  ARM: smp: flush L1 cache in cpu_die()
  ARM: tegra: remove tegra specific cpu_disable()
  ...
commit 8546dc1d4b
@ -0,0 +1,498 @@
Cluster-wide Power-up/power-down race avoidance algorithm
|
||||
=========================================================
|
||||
|
||||
This file documents the algorithm which is used to coordinate CPU and
|
||||
cluster setup and teardown operations and to manage hardware coherency
|
||||
controls safely.
|
||||
|
||||
The section "Rationale" explains what the algorithm is for and why it is
|
||||
needed. "Basic model" explains general concepts using a simplified view
|
||||
of the system. The other sections explain the actual details of the
|
||||
algorithm in use.
|
||||
|
||||
|
||||
Rationale
|
||||
---------
|
||||
|
||||
In a system containing multiple CPUs, it is desirable to have the
|
||||
ability to turn off individual CPUs when the system is idle, reducing
|
||||
power consumption and thermal dissipation.
|
||||
|
||||
In a system containing multiple clusters of CPUs, it is also desirable
|
||||
to have the ability to turn off entire clusters.
|
||||
|
||||
Turning entire clusters off and on is a risky business, because it
|
||||
involves performing potentially destructive operations affecting a group
|
||||
of independently running CPUs, while the OS continues to run. This
|
||||
means that we need some coordination in order to ensure that critical
|
||||
cluster-level operations are only performed when it is truly safe to do
|
||||
so.
|
||||
|
||||
Simple locking may not be sufficient to solve this problem, because
|
||||
mechanisms like Linux spinlocks may rely on coherency mechanisms which
|
||||
are not immediately enabled when a cluster powers up. Since enabling or
|
||||
disabling those mechanisms may itself be a non-atomic operation (such as
|
||||
writing some hardware registers and invalidating large caches), other
|
||||
methods of coordination are required in order to guarantee safe
|
||||
power-down and power-up at the cluster level.
|
||||
|
||||
The mechanism presented in this document describes a coherent memory
|
||||
based protocol for performing the needed coordination. It aims to be as
|
||||
lightweight as possible, while providing the required safety properties.
|
||||
|
||||
|
||||
Basic model
|
||||
-----------
|
||||
|
||||
Each cluster and CPU is assigned a state, as follows:
|
||||
|
||||
DOWN
|
||||
COMING_UP
|
||||
UP
|
||||
GOING_DOWN
|
||||
|
||||
        +---------> UP ----------+
        |                        v

    COMING_UP              GOING_DOWN

        ^                        |
        +--------- DOWN <--------+
|
||||
|
||||
|
||||
DOWN: The CPU or cluster is not coherent, and is either powered off or
|
||||
suspended, or is ready to be powered off or suspended.
|
||||
|
||||
COMING_UP: The CPU or cluster has committed to moving to the UP state.
|
||||
It may be part way through the process of initialisation and
|
||||
enabling coherency.
|
||||
|
||||
UP: The CPU or cluster is active and coherent at the hardware
|
||||
level. A CPU in this state is not necessarily being used
|
||||
actively by the kernel.
|
||||
|
||||
GOING_DOWN: The CPU or cluster has committed to moving to the DOWN
|
||||
state. It may be part way through the process of teardown and
|
||||
coherency exit.
|
||||
|
||||
|
||||
Each CPU has one of these states assigned to it at any point in time.
|
||||
The CPU states are described in the "CPU state" section, below.
|
||||
|
||||
Each cluster is also assigned a state, but it is necessary to split the
|
||||
state value into two parts (the "cluster" state and "inbound" state) and
|
||||
to introduce additional states in order to avoid races between different
|
||||
CPUs in the cluster simultaneously modifying the state. The cluster-
|
||||
level states are described in the "Cluster state" section.
|
||||
|
||||
To help distinguish the CPU states from cluster states in this
|
||||
discussion, the state names are given a CPU_ prefix for the CPU states,
|
||||
and a CLUSTER_ or INBOUND_ prefix for the cluster states.
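
Purely for illustration, the basic model can be written down as a small
state machine in C.  The enum and transition helper below are a sketch of
the model described above, not the kernel's actual data structures (the
real per-CPU and per-cluster state bytes are introduced later, in the
"Implementation" notes):

    /* Sketch only: names mirror the basic model states above. */
    enum basic_state { DOWN, COMING_UP, UP, GOING_DOWN };

    /* The only transitions the model allows, in cycle order. */
    static inline enum basic_state next_state(enum basic_state s)
    {
            switch (s) {
            case DOWN:       return COMING_UP;   /* power-up requested */
            case COMING_UP:  return UP;          /* setup complete */
            case UP:         return GOING_DOWN;  /* policy decision */
            case GOING_DOWN: return DOWN;        /* teardown complete */
            }
            return s;
    }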
|
||||
|
||||
|
||||
CPU state
|
||||
---------
|
||||
|
||||
In this algorithm, each individual core in a multi-core processor is
|
||||
referred to as a "CPU". CPUs are assumed to be single-threaded:
|
||||
therefore, a CPU can only be doing one thing at a single point in time.
|
||||
|
||||
This means that CPUs fit the basic model closely.
|
||||
|
||||
The algorithm defines the following states for each CPU in the system:
|
||||
|
||||
CPU_DOWN
|
||||
CPU_COMING_UP
|
||||
CPU_UP
|
||||
CPU_GOING_DOWN
|
||||
|
||||
                 cluster setup and
                CPU setup complete          policy decision
          +-----------> CPU_UP ------------+
          |                                v

    CPU_COMING_UP                  CPU_GOING_DOWN

          ^                                |
          +----------- CPU_DOWN <----------+
         policy decision        CPU teardown complete
      or hardware event
|
||||
|
||||
|
||||
The definitions of the four states correspond closely to the states of
|
||||
the basic model.
|
||||
|
||||
Transitions between states occur as follows.
|
||||
|
||||
A trigger event (spontaneous) means that the CPU can transition to the
|
||||
next state as a result of making local progress only, with no
|
||||
requirement for any external event to happen.
|
||||
|
||||
|
||||
CPU_DOWN:
|
||||
|
||||
A CPU reaches the CPU_DOWN state when it is ready for
|
||||
power-down. On reaching this state, the CPU will typically
|
||||
power itself down or suspend itself, via a WFI instruction or a
|
||||
firmware call.
|
||||
|
||||
Next state: CPU_COMING_UP
|
||||
Conditions: none
|
||||
|
||||
Trigger events:
|
||||
|
||||
a) an explicit hardware power-up operation, resulting
|
||||
from a policy decision on another CPU;
|
||||
|
||||
b) a hardware event, such as an interrupt.
|
||||
|
||||
|
||||
CPU_COMING_UP:
|
||||
|
||||
A CPU cannot start participating in hardware coherency until the
|
||||
cluster is set up and coherent. If the cluster is not ready,
|
||||
then the CPU will wait in the CPU_COMING_UP state until the
|
||||
cluster has been set up.
|
||||
|
||||
Next state: CPU_UP
|
||||
Conditions: The CPU's parent cluster must be in CLUSTER_UP.
|
||||
Trigger events: Transition of the parent cluster to CLUSTER_UP.
|
||||
|
||||
Refer to the "Cluster state" section for a description of the
|
||||
CLUSTER_UP state.
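
As a rough C-flavoured sketch of this wait (the real wait runs before the
MMU is enabled and is implemented in assembly in mcpm_head.S;
__mcpm_cluster_state() is the C accessor provided by mcpm_entry.c):

    while (__mcpm_cluster_state(cluster) != CLUSTER_UP)
            wfe();      /* woken by the first man's SEV once setup is done */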
|
||||
|
||||
|
||||
CPU_UP:
|
||||
When a CPU reaches the CPU_UP state, it is safe for the CPU to
|
||||
start participating in local coherency.
|
||||
|
||||
This is done by jumping to the kernel's CPU resume code.
|
||||
|
||||
Note that the definition of this state is slightly different
|
||||
from the basic model definition: CPU_UP does not mean that the
|
||||
CPU is coherent yet, but it does mean that it is safe to resume
|
||||
the kernel. The kernel handles the rest of the resume
|
||||
procedure, so the remaining steps are not visible as part of the
|
||||
race avoidance algorithm.
|
||||
|
||||
The CPU remains in this state until an explicit policy decision
|
||||
is made to shut down or suspend the CPU.
|
||||
|
||||
Next state: CPU_GOING_DOWN
|
||||
Conditions: none
|
||||
Trigger events: explicit policy decision
|
||||
|
||||
|
||||
CPU_GOING_DOWN:
|
||||
|
||||
While in this state, the CPU exits coherency, including any
|
||||
operations required to achieve this (such as cleaning data
|
||||
caches).
|
||||
|
||||
Next state: CPU_DOWN
|
||||
Conditions: local CPU teardown complete
|
||||
Trigger events: (spontaneous)
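
Glossing over the platform-specific details, a CPU's teardown path can be
sketched as follows; flush_own_dcache() and exit_coherency() are
illustrative stand-ins for whatever the platform actually uses (e.g.
cleaning L1 and clearing the SMP bit in ACTLR on Cortex-A15/A7):

    __mcpm_cpu_going_down(cpu, cluster);    /* advertise CPU_GOING_DOWN */
    flush_own_dcache();                     /* clean/invalidate local caches */
    exit_coherency();                       /* leave hardware coherency */
    __mcpm_cpu_down(cpu, cluster);          /* advertise CPU_DOWN */
    while (1)
            wfi();                          /* wait to be powered off or woken */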
|
||||
|
||||
|
||||
Cluster state
|
||||
-------------
|
||||
|
||||
A cluster is a group of connected CPUs with some common resources.
|
||||
Because a cluster contains multiple CPUs, it can be doing multiple
|
||||
things at the same time. This has some implications. In particular, a
|
||||
CPU can start up while another CPU is tearing the cluster down.
|
||||
|
||||
In this discussion, the "outbound side" is the view of the cluster state
|
||||
as seen by a CPU tearing the cluster down. The "inbound side" is the
|
||||
view of the cluster state as seen by a CPU setting the cluster up.
|
||||
|
||||
In order to enable safe coordination in such situations, it is important
|
||||
that a CPU which is setting up the cluster can advertise its state
|
||||
independently of the CPU which is tearing down the cluster. For this
|
||||
reason, the cluster state is split into two parts:
|
||||
|
||||
"cluster" state: The global state of the cluster; or the state
|
||||
on the outbound side:
|
||||
|
||||
CLUSTER_DOWN
|
||||
CLUSTER_UP
|
||||
CLUSTER_GOING_DOWN
|
||||
|
||||
"inbound" state: The state of the cluster on the inbound side.
|
||||
|
||||
INBOUND_NOT_COMING_UP
|
||||
INBOUND_COMING_UP
|
||||
|
||||
|
||||
The different pairings of these states result in six possible
|
||||
states for the cluster as a whole:
|
||||
|
||||
                             CLUSTER_UP
          +==========> INBOUND_NOT_COMING_UP -------------+
          #                                               |
                                                          |
     CLUSTER_UP     <----+                                |
  INBOUND_COMING_UP      |                                v

          ^              CLUSTER_GOING_DOWN       CLUSTER_GOING_DOWN
          #               INBOUND_COMING_UP <===  INBOUND_NOT_COMING_UP

    CLUSTER_DOWN         |                                |
  INBOUND_COMING_UP <----+                                |
                                                          |
          ^                                               |
          +===========  CLUSTER_DOWN        <------------+
                    INBOUND_NOT_COMING_UP
|
||||
|
||||
Transitions -----> can only be made by the outbound CPU, and
|
||||
only involve changes to the "cluster" state.
|
||||
|
||||
Transitions ===##> can only be made by the inbound CPU, and only
|
||||
involve changes to the "inbound" state, except where there is no
|
||||
further transition possible on the outbound side (i.e., the
|
||||
outbound CPU has put the cluster into the CLUSTER_DOWN state).
|
||||
|
||||
The race avoidance algorithm does not provide a way to determine
|
||||
which exact CPUs within the cluster play these roles. This must
|
||||
be decided in advance by some other means. Refer to the section
|
||||
"Last man and first man selection" for more explanation.
|
||||
|
||||
|
||||
CLUSTER_DOWN/INBOUND_NOT_COMING_UP is the only state where the
|
||||
cluster can actually be powered down.
|
||||
|
||||
The parallelism of the inbound and outbound CPUs is observed by
|
||||
the existence of two different paths from CLUSTER_GOING_DOWN/
|
||||
INBOUND_NOT_COMING_UP (corresponding to GOING_DOWN in the basic
|
||||
model) to CLUSTER_DOWN/INBOUND_COMING_UP (corresponding to
|
||||
COMING_UP in the basic model). The second path avoids cluster
|
||||
teardown completely.
|
||||
|
||||
CLUSTER_UP/INBOUND_COMING_UP is equivalent to UP in the basic
|
||||
model. The final transition to CLUSTER_UP/INBOUND_NOT_COMING_UP
|
||||
is trivial and merely resets the state machine ready for the
|
||||
next cycle.
|
||||
|
||||
Details of the allowable transitions follow.
|
||||
|
||||
The next state in each case is notated
|
||||
|
||||
<cluster state>/<inbound state> (<transitioner>)
|
||||
|
||||
where the <transitioner> is the side on which the transition
|
||||
can occur; either the inbound or the outbound side.
|
||||
|
||||
|
||||
CLUSTER_DOWN/INBOUND_NOT_COMING_UP:
|
||||
|
||||
Next state: CLUSTER_DOWN/INBOUND_COMING_UP (inbound)
|
||||
Conditions: none
|
||||
Trigger events:
|
||||
|
||||
a) an explicit hardware power-up operation, resulting
|
||||
from a policy decision on another CPU;
|
||||
|
||||
b) a hardware event, such as an interrupt.
|
||||
|
||||
|
||||
CLUSTER_DOWN/INBOUND_COMING_UP:
|
||||
|
||||
In this state, an inbound CPU sets up the cluster, including
|
||||
enabling of hardware coherency at the cluster level and any
|
||||
other operations (such as cache invalidation) which are required
|
||||
in order to achieve this.
|
||||
|
||||
The purpose of this state is to do sufficient cluster-level
|
||||
setup to enable other CPUs in the cluster to enter coherency
|
||||
safely.
|
||||
|
||||
Next state: CLUSTER_UP/INBOUND_COMING_UP (inbound)
|
||||
Conditions: cluster-level setup and hardware coherency complete
|
||||
Trigger events: (spontaneous)
|
||||
|
||||
|
||||
CLUSTER_UP/INBOUND_COMING_UP:
|
||||
|
||||
Cluster-level setup is complete and hardware coherency is
|
||||
enabled for the cluster. Other CPUs in the cluster can safely
|
||||
enter coherency.
|
||||
|
||||
This is a transient state, leading immediately to
|
||||
CLUSTER_UP/INBOUND_NOT_COMING_UP. All other CPUs on the cluster
|
||||
should treat these two states as equivalent.
|
||||
|
||||
Next state: CLUSTER_UP/INBOUND_NOT_COMING_UP (inbound)
|
||||
Conditions: none
|
||||
Trigger events: (spontaneous)
|
||||
|
||||
|
||||
CLUSTER_UP/INBOUND_NOT_COMING_UP:
|
||||
|
||||
Cluster-level setup is complete and hardware coherency is
|
||||
enabled for the cluster. Other CPUs in the cluster can safely
|
||||
enter coherency.
|
||||
|
||||
The cluster will remain in this state until a policy decision is
|
||||
made to power the cluster down.
|
||||
|
||||
Next state: CLUSTER_GOING_DOWN/INBOUND_NOT_COMING_UP (outbound)
|
||||
Conditions: none
|
||||
Trigger events: policy decision to power down the cluster
|
||||
|
||||
|
||||
CLUSTER_GOING_DOWN/INBOUND_NOT_COMING_UP:
|
||||
|
||||
An outbound CPU is tearing the cluster down. The selected CPU
|
||||
must wait in this state until all CPUs in the cluster are in the
|
||||
CPU_DOWN state.
|
||||
|
||||
When all CPUs are in the CPU_DOWN state, the cluster can be torn
|
||||
down, for example by cleaning data caches and exiting
|
||||
cluster-level coherency.
|
||||
|
||||
To avoid unnecessary teardown operations, the outbound CPU
|
||||
should check the inbound cluster state for asynchronous
|
||||
transitions to INBOUND_COMING_UP. Alternatively, individual
|
||||
CPUs can be checked for entry into CPU_COMING_UP or CPU_UP.
|
||||
|
||||
|
||||
Next states:
|
||||
|
||||
CLUSTER_DOWN/INBOUND_NOT_COMING_UP (outbound)
|
||||
Conditions: cluster torn down and ready to power off
|
||||
Trigger events: (spontaneous)
|
||||
|
||||
CLUSTER_GOING_DOWN/INBOUND_COMING_UP (inbound)
|
||||
Conditions: none
|
||||
Trigger events:
|
||||
|
||||
a) an explicit hardware power-up operation,
|
||||
resulting from a policy decision on another
|
||||
CPU;
|
||||
|
||||
b) a hardware event, such as an interrupt.
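
A sketch of the outbound CPU's wait described above, modelled on
__mcpm_outbound_enter_critical() in mcpm_entry.c; cpu_state() is an
illustrative accessor for the per-CPU state bytes:

    for (i = 0; i < MAX_CPUS_PER_CLUSTER; i++) {
            if (i == self)
                    continue;
            while (cpu_state(cluster, i) == CPU_GOING_DOWN)
                    wfe();                  /* re-read after each SEV */
            if (cpu_state(cluster, i) != CPU_DOWN)
                    goto abort_teardown;    /* a CPU is on its way back up */
    }
    /* all other CPUs are down: safe to tear the cluster down */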
|
||||
|
||||
|
||||
CLUSTER_GOING_DOWN/INBOUND_COMING_UP:
|
||||
|
||||
The cluster is (or was) being torn down, but another CPU has
|
||||
come online in the meantime and is trying to set up the cluster
|
||||
again.
|
||||
|
||||
If the outbound CPU observes this state, it has two choices:
|
||||
|
||||
a) back out of teardown, restoring the cluster to the
|
||||
CLUSTER_UP state;
|
||||
|
||||
b) finish tearing the cluster down and put the cluster
|
||||
in the CLUSTER_DOWN state; the inbound CPU will
|
||||
set up the cluster again from there.
|
||||
|
||||
Choice (a) reduces latency by avoiding
|
||||
unnecessary teardown and setup operations in situations where
|
||||
the cluster is not really going to be powered down.
|
||||
|
||||
|
||||
Next states:
|
||||
|
||||
CLUSTER_UP/INBOUND_COMING_UP (outbound)
|
||||
Conditions: cluster-level setup and hardware
|
||||
coherency complete
|
||||
Trigger events: (spontaneous)
|
||||
|
||||
CLUSTER_DOWN/INBOUND_COMING_UP (outbound)
|
||||
Conditions: cluster torn down and ready to power off
|
||||
Trigger events: (spontaneous)
|
||||
|
||||
|
||||
Last man and first man selection
|
||||
--------------------------------
|
||||
|
||||
The CPU which performs cluster tear-down operations on the outbound side
|
||||
is commonly referred to as the "last man".
|
||||
|
||||
The CPU which performs cluster setup on the inbound side is commonly
|
||||
referred to as the "first man".
|
||||
|
||||
The race avoidance algorithm documented above does not provide a
|
||||
mechanism to choose which CPUs should play these roles.
|
||||
|
||||
|
||||
Last man:
|
||||
|
||||
When shutting down the cluster, all the CPUs involved are initially
|
||||
executing Linux and hence coherent. Therefore, ordinary spinlocks can
|
||||
be used to select a last man safely, before the CPUs become
|
||||
non-coherent.
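
A minimal sketch of such a selection, assuming a per-cluster count of
running CPUs protected by an ordinary spinlock (the names here are
illustrative and not part of the MCPM API):

    static DEFINE_SPINLOCK(cluster_pm_lock);
    static int cpus_still_up = NR_CLUSTER_CPUS;

    static bool i_am_last_man(void)
    {
            bool last;

            spin_lock(&cluster_pm_lock);
            last = (--cpus_still_up == 0);
            spin_unlock(&cluster_pm_lock);
            return last;
    }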
|
||||
|
||||
|
||||
First man:
|
||||
|
||||
Because CPUs may power up asynchronously in response to external wake-up
|
||||
events, a dynamic mechanism is needed to make sure that only one CPU
|
||||
attempts to play the first man role and do the cluster-level
|
||||
initialisation: any other CPUs must wait for this to complete before
|
||||
proceeding.
|
||||
|
||||
Cluster-level initialisation may involve actions such as configuring
|
||||
coherency controls in the bus fabric.
|
||||
|
||||
The current implementation in mcpm_head.S uses a separate mutual exclusion
|
||||
mechanism to do this arbitration. This mechanism is documented in
|
||||
detail in vlocks.txt.
|
||||
|
||||
|
||||
Features and Limitations
|
||||
------------------------
|
||||
|
||||
Implementation:
|
||||
|
||||
The current ARM-based implementation is split between
|
||||
arch/arm/common/mcpm_head.S (low-level inbound CPU operations) and
|
||||
arch/arm/common/mcpm_entry.c (everything else):
|
||||
|
||||
__mcpm_cpu_going_down() signals the transition of a CPU to the
|
||||
CPU_GOING_DOWN state.
|
||||
|
||||
__mcpm_cpu_down() signals the transition of a CPU to the CPU_DOWN
|
||||
state.
|
||||
|
||||
A CPU transitions to CPU_COMING_UP and then to CPU_UP via the
|
||||
low-level power-up code in mcpm_head.S. This could
|
||||
involve CPU-specific setup code, but in the current
|
||||
implementation it does not.
|
||||
|
||||
__mcpm_outbound_enter_critical() and __mcpm_outbound_leave_critical()
|
||||
handle transitions from CLUSTER_UP to CLUSTER_GOING_DOWN
|
||||
and from there to CLUSTER_DOWN or back to CLUSTER_UP (in
|
||||
the case of an aborted cluster power-down).
|
||||
|
||||
These functions are more complex than the __mcpm_cpu_*()
|
||||
functions due to the extra inter-CPU coordination which
|
||||
is needed for safe transitions at the cluster level.
|
||||
|
||||
A cluster transitions from CLUSTER_DOWN back to CLUSTER_UP via
|
||||
the low-level power-up code in mcpm_head.S. This
|
||||
typically involves platform-specific setup code,
|
||||
provided by the platform-specific power_up_setup
|
||||
function registered via mcpm_sync_init.
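
Tying these pieces together, a platform port would typically register its
low-level methods and the first-man setup hook roughly as follows;
my_pm_ops, my_power_up_setup and the my_*() methods are placeholders for
platform code, not functions provided by the MCPM core:

    static const struct mcpm_platform_ops my_pm_ops = {
            .power_up       = my_power_up,
            .power_down     = my_power_down,
            .suspend        = my_suspend,
            .powered_up     = my_powered_up,
    };

    static int __init my_mcpm_init(void)
    {
            int ret = mcpm_platform_register(&my_pm_ops);

            if (!ret)
                    ret = mcpm_sync_init(my_power_up_setup);
            if (!ret)
                    mcpm_smp_set_ops();     /* switch to the MCPM SMP ops */
            return ret;
    }
    early_initcall(my_mcpm_init);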
|
||||
|
||||
Deep topologies:
|
||||
|
||||
As currently described and implemented, the algorithm does not
|
||||
support CPU topologies involving more than two levels (i.e.,
|
||||
clusters of clusters are not supported). The algorithm could be
|
||||
extended by replicating the cluster-level states for the
|
||||
additional topological levels, and modifying the transition
|
||||
rules for the intermediate (non-outermost) cluster levels.
|
||||
|
||||
|
||||
Colophon
|
||||
--------
|
||||
|
||||
Originally created and documented by Dave Martin for Linaro Limited, in
|
||||
collaboration with Nicolas Pitre and Achin Gupta.
|
||||
|
||||
Copyright (C) 2012-2013 Linaro Limited
|
||||
Distributed under the terms of Version 2 of the GNU General Public
|
||||
License, as defined in linux/COPYING.
|
@ -0,0 +1,211 @@
vlocks for Bare-Metal Mutual Exclusion
|
||||
======================================
|
||||
|
||||
Voting Locks, or "vlocks" provide a simple low-level mutual exclusion
|
||||
mechanism, with reasonable but minimal requirements on the memory
|
||||
system.
|
||||
|
||||
These are intended to be used to coordinate critical activity among CPUs
|
||||
which are otherwise non-coherent, in situations where the hardware
|
||||
provides no other mechanism to support this and ordinary spinlocks
|
||||
cannot be used.
|
||||
|
||||
|
||||
vlocks make use of the atomicity provided by the memory system for
|
||||
writes to a single memory location. To arbitrate, every CPU "votes for
|
||||
itself", by storing a unique number to a common memory location. The
|
||||
final value seen in that memory location when all the votes have been
|
||||
cast identifies the winner.
|
||||
|
||||
In order to make sure that the election produces an unambiguous result
|
||||
in finite time, a CPU will only enter the election in the first place if
|
||||
no winner has been chosen and the election does not appear to have
|
||||
started yet.
|
||||
|
||||
|
||||
Algorithm
|
||||
---------
|
||||
|
||||
The easiest way to explain the vlocks algorithm is with some pseudo-code:
|
||||
|
||||
|
||||
	int currently_voting[NR_CPUS] = { 0, };
	int last_vote = -1; /* no votes yet */

	bool vlock_trylock(int this_cpu)
	{
		/* signal our desire to vote */
		currently_voting[this_cpu] = 1;
		if (last_vote != -1) {
			/* someone already volunteered himself */
			currently_voting[this_cpu] = 0;
			return false; /* not ourself */
		}

		/* let's suggest ourself */
		last_vote = this_cpu;
		currently_voting[this_cpu] = 0;

		/* then wait until everyone else is done voting */
		for_each_cpu(i) {
			while (currently_voting[i] != 0)
				/* wait */;
		}

		/* result */
		if (last_vote == this_cpu)
			return true; /* we won */
		return false;
	}

	void vlock_unlock(void)
	{
		last_vote = -1;
	}
|
||||
|
||||
|
||||
The currently_voting[] array provides a way for the CPUs to determine
|
||||
whether an election is in progress, and plays a role analogous to the
|
||||
"entering" array in Lamport's bakery algorithm [1].
|
||||
|
||||
However, once the election has started, the underlying memory system
|
||||
atomicity is used to pick the winner. This avoids the need for a static
|
||||
priority rule to act as a tie-breaker, or any counters which could
|
||||
overflow.
|
||||
|
||||
As long as the last_vote variable is globally visible to all CPUs, it
|
||||
will contain only one value that won't change once every CPU has cleared
|
||||
its currently_voting flag.
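
In practice the lock is only used to pick a single winner for a one-off
job; an illustrative usage pattern (do_one_time_setup() and
wait_for_setup_complete() are placeholders) is:

	if (vlock_trylock(this_cpu)) {
		do_one_time_setup();		/* only the winner gets here */
		vlock_unlock();
	} else {
		wait_for_setup_complete();	/* losers wait for the winner */
	}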
|
||||
|
||||
|
||||
Features and limitations
|
||||
------------------------
|
||||
|
||||
* vlocks are not intended to be fair. In the contended case, it is the
|
||||
_last_ CPU to attempt to get the lock that is most likely
|
||||
to win.
|
||||
|
||||
vlocks are therefore best suited to situations where it is necessary
|
||||
to pick a unique winner, but it does not matter which CPU actually
|
||||
wins.
|
||||
|
||||
* Like other similar mechanisms, vlocks will not scale well to a large
|
||||
number of CPUs.
|
||||
|
||||
vlocks can be cascaded in a voting hierarchy to permit better scaling
|
||||
if necessary, as in the following hypothetical example for 4096 CPUs:
|
||||
|
||||
	/* first level: local election */
	my_town = towns[(this_cpu >> 4) & 0xf];
	I_won = vlock_trylock(my_town, this_cpu & 0xf);
	if (I_won) {
		/* we won the town election, let's go for the state */
		my_state = states[(this_cpu >> 8) & 0xf];
		I_won = vlock_trylock(my_state, this_cpu & 0xf);
		if (I_won) {
			/* and so on */
			I_won = vlock_trylock(the_whole_country, this_cpu & 0xf);
			if (I_won) {
				/* ... */
			}
			vlock_unlock(the_whole_country);
		}
		vlock_unlock(my_state);
	}
	vlock_unlock(my_town);
|
||||
|
||||
|
||||
ARM implementation
|
||||
------------------
|
||||
|
||||
The current ARM implementation [2] contains some optimisations beyond
|
||||
the basic algorithm:
|
||||
|
||||
* By packing the members of the currently_voting array close together,
|
||||
we can read the whole array in one transaction (providing the number
|
||||
of CPUs potentially contending the lock is small enough). This
|
||||
reduces the number of round-trips required to external memory.
|
||||
|
||||
In the ARM implementation, this means that we can use a single load
|
||||
and comparison:
|
||||
|
||||
LDR Rt, [Rn]
|
||||
CMP Rt, #0
|
||||
|
||||
...in place of code equivalent to:
|
||||
|
||||
LDRB Rt, [Rn]
|
||||
CMP Rt, #0
|
||||
LDRBEQ Rt, [Rn, #1]
|
||||
CMPEQ Rt, #0
|
||||
LDRBEQ Rt, [Rn, #2]
|
||||
CMPEQ Rt, #0
|
||||
LDRBEQ Rt, [Rn, #3]
|
||||
CMPEQ Rt, #0
|
||||
|
||||
This cuts down on the fast-path latency, as well as potentially
|
||||
reducing bus contention in contended cases.
|
||||
|
||||
The optimisation relies on the fact that the ARM memory system
|
||||
guarantees coherency between overlapping memory accesses of
|
||||
different sizes, similarly to many other architectures. Note that
|
||||
we do not care which element of currently_voting appears in which
|
||||
bits of Rt, so there is no need to worry about endianness in this
|
||||
optimisation.
|
||||
|
||||
If there are too many CPUs to read the currently_voting array in
|
||||
one transaction, then multiple transactions are still required. The
|
||||
implementation uses a simple loop of word-sized loads for this
|
||||
case. The number of transactions is still fewer than would be
|
||||
required if bytes were loaded individually.
|
||||
|
||||
|
||||
In principle, we could aggregate further by using LDRD or LDM, but
|
||||
to keep the code simple this was not attempted in the initial
|
||||
implementation.
|
||||
|
||||
|
||||
* vlocks are currently only used to coordinate between CPUs which are
|
||||
unable to enable their caches yet. This means that the
|
||||
implementation removes many of the barriers which would be required
|
||||
when executing the algorithm in cached memory.
|
||||
|
||||
Packing of the currently_voting array does not work with cached
|
||||
memory unless all CPUs contending the lock are cache-coherent, due
|
||||
to cache writebacks from one CPU clobbering values written by other
|
||||
CPUs. (Though if all the CPUs are cache-coherent, you should
|
||||
probably be using proper spinlocks instead anyway).
|
||||
|
||||
|
||||
* The "no votes yet" value used for the last_vote variable is 0 (not
|
||||
-1 as in the pseudocode). This allows statically-allocated vlocks
|
||||
to be implicitly initialised to an unlocked state simply by putting
|
||||
them in .bss.
|
||||
|
||||
An offset is added to each CPU's ID for the purpose of setting this
|
||||
variable, so that no CPU uses the value 0 for its ID.
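
Concretely, the ARM implementation reuses VLOCK_VOTING_OFFSET (4) as that
offset, so CPU 0 votes by storing 4, CPU 1 by storing 5, and so on; a
stored 0 can therefore never be mistaken for a real vote.  In C terms:

	owner_byte = cpu_id + VLOCK_VOTING_OFFSET;	/* never 0 */
	/* "unlocked" is simply owner_byte == VLOCK_OWNER_NONE (0) */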
|
||||
|
||||
|
||||
Colophon
|
||||
--------
|
||||
|
||||
Originally created and documented by Dave Martin for Linaro Limited, for
|
||||
use in ARM-based big.LITTLE platforms, with review and input gratefully
|
||||
received from Nicolas Pitre and Achin Gupta. Thanks to Nicolas for
|
||||
grabbing most of this text out of the relevant mail thread and writing
|
||||
up the pseudocode.
|
||||
|
||||
Copyright (C) 2012-2013 Linaro Limited
|
||||
Distributed under the terms of Version 2 of the GNU General Public
|
||||
License, as defined in linux/COPYING.
|
||||
|
||||
|
||||
References
|
||||
----------
|
||||
|
||||
[1] Lamport, L. "A New Solution of Dijkstra's Concurrent Programming
|
||||
Problem", Communications of the ACM 17, 8 (August 1974), 453-455.
|
||||
|
||||
http://en.wikipedia.org/wiki/Lamport%27s_bakery_algorithm
|
||||
|
||||
[2] linux/arch/arm/common/vlock.S, www.kernel.org.
|
|
@ -59,6 +59,7 @@ config ARM
|
|||
select CLONE_BACKWARDS
|
||||
select OLD_SIGSUSPEND3
|
||||
select OLD_SIGACTION
|
||||
select HAVE_CONTEXT_TRACKING
|
||||
help
|
||||
The ARM series is a line of low-power-consumption RISC chip designs
|
||||
licensed by ARM Ltd and targeted at embedded applications and
|
||||
|
@ -1479,6 +1480,14 @@ config HAVE_ARM_TWD
|
|||
help
|
||||
This options enables support for the ARM timer and watchdog unit
|
||||
|
||||
config MCPM
|
||||
bool "Multi-Cluster Power Management"
|
||||
depends on CPU_V7 && SMP
|
||||
help
|
||||
This option provides the common power management infrastructure
|
||||
for (multi-)cluster based systems, such as big.LITTLE based
|
||||
systems.
|
||||
|
||||
choice
|
||||
prompt "Memory split"
|
||||
default VMSPLIT_3G
|
||||
|
@ -1565,8 +1574,9 @@ config SCHED_HRTICK
|
|||
def_bool HIGH_RES_TIMERS
|
||||
|
||||
config THUMB2_KERNEL
|
||||
bool "Compile the kernel in Thumb-2 mode"
|
||||
bool "Compile the kernel in Thumb-2 mode" if !CPU_THUMBONLY
|
||||
depends on CPU_V7 && !CPU_V6 && !CPU_V6K
|
||||
default y if CPU_THUMBONLY
|
||||
select AEABI
|
||||
select ARM_ASM_UNIFIED
|
||||
select ARM_UNWIND
|
||||
|
|
|
@ -641,6 +641,17 @@ config DEBUG_LL_INCLUDE
|
|||
default "debug/zynq.S" if DEBUG_ZYNQ_UART0 || DEBUG_ZYNQ_UART1
|
||||
default "mach/debug-macro.S"
|
||||
|
||||
config DEBUG_UNCOMPRESS
|
||||
bool
|
||||
default y if ARCH_MULTIPLATFORM && DEBUG_LL && \
|
||||
!DEBUG_OMAP2PLUS_UART && \
|
||||
!DEBUG_TEGRA_UART
|
||||
|
||||
config UNCOMPRESS_INCLUDE
|
||||
string
|
||||
default "debug/uncompress.h" if ARCH_MULTIPLATFORM
|
||||
default "mach/uncompress.h"
|
||||
|
||||
config EARLY_PRINTK
|
||||
bool "Early printk"
|
||||
depends on DEBUG_LL
|
||||
|
|
|
@ -24,6 +24,9 @@ endif
|
|||
AFLAGS_head.o += -DTEXT_OFFSET=$(TEXT_OFFSET)
|
||||
HEAD = head.o
|
||||
OBJS += misc.o decompress.o
|
||||
ifeq ($(CONFIG_DEBUG_UNCOMPRESS),y)
|
||||
OBJS += debug.o
|
||||
endif
|
||||
FONTC = $(srctree)/drivers/video/console/font_acorn_8x8.c
|
||||
|
||||
# string library code (-Os is enforced to keep it much smaller)
|
||||
|
|
|
@ -0,0 +1,12 @@
|
|||
#include <linux/linkage.h>
|
||||
#include <asm/assembler.h>
|
||||
|
||||
#include CONFIG_DEBUG_LL_INCLUDE
|
||||
|
||||
ENTRY(putc)
|
||||
addruart r1, r2, r3
|
||||
waituart r3, r1
|
||||
senduart r0, r1
|
||||
busyuart r3, r1
|
||||
mov pc, lr
|
||||
ENDPROC(putc)
|
|
@ -25,13 +25,7 @@ unsigned int __machine_arch_type;
|
|||
static void putstr(const char *ptr);
|
||||
extern void error(char *x);
|
||||
|
||||
#ifdef CONFIG_ARCH_MULTIPLATFORM
|
||||
static inline void putc(int c) {}
|
||||
static inline void flush(void) {}
|
||||
static inline void arch_decomp_setup(void) {}
|
||||
#else
|
||||
#include <mach/uncompress.h>
|
||||
#endif
|
||||
#include CONFIG_UNCOMPRESS_INCLUDE
|
||||
|
||||
#ifdef CONFIG_DEBUG_ICEDCC
|
||||
|
||||
|
|
|
@ -11,3 +11,6 @@ obj-$(CONFIG_SHARP_PARAM) += sharpsl_param.o
|
|||
obj-$(CONFIG_SHARP_SCOOP) += scoop.o
|
||||
obj-$(CONFIG_PCI_HOST_ITE8152) += it8152.o
|
||||
obj-$(CONFIG_ARM_TIMER_SP804) += timer-sp.o
|
||||
obj-$(CONFIG_MCPM) += mcpm_head.o mcpm_entry.o mcpm_platsmp.o vlock.o
|
||||
AFLAGS_mcpm_head.o := -march=armv7-a
|
||||
AFLAGS_vlock.o := -march=armv7-a
|
||||
|
|
|
@ -0,0 +1,263 @@
|
|||
/*
|
||||
* arch/arm/common/mcpm_entry.c -- entry point for multi-cluster PM
|
||||
*
|
||||
* Created by: Nicolas Pitre, March 2012
|
||||
* Copyright: (C) 2012-2013 Linaro Limited
|
||||
*
|
||||
* This program is free software; you can redistribute it and/or modify
|
||||
* it under the terms of the GNU General Public License version 2 as
|
||||
* published by the Free Software Foundation.
|
||||
*/
|
||||
|
||||
#include <linux/kernel.h>
|
||||
#include <linux/init.h>
|
||||
#include <linux/irqflags.h>
|
||||
|
||||
#include <asm/mcpm.h>
|
||||
#include <asm/cacheflush.h>
|
||||
#include <asm/idmap.h>
|
||||
#include <asm/cputype.h>
|
||||
|
||||
extern unsigned long mcpm_entry_vectors[MAX_NR_CLUSTERS][MAX_CPUS_PER_CLUSTER];
|
||||
|
||||
void mcpm_set_entry_vector(unsigned cpu, unsigned cluster, void *ptr)
|
||||
{
|
||||
unsigned long val = ptr ? virt_to_phys(ptr) : 0;
|
||||
mcpm_entry_vectors[cluster][cpu] = val;
|
||||
sync_cache_w(&mcpm_entry_vectors[cluster][cpu]);
|
||||
}
|
||||
|
||||
static const struct mcpm_platform_ops *platform_ops;
|
||||
|
||||
int __init mcpm_platform_register(const struct mcpm_platform_ops *ops)
|
||||
{
|
||||
if (platform_ops)
|
||||
return -EBUSY;
|
||||
platform_ops = ops;
|
||||
return 0;
|
||||
}
|
||||
|
||||
int mcpm_cpu_power_up(unsigned int cpu, unsigned int cluster)
|
||||
{
|
||||
if (!platform_ops)
|
||||
return -EUNATCH; /* try not to shadow power_up errors */
|
||||
might_sleep();
|
||||
return platform_ops->power_up(cpu, cluster);
|
||||
}
|
||||
|
||||
typedef void (*phys_reset_t)(unsigned long);
|
||||
|
||||
void mcpm_cpu_power_down(void)
|
||||
{
|
||||
phys_reset_t phys_reset;
|
||||
|
||||
BUG_ON(!platform_ops);
|
||||
BUG_ON(!irqs_disabled());
|
||||
|
||||
/*
|
||||
* Do this before calling into the power_down method,
|
||||
* as it might not always be safe to do afterwards.
|
||||
*/
|
||||
setup_mm_for_reboot();
|
||||
|
||||
platform_ops->power_down();
|
||||
|
||||
/*
|
||||
* It is possible for a power_up request to happen concurrently
|
||||
* with a power_down request for the same CPU. In this case the
|
||||
* power_down method might not be able to actually enter a
|
||||
* powered down state with the WFI instruction if the power_up
|
||||
* method has removed the required reset condition. The
|
||||
* power_down method is then allowed to return. We must perform
|
||||
* a re-entry in the kernel as if the power_up method just had
|
||||
* deasserted reset on the CPU.
|
||||
*
|
||||
* To simplify race issues, the platform specific implementation
|
||||
* must accommodate for the possibility of unordered calls to
|
||||
* power_down and power_up with a usage count. Therefore, if a
|
||||
* call to power_up is issued for a CPU that is not down, then
|
||||
* the next call to power_down must not attempt a full shutdown
|
||||
* but only do the minimum (normally disabling L1 cache and CPU
|
||||
* coherency) and return just as if a concurrent power_up request
|
||||
* had happened as described above.
|
||||
*/
|
||||
|
||||
phys_reset = (phys_reset_t)(unsigned long)virt_to_phys(cpu_reset);
|
||||
phys_reset(virt_to_phys(mcpm_entry_point));
|
||||
|
||||
/* should never get here */
|
||||
BUG();
|
||||
}
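
/*
 * Illustrative only (not part of the MCPM core): a platform can honour the
 * unordered power_up/power_down requirement described above with a usage
 * count per CPU, e.g.
 *
 *	power_up:    if (use_count[cluster][cpu]++ == 0) deassert_reset();
 *	power_down:  if (--use_count[cluster][cpu] == 0) full_shutdown();
 *	             else partial_shutdown();   (only L1/coherency exit)
 *
 * with both paths serialised by a platform-level lock.
 */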
|
||||
|
||||
void mcpm_cpu_suspend(u64 expected_residency)
|
||||
{
|
||||
phys_reset_t phys_reset;
|
||||
|
||||
BUG_ON(!platform_ops);
|
||||
BUG_ON(!irqs_disabled());
|
||||
|
||||
/* Very similar to mcpm_cpu_power_down() */
|
||||
setup_mm_for_reboot();
|
||||
platform_ops->suspend(expected_residency);
|
||||
phys_reset = (phys_reset_t)(unsigned long)virt_to_phys(cpu_reset);
|
||||
phys_reset(virt_to_phys(mcpm_entry_point));
|
||||
BUG();
|
||||
}
|
||||
|
||||
int mcpm_cpu_powered_up(void)
|
||||
{
|
||||
if (!platform_ops)
|
||||
return -EUNATCH;
|
||||
if (platform_ops->powered_up)
|
||||
platform_ops->powered_up();
|
||||
return 0;
|
||||
}
|
||||
|
||||
struct sync_struct mcpm_sync;
|
||||
|
||||
/*
|
||||
* __mcpm_cpu_going_down: Indicates that the cpu is being torn down.
|
||||
* This must be called at the point of committing to teardown of a CPU.
|
||||
* The CPU cache (SCTRL.C bit) is expected to still be active.
|
||||
*/
|
||||
void __mcpm_cpu_going_down(unsigned int cpu, unsigned int cluster)
|
||||
{
|
||||
mcpm_sync.clusters[cluster].cpus[cpu].cpu = CPU_GOING_DOWN;
|
||||
sync_cache_w(&mcpm_sync.clusters[cluster].cpus[cpu].cpu);
|
||||
}
|
||||
|
||||
/*
|
||||
* __mcpm_cpu_down: Indicates that cpu teardown is complete and that the
|
||||
* cluster can be torn down without disrupting this CPU.
|
||||
* To avoid deadlocks, this must be called before a CPU is powered down.
|
||||
* The CPU cache (SCTRL.C bit) is expected to be off.
|
||||
* However L2 cache might or might not be active.
|
||||
*/
|
||||
void __mcpm_cpu_down(unsigned int cpu, unsigned int cluster)
|
||||
{
|
||||
dmb();
|
||||
mcpm_sync.clusters[cluster].cpus[cpu].cpu = CPU_DOWN;
|
||||
sync_cache_w(&mcpm_sync.clusters[cluster].cpus[cpu].cpu);
|
||||
dsb_sev();
|
||||
}
|
||||
|
||||
/*
|
||||
* __mcpm_outbound_leave_critical: Leave the cluster teardown critical section.
|
||||
* @state: the final state of the cluster:
|
||||
* CLUSTER_UP: no destructive teardown was done and the cluster has been
|
||||
* restored to the previous state (CPU cache still active); or
|
||||
* CLUSTER_DOWN: the cluster has been torn-down, ready for power-off
|
||||
* (CPU cache disabled, L2 cache either enabled or disabled).
|
||||
*/
|
||||
void __mcpm_outbound_leave_critical(unsigned int cluster, int state)
|
||||
{
|
||||
dmb();
|
||||
mcpm_sync.clusters[cluster].cluster = state;
|
||||
sync_cache_w(&mcpm_sync.clusters[cluster].cluster);
|
||||
dsb_sev();
|
||||
}
|
||||
|
||||
/*
|
||||
* __mcpm_outbound_enter_critical: Enter the cluster teardown critical section.
|
||||
* This function should be called by the last man, after local CPU teardown
|
||||
* is complete. CPU cache expected to be active.
|
||||
*
|
||||
* Returns:
|
||||
* false: the critical section was not entered because an inbound CPU was
|
||||
* observed, or the cluster is already being set up;
|
||||
* true: the critical section was entered: it is now safe to tear down the
|
||||
* cluster.
|
||||
*/
|
||||
bool __mcpm_outbound_enter_critical(unsigned int cpu, unsigned int cluster)
|
||||
{
|
||||
unsigned int i;
|
||||
struct mcpm_sync_struct *c = &mcpm_sync.clusters[cluster];
|
||||
|
||||
/* Warn inbound CPUs that the cluster is being torn down: */
|
||||
c->cluster = CLUSTER_GOING_DOWN;
|
||||
sync_cache_w(&c->cluster);
|
||||
|
||||
/* Back out if the inbound cluster is already in the critical region: */
|
||||
sync_cache_r(&c->inbound);
|
||||
if (c->inbound == INBOUND_COMING_UP)
|
||||
goto abort;
|
||||
|
||||
/*
|
||||
* Wait for all CPUs to get out of the GOING_DOWN state, so that local
|
||||
* teardown is complete on each CPU before tearing down the cluster.
|
||||
*
|
||||
* If any CPU has been woken up again from the DOWN state, then we
|
||||
* shouldn't be taking the cluster down at all: abort in that case.
|
||||
*/
|
||||
sync_cache_r(&c->cpus);
|
||||
for (i = 0; i < MAX_CPUS_PER_CLUSTER; i++) {
|
||||
int cpustate;
|
||||
|
||||
if (i == cpu)
|
||||
continue;
|
||||
|
||||
while (1) {
|
||||
cpustate = c->cpus[i].cpu;
|
||||
if (cpustate != CPU_GOING_DOWN)
|
||||
break;
|
||||
|
||||
wfe();
|
||||
sync_cache_r(&c->cpus[i].cpu);
|
||||
}
|
||||
|
||||
switch (cpustate) {
|
||||
case CPU_DOWN:
|
||||
continue;
|
||||
|
||||
default:
|
||||
goto abort;
|
||||
}
|
||||
}
|
||||
|
||||
return true;
|
||||
|
||||
abort:
|
||||
__mcpm_outbound_leave_critical(cluster, CLUSTER_UP);
|
||||
return false;
|
||||
}
|
||||
|
||||
int __mcpm_cluster_state(unsigned int cluster)
|
||||
{
|
||||
sync_cache_r(&mcpm_sync.clusters[cluster].cluster);
|
||||
return mcpm_sync.clusters[cluster].cluster;
|
||||
}
|
||||
|
||||
extern unsigned long mcpm_power_up_setup_phys;
|
||||
|
||||
int __init mcpm_sync_init(
|
||||
void (*power_up_setup)(unsigned int affinity_level))
|
||||
{
|
||||
unsigned int i, j, mpidr, this_cluster;
|
||||
|
||||
BUILD_BUG_ON(MCPM_SYNC_CLUSTER_SIZE * MAX_NR_CLUSTERS != sizeof mcpm_sync);
|
||||
BUG_ON((unsigned long)&mcpm_sync & (__CACHE_WRITEBACK_GRANULE - 1));
|
||||
|
||||
/*
|
||||
* Set initial CPU and cluster states.
|
||||
* Only one cluster is assumed to be active at this point.
|
||||
*/
|
||||
for (i = 0; i < MAX_NR_CLUSTERS; i++) {
|
||||
mcpm_sync.clusters[i].cluster = CLUSTER_DOWN;
|
||||
mcpm_sync.clusters[i].inbound = INBOUND_NOT_COMING_UP;
|
||||
for (j = 0; j < MAX_CPUS_PER_CLUSTER; j++)
|
||||
mcpm_sync.clusters[i].cpus[j].cpu = CPU_DOWN;
|
||||
}
|
||||
mpidr = read_cpuid_mpidr();
|
||||
this_cluster = MPIDR_AFFINITY_LEVEL(mpidr, 1);
|
||||
for_each_online_cpu(i)
|
||||
mcpm_sync.clusters[this_cluster].cpus[i].cpu = CPU_UP;
|
||||
mcpm_sync.clusters[this_cluster].cluster = CLUSTER_UP;
|
||||
sync_cache_w(&mcpm_sync);
|
||||
|
||||
if (power_up_setup) {
|
||||
mcpm_power_up_setup_phys = virt_to_phys(power_up_setup);
|
||||
sync_cache_w(&mcpm_power_up_setup_phys);
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
|
@ -0,0 +1,219 @@
|
|||
/*
|
||||
* arch/arm/common/mcpm_head.S -- kernel entry point for multi-cluster PM
|
||||
*
|
||||
* Created by: Nicolas Pitre, March 2012
|
||||
* Copyright: (C) 2012-2013 Linaro Limited
|
||||
*
|
||||
* This program is free software; you can redistribute it and/or modify
|
||||
* it under the terms of the GNU General Public License version 2 as
|
||||
* published by the Free Software Foundation.
|
||||
*
|
||||
*
|
||||
* Refer to Documentation/arm/cluster-pm-race-avoidance.txt
|
||||
* for details of the synchronisation algorithms used here.
|
||||
*/
|
||||
|
||||
#include <linux/linkage.h>
|
||||
#include <asm/mcpm.h>
|
||||
|
||||
#include "vlock.h"
|
||||
|
||||
.if MCPM_SYNC_CLUSTER_CPUS
|
||||
.error "cpus must be the first member of struct mcpm_sync_struct"
|
||||
.endif
|
||||
|
||||
.macro pr_dbg string
|
||||
#if defined(CONFIG_DEBUG_LL) && defined(DEBUG)
|
||||
b 1901f
|
||||
1902: .asciz "CPU"
|
||||
1903: .asciz " cluster"
|
||||
1904: .asciz ": \string"
|
||||
.align
|
||||
1901: adr r0, 1902b
|
||||
bl printascii
|
||||
mov r0, r9
|
||||
bl printhex8
|
||||
adr r0, 1903b
|
||||
bl printascii
|
||||
mov r0, r10
|
||||
bl printhex8
|
||||
adr r0, 1904b
|
||||
bl printascii
|
||||
#endif
|
||||
.endm
|
||||
|
||||
.arm
|
||||
.align
|
||||
|
||||
ENTRY(mcpm_entry_point)
|
||||
|
||||
THUMB( adr r12, BSYM(1f) )
|
||||
THUMB( bx r12 )
|
||||
THUMB( .thumb )
|
||||
1:
|
||||
mrc p15, 0, r0, c0, c0, 5 @ MPIDR
|
||||
ubfx r9, r0, #0, #8 @ r9 = cpu
|
||||
ubfx r10, r0, #8, #8 @ r10 = cluster
|
||||
mov r3, #MAX_CPUS_PER_CLUSTER
|
||||
mla r4, r3, r10, r9 @ r4 = canonical CPU index
|
||||
cmp r4, #(MAX_CPUS_PER_CLUSTER * MAX_NR_CLUSTERS)
|
||||
blo 2f
|
||||
|
||||
/* We didn't expect this CPU. Try to cheaply make it quiet. */
|
||||
1: wfi
|
||||
wfe
|
||||
b 1b
|
||||
|
||||
2: pr_dbg "kernel mcpm_entry_point\n"
|
||||
|
||||
/*
|
||||
* MMU is off so we need to get to various variables in a
|
||||
* position independent way.
|
||||
*/
|
||||
adr r5, 3f
|
||||
ldmia r5, {r6, r7, r8, r11}
|
||||
add r6, r5, r6 @ r6 = mcpm_entry_vectors
|
||||
ldr r7, [r5, r7] @ r7 = mcpm_power_up_setup_phys
|
||||
add r8, r5, r8 @ r8 = mcpm_sync
|
||||
add r11, r5, r11 @ r11 = first_man_locks
|
||||
|
||||
mov r0, #MCPM_SYNC_CLUSTER_SIZE
|
||||
mla r8, r0, r10, r8 @ r8 = sync cluster base
|
||||
|
||||
@ Signal that this CPU is coming UP:
|
||||
mov r0, #CPU_COMING_UP
|
||||
mov r5, #MCPM_SYNC_CPU_SIZE
|
||||
mla r5, r9, r5, r8 @ r5 = sync cpu address
|
||||
strb r0, [r5]
|
||||
|
||||
@ At this point, the cluster cannot unexpectedly enter the GOING_DOWN
|
||||
@ state, because there is at least one active CPU (this CPU).
|
||||
|
||||
mov r0, #VLOCK_SIZE
|
||||
mla r11, r0, r10, r11 @ r11 = cluster first man lock
|
||||
mov r0, r11
|
||||
mov r1, r9 @ cpu
|
||||
bl vlock_trylock @ implies DMB
|
||||
|
||||
cmp r0, #0 @ failed to get the lock?
|
||||
bne mcpm_setup_wait @ wait for cluster setup if so
|
||||
|
||||
ldrb r0, [r8, #MCPM_SYNC_CLUSTER_CLUSTER]
|
||||
cmp r0, #CLUSTER_UP @ cluster already up?
|
||||
bne mcpm_setup @ if not, set up the cluster
|
||||
|
||||
@ Otherwise, release the first man lock and skip setup:
|
||||
mov r0, r11
|
||||
bl vlock_unlock
|
||||
b mcpm_setup_complete
|
||||
|
||||
mcpm_setup:
|
||||
@ Control dependency implies strb not observable before previous ldrb.
|
||||
|
||||
@ Signal that the cluster is being brought up:
|
||||
mov r0, #INBOUND_COMING_UP
|
||||
strb r0, [r8, #MCPM_SYNC_CLUSTER_INBOUND]
|
||||
dmb
|
||||
|
||||
@ Any CPU trying to take the cluster into CLUSTER_GOING_DOWN from this
|
||||
@ point onwards will observe INBOUND_COMING_UP and abort.
|
||||
|
||||
@ Wait for any previously-pending cluster teardown operations to abort
|
||||
@ or complete:
|
||||
mcpm_teardown_wait:
|
||||
ldrb r0, [r8, #MCPM_SYNC_CLUSTER_CLUSTER]
|
||||
cmp r0, #CLUSTER_GOING_DOWN
|
||||
bne first_man_setup
|
||||
wfe
|
||||
b mcpm_teardown_wait
|
||||
|
||||
first_man_setup:
|
||||
dmb
|
||||
|
||||
@ If the outbound gave up before teardown started, skip cluster setup:
|
||||
|
||||
cmp r0, #CLUSTER_UP
|
||||
beq mcpm_setup_leave
|
||||
|
||||
@ power_up_setup is now responsible for setting up the cluster:
|
||||
|
||||
cmp r7, #0
|
||||
mov r0, #1 @ second (cluster) affinity level
|
||||
blxne r7 @ Call power_up_setup if defined
|
||||
dmb
|
||||
|
||||
mov r0, #CLUSTER_UP
|
||||
strb r0, [r8, #MCPM_SYNC_CLUSTER_CLUSTER]
|
||||
dmb
|
||||
|
||||
mcpm_setup_leave:
|
||||
@ Leave the cluster setup critical section:
|
||||
|
||||
mov r0, #INBOUND_NOT_COMING_UP
|
||||
strb r0, [r8, #MCPM_SYNC_CLUSTER_INBOUND]
|
||||
dsb
|
||||
sev
|
||||
|
||||
mov r0, r11
|
||||
bl vlock_unlock @ implies DMB
|
||||
b mcpm_setup_complete
|
||||
|
||||
@ In the contended case, non-first men wait here for cluster setup
|
||||
@ to complete:
|
||||
mcpm_setup_wait:
|
||||
ldrb r0, [r8, #MCPM_SYNC_CLUSTER_CLUSTER]
|
||||
cmp r0, #CLUSTER_UP
|
||||
wfene
|
||||
bne mcpm_setup_wait
|
||||
dmb
|
||||
|
||||
mcpm_setup_complete:
|
||||
@ If a platform-specific CPU setup hook is needed, it is
|
||||
@ called from here.
|
||||
|
||||
cmp r7, #0
|
||||
mov r0, #0 @ first (CPU) affinity level
|
||||
blxne r7 @ Call power_up_setup if defined
|
||||
dmb
|
||||
|
||||
@ Mark the CPU as up:
|
||||
|
||||
mov r0, #CPU_UP
|
||||
strb r0, [r5]
|
||||
|
||||
@ Observability order of CPU_UP and opening of the gate does not matter.
|
||||
|
||||
mcpm_entry_gated:
|
||||
ldr r5, [r6, r4, lsl #2] @ r5 = CPU entry vector
|
||||
cmp r5, #0
|
||||
wfeeq
|
||||
beq mcpm_entry_gated
|
||||
dmb
|
||||
|
||||
pr_dbg "released\n"
|
||||
bx r5
|
||||
|
||||
.align 2
|
||||
|
||||
3: .word mcpm_entry_vectors - .
|
||||
.word mcpm_power_up_setup_phys - 3b
|
||||
.word mcpm_sync - 3b
|
||||
.word first_man_locks - 3b
|
||||
|
||||
ENDPROC(mcpm_entry_point)
|
||||
|
||||
.bss
|
||||
|
||||
.align CACHE_WRITEBACK_ORDER
|
||||
.type first_man_locks, #object
|
||||
first_man_locks:
|
||||
.space VLOCK_SIZE * MAX_NR_CLUSTERS
|
||||
.align CACHE_WRITEBACK_ORDER
|
||||
|
||||
.type mcpm_entry_vectors, #object
|
||||
ENTRY(mcpm_entry_vectors)
|
||||
.space 4 * MAX_NR_CLUSTERS * MAX_CPUS_PER_CLUSTER
|
||||
|
||||
.type mcpm_power_up_setup_phys, #object
|
||||
ENTRY(mcpm_power_up_setup_phys)
|
||||
.space 4 @ set by mcpm_sync_init()
|
|
@ -0,0 +1,92 @@
|
|||
/*
|
||||
* linux/arch/arm/mach-vexpress/mcpm_platsmp.c
|
||||
*
|
||||
* Created by: Nicolas Pitre, November 2012
|
||||
* Copyright: (C) 2012-2013 Linaro Limited
|
||||
*
|
||||
* This program is free software; you can redistribute it and/or modify
|
||||
* it under the terms of the GNU General Public License version 2 as
|
||||
* published by the Free Software Foundation.
|
||||
*
|
||||
* Code to handle secondary CPU bringup and hotplug for the cluster power API.
|
||||
*/
|
||||
|
||||
#include <linux/init.h>
|
||||
#include <linux/smp.h>
|
||||
#include <linux/spinlock.h>
|
||||
|
||||
#include <linux/irqchip/arm-gic.h>
|
||||
|
||||
#include <asm/mcpm.h>
|
||||
#include <asm/smp.h>
|
||||
#include <asm/smp_plat.h>
|
||||
|
||||
static void __init simple_smp_init_cpus(void)
|
||||
{
|
||||
}
|
||||
|
||||
static int __cpuinit mcpm_boot_secondary(unsigned int cpu, struct task_struct *idle)
|
||||
{
|
||||
unsigned int mpidr, pcpu, pcluster, ret;
|
||||
extern void secondary_startup(void);
|
||||
|
||||
mpidr = cpu_logical_map(cpu);
|
||||
pcpu = MPIDR_AFFINITY_LEVEL(mpidr, 0);
|
||||
pcluster = MPIDR_AFFINITY_LEVEL(mpidr, 1);
|
||||
pr_debug("%s: logical CPU %d is physical CPU %d cluster %d\n",
|
||||
__func__, cpu, pcpu, pcluster);
|
||||
|
||||
mcpm_set_entry_vector(pcpu, pcluster, NULL);
|
||||
ret = mcpm_cpu_power_up(pcpu, pcluster);
|
||||
if (ret)
|
||||
return ret;
|
||||
mcpm_set_entry_vector(pcpu, pcluster, secondary_startup);
|
||||
arch_send_wakeup_ipi_mask(cpumask_of(cpu));
|
||||
dsb_sev();
|
||||
return 0;
|
||||
}
|
||||
|
||||
static void __cpuinit mcpm_secondary_init(unsigned int cpu)
|
||||
{
|
||||
mcpm_cpu_powered_up();
|
||||
gic_secondary_init(0);
|
||||
}
|
||||
|
||||
#ifdef CONFIG_HOTPLUG_CPU
|
||||
|
||||
static int mcpm_cpu_disable(unsigned int cpu)
|
||||
{
|
||||
/*
|
||||
* We assume all CPUs may be shut down.
|
||||
* This would be the hook to use for eventual Secure
|
||||
* OS migration requests as described in the PSCI spec.
|
||||
*/
|
||||
return 0;
|
||||
}
|
||||
|
||||
static void mcpm_cpu_die(unsigned int cpu)
|
||||
{
|
||||
unsigned int mpidr, pcpu, pcluster;
|
||||
mpidr = read_cpuid_mpidr();
|
||||
pcpu = MPIDR_AFFINITY_LEVEL(mpidr, 0);
|
||||
pcluster = MPIDR_AFFINITY_LEVEL(mpidr, 1);
|
||||
mcpm_set_entry_vector(pcpu, pcluster, NULL);
|
||||
mcpm_cpu_power_down();
|
||||
}
|
||||
|
||||
#endif
|
||||
|
||||
static struct smp_operations __initdata mcpm_smp_ops = {
|
||||
.smp_init_cpus = simple_smp_init_cpus,
|
||||
.smp_boot_secondary = mcpm_boot_secondary,
|
||||
.smp_secondary_init = mcpm_secondary_init,
|
||||
#ifdef CONFIG_HOTPLUG_CPU
|
||||
.cpu_disable = mcpm_cpu_disable,
|
||||
.cpu_die = mcpm_cpu_die,
|
||||
#endif
|
||||
};
|
||||
|
||||
void __init mcpm_smp_set_ops(void)
|
||||
{
|
||||
smp_set_ops(&mcpm_smp_ops);
|
||||
}
|
|
@ -0,0 +1,108 @@
|
|||
/*
|
||||
* vlock.S - simple voting lock implementation for ARM
|
||||
*
|
||||
* Created by: Dave Martin, 2012-08-16
|
||||
* Copyright: (C) 2012-2013 Linaro Limited
|
||||
*
|
||||
* This program is free software; you can redistribute it and/or modify
|
||||
* it under the terms of the GNU General Public License version 2 as
|
||||
* published by the Free Software Foundation.
|
||||
*
|
||||
* This program is distributed in the hope that it will be useful,
|
||||
* but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
* GNU General Public License for more details.
|
||||
*
|
||||
*
|
||||
* This algorithm is described in more detail in
|
||||
* Documentation/arm/vlocks.txt.
|
||||
*/
|
||||
|
||||
#include <linux/linkage.h>
|
||||
#include "vlock.h"
|
||||
|
||||
/* Select different code if voting flags can fit in a single word. */
|
||||
#if VLOCK_VOTING_SIZE > 4
|
||||
#define FEW(x...)
|
||||
#define MANY(x...) x
|
||||
#else
|
||||
#define FEW(x...) x
|
||||
#define MANY(x...)
|
||||
#endif
|
||||
|
||||
@ voting lock for first-man coordination
|
||||
|
||||
.macro voting_begin rbase:req, rcpu:req, rscratch:req
|
||||
mov \rscratch, #1
|
||||
strb \rscratch, [\rbase, \rcpu]
|
||||
dmb
|
||||
.endm
|
||||
|
||||
.macro voting_end rbase:req, rcpu:req, rscratch:req
|
||||
dmb
|
||||
mov \rscratch, #0
|
||||
strb \rscratch, [\rbase, \rcpu]
|
||||
dsb
|
||||
sev
|
||||
.endm
|
||||
|
||||
/*
|
||||
* The vlock structure must reside in Strongly-Ordered or Device memory.
|
||||
* This implementation deliberately eliminates most of the barriers which
|
||||
* would be required for other memory types, and assumes that independent
|
||||
* writes to neighbouring locations within a cacheline do not interfere
|
||||
* with one another.
|
||||
*/
|
||||
|
||||
@ r0: lock structure base
|
||||
@ r1: CPU ID (0-based index within cluster)
|
||||
ENTRY(vlock_trylock)
|
||||
add r1, r1, #VLOCK_VOTING_OFFSET
|
||||
|
||||
voting_begin r0, r1, r2
|
||||
|
||||
ldrb r2, [r0, #VLOCK_OWNER_OFFSET] @ check whether lock is held
|
||||
cmp r2, #VLOCK_OWNER_NONE
|
||||
bne trylock_fail @ fail if so
|
||||
|
||||
@ Control dependency implies strb not observable before previous ldrb.
|
||||
|
||||
strb r1, [r0, #VLOCK_OWNER_OFFSET] @ submit my vote
|
||||
|
||||
voting_end r0, r1, r2 @ implies DMB
|
||||
|
||||
@ Wait for the current round of voting to finish:
|
||||
|
||||
MANY( mov r3, #VLOCK_VOTING_OFFSET )
|
||||
0:
|
||||
MANY( ldr r2, [r0, r3] )
|
||||
FEW( ldr r2, [r0, #VLOCK_VOTING_OFFSET] )
|
||||
cmp r2, #0
|
||||
wfene
|
||||
bne 0b
|
||||
MANY( add r3, r3, #4 )
|
||||
MANY( cmp r3, #VLOCK_VOTING_OFFSET + VLOCK_VOTING_SIZE )
|
||||
MANY( bne 0b )
|
||||
|
||||
@ Check who won:
|
||||
|
||||
dmb
|
||||
ldrb r2, [r0, #VLOCK_OWNER_OFFSET]
|
||||
eor r0, r1, r2 @ zero if I won, else nonzero
|
||||
bx lr
|
||||
|
||||
trylock_fail:
|
||||
voting_end r0, r1, r2
|
||||
mov r0, #1 @ nonzero indicates that I lost
|
||||
bx lr
|
||||
ENDPROC(vlock_trylock)
|
||||
|
||||
@ r0: lock structure base
|
||||
ENTRY(vlock_unlock)
|
||||
dmb
|
||||
mov r1, #VLOCK_OWNER_NONE
|
||||
strb r1, [r0, #VLOCK_OWNER_OFFSET]
|
||||
dsb
|
||||
sev
|
||||
bx lr
|
||||
ENDPROC(vlock_unlock)
|
|
@ -0,0 +1,29 @@
|
|||
/*
|
||||
* vlock.h - simple voting lock implementation
|
||||
*
|
||||
* Created by: Dave Martin, 2012-08-16
|
||||
* Copyright: (C) 2012-2013 Linaro Limited
|
||||
*
|
||||
* This program is free software; you can redistribute it and/or modify
|
||||
* it under the terms of the GNU General Public License version 2 as
|
||||
* published by the Free Software Foundation.
|
||||
*
|
||||
* This program is distributed in the hope that it will be useful,
|
||||
* but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
* GNU General Public License for more details.
|
||||
*/
|
||||
|
||||
#ifndef __VLOCK_H
|
||||
#define __VLOCK_H
|
||||
|
||||
#include <asm/mcpm.h>
|
||||
|
||||
/* Offsets and sizes are rounded to a word (4 bytes) */
|
||||
#define VLOCK_OWNER_OFFSET 0
|
||||
#define VLOCK_VOTING_OFFSET 4
|
||||
#define VLOCK_VOTING_SIZE ((MAX_CPUS_PER_CLUSTER + 3) / 4 * 4)
|
||||
#define VLOCK_SIZE (VLOCK_VOTING_OFFSET + VLOCK_VOTING_SIZE)
|
||||
#define VLOCK_OWNER_NONE 0
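
/*
 * Worked example (illustrative): with MAX_CPUS_PER_CLUSTER == 4, the four
 * voting bytes round up to a single 4-byte word, so VLOCK_VOTING_SIZE == 4
 * and VLOCK_SIZE == 8: one owner word followed by one packed voting word.
 */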
|
||||
|
||||
#endif /* ! __VLOCK_H */
|
|
@ -243,6 +243,29 @@ typedef struct {
|
|||
|
||||
#define ATOMIC64_INIT(i) { (i) }
|
||||
|
||||
#ifdef CONFIG_ARM_LPAE
|
||||
static inline u64 atomic64_read(const atomic64_t *v)
|
||||
{
|
||||
u64 result;
|
||||
|
||||
__asm__ __volatile__("@ atomic64_read\n"
|
||||
" ldrd %0, %H0, [%1]"
|
||||
: "=&r" (result)
|
||||
: "r" (&v->counter), "Qo" (v->counter)
|
||||
);
|
||||
|
||||
return result;
|
||||
}
|
||||
|
||||
static inline void atomic64_set(atomic64_t *v, u64 i)
|
||||
{
|
||||
__asm__ __volatile__("@ atomic64_set\n"
|
||||
" strd %2, %H2, [%1]"
|
||||
: "=Qo" (v->counter)
|
||||
: "r" (&v->counter), "r" (i)
|
||||
);
|
||||
}
|
||||
#else
|
||||
static inline u64 atomic64_read(const atomic64_t *v)
|
||||
{
|
||||
u64 result;
|
||||
|
@ -269,6 +292,7 @@ static inline void atomic64_set(atomic64_t *v, u64 i)
|
|||
: "r" (&v->counter), "r" (i)
|
||||
: "cc");
|
||||
}
|
||||
#endif
|
||||
|
||||
static inline void atomic64_add(u64 i, atomic64_t *v)
|
||||
{
|
||||
|
|
|
@ -363,4 +363,79 @@ static inline void flush_cache_vunmap(unsigned long start, unsigned long end)
	flush_cache_all();
}

/*
 * Memory synchronization helpers for mixed cached vs non cached accesses.
 *
 * Some synchronization algorithms have to set states in memory with the
 * cache enabled or disabled depending on the code path.  It is crucial
 * to always ensure proper cache maintenance to update main memory right
 * away in that case.
 *
 * Any cached write must be followed by a cache clean operation.
 * Any cached read must be preceded by a cache invalidate operation.
 * Yet, in the read case, a cache flush i.e. atomic clean+invalidate
 * operation is needed to avoid discarding possible concurrent writes to the
 * accessed memory.
 *
 * Also, in order to prevent a cached writer from interfering with an
 * adjacent non-cached writer, each state variable must be located to
 * a separate cache line.
 */

/*
 * This needs to be >= the max cache writeback size of all
 * supported platforms included in the current kernel configuration.
 * This is used to align state variables to their own cache lines.
 */
#define __CACHE_WRITEBACK_ORDER	6  /* guessed from existing platforms */
#define __CACHE_WRITEBACK_GRANULE	(1 << __CACHE_WRITEBACK_ORDER)

/*
 * There is no __cpuc_clean_dcache_area but we use it anyway for
 * code intent clarity, and alias it to __cpuc_flush_dcache_area.
 */
#define __cpuc_clean_dcache_area __cpuc_flush_dcache_area

/*
 * Ensure preceding writes to *p by this CPU are visible to
 * subsequent reads by other CPUs:
 */
static inline void __sync_cache_range_w(volatile void *p, size_t size)
{
	char *_p = (char *)p;

	__cpuc_clean_dcache_area(_p, size);
	outer_clean_range(__pa(_p), __pa(_p + size));
}

/*
 * Ensure preceding writes to *p by other CPUs are visible to
 * subsequent reads by this CPU.  We must be careful not to
 * discard data simultaneously written by another CPU, hence the
 * usage of flush rather than invalidate operations.
 */
static inline void __sync_cache_range_r(volatile void *p, size_t size)
{
	char *_p = (char *)p;

#ifdef CONFIG_OUTER_CACHE
	if (outer_cache.flush_range) {
		/*
		 * Ensure dirty data migrated from other CPUs into our cache
		 * are cleaned out safely before the outer cache is cleaned:
		 */
		__cpuc_clean_dcache_area(_p, size);

		/* Clean and invalidate stale data for *p from outer ... */
		outer_flush_range(__pa(_p), __pa(_p + size));
	}
#endif

	/* ... and inner cache: */
	__cpuc_flush_dcache_area(_p, size);
}

#define sync_cache_w(ptr) __sync_cache_range_w(ptr, sizeof *(ptr))
#define sync_cache_r(ptr) __sync_cache_range_r(ptr, sizeof *(ptr))

#endif
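As a usage sketch (assumed, not taken from this patch): MCPM-style code keeps each shared state variable on its own writeback granule and brackets every access with these helpers, so a CPU running with its caches off and a CPU running fully cached agree on the value. The struct and function names below are invented for illustration.

#include <linux/types.h>
#include <linux/compiler.h>
#include <asm/cacheflush.h>

/* One state byte per CPU, each on its own cache writeback granule. */
static struct {
	s8 state __aligned(__CACHE_WRITEBACK_GRANULE);
} foo_cpu_state[4];

/* Cached writer: publish a value for a reader that may have caches off. */
static void foo_publish_state(unsigned int cpu, s8 state)
{
	foo_cpu_state[cpu].state = state;
	sync_cache_w(&foo_cpu_state[cpu].state);
}

/* Cached reader: pick up a value that may have been written uncached. */
static s8 foo_read_state(unsigned int cpu)
{
	sync_cache_r(&foo_cpu_state[cpu].state);
	return foo_cpu_state[cpu].state;
}
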
@ -42,6 +42,8 @@
|
|||
#define vectors_high() (0)
|
||||
#endif
|
||||
|
||||
#ifdef CONFIG_CPU_CP15
|
||||
|
||||
extern unsigned long cr_no_alignment; /* defined in entry-armv.S */
|
||||
extern unsigned long cr_alignment; /* defined in entry-armv.S */
|
||||
|
||||
|
@ -82,6 +84,18 @@ static inline void set_copro_access(unsigned int val)
|
|||
isb();
|
||||
}
|
||||
|
||||
#endif
|
||||
#else /* ifdef CONFIG_CPU_CP15 */
|
||||
|
||||
/*
|
||||
* cr_alignment and cr_no_alignment are tightly coupled to cp15 (at least in the
|
||||
* minds of the developers). Yielding 0 for machines without a cp15 (and making
|
||||
* it read-only) is fine for most cases and saves quite some #ifdeffery.
|
||||
*/
|
||||
#define cr_no_alignment UL(0)
|
||||
#define cr_alignment UL(0)
|
||||
|
||||
#endif /* ifdef CONFIG_CPU_CP15 / else */
|
||||
|
||||
#endif /* ifndef __ASSEMBLY__ */
|
||||
|
||||
#endif
|
||||
|
|
|
@ -38,32 +38,6 @@
|
|||
#define MPIDR_AFFINITY_LEVEL(mpidr, level) \
|
||||
((mpidr >> (MPIDR_LEVEL_BITS * level)) & MPIDR_LEVEL_MASK)
|
||||
|
||||
extern unsigned int processor_id;
|
||||
|
||||
#ifdef CONFIG_CPU_CP15
|
||||
#define read_cpuid(reg) \
|
||||
({ \
|
||||
unsigned int __val; \
|
||||
asm("mrc p15, 0, %0, c0, c0, " __stringify(reg) \
|
||||
: "=r" (__val) \
|
||||
: \
|
||||
: "cc"); \
|
||||
__val; \
|
||||
})
|
||||
#define read_cpuid_ext(ext_reg) \
|
||||
({ \
|
||||
unsigned int __val; \
|
||||
asm("mrc p15, 0, %0, c0, " ext_reg \
|
||||
: "=r" (__val) \
|
||||
: \
|
||||
: "cc"); \
|
||||
__val; \
|
||||
})
|
||||
#else
|
||||
#define read_cpuid(reg) (processor_id)
|
||||
#define read_cpuid_ext(reg) 0
|
||||
#endif
|
||||
|
||||
#define ARM_CPU_IMP_ARM 0x41
|
||||
#define ARM_CPU_IMP_INTEL 0x69
|
||||
|
||||
|
@ -82,6 +56,46 @@ extern unsigned int processor_id;
|
|||
#define ARM_CPU_XSCALE_ARCH_V2 0x4000
|
||||
#define ARM_CPU_XSCALE_ARCH_V3 0x6000
|
||||
|
||||
extern unsigned int processor_id;
|
||||
|
||||
#ifdef CONFIG_CPU_CP15
|
||||
#define read_cpuid(reg) \
|
||||
({ \
|
||||
unsigned int __val; \
|
||||
asm("mrc p15, 0, %0, c0, c0, " __stringify(reg) \
|
||||
: "=r" (__val) \
|
||||
: \
|
||||
: "cc"); \
|
||||
__val; \
|
||||
})
|
||||
|
||||
#define read_cpuid_ext(ext_reg) \
|
||||
({ \
|
||||
unsigned int __val; \
|
||||
asm("mrc p15, 0, %0, c0, " ext_reg \
|
||||
: "=r" (__val) \
|
||||
: \
|
||||
: "cc"); \
|
||||
__val; \
|
||||
})
|
||||
|
||||
#else /* ifdef CONFIG_CPU_CP15 */
|
||||
|
||||
/*
|
||||
* read_cpuid and read_cpuid_ext should only ever be called on machines that
|
||||
* have cp15 so warn on other usages.
|
||||
*/
|
||||
#define read_cpuid(reg) \
|
||||
({ \
|
||||
WARN_ON_ONCE(1); \
|
||||
0; \
|
||||
})
|
||||
|
||||
#define read_cpuid_ext(reg) read_cpuid(reg)
|
||||
|
||||
#endif /* ifdef CONFIG_CPU_CP15 / else */
|
||||
|
||||
#ifdef CONFIG_CPU_CP15
|
||||
/*
|
||||
* The CPU ID never changes at run time, so we might as well tell the
|
||||
* compiler that it's constant. Use this function to read the CPU ID
|
||||
|
@ -92,6 +106,15 @@ static inline unsigned int __attribute_const__ read_cpuid_id(void)
|
|||
return read_cpuid(CPUID_ID);
|
||||
}
|
||||
|
||||
#else /* ifdef CONFIG_CPU_CP15 */
|
||||
|
||||
static inline unsigned int __attribute_const__ read_cpuid_id(void)
|
||||
{
|
||||
return processor_id;
|
||||
}
|
||||
|
||||
#endif /* ifdef CONFIG_CPU_CP15 / else */
|
||||
|
||||
static inline unsigned int __attribute_const__ read_cpuid_implementor(void)
|
||||
{
|
||||
return (read_cpuid_id() & 0xFF000000) >> 24;
|
||||
|
|
|
@ -18,12 +18,12 @@
|
|||
* ================
|
||||
*
|
||||
* We have the following to choose from:
|
||||
* arm6 - ARM6 style
|
||||
* arm7 - ARM7 style
|
||||
* v4_early - ARMv4 without Thumb early abort handler
|
||||
* v4t_late - ARMv4 with Thumb late abort handler
|
||||
* v4t_early - ARMv4 with Thumb early abort handler
|
||||
* v5tej_early - ARMv5 with Thumb and Java early abort handler
|
||||
* v5t_early - ARMv5 with Thumb early abort handler
|
||||
* v5tj_early - ARMv5 with Thumb and Java early abort handler
|
||||
* xscale - ARMv5 with Thumb with Xscale extensions
|
||||
* v6_early - ARMv6 generic early abort handler
|
||||
* v7_early - ARMv7 generic early abort handler
|
||||
|
@ -39,14 +39,6 @@
|
|||
# endif
|
||||
#endif
|
||||
|
||||
#ifdef CONFIG_CPU_ABRT_LV4T
|
||||
# ifdef CPU_DABORT_HANDLER
|
||||
# define MULTI_DABORT 1
|
||||
# else
|
||||
# define CPU_DABORT_HANDLER v4t_late_abort
|
||||
# endif
|
||||
#endif
|
||||
|
||||
#ifdef CONFIG_CPU_ABRT_EV4
|
||||
# ifdef CPU_DABORT_HANDLER
|
||||
# define MULTI_DABORT 1
|
||||
|
@ -55,6 +47,14 @@
|
|||
# endif
|
||||
#endif
|
||||
|
||||
#ifdef CONFIG_CPU_ABRT_LV4T
|
||||
# ifdef CPU_DABORT_HANDLER
|
||||
# define MULTI_DABORT 1
|
||||
# else
|
||||
# define CPU_DABORT_HANDLER v4t_late_abort
|
||||
# endif
|
||||
#endif
|
||||
|
||||
#ifdef CONFIG_CPU_ABRT_EV4T
|
||||
# ifdef CPU_DABORT_HANDLER
|
||||
# define MULTI_DABORT 1
|
||||
|
@ -63,14 +63,6 @@
|
|||
# endif
|
||||
#endif
|
||||
|
||||
#ifdef CONFIG_CPU_ABRT_EV5TJ
|
||||
# ifdef CPU_DABORT_HANDLER
|
||||
# define MULTI_DABORT 1
|
||||
# else
|
||||
# define CPU_DABORT_HANDLER v5tj_early_abort
|
||||
# endif
|
||||
#endif
|
||||
|
||||
#ifdef CONFIG_CPU_ABRT_EV5T
|
||||
# ifdef CPU_DABORT_HANDLER
|
||||
# define MULTI_DABORT 1
|
||||
|
@ -79,6 +71,14 @@
|
|||
# endif
|
||||
#endif
|
||||
|
||||
#ifdef CONFIG_CPU_ABRT_EV5TJ
|
||||
# ifdef CPU_DABORT_HANDLER
|
||||
# define MULTI_DABORT 1
|
||||
# else
|
||||
# define CPU_DABORT_HANDLER v5tj_early_abort
|
||||
# endif
|
||||
#endif
|
||||
|
||||
#ifdef CONFIG_CPU_ABRT_EV6
|
||||
# ifdef CPU_DABORT_HANDLER
|
||||
# define MULTI_DABORT 1
|
||||
|
|
|
@ -211,4 +211,8 @@
|
|||
|
||||
#define HSR_HVC_IMM_MASK ((1UL << 16) - 1)
|
||||
|
||||
#define HSR_DABT_S1PTW (1U << 7)
|
||||
#define HSR_DABT_CM (1U << 8)
|
||||
#define HSR_DABT_EA (1U << 9)
|
||||
|
||||
#endif /* __ARM_KVM_ARM_H__ */
|
||||
|
|
|
@ -75,7 +75,7 @@ extern char __kvm_hyp_code_end[];
|
|||
extern void __kvm_tlb_flush_vmid(struct kvm *kvm);
|
||||
|
||||
extern void __kvm_flush_vm_context(void);
|
||||
extern void __kvm_tlb_flush_vmid(struct kvm *kvm);
|
||||
extern void __kvm_tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa);
|
||||
|
||||
extern int __kvm_vcpu_run(struct kvm_vcpu *vcpu);
|
||||
#endif
|
||||
|
|
|
@ -22,11 +22,12 @@
|
|||
#include <linux/kvm_host.h>
|
||||
#include <asm/kvm_asm.h>
|
||||
#include <asm/kvm_mmio.h>
|
||||
#include <asm/kvm_arm.h>
|
||||
|
||||
u32 *vcpu_reg(struct kvm_vcpu *vcpu, u8 reg_num);
|
||||
u32 *vcpu_spsr(struct kvm_vcpu *vcpu);
|
||||
unsigned long *vcpu_reg(struct kvm_vcpu *vcpu, u8 reg_num);
|
||||
unsigned long *vcpu_spsr(struct kvm_vcpu *vcpu);
|
||||
|
||||
int kvm_handle_wfi(struct kvm_vcpu *vcpu, struct kvm_run *run);
|
||||
bool kvm_condition_valid(struct kvm_vcpu *vcpu);
|
||||
void kvm_skip_instr(struct kvm_vcpu *vcpu, bool is_wide_instr);
|
||||
void kvm_inject_undefined(struct kvm_vcpu *vcpu);
|
||||
void kvm_inject_dabt(struct kvm_vcpu *vcpu, unsigned long addr);
|
||||
|
@ -37,14 +38,14 @@ static inline bool vcpu_mode_is_32bit(struct kvm_vcpu *vcpu)
|
|||
return 1;
|
||||
}
|
||||
|
||||
static inline u32 *vcpu_pc(struct kvm_vcpu *vcpu)
|
||||
static inline unsigned long *vcpu_pc(struct kvm_vcpu *vcpu)
|
||||
{
|
||||
return (u32 *)&vcpu->arch.regs.usr_regs.ARM_pc;
|
||||
return &vcpu->arch.regs.usr_regs.ARM_pc;
|
||||
}
|
||||
|
||||
static inline u32 *vcpu_cpsr(struct kvm_vcpu *vcpu)
|
||||
static inline unsigned long *vcpu_cpsr(struct kvm_vcpu *vcpu)
|
||||
{
|
||||
return (u32 *)&vcpu->arch.regs.usr_regs.ARM_cpsr;
|
||||
return &vcpu->arch.regs.usr_regs.ARM_cpsr;
|
||||
}
|
||||
|
||||
static inline void vcpu_set_thumb(struct kvm_vcpu *vcpu)
|
||||
|
@ -69,4 +70,96 @@ static inline bool kvm_vcpu_reg_is_pc(struct kvm_vcpu *vcpu, int reg)
|
|||
return reg == 15;
|
||||
}
|
||||
|
||||
static inline u32 kvm_vcpu_get_hsr(struct kvm_vcpu *vcpu)
|
||||
{
|
||||
return vcpu->arch.fault.hsr;
|
||||
}
|
||||
|
||||
static inline unsigned long kvm_vcpu_get_hfar(struct kvm_vcpu *vcpu)
|
||||
{
|
||||
return vcpu->arch.fault.hxfar;
|
||||
}
|
||||
|
||||
static inline phys_addr_t kvm_vcpu_get_fault_ipa(struct kvm_vcpu *vcpu)
|
||||
{
|
||||
return ((phys_addr_t)vcpu->arch.fault.hpfar & HPFAR_MASK) << 8;
|
||||
}
|
||||
|
||||
static inline unsigned long kvm_vcpu_get_hyp_pc(struct kvm_vcpu *vcpu)
|
||||
{
|
||||
return vcpu->arch.fault.hyp_pc;
|
||||
}
|
||||
|
||||
static inline bool kvm_vcpu_dabt_isvalid(struct kvm_vcpu *vcpu)
|
||||
{
|
||||
return kvm_vcpu_get_hsr(vcpu) & HSR_ISV;
|
||||
}
|
||||
|
||||
static inline bool kvm_vcpu_dabt_iswrite(struct kvm_vcpu *vcpu)
|
||||
{
|
||||
return kvm_vcpu_get_hsr(vcpu) & HSR_WNR;
|
||||
}
|
||||
|
||||
static inline bool kvm_vcpu_dabt_issext(struct kvm_vcpu *vcpu)
|
||||
{
|
||||
return kvm_vcpu_get_hsr(vcpu) & HSR_SSE;
|
||||
}
|
||||
|
||||
static inline int kvm_vcpu_dabt_get_rd(struct kvm_vcpu *vcpu)
|
||||
{
|
||||
return (kvm_vcpu_get_hsr(vcpu) & HSR_SRT_MASK) >> HSR_SRT_SHIFT;
|
||||
}
|
||||
|
||||
static inline bool kvm_vcpu_dabt_isextabt(struct kvm_vcpu *vcpu)
|
||||
{
|
||||
return kvm_vcpu_get_hsr(vcpu) & HSR_DABT_EA;
|
||||
}
|
||||
|
||||
static inline bool kvm_vcpu_dabt_iss1tw(struct kvm_vcpu *vcpu)
|
||||
{
|
||||
return kvm_vcpu_get_hsr(vcpu) & HSR_DABT_S1PTW;
|
||||
}
|
||||
|
||||
/* Get Access Size from a data abort */
|
||||
static inline int kvm_vcpu_dabt_get_as(struct kvm_vcpu *vcpu)
|
||||
{
|
||||
switch ((kvm_vcpu_get_hsr(vcpu) >> 22) & 0x3) {
|
||||
case 0:
|
||||
return 1;
|
||||
case 1:
|
||||
return 2;
|
||||
case 2:
|
||||
return 4;
|
||||
default:
|
||||
kvm_err("Hardware is weird: SAS 0b11 is reserved\n");
|
||||
return -EFAULT;
|
||||
}
|
||||
}
|
||||
|
||||
/* This one is not specific to Data Abort */
static inline bool kvm_vcpu_trap_il_is32bit(struct kvm_vcpu *vcpu)
{
	return kvm_vcpu_get_hsr(vcpu) & HSR_IL;
}

static inline u8 kvm_vcpu_trap_get_class(struct kvm_vcpu *vcpu)
{
	return kvm_vcpu_get_hsr(vcpu) >> HSR_EC_SHIFT;
}

static inline bool kvm_vcpu_trap_is_iabt(struct kvm_vcpu *vcpu)
{
	return kvm_vcpu_trap_get_class(vcpu) == HSR_EC_IABT;
}

static inline u8 kvm_vcpu_trap_get_fault(struct kvm_vcpu *vcpu)
{
	return kvm_vcpu_get_hsr(vcpu) & HSR_FSC_TYPE;
}

static inline u32 kvm_vcpu_hvc_get_imm(struct kvm_vcpu *vcpu)
{
	return kvm_vcpu_get_hsr(vcpu) & HSR_HVC_IMM_MASK;
}

#endif /* __ARM_KVM_EMULATE_H__ */
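As an illustration of the accessor style this header introduces (not code from this series): an exit handler can decode a guest data abort without touching vcpu->arch.fault directly. foo_handle_mmio_fault is an invented name and real handlers do considerably more checking.

#include <linux/kvm_host.h>
#include <linux/printk.h>
#include <asm/kvm_emulate.h>

static int foo_handle_mmio_fault(struct kvm_vcpu *vcpu)
{
	phys_addr_t ipa;
	bool is_write;
	int len, rt;

	if (!kvm_vcpu_dabt_isvalid(vcpu))
		return -EINVAL;		/* no valid ISS: the insn would need decoding */

	ipa      = kvm_vcpu_get_fault_ipa(vcpu);
	len      = kvm_vcpu_dabt_get_as(vcpu);	/* 1, 2 or 4 bytes, < 0 on error */
	is_write = kvm_vcpu_dabt_iswrite(vcpu);
	rt       = kvm_vcpu_dabt_get_rd(vcpu);
	if (len < 0)
		return len;

	pr_debug("dabt at %pa: %d byte %s, rt=%d\n",
		 &ipa, len, is_write ? "write" : "read", rt);
	/* ...emulate the access against *vcpu_reg(vcpu, rt)... */
	return 0;
}
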
|
||||
|
|
|
@ -80,6 +80,15 @@ struct kvm_mmu_memory_cache {
|
|||
void *objects[KVM_NR_MEM_OBJS];
|
||||
};
|
||||
|
||||
struct kvm_vcpu_fault_info {
|
||||
u32 hsr; /* Hyp Syndrome Register */
|
||||
u32 hxfar; /* Hyp Data/Inst. Fault Address Register */
|
||||
u32 hpfar; /* Hyp IPA Fault Address Register */
|
||||
u32 hyp_pc; /* PC when exception was taken from Hyp mode */
|
||||
};
|
||||
|
||||
typedef struct vfp_hard_struct kvm_kernel_vfp_t;
|
||||
|
||||
struct kvm_vcpu_arch {
|
||||
struct kvm_regs regs;
|
||||
|
||||
|
@ -93,13 +102,11 @@ struct kvm_vcpu_arch {
|
|||
u32 midr;
|
||||
|
||||
/* Exception Information */
|
||||
u32 hsr; /* Hyp Syndrome Register */
|
||||
u32 hxfar; /* Hyp Data/Inst Fault Address Register */
|
||||
u32 hpfar; /* Hyp IPA Fault Address Register */
|
||||
struct kvm_vcpu_fault_info fault;
|
||||
|
||||
/* Floating point registers (VFP and Advanced SIMD/NEON) */
|
||||
struct vfp_hard_struct vfp_guest;
|
||||
struct vfp_hard_struct *vfp_host;
|
||||
kvm_kernel_vfp_t vfp_guest;
|
||||
kvm_kernel_vfp_t *vfp_host;
|
||||
|
||||
/* VGIC state */
|
||||
struct vgic_cpu vgic_cpu;
|
||||
|
@ -122,9 +129,6 @@ struct kvm_vcpu_arch {
|
|||
/* Interrupt related fields */
|
||||
u32 irq_lines; /* IRQ and FIQ levels */
|
||||
|
||||
/* Hyp exception information */
|
||||
u32 hyp_pc; /* PC when exception was taken from Hyp mode */
|
||||
|
||||
/* Cache some mmu pages needed inside spinlock regions */
|
||||
struct kvm_mmu_memory_cache mmu_page_cache;
|
||||
|
||||
|
@ -181,4 +185,26 @@ struct kvm_one_reg;
|
|||
int kvm_arm_coproc_get_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *);
|
||||
int kvm_arm_coproc_set_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *);
|
||||
|
||||
int handle_exit(struct kvm_vcpu *vcpu, struct kvm_run *run,
|
||||
int exception_index);
|
||||
|
||||
static inline void __cpu_init_hyp_mode(unsigned long long pgd_ptr,
|
||||
unsigned long hyp_stack_ptr,
|
||||
unsigned long vector_ptr)
|
||||
{
|
||||
unsigned long pgd_low, pgd_high;
|
||||
|
||||
pgd_low = (pgd_ptr & ((1ULL << 32) - 1));
|
||||
pgd_high = (pgd_ptr >> 32ULL);
|
||||
|
||||
/*
|
||||
* Call initialization code, and switch to the full blown
|
||||
* HYP code. The init code doesn't need to preserve these registers as
|
||||
* r1-r3 and r12 are already callee save according to the AAPCS.
|
||||
* Note that we slightly misuse the prototype by casing the pgd_low to
|
||||
* a void *.
|
||||
*/
|
||||
kvm_call_hyp((void *)pgd_low, pgd_high, hyp_stack_ptr, vector_ptr);
|
||||
}
|
||||
|
||||
#endif /* __ARM_KVM_HOST_H__ */
|
||||
|
|
|
@ -19,6 +19,18 @@
|
|||
#ifndef __ARM_KVM_MMU_H__
|
||||
#define __ARM_KVM_MMU_H__
|
||||
|
||||
#include <asm/cacheflush.h>
|
||||
#include <asm/pgalloc.h>
|
||||
#include <asm/idmap.h>
|
||||
|
||||
/*
|
||||
* We directly use the kernel VA for the HYP, as we can directly share
|
||||
* the mapping (HTTBR "covers" TTBR1).
|
||||
*/
|
||||
#define HYP_PAGE_OFFSET_MASK (~0UL)
|
||||
#define HYP_PAGE_OFFSET PAGE_OFFSET
|
||||
#define KERN_TO_HYP(kva) (kva)
|
||||
|
||||
int create_hyp_mappings(void *from, void *to);
|
||||
int create_hyp_io_mappings(void *from, void *to, phys_addr_t);
|
||||
void free_hyp_pmds(void);
|
||||
|
@ -36,6 +48,16 @@ phys_addr_t kvm_mmu_get_httbr(void);
|
|||
int kvm_mmu_init(void);
|
||||
void kvm_clear_hyp_idmap(void);
|
||||
|
||||
static inline void kvm_set_pte(pte_t *pte, pte_t new_pte)
|
||||
{
|
||||
pte_val(*pte) = new_pte;
|
||||
/*
|
||||
* flush_pmd_entry just takes a void pointer and cleans the necessary
|
||||
* cache entries, so we can reuse the function for ptes.
|
||||
*/
|
||||
flush_pmd_entry(pte);
|
||||
}
|
||||
|
||||
static inline bool kvm_is_write_fault(unsigned long hsr)
|
||||
{
|
||||
unsigned long hsr_ec = hsr >> HSR_EC_SHIFT;
|
||||
|
@ -47,4 +69,49 @@ static inline bool kvm_is_write_fault(unsigned long hsr)
|
|||
return true;
|
||||
}
|
||||
|
||||
static inline void kvm_clean_pgd(pgd_t *pgd)
|
||||
{
|
||||
clean_dcache_area(pgd, PTRS_PER_S2_PGD * sizeof(pgd_t));
|
||||
}
|
||||
|
||||
static inline void kvm_clean_pmd_entry(pmd_t *pmd)
|
||||
{
|
||||
clean_pmd_entry(pmd);
|
||||
}
|
||||
|
||||
static inline void kvm_clean_pte(pte_t *pte)
|
||||
{
|
||||
clean_pte_table(pte);
|
||||
}
|
||||
|
||||
static inline void kvm_set_s2pte_writable(pte_t *pte)
|
||||
{
|
||||
pte_val(*pte) |= L_PTE_S2_RDWR;
|
||||
}
|
||||
|
||||
struct kvm;
|
||||
|
||||
static inline void coherent_icache_guest_page(struct kvm *kvm, gfn_t gfn)
|
||||
{
|
||||
/*
|
||||
* If we are going to insert an instruction page and the icache is
|
||||
* either VIPT or PIPT, there is a potential problem where the host
|
||||
* (or another VM) may have used the same page as this guest, and we
|
||||
* read incorrect data from the icache. If we're using a PIPT cache,
|
||||
* we can invalidate just that page, but if we are using a VIPT cache
|
||||
* we need to invalidate the entire icache - damn shame - as written
|
||||
* in the ARM ARM (DDI 0406C.b - Page B3-1393).
|
||||
*
|
||||
* VIVT caches are tagged using both the ASID and the VMID and doesn't
|
||||
* need any kind of flushing (DDI 0406C.b - Page B3-1392).
|
||||
*/
|
||||
if (icache_is_pipt()) {
|
||||
unsigned long hva = gfn_to_hva(kvm, gfn);
|
||||
__cpuc_coherent_user_range(hva, hva + PAGE_SIZE);
|
||||
} else if (!icache_is_vivt_asid_tagged()) {
|
||||
/* any kind of VIPT cache */
|
||||
__flush_icache_all();
|
||||
}
|
||||
}
|
||||
|
||||
#endif /* __ARM_KVM_MMU_H__ */
|
||||
|
|
|
@ -21,7 +21,6 @@
|
|||
|
||||
#include <linux/kernel.h>
|
||||
#include <linux/kvm.h>
|
||||
#include <linux/kvm_host.h>
|
||||
#include <linux/irqreturn.h>
|
||||
#include <linux/spinlock.h>
|
||||
#include <linux/types.h>
|
||||
|
|
|
@ -30,6 +30,11 @@ struct hw_pci {
	void		(*postinit)(void);
	u8		(*swizzle)(struct pci_dev *dev, u8 *pin);
	int		(*map_irq)(const struct pci_dev *dev, u8 slot, u8 pin);
	resource_size_t (*align_resource)(struct pci_dev *dev,
					const struct resource *res,
					resource_size_t start,
					resource_size_t size,
					resource_size_t align);
};

/*
@ -51,6 +56,12 @@ struct pci_sys_data {
	u8		(*swizzle)(struct pci_dev *, u8 *);
	/* IRQ mapping				*/
	int		(*map_irq)(const struct pci_dev *, u8, u8);
	/* Resource alignment requirements	*/
	resource_size_t (*align_resource)(struct pci_dev *dev,
					const struct resource *res,
					resource_size_t start,
					resource_size_t size,
					resource_size_t align);
	void *private_data;	/* platform controller private data	*/
};
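A hypothetical example of the new hook (the foo_* names and the 1MiB policy are invented; the usual setup/scan/map_irq members are left out for brevity). Such a structure would be passed to pci_common_init() from the machine's init code, at which point pcibios_align_resource() consults the callback when assigning BARs.

#include <linux/kernel.h>
#include <linux/pci.h>
#include <linux/sizes.h>
#include <asm/mach/pci.h>

static resource_size_t foo_align_resource(struct pci_dev *dev,
					  const struct resource *res,
					  resource_size_t start,
					  resource_size_t size,
					  resource_size_t align)
{
	/* Example policy: force memory BARs onto 1MiB boundaries. */
	if (res->flags & IORESOURCE_MEM)
		return ALIGN(start, SZ_1M);
	return start;
}

static struct hw_pci foo_pci __initdata = {
	.nr_controllers	= 1,
	.align_resource	= foo_align_resource,
	/* .setup, .scan, .map_irq etc. omitted in this sketch */
};
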
@ -0,0 +1,209 @@
/*
 * arch/arm/include/asm/mcpm.h
 *
 * Created by:	Nicolas Pitre, April 2012
 * Copyright:	(C) 2012-2013  Linaro Limited
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 as
 * published by the Free Software Foundation.
 */

#ifndef MCPM_H
#define MCPM_H

/*
 * Maximum number of possible clusters / CPUs per cluster.
 *
 * This should be sufficient for quite a while, while keeping the
 * (assembly) code simpler.  When this starts to grow then we'll have
 * to consider dynamic allocation.
 */
#define MAX_CPUS_PER_CLUSTER	4
#define MAX_NR_CLUSTERS		2

#ifndef __ASSEMBLY__

#include <linux/types.h>
#include <asm/cacheflush.h>

/*
 * Platform specific code should use this symbol to set up secondary
 * entry location for processors to use when released from reset.
 */
extern void mcpm_entry_point(void);

/*
 * This is used to indicate where the given CPU from given cluster should
 * branch once it is ready to re-enter the kernel using ptr, or NULL if it
 * should be gated.  A gated CPU is held in a WFE loop until its vector
 * becomes non NULL.
 */
void mcpm_set_entry_vector(unsigned cpu, unsigned cluster, void *ptr);

/*
 * CPU/cluster power operations API for higher subsystems to use.
 */

/**
 * mcpm_cpu_power_up - make given CPU in given cluster runnable
 *
 * @cpu: CPU number within given cluster
 * @cluster: cluster number for the CPU
 *
 * The identified CPU is brought out of reset.  If the cluster was powered
 * down then it is brought up as well, taking care not to let the other CPUs
 * in the cluster run, and ensuring appropriate cluster setup.
 *
 * Caller must ensure the appropriate entry vector is initialized with
 * mcpm_set_entry_vector() prior to calling this.
 *
 * This must be called in a sleepable context.  However, the implementation
 * is strongly encouraged to return early and let the operation happen
 * asynchronously, especially when significant delays are expected.
 *
 * If the operation cannot be performed then an error code is returned.
 */
int mcpm_cpu_power_up(unsigned int cpu, unsigned int cluster);

/**
 * mcpm_cpu_power_down - power the calling CPU down
 *
 * The calling CPU is powered down.
 *
 * If this CPU is found to be the "last man standing" in the cluster
 * then the cluster is prepared for power-down too.
 *
 * This must be called with interrupts disabled.
 *
 * This does not return.  Re-entry in the kernel is expected via
 * mcpm_entry_point.
 */
void mcpm_cpu_power_down(void);

/**
 * mcpm_cpu_suspend - bring the calling CPU in a suspended state
 *
 * @expected_residency: duration in microseconds the CPU is expected
 *			to remain suspended, or 0 if unknown/infinity.
 *
 * The calling CPU is suspended.  The expected residency argument is used
 * as a hint by the platform specific backend to implement the appropriate
 * sleep state level according to the knowledge it has on wake-up latency
 * for the given hardware.
 *
 * If this CPU is found to be the "last man standing" in the cluster
 * then the cluster may be prepared for power-down too, if the expected
 * residency makes it worthwhile.
 *
 * This must be called with interrupts disabled.
 *
 * This does not return.  Re-entry in the kernel is expected via
 * mcpm_entry_point.
 */
void mcpm_cpu_suspend(u64 expected_residency);

/**
 * mcpm_cpu_powered_up - housekeeping work after a CPU has been powered up
 *
 * This lets the platform specific backend code perform needed housekeeping
 * work.  This must be called by the newly activated CPU as soon as it is
 * fully operational in kernel space, before it enables interrupts.
 *
 * If the operation cannot be performed then an error code is returned.
 */
int mcpm_cpu_powered_up(void);

/*
 * Platform specific methods used in the implementation of the above API.
 */
struct mcpm_platform_ops {
	int (*power_up)(unsigned int cpu, unsigned int cluster);
	void (*power_down)(void);
	void (*suspend)(u64);
	void (*powered_up)(void);
};

/**
 * mcpm_platform_register - register platform specific power methods
 *
 * @ops: mcpm_platform_ops structure to register
 *
 * An error is returned if the registration has been done previously.
 */
int __init mcpm_platform_register(const struct mcpm_platform_ops *ops);
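For orientation, a hedged sketch of how a platform backend is expected to plug into this API, modelled on the platform code added elsewhere in this series. The foo_* names are placeholders, error handling is minimal, and a real backend would also call mcpm_sync_init() (declared just below) with its cluster power-up setup routine.

#include <linux/init.h>
#include <asm/mcpm.h>

static int foo_power_up(unsigned int cpu, unsigned int cluster)
{
	/* Ask the platform's power controller to release this CPU from reset. */
	return 0;
}

static void foo_power_down(void)
{
	/* A real backend does last-man/cluster teardown here and ends in wfi. */
}

static const struct mcpm_platform_ops foo_power_ops = {
	.power_up	= foo_power_up,
	.power_down	= foo_power_down,
};

static int __init foo_mcpm_init(void)
{
	int ret = mcpm_platform_register(&foo_power_ops);

	if (!ret)
		mcpm_smp_set_ops();	/* route SMP boot/hotplug through MCPM */
	return ret;
}
early_initcall(foo_mcpm_init);
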
/* Synchronisation structures for coordinating safe cluster setup/teardown: */

/*
 * When modifying this structure, make sure you update the MCPM_SYNC_ defines
 * to match.
 */
struct mcpm_sync_struct {
	/* individual CPU states */
	struct {
		s8 cpu __aligned(__CACHE_WRITEBACK_GRANULE);
	} cpus[MAX_CPUS_PER_CLUSTER];

	/* cluster state */
	s8 cluster __aligned(__CACHE_WRITEBACK_GRANULE);

	/* inbound-side state */
	s8 inbound __aligned(__CACHE_WRITEBACK_GRANULE);
};

struct sync_struct {
	struct mcpm_sync_struct clusters[MAX_NR_CLUSTERS];
};

extern unsigned long sync_phys;	/* physical address of *mcpm_sync */

void __mcpm_cpu_going_down(unsigned int cpu, unsigned int cluster);
void __mcpm_cpu_down(unsigned int cpu, unsigned int cluster);
void __mcpm_outbound_leave_critical(unsigned int cluster, int state);
bool __mcpm_outbound_enter_critical(unsigned int this_cpu, unsigned int cluster);
int __mcpm_cluster_state(unsigned int cluster);

int __init mcpm_sync_init(
	void (*power_up_setup)(unsigned int affinity_level));

void __init mcpm_smp_set_ops(void);

#else

/*
 * asm-offsets.h causes trouble when included in .c files, and cacheflush.h
 * cannot be included in asm files.  Let's work around the conflict like this.
 */
#include <asm/asm-offsets.h>
#define __CACHE_WRITEBACK_GRANULE CACHE_WRITEBACK_GRANULE

#endif /* ! __ASSEMBLY__ */

/* Definitions for mcpm_sync_struct */
#define CPU_DOWN		0x11
#define CPU_COMING_UP		0x12
#define CPU_UP			0x13
#define CPU_GOING_DOWN		0x14

#define CLUSTER_DOWN		0x21
#define CLUSTER_UP		0x22
#define CLUSTER_GOING_DOWN	0x23

#define INBOUND_NOT_COMING_UP	0x31
#define INBOUND_COMING_UP	0x32

/*
 * Offsets for the mcpm_sync_struct members, for use in asm.
 * We don't want to make them global to the kernel via asm-offsets.c.
 */
#define MCPM_SYNC_CLUSTER_CPUS	0
#define MCPM_SYNC_CPU_SIZE	__CACHE_WRITEBACK_GRANULE
#define MCPM_SYNC_CLUSTER_CLUSTER \
	(MCPM_SYNC_CLUSTER_CPUS + MCPM_SYNC_CPU_SIZE * MAX_CPUS_PER_CLUSTER)
#define MCPM_SYNC_CLUSTER_INBOUND \
	(MCPM_SYNC_CLUSTER_CLUSTER + __CACHE_WRITEBACK_GRANULE)
#define MCPM_SYNC_CLUSTER_SIZE \
	(MCPM_SYNC_CLUSTER_INBOUND + __CACHE_WRITEBACK_GRANULE)

#endif
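To make the arithmetic behind these asm offsets concrete: with the 64-byte __CACHE_WRITEBACK_GRANULE defined in cacheflush.h, the cpus[] array occupies bytes 0-255, cluster lands at offset 256, inbound at 320, and the per-cluster size is 384 — exactly what the macros compute. A hypothetical compile-time check (not part of the patch) tying the C and asm views together might look like this:

#include <linux/bug.h>
#include <linux/compiler.h>
#include <linux/stddef.h>
#include <asm/mcpm.h>

static void __maybe_unused foo_mcpm_layout_checks(void)
{
	BUILD_BUG_ON(offsetof(struct mcpm_sync_struct, cluster) !=
		     MCPM_SYNC_CLUSTER_CLUSTER);	/* 256 with a 64-byte granule */
	BUILD_BUG_ON(offsetof(struct mcpm_sync_struct, inbound) !=
		     MCPM_SYNC_CLUSTER_INBOUND);	/* 320 */
	BUILD_BUG_ON(sizeof(struct mcpm_sync_struct) !=
		     MCPM_SYNC_CLUSTER_SIZE);		/* 384 */
}
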
@ -152,6 +152,7 @@ extern int vfp_restore_user_hwstate(struct user_vfp __user *,
|
|||
#define TIF_SYSCALL_AUDIT 9
|
||||
#define TIF_SYSCALL_TRACEPOINT 10
|
||||
#define TIF_SECCOMP 11 /* seccomp syscall filtering active */
|
||||
#define TIF_NOHZ 12 /* in adaptive nohz mode */
|
||||
#define TIF_USING_IWMMXT 17
|
||||
#define TIF_MEMDIE 18 /* is terminating due to OOM killer */
|
||||
#define TIF_RESTORE_SIGMASK 20
|
||||
|
|
|
@ -166,7 +166,7 @@
|
|||
# define v6wbi_always_flags (-1UL)
|
||||
#endif
|
||||
|
||||
#define v7wbi_tlb_flags_smp (TLB_WB | TLB_DCLEAN | TLB_BARRIER | \
|
||||
#define v7wbi_tlb_flags_smp (TLB_WB | TLB_BARRIER | \
|
||||
TLB_V7_UIS_FULL | TLB_V7_UIS_PAGE | \
|
||||
TLB_V7_UIS_ASID | TLB_V7_UIS_BP)
|
||||
#define v7wbi_tlb_flags_up (TLB_WB | TLB_DCLEAN | TLB_BARRIER | \
|
||||
|
|
|
@ -0,0 +1,7 @@
|
|||
#ifdef CONFIG_DEBUG_UNCOMPRESS
|
||||
extern void putc(int c);
|
||||
#else
|
||||
static inline void putc(int c) {}
|
||||
#endif
|
||||
static inline void flush(void) {}
|
||||
static inline void arch_decomp_setup(void) {}
|
|
@ -53,12 +53,12 @@
|
|||
#define KVM_ARM_FIQ_spsr fiq_regs[7]
|
||||
|
||||
struct kvm_regs {
|
||||
struct pt_regs usr_regs;/* R0_usr - R14_usr, PC, CPSR */
|
||||
__u32 svc_regs[3]; /* SP_svc, LR_svc, SPSR_svc */
|
||||
__u32 abt_regs[3]; /* SP_abt, LR_abt, SPSR_abt */
|
||||
__u32 und_regs[3]; /* SP_und, LR_und, SPSR_und */
|
||||
__u32 irq_regs[3]; /* SP_irq, LR_irq, SPSR_irq */
|
||||
__u32 fiq_regs[8]; /* R8_fiq - R14_fiq, SPSR_fiq */
|
||||
struct pt_regs usr_regs; /* R0_usr - R14_usr, PC, CPSR */
|
||||
unsigned long svc_regs[3]; /* SP_svc, LR_svc, SPSR_svc */
|
||||
unsigned long abt_regs[3]; /* SP_abt, LR_abt, SPSR_abt */
|
||||
unsigned long und_regs[3]; /* SP_und, LR_und, SPSR_und */
|
||||
unsigned long irq_regs[3]; /* SP_irq, LR_irq, SPSR_irq */
|
||||
unsigned long fiq_regs[8]; /* R8_fiq - R14_fiq, SPSR_fiq */
|
||||
};
|
||||
|
||||
/* Supported Processor Types */
|
||||
|
|
|
@ -149,6 +149,10 @@ int main(void)
|
|||
DEFINE(DMA_BIDIRECTIONAL, DMA_BIDIRECTIONAL);
|
||||
DEFINE(DMA_TO_DEVICE, DMA_TO_DEVICE);
|
||||
DEFINE(DMA_FROM_DEVICE, DMA_FROM_DEVICE);
|
||||
BLANK();
|
||||
DEFINE(CACHE_WRITEBACK_ORDER, __CACHE_WRITEBACK_ORDER);
|
||||
DEFINE(CACHE_WRITEBACK_GRANULE, __CACHE_WRITEBACK_GRANULE);
|
||||
BLANK();
|
||||
#ifdef CONFIG_KVM_ARM_HOST
|
||||
DEFINE(VCPU_KVM, offsetof(struct kvm_vcpu, kvm));
|
||||
DEFINE(VCPU_MIDR, offsetof(struct kvm_vcpu, arch.midr));
|
||||
|
@ -165,10 +169,10 @@ int main(void)
|
|||
DEFINE(VCPU_PC, offsetof(struct kvm_vcpu, arch.regs.usr_regs.ARM_pc));
|
||||
DEFINE(VCPU_CPSR, offsetof(struct kvm_vcpu, arch.regs.usr_regs.ARM_cpsr));
|
||||
DEFINE(VCPU_IRQ_LINES, offsetof(struct kvm_vcpu, arch.irq_lines));
|
||||
DEFINE(VCPU_HSR, offsetof(struct kvm_vcpu, arch.hsr));
|
||||
DEFINE(VCPU_HxFAR, offsetof(struct kvm_vcpu, arch.hxfar));
|
||||
DEFINE(VCPU_HPFAR, offsetof(struct kvm_vcpu, arch.hpfar));
|
||||
DEFINE(VCPU_HYP_PC, offsetof(struct kvm_vcpu, arch.hyp_pc));
|
||||
DEFINE(VCPU_HSR, offsetof(struct kvm_vcpu, arch.fault.hsr));
|
||||
DEFINE(VCPU_HxFAR, offsetof(struct kvm_vcpu, arch.fault.hxfar));
|
||||
DEFINE(VCPU_HPFAR, offsetof(struct kvm_vcpu, arch.fault.hpfar));
|
||||
DEFINE(VCPU_HYP_PC, offsetof(struct kvm_vcpu, arch.fault.hyp_pc));
|
||||
#ifdef CONFIG_KVM_ARM_VGIC
|
||||
DEFINE(VCPU_VGIC_CPU, offsetof(struct kvm_vcpu, arch.vgic_cpu));
|
||||
DEFINE(VGIC_CPU_HCR, offsetof(struct vgic_cpu, vgic_hcr));
|
||||
|
|
|
@ -462,6 +462,7 @@ static void pcibios_init_hw(struct hw_pci *hw, struct list_head *head)
|
|||
sys->busnr = busnr;
|
||||
sys->swizzle = hw->swizzle;
|
||||
sys->map_irq = hw->map_irq;
|
||||
sys->align_resource = hw->align_resource;
|
||||
INIT_LIST_HEAD(&sys->resources);
|
||||
|
||||
if (hw->private_data)
|
||||
|
@ -574,6 +575,8 @@ char * __init pcibios_setup(char *str)
|
|||
resource_size_t pcibios_align_resource(void *data, const struct resource *res,
|
||||
resource_size_t size, resource_size_t align)
|
||||
{
|
||||
struct pci_dev *dev = data;
|
||||
struct pci_sys_data *sys = dev->sysdata;
|
||||
resource_size_t start = res->start;
|
||||
|
||||
if (res->flags & IORESOURCE_IO && start & 0x300)
|
||||
|
@ -581,6 +584,9 @@ resource_size_t pcibios_align_resource(void *data, const struct resource *res,
|
|||
|
||||
start = (start + align - 1) & ~(align - 1);
|
||||
|
||||
if (sys->align_resource)
|
||||
return sys->align_resource(dev, res, start, size, align);
|
||||
|
||||
return start;
|
||||
}
|
||||
|
||||
|
|
|
@ -192,18 +192,6 @@ __dabt_svc:
|
|||
svc_entry
|
||||
mov r2, sp
|
||||
dabt_helper
|
||||
|
||||
@
|
||||
@ IRQs off again before pulling preserved data off the stack
|
||||
@
|
||||
disable_irq_notrace
|
||||
|
||||
#ifdef CONFIG_TRACE_IRQFLAGS
|
||||
tst r5, #PSR_I_BIT
|
||||
bleq trace_hardirqs_on
|
||||
tst r5, #PSR_I_BIT
|
||||
blne trace_hardirqs_off
|
||||
#endif
|
||||
svc_exit r5 @ return from exception
|
||||
UNWIND(.fnend )
|
||||
ENDPROC(__dabt_svc)
|
||||
|
@ -223,12 +211,7 @@ __irq_svc:
|
|||
blne svc_preempt
|
||||
#endif
|
||||
|
||||
#ifdef CONFIG_TRACE_IRQFLAGS
|
||||
@ The parent context IRQs must have been enabled to get here in
|
||||
@ the first place, so there's no point checking the PSR I bit.
|
||||
bl trace_hardirqs_on
|
||||
#endif
|
||||
svc_exit r5 @ return from exception
|
||||
svc_exit r5, irq = 1 @ return from exception
|
||||
UNWIND(.fnend )
|
||||
ENDPROC(__irq_svc)
|
||||
|
||||
|
@ -295,22 +278,8 @@ __und_svc_fault:
|
|||
mov r0, sp @ struct pt_regs *regs
|
||||
bl __und_fault
|
||||
|
||||
@
|
||||
@ IRQs off again before pulling preserved data off the stack
|
||||
@
|
||||
__und_svc_finish:
|
||||
disable_irq_notrace
|
||||
|
||||
@
|
||||
@ restore SPSR and restart the instruction
|
||||
@
|
||||
ldr r5, [sp, #S_PSR] @ Get SVC cpsr
|
||||
#ifdef CONFIG_TRACE_IRQFLAGS
|
||||
tst r5, #PSR_I_BIT
|
||||
bleq trace_hardirqs_on
|
||||
tst r5, #PSR_I_BIT
|
||||
blne trace_hardirqs_off
|
||||
#endif
|
||||
svc_exit r5 @ return from exception
|
||||
UNWIND(.fnend )
|
||||
ENDPROC(__und_svc)
|
||||
|
@ -320,18 +289,6 @@ __pabt_svc:
|
|||
svc_entry
|
||||
mov r2, sp @ regs
|
||||
pabt_helper
|
||||
|
||||
@
|
||||
@ IRQs off again before pulling preserved data off the stack
|
||||
@
|
||||
disable_irq_notrace
|
||||
|
||||
#ifdef CONFIG_TRACE_IRQFLAGS
|
||||
tst r5, #PSR_I_BIT
|
||||
bleq trace_hardirqs_on
|
||||
tst r5, #PSR_I_BIT
|
||||
blne trace_hardirqs_off
|
||||
#endif
|
||||
svc_exit r5 @ return from exception
|
||||
UNWIND(.fnend )
|
||||
ENDPROC(__pabt_svc)
|
||||
|
@ -396,6 +353,7 @@ ENDPROC(__pabt_svc)
|
|||
#ifdef CONFIG_IRQSOFF_TRACER
|
||||
bl trace_hardirqs_off
|
||||
#endif
|
||||
ct_user_exit save = 0
|
||||
.endm
|
||||
|
||||
.macro kuser_cmpxchg_check
|
||||
|
@ -562,21 +520,21 @@ ENDPROC(__und_usr)
|
|||
@ Fall-through from Thumb-2 __und_usr
|
||||
@
|
||||
#ifdef CONFIG_NEON
|
||||
get_thread_info r10 @ get current thread
|
||||
adr r6, .LCneon_thumb_opcodes
|
||||
b 2f
|
||||
#endif
|
||||
call_fpe:
|
||||
get_thread_info r10 @ get current thread
|
||||
#ifdef CONFIG_NEON
|
||||
adr r6, .LCneon_arm_opcodes
|
||||
2:
|
||||
ldr r7, [r6], #4 @ mask value
|
||||
cmp r7, #0 @ end mask?
|
||||
beq 1f
|
||||
and r8, r0, r7
|
||||
2: ldr r5, [r6], #4 @ mask value
|
||||
ldr r7, [r6], #4 @ opcode bits matching in mask
|
||||
cmp r5, #0 @ end mask?
|
||||
beq 1f
|
||||
and r8, r0, r5
|
||||
cmp r8, r7 @ NEON instruction?
|
||||
bne 2b
|
||||
get_thread_info r10
|
||||
mov r7, #1
|
||||
strb r7, [r10, #TI_USED_CP + 10] @ mark CP#10 as used
|
||||
strb r7, [r10, #TI_USED_CP + 11] @ mark CP#11 as used
|
||||
|
@ -586,7 +544,6 @@ call_fpe:
|
|||
tst r0, #0x08000000 @ only CDP/CPRT/LDC/STC have bit 27
|
||||
tstne r0, #0x04000000 @ bit 26 set on both ARM and Thumb-2
|
||||
moveq pc, lr
|
||||
get_thread_info r10 @ get current thread
|
||||
and r8, r0, #0x00000f00 @ mask out CP number
|
||||
THUMB( lsr r8, r8, #8 )
|
||||
mov r7, #1
|
||||
|
|
|
@ -35,12 +35,11 @@ ret_fast_syscall:
|
|||
ldr r1, [tsk, #TI_FLAGS]
|
||||
tst r1, #_TIF_WORK_MASK
|
||||
bne fast_work_pending
|
||||
#if defined(CONFIG_IRQSOFF_TRACER)
|
||||
asm_trace_hardirqs_on
|
||||
#endif
|
||||
|
||||
/* perform architecture specific actions before user return */
|
||||
arch_ret_to_user r1, lr
|
||||
ct_user_enter
|
||||
|
||||
restore_user_regs fast = 1, offset = S_OFF
|
||||
UNWIND(.fnend )
|
||||
|
@ -71,11 +70,11 @@ ENTRY(ret_to_user_from_irq)
|
|||
tst r1, #_TIF_WORK_MASK
|
||||
bne work_pending
|
||||
no_work_pending:
|
||||
#if defined(CONFIG_IRQSOFF_TRACER)
|
||||
asm_trace_hardirqs_on
|
||||
#endif
|
||||
|
||||
/* perform architecture specific actions before user return */
|
||||
arch_ret_to_user r1, lr
|
||||
ct_user_enter save = 0
|
||||
|
||||
restore_user_regs fast = 0, offset = 0
|
||||
ENDPROC(ret_to_user_from_irq)
|
||||
|
@ -406,6 +405,7 @@ ENTRY(vector_swi)
|
|||
mcr p15, 0, ip, c1, c0 @ update control register
|
||||
#endif
|
||||
enable_irq
|
||||
ct_user_exit
|
||||
|
||||
get_thread_info tsk
|
||||
adr tbl, sys_call_table @ load syscall table pointer
|
||||
|
|
|
@ -74,7 +74,24 @@
|
|||
.endm
|
||||
|
||||
#ifndef CONFIG_THUMB2_KERNEL
|
||||
.macro svc_exit, rpsr
|
||||
.macro svc_exit, rpsr, irq = 0
|
||||
.if \irq != 0
|
||||
@ IRQs already off
|
||||
#ifdef CONFIG_TRACE_IRQFLAGS
|
||||
@ The parent context IRQs must have been enabled to get here in
|
||||
@ the first place, so there's no point checking the PSR I bit.
|
||||
bl trace_hardirqs_on
|
||||
#endif
|
||||
.else
|
||||
@ IRQs off again before pulling preserved data off the stack
|
||||
disable_irq_notrace
|
||||
#ifdef CONFIG_TRACE_IRQFLAGS
|
||||
tst \rpsr, #PSR_I_BIT
|
||||
bleq trace_hardirqs_on
|
||||
tst \rpsr, #PSR_I_BIT
|
||||
blne trace_hardirqs_off
|
||||
#endif
|
||||
.endif
|
||||
msr spsr_cxsf, \rpsr
|
||||
#if defined(CONFIG_CPU_V6)
|
||||
ldr r0, [sp]
|
||||
|
@ -120,7 +137,24 @@
|
|||
mov pc, \reg
|
||||
.endm
|
||||
#else /* CONFIG_THUMB2_KERNEL */
|
||||
.macro svc_exit, rpsr
|
||||
.macro svc_exit, rpsr, irq = 0
|
||||
.if \irq != 0
|
||||
@ IRQs already off
|
||||
#ifdef CONFIG_TRACE_IRQFLAGS
|
||||
@ The parent context IRQs must have been enabled to get here in
|
||||
@ the first place, so there's no point checking the PSR I bit.
|
||||
bl trace_hardirqs_on
|
||||
#endif
|
||||
.else
|
||||
@ IRQs off again before pulling preserved data off the stack
|
||||
disable_irq_notrace
|
||||
#ifdef CONFIG_TRACE_IRQFLAGS
|
||||
tst \rpsr, #PSR_I_BIT
|
||||
bleq trace_hardirqs_on
|
||||
tst \rpsr, #PSR_I_BIT
|
||||
blne trace_hardirqs_off
|
||||
#endif
|
||||
.endif
|
||||
ldr lr, [sp, #S_SP] @ top of the stack
|
||||
ldrd r0, r1, [sp, #S_LR] @ calling lr and pc
|
||||
clrex @ clear the exclusive monitor
|
||||
|
@ -163,6 +197,34 @@
|
|||
.endm
|
||||
#endif /* !CONFIG_THUMB2_KERNEL */
|
||||
|
||||
/*
|
||||
* Context tracking subsystem. Used to instrument transitions
|
||||
* between user and kernel mode.
|
||||
*/
|
||||
.macro ct_user_exit, save = 1
|
||||
#ifdef CONFIG_CONTEXT_TRACKING
|
||||
.if \save
|
||||
stmdb sp!, {r0-r3, ip, lr}
|
||||
bl user_exit
|
||||
ldmia sp!, {r0-r3, ip, lr}
|
||||
.else
|
||||
bl user_exit
|
||||
.endif
|
||||
#endif
|
||||
.endm
|
||||
|
||||
.macro ct_user_enter, save = 1
|
||||
#ifdef CONFIG_CONTEXT_TRACKING
|
||||
.if \save
|
||||
stmdb sp!, {r0-r3, ip, lr}
|
||||
bl user_enter
|
||||
ldmia sp!, {r0-r3, ip, lr}
|
||||
.else
|
||||
bl user_enter
|
||||
.endif
|
||||
#endif
|
||||
.endm
|
||||
|
||||
/*
|
||||
* These are the registers used in the syscall handler, and allow us to
|
||||
* have in theory up to 7 arguments to a function - r0 to r6.
|
||||
|
|
|
@ -98,8 +98,9 @@ __mmap_switched:
|
|||
str r9, [r4] @ Save processor ID
|
||||
str r1, [r5] @ Save machine type
|
||||
str r2, [r6] @ Save atags pointer
|
||||
bic r4, r0, #CR_A @ Clear 'A' bit
|
||||
stmia r7, {r0, r4} @ Save control register values
|
||||
cmp r7, #0
|
||||
bicne r4, r0, #CR_A @ Clear 'A' bit
|
||||
stmneia r7, {r0, r4} @ Save control register values
|
||||
b start_kernel
|
||||
ENDPROC(__mmap_switched)
|
||||
|
||||
|
@ -113,7 +114,11 @@ __mmap_switched_data:
|
|||
.long processor_id @ r4
|
||||
.long __machine_arch_type @ r5
|
||||
.long __atags_pointer @ r6
|
||||
#ifdef CONFIG_CPU_CP15
|
||||
.long cr_alignment @ r7
|
||||
#else
|
||||
.long 0 @ r7
|
||||
#endif
|
||||
.long init_thread_union + THREAD_START_SP @ sp
|
||||
.size __mmap_switched_data, . - __mmap_switched_data
|
||||
|
||||
|
|
|
@ -32,15 +32,21 @@
|
|||
* numbers for r1.
|
||||
*
|
||||
*/
|
||||
.arm
|
||||
|
||||
__HEAD
|
||||
|
||||
#ifdef CONFIG_CPU_THUMBONLY
|
||||
.thumb
|
||||
ENTRY(stext)
|
||||
#else
|
||||
.arm
|
||||
ENTRY(stext)
|
||||
|
||||
THUMB( adr r9, BSYM(1f) ) @ Kernel is always entered in ARM.
|
||||
THUMB( bx r9 ) @ If this is a Thumb-2 kernel,
|
||||
THUMB( .thumb ) @ switch to Thumb now.
|
||||
THUMB(1: )
|
||||
#endif
|
||||
|
||||
setmode PSR_F_BIT | PSR_I_BIT | SVC_MODE, r9 @ ensure svc mode
|
||||
@ and irqs disabled
|
||||
|
|
|
@ -407,15 +407,16 @@ unsigned long arch_randomize_brk(struct mm_struct *mm)
|
|||
* atomic helpers and the signal restart code. Insert it into the
|
||||
* gate_vma so that it is visible through ptrace and /proc/<pid>/mem.
|
||||
*/
|
||||
static struct vm_area_struct gate_vma;
|
||||
static struct vm_area_struct gate_vma = {
|
||||
.vm_start = 0xffff0000,
|
||||
.vm_end = 0xffff0000 + PAGE_SIZE,
|
||||
.vm_flags = VM_READ | VM_EXEC | VM_MAYREAD | VM_MAYEXEC,
|
||||
.vm_mm = &init_mm,
|
||||
};
|
||||
|
||||
static int __init gate_vma_init(void)
|
||||
{
|
||||
gate_vma.vm_start = 0xffff0000;
|
||||
gate_vma.vm_end = 0xffff0000 + PAGE_SIZE;
|
||||
gate_vma.vm_page_prot = PAGE_READONLY_EXEC;
|
||||
gate_vma.vm_flags = VM_READ | VM_EXEC |
|
||||
VM_MAYREAD | VM_MAYEXEC;
|
||||
gate_vma.vm_page_prot = PAGE_READONLY_EXEC;
|
||||
return 0;
|
||||
}
|
||||
arch_initcall(gate_vma_init);
|
||||
|
|
|
@ -26,7 +26,7 @@ static int save_return_addr(struct stackframe *frame, void *d)
|
|||
struct return_address_data *data = d;
|
||||
|
||||
if (!data->level) {
|
||||
data->addr = (void *)frame->lr;
|
||||
data->addr = (void *)frame->pc;
|
||||
|
||||
return 1;
|
||||
} else {
|
||||
|
@ -41,7 +41,8 @@ void *return_address(unsigned int level)
|
|||
struct stackframe frame;
|
||||
register unsigned long current_sp asm ("sp");
|
||||
|
||||
data.level = level + 1;
|
||||
data.level = level + 2;
|
||||
data.addr = NULL;
|
||||
|
||||
frame.fp = (unsigned long)__builtin_frame_address(0);
|
||||
frame.sp = current_sp;
|
||||
|
|
|
@ -290,10 +290,10 @@ static int cpu_has_aliasing_icache(unsigned int arch)
|
|||
|
||||
static void __init cacheid_init(void)
|
||||
{
|
||||
unsigned int cachetype = read_cpuid_cachetype();
|
||||
unsigned int arch = cpu_architecture();
|
||||
|
||||
if (arch >= CPU_ARCH_ARMv6) {
|
||||
unsigned int cachetype = read_cpuid_cachetype();
|
||||
if ((cachetype & (7 << 29)) == 4 << 29) {
|
||||
/* ARMv7 register format */
|
||||
arch = CPU_ARCH_ARMv7;
|
||||
|
@ -389,7 +389,7 @@ static void __init feat_v6_fixup(void)
|
|||
*
|
||||
* cpu_init sets up the per-CPU stacks.
|
||||
*/
|
||||
void cpu_init(void)
|
||||
void notrace cpu_init(void)
|
||||
{
|
||||
unsigned int cpu = smp_processor_id();
|
||||
struct stack *stk = &stacks[cpu];
|
||||
|
|
|
@ -211,6 +211,13 @@ void __cpuinit __cpu_die(unsigned int cpu)
|
|||
}
|
||||
printk(KERN_NOTICE "CPU%u: shutdown\n", cpu);
|
||||
|
||||
/*
|
||||
* platform_cpu_kill() is generally expected to do the powering off
|
||||
* and/or cutting of clocks to the dying CPU. Optionally, this may
|
||||
* be done by the CPU which is dying in preference to supporting
|
||||
* this call, but that means there is _no_ synchronisation between
|
||||
* the requesting CPU and the dying CPU actually losing power.
|
||||
*/
|
||||
if (!platform_cpu_kill(cpu))
|
||||
printk("CPU%u: unable to kill\n", cpu);
|
||||
}
|
||||
|
@ -230,14 +237,41 @@ void __ref cpu_die(void)
|
|||
idle_task_exit();
|
||||
|
||||
local_irq_disable();
|
||||
mb();
|
||||
|
||||
/* Tell __cpu_die() that this CPU is now safe to dispose of */
|
||||
/*
|
||||
* Flush the data out of the L1 cache for this CPU. This must be
|
||||
* before the completion to ensure that data is safely written out
|
||||
* before platform_cpu_kill() gets called - which may disable
|
||||
* *this* CPU and power down its cache.
|
||||
*/
|
||||
flush_cache_louis();
|
||||
|
||||
/*
|
||||
* Tell __cpu_die() that this CPU is now safe to dispose of. Once
|
||||
* this returns, power and/or clocks can be removed at any point
|
||||
* from this CPU and its cache by platform_cpu_kill().
|
||||
*/
|
||||
RCU_NONIDLE(complete(&cpu_died));
|
||||
|
||||
/*
|
||||
* actual CPU shutdown procedure is at least platform (if not
|
||||
* CPU) specific.
|
||||
* Ensure that the cache lines associated with that completion are
|
||||
* written out. This covers the case where _this_ CPU is doing the
|
||||
* powering down, to ensure that the completion is visible to the
|
||||
* CPU waiting for this one.
|
||||
*/
|
||||
flush_cache_louis();
|
||||
|
||||
/*
|
||||
* The actual CPU shutdown procedure is at least platform (if not
|
||||
* CPU) specific. This may remove power, or it may simply spin.
|
||||
*
|
||||
* Platforms are generally expected *NOT* to return from this call,
|
||||
* although there are some which do because they have no way to
|
||||
* power down the CPU. These platforms are the _only_ reason we
|
||||
* have a return path which uses the fragment of assembly below.
|
||||
*
|
||||
* The return path should not be used for platforms which can
|
||||
* power off the CPU.
|
||||
*/
|
||||
if (smp_ops.cpu_die)
|
||||
smp_ops.cpu_die(cpu);
|
||||
|
|
|
@ -41,7 +41,7 @@ void scu_enable(void __iomem *scu_base)
|
|||
|
||||
#ifdef CONFIG_ARM_ERRATA_764369
|
||||
/* Cortex-A9 only */
|
||||
if ((read_cpuid(CPUID_ID) & 0xff0ffff0) == 0x410fc090) {
|
||||
if ((read_cpuid_id() & 0xff0ffff0) == 0x410fc090) {
|
||||
scu_ctrl = __raw_readl(scu_base + 0x30);
|
||||
if (!(scu_ctrl & 1))
|
||||
__raw_writel(scu_ctrl | 0x1, scu_base + 0x30);
|
||||
|
|
|
@ -98,21 +98,21 @@ static void broadcast_tlb_a15_erratum(void)
|
|||
return;
|
||||
|
||||
dummy_flush_tlb_a15_erratum();
|
||||
smp_call_function_many(cpu_online_mask, ipi_flush_tlb_a15_erratum,
|
||||
NULL, 1);
|
||||
smp_call_function(ipi_flush_tlb_a15_erratum, NULL, 1);
|
||||
}
|
||||
|
||||
static void broadcast_tlb_mm_a15_erratum(struct mm_struct *mm)
|
||||
{
|
||||
int cpu;
|
||||
int cpu, this_cpu;
|
||||
cpumask_t mask = { CPU_BITS_NONE };
|
||||
|
||||
if (!erratum_a15_798181())
|
||||
return;
|
||||
|
||||
dummy_flush_tlb_a15_erratum();
|
||||
this_cpu = get_cpu();
|
||||
for_each_online_cpu(cpu) {
|
||||
if (cpu == smp_processor_id())
|
||||
if (cpu == this_cpu)
|
||||
continue;
|
||||
/*
|
||||
* We only need to send an IPI if the other CPUs are running
|
||||
|
@ -127,6 +127,7 @@ static void broadcast_tlb_mm_a15_erratum(struct mm_struct *mm)
|
|||
cpumask_set_cpu(cpu, &mask);
|
||||
}
|
||||
smp_call_function_many(&mask, ipi_flush_tlb_a15_erratum, NULL, 1);
|
||||
put_cpu();
|
||||
}
|
||||
|
||||
void flush_tlb_all(void)
|
||||
|
|
|
@ -17,7 +17,7 @@ AFLAGS_interrupts.o := -Wa,-march=armv7-a$(plus_virt)
|
|||
kvm-arm-y = $(addprefix ../../../virt/kvm/, kvm_main.o coalesced_mmio.o)
|
||||
|
||||
obj-y += kvm-arm.o init.o interrupts.o
|
||||
obj-y += arm.o guest.o mmu.o emulate.o reset.o
|
||||
obj-y += arm.o handle_exit.o guest.o mmu.o emulate.o reset.o
|
||||
obj-y += coproc.o coproc_a15.o mmio.o psci.o
|
||||
obj-$(CONFIG_KVM_ARM_VGIC) += vgic.o
|
||||
obj-$(CONFIG_KVM_ARM_TIMER) += arch_timer.o
|
||||
|
|
|
@ -30,11 +30,9 @@
|
|||
#define CREATE_TRACE_POINTS
|
||||
#include "trace.h"
|
||||
|
||||
#include <asm/unified.h>
|
||||
#include <asm/uaccess.h>
|
||||
#include <asm/ptrace.h>
|
||||
#include <asm/mman.h>
|
||||
#include <asm/cputype.h>
|
||||
#include <asm/tlbflush.h>
|
||||
#include <asm/cacheflush.h>
|
||||
#include <asm/virt.h>
|
||||
|
@ -44,14 +42,13 @@
|
|||
#include <asm/kvm_emulate.h>
|
||||
#include <asm/kvm_coproc.h>
|
||||
#include <asm/kvm_psci.h>
|
||||
#include <asm/opcodes.h>
|
||||
|
||||
#ifdef REQUIRES_VIRT
|
||||
__asm__(".arch_extension virt");
|
||||
#endif
|
||||
|
||||
static DEFINE_PER_CPU(unsigned long, kvm_arm_hyp_stack_page);
|
||||
static struct vfp_hard_struct __percpu *kvm_host_vfp_state;
|
||||
static kvm_kernel_vfp_t __percpu *kvm_host_vfp_state;
|
||||
static unsigned long hyp_default_vectors;
|
||||
|
||||
/* Per-CPU variable containing the currently running vcpu. */
|
||||
|
@ -304,22 +301,6 @@ int kvm_cpu_has_pending_timer(struct kvm_vcpu *vcpu)
|
|||
return 0;
|
||||
}
|
||||
|
||||
int __attribute_const__ kvm_target_cpu(void)
|
||||
{
|
||||
unsigned long implementor = read_cpuid_implementor();
|
||||
unsigned long part_number = read_cpuid_part_number();
|
||||
|
||||
if (implementor != ARM_CPU_IMP_ARM)
|
||||
return -EINVAL;
|
||||
|
||||
switch (part_number) {
|
||||
case ARM_CPU_PART_CORTEX_A15:
|
||||
return KVM_ARM_TARGET_CORTEX_A15;
|
||||
default:
|
||||
return -EINVAL;
|
||||
}
|
||||
}
|
||||
|
||||
int kvm_arch_vcpu_init(struct kvm_vcpu *vcpu)
|
||||
{
|
||||
int ret;
|
||||
|
@ -482,163 +463,6 @@ static void update_vttbr(struct kvm *kvm)
|
|||
spin_unlock(&kvm_vmid_lock);
|
||||
}
|
||||
|
||||
static int handle_svc_hyp(struct kvm_vcpu *vcpu, struct kvm_run *run)
|
||||
{
|
||||
/* SVC called from Hyp mode should never get here */
|
||||
kvm_debug("SVC called from Hyp mode shouldn't go here\n");
|
||||
BUG();
|
||||
return -EINVAL; /* Squash warning */
|
||||
}
|
||||
|
||||
static int handle_hvc(struct kvm_vcpu *vcpu, struct kvm_run *run)
|
||||
{
|
||||
trace_kvm_hvc(*vcpu_pc(vcpu), *vcpu_reg(vcpu, 0),
|
||||
vcpu->arch.hsr & HSR_HVC_IMM_MASK);
|
||||
|
||||
if (kvm_psci_call(vcpu))
|
||||
return 1;
|
||||
|
||||
kvm_inject_undefined(vcpu);
|
||||
return 1;
|
||||
}
|
||||
|
||||
static int handle_smc(struct kvm_vcpu *vcpu, struct kvm_run *run)
|
||||
{
|
||||
if (kvm_psci_call(vcpu))
|
||||
return 1;
|
||||
|
||||
kvm_inject_undefined(vcpu);
|
||||
return 1;
|
||||
}
|
||||
|
||||
static int handle_pabt_hyp(struct kvm_vcpu *vcpu, struct kvm_run *run)
|
||||
{
|
||||
/* The hypervisor should never cause aborts */
|
||||
kvm_err("Prefetch Abort taken from Hyp mode at %#08x (HSR: %#08x)\n",
|
||||
vcpu->arch.hxfar, vcpu->arch.hsr);
|
||||
return -EFAULT;
|
||||
}
|
||||
|
||||
static int handle_dabt_hyp(struct kvm_vcpu *vcpu, struct kvm_run *run)
|
||||
{
|
||||
/* This is either an error in the ws. code or an external abort */
|
||||
kvm_err("Data Abort taken from Hyp mode at %#08x (HSR: %#08x)\n",
|
||||
vcpu->arch.hxfar, vcpu->arch.hsr);
|
||||
return -EFAULT;
|
||||
}
|
||||
|
||||
typedef int (*exit_handle_fn)(struct kvm_vcpu *, struct kvm_run *);
|
||||
static exit_handle_fn arm_exit_handlers[] = {
|
||||
[HSR_EC_WFI] = kvm_handle_wfi,
|
||||
[HSR_EC_CP15_32] = kvm_handle_cp15_32,
|
||||
[HSR_EC_CP15_64] = kvm_handle_cp15_64,
|
||||
[HSR_EC_CP14_MR] = kvm_handle_cp14_access,
|
||||
[HSR_EC_CP14_LS] = kvm_handle_cp14_load_store,
|
||||
[HSR_EC_CP14_64] = kvm_handle_cp14_access,
|
||||
[HSR_EC_CP_0_13] = kvm_handle_cp_0_13_access,
|
||||
[HSR_EC_CP10_ID] = kvm_handle_cp10_id,
|
||||
[HSR_EC_SVC_HYP] = handle_svc_hyp,
|
||||
[HSR_EC_HVC] = handle_hvc,
|
||||
[HSR_EC_SMC] = handle_smc,
|
||||
[HSR_EC_IABT] = kvm_handle_guest_abort,
|
||||
[HSR_EC_IABT_HYP] = handle_pabt_hyp,
|
||||
[HSR_EC_DABT] = kvm_handle_guest_abort,
|
||||
[HSR_EC_DABT_HYP] = handle_dabt_hyp,
|
||||
};
|
||||
|
||||
/*
|
||||
* A conditional instruction is allowed to trap, even though it
|
||||
* wouldn't be executed. So let's re-implement the hardware, in
|
||||
* software!
|
||||
*/
|
||||
static bool kvm_condition_valid(struct kvm_vcpu *vcpu)
|
||||
{
|
||||
unsigned long cpsr, cond, insn;
|
||||
|
||||
/*
|
||||
* Exception Code 0 can only happen if we set HCR.TGE to 1, to
|
||||
* catch undefined instructions, and then we won't get past
|
||||
* the arm_exit_handlers test anyway.
|
||||
*/
|
||||
BUG_ON(((vcpu->arch.hsr & HSR_EC) >> HSR_EC_SHIFT) == 0);
|
||||
|
||||
/* Top two bits non-zero? Unconditional. */
|
||||
if (vcpu->arch.hsr >> 30)
|
||||
return true;
|
||||
|
||||
cpsr = *vcpu_cpsr(vcpu);
|
||||
|
||||
/* Is condition field valid? */
|
||||
if ((vcpu->arch.hsr & HSR_CV) >> HSR_CV_SHIFT)
|
||||
cond = (vcpu->arch.hsr & HSR_COND) >> HSR_COND_SHIFT;
|
||||
else {
|
||||
/* This can happen in Thumb mode: examine IT state. */
|
||||
unsigned long it;
|
||||
|
||||
it = ((cpsr >> 8) & 0xFC) | ((cpsr >> 25) & 0x3);
|
||||
|
||||
/* it == 0 => unconditional. */
|
||||
if (it == 0)
|
||||
return true;
|
||||
|
||||
/* The cond for this insn works out as the top 4 bits. */
|
||||
cond = (it >> 4);
|
||||
}
|
||||
|
||||
/* Shift makes it look like an ARM-mode instruction */
|
||||
insn = cond << 28;
|
||||
return arm_check_condition(insn, cpsr) != ARM_OPCODE_CONDTEST_FAIL;
|
||||
}
|
||||
|
||||
/*
|
||||
* Return > 0 to return to guest, < 0 on error, 0 (and set exit_reason) on
|
||||
* proper exit to QEMU.
|
||||
*/
|
||||
static int handle_exit(struct kvm_vcpu *vcpu, struct kvm_run *run,
|
||||
int exception_index)
|
||||
{
|
||||
unsigned long hsr_ec;
|
||||
|
||||
switch (exception_index) {
|
||||
case ARM_EXCEPTION_IRQ:
|
||||
return 1;
|
||||
case ARM_EXCEPTION_UNDEFINED:
|
||||
kvm_err("Undefined exception in Hyp mode at: %#08x\n",
|
||||
vcpu->arch.hyp_pc);
|
||||
BUG();
|
||||
panic("KVM: Hypervisor undefined exception!\n");
|
||||
case ARM_EXCEPTION_DATA_ABORT:
|
||||
case ARM_EXCEPTION_PREF_ABORT:
|
||||
case ARM_EXCEPTION_HVC:
|
||||
hsr_ec = (vcpu->arch.hsr & HSR_EC) >> HSR_EC_SHIFT;
|
||||
|
||||
if (hsr_ec >= ARRAY_SIZE(arm_exit_handlers)
|
||||
|| !arm_exit_handlers[hsr_ec]) {
|
||||
kvm_err("Unknown exception class: %#08lx, "
|
||||
"hsr: %#08x\n", hsr_ec,
|
||||
(unsigned int)vcpu->arch.hsr);
|
||||
BUG();
|
||||
}
|
||||
|
||||
/*
|
||||
* See ARM ARM B1.14.1: "Hyp traps on instructions
|
||||
* that fail their condition code check"
|
||||
*/
|
||||
if (!kvm_condition_valid(vcpu)) {
|
||||
bool is_wide = vcpu->arch.hsr & HSR_IL;
|
||||
kvm_skip_instr(vcpu, is_wide);
|
||||
return 1;
|
||||
}
|
||||
|
||||
return arm_exit_handlers[hsr_ec](vcpu, run);
|
||||
default:
|
||||
kvm_pr_unimpl("Unsupported exception type: %d",
|
||||
exception_index);
|
||||
run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
|
||||
return 0;
|
||||
}
|
||||
}
|
||||
|
||||
static int kvm_vcpu_first_run_init(struct kvm_vcpu *vcpu)
|
||||
{
|
||||
if (likely(vcpu->arch.has_run_once))
|
||||
|
@ -973,7 +797,6 @@ long kvm_arch_vm_ioctl(struct file *filp,
|
|||
static void cpu_init_hyp_mode(void *vector)
|
||||
{
|
||||
unsigned long long pgd_ptr;
|
||||
unsigned long pgd_low, pgd_high;
|
||||
unsigned long hyp_stack_ptr;
|
||||
unsigned long stack_page;
|
||||
unsigned long vector_ptr;
|
||||
|
@ -982,20 +805,11 @@ static void cpu_init_hyp_mode(void *vector)
|
|||
__hyp_set_vectors((unsigned long)vector);
|
||||
|
||||
pgd_ptr = (unsigned long long)kvm_mmu_get_httbr();
|
||||
pgd_low = (pgd_ptr & ((1ULL << 32) - 1));
|
||||
pgd_high = (pgd_ptr >> 32ULL);
|
||||
stack_page = __get_cpu_var(kvm_arm_hyp_stack_page);
|
||||
hyp_stack_ptr = stack_page + PAGE_SIZE;
|
||||
vector_ptr = (unsigned long)__kvm_hyp_vector;
|
||||
|
||||
/*
|
||||
* Call initialization code, and switch to the full blown
|
||||
* HYP code. The init code doesn't need to preserve these registers as
|
||||
* r1-r3 and r12 are already callee save according to the AAPCS.
|
||||
* Note that we slightly misuse the prototype by casing the pgd_low to
|
||||
* a void *.
|
||||
*/
|
||||
kvm_call_hyp((void *)pgd_low, pgd_high, hyp_stack_ptr, vector_ptr);
|
||||
__cpu_init_hyp_mode(pgd_ptr, hyp_stack_ptr, vector_ptr);
|
||||
}
|
||||
|
||||
/**
|
||||
|
@ -1078,7 +892,7 @@ static int init_hyp_mode(void)
|
|||
/*
|
||||
* Map the host VFP structures
|
||||
*/
|
||||
kvm_host_vfp_state = alloc_percpu(struct vfp_hard_struct);
|
||||
kvm_host_vfp_state = alloc_percpu(kvm_kernel_vfp_t);
|
||||
if (!kvm_host_vfp_state) {
|
||||
err = -ENOMEM;
|
||||
kvm_err("Cannot allocate host VFP state\n");
|
||||
|
@ -1086,7 +900,7 @@ static int init_hyp_mode(void)
|
|||
}
|
||||
|
||||
for_each_possible_cpu(cpu) {
|
||||
struct vfp_hard_struct *vfp;
|
||||
kvm_kernel_vfp_t *vfp;
|
||||
|
||||
vfp = per_cpu_ptr(kvm_host_vfp_state, cpu);
|
||||
err = create_hyp_mappings(vfp, vfp + 1);
|
||||
|
|
|
@@ -76,7 +76,7 @@ static bool access_dcsw(struct kvm_vcpu *vcpu,
            const struct coproc_params *p,
            const struct coproc_reg *r)
{
    u32 val;
    unsigned long val;
    int cpu;

    if (!p->is_write)

@@ -293,12 +293,12 @@ static int emulate_cp15(struct kvm_vcpu *vcpu,

        if (likely(r->access(vcpu, params, r))) {
            /* Skip instruction, since it was emulated */
            kvm_skip_instr(vcpu, (vcpu->arch.hsr >> 25) & 1);
            kvm_skip_instr(vcpu, kvm_vcpu_trap_il_is32bit(vcpu));
            return 1;
        }
        /* If access function fails, it should complain. */
    } else {
        kvm_err("Unsupported guest CP15 access at: %08x\n",
        kvm_err("Unsupported guest CP15 access at: %08lx\n",
            *vcpu_pc(vcpu));
        print_cp_instr(params);
    }

@@ -315,14 +315,14 @@ int kvm_handle_cp15_64(struct kvm_vcpu *vcpu, struct kvm_run *run)
{
    struct coproc_params params;

    params.CRm = (vcpu->arch.hsr >> 1) & 0xf;
    params.Rt1 = (vcpu->arch.hsr >> 5) & 0xf;
    params.is_write = ((vcpu->arch.hsr & 1) == 0);
    params.CRm = (kvm_vcpu_get_hsr(vcpu) >> 1) & 0xf;
    params.Rt1 = (kvm_vcpu_get_hsr(vcpu) >> 5) & 0xf;
    params.is_write = ((kvm_vcpu_get_hsr(vcpu) & 1) == 0);
    params.is_64bit = true;

    params.Op1 = (vcpu->arch.hsr >> 16) & 0xf;
    params.Op1 = (kvm_vcpu_get_hsr(vcpu) >> 16) & 0xf;
    params.Op2 = 0;
    params.Rt2 = (vcpu->arch.hsr >> 10) & 0xf;
    params.Rt2 = (kvm_vcpu_get_hsr(vcpu) >> 10) & 0xf;
    params.CRn = 0;

    return emulate_cp15(vcpu, &params);

@@ -347,14 +347,14 @@ int kvm_handle_cp15_32(struct kvm_vcpu *vcpu, struct kvm_run *run)
{
    struct coproc_params params;

    params.CRm = (vcpu->arch.hsr >> 1) & 0xf;
    params.Rt1 = (vcpu->arch.hsr >> 5) & 0xf;
    params.is_write = ((vcpu->arch.hsr & 1) == 0);
    params.CRm = (kvm_vcpu_get_hsr(vcpu) >> 1) & 0xf;
    params.Rt1 = (kvm_vcpu_get_hsr(vcpu) >> 5) & 0xf;
    params.is_write = ((kvm_vcpu_get_hsr(vcpu) & 1) == 0);
    params.is_64bit = false;

    params.CRn = (vcpu->arch.hsr >> 10) & 0xf;
    params.Op1 = (vcpu->arch.hsr >> 14) & 0x7;
    params.Op2 = (vcpu->arch.hsr >> 17) & 0x7;
    params.CRn = (kvm_vcpu_get_hsr(vcpu) >> 10) & 0xf;
    params.Op1 = (kvm_vcpu_get_hsr(vcpu) >> 14) & 0x7;
    params.Op2 = (kvm_vcpu_get_hsr(vcpu) >> 17) & 0x7;
    params.Rt2 = 0;

    return emulate_cp15(vcpu, &params);

@@ -84,7 +84,7 @@ static inline bool read_zero(struct kvm_vcpu *vcpu,
static inline bool write_to_read_only(struct kvm_vcpu *vcpu,
                      const struct coproc_params *params)
{
    kvm_debug("CP15 write to read-only register at: %08x\n",
    kvm_debug("CP15 write to read-only register at: %08lx\n",
          *vcpu_pc(vcpu));
    print_cp_instr(params);
    return false;

@@ -93,7 +93,7 @@ static inline bool write_to_read_only(struct kvm_vcpu *vcpu,
static inline bool read_from_write_only(struct kvm_vcpu *vcpu,
                      const struct coproc_params *params)
{
    kvm_debug("CP15 read to write-only register at: %08x\n",
    kvm_debug("CP15 read to write-only register at: %08lx\n",
          *vcpu_pc(vcpu));
    print_cp_instr(params);
    return false;

@ -20,6 +20,7 @@
|
|||
#include <linux/kvm_host.h>
|
||||
#include <asm/kvm_arm.h>
|
||||
#include <asm/kvm_emulate.h>
|
||||
#include <asm/opcodes.h>
|
||||
#include <trace/events/kvm.h>
|
||||
|
||||
#include "trace.h"
|
||||
|
@ -109,10 +110,10 @@ static const unsigned long vcpu_reg_offsets[VCPU_NR_MODES][15] = {
|
|||
* Return a pointer to the register number valid in the current mode of
|
||||
* the virtual CPU.
|
||||
*/
|
||||
u32 *vcpu_reg(struct kvm_vcpu *vcpu, u8 reg_num)
|
||||
unsigned long *vcpu_reg(struct kvm_vcpu *vcpu, u8 reg_num)
|
||||
{
|
||||
u32 *reg_array = (u32 *)&vcpu->arch.regs;
|
||||
u32 mode = *vcpu_cpsr(vcpu) & MODE_MASK;
|
||||
unsigned long *reg_array = (unsigned long *)&vcpu->arch.regs;
|
||||
unsigned long mode = *vcpu_cpsr(vcpu) & MODE_MASK;
|
||||
|
||||
switch (mode) {
|
||||
case USR_MODE...SVC_MODE:
|
||||
|
@ -141,9 +142,9 @@ u32 *vcpu_reg(struct kvm_vcpu *vcpu, u8 reg_num)
|
|||
/*
|
||||
* Return the SPSR for the current mode of the virtual CPU.
|
||||
*/
|
||||
u32 *vcpu_spsr(struct kvm_vcpu *vcpu)
|
||||
unsigned long *vcpu_spsr(struct kvm_vcpu *vcpu)
|
||||
{
|
||||
u32 mode = *vcpu_cpsr(vcpu) & MODE_MASK;
|
||||
unsigned long mode = *vcpu_cpsr(vcpu) & MODE_MASK;
|
||||
switch (mode) {
|
||||
case SVC_MODE:
|
||||
return &vcpu->arch.regs.KVM_ARM_SVC_spsr;
|
||||
|
@ -160,20 +161,48 @@ u32 *vcpu_spsr(struct kvm_vcpu *vcpu)
|
|||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* kvm_handle_wfi - handle a wait-for-interrupts instruction executed by a guest
|
||||
* @vcpu: the vcpu pointer
|
||||
* @run: the kvm_run structure pointer
|
||||
*
|
||||
* Simply sets the wait_for_interrupts flag on the vcpu structure, which will
|
||||
* halt execution of world-switches and schedule other host processes until
|
||||
* there is an incoming IRQ or FIQ to the VM.
|
||||
/*
|
||||
* A conditional instruction is allowed to trap, even though it
|
||||
* wouldn't be executed. So let's re-implement the hardware, in
|
||||
* software!
|
||||
*/
|
||||
int kvm_handle_wfi(struct kvm_vcpu *vcpu, struct kvm_run *run)
|
||||
bool kvm_condition_valid(struct kvm_vcpu *vcpu)
|
||||
{
|
||||
trace_kvm_wfi(*vcpu_pc(vcpu));
|
||||
kvm_vcpu_block(vcpu);
|
||||
return 1;
|
||||
unsigned long cpsr, cond, insn;
|
||||
|
||||
/*
|
||||
* Exception Code 0 can only happen if we set HCR.TGE to 1, to
|
||||
* catch undefined instructions, and then we won't get past
|
||||
* the arm_exit_handlers test anyway.
|
||||
*/
|
||||
BUG_ON(!kvm_vcpu_trap_get_class(vcpu));
|
||||
|
||||
/* Top two bits non-zero? Unconditional. */
|
||||
if (kvm_vcpu_get_hsr(vcpu) >> 30)
|
||||
return true;
|
||||
|
||||
cpsr = *vcpu_cpsr(vcpu);
|
||||
|
||||
/* Is condition field valid? */
|
||||
if ((kvm_vcpu_get_hsr(vcpu) & HSR_CV) >> HSR_CV_SHIFT)
|
||||
cond = (kvm_vcpu_get_hsr(vcpu) & HSR_COND) >> HSR_COND_SHIFT;
|
||||
else {
|
||||
/* This can happen in Thumb mode: examine IT state. */
|
||||
unsigned long it;
|
||||
|
||||
it = ((cpsr >> 8) & 0xFC) | ((cpsr >> 25) & 0x3);
|
||||
|
||||
/* it == 0 => unconditional. */
|
||||
if (it == 0)
|
||||
return true;
|
||||
|
||||
/* The cond for this insn works out as the top 4 bits. */
|
||||
cond = (it >> 4);
|
||||
}
|
||||
|
||||
/* Shift makes it look like an ARM-mode instruction */
|
||||
insn = cond << 28;
|
||||
return arm_check_condition(insn, cpsr) != ARM_OPCODE_CONDTEST_FAIL;
|
||||
}
|
||||
|
||||
/**
|
||||
|
@ -257,9 +286,9 @@ static u32 exc_vector_base(struct kvm_vcpu *vcpu)
|
|||
*/
|
||||
void kvm_inject_undefined(struct kvm_vcpu *vcpu)
|
||||
{
|
||||
u32 new_lr_value;
|
||||
u32 new_spsr_value;
|
||||
u32 cpsr = *vcpu_cpsr(vcpu);
|
||||
unsigned long new_lr_value;
|
||||
unsigned long new_spsr_value;
|
||||
unsigned long cpsr = *vcpu_cpsr(vcpu);
|
||||
u32 sctlr = vcpu->arch.cp15[c1_SCTLR];
|
||||
bool is_thumb = (cpsr & PSR_T_BIT);
|
||||
u32 vect_offset = 4;
|
||||
|
@ -291,9 +320,9 @@ void kvm_inject_undefined(struct kvm_vcpu *vcpu)
|
|||
*/
|
||||
static void inject_abt(struct kvm_vcpu *vcpu, bool is_pabt, unsigned long addr)
|
||||
{
|
||||
u32 new_lr_value;
|
||||
u32 new_spsr_value;
|
||||
u32 cpsr = *vcpu_cpsr(vcpu);
|
||||
unsigned long new_lr_value;
|
||||
unsigned long new_spsr_value;
|
||||
unsigned long cpsr = *vcpu_cpsr(vcpu);
|
||||
u32 sctlr = vcpu->arch.cp15[c1_SCTLR];
|
||||
bool is_thumb = (cpsr & PSR_T_BIT);
|
||||
u32 vect_offset;
|
||||
|
|
|
@@ -22,6 +22,7 @@
#include <linux/module.h>
#include <linux/vmalloc.h>
#include <linux/fs.h>
#include <asm/cputype.h>
#include <asm/uaccess.h>
#include <asm/kvm.h>
#include <asm/kvm_asm.h>

@@ -180,6 +181,22 @@ int kvm_arch_vcpu_ioctl_set_sregs(struct kvm_vcpu *vcpu,
    return -EINVAL;
}

int __attribute_const__ kvm_target_cpu(void)
{
    unsigned long implementor = read_cpuid_implementor();
    unsigned long part_number = read_cpuid_part_number();

    if (implementor != ARM_CPU_IMP_ARM)
        return -EINVAL;

    switch (part_number) {
    case ARM_CPU_PART_CORTEX_A15:
        return KVM_ARM_TARGET_CORTEX_A15;
    default:
        return -EINVAL;
    }
}

int kvm_vcpu_set_target(struct kvm_vcpu *vcpu,
            const struct kvm_vcpu_init *init)
{

@ -0,0 +1,164 @@
|
|||
/*
|
||||
* Copyright (C) 2012 - Virtual Open Systems and Columbia University
|
||||
* Author: Christoffer Dall <c.dall@virtualopensystems.com>
|
||||
*
|
||||
* This program is free software; you can redistribute it and/or modify
|
||||
* it under the terms of the GNU General Public License, version 2, as
|
||||
* published by the Free Software Foundation.
|
||||
*
|
||||
* This program is distributed in the hope that it will be useful,
|
||||
* but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
* GNU General Public License for more details.
|
||||
*
|
||||
* You should have received a copy of the GNU General Public License
|
||||
* along with this program; if not, write to the Free Software
|
||||
* Foundation, 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
|
||||
*/
|
||||
|
||||
#include <linux/kvm.h>
|
||||
#include <linux/kvm_host.h>
|
||||
#include <asm/kvm_emulate.h>
|
||||
#include <asm/kvm_coproc.h>
|
||||
#include <asm/kvm_mmu.h>
|
||||
#include <asm/kvm_psci.h>
|
||||
#include <trace/events/kvm.h>
|
||||
|
||||
#include "trace.h"
|
||||
|
||||
#include "trace.h"
|
||||
|
||||
typedef int (*exit_handle_fn)(struct kvm_vcpu *, struct kvm_run *);
|
||||
|
||||
static int handle_svc_hyp(struct kvm_vcpu *vcpu, struct kvm_run *run)
|
||||
{
|
||||
/* SVC called from Hyp mode should never get here */
|
||||
kvm_debug("SVC called from Hyp mode shouldn't go here\n");
|
||||
BUG();
|
||||
return -EINVAL; /* Squash warning */
|
||||
}
|
||||
|
||||
static int handle_hvc(struct kvm_vcpu *vcpu, struct kvm_run *run)
|
||||
{
|
||||
trace_kvm_hvc(*vcpu_pc(vcpu), *vcpu_reg(vcpu, 0),
|
||||
kvm_vcpu_hvc_get_imm(vcpu));
|
||||
|
||||
if (kvm_psci_call(vcpu))
|
||||
return 1;
|
||||
|
||||
kvm_inject_undefined(vcpu);
|
||||
return 1;
|
||||
}
|
||||
|
||||
static int handle_smc(struct kvm_vcpu *vcpu, struct kvm_run *run)
|
||||
{
|
||||
if (kvm_psci_call(vcpu))
|
||||
return 1;
|
||||
|
||||
kvm_inject_undefined(vcpu);
|
||||
return 1;
|
||||
}
|
||||
|
||||
static int handle_pabt_hyp(struct kvm_vcpu *vcpu, struct kvm_run *run)
|
||||
{
|
||||
/* The hypervisor should never cause aborts */
|
||||
kvm_err("Prefetch Abort taken from Hyp mode at %#08lx (HSR: %#08x)\n",
|
||||
kvm_vcpu_get_hfar(vcpu), kvm_vcpu_get_hsr(vcpu));
|
||||
return -EFAULT;
|
||||
}
|
||||
|
||||
static int handle_dabt_hyp(struct kvm_vcpu *vcpu, struct kvm_run *run)
|
||||
{
|
||||
/* This is either an error in the ws. code or an external abort */
|
||||
kvm_err("Data Abort taken from Hyp mode at %#08lx (HSR: %#08x)\n",
|
||||
kvm_vcpu_get_hfar(vcpu), kvm_vcpu_get_hsr(vcpu));
|
||||
return -EFAULT;
|
||||
}
|
||||
|
||||
/**
|
||||
* kvm_handle_wfi - handle a wait-for-interrupts instruction executed by a guest
|
||||
* @vcpu: the vcpu pointer
|
||||
* @run: the kvm_run structure pointer
|
||||
*
|
||||
* Simply sets the wait_for_interrupts flag on the vcpu structure, which will
|
||||
* halt execution of world-switches and schedule other host processes until
|
||||
* there is an incoming IRQ or FIQ to the VM.
|
||||
*/
|
||||
static int kvm_handle_wfi(struct kvm_vcpu *vcpu, struct kvm_run *run)
|
||||
{
|
||||
trace_kvm_wfi(*vcpu_pc(vcpu));
|
||||
kvm_vcpu_block(vcpu);
|
||||
return 1;
|
||||
}
|
||||
|
||||
static exit_handle_fn arm_exit_handlers[] = {
|
||||
[HSR_EC_WFI] = kvm_handle_wfi,
|
||||
[HSR_EC_CP15_32] = kvm_handle_cp15_32,
|
||||
[HSR_EC_CP15_64] = kvm_handle_cp15_64,
|
||||
[HSR_EC_CP14_MR] = kvm_handle_cp14_access,
|
||||
[HSR_EC_CP14_LS] = kvm_handle_cp14_load_store,
|
||||
[HSR_EC_CP14_64] = kvm_handle_cp14_access,
|
||||
[HSR_EC_CP_0_13] = kvm_handle_cp_0_13_access,
|
||||
[HSR_EC_CP10_ID] = kvm_handle_cp10_id,
|
||||
[HSR_EC_SVC_HYP] = handle_svc_hyp,
|
||||
[HSR_EC_HVC] = handle_hvc,
|
||||
[HSR_EC_SMC] = handle_smc,
|
||||
[HSR_EC_IABT] = kvm_handle_guest_abort,
|
||||
[HSR_EC_IABT_HYP] = handle_pabt_hyp,
|
||||
[HSR_EC_DABT] = kvm_handle_guest_abort,
|
||||
[HSR_EC_DABT_HYP] = handle_dabt_hyp,
|
||||
};
|
||||
|
||||
static exit_handle_fn kvm_get_exit_handler(struct kvm_vcpu *vcpu)
|
||||
{
|
||||
u8 hsr_ec = kvm_vcpu_trap_get_class(vcpu);
|
||||
|
||||
if (hsr_ec >= ARRAY_SIZE(arm_exit_handlers) ||
|
||||
!arm_exit_handlers[hsr_ec]) {
|
||||
kvm_err("Unknown exception class: hsr: %#08x\n",
|
||||
(unsigned int)kvm_vcpu_get_hsr(vcpu));
|
||||
BUG();
|
||||
}
|
||||
|
||||
return arm_exit_handlers[hsr_ec];
|
||||
}
|
||||
|
||||
/*
|
||||
* Return > 0 to return to guest, < 0 on error, 0 (and set exit_reason) on
|
||||
* proper exit to userspace.
|
||||
*/
|
||||
int handle_exit(struct kvm_vcpu *vcpu, struct kvm_run *run,
|
||||
int exception_index)
|
||||
{
|
||||
exit_handle_fn exit_handler;
|
||||
|
||||
switch (exception_index) {
|
||||
case ARM_EXCEPTION_IRQ:
|
||||
return 1;
|
||||
case ARM_EXCEPTION_UNDEFINED:
|
||||
kvm_err("Undefined exception in Hyp mode at: %#08lx\n",
|
||||
kvm_vcpu_get_hyp_pc(vcpu));
|
||||
BUG();
|
||||
panic("KVM: Hypervisor undefined exception!\n");
|
||||
case ARM_EXCEPTION_DATA_ABORT:
|
||||
case ARM_EXCEPTION_PREF_ABORT:
|
||||
case ARM_EXCEPTION_HVC:
|
||||
/*
|
||||
* See ARM ARM B1.14.1: "Hyp traps on instructions
|
||||
* that fail their condition code check"
|
||||
*/
|
||||
if (!kvm_condition_valid(vcpu)) {
|
||||
kvm_skip_instr(vcpu, kvm_vcpu_trap_il_is32bit(vcpu));
|
||||
return 1;
|
||||
}
|
||||
|
||||
exit_handler = kvm_get_exit_handler(vcpu);
|
||||
|
||||
return exit_handler(vcpu, run);
|
||||
default:
|
||||
kvm_pr_unimpl("Unsupported exception type: %d",
|
||||
exception_index);
|
||||
run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
|
||||
return 0;
|
||||
}
|
||||
}
|
|
@ -35,15 +35,18 @@ __kvm_hyp_code_start:
|
|||
/********************************************************************
|
||||
* Flush per-VMID TLBs
|
||||
*
|
||||
* void __kvm_tlb_flush_vmid(struct kvm *kvm);
|
||||
* void __kvm_tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa);
|
||||
*
|
||||
* We rely on the hardware to broadcast the TLB invalidation to all CPUs
|
||||
* inside the inner-shareable domain (which is the case for all v7
|
||||
* implementations). If we come across a non-IS SMP implementation, we'll
|
||||
* have to use an IPI based mechanism. Until then, we stick to the simple
|
||||
* hardware assisted version.
|
||||
*
|
||||
* As v7 does not support flushing per IPA, just nuke the whole TLB
|
||||
* instead, ignoring the ipa value.
|
||||
*/
|
||||
ENTRY(__kvm_tlb_flush_vmid)
|
||||
ENTRY(__kvm_tlb_flush_vmid_ipa)
|
||||
push {r2, r3}
|
||||
|
||||
add r0, r0, #KVM_VTTBR
|
||||
|
@ -60,7 +63,7 @@ ENTRY(__kvm_tlb_flush_vmid)
|
|||
|
||||
pop {r2, r3}
|
||||
bx lr
|
||||
ENDPROC(__kvm_tlb_flush_vmid)
|
||||
ENDPROC(__kvm_tlb_flush_vmid_ipa)
|
||||
|
||||
/********************************************************************
|
||||
* Flush TLBs and instruction caches of all CPUs inside the inner-shareable
|
||||
|
@ -235,9 +238,9 @@ ENTRY(kvm_call_hyp)
|
|||
* instruction is issued since all traps are disabled when running the host
|
||||
* kernel as per the Hyp-mode initialization at boot time.
|
||||
*
|
||||
* HVC instructions cause a trap to the vector page + offset 0x18 (see hyp_hvc
|
||||
* HVC instructions cause a trap to the vector page + offset 0x14 (see hyp_hvc
|
||||
* below) when the HVC instruction is called from SVC mode (i.e. a guest or the
|
||||
* host kernel) and they cause a trap to the vector page + offset 0xc when HVC
|
||||
* host kernel) and they cause a trap to the vector page + offset 0x8 when HVC
|
||||
* instructions are called from within Hyp-mode.
|
||||
*
|
||||
* Hyp-ABI: Calling HYP-mode functions from host (in SVC mode):
|
||||
|
|
|
@ -33,16 +33,16 @@
|
|||
*/
|
||||
int kvm_handle_mmio_return(struct kvm_vcpu *vcpu, struct kvm_run *run)
|
||||
{
|
||||
__u32 *dest;
|
||||
unsigned long *dest;
|
||||
unsigned int len;
|
||||
int mask;
|
||||
|
||||
if (!run->mmio.is_write) {
|
||||
dest = vcpu_reg(vcpu, vcpu->arch.mmio_decode.rt);
|
||||
memset(dest, 0, sizeof(int));
|
||||
*dest = 0;
|
||||
|
||||
len = run->mmio.len;
|
||||
if (len > 4)
|
||||
if (len > sizeof(unsigned long))
|
||||
return -EINVAL;
|
||||
|
||||
memcpy(dest, run->mmio.data, len);
|
||||
|
@ -50,7 +50,8 @@ int kvm_handle_mmio_return(struct kvm_vcpu *vcpu, struct kvm_run *run)
|
|||
trace_kvm_mmio(KVM_TRACE_MMIO_READ, len, run->mmio.phys_addr,
|
||||
*((u64 *)run->mmio.data));
|
||||
|
||||
if (vcpu->arch.mmio_decode.sign_extend && len < 4) {
|
||||
if (vcpu->arch.mmio_decode.sign_extend &&
|
||||
len < sizeof(unsigned long)) {
|
||||
mask = 1U << ((len * 8) - 1);
|
||||
*dest = (*dest ^ mask) - mask;
|
||||
}
|
||||
|
@ -65,40 +66,29 @@ static int decode_hsr(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
|
|||
unsigned long rt, len;
|
||||
bool is_write, sign_extend;
|
||||
|
||||
if ((vcpu->arch.hsr >> 8) & 1) {
|
||||
if (kvm_vcpu_dabt_isextabt(vcpu)) {
|
||||
/* cache operation on I/O addr, tell guest unsupported */
|
||||
kvm_inject_dabt(vcpu, vcpu->arch.hxfar);
|
||||
kvm_inject_dabt(vcpu, kvm_vcpu_get_hfar(vcpu));
|
||||
return 1;
|
||||
}
|
||||
|
||||
if ((vcpu->arch.hsr >> 7) & 1) {
|
||||
if (kvm_vcpu_dabt_iss1tw(vcpu)) {
|
||||
/* page table accesses IO mem: tell guest to fix its TTBR */
|
||||
kvm_inject_dabt(vcpu, vcpu->arch.hxfar);
|
||||
kvm_inject_dabt(vcpu, kvm_vcpu_get_hfar(vcpu));
|
||||
return 1;
|
||||
}
|
||||
|
||||
switch ((vcpu->arch.hsr >> 22) & 0x3) {
|
||||
case 0:
|
||||
len = 1;
|
||||
break;
|
||||
case 1:
|
||||
len = 2;
|
||||
break;
|
||||
case 2:
|
||||
len = 4;
|
||||
break;
|
||||
default:
|
||||
kvm_err("Hardware is weird: SAS 0b11 is reserved\n");
|
||||
return -EFAULT;
|
||||
}
|
||||
len = kvm_vcpu_dabt_get_as(vcpu);
|
||||
if (unlikely(len < 0))
|
||||
return len;
|
||||
|
||||
is_write = vcpu->arch.hsr & HSR_WNR;
|
||||
sign_extend = vcpu->arch.hsr & HSR_SSE;
|
||||
rt = (vcpu->arch.hsr & HSR_SRT_MASK) >> HSR_SRT_SHIFT;
|
||||
is_write = kvm_vcpu_dabt_iswrite(vcpu);
|
||||
sign_extend = kvm_vcpu_dabt_issext(vcpu);
|
||||
rt = kvm_vcpu_dabt_get_rd(vcpu);
|
||||
|
||||
if (kvm_vcpu_reg_is_pc(vcpu, rt)) {
|
||||
/* IO memory trying to read/write pc */
|
||||
kvm_inject_pabt(vcpu, vcpu->arch.hxfar);
|
||||
kvm_inject_pabt(vcpu, kvm_vcpu_get_hfar(vcpu));
|
||||
return 1;
|
||||
}
|
||||
|
||||
|
@ -112,7 +102,7 @@ static int decode_hsr(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
|
|||
* The MMIO instruction is emulated and should not be re-executed
|
||||
* in the guest.
|
||||
*/
|
||||
kvm_skip_instr(vcpu, (vcpu->arch.hsr >> 25) & 1);
|
||||
kvm_skip_instr(vcpu, kvm_vcpu_trap_il_is32bit(vcpu));
|
||||
return 0;
|
||||
}
|
||||
|
||||
|
@ -130,7 +120,7 @@ int io_mem_abort(struct kvm_vcpu *vcpu, struct kvm_run *run,
|
|||
* space do its magic.
|
||||
*/
|
||||
|
||||
if (vcpu->arch.hsr & HSR_ISV) {
|
||||
if (kvm_vcpu_dabt_isvalid(vcpu)) {
|
||||
ret = decode_hsr(vcpu, fault_ipa, &mmio);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
|
|
@ -20,7 +20,6 @@
|
|||
#include <linux/kvm_host.h>
|
||||
#include <linux/io.h>
|
||||
#include <trace/events/kvm.h>
|
||||
#include <asm/idmap.h>
|
||||
#include <asm/pgalloc.h>
|
||||
#include <asm/cacheflush.h>
|
||||
#include <asm/kvm_arm.h>
|
||||
|
@ -28,8 +27,6 @@
|
|||
#include <asm/kvm_mmio.h>
|
||||
#include <asm/kvm_asm.h>
|
||||
#include <asm/kvm_emulate.h>
|
||||
#include <asm/mach/map.h>
|
||||
#include <trace/events/kvm.h>
|
||||
|
||||
#include "trace.h"
|
||||
|
||||
|
@ -37,19 +34,9 @@ extern char __hyp_idmap_text_start[], __hyp_idmap_text_end[];
|
|||
|
||||
static DEFINE_MUTEX(kvm_hyp_pgd_mutex);
|
||||
|
||||
static void kvm_tlb_flush_vmid(struct kvm *kvm)
|
||||
static void kvm_tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa)
|
||||
{
|
||||
kvm_call_hyp(__kvm_tlb_flush_vmid, kvm);
|
||||
}
|
||||
|
||||
static void kvm_set_pte(pte_t *pte, pte_t new_pte)
|
||||
{
|
||||
pte_val(*pte) = new_pte;
|
||||
/*
|
||||
* flush_pmd_entry just takes a void pointer and cleans the necessary
|
||||
* cache entries, so we can reuse the function for ptes.
|
||||
*/
|
||||
flush_pmd_entry(pte);
|
||||
kvm_call_hyp(__kvm_tlb_flush_vmid_ipa, kvm, ipa);
|
||||
}
|
||||
|
||||
static int mmu_topup_memory_cache(struct kvm_mmu_memory_cache *cache,
|
||||
|
@ -98,33 +85,42 @@ static void free_ptes(pmd_t *pmd, unsigned long addr)
|
|||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* free_hyp_pmds - free a Hyp-mode level-2 tables and child level-3 tables
|
||||
*
|
||||
* Assumes this is a page table used strictly in Hyp-mode and therefore contains
|
||||
* only mappings in the kernel memory area, which is above PAGE_OFFSET.
|
||||
*/
|
||||
void free_hyp_pmds(void)
|
||||
static void free_hyp_pgd_entry(unsigned long addr)
|
||||
{
|
||||
pgd_t *pgd;
|
||||
pud_t *pud;
|
||||
pmd_t *pmd;
|
||||
unsigned long hyp_addr = KERN_TO_HYP(addr);
|
||||
|
||||
pgd = hyp_pgd + pgd_index(hyp_addr);
|
||||
pud = pud_offset(pgd, hyp_addr);
|
||||
|
||||
if (pud_none(*pud))
|
||||
return;
|
||||
BUG_ON(pud_bad(*pud));
|
||||
|
||||
pmd = pmd_offset(pud, hyp_addr);
|
||||
free_ptes(pmd, addr);
|
||||
pmd_free(NULL, pmd);
|
||||
pud_clear(pud);
|
||||
}
|
||||
|
||||
/**
|
||||
* free_hyp_pmds - free a Hyp-mode level-2 tables and child level-3 tables
|
||||
*
|
||||
* Assumes this is a page table used strictly in Hyp-mode and therefore contains
|
||||
* either mappings in the kernel memory area (above PAGE_OFFSET), or
|
||||
* device mappings in the vmalloc range (from VMALLOC_START to VMALLOC_END).
|
||||
*/
|
||||
void free_hyp_pmds(void)
|
||||
{
|
||||
unsigned long addr;
|
||||
|
||||
mutex_lock(&kvm_hyp_pgd_mutex);
|
||||
for (addr = PAGE_OFFSET; addr != 0; addr += PGDIR_SIZE) {
|
||||
pgd = hyp_pgd + pgd_index(addr);
|
||||
pud = pud_offset(pgd, addr);
|
||||
|
||||
if (pud_none(*pud))
|
||||
continue;
|
||||
BUG_ON(pud_bad(*pud));
|
||||
|
||||
pmd = pmd_offset(pud, addr);
|
||||
free_ptes(pmd, addr);
|
||||
pmd_free(NULL, pmd);
|
||||
pud_clear(pud);
|
||||
}
|
||||
for (addr = PAGE_OFFSET; virt_addr_valid(addr); addr += PGDIR_SIZE)
|
||||
free_hyp_pgd_entry(addr);
|
||||
for (addr = VMALLOC_START; is_vmalloc_addr((void*)addr); addr += PGDIR_SIZE)
|
||||
free_hyp_pgd_entry(addr);
|
||||
mutex_unlock(&kvm_hyp_pgd_mutex);
|
||||
}
|
||||
|
||||
|
@ -136,7 +132,9 @@ static void create_hyp_pte_mappings(pmd_t *pmd, unsigned long start,
|
|||
struct page *page;
|
||||
|
||||
for (addr = start & PAGE_MASK; addr < end; addr += PAGE_SIZE) {
|
||||
pte = pte_offset_kernel(pmd, addr);
|
||||
unsigned long hyp_addr = KERN_TO_HYP(addr);
|
||||
|
||||
pte = pte_offset_kernel(pmd, hyp_addr);
|
||||
BUG_ON(!virt_addr_valid(addr));
|
||||
page = virt_to_page(addr);
|
||||
kvm_set_pte(pte, mk_pte(page, PAGE_HYP));
|
||||
|
@ -151,7 +149,9 @@ static void create_hyp_io_pte_mappings(pmd_t *pmd, unsigned long start,
|
|||
unsigned long addr;
|
||||
|
||||
for (addr = start & PAGE_MASK; addr < end; addr += PAGE_SIZE) {
|
||||
pte = pte_offset_kernel(pmd, addr);
|
||||
unsigned long hyp_addr = KERN_TO_HYP(addr);
|
||||
|
||||
pte = pte_offset_kernel(pmd, hyp_addr);
|
||||
BUG_ON(pfn_valid(*pfn_base));
|
||||
kvm_set_pte(pte, pfn_pte(*pfn_base, PAGE_HYP_DEVICE));
|
||||
(*pfn_base)++;
|
||||
|
@ -166,12 +166,13 @@ static int create_hyp_pmd_mappings(pud_t *pud, unsigned long start,
|
|||
unsigned long addr, next;
|
||||
|
||||
for (addr = start; addr < end; addr = next) {
|
||||
pmd = pmd_offset(pud, addr);
|
||||
unsigned long hyp_addr = KERN_TO_HYP(addr);
|
||||
pmd = pmd_offset(pud, hyp_addr);
|
||||
|
||||
BUG_ON(pmd_sect(*pmd));
|
||||
|
||||
if (pmd_none(*pmd)) {
|
||||
pte = pte_alloc_one_kernel(NULL, addr);
|
||||
pte = pte_alloc_one_kernel(NULL, hyp_addr);
|
||||
if (!pte) {
|
||||
kvm_err("Cannot allocate Hyp pte\n");
|
||||
return -ENOMEM;
|
||||
|
@ -206,17 +207,23 @@ static int __create_hyp_mappings(void *from, void *to, unsigned long *pfn_base)
|
|||
unsigned long addr, next;
|
||||
int err = 0;
|
||||
|
||||
BUG_ON(start > end);
|
||||
if (start < PAGE_OFFSET)
|
||||
if (start >= end)
|
||||
return -EINVAL;
|
||||
/* Check for a valid kernel memory mapping */
|
||||
if (!pfn_base && (!virt_addr_valid(from) || !virt_addr_valid(to - 1)))
|
||||
return -EINVAL;
|
||||
/* Check for a valid kernel IO mapping */
|
||||
if (pfn_base && (!is_vmalloc_addr(from) || !is_vmalloc_addr(to - 1)))
|
||||
return -EINVAL;
|
||||
|
||||
mutex_lock(&kvm_hyp_pgd_mutex);
|
||||
for (addr = start; addr < end; addr = next) {
|
||||
pgd = hyp_pgd + pgd_index(addr);
|
||||
pud = pud_offset(pgd, addr);
|
||||
unsigned long hyp_addr = KERN_TO_HYP(addr);
|
||||
pgd = hyp_pgd + pgd_index(hyp_addr);
|
||||
pud = pud_offset(pgd, hyp_addr);
|
||||
|
||||
if (pud_none_or_clear_bad(pud)) {
|
||||
pmd = pmd_alloc_one(NULL, addr);
|
||||
pmd = pmd_alloc_one(NULL, hyp_addr);
|
||||
if (!pmd) {
|
||||
kvm_err("Cannot allocate Hyp pmd\n");
|
||||
err = -ENOMEM;
|
||||
|
@ -236,12 +243,13 @@ out:
|
|||
}
|
||||
|
||||
/**
|
||||
* create_hyp_mappings - map a kernel virtual address range in Hyp mode
|
||||
* create_hyp_mappings - duplicate a kernel virtual address range in Hyp mode
|
||||
* @from: The virtual kernel start address of the range
|
||||
* @to: The virtual kernel end address of the range (exclusive)
|
||||
*
|
||||
* The same virtual address as the kernel virtual address is also used in
|
||||
* Hyp-mode mapping to the same underlying physical pages.
|
||||
* The same virtual address as the kernel virtual address is also used
|
||||
* in Hyp-mode mapping (modulo HYP_PAGE_OFFSET) to the same underlying
|
||||
* physical pages.
|
||||
*
|
||||
* Note: Wrapping around zero in the "to" address is not supported.
|
||||
*/
|
||||
|
@ -251,10 +259,13 @@ int create_hyp_mappings(void *from, void *to)
|
|||
}
|
||||
|
||||
/**
|
||||
* create_hyp_io_mappings - map a physical IO range in Hyp mode
|
||||
* @from: The virtual HYP start address of the range
|
||||
* @to: The virtual HYP end address of the range (exclusive)
|
||||
* create_hyp_io_mappings - duplicate a kernel IO mapping into Hyp mode
|
||||
* @from: The kernel start VA of the range
|
||||
* @to: The kernel end VA of the range (exclusive)
|
||||
* @addr: The physical start address which gets mapped
|
||||
*
|
||||
* The resulting HYP VA is the same as the kernel VA, modulo
|
||||
* HYP_PAGE_OFFSET.
|
||||
*/
|
||||
int create_hyp_io_mappings(void *from, void *to, phys_addr_t addr)
|
||||
{
|
||||
|
@ -290,7 +301,7 @@ int kvm_alloc_stage2_pgd(struct kvm *kvm)
|
|||
VM_BUG_ON((unsigned long)pgd & (S2_PGD_SIZE - 1));
|
||||
|
||||
memset(pgd, 0, PTRS_PER_S2_PGD * sizeof(pgd_t));
|
||||
clean_dcache_area(pgd, PTRS_PER_S2_PGD * sizeof(pgd_t));
|
||||
kvm_clean_pgd(pgd);
|
||||
kvm->arch.pgd = pgd;
|
||||
|
||||
return 0;
|
||||
|
@ -422,22 +433,22 @@ static int stage2_set_pte(struct kvm *kvm, struct kvm_mmu_memory_cache *cache,
|
|||
return 0; /* ignore calls from kvm_set_spte_hva */
|
||||
pmd = mmu_memory_cache_alloc(cache);
|
||||
pud_populate(NULL, pud, pmd);
|
||||
pmd += pmd_index(addr);
|
||||
get_page(virt_to_page(pud));
|
||||
} else
|
||||
pmd = pmd_offset(pud, addr);
|
||||
}
|
||||
|
||||
pmd = pmd_offset(pud, addr);
|
||||
|
||||
/* Create 2nd stage page table mapping - Level 2 */
|
||||
if (pmd_none(*pmd)) {
|
||||
if (!cache)
|
||||
return 0; /* ignore calls from kvm_set_spte_hva */
|
||||
pte = mmu_memory_cache_alloc(cache);
|
||||
clean_pte_table(pte);
|
||||
kvm_clean_pte(pte);
|
||||
pmd_populate_kernel(NULL, pmd, pte);
|
||||
pte += pte_index(addr);
|
||||
get_page(virt_to_page(pmd));
|
||||
} else
|
||||
pte = pte_offset_kernel(pmd, addr);
|
||||
}
|
||||
|
||||
pte = pte_offset_kernel(pmd, addr);
|
||||
|
||||
if (iomap && pte_present(*pte))
|
||||
return -EFAULT;
|
||||
|
@ -446,7 +457,7 @@ static int stage2_set_pte(struct kvm *kvm, struct kvm_mmu_memory_cache *cache,
|
|||
old_pte = *pte;
|
||||
kvm_set_pte(pte, *new_pte);
|
||||
if (pte_present(old_pte))
|
||||
kvm_tlb_flush_vmid(kvm);
|
||||
kvm_tlb_flush_vmid_ipa(kvm, addr);
|
||||
else
|
||||
get_page(virt_to_page(pte));
|
||||
|
||||
|
@ -473,7 +484,8 @@ int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
|
|||
pfn = __phys_to_pfn(pa);
|
||||
|
||||
for (addr = guest_ipa; addr < end; addr += PAGE_SIZE) {
|
||||
pte_t pte = pfn_pte(pfn, PAGE_S2_DEVICE | L_PTE_S2_RDWR);
|
||||
pte_t pte = pfn_pte(pfn, PAGE_S2_DEVICE);
|
||||
kvm_set_s2pte_writable(&pte);
|
||||
|
||||
ret = mmu_topup_memory_cache(&cache, 2, 2);
|
||||
if (ret)
|
||||
|
@ -492,29 +504,6 @@ out:
|
|||
return ret;
|
||||
}
|
||||
|
||||
static void coherent_icache_guest_page(struct kvm *kvm, gfn_t gfn)
|
||||
{
|
||||
/*
|
||||
* If we are going to insert an instruction page and the icache is
|
||||
* either VIPT or PIPT, there is a potential problem where the host
|
||||
* (or another VM) may have used the same page as this guest, and we
|
||||
* read incorrect data from the icache. If we're using a PIPT cache,
|
||||
* we can invalidate just that page, but if we are using a VIPT cache
|
||||
* we need to invalidate the entire icache - damn shame - as written
|
||||
* in the ARM ARM (DDI 0406C.b - Page B3-1393).
|
||||
*
|
||||
* VIVT caches are tagged using both the ASID and the VMID and doesn't
|
||||
* need any kind of flushing (DDI 0406C.b - Page B3-1392).
|
||||
*/
|
||||
if (icache_is_pipt()) {
|
||||
unsigned long hva = gfn_to_hva(kvm, gfn);
|
||||
__cpuc_coherent_user_range(hva, hva + PAGE_SIZE);
|
||||
} else if (!icache_is_vivt_asid_tagged()) {
|
||||
/* any kind of VIPT cache */
|
||||
__flush_icache_all();
|
||||
}
|
||||
}
|
||||
|
||||
static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
|
||||
gfn_t gfn, struct kvm_memory_slot *memslot,
|
||||
unsigned long fault_status)
|
||||
|
@ -526,7 +515,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
|
|||
unsigned long mmu_seq;
|
||||
struct kvm_mmu_memory_cache *memcache = &vcpu->arch.mmu_page_cache;
|
||||
|
||||
write_fault = kvm_is_write_fault(vcpu->arch.hsr);
|
||||
write_fault = kvm_is_write_fault(kvm_vcpu_get_hsr(vcpu));
|
||||
if (fault_status == FSC_PERM && !write_fault) {
|
||||
kvm_err("Unexpected L2 read permission error\n");
|
||||
return -EFAULT;
|
||||
|
@ -560,7 +549,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
|
|||
if (mmu_notifier_retry(vcpu->kvm, mmu_seq))
|
||||
goto out_unlock;
|
||||
if (writable) {
|
||||
pte_val(new_pte) |= L_PTE_S2_RDWR;
|
||||
kvm_set_s2pte_writable(&new_pte);
|
||||
kvm_set_pfn_dirty(pfn);
|
||||
}
|
||||
stage2_set_pte(vcpu->kvm, memcache, fault_ipa, &new_pte, false);
|
||||
|
@ -585,7 +574,6 @@ out_unlock:
|
|||
*/
|
||||
int kvm_handle_guest_abort(struct kvm_vcpu *vcpu, struct kvm_run *run)
|
||||
{
|
||||
unsigned long hsr_ec;
|
||||
unsigned long fault_status;
|
||||
phys_addr_t fault_ipa;
|
||||
struct kvm_memory_slot *memslot;
|
||||
|
@ -593,18 +581,17 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu, struct kvm_run *run)
|
|||
gfn_t gfn;
|
||||
int ret, idx;
|
||||
|
||||
hsr_ec = vcpu->arch.hsr >> HSR_EC_SHIFT;
|
||||
is_iabt = (hsr_ec == HSR_EC_IABT);
|
||||
fault_ipa = ((phys_addr_t)vcpu->arch.hpfar & HPFAR_MASK) << 8;
|
||||
is_iabt = kvm_vcpu_trap_is_iabt(vcpu);
|
||||
fault_ipa = kvm_vcpu_get_fault_ipa(vcpu);
|
||||
|
||||
trace_kvm_guest_fault(*vcpu_pc(vcpu), vcpu->arch.hsr,
|
||||
vcpu->arch.hxfar, fault_ipa);
|
||||
trace_kvm_guest_fault(*vcpu_pc(vcpu), kvm_vcpu_get_hsr(vcpu),
|
||||
kvm_vcpu_get_hfar(vcpu), fault_ipa);
|
||||
|
||||
/* Check the stage-2 fault is trans. fault or write fault */
|
||||
fault_status = (vcpu->arch.hsr & HSR_FSC_TYPE);
|
||||
fault_status = kvm_vcpu_trap_get_fault(vcpu);
|
||||
if (fault_status != FSC_FAULT && fault_status != FSC_PERM) {
|
||||
kvm_err("Unsupported fault status: EC=%#lx DFCS=%#lx\n",
|
||||
hsr_ec, fault_status);
|
||||
kvm_err("Unsupported fault status: EC=%#x DFCS=%#lx\n",
|
||||
kvm_vcpu_trap_get_class(vcpu), fault_status);
|
||||
return -EFAULT;
|
||||
}
|
||||
|
||||
|
@ -614,7 +601,7 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu, struct kvm_run *run)
|
|||
if (!kvm_is_visible_gfn(vcpu->kvm, gfn)) {
|
||||
if (is_iabt) {
|
||||
/* Prefetch Abort on I/O address */
|
||||
kvm_inject_pabt(vcpu, vcpu->arch.hxfar);
|
||||
kvm_inject_pabt(vcpu, kvm_vcpu_get_hfar(vcpu));
|
||||
ret = 1;
|
||||
goto out_unlock;
|
||||
}
|
||||
|
@ -626,8 +613,13 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu, struct kvm_run *run)
|
|||
goto out_unlock;
|
||||
}
|
||||
|
||||
/* Adjust page offset */
|
||||
fault_ipa |= vcpu->arch.hxfar & ~PAGE_MASK;
|
||||
/*
|
||||
* The IPA is reported as [MAX:12], so we need to
|
||||
* complement it with the bottom 12 bits from the
|
||||
* faulting VA. This is always 12 bits, irrespective
|
||||
* of the page size.
|
||||
*/
|
||||
fault_ipa |= kvm_vcpu_get_hfar(vcpu) & ((1 << 12) - 1);
|
||||
ret = io_mem_abort(vcpu, run, fault_ipa);
|
||||
goto out_unlock;
|
||||
}
|
||||
|
@ -682,7 +674,7 @@ static void handle_hva_to_gpa(struct kvm *kvm,
|
|||
static void kvm_unmap_hva_handler(struct kvm *kvm, gpa_t gpa, void *data)
|
||||
{
|
||||
unmap_stage2_range(kvm, gpa, PAGE_SIZE);
|
||||
kvm_tlb_flush_vmid(kvm);
|
||||
kvm_tlb_flush_vmid_ipa(kvm, gpa);
|
||||
}
|
||||
|
||||
int kvm_unmap_hva(struct kvm *kvm, unsigned long hva)
|
||||
|
@ -776,7 +768,7 @@ void kvm_clear_hyp_idmap(void)
|
|||
pmd = pmd_offset(pud, addr);
|
||||
|
||||
pud_clear(pud);
|
||||
clean_pmd_entry(pmd);
|
||||
kvm_clean_pmd_entry(pmd);
|
||||
pmd_free(NULL, (pmd_t *)((unsigned long)pmd & PAGE_MASK));
|
||||
} while (pgd++, addr = next, addr < end);
|
||||
}
|
||||
|
|
|
@@ -1477,7 +1477,7 @@ int kvm_vgic_set_addr(struct kvm *kvm, unsigned long type, u64 addr)
    if (addr & ~KVM_PHYS_MASK)
        return -E2BIG;

    if (addr & ~PAGE_MASK)
    if (addr & (SZ_4K - 1))
        return -EINVAL;

    mutex_lock(&kvm->lock);

@ -28,7 +28,6 @@ static inline void cpu_enter_lowpower_a9(void)
|
|||
{
|
||||
unsigned int v;
|
||||
|
||||
flush_cache_all();
|
||||
asm volatile(
|
||||
" mcr p15, 0, %1, c7, c5, 0\n"
|
||||
" mcr p15, 0, %1, c7, c10, 4\n"
|
||||
|
|
|
@ -1252,7 +1252,7 @@ static void __init nuri_camera_init(void)
|
|||
}
|
||||
|
||||
m5mols_board_info.irq = s5p_register_gpio_interrupt(GPIO_CAM_8M_ISP_INT);
|
||||
if (!IS_ERR_VALUE(m5mols_board_info.irq))
|
||||
if (m5mols_board_info.irq >= 0)
|
||||
s3c_gpio_cfgpin(GPIO_CAM_8M_ISP_INT, S3C_GPIO_SFN(0xF));
|
||||
else
|
||||
pr_err("%s: Failed to configure 8M_ISP_INT GPIO\n", __func__);
|
||||
|
|
|
@ -14,7 +14,6 @@
|
|||
* this program. If not, see <http://www.gnu.org/licenses/>.
|
||||
*/
|
||||
#include <linux/kernel.h>
|
||||
|
||||
#include <asm/cacheflush.h>
|
||||
|
||||
#include "core.h"
|
||||
|
|
|
@ -37,7 +37,7 @@ int __init mxc_device_init(void)
|
|||
int ret;
|
||||
|
||||
ret = device_register(&mxc_aips_bus);
|
||||
if (IS_ERR_VALUE(ret))
|
||||
if (ret < 0)
|
||||
goto done;
|
||||
|
||||
ret = device_register(&mxc_ahb_bus);
|
||||
|
|
|
@ -11,7 +11,6 @@
|
|||
*/
|
||||
|
||||
#include <linux/errno.h>
|
||||
#include <asm/cacheflush.h>
|
||||
#include <asm/cp15.h>
|
||||
|
||||
#include "common.h"
|
||||
|
@ -20,7 +19,6 @@ static inline void cpu_enter_lowpower(void)
|
|||
{
|
||||
unsigned int v;
|
||||
|
||||
flush_cache_all();
|
||||
asm volatile(
|
||||
"mcr p15, 0, %1, c7, c5, 0\n"
|
||||
" mcr p15, 0, %1, c7, c10, 4\n"
|
||||
|
|
|
@@ -536,16 +536,14 @@ static void __init ap_init_of(void)
             'A' + (ap_sc_id & 0x0f));

    soc_dev = soc_device_register(soc_dev_attr);
    if (IS_ERR_OR_NULL(soc_dev)) {
    if (IS_ERR(soc_dev)) {
        kfree(soc_dev_attr->revision);
        kfree(soc_dev_attr);
        return;
    }

    parent = soc_device_to_device(soc_dev);

    if (!IS_ERR_OR_NULL(parent))
        integrator_init_sysfs(parent, ap_sc_id);
    integrator_init_sysfs(parent, ap_sc_id);

    of_platform_populate(root, of_default_bus_match_table,
                 ap_auxdata_lookup, parent);

@@ -360,17 +360,14 @@ static void __init intcp_init_of(void)
             'A' + (intcp_sc_id & 0x0f));

    soc_dev = soc_device_register(soc_dev_attr);
    if (IS_ERR_OR_NULL(soc_dev)) {
    if (IS_ERR(soc_dev)) {
        kfree(soc_dev_attr->revision);
        kfree(soc_dev_attr);
        return;
    }

    parent = soc_device_to_device(soc_dev);

    if (!IS_ERR_OR_NULL(parent))
        integrator_init_sysfs(parent, intcp_sc_id);

    integrator_init_sysfs(parent, intcp_sc_id);
    of_platform_populate(root, of_default_bus_match_table,
                 intcp_auxdata_lookup, parent);
}

@ -10,16 +10,12 @@
|
|||
#include <linux/errno.h>
|
||||
#include <linux/smp.h>
|
||||
|
||||
#include <asm/cacheflush.h>
|
||||
#include <asm/smp_plat.h>
|
||||
|
||||
#include "common.h"
|
||||
|
||||
static inline void cpu_enter_lowpower(void)
|
||||
{
|
||||
/* Just flush the cache. Changing the coherency is not yet
|
||||
* available on msm. */
|
||||
flush_cache_all();
|
||||
}
|
||||
|
||||
static inline void cpu_leave_lowpower(void)
|
||||
|
|
|
@ -479,7 +479,7 @@ static int __init beagle_opp_init(void)
|
|||
|
||||
/* Initialize the omap3 opp table if not already created. */
|
||||
r = omap3_opp_init();
|
||||
if (IS_ERR_VALUE(r) && (r != -EEXIST)) {
|
||||
if (r < 0 && (r != -EEXIST)) {
|
||||
pr_err("%s: opp default init failed\n", __func__);
|
||||
return r;
|
||||
}
|
||||
|
|
|
@ -611,7 +611,7 @@ int __init omap2_clk_switch_mpurate_at_boot(const char *mpurate_ck_name)
|
|||
return -ENOENT;
|
||||
|
||||
r = clk_set_rate(mpurate_ck, mpurate);
|
||||
if (IS_ERR_VALUE(r)) {
|
||||
if (r < 0) {
|
||||
WARN(1, "clock: %s: unable to set MPU rate to %d: %d\n",
|
||||
mpurate_ck_name, mpurate, r);
|
||||
clk_put(mpurate_ck);
|
||||
|
|
|
@ -303,7 +303,7 @@ static int omap2_onenand_setup_async(void __iomem *onenand_base)
|
|||
t = omap2_onenand_calc_async_timings();
|
||||
|
||||
ret = gpmc_set_async_mode(gpmc_onenand_data->cs, &t);
|
||||
if (IS_ERR_VALUE(ret))
|
||||
if (ret < 0)
|
||||
return ret;
|
||||
|
||||
omap2_onenand_set_async_mode(onenand_base);
|
||||
|
@ -325,7 +325,7 @@ static int omap2_onenand_setup_sync(void __iomem *onenand_base, int *freq_ptr)
|
|||
t = omap2_onenand_calc_sync_timings(gpmc_onenand_data, freq);
|
||||
|
||||
ret = gpmc_set_sync_mode(gpmc_onenand_data->cs, &t);
|
||||
if (IS_ERR_VALUE(ret))
|
||||
if (ret < 0)
|
||||
return ret;
|
||||
|
||||
set_onenand_cfg(onenand_base);
|
||||
|
|
|
@ -716,7 +716,7 @@ static int gpmc_setup_irq(void)
|
|||
return -EINVAL;
|
||||
|
||||
gpmc_irq_start = irq_alloc_descs(-1, 0, GPMC_NR_IRQ, 0);
|
||||
if (IS_ERR_VALUE(gpmc_irq_start)) {
|
||||
if (gpmc_irq_start < 0) {
|
||||
pr_err("irq_alloc_descs failed\n");
|
||||
return gpmc_irq_start;
|
||||
}
|
||||
|
@ -801,7 +801,7 @@ static int gpmc_mem_init(void)
|
|||
continue;
|
||||
gpmc_cs_get_memconf(cs, &base, &size);
|
||||
rc = gpmc_cs_insert_mem(cs, base, size);
|
||||
if (IS_ERR_VALUE(rc)) {
|
||||
if (rc < 0) {
|
||||
while (--cs >= 0)
|
||||
if (gpmc_cs_mem_enabled(cs))
|
||||
gpmc_cs_delete_mem(cs);
|
||||
|
@ -1370,14 +1370,14 @@ static int gpmc_probe(struct platform_device *pdev)
|
|||
GPMC_REVISION_MINOR(l));
|
||||
|
||||
rc = gpmc_mem_init();
|
||||
if (IS_ERR_VALUE(rc)) {
|
||||
if (rc < 0) {
|
||||
clk_disable_unprepare(gpmc_l3_clk);
|
||||
clk_put(gpmc_l3_clk);
|
||||
dev_err(gpmc_dev, "failed to reserve memory\n");
|
||||
return rc;
|
||||
}
|
||||
|
||||
if (IS_ERR_VALUE(gpmc_setup_irq()))
|
||||
if (gpmc_setup_irq() < 0)
|
||||
dev_warn(gpmc_dev, "gpmc_setup_irq failed\n");
|
||||
|
||||
/* Now the GPMC is initialised, unreserve the chip-selects */
|
||||
|
|
|
@ -314,7 +314,7 @@ void __init omap3xxx_check_revision(void)
|
|||
* If the processor type is Cortex-A8 and the revision is 0x0
|
||||
* it means its Cortex r0p0 which is 3430 ES1.0.
|
||||
*/
|
||||
cpuid = read_cpuid(CPUID_ID);
|
||||
cpuid = read_cpuid_id();
|
||||
if ((((cpuid >> 4) & 0xfff) == 0xc08) && ((cpuid & 0xf) == 0x0)) {
|
||||
omap_revision = OMAP3430_REV_ES1_0;
|
||||
cpu_rev = "1.0";
|
||||
|
@ -475,7 +475,7 @@ void __init omap4xxx_check_revision(void)
|
|||
* Use ARM register to detect the correct ES version
|
||||
*/
|
||||
if (!rev && (hawkeye != 0xb94e) && (hawkeye != 0xb975)) {
|
||||
idcode = read_cpuid(CPUID_ID);
|
||||
idcode = read_cpuid_id();
|
||||
rev = (idcode & 0xf) - 1;
|
||||
}
|
||||
|
||||
|
|
|
@ -174,7 +174,7 @@ static void __init omap4_smp_init_cpus(void)
|
|||
unsigned int i = 0, ncores = 1, cpu_id;
|
||||
|
||||
/* Use ARM cpuid check here, as SoC detection will not work so early */
|
||||
cpu_id = read_cpuid(CPUID_ID) & CPU_MASK;
|
||||
cpu_id = read_cpuid_id() & CPU_MASK;
|
||||
if (cpu_id == CPU_CORTEX_A9) {
|
||||
/*
|
||||
* Currently we can't call ioremap here because
|
||||
|
|
|
@ -131,7 +131,7 @@ static int omap_device_build_from_dt(struct platform_device *pdev)
|
|||
int oh_cnt, i, ret = 0;
|
||||
|
||||
oh_cnt = of_property_count_strings(node, "ti,hwmods");
|
||||
if (!oh_cnt || IS_ERR_VALUE(oh_cnt)) {
|
||||
if (oh_cnt <= 0) {
|
||||
dev_dbg(&pdev->dev, "No 'hwmods' to build omap_device\n");
|
||||
return -ENODEV;
|
||||
}
|
||||
|
@ -815,20 +815,17 @@ struct device *omap_device_get_by_hwmod_name(const char *oh_name)
|
|||
}
|
||||
|
||||
oh = omap_hwmod_lookup(oh_name);
|
||||
if (IS_ERR_OR_NULL(oh)) {
|
||||
if (!oh) {
|
||||
WARN(1, "%s: no hwmod for %s\n", __func__,
|
||||
oh_name);
|
||||
return ERR_PTR(oh ? PTR_ERR(oh) : -ENODEV);
|
||||
return ERR_PTR(-ENODEV);
|
||||
}
|
||||
if (IS_ERR_OR_NULL(oh->od)) {
|
||||
if (!oh->od) {
|
||||
WARN(1, "%s: no omap_device for %s\n", __func__,
|
||||
oh_name);
|
||||
return ERR_PTR(oh->od ? PTR_ERR(oh->od) : -ENODEV);
|
||||
return ERR_PTR(-ENODEV);
|
||||
}
|
||||
|
||||
if (IS_ERR_OR_NULL(oh->od->pdev))
|
||||
return ERR_PTR(oh->od->pdev ? PTR_ERR(oh->od->pdev) : -ENODEV);
|
||||
|
||||
return &oh->od->pdev->dev;
|
||||
}
|
||||
|
||||
|
|
|
@ -1663,7 +1663,7 @@ static int _deassert_hardreset(struct omap_hwmod *oh, const char *name)
|
|||
return -ENOSYS;
|
||||
|
||||
ret = _lookup_hardreset(oh, name, &ohri);
|
||||
if (IS_ERR_VALUE(ret))
|
||||
if (ret < 0)
|
||||
return ret;
|
||||
|
||||
if (oh->clkdm) {
|
||||
|
@ -2413,7 +2413,7 @@ static int __init _init(struct omap_hwmod *oh, void *data)
|
|||
_init_mpu_rt_base(oh, NULL);
|
||||
|
||||
r = _init_clocks(oh, NULL);
|
||||
if (IS_ERR_VALUE(r)) {
|
||||
if (r < 0) {
|
||||
WARN(1, "omap_hwmod: %s: couldn't init clocks\n", oh->name);
|
||||
return -EINVAL;
|
||||
}
|
||||
|
|
|
@ -217,7 +217,7 @@ static int __init pwrdms_setup(struct powerdomain *pwrdm, void *dir)
|
|||
return 0;
|
||||
|
||||
d = debugfs_create_dir(pwrdm->name, (struct dentry *)dir);
|
||||
if (!(IS_ERR_OR_NULL(d)))
|
||||
if (d)
|
||||
(void) debugfs_create_file("suspend", S_IRUGO|S_IWUSR, d,
|
||||
(void *)pwrdm, &pwrdm_suspend_fops);
|
||||
|
||||
|
@ -261,8 +261,8 @@ static int __init pm_dbg_init(void)
|
|||
return 0;
|
||||
|
||||
d = debugfs_create_dir("pm_debug", NULL);
|
||||
if (IS_ERR_OR_NULL(d))
|
||||
return PTR_ERR(d);
|
||||
if (!d)
|
||||
return -EINVAL;
|
||||
|
||||
(void) debugfs_create_file("count", S_IRUGO,
|
||||
d, (void *)DEBUG_FILE_COUNTERS, &debug_fops);
|
||||
|
|
|
@ -1180,7 +1180,7 @@ bool pwrdm_can_ever_lose_context(struct powerdomain *pwrdm)
|
|||
{
|
||||
int i;
|
||||
|
||||
if (IS_ERR_OR_NULL(pwrdm)) {
|
||||
if (!pwrdm) {
|
||||
pr_debug("powerdomain: %s: invalid powerdomain pointer\n",
|
||||
__func__);
|
||||
return 1;
|
||||
|
|
|
@ -288,7 +288,7 @@ static int __init omap_dm_timer_init_one(struct omap_dm_timer *timer,
|
|||
r = -EINVAL;
|
||||
} else {
|
||||
r = clk_set_parent(timer->fclk, src);
|
||||
if (IS_ERR_VALUE(r))
|
||||
if (r < 0)
|
||||
pr_warn("%s: %s cannot set source\n",
|
||||
__func__, oh->name);
|
||||
clk_put(src);
|
||||
|
|
|
@ -10,13 +10,10 @@
|
|||
#include <linux/errno.h>
|
||||
#include <linux/smp.h>
|
||||
|
||||
#include <asm/cacheflush.h>
|
||||
#include <asm/smp_plat.h>
|
||||
|
||||
static inline void platform_do_lowpower(unsigned int cpu)
|
||||
{
|
||||
flush_cache_all();
|
||||
|
||||
/* we put the platform to just WFI */
|
||||
for (;;) {
|
||||
__asm__ __volatile__("dsb\n\t" "wfi\n\t"
|
||||
|
|
|
@ -12,7 +12,6 @@
|
|||
#include <linux/errno.h>
|
||||
#include <linux/smp.h>
|
||||
|
||||
#include <asm/cacheflush.h>
|
||||
#include <asm/cp15.h>
|
||||
#include <asm/smp_plat.h>
|
||||
|
||||
|
@ -20,7 +19,6 @@ static inline void cpu_enter_lowpower(void)
|
|||
{
|
||||
unsigned int v;
|
||||
|
||||
flush_cache_all();
|
||||
asm volatile(
|
||||
" mcr p15, 0, %1, c7, c5, 0\n"
|
||||
" mcr p15, 0, %1, c7, c10, 4\n"
|
||||
|
|
|
@ -104,14 +104,6 @@ static int sh73a0_cpu_kill(unsigned int cpu)
|
|||
|
||||
static void sh73a0_cpu_die(unsigned int cpu)
|
||||
{
|
||||
/*
|
||||
* The ARM MPcore does not issue a cache coherency request for the L1
|
||||
* cache when powering off single CPUs. We must take care of this and
|
||||
* further caches.
|
||||
*/
|
||||
dsb();
|
||||
flush_cache_all();
|
||||
|
||||
/* Set power off mode. This takes the CPU out of the MP cluster */
|
||||
scu_power_mode(shmobile_scu_base, SCU_PM_POWEROFF);
|
||||
|
||||
|
|
|
@ -13,7 +13,6 @@
|
|||
#include <linux/kernel.h>
|
||||
#include <linux/errno.h>
|
||||
#include <linux/smp.h>
|
||||
#include <asm/cacheflush.h>
|
||||
#include <asm/cp15.h>
|
||||
#include <asm/smp_plat.h>
|
||||
|
||||
|
@ -21,7 +20,6 @@ static inline void cpu_enter_lowpower(void)
|
|||
{
|
||||
unsigned int v;
|
||||
|
||||
flush_cache_all();
|
||||
asm volatile(
|
||||
" mcr p15, 0, %1, c7, c5, 0\n"
|
||||
" dsb\n"
|
||||
|
|
|
@ -56,9 +56,9 @@ int __init harmony_pcie_init(void)
|
|||
gpio_direction_output(en_vdd_1v05, 1);
|
||||
|
||||
regulator = regulator_get(NULL, "vdd_ldo0,vddio_pex_clk");
|
||||
if (IS_ERR_OR_NULL(regulator)) {
|
||||
pr_err("%s: regulator_get failed: %d\n", __func__,
|
||||
(int)PTR_ERR(regulator));
|
||||
if (IS_ERR(regulator)) {
|
||||
err = PTR_ERR(regulator);
|
||||
pr_err("%s: regulator_get failed: %d\n", __func__, err);
|
||||
goto err_reg;
|
||||
}
|
||||
|
||||
|
|
|
@@ -2,4 +2,3 @@ extern struct smp_operations tegra_smp_ops;

extern int tegra_cpu_kill(unsigned int cpu);
extern void tegra_cpu_die(unsigned int cpu);
extern int tegra_cpu_disable(unsigned int cpu);
|
||||
|
|
|
@ -11,7 +11,6 @@
|
|||
#include <linux/smp.h>
|
||||
#include <linux/clk/tegra.h>
|
||||
|
||||
#include <asm/cacheflush.h>
|
||||
#include <asm/smp_plat.h>
|
||||
|
||||
#include "fuse.h"
|
||||
|
@ -47,15 +46,6 @@ void __ref tegra_cpu_die(unsigned int cpu)
|
|||
BUG();
|
||||
}
|
||||
|
||||
int tegra_cpu_disable(unsigned int cpu)
|
||||
{
|
||||
/*
|
||||
* we don't allow CPU 0 to be shutdown (it is still too special
|
||||
* e.g. clock tick interrupts)
|
||||
*/
|
||||
return cpu == 0 ? -EPERM : 0;
|
||||
}
|
||||
|
||||
void __init tegra_hotplug_init(void)
|
||||
{
|
||||
if (!IS_ENABLED(CONFIG_HOTPLUG_CPU))
|
||||
|
|
|
@ -173,6 +173,5 @@ struct smp_operations tegra_smp_ops __initdata = {
|
|||
#ifdef CONFIG_HOTPLUG_CPU
|
||||
.cpu_kill = tegra_cpu_kill,
|
||||
.cpu_die = tegra_cpu_die,
|
||||
.cpu_disable = tegra_cpu_disable,
|
||||
#endif
|
||||
};
|
||||
|
|
|
@ -276,7 +276,7 @@ static struct tegra_emc_pdata *tegra_emc_fill_pdata(struct platform_device *pdev
|
|||
int i;
|
||||
|
||||
WARN_ON(pdev->dev.platform_data);
|
||||
BUG_ON(IS_ERR_OR_NULL(c));
|
||||
BUG_ON(IS_ERR(c));
|
||||
|
||||
pdata = devm_kzalloc(&pdev->dev, sizeof(*pdata), GFP_KERNEL);
|
||||
pdata->tables = devm_kzalloc(&pdev->dev, sizeof(*pdata->tables),
|
||||
|
|
|
@ -149,14 +149,13 @@ struct device * __init ux500_soc_device_init(const char *soc_id)
|
|||
soc_info_populate(soc_dev_attr, soc_id);
|
||||
|
||||
soc_dev = soc_device_register(soc_dev_attr);
|
||||
if (IS_ERR_OR_NULL(soc_dev)) {
|
||||
if (IS_ERR(soc_dev)) {
|
||||
kfree(soc_dev_attr);
|
||||
return NULL;
|
||||
}
|
||||
|
||||
parent = soc_device_to_device(soc_dev);
|
||||
if (!IS_ERR_OR_NULL(parent))
|
||||
device_create_file(parent, &ux500_soc_attr);
|
||||
device_create_file(parent, &ux500_soc_attr);
|
||||
|
||||
return parent;
|
||||
}
|
||||
|
|
|
@ -12,7 +12,6 @@
|
|||
#include <linux/errno.h>
|
||||
#include <linux/smp.h>
|
||||
|
||||
#include <asm/cacheflush.h>
|
||||
#include <asm/smp_plat.h>
|
||||
|
||||
#include "setup.h"
|
||||
|
@ -24,8 +23,6 @@
|
|||
*/
|
||||
void __ref ux500_cpu_die(unsigned int cpu)
|
||||
{
|
||||
flush_cache_all();
|
||||
|
||||
/* directly enter low power state, skipping secure registers */
|
||||
for (;;) {
|
||||
__asm__ __volatile__("dsb\n\t" "wfi\n\t"
|
||||
|
|
|
@ -12,7 +12,6 @@
|
|||
#include <linux/errno.h>
|
||||
#include <linux/smp.h>
|
||||
|
||||
#include <asm/cacheflush.h>
|
||||
#include <asm/smp_plat.h>
|
||||
#include <asm/cp15.h>
|
||||
|
||||
|
@ -20,7 +19,6 @@ static inline void cpu_enter_lowpower(void)
|
|||
{
|
||||
unsigned int v;
|
||||
|
||||
flush_cache_all();
|
||||
asm volatile(
|
||||
"mcr p15, 0, %1, c7, c5, 0\n"
|
||||
" mcr p15, 0, %1, c7, c10, 4\n"
|
||||
|
|
|
@ -397,6 +397,13 @@ config CPU_V7
|
|||
select CPU_PABRT_V7
|
||||
select CPU_TLB_V7 if MMU
|
||||
|
||||
config CPU_THUMBONLY
|
||||
bool
|
||||
# There are no CPUs available with MMU that don't implement an ARM ISA:
|
||||
depends on !MMU
|
||||
help
|
||||
Select this if your CPU doesn't support the 32 bit ARM instructions.
|
||||
|
||||
# Figure out what processor architecture version we should be using.
|
||||
# This defines the compiler instruction set which depends on the machine type.
|
||||
config CPU_32v3
|
||||
|
@ -605,7 +612,7 @@ config ARCH_DMA_ADDR_T_64BIT
|
|||
bool
|
||||
|
||||
config ARM_THUMB
|
||||
bool "Support Thumb user binaries"
|
||||
bool "Support Thumb user binaries" if !CPU_THUMBONLY
|
||||
depends on CPU_ARM720T || CPU_ARM740T || CPU_ARM920T || CPU_ARM922T || CPU_ARM925T || CPU_ARM926T || CPU_ARM940T || CPU_ARM946E || CPU_ARM1020 || CPU_ARM1020E || CPU_ARM1022 || CPU_ARM1026 || CPU_XSCALE || CPU_XSC3 || CPU_MOHAWK || CPU_V6 || CPU_V6K || CPU_V7 || CPU_FEROCEON
|
||||
default y
|
||||
help
|
||||
|
|
|
@ -961,12 +961,14 @@ static int __init alignment_init(void)
|
|||
return -ENOMEM;
|
||||
#endif
|
||||
|
||||
#ifdef CONFIG_CPU_CP15
|
||||
if (cpu_is_v6_unaligned()) {
|
||||
cr_alignment &= ~CR_A;
|
||||
cr_no_alignment &= ~CR_A;
|
||||
set_cr(cr_alignment);
|
||||
ai_usermode = safe_usermode(ai_usermode, false);
|
||||
}
|
||||
#endif
|
||||
|
||||
hook_fault_code(FAULT_CODE_ALIGNMENT, do_alignment, SIGBUS, BUS_ADRALN,
|
||||
"alignment exception");
|
||||
|
|
|
@ -823,16 +823,17 @@ static void dma_cache_maint_page(struct page *page, unsigned long offset,
|
|||
if (PageHighMem(page)) {
|
||||
if (len + offset > PAGE_SIZE)
|
||||
len = PAGE_SIZE - offset;
|
||||
vaddr = kmap_high_get(page);
|
||||
if (vaddr) {
|
||||
vaddr += offset;
|
||||
op(vaddr, len, dir);
|
||||
kunmap_high(page);
|
||||
} else if (cache_is_vipt()) {
|
||||
/* unmapped pages might still be cached */
|
||||
|
||||
if (cache_is_vipt_nonaliasing()) {
|
||||
vaddr = kmap_atomic(page);
|
||||
op(vaddr + offset, len, dir);
|
||||
kunmap_atomic(vaddr);
|
||||
} else {
|
||||
vaddr = kmap_high_get(page);
|
||||
if (vaddr) {
|
||||
op(vaddr + offset, len, dir);
|
||||
kunmap_high(page);
|
||||
}
|
||||
}
|
||||
} else {
|
||||
vaddr = page_address(page) + offset;
|
||||
|
|
|
@ -170,15 +170,18 @@ void __flush_dcache_page(struct address_space *mapping, struct page *page)
|
|||
if (!PageHighMem(page)) {
|
||||
__cpuc_flush_dcache_area(page_address(page), PAGE_SIZE);
|
||||
} else {
|
||||
void *addr = kmap_high_get(page);
|
||||
if (addr) {
|
||||
__cpuc_flush_dcache_area(addr, PAGE_SIZE);
|
||||
kunmap_high(page);
|
||||
} else if (cache_is_vipt()) {
|
||||
/* unmapped pages might still be cached */
|
||||
void *addr;
|
||||
|
||||
if (cache_is_vipt_nonaliasing()) {
|
||||
addr = kmap_atomic(page);
|
||||
__cpuc_flush_dcache_area(addr, PAGE_SIZE);
|
||||
kunmap_atomic(addr);
|
||||
} else {
|
||||
addr = kmap_high_get(page);
|
||||
if (addr) {
|
||||
__cpuc_flush_dcache_area(addr, PAGE_SIZE);
|
||||
kunmap_high(page);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
|
|
|
@ -113,6 +113,7 @@ static struct cachepolicy cache_policies[] __initdata = {
|
|||
}
|
||||
};
|
||||
|
||||
#ifdef CONFIG_CPU_CP15
|
||||
/*
|
||||
* These are useful for identifying cache coherency
|
||||
* problems by allowing the cache or the cache and
|
||||
|
@ -211,6 +212,22 @@ void adjust_cr(unsigned long mask, unsigned long set)
|
|||
}
|
||||
#endif
|
||||
|
||||
#else /* ifdef CONFIG_CPU_CP15 */
|
||||
|
||||
static int __init early_cachepolicy(char *p)
|
||||
{
|
||||
pr_warning("cachepolicy kernel parameter not supported without cp15\n");
|
||||
}
|
||||
early_param("cachepolicy", early_cachepolicy);
|
||||
|
||||
static int __init noalign_setup(char *__unused)
|
||||
{
|
||||
pr_warning("noalign kernel parameter not supported without cp15\n");
|
||||
}
|
||||
__setup("noalign", noalign_setup);
|
||||
|
||||
#endif /* ifdef CONFIG_CPU_CP15 / else */
|
||||
|
||||
#define PROT_PTE_DEVICE L_PTE_PRESENT|L_PTE_YOUNG|L_PTE_DIRTY|L_PTE_XN
|
||||
#define PROT_SECT_DEVICE PMD_TYPE_SECT|PMD_SECT_AP_WRITE
|
||||
|
||||
|
|
|
@ -80,12 +80,10 @@ ENTRY(cpu_v6_do_idle)
|
|||
mov pc, lr
|
||||
|
||||
ENTRY(cpu_v6_dcache_clean_area)
|
||||
#ifndef TLB_CAN_READ_FROM_L1_CACHE
|
||||
1: mcr p15, 0, r0, c7, c10, 1 @ clean D entry
|
||||
add r0, r0, #D_CACHE_LINE_SIZE
|
||||
subs r1, r1, #D_CACHE_LINE_SIZE
|
||||
bhi 1b
|
||||
#endif
|
||||
mov pc, lr
|
||||
|
||||
/*
|
||||
|
|
|
@ -110,7 +110,8 @@ ENTRY(cpu_v7_set_pte_ext)
|
|||
ARM( str r3, [r0, #2048]! )
|
||||
THUMB( add r0, r0, #2048 )
|
||||
THUMB( str r3, [r0] )
|
||||
mcr p15, 0, r0, c7, c10, 1 @ flush_pte
|
||||
ALT_SMP(mov pc,lr)
|
||||
ALT_UP (mcr p15, 0, r0, c7, c10, 1) @ flush_pte
|
||||
#endif
|
||||
mov pc, lr
|
||||
ENDPROC(cpu_v7_set_pte_ext)
|
||||
|
|
|
@ -73,7 +73,8 @@ ENTRY(cpu_v7_set_pte_ext)
|
|||
tst r3, #1 << (55 - 32) @ L_PTE_DIRTY
|
||||
orreq r2, #L_PTE_RDONLY
|
||||
1: strd r2, r3, [r0]
|
||||
mcr p15, 0, r0, c7, c10, 1 @ flush_pte
|
||||
ALT_SMP(mov pc, lr)
|
||||
ALT_UP (mcr p15, 0, r0, c7, c10, 1) @ flush_pte
|
||||
#endif
|
||||
mov pc, lr
|
||||
ENDPROC(cpu_v7_set_pte_ext)
|
||||
|
|
|
@ -75,14 +75,14 @@ ENTRY(cpu_v7_do_idle)
|
|||
ENDPROC(cpu_v7_do_idle)
|
||||
|
||||
ENTRY(cpu_v7_dcache_clean_area)
|
||||
#ifndef TLB_CAN_READ_FROM_L1_CACHE
|
||||
ALT_SMP(mov pc, lr) @ MP extensions imply L1 PTW
|
||||
ALT_UP(W(nop))
|
||||
dcache_line_size r2, r3
|
||||
1: mcr p15, 0, r0, c7, c10, 1 @ clean D entry
|
||||
add r0, r0, r2
|
||||
subs r1, r1, r2
|
||||
bhi 1b
|
||||
dsb
|
||||
#endif
|
||||
mov pc, lr
|
||||
ENDPROC(cpu_v7_dcache_clean_area)
|
||||
|
||||
|
@ -402,6 +402,8 @@ __v7_ca9mp_proc_info:
|
|||
__v7_proc __v7_ca9mp_setup
|
||||
.size __v7_ca9mp_proc_info, . - __v7_ca9mp_proc_info
|
||||
|
||||
#endif /* CONFIG_ARM_LPAE */
|
||||
|
||||
/*
|
||||
* Marvell PJ4B processor.
|
||||
*/
|
||||
|
@ -411,7 +413,6 @@ __v7_pj4b_proc_info:
|
|||
.long 0xfffffff0
|
||||
__v7_proc __v7_pj4b_setup
|
||||
.size __v7_pj4b_proc_info, . - __v7_pj4b_proc_info
|
||||
#endif /* CONFIG_ARM_LPAE */
|
||||
|
||||
/*
|
||||
* ARM Ltd. Cortex A7 processor.
|
||||
|
|
|
@ -140,8 +140,7 @@ static int omap_dm_timer_prepare(struct omap_dm_timer *timer)
|
|||
*/
|
||||
if (!(timer->capability & OMAP_TIMER_NEEDS_RESET)) {
|
||||
timer->fclk = clk_get(&timer->pdev->dev, "fck");
|
||||
if (WARN_ON_ONCE(IS_ERR_OR_NULL(timer->fclk))) {
|
||||
timer->fclk = NULL;
|
||||
if (WARN_ON_ONCE(IS_ERR(timer->fclk))) {
|
||||
dev_err(&timer->pdev->dev, ": No fclk handle.\n");
|
||||
return -EINVAL;
|
||||
}
|
||||
|
@ -373,7 +372,7 @@ EXPORT_SYMBOL_GPL(omap_dm_timer_modify_idlect_mask);
|
|||
|
||||
struct clk *omap_dm_timer_get_fclk(struct omap_dm_timer *timer)
|
||||
{
|
||||
if (timer)
|
||||
if (timer && !IS_ERR(timer->fclk))
|
||||
return timer->fclk;
|
||||
return NULL;
|
||||
}
|
||||
|
@ -482,7 +481,7 @@ int omap_dm_timer_set_source(struct omap_dm_timer *timer, int source)
|
|||
if (pdata && pdata->set_timer_src)
|
||||
return pdata->set_timer_src(timer->pdev, source);
|
||||
|
||||
if (!timer->fclk)
|
||||
if (IS_ERR(timer->fclk))
|
||||
return -EINVAL;
|
||||
|
||||
switch (source) {
|
||||
|
@ -500,13 +499,13 @@ int omap_dm_timer_set_source(struct omap_dm_timer *timer, int source)
|
|||
}
|
||||
|
||||
parent = clk_get(&timer->pdev->dev, parent_name);
|
||||
if (IS_ERR_OR_NULL(parent)) {
|
||||
if (IS_ERR(parent)) {
|
||||
pr_err("%s: %s not found\n", __func__, parent_name);
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
ret = clk_set_parent(timer->fclk, parent);
|
||||
if (IS_ERR_VALUE(ret))
|
||||
if (ret < 0)
|
||||
pr_err("%s: failed to set %s as parent\n", __func__,
|
||||
parent_name);
|
||||
|
||||
|
@ -808,6 +807,7 @@ static int omap_dm_timer_probe(struct platform_device *pdev)
|
|||
return -ENOMEM;
|
||||
}
|
||||
|
||||
timer->fclk = ERR_PTR(-ENODEV);
|
||||
timer->io_base = devm_ioremap_resource(dev, mem);
|
||||
if (IS_ERR(timer->io_base))
|
||||
return PTR_ERR(timer->io_base);
|
||||
|
|
File diff suppressed because it is too large