Commit 17278a91e0 ("MIPS: CPS: Fix r1 .set mt assembler warning")
added .set MIPS_ISA_LEVEL_RAW to silence warnings about .set mt on r1,
however this can result in a MOVE being encoded as a 64-bit DADDU
instruction by certain versions of binutils (e.g. 2.22), leading to
reserved instruction exceptions at runtime on 32-bit hardware.
Reduce the sizes of the push/pop sections to include only instructions
that are part of the MT ASE or which won't convert to 64-bit
instructions after .set mips64r2/mips64r6.
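A sketch of the resulting shape (illustrative, not the literal diff):
only the MT ASE instruction itself sits inside the raised-ISA region,
while pseudo-instructions such as "move" keep expanding at the kernel's
native ISA level:

  move    a1, a0                  # left outside the raised-ISA region
  .set    push
  .set    MIPS_ISA_LEVEL_RAW
  .set    mt
  evpe                            # MT ASE instruction needing ".set mt"
  .set    pop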
Reported-by: Greg Ungerer <gerg@linux-m68k.org>
Fixes: 17278a91e0 ("MIPS: CPS: Fix r1 .set mt assembler warning")
Signed-off-by: James Hogan <jhogan@kernel.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Paul Burton <paul.burton@mips.com>
Cc: linux-mips@linux-mips.org
Cc: <stable@vger.kernel.org> # 4.15
Tested-by: Greg Ungerer <gerg@linux-m68k.org>
Patchwork: https://patchwork.linux-mips.org/patch/18578/
MIPS CPS has a build warning on kernels configured for MIPS32R1 or
MIPS64R1, due to the use of .set mt without a prior .set mips{32,64}r2:
arch/mips/kernel/cps-vec.S: Assembler messages:
arch/mips/kernel/cps-vec.S:238: Warning: the `mt' extension requires MIPS32 revision 2 or greater
Add .set MIPS_ISA_LEVEL_RAW before .set mt to silence the warning.
Fixes: 245a7868d2 ("MIPS: smp-cps: rework core/VPE initialisation")
Signed-off-by: James Hogan <jhogan@kernel.org>
Cc: Paul Burton <paul.burton@mips.com>
Cc: James Hogan <james.hogan@mips.com>
Cc: James Hogan <jhogan@kernel.org>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/17699/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
MIPS will soon not be a part of Imagination Technologies, and as such
many @imgtec.com email addresses will no longer be valid. This patch
updates the addresses for those who:
- Have 10 or more patches in mainline authored using an @imgtec.com
email address, or any patches dated within the past year.
- Are still with Imagination but leaving as part of the MIPS business
unit, as determined from an internal email address list.
- Haven't already updated their email address (ie. JamesH) or expressed
a desire to be excluded (ie. Maciej).
- Acked v2 or earlier of this patch, which leaves Deng-Cheng, Matt &
myself.
New addresses are of the form firstname.lastname@mips.com, and all
verified against an internal email address list. An entry is added to
.mailmap for each person such that get_maintainer.pl will report the new
addresses rather than @imgtec.com addresses which will soon be dead.
Instances of the affected addresses throughout the tree are then
mechanically replaced with the new @mips.com address.
Signed-off-by: Paul Burton <paul.burton@mips.com>
Cc: Deng-Cheng Zhu <dengcheng.zhu@imgtec.com>
Cc: Deng-Cheng Zhu <dengcheng.zhu@mips.com>
Acked-by: Dengcheng Zhu <dengcheng.zhu@mips.com>
Cc: Matt Redfearn <matt.redfearn@imgtec.com>
Cc: Matt Redfearn <matt.redfearn@mips.com>
Acked-by: Matt Redfearn <matt.redfearn@mips.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: linux-kernel@vger.kernel.org
Cc: linux-mips@linux-mips.org
Cc: trivial@kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
We now have definitions for the GlobalNumber register in asm/mipsregs.h,
so use them in place of magic numbers in cps-vec.S.
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/17008/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
On pre-r6 systems with the MT ASE the CPS SMP code included checks to
halt the VPE running mips_cps_boot_vpes() if its bit in the struct
core_boot_config vpe_mask field is clear. This was largely done in order
to allow us to start arbitrary VPEs within a core despite the fact that
hardware is typically configured to run only VPE0 after powering up a
core. VPE0 would start the desired other VPEs, halt itself, and the fact
that VPE0 started would be largely hidden & irrelevant.
In MIPSr6 multithreading we have control over which VPs start executing
when a core powers up via the core's CPC registers, accessed remotely
through the redirect block. For this reason the MIPSr6 multithreading
path in mips_cps_boot_vpes() hasn't bothered up until now to handle
halting the VP running it.
However it is possible to power up cores entirely in hardware by using a
pwr_up pin associated with the core. Unfortunately some systems wire
this pin to a logic 1, which means that it is possible for a core to
power up at a point that software doesn't expect. The result is that we
may end up executing the kernel on a CPU that ought not to be running,
with unpredictable results.
Handle this case by stopping VPs that we don't expect to be running in
mips_cps_boot_vpes() - with this change even if a core powers up it will
do nothing useful & all VPs within it will stop running before they
proceed to run general kernel code & do any damage. Ideally we would
produce some sort of warning here, but given the stage of core bringup
at which this happens that would be non-trivial. We will also only hit
this if a core starts up after being offlined via hotplug, and when
that happens we already produce a warning in cps_cpu_die() that the CPU
didn't power down, which seems sufficient.
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/16198/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
The vpe_mask member of struct core_boot_config is of type atomic_t,
which is a 32bit type. In cps-vec.S this member was being retrieved by a
PTR_L macro, which on 64bit systems is a 64bit load. On little endian
systems this is OK, since the double word that is retrieved will have
the required less significant word in the correct position. However, on
big endian systems the less significant word of the load is retrieved
from address+4, and the more significant from address+0. The destination
register therefore ends up with the required word in the more
significant half.
For example, when starting the second VP of a big endian 64-bit system,
the load
PTR_L ta2, COREBOOTCFG_VPEMASK(a0)
ends up setting register ta2 to 0x0000000300000000
When this value is written to the CPC it is ignored, since it is
invalid to write anything larger than 4 bits. This results in any VP
other than VP0 in a core failing to start in 64bit big endian systems.
Change the load to a 32bit load word instruction to fix the bug.
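i.e. something along these lines:

  lw      ta2, COREBOOTCFG_VPEMASK(a0)  # 32-bit load matches the 32-bit atomic_t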
Fixes: f12401d721 ("MIPS: smp-cps: Pull boot config retrieval out of mips_cps_boot_vpes")
Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com>
Cc: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/15787/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
When starting secondary VPEs which support EVA and the SegCtl registers,
copy the memory segmentation configuration from the running VPE to ensure
that all VPEs in the core have a consistent virtual memory map.
The EVA configuration of secondary cores is dealt with when starting the
core via the CM.
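A sketch of the idea (register selects shown numerically for
illustration; this assumes the target TC has already been selected via
VPEControl.TargTC and that MT instructions are enabled):

  mfc0    t0, $5, 2               # SegCtl0 on the running VPE
  mttc0   t0, $5, 2               # ... copied to the target VPE's TC
  mfc0    t0, $5, 3               # SegCtl1
  mttc0   t0, $5, 3
  mfc0    t0, $5, 4               # SegCtl2
  mttc0   t0, $5, 4
  ehb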
Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com>
Cc: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/13291/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
When CONFIG_MIPS_CPS_NS16550 is enabled, some register state is dumped
to the UART when an exception is taken via the BEV on secondary cores.
EJTAG exceptions are architecturally expected to be handled by the BEV
even when Status.BEV is 0. This effectively means that if userland
executes an sdbbp instruction on a secondary core then the kernel dumps
register state to the UART even though the exception is perfectly normal
& expected. Prevent this by simply not dumping information to the UART
for EJTAG exceptions.
Fixes: 609cf6f229 ("MIPS: CPS: Early debug using an ns16550-compatible UART")
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/12341/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Introduce support for bringing up Virtual Processors in MIPSr6 systems
as CPUs, much like their VPE counterparts from the now-deprecated MT
ASE. The existing mips_cps_boot_vpes function fits the MIPSr6
architecture pretty well - it can now simply write the mask of running
VPs to the VC_RUN register, rather than looping through each VP &
starting or stopping it as appropriate as is done for VPEs from the MT
ASE. Thus the VP support
is in general an extension & simplification of the existing MT ASE VPE
(aka SMVP) support.
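A minimal illustration of the r6 path (the VC_RUN address computation
is elided; t1 is assumed to already hold it, and a0 to point at the
core's boot config):

  lw      t0, COREBOOTCFG_VPEMASK(a0)   # mask of VPs which should run
  sw      t0, 0(t1)                     # write the whole mask to VC_RUN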
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: Matt Redfearn <matt.redfearn@imgtec.com>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Maciej W. Rozycki <macro@linux-mips.org>
Cc: Niklas Cassel <niklas.cassel@axis.com>
Cc: Ezequiel Garcia <ezequiel.garcia@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/12339/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
In preparation for supporting MIPSr6 multithreading (ie. VPs) which will
begin execution from the core reset vector, skip core level setup if the
core is already coherent. This is never the case when a core is first
started, since boot_core explicitly clears the core's GCR_Cx_COH_EN
register, and always the case when secondary VPs start since the first
VP to start will have enabled coherence after initialising the core &
its caches.
One notable side effect of this patch is that eva_init gets called
slightly earlier, prior to mips_cps_core_init rather than after it.
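The check amounts to something like the following sketch, reusing the
existing cps-vec.S convention of v1 holding the CM GCR base (treat the
names as illustrative):

  lw      t0, GCR_CL_COHERENCE_OFS(v1)  # local coherence domain enable
  bnez    t0, 1f                        # already coherent: skip core setup
   nop
  # ... core-level cache & coherence initialisation ...
1: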
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/12338/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
The mips_cps_boot_vpes function previously included code to retrieve
pointers to the core & VPE boot configuration structs. These structures
were used both by mips_cps_boot_vpes and by its mips_cps_core_entry
callsite. In preparation for skipping the call to mips_cps_boot_vpes on
some invocations of mips_cps_core_entry, pull the calculation of those
pointers out into a separate function such that it can continue to be
shared.
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Matt Redfearn <matt.redfearn@imgtec.com>
Cc: Niklas Cassel <niklas.cassel@axis.com>
Cc: Ezequiel Garcia <ezequiel.garcia@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/12337/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
In preparation for further modifications to mips_cps_core_entry, pull
the L1 cache initialisation out into a separate function. This both
makes the code in mips_cps_core_entry read more clearly, particularly
when modifying it, and shortens it, which will become important as code
is added that needs to continue to fit within the reset vector.
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/12336/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Commit 977e043d5e ("MIPS: kernel: cps-vec: Replace mips32r2 ISA level
with mips64r2") leads to .set mips64r2 directives being present in 32
bit (ie. CONFIG_32BIT=y) kernels. This is incorrect & leads to MIPS64
instructions being emitted by the assembler when expanding
pseudo-instructions. For example the "move" instruction can legitimately
be expanded to a "daddu". This causes problems when the kernel is run on
a MIPS32 CPU, as CONFIG_32BIT kernels of course often are...
Fix this by dropping the .set <ISA> directives entirely now that Kconfig
should be ensuring that kernels including this code are built with a
suitable -march= compiler flag.
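For illustration, with ".set mips64r2" in effect an assembler is
entitled to expand:

  .set    mips64r2
  move    t0, a0          # may be assembled as "daddu t0, a0, $0", which
                          # raises a reserved instruction exception when
                          # run on a MIPS32 CPU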
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: Markos Chandras <markos.chandras@imgtec.com>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: <stable@vger.kernel.org> # 3.16+
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/10869/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
The Config1 register is architecturally defined as required, and is thus
present in all systems which may make use of cps-vec.S. Skip the check
for its presence via the Config.M bit.
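i.e. the code can now simply read Config1 directly (sketch):

  mfc0    t0, CP0_CONFIG, 1       # Config1 is architecturally required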
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: Markos Chandras <markos.chandras@imgtec.com>
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/11204/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Rather than patching the start of mips_cps_core_entry to provide the
base address of the CM GCRs, simply read that base address from the cop0
CMGCRBase register, converting from the physical address to an uncached
virtual address.
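A sketch of the calculation (macro names as used by the MIPS asm
headers; treat as illustrative):

  MFC0    v1, CP0_CMGCRBASE       # CMGCRBase holds the GCR physical base >> 4
  PTR_SLL v1, v1, 4               # recover the physical address
  PTR_LI  t0, UNCAC_BASE
  PTR_ADDU v1, v1, t0             # uncached virtual address of the GCRs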
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Andrew Bresticker <abrestic@chromium.org>
Cc: linux-kernel@vger.kernel.org
Cc: Niklas Cassel <niklas.cassel@axis.com>
Cc: Ezequiel Garcia <ezequiel.garcia@imgtec.com>
Cc: Markos Chandras <markos.chandras@imgtec.com>
Patchwork: https://patchwork.linux-mips.org/patch/11203/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Provide support for outputting early debug information, in the form of
various register values should an exception occur, during the early
bringup of secondary cores. This code requires an ns16550-compatible
UART accessible from the secondary core, and is written in assembly due
to the environment in which such early exceptions occur, where we may
not have a stack, be coherent, or even have initialised caches.
[ralf@linux-mips.org: Fix merge conflict.]
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: Steven J. Hill <Steven.Hill@imgtec.com>
Cc: Andrew Bresticker <abrestic@chromium.org>
Cc: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com>
Cc: Maciej W. Rozycki <macro@linux-mips.org>
Cc: Jason Cooper <jason@lakedaemon.net>
Cc: linux-kernel@vger.kernel.org
Cc: James Hogan <james.hogan@imgtec.com>
Cc: Markos Chandras <markos.chandras@imgtec.com>
Patchwork: https://patchwork.linux-mips.org/patch/11202/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
If the kernel may make use of 64 bit addresses outside of the
compatibility address space then we need to set KX such that those
accesses can succeed. Do so for MIPS64 kernels.
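For example (sketch, 64-bit kernels only):

  #ifdef CONFIG_64BIT
  mfc0    t0, CP0_STATUS
  ori     t0, t0, ST0_KX          # allow 64-bit kernel segment accesses
  mtc0    t0, CP0_STATUS
  #endif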
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: Markos Chandras <markos.chandras@imgtec.com>
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/11201/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Set the Status.BEV bit throughout the early startup of a secondary core
such that if an exception occurs the core branches to one of the
exception vector entries from cps-vec.S, rather than branching to
whatever is set in EBase.
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: Markos Chandras <markos.chandras@imgtec.com>
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/11200/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
The CONFIG_MIPS_MT symbol can be selected by CONFIG_MIPS_VPE_LOADER in
addition to CONFIG_MIPS_MT_SMP. We only want MT code in the CPS SMP boot
vector if we're using MT for SMP. Thus switch the config symbol we ifdef
against to CONFIG_MIPS_MT_SMP.
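Illustratively, the guard in the boot vector becomes:

  #ifdef CONFIG_MIPS_MT_SMP       /* previously CONFIG_MIPS_MT */
  /* MT ASE VPE bring-up, only wanted when MT is used for SMP */
  #endif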
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: Markos Chandras <markos.chandras@imgtec.com>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: <stable@vger.kernel.org> # 3.16+
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/10867/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
The MT-specific code in mips_cps_boot_vpes can safely be omitted from
kernels which don't support MT, with the default VPE==0 case being used
as it would be after the has_mt (Config3.MT) check failed at runtime.
Discarding the code entirely will save us a few bytes & allow cleaner
handling of MT ASE instructions by later patches.
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: Markos Chandras <markos.chandras@imgtec.com>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: <stable@vger.kernel.org> # 3.16+
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/10866/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
The has_mt macro ended with a branch, leaving its callers with a delay
slot that would be executed if Config3.MT is not set. However it would
not be executed if Config3 (or an earlier Config register) doesn't
exist, which makes it somewhat inconsistent at best. Fill the delay slot in the
macro & fix the mips_cps_boot_vpes caller appropriately.
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: Markos Chandras <markos.chandras@imgtec.com>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: <stable@vger.kernel.org> # 3.16+
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/10865/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Commit b677bc03d7 ("MIPS: cps-vec: Use macros for various arithmetics
and memory operations") replaced various load & store instructions
through cps-vec.S with the PTR_L & PTR_S macros. However it was somewhat
overzealous in doing so for CM GCR accesses, since the bit width of the
CM doesn't necessarily match that of the CPU. The registers accessed
(GCR_CL_COHERENCE & GCR_CL_ID) should be safe to simply always access
using 32b instructions, so do so in order to avoid issues when using a
32b CM with a 64b CPU.
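e.g. (illustrative):

  lw      t9, GCR_CL_ID_OFS(v1)   # fixed 32-bit access; PTR_L would be a
                                  # 64-bit "ld" on 64-bit kernels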
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: Markos Chandras <markos.chandras@imgtec.com>
Cc: <stable@vger.kernel.org> # 3.16+
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Cc: James Hogan <james.hogan@imgtec.com>
Patchwork: https://patchwork.linux-mips.org/patch/10864/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Replace lw/sw and various arithmetic instructions with macros so the
code can work on 64-bit kernels as well.
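For example (sketch), the asm.h macros expand to the natural width for
the kernel being built:

  PTR_L     t0, 0(t1)             # lw on 32-bit kernels, ld on 64-bit
  PTR_S     t0, 0(t1)             # sw / sd likewise
  PTR_ADDIU t1, t1, 8             # addiu / daddiu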
Cc: <stable@vger.kernel.org> # 3.16+
Reviewed-by: Paul Burton <paul.burton@imgtec.com>
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: stable@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/10591/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
In preparation for 64-bit CPS support, we replace KSEG0 with CKSEG0
so 64-bit kernels can be supported.
Cc: <stable@vger.kernel.org> # 3.16+
Reviewed-by: Paul Burton <paul.burton@imgtec.com>
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: stable@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/10590/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
The cps-vec code assumes the o32 ABI and uses t4-t7 in quite a few
places. This breaks the build on 64-bit. As a result, use the
pseudo-registers ta0-ta3 to make the code compatible with 64-bit.
Cc: <stable@vger.kernel.org> # 3.16+
Reviewed-by: Paul Burton <paul.burton@imgtec.com>
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: stable@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/10589/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
mips32r2 is a subset of mips64r2, so we replace mips32r2 with mips64r2
in preparation for 64-bit CPS support.
Cc: <stable@vger.kernel.org> # 3.16+
Reviewed-by: Paul Burton <paul.burton@imgtec.com>
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: stable@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/10588/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
The PTR_LA macro will pick the correct "la" or "dla" macro to
load an address into a register. This gets rid of the following
warnings (and others) when building a 64-bit CPS kernel:
arch/mips/kernel/cps-vec.S:63: Warning: la used to load 64-bit address
arch/mips/kernel/cps-vec.S:159: Warning: la used to load 64-bit address
arch/mips/kernel/cps-vec.S:220: Warning: la used to load 64-bit address
arch/mips/kernel/cps-vec.S:240: Warning: la used to load 64-bit address
[...]
Cc: <stable@vger.kernel.org> # 3.16+
Reviewed-by: Paul Burton <paul.burton@imgtec.com>
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: stable@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/10587/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
The "addi" instruction will trap on overflows which is not something
we need in this code, so we replace that with "addiu".
Link: http://www.linux-mips.org/archives/linux-mips/2015-01/msg00430.html
Cc: Maciej W. Rozycki <macro@linux-mips.org>
Cc: <stable@vger.kernel.org> # v3.15+
Cc: Paul Burton <paul.burton@imgtec.com>
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
The CPS code does several memory loads when configuring the VPEs
from secondary cores, so the segmentation control registers must be
initialized in time, otherwise the kernel will crash with strange
TLB exceptions.
Reviewed-by: Paul Burton <paul.burton@imgtec.com>
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Patchwork: http://patchwork.linux-mips.org/patch/7424/
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Rather than hardcoding CCA=0x5 for secondary cores, re-use the CCA from
the boot CPU. This allows overrides of the CCA using the cca= kernel
parameter to take effect on all CPUs for consistency.
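Reading it amounts to something like the following sketch
(CONF_CM_CMASK as defined in asm/mipsregs.h):

  mfc0    t0, CP0_CONFIG
  andi    t0, t0, CONF_CM_CMASK   # Config.K0: the CCA in use on the boot CPU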
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
This patch adds code to generate entry & exit code for various low power
states available on systems based around the MIPS Coherent Processing
System architecture (ie. those with a Coherence Manager, Global
Interrupt Controller & for >=CM2 a Cluster Power Controller). States
supported are:
- Non-coherent wait. This state first leaves the coherent domain and
then executes a regular MIPS wait instruction. Power savings are
found from the elimination of coherency interventions between the
core and any other coherent requestors in the system.
- Clock gated. This state leaves the coherent domain and then gates
the clock input to the core. This removes all dynamic power from the
core but leaves the core at the mercy of another to restart its
clock. Register state is preserved, but the core cannot service
interrupts whilst its clock is gated.
- Power gated. This deepest state removes all power input to the core.
All register state is lost and the core will restart execution from
its BEV when another core powers it back up. Because register state
is lost this state requires cooperation with the CONFIG_MIPS_CPS SMP
implementation in order for the core to exit the state successfully.
The code will detect which states are available on the current system
during boot & generate the entry/exit code for those states. This will
be used by cpuidle & hotplug implementations.
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
When hotplug and/or a powered down idle state are supported, cases will
arise where a non-zero VPE must be brought online without VPE 0, and
where multiple VPEs must be onlined simultaneously. This patch prepares
for that by:
- Splitting struct boot_config into core & VPE boot config structures,
allocated one per core or VPE respectively. This allows for multiple
VPEs to be onlined simultaneously without clobbering each other's
configuration.
- Indicating which VPEs should be online within a core at any given
time using a bitmap. This allows multiple VPEs to be brought online
simultaneously and also indicates to VPE 0 whether it should halt
after starting any non-zero VPEs that should be online within the
core. For example if all VPEs within a core are offlined via hotplug
and the user onlines the second VPE within that core:
1) The core will be powered up.
2) VPE 0 will run from the BEV (ie. mips_cps_core_entry) to
initialise the core.
3) VPE 0 will start VPE 1 because its bit is set in the core's
bitmap.
4) VPE 0 will halt itself because its own bit is clear in the core's
bitmap.
- Moving the core & VPE initialisation to assembly code which does not
make any use of the stack. This is because if a non-zero VPE is to
be brought online in a powered down core then when VPE 0 of that
core runs it may not have a valid stack, and even if it did then
it's messy to run through parts of generic kernel code on VPE 0
before starting the correct VPE.
Signed-off-by: Paul Burton <paul.burton@imgtec.com>