All low-level PM/SMP code using virt_to_phys() should actually use
__pa_symbol() against kernel symbols. Update code where relevant to move
away from virt_to_phys().
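For illustration, a minimal sketch of the substitution, assuming a
hypothetical entry point and boot register (writel(), virt_to_phys()
and __pa_symbol() are the real helpers):

    #include <linux/io.h>
    #include <linux/mm.h>

    extern void my_secondary_startup(void);  /* hypothetical symbol */

    static void set_boot_addr(void __iomem *boot_reg)  /* hypothetical */
    {
        /* Before - virt_to_phys() is only defined for addresses in
         * the linear (lowmem) mapping, not for kernel symbols:
         *
         *      writel(virt_to_phys(my_secondary_startup), boot_reg);
         */

        /* After - __pa_symbol() is the correct translation for the
         * address of a kernel symbol: */
        writel(__pa_symbol(my_secondary_startup), boot_reg);
    }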
Acked-by: Russell King <rmk+kernel@armlinux.org.uk>
Reviewed-by: Laura Abbott <labbott@redhat.com>
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
All ARMv5 and older CPUs invalidate their caches in the early assembly
setup function, prior to enabling the MMU. This is because the L1
cache should not contain any data relevant to the execution of the
kernel at this point; all data should have been flushed out to memory.
This requirement should also be true for ARMv6 and ARMv7 CPUs - indeed,
these typically do not search their caches when caching is disabled (as
it needs to be when the MMU is disabled) so this change should be safe.
ARMv7 allows there to be CPUs which search their caches while caching is
disabled, and it's permitted that the cache is uninitialised at boot;
for these, the architecture reference manual requires that an
implementation specific code sequence is used immediately after reset
to ensure that the cache is placed into a sane state. Such
functionality is definitely outside the remit of the Linux kernel, and
must be done by the SoC's firmware before _any_ CPU gets to the Linux
kernel.
Changing the data cache clean+invalidate to a mere invalidate allows us
to get rid of a lot of platform-specific hacks around this issue in
secondary CPU bringup paths - some of which were buggy.
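For illustration, the difference at the instruction level comes down to
which set/way maintenance operation is issued (a hedged sketch; the
helper name is made up, the encodings are the ARMv7 ones):

    #include <linux/types.h>

    /* Invalidate one cache line by set/way.  DCISW (c7, c6, 2)
     * simply discards the line; the previous clean+invalidate,
     * DCCISW (c7, c14, 2), would first write any dirty data back
     * to memory - exactly what must not happen with stale lines. */
    static inline void dc_invalidate_setway(u32 setway)
    {
        asm volatile("mcr p15, 0, %0, c7, c6, 2" : : "r" (setway));
    }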
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Tested-by: Florian Fainelli <f.fainelli@gmail.com>
Tested-by: Heiko Stuebner <heiko@sntech.de>
Tested-by: Dinh Nguyen <dinguyen@opensource.altera.com>
Acked-by: Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com>
Tested-by: Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com>
Acked-by: Shawn Guo <shawn.guo@linaro.org>
Tested-by: Thierry Reding <treding@nvidia.com>
Acked-by: Thierry Reding <treding@nvidia.com>
Tested-by: Geert Uytterhoeven <geert+renesas@glider.be>
Tested-by: Michal Simek <michal.simek@xilinx.com>
Tested-by: Wei Xu <xuwei5@hisilicon.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Match the naming pattern of all other SMP ops and rename
zynq_platform_cpu_die --> zynq_cpu_die.
Signed-off-by: Soren Brinkmann <soren.brinkmann@xilinx.com>
Signed-off-by: Michal Simek <michal.simek@xilinx.com>
The hotplug code contains only a single function, which is an SMP
function. Move that to platsmp.c where all other SMP functions reside.
That allows removing hotplug.c and declaring the cpu_die function
static.
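The result is roughly the following shape (a sketch, not the exact zynq
code; struct smp_operations and its .cpu_die hook are the real arch/arm
interface):

    /* platsmp.c: the single hotplug function now lives with the
     * rest of the SMP code and needs no external linkage. */
    #ifdef CONFIG_HOTPLUG_CPU
    static void zynq_cpu_die(unsigned int cpu)
    {
        /* ... park the CPU in a low-power loop ... */
    }
    #endif

    struct smp_operations zynq_smp_ops __initdata = {
        /* ... .smp_init_cpus, .smp_boot_secondary, ... */
    #ifdef CONFIG_HOTPLUG_CPU
        .cpu_die    = zynq_cpu_die,
    #endif
    };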
Signed-off-by: Soren Brinkmann <soren.brinkmann@xilinx.com>
Signed-off-by: Michal Simek <michal.simek@xilinx.com>
Avoid races by adding synchronisation between the arch-specific
kill and die routines.
The same synchronisation issue was fixed on IMX platform
by this commit:
"ARM: imx: fix sync issue between imx_cpu_die and imx_cpu_kill"
(sha1: 2f3edfd7e2)
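The shape of the fix is roughly the hedged sketch below (the flag-based
handshake is illustrative, not the actual zynq implementation): the
dying CPU signals that it has parked before the killing CPU cuts power.

    #include <linux/bitops.h>
    #include <linux/cpumask.h>
    #include <asm/proc-fns.h>

    static DECLARE_BITMAP(cpu_parked, NR_CPUS);  /* illustrative flag */

    /* Runs on the CPU that is going down. */
    static void zynq_cpu_die(unsigned int cpu)
    {
        set_bit(cpu, cpu_parked);    /* tell the killer we are ready */
        while (1)
            cpu_do_idle();           /* wfi; wait for power-off */
    }

    /* Runs on a surviving CPU. */
    static int zynq_cpu_kill(unsigned int cpu)
    {
        while (!test_bit(cpu, cpu_parked))   /* don't race the victim */
            cpu_relax();
        /* ... now it is safe to gate the CPU's clock/power ... */
        return 1;
    }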
Signed-off-by: Soren Brinkmann <soren.brinkmann@xilinx.com>
Signed-off-by: Michal Simek <michal.simek@xilinx.com>
This patch also removes setting cpu_present_mask as platforms should
only re-initialize it in smp_prepare_cpus() if present != possible.
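A hedged sketch of what an smp_init_cpus() then looks like (scu_base is
assumed to be mapped elsewhere; scu_get_core_count() and
set_cpu_possible() are the real helpers):

    #include <linux/init.h>
    #include <linux/smp.h>
    #include <asm/smp_scu.h>

    static void __iomem *scu_base;  /* assumed mapped in early init */

    static void __init zynq_smp_init_cpus(void)
    {
        int i, ncores = scu_get_core_count(scu_base);

        for (i = 0; i < ncores && i < nr_cpu_ids; i++)
            set_cpu_possible(i, true);

        /* No set_cpu_present() here: the core code initialises the
         * present mask from the possible mask; only a platform
         * where present != possible would adjust it, and then in
         * smp_prepare_cpus(). */
    }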
Cc: Michal Simek <michal.simek@xilinx.com>
Signed-off-by: Sudeep KarkadaNagesha <sudeep.karkadanagesha@arm.com>
Signed-off-by: Michal Simek <michal.simek@xilinx.com>
During boot, Linux performs only a clean+invalidate operation, which
can cause faulty data to be written out to the memory system during
resume.
Therefore invalidate the L1 in the secondary boot path to avoid these
issues.
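The shape of the fix is a tiny entry stub (shown here as a hedged C
sketch with inline assembly; the real code is an equivalent plain
assembly stub, and v7_invalidate_l1 and secondary_startup are the real
symbols):

    #include <linux/compiler.h>

    void __naked zynq_secondary_startup(void)
    {
        asm volatile(
            "   bl  v7_invalidate_l1\n"  /* discard stale L1 state */
            "   b   secondary_startup\n" /* generic ARM secondary entry */
        );
    }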
Signed-off-by: Soren Brinkmann <soren.brinkmann@xilinx.com>
Signed-off-by: Michal Simek <michal.simek@xilinx.com>
The generic code already checks that the CPU being requested is legal if
the cpu possible/present masks are set correctly.
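Concretely, a boot hook such as the hedged sketch below can trust its
cpu argument, so an explicit range check adds nothing:

    static int zynq_boot_secondary(unsigned int cpu,
                                   struct task_struct *idle)
    {
        /* A guard like
         *      if (cpu >= ncores)
         *              return -EINVAL;
         * is redundant: __cpu_up() only hands us CPUs that are in
         * the possible/present masks set up by smp_init_cpus(). */
        return zynq_cpun_start(virt_to_phys(secondary_startup), cpu);
    }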
Cc: Russell King <linux@arm.linux.org.uk>
Signed-off-by: Soren Brinkmann <soren.brinkmann@xilinx.com>
Signed-off-by: Michal Simek <michal.simek@xilinx.com>
The __cpuinit type of throwaway sections might have made sense
some time ago when RAM was more constrained, but now the savings
do not offset the cost and complications. For example, the fix in
commit 5e427ec2d0 ("x86: Fix bit corruption at CPU resume time")
is a good example of the nasty type of bugs that can be created
with improper use of the various __init prefixes.
After a discussion on LKML[1] it was decided that cpuinit should go
the way of devinit and be phased out. Once all the users are gone,
we can then finally remove the macros themselves from linux/init.h.
Note that some harmless section mismatch warnings may result, since
notify_cpu_starting() and cpu_up() are arch independent (kernel/cpu.c)
and are flagged as __cpuinit -- so if we remove the __cpuinit from
the arch specific callers, we will also get section mismatch warnings.
As an intermediate step, we intend to turn the linux/init.h cpuinit
related content into no-ops as early as possible, since that will get
rid of these warnings. In any case, they are temporary and harmless.
This removes all the ARM uses of the __cpuinit macros from C code,
and all __CPUINIT from assembly code. It also had two ".previous"
section statements that were paired off against __CPUINIT
(aka .section ".cpuinit.text") that also get removed here.
[1] https://lkml.org/lkml/2013/5/20/589
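The conversion itself is mechanical; a representative before/after on a
hypothetical function:

    /* Before: placed in the discardable .cpuinit.text section. */
    static void __cpuinit foo_secondary_init(unsigned int cpu)
    {
        /* per-CPU bring-up work */
    }

    /* After: ordinary .text; no special section, no mismatch rules. */
    static void foo_secondary_init(unsigned int cpu)
    {
        /* per-CPU bring-up work */
    }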
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Will Deacon <will.deacon@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
Both zynq and shmobile have conflicts against the gic cleanup
series, resolved here.
Conflicts:
arch/arm/mach-shmobile/smp-emev2.c
arch/arm/mach-shmobile/smp-r8a7779.c
arch/arm/mach-shmobile/smp-sh73a0.c
arch/arm/mach-zynq/platsmp.c
drivers/gpio/gpio-pl061.c
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Zynq is a dual-core Cortex-A9 whose secondary CPU always starts
executing at address zero. Using a simple trampoline there ensures a
long jump to the secondary_startup code.
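Simplified, the idea looks like the hedged sketch below (the trampoline
symbols match the new headsmp.S stub; "zero" is assumed to be a mapping
of physical address 0):

    #include <linux/io.h>
    #include <linux/types.h>

    extern char zynq_secondary_trampoline;       /* start of the stub */
    extern char zynq_secondary_trampoline_jump;  /* slot for the target */
    extern char zynq_secondary_trampoline_end;

    static void __iomem *zero;  /* assumed: maps physical address 0 */

    static int zynq_cpun_start(u32 address, int cpu)
    {
        size_t size = &zynq_secondary_trampoline_end -
                      &zynq_secondary_trampoline;

        /* Install the stub at 0x0, where the CPU starts executing. */
        memcpy_toio(zero, &zynq_secondary_trampoline, size);

        /* Patch in the long-jump target: the physical address of
         * secondary_startup. */
        writel(address, zero + (&zynq_secondary_trampoline_jump -
                                &zynq_secondary_trampoline));

        /* ... flush, then release the CPU from reset ... */
        return 0;
    }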
Signed-off-by: Michal Simek <michal.simek@xilinx.com>
Signed-off-by: Steffen Trumtrar <s.trumtrar@pengutronix.de>