powerpc updates for 4.5
Merge tag 'powerpc-4.5-1' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux

Pull powerpc updates from Michael Ellerman:

 "Core:

   - Ground work for the new Power9 MMU from Aneesh Kumar K.V
   - Optimise FP/VMX/VSX context switching from Anton Blanchard

  Misc:

   - Various cleanups from Krzysztof Kozlowski, John Ogness, Rashmica
     Gupta, Russell Currey, Gavin Shan, Daniel Axtens, Michael Neuling,
     Andrew Donnellan
   - Allow wrapper to work on non-english system from Laurent Vivier
   - Add rN aliases to the pt_regs_offset table from Rashmica Gupta
   - Fix module autoload for rackmeter & axonram drivers from Luis de
     Bethencourt
   - Include KVM guest test in all interrupt vectors from Paul Mackerras
   - Fix DSCR inheritance over fork() from Anton Blanchard
   - Make value-returning atomics & {cmp}xchg* & their atomic_ versions
     fully ordered from Boqun Feng
   - Print MSR TM bits in oops messages from Michael Neuling
   - Add TM signal return & invalid stack selftests from Michael Neuling
   - Limit EPOW reset event warnings from Vipin K Parashar
   - Remove the Cell QPACE code from Rashmica Gupta
   - Append linux_banner to exception information in xmon from Rashmica
     Gupta
   - Add selftest to check if VSRs are corrupted from Rashmica Gupta
   - Remove broken GregorianDay() from Daniel Axtens
   - Import Anton's context_switch2 benchmark into selftests from
     Michael Ellerman
   - Add selftest script to test HMI functionality from Daniel Axtens
   - Remove obsolete OPAL v2 support from Stewart Smith
   - Make enter_rtas() private from Michael Ellerman
   - PPR exception cleanups from Michael Ellerman
   - Add page soft dirty tracking from Laurent Dufour
   - Add support for Nvlink NPUs from Alistair Popple
   - Add support for kexec on 476fpe from Alistair Popple
   - Enable kernel CPU dlpar from sysfs from Nathan Fontenot
   - Copy only required pieces of the mm_context_t to the paca from
     Michael Neuling
   - Add a kmsg_dumper that flushes OPAL console output on panic from
     Russell Currey
   - Implement save_stack_trace_regs() to enable kprobe stack tracing
     from Steven Rostedt
   - Add HWCAP bits for Power9 from Michael Ellerman
   - Fix _PAGE_PTE breaking swapoff from Aneesh Kumar K.V
   - Fix _PAGE_SWP_SOFT_DIRTY breaking swapoff from Hugh Dickins
   - scripts/recordmcount.pl: support data in text section on powerpc
     from Ulrich Weigand
   - Handle R_PPC64_ENTRY relocations in modules from Ulrich Weigand

  cxl:

   - Fix possible idr warning when contexts are released from Vaibhav
     Jain
   - use correct operator when writing pcie config space values from
     Andrew Donnellan
   - Fix DSI misses when the context owning task exits from Vaibhav Jain
   - fix build for GCC 4.6.x from Brian Norris
   - use -Werror only with CONFIG_PPC_WERROR from Brian Norris
   - Enable PCI device ID for future IBM CXL adapter from Uma Krishnan

  Freescale:

   - Freescale updates from Scott: Highlights include moving QE code out
     of arch/powerpc (to be shared with arm), device tree updates, and
     minor fixes"

* tag 'powerpc-4.5-1' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux: (149 commits)
  powerpc/module: Handle R_PPC64_ENTRY relocations
  scripts/recordmcount.pl: support data in text section on powerpc
  powerpc/powernv: Fix OPAL_CONSOLE_FLUSH prototype and usages
  powerpc/mm: fix _PAGE_SWP_SOFT_DIRTY breaking swapoff
  powerpc/mm: Fix _PAGE_PTE breaking swapoff
  cxl: Enable PCI device ID for future IBM CXL adapter
  cxl: use -Werror only with CONFIG_PPC_WERROR
  cxl: fix build for GCC 4.6.x
  powerpc: Add HWCAP bits for Power9
  powerpc/powernv: Reserve PE#0 on NPU
  powerpc/powernv: Change NPU PE# assignment
  powerpc/powernv: Fix update of NVLink DMA mask
  powerpc/powernv: Remove misleading comment in pci.c
  powerpc: Implement save_stack_trace_regs() to enable kprobe stack tracing
  powerpc: Fix build break due to paca mm_context_t changes
  cxl: Fix DSI misses when the context owning task exits
  MAINTAINERS: Update Scott Wood's e-mail address
  powerpc/powernv: Fix minor off-by-one error in opal_mce_check_early_recovery()
  powerpc: Fix style of self-test config prompts
  powerpc/powernv: Only delay opal_rtc_read() retry when necessary
  ...
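One item above is worth illustrating. A minimal C sketch (illustrative, not code from this series) of what "fully ordered" buys a caller of a value-returning atomic such as atomic_inc_return():

    #include <linux/atomic.h>

    static atomic_t flag = ATOMIC_INIT(0);
    static int data;

    static void producer(void)
    {
        data = 42;
        /* A fully ordered atomic acts as smp_mb() on both sides: the
         * store to data cannot sink below it, and later accesses
         * cannot hoist above it. */
        atomic_inc_return(&flag);
    }

    static int consumer(void)
    {
        /* atomic_xchg() is likewise fully ordered, so a consumer that
         * observes flag != 0 is guaranteed to observe data == 42. */
        if (atomic_xchg(&flag, 0))
            return data;
        return -1;
    }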
commit f689b742f2
@@ -14,7 +14,6 @@ Required properties:
     tegra132, or tegra210.
 - "nxp,lpc3220-uart"
 - "ralink,rt2880-uart"
-- "ibm,qpace-nwp-serial"
 - "altr,16550-FIFO32"
 - "altr,16550-FIFO64"
 - "altr,16550-FIFO128"
@@ -0,0 +1,63 @@
* Thermal Monitoring Unit (TMU) on Freescale QorIQ SoCs

Required properties:
- compatible : Must include "fsl,qoriq-tmu". The version of the device is
    determined by the TMU IP Block Revision Register (IPBRR0) at
    offset 0x0BF8.
    Table of correspondences between IPBRR0 values and example chips:
        Value           Device
        ----------      -----
        0x01900102      T1040
- reg : Address range of TMU registers.
- interrupts : Contains the interrupt for TMU.
- fsl,tmu-range : The values to be programmed into TTRnCR, as specified by
    the SoC reference manual. The first cell is TTR0CR, the second is
    TTR1CR, etc.
- fsl,tmu-calibration : A list of cell pairs containing temperature
    calibration data, as specified by the SoC reference manual.
    The first cell of each pair is the value to be written to TTCFGR,
    and the second is the value to be written to TSCFGR.

Example:

tmu@f0000 {
    compatible = "fsl,qoriq-tmu";
    reg = <0xf0000 0x1000>;
    interrupts = <18 2 0 0>;
    fsl,tmu-range = <0x000a0000 0x00090026 0x0008004a 0x0001006a>;
    fsl,tmu-calibration = <0x00000000 0x00000025
                           0x00000001 0x00000028
                           0x00000002 0x0000002d
                           0x00000003 0x00000031
                           0x00000004 0x00000036
                           0x00000005 0x0000003a
                           0x00000006 0x00000040
                           0x00000007 0x00000044
                           0x00000008 0x0000004a
                           0x00000009 0x0000004f
                           0x0000000a 0x00000054

                           0x00010000 0x0000000d
                           0x00010001 0x00000013
                           0x00010002 0x00000019
                           0x00010003 0x0000001f
                           0x00010004 0x00000025
                           0x00010005 0x0000002d
                           0x00010006 0x00000033
                           0x00010007 0x00000043
                           0x00010008 0x0000004b
                           0x00010009 0x00000053

                           0x00020000 0x00000010
                           0x00020001 0x00000017
                           0x00020002 0x0000001f
                           0x00020003 0x00000029
                           0x00020004 0x00000031
                           0x00020005 0x0000003c
                           0x00020006 0x00000042
                           0x00020007 0x0000004d
                           0x00020008 0x00000056

                           0x00030000 0x00000012
                           0x00030001 0x0000001d>;
};
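For orientation, a hedged sketch of how a driver might consume the fsl,tmu-calibration pairs described above; this is not the actual qoriq thermal driver, and the register offsets passed to iowrite32be() are illustrative:

    #include <linux/errno.h>
    #include <linux/io.h>
    #include <linux/of.h>

    static int tmu_write_calibration(struct device_node *np, void __iomem *base)
    {
        int i, len;
        u32 ttcfgr, tscfgr;

        len = of_property_count_u32_elems(np, "fsl,tmu-calibration");
        if (len < 0 || len % 2)
            return -EINVAL;

        for (i = 0; i < len; i += 2) {
            /* first cell of each pair -> TTCFGR, second -> TSCFGR */
            of_property_read_u32_index(np, "fsl,tmu-calibration", i, &ttcfgr);
            of_property_read_u32_index(np, "fsl,tmu-calibration", i + 1, &tscfgr);
            iowrite32be(ttcfgr, base + 0x80);   /* TTCFGR offset: illustrative */
            iowrite32be(tscfgr, base + 0x84);   /* TSCFGR offset: illustrative */
        }
        return 0;
    }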
@@ -2993,6 +2993,12 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
 			may be specified.
 			Format: <port>,<port>....
 
+	ppc_strict_facility_enable
+			[PPC] This option catches any kernel floating point,
+			Altivec, VSX and SPE outside of regions specifically
+			allowed (eg kernel_enable_fpu()/kernel_disable_fpu()).
+			There is some performance impact when enabling this.
+
 	print-fatal-signals=
 			[KNL] debug: print fatal signals
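A minimal sketch of the bracketing pattern this option polices, using the helper names quoted in the parameter text above (the exact helpers in a given tree may differ, e.g. enable_kernel_fp()/disable_kernel_fp() in this series):

    #include <linux/preempt.h>

    static void kernel_fp_work(void)
    {
        preempt_disable();      /* FP state must not be context-switched away */
        enable_kernel_fp();     /* declare a legitimate kernel FP region */

        /* ... floating point / Altivec / VSX code ... */

        disable_kernel_fp();    /* with ppc_strict_facility_enable set, FP use
                                 * outside this window faults immediately */
        preempt_enable();
    }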
@@ -4490,8 +4490,9 @@ F: include/linux/fs_enet_pd.h
 FREESCALE QUICC ENGINE LIBRARY
 L: linuxppc-dev@lists.ozlabs.org
 S: Orphan
-F: arch/powerpc/sysdev/qe_lib/
-F: arch/powerpc/include/asm/*qe.h
+F: drivers/soc/fsl/qe/
+F: include/soc/fsl/*qe*.h
+F: include/soc/fsl/*ucc*.h
 
 FREESCALE USB PERIPHERAL DRIVERS
 M: Li Yang <leoli@freescale.com>
@@ -6444,7 +6445,7 @@ S: Maintained
 F: arch/powerpc/platforms/8xx/
 
 LINUX FOR POWERPC EMBEDDED PPC83XX AND PPC85XX
-M: Scott Wood <scottwood@freescale.com>
+M: Scott Wood <oss@buserror.net>
 M: Kumar Gala <galak@kernel.crashing.org>
 W: http://www.penguinppc.org/
 L: linuxppc-dev@lists.ozlabs.org
@@ -560,6 +560,7 @@ choice
 
 config PPC_4K_PAGES
 	bool "4k page size"
+	select HAVE_ARCH_SOFT_DIRTY if CHECKPOINT_RESTORE && PPC_BOOK3S
 
 config PPC_16K_PAGES
 	bool "16k page size"
@@ -568,6 +569,7 @@ config PPC_16K_PAGES
 config PPC_64K_PAGES
 	bool "64k page size"
 	depends on !PPC_FSL_BOOK3E && (44x || PPC_STD_MMU_64 || PPC_BOOK3E_64)
+	select HAVE_ARCH_SOFT_DIRTY if CHECKPOINT_RESTORE && PPC_BOOK3S
 
 config PPC_256K_PAGES
 	bool "256k page size"
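The HAVE_ARCH_SOFT_DIRTY selects above hook powerpc into the generic soft-dirty ABI (Documentation/vm/soft-dirty.txt): write "4" to /proc/<pid>/clear_refs to clear the bits, then test bit 55 of the page's pagemap entry. A small userspace sketch:

    #include <fcntl.h>
    #include <stdint.h>
    #include <unistd.h>

    static int page_soft_dirty(void *addr)
    {
        uint64_t entry = 0;
        int fd = open("/proc/self/pagemap", O_RDONLY);

        if (fd < 0)
            return -1;
        pread(fd, &entry, sizeof(entry),
              ((uintptr_t)addr / sysconf(_SC_PAGESIZE)) * sizeof(entry));
        close(fd);
        return (entry >> 55) & 1;   /* bit 55: page is soft-dirty */
    }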
@@ -1075,8 +1077,6 @@ source "drivers/Kconfig"
 
 source "fs/Kconfig"
 
-source "arch/powerpc/sysdev/qe_lib/Kconfig"
-
 source "lib/Kconfig"
 
 source "arch/powerpc/Kconfig.debug"
@@ -64,17 +64,17 @@ config PPC_EMULATED_STATS
 	  emulated.
 
 config CODE_PATCHING_SELFTEST
-	bool "Run self-tests of the code-patching code."
+	bool "Run self-tests of the code-patching code"
 	depends on DEBUG_KERNEL
 	default n
 
 config FTR_FIXUP_SELFTEST
-	bool "Run self-tests of the feature-fixup code."
+	bool "Run self-tests of the feature-fixup code"
 	depends on DEBUG_KERNEL
 	default n
 
 config MSI_BITMAP_SELFTEST
-	bool "Run self-tests of the MSI bitmap code."
+	bool "Run self-tests of the MSI bitmap code"
 	depends on DEBUG_KERNEL
 	default n
@@ -113,7 +113,6 @@ src-plat-$(CONFIG_EPAPR_BOOT) += epapr.c epapr-wrapper.c
 src-plat-$(CONFIG_PPC_PSERIES) += pseries-head.S
 src-plat-$(CONFIG_PPC_POWERNV) += pseries-head.S
 src-plat-$(CONFIG_PPC_IBM_CELL_BLADE) += pseries-head.S
-src-plat-$(CONFIG_PPC_CELL_QPACE) += pseries-head.S
 
 src-wlib := $(sort $(src-wlib-y))
 src-plat := $(sort $(src-plat-y))
@@ -217,7 +216,6 @@ image-$(CONFIG_PPC_POWERNV) += zImage.pseries
 image-$(CONFIG_PPC_MAPLE) += zImage.maple
 image-$(CONFIG_PPC_IBM_CELL_BLADE) += zImage.pseries
 image-$(CONFIG_PPC_PS3) += dtbImage.ps3
-image-$(CONFIG_PPC_CELL_QPACE) += zImage.pseries
 image-$(CONFIG_PPC_CHRP) += zImage.chrp
 image-$(CONFIG_PPC_EFIKA) += zImage.chrp
 image-$(CONFIG_PPC_PMAC) += zImage.pmac
@@ -474,6 +474,11 @@
 	fman@400000 {
 		interrupts = <96 2 0 0>, <16 2 1 30>;
 
+		muram@0 {
+			compatible = "fsl,fman-muram";
+			reg = <0x0 0x80000>;
+		};
+
 		enet0: ethernet@e0000 {
 		};
@@ -29,6 +29,21 @@
 	soc: soc@ff700000 {
 		ranges = <0x0 0x0 0xff700000 0x100000>;
 	};
 
+	pci0: pcie@ff70a000 {
+		reg = <0 0xff70a000 0 0x1000>;
+		ranges = <0x2000000 0x0 0x90000000 0 0x90000000 0x0 0x20000000
+			  0x1000000 0x0 0x00000000 0 0xc0010000 0x0 0x10000>;
+		pcie@0 {
+			ranges = <0x2000000 0x0 0x90000000
+				  0x2000000 0x0 0x90000000
+				  0x0 0x20000000
+
+				  0x1000000 0x0 0x0
+				  0x1000000 0x0 0x0
+				  0x0 0x100000>;
+		};
+	};
 };
 
 /include/ "bsc9132qds.dtsi"
@@ -40,6 +40,34 @@
 	interrupts = <16 2 0 0 20 2 0 0>;
 };
+
+/* controller at 0xa000 */
+&pci0 {
+	compatible = "fsl,bsc9132-pcie", "fsl,qoriq-pcie-v2.2";
+	device_type = "pci";
+	#size-cells = <2>;
+	#address-cells = <3>;
+	bus-range = <0 255>;
+	interrupts = <16 2 0 0>;
+
+	pcie@0 {
+		reg = <0 0 0 0 0>;
+		#interrupt-cells = <1>;
+		#size-cells = <2>;
+		#address-cells = <3>;
+		device_type = "pci";
+		interrupts = <16 2 0 0>;
+		interrupt-map-mask = <0xf800 0 0 7>;
+
+		interrupt-map = <
+			/* IDSEL 0x0 */
+			0000 0x0 0x0 0x1 &mpic 0x0 0x2 0x0 0x0
+			0000 0x0 0x0 0x2 &mpic 0x1 0x2 0x0 0x0
+			0000 0x0 0x0 0x3 &mpic 0x2 0x2 0x0 0x0
+			0000 0x0 0x0 0x4 &mpic 0x3 0x2 0x0 0x0
+			>;
+	};
+};
 
 &soc {
 	#address-cells = <1>;
 	#size-cells = <1>;
@@ -45,6 +45,7 @@
 	serial0 = &serial0;
 	ethernet0 = &enet0;
 	ethernet1 = &enet1;
+	pci0 = &pci0;
 };
 
 cpus {
@@ -215,3 +215,19 @@
 		phy-connection-type = "sgmii";
 	};
 };
+
+&pci0 {
+	pcie@0 {
+		interrupt-map = <
+			/* IDSEL 0x0 */
+			/*
+			 *irq[4:5] are active-high
+			 *irq[6:7] are active-low
+			 */
+			0000 0x0 0x0 0x1 &mpic 0x4 0x2 0x0 0x0
+			0000 0x0 0x0 0x2 &mpic 0x5 0x2 0x0 0x0
+			0000 0x0 0x0 0x3 &mpic 0x6 0x1 0x0 0x0
+			0000 0x0 0x0 0x4 &mpic 0x7 0x1 0x0 0x0
+			>;
+	};
+};
@@ -159,4 +159,4 @@
 	};
 };
 
-/include/ "t1023si-post.dtsi"
+#include "t1023si-post.dtsi"
@@ -32,6 +32,8 @@
  * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
  */
 
+#include <dt-bindings/thermal/thermal.h>
+
 &ifc {
 	#address-cells = <2>;
 	#size-cells = <1>;
@@ -275,6 +277,90 @@
 		reg = <0xea000 0x4000>;
 	};
 
+	tmu: tmu@f0000 {
+		compatible = "fsl,qoriq-tmu";
+		reg = <0xf0000 0x1000>;
+		interrupts = <18 2 0 0>;
+		fsl,tmu-range = <0xb0000 0xa0026 0x80048 0x30061>;
+		fsl,tmu-calibration = <0x00000000 0x0000000f
+				       0x00000001 0x00000017
+				       0x00000002 0x0000001e
+				       0x00000003 0x00000026
+				       0x00000004 0x0000002e
+				       0x00000005 0x00000035
+				       0x00000006 0x0000003d
+				       0x00000007 0x00000044
+				       0x00000008 0x0000004c
+				       0x00000009 0x00000053
+				       0x0000000a 0x0000005b
+				       0x0000000b 0x00000064
+
+				       0x00010000 0x00000011
+				       0x00010001 0x0000001c
+				       0x00010002 0x00000024
+				       0x00010003 0x0000002b
+				       0x00010004 0x00000034
+				       0x00010005 0x00000039
+				       0x00010006 0x00000042
+				       0x00010007 0x0000004c
+				       0x00010008 0x00000051
+				       0x00010009 0x0000005a
+				       0x0001000a 0x00000063
+
+				       0x00020000 0x00000013
+				       0x00020001 0x00000019
+				       0x00020002 0x00000024
+				       0x00020003 0x0000002c
+				       0x00020004 0x00000035
+				       0x00020005 0x0000003d
+				       0x00020006 0x00000046
+				       0x00020007 0x00000050
+				       0x00020008 0x00000059
+
+				       0x00030000 0x00000002
+				       0x00030001 0x0000000d
+				       0x00030002 0x00000019
+				       0x00030003 0x00000024>;
+		#thermal-sensor-cells = <0>;
+	};
+
+	thermal-zones {
+		cpu_thermal: cpu-thermal {
+			polling-delay-passive = <1000>;
+			polling-delay = <5000>;
+
+			thermal-sensors = <&tmu>;
+
+			trips {
+				cpu_alert: cpu-alert {
+					temperature = <85000>;
+					hysteresis = <2000>;
+					type = "passive";
+				};
+				cpu_crit: cpu-crit {
+					temperature = <95000>;
+					hysteresis = <2000>;
+					type = "critical";
+				};
+			};
+
+			cooling-maps {
+				map0 {
+					trip = <&cpu_alert>;
+					cooling-device =
+						<&cpu0 THERMAL_NO_LIMIT
+						THERMAL_NO_LIMIT>;
+				};
+				map1 {
+					trip = <&cpu_alert>;
+					cooling-device =
+						<&cpu1 THERMAL_NO_LIMIT
+						THERMAL_NO_LIMIT>;
+				};
+			};
+		};
+	};
+
 	scfg: global-utilities@fc000 {
 		compatible = "fsl,t1023-scfg";
 		reg = <0xfc000 0x1000>;
@@ -248,4 +248,4 @@
 	};
 };
 
-/include/ "t1024si-post.dtsi"
+#include "t1024si-post.dtsi"
@@ -188,4 +188,4 @@
 	};
 };
 
-/include/ "t1024si-post.dtsi"
+#include "t1024si-post.dtsi"
@@ -32,7 +32,7 @@
  * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
  */
 
-/include/ "t1023si-post.dtsi"
+#include "t1023si-post.dtsi"
 
 / {
 	aliases {
@@ -76,6 +76,7 @@
 			reg = <0>;
 			clocks = <&mux0>;
 			next-level-cache = <&L2_1>;
+			#cooling-cells = <2>;
 			L2_1: l2-cache {
 				next-level-cache = <&cpc>;
 			};
@@ -85,6 +86,7 @@
 			reg = <1>;
 			clocks = <&mux1>;
 			next-level-cache = <&L2_2>;
+			#cooling-cells = <2>;
 			L2_2: l2-cache {
 				next-level-cache = <&cpc>;
 			};
@@ -43,4 +43,4 @@
 	interrupt-parent = <&mpic>;
 };
 
-/include/ "t1040si-post.dtsi"
+#include "t1040si-post.dtsi"
@@ -43,4 +43,4 @@
 	interrupt-parent = <&mpic>;
 };
 
-/include/ "t1040si-post.dtsi"
+#include "t1040si-post.dtsi"
@@ -45,4 +45,4 @@
 	};
 };
 
-/include/ "t1040si-post.dtsi"
+#include "t1040si-post.dtsi"
@@ -32,6 +32,8 @@
  * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
  */
 
+#include <dt-bindings/thermal/thermal.h>
+
 &bman_fbpr {
 	compatible = "fsl,bman-fbpr";
 	alloc-ranges = <0 0 0x10000 0>;
@@ -484,6 +486,98 @@
 		reg = <0xea000 0x4000>;
 	};
 
+	tmu: tmu@f0000 {
+		compatible = "fsl,qoriq-tmu";
+		reg = <0xf0000 0x1000>;
+		interrupts = <18 2 0 0>;
+		fsl,tmu-range = <0xa0000 0x90026 0x8004a 0x1006a>;
+		fsl,tmu-calibration = <0x00000000 0x00000025
+				       0x00000001 0x00000028
+				       0x00000002 0x0000002d
+				       0x00000003 0x00000031
+				       0x00000004 0x00000036
+				       0x00000005 0x0000003a
+				       0x00000006 0x00000040
+				       0x00000007 0x00000044
+				       0x00000008 0x0000004a
+				       0x00000009 0x0000004f
+				       0x0000000a 0x00000054
+
+				       0x00010000 0x0000000d
+				       0x00010001 0x00000013
+				       0x00010002 0x00000019
+				       0x00010003 0x0000001f
+				       0x00010004 0x00000025
+				       0x00010005 0x0000002d
+				       0x00010006 0x00000033
+				       0x00010007 0x00000043
+				       0x00010008 0x0000004b
+				       0x00010009 0x00000053
+
+				       0x00020000 0x00000010
+				       0x00020001 0x00000017
+				       0x00020002 0x0000001f
+				       0x00020003 0x00000029
+				       0x00020004 0x00000031
+				       0x00020005 0x0000003c
+				       0x00020006 0x00000042
+				       0x00020007 0x0000004d
+				       0x00020008 0x00000056
+
+				       0x00030000 0x00000012
+				       0x00030001 0x0000001d>;
+		#thermal-sensor-cells = <0>;
+	};
+
+	thermal-zones {
+		cpu_thermal: cpu-thermal {
+			polling-delay-passive = <1000>;
+			polling-delay = <5000>;
+
+			thermal-sensors = <&tmu>;
+
+			trips {
+				cpu_alert: cpu-alert {
+					temperature = <85000>;
+					hysteresis = <2000>;
+					type = "passive";
+				};
+				cpu_crit: cpu-crit {
+					temperature = <95000>;
+					hysteresis = <2000>;
+					type = "critical";
+				};
+			};
+
+			cooling-maps {
+				map0 {
+					trip = <&cpu_alert>;
+					cooling-device =
+						<&cpu0 THERMAL_NO_LIMIT
+						THERMAL_NO_LIMIT>;
+				};
+				map1 {
+					trip = <&cpu_alert>;
+					cooling-device =
+						<&cpu1 THERMAL_NO_LIMIT
+						THERMAL_NO_LIMIT>;
+				};
+				map2 {
+					trip = <&cpu_alert>;
+					cooling-device =
+						<&cpu2 THERMAL_NO_LIMIT
+						THERMAL_NO_LIMIT>;
+				};
+				map3 {
+					trip = <&cpu_alert>;
+					cooling-device =
+						<&cpu3 THERMAL_NO_LIMIT
+						THERMAL_NO_LIMIT>;
+				};
+			};
+		};
+	};
+
 	scfg: global-utilities@fc000 {
 		compatible = "fsl,t1040-scfg";
 		reg = <0xfc000 0x1000>;
@@ -50,4 +50,4 @@
 	};
 };
 
-/include/ "t1040si-post.dtsi"
+#include "t1042si-post.dtsi"
@@ -43,4 +43,4 @@
 	interrupt-parent = <&mpic>;
 };
 
-/include/ "t1042si-post.dtsi"
+#include "t1042si-post.dtsi"
@@ -45,4 +45,4 @@
 	};
 };
 
-/include/ "t1042si-post.dtsi"
+#include "t1042si-post.dtsi"
@@ -54,4 +54,4 @@
 	};
 };
 
-/include/ "t1042si-post.dtsi"
+#include "t1042si-post.dtsi"
@@ -32,6 +32,6 @@
  * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
  */
 
-/include/ "t1040si-post.dtsi"
+#include "t1040si-post.dtsi"
 
 /* Place holder for ethernet related device tree nodes */
@@ -76,6 +76,7 @@
 			reg = <0>;
 			clocks = <&mux0>;
 			next-level-cache = <&L2_1>;
+			#cooling-cells = <2>;
 			L2_1: l2-cache {
 				next-level-cache = <&cpc>;
 			};
@@ -85,6 +86,7 @@
 			reg = <1>;
 			clocks = <&mux1>;
 			next-level-cache = <&L2_2>;
+			#cooling-cells = <2>;
 			L2_2: l2-cache {
 				next-level-cache = <&cpc>;
 			};
@@ -94,6 +96,7 @@
 			reg = <2>;
 			clocks = <&mux2>;
 			next-level-cache = <&L2_3>;
+			#cooling-cells = <2>;
 			L2_3: l2-cache {
 				next-level-cache = <&cpc>;
 			};
@@ -103,6 +106,7 @@
 			reg = <3>;
 			clocks = <&mux3>;
 			next-level-cache = <&L2_4>;
+			#cooling-cells = <2>;
 			L2_4: l2-cache {
 				next-level-cache = <&cpc>;
 			};
@@ -154,7 +154,7 @@ if [ -z "$kernel" ]; then
     kernel=vmlinux
 fi
 
-elfformat="`${CROSS}objdump -p "$kernel" | grep 'file format' | awk '{print $4}'`"
+LANG=C elfformat="`${CROSS}objdump -p "$kernel" | grep 'file format' | awk '{print $4}'`"
 case "$elfformat" in
     elf64-powerpcle)	format=elf64lppc	;;
     elf64-powerpc)	format=elf32ppc	;;
@@ -12,6 +12,7 @@ CONFIG_P1010_RDB=y
 CONFIG_P1022_DS=y
 CONFIG_P1022_RDK=y
 CONFIG_P1023_RDB=y
+CONFIG_TWR_P102x=y
 CONFIG_SBC8548=y
 CONFIG_SOCRATES=y
 CONFIG_STX_GP3=y
@@ -36,7 +36,6 @@ CONFIG_PS3_ROM=m
 CONFIG_PS3_FLASH=m
 CONFIG_PS3_LPM=m
 CONFIG_PPC_IBM_CELL_BLADE=y
-CONFIG_PPC_CELL_QPACE=y
 CONFIG_RTAS_FLASH=m
 CONFIG_IBMEBUS=y
 CONFIG_CPU_FREQ_PMAC64=y
@@ -85,6 +85,7 @@ static void spe_begin(void)
 
 static void spe_end(void)
 {
+	disable_kernel_spe();
 	/* reenable preemption */
 	preempt_enable();
 }
@@ -46,6 +46,7 @@ static void spe_begin(void)
 
 static void spe_end(void)
 {
+	disable_kernel_spe();
 	/* reenable preemption */
 	preempt_enable();
 }
@@ -47,6 +47,7 @@ static void spe_begin(void)
 
 static void spe_end(void)
 {
+	disable_kernel_spe();
 	/* reenable preemption */
 	preempt_enable();
 }
@@ -1,5 +1,5 @@
-#ifndef _ASM_POWERPC_PTE_HASH32_H
-#define _ASM_POWERPC_PTE_HASH32_H
+#ifndef _ASM_POWERPC_BOOK3S_32_HASH_H
+#define _ASM_POWERPC_BOOK3S_32_HASH_H
 #ifdef __KERNEL__
 
 /*
@@ -43,4 +43,4 @@
 #define PTE_ATOMIC_UPDATES	1
 
 #endif /* __KERNEL__ */
-#endif /* _ASM_POWERPC_PTE_HASH32_H */
+#endif /* _ASM_POWERPC_BOOK3S_32_HASH_H */
@@ -0,0 +1,482 @@
#ifndef _ASM_POWERPC_BOOK3S_32_PGTABLE_H
#define _ASM_POWERPC_BOOK3S_32_PGTABLE_H

#include <asm-generic/pgtable-nopmd.h>

#include <asm/book3s/32/hash.h>

/* And here we include common definitions */
#include <asm/pte-common.h>

/*
 * The normal case is that PTEs are 32-bits and we have a 1-page
 * 1024-entry pgdir pointing to 1-page 1024-entry PTE pages.  -- paulus
 *
 * For any >32-bit physical address platform, we can use the following
 * two level page table layout where the pgdir is 8KB and the MS 13 bits
 * are an index to the second level table.  The combined pgdir/pmd first
 * level has 2048 entries and the second level has 512 64-bit PTE entries.
 * -Matt
 */
/* PGDIR_SHIFT determines what a top-level page table entry can map */
#define PGDIR_SHIFT	(PAGE_SHIFT + PTE_SHIFT)
#define PGDIR_SIZE	(1UL << PGDIR_SHIFT)
#define PGDIR_MASK	(~(PGDIR_SIZE-1))

#define PTRS_PER_PTE	(1 << PTE_SHIFT)
#define PTRS_PER_PMD	1
#define PTRS_PER_PGD	(1 << (32 - PGDIR_SHIFT))

#define USER_PTRS_PER_PGD	(TASK_SIZE / PGDIR_SIZE)
/*
 * This is the bottom of the PKMAP area with HIGHMEM or an arbitrary
 * value (for now) on others, from where we can start layout kernel
 * virtual space that goes below PKMAP and FIXMAP
 */
#ifdef CONFIG_HIGHMEM
#define KVIRT_TOP	PKMAP_BASE
#else
#define KVIRT_TOP	(0xfe000000UL)	/* for now, could be FIXMAP_BASE ? */
#endif

/*
 * ioremap_bot starts at that address. Early ioremaps move down from there,
 * until mem_init() at which point this becomes the top of the vmalloc
 * and ioremap space
 */
#ifdef CONFIG_NOT_COHERENT_CACHE
#define IOREMAP_TOP	((KVIRT_TOP - CONFIG_CONSISTENT_SIZE) & PAGE_MASK)
#else
#define IOREMAP_TOP	KVIRT_TOP
#endif

/*
 * Just any arbitrary offset to the start of the vmalloc VM area: the
 * current 16MB value just means that there will be a 64MB "hole" after the
 * physical memory until the kernel virtual memory starts.  That means that
 * any out-of-bounds memory accesses will hopefully be caught.
 * The vmalloc() routines leaves a hole of 4kB between each vmalloced
 * area for the same reason. ;)
 *
 * We no longer map larger than phys RAM with the BATs so we don't have
 * to worry about the VMALLOC_OFFSET causing problems.  We do have to worry
 * about clashes between our early calls to ioremap() that start growing down
 * from ioremap_base being run into the VM area allocations (growing upwards
 * from VMALLOC_START).  For this reason we have ioremap_bot to check when
 * we actually run into our mappings setup in the early boot with the VM
 * system.  This really does become a problem for machines with good amounts
 * of RAM.  -- Cort
 */
#define VMALLOC_OFFSET (0x1000000) /* 16M */
#ifdef PPC_PIN_SIZE
#define VMALLOC_START (((_ALIGN((long)high_memory, PPC_PIN_SIZE) + VMALLOC_OFFSET) & ~(VMALLOC_OFFSET-1)))
#else
#define VMALLOC_START ((((long)high_memory + VMALLOC_OFFSET) & ~(VMALLOC_OFFSET-1)))
#endif
#define VMALLOC_END	ioremap_bot

#ifndef __ASSEMBLY__
#include <linux/sched.h>
#include <linux/threads.h>
#include <asm/io.h>			/* For sub-arch specific PPC_PIN_SIZE */

extern unsigned long ioremap_bot;

/*
 * entries per page directory level: our page-table tree is two-level, so
 * we don't really have any PMD directory.
 */
#define PTE_TABLE_SIZE	(sizeof(pte_t) << PTE_SHIFT)
#define PGD_TABLE_SIZE	(sizeof(pgd_t) << (32 - PGDIR_SHIFT))

#define pte_ERROR(e) \
	pr_err("%s:%d: bad pte %llx.\n", __FILE__, __LINE__, \
		(unsigned long long)pte_val(e))
#define pgd_ERROR(e) \
	pr_err("%s:%d: bad pgd %08lx.\n", __FILE__, __LINE__, pgd_val(e))
/*
 * Bits in a linux-style PTE.  These match the bits in the
 * (hardware-defined) PowerPC PTE as closely as possible.
 */

#define pte_clear(mm, addr, ptep) \
	do { pte_update(ptep, ~_PAGE_HASHPTE, 0); } while (0)

#define pmd_none(pmd)		(!pmd_val(pmd))
#define pmd_bad(pmd)		(pmd_val(pmd) & _PMD_BAD)
#define pmd_present(pmd)	(pmd_val(pmd) & _PMD_PRESENT_MASK)
static inline void pmd_clear(pmd_t *pmdp)
{
	*pmdp = __pmd(0);
}


/*
 * When flushing the tlb entry for a page, we also need to flush the hash
 * table entry.  flush_hash_pages is assembler (for speed) in hashtable.S.
 */
extern int flush_hash_pages(unsigned context, unsigned long va,
			    unsigned long pmdval, int count);

/* Add an HPTE to the hash table */
extern void add_hash_page(unsigned context, unsigned long va,
			  unsigned long pmdval);

/* Flush an entry from the TLB/hash table */
extern void flush_hash_entry(struct mm_struct *mm, pte_t *ptep,
			     unsigned long address);

/*
 * PTE updates. This function is called whenever an existing
 * valid PTE is updated. This does -not- include set_pte_at()
 * which nowadays only sets a new PTE.
 *
 * Depending on the type of MMU, we may need to use atomic updates
 * and the PTE may be either 32 or 64 bit wide. In the later case,
 * when using atomic updates, only the low part of the PTE is
 * accessed atomically.
 *
 * In addition, on 44x, we also maintain a global flag indicating
 * that an executable user mapping was modified, which is needed
 * to properly flush the virtually tagged instruction cache of
 * those implementations.
 */
#ifndef CONFIG_PTE_64BIT
static inline unsigned long pte_update(pte_t *p,
				       unsigned long clr,
				       unsigned long set)
{
	unsigned long old, tmp;

	__asm__ __volatile__("\
1:	lwarx	%0,0,%3\n\
	andc	%1,%0,%4\n\
	or	%1,%1,%5\n"
	PPC405_ERR77(0,%3)
"	stwcx.	%1,0,%3\n\
	bne-	1b"
	: "=&r" (old), "=&r" (tmp), "=m" (*p)
	: "r" (p), "r" (clr), "r" (set), "m" (*p)
	: "cc" );

	return old;
}
#else /* CONFIG_PTE_64BIT */
static inline unsigned long long pte_update(pte_t *p,
					    unsigned long clr,
					    unsigned long set)
{
	unsigned long long old;
	unsigned long tmp;

	__asm__ __volatile__("\
1:	lwarx	%L0,0,%4\n\
	lwzx	%0,0,%3\n\
	andc	%1,%L0,%5\n\
	or	%1,%1,%6\n"
	PPC405_ERR77(0,%3)
"	stwcx.	%1,0,%4\n\
	bne-	1b"
	: "=&r" (old), "=&r" (tmp), "=m" (*p)
	: "r" (p), "r" ((unsigned long)(p) + 4), "r" (clr), "r" (set), "m" (*p)
	: "cc" );

	return old;
}
#endif /* CONFIG_PTE_64BIT */

/*
 * 2.6 calls this without flushing the TLB entry; this is wrong
 * for our hash-based implementation, we fix that up here.
 */
#define __HAVE_ARCH_PTEP_TEST_AND_CLEAR_YOUNG
static inline int __ptep_test_and_clear_young(unsigned int context, unsigned long addr, pte_t *ptep)
{
	unsigned long old;
	old = pte_update(ptep, _PAGE_ACCESSED, 0);
	if (old & _PAGE_HASHPTE) {
		unsigned long ptephys = __pa(ptep) & PAGE_MASK;
		flush_hash_pages(context, addr, ptephys, 1);
	}
	return (old & _PAGE_ACCESSED) != 0;
}
#define ptep_test_and_clear_young(__vma, __addr, __ptep) \
	__ptep_test_and_clear_young((__vma)->vm_mm->context.id, __addr, __ptep)

#define __HAVE_ARCH_PTEP_GET_AND_CLEAR
static inline pte_t ptep_get_and_clear(struct mm_struct *mm, unsigned long addr,
				       pte_t *ptep)
{
	return __pte(pte_update(ptep, ~_PAGE_HASHPTE, 0));
}

#define __HAVE_ARCH_PTEP_SET_WRPROTECT
static inline void ptep_set_wrprotect(struct mm_struct *mm, unsigned long addr,
				      pte_t *ptep)
{
	pte_update(ptep, (_PAGE_RW | _PAGE_HWWRITE), _PAGE_RO);
}
static inline void huge_ptep_set_wrprotect(struct mm_struct *mm,
					   unsigned long addr, pte_t *ptep)
{
	ptep_set_wrprotect(mm, addr, ptep);
}


static inline void __ptep_set_access_flags(pte_t *ptep, pte_t entry)
{
	unsigned long set = pte_val(entry) &
		(_PAGE_DIRTY | _PAGE_ACCESSED | _PAGE_RW | _PAGE_EXEC);
	unsigned long clr = ~pte_val(entry) & _PAGE_RO;

	pte_update(ptep, clr, set);
}

#define __HAVE_ARCH_PTE_SAME
#define pte_same(A,B)	(((pte_val(A) ^ pte_val(B)) & ~_PAGE_HASHPTE) == 0)

/*
 * Note that on Book E processors, the pmd contains the kernel virtual
 * (lowmem) address of the pte page.  The physical address is less useful
 * because everything runs with translation enabled (even the TLB miss
 * handler).  On everything else the pmd contains the physical address
 * of the pte page.  -- paulus
 */
#ifndef CONFIG_BOOKE
#define pmd_page_vaddr(pmd)	\
	((unsigned long) __va(pmd_val(pmd) & PAGE_MASK))
#define pmd_page(pmd)		\
	pfn_to_page(pmd_val(pmd) >> PAGE_SHIFT)
#else
#define pmd_page_vaddr(pmd)	\
	((unsigned long) (pmd_val(pmd) & PAGE_MASK))
#define pmd_page(pmd)		\
	pfn_to_page((__pa(pmd_val(pmd)) >> PAGE_SHIFT))
#endif

/* to find an entry in a kernel page-table-directory */
#define pgd_offset_k(address) pgd_offset(&init_mm, address)

/* to find an entry in a page-table-directory */
#define pgd_index(address)	 ((address) >> PGDIR_SHIFT)
#define pgd_offset(mm, address)	 ((mm)->pgd + pgd_index(address))

/* Find an entry in the third-level page table.. */
#define pte_index(address)		\
	(((address) >> PAGE_SHIFT) & (PTRS_PER_PTE - 1))
#define pte_offset_kernel(dir, addr)	\
	((pte_t *) pmd_page_vaddr(*(dir)) + pte_index(addr))
#define pte_offset_map(dir, addr)		\
	((pte_t *) kmap_atomic(pmd_page(*(dir))) + pte_index(addr))
#define pte_unmap(pte)		kunmap_atomic(pte)

/*
 * Encode and decode a swap entry.
 * Note that the bits we use in a PTE for representing a swap entry
 * must not include the _PAGE_PRESENT bit or the _PAGE_HASHPTE bit (if used).
 *   -- paulus
 */
#define __swp_type(entry)		((entry).val & 0x1f)
#define __swp_offset(entry)		((entry).val >> 5)
#define __swp_entry(type, offset)	((swp_entry_t) { (type) | ((offset) << 5) })
#define __pte_to_swp_entry(pte)		((swp_entry_t) { pte_val(pte) >> 3 })
#define __swp_entry_to_pte(x)		((pte_t) { (x).val << 3 })

#ifndef CONFIG_PPC_4K_PAGES
void pgtable_cache_init(void);
#else
/*
 * No page table caches to initialise
 */
#define pgtable_cache_init()	do { } while (0)
#endif

extern int get_pteptr(struct mm_struct *mm, unsigned long addr, pte_t **ptep,
		      pmd_t **pmdp);

/* Generic accessors to PTE bits */
static inline int pte_write(pte_t pte)		{ return !!(pte_val(pte) & _PAGE_RW);}
static inline int pte_dirty(pte_t pte)		{ return !!(pte_val(pte) & _PAGE_DIRTY); }
static inline int pte_young(pte_t pte)		{ return !!(pte_val(pte) & _PAGE_ACCESSED); }
static inline int pte_special(pte_t pte)	{ return !!(pte_val(pte) & _PAGE_SPECIAL); }
static inline int pte_none(pte_t pte)		{ return (pte_val(pte) & ~_PTE_NONE_MASK) == 0; }
static inline pgprot_t pte_pgprot(pte_t pte)	{ return __pgprot(pte_val(pte) & PAGE_PROT_BITS); }

static inline int pte_present(pte_t pte)
{
	return pte_val(pte) & _PAGE_PRESENT;
}

/* Conversion functions: convert a page and protection to a page entry,
 * and a page entry and page directory to the page they refer to.
 *
 * Even if PTEs can be unsigned long long, a PFN is always an unsigned
 * long for now.
 */
static inline pte_t pfn_pte(unsigned long pfn, pgprot_t pgprot)
{
	return __pte(((pte_basic_t)(pfn) << PTE_RPN_SHIFT) |
		     pgprot_val(pgprot));
}

static inline unsigned long pte_pfn(pte_t pte)
{
	return pte_val(pte) >> PTE_RPN_SHIFT;
}

/* Generic modifiers for PTE bits */
static inline pte_t pte_wrprotect(pte_t pte)
{
	return __pte(pte_val(pte) & ~_PAGE_RW);
}

static inline pte_t pte_mkclean(pte_t pte)
{
	return __pte(pte_val(pte) & ~_PAGE_DIRTY);
}

static inline pte_t pte_mkold(pte_t pte)
{
	return __pte(pte_val(pte) & ~_PAGE_ACCESSED);
}

static inline pte_t pte_mkwrite(pte_t pte)
{
	return __pte(pte_val(pte) | _PAGE_RW);
}

static inline pte_t pte_mkdirty(pte_t pte)
{
	return __pte(pte_val(pte) | _PAGE_DIRTY);
}

static inline pte_t pte_mkyoung(pte_t pte)
{
	return __pte(pte_val(pte) | _PAGE_ACCESSED);
}

static inline pte_t pte_mkspecial(pte_t pte)
{
	return __pte(pte_val(pte) | _PAGE_SPECIAL);
}

static inline pte_t pte_mkhuge(pte_t pte)
{
	return pte;
}

static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
{
	return __pte((pte_val(pte) & _PAGE_CHG_MASK) | pgprot_val(newprot));
}



/* This low level function performs the actual PTE insertion
 * Setting the PTE depends on the MMU type and other factors. It's
 * an horrible mess that I'm not going to try to clean up now but
 * I'm keeping it in one place rather than spread around
 */
static inline void __set_pte_at(struct mm_struct *mm, unsigned long addr,
				pte_t *ptep, pte_t pte, int percpu)
{
#if defined(CONFIG_PPC_STD_MMU_32) && defined(CONFIG_SMP) && !defined(CONFIG_PTE_64BIT)
	/* First case is 32-bit Hash MMU in SMP mode with 32-bit PTEs. We use the
	 * helper pte_update() which does an atomic update. We need to do that
	 * because a concurrent invalidation can clear _PAGE_HASHPTE. If it's a
	 * per-CPU PTE such as a kmap_atomic, we do a simple update preserving
	 * the hash bits instead (ie, same as the non-SMP case)
	 */
	if (percpu)
		*ptep = __pte((pte_val(*ptep) & _PAGE_HASHPTE)
			      | (pte_val(pte) & ~_PAGE_HASHPTE));
	else
		pte_update(ptep, ~_PAGE_HASHPTE, pte_val(pte));

#elif defined(CONFIG_PPC32) && defined(CONFIG_PTE_64BIT)
	/* Second case is 32-bit with 64-bit PTE.  In this case, we
	 * can just store as long as we do the two halves in the right order
	 * with a barrier in between. This is possible because we take care,
	 * in the hash code, to pre-invalidate if the PTE was already hashed,
	 * which synchronizes us with any concurrent invalidation.
	 * In the percpu case, we also fallback to the simple update preserving
	 * the hash bits
	 */
	if (percpu) {
		*ptep = __pte((pte_val(*ptep) & _PAGE_HASHPTE)
			      | (pte_val(pte) & ~_PAGE_HASHPTE));
		return;
	}
	if (pte_val(*ptep) & _PAGE_HASHPTE)
		flush_hash_entry(mm, ptep, addr);
	__asm__ __volatile__("\
		stw%U0%X0 %2,%0\n\
		eieio\n\
		stw%U0%X0 %L2,%1"
	: "=m" (*ptep), "=m" (*((unsigned char *)ptep+4))
	: "r" (pte) : "memory");

#elif defined(CONFIG_PPC_STD_MMU_32)
	/* Third case is 32-bit hash table in UP mode, we need to preserve
	 * the _PAGE_HASHPTE bit since we may not have invalidated the previous
	 * translation in the hash yet (done in a subsequent flush_tlb_xxx())
	 * and see we need to keep track that this PTE needs invalidating
	 */
	*ptep = __pte((pte_val(*ptep) & _PAGE_HASHPTE)
		      | (pte_val(pte) & ~_PAGE_HASHPTE));

#else
#error "Not supported "
#endif
}

/*
 * Macro to mark a page protection value as "uncacheable".
 */

#define _PAGE_CACHE_CTL	(_PAGE_COHERENT | _PAGE_GUARDED | _PAGE_NO_CACHE | \
			 _PAGE_WRITETHRU)

#define pgprot_noncached pgprot_noncached
static inline pgprot_t pgprot_noncached(pgprot_t prot)
{
	return __pgprot((pgprot_val(prot) & ~_PAGE_CACHE_CTL) |
			_PAGE_NO_CACHE | _PAGE_GUARDED);
}

#define pgprot_noncached_wc pgprot_noncached_wc
static inline pgprot_t pgprot_noncached_wc(pgprot_t prot)
{
	return __pgprot((pgprot_val(prot) & ~_PAGE_CACHE_CTL) |
			_PAGE_NO_CACHE);
}

#define pgprot_cached pgprot_cached
static inline pgprot_t pgprot_cached(pgprot_t prot)
{
	return __pgprot((pgprot_val(prot) & ~_PAGE_CACHE_CTL) |
			_PAGE_COHERENT);
}

#define pgprot_cached_wthru pgprot_cached_wthru
static inline pgprot_t pgprot_cached_wthru(pgprot_t prot)
{
	return __pgprot((pgprot_val(prot) & ~_PAGE_CACHE_CTL) |
			_PAGE_COHERENT | _PAGE_WRITETHRU);
}

#define pgprot_cached_noncoherent pgprot_cached_noncoherent
static inline pgprot_t pgprot_cached_noncoherent(pgprot_t prot)
{
	return __pgprot(pgprot_val(prot) & ~_PAGE_CACHE_CTL);
}

#define pgprot_writecombine pgprot_writecombine
static inline pgprot_t pgprot_writecombine(pgprot_t prot)
{
	return pgprot_noncached_wc(prot);
}

#endif /* !__ASSEMBLY__ */

#endif /* _ASM_POWERPC_BOOK3S_32_PGTABLE_H */
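A worked example of the swap-entry packing defined near the end of the header above (values illustrative; uses the macros from that file): the swap type occupies the low 5 bits, the offset sits above it, and the PTE form is shifted left by 3 so the low status bits stay clear:

    swp_entry_t e = __swp_entry(3, 0x1234); /* e.val = 3 | (0x1234 << 5) */
    pte_t p = __swp_entry_to_pte(e);        /* pte_val(p) = e.val << 3 */

    /* The round trip recovers both fields:
     *   __swp_type(__pte_to_swp_entry(p))   == 3
     *   __swp_offset(__pte_to_swp_entry(p)) == 0x1234
     * and the << 3 keeps _PAGE_PRESENT/_PAGE_HASHPTE clear in p.
     */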
@@ -0,0 +1,132 @@
#ifndef _ASM_POWERPC_BOOK3S_64_HASH_4K_H
#define _ASM_POWERPC_BOOK3S_64_HASH_4K_H
/*
 * Entries per page directory level. The PTE level must use a 64b record
 * for each page table entry. The PMD and PGD level use a 32b record for
 * each entry by assuming that each entry is page aligned.
 */
#define PTE_INDEX_SIZE  9
#define PMD_INDEX_SIZE  7
#define PUD_INDEX_SIZE  9
#define PGD_INDEX_SIZE  9

#ifndef __ASSEMBLY__
#define PTE_TABLE_SIZE	(sizeof(pte_t) << PTE_INDEX_SIZE)
#define PMD_TABLE_SIZE	(sizeof(pmd_t) << PMD_INDEX_SIZE)
#define PUD_TABLE_SIZE	(sizeof(pud_t) << PUD_INDEX_SIZE)
#define PGD_TABLE_SIZE	(sizeof(pgd_t) << PGD_INDEX_SIZE)
#endif	/* __ASSEMBLY__ */

#define PTRS_PER_PTE	(1 << PTE_INDEX_SIZE)
#define PTRS_PER_PMD	(1 << PMD_INDEX_SIZE)
#define PTRS_PER_PUD	(1 << PUD_INDEX_SIZE)
#define PTRS_PER_PGD	(1 << PGD_INDEX_SIZE)

/* PMD_SHIFT determines what a second-level page table entry can map */
#define PMD_SHIFT	(PAGE_SHIFT + PTE_INDEX_SIZE)
#define PMD_SIZE	(1UL << PMD_SHIFT)
#define PMD_MASK	(~(PMD_SIZE-1))

/* With 4k base page size, hugepage PTEs go at the PMD level */
#define MIN_HUGEPTE_SHIFT	PMD_SHIFT

/* PUD_SHIFT determines what a third-level page table entry can map */
#define PUD_SHIFT	(PMD_SHIFT + PMD_INDEX_SIZE)
#define PUD_SIZE	(1UL << PUD_SHIFT)
#define PUD_MASK	(~(PUD_SIZE-1))

/* PGDIR_SHIFT determines what a fourth-level page table entry can map */
#define PGDIR_SHIFT	(PUD_SHIFT + PUD_INDEX_SIZE)
#define PGDIR_SIZE	(1UL << PGDIR_SHIFT)
#define PGDIR_MASK	(~(PGDIR_SIZE-1))

/* Bits to mask out from a PMD to get to the PTE page */
#define PMD_MASKED_BITS		0
/* Bits to mask out from a PUD to get to the PMD page */
#define PUD_MASKED_BITS		0
/* Bits to mask out from a PGD to get to the PUD page */
#define PGD_MASKED_BITS		0

/* PTE flags to conserve for HPTE identification */
#define _PAGE_HPTEFLAGS (_PAGE_BUSY | _PAGE_HASHPTE | \
			 _PAGE_F_SECOND | _PAGE_F_GIX)

/* shift to put page number into pte */
#define PTE_RPN_SHIFT	(18)

#define _PAGE_4K_PFN		0
#ifndef __ASSEMBLY__
/*
 * 4-level page tables related bits
 */

#define pgd_none(pgd)		(!pgd_val(pgd))
#define pgd_bad(pgd)		(pgd_val(pgd) == 0)
#define pgd_present(pgd)	(pgd_val(pgd) != 0)
#define pgd_page_vaddr(pgd)	(pgd_val(pgd) & ~PGD_MASKED_BITS)

static inline void pgd_clear(pgd_t *pgdp)
{
	*pgdp = __pgd(0);
}

static inline pte_t pgd_pte(pgd_t pgd)
{
	return __pte(pgd_val(pgd));
}

static inline pgd_t pte_pgd(pte_t pte)
{
	return __pgd(pte_val(pte));
}
extern struct page *pgd_page(pgd_t pgd);

#define pud_offset(pgdp, addr)	\
	(((pud_t *) pgd_page_vaddr(*(pgdp))) + \
	 (((addr) >> PUD_SHIFT) & (PTRS_PER_PUD - 1)))

#define pud_ERROR(e) \
	pr_err("%s:%d: bad pud %08lx.\n", __FILE__, __LINE__, pud_val(e))

/*
 * On all 4K setups, remap_4k_pfn() equates to remap_pfn_range() */
#define remap_4k_pfn(vma, addr, pfn, prot)	\
	remap_pfn_range((vma), (addr), (pfn), PAGE_SIZE, (prot))

#ifdef CONFIG_HUGETLB_PAGE
/*
 * For 4k page size, we support explicit hugepage via hugepd
 */
static inline int pmd_huge(pmd_t pmd)
{
	return 0;
}

static inline int pud_huge(pud_t pud)
{
	return 0;
}

static inline int pgd_huge(pgd_t pgd)
{
	return 0;
}
#define pgd_huge pgd_huge

static inline int hugepd_ok(hugepd_t hpd)
{
	/*
	 * if it is not a pte and have hugepd shift mask
	 * set, then it is a hugepd directory pointer
	 */
	if (!(hpd.pd & _PAGE_PTE) &&
	    ((hpd.pd & HUGEPD_SHIFT_MASK) != 0))
		return true;
	return false;
}
#define is_hugepd(hpd)		(hugepd_ok(hpd))
#endif

#endif /* !__ASSEMBLY__ */

#endif /* _ASM_POWERPC_BOOK3S_64_HASH_4K_H */
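Worked out from PAGE_SHIFT = 12, the index sizes in the hash-4k.h header above give the following geometry (each line adds one level's index bits):

    PMD_SHIFT   = 12 + 9 = 21   ->  a PMD entry maps 2 MB
    PUD_SHIFT   = 21 + 7 = 28   ->  a PUD entry maps 256 MB
    PGDIR_SHIFT = 28 + 9 = 37   ->  a PGD entry maps 128 GB
    37 + 9 PGD index bits = 46-bit virtual address space (64 TB)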
@ -0,0 +1,312 @@
|
|||
#ifndef _ASM_POWERPC_BOOK3S_64_HASH_64K_H
|
||||
#define _ASM_POWERPC_BOOK3S_64_HASH_64K_H
|
||||
|
||||
#include <asm-generic/pgtable-nopud.h>
|
||||
|
||||
#define PTE_INDEX_SIZE 8
|
||||
#define PMD_INDEX_SIZE 10
|
||||
#define PUD_INDEX_SIZE 0
|
||||
#define PGD_INDEX_SIZE 12
|
||||
|
||||
#define PTRS_PER_PTE (1 << PTE_INDEX_SIZE)
|
||||
#define PTRS_PER_PMD (1 << PMD_INDEX_SIZE)
|
||||
#define PTRS_PER_PGD (1 << PGD_INDEX_SIZE)
|
||||
|
||||
/* With 4k base page size, hugepage PTEs go at the PMD level */
|
||||
#define MIN_HUGEPTE_SHIFT PAGE_SHIFT
|
||||
|
||||
/* PMD_SHIFT determines what a second-level page table entry can map */
|
||||
#define PMD_SHIFT (PAGE_SHIFT + PTE_INDEX_SIZE)
|
||||
#define PMD_SIZE (1UL << PMD_SHIFT)
|
||||
#define PMD_MASK (~(PMD_SIZE-1))
|
||||
|
||||
/* PGDIR_SHIFT determines what a third-level page table entry can map */
|
||||
#define PGDIR_SHIFT (PMD_SHIFT + PMD_INDEX_SIZE)
|
||||
#define PGDIR_SIZE (1UL << PGDIR_SHIFT)
|
||||
#define PGDIR_MASK (~(PGDIR_SIZE-1))
|
||||
|
||||
#define _PAGE_COMBO 0x00040000 /* this is a combo 4k page */
|
||||
#define _PAGE_4K_PFN 0x00080000 /* PFN is for a single 4k page */
|
||||
/*
|
||||
* Used to track subpage group valid if _PAGE_COMBO is set
|
||||
* This overloads _PAGE_F_GIX and _PAGE_F_SECOND
|
||||
*/
|
||||
#define _PAGE_COMBO_VALID (_PAGE_F_GIX | _PAGE_F_SECOND)
|
||||
|
||||
/* PTE flags to conserve for HPTE identification */
|
||||
#define _PAGE_HPTEFLAGS (_PAGE_BUSY | _PAGE_F_SECOND | \
|
||||
_PAGE_F_GIX | _PAGE_HASHPTE | _PAGE_COMBO)
|
||||
|
||||
/* Shift to put page number into pte.
|
||||
*
|
||||
* That gives us a max RPN of 34 bits, which means a max of 50 bits
|
||||
* of addressable physical space, or 46 bits for the special 4k PFNs.
|
||||
*/
|
||||
#define PTE_RPN_SHIFT (30)
|
||||
/*
|
||||
* we support 16 fragments per PTE page of 64K size.
|
||||
*/
|
||||
#define PTE_FRAG_NR 16
|
||||
/*
|
||||
* We use a 2K PTE page fragment and another 2K for storing
|
||||
* real_pte_t hash index
|
||||
*/
|
||||
#define PTE_FRAG_SIZE_SHIFT 12
|
||||
#define PTE_FRAG_SIZE (1UL << PTE_FRAG_SIZE_SHIFT)
|
||||
|
||||
/*
|
||||
* Bits to mask out from a PMD to get to the PTE page
|
||||
* PMDs point to PTE table fragments which are PTE_FRAG_SIZE aligned.
|
||||
*/
|
||||
#define PMD_MASKED_BITS (PTE_FRAG_SIZE - 1)
|
||||
/* Bits to mask out from a PGD/PUD to get to the PMD page */
|
||||
#define PUD_MASKED_BITS 0x1ff
|
||||
|
||||
#ifndef __ASSEMBLY__
|
||||
|
||||
/*
|
||||
* With 64K pages on hash table, we have a special PTE format that
|
||||
* uses a second "half" of the page table to encode sub-page information
|
||||
* in order to deal with 64K made of 4K HW pages. Thus we override the
|
||||
* generic accessors and iterators here
|
||||
*/
|
||||
#define __real_pte __real_pte
|
||||
static inline real_pte_t __real_pte(pte_t pte, pte_t *ptep)
|
||||
{
|
||||
real_pte_t rpte;
|
||||
unsigned long *hidxp;
|
||||
|
||||
rpte.pte = pte;
|
||||
rpte.hidx = 0;
|
||||
if (pte_val(pte) & _PAGE_COMBO) {
|
||||
/*
|
||||
* Make sure we order the hidx load against the _PAGE_COMBO
|
||||
* check. The store side ordering is done in __hash_page_4K
|
||||
*/
|
||||
smp_rmb();
|
||||
hidxp = (unsigned long *)(ptep + PTRS_PER_PTE);
|
||||
rpte.hidx = *hidxp;
|
||||
}
|
||||
return rpte;
|
||||
}
|
||||
|
||||
static inline unsigned long __rpte_to_hidx(real_pte_t rpte, unsigned long index)
|
||||
{
|
||||
if ((pte_val(rpte.pte) & _PAGE_COMBO))
|
||||
return (rpte.hidx >> (index<<2)) & 0xf;
|
||||
return (pte_val(rpte.pte) >> _PAGE_F_GIX_SHIFT) & 0xf;
|
||||
}
|
||||
|
||||
#define __rpte_to_pte(r) ((r).pte)
|
||||
extern bool __rpte_sub_valid(real_pte_t rpte, unsigned long index);
|
||||
/*
|
||||
* Trick: we set __end to va + 64k, which happens works for
|
||||
* a 16M page as well as we want only one iteration
|
||||
*/
|
||||
#define pte_iterate_hashed_subpages(rpte, psize, vpn, index, shift) \
|
||||
do { \
|
||||
unsigned long __end = vpn + (1UL << (PAGE_SHIFT - VPN_SHIFT)); \
|
||||
unsigned __split = (psize == MMU_PAGE_4K || \
|
||||
psize == MMU_PAGE_64K_AP); \
|
||||
shift = mmu_psize_defs[psize].shift; \
|
||||
for (index = 0; vpn < __end; index++, \
|
||||
vpn += (1L << (shift - VPN_SHIFT))) { \
|
||||
if (!__split || __rpte_sub_valid(rpte, index)) \
|
||||
do {
|
||||
|
||||
#define pte_iterate_hashed_end() } while(0); } } while(0)

#define pte_pagesize_index(mm, addr, pte)	\
	(((pte) & _PAGE_COMBO) ? MMU_PAGE_4K : MMU_PAGE_64K)

#define remap_4k_pfn(vma, addr, pfn, prot)				\
	(WARN_ON(((pfn) >= (1UL << (64 - PTE_RPN_SHIFT)))) ? -EINVAL :	\
		remap_pfn_range((vma), (addr), (pfn), PAGE_SIZE,	\
			__pgprot(pgprot_val((prot)) | _PAGE_4K_PFN)))

#define PTE_TABLE_SIZE	PTE_FRAG_SIZE
#ifdef CONFIG_TRANSPARENT_HUGEPAGE
#define PMD_TABLE_SIZE	((sizeof(pmd_t) << PMD_INDEX_SIZE) + (sizeof(unsigned long) << PMD_INDEX_SIZE))
#else
#define PMD_TABLE_SIZE	(sizeof(pmd_t) << PMD_INDEX_SIZE)
#endif
#define PGD_TABLE_SIZE	(sizeof(pgd_t) << PGD_INDEX_SIZE)

#define pgd_pte(pgd)	(pud_pte(((pud_t){ pgd })))
#define pte_pgd(pte)	((pgd_t)pte_pud(pte))

#ifdef CONFIG_HUGETLB_PAGE
/*
 * We have PGD_INDEX_SIZE = 12 and PTE_INDEX_SIZE = 8, so that we can have
 * a 16GB hugepage pte in the PGD and a 16MB hugepage pte at the PMD.
 *
 * Defined in such a way that we can optimize away code blocks at build time
 * if CONFIG_HUGETLB_PAGE=n.
 */
static inline int pmd_huge(pmd_t pmd)
{
	/*
	 * leaf pte for huge page
	 */
	return !!(pmd_val(pmd) & _PAGE_PTE);
}

static inline int pud_huge(pud_t pud)
{
	/*
	 * leaf pte for huge page
	 */
	return !!(pud_val(pud) & _PAGE_PTE);
}

static inline int pgd_huge(pgd_t pgd)
{
	/*
	 * leaf pte for huge page
	 */
	return !!(pgd_val(pgd) & _PAGE_PTE);
}
#define pgd_huge pgd_huge

#ifdef CONFIG_DEBUG_VM
extern int hugepd_ok(hugepd_t hpd);
#define is_hugepd(hpd)	(hugepd_ok(hpd))
#else
/*
 * With 64k page size, we have hugepage ptes in the pgd and pmd entries. We
 * don't need to set up a hugepage directory for them; our pte and page
 * directory format allows this.
 */
static inline int hugepd_ok(hugepd_t hpd)
{
	return 0;
}
#define is_hugepd(pdep)	0
#endif /* CONFIG_DEBUG_VM */

#endif /* CONFIG_HUGETLB_PAGE */

#ifdef CONFIG_TRANSPARENT_HUGEPAGE
extern unsigned long pmd_hugepage_update(struct mm_struct *mm,
					 unsigned long addr,
					 pmd_t *pmdp,
					 unsigned long clr,
					 unsigned long set);
static inline char *get_hpte_slot_array(pmd_t *pmdp)
{
	/*
	 * The hpte hindex is stored in the pgtable whose address is in the
	 * second half of the PMD.
	 *
	 * Order this load with the test for pmd_trans_huge in the caller.
	 */
	smp_rmb();
	return *(char **)(pmdp + PTRS_PER_PMD);
}
/*
 * The linux hugepage PMD now includes the pmd entries followed by the address
 * of the stashed pgtable_t. The stashed pgtable_t contains the hpte bits:
 * [ 1 bit secondary | 3 bit hidx | 1 bit valid | 000 ]. We use one byte per
 * HPTE entry. With a 16MB hugepage and 64K HPTEs we need 256 entries, and
 * with 4K HPTEs we need 4096 entries. Both fit in a 4K pgtable_t.
 *
 * The last three bits are intentionally left as zero. This memory location
 * is also used as a normal page PTE pointer. So if we have any pointers
 * left around while we collapse a hugepage, we need to make sure the
 * _PAGE_PRESENT bit of those is zero when we look at them.
 */
static inline unsigned int hpte_valid(unsigned char *hpte_slot_array, int index)
{
	return (hpte_slot_array[index] >> 3) & 0x1;
}

static inline unsigned int hpte_hash_index(unsigned char *hpte_slot_array,
					   int index)
{
	return hpte_slot_array[index] >> 4;
}

static inline void mark_hpte_slot_valid(unsigned char *hpte_slot_array,
					unsigned int index, unsigned int hidx)
{
	hpte_slot_array[index] = hidx << 4 | 0x1 << 3;
}
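
The slot-byte layout described in the comment above can be sanity-checked in isolation. The following is a minimal, standalone C sketch (userspace, illustrative index values only, not kernel code) that mirrors the hpte_valid()/hpte_hash_index()/mark_hpte_slot_valid() accessors:

#include <assert.h>
#include <stdio.h>

/* One byte per HPTE slot: [ 1 bit secondary | 3 bit hidx | 1 bit valid | 000 ] */
#define SLOT_VALID	(1u << 3)

static unsigned char slot_encode(unsigned int hidx)
{
	/* hash index occupies bits 4-7; matches mark_hpte_slot_valid() */
	return (unsigned char)(hidx << 4 | SLOT_VALID);
}

int main(void)
{
	unsigned char slot_array[256] = { 0 };	/* 16MB hugepage, 64K HPTEs */

	slot_array[42] = slot_encode(0x9);	/* hypothetical slot */

	assert(((slot_array[42] >> 3) & 0x1) == 1);	/* hpte_valid() */
	assert((slot_array[42] >> 4) == 0x9);		/* hpte_hash_index() */
	assert(((slot_array[7] >> 3) & 0x1) == 0);	/* untouched slot: invalid */

	printf("slot 42: valid=%u hidx=0x%x\n",
	       (slot_array[42] >> 3) & 0x1, slot_array[42] >> 4);
	return 0;
}

Because the low three bits stay zero, a byte of 0 also reads as "invalid", which is what makes the untouched-slot case safe.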

/*
 * For core kernel code, by design pmd_trans_huge is never run on any hugetlbfs
 * page. The hugetlbfs page table walking and mangling paths are totally
 * separated from the core VM paths and they're differentiated by
 * VM_HUGETLB being set on vm_flags well before any pmd_trans_huge could run.
 *
 * pmd_trans_huge() is defined as false at build time if
 * CONFIG_TRANSPARENT_HUGEPAGE=n, to optimize away code blocks at build
 * time in that case.
 *
 * For ppc64 we need to differentiate explicit hugepages from THP, because
 * for THP we also track the subpage details at the pmd level. We don't do
 * that for explicit huge pages.
 */
static inline int pmd_trans_huge(pmd_t pmd)
{
	return !!((pmd_val(pmd) & (_PAGE_PTE | _PAGE_THP_HUGE)) ==
		  (_PAGE_PTE | _PAGE_THP_HUGE));
}
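
The two-bit test above is what separates the cases the comment describes. As a hedged sketch (standalone C with illustrative bit values; the real values come from the headers in this series), a hugetlb leaf carries only _PAGE_PTE while a THP pmd carries both bits:

#include <assert.h>
#include <stdio.h>

#define _PAGE_PTE	0x00001UL
#define _PAGE_THP_HUGE	0x40000UL	/* illustrative; aliases _PAGE_4K_PFN */

static int is_explicit_huge(unsigned long pmd)	/* models pmd_huge() */
{
	return !!(pmd & _PAGE_PTE);
}

static int is_trans_huge(unsigned long pmd)	/* models pmd_trans_huge() */
{
	return (pmd & (_PAGE_PTE | _PAGE_THP_HUGE)) ==
	       (_PAGE_PTE | _PAGE_THP_HUGE);
}

int main(void)
{
	unsigned long hugetlb_pmd = _PAGE_PTE;			/* no subpage tracking */
	unsigned long thp_pmd = _PAGE_PTE | _PAGE_THP_HUGE;	/* stashed pgtable_t */

	assert(is_explicit_huge(hugetlb_pmd) && !is_trans_huge(hugetlb_pmd));
	assert(is_explicit_huge(thp_pmd) && is_trans_huge(thp_pmd));
	printf("hugetlb: thp=%d, thp pmd: thp=%d\n",
	       is_trans_huge(hugetlb_pmd), is_trans_huge(thp_pmd));
	return 0;
}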

static inline int pmd_trans_splitting(pmd_t pmd)
{
	if (pmd_trans_huge(pmd))
		return pmd_val(pmd) & _PAGE_SPLITTING;
	return 0;
}

static inline int pmd_large(pmd_t pmd)
{
	return !!(pmd_val(pmd) & _PAGE_PTE);
}

static inline pmd_t pmd_mknotpresent(pmd_t pmd)
{
	return __pmd(pmd_val(pmd) & ~_PAGE_PRESENT);
}

static inline pmd_t pmd_mksplitting(pmd_t pmd)
{
	return __pmd(pmd_val(pmd) | _PAGE_SPLITTING);
}

#define __HAVE_ARCH_PMD_SAME
static inline int pmd_same(pmd_t pmd_a, pmd_t pmd_b)
{
	return (((pmd_val(pmd_a) ^ pmd_val(pmd_b)) & ~_PAGE_HPTEFLAGS) == 0);
}

static inline int __pmdp_test_and_clear_young(struct mm_struct *mm,
					      unsigned long addr, pmd_t *pmdp)
{
	unsigned long old;

	if ((pmd_val(*pmdp) & (_PAGE_ACCESSED | _PAGE_HASHPTE)) == 0)
		return 0;
	old = pmd_hugepage_update(mm, addr, pmdp, _PAGE_ACCESSED, 0);
	return ((old & _PAGE_ACCESSED) != 0);
}

#define __HAVE_ARCH_PMDP_SET_WRPROTECT
static inline void pmdp_set_wrprotect(struct mm_struct *mm, unsigned long addr,
				      pmd_t *pmdp)
{
	if ((pmd_val(*pmdp) & _PAGE_RW) == 0)
		return;

	pmd_hugepage_update(mm, addr, pmdp, _PAGE_RW, 0);
}

#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
#endif /* __ASSEMBLY__ */

#endif /* _ASM_POWERPC_BOOK3S_64_HASH_64K_H */

@@ -0,0 +1,551 @@
#ifndef _ASM_POWERPC_BOOK3S_64_HASH_H
#define _ASM_POWERPC_BOOK3S_64_HASH_H
#ifdef __KERNEL__

/*
 * Common bits between 4K and 64K pages in a linux-style PTE.
 * These match the bits in the (hardware-defined) PowerPC PTE as closely
 * as possible. Additional bits may be defined in pgtable-hash64-*.h
 *
 * Note: We only support user read/write permissions. The supervisor always
 * has full read/write access to pages above PAGE_OFFSET (pages below that
 * always use the user access permissions).
 *
 * We could create separate kernel read-only permissions if we used the 3 PP
 * bit combinations that newer processors provide, but we currently don't.
 */
#define _PAGE_PTE		0x00001
#define _PAGE_PRESENT		0x00002 /* software: pte contains a translation */
#define _PAGE_BIT_SWAP_TYPE	2
#define _PAGE_USER		0x00004 /* matches one of the PP bits */
#define _PAGE_EXEC		0x00008 /* No execute on POWER4 and newer (we invert) */
#define _PAGE_GUARDED		0x00010
/* We can derive Memory coherence from _PAGE_NO_CACHE */
#define _PAGE_COHERENT		0x0
#define _PAGE_NO_CACHE		0x00020 /* I: cache inhibit */
#define _PAGE_WRITETHRU		0x00040 /* W: cache write-through */
#define _PAGE_DIRTY		0x00080 /* C: page changed */
#define _PAGE_ACCESSED		0x00100 /* R: page referenced */
#define _PAGE_RW		0x00200 /* software: user write access allowed */
#define _PAGE_HASHPTE		0x00400 /* software: pte has an associated HPTE */
#define _PAGE_BUSY		0x00800 /* software: PTE & hash are busy */
#define _PAGE_F_GIX		0x07000 /* full page: hidx bits */
#define _PAGE_F_GIX_SHIFT	12
#define _PAGE_F_SECOND		0x08000 /* Whether to use secondary hash or not */
#define _PAGE_SPECIAL		0x10000 /* software: special page */

#ifdef CONFIG_MEM_SOFT_DIRTY
#define _PAGE_SOFT_DIRTY	0x20000 /* software: software dirty tracking */
#else
#define _PAGE_SOFT_DIRTY	0x00000
#endif

/*
 * THP pages can't be special, so we reuse _PAGE_SPECIAL.
 */
#define _PAGE_SPLITTING		_PAGE_SPECIAL

/*
 * We need to differentiate between explicit huge pages and THP huge
 * pages, since THP huge pages also need to track real subpage details.
 */
#define _PAGE_THP_HUGE		_PAGE_4K_PFN

/*
 * Set of bits not changed in pmd_modify.
 */
#define _HPAGE_CHG_MASK (PTE_RPN_MASK | _PAGE_HPTEFLAGS |		\
			 _PAGE_DIRTY | _PAGE_ACCESSED | _PAGE_SPLITTING | \
			 _PAGE_THP_HUGE | _PAGE_PTE | _PAGE_SOFT_DIRTY)

#ifdef CONFIG_PPC_64K_PAGES
#include <asm/book3s/64/hash-64k.h>
#else
#include <asm/book3s/64/hash-4k.h>
#endif

/*
 * Size of EA range mapped by our pagetables.
 */
#define PGTABLE_EADDR_SIZE	(PTE_INDEX_SIZE + PMD_INDEX_SIZE + \
				 PUD_INDEX_SIZE + PGD_INDEX_SIZE + PAGE_SHIFT)
#define PGTABLE_RANGE		(ASM_CONST(1) << PGTABLE_EADDR_SIZE)

#ifdef CONFIG_TRANSPARENT_HUGEPAGE
#define PMD_CACHE_INDEX	(PMD_INDEX_SIZE + 1)
#else
#define PMD_CACHE_INDEX	PMD_INDEX_SIZE
#endif
/*
 * Define the address range of the kernel non-linear virtual area.
 */
#define KERN_VIRT_START	ASM_CONST(0xD000000000000000)
#define KERN_VIRT_SIZE	ASM_CONST(0x0000100000000000)

/*
 * The vmalloc space starts at the beginning of that region, and
 * occupies half of it on hash CPUs and a quarter of it on Book3E
 * (we keep a quarter for the virtual memmap).
 */
#define VMALLOC_START	KERN_VIRT_START
#define VMALLOC_SIZE	(KERN_VIRT_SIZE >> 1)
#define VMALLOC_END	(VMALLOC_START + VMALLOC_SIZE)

/*
 * Region IDs
 */
#define REGION_SHIFT		60UL
#define REGION_MASK		(0xfUL << REGION_SHIFT)
#define REGION_ID(ea)		(((unsigned long)(ea)) >> REGION_SHIFT)

#define VMALLOC_REGION_ID	(REGION_ID(VMALLOC_START))
#define KERNEL_REGION_ID	(REGION_ID(PAGE_OFFSET))
#define VMEMMAP_REGION_ID	(0xfUL)	/* Server only */
#define USER_REGION_ID		(0UL)

/*
 * Defines the address of the vmemmap area, in its own region on
 * hash table CPUs.
 */
#define VMEMMAP_BASE		(VMEMMAP_REGION_ID << REGION_SHIFT)
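
The region ID is just the top nibble of an effective address. A minimal standalone check of that arithmetic (userspace C, with illustrative addresses; PAGE_OFFSET on ppc64 lives in the 0xC region):

#include <assert.h>
#include <stdio.h>

#define REGION_SHIFT	60UL
#define REGION_ID(ea)	(((unsigned long)(ea)) >> REGION_SHIFT)

int main(void)
{
	unsigned long user_ea    = 0x0000000010000000UL;
	unsigned long kernel_ea  = 0xC000000000001000UL; /* linear mapping */
	unsigned long vmalloc_ea = 0xD000000000001000UL; /* KERN_VIRT_START */
	unsigned long vmemmap_ea = 0xF000000000000000UL;

	assert(REGION_ID(user_ea) == 0x0);	/* USER_REGION_ID */
	assert(REGION_ID(kernel_ea) == 0xc);	/* KERNEL_REGION_ID */
	assert(REGION_ID(vmalloc_ea) == 0xd);	/* VMALLOC_REGION_ID */
	assert(REGION_ID(vmemmap_ea) == 0xf);	/* VMEMMAP_REGION_ID */
	printf("region of %#lx is %#lx\n", vmalloc_ea, REGION_ID(vmalloc_ea));
	return 0;
}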

#ifdef CONFIG_PPC_MM_SLICES
#define HAVE_ARCH_UNMAPPED_AREA
#define HAVE_ARCH_UNMAPPED_AREA_TOPDOWN
#endif /* CONFIG_PPC_MM_SLICES */

/* No separate kernel read-only */
#define _PAGE_KERNEL_RW		(_PAGE_RW | _PAGE_DIRTY) /* user access blocked by key */
#define _PAGE_KERNEL_RO		_PAGE_KERNEL_RW
#define _PAGE_KERNEL_RWX	(_PAGE_DIRTY | _PAGE_RW | _PAGE_EXEC)

/* Strong Access Ordering */
#define _PAGE_SAO		(_PAGE_WRITETHRU | _PAGE_NO_CACHE | _PAGE_COHERENT)

/* No page size encoding in the linux PTE */
#define _PAGE_PSIZE		0

/* PTEIDX nibble */
#define _PTEIDX_SECONDARY	0x8
#define _PTEIDX_GROUP_IX	0x7

/* Hash table based platforms need atomic updates of the linux PTE */
#define PTE_ATOMIC_UPDATES	1
#define _PTE_NONE_MASK	_PAGE_HPTEFLAGS
/*
 * The mask covered by the RPN must be a ULL on 32-bit platforms with
 * 64-bit PTEs.
 */
#define PTE_RPN_MASK	(~((1UL << PTE_RPN_SHIFT) - 1))
/*
 * _PAGE_CHG_MASK masks the bits that are to be preserved across
 * pgprot changes.
 */
#define _PAGE_CHG_MASK	(PTE_RPN_MASK | _PAGE_HPTEFLAGS | _PAGE_DIRTY | \
			 _PAGE_ACCESSED | _PAGE_SPECIAL | _PAGE_PTE |	\
			 _PAGE_SOFT_DIRTY)
/*
 * Mask of bits returned by pte_pgprot()
 */
#define PAGE_PROT_BITS	(_PAGE_GUARDED | _PAGE_COHERENT | _PAGE_NO_CACHE | \
			 _PAGE_WRITETHRU | _PAGE_4K_PFN | \
			 _PAGE_USER | _PAGE_ACCESSED | \
			 _PAGE_RW | _PAGE_DIRTY | _PAGE_EXEC | \
			 _PAGE_SOFT_DIRTY)
/*
 * We define 2 sets of base prot bits, one for basic pages (ie,
 * cacheable kernel and user pages) and one for non cacheable
 * pages. We always set _PAGE_COHERENT when SMP is enabled or
 * the processor might need it for DMA coherency.
 */
#define _PAGE_BASE_NC	(_PAGE_PRESENT | _PAGE_ACCESSED | _PAGE_PSIZE)
#define _PAGE_BASE	(_PAGE_BASE_NC | _PAGE_COHERENT)

/* Permission masks used to generate the __P and __S table.
 *
 * Note: __pgprot is defined in arch/powerpc/include/asm/page.h
 *
 * Write permissions imply read permissions for now (we could make write-only
 * pages on BookE but we don't bother for now). Execute permission control is
 * possible on platforms that define _PAGE_EXEC.
 *
 * Note: due to the way the vm flags are laid out, the bits are XWR.
 */
#define PAGE_NONE	__pgprot(_PAGE_BASE)
#define PAGE_SHARED	__pgprot(_PAGE_BASE | _PAGE_USER | _PAGE_RW)
#define PAGE_SHARED_X	__pgprot(_PAGE_BASE | _PAGE_USER | _PAGE_RW | \
				 _PAGE_EXEC)
#define PAGE_COPY	__pgprot(_PAGE_BASE | _PAGE_USER)
#define PAGE_COPY_X	__pgprot(_PAGE_BASE | _PAGE_USER | _PAGE_EXEC)
#define PAGE_READONLY	__pgprot(_PAGE_BASE | _PAGE_USER)
#define PAGE_READONLY_X	__pgprot(_PAGE_BASE | _PAGE_USER | _PAGE_EXEC)

#define __P000	PAGE_NONE
#define __P001	PAGE_READONLY
#define __P010	PAGE_COPY
#define __P011	PAGE_COPY
#define __P100	PAGE_READONLY_X
#define __P101	PAGE_READONLY_X
#define __P110	PAGE_COPY_X
#define __P111	PAGE_COPY_X

#define __S000	PAGE_NONE
#define __S001	PAGE_READONLY
#define __S010	PAGE_SHARED
#define __S011	PAGE_SHARED
#define __S100	PAGE_READONLY_X
#define __S101	PAGE_READONLY_X
#define __S110	PAGE_SHARED_X
#define __S111	PAGE_SHARED_X

/* Permission masks used for kernel mappings */
#define PAGE_KERNEL	__pgprot(_PAGE_BASE | _PAGE_KERNEL_RW)
#define PAGE_KERNEL_NC	__pgprot(_PAGE_BASE_NC | _PAGE_KERNEL_RW | \
				 _PAGE_NO_CACHE)
#define PAGE_KERNEL_NCG	__pgprot(_PAGE_BASE_NC | _PAGE_KERNEL_RW | \
				 _PAGE_NO_CACHE | _PAGE_GUARDED)
#define PAGE_KERNEL_X	__pgprot(_PAGE_BASE | _PAGE_KERNEL_RWX)
#define PAGE_KERNEL_RO	__pgprot(_PAGE_BASE | _PAGE_KERNEL_RO)
#define PAGE_KERNEL_ROX	__pgprot(_PAGE_BASE | _PAGE_KERNEL_ROX)

/* Protection used for kernel text. We want the debuggers to be able to
 * set breakpoints anywhere, so don't write protect the kernel text
 * on platforms where such control is possible.
 */
#if defined(CONFIG_KGDB) || defined(CONFIG_XMON) || defined(CONFIG_BDI_SWITCH) || \
	defined(CONFIG_KPROBES) || defined(CONFIG_DYNAMIC_FTRACE)
#define PAGE_KERNEL_TEXT	PAGE_KERNEL_X
#else
#define PAGE_KERNEL_TEXT	PAGE_KERNEL_ROX
#endif

/* Make modules code happy. We don't set RO yet */
#define PAGE_KERNEL_EXEC	PAGE_KERNEL_X
#define PAGE_AGP		(PAGE_KERNEL_NC)

#define PMD_BAD_BITS		(PTE_TABLE_SIZE-1)
#define PUD_BAD_BITS		(PMD_TABLE_SIZE-1)

#ifndef __ASSEMBLY__
#define pmd_bad(pmd)		(!is_kernel_addr(pmd_val(pmd)) \
				 || (pmd_val(pmd) & PMD_BAD_BITS))
#define pmd_page_vaddr(pmd)	(pmd_val(pmd) & ~PMD_MASKED_BITS)

#define pud_bad(pud)		(!is_kernel_addr(pud_val(pud)) \
				 || (pud_val(pud) & PUD_BAD_BITS))
#define pud_page_vaddr(pud)	(pud_val(pud) & ~PUD_MASKED_BITS)

#define pgd_index(address) (((address) >> (PGDIR_SHIFT)) & (PTRS_PER_PGD - 1))
#define pmd_index(address) (((address) >> (PMD_SHIFT)) & (PTRS_PER_PMD - 1))
#define pte_index(address) (((address) >> (PAGE_SHIFT)) & (PTRS_PER_PTE - 1))

extern void hpte_need_flush(struct mm_struct *mm, unsigned long addr,
			    pte_t *ptep, unsigned long pte, int huge);
extern unsigned long htab_convert_pte_flags(unsigned long pteflags);
/* Atomic PTE updates */
static inline unsigned long pte_update(struct mm_struct *mm,
				       unsigned long addr,
				       pte_t *ptep, unsigned long clr,
				       unsigned long set,
				       int huge)
{
	unsigned long old, tmp;

	__asm__ __volatile__(
	"1:	ldarx	%0,0,%3		# pte_update\n\
	andi.	%1,%0,%6\n\
	bne-	1b \n\
	andc	%1,%0,%4 \n\
	or	%1,%1,%7\n\
	stdcx.	%1,0,%3 \n\
	bne-	1b"
	: "=&r" (old), "=&r" (tmp), "=m" (*ptep)
	: "r" (ptep), "r" (clr), "m" (*ptep), "i" (_PAGE_BUSY), "r" (set)
	: "cc" );
	/* huge pages use the old page table lock */
	if (!huge)
		assert_pte_locked(mm, addr);

	if (old & _PAGE_HASHPTE)
		hpte_need_flush(mm, addr, ptep, old, huge);

	return old;
}
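
The ldarx/stdcx. loop above spins while _PAGE_BUSY is set, then atomically applies (old & ~clr) | set and hands the old value back so the caller can see _PAGE_HASHPTE. A hedged userspace analogue of that protocol, using a C11 compare-exchange in place of the load-reserve/store-conditional pair (a model of the logic, not the kernel routine):

#include <assert.h>
#include <stdatomic.h>
#include <stdio.h>

#define PAGE_BUSY	0x00800UL
#define PAGE_ACCESSED	0x00100UL
#define PAGE_RW		0x00200UL

static unsigned long pte_update_model(_Atomic unsigned long *pte,
				      unsigned long clr, unsigned long set)
{
	unsigned long old, new;

	for (;;) {
		old = atomic_load_explicit(pte, memory_order_relaxed);
		if (old & PAGE_BUSY)
			continue;	/* mirrors "andi. ...; bne- 1b" */
		new = (old & ~clr) | set;
		if (atomic_compare_exchange_weak(pte, &old, new))
			break;		/* mirrors a successful stdcx. */
	}
	return old;	/* caller inspects e.g. the HASHPTE bit in here */
}

int main(void)
{
	_Atomic unsigned long pte = PAGE_ACCESSED | PAGE_RW;
	unsigned long old = pte_update_model(&pte, PAGE_RW, 0); /* wrprotect */

	assert(old & PAGE_RW);
	assert(!(atomic_load(&pte) & PAGE_RW));
	printf("old=%#lx new=%#lx\n", old, atomic_load(&pte));
	return 0;
}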

static inline int __ptep_test_and_clear_young(struct mm_struct *mm,
					      unsigned long addr, pte_t *ptep)
{
	unsigned long old;

	if ((pte_val(*ptep) & (_PAGE_ACCESSED | _PAGE_HASHPTE)) == 0)
		return 0;
	old = pte_update(mm, addr, ptep, _PAGE_ACCESSED, 0, 0);
	return (old & _PAGE_ACCESSED) != 0;
}
#define __HAVE_ARCH_PTEP_TEST_AND_CLEAR_YOUNG
#define ptep_test_and_clear_young(__vma, __addr, __ptep)		   \
({									   \
	int __r;							   \
	__r = __ptep_test_and_clear_young((__vma)->vm_mm, __addr, __ptep); \
	__r;								   \
})

#define __HAVE_ARCH_PTEP_SET_WRPROTECT
static inline void ptep_set_wrprotect(struct mm_struct *mm, unsigned long addr,
				      pte_t *ptep)
{
	if ((pte_val(*ptep) & _PAGE_RW) == 0)
		return;

	pte_update(mm, addr, ptep, _PAGE_RW, 0, 0);
}

static inline void huge_ptep_set_wrprotect(struct mm_struct *mm,
					   unsigned long addr, pte_t *ptep)
{
	if ((pte_val(*ptep) & _PAGE_RW) == 0)
		return;

	pte_update(mm, addr, ptep, _PAGE_RW, 0, 1);
}

/*
 * We currently remove entries from the hashtable regardless of whether
 * the entry was young or dirty. The generic routines only flush if the
 * entry was young or dirty, which is not good enough.
 *
 * We should be more intelligent about this, but for the moment we override
 * these functions and force a tlb flush unconditionally.
 */
#define __HAVE_ARCH_PTEP_CLEAR_YOUNG_FLUSH
#define ptep_clear_flush_young(__vma, __address, __ptep)		\
({									\
	int __young = __ptep_test_and_clear_young((__vma)->vm_mm, __address, \
						  __ptep);		\
	__young;							\
})

#define __HAVE_ARCH_PTEP_GET_AND_CLEAR
static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
				       unsigned long addr, pte_t *ptep)
{
	unsigned long old = pte_update(mm, addr, ptep, ~0UL, 0, 0);
	return __pte(old);
}

static inline void pte_clear(struct mm_struct *mm, unsigned long addr,
			     pte_t *ptep)
{
	pte_update(mm, addr, ptep, ~0UL, 0, 0);
}


/* Set the dirty and/or accessed bits atomically in a linux PTE; this
 * function doesn't need to flush the hash entry.
 */
static inline void __ptep_set_access_flags(pte_t *ptep, pte_t entry)
{
	unsigned long bits = pte_val(entry) &
		(_PAGE_DIRTY | _PAGE_ACCESSED | _PAGE_RW | _PAGE_EXEC |
		 _PAGE_SOFT_DIRTY);

	unsigned long old, tmp;

	__asm__ __volatile__(
	"1:	ldarx	%0,0,%4\n\
		andi.	%1,%0,%6\n\
		bne-	1b \n\
		or	%0,%3,%0\n\
		stdcx.	%0,0,%4\n\
		bne-	1b"
	:"=&r" (old), "=&r" (tmp), "=m" (*ptep)
	:"r" (bits), "r" (ptep), "m" (*ptep), "i" (_PAGE_BUSY)
	:"cc");
}

#define __HAVE_ARCH_PTE_SAME
#define pte_same(A,B)	(((pte_val(A) ^ pte_val(B)) & ~_PAGE_HPTEFLAGS) == 0)

/* Generic accessors to PTE bits */
static inline int pte_write(pte_t pte)		{ return !!(pte_val(pte) & _PAGE_RW); }
static inline int pte_dirty(pte_t pte)		{ return !!(pte_val(pte) & _PAGE_DIRTY); }
static inline int pte_young(pte_t pte)		{ return !!(pte_val(pte) & _PAGE_ACCESSED); }
static inline int pte_special(pte_t pte)	{ return !!(pte_val(pte) & _PAGE_SPECIAL); }
static inline int pte_none(pte_t pte)		{ return (pte_val(pte) & ~_PTE_NONE_MASK) == 0; }
static inline pgprot_t pte_pgprot(pte_t pte)	{ return __pgprot(pte_val(pte) & PAGE_PROT_BITS); }

#ifdef CONFIG_HAVE_ARCH_SOFT_DIRTY
static inline bool pte_soft_dirty(pte_t pte)
{
	return !!(pte_val(pte) & _PAGE_SOFT_DIRTY);
}
static inline pte_t pte_mksoft_dirty(pte_t pte)
{
	return __pte(pte_val(pte) | _PAGE_SOFT_DIRTY);
}

static inline pte_t pte_clear_soft_dirty(pte_t pte)
{
	return __pte(pte_val(pte) & ~_PAGE_SOFT_DIRTY);
}
#endif /* CONFIG_HAVE_ARCH_SOFT_DIRTY */

#ifdef CONFIG_NUMA_BALANCING
/*
 * These work without NUMA balancing but the kernel does not care. See the
 * comment in include/asm-generic/pgtable.h. On powerpc, this will only
 * work for user pages and always return true for kernel pages.
 */
static inline int pte_protnone(pte_t pte)
{
	return (pte_val(pte) &
		(_PAGE_PRESENT | _PAGE_USER)) == _PAGE_PRESENT;
}
#endif /* CONFIG_NUMA_BALANCING */

static inline int pte_present(pte_t pte)
{
	return pte_val(pte) & _PAGE_PRESENT;
}

/* Conversion functions: convert a page and protection to a page entry,
 * and a page entry and page directory to the page they refer to.
 *
 * Even if PTEs can be unsigned long long, a PFN is always an unsigned
 * long for now.
 */
static inline pte_t pfn_pte(unsigned long pfn, pgprot_t pgprot)
{
	return __pte(((pte_basic_t)(pfn) << PTE_RPN_SHIFT) |
		     pgprot_val(pgprot));
}

static inline unsigned long pte_pfn(pte_t pte)
{
	return pte_val(pte) >> PTE_RPN_SHIFT;
}

/* Generic modifiers for PTE bits */
static inline pte_t pte_wrprotect(pte_t pte)
{
	return __pte(pte_val(pte) & ~_PAGE_RW);
}

static inline pte_t pte_mkclean(pte_t pte)
{
	return __pte(pte_val(pte) & ~_PAGE_DIRTY);
}

static inline pte_t pte_mkold(pte_t pte)
{
	return __pte(pte_val(pte) & ~_PAGE_ACCESSED);
}

static inline pte_t pte_mkwrite(pte_t pte)
{
	return __pte(pte_val(pte) | _PAGE_RW);
}

static inline pte_t pte_mkdirty(pte_t pte)
{
	return __pte(pte_val(pte) | _PAGE_DIRTY | _PAGE_SOFT_DIRTY);
}

static inline pte_t pte_mkyoung(pte_t pte)
{
	return __pte(pte_val(pte) | _PAGE_ACCESSED);
}

static inline pte_t pte_mkspecial(pte_t pte)
{
	return __pte(pte_val(pte) | _PAGE_SPECIAL);
}

static inline pte_t pte_mkhuge(pte_t pte)
{
	return pte;
}

static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
{
	return __pte((pte_val(pte) & _PAGE_CHG_MASK) | pgprot_val(newprot));
}

/* This low level function performs the actual PTE insertion.
 * Setting the PTE depends on the MMU type and other factors. It's
 * a horrible mess that I'm not going to try to clean up now, but
 * I'm keeping it in one place rather than spread around.
 */
static inline void __set_pte_at(struct mm_struct *mm, unsigned long addr,
				pte_t *ptep, pte_t pte, int percpu)
{
	/*
	 * Anything else just stores the PTE normally. That covers all 64-bit
	 * cases, and 32-bit non-hash with 32-bit PTEs.
	 */
	*ptep = pte;
}

/*
 * Macro to mark a page protection value as "uncacheable".
 */

#define _PAGE_CACHE_CTL	(_PAGE_COHERENT | _PAGE_GUARDED | _PAGE_NO_CACHE | \
			 _PAGE_WRITETHRU)

#define pgprot_noncached pgprot_noncached
static inline pgprot_t pgprot_noncached(pgprot_t prot)
{
	return __pgprot((pgprot_val(prot) & ~_PAGE_CACHE_CTL) |
			_PAGE_NO_CACHE | _PAGE_GUARDED);
}

#define pgprot_noncached_wc pgprot_noncached_wc
static inline pgprot_t pgprot_noncached_wc(pgprot_t prot)
{
	return __pgprot((pgprot_val(prot) & ~_PAGE_CACHE_CTL) |
			_PAGE_NO_CACHE);
}

#define pgprot_cached pgprot_cached
static inline pgprot_t pgprot_cached(pgprot_t prot)
{
	return __pgprot((pgprot_val(prot) & ~_PAGE_CACHE_CTL) |
			_PAGE_COHERENT);
}

#define pgprot_cached_wthru pgprot_cached_wthru
static inline pgprot_t pgprot_cached_wthru(pgprot_t prot)
{
	return __pgprot((pgprot_val(prot) & ~_PAGE_CACHE_CTL) |
			_PAGE_COHERENT | _PAGE_WRITETHRU);
}

#define pgprot_cached_noncoherent pgprot_cached_noncoherent
static inline pgprot_t pgprot_cached_noncoherent(pgprot_t prot)
{
	return __pgprot(pgprot_val(prot) & ~_PAGE_CACHE_CTL);
}

#define pgprot_writecombine pgprot_writecombine
static inline pgprot_t pgprot_writecombine(pgprot_t prot)
{
	return pgprot_noncached_wc(prot);
}
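
All of the pgprot helpers above follow the same shape: clear every cache-control bit, then set the attributes you want, leaving the permission bits alone. A small standalone C check of that mask-then-set pattern (values copied from the defines above; plain unsigned longs stand in for pgprot_t):

#include <assert.h>
#include <stdio.h>

#define _PAGE_GUARDED	0x00010UL
#define _PAGE_NO_CACHE	0x00020UL
#define _PAGE_WRITETHRU	0x00040UL
#define _PAGE_RW	0x00200UL
#define _PAGE_CACHE_CTL	(_PAGE_GUARDED | _PAGE_NO_CACHE | _PAGE_WRITETHRU)

static unsigned long noncached(unsigned long prot)	/* pgprot_noncached() */
{
	return (prot & ~_PAGE_CACHE_CTL) | _PAGE_NO_CACHE | _PAGE_GUARDED;
}

int main(void)
{
	unsigned long prot = _PAGE_WRITETHRU | _PAGE_RW;

	prot = noncached(prot);
	assert(!(prot & _PAGE_WRITETHRU));	/* old cache attribute dropped */
	assert(prot & _PAGE_NO_CACHE);		/* I bit set */
	assert(prot & _PAGE_RW);		/* permissions untouched */
	printf("prot=%#lx\n", prot);
	return 0;
}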

#ifdef CONFIG_TRANSPARENT_HUGEPAGE
extern void hpte_do_hugepage_flush(struct mm_struct *mm, unsigned long addr,
				   pmd_t *pmdp, unsigned long old_pmd);
#else
static inline void hpte_do_hugepage_flush(struct mm_struct *mm,
					  unsigned long addr, pmd_t *pmdp,
					  unsigned long old_pmd)
{
	WARN(1, "%s called with THP disabled\n", __func__);
}
#endif /* CONFIG_TRANSPARENT_HUGEPAGE */

#endif /* !__ASSEMBLY__ */
#endif /* __KERNEL__ */
#endif /* _ASM_POWERPC_BOOK3S_64_HASH_H */

@@ -0,0 +1,300 @@
#ifndef _ASM_POWERPC_BOOK3S_64_PGTABLE_H_
#define _ASM_POWERPC_BOOK3S_64_PGTABLE_H_
/*
 * This file contains the functions and defines necessary to modify and use
 * the ppc64 hashed page table.
 */

#include <asm/book3s/64/hash.h>
#include <asm/barrier.h>

/*
 * The second half of the kernel virtual space is used for IO mappings;
 * it is itself carved into the PIO region (ISA and PHB IO space) and
 * the ioremap space.
 *
 *  ISA_IO_BASE = KERN_IO_START, 64K reserved area
 *  PHB_IO_BASE = ISA_IO_BASE + 64K to ISA_IO_BASE + 2G, PHB IO spaces
 * IOREMAP_BASE = ISA_IO_BASE + 2G to VMALLOC_START + PGTABLE_RANGE
 */
#define KERN_IO_START	(KERN_VIRT_START + (KERN_VIRT_SIZE >> 1))
#define FULL_IO_SIZE	0x80000000ul
#define ISA_IO_BASE	(KERN_IO_START)
#define ISA_IO_END	(KERN_IO_START + 0x10000ul)
#define PHB_IO_BASE	(ISA_IO_END)
#define PHB_IO_END	(KERN_IO_START + FULL_IO_SIZE)
#define IOREMAP_BASE	(PHB_IO_END)
#define IOREMAP_END	(KERN_VIRT_START + KERN_VIRT_SIZE)

#define vmemmap		((struct page *)VMEMMAP_BASE)

/* Advertise special mapping type for AGP */
#define HAVE_PAGE_AGP

/* Advertise support for _PAGE_SPECIAL */
#define __HAVE_ARCH_PTE_SPECIAL

#ifndef __ASSEMBLY__

/*
 * This is the default implementation of various PTE accessors; it's
 * used in all cases except Book3S with 64K pages, where we have a
 * concept of sub-pages.
 */
#ifndef __real_pte

#ifdef CONFIG_STRICT_MM_TYPECHECKS
#define __real_pte(e,p)		((real_pte_t){(e)})
#define __rpte_to_pte(r)	((r).pte)
#else
#define __real_pte(e,p)		(e)
#define __rpte_to_pte(r)	(__pte(r))
#endif
#define __rpte_to_hidx(r,index)	(pte_val(__rpte_to_pte(r)) >> _PAGE_F_GIX_SHIFT)

#define pte_iterate_hashed_subpages(rpte, psize, va, index, shift)	\
	do {								\
		index = 0;						\
		shift = mmu_psize_defs[psize].shift;			\

#define pte_iterate_hashed_end() } while(0)

/*
 * We expect this to be called only for user addresses or kernel virtual
 * addresses other than the linear mapping.
 */
#define pte_pagesize_index(mm, addr, pte)	MMU_PAGE_4K

#endif /* __real_pte */

static inline void pmd_set(pmd_t *pmdp, unsigned long val)
{
	*pmdp = __pmd(val);
}

static inline void pmd_clear(pmd_t *pmdp)
{
	*pmdp = __pmd(0);
}

#define pmd_none(pmd)		(!pmd_val(pmd))
#define pmd_present(pmd)	(!pmd_none(pmd))

static inline void pud_set(pud_t *pudp, unsigned long val)
{
	*pudp = __pud(val);
}

static inline void pud_clear(pud_t *pudp)
{
	*pudp = __pud(0);
}

#define pud_none(pud)		(!pud_val(pud))
#define pud_present(pud)	(pud_val(pud) != 0)

extern struct page *pud_page(pud_t pud);
extern struct page *pmd_page(pmd_t pmd);
static inline pte_t pud_pte(pud_t pud)
{
	return __pte(pud_val(pud));
}

static inline pud_t pte_pud(pte_t pte)
{
	return __pud(pte_val(pte));
}
#define pud_write(pud)		pte_write(pud_pte(pud))
#define pgd_write(pgd)		pte_write(pgd_pte(pgd))
static inline void pgd_set(pgd_t *pgdp, unsigned long val)
{
	*pgdp = __pgd(val);
}

/*
 * Find an entry in a page-table-directory. We combine the address region
 * (the high order N bits) and the pgd portion of the address.
 */

#define pgd_offset(mm, address)	((mm)->pgd + pgd_index(address))

#define pmd_offset(pudp,addr) \
	(((pmd_t *) pud_page_vaddr(*(pudp))) + pmd_index(addr))

#define pte_offset_kernel(dir,addr) \
	(((pte_t *) pmd_page_vaddr(*(dir))) + pte_index(addr))

#define pte_offset_map(dir,addr)	pte_offset_kernel((dir), (addr))
#define pte_unmap(pte)			do { } while(0)

/* to find an entry in a kernel page-table-directory */
/* This now only contains the vmalloc pages */
#define pgd_offset_k(address) pgd_offset(&init_mm, address)

#define pte_ERROR(e) \
	pr_err("%s:%d: bad pte %08lx.\n", __FILE__, __LINE__, pte_val(e))
#define pmd_ERROR(e) \
	pr_err("%s:%d: bad pmd %08lx.\n", __FILE__, __LINE__, pmd_val(e))
#define pgd_ERROR(e) \
	pr_err("%s:%d: bad pgd %08lx.\n", __FILE__, __LINE__, pgd_val(e))

/* Encode and de-code a swap entry */
#define MAX_SWAPFILES_CHECK() do { \
	BUILD_BUG_ON(MAX_SWAPFILES_SHIFT > SWP_TYPE_BITS); \
	/*							\
	 * Don't have overlapping bits with _PAGE_HPTEFLAGS.	\
	 * We filter HPTEFLAGS on set_pte.			\
	 */							\
	BUILD_BUG_ON(_PAGE_HPTEFLAGS & (0x1f << _PAGE_BIT_SWAP_TYPE)); \
	BUILD_BUG_ON(_PAGE_HPTEFLAGS & _PAGE_SWP_SOFT_DIRTY);	\
	} while (0)
/*
 * On the pte, we don't need to handle RADIX_TREE_EXCEPTIONAL_SHIFT.
 */
#define SWP_TYPE_BITS 5
#define __swp_type(x)		(((x).val >> _PAGE_BIT_SWAP_TYPE) \
				& ((1UL << SWP_TYPE_BITS) - 1))
#define __swp_offset(x)		((x).val >> PTE_RPN_SHIFT)
#define __swp_entry(type, offset)	((swp_entry_t) { \
					((type) << _PAGE_BIT_SWAP_TYPE) \
					| ((offset) << PTE_RPN_SHIFT) })
/*
 * swp_entry_t must be independent of pte bits. We build a swp_entry_t from
 * the swap type and offset we get from swap, and convert that to a pte to
 * find a matching pte in the linux page table.
 * Clear bits not found in swap entries here.
 */
#define __pte_to_swp_entry(pte)	((swp_entry_t) { pte_val((pte)) & ~_PAGE_PTE })
#define __swp_entry_to_pte(x)	__pte((x).val | _PAGE_PTE)

#ifdef CONFIG_MEM_SOFT_DIRTY
#define _PAGE_SWP_SOFT_DIRTY	(1UL << (SWP_TYPE_BITS + _PAGE_BIT_SWAP_TYPE))
#else
#define _PAGE_SWP_SOFT_DIRTY	0UL
#endif /* CONFIG_MEM_SOFT_DIRTY */

#ifdef CONFIG_HAVE_ARCH_SOFT_DIRTY
static inline pte_t pte_swp_mksoft_dirty(pte_t pte)
{
	return __pte(pte_val(pte) | _PAGE_SWP_SOFT_DIRTY);
}
static inline bool pte_swp_soft_dirty(pte_t pte)
{
	return !!(pte_val(pte) & _PAGE_SWP_SOFT_DIRTY);
}
static inline pte_t pte_swp_clear_soft_dirty(pte_t pte)
{
	return __pte(pte_val(pte) & ~_PAGE_SWP_SOFT_DIRTY);
}
#endif /* CONFIG_HAVE_ARCH_SOFT_DIRTY */
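
The swap-entry encoding above packs the type into the five bits starting at _PAGE_BIT_SWAP_TYPE and the offset into the RPN field. A hedged, standalone C round-trip of that packing (PTE_RPN_SHIFT is config dependent; 30 here is only illustrative):

#include <assert.h>
#include <stdio.h>

#define _PAGE_BIT_SWAP_TYPE	2
#define SWP_TYPE_BITS		5
#define PTE_RPN_SHIFT		30	/* illustrative value */

static unsigned long swp_entry(unsigned long type, unsigned long offset)
{
	return (type << _PAGE_BIT_SWAP_TYPE) | (offset << PTE_RPN_SHIFT);
}

int main(void)
{
	unsigned long val = swp_entry(3, 0x1234);

	/* decode: type from the low field, offset from the RPN field */
	unsigned long type = (val >> _PAGE_BIT_SWAP_TYPE) &
			     ((1UL << SWP_TYPE_BITS) - 1);
	unsigned long offset = val >> PTE_RPN_SHIFT;

	assert(type == 3 && offset == 0x1234);
	printf("type=%lu offset=%#lx\n", type, offset);
	return 0;
}

The two fields never overlap _PAGE_HPTEFLAGS, which is exactly what the BUILD_BUG_ONs in MAX_SWAPFILES_CHECK() enforce at compile time.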

void pgtable_cache_add(unsigned shift, void (*ctor)(void *));
void pgtable_cache_init(void);

struct page *realmode_pfn_to_page(unsigned long pfn);

#ifdef CONFIG_TRANSPARENT_HUGEPAGE
extern pmd_t pfn_pmd(unsigned long pfn, pgprot_t pgprot);
extern pmd_t mk_pmd(struct page *page, pgprot_t pgprot);
extern pmd_t pmd_modify(pmd_t pmd, pgprot_t newprot);
extern void set_pmd_at(struct mm_struct *mm, unsigned long addr,
		       pmd_t *pmdp, pmd_t pmd);
extern void update_mmu_cache_pmd(struct vm_area_struct *vma, unsigned long addr,
				 pmd_t *pmd);
extern int has_transparent_hugepage(void);
#endif /* CONFIG_TRANSPARENT_HUGEPAGE */


static inline pte_t pmd_pte(pmd_t pmd)
{
	return __pte(pmd_val(pmd));
}

static inline pmd_t pte_pmd(pte_t pte)
{
	return __pmd(pte_val(pte));
}

static inline pte_t *pmdp_ptep(pmd_t *pmd)
{
	return (pte_t *)pmd;
}

#define pmd_pfn(pmd)		pte_pfn(pmd_pte(pmd))
#define pmd_dirty(pmd)		pte_dirty(pmd_pte(pmd))
#define pmd_young(pmd)		pte_young(pmd_pte(pmd))
#define pmd_mkold(pmd)		pte_pmd(pte_mkold(pmd_pte(pmd)))
#define pmd_wrprotect(pmd)	pte_pmd(pte_wrprotect(pmd_pte(pmd)))
#define pmd_mkdirty(pmd)	pte_pmd(pte_mkdirty(pmd_pte(pmd)))
#define pmd_mkyoung(pmd)	pte_pmd(pte_mkyoung(pmd_pte(pmd)))
#define pmd_mkwrite(pmd)	pte_pmd(pte_mkwrite(pmd_pte(pmd)))

#ifdef CONFIG_HAVE_ARCH_SOFT_DIRTY
#define pmd_soft_dirty(pmd)	pte_soft_dirty(pmd_pte(pmd))
#define pmd_mksoft_dirty(pmd)	pte_pmd(pte_mksoft_dirty(pmd_pte(pmd)))
#define pmd_clear_soft_dirty(pmd) pte_pmd(pte_clear_soft_dirty(pmd_pte(pmd)))
#endif /* CONFIG_HAVE_ARCH_SOFT_DIRTY */

#ifdef CONFIG_NUMA_BALANCING
static inline int pmd_protnone(pmd_t pmd)
{
	return pte_protnone(pmd_pte(pmd));
}
#endif /* CONFIG_NUMA_BALANCING */

#define __HAVE_ARCH_PMD_WRITE
#define pmd_write(pmd)		pte_write(pmd_pte(pmd))

static inline pmd_t pmd_mkhuge(pmd_t pmd)
{
	return __pmd(pmd_val(pmd) | (_PAGE_PTE | _PAGE_THP_HUGE));
}

#define __HAVE_ARCH_PMDP_SET_ACCESS_FLAGS
extern int pmdp_set_access_flags(struct vm_area_struct *vma,
				 unsigned long address, pmd_t *pmdp,
				 pmd_t entry, int dirty);

#define __HAVE_ARCH_PMDP_TEST_AND_CLEAR_YOUNG
extern int pmdp_test_and_clear_young(struct vm_area_struct *vma,
				     unsigned long address, pmd_t *pmdp);
#define __HAVE_ARCH_PMDP_CLEAR_YOUNG_FLUSH
extern int pmdp_clear_flush_young(struct vm_area_struct *vma,
				  unsigned long address, pmd_t *pmdp);

#define __HAVE_ARCH_PMDP_HUGE_GET_AND_CLEAR
extern pmd_t pmdp_huge_get_and_clear(struct mm_struct *mm,
				     unsigned long addr, pmd_t *pmdp);

#define __HAVE_ARCH_PMDP_SPLITTING_FLUSH
extern void pmdp_splitting_flush(struct vm_area_struct *vma,
				 unsigned long address, pmd_t *pmdp);

extern pmd_t pmdp_collapse_flush(struct vm_area_struct *vma,
				 unsigned long address, pmd_t *pmdp);
#define pmdp_collapse_flush pmdp_collapse_flush

#define __HAVE_ARCH_PGTABLE_DEPOSIT
extern void pgtable_trans_huge_deposit(struct mm_struct *mm, pmd_t *pmdp,
				       pgtable_t pgtable);
#define __HAVE_ARCH_PGTABLE_WITHDRAW
extern pgtable_t pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp);

#define __HAVE_ARCH_PMDP_INVALIDATE
extern void pmdp_invalidate(struct vm_area_struct *vma, unsigned long address,
			    pmd_t *pmdp);

#define pmd_move_must_withdraw pmd_move_must_withdraw
struct spinlock;
static inline int pmd_move_must_withdraw(struct spinlock *new_pmd_ptl,
					 struct spinlock *old_pmd_ptl)
{
	/*
	 * Archs like ppc64 use the pgtable to store per-pmd
	 * specific information. So when we switch the pmd,
	 * we should also withdraw and deposit the pgtable.
	 */
	return true;
}
#endif /* __ASSEMBLY__ */
#endif /* _ASM_POWERPC_BOOK3S_64_PGTABLE_H_ */

@@ -0,0 +1,29 @@
#ifndef _ASM_POWERPC_BOOK3S_PGTABLE_H
#define _ASM_POWERPC_BOOK3S_PGTABLE_H

#ifdef CONFIG_PPC64
#include <asm/book3s/64/pgtable.h>
#else
#include <asm/book3s/32/pgtable.h>
#endif

#define FIRST_USER_ADDRESS	0UL
#ifndef __ASSEMBLY__
/* Insert a PTE. The top-level function is out of line; it uses an inline
 * low level function in the respective pgtable-* files.
 */
extern void set_pte_at(struct mm_struct *mm, unsigned long addr, pte_t *ptep,
		       pte_t pte);


#define __HAVE_ARCH_PTEP_SET_ACCESS_FLAGS
extern int ptep_set_access_flags(struct vm_area_struct *vma, unsigned long address,
				 pte_t *ptep, pte_t entry, int dirty);

struct file;
extern pgprot_t phys_mem_access_prot(struct file *file, unsigned long pfn,
				     unsigned long size, pgprot_t vma_prot);
#define __HAVE_PHYS_MEM_ACCESS_PROT

#endif /* __ASSEMBLY__ */
#endif
@@ -18,12 +18,12 @@ __xchg_u32(volatile void *p, unsigned long val)
 	unsigned long prev;
 
 	__asm__ __volatile__(
-	PPC_RELEASE_BARRIER
+	PPC_ATOMIC_ENTRY_BARRIER
 "1:	lwarx	%0,0,%2 \n"
 	PPC405_ERR77(0,%2)
 "	stwcx.	%3,0,%2 \n\
 	bne-	1b"
-	PPC_ACQUIRE_BARRIER
+	PPC_ATOMIC_EXIT_BARRIER
 	: "=&r" (prev), "+m" (*(volatile unsigned int *)p)
 	: "r" (p), "r" (val)
 	: "cc", "memory");
 
@@ -61,12 +61,12 @@ __xchg_u64(volatile void *p, unsigned long val)
 	unsigned long prev;
 
 	__asm__ __volatile__(
-	PPC_RELEASE_BARRIER
+	PPC_ATOMIC_ENTRY_BARRIER
 "1:	ldarx	%0,0,%2 \n"
 	PPC405_ERR77(0,%2)
 "	stdcx.	%3,0,%2 \n\
 	bne-	1b"
-	PPC_ACQUIRE_BARRIER
+	PPC_ATOMIC_EXIT_BARRIER
 	: "=&r" (prev), "+m" (*(volatile unsigned long *)p)
 	: "r" (p), "r" (val)
 	: "cc", "memory");
 
@@ -151,14 +151,14 @@ __cmpxchg_u32(volatile unsigned int *p, unsigned long old, unsigned long new)
 	unsigned int prev;
 
 	__asm__ __volatile__ (
-	PPC_RELEASE_BARRIER
+	PPC_ATOMIC_ENTRY_BARRIER
 "1:	lwarx	%0,0,%2		# __cmpxchg_u32\n\
 	cmpw	0,%0,%3\n\
 	bne-	2f\n"
 	PPC405_ERR77(0,%2)
 "	stwcx.	%4,0,%2\n\
 	bne-	1b"
-	PPC_ACQUIRE_BARRIER
+	PPC_ATOMIC_EXIT_BARRIER
 	"\n\
 2:"
 	: "=&r" (prev), "+m" (*p)
 
@@ -197,13 +197,13 @@ __cmpxchg_u64(volatile unsigned long *p, unsigned long old, unsigned long new)
 	unsigned long prev;
 
 	__asm__ __volatile__ (
-	PPC_RELEASE_BARRIER
+	PPC_ATOMIC_ENTRY_BARRIER
 "1:	ldarx	%0,0,%2		# __cmpxchg_u64\n\
 	cmpd	0,%0,%3\n\
 	bne-	2f\n\
 	stdcx.	%4,0,%2\n\
 	bne-	1b"
-	PPC_ACQUIRE_BARRIER
+	PPC_ATOMIC_EXIT_BARRIER
 	"\n\
 2:"
 	: "=&r" (prev), "+m" (*p)
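
These hunks swap the acquire/release barrier pair for entry/exit barriers so that the value-returning xchg/cmpxchg primitives become fully ordered. In C11 terms the distinction is roughly memory_order_acq_rel versus memory_order_seq_cst; a hedged standalone illustration of that API-level difference (a model of the memory-ordering contract, not of the PowerPC barriers themselves):

#include <stdatomic.h>
#include <stdio.h>

int main(void)
{
	_Atomic unsigned long v = 1;

	/* fully ordered: ordered against all loads/stores on both sides */
	unsigned long prev = atomic_exchange_explicit(&v, 2,
						      memory_order_seq_cst);

	/* weaker: roughly what the old acquire/release pairing provided */
	unsigned long prev2 = atomic_exchange_explicit(&v, 3,
						       memory_order_acq_rel);

	printf("prev=%lu prev2=%lu now=%lu\n", prev, prev2, atomic_load(&v));
	return 0;
}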
@@ -5,6 +5,7 @@
 #include <linux/types.h>
 #include <linux/errno.h>
 #include <linux/of.h>
+#include <soc/fsl/qe/qe.h>
 
 /*
  * SPI Parameter RAM common to QE and CPM.
 
@@ -155,49 +156,6 @@ typedef struct cpm_buf_desc {
  */
 #define BD_I2C_START	(0x0400)
 
-int cpm_muram_init(void);
-
-#if defined(CONFIG_CPM) || defined(CONFIG_QUICC_ENGINE)
-unsigned long cpm_muram_alloc(unsigned long size, unsigned long align);
-int cpm_muram_free(unsigned long offset);
-unsigned long cpm_muram_alloc_fixed(unsigned long offset, unsigned long size);
-void __iomem *cpm_muram_addr(unsigned long offset);
-unsigned long cpm_muram_offset(void __iomem *addr);
-dma_addr_t cpm_muram_dma(void __iomem *addr);
-#else
-static inline unsigned long cpm_muram_alloc(unsigned long size,
-					    unsigned long align)
-{
-	return -ENOSYS;
-}
-
-static inline int cpm_muram_free(unsigned long offset)
-{
-	return -ENOSYS;
-}
-
-static inline unsigned long cpm_muram_alloc_fixed(unsigned long offset,
-						  unsigned long size)
-{
-	return -ENOSYS;
-}
-
-static inline void __iomem *cpm_muram_addr(unsigned long offset)
-{
-	return NULL;
-}
-
-static inline unsigned long cpm_muram_offset(void __iomem *addr)
-{
-	return -ENOSYS;
-}
-
-static inline dma_addr_t cpm_muram_dma(void __iomem *addr)
-{
-	return 0;
-}
-#endif /* defined(CONFIG_CPM) || defined(CONFIG_QUICC_ENGINE) */
-
 #ifdef CONFIG_CPM
 int cpm_command(u32 command, u8 opcode);
 #else
@@ -129,15 +129,6 @@ BEGIN_FTR_SECTION_NESTED(941)					\
 	mtspr	SPRN_PPR,ra;						\
 END_FTR_SECTION_NESTED(CPU_FTR_HAS_PPR,CPU_FTR_HAS_PPR,941)
 
-/*
- * Increase the priority on systems where PPR save/restore is not
- * implemented/supported.
- */
-#define HMT_MEDIUM_PPR_DISCARD						\
-BEGIN_FTR_SECTION_NESTED(942)						\
-	HMT_MEDIUM;							\
-END_FTR_SECTION_NESTED(CPU_FTR_HAS_PPR,0,942)	/* non P7 */
-
 /*
  * Get an SPR into a register if the CPU has the given feature
  */
 
@@ -263,17 +254,6 @@ do_kvm_##n:								\
 #define KVM_HANDLER_SKIP(area, h, n)
 #endif
 
-#ifdef CONFIG_KVM_BOOK3S_PR_POSSIBLE
-#define KVMTEST_PR(n)			__KVMTEST(n)
-#define KVM_HANDLER_PR(area, h, n)	__KVM_HANDLER(area, h, n)
-#define KVM_HANDLER_PR_SKIP(area, h, n)	__KVM_HANDLER_SKIP(area, h, n)
-
-#else
-#define KVMTEST_PR(n)
-#define KVM_HANDLER_PR(area, h, n)
-#define KVM_HANDLER_PR_SKIP(area, h, n)
-#endif
-
 #define NOTEST(n)
 
 /*
 
@@ -353,27 +333,25 @@ do_kvm_##n:								\
 /*
  * Exception vectors.
  */
-#define STD_EXCEPTION_PSERIES(loc, vec, label)			\
-	. = loc;						\
+#define STD_EXCEPTION_PSERIES(vec, label)			\
+	. = vec;						\
 	.globl label##_pSeries;					\
 label##_pSeries:						\
-	HMT_MEDIUM_PPR_DISCARD;					\
 	SET_SCRATCH0(r13);		/* save r13 */		\
 	EXCEPTION_PROLOG_PSERIES(PACA_EXGEN, label##_common,	\
-				 EXC_STD, KVMTEST_PR, vec)
+				 EXC_STD, KVMTEST, vec)
 
 /* Version of above for when we have to branch out-of-line */
 #define STD_EXCEPTION_PSERIES_OOL(vec, label)			\
 	.globl label##_pSeries;					\
 label##_pSeries:						\
-	EXCEPTION_PROLOG_1(PACA_EXGEN, KVMTEST_PR, vec);	\
+	EXCEPTION_PROLOG_1(PACA_EXGEN, KVMTEST, vec);		\
 	EXCEPTION_PROLOG_PSERIES_1(label##_common, EXC_STD)
 
 #define STD_EXCEPTION_HV(loc, vec, label)			\
 	. = loc;						\
 	.globl label##_hv;					\
 label##_hv:							\
-	HMT_MEDIUM_PPR_DISCARD;					\
 	SET_SCRATCH0(r13);		/* save r13 */		\
 	EXCEPTION_PROLOG_PSERIES(PACA_EXGEN, label##_common,	\
 				 EXC_HV, KVMTEST, vec)
 
@@ -389,7 +367,6 @@ label##_hv:							\
 	. = loc;						\
 	.globl label##_relon_pSeries;				\
 label##_relon_pSeries:						\
-	HMT_MEDIUM_PPR_DISCARD;					\
 	/* No guest interrupts come through here */		\
 	SET_SCRATCH0(r13);		/* save r13 */		\
 	EXCEPTION_RELON_PROLOG_PSERIES(PACA_EXGEN, label##_common, \
 
@@ -405,7 +382,6 @@ label##_relon_pSeries:					\
 	. = loc;						\
 	.globl label##_relon_hv;				\
 label##_relon_hv:						\
-	HMT_MEDIUM_PPR_DISCARD;					\
 	/* No guest interrupts come through here */		\
 	SET_SCRATCH0(r13);		/* save r13 */		\
 	EXCEPTION_RELON_PROLOG_PSERIES(PACA_EXGEN, label##_common, \
 
@@ -436,17 +412,13 @@ label##_relon_hv:					\
 #define _SOFTEN_TEST(h, vec)	__SOFTEN_TEST(h, vec)
 
 #define SOFTEN_TEST_PR(vec)					\
-	KVMTEST_PR(vec);					\
+	KVMTEST(vec);						\
 	_SOFTEN_TEST(EXC_STD, vec)
 
 #define SOFTEN_TEST_HV(vec)					\
 	KVMTEST(vec);						\
 	_SOFTEN_TEST(EXC_HV, vec)
 
-#define SOFTEN_TEST_HV_201(vec)					\
-	KVMTEST(vec);						\
-	_SOFTEN_TEST(EXC_STD, vec)
-
 #define SOFTEN_NOTEST_PR(vec)		_SOFTEN_TEST(EXC_STD, vec)
 #define SOFTEN_NOTEST_HV(vec)		_SOFTEN_TEST(EXC_HV, vec)
 
 
@@ -463,7 +435,6 @@ label##_relon_hv:					\
 	. = loc;						\
 	.globl label##_pSeries;					\
 label##_pSeries:						\
-	HMT_MEDIUM_PPR_DISCARD;					\
 	_MASKABLE_EXCEPTION_PSERIES(vec, label,			\
 				    EXC_STD, SOFTEN_TEST_PR)
 
 
@@ -481,7 +452,6 @@ label##_hv:							\
 	EXCEPTION_PROLOG_PSERIES_1(label##_common, EXC_HV);
 
 #define __MASKABLE_RELON_EXCEPTION_PSERIES(vec, label, h, extra)	\
-	HMT_MEDIUM_PPR_DISCARD;						\
 	SET_SCRATCH0(r13);	/* save r13 */				\
 	EXCEPTION_PROLOG_0(PACA_EXGEN);					\
 	__EXCEPTION_PROLOG_1(PACA_EXGEN, extra, vec);			\
@@ -47,12 +47,10 @@
 #define FW_FEATURE_VPHN		ASM_CONST(0x0000000004000000)
 #define FW_FEATURE_XCMO		ASM_CONST(0x0000000008000000)
 #define FW_FEATURE_OPAL		ASM_CONST(0x0000000010000000)
-#define FW_FEATURE_OPALv2	ASM_CONST(0x0000000020000000)
 #define FW_FEATURE_SET_MODE	ASM_CONST(0x0000000040000000)
 #define FW_FEATURE_BEST_ENERGY	ASM_CONST(0x0000000080000000)
 #define FW_FEATURE_TYPE1_AFFINITY ASM_CONST(0x0000000100000000)
 #define FW_FEATURE_PRRN		ASM_CONST(0x0000000200000000)
-#define FW_FEATURE_OPALv3	ASM_CONST(0x0000000400000000)
 
 #ifndef __ASSEMBLY__
 
 
@@ -70,8 +68,7 @@ enum {
 		FW_FEATURE_SET_MODE | FW_FEATURE_BEST_ENERGY |
 		FW_FEATURE_TYPE1_AFFINITY | FW_FEATURE_PRRN,
 	FW_FEATURE_PSERIES_ALWAYS = 0,
-	FW_FEATURE_POWERNV_POSSIBLE = FW_FEATURE_OPAL | FW_FEATURE_OPALv2 |
-		FW_FEATURE_OPALv3,
+	FW_FEATURE_POWERNV_POSSIBLE = FW_FEATURE_OPAL,
 	FW_FEATURE_POWERNV_ALWAYS = 0,
 	FW_FEATURE_PS3_POSSIBLE = FW_FEATURE_LPAR | FW_FEATURE_PS3_LV1,
 	FW_FEATURE_PS3_ALWAYS = FW_FEATURE_LPAR | FW_FEATURE_PS3_LV1,
@@ -385,6 +385,17 @@ static inline void __raw_writeq(unsigned long v, volatile void __iomem *addr)
 {
 	*(volatile unsigned long __force *)PCI_FIX_ADDR(addr) = v;
 }
+
+/*
+ * Real mode version of the above. stdcix is only supposed to be used
+ * in hypervisor real mode as per the architecture spec.
+ */
+static inline void __raw_rm_writeq(u64 val, volatile void __iomem *paddr)
+{
+	__asm__ __volatile__("stdcix %0,0,%1"
+		: : "r" (val), "r" (paddr) : "memory");
+}
 
 #endif /* __powerpc64__ */
 
 /*
 
@@ -21,7 +21,7 @@
  * need for various slices related matters. Note that this isn't the
  * complete pgtable.h but only a portion of it.
  */
-#include <asm/pgtable-ppc64.h>
+#include <asm/book3s/64/pgtable.h>
 #include <asm/bug.h>
 #include <asm/processor.h>
@@ -1,5 +1,5 @@
-#ifndef _ASM_POWERPC_PGTABLE_PPC32_H
-#define _ASM_POWERPC_PGTABLE_PPC32_H
+#ifndef _ASM_POWERPC_NOHASH_32_PGTABLE_H
+#define _ASM_POWERPC_NOHASH_32_PGTABLE_H
 
 #include <asm-generic/pgtable-nopmd.h>
 
 
@@ -106,17 +106,15 @@ extern int icache_44x_need_flush;
  */
 
 #if defined(CONFIG_40x)
-#include <asm/pte-40x.h>
+#include <asm/nohash/32/pte-40x.h>
 #elif defined(CONFIG_44x)
-#include <asm/pte-44x.h>
+#include <asm/nohash/32/pte-44x.h>
 #elif defined(CONFIG_FSL_BOOKE) && defined(CONFIG_PTE_64BIT)
-#include <asm/pte-book3e.h>
+#include <asm/nohash/pte-book3e.h>
 #elif defined(CONFIG_FSL_BOOKE)
-#include <asm/pte-fsl-booke.h>
+#include <asm/nohash/32/pte-fsl-booke.h>
 #elif defined(CONFIG_8xx)
-#include <asm/pte-8xx.h>
-#else /* CONFIG_6xx */
-#include <asm/pte-hash32.h>
+#include <asm/nohash/32/pte-8xx.h>
 #endif
 
 /* And here we include common definitions */
 
@@ -130,7 +128,12 @@ extern int icache_44x_need_flush;
 #define pmd_none(pmd)		(!pmd_val(pmd))
 #define pmd_bad(pmd)		(pmd_val(pmd) & _PMD_BAD)
 #define pmd_present(pmd)	(pmd_val(pmd) & _PMD_PRESENT_MASK)
-#define pmd_clear(pmdp)		do { pmd_val(*(pmdp)) = 0; } while (0)
+static inline void pmd_clear(pmd_t *pmdp)
+{
+	*pmdp = __pmd(0);
+}
+
+
 
 /*
  * When flushing the tlb entry for a page, we also need to flush the hash
 
@@ -337,4 +340,4 @@ extern int get_pteptr(struct mm_struct *mm, unsigned long addr, pte_t **ptep,
 
 #endif /* !__ASSEMBLY__ */
 
-#endif /* _ASM_POWERPC_PGTABLE_PPC32_H */
+#endif /* __ASM_POWERPC_NOHASH_32_PGTABLE_H */

@@ -1,5 +1,5 @@
-#ifndef _ASM_POWERPC_PTE_40x_H
-#define _ASM_POWERPC_PTE_40x_H
+#ifndef _ASM_POWERPC_NOHASH_32_PTE_40x_H
+#define _ASM_POWERPC_NOHASH_32_PTE_40x_H
 #ifdef __KERNEL__
 
 /*
 
@@ -61,4 +61,4 @@
 #define PTE_ATOMIC_UPDATES	1
 
 #endif /* __KERNEL__ */
-#endif /* _ASM_POWERPC_PTE_40x_H */
+#endif /* _ASM_POWERPC_NOHASH_32_PTE_40x_H */

@@ -1,5 +1,5 @@
-#ifndef _ASM_POWERPC_PTE_44x_H
-#define _ASM_POWERPC_PTE_44x_H
+#ifndef _ASM_POWERPC_NOHASH_32_PTE_44x_H
+#define _ASM_POWERPC_NOHASH_32_PTE_44x_H
 #ifdef __KERNEL__
 
 /*
 
@@ -94,4 +94,4 @@
 
 
 #endif /* __KERNEL__ */
-#endif /* _ASM_POWERPC_PTE_44x_H */
+#endif /* _ASM_POWERPC_NOHASH_32_PTE_44x_H */

@@ -1,5 +1,5 @@
-#ifndef _ASM_POWERPC_PTE_8xx_H
-#define _ASM_POWERPC_PTE_8xx_H
+#ifndef _ASM_POWERPC_NOHASH_32_PTE_8xx_H
+#define _ASM_POWERPC_NOHASH_32_PTE_8xx_H
 #ifdef __KERNEL__
 
 /*
 
@@ -62,4 +62,4 @@
 				 _PAGE_HWWRITE | _PAGE_EXEC)
 
 #endif /* __KERNEL__ */
-#endif /* _ASM_POWERPC_PTE_8xx_H */
+#endif /* _ASM_POWERPC_NOHASH_32_PTE_8xx_H */

@@ -1,5 +1,5 @@
-#ifndef _ASM_POWERPC_PTE_FSL_BOOKE_H
-#define _ASM_POWERPC_PTE_FSL_BOOKE_H
+#ifndef _ASM_POWERPC_NOHASH_32_PTE_FSL_BOOKE_H
+#define _ASM_POWERPC_NOHASH_32_PTE_FSL_BOOKE_H
 #ifdef __KERNEL__
 
 /* PTE bit definitions for Freescale BookE SW loaded TLB MMU based
 
@@ -37,4 +37,4 @@
 #define PTE_WIMGE_SHIFT (6)
 
 #endif /* __KERNEL__ */
-#endif /* _ASM_POWERPC_PTE_FSL_BOOKE_H */
+#endif /* _ASM_POWERPC_NOHASH_32_PTE_FSL_BOOKE_H */

@@ -1,5 +1,5 @@
-#ifndef _ASM_POWERPC_PGTABLE_PPC64_4K_H
-#define _ASM_POWERPC_PGTABLE_PPC64_4K_H
+#ifndef _ASM_POWERPC_NOHASH_64_PGTABLE_4K_H
+#define _ASM_POWERPC_NOHASH_64_PGTABLE_4K_H
 /*
  * Entries per page directory level. The PTE level must use a 64b record
  * for each page table entry. The PMD and PGD level use a 32b record for
 
@@ -55,11 +55,15 @@
 #define pgd_none(pgd)		(!pgd_val(pgd))
 #define pgd_bad(pgd)		(pgd_val(pgd) == 0)
 #define pgd_present(pgd)	(pgd_val(pgd) != 0)
-#define pgd_clear(pgdp)		(pgd_val(*(pgdp)) = 0)
 #define pgd_page_vaddr(pgd)	(pgd_val(pgd) & ~PGD_MASKED_BITS)
 
 #ifndef __ASSEMBLY__
 
+static inline void pgd_clear(pgd_t *pgdp)
+{
+	*pgdp = __pgd(0);
+}
+
 static inline pte_t pgd_pte(pgd_t pgd)
 {
 	return __pte(pgd_val(pgd));
 
@@ -85,4 +89,4 @@ extern struct page *pgd_page(pgd_t pgd);
 #define remap_4k_pfn(vma, addr, pfn, prot)	\
 	remap_pfn_range((vma), (addr), (pfn), PAGE_SIZE, (prot))
 
-#endif /* _ASM_POWERPC_PGTABLE_PPC64_4K_H */
+#endif /* _ASM_POWERPC_NOHASH_64_PGTABLE_4K_H */

@@ -1,5 +1,5 @@
-#ifndef _ASM_POWERPC_PGTABLE_PPC64_64K_H
-#define _ASM_POWERPC_PGTABLE_PPC64_64K_H
+#ifndef _ASM_POWERPC_NOHASH_64_PGTABLE_64K_H
+#define _ASM_POWERPC_NOHASH_64_PGTABLE_64K_H
 
 #include <asm-generic/pgtable-nopud.h>
 
 
@@ -9,8 +9,19 @@
 #define PUD_INDEX_SIZE	0
 #define PGD_INDEX_SIZE	12
 
+/*
+ * We support 32 fragments per PTE page of 64K size.
+ */
+#define PTE_FRAG_NR	32
+/*
+ * We use a 2K PTE page fragment and another 2K for storing
+ * the real_pte_t hash index.
+ */
+#define PTE_FRAG_SIZE_SHIFT	11
+#define PTE_FRAG_SIZE	(1UL << PTE_FRAG_SIZE_SHIFT)
+
 #ifndef __ASSEMBLY__
-#define PTE_TABLE_SIZE	(sizeof(real_pte_t) << PTE_INDEX_SIZE)
+#define PTE_TABLE_SIZE	PTE_FRAG_SIZE
 #define PMD_TABLE_SIZE	(sizeof(pmd_t) << PMD_INDEX_SIZE)
 #define PGD_TABLE_SIZE	(sizeof(pgd_t) << PGD_INDEX_SIZE)
 #endif /* __ASSEMBLY__ */
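
The fragment sizing above is easy to sanity-check: 32 fragments of 2K fill a 64K page exactly, and masking a PMD value with PMD_MASKED_BITS (defined below in terms of PTE_FRAG_SIZE) recovers a fragment-aligned base. A standalone C check of that arithmetic (the pmd value is purely illustrative):

#include <assert.h>
#include <stdio.h>

#define PAGE_SHIFT		16	/* 64K pages */
#define PTE_FRAG_SIZE_SHIFT	11	/* 2K fragment */
#define PTE_FRAG_SIZE		(1UL << PTE_FRAG_SIZE_SHIFT)
#define PTE_FRAG_NR		32
#define PMD_MASKED_BITS		(PTE_FRAG_SIZE - 1)

int main(void)
{
	/* 32 x 2K fragments fill one 64K page exactly */
	assert(PTE_FRAG_NR * PTE_FRAG_SIZE == (1UL << PAGE_SHIFT));

	/* masking the low bits of a PMD recovers the fragment base */
	unsigned long pmd_val = 0xc000000001234800UL; /* illustrative */
	unsigned long frag = pmd_val & ~PMD_MASKED_BITS;

	assert((frag & (PTE_FRAG_SIZE - 1)) == 0); /* fragment-aligned */
	printf("fragment base = %#lx\n", frag);
	return 0;
}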
|
||||
|
@ -32,13 +43,15 @@
|
|||
#define PGDIR_SIZE (1UL << PGDIR_SHIFT)
|
||||
#define PGDIR_MASK (~(PGDIR_SIZE-1))
|
||||
|
||||
/* Bits to mask out from a PMD to get to the PTE page */
|
||||
/* PMDs point to PTE table fragments which are 4K aligned. */
|
||||
#define PMD_MASKED_BITS 0xfff
|
||||
/*
|
||||
* Bits to mask out from a PMD to get to the PTE page
|
||||
* PMDs point to PTE table fragments which are PTE_FRAG_SIZE aligned.
|
||||
*/
|
||||
#define PMD_MASKED_BITS (PTE_FRAG_SIZE - 1)
|
||||
/* Bits to mask out from a PGD/PUD to get to the PMD page */
|
||||
#define PUD_MASKED_BITS 0x1ff
|
||||
|
||||
#define pgd_pte(pgd) (pud_pte(((pud_t){ pgd })))
|
||||
#define pte_pgd(pte) ((pgd_t)pte_pud(pte))
|
||||
|
||||
#endif /* _ASM_POWERPC_PGTABLE_PPC64_64K_H */
|
||||
#endif /* _ASM_POWERPC_NOHASH_64_PGTABLE_64K_H */
|
|
@ -1,14 +1,14 @@
|
|||
#ifndef _ASM_POWERPC_PGTABLE_PPC64_H_
|
||||
#define _ASM_POWERPC_PGTABLE_PPC64_H_
|
||||
#ifndef _ASM_POWERPC_NOHASH_64_PGTABLE_H
|
||||
#define _ASM_POWERPC_NOHASH_64_PGTABLE_H
|
||||
/*
|
||||
* This file contains the functions and defines necessary to modify and use
|
||||
* the ppc64 hashed page table.
|
||||
*/
|
||||
|
||||
#ifdef CONFIG_PPC_64K_PAGES
|
||||
#include <asm/pgtable-ppc64-64k.h>
|
||||
#include <asm/nohash/64/pgtable-64k.h>
|
||||
#else
|
||||
#include <asm/pgtable-ppc64-4k.h>
|
||||
#include <asm/nohash/64/pgtable-4k.h>
|
||||
#endif
|
||||
#include <asm/barrier.h>
|
||||
|
||||
|
@ -18,7 +18,7 @@
|
|||
* Size of EA range mapped by our pagetables.
|
||||
*/
|
||||
#define PGTABLE_EADDR_SIZE (PTE_INDEX_SIZE + PMD_INDEX_SIZE + \
|
||||
PUD_INDEX_SIZE + PGD_INDEX_SIZE + PAGE_SHIFT)
|
||||
PUD_INDEX_SIZE + PGD_INDEX_SIZE + PAGE_SHIFT)
|
||||
#define PGTABLE_RANGE (ASM_CONST(1) << PGTABLE_EADDR_SIZE)
|
||||
|
||||
#ifdef CONFIG_TRANSPARENT_HUGEPAGE
|
||||
|
@ -97,11 +97,7 @@
|
|||
/*
|
||||
* Include the PTE bits definitions
|
||||
*/
|
||||
#ifdef CONFIG_PPC_BOOK3S
|
||||
#include <asm/pte-hash64.h>
|
||||
#else
|
||||
#include <asm/pte-book3e.h>
|
||||
#endif
|
#include <asm/nohash/pte-book3e.h>
#include <asm/pte-common.h>

#ifdef CONFIG_PPC_MM_SLICES

@@ -110,59 +106,47 @@
#endif /* CONFIG_PPC_MM_SLICES */

#ifndef __ASSEMBLY__

/*
 * This is the default implementation of various PTE accessors, it's
 * used in all cases except Book3S with 64K pages where we have a
 * concept of sub-pages
 */
#ifndef __real_pte

#ifdef CONFIG_STRICT_MM_TYPECHECKS
#define __real_pte(e,p)         ((real_pte_t){(e)})
#define __rpte_to_pte(r)        ((r).pte)
#else
#define __real_pte(e,p)         (e)
#define __rpte_to_pte(r)        (__pte(r))
#endif
#define __rpte_to_hidx(r,index) (pte_val(__rpte_to_pte(r)) >> 12)

#define pte_iterate_hashed_subpages(rpte, psize, va, index, shift) \
        do {                                                       \
                index = 0;                                         \
                shift = mmu_psize_defs[psize].shift;               \

#define pte_iterate_hashed_end() } while(0)

/*
 * We expect this to be called only for user addresses or kernel virtual
 * addresses other than the linear mapping.
 */
#define pte_pagesize_index(mm, addr, pte)       MMU_PAGE_4K

#endif /* __real_pte */

/* pte_clear moved to later in this file */

#define PMD_BAD_BITS            (PTE_TABLE_SIZE-1)
#define PUD_BAD_BITS            (PMD_TABLE_SIZE-1)

#define pmd_set(pmdp, pmdval)   (pmd_val(*(pmdp)) = (pmdval))
static inline void pmd_set(pmd_t *pmdp, unsigned long val)
{
        *pmdp = __pmd(val);
}

static inline void pmd_clear(pmd_t *pmdp)
{
        *pmdp = __pmd(0);
}

static inline pte_t pmd_pte(pmd_t pmd)
{
        return __pte(pmd_val(pmd));
}

#define pmd_none(pmd)           (!pmd_val(pmd))
#define pmd_bad(pmd)            (!is_kernel_addr(pmd_val(pmd)) \
                                 || (pmd_val(pmd) & PMD_BAD_BITS))
#define pmd_present(pmd)        (!pmd_none(pmd))
#define pmd_clear(pmdp)         (pmd_val(*(pmdp)) = 0)
#define pmd_page_vaddr(pmd)     (pmd_val(pmd) & ~PMD_MASKED_BITS)
extern struct page *pmd_page(pmd_t pmd);

#define pud_set(pudp, pudval)   (pud_val(*(pudp)) = (pudval))
static inline void pud_set(pud_t *pudp, unsigned long val)
{
        *pudp = __pud(val);
}

static inline void pud_clear(pud_t *pudp)
{
        *pudp = __pud(0);
}

#define pud_none(pud)           (!pud_val(pud))
#define pud_bad(pud)            (!is_kernel_addr(pud_val(pud)) \
                                 || (pud_val(pud) & PUD_BAD_BITS))
#define pud_present(pud)        (pud_val(pud) != 0)
#define pud_clear(pudp)         (pud_val(*(pudp)) = 0)
#define pud_page_vaddr(pud)     (pud_val(pud) & ~PUD_MASKED_BITS)

extern struct page *pud_page(pud_t pud);

@@ -177,9 +161,13 @@ static inline pud_t pte_pud(pte_t pte)
        return __pud(pte_val(pte));
}
#define pud_write(pud)          pte_write(pud_pte(pud))
#define pgd_set(pgdp, pudp)     ({pgd_val(*(pgdp)) = (unsigned long)(pudp);})
#define pgd_write(pgd)          pte_write(pgd_pte(pgd))

static inline void pgd_set(pgd_t *pgdp, unsigned long val)
{
        *pgdp = __pgd(val);
}

/*
 * Find an entry in a page-table-directory. We combine the address region
 * (the high order N bits) and the pgd portion of the address.

@@ -373,254 +361,4 @@ void pgtable_cache_add(unsigned shift, void (*ctor)(void *));
void pgtable_cache_init(void);
#endif /* __ASSEMBLY__ */

/*
 * THP pages can't be special. So use the _PAGE_SPECIAL
 */
#define _PAGE_SPLITTING _PAGE_SPECIAL

/*
 * We need to differentiate between explicit huge pages and THP huge
 * pages, since a THP huge page also needs to track real subpage details
 */
#define _PAGE_THP_HUGE  _PAGE_4K_PFN

/*
 * set of bits not changed in pmd_modify.
 */
#define _HPAGE_CHG_MASK (PTE_RPN_MASK | _PAGE_HPTEFLAGS | \
                         _PAGE_DIRTY | _PAGE_ACCESSED | _PAGE_SPLITTING | \
                         _PAGE_THP_HUGE)

#ifndef __ASSEMBLY__
/*
 * The linux hugepage PMD now includes the pmd entries followed by the
 * address of the stashed pgtable_t. The stashed pgtable_t contains the
 * hpte bits: [ 1 bit secondary | 3 bit hidx | 1 bit valid | 000 ]. We use
 * one byte for each HPTE entry. With a 16MB hugepage and 64K HPTEs we need
 * 256 entries, and with 4K HPTEs we need 4096 entries. Both fit in a 4K
 * pgtable_t.
 *
 * The last three bits are intentionally left as zero. This memory location
 * is also used as a normal page PTE pointer, so if we have any such
 * pointers left around while we collapse a hugepage, we need to make sure
 * the _PAGE_PRESENT bit of that PTE is zero when we look at them.
 */
static inline unsigned int hpte_valid(unsigned char *hpte_slot_array, int index)
{
        return (hpte_slot_array[index] >> 3) & 0x1;
}

static inline unsigned int hpte_hash_index(unsigned char *hpte_slot_array,
                                           int index)
{
        return hpte_slot_array[index] >> 4;
}

static inline void mark_hpte_slot_valid(unsigned char *hpte_slot_array,
                                        unsigned int index, unsigned int hidx)
{
        hpte_slot_array[index] = hidx << 4 | 0x1 << 3;
}
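
The byte layout above is easy to get wrong, so here is a minimal userspace sketch of the same encoding (the array contents and the hidx value 5 are made up for illustration; in the kernel the hidx nibble is the combined secondary/group-index value):

#include <stdio.h>

/* Stand-in re-implementation of the helpers above, for demonstration only. */
static unsigned char slot_array[4];

int main(void)
{
        /* Mark subpage 2 valid with hidx nibble 5, as mark_hpte_slot_valid() would. */
        slot_array[2] = 5 << 4 | 0x1 << 3;

        /* hpte_valid(): bit 3 holds the valid flag. */
        printf("valid=%u\n", (slot_array[2] >> 3) & 0x1);       /* prints 1 */
        /* hpte_hash_index(): the top nibble holds hidx. */
        printf("hidx=%u\n", slot_array[2] >> 4);                /* prints 5 */
        /* Untouched entries read back as invalid (low three bits stay zero). */
        printf("valid=%u\n", (slot_array[0] >> 3) & 0x1);       /* prints 0 */
        return 0;
}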

struct page *realmode_pfn_to_page(unsigned long pfn);

static inline char *get_hpte_slot_array(pmd_t *pmdp)
{
        /*
         * The hpte hindex is stored in the pgtable whose address is in the
         * second half of the PMD
         *
         * Order this load with the test for pmd_trans_huge in the caller
         */
        smp_rmb();
        return *(char **)(pmdp + PTRS_PER_PMD);
}

#ifdef CONFIG_TRANSPARENT_HUGEPAGE
extern void hpte_do_hugepage_flush(struct mm_struct *mm, unsigned long addr,
                                   pmd_t *pmdp, unsigned long old_pmd);
extern pmd_t pfn_pmd(unsigned long pfn, pgprot_t pgprot);
extern pmd_t mk_pmd(struct page *page, pgprot_t pgprot);
extern pmd_t pmd_modify(pmd_t pmd, pgprot_t newprot);
extern void set_pmd_at(struct mm_struct *mm, unsigned long addr,
                       pmd_t *pmdp, pmd_t pmd);
extern void update_mmu_cache_pmd(struct vm_area_struct *vma, unsigned long addr,
                                 pmd_t *pmd);
/*
 * For core kernel code, by design pmd_trans_huge is never run on any
 * hugetlbfs page. The hugetlbfs page table walking and mangling paths are
 * totally separated from the core VM paths and they're differentiated by
 * VM_HUGETLB being set on vm_flags well before any pmd_trans_huge could run.
 *
 * pmd_trans_huge() is defined as false at build time if
 * CONFIG_TRANSPARENT_HUGEPAGE=n to optimize away code blocks at build
 * time in such case.
 *
 * For ppc64 we need to differentiate explicit hugepages from THP, because
 * for THP we also track the subpage details at the pmd level. We don't do
 * that for explicit huge pages.
 */
static inline int pmd_trans_huge(pmd_t pmd)
{
        /*
         * leaf pte for huge page, bottom two bits != 00
         */
        return (pmd_val(pmd) & 0x3) && (pmd_val(pmd) & _PAGE_THP_HUGE);
}

static inline int pmd_trans_splitting(pmd_t pmd)
{
        if (pmd_trans_huge(pmd))
                return pmd_val(pmd) & _PAGE_SPLITTING;
        return 0;
}
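
To make the predicate above concrete, a small standalone sketch with stand-in bit values (the real constants live in the pte-hash64 headers, where _PAGE_THP_HUGE aliases _PAGE_4K_PFN): a hugetlbfs leaf entry sets the low bits but not the THP bit, so only the THP mapping tests true.

#include <stdio.h>

/* Stand-in bit values, for illustration only. */
#define DEMO_PAGE_THP_HUGE      0x20000000UL

static int demo_pmd_trans_huge(unsigned long pmd)
{
        /* Leaf entry (bottom two bits != 00) that is also marked as THP. */
        return (pmd & 0x3) && (pmd & DEMO_PAGE_THP_HUGE);
}

int main(void)
{
        unsigned long hugetlb_pmd = 0x1000001UL;                        /* leaf, no THP bit */
        unsigned long thp_pmd = 0x1000001UL | DEMO_PAGE_THP_HUGE;       /* leaf + THP bit */

        printf("hugetlb: %d\n", demo_pmd_trans_huge(hugetlb_pmd));      /* prints 0 */
        printf("thp:     %d\n", demo_pmd_trans_huge(thp_pmd));          /* prints 1 */
        return 0;
}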

extern int has_transparent_hugepage(void);
#else
static inline void hpte_do_hugepage_flush(struct mm_struct *mm,
                                          unsigned long addr, pmd_t *pmdp,
                                          unsigned long old_pmd)
{
        WARN(1, "%s called with THP disabled\n", __func__);
}
#endif /* CONFIG_TRANSPARENT_HUGEPAGE */

static inline int pmd_large(pmd_t pmd)
{
        /*
         * leaf pte for huge page, bottom two bits != 00
         */
        return ((pmd_val(pmd) & 0x3) != 0x0);
}

static inline pte_t pmd_pte(pmd_t pmd)
{
        return __pte(pmd_val(pmd));
}

static inline pmd_t pte_pmd(pte_t pte)
{
        return __pmd(pte_val(pte));
}

static inline pte_t *pmdp_ptep(pmd_t *pmd)
{
        return (pte_t *)pmd;
}

#define pmd_pfn(pmd)            pte_pfn(pmd_pte(pmd))
#define pmd_dirty(pmd)          pte_dirty(pmd_pte(pmd))
#define pmd_young(pmd)          pte_young(pmd_pte(pmd))
#define pmd_mkold(pmd)          pte_pmd(pte_mkold(pmd_pte(pmd)))
#define pmd_wrprotect(pmd)      pte_pmd(pte_wrprotect(pmd_pte(pmd)))
#define pmd_mkdirty(pmd)        pte_pmd(pte_mkdirty(pmd_pte(pmd)))
#define pmd_mkyoung(pmd)        pte_pmd(pte_mkyoung(pmd_pte(pmd)))
#define pmd_mkwrite(pmd)        pte_pmd(pte_mkwrite(pmd_pte(pmd)))

#define __HAVE_ARCH_PMD_WRITE
#define pmd_write(pmd)          pte_write(pmd_pte(pmd))

static inline pmd_t pmd_mkhuge(pmd_t pmd)
{
        /* Do nothing, mk_pmd() does this part. */
        return pmd;
}

static inline pmd_t pmd_mknotpresent(pmd_t pmd)
{
        pmd_val(pmd) &= ~_PAGE_PRESENT;
        return pmd;
}

static inline pmd_t pmd_mksplitting(pmd_t pmd)
{
        pmd_val(pmd) |= _PAGE_SPLITTING;
        return pmd;
}

#define __HAVE_ARCH_PMD_SAME
static inline int pmd_same(pmd_t pmd_a, pmd_t pmd_b)
{
        return (((pmd_val(pmd_a) ^ pmd_val(pmd_b)) & ~_PAGE_HPTEFLAGS) == 0);
}

#define __HAVE_ARCH_PMDP_SET_ACCESS_FLAGS
extern int pmdp_set_access_flags(struct vm_area_struct *vma,
                                 unsigned long address, pmd_t *pmdp,
                                 pmd_t entry, int dirty);

extern unsigned long pmd_hugepage_update(struct mm_struct *mm,
                                         unsigned long addr,
                                         pmd_t *pmdp,
                                         unsigned long clr,
                                         unsigned long set);

static inline int __pmdp_test_and_clear_young(struct mm_struct *mm,
                                              unsigned long addr, pmd_t *pmdp)
{
        unsigned long old;

        if ((pmd_val(*pmdp) & (_PAGE_ACCESSED | _PAGE_HASHPTE)) == 0)
                return 0;
        old = pmd_hugepage_update(mm, addr, pmdp, _PAGE_ACCESSED, 0);
        return ((old & _PAGE_ACCESSED) != 0);
}

#define __HAVE_ARCH_PMDP_TEST_AND_CLEAR_YOUNG
extern int pmdp_test_and_clear_young(struct vm_area_struct *vma,
                                     unsigned long address, pmd_t *pmdp);
#define __HAVE_ARCH_PMDP_CLEAR_YOUNG_FLUSH
extern int pmdp_clear_flush_young(struct vm_area_struct *vma,
                                  unsigned long address, pmd_t *pmdp);

#define __HAVE_ARCH_PMDP_HUGE_GET_AND_CLEAR
extern pmd_t pmdp_huge_get_and_clear(struct mm_struct *mm,
                                     unsigned long addr, pmd_t *pmdp);

#define __HAVE_ARCH_PMDP_SET_WRPROTECT
static inline void pmdp_set_wrprotect(struct mm_struct *mm, unsigned long addr,
                                      pmd_t *pmdp)
{
        if ((pmd_val(*pmdp) & _PAGE_RW) == 0)
                return;

        pmd_hugepage_update(mm, addr, pmdp, _PAGE_RW, 0);
}

#define __HAVE_ARCH_PMDP_SPLITTING_FLUSH
extern void pmdp_splitting_flush(struct vm_area_struct *vma,
                                 unsigned long address, pmd_t *pmdp);

extern pmd_t pmdp_collapse_flush(struct vm_area_struct *vma,
                                 unsigned long address, pmd_t *pmdp);
#define pmdp_collapse_flush pmdp_collapse_flush

#define __HAVE_ARCH_PGTABLE_DEPOSIT
extern void pgtable_trans_huge_deposit(struct mm_struct *mm, pmd_t *pmdp,
                                       pgtable_t pgtable);
#define __HAVE_ARCH_PGTABLE_WITHDRAW
extern pgtable_t pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp);

#define __HAVE_ARCH_PMDP_INVALIDATE
extern void pmdp_invalidate(struct vm_area_struct *vma, unsigned long address,
                            pmd_t *pmdp);

#define pmd_move_must_withdraw pmd_move_must_withdraw
struct spinlock;
static inline int pmd_move_must_withdraw(struct spinlock *new_pmd_ptl,
                                         struct spinlock *old_pmd_ptl)
{
        /*
         * Archs like ppc64 use pgtable to store per pmd
         * specific information. So when we switch the pmd,
         * we should also withdraw and deposit the pgtable
         */
        return true;
}
#endif /* __ASSEMBLY__ */
#endif /* _ASM_POWERPC_PGTABLE_PPC64_H_ */
#endif /* _ASM_POWERPC_NOHASH_64_PGTABLE_H */

@@ -0,0 +1,252 @@
#ifndef _ASM_POWERPC_NOHASH_PGTABLE_H
#define _ASM_POWERPC_NOHASH_PGTABLE_H

#if defined(CONFIG_PPC64)
#include <asm/nohash/64/pgtable.h>
#else
#include <asm/nohash/32/pgtable.h>
#endif

#ifndef __ASSEMBLY__

/* Generic accessors to PTE bits */
static inline int pte_write(pte_t pte)
{
        return (pte_val(pte) & (_PAGE_RW | _PAGE_RO)) != _PAGE_RO;
}
static inline int pte_dirty(pte_t pte)          { return pte_val(pte) & _PAGE_DIRTY; }
static inline int pte_young(pte_t pte)          { return pte_val(pte) & _PAGE_ACCESSED; }
static inline int pte_special(pte_t pte)        { return pte_val(pte) & _PAGE_SPECIAL; }
static inline int pte_none(pte_t pte)           { return (pte_val(pte) & ~_PTE_NONE_MASK) == 0; }
static inline pgprot_t pte_pgprot(pte_t pte)    { return __pgprot(pte_val(pte) & PAGE_PROT_BITS); }

#ifdef CONFIG_NUMA_BALANCING
/*
 * These work without NUMA balancing but the kernel does not care. See the
 * comment in include/asm-generic/pgtable.h . On powerpc, this will only
 * work for user pages and always return true for kernel pages.
 */
static inline int pte_protnone(pte_t pte)
{
        return (pte_val(pte) &
                (_PAGE_PRESENT | _PAGE_USER)) == _PAGE_PRESENT;
}

static inline int pmd_protnone(pmd_t pmd)
{
        return pte_protnone(pmd_pte(pmd));
}
#endif /* CONFIG_NUMA_BALANCING */

static inline int pte_present(pte_t pte)
{
        return pte_val(pte) & _PAGE_PRESENT;
}

/* Conversion functions: convert a page and protection to a page entry,
 * and a page entry and page directory to the page they refer to.
 *
 * Even if PTEs can be unsigned long long, a PFN is always an unsigned
 * long for now.
 */
static inline pte_t pfn_pte(unsigned long pfn, pgprot_t pgprot)
{
        return __pte(((pte_basic_t)(pfn) << PTE_RPN_SHIFT) |
                     pgprot_val(pgprot));
}
static inline unsigned long pte_pfn(pte_t pte)
{
        return pte_val(pte) >> PTE_RPN_SHIFT;
}

/* Generic modifiers for PTE bits */
static inline pte_t pte_wrprotect(pte_t pte)
{
        pte_basic_t ptev;

        ptev = pte_val(pte) & ~(_PAGE_RW | _PAGE_HWWRITE);
        ptev |= _PAGE_RO;
        return __pte(ptev);
}

static inline pte_t pte_mkclean(pte_t pte)
{
        return __pte(pte_val(pte) & ~(_PAGE_DIRTY | _PAGE_HWWRITE));
}

static inline pte_t pte_mkold(pte_t pte)
{
        return __pte(pte_val(pte) & ~_PAGE_ACCESSED);
}

static inline pte_t pte_mkwrite(pte_t pte)
{
        pte_basic_t ptev;

        ptev = pte_val(pte) & ~_PAGE_RO;
        ptev |= _PAGE_RW;
        return __pte(ptev);
}

static inline pte_t pte_mkdirty(pte_t pte)
{
        return __pte(pte_val(pte) | _PAGE_DIRTY);
}

static inline pte_t pte_mkyoung(pte_t pte)
{
        return __pte(pte_val(pte) | _PAGE_ACCESSED);
}

static inline pte_t pte_mkspecial(pte_t pte)
{
        return __pte(pte_val(pte) | _PAGE_SPECIAL);
}

static inline pte_t pte_mkhuge(pte_t pte)
{
        return pte;
}

static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
{
        return __pte((pte_val(pte) & _PAGE_CHG_MASK) | pgprot_val(newprot));
}
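
Since these modifiers are pure value transforms on the pte, they compose freely. A toy sketch with made-up bit values, purely to illustrate the composition style (the real constants are MMU-specific and come from the pte-*.h headers):

#include <stdio.h>

/* Stand-in PTE bit values, for illustration only. */
#define D_PAGE_RW       0x002UL
#define D_PAGE_DIRTY    0x080UL
#define D_PAGE_ACCESSED 0x100UL

typedef unsigned long demo_pte_t;

static demo_pte_t demo_wrprotect(demo_pte_t v) { return v & ~D_PAGE_RW; }
static demo_pte_t demo_mkclean(demo_pte_t v)   { return v & ~D_PAGE_DIRTY; }

int main(void)
{
        demo_pte_t pte = D_PAGE_RW | D_PAGE_DIRTY | D_PAGE_ACCESSED;

        /* The helpers compose, e.g. to derive a clean, read-only entry: */
        pte = demo_mkclean(demo_wrprotect(pte));
        printf("rw=%lu dirty=%lu accessed=%lu\n",
               pte & D_PAGE_RW, pte & D_PAGE_DIRTY, pte & D_PAGE_ACCESSED);
        return 0;
}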

/* Insert a PTE, top level function is out of line. It uses an inline
 * low level function in the respective pgtable-* files
 */
extern void set_pte_at(struct mm_struct *mm, unsigned long addr, pte_t *ptep,
                       pte_t pte);

/* This low level function performs the actual PTE insertion
 * Setting the PTE depends on the MMU type and other factors. It's
 * a horrible mess that I'm not going to try to clean up now but
 * I'm keeping it in one place rather than spread around
 */
static inline void __set_pte_at(struct mm_struct *mm, unsigned long addr,
                                pte_t *ptep, pte_t pte, int percpu)
{
#if defined(CONFIG_PPC_STD_MMU_32) && defined(CONFIG_SMP) && !defined(CONFIG_PTE_64BIT)
        /* First case is 32-bit Hash MMU in SMP mode with 32-bit PTEs. We use the
         * helper pte_update() which does an atomic update. We need to do that
         * because a concurrent invalidation can clear _PAGE_HASHPTE. If it's a
         * per-CPU PTE such as a kmap_atomic, we do a simple update preserving
         * the hash bits instead (ie, same as the non-SMP case)
         */
        if (percpu)
                *ptep = __pte((pte_val(*ptep) & _PAGE_HASHPTE)
                              | (pte_val(pte) & ~_PAGE_HASHPTE));
        else
                pte_update(ptep, ~_PAGE_HASHPTE, pte_val(pte));

#elif defined(CONFIG_PPC32) && defined(CONFIG_PTE_64BIT)
        /* Second case is 32-bit with 64-bit PTE. In this case, we
         * can just store as long as we do the two halves in the right order
         * with a barrier in between. This is possible because we take care,
         * in the hash code, to pre-invalidate if the PTE was already hashed,
         * which synchronizes us with any concurrent invalidation.
         * In the percpu case, we also fall back to the simple update preserving
         * the hash bits
         */
        if (percpu) {
                *ptep = __pte((pte_val(*ptep) & _PAGE_HASHPTE)
                              | (pte_val(pte) & ~_PAGE_HASHPTE));
                return;
        }
#if _PAGE_HASHPTE != 0
        if (pte_val(*ptep) & _PAGE_HASHPTE)
                flush_hash_entry(mm, ptep, addr);
#endif
        __asm__ __volatile__("\
                stw%U0%X0 %2,%0\n\
                eieio\n\
                stw%U0%X0 %L2,%1"
        : "=m" (*ptep), "=m" (*((unsigned char *)ptep+4))
        : "r" (pte) : "memory");

#elif defined(CONFIG_PPC_STD_MMU_32)
        /* Third case is 32-bit hash table in UP mode, we need to preserve
         * the _PAGE_HASHPTE bit since we may not have invalidated the previous
         * translation in the hash yet (done in a subsequent flush_tlb_xxx())
         * and so we need to keep track that this PTE needs invalidating
         */
        *ptep = __pte((pte_val(*ptep) & _PAGE_HASHPTE)
                      | (pte_val(pte) & ~_PAGE_HASHPTE));

#else
        /* Anything else just stores the PTE normally. That covers all 64-bit
         * cases, and 32-bit non-hash with 32-bit PTEs.
         */
        *ptep = pte;

#ifdef CONFIG_PPC_BOOK3E_64
        /*
         * With hardware tablewalk, a sync is needed to ensure that
         * subsequent accesses see the PTE we just wrote. Unlike userspace
         * mappings, we can't tolerate spurious faults, so make sure
         * the new PTE will be seen the first time.
         */
        if (is_kernel_addr(addr))
                mb();
#endif
#endif
}
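
The second case's store sequence is the subtle one: the two 32-bit halves must become visible in order, which is what the eieio between the two stw instructions guarantees. A rough portable rendering of the same idea (illustrative only; a GCC release store stands in for eieio, and nothing here is taken from the tree):

#include <stdint.h>

/* Publish a 64-bit entry as two 32-bit halves, ordering the stores the way
 * the stw/eieio/stw sequence above does: a reader that observes the second
 * half is guaranteed to also see the first. */
static void publish_pte64(volatile uint32_t half[2], uint64_t pte)
{
        half[0] = (uint32_t)(pte >> 32);                /* first half */
        __atomic_store_n(&half[1], (uint32_t)pte,
                         __ATOMIC_RELEASE);             /* second half, ordered after */
}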

#define __HAVE_ARCH_PTEP_SET_ACCESS_FLAGS
extern int ptep_set_access_flags(struct vm_area_struct *vma, unsigned long address,
                                 pte_t *ptep, pte_t entry, int dirty);

/*
 * Macro to mark a page protection value as "uncacheable".
 */

#define _PAGE_CACHE_CTL (_PAGE_COHERENT | _PAGE_GUARDED | _PAGE_NO_CACHE | \
                         _PAGE_WRITETHRU)

#define pgprot_noncached(prot)          (__pgprot((pgprot_val(prot) & ~_PAGE_CACHE_CTL) | \
                                                  _PAGE_NO_CACHE | _PAGE_GUARDED))

#define pgprot_noncached_wc(prot)       (__pgprot((pgprot_val(prot) & ~_PAGE_CACHE_CTL) | \
                                                  _PAGE_NO_CACHE))

#define pgprot_cached(prot)             (__pgprot((pgprot_val(prot) & ~_PAGE_CACHE_CTL) | \
                                                  _PAGE_COHERENT))

#define pgprot_cached_wthru(prot)       (__pgprot((pgprot_val(prot) & ~_PAGE_CACHE_CTL) | \
                                                  _PAGE_COHERENT | _PAGE_WRITETHRU))

#define pgprot_cached_noncoherent(prot) \
                (__pgprot(pgprot_val(prot) & ~_PAGE_CACHE_CTL))

#define pgprot_writecombine pgprot_noncached_wc

struct file;
extern pgprot_t phys_mem_access_prot(struct file *file, unsigned long pfn,
                                     unsigned long size, pgprot_t vma_prot);
#define __HAVE_PHYS_MEM_ACCESS_PROT

#ifdef CONFIG_HUGETLB_PAGE
static inline int hugepd_ok(hugepd_t hpd)
{
        return (hpd.pd > 0);
}

static inline int pmd_huge(pmd_t pmd)
{
        return 0;
}

static inline int pud_huge(pud_t pud)
{
        return 0;
}

static inline int pgd_huge(pgd_t pgd)
{
        return 0;
}
#define pgd_huge pgd_huge

#define is_hugepd(hpd)          (hugepd_ok(hpd))
#endif

#endif /* __ASSEMBLY__ */
#endif

@@ -1,5 +1,5 @@
#ifndef _ASM_POWERPC_PTE_BOOK3E_H
#define _ASM_POWERPC_PTE_BOOK3E_H
#ifndef _ASM_POWERPC_NOHASH_PTE_BOOK3E_H
#define _ASM_POWERPC_NOHASH_PTE_BOOK3E_H
#ifdef __KERNEL__

/* PTE bit definitions for processors compliant to the Book3E

@@ -84,4 +84,4 @@
#endif

#endif /* __KERNEL__ */
#endif /* _ASM_POWERPC_PTE_FSL_BOOKE_H */
#endif /* _ASM_POWERPC_NOHASH_PTE_BOOK3E_H */

@@ -157,7 +157,8 @@
#define OPAL_LEDS_GET_INDICATOR                 114
#define OPAL_LEDS_SET_INDICATOR                 115
#define OPAL_CEC_REBOOT2                        116
#define OPAL_LAST                               116
#define OPAL_CONSOLE_FLUSH                      117
#define OPAL_LAST                               117

/* Device tree flags */

@@ -35,6 +35,7 @@ int64_t opal_console_read(int64_t term_number, __be64 *length,
                          uint8_t *buffer);
int64_t opal_console_write_buffer_space(int64_t term_number,
                                        __be64 *length);
int64_t opal_console_flush(int64_t term_number);
int64_t opal_rtc_read(__be32 *year_month_day,
                      __be64 *hour_minute_second_millisecond);
int64_t opal_rtc_write(uint32_t year_month_day,

@@ -262,6 +263,8 @@ extern int opal_resync_timebase(void);

extern void opal_lpc_init(void);

extern void opal_kmsg_init(void);

extern int opal_event_request(unsigned int opal_event_nr);

struct opal_sg_list *opal_vmalloc_to_sg_list(void *vmalloc_addr,

@@ -16,6 +16,7 @@

#ifdef CONFIG_PPC64

#include <linux/string.h>
#include <asm/types.h>
#include <asm/lppaca.h>
#include <asm/mmu.h>

@@ -131,7 +132,16 @@ struct paca_struct {
        struct tlb_core_data tcd;
#endif /* CONFIG_PPC_BOOK3E */

        mm_context_t context;
#ifdef CONFIG_PPC_BOOK3S
        mm_context_id_t mm_ctx_id;
#ifdef CONFIG_PPC_MM_SLICES
        u64 mm_ctx_low_slices_psize;
        unsigned char mm_ctx_high_slices_psize[SLICE_ARRAY_SIZE];
#else
        u16 mm_ctx_user_psize;
        u16 mm_ctx_sllp;
#endif
#endif

        /*
         * then miscellaneous read-write fields

@@ -194,6 +204,23 @@ struct paca_struct {
#endif
};

#ifdef CONFIG_PPC_BOOK3S
static inline void copy_mm_to_paca(mm_context_t *context)
{
        get_paca()->mm_ctx_id = context->id;
#ifdef CONFIG_PPC_MM_SLICES
        get_paca()->mm_ctx_low_slices_psize = context->low_slices_psize;
        memcpy(&get_paca()->mm_ctx_high_slices_psize,
               &context->high_slices_psize, SLICE_ARRAY_SIZE);
#else
        get_paca()->mm_ctx_user_psize = context->user_psize;
        get_paca()->mm_ctx_sllp = context->sllp;
#endif
}
#else
static inline void copy_mm_to_paca(mm_context_t *context) {}
#endif
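
copy_mm_to_paca() mirrors just the fields the low-level MMU code needs into the paca, instead of a whole mm_context_t. A sketch of the intended call pattern (the caller name and body here are hypothetical; the real call sites are presumably in the 64-bit MMU context-switch path):

/* Hypothetical caller, for illustration only. */
static inline void demo_switch_context(struct mm_struct *next)
{
        copy_mm_to_paca(&next->context);        /* refresh the paca-side copy */
        /* ...then switch SLB/segments using the paca fields, which stay
         * accessible even with the MMU off... */
}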

extern struct paca_struct *paca;
extern void initialise_paca(struct paca_struct *new_paca, int cpu);
extern void setup_paca(struct paca_struct *new_paca);

@@ -286,8 +286,11 @@ extern long long virt_phys_offset;

/* PTE level */
typedef struct { pte_basic_t pte; } pte_t;
#define pte_val(x)      ((x).pte)
#define __pte(x)        ((pte_t) { (x) })
static inline pte_basic_t pte_val(pte_t x)
{
        return x.pte;
}
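
Converting the accessor macros to static inline functions is what makes STRICT_MM_TYPECHECKS bite: with distinct wrapper structs and real function signatures, mixing up page-table levels becomes a compile error. A minimal illustration with stand-in types (not the kernel's own):

typedef struct { unsigned long pte; } demo_pte_t;
typedef struct { unsigned long pmd; } demo_pmd_t;

static inline unsigned long demo_pte_val(demo_pte_t x) { return x.pte; }

static unsigned long ok(demo_pte_t p) { return demo_pte_val(p); }
/* static unsigned long bad(demo_pmd_t p) { return demo_pte_val(p); }
 *                                           ^ compile error, caught early */

int main(void)
{
        demo_pte_t p = { 42 };
        return (int)ok(p) - 42;         /* exits 0 */
}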

/* 64k pages additionally define a bigger "real PTE" type that gathers
 * the "second half" part of the PTE for pseudo 64k pages

@@ -301,21 +304,30 @@ typedef struct { pte_t pte; } real_pte_t;
/* PMD level */
#ifdef CONFIG_PPC64
typedef struct { unsigned long pmd; } pmd_t;
#define pmd_val(x)      ((x).pmd)
#define __pmd(x)        ((pmd_t) { (x) })
static inline unsigned long pmd_val(pmd_t x)
{
        return x.pmd;
}

/* PUD level exists only on 4k pages */
#ifndef CONFIG_PPC_64K_PAGES
typedef struct { unsigned long pud; } pud_t;
#define pud_val(x)      ((x).pud)
#define __pud(x)        ((pud_t) { (x) })
static inline unsigned long pud_val(pud_t x)
{
        return x.pud;
}
#endif /* !CONFIG_PPC_64K_PAGES */
#endif /* CONFIG_PPC64 */

/* PGD level */
typedef struct { unsigned long pgd; } pgd_t;
#define pgd_val(x)      ((x).pgd)
#define __pgd(x)        ((pgd_t) { (x) })
static inline unsigned long pgd_val(pgd_t x)
{
        return x.pgd;
}

/* Page protection bits */
typedef struct { unsigned long pgprot; } pgprot_t;

@@ -329,8 +341,11 @@ typedef struct { unsigned long pgprot; } pgprot_t;
 */

typedef pte_basic_t pte_t;
#define pte_val(x)      (x)
#define __pte(x)        (x)
static inline pte_basic_t pte_val(pte_t pte)
{
        return pte;
}

#if defined(CONFIG_PPC_64K_PAGES) && defined(CONFIG_PPC_STD_MMU_64)
typedef struct { pte_t pte; unsigned long hidx; } real_pte_t;

@@ -341,67 +356,42 @@ typedef pte_t real_pte_t;

#ifdef CONFIG_PPC64
typedef unsigned long pmd_t;
#define pmd_val(x)      (x)
#define __pmd(x)        (x)
static inline unsigned long pmd_val(pmd_t pmd)
{
        return pmd;
}

#ifndef CONFIG_PPC_64K_PAGES
typedef unsigned long pud_t;
#define pud_val(x)      (x)
#define __pud(x)        (x)
static inline unsigned long pud_val(pud_t pud)
{
        return pud;
}
#endif /* !CONFIG_PPC_64K_PAGES */
#endif /* CONFIG_PPC64 */

typedef unsigned long pgd_t;
#define pgd_val(x)      (x)
#define pgprot_val(x)   (x)
#define __pgd(x)        (x)
static inline unsigned long pgd_val(pgd_t pgd)
{
        return pgd;
}

typedef unsigned long pgprot_t;
#define __pgd(x)        (x)
#define pgprot_val(x)   (x)
#define __pgprot(x)     (x)

#endif

typedef struct { signed long pd; } hugepd_t;

#ifdef CONFIG_HUGETLB_PAGE
#ifdef CONFIG_PPC_BOOK3S_64
#ifdef CONFIG_PPC_64K_PAGES
/*
 * With 64k page size, we have hugepage ptes in the pgd and pmd entries. We
 * don't need to set up a hugepage directory for them. Our pte and page
 * directory format enables us to have this enabled. But to avoid errors when
 * implementing new features, disable hugepd for 64K. We enable a debug
 * version here, so we catch wrong usage.
 */
#ifdef CONFIG_DEBUG_VM
extern int hugepd_ok(hugepd_t hpd);
#else
#define hugepd_ok(x)    (0)
#endif
#else
static inline int hugepd_ok(hugepd_t hpd)
{
        /*
         * hugepd pointer, bottom two bits == 00 and next 4 bits
         * indicate size of table
         */
        return (((hpd.pd & 0x3) == 0x0) && ((hpd.pd & HUGEPD_SHIFT_MASK) != 0));
}
#endif
#else
static inline int hugepd_ok(hugepd_t hpd)
{
        return (hpd.pd > 0);
}
#endif

#define is_hugepd(hpd)          (hugepd_ok(hpd))
#define pgd_huge pgd_huge
int pgd_huge(pgd_t pgd);
#else /* CONFIG_HUGETLB_PAGE */
#define is_hugepd(pdep)         0
#define pgd_huge(pgd)           0
#ifndef CONFIG_HUGETLB_PAGE
#define is_hugepd(pdep)         (0)
#define pgd_huge(pgd)           (0)
#endif /* CONFIG_HUGETLB_PAGE */

#define __hugepd(x) ((hugepd_t) { (x) })

struct page;
@@ -205,6 +205,7 @@ struct pci_dn {

        int pci_ext_config_space;       /* for pci devices */

        struct pci_dev *pcidev;         /* back-pointer to the pci device */
#ifdef CONFIG_EEH
        struct eeh_dev *edev;           /* eeh device */
#endif

@@ -149,4 +149,8 @@ extern void pcibios_setup_phb_io_space(struct pci_controller *hose);
extern void pcibios_scan_phb(struct pci_controller *hose);

#endif /* __KERNEL__ */

extern struct pci_dev *pnv_pci_get_gpu_dev(struct pci_dev *npdev);
extern struct pci_dev *pnv_pci_get_npu_dev(struct pci_dev *gpdev, int index);

#endif /* __ASM_POWERPC_PCI_H */

@@ -21,16 +21,34 @@ extern void pgd_free(struct mm_struct *mm, pgd_t *pgd);
/* #define pgd_populate(mm, pmd, pte)      BUG() */

#ifndef CONFIG_BOOKE
#define pmd_populate_kernel(mm, pmd, pte) \
        (pmd_val(*(pmd)) = __pa(pte) | _PMD_PRESENT)
#define pmd_populate(mm, pmd, pte) \
        (pmd_val(*(pmd)) = (page_to_pfn(pte) << PAGE_SHIFT) | _PMD_PRESENT)

static inline void pmd_populate_kernel(struct mm_struct *mm, pmd_t *pmdp,
                                       pte_t *pte)
{
        *pmdp = __pmd(__pa(pte) | _PMD_PRESENT);
}

static inline void pmd_populate(struct mm_struct *mm, pmd_t *pmdp,
                                pgtable_t pte_page)
{
        *pmdp = __pmd((page_to_pfn(pte_page) << PAGE_SHIFT) | _PMD_PRESENT);
}

#define pmd_pgtable(pmd) pmd_page(pmd)
#else
#define pmd_populate_kernel(mm, pmd, pte) \
        (pmd_val(*(pmd)) = (unsigned long)pte | _PMD_PRESENT)
#define pmd_populate(mm, pmd, pte) \
        (pmd_val(*(pmd)) = (unsigned long)lowmem_page_address(pte) | _PMD_PRESENT)

static inline void pmd_populate_kernel(struct mm_struct *mm, pmd_t *pmdp,
                                       pte_t *pte)
{
        *pmdp = __pmd((unsigned long)pte | _PMD_PRESENT);
}

static inline void pmd_populate(struct mm_struct *mm, pmd_t *pmdp,
                                pgtable_t pte_page)
{
        *pmdp = __pmd((unsigned long)lowmem_page_address(pte_page) | _PMD_PRESENT);
}

#define pmd_pgtable(pmd) pmd_page(pmd)
#endif

@@ -53,7 +53,7 @@ static inline void pgd_free(struct mm_struct *mm, pgd_t *pgd)

#ifndef CONFIG_PPC_64K_PAGES

#define pgd_populate(MM, PGD, PUD)      pgd_set(PGD, PUD)
#define pgd_populate(MM, PGD, PUD)      pgd_set(PGD, (unsigned long)PUD)

static inline pud_t *pud_alloc_one(struct mm_struct *mm, unsigned long addr)
{

@@ -71,9 +71,18 @@ static inline void pud_populate(struct mm_struct *mm, pud_t *pud, pmd_t *pmd)
        pud_set(pud, (unsigned long)pmd);
}

#define pmd_populate(mm, pmd, pte_page) \
        pmd_populate_kernel(mm, pmd, page_address(pte_page))
#define pmd_populate_kernel(mm, pmd, pte) pmd_set(pmd, (unsigned long)(pte))
static inline void pmd_populate_kernel(struct mm_struct *mm, pmd_t *pmd,
                                       pte_t *pte)
{
        pmd_set(pmd, (unsigned long)pte);
}

static inline void pmd_populate(struct mm_struct *mm, pmd_t *pmd,
                                pgtable_t pte_page)
{
        pmd_set(pmd, (unsigned long)page_address(pte_page));
}

#define pmd_pgtable(pmd) pmd_page(pmd)

static inline pte_t *pte_alloc_one_kernel(struct mm_struct *mm,

@@ -154,16 +163,6 @@ static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t table,
}

#else /* if CONFIG_PPC_64K_PAGES */
/*
 * we support 16 fragments per PTE page.
 */
#define PTE_FRAG_NR     16
/*
 * We use a 2K PTE page fragment and another 2K for storing
 * real_pte_t hash index
 */
#define PTE_FRAG_SIZE_SHIFT 12
#define PTE_FRAG_SIZE (2 * PTRS_PER_PTE * sizeof(pte_t))

extern pte_t *page_table_alloc(struct mm_struct *, unsigned long, int);
extern void page_table_free(struct mm_struct *, unsigned long *, int);
@@ -1,6 +1,5 @@
#ifndef _ASM_POWERPC_PGTABLE_H
#define _ASM_POWERPC_PGTABLE_H
#ifdef __KERNEL__

#ifndef __ASSEMBLY__
#include <linux/mmdebug.h>

@@ -13,210 +12,20 @@ struct mm_struct;

#endif /* !__ASSEMBLY__ */

#if defined(CONFIG_PPC64)
# include <asm/pgtable-ppc64.h>
#ifdef CONFIG_PPC_BOOK3S
#include <asm/book3s/pgtable.h>
#else
# include <asm/pgtable-ppc32.h>
#endif

/*
 * We save the slot number & secondary bit in the second half of the
 * PTE page. We use 8 bytes per each pte entry.
 */
#define PTE_PAGE_HIDX_OFFSET (PTRS_PER_PTE * 8)
#include <asm/nohash/pgtable.h>
#endif /* !CONFIG_PPC_BOOK3S */

#ifndef __ASSEMBLY__

#include <asm/tlbflush.h>

/* Generic accessors to PTE bits */
static inline int pte_write(pte_t pte)
{
        return (pte_val(pte) & (_PAGE_RW | _PAGE_RO)) != _PAGE_RO;
}
static inline int pte_dirty(pte_t pte)          { return pte_val(pte) & _PAGE_DIRTY; }
static inline int pte_young(pte_t pte)          { return pte_val(pte) & _PAGE_ACCESSED; }
static inline int pte_special(pte_t pte)        { return pte_val(pte) & _PAGE_SPECIAL; }
static inline int pte_none(pte_t pte)           { return (pte_val(pte) & ~_PTE_NONE_MASK) == 0; }
static inline pgprot_t pte_pgprot(pte_t pte)    { return __pgprot(pte_val(pte) & PAGE_PROT_BITS); }

#ifdef CONFIG_NUMA_BALANCING
/*
 * These work without NUMA balancing but the kernel does not care. See the
 * comment in include/asm-generic/pgtable.h . On powerpc, this will only
 * work for user pages and always return true for kernel pages.
 */
static inline int pte_protnone(pte_t pte)
{
        return (pte_val(pte) &
                (_PAGE_PRESENT | _PAGE_USER)) == _PAGE_PRESENT;
}

static inline int pmd_protnone(pmd_t pmd)
{
        return pte_protnone(pmd_pte(pmd));
}
#endif /* CONFIG_NUMA_BALANCING */

static inline int pte_present(pte_t pte)
{
        return pte_val(pte) & _PAGE_PRESENT;
}

/* Conversion functions: convert a page and protection to a page entry,
 * and a page entry and page directory to the page they refer to.
 *
 * Even if PTEs can be unsigned long long, a PFN is always an unsigned
 * long for now.
 */
static inline pte_t pfn_pte(unsigned long pfn, pgprot_t pgprot)
{
        return __pte(((pte_basic_t)(pfn) << PTE_RPN_SHIFT) |
                     pgprot_val(pgprot));
}
static inline unsigned long pte_pfn(pte_t pte)
{
        return pte_val(pte) >> PTE_RPN_SHIFT;
}

/* Keep these as macros to avoid include dependency mess */
#define pte_page(x)             pfn_to_page(pte_pfn(x))
#define mk_pte(page, pgprot)    pfn_pte(page_to_pfn(page), (pgprot))

/* Generic modifiers for PTE bits */
static inline pte_t pte_wrprotect(pte_t pte)
{
        pte_val(pte) &= ~(_PAGE_RW | _PAGE_HWWRITE);
        pte_val(pte) |= _PAGE_RO;
        return pte;
}
static inline pte_t pte_mkclean(pte_t pte)
{
        pte_val(pte) &= ~(_PAGE_DIRTY | _PAGE_HWWRITE);
        return pte;
}
static inline pte_t pte_mkold(pte_t pte)
{
        pte_val(pte) &= ~_PAGE_ACCESSED;
        return pte;
}
static inline pte_t pte_mkwrite(pte_t pte)
{
        pte_val(pte) &= ~_PAGE_RO;
        pte_val(pte) |= _PAGE_RW;
        return pte;
}
static inline pte_t pte_mkdirty(pte_t pte)
{
        pte_val(pte) |= _PAGE_DIRTY;
        return pte;
}
static inline pte_t pte_mkyoung(pte_t pte)
{
        pte_val(pte) |= _PAGE_ACCESSED;
        return pte;
}
static inline pte_t pte_mkspecial(pte_t pte)
{
        pte_val(pte) |= _PAGE_SPECIAL;
        return pte;
}
static inline pte_t pte_mkhuge(pte_t pte)
{
        return pte;
}
static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
{
        pte_val(pte) = (pte_val(pte) & _PAGE_CHG_MASK) | pgprot_val(newprot);
        return pte;
}

/* Insert a PTE, top level function is out of line. It uses an inline
 * low level function in the respective pgtable-* files
 */
extern void set_pte_at(struct mm_struct *mm, unsigned long addr, pte_t *ptep,
                       pte_t pte);

/* This low level function performs the actual PTE insertion
 * Setting the PTE depends on the MMU type and other factors. It's
 * a horrible mess that I'm not going to try to clean up now but
 * I'm keeping it in one place rather than spread around
 */
static inline void __set_pte_at(struct mm_struct *mm, unsigned long addr,
                                pte_t *ptep, pte_t pte, int percpu)
{
#if defined(CONFIG_PPC_STD_MMU_32) && defined(CONFIG_SMP) && !defined(CONFIG_PTE_64BIT)
        /* First case is 32-bit Hash MMU in SMP mode with 32-bit PTEs. We use the
         * helper pte_update() which does an atomic update. We need to do that
         * because a concurrent invalidation can clear _PAGE_HASHPTE. If it's a
         * per-CPU PTE such as a kmap_atomic, we do a simple update preserving
         * the hash bits instead (ie, same as the non-SMP case)
         */
        if (percpu)
                *ptep = __pte((pte_val(*ptep) & _PAGE_HASHPTE)
                              | (pte_val(pte) & ~_PAGE_HASHPTE));
        else
                pte_update(ptep, ~_PAGE_HASHPTE, pte_val(pte));

#elif defined(CONFIG_PPC32) && defined(CONFIG_PTE_64BIT)
        /* Second case is 32-bit with 64-bit PTE. In this case, we
         * can just store as long as we do the two halves in the right order
         * with a barrier in between. This is possible because we take care,
         * in the hash code, to pre-invalidate if the PTE was already hashed,
         * which synchronizes us with any concurrent invalidation.
         * In the percpu case, we also fall back to the simple update preserving
         * the hash bits
         */
        if (percpu) {
                *ptep = __pte((pte_val(*ptep) & _PAGE_HASHPTE)
                              | (pte_val(pte) & ~_PAGE_HASHPTE));
                return;
        }
#if _PAGE_HASHPTE != 0
        if (pte_val(*ptep) & _PAGE_HASHPTE)
                flush_hash_entry(mm, ptep, addr);
#endif
        __asm__ __volatile__("\
                stw%U0%X0 %2,%0\n\
                eieio\n\
                stw%U0%X0 %L2,%1"
        : "=m" (*ptep), "=m" (*((unsigned char *)ptep+4))
        : "r" (pte) : "memory");

#elif defined(CONFIG_PPC_STD_MMU_32)
        /* Third case is 32-bit hash table in UP mode, we need to preserve
         * the _PAGE_HASHPTE bit since we may not have invalidated the previous
         * translation in the hash yet (done in a subsequent flush_tlb_xxx())
         * and so we need to keep track that this PTE needs invalidating
         */
        *ptep = __pte((pte_val(*ptep) & _PAGE_HASHPTE)
                      | (pte_val(pte) & ~_PAGE_HASHPTE));

#else
        /* Anything else just stores the PTE normally. That covers all 64-bit
         * cases, and 32-bit non-hash with 32-bit PTEs.
         */
        *ptep = pte;

#ifdef CONFIG_PPC_BOOK3E_64
        /*
         * With hardware tablewalk, a sync is needed to ensure that
         * subsequent accesses see the PTE we just wrote. Unlike userspace
         * mappings, we can't tolerate spurious faults, so make sure
         * the new PTE will be seen the first time.
         */
        if (is_kernel_addr(addr))
                mb();
#endif
#endif
}

#define __HAVE_ARCH_PTEP_SET_ACCESS_FLAGS
extern int ptep_set_access_flags(struct vm_area_struct *vma, unsigned long address,
                                 pte_t *ptep, pte_t entry, int dirty);

/*
 * Macro to mark a page protection value as "uncacheable".
 */

#define _PAGE_CACHE_CTL (_PAGE_COHERENT | _PAGE_GUARDED | _PAGE_NO_CACHE | \
                         _PAGE_WRITETHRU)

#define pgprot_noncached(prot)          (__pgprot((pgprot_val(prot) & ~_PAGE_CACHE_CTL) | \
                                                  _PAGE_NO_CACHE | _PAGE_GUARDED))

#define pgprot_noncached_wc(prot)       (__pgprot((pgprot_val(prot) & ~_PAGE_CACHE_CTL) | \
                                                  _PAGE_NO_CACHE))

#define pgprot_cached(prot)             (__pgprot((pgprot_val(prot) & ~_PAGE_CACHE_CTL) | \
                                                  _PAGE_COHERENT))

#define pgprot_cached_wthru(prot)       (__pgprot((pgprot_val(prot) & ~_PAGE_CACHE_CTL) | \
                                                  _PAGE_COHERENT | _PAGE_WRITETHRU))

#define pgprot_cached_noncoherent(prot) \
                (__pgprot(pgprot_val(prot) & ~_PAGE_CACHE_CTL))

#define pgprot_writecombine pgprot_noncached_wc

struct file;
extern pgprot_t phys_mem_access_prot(struct file *file, unsigned long pfn,
                                     unsigned long size, pgprot_t vma_prot);
#define __HAVE_PHYS_MEM_ACCESS_PROT

/*
 * ZERO_PAGE is a global shared page that is always zero: used
 * for zero-mapped memory areas etc..

@@ -271,5 +80,4 @@ static inline pte_t *find_linux_pte_or_hugepte(pgd_t *pgdir, unsigned long ea,
}
#endif /* __ASSEMBLY__ */

#endif /* __KERNEL__ */
#endif /* _ASM_POWERPC_PGTABLE_H */
@@ -201,6 +201,23 @@ static inline long plpar_pte_read_raw(unsigned long flags, unsigned long ptex,
        return rc;
}

/*
 * ptes must be 8*sizeof(unsigned long)
 */
static inline long plpar_pte_read_4(unsigned long flags, unsigned long ptex,
                                    unsigned long *ptes)
{
        long rc;
        unsigned long retbuf[PLPAR_HCALL9_BUFSIZE];

        rc = plpar_hcall9(H_READ, retbuf, flags | H_READ_4, ptex);

        memcpy(ptes, retbuf, 8*sizeof(unsigned long));

        return rc;
}
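
The 8-unsigned-long buffer requirement follows from each HPTE being a two-dword (v, r) pair, so one H_READ with H_READ_4 returns four entries. A sketch of a caller (the function name and the group alignment are illustrative assumptions, not taken from the tree):

/* Hypothetical caller: read four consecutive HPTEs in one hcall. */
static long demo_read_hpte_group(unsigned long ptex, unsigned long pte_buf[8])
{
        /* 4 HPTEs x 2 dwords each fill the 8 slots; align ptex to the
         * group of four (assumption, for illustration). */
        return plpar_pte_read_4(0, ptex & ~3UL, pte_buf);
}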

/*
 * plpar_pte_read_4_raw can be called in real mode.
 * ptes must be 8*sizeof(unsigned long)

@@ -413,24 +413,6 @@ END_FTR_SECTION_IFCLR(CPU_FTR_601)
        FTR_SECTION_ELSE_NESTED(848);   \
        mtocrf (FXM), RS;               \
        ALT_FTR_SECTION_END_NESTED_IFCLR(CPU_FTR_NOEXECUTE, 848)

/*
 * PPR restore macros used in entry_64.S
 * Used for P7 or later processors
 */
#define HMT_MEDIUM_LOW_HAS_PPR                                          \
BEGIN_FTR_SECTION_NESTED(944)                                           \
        HMT_MEDIUM_LOW;                                                 \
END_FTR_SECTION_NESTED(CPU_FTR_HAS_PPR,CPU_FTR_HAS_PPR,944)

#define SET_DEFAULT_THREAD_PPR(ra, rb)                                  \
BEGIN_FTR_SECTION_NESTED(945)                                           \
        lis ra,INIT_PPR@highest;        /* default ppr=3 */             \
        ld rb,PACACURRENT(r13);                                         \
        sldi ra,ra,32;  /* bits 11-13 are used for ppr */               \
        std ra,TASKTHREADPPR(rb);                                       \
END_FTR_SECTION_NESTED(CPU_FTR_HAS_PPR,CPU_FTR_HAS_PPR,945)

#endif

/*

@@ -88,12 +88,6 @@ struct task_struct;
void start_thread(struct pt_regs *regs, unsigned long fdptr, unsigned long sp);
void release_thread(struct task_struct *);

/* Lazy FPU handling on uni-processor */
extern struct task_struct *last_task_used_math;
extern struct task_struct *last_task_used_altivec;
extern struct task_struct *last_task_used_vsx;
extern struct task_struct *last_task_used_spe;

#ifdef CONFIG_PPC32

#if CONFIG_TASK_SIZE > CONFIG_KERNEL_START

@@ -294,6 +288,7 @@ struct thread_struct {
#endif
#ifdef CONFIG_PPC64
        unsigned long dscr;
        unsigned long fscr;
        /*
         * This member element dscr_inherit indicates that the process
         * has explicitly attempted and changed the DSCR register value

@@ -385,8 +380,6 @@ extern int set_endian(struct task_struct *tsk, unsigned int val);
extern int get_unalign_ctl(struct task_struct *tsk, unsigned long adr);
extern int set_unalign_ctl(struct task_struct *tsk, unsigned int val);

extern void fp_enable(void);
extern void vec_enable(void);
extern void load_fp_state(struct thread_fp_state *fp);
extern void store_fp_state(struct thread_fp_state *fp);
extern void load_vr_state(struct thread_vr_state *vr);

@@ -40,6 +40,11 @@
#else
#define _PAGE_RW 0
#endif

#ifndef _PAGE_PTE
#define _PAGE_PTE 0
#endif

#ifndef _PMD_PRESENT_MASK
#define _PMD_PRESENT_MASK _PMD_PRESENT
#endif

@@ -1,17 +0,0 @@
/* To be included by pgtable-hash64.h only */

/* PTE bits */
#define _PAGE_HASHPTE   0x0400 /* software: pte has an associated HPTE */
#define _PAGE_SECONDARY 0x8000 /* software: HPTE is in secondary group */
#define _PAGE_GROUP_IX  0x7000 /* software: HPTE index within group */
#define _PAGE_F_SECOND  _PAGE_SECONDARY
#define _PAGE_F_GIX     _PAGE_GROUP_IX
#define _PAGE_SPECIAL   0x10000 /* software: special page */

/* PTE flags to conserve for HPTE identification */
#define _PAGE_HPTEFLAGS (_PAGE_BUSY | _PAGE_HASHPTE | \
                         _PAGE_SECONDARY | _PAGE_GROUP_IX)

/* shift to put page number into pte */
#define PTE_RPN_SHIFT   (17)

@@ -1,102 +0,0 @@
/* To be included by pgtable-hash64.h only */

/* Additional PTE bits (don't change without checking asm in hash_low.S) */
#define _PAGE_SPECIAL   0x00000400 /* software: special page */
#define _PAGE_HPTE_SUB  0x0ffff000 /* combo only: sub pages HPTE bits */
#define _PAGE_HPTE_SUB0 0x08000000 /* combo only: first sub page */
#define _PAGE_COMBO     0x10000000 /* this is a combo 4k page */
#define _PAGE_4K_PFN    0x20000000 /* PFN is for a single 4k page */

/* For 64K page, we don't have a separate _PAGE_HASHPTE bit. Instead,
 * we set that to be the whole sub-bits mask. The C code will only
 * test this, so a multi-bit mask will work. For combo pages, this
 * is equivalent as effectively, the old _PAGE_HASHPTE was an OR of
 * all the sub bits. For real 64k pages, we now have the assembly set
 * _PAGE_HPTE_SUB0 in addition to setting the HIDX bits which overlap
 * that mask. This is fine as long as the HIDX bits are never set on
 * a PTE that isn't hashed, which is the case today.
 *
 * A little nit is for the huge page C code, which does the hashing
 * in C, we need to provide which bit to use.
 */
#define _PAGE_HASHPTE   _PAGE_HPTE_SUB

/* Note the full page bits must be in the same location as for normal
 * 4k pages as the same assembly will be used to insert 64K pages
 * whether the kernel has CONFIG_PPC_64K_PAGES or not
 */
#define _PAGE_F_SECOND  0x00008000 /* full page: hidx bits */
#define _PAGE_F_GIX     0x00007000 /* full page: hidx bits */

/* PTE flags to conserve for HPTE identification */
#define _PAGE_HPTEFLAGS (_PAGE_BUSY | _PAGE_HASHPTE | _PAGE_COMBO)

/* Shift to put page number into pte.
 *
 * That gives us a max RPN of 34 bits, which means a max of 50 bits
 * of addressable physical space, or 46 bits for the special 4k PFNs.
 */
#define PTE_RPN_SHIFT   (30)

#ifndef __ASSEMBLY__

/*
 * With 64K pages on hash table, we have a special PTE format that
 * uses a second "half" of the page table to encode sub-page information
 * in order to deal with 64K made of 4K HW pages. Thus we override the
 * generic accessors and iterators here
 */
#define __real_pte __real_pte
static inline real_pte_t __real_pte(pte_t pte, pte_t *ptep)
{
        real_pte_t rpte;

        rpte.pte = pte;
        rpte.hidx = 0;
        if (pte_val(pte) & _PAGE_COMBO) {
                /*
                 * Make sure we order the hidx load against the _PAGE_COMBO
                 * check. The store side ordering is done in __hash_page_4K
                 */
                smp_rmb();
                rpte.hidx = pte_val(*((ptep) + PTRS_PER_PTE));
        }
        return rpte;
}

static inline unsigned long __rpte_to_hidx(real_pte_t rpte, unsigned long index)
{
        if ((pte_val(rpte.pte) & _PAGE_COMBO))
                return (rpte.hidx >> (index<<2)) & 0xf;
        return (pte_val(rpte.pte) >> 12) & 0xf;
}

#define __rpte_to_pte(r)        ((r).pte)
#define __rpte_sub_valid(rpte, index) \
        (pte_val(rpte.pte) & (_PAGE_HPTE_SUB0 >> (index)))

/* Trick: we set __end to va + 64k, which happens to work for
 * a 16M page as well as we want only one iteration
 */
#define pte_iterate_hashed_subpages(rpte, psize, vpn, index, shift)           \
        do {                                                                  \
                unsigned long __end = vpn + (1UL << (PAGE_SHIFT - VPN_SHIFT)); \
                unsigned __split = (psize == MMU_PAGE_4K ||                   \
                                    psize == MMU_PAGE_64K_AP);                \
                shift = mmu_psize_defs[psize].shift;                          \
                for (index = 0; vpn < __end; index++,                         \
                     vpn += (1L << (shift - VPN_SHIFT))) {                    \
                        if (!__split || __rpte_sub_valid(rpte, index))        \
                                do {

#define pte_iterate_hashed_end() } while(0); } } while(0)
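
For reference, the open/close macro pair above is used like this (a non-compilable usage sketch; it assumes the surrounding hash-MMU context supplies rpte, psize, vpn, index and shift):

        /* Walk every valid 4K subpage of a 64K/combo PTE. */
        pte_iterate_hashed_subpages(rpte, psize, vpn, index, shift) {
                unsigned long hidx = __rpte_to_hidx(rpte, index);
                /* ...invalidate or update the HPTE slot for this subpage,
                 * using hidx to locate it in the hash table... */
        } pte_iterate_hashed_end();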

#define pte_pagesize_index(mm, addr, pte)       \
        (((pte) & _PAGE_COMBO)? MMU_PAGE_4K: MMU_PAGE_64K)

#define remap_4k_pfn(vma, addr, pfn, prot)                              \
        (WARN_ON(((pfn) >= (1UL << (64 - PTE_RPN_SHIFT)))) ? -EINVAL :  \
                remap_pfn_range((vma), (addr), (pfn), PAGE_SIZE,        \
                        __pgprot(pgprot_val((prot)) | _PAGE_4K_PFN)))

#endif /* __ASSEMBLY__ */

@@ -1,54 +0,0 @@
#ifndef _ASM_POWERPC_PTE_HASH64_H
#define _ASM_POWERPC_PTE_HASH64_H
#ifdef __KERNEL__

/*
 * Common bits between 4K and 64K pages in a linux-style PTE.
 * These match the bits in the (hardware-defined) PowerPC PTE as closely
 * as possible. Additional bits may be defined in pgtable-hash64-*.h
 *
 * Note: We only support user read/write permissions. The supervisor always
 * has full read/write to pages above PAGE_OFFSET (pages below that
 * always use the user access permissions).
 *
 * We could create separate kernel read-only if we used the 3 PP bit
 * combinations that newer processors provide but we currently don't.
 */
#define _PAGE_PRESENT           0x0001 /* software: pte contains a translation */
#define _PAGE_USER              0x0002 /* matches one of the PP bits */
#define _PAGE_BIT_SWAP_TYPE     2
#define _PAGE_EXEC              0x0004 /* No execute on POWER4 and newer (we invert) */
#define _PAGE_GUARDED           0x0008
/* We can derive Memory coherence from _PAGE_NO_CACHE */
#define _PAGE_NO_CACHE          0x0020 /* I: cache inhibit */
#define _PAGE_WRITETHRU         0x0040 /* W: cache write-through */
#define _PAGE_DIRTY             0x0080 /* C: page changed */
#define _PAGE_ACCESSED          0x0100 /* R: page referenced */
#define _PAGE_RW                0x0200 /* software: user write access allowed */
#define _PAGE_BUSY              0x0800 /* software: PTE & hash are busy */

/* No separate kernel read-only */
#define _PAGE_KERNEL_RW         (_PAGE_RW | _PAGE_DIRTY) /* user access blocked by key */
#define _PAGE_KERNEL_RO         _PAGE_KERNEL_RW

/* Strong Access Ordering */
#define _PAGE_SAO               (_PAGE_WRITETHRU | _PAGE_NO_CACHE | _PAGE_COHERENT)

/* No page size encoding in the linux PTE */
#define _PAGE_PSIZE             0

/* PTEIDX nibble */
#define _PTEIDX_SECONDARY       0x8
#define _PTEIDX_GROUP_IX        0x7

/* Hash table based platforms need atomic updates of the linux PTE */
#define PTE_ATOMIC_UPDATES      1

#ifdef CONFIG_PPC_64K_PAGES
#include <asm/pte-hash64-64k.h>
#else
#include <asm/pte-hash64-4k.h>
#endif

#endif /* __KERNEL__ */
#endif /* _ASM_POWERPC_PTE_HASH64_H */

@@ -1194,12 +1194,20 @@
#define __mtmsrd(v, l)  asm volatile("mtmsrd %0," __stringify(l) \
                                     : : "r" (v) : "memory")
#define mtmsr(v)        __mtmsrd((v), 0)
#define __MTMSR         "mtmsrd"
#else
#define mtmsr(v)        asm volatile("mtmsr %0" : \
                                     : "r" ((unsigned long)(v)) \
                                     : "memory")
#define __MTMSR         "mtmsr"
#endif

static inline void mtmsr_isync(unsigned long val)
{
        asm volatile(__MTMSR " %0; " ASM_FTR_IFCLR("isync", "nop", %1) : :
                     "r" (val), "i" (CPU_FTR_ARCH_206) : "memory");
}

#define mfspr(rn)       ({unsigned long rval; \
                          asm volatile("mfspr %0," __stringify(rn) \
                                       : "=r" (rval)); rval;})

@@ -1207,6 +1215,15 @@
                                     : "r" ((unsigned long)(v)) \
                                     : "memory")

extern void msr_check_and_set(unsigned long bits);
extern bool strict_msr_control;
extern void __msr_check_and_clear(unsigned long bits);
static inline void msr_check_and_clear(unsigned long bits)
{
        if (strict_msr_control)
                __msr_check_and_clear(bits);
}
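
These MSR helpers underpin the reworked kernel facility enable/disable pairs that appear in switch_to.h below. A sketch of the intended usage pattern (the function shown is hypothetical; disable_kernel_fp() in this series is built on exactly this call):

/* Hypothetical kernel caller, for illustration: bracket kernel FP use. */
static void demo_use_fp_in_kernel(void)
{
        msr_check_and_set(MSR_FP);      /* turn the FPU on for this CPU */
        /* ...kernel FP work... */
        msr_check_and_clear(MSR_FP);    /* becomes a no-op unless strict_msr_control */
}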

static inline unsigned long mfvtb(void)
{
#ifdef CONFIG_PPC_BOOK3S_64

@@ -334,10 +334,11 @@ extern void (*rtas_flash_term_hook)(int);

extern struct rtas_t rtas;

extern void enter_rtas(unsigned long);
extern int rtas_token(const char *service);
extern int rtas_service_present(const char *service);
extern int rtas_call(int token, int, int, int *, ...);
void rtas_call_unlocked(struct rtas_args *args, int token, int nargs,
                        int nret, ...);
extern void rtas_restart(char *cmd);
extern void rtas_power_off(void);
extern void rtas_halt(void);

@@ -4,6 +4,8 @@
#ifndef _ASM_POWERPC_SWITCH_TO_H
#define _ASM_POWERPC_SWITCH_TO_H

#include <asm/reg.h>

struct thread_struct;
struct task_struct;
struct pt_regs;

@@ -12,74 +14,59 @@ extern struct task_struct *__switch_to(struct task_struct *,
                                       struct task_struct *);
#define switch_to(prev, next, last) ((last) = __switch_to((prev), (next)))

struct thread_struct;
extern struct task_struct *_switch(struct thread_struct *prev,
                                   struct thread_struct *next);
#ifdef CONFIG_PPC_BOOK3S_64
static inline void save_early_sprs(struct thread_struct *prev)
{
        if (cpu_has_feature(CPU_FTR_ARCH_207S))
                prev->tar = mfspr(SPRN_TAR);
        if (cpu_has_feature(CPU_FTR_DSCR))
                prev->dscr = mfspr(SPRN_DSCR);
}
#else
static inline void save_early_sprs(struct thread_struct *prev) {}
#endif

extern void enable_kernel_fp(void);
extern void enable_kernel_altivec(void);
extern void enable_kernel_vsx(void);
extern int emulate_altivec(struct pt_regs *);
extern void __giveup_vsx(struct task_struct *);
extern void giveup_vsx(struct task_struct *);
extern void enable_kernel_spe(void);
extern void giveup_spe(struct task_struct *);
extern void load_up_spe(struct task_struct *);
extern void switch_booke_debug_regs(struct debug_reg *new_debug);

#ifndef CONFIG_SMP
extern void discard_lazy_cpu_state(void);
#else
static inline void discard_lazy_cpu_state(void)
{
}
#endif
extern int emulate_altivec(struct pt_regs *);

extern void flush_all_to_thread(struct task_struct *);
extern void giveup_all(struct task_struct *);

#ifdef CONFIG_PPC_FPU
extern void enable_kernel_fp(void);
extern void flush_fp_to_thread(struct task_struct *);
extern void giveup_fpu(struct task_struct *);
extern void __giveup_fpu(struct task_struct *);
static inline void disable_kernel_fp(void)
{
        msr_check_and_clear(MSR_FP);
}
#else
static inline void flush_fp_to_thread(struct task_struct *t) { }
static inline void giveup_fpu(struct task_struct *t) { }
#endif

#ifdef CONFIG_ALTIVEC
extern void enable_kernel_altivec(void);
extern void flush_altivec_to_thread(struct task_struct *);
extern void giveup_altivec(struct task_struct *);
extern void giveup_altivec_notask(void);
#else
static inline void flush_altivec_to_thread(struct task_struct *t)
{
}
static inline void giveup_altivec(struct task_struct *t)
extern void __giveup_altivec(struct task_struct *);
static inline void disable_kernel_altivec(void)
{
        msr_check_and_clear(MSR_VEC);
}
#endif

#ifdef CONFIG_VSX
extern void enable_kernel_vsx(void);
extern void flush_vsx_to_thread(struct task_struct *);
#else
static inline void flush_vsx_to_thread(struct task_struct *t)
extern void giveup_vsx(struct task_struct *);
extern void __giveup_vsx(struct task_struct *);
static inline void disable_kernel_vsx(void)
{
        msr_check_and_clear(MSR_FP|MSR_VEC|MSR_VSX);
}
#endif

#ifdef CONFIG_SPE
extern void enable_kernel_spe(void);
extern void flush_spe_to_thread(struct task_struct *);
#else
static inline void flush_spe_to_thread(struct task_struct *t)
extern void giveup_spe(struct task_struct *);
extern void __giveup_spe(struct task_struct *);
static inline void disable_kernel_spe(void)
{
        msr_check_and_clear(MSR_SPE);
}
#endif

@@ -44,7 +44,7 @@ static inline void isync(void)
        MAKE_LWSYNC_SECTION_ENTRY(97, __lwsync_fixup);
#define PPC_ACQUIRE_BARRIER     "\n" stringify_in_c(__PPC_ACQUIRE_BARRIER)
#define PPC_RELEASE_BARRIER     stringify_in_c(LWSYNC) "\n"
#define PPC_ATOMIC_ENTRY_BARRIER "\n" stringify_in_c(LWSYNC) "\n"
#define PPC_ATOMIC_ENTRY_BARRIER "\n" stringify_in_c(sync) "\n"
#define PPC_ATOMIC_EXIT_BARRIER  "\n" stringify_in_c(sync) "\n"
#else
#define PPC_ACQUIRE_BARRIER
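
The LWSYNC-to-sync change in PPC_ATOMIC_ENTRY_BARRIER is what upgrades value-returning atomics and the {cmp}xchg family to fully ordered semantics. An illustrative C rendering of the resulting instruction pattern (assumed for illustration, not taken verbatim from the tree; compiles only on powerpc):

static inline int demo_atomic_add_return(int a, int *v)
{
        int t;

        __asm__ __volatile__(
        "sync\n"                        /* PPC_ATOMIC_ENTRY_BARRIER: was lwsync */
        "1:     lwarx   %0,0,%2\n"      /* load-reserve the counter */
        "       add     %0,%1,%0\n"
        "       stwcx.  %0,0,%2\n"      /* store-conditional, retry on failure */
        "       bne-    1b\n"
        "sync"                          /* PPC_ATOMIC_EXIT_BARRIER */
        : "=&r" (t)
        : "r" (a), "r" (v)
        : "cc", "memory");
        return t;
}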
@ -27,7 +27,6 @@ extern struct clock_event_device decrementer_clockevent;
|
|||
|
||||
struct rtc_time;
|
||||
extern void to_tm(int tim, struct rtc_time * tm);
|
||||
extern void GregorianDay(struct rtc_time *tm);
|
||||
extern void tick_broadcast_ipi_handler(void);
|
||||
|
||||
extern void generic_calibrate_decr(void);
|
||||
|
|
|
@ -12,10 +12,9 @@
|
|||
#include <uapi/asm/unistd.h>
|
||||
|
||||
|
||||
#define __NR_syscalls 379
|
||||
#define NR_syscalls 379
|
||||
|
||||
#define __NR__exit __NR_exit
|
||||
#define NR_syscalls __NR_syscalls
|
||||
|
||||
#ifndef __ASSEMBLY__
|
||||
|
||||
|
|
|
@ -41,7 +41,7 @@
|
|||
#include <linux/unistd.h>
|
||||
#include <linux/time.h>
|
||||
|
||||
#define SYSCALL_MAP_SIZE ((__NR_syscalls + 31) / 32)
|
||||
#define SYSCALL_MAP_SIZE ((NR_syscalls + 31) / 32)
|
||||
|
||||
/*
|
||||
* So here is the ppc64 backward compatible version
|
||||
|
|
|
@ -43,5 +43,7 @@
|
|||
#define PPC_FEATURE2_TAR 0x04000000
|
||||
#define PPC_FEATURE2_VEC_CRYPTO 0x02000000
|
||||
#define PPC_FEATURE2_HTM_NOSC 0x01000000
|
||||
#define PPC_FEATURE2_ARCH_3_00 0x00800000 /* ISA 3.00 */
|
||||
#define PPC_FEATURE2_HAS_IEEE128 0x00400000 /* VSX IEEE Binary Float 128-bit */
|
||||
|
||||
#endif /* _UAPI__ASM_POWERPC_CPUTABLE_H */
|
||||
|
|
|
@@ -295,6 +295,8 @@ do { \
#define R_PPC64_TLSLD 108
#define R_PPC64_TOCSAVE 109

#define R_PPC64_ENTRY 118

#define R_PPC64_REL16 249
#define R_PPC64_REL16_LO 250
#define R_PPC64_REL16_HI 251

@@ -960,6 +960,7 @@ int fix_alignment(struct pt_regs *regs)
preempt_disable();
enable_kernel_fp();
cvt_df(&data.dd, (float *)&data.x32.low32);
disable_kernel_fp();
preempt_enable();
#else
return 0;

@@ -1000,6 +1001,7 @@ int fix_alignment(struct pt_regs *regs)
preempt_disable();
enable_kernel_fp();
cvt_fd((float *)&data.x32.low32, &data.dd);
disable_kernel_fp();
preempt_enable();
#else
return 0;

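A sketch of the pattern these alignment-handler hunks adopt (the demo function name is ours, not the kernel's): kernel FP use is now bracketed by enable_kernel_fp()/disable_kernel_fp() inside a non-preemptible region, instead of the old lazy-state dance.

static void demo_kernel_fp(double *dst, const float *src)
{
	preempt_disable();		/* FP state must not migrate mid-use */
	enable_kernel_fp();		/* saves the current task's FP state if live */
	*dst = (double)*src;		/* kernel FP math is now safe */
	disable_kernel_fp();		/* msr_check_and_clear(MSR_FP) */
	preempt_enable();
}
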
@@ -185,14 +185,16 @@ int main(void)
DEFINE(PACAKMSR, offsetof(struct paca_struct, kernel_msr));
DEFINE(PACASOFTIRQEN, offsetof(struct paca_struct, soft_enabled));
DEFINE(PACAIRQHAPPENED, offsetof(struct paca_struct, irq_happened));
DEFINE(PACACONTEXTID, offsetof(struct paca_struct, context.id));
#ifdef CONFIG_PPC_BOOK3S
DEFINE(PACACONTEXTID, offsetof(struct paca_struct, mm_ctx_id));
#ifdef CONFIG_PPC_MM_SLICES
DEFINE(PACALOWSLICESPSIZE, offsetof(struct paca_struct,
context.low_slices_psize));
mm_ctx_low_slices_psize));
DEFINE(PACAHIGHSLICEPSIZE, offsetof(struct paca_struct,
context.high_slices_psize));
mm_ctx_high_slices_psize));
DEFINE(MMUPSIZEDEFSIZE, sizeof(struct mmu_psize_def));
#endif /* CONFIG_PPC_MM_SLICES */
#endif

#ifdef CONFIG_PPC_BOOK3E
DEFINE(PACAPGD, offsetof(struct paca_struct, pgd));

@@ -222,7 +224,7 @@ int main(void)
#ifdef CONFIG_PPC_MM_SLICES
DEFINE(MMUPSIZESLLP, offsetof(struct mmu_psize_def, sllp));
#else
DEFINE(PACACONTEXTSLLP, offsetof(struct paca_struct, context.sllp));
DEFINE(PACACONTEXTSLLP, offsetof(struct paca_struct, mm_ctx_sllp));
#endif /* CONFIG_PPC_MM_SLICES */
DEFINE(PACA_EXGEN, offsetof(struct paca_struct, exgen));
DEFINE(PACA_EXMC, offsetof(struct paca_struct, exmc));

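For readers unfamiliar with asm-offsets.c, a self-contained sketch of the mechanism these hunks touch (the struct and symbol here are stand-ins, not the real paca layout): the DEFINE() macro emits a magic marker into the generated assembly, and kbuild greps it back out to produce constants usable from .S files.

#include <stddef.h>

struct paca_demo {			/* stand-in for struct paca_struct */
	long soft_enabled;
	long mm_ctx_id;
};

/* same trick as the kernel's asm-offsets.c DEFINE() */
#define DEFINE(sym, val) \
	asm volatile("\n->" #sym " %0 " #val : : "i" (val))

void demo_offsets(void)
{
	/* the build greps "->PACACONTEXTID <N>" out of the generated .s
	 * file and turns it into an assembler-visible #define */
	DEFINE(PACACONTEXTID, offsetof(struct paca_demo, mm_ctx_id));
}
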
@@ -223,7 +223,11 @@ END_FTR_SECTION_IFCLR(CPU_FTR_STCX_CHECKS_ADDRESS)

beq- 1f
ACCOUNT_CPU_USER_EXIT(r11, r12)
HMT_MEDIUM_LOW_HAS_PPR

BEGIN_FTR_SECTION
HMT_MEDIUM_LOW
END_FTR_SECTION_IFSET(CPU_FTR_HAS_PPR)

ld r13,GPR13(r1) /* only restore r13 if returning to usermode */
1: ld r2,GPR2(r1)
ld r1,GPR1(r1)

@@ -312,7 +316,13 @@ syscall_exit_work:
subi r12,r12,TI_FLAGS

4: /* Anything else left to do? */
SET_DEFAULT_THREAD_PPR(r3, r10) /* Set thread.ppr = 3 */
BEGIN_FTR_SECTION
lis r3,INIT_PPR@highest /* Set thread.ppr = 3 */
ld r10,PACACURRENT(r13)
sldi r3,r3,32 /* bits 11-13 are used for ppr */
std r3,TASKTHREADPPR(r10)
END_FTR_SECTION_IFSET(CPU_FTR_HAS_PPR)

andi. r0,r9,(_TIF_SYSCALL_DOTRACE|_TIF_SINGLESTEP)
beq ret_from_except_lite

@@ -452,43 +462,11 @@ _GLOBAL(_switch)
/* r3-r13 are caller saved -- Cort */
SAVE_8GPRS(14, r1)
SAVE_10GPRS(22, r1)
mflr r20 /* Return to switch caller */
mfmsr r22
li r0, MSR_FP
#ifdef CONFIG_VSX
BEGIN_FTR_SECTION
oris r0,r0,MSR_VSX@h /* Disable VSX */
END_FTR_SECTION_IFSET(CPU_FTR_VSX)
#endif /* CONFIG_VSX */
#ifdef CONFIG_ALTIVEC
BEGIN_FTR_SECTION
oris r0,r0,MSR_VEC@h /* Disable altivec */
mfspr r24,SPRN_VRSAVE /* save vrsave register value */
std r24,THREAD_VRSAVE(r3)
END_FTR_SECTION_IFSET(CPU_FTR_ALTIVEC)
#endif /* CONFIG_ALTIVEC */
and. r0,r0,r22
beq+ 1f
andc r22,r22,r0
MTMSRD(r22)
isync
1: std r20,_NIP(r1)
std r0,_NIP(r1) /* Return to switch caller */
mfcr r23
std r23,_CCR(r1)
std r1,KSP(r3) /* Set old stack pointer */

#ifdef CONFIG_PPC_BOOK3S_64
BEGIN_FTR_SECTION
/* Event based branch registers */
mfspr r0, SPRN_BESCR
std r0, THREAD_BESCR(r3)
mfspr r0, SPRN_EBBHR
std r0, THREAD_EBBHR(r3)
mfspr r0, SPRN_EBBRR
std r0, THREAD_EBBRR(r3)
END_FTR_SECTION_IFSET(CPU_FTR_ARCH_207S)
#endif

#ifdef CONFIG_SMP
/* We need a sync somewhere here to make sure that if the
 * previous task gets rescheduled on another CPU, it sees all

@@ -576,47 +554,6 @@ END_MMU_FTR_SECTION_IFSET(MMU_FTR_1T_SEGMENT)
mr r1,r8 /* start using new stack pointer */
std r7,PACAKSAVE(r13)

#ifdef CONFIG_PPC_BOOK3S_64
BEGIN_FTR_SECTION
/* Event based branch registers */
ld r0, THREAD_BESCR(r4)
mtspr SPRN_BESCR, r0
ld r0, THREAD_EBBHR(r4)
mtspr SPRN_EBBHR, r0
ld r0, THREAD_EBBRR(r4)
mtspr SPRN_EBBRR, r0

ld r0,THREAD_TAR(r4)
mtspr SPRN_TAR,r0
END_FTR_SECTION_IFSET(CPU_FTR_ARCH_207S)
#endif

#ifdef CONFIG_ALTIVEC
BEGIN_FTR_SECTION
ld r0,THREAD_VRSAVE(r4)
mtspr SPRN_VRSAVE,r0 /* if G4, restore VRSAVE reg */
END_FTR_SECTION_IFSET(CPU_FTR_ALTIVEC)
#endif /* CONFIG_ALTIVEC */
#ifdef CONFIG_PPC64
BEGIN_FTR_SECTION
lwz r6,THREAD_DSCR_INHERIT(r4)
ld r0,THREAD_DSCR(r4)
cmpwi r6,0
bne 1f
ld r0,PACA_DSCR_DEFAULT(r13)
1:
BEGIN_FTR_SECTION_NESTED(70)
mfspr r8, SPRN_FSCR
rldimi r8, r6, FSCR_DSCR_LG, (63 - FSCR_DSCR_LG)
mtspr SPRN_FSCR, r8
END_FTR_SECTION_NESTED(CPU_FTR_ARCH_207S, CPU_FTR_ARCH_207S, 70)
cmpd r0,r25
beq 2f
mtspr SPRN_DSCR,r0
2:
END_FTR_SECTION_IFSET(CPU_FTR_DSCR)
#endif

ld r6,_CCR(r1)
mtcrf 0xFF,r6

@@ -96,7 +96,6 @@ __start_interrupts:

.globl system_reset_pSeries;
system_reset_pSeries:
HMT_MEDIUM_PPR_DISCARD
SET_SCRATCH0(r13)
#ifdef CONFIG_PPC_P7_NAP
BEGIN_FTR_SECTION

@@ -164,7 +163,6 @@ machine_check_pSeries_1:
 * some code path might still want to branch into the original
 * vector
 */
HMT_MEDIUM_PPR_DISCARD
SET_SCRATCH0(r13) /* save r13 */
#ifdef CONFIG_PPC_P7_NAP
BEGIN_FTR_SECTION

@@ -199,7 +197,6 @@ ALT_FTR_SECTION_END_IFSET(CPU_FTR_HVMODE)
. = 0x300
.globl data_access_pSeries
data_access_pSeries:
HMT_MEDIUM_PPR_DISCARD
SET_SCRATCH0(r13)
EXCEPTION_PROLOG_PSERIES(PACA_EXGEN, data_access_common, EXC_STD,
KVMTEST, 0x300)

@@ -207,7 +204,6 @@ data_access_pSeries:
. = 0x380
.globl data_access_slb_pSeries
data_access_slb_pSeries:
HMT_MEDIUM_PPR_DISCARD
SET_SCRATCH0(r13)
EXCEPTION_PROLOG_0(PACA_EXSLB)
EXCEPTION_PROLOG_1(PACA_EXSLB, KVMTEST, 0x380)

@@ -234,15 +230,14 @@ data_access_slb_pSeries:
bctr
#endif

STD_EXCEPTION_PSERIES(0x400, 0x400, instruction_access)
STD_EXCEPTION_PSERIES(0x400, instruction_access)

. = 0x480
.globl instruction_access_slb_pSeries
instruction_access_slb_pSeries:
HMT_MEDIUM_PPR_DISCARD
SET_SCRATCH0(r13)
EXCEPTION_PROLOG_0(PACA_EXSLB)
EXCEPTION_PROLOG_1(PACA_EXSLB, KVMTEST_PR, 0x480)
EXCEPTION_PROLOG_1(PACA_EXSLB, KVMTEST, 0x480)
std r3,PACA_EXSLB+EX_R3(r13)
mfspr r3,SPRN_SRR0 /* SRR0 is faulting address */
#ifdef __DISABLED__

@@ -269,25 +264,24 @@ instruction_access_slb_pSeries:
.globl hardware_interrupt_hv;
hardware_interrupt_pSeries:
hardware_interrupt_hv:
HMT_MEDIUM_PPR_DISCARD
BEGIN_FTR_SECTION
_MASKABLE_EXCEPTION_PSERIES(0x502, hardware_interrupt,
EXC_HV, SOFTEN_TEST_HV)
KVM_HANDLER(PACA_EXGEN, EXC_HV, 0x502)
FTR_SECTION_ELSE
_MASKABLE_EXCEPTION_PSERIES(0x500, hardware_interrupt,
EXC_STD, SOFTEN_TEST_HV_201)
EXC_STD, SOFTEN_TEST_PR)
KVM_HANDLER(PACA_EXGEN, EXC_STD, 0x500)
ALT_FTR_SECTION_END_IFSET(CPU_FTR_HVMODE | CPU_FTR_ARCH_206)

STD_EXCEPTION_PSERIES(0x600, 0x600, alignment)
KVM_HANDLER_PR(PACA_EXGEN, EXC_STD, 0x600)
STD_EXCEPTION_PSERIES(0x600, alignment)
KVM_HANDLER(PACA_EXGEN, EXC_STD, 0x600)

STD_EXCEPTION_PSERIES(0x700, 0x700, program_check)
KVM_HANDLER_PR(PACA_EXGEN, EXC_STD, 0x700)
STD_EXCEPTION_PSERIES(0x700, program_check)
KVM_HANDLER(PACA_EXGEN, EXC_STD, 0x700)

STD_EXCEPTION_PSERIES(0x800, 0x800, fp_unavailable)
KVM_HANDLER_PR(PACA_EXGEN, EXC_STD, 0x800)
STD_EXCEPTION_PSERIES(0x800, fp_unavailable)
KVM_HANDLER(PACA_EXGEN, EXC_STD, 0x800)

. = 0x900
.globl decrementer_pSeries

@@ -297,10 +291,10 @@ decrementer_pSeries:
STD_EXCEPTION_HV(0x980, 0x982, hdecrementer)

MASKABLE_EXCEPTION_PSERIES(0xa00, 0xa00, doorbell_super)
KVM_HANDLER_PR(PACA_EXGEN, EXC_STD, 0xa00)
KVM_HANDLER(PACA_EXGEN, EXC_STD, 0xa00)

STD_EXCEPTION_PSERIES(0xb00, 0xb00, trap_0b)
KVM_HANDLER_PR(PACA_EXGEN, EXC_STD, 0xb00)
STD_EXCEPTION_PSERIES(0xb00, trap_0b)
KVM_HANDLER(PACA_EXGEN, EXC_STD, 0xb00)

. = 0xc00
.globl system_call_pSeries

@@ -331,8 +325,8 @@ system_call_pSeries:
SYSCALL_PSERIES_3
KVM_HANDLER(PACA_EXGEN, EXC_STD, 0xc00)

STD_EXCEPTION_PSERIES(0xd00, 0xd00, single_step)
KVM_HANDLER_PR(PACA_EXGEN, EXC_STD, 0xd00)
STD_EXCEPTION_PSERIES(0xd00, single_step)
KVM_HANDLER(PACA_EXGEN, EXC_STD, 0xd00)

/* At 0xe??? we have a bunch of hypervisor exceptions, we branch
 * out of line to handle them

@@ -407,13 +401,12 @@ hv_facility_unavailable_trampoline:
KVM_HANDLER_SKIP(PACA_EXGEN, EXC_HV, 0x1202)
#endif /* CONFIG_CBE_RAS */

STD_EXCEPTION_PSERIES(0x1300, 0x1300, instruction_breakpoint)
KVM_HANDLER_PR_SKIP(PACA_EXGEN, EXC_STD, 0x1300)
STD_EXCEPTION_PSERIES(0x1300, instruction_breakpoint)
KVM_HANDLER_SKIP(PACA_EXGEN, EXC_STD, 0x1300)

. = 0x1500
.global denorm_exception_hv
denorm_exception_hv:
HMT_MEDIUM_PPR_DISCARD
mtspr SPRN_SPRG_HSCRATCH0,r13
EXCEPTION_PROLOG_0(PACA_EXGEN)
EXCEPTION_PROLOG_1(PACA_EXGEN, NOTEST, 0x1500)

@@ -435,8 +428,8 @@ denorm_exception_hv:
KVM_HANDLER_SKIP(PACA_EXGEN, EXC_HV, 0x1602)
#endif /* CONFIG_CBE_RAS */

STD_EXCEPTION_PSERIES(0x1700, 0x1700, altivec_assist)
KVM_HANDLER_PR(PACA_EXGEN, EXC_STD, 0x1700)
STD_EXCEPTION_PSERIES(0x1700, altivec_assist)
KVM_HANDLER(PACA_EXGEN, EXC_STD, 0x1700)

#ifdef CONFIG_CBE_RAS
STD_EXCEPTION_HV(0x1800, 0x1802, cbe_thermal)

@@ -527,7 +520,6 @@ END_FTR_SECTION_IFSET(CPU_FTR_HVMODE)
machine_check_pSeries:
.globl machine_check_fwnmi
machine_check_fwnmi:
HMT_MEDIUM_PPR_DISCARD
SET_SCRATCH0(r13) /* save r13 */
EXCEPTION_PROLOG_0(PACA_EXMC)
machine_check_pSeries_0:

@@ -536,9 +528,9 @@ machine_check_pSeries_0:
KVM_HANDLER_SKIP(PACA_EXMC, EXC_STD, 0x200)
KVM_HANDLER_SKIP(PACA_EXGEN, EXC_STD, 0x300)
KVM_HANDLER_SKIP(PACA_EXSLB, EXC_STD, 0x380)
KVM_HANDLER_PR(PACA_EXGEN, EXC_STD, 0x400)
KVM_HANDLER_PR(PACA_EXSLB, EXC_STD, 0x480)
KVM_HANDLER_PR(PACA_EXGEN, EXC_STD, 0x900)
KVM_HANDLER(PACA_EXGEN, EXC_STD, 0x400)
KVM_HANDLER(PACA_EXSLB, EXC_STD, 0x480)
KVM_HANDLER(PACA_EXGEN, EXC_STD, 0x900)
KVM_HANDLER(PACA_EXGEN, EXC_HV, 0x982)

#ifdef CONFIG_PPC_DENORMALISATION

@@ -621,13 +613,13 @@ END_FTR_SECTION_IFSET(CPU_FTR_CFAR)

/* moved from 0xf00 */
STD_EXCEPTION_PSERIES_OOL(0xf00, performance_monitor)
KVM_HANDLER_PR(PACA_EXGEN, EXC_STD, 0xf00)
KVM_HANDLER(PACA_EXGEN, EXC_STD, 0xf00)
STD_EXCEPTION_PSERIES_OOL(0xf20, altivec_unavailable)
KVM_HANDLER_PR(PACA_EXGEN, EXC_STD, 0xf20)
KVM_HANDLER(PACA_EXGEN, EXC_STD, 0xf20)
STD_EXCEPTION_PSERIES_OOL(0xf40, vsx_unavailable)
KVM_HANDLER_PR(PACA_EXGEN, EXC_STD, 0xf40)
KVM_HANDLER(PACA_EXGEN, EXC_STD, 0xf40)
STD_EXCEPTION_PSERIES_OOL(0xf60, facility_unavailable)
KVM_HANDLER_PR(PACA_EXGEN, EXC_STD, 0xf60)
KVM_HANDLER(PACA_EXGEN, EXC_STD, 0xf60)
STD_EXCEPTION_HV_OOL(0xf82, facility_unavailable)
KVM_HANDLER(PACA_EXGEN, EXC_HV, 0xf82)

@@ -711,7 +703,6 @@ ALT_FTR_SECTION_END_IFSET(CPU_FTR_HVMODE)
.globl system_reset_fwnmi
.align 7
system_reset_fwnmi:
HMT_MEDIUM_PPR_DISCARD
SET_SCRATCH0(r13) /* save r13 */
EXCEPTION_PROLOG_PSERIES(PACA_EXGEN, system_reset_common, EXC_STD,
NOTEST, 0x100)

@@ -1556,29 +1547,19 @@ do_hash_page:
lwz r0,TI_PREEMPT(r11) /* If we're in an "NMI" */
andis. r0,r0,NMI_MASK@h /* (i.e. an irq when soft-disabled) */
bne 77f /* then don't call hash_page now */
/*
 * We need to set the _PAGE_USER bit if MSR_PR is set or if we are
 * accessing a userspace segment (even from the kernel). We assume
 * kernel addresses always have the high bit set.
 */
rlwinm r4,r4,32-25+9,31-9,31-9 /* DSISR_STORE -> _PAGE_RW */
rotldi r0,r3,15 /* Move high bit into MSR_PR posn */
orc r0,r12,r0 /* MSR_PR | ~high_bit */
rlwimi r4,r0,32-13,30,30 /* becomes _PAGE_USER access bit */
ori r4,r4,1 /* add _PAGE_PRESENT */
rlwimi r4,r5,22+2,31-2,31-2 /* Set _PAGE_EXEC if trap is 0x400 */

/*
 * r3 contains the faulting address
 * r4 contains the required access permissions
 * r4 msr
 * r5 contains the trap number
 * r6 contains dsisr
 *
 * at return r3 = 0 for success, 1 for page fault, negative for error
 */
mr r4,r12
ld r6,_DSISR(r1)
bl hash_page /* build HPTE if possible */
cmpdi r3,0 /* see if hash_page succeeded */
bl __hash_page /* build HPTE if possible */
cmpdi r3,0 /* see if __hash_page succeeded */

/* Success */
beq fast_exc_return_irq /* Return from exception on success */

@@ -73,29 +73,9 @@ END_FTR_SECTION_IFSET(CPU_FTR_VSX)
MTFSF_L(fr0)
REST_32FPVSRS(0, R4, R7)

/* FP/VSX off again */
MTMSRD(r6)
SYNC

blr
#endif /* CONFIG_PPC_TRANSACTIONAL_MEM */

/*
 * Enable use of the FPU, and VSX if possible, for the caller.
 */
_GLOBAL(fp_enable)
mfmsr r3
ori r3,r3,MSR_FP
#ifdef CONFIG_VSX
BEGIN_FTR_SECTION
oris r3,r3,MSR_VSX@h
END_FTR_SECTION_IFSET(CPU_FTR_VSX)
#endif
SYNC
MTMSRD(r3)
isync /* (not necessary for arch 2.02 and later) */
blr

/*
 * Load state from memory into FP registers including FPSCR.
 * Assumes the caller has enabled FP in the MSR.

@@ -136,31 +116,6 @@ END_FTR_SECTION_IFSET(CPU_FTR_VSX)
SYNC
MTMSRD(r5) /* enable use of fpu now */
isync
/*
 * For SMP, we don't do lazy FPU switching because it just gets too
 * horrendously complex, especially when a task switches from one CPU
 * to another. Instead we call giveup_fpu in switch_to.
 */
#ifndef CONFIG_SMP
LOAD_REG_ADDRBASE(r3, last_task_used_math)
toreal(r3)
PPC_LL r4,ADDROFF(last_task_used_math)(r3)
PPC_LCMPI 0,r4,0
beq 1f
toreal(r4)
addi r4,r4,THREAD /* want last_task_used_math->thread */
addi r10,r4,THREAD_FPSTATE
SAVE_32FPVSRS(0, R5, R10)
mffs fr0
stfd fr0,FPSTATE_FPSCR(r10)
PPC_LL r5,PT_REGS(r4)
toreal(r5)
PPC_LL r4,_MSR-STACK_FRAME_OVERHEAD(r5)
li r10,MSR_FP|MSR_FE0|MSR_FE1
andc r4,r4,r10 /* disable FP for previous task */
PPC_STL r4,_MSR-STACK_FRAME_OVERHEAD(r5)
1:
#endif /* CONFIG_SMP */
/* enable use of FP after return */
#ifdef CONFIG_PPC32
mfspr r5,SPRN_SPRG_THREAD /* current task's THREAD (phys) */

@@ -179,36 +134,17 @@ END_FTR_SECTION_IFSET(CPU_FTR_VSX)
lfd fr0,FPSTATE_FPSCR(r10)
MTFSF_L(fr0)
REST_32FPVSRS(0, R4, R10)
#ifndef CONFIG_SMP
subi r4,r5,THREAD
fromreal(r4)
PPC_STL r4,ADDROFF(last_task_used_math)(r3)
#endif /* CONFIG_SMP */
/* restore registers and return */
/* we haven't used ctr or xer or lr */
blr

/*
 * giveup_fpu(tsk)
 * __giveup_fpu(tsk)
 * Disable FP for the task given as the argument,
 * and save the floating-point registers in its thread_struct.
 * Enables the FPU for use in the kernel on return.
 */
_GLOBAL(giveup_fpu)
mfmsr r5
ori r5,r5,MSR_FP
#ifdef CONFIG_VSX
BEGIN_FTR_SECTION
oris r5,r5,MSR_VSX@h
END_FTR_SECTION_IFSET(CPU_FTR_VSX)
#endif
SYNC_601
ISYNC_601
MTMSRD(r5) /* enable use of fpu now */
SYNC_601
isync
PPC_LCMPI 0,r3,0
beqlr- /* if no previous owner, done */
_GLOBAL(__giveup_fpu)
addi r3,r3,THREAD /* want THREAD of task */
PPC_LL r6,THREAD_FPSAVEAREA(r3)
PPC_LL r5,PT_REGS(r3)

@@ -230,11 +166,6 @@ END_FTR_SECTION_IFSET(CPU_FTR_VSX)
andc r4,r4,r3 /* disable FP for previous task */
PPC_STL r4,_MSR-STACK_FRAME_OVERHEAD(r5)
1:
#ifndef CONFIG_SMP
li r5,0
LOAD_REG_ADDRBASE(r4,last_task_used_math)
PPC_STL r5,ADDROFF(last_task_used_math)(r4)
#endif /* CONFIG_SMP */
blr

/*

@@ -857,29 +857,6 @@ _GLOBAL(load_up_spe)
oris r5,r5,MSR_SPE@h
mtmsr r5 /* enable use of SPE now */
isync
/*
 * For SMP, we don't do lazy SPE switching because it just gets too
 * horrendously complex, especially when a task switches from one CPU
 * to another. Instead we call giveup_spe in switch_to.
 */
#ifndef CONFIG_SMP
lis r3,last_task_used_spe@ha
lwz r4,last_task_used_spe@l(r3)
cmpi 0,r4,0
beq 1f
addi r4,r4,THREAD /* want THREAD of last_task_used_spe */
SAVE_32EVRS(0,r10,r4,THREAD_EVR0)
evxor evr10, evr10, evr10 /* clear out evr10 */
evmwumiaa evr10, evr10, evr10 /* evr10 <- ACC = 0 * 0 + ACC */
li r5,THREAD_ACC
evstddx evr10, r4, r5 /* save off accumulator */
lwz r5,PT_REGS(r4)
lwz r4,_MSR-STACK_FRAME_OVERHEAD(r5)
lis r10,MSR_SPE@h
andc r4,r4,r10 /* disable SPE for previous task */
stw r4,_MSR-STACK_FRAME_OVERHEAD(r5)
1:
#endif /* !CONFIG_SMP */
/* enable use of SPE after return */
oris r9,r9,MSR_SPE@h
mfspr r5,SPRN_SPRG_THREAD /* current task's THREAD (phys) */

@@ -889,10 +866,6 @@ _GLOBAL(load_up_spe)
evlddx evr4,r10,r5
evmra evr4,evr4
REST_32EVRS(0,r10,r5,THREAD_EVR0)
#ifndef CONFIG_SMP
subi r4,r5,THREAD
stw r4,last_task_used_spe@l(r3)
#endif /* !CONFIG_SMP */
blr

/*

@@ -1011,16 +984,10 @@ _GLOBAL(__setup_ehv_ivors)

#ifdef CONFIG_SPE
/*
 * extern void giveup_spe(struct task_struct *prev)
 * extern void __giveup_spe(struct task_struct *prev)
 *
 */
_GLOBAL(giveup_spe)
mfmsr r5
oris r5,r5,MSR_SPE@h
mtmsr r5 /* enable use of SPE now */
isync
cmpi 0,r3,0
beqlr- /* if no previous owner, done */
_GLOBAL(__giveup_spe)
addi r3,r3,THREAD /* want THREAD of task */
lwz r5,PT_REGS(r3)
cmpi 0,r5,0

@@ -1035,11 +1002,6 @@ _GLOBAL(giveup_spe)
andc r4,r4,r3 /* disable SPE for previous task */
stw r4,_MSR-STACK_FRAME_OVERHEAD(r5)
1:
#ifndef CONFIG_SMP
li r5,0
lis r4,last_task_used_spe@ha
stw r5,last_task_used_spe@l(r4)
#endif /* !CONFIG_SMP */
blr
#endif /* CONFIG_SPE */

@@ -89,13 +89,6 @@ _GLOBAL(power7_powersave_common)
std r0,_LINK(r1)
std r0,_NIP(r1)

#ifndef CONFIG_SMP
/* Make sure FPU, VSX etc... are flushed as we may lose
 * state when going to nap mode
 */
bl discard_lazy_cpu_state
#endif /* CONFIG_SMP */

/* Hard disable interrupts */
mfmsr r9
rldicl r9,r9,48,1

@@ -743,6 +743,8 @@ relocate_new_kernel:
/* Check for 47x cores */
mfspr r3,SPRN_PVR
srwi r3,r3,16
cmplwi cr0,r3,PVR_476FPE@h
beq setup_map_47x
cmplwi cr0,r3,PVR_476@h
beq setup_map_47x
cmplwi cr0,r3,PVR_476_ISS@h

@@ -635,6 +635,33 @@ int apply_relocate_add(Elf64_Shdr *sechdrs,
 */
break;

case R_PPC64_ENTRY:
/*
 * Optimize ELFv2 large code model entry point if
 * the TOC is within 2GB range of current location.
 */
value = my_r2(sechdrs, me) - (unsigned long)location;
if (value + 0x80008000 > 0xffffffff)
break;
/*
 * Check for the large code model prolog sequence:
 * ld r2, ...(r12)
 * add r2, r2, r12
 */
if ((((uint32_t *)location)[0] & ~0xfffc)
!= 0xe84c0000)
break;
if (((uint32_t *)location)[1] != 0x7c426214)
break;
/*
 * If found, replace it with:
 * addis r2, r12, (.TOC.-func)@ha
 * addi r2, r12, (.TOC.-func)@l
 */
((uint32_t *)location)[0] = 0x3c4c0000 + PPC_HA(value);
((uint32_t *)location)[1] = 0x38420000 + PPC_LO(value);
break;

case R_PPC64_REL16_HA:
/* Subtract location pointer */
value -= (unsigned long)location;

@@ -19,13 +19,11 @@ EXPORT_SYMBOL(_mcount);

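A worked example of the @ha/@l split the relocation above relies on. The arithmetic matches the kernel's PPC_LO()/PPC_HA() macros; the demo names and offset value are ours. @ha must round up whenever bit 15 of the offset is set, because the addi that consumes the @l half sign-extends it:

#include <assert.h>
#include <stdint.h>

static uint32_t demo_lo(uint32_t v) { return v & 0xffff; }
static uint32_t demo_ha(uint32_t v) { return ((v + 0x8000) >> 16) & 0xffff; }

int main(void)
{
	uint32_t off = 0x12348765;		/* hypothetical .TOC.-func offset */
	int32_t lo = (int16_t)demo_lo(off);	/* sign-extended, as addi does */
	uint32_t ha = demo_ha(off);		/* 0x1235, rounded up past 0x1234 */

	/* addis r2,r12,ha then addi with lo reconstructs the full offset */
	assert((int64_t)(ha << 16) + lo == (int32_t)off);
	return 0;
}
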
#endif

#ifdef CONFIG_PPC_FPU
EXPORT_SYMBOL(giveup_fpu);
EXPORT_SYMBOL(load_fp_state);
EXPORT_SYMBOL(store_fp_state);
#endif

#ifdef CONFIG_ALTIVEC
EXPORT_SYMBOL(giveup_altivec);
EXPORT_SYMBOL(load_vr_state);
EXPORT_SYMBOL(store_vr_state);
#endif

@@ -34,10 +32,6 @@ EXPORT_SYMBOL(store_vr_state);
EXPORT_SYMBOL_GPL(__giveup_vsx);
#endif

#ifdef CONFIG_SPE
EXPORT_SYMBOL(giveup_spe);
#endif

#ifdef CONFIG_EPAPR_PARAVIRT
EXPORT_SYMBOL(epapr_hypercall_start);
#endif

@@ -67,15 +67,8 @@

extern unsigned long _get_SP(void);

#ifndef CONFIG_SMP
struct task_struct *last_task_used_math = NULL;
struct task_struct *last_task_used_altivec = NULL;
struct task_struct *last_task_used_vsx = NULL;
struct task_struct *last_task_used_spe = NULL;
#endif

#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
void giveup_fpu_maybe_transactional(struct task_struct *tsk)
static void check_if_tm_restore_required(struct task_struct *tsk)
{
/*
 * If we are saving the current thread's registers, and the

@@ -89,34 +82,67 @@ void giveup_fpu_maybe_transactional(struct task_struct *tsk)
tsk->thread.ckpt_regs.msr = tsk->thread.regs->msr;
set_thread_flag(TIF_RESTORE_TM);
}

giveup_fpu(tsk);
}

void giveup_altivec_maybe_transactional(struct task_struct *tsk)
{
/*
 * If we are saving the current thread's registers, and the
 * thread is in a transactional state, set the TIF_RESTORE_TM
 * bit so that we know to restore the registers before
 * returning to userspace.
 */
if (tsk == current && tsk->thread.regs &&
MSR_TM_ACTIVE(tsk->thread.regs->msr) &&
!test_thread_flag(TIF_RESTORE_TM)) {
tsk->thread.ckpt_regs.msr = tsk->thread.regs->msr;
set_thread_flag(TIF_RESTORE_TM);
}

giveup_altivec(tsk);
}

#else
#define giveup_fpu_maybe_transactional(tsk) giveup_fpu(tsk)
#define giveup_altivec_maybe_transactional(tsk) giveup_altivec(tsk)
static inline void check_if_tm_restore_required(struct task_struct *tsk) { }
#endif /* CONFIG_PPC_TRANSACTIONAL_MEM */

bool strict_msr_control;
EXPORT_SYMBOL(strict_msr_control);

static int __init enable_strict_msr_control(char *str)
{
strict_msr_control = true;
pr_info("Enabling strict facility control\n");

return 0;
}
early_param("ppc_strict_facility_enable", enable_strict_msr_control);

void msr_check_and_set(unsigned long bits)
{
unsigned long oldmsr = mfmsr();
unsigned long newmsr;

newmsr = oldmsr | bits;

#ifdef CONFIG_VSX
if (cpu_has_feature(CPU_FTR_VSX) && (bits & MSR_FP))
newmsr |= MSR_VSX;
#endif

if (oldmsr != newmsr)
mtmsr_isync(newmsr);
}

void __msr_check_and_clear(unsigned long bits)
{
unsigned long oldmsr = mfmsr();
unsigned long newmsr;

newmsr = oldmsr & ~bits;

#ifdef CONFIG_VSX
if (cpu_has_feature(CPU_FTR_VSX) && (bits & MSR_FP))
newmsr &= ~MSR_VSX;
#endif

if (oldmsr != newmsr)
mtmsr_isync(newmsr);
}
EXPORT_SYMBOL(__msr_check_and_clear);

#ifdef CONFIG_PPC_FPU
void giveup_fpu(struct task_struct *tsk)
{
check_if_tm_restore_required(tsk);

msr_check_and_set(MSR_FP);
__giveup_fpu(tsk);
msr_check_and_clear(MSR_FP);
}
EXPORT_SYMBOL(giveup_fpu);

/*
 * Make sure the floating-point register state in the
 * the thread_struct is up to date for task tsk.

@@ -134,52 +160,56 @@ void flush_fp_to_thread(struct task_struct *tsk)
 */
preempt_disable();
if (tsk->thread.regs->msr & MSR_FP) {
#ifdef CONFIG_SMP
/*
 * This should only ever be called for current or
 * for a stopped child process. Since we save away
 * the FP register state on context switch on SMP,
 * the FP register state on context switch,
 * there is something wrong if a stopped child appears
 * to still have its FP state in the CPU registers.
 */
BUG_ON(tsk != current);
#endif
giveup_fpu_maybe_transactional(tsk);
giveup_fpu(tsk);
}
preempt_enable();
}
}
EXPORT_SYMBOL_GPL(flush_fp_to_thread);
#endif /* CONFIG_PPC_FPU */

void enable_kernel_fp(void)
{
WARN_ON(preemptible());

#ifdef CONFIG_SMP
if (current->thread.regs && (current->thread.regs->msr & MSR_FP))
giveup_fpu_maybe_transactional(current);
else
giveup_fpu(NULL); /* just enables FP for kernel */
#else
giveup_fpu_maybe_transactional(last_task_used_math);
#endif /* CONFIG_SMP */
msr_check_and_set(MSR_FP);

if (current->thread.regs && (current->thread.regs->msr & MSR_FP)) {
check_if_tm_restore_required(current);
__giveup_fpu(current);
}
}
EXPORT_SYMBOL(enable_kernel_fp);
#endif /* CONFIG_PPC_FPU */

#ifdef CONFIG_ALTIVEC
void giveup_altivec(struct task_struct *tsk)
{
check_if_tm_restore_required(tsk);

msr_check_and_set(MSR_VEC);
__giveup_altivec(tsk);
msr_check_and_clear(MSR_VEC);
}
EXPORT_SYMBOL(giveup_altivec);

void enable_kernel_altivec(void)
{
WARN_ON(preemptible());

#ifdef CONFIG_SMP
if (current->thread.regs && (current->thread.regs->msr & MSR_VEC))
giveup_altivec_maybe_transactional(current);
else
giveup_altivec_notask();
#else
giveup_altivec_maybe_transactional(last_task_used_altivec);
#endif /* CONFIG_SMP */
msr_check_and_set(MSR_VEC);

if (current->thread.regs && (current->thread.regs->msr & MSR_VEC)) {
check_if_tm_restore_required(current);
__giveup_altivec(current);
}
}
EXPORT_SYMBOL(enable_kernel_altivec);

@@ -192,10 +222,8 @@ void flush_altivec_to_thread(struct task_struct *tsk)
if (tsk->thread.regs) {
preempt_disable();
if (tsk->thread.regs->msr & MSR_VEC) {
#ifdef CONFIG_SMP
BUG_ON(tsk != current);
#endif
giveup_altivec_maybe_transactional(tsk);
giveup_altivec(tsk);
}
preempt_enable();
}

@@ -204,37 +232,43 @@ EXPORT_SYMBOL_GPL(flush_altivec_to_thread);
#endif /* CONFIG_ALTIVEC */

#ifdef CONFIG_VSX
void giveup_vsx(struct task_struct *tsk)
{
check_if_tm_restore_required(tsk);

msr_check_and_set(MSR_FP|MSR_VEC|MSR_VSX);
if (tsk->thread.regs->msr & MSR_FP)
__giveup_fpu(tsk);
if (tsk->thread.regs->msr & MSR_VEC)
__giveup_altivec(tsk);
__giveup_vsx(tsk);
msr_check_and_clear(MSR_FP|MSR_VEC|MSR_VSX);
}
EXPORT_SYMBOL(giveup_vsx);

void enable_kernel_vsx(void)
{
WARN_ON(preemptible());

#ifdef CONFIG_SMP
if (current->thread.regs && (current->thread.regs->msr & MSR_VSX))
giveup_vsx(current);
else
giveup_vsx(NULL); /* just enable vsx for kernel - force */
#else
giveup_vsx(last_task_used_vsx);
#endif /* CONFIG_SMP */
msr_check_and_set(MSR_FP|MSR_VEC|MSR_VSX);

if (current->thread.regs && (current->thread.regs->msr & MSR_VSX)) {
check_if_tm_restore_required(current);
if (current->thread.regs->msr & MSR_FP)
__giveup_fpu(current);
if (current->thread.regs->msr & MSR_VEC)
__giveup_altivec(current);
__giveup_vsx(current);
}
}
EXPORT_SYMBOL(enable_kernel_vsx);

void giveup_vsx(struct task_struct *tsk)
{
giveup_fpu_maybe_transactional(tsk);
giveup_altivec_maybe_transactional(tsk);
__giveup_vsx(tsk);
}
EXPORT_SYMBOL(giveup_vsx);

void flush_vsx_to_thread(struct task_struct *tsk)
{
if (tsk->thread.regs) {
preempt_disable();
if (tsk->thread.regs->msr & MSR_VSX) {
#ifdef CONFIG_SMP
BUG_ON(tsk != current);
#endif
giveup_vsx(tsk);
}
preempt_enable();

@@ -244,19 +278,26 @@ EXPORT_SYMBOL_GPL(flush_vsx_to_thread);
#endif /* CONFIG_VSX */

#ifdef CONFIG_SPE
void giveup_spe(struct task_struct *tsk)
{
check_if_tm_restore_required(tsk);

msr_check_and_set(MSR_SPE);
__giveup_spe(tsk);
msr_check_and_clear(MSR_SPE);
}
EXPORT_SYMBOL(giveup_spe);

void enable_kernel_spe(void)
{
WARN_ON(preemptible());

#ifdef CONFIG_SMP
if (current->thread.regs && (current->thread.regs->msr & MSR_SPE))
giveup_spe(current);
else
giveup_spe(NULL); /* just enable SPE for kernel - force */
#else
giveup_spe(last_task_used_spe);
#endif /* __SMP __ */
msr_check_and_set(MSR_SPE);

if (current->thread.regs && (current->thread.regs->msr & MSR_SPE)) {
check_if_tm_restore_required(current);
__giveup_spe(current);
}
}
EXPORT_SYMBOL(enable_kernel_spe);

@@ -265,9 +306,7 @@ void flush_spe_to_thread(struct task_struct *tsk)
if (tsk->thread.regs) {
preempt_disable();
if (tsk->thread.regs->msr & MSR_SPE) {
#ifdef CONFIG_SMP
BUG_ON(tsk != current);
#endif
tsk->thread.spefscr = mfspr(SPRN_SPEFSCR);
giveup_spe(tsk);
}

@@ -276,31 +315,81 @@ void flush_spe_to_thread(struct task_struct *tsk)
}
#endif /* CONFIG_SPE */

#ifndef CONFIG_SMP
/*
 * If we are doing lazy switching of CPU state (FP, altivec or SPE),
 * and the current task has some state, discard it.
 */
void discard_lazy_cpu_state(void)
static unsigned long msr_all_available;

static int __init init_msr_all_available(void)
{
preempt_disable();
if (last_task_used_math == current)
last_task_used_math = NULL;
#ifdef CONFIG_ALTIVEC
if (last_task_used_altivec == current)
last_task_used_altivec = NULL;
#endif /* CONFIG_ALTIVEC */
#ifdef CONFIG_VSX
if (last_task_used_vsx == current)
last_task_used_vsx = NULL;
#endif /* CONFIG_VSX */
#ifdef CONFIG_SPE
if (last_task_used_spe == current)
last_task_used_spe = NULL;
#ifdef CONFIG_PPC_FPU
msr_all_available |= MSR_FP;
#endif
preempt_enable();
#ifdef CONFIG_ALTIVEC
if (cpu_has_feature(CPU_FTR_ALTIVEC))
msr_all_available |= MSR_VEC;
#endif
#ifdef CONFIG_VSX
if (cpu_has_feature(CPU_FTR_VSX))
msr_all_available |= MSR_VSX;
#endif
#ifdef CONFIG_SPE
if (cpu_has_feature(CPU_FTR_SPE))
msr_all_available |= MSR_SPE;
#endif

return 0;
}
#endif /* CONFIG_SMP */
early_initcall(init_msr_all_available);

void giveup_all(struct task_struct *tsk)
{
unsigned long usermsr;

if (!tsk->thread.regs)
return;

usermsr = tsk->thread.regs->msr;

if ((usermsr & msr_all_available) == 0)
return;

msr_check_and_set(msr_all_available);

#ifdef CONFIG_PPC_FPU
if (usermsr & MSR_FP)
__giveup_fpu(tsk);
#endif
#ifdef CONFIG_ALTIVEC
if (usermsr & MSR_VEC)
__giveup_altivec(tsk);
#endif
#ifdef CONFIG_VSX
if (usermsr & MSR_VSX)
__giveup_vsx(tsk);
#endif
#ifdef CONFIG_SPE
if (usermsr & MSR_SPE)
__giveup_spe(tsk);
#endif

msr_check_and_clear(msr_all_available);
}
EXPORT_SYMBOL(giveup_all);

void flush_all_to_thread(struct task_struct *tsk)
{
if (tsk->thread.regs) {
preempt_disable();
BUG_ON(tsk != current);
giveup_all(tsk);

#ifdef CONFIG_SPE
if (tsk->thread.regs->msr & MSR_SPE)
tsk->thread.spefscr = mfspr(SPRN_SPEFSCR);
#endif

preempt_enable();
}
}
EXPORT_SYMBOL(flush_all_to_thread);

#ifdef CONFIG_PPC_ADV_DEBUG_REGS
void do_send_trap(struct pt_regs *regs, unsigned long address,

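A sketch of the intent behind giveup_all() above, with a hypothetical caller of our own: one call replaces the per-facility giveup_fpu()/giveup_altivec()/__giveup_vsx()/__giveup_spe() sequence, so the MSR is touched at most twice (one msr_check_and_set on the way in, one msr_check_and_clear on the way out) rather than once per facility.

static void demo_flush_state(struct task_struct *tsk)
{
	preempt_disable();	/* the live register state must stay on this CPU */
	giveup_all(tsk);	/* saves FP/VMX/VSX/SPE in one MSR round trip */
	preempt_enable();
}
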
@@ -744,13 +833,15 @@ void restore_tm_state(struct pt_regs *regs)
msr_diff = current->thread.ckpt_regs.msr & ~regs->msr;
msr_diff &= MSR_FP | MSR_VEC | MSR_VSX;
if (msr_diff & MSR_FP) {
fp_enable();
msr_check_and_set(MSR_FP);
load_fp_state(&current->thread.fp_state);
msr_check_and_clear(MSR_FP);
regs->msr |= current->thread.fpexc_mode;
}
if (msr_diff & MSR_VEC) {
vec_enable();
msr_check_and_set(MSR_VEC);
load_vr_state(&current->thread.vr_state);
msr_check_and_clear(MSR_VEC);
}
regs->msr |= msr_diff;
}

@@ -760,6 +851,73 @@ void restore_tm_state(struct pt_regs *regs)
#define __switch_to_tm(prev)
#endif /* CONFIG_PPC_TRANSACTIONAL_MEM */

static inline void save_sprs(struct thread_struct *t)
{
#ifdef CONFIG_ALTIVEC
if (cpu_has_feature(CPU_FTR_ALTIVEC))
t->vrsave = mfspr(SPRN_VRSAVE);
#endif
#ifdef CONFIG_PPC_BOOK3S_64
if (cpu_has_feature(CPU_FTR_DSCR))
t->dscr = mfspr(SPRN_DSCR);

if (cpu_has_feature(CPU_FTR_ARCH_207S)) {
t->bescr = mfspr(SPRN_BESCR);
t->ebbhr = mfspr(SPRN_EBBHR);
t->ebbrr = mfspr(SPRN_EBBRR);

t->fscr = mfspr(SPRN_FSCR);

/*
 * Note that the TAR is not available for use in the kernel.
 * (To provide this, the TAR should be backed up/restored on
 * exception entry/exit instead, and be in pt_regs. FIXME,
 * this should be in pt_regs anyway (for debug).)
 */
t->tar = mfspr(SPRN_TAR);
}
#endif
}

static inline void restore_sprs(struct thread_struct *old_thread,
struct thread_struct *new_thread)
{
#ifdef CONFIG_ALTIVEC
if (cpu_has_feature(CPU_FTR_ALTIVEC) &&
old_thread->vrsave != new_thread->vrsave)
mtspr(SPRN_VRSAVE, new_thread->vrsave);
#endif
#ifdef CONFIG_PPC_BOOK3S_64
if (cpu_has_feature(CPU_FTR_DSCR)) {
u64 dscr = get_paca()->dscr_default;
u64 fscr = old_thread->fscr & ~FSCR_DSCR;

if (new_thread->dscr_inherit) {
dscr = new_thread->dscr;
fscr |= FSCR_DSCR;
}

if (old_thread->dscr != dscr)
mtspr(SPRN_DSCR, dscr);

if (old_thread->fscr != fscr)
mtspr(SPRN_FSCR, fscr);
}

if (cpu_has_feature(CPU_FTR_ARCH_207S)) {
if (old_thread->bescr != new_thread->bescr)
mtspr(SPRN_BESCR, new_thread->bescr);
if (old_thread->ebbhr != new_thread->ebbhr)
mtspr(SPRN_EBBHR, new_thread->ebbhr);
if (old_thread->ebbrr != new_thread->ebbrr)
mtspr(SPRN_EBBRR, new_thread->ebbrr);

if (old_thread->tar != new_thread->tar)
mtspr(SPRN_TAR, new_thread->tar);
}
#endif
}

struct task_struct *__switch_to(struct task_struct *prev,
struct task_struct *new)
{

@@ -769,103 +927,11 @@ struct task_struct *__switch_to(struct task_struct *prev,
struct ppc64_tlb_batch *batch;
#endif

WARN_ON(!irqs_disabled());

/* Back up the TAR and DSCR across context switches.
 * Note that the TAR is not available for use in the kernel. (To
 * provide this, the TAR should be backed up/restored on exception
 * entry/exit instead, and be in pt_regs. FIXME, this should be in
 * pt_regs anyway (for debug).)
 * Save the TAR and DSCR here before we do treclaim/trecheckpoint as
 * these will change them.
 */
save_early_sprs(&prev->thread);

__switch_to_tm(prev);

#ifdef CONFIG_SMP
/* avoid complexity of lazy save/restore of fpu
 * by just saving it every time we switch out if
 * this task used the fpu during the last quantum.
 *
 * If it tries to use the fpu again, it'll trap and
 * reload its fp regs. So we don't have to do a restore
 * every switch, just a save.
 * -- Cort
 */
if (prev->thread.regs && (prev->thread.regs->msr & MSR_FP))
giveup_fpu(prev);
#ifdef CONFIG_ALTIVEC
/*
 * If the previous thread used altivec in the last quantum
 * (thus changing altivec regs) then save them.
 * We used to check the VRSAVE register but not all apps
 * set it, so we don't rely on it now (and in fact we need
 * to save & restore VSCR even if VRSAVE == 0). -- paulus
 *
 * On SMP we always save/restore altivec regs just to avoid the
 * complexity of changing processors.
 * -- Cort
 */
if (prev->thread.regs && (prev->thread.regs->msr & MSR_VEC))
giveup_altivec(prev);
#endif /* CONFIG_ALTIVEC */
#ifdef CONFIG_VSX
if (prev->thread.regs && (prev->thread.regs->msr & MSR_VSX))
/* VMX and FPU registers are already save here */
__giveup_vsx(prev);
#endif /* CONFIG_VSX */
#ifdef CONFIG_SPE
/*
 * If the previous thread used spe in the last quantum
 * (thus changing spe regs) then save them.
 *
 * On SMP we always save/restore spe regs just to avoid the
 * complexity of changing processors.
 */
if ((prev->thread.regs && (prev->thread.regs->msr & MSR_SPE)))
giveup_spe(prev);
#endif /* CONFIG_SPE */

#else /* CONFIG_SMP */
#ifdef CONFIG_ALTIVEC
/* Avoid the trap. On smp this this never happens since
 * we don't set last_task_used_altivec -- Cort
 */
if (new->thread.regs && last_task_used_altivec == new)
new->thread.regs->msr |= MSR_VEC;
#endif /* CONFIG_ALTIVEC */
#ifdef CONFIG_VSX
if (new->thread.regs && last_task_used_vsx == new)
new->thread.regs->msr |= MSR_VSX;
#endif /* CONFIG_VSX */
#ifdef CONFIG_SPE
/* Avoid the trap. On smp this this never happens since
 * we don't set last_task_used_spe
 */
if (new->thread.regs && last_task_used_spe == new)
new->thread.regs->msr |= MSR_SPE;
#endif /* CONFIG_SPE */

#endif /* CONFIG_SMP */

#ifdef CONFIG_PPC_ADV_DEBUG_REGS
switch_booke_debug_regs(&new->thread.debug);
#else
/*
 * For PPC_BOOK3S_64, we use the hw-breakpoint interfaces that would
 * schedule DABR
 */
#ifndef CONFIG_HAVE_HW_BREAKPOINT
if (unlikely(!hw_brk_match(this_cpu_ptr(&current_brk), &new->thread.hw_brk)))
__set_breakpoint(&new->thread.hw_brk);
#endif /* CONFIG_HAVE_HW_BREAKPOINT */
#endif

new_thread = &new->thread;
old_thread = &current->thread;

WARN_ON(!irqs_disabled());

#ifdef CONFIG_PPC64
/*
 * Collect processor utilization data per process

@@ -890,6 +956,30 @@ struct task_struct *__switch_to(struct task_struct *prev,
}
#endif /* CONFIG_PPC_BOOK3S_64 */

#ifdef CONFIG_PPC_ADV_DEBUG_REGS
switch_booke_debug_regs(&new->thread.debug);
#else
/*
 * For PPC_BOOK3S_64, we use the hw-breakpoint interfaces that would
 * schedule DABR
 */
#ifndef CONFIG_HAVE_HW_BREAKPOINT
if (unlikely(!hw_brk_match(this_cpu_ptr(&current_brk), &new->thread.hw_brk)))
__set_breakpoint(&new->thread.hw_brk);
#endif /* CONFIG_HAVE_HW_BREAKPOINT */
#endif

/*
 * We need to save SPRs before treclaim/trecheckpoint as these will
 * change a number of them.
 */
save_sprs(&prev->thread);

__switch_to_tm(prev);

/* Save FPU, Altivec, VSX and SPE state */
giveup_all(prev);

/*
 * We can't take a PMU exception inside _switch() since there is a
 * window where the kernel stack SLB and the kernel stack are out

@@ -899,6 +989,15 @@ struct task_struct *__switch_to(struct task_struct *prev,

tm_recheckpoint_new_task(new);

/*
 * Call restore_sprs() before calling _switch(). If we move it after
 * _switch() then we miss out on calling it for new tasks. The reason
 * for this is we manually create a stack frame for new tasks that
 * directly returns through ret_from_fork() or
 * ret_from_kernel_thread(). See copy_thread() for details.
 */
restore_sprs(old_thread, new_thread);

last = _switch(old_thread, new_thread);

#ifdef CONFIG_PPC_BOOK3S_64

@@ -952,10 +1051,12 @@ static void show_instructions(struct pt_regs *regs)
printk("\n");
}

static struct regbit {
struct regbit {
unsigned long bit;
const char *name;
} msr_bits[] = {
};

static struct regbit msr_bits[] = {
#if defined(CONFIG_PPC64) && !defined(CONFIG_BOOKE)
{MSR_SF, "SF"},
{MSR_HV, "HV"},

@@ -985,16 +1086,49 @@ static struct regbit {
{0, NULL}
};

static void printbits(unsigned long val, struct regbit *bits)
static void print_bits(unsigned long val, struct regbit *bits, const char *sep)
{
const char *sep = "";
const char *s = "";

printk("<");
for (; bits->bit; ++bits)
if (val & bits->bit) {
printk("%s%s", sep, bits->name);
sep = ",";
printk("%s%s", s, bits->name);
s = sep;
}
}

#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
static struct regbit msr_tm_bits[] = {
{MSR_TS_T, "T"},
{MSR_TS_S, "S"},
{MSR_TM, "E"},
{0, NULL}
};

static void print_tm_bits(unsigned long val)
{
/*
 * This only prints something if at least one of the TM bits is set.
 * Inside the TM[], the output means:
 * E: Enabled (bit 32)
 * S: Suspended (bit 33)
 * T: Transactional (bit 34)
 */
if (val & (MSR_TM | MSR_TS_S | MSR_TS_T)) {
printk(",TM[");
print_bits(val, msr_tm_bits, "");
printk("]");
}
}
#else
static void print_tm_bits(unsigned long val) {}
#endif

static void print_msr_bits(unsigned long val)
{
printk("<");
print_bits(val, msr_bits, ",");
print_tm_bits(val);
printk(">");
}

@@ -1019,7 +1153,7 @@ void show_regs(struct pt_regs * regs)
printk("REGS: %p TRAP: %04lx %s (%s)\n",
regs, regs->trap, print_tainted(), init_utsname()->release);
printk("MSR: "REG" ", regs->msr);
printbits(regs->msr, msr_bits);
print_msr_bits(regs->msr);
printk(" CR: %08lx XER: %08lx\n", regs->ccr, regs->xer);
trap = TRAP(regs);
if ((regs->trap != 0xc00) && cpu_has_feature(CPU_FTR_CFAR))

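A self-contained userspace re-creation of the separator logic introduced above, to show why the extra "sep" parameter matters: the outer MSR list is comma-separated, while the nested TM[] group passes an empty separator so its flags print as a compact run. The bit values and names here are demo placeholders, not the real MSR layout.

#include <stdio.h>

struct regbit { unsigned long bit; const char *name; };

static void print_bits(unsigned long val, const struct regbit *bits,
		       const char *sep)
{
	const char *s = "";

	for (; bits->bit; ++bits)
		if (val & bits->bit) {
			printf("%s%s", s, bits->name);
			s = sep;	/* only separate after the first hit */
		}
}

int main(void)
{
	static const struct regbit demo[] = {
		{1UL << 0, "EE"}, {1UL << 1, "PR"}, {1UL << 2, "FP"}, {0, NULL}
	};

	printf("<");
	print_bits(5, demo, ",");	/* bits 0 and 2 set: prints "EE,FP" */
	printf(">\n");
	return 0;
}
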
@@ -1061,13 +1195,10 @@ void show_regs(struct pt_regs * regs)

void exit_thread(void)
{
discard_lazy_cpu_state();
}

void flush_thread(void)
{
discard_lazy_cpu_state();

#ifdef CONFIG_HAVE_HW_BREAKPOINT
flush_ptrace_hw_breakpoint(current);
#else /* CONFIG_HAVE_HW_BREAKPOINT */

@@ -1086,10 +1217,7 @@ release_thread(struct task_struct *t)
 */
int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src)
{
flush_fp_to_thread(src);
flush_altivec_to_thread(src);
flush_vsx_to_thread(src);
flush_spe_to_thread(src);
flush_all_to_thread(src);
/*
 * Flush TM state out so we can copy it. __switch_to_tm() does this
 * flush but it removes the checkpointed state from the current CPU and

@@ -1212,7 +1340,7 @@ int copy_thread(unsigned long clone_flags, unsigned long usp,
#ifdef CONFIG_PPC64
if (cpu_has_feature(CPU_FTR_DSCR)) {
p->thread.dscr_inherit = current->thread.dscr_inherit;
p->thread.dscr = current->thread.dscr;
p->thread.dscr = mfspr(SPRN_DSCR);
}
if (cpu_has_feature(CPU_FTR_HAS_PPR))
p->thread.ppr = INIT_PPR;

@@ -1305,7 +1433,6 @@ void start_thread(struct pt_regs *regs, unsigned long start, unsigned long sp)
regs->msr = MSR_USER32;
}
#endif
discard_lazy_cpu_state();
#ifdef CONFIG_VSX
current->thread.used_vsr = 0;
#endif

@@ -389,6 +389,7 @@ static void __init prom_printf(const char *format, ...)
break;
}
}
va_end(args);
}

@@ -60,6 +60,7 @@ struct pt_regs_offset {
#define STR(s) #s /* convert to string */
#define REG_OFFSET_NAME(r) {.name = #r, .offset = offsetof(struct pt_regs, r)}
#define GPR_OFFSET_NAME(num) \
{.name = STR(r##num), .offset = offsetof(struct pt_regs, gpr[num])}, \
{.name = STR(gpr##num), .offset = offsetof(struct pt_regs, gpr[num])}
#define REG_OFFSET_END {.name = NULL, .offset = 0}

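This is the hunk that adds the rN aliases to the pt_regs_offset table: each GPR_OFFSET_NAME() now emits two entries, "rN" and "gprN", pointing at the same offset. A self-contained demo of the token-pasting trick (the struct here is a stand-in for pt_regs):

#include <stddef.h>
#include <stdio.h>

struct pt_regs_demo { unsigned long gpr[32]; };	/* stand-in for pt_regs */

struct pt_regs_offset { const char *name; int offset; };

#define STR(s) #s
#define GPR_OFFSET_NAME(num) \
	{.name = STR(r##num), .offset = offsetof(struct pt_regs_demo, gpr[num])}, \
	{.name = STR(gpr##num), .offset = offsetof(struct pt_regs_demo, gpr[num])}

static const struct pt_regs_offset demo_table[] = {
	GPR_OFFSET_NAME(0),	/* yields both "r0" and "gpr0" */
	GPR_OFFSET_NAME(1),	/* yields both "r1" and "gpr1" */
	{.name = NULL, .offset = 0},
};

int main(void)
{
	for (const struct pt_regs_offset *p = demo_table; p->name; p++)
		printf("%-5s -> %d\n", p->name, p->offset);
	return 0;
}
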
@@ -44,6 +44,9 @@
#include <asm/mmu.h>
#include <asm/topology.h>

/* This is here deliberately so it's only used in this file */
void enter_rtas(unsigned long);

struct rtas_t rtas = {
.lock = __ARCH_SPIN_LOCK_UNLOCKED
};

@@ -93,21 +96,13 @@ static void unlock_rtas(unsigned long flags)
 */
static void call_rtas_display_status(unsigned char c)
{
struct rtas_args *args = &rtas.args;
unsigned long s;

if (!rtas.base)
return;

s = lock_rtas();

args->token = cpu_to_be32(10);
args->nargs = cpu_to_be32(1);
args->nret = cpu_to_be32(1);
args->rets = &(args->args[1]);
args->args[0] = cpu_to_be32(c);

enter_rtas(__pa(args));

rtas_call_unlocked(&rtas.args, 10, 1, 1, NULL, c);
unlock_rtas(s);
}

@@ -418,6 +413,36 @@ static char *__fetch_rtas_last_error(char *altbuf)
#define get_errorlog_buffer() NULL
#endif

static void
va_rtas_call_unlocked(struct rtas_args *args, int token, int nargs, int nret,
va_list list)
{
int i;

args->token = cpu_to_be32(token);
args->nargs = cpu_to_be32(nargs);
args->nret = cpu_to_be32(nret);
args->rets = &(args->args[nargs]);

for (i = 0; i < nargs; ++i)
args->args[i] = cpu_to_be32(va_arg(list, __u32));

for (i = 0; i < nret; ++i)
args->rets[i] = 0;

enter_rtas(__pa(args));
}

void rtas_call_unlocked(struct rtas_args *args, int token, int nargs, int nret, ...)
{
va_list list;

va_start(list, nret);
va_rtas_call_unlocked(args, token, nargs, nret, list);
va_end(list);
}

int rtas_call(int token, int nargs, int nret, int *outputs, ...)
{
va_list list;

@@ -431,22 +456,14 @@ int rtas_call(int token, int nargs, int nret, int *outputs, ...)
return -1;

s = lock_rtas();

/* We use the global rtas args buffer */
rtas_args = &rtas.args;

rtas_args->token = cpu_to_be32(token);
rtas_args->nargs = cpu_to_be32(nargs);
rtas_args->nret = cpu_to_be32(nret);
rtas_args->rets = &(rtas_args->args[nargs]);
va_start(list, outputs);
for (i = 0; i < nargs; ++i)
rtas_args->args[i] = cpu_to_be32(va_arg(list, __u32));
va_rtas_call_unlocked(rtas_args, token, nargs, nret, list);
va_end(list);

for (i = 0; i < nret; ++i)
rtas_args->rets[i] = 0;

enter_rtas(__pa(rtas_args));

/* A -1 return code indicates that the last command couldn't
 be completed due to a hardware error. */
if (be32_to_cpu(rtas_args->rets[0]) == -1)

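The refactoring above hinges on standard va_list forwarding: the variadic entry points (rtas_call, rtas_call_unlocked) capture the arguments once and hand the va_list to a single worker. A minimal userspace re-creation of that pattern, with demo names of our own:

#include <stdarg.h>
#include <stdio.h>

/* the "unlocked" worker takes an already-captured va_list */
static void va_print_args(int nargs, va_list list)
{
	for (int i = 0; i < nargs; i++)
		printf("arg[%d] = %d\n", i, va_arg(list, int));
}

/* the variadic wrapper just captures and forwards, as rtas_call() now does */
static void print_args(int nargs, ...)
{
	va_list list;

	va_start(list, nargs);
	va_print_args(nargs, list);
	va_end(list);
}

int main(void)
{
	print_args(3, 10, 20, 30);
	return 0;
}
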
@@ -458,7 +458,7 @@ static int save_user_regs(struct pt_regs *regs, struct mcontext __user *frame,
 * contains valid data
 */
if (current->thread.used_vsr && ctx_has_vsx_region) {
__giveup_vsx(current);
flush_vsx_to_thread(current);
if (copy_vsx_to_user(&frame->mc_vsregs, current))
return 1;
msr |= MSR_VSX;

@@ -606,7 +606,7 @@ static int save_tm_user_regs(struct pt_regs *regs,
 * contains valid data
 */
if (current->thread.used_vsr) {
__giveup_vsx(current);
flush_vsx_to_thread(current);
if (copy_vsx_to_user(&frame->mc_vsregs, current))
return 1;
if (msr & MSR_VSX) {

@@ -687,15 +687,6 @@ static long restore_user_regs(struct pt_regs *regs,
if (sig)
regs->msr = (regs->msr & ~MSR_LE) | (msr & MSR_LE);

/*
 * Do this before updating the thread state in
 * current->thread.fpr/vr/evr. That way, if we get preempted
 * and another task grabs the FPU/Altivec/SPE, it won't be
 * tempted to save the current CPU state into the thread_struct
 * and corrupt what we are writing there.
 */
discard_lazy_cpu_state();

#ifdef CONFIG_ALTIVEC
/*
 * Force the process to reload the altivec registers from

@@ -798,15 +789,6 @@ static long restore_tm_user_regs(struct pt_regs *regs,
/* Restore the previous little-endian mode */
regs->msr = (regs->msr & ~MSR_LE) | (msr & MSR_LE);

/*
 * Do this before updating the thread state in
 * current->thread.fpr/vr/evr. That way, if we get preempted
 * and another task grabs the FPU/Altivec/SPE, it won't be
 * tempted to save the current CPU state into the thread_struct
 * and corrupt what we are writing there.
 */
discard_lazy_cpu_state();

#ifdef CONFIG_ALTIVEC
regs->msr &= ~MSR_VEC;
if (msr & MSR_VEC) {

@@ -147,7 +147,7 @@ static long setup_sigcontext(struct sigcontext __user *sc, struct pt_regs *regs,
 * VMX data.
 */
if (current->thread.used_vsr && ctx_has_vsx_region) {
__giveup_vsx(current);
flush_vsx_to_thread(current);
v_regs += ELF_NVRREG;
err |= copy_vsx_to_user(v_regs, current);
/* set MSR_VSX in the MSR value in the frame to

@@ -270,7 +270,7 @@ static long setup_tm_sigcontexts(struct sigcontext __user *sc,
 * VMX data.
 */
if (current->thread.used_vsr) {
__giveup_vsx(current);
flush_vsx_to_thread(current);
v_regs += ELF_NVRREG;
tm_v_regs += ELF_NVRREG;

@@ -349,15 +349,6 @@ static long restore_sigcontext(struct pt_regs *regs, sigset_t *set, int sig,
if (set != NULL)
err |= __get_user(set->sig[0], &sc->oldmask);

/*
 * Do this before updating the thread state in
 * current->thread.fpr/vr. That way, if we get preempted
 * and another task grabs the FPU/Altivec, it won't be
 * tempted to save the current CPU state into the thread_struct
 * and corrupt what we are writing there.
 */
discard_lazy_cpu_state();

/*
 * Force reload of FP/VEC.
 * This has to be done before copying stuff into current->thread.fpr/vr

@@ -468,15 +459,6 @@ static long restore_tm_sigcontexts(struct pt_regs *regs,
err |= __get_user(regs->dsisr, &sc->gp_regs[PT_DSISR]);
err |= __get_user(regs->result, &sc->gp_regs[PT_RESULT]);

/*
 * Do this before updating the thread state in
 * current->thread.fpr/vr. That way, if we get preempted
 * and another task grabs the FPU/Altivec, it won't be
 * tempted to save the current CPU state into the thread_struct
 * and corrupt what we are writing there.
 */
discard_lazy_cpu_state();

/*
 * Force reload of FP/VEC.
 * This has to be done before copying stuff into current->thread.fpr/vr

@@ -61,3 +61,10 @@ void save_stack_trace_tsk(struct task_struct *tsk, struct stack_trace *trace)
save_context_stack(trace, tsk->thread.ksp, tsk, 0);
}
EXPORT_SYMBOL_GPL(save_stack_trace_tsk);

void
save_stack_trace_regs(struct pt_regs *regs, struct stack_trace *trace)
{
save_context_stack(trace, regs->gpr[1], current, 0);
}
EXPORT_SYMBOL_GPL(save_stack_trace_regs);

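save_stack_trace_regs() is what lets a kprobe handler unwind from the probed context (regs->gpr[1] is the stack pointer at the probe site) rather than from the tracer's own stack. A hedged sketch of a consumer, as a minimal module; the probed symbol is a hypothetical choice, not something this series prescribes:

#include <linux/module.h>
#include <linux/kprobes.h>
#include <linux/stacktrace.h>

#define DEMO_DEPTH 16
static unsigned long demo_entries[DEMO_DEPTH];

/* runs in the probed context: regs describes the interrupted frame */
static int demo_pre(struct kprobe *p, struct pt_regs *regs)
{
	struct stack_trace trace = {
		.entries	= demo_entries,
		.max_entries	= DEMO_DEPTH,
	};

	save_stack_trace_regs(regs, &trace);	/* walk from regs->gpr[1] */
	pr_info("demo: captured %u frames\n", trace.nr_entries);
	return 0;
}

static struct kprobe demo_kp = {
	.symbol_name	= "do_fork",	/* hypothetical probe point */
	.pre_handler	= demo_pre,
};

static int __init demo_init(void) { return register_kprobe(&demo_kp); }
static void __exit demo_exit(void) { unregister_kprobe(&demo_kp); }
module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");
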
@@ -20,9 +20,7 @@ void save_processor_state(void)
 * flush out all the special registers so we don't need
 * to save them in the snapshot
 */
flush_fp_to_thread(current);
flush_altivec_to_thread(current);
flush_spe_to_thread(current);
flush_all_to_thread(current);

#ifdef CONFIG_PPC64
hard_irq_disable();