[PATCH] powerpc: Merge cacheflush.h and cache.h

The ppc32 and ppc64 versions of cacheflush.h were almost identical.
The two versions of cache.h are fairly similar, except for a bunch of
register definitions in the ppc32 version which probably belong
elsewhere.  This patch, therefore, merges both headers.  Notable
points:
	- There are several functions in cacheflush.h which exist only
on ppc32 or only on ppc64.  These are handled by #ifdef for now, but
they should probably be consolidated, along with the actual code
behind them, later.
	- Confusingly, both ppc32 and ppc64 have a
flush_dcache_range(), but they're subtly different: it uses dcbf on
ppc32 but dcbst on ppc64, and ppc64 has a flush_inval_dcache_range()
which uses dcbf.  These too should be merged and consolidated later.
	- Also, flush_dcache_range() was defined in cacheflush.h on
ppc64, and in cache.h on ppc32.  In the merged version it's in
cacheflush.h.
	- On ppc32, flush_icache_range() is a normal function from
misc.S.  On ppc64, it was a wrapper, testing a feature bit before
calling __flush_icache_range() which does the actual flush.  This
patch takes the ppc64 approach, which amounts to no change on ppc32,
since CPU_FTR_COHERENT_ICACHE will never be set there, but does mean
renaming flush_icache_range() to __flush_icache_range() in
arch/ppc/kernel/misc.S and arch/powerpc/kernel/misc_32.S.
	- The PReP register info from asm-ppc/cache.h has moved to
arch/ppc/platforms/prep_setup.c.
	- The 8xx register info from asm-ppc/cache.h has moved to a
new asm-powerpc/reg_8xx.h, included from reg.h.
	- flush_dcache_all() was defined on ppc32 (only), but was
never called (although it was exported).  Thus this patch removes it
from cacheflush.h and from ARCH=powerpc (misc_32.S) entirely.  It's
left in ARCH=ppc for now, with the prototype moved to ppc_ksyms.c.

Built for Walnut (ARCH=ppc) and 32-bit multiplatform (pmac, CHRP and
PReP ARCH=ppc; pmac and CHRP ARCH=powerpc).  Built for 32-bit powermac
(ARCH=ppc and ARCH=powerpc).  Built and booted on POWER5 LPAR
(ARCH=powerpc and ARCH=ppc64).  Built and booted on G5
(ARCH=powerpc).

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2005-11-10 08:50:16 +08:00

/*
 * Contains register definitions common to PowerPC 8xx CPUs.  Notice
 */
#ifndef _ASM_POWERPC_REG_8xx_H
#define _ASM_POWERPC_REG_8xx_H

#include <asm/mmu.h>

/* Cache control on the MPC8xx is provided through some additional
 * special purpose registers.
 */
#define SPRN_IC_CST	560	/* Instruction cache control/status */
#define SPRN_IC_ADR	561	/* Address needed for some commands */
#define SPRN_IC_DAT	562	/* Read-only data register */
#define SPRN_DC_CST	568	/* Data cache control/status */
#define SPRN_DC_ADR	569	/* Address needed for some commands */
#define SPRN_DC_DAT	570	/* Read-only data register */

/* Misc Debug */
#define SPRN_DPDR	630
#define SPRN_MI_CAM	816
#define SPRN_MI_RAM0	817
#define SPRN_MI_RAM1	818
#define SPRN_MD_CAM	824
#define SPRN_MD_RAM0	825
#define SPRN_MD_RAM1	826

/* Special MSR manipulation registers */
#define SPRN_EIE	80	/* External interrupt enable (EE=1, RI=1) */
#define SPRN_EID	81	/* External interrupt disable (EE=0, RI=1) */
powerpc/8xx: Perf events on PPC 8xx

This patch has been reworked since the RFC version.  In the RFC, this
patch was preceded by a patch clearing MSR RI for all PPC32 at all
times in exception prologs.  Now MSR RI clearing is done only when
this 8xx perf events functionality is compiled in; it is therefore
limited to the 8xx and merged into this patch.

Other main changes take into account the detailed review from Peter
Zijlstra.  The instructions counter has been reworked to behave as a
free-running counter, like the three other counters.

The 8xx has no PMU, but some events can be emulated by other means.
This patch implements the following events (as reported by
'perf list'):
	cpu-cycles OR cycles		[Hardware event]
	instructions			[Hardware event]
	dTLB-load-misses		[Hardware cache event]
	iTLB-load-misses		[Hardware cache event]

The 'cycles' event is implemented using the timebase clock.  The
timebase runs at the CPU clock divided by 16, so the number of cycles
is approximately 16 times the number of TB ticks.

On the 8xx, TLB misses are handled by software.  It is therefore easy
to count all TLB misses each time the TLB miss exception is taken.

'instructions' is calculated using the instruction watchpoint counter.
This patch sets counter A to count instructions at any address greater
than 0, so we count all instructions executed while the MSR RI bit is
set.  The counter is set to its maximum, 0xffff.  Every 65535
instructions, the debug instruction breakpoint exception fires.  The
exception handler increments a counter in memory, which then
represents the upper part of the instruction counter.  We therefore
end up with a 48-bit counter.  To avoid unnecessary overhead while no
perf event is active, this counter is started when the first event
referring to it is added, and stopped when the last event referring
to it is deleted.  To properly support breakpoint exceptions, the MSR
RI bit has to be unset in exception epilogs, to avoid breakpoint
exceptions during the critical sections where changes to SRR0 and
SRR1 would be problematic.

All counters are handled as free-running counters.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Scott Wood <oss@buserror.net>
2016-12-15 20:42:18 +08:00
#define SPRN_NRI 82 /* Non recoverable interrupt (EE=0, RI=0) */
powerpc/8xx: Implement hw_breakpoint

This patch implements HW breakpoints on the 8xx.  The 8xx has the
capability to manage HW breakpoints, but it works slightly differently
from BOOK3S:
1/ A breakpoint match doesn't trigger a DSI exception but a dedicated
   data breakpoint exception.
2/ The breakpoint fires after the instruction has completed, so there
   is no need to single-step or emulate the instruction.
3/ The matched address is not set in DAR but in BAR.
4/ The DABR register doesn't exist; instead we have the LCTRL1,
   LCTRL2 and CMPx registers.
5/ A match on one comparator covers a single word, not a double word.

The patch does the following:
1/ Prepares the dedicated registers in the call to __set_dabr().  To
   emulate the double-word handling of BOOK3S, comparator E is set to
   the DABR address value and comparator F to that address + 4; then
   breakpoint 1 is set to match comparator E or F.
2/ Skips the single-stepping stage when compiled for CONFIG_PPC_8xx.
3/ Implements the exception.  In that exception, the matched address
   is taken from SPRN_BAR and managed as if it came from SPRN_DAR.
4/ The I/D TLB error exception routines perform a tlbie on bad TLBs.
   That tlbie triggers the breakpoint exception when performed on the
   breakpoint address, so the routine returns if the match comes from
   one of those two tlbies.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Scott Wood <oss@buserror.net>
2016-11-29 16:52:15 +08:00
/* Debug registers */
#define SPRN_CMPA	144
#define SPRN_COUNTA	150
#define SPRN_CMPE	152
#define SPRN_CMPF	153
#define SPRN_LCTRL1	156
#define SPRN_LCTRL2	157
#define SPRN_ICTRL	158
#define SPRN_BAR	159

/* Commands.  Only the first few are available to the instruction cache.
 */
#define IDC_ENABLE	0x02000000	/* Cache enable */
#define IDC_DISABLE	0x04000000	/* Cache disable */
#define IDC_LDLCK	0x06000000	/* Load and lock */
#define IDC_UNLINE	0x08000000	/* Unlock line */
#define IDC_UNALL	0x0a000000	/* Unlock all */
#define IDC_INVALL	0x0c000000	/* Invalidate all */

#define DC_FLINE	0x0e000000	/* Flush data cache line */
#define DC_SFWT		0x01000000	/* Set forced writethrough mode */
#define DC_CFWT		0x03000000	/* Clear forced writethrough mode */
#define DC_SLES		0x05000000	/* Set little endian swap mode */
#define DC_CLES		0x07000000	/* Clear little endian swap mode */

/* Status */
#define IDC_ENABLED	0x80000000	/* Cache is enabled */
#define IDC_CERR1	0x00200000	/* Cache error 1 */
#define IDC_CERR2	0x00100000	/* Cache error 2 */
#define IDC_CERR3	0x00080000	/* Cache error 3 */

#define DC_DFWT		0x40000000	/* Data cache is forced write through */
#define DC_LES		0x20000000	/* Caches are little endian mode */

#ifdef CONFIG_8xx_CPU6
#define do_mtspr_cpu6(rn, rn_addr, v)	\
	do {								\
		int _reg_cpu6 = rn_addr, _tmp_cpu6;			\
		asm volatile("stw %0, %1;"				\
			     "lwz %0, %1;"				\
			     "mtspr " __stringify(rn) ",%2" :		\
			     : "r" (_reg_cpu6), "m"(_tmp_cpu6),		\
			       "r" ((unsigned long)(v))			\
			     : "memory");				\
	} while (0)

#define do_mtspr(rn, v)	asm volatile("mtspr " __stringify(rn) ",%0" :	\
				     : "r" ((unsigned long)(v))		\
				     : "memory")
#define mtspr(rn, v) \
	do {								\
		if (rn == SPRN_IMMR)					\
			do_mtspr_cpu6(rn, 0x3d30, v);			\
		else if (rn == SPRN_IC_CST)				\
			do_mtspr_cpu6(rn, 0x2110, v);			\
		else if (rn == SPRN_IC_ADR)				\
			do_mtspr_cpu6(rn, 0x2310, v);			\
		else if (rn == SPRN_IC_DAT)				\
			do_mtspr_cpu6(rn, 0x2510, v);			\
		else if (rn == SPRN_DC_CST)				\
			do_mtspr_cpu6(rn, 0x3110, v);			\
		else if (rn == SPRN_DC_ADR)				\
			do_mtspr_cpu6(rn, 0x3310, v);			\
		else if (rn == SPRN_DC_DAT)				\
			do_mtspr_cpu6(rn, 0x3510, v);			\
		else if (rn == SPRN_MI_CTR)				\
			do_mtspr_cpu6(rn, 0x2180, v);			\
		else if (rn == SPRN_MI_AP)				\
			do_mtspr_cpu6(rn, 0x2580, v);			\
		else if (rn == SPRN_MI_EPN)				\
			do_mtspr_cpu6(rn, 0x2780, v);			\
		else if (rn == SPRN_MI_TWC)				\
			do_mtspr_cpu6(rn, 0x2b80, v);			\
		else if (rn == SPRN_MI_RPN)				\
			do_mtspr_cpu6(rn, 0x2d80, v);			\
		else if (rn == SPRN_MI_CAM)				\
			do_mtspr_cpu6(rn, 0x2190, v);			\
		else if (rn == SPRN_MI_RAM0)				\
			do_mtspr_cpu6(rn, 0x2390, v);			\
		else if (rn == SPRN_MI_RAM1)				\
			do_mtspr_cpu6(rn, 0x2590, v);			\
		else if (rn == SPRN_MD_CTR)				\
			do_mtspr_cpu6(rn, 0x3180, v);			\
		else if (rn == SPRN_M_CASID)				\
			do_mtspr_cpu6(rn, 0x3380, v);			\
		else if (rn == SPRN_MD_AP)				\
			do_mtspr_cpu6(rn, 0x3580, v);			\
		else if (rn == SPRN_MD_EPN)				\
			do_mtspr_cpu6(rn, 0x3780, v);			\
		else if (rn == SPRN_M_TWB)				\
			do_mtspr_cpu6(rn, 0x3980, v);			\
		else if (rn == SPRN_MD_TWC)				\
			do_mtspr_cpu6(rn, 0x3b80, v);			\
		else if (rn == SPRN_MD_RPN)				\
			do_mtspr_cpu6(rn, 0x3d80, v);			\
		else if (rn == SPRN_M_TW)				\
			do_mtspr_cpu6(rn, 0x3f80, v);			\
		else if (rn == SPRN_MD_CAM)				\
			do_mtspr_cpu6(rn, 0x3190, v);			\
		else if (rn == SPRN_MD_RAM0)				\
			do_mtspr_cpu6(rn, 0x3390, v);			\
		else if (rn == SPRN_MD_RAM1)				\
			do_mtspr_cpu6(rn, 0x3590, v);			\
		else if (rn == SPRN_DEC)				\
			do_mtspr_cpu6(rn, 0x2c00, v);			\
		else if (rn == SPRN_TBWL)				\
			do_mtspr_cpu6(rn, 0x3880, v);			\
		else if (rn == SPRN_TBWU)				\
			do_mtspr_cpu6(rn, 0x3a80, v);			\
		else if (rn == SPRN_DPDR)				\
			do_mtspr_cpu6(rn, 0x2d30, v);			\
		else							\
			do_mtspr(rn, v);				\
	} while (0)
#endif /* CONFIG_8xx_CPU6 */
#endif /* _ASM_POWERPC_REG_8xx_H */