//===- PPCCallingConv.td - Calling Conventions for PowerPC -*- tablegen -*-===//
//
//                     The LLVM Compiler Infrastructure
//
// This file is distributed under the University of Illinois Open Source
// License. See LICENSE.TXT for details.
//
//===----------------------------------------------------------------------===//
//
// This describes the calling conventions for the PowerPC 32- and 64-bit
// architectures.
//
//===----------------------------------------------------------------------===//

/// CCIfSubtarget - Match if the current subtarget has a feature F.
class CCIfSubtarget<string F, CCAction A>
 : CCIf<!strconcat("State.getTarget().getSubtarget<PPCSubtarget>().", F), A>;
class CCIfNotSubtarget<string F, CCAction A>
 : CCIf<!strconcat("!State.getTarget().getSubtarget<PPCSubtarget>().", F), A>;

//===----------------------------------------------------------------------===//
// Return Value Calling Convention
//===----------------------------------------------------------------------===//

// Return-value convention for PowerPC
def RetCC_PPC : CallingConv<[
  // On PPC64, integer return values are always promoted to i64
  CCIfType<[i32, i1], CCIfSubtarget<"isPPC64()", CCPromoteToType<i64>>>,
  CCIfType<[i1], CCIfNotSubtarget<"isPPC64()", CCPromoteToType<i32>>>,

  CCIfType<[i32], CCAssignToReg<[R3, R4, R5, R6, R7, R8, R9, R10]>>,
  CCIfType<[i64], CCAssignToReg<[X3, X4, X5, X6]>>,
  CCIfType<[i128], CCAssignToReg<[X3, X4, X5, X6]>>,

  CCIfType<[f32], CCAssignToReg<[F1, F2]>>,
  CCIfType<[f64], CCAssignToReg<[F1, F2, F3, F4]>>,

  // Vector types are always returned in V2.
  CCIfType<[v16i8, v8i16, v4i32, v4f32, v2f64, v2i64], CCAssignToReg<[V2]>>
]>;
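
// Note that once the PPC64 promotion above has fired, the value's location
// type is i64, so a promoted i32 return value is matched by the i64 entry
// (X3-X6) rather than by the i32 entry (R3-R10).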

// Note that we don't currently have calling conventions for 64-bit
// PowerPC, but handle all the complexities of the ABI in the lowering
// logic. FIXME: See if the logic can be simplified with use of CCs.
// This may require some extensions to current table generation.

// Simple calling convention for 64-bit ELF PowerPC fast isel.
// Only handle ints and floats. All ints are promoted to i64.
// Vector types and quadword ints are not handled.
def CC_PPC64_ELF_FIS : CallingConv<[
  CCIfType<[i1],  CCPromoteToType<i64>>,
  CCIfType<[i8],  CCPromoteToType<i64>>,
  CCIfType<[i16], CCPromoteToType<i64>>,
  CCIfType<[i32], CCPromoteToType<i64>>,
  CCIfType<[i64], CCAssignToReg<[X3, X4, X5, X6, X7, X8, X9, X10]>>,
  CCIfType<[f32, f64], CCAssignToReg<[F1, F2, F3, F4, F5, F6, F7, F8]>>
]>;
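
// Types with no entry above (vectors, i128, aggregates) are never assigned
// here; the fast instruction selector is expected to reject such calls up
// front and fall back to the normal SelectionDAG lowering.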

// Simple return-value convention for 64-bit ELF PowerPC fast isel.
// All small ints are promoted to i64. Vector types, quadword ints,
// and multiple register returns are "supported" to avoid compile
// errors, but none are handled by the fast selector.
def RetCC_PPC64_ELF_FIS : CallingConv<[
  CCIfType<[i1],   CCPromoteToType<i64>>,
  CCIfType<[i8],   CCPromoteToType<i64>>,
  CCIfType<[i16],  CCPromoteToType<i64>>,
  CCIfType<[i32],  CCPromoteToType<i64>>,
  CCIfType<[i64],  CCAssignToReg<[X3, X4]>>,
  CCIfType<[i128], CCAssignToReg<[X3, X4, X5, X6]>>,
  CCIfType<[f32],  CCAssignToReg<[F1, F2]>>,
  CCIfType<[f64],  CCAssignToReg<[F1, F2, F3, F4]>>,
  CCIfType<[v16i8, v8i16, v4i32, v4f32, v2f64, v2i64], CCAssignToReg<[V2]>>
]>;

//===----------------------------------------------------------------------===//
// PowerPC System V Release 4 32-bit ABI
//===----------------------------------------------------------------------===//

def CC_PPC32_SVR4_Common : CallingConv<[
  CCIfType<[i1], CCPromoteToType<i32>>,

  // The ABI requires i64 to be passed in two adjacent registers with the first
  // register having an odd register number.
  CCIfType<[i32], CCIfSplit<CCCustom<"CC_PPC32_SVR4_Custom_AlignArgRegs">>>,
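  // For example, an i64 split into two i32 halves lands in (R3,R4), (R5,R6),
  // (R7,R8) or (R9,R10); if the next free GPR would be an even-numbered one,
  // the custom handler (defined in the C++ lowering code) allocates and skips
  // it so that the pair starts on an odd-numbered register.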

  // The first 8 integer arguments are passed in integer registers.
  CCIfType<[i32], CCAssignToReg<[R3, R4, R5, R6, R7, R8, R9, R10]>>,

  // Make sure the i64 words from a long double are either both passed in
  // registers or both passed on the stack.
  CCIfType<[f64], CCIfSplit<CCCustom<"CC_PPC32_SVR4_Custom_AlignFPArgRegs">>>,

  // FP values are passed in F1 - F8.
  CCIfType<[f32, f64], CCAssignToReg<[F1, F2, F3, F4, F5, F6, F7, F8]>>,

  // Split arguments have an alignment of 8 bytes on the stack.
  CCIfType<[i32], CCIfSplit<CCAssignToStack<4, 8>>>,

  CCIfType<[i32], CCAssignToStack<4, 4>>,

  // Floats are stored in double precision format, thus they have the same
  // alignment and size as doubles.
  CCIfType<[f32,f64], CCAssignToStack<8, 8>>,

  // Vectors get 16-byte stack slots that are 16-byte aligned.
  CCIfType<[v16i8, v8i16, v4i32, v4f32, v2f64, v2i64], CCAssignToStack<16, 16>>
]>;
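
// Entries in a CallingConv are tried in order: a register entry only claims a
// value while a register in its list is still unallocated, so values overflow
// naturally into the CCAssignToStack entries that follow.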

// This calling convention always puts vector arguments on the stack. It is
// used to assign vector arguments which belong to the variable portion of the
// parameter list of a variable argument function.
def CC_PPC32_SVR4_VarArg : CallingConv<[
  CCDelegateTo<CC_PPC32_SVR4_Common>
]>;

// In contrast to CC_PPC32_SVR4_VarArg, this calling convention first tries to
// put vector arguments in vector registers before putting them on the stack.
def CC_PPC32_SVR4 : CallingConv<[
  // The first 12 Vector arguments are passed in AltiVec registers.
  CCIfType<[v16i8, v8i16, v4i32, v4f32, v2f64, v2i64],
           CCAssignToReg<[V2, V3, V4, V5, V6, V7, V8, V9, V10, V11, V12, V13]>>,

  CCDelegateTo<CC_PPC32_SVR4_Common>
]>;
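
// Because CC_PPC32_SVR4_VarArg omits the vector-register entry, vector
// arguments in the variable portion of a call always fall through to the
// 16-byte stack slots in CC_PPC32_SVR4_Common.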

// Helper "calling convention" to handle aggregate by value arguments.
// Aggregate by value arguments are always placed in the local variable space
// of the caller. This calling convention is only used to assign those stack
// offsets in the caller's stack frame.
//
// Still, the address of the aggregate copy in the caller's stack frame is
// passed in a GPR (or in the parameter list area if all GPRs are allocated)
// from the caller to the callee. The location for the address argument is
// assigned by the CC_PPC32_SVR4 calling convention.
//
// The only purpose of CC_PPC32_SVR4_Custom_Dummy is to skip arguments which
// are not passed by value.

def CC_PPC32_SVR4_ByVal : CallingConv<[
  CCIfByVal<CCPassByVal<4, 4>>,

  CCCustom<"CC_PPC32_SVR4_Custom_Dummy">
]>;
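
// A rough example: a 12-byte struct passed by value gets a 12-byte slot in
// the caller's frame (the <4, 4> above act as minimum size and alignment),
// and only the address of that copy is handed to CC_PPC32_SVR4 for placement
// in a GPR or in the parameter list area.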
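
// Callee-saved register (CSR) lists for the PPC ABIs. The *_Altivec variants
// append the Altivec registers V20-V31 to the corresponding base list.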

def CSR_Altivec : CalleeSavedRegs<(add V20, V21, V22, V23, V24, V25, V26, V27,
                                       V28, V29, V30, V31)>;

def CSR_Darwin32 : CalleeSavedRegs<(add R13, R14, R15, R16, R17, R18, R19, R20,
                                        R21, R22, R23, R24, R25, R26, R27, R28,
                                        R29, R30, R31, F14, F15, F16, F17, F18,
                                        F19, F20, F21, F22, F23, F24, F25, F26,
                                        F27, F28, F29, F30, F31, CR2, CR3, CR4
                                        )>;

def CSR_Darwin32_Altivec : CalleeSavedRegs<(add CSR_Darwin32, CSR_Altivec)>;

def CSR_SVR432 : CalleeSavedRegs<(add R14, R15, R16, R17, R18, R19, R20,
                                      R21, R22, R23, R24, R25, R26, R27, R28,
                                      R29, R30, R31, F14, F15, F16, F17, F18,
                                      F19, F20, F21, F22, F23, F24, F25, F26,
                                      F27, F28, F29, F30, F31, CR2, CR3, CR4
                                      )>;

def CSR_SVR432_Altivec : CalleeSavedRegs<(add CSR_SVR432, CSR_Altivec)>;

def CSR_Darwin64 : CalleeSavedRegs<(add X13, X14, X15, X16, X17, X18, X19, X20,
                                        X21, X22, X23, X24, X25, X26, X27, X28,
                                        X29, X30, X31, F14, F15, F16, F17, F18,
                                        F19, F20, F21, F22, F23, F24, F25, F26,
                                        F27, F28, F29, F30, F31, CR2, CR3, CR4
                                        )>;

def CSR_Darwin64_Altivec : CalleeSavedRegs<(add CSR_Darwin64, CSR_Altivec)>;

def CSR_SVR464 : CalleeSavedRegs<(add X14, X15, X16, X17, X18, X19, X20,
                                      X21, X22, X23, X24, X25, X26, X27, X28,
                                      X29, X30, X31, F14, F15, F16, F17, F18,
                                      F19, F20, F21, F22, F23, F24, F25, F26,
                                      F27, F28, F29, F30, F31, CR2, CR3, CR4
                                      )>;

def CSR_SVR464_Altivec : CalleeSavedRegs<(add CSR_SVR464, CSR_Altivec)>;

def CSR_NoRegs : CalleeSavedRegs<(add)>;