//=- X86ScheduleBtVer2.td - X86 BtVer2 (Jaguar) Scheduling ---*- tablegen -*-=//
//
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
//
//===----------------------------------------------------------------------===//
//
// This file defines the machine model for AMD btver2 (Jaguar) to support
// instruction scheduling and other instruction cost heuristics. Based on the
// AMD Software Optimization Guide for AMD Family 16h Processors and its
// instruction latency appendix.
//
//===----------------------------------------------------------------------===//

def BtVer2Model : SchedMachineModel {
  // All x86 instructions are modeled as a single micro-op, and btver2 can
  // decode 2 instructions per cycle.
  let IssueWidth = 2;
  let MicroOpBufferSize = 64; // Retire Control Unit
  let LoadLatency = 5; // FPU latency (worst case; cf. the 3-cycle integer load latency)
  let HighLatency = 25;
  let MispredictPenalty = 14; // Minimum branch misprediction penalty
  let PostRAScheduler = 1;

  // FIXME: SSE4/AVX is unimplemented. This flag is set to allow
  // the scheduler to assign a default model to unrecognized opcodes.
  let CompleteModel = 0;
}

let SchedModel = BtVer2Model in {

// Jaguar can issue up to 6 micro-ops in one cycle.
def JALU0 : ProcResource<1>; // Integer Pipe0: integer ALU0 (also handles FP->INT jam)
def JALU1 : ProcResource<1>; // Integer Pipe1: integer ALU1/MUL/DIV
def JLAGU : ProcResource<1>; // Integer Pipe2: LAGU
def JSAGU : ProcResource<1>; // Integer Pipe3: SAGU (also handles 3-operand LEA)
def JFPU0 : ProcResource<1>; // Vector/FPU Pipe0: VALU0/VIMUL/FPA
def JFPU1 : ProcResource<1>; // Vector/FPU Pipe1: VALU1/STC/FPM

// The Integer PRF for Jaguar is 64 entries, and it holds the architectural and
// speculative version of the 64-bit integer registers.
// Reference: www.realworldtech.com/jaguar/4/
//
// The processor always keeps the different parts of an integer register
// together. An instruction that writes to a part of a register will therefore
// have a false dependence on any previous write to the same register or any
// part of it.
// Reference: Section 21.10 "AMD Bobcat and Jaguar pipeline: Partial register
// access" - Agner Fog's "microarchitecture.pdf".
def JIntegerPRF : RegisterFile<64, [GR64, CCR], [1, 1], [1, 0],
                               0,  // Max moves that can be eliminated per cycle.
                               1>; // Restrict move elimination to zero regs.

// The Jaguar FP Retire Queue renames SIMD and FP uOps onto a pool of 72 SSE
// registers. Operations on 256-bit data types are cracked into two COPs.
// Reference: www.realworldtech.com/jaguar/4/
//
// The PRF in the floating point unit can eliminate a move from a MMX or SSE
// register that is known to be zero (i.e. it has been zeroed using a zero-idiom
// dependency breaking instruction, or via VZEROALL).
// Reference: Section 21.8 "AMD Bobcat and Jaguar pipeline: Dependency-breaking
// instructions" - Agner Fog's "microarchitecture.pdf".
def JFpuPRF : RegisterFile<72, [VR64, VR128, VR256], [1, 1, 2], [1, 1, 0],
                           0,  // Max moves that can be eliminated per cycle.
                           1>; // Restrict move elimination to zero regs.

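// Illustrative note (restating the costs above, not an extra definition):
// since a YMM definition consumes 2 of the 72 physical registers while an
// MMX/XMM definition consumes 1, at most 36 YMM definitions can be in flight
// at once, versus up to 72 MMX/XMM definitions.
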
// The retire control unit (RCU) can track up to 64 macro-ops in-flight. It can
// retire up to two macro-ops per cycle.
// Reference: "Software Optimization Guide for AMD Family 16h Processors"
def JRCU : RetireControlUnit<64, 2>;

// Integer Pipe Scheduler
def JALU01 : ProcResGroup<[JALU0, JALU1]> {
  let BufferSize=20;
}

// AGU Pipe Scheduler
def JLSAGU : ProcResGroup<[JLAGU, JSAGU]> {
  let BufferSize=12;
}

// FPU Pipe Scheduler
def JFPU01 : ProcResGroup<[JFPU0, JFPU1]> {
  let BufferSize=18;
}

// Functional units
def JDiv   : ProcResource<1>; // integer division
def JMul   : ProcResource<1>; // integer multiplication
def JVALU0 : ProcResource<1>; // vector integer
def JVALU1 : ProcResource<1>; // vector integer
def JVIMUL : ProcResource<1>; // vector integer multiplication
def JSTC   : ProcResource<1>; // vector store/convert
def JFPM   : ProcResource<1>; // FP multiplication
def JFPA   : ProcResource<1>; // FP addition

// Functional unit groups
def JFPX  : ProcResGroup<[JFPA, JFPM]>;
def JVALU : ProcResGroup<[JVALU0, JVALU1]>;

// Integer loads are 3 cycles, so ReadAfterLd registers needn't be available
// until 3 cycles after the memory operand.
def : ReadAdvance<ReadAfterLd, 3>;

// Vector loads are 5 cycles, so ReadAfterVec*Ld registers needn't be available
// until 5 cycles after the memory operand.
def : ReadAdvance<ReadAfterVecLd, 5>;
def : ReadAdvance<ReadAfterVecXLd, 5>;
def : ReadAdvance<ReadAfterVecYLd, 5>;

// "Additional 6 cycle transfer operation which moves a floating point
// operation input value from the integer unit to the floating point unit."
// Reference: AMD Fam16h SOG (Appendix A "Instruction Latencies", Section A.2).
def : ReadAdvance<ReadInt2Fpu, -6>;

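// Illustrative note (an interpretation of the negative ReadAdvance above): a
// positive ReadAdvance lets an operand be read late, hiding producer latency;
// a negative value does the opposite. With ReadAdvance<ReadInt2Fpu, -6>, an
// FP operation consuming a value produced in the integer unit observes 6
// extra cycles of latency on that edge, matching the SOG's 6-cycle transfer.
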
// Many SchedWrites are defined in pairs with and without a folded load.
// Instructions with folded loads are usually micro-fused, so they only appear
// as two micro-ops when dispatched by the schedulers.
// This multiclass defines the resource usage for variants with and without
// folded loads.
multiclass JWriteResIntPair<X86FoldableSchedWrite SchedRW,
                            list<ProcResourceKind> ExePorts,
                            int Lat, list<int> Res = [], int UOps = 1,
                            int LoadUOps = 0> {
  // Register variant uses a single cycle on ExePorts.
  def : WriteRes<SchedRW, ExePorts> {
    let Latency = Lat;
    let ResourceCycles = Res;
    let NumMicroOps = UOps;
  }

  // Memory variant also uses a cycle on JLAGU and adds 3 cycles to the
  // latency.
  def : WriteRes<SchedRW.Folded, !listconcat([JLAGU], ExePorts)> {
    let Latency = !add(Lat, 3);
    let ResourceCycles = !if(!empty(Res), [], !listconcat([1], Res));
    let NumMicroOps = !add(UOps, LoadUOps);
  }
}

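// Worked example (hypothetical expansion, shown for illustration only): the
// instantiation
//   defm : JWriteResIntPair<WriteALU, [JALU01], 1>;
// expands to a register variant (WriteALU: 1 uop, 1cy on JALU01) and a
// folded-load variant (WriteALU.Folded: 1 uop, 1 + 3 = 4cy on JLAGU+JALU01).
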
multiclass JWriteResFpuPair<X86FoldableSchedWrite SchedRW,
                            list<ProcResourceKind> ExePorts,
                            int Lat, list<int> Res = [], int UOps = 1,
                            int LoadUOps = 0> {
  // Register variant uses a single cycle on ExePorts.
  def : WriteRes<SchedRW, ExePorts> {
    let Latency = Lat;
    let ResourceCycles = Res;
    let NumMicroOps = UOps;
  }

  // Memory variant also uses a cycle on JLAGU and adds 5 cycles to the
  // latency.
  def : WriteRes<SchedRW.Folded, !listconcat([JLAGU], ExePorts)> {
    let Latency = !add(Lat, 5);
    let ResourceCycles = !if(!empty(Res), [], !listconcat([1], Res));
    let NumMicroOps = !add(UOps, LoadUOps);
  }
}

multiclass JWriteResYMMPair<X86FoldableSchedWrite SchedRW,
                            list<ProcResourceKind> ExePorts,
                            int Lat, list<int> Res = [2], int UOps = 2,
                            int LoadUOps = 0> {
  // Register variant uses a single cycle on ExePorts.
  def : WriteRes<SchedRW, ExePorts> {
    let Latency = Lat;
    let ResourceCycles = Res;
    let NumMicroOps = UOps;
  }

  // Memory variant also uses 2 cycles on JLAGU and adds 5 cycles to the
  // latency.
  def : WriteRes<SchedRW.Folded, !listconcat([JLAGU], ExePorts)> {
    let Latency = !add(Lat, 5);
    let ResourceCycles = !listconcat([2], Res);
    let NumMicroOps = !add(UOps, LoadUOps);
  }
}

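// Illustrative note (restating the defaults above): because 256-bit
// operations are cracked into two COPs, JWriteResYMMPair defaults to
// Res = [2] and UOps = 2. For example, a hypothetical
//   defm : JWriteResYMMPair<WriteVecLogicY, [JFPU01], 1>;
// would occupy JFPU01 for 2 cycles, decode to 2 micro-ops, and give the
// folded-load variant a latency of 1 + 5 = 6cy using 2 cycles of JLAGU.
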
// Instructions that have local forwarding disabled have an extra +1cy latency.

// A folded store needs a cycle on the SAGU for the store data; most RMW
// instructions don't need an extra uop. ALU RMW operations don't seem to
// benefit from STLF, and their observed latency is 6cy. That is the reason why
// this write adds two extra cycles (instead of just 1cy for the store).
defm : X86WriteRes<WriteRMW, [JSAGU], 2, [1], 0>;
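
// Worked example (illustrative breakdown of the 6cy figure above): for an ALU
// RMW such as "addl %esi, (%rsp)", the observed 6cy latency decomposes as
// 3cy (integer load) + 1cy (ALU) + 2cy (this WriteRMW write: 1cy store plus
// 1cy because the operation does not benefit from store-to-load forwarding).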

////////////////////////////////////////////////////////////////////////////////
// Arithmetic.
////////////////////////////////////////////////////////////////////////////////

defm : JWriteResIntPair<WriteALU, [JALU01], 1>;
|
2018-05-17 23:43:23 +08:00
|
|
|
defm : JWriteResIntPair<WriteADC, [JALU01], 1, [2]>;
|
2018-05-08 21:51:45 +08:00
|
|
|
|
[X86][Btver2] Fix latency and throughput of CMPXCHG instructions.
On Jaguar, CMPXCHG has a latency of 11cy, and a maximum throughput of 0.33 IPC.
Throughput is superiorly limited to 0.33 because of the implicit in/out
dependency on register EAX. In the case of repeated non-atomic CMPXCHG with the
same memory location, store-to-load forwarding occurs and values for sequent
loads are quickly forwarded from the store buffer.
Interestingly, the functionality in LLVM that computes the reciprocal throughput
doesn't seem to know about RMW instructions. That functionality only looks at
the "consumed resource cycles" for the throughput computation. It should be
fixed/improved by a future patch. In particular, for RMW instructions, that
logic should also take into account for the write latency of in/out register
operands.
An atomic CMPXCHG has a latency of ~17cy. Throughput is also limited to
~17cy/inst due to cache locking, which prevents other memory uOPs to start
executing before the "lock releasing" store uOP.
CMPXCHG8rr and CMPXCHG8rm are treated specially because they decode to one less
macro opcode. Their latency tend to be the same as the other RR/RM variants. RR
variants are relatively fast 3cy (but still microcoded - 5 macro opcodes).
CMPXCHG8B is 11cy and unfortunately doesn't seem to benefit from store-to-load
forwarding. That means, throughput is clearly limited by the in/out dependency
on GPR registers. The uOP composition is sadly unknown (due to the lack of PMCs
for the Integer pipes). I have reused the same mix of consumed resource from the
other CMPXCHG instructions for CMPXCHG8B too.
LOCK CMPXCHG8B is instead 18cycles.
CMPXCHG16B is 32cycles. Up to 38cycles when the LOCK prefix is specified. Due to
the in/out dependencies, throughput is limited to 1 instruction every 32 (or 38)
cycles dependeing on whether the LOCK prefix is specified or not.
I wouldn't be surprised if the microcode for CMPXCHG16B is similar to 2x
microcode from CMPXCHG8B. So, I have speculatively set the JALU01 consumption to
2x the resource cycles used for CMPXCHG8B.
The two new hasLockPrefix() functions are used by the btver2 scheduling model
check if a MCInst/MachineInst has a LOCK prefix. Calls to hasLockPrefix() have
been encoded in predicates of variant scheduling classes that describe lat/thr
of CMPXCHG.
Differential Revision: https://reviews.llvm.org/D66424
llvm-svn: 369365
2019-08-20 18:23:55 +08:00
|
|
|
defm : X86WriteRes<WriteBSWAP32, [JALU01], 1, [1], 1>;
|
|
|
|
defm : X86WriteRes<WriteBSWAP64, [JALU01], 1, [1], 1>;
|
|
|
|
defm : X86WriteRes<WriteCMPXCHG, [JALU01], 3, [3], 5>;
|
|
|
|
defm : X86WriteRes<WriteCMPXCHGRMW, [JALU01, JSAGU, JLAGU], 11, [3, 1, 1], 6>;
|
[X86][BtVer2] Fix latency and throughput of XCHG and XADD.
On Jaguar, XCHG has a latency of 1cy and decodes to 2 macro-opcodes. Maximum
throughput for XCHG is 1 IPC. The byte exchange has worse latency and decodes to
1 extra uOP; maximum observed throughput is 0.5 IPC.
```
xchgb %cl, %dl # Latency: 2cy - uOPs: 3 - 2 ALU
xchgw %cx, %dx # Latency: 1cy - uOPs: 2 - 2 ALU
xchgl %ecx, %edx # Latency: 1cy - uOPs: 2 - 2 ALU
xchgq %rcx, %rdx # Latency: 1cy - uOPs: 2 - 2 ALU
```
The reg-mem forms of XCHG are atomic operations with an observed latency of
16cy. The resource usage is similar to the XCHGrr variants. The biggest
difference is obviously the bus-locking, which prevents the LS to issue other
memory uOPs in parallel until the unlocking store uOP is executed.
```
xchgb %cl, (%rsp) # Latency: 16cy - uOPs: 3 - ECX latency: 11cy
xchgw %cx, (%rsp) # Latency: 16cy - uOPs: 3 - ECX latency: 11cy
xchgl %ecx, (%rsp) # Latency: 16cy - uOPs: 3 - ECX latency: 11cy
xchgq %rcx, (%rsp) # Latency: 16cy - uOPs: 3 - ECX latency: 11cy
```
The exchanged in/out register operand becomes available after 11cy from the
start of execution. Added test xchg.s to verify that we correctly see that
register write committed in 11cy (and not 16cy).
Reg-reg XADD instructions have the same latency/throughput than the byte
exchange (register-register variant).
```
xaddb %cl, %dl # latency: 2cy - uOPs: 3 - 3 ALU
xaddw %cx, %dx # latency: 2cy - uOPs: 3 - 3 ALU
xaddl %ecx, %edx # latency: 2cy - uOPs: 3 - 3 ALU
xaddq %rcx, %rdx # latency: 2cy - uOPs: 3 - 3 ALU
```
The non-atomic RM variants have a latency of 11cy, and decode to 4
macro-opcodes. They still consume 2 ALU pipes, and the exchange in/out register
operand becomes available in 3cy (it matches the 'load-to-use latency').
```
xaddb %cl, (%rsp) # latency: 11cy - uOPs: 4 - 3 ALU
xaddw %cx, (%rsp) # latency: 11cy - uOPs: 4 - 3 ALU
xaddl %ecx, (%rsp) # latency: 11cy - uOPs: 4 - 3 ALU
xaddq %rcx, (%rsp) # latency: 11cy - uOPs: 4 - 3 ALU
```
The atomic XADD variants execute in 16cy. The in/out register operand is
available after 11cy from the start of execution.
```
lock xaddb %cl, (%rsp) # latency: 16cy - uOPs: 4 - 3 ALU -- ECX latency: 11cy
lock xaddw %cx, (%rsp) # latency: 16cy - uOPs: 4 - 3 ALU -- ECX latency: 11cy
lock xaddl %ecx, (%rsp) # latency: 16cy - uOPs: 4 - 3 ALU -- ECX latency: 11cy
lock xaddq %rcx, (%rsp) # latency: 16cy - uOPs: 4 - 3 ALU -- ECX latency: 11cy
```
Added test xadd.s to verify those latencies as well as read-advance values.
Differential Revision: https://reviews.llvm.org/D66535
llvm-svn: 369642
defm : X86WriteRes<WriteXCHG, [JALU01], 1, [2], 2>;
[X86][BtVer2] Fix latency/throughput of scalar integer MUL instructions.
Single-operand MUL instructions that implicitly set EAX have the following
latency/throughput profile (see below):
```
imul %cl    # latency: 3cy - uOPs: 1 - 1 JMul
imul %cx    # latency: 3cy - uOPs: 3 - 3 JMul
imul %ecx   # latency: 3cy - uOPs: 2 - 2 JMul
imul %rcx   # latency: 6cy - uOPs: 2 - 4 JMul
mul %cl     # latency: 3cy - uOPs: 1 - 1 JMul
mul %cx     # latency: 3cy - uOPs: 3 - 3 JMul
mul %ecx    # latency: 3cy - uOPs: 2 - 2 JMul
mul %rcx    # latency: 6cy - uOPs: 2 - 4 JMul
```
Excluding the 64-bit variant, which has a latency of 6cy, every other instruction
has a latency of 3cy. However, the number of decoded macro-opcodes (as well as
the resource cycles) depends on the MUL size.
The two-operand MULs have a more predictable profile (see below):
```
imul %dx, %dx        # latency: 3cy - uOPs: 1 - 1 JMul
imul %edx, %edx      # latency: 3cy - uOPs: 1 - 1 JMul
imul %rdx, %rdx      # latency: 6cy - uOPs: 1 - 4 JMul
imul $3, %dx, %dx    # latency: 4cy - uOPs: 2 - 2 JMul
imul $3, %ecx, %ecx  # latency: 3cy - uOPs: 1 - 1 JMul
imul $3, %rdx, %rdx  # latency: 6cy - uOPs: 1 - 4 JMul
```
This patch updates the values in the Jaguar scheduling model and regenerates
llvm-mca tests.
Differential Revision: https://reviews.llvm.org/D66547
llvm-svn: 369661
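The JMul resource cycles listed above bound how often back-to-back multiplies can start. A rough sketch of that relationship, as an illustrative model only (the helper name and the worst-resource rule are assumptions, not LLVM code):

```python
# Hypothetical sketch: an instruction's reciprocal throughput is bounded by
# its most contended resource -- for each used resource, divide the cycles
# consumed per instruction by the number of identical units behind that
# resource, and take the worst (largest) ratio.

def reciprocal_throughput(resource_cycles, num_units):
    """resource_cycles: {resource: cycles consumed per instruction}
    num_units: {resource: number of identical execution units}"""
    return max(cycles / num_units[res] for res, cycles in resource_cycles.items())

# 'imul %rcx' consumes 1 cycle on JALU1 and 4 cycles on JMul (both single
# units), so 64-bit multiplies can start at best one every 4 cycles.
print(reciprocal_throughput({"JALU1": 1, "JMul": 4}, {"JALU1": 1, "JMul": 1}))  # 4.0
```

This matches the `[1, 4]` resource-cycle vectors given to the 64-bit `WriteIMul64*` classes below.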
defm : JWriteResIntPair<WriteIMul8,     [JALU1, JMul], 3, [1, 1], 1>;
defm : JWriteResIntPair<WriteIMul16,    [JALU1, JMul], 3, [1, 3], 3>;
defm : JWriteResIntPair<WriteIMul16Imm, [JALU1, JMul], 4, [1, 2], 2>;
defm : JWriteResIntPair<WriteIMul16Reg, [JALU1, JMul], 3, [1, 1], 1>;
defm : JWriteResIntPair<WriteIMul32,    [JALU1, JMul], 3, [1, 2], 2>;
defm : JWriteResIntPair<WriteIMul32Imm, [JALU1, JMul], 3, [1, 1], 1>;
defm : JWriteResIntPair<WriteIMul32Reg, [JALU1, JMul], 3, [1, 1], 1>;
defm : JWriteResIntPair<WriteIMul64,    [JALU1, JMul], 6, [1, 4], 2>;
defm : JWriteResIntPair<WriteIMul64Imm, [JALU1, JMul], 6, [1, 4], 1>;
defm : JWriteResIntPair<WriteIMul64Reg, [JALU1, JMul], 6, [1, 4], 1>;
defm : X86WriteRes<WriteIMulH, [JALU1], 6, [4], 1>;
defm : JWriteResIntPair<WriteDiv8,   [JALU1, JDiv], 12, [1, 12], 1>;
defm : JWriteResIntPair<WriteDiv16,  [JALU1, JDiv], 17, [1, 17], 2>;
defm : JWriteResIntPair<WriteDiv32,  [JALU1, JDiv], 25, [1, 25], 2>;
defm : JWriteResIntPair<WriteDiv64,  [JALU1, JDiv], 41, [1, 41], 2>;
defm : JWriteResIntPair<WriteIDiv8,  [JALU1, JDiv], 12, [1, 12], 1>;
defm : JWriteResIntPair<WriteIDiv16, [JALU1, JDiv], 17, [1, 17], 2>;
defm : JWriteResIntPair<WriteIDiv32, [JALU1, JDiv], 25, [1, 25], 2>;
defm : JWriteResIntPair<WriteIDiv64, [JALU1, JDiv], 41, [1, 41], 2>;

defm : JWriteResIntPair<WriteCRC32, [JALU01], 3, [4], 3>;
defm : JWriteResIntPair<WriteCMOV, [JALU01], 1>; // Conditional move.
defm : X86WriteRes<WriteFCMOV, [JFPU0, JFPA], 3, [1,1], 1>; // x87 conditional move.
def  : WriteRes<WriteSETCC, [JALU01]>; // Setcc.
def  : WriteRes<WriteSETCCStore, [JALU01,JSAGU]>;
def  : WriteRes<WriteLAHFSAHF, [JALU01]>;

defm : X86WriteRes<WriteBitTest,         [JALU01],       1, [1],   1>;
defm : X86WriteRes<WriteBitTestImmLd,    [JALU01,JLAGU], 4, [1,1], 1>;
defm : X86WriteRes<WriteBitTestRegLd,    [JALU01,JLAGU], 4, [1,1], 5>;
defm : X86WriteRes<WriteBitTestSet,      [JALU01],       1, [1],   2>;
defm : X86WriteRes<WriteBitTestSetImmLd, [JALU01,JLAGU], 4, [1,1], 4>;
defm : X86WriteRes<WriteBitTestSetRegLd, [JALU01,JLAGU], 4, [1,1], 8>;

// This is for simple LEAs with one or two input operands.
def : WriteRes<WriteLEA, [JALU01]>;
// Bit counts.
defm : JWriteResIntPair<WriteBSF,    [JALU01], 4, [8], 7>;
defm : JWriteResIntPair<WriteBSR,    [JALU01], 5, [8], 8>;
defm : JWriteResIntPair<WritePOPCNT, [JALU01], 1>;
defm : JWriteResIntPair<WriteLZCNT,  [JALU01], 1>;
defm : JWriteResIntPair<WriteTZCNT,  [JALU01], 2, [2], 2>;

// BMI1 BEXTR/BLS, BMI2 BZHI
defm : JWriteResIntPair<WriteBEXTR, [JALU01], 1>;
defm : JWriteResIntPair<WriteBLS,   [JALU01], 2, [2], 2>;
defm : X86WriteResPairUnsupported<WriteBZHI>;
////////////////////////////////////////////////////////////////////////////////
// Integer shifts and rotates.
////////////////////////////////////////////////////////////////////////////////

defm : JWriteResIntPair<WriteShift,    [JALU01], 1>;
defm : JWriteResIntPair<WriteShiftCL,  [JALU01], 1>;
defm : JWriteResIntPair<WriteRotate,   [JALU01], 1>;
defm : JWriteResIntPair<WriteRotateCL, [JALU01], 1>;
// SHLD/SHRD.
defm : X86WriteRes<WriteSHDrri, [JALU01], 3, [6], 6>;
defm : X86WriteRes<WriteSHDrrcl,[JALU01], 4, [8], 7>;
defm : X86WriteRes<WriteSHDmri, [JLAGU, JALU01], 9, [1, 22], 8>;
defm : X86WriteRes<WriteSHDmrcl,[JLAGU, JALU01], 9, [1, 22], 8>;
////////////////////////////////////////////////////////////////////////////////
// Loads, stores, and moves, not folded with other operations.
////////////////////////////////////////////////////////////////////////////////

def : WriteRes<WriteLoad,    [JLAGU]> { let Latency = 3; }
def : WriteRes<WriteStore,   [JSAGU]>;
def : WriteRes<WriteStoreNT, [JSAGU]>;
def : WriteRes<WriteMove,    [JALU01]>;

// Load/store MXCSR.
def : WriteRes<WriteLDMXCSR, [JLAGU]> { let Latency = 3; }
def : WriteRes<WriteSTMXCSR, [JSAGU]>;

// Treat misc copies as a move.
def : InstRW<[WriteMove], (instrs COPY)>;
////////////////////////////////////////////////////////////////////////////////
// Idioms that clear a register, like xorps %xmm0, %xmm0.
// These can often bypass execution ports completely.
////////////////////////////////////////////////////////////////////////////////

def : WriteRes<WriteZero, []>;
////////////////////////////////////////////////////////////////////////////////
// Branches don't produce values, so they have no latency, but they still
// consume resources. Indirect branches can fold loads.
////////////////////////////////////////////////////////////////////////////////

defm : JWriteResIntPair<WriteJump, [JALU01], 1>;
////////////////////////////////////////////////////////////////////////////////
// Special case scheduling classes.
////////////////////////////////////////////////////////////////////////////////

def : WriteRes<WriteSystem,     [JALU01]> { let Latency = 100; }
def : WriteRes<WriteMicrocoded, [JALU01]> { let Latency = 100; }
def : WriteRes<WriteFence, [JSAGU]>;

// Nops don't have dependencies, so there's no actual latency, but we set this
// to '1' to tell the scheduler that the nop uses an ALU slot for a cycle.
def : WriteRes<WriteNop, [JALU01]> { let Latency = 1; }
[X86][Btver2] Fix latency and throughput of CMPXCHG instructions.
On Jaguar, CMPXCHG has a latency of 11cy, and a maximum throughput of 0.33 IPC.
Throughput is capped at 0.33 IPC because of the implicit in/out dependency on
register EAX. In the case of repeated non-atomic CMPXCHG with the same memory
location, store-to-load forwarding occurs and values for subsequent loads are
quickly forwarded from the store buffer.
Interestingly, the functionality in LLVM that computes the reciprocal throughput
doesn't seem to know about RMW instructions. That functionality only looks at
the "consumed resource cycles" for the throughput computation. It should be
fixed/improved by a future patch. In particular, for RMW instructions, that
logic should also take into account the write latency of in/out register
operands.
An atomic CMPXCHG has a latency of ~17cy. Throughput is also limited to
~17cy/inst due to cache locking, which prevents other memory uOPs from starting
execution before the "lock releasing" store uOP.
CMPXCHG8rr and CMPXCHG8rm are treated specially because they decode to one
fewer macro opcode. Their latency tends to be the same as the other RR/RM
variants. RR variants are relatively fast at 3cy (but still microcoded - 5
macro opcodes).
CMPXCHG8B is 11cy and unfortunately doesn't seem to benefit from store-to-load
forwarding. That means throughput is clearly limited by the in/out dependency
on GPR registers. The uOP composition is sadly unknown (due to the lack of PMCs
for the Integer pipes). I have reused the same mix of consumed resources from
the other CMPXCHG instructions for CMPXCHG8B too.
LOCK CMPXCHG8B is instead 18 cycles.
CMPXCHG16B is 32 cycles, and up to 38 cycles when the LOCK prefix is specified.
Due to the in/out dependencies, throughput is limited to 1 instruction every 32
(or 38) cycles depending on whether the LOCK prefix is specified or not.
I wouldn't be surprised if the microcode for CMPXCHG16B is similar to 2x the
microcode from CMPXCHG8B. So, I have speculatively set the JALU01 consumption
to 2x the resource cycles used for CMPXCHG8B.
The two new hasLockPrefix() functions are used by the btver2 scheduling model
to check if a MCInst/MachineInstr has a LOCK prefix. Calls to hasLockPrefix()
have been encoded in predicates of variant scheduling classes that describe
lat/thr of CMPXCHG.
Differential Revision: https://reviews.llvm.org/D66424
llvm-svn: 369365
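The bound described above can be sketched as a tiny model. This is an assumption-laden illustration, not LLVM's implementation: steady-state cost of a chained RMW instruction is the larger of the resource-based bound and the in/out dependency latency, which is exactly the term the current throughput computation misses.

```python
# Sketch (assumed model): for a read-modify-write instruction whose output
# feeds the next instance (the implicit EAX of CMPXCHG), steady-state
# cycles-per-instruction is bounded both by resource pressure and by the
# in/out dependency chain. Summing only consumed resource cycles misses
# the second bound.

def effective_cycles_per_inst(resource_bound, dep_chain_latency):
    # Whichever bound is worse dominates in steady state.
    return max(resource_bound, dep_chain_latency)

# Atomic CMPXCHG: resource pressure alone suggests ~1.5cy per instruction
# (3 ALU cycles spread over 2 pipes), but cache locking serializes
# execution at ~17cy per instruction.
print(effective_cycles_per_inst(3 / 2, 17.0))  # 17.0
```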
def JWriteCMPXCHG8rr : SchedWriteRes<[JALU01]> {
  let Latency = 3;
  let ResourceCycles = [3];
  let NumMicroOps = 3;
}

def JWriteLOCK_CMPXCHG8rm : SchedWriteRes<[JALU01, JLAGU, JSAGU]> {
  let Latency = 16;
  let ResourceCycles = [3,16,16];
  let NumMicroOps = 5;
}

def JWriteLOCK_CMPXCHGrm : SchedWriteRes<[JALU01, JLAGU, JSAGU]> {
  let Latency = 17;
  let ResourceCycles = [3,17,17];
  let NumMicroOps = 6;
}

def JWriteCMPXCHG8rm : SchedWriteRes<[JALU01, JLAGU, JSAGU]> {
  let Latency = 11;
  let ResourceCycles = [3,1,1];
  let NumMicroOps = 5;
}

def JWriteCMPXCHG8B : SchedWriteRes<[JALU01, JLAGU, JSAGU]> {
  let Latency = 11;
  let ResourceCycles = [3,1,1];
  let NumMicroOps = 18;
}

def JWriteCMPXCHG16B : SchedWriteRes<[JALU01, JLAGU, JSAGU]> {
  let Latency = 32;
  let ResourceCycles = [6,1,1];
  let NumMicroOps = 28;
}

def JWriteLOCK_CMPXCHG8B : SchedWriteRes<[JALU01, JLAGU, JSAGU]> {
  let Latency = 19;
  let ResourceCycles = [3,19,19];
  let NumMicroOps = 18;
}

def JWriteLOCK_CMPXCHG16B : SchedWriteRes<[JALU01, JLAGU, JSAGU]> {
  let Latency = 38;
  let ResourceCycles = [6,38,38];
  let NumMicroOps = 28;
}

def JWriteCMPXCHGVariant : SchedWriteVariant<[
  SchedVar<MCSchedPredicate<IsAtomicCompareAndSwap8B>,  [JWriteLOCK_CMPXCHG8B]>,
  SchedVar<MCSchedPredicate<IsAtomicCompareAndSwap16B>, [JWriteLOCK_CMPXCHG16B]>,
  SchedVar<MCSchedPredicate<IsAtomicCompareAndSwap_8>,  [JWriteLOCK_CMPXCHG8rm]>,
  SchedVar<MCSchedPredicate<IsAtomicCompareAndSwap>,    [JWriteLOCK_CMPXCHGrm]>,
  SchedVar<MCSchedPredicate<IsCompareAndSwap8B>,        [JWriteCMPXCHG8B]>,
  SchedVar<MCSchedPredicate<IsCompareAndSwap16B>,       [JWriteCMPXCHG16B]>,
  SchedVar<MCSchedPredicate<IsRegMemCompareAndSwap_8>,  [JWriteCMPXCHG8rm]>,
  SchedVar<MCSchedPredicate<IsRegMemCompareAndSwap>,    [WriteCMPXCHGRMW]>,
  SchedVar<MCSchedPredicate<IsRegRegCompareAndSwap_8>,  [JWriteCMPXCHG8rr]>,
  SchedVar<NoSchedPred,                                 [WriteCMPXCHG]>
]>;
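The way a SchedWriteVariant like the one above resolves can be pictured as ordered predicate dispatch: predicates are tried top to bottom, the first match selects the write, and `NoSchedPred` is the catch-all. The dispatcher below is a hypothetical sketch (the dictionary-based "instruction" and lambda predicates stand in for TableGen's MCSchedPredicates):

```python
# Minimal sketch of ordered-predicate variant resolution, mirroring how a
# SchedWriteVariant picks a write class. Helper and predicate names are
# illustrative, not LLVM API.

def resolve_variant(inst, variants):
    """variants: ordered list of (predicate, write_name) pairs."""
    for predicate, write in variants:
        if predicate(inst):
            return write  # first matching predicate wins
    raise ValueError("a variant list must end with a catch-all predicate")

variants = [
    (lambda i: i["atomic"] and i["opcode"] == "CMPXCHG8B", "JWriteLOCK_CMPXCHG8B"),
    (lambda i: i["atomic"], "JWriteLOCK_CMPXCHGrm"),
    (lambda i: True, "WriteCMPXCHG"),  # plays the role of NoSchedPred
]
print(resolve_variant({"atomic": True, "opcode": "CMPXCHG8B"}, variants))
# JWriteLOCK_CMPXCHG8B
```

Because matching is ordered, the more specific predicates (8B/16B, atomic) must be listed before the generic ones, just as in the TableGen above.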
// The first five reads are contributed by the memory load operand.
// We ignore those reads and set a read-advance for the other input operands
// including the implicit read of RAX.
def : InstRW<[JWriteCMPXCHGVariant,
              ReadDefault, ReadDefault, ReadDefault, ReadDefault, ReadDefault,
              ReadAfterLd, ReadAfterLd], (instrs LCMPXCHG8, LCMPXCHG16,
                                                 LCMPXCHG32, LCMPXCHG64,
                                                 CMPXCHG8rm, CMPXCHG16rm,
                                                 CMPXCHG32rm, CMPXCHG64rm)>;
def : InstRW<[JWriteCMPXCHGVariant], (instrs CMPXCHG8rr, CMPXCHG16rr,
                                             CMPXCHG32rr, CMPXCHG64rr)>;

def : InstRW<[JWriteCMPXCHGVariant,
              // Ignore reads contributed by the memory operand.
              ReadDefault, ReadDefault, ReadDefault, ReadDefault, ReadDefault,
              // Add a read-advance to every implicit register read.
              ReadAfterLd, ReadAfterLd, ReadAfterLd, ReadAfterLd],
             (instrs LCMPXCHG8B, LCMPXCHG16B, CMPXCHG8B, CMPXCHG16B)>;
def JWriteLOCK_ALURMW : SchedWriteRes<[JALU01, JLAGU, JSAGU]> {
  let Latency = 19;
  let ResourceCycles = [1,19,19];
  let NumMicroOps = 1;
}

def JWriteLOCK_ALURMWVariant : SchedWriteVariant<[
  SchedVar<MCSchedPredicate<CheckLockPrefix>, [JWriteLOCK_ALURMW]>,
  SchedVar<NoSchedPred,                       [WriteALURMW]>
]>;

def : InstRW<[JWriteLOCK_ALURMWVariant], (instrs INC8m, INC16m, INC32m, INC64m,
                                                 DEC8m, DEC16m, DEC32m, DEC64m,
                                                 NOT8m, NOT16m, NOT32m, NOT64m,
                                                 NEG8m, NEG16m, NEG32m, NEG64m)>;
def JWriteXCHG8rr_XADDrr : SchedWriteRes<[JALU01]> {
  let Latency = 2;
  let ResourceCycles = [3];
  let NumMicroOps = 3;
}
def : InstRW<[JWriteXCHG8rr_XADDrr], (instrs XCHG8rr, XADD8rr, XADD16rr,
                                             XADD32rr, XADD64rr)>;
// This write defines the latency of the in/out register operand of a non-atomic
// XADDrm. This is the first of a pair of writes that model non-atomic
// XADDrm instructions (the second write definition is JWriteXADDrm_LdSt_Part).
//
// We need two writes because the instruction latency differs from the output
// register operand latency. In particular, the first write describes the first
// (and only) output register operand of the instruction. However, the
// instruction latency is set to the MAX of all the write latencies. That's why
// a second write is needed in this case (see example below).
//
// Example:
//   XADD %ecx, (%rsp)   ## Instruction latency: 11cy
//                       ## ECX write Latency: 3cy
//
// Register ECX becomes available in 3 cycles. That is because the value of ECX
// is exchanged with the value read from the stack pointer, and the load-to-use
// latency is assumed to be 3cy.
def JWriteXADDrm_XCHG_Part : SchedWriteRes<[JALU01]> {
  let Latency = 3; // load-to-use latency
  let ResourceCycles = [3];
  let NumMicroOps = 3;
}
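The max-of-write-latencies rule that motivates the two-write split can be shown in miniature. A simplified sketch under the assumptions stated in the comment above (helper name is hypothetical):

```python
# Sketch of the rule: when an instruction defines several writes, the
# latency reported for the instruction as a whole is the MAX of all its
# write latencies, while each output operand keeps its own write latency.

def instruction_latency(write_latencies):
    return max(write_latencies)

# Non-atomic 'xadd %ecx, (%rsp)': the register write completes in 3cy
# (load-to-use), the load/store write in 11cy -> instruction latency 11cy.
print(instruction_latency([3, 11]))  # 11
```

This is why a dedicated 3cy write is needed for the ECX operand: folding it into the 11cy write would either overstate the operand latency or understate the instruction latency.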
// This write defines the latency of the in/out register operand of an atomic
// XADDrm. This is the first of a sequence of two writes used to model atomic
// XADD instructions. The second write of the sequence is JWriteXCHGrm_LdSt_Part.
//
// Example:
//   LOCK XADD %ecx, (%rsp)   ## Instruction Latency: 16cy
//                            ## ECX write Latency: 11cy
//
// The value of ECX becomes available only after 11cy from the start of
// execution. This write is used to specifically set that operand latency.
def JWriteLOCK_XADDrm_XCHG_Part : SchedWriteRes<[JALU01]> {
  let Latency = 11;
  let ResourceCycles = [3];
  let NumMicroOps = 3;
}
// This write defines the latency of the in/out register operand of an atomic
// XCHGrm. This write is the first of a sequence of two writes that describe
// atomic XCHG operations. We need two writes because the instruction latency
// differs from the output register write latency. We want to make sure that
// the output register operand becomes visible after 11cy. However, we want to
// set the instruction latency to 16cy.
def JWriteXCHGrm_XCHG_Part : SchedWriteRes<[JALU01]> {
  let Latency = 11;
  let ResourceCycles = [2];
  let NumMicroOps = 2;
}

def JWriteXADDrm_LdSt_Part : SchedWriteRes<[JLAGU, JSAGU]> {
  let Latency = 11;
  let ResourceCycles = [1, 1];
  let NumMicroOps = 1;
}

def JWriteXCHGrm_LdSt_Part : SchedWriteRes<[JLAGU, JSAGU]> {
  let Latency = 16;
  let ResourceCycles = [16, 16];
  let NumMicroOps = 1;
}
def JWriteXADDrm_Part1 : SchedWriteVariant<[
  SchedVar<MCSchedPredicate<CheckLockPrefix>, [JWriteLOCK_XADDrm_XCHG_Part]>,
  SchedVar<NoSchedPred,                       [JWriteXADDrm_XCHG_Part]>
]>;

def JWriteXADDrm_Part2 : SchedWriteVariant<[
  SchedVar<MCSchedPredicate<CheckLockPrefix>, [JWriteXCHGrm_LdSt_Part]>,
  SchedVar<NoSchedPred,                       [JWriteXADDrm_LdSt_Part]>
]>;

def : InstRW<[JWriteXADDrm_Part1, JWriteXADDrm_Part2, ReadAfterLd],
             (instrs XADD8rm, XADD16rm, XADD32rm, XADD64rm,
                     LXADD8, LXADD16, LXADD32, LXADD64)>;

def : InstRW<[JWriteXCHGrm_XCHG_Part, JWriteXCHGrm_LdSt_Part, ReadAfterLd],
             (instrs XCHG8rm, XCHG16rm, XCHG32rm, XCHG64rm)>;
////////////////////////////////////////////////////////////////////////////////
// Floating point. This covers both scalar and vector operations.
////////////////////////////////////////////////////////////////////////////////

defm : X86WriteRes<WriteFLD0, [JFPU1, JSTC], 3, [1,1], 1>;
defm : X86WriteRes<WriteFLD1, [JFPU1, JSTC], 3, [1,1], 1>;
[X86] Introduce WriteFLDC for x87 constant loads.
Summary:
{FLDL2E, FLDL2T, FLDLG2, FLDLN2, FLDPI} were using WriteMicrocoded.
- I've measured the values for Broadwell, Haswell, SandyBridge, Skylake.
- For ZnVer1 and Atom, values were transferred from InstRWs.
- For SLM and BtVer2, I've guessed some values :(
Reviewers: RKSimon, craig.topper, andreadb
Subscribers: gbedwell, llvm-commits
Differential Revision: https://reviews.llvm.org/D47585
llvm-svn: 333656
defm : X86WriteRes<WriteFLDC, [JFPU1, JSTC], 3, [1,1], 1>;

defm : X86WriteRes<WriteFLoad,  [JLAGU, JFPU01, JFPX], 5, [1, 1, 1], 1>;
defm : X86WriteRes<WriteFLoadX, [JLAGU], 5, [1], 1>;
defm : X86WriteRes<WriteFLoadY, [JLAGU], 5, [2], 2>;
defm : X86WriteRes<WriteFMaskedLoad,  [JLAGU, JFPU01, JFPX], 6, [1, 2, 2], 1>;
defm : X86WriteRes<WriteFMaskedLoadY, [JLAGU, JFPU01, JFPX], 6, [2, 4, 4], 2>;

defm : X86WriteRes<WriteFStore,    [JSAGU, JFPU1, JSTC], 2, [1, 1, 1], 1>;
defm : X86WriteRes<WriteFStoreX,   [JSAGU, JFPU1, JSTC], 1, [1, 1, 1], 1>;
defm : X86WriteRes<WriteFStoreY,   [JSAGU, JFPU1, JSTC], 1, [2, 2, 2], 2>;
defm : X86WriteRes<WriteFStoreNT,  [JSAGU, JFPU1, JSTC], 3, [1, 1, 1], 1>;
defm : X86WriteRes<WriteFStoreNTX, [JSAGU, JFPU1, JSTC], 3, [1, 1, 1], 1>;
defm : X86WriteRes<WriteFStoreNTY, [JSAGU, JFPU1, JSTC], 3, [2, 2, 2], 1>;

defm : X86WriteRes<WriteFMaskedStore32,  [JFPU0, JFPA, JFPU1, JSTC, JLAGU, JSAGU, JALU01], 16, [1,1, 5, 5,4,4,4], 19>;
defm : X86WriteRes<WriteFMaskedStore64,  [JFPU0, JFPA, JFPU1, JSTC, JLAGU, JSAGU, JALU01], 13, [1,1, 2, 2,2,2,2], 10>;
defm : X86WriteRes<WriteFMaskedStore32Y, [JFPU0, JFPA, JFPU1, JSTC, JLAGU, JSAGU, JALU01], 22, [1,1,10,10,8,8,8], 36>;
defm : X86WriteRes<WriteFMaskedStore64Y, [JFPU0, JFPA, JFPU1, JSTC, JLAGU, JSAGU, JALU01], 16, [1,1, 4, 4,4,4,4], 18>;

defm : X86WriteRes<WriteFMove,  [JFPU01, JFPX], 1, [1, 1], 1>;
defm : X86WriteRes<WriteFMoveX, [JFPU01, JFPX], 1, [1, 1], 1>;
defm : X86WriteRes<WriteFMoveY, [JFPU01, JFPX], 1, [2, 2], 2>;

defm : X86WriteRes<WriteEMMS, [JFPU01, JFPX], 2, [1, 1], 1>;

defm : JWriteResFpuPair<WriteFAdd,    [JFPU0, JFPA], 3>;
defm : JWriteResFpuPair<WriteFAddX,   [JFPU0, JFPA], 3>;
defm : JWriteResYMMPair<WriteFAddY,   [JFPU0, JFPA], 3, [2,2], 2>;
defm : X86WriteResPairUnsupported<WriteFAddZ>;
defm : JWriteResFpuPair<WriteFAdd64,  [JFPU0, JFPA], 3>;
defm : JWriteResFpuPair<WriteFAdd64X, [JFPU0, JFPA], 3>;
defm : JWriteResYMMPair<WriteFAdd64Y, [JFPU0, JFPA], 3, [2,2], 2>;
defm : X86WriteResPairUnsupported<WriteFAdd64Z>;
defm : JWriteResFpuPair<WriteFCmp,    [JFPU0, JFPA], 2>;
defm : JWriteResFpuPair<WriteFCmpX,   [JFPU0, JFPA], 2>;
defm : JWriteResYMMPair<WriteFCmpY,   [JFPU0, JFPA], 2, [2,2], 2>;
defm : X86WriteResPairUnsupported<WriteFCmpZ>;
defm : JWriteResFpuPair<WriteFCmp64,  [JFPU0, JFPA], 2>;
defm : JWriteResFpuPair<WriteFCmp64X, [JFPU0, JFPA], 2>;
defm : JWriteResYMMPair<WriteFCmp64Y, [JFPU0, JFPA], 2, [2,2], 2>;
defm : X86WriteResPairUnsupported<WriteFCmp64Z>;
defm : JWriteResFpuPair<WriteFCom,  [JFPU0, JFPA, JALU0], 3>;
defm : JWriteResFpuPair<WriteFComX, [JFPU0, JFPA, JALU0], 3>;
defm : JWriteResFpuPair<WriteFMul,    [JFPU1, JFPM], 2>;
defm : JWriteResFpuPair<WriteFMulX,   [JFPU1, JFPM], 2>;
defm : JWriteResYMMPair<WriteFMulY,   [JFPU1, JFPM], 2, [2,2], 2>;
defm : X86WriteResPairUnsupported<WriteFMulZ>;
defm : JWriteResFpuPair<WriteFMul64,  [JFPU1, JFPM], 4, [1,2]>;
defm : JWriteResFpuPair<WriteFMul64X, [JFPU1, JFPM], 4, [1,2]>;
defm : JWriteResYMMPair<WriteFMul64Y, [JFPU1, JFPM], 4, [2,4], 2>;
defm : X86WriteResPairUnsupported<WriteFMul64Z>;
defm : X86WriteResPairUnsupported<WriteFMA>;
defm : X86WriteResPairUnsupported<WriteFMAX>;
defm : X86WriteResPairUnsupported<WriteFMAY>;
defm : X86WriteResPairUnsupported<WriteFMAZ>;
defm : JWriteResFpuPair<WriteDPPD,  [JFPU1, JFPM, JFPA], 9, [1, 3, 3], 3>;
defm : JWriteResFpuPair<WriteDPPS,  [JFPU1, JFPM, JFPA], 11, [1, 3, 3], 5>;
defm : JWriteResYMMPair<WriteDPPSY, [JFPU1, JFPM, JFPA], 12, [2, 6, 6], 10>;
defm : X86WriteResPairUnsupported<WriteDPPSZ>;
defm : JWriteResFpuPair<WriteFRcp,    [JFPU1, JFPM], 2>;
defm : JWriteResFpuPair<WriteFRcpX,   [JFPU1, JFPM], 2>;
defm : JWriteResYMMPair<WriteFRcpY,   [JFPU1, JFPM], 2, [2,2], 2>;
defm : X86WriteResPairUnsupported<WriteFRcpZ>;
defm : JWriteResFpuPair<WriteFRsqrt,  [JFPU1, JFPM], 2>;
defm : JWriteResFpuPair<WriteFRsqrtX, [JFPU1, JFPM], 2>;
defm : JWriteResYMMPair<WriteFRsqrtY, [JFPU1, JFPM], 2, [2,2], 2>;
defm : X86WriteResPairUnsupported<WriteFRsqrtZ>;
defm : JWriteResFpuPair<WriteFDiv,  [JFPU1, JFPM], 19, [1, 19]>;
defm : JWriteResFpuPair<WriteFDivX, [JFPU1, JFPM], 19, [1, 19]>;
defm : JWriteResYMMPair<WriteFDivY, [JFPU1, JFPM], 38, [2, 38], 2>;
defm : X86WriteResPairUnsupported<WriteFDivZ>;
|
|
|
defm : JWriteResFpuPair<WriteFDiv64, [JFPU1, JFPM], 19, [1, 19]>;
|
|
|
|
defm : JWriteResFpuPair<WriteFDiv64X, [JFPU1, JFPM], 19, [1, 19]>;
|
|
|
|
defm : JWriteResYMMPair<WriteFDiv64Y, [JFPU1, JFPM], 38, [2, 38], 2>;
|
2018-06-11 15:00:08 +08:00
|
|
|
defm : X86WriteResPairUnsupported<WriteFDiv64Z>;
|
2018-03-18 20:09:17 +08:00
|
|
|
defm : JWriteResFpuPair<WriteFSqrt, [JFPU1, JFPM], 21, [1, 21]>;
|
2018-05-07 19:50:44 +08:00
|
|
|
defm : JWriteResFpuPair<WriteFSqrtX, [JFPU1, JFPM], 21, [1, 21]>;
|
2018-05-02 02:06:07 +08:00
|
|
|
defm : JWriteResYMMPair<WriteFSqrtY, [JFPU1, JFPM], 42, [2, 42], 2>;
|
2018-06-11 15:00:08 +08:00
|
|
|
defm : X86WriteResPairUnsupported<WriteFSqrtZ>;
|
2018-05-07 19:50:44 +08:00
|
|
|
defm : JWriteResFpuPair<WriteFSqrt64, [JFPU1, JFPM], 27, [1, 27]>;
|
|
|
|
defm : JWriteResFpuPair<WriteFSqrt64X, [JFPU1, JFPM], 27, [1, 27]>;
|
|
|
|
defm : JWriteResYMMPair<WriteFSqrt64Y, [JFPU1, JFPM], 54, [2, 54], 2>;
|
2018-06-11 15:00:08 +08:00
|
|
|
defm : X86WriteResPairUnsupported<WriteFSqrt64Z>;
|
2018-05-07 19:50:44 +08:00
|
|
|
defm : JWriteResFpuPair<WriteFSqrt80, [JFPU1, JFPM], 35, [1, 35]>;
|
2018-04-21 05:16:05 +08:00
|
|
|
defm : JWriteResFpuPair<WriteFSign, [JFPU1, JFPM], 2>;
|
2018-05-04 20:59:24 +08:00
|
|
|
defm : JWriteResFpuPair<WriteFRnd, [JFPU1, JSTC], 3>;
|
|
|
|
defm : JWriteResYMMPair<WriteFRndY, [JFPU1, JSTC], 3, [2,2], 2>;
|
2018-06-11 22:37:53 +08:00
|
|
|
defm : X86WriteResPairUnsupported<WriteFRndZ>;
|
2018-04-21 05:16:05 +08:00
|
|
|
defm : JWriteResFpuPair<WriteFLogic, [JFPU01, JFPX], 1>;
|
2018-04-27 23:50:33 +08:00
|
|
|
defm : JWriteResYMMPair<WriteFLogicY, [JFPU01, JFPX], 1, [2, 2], 2>;
|
2018-06-11 22:37:53 +08:00
|
|
|
defm : X86WriteResPairUnsupported<WriteFLogicZ>;
|
2018-05-08 18:28:03 +08:00
|
|
|
defm : JWriteResFpuPair<WriteFTest, [JFPU0, JFPA, JALU0], 3>;
|
|
|
|
defm : JWriteResYMMPair<WriteFTestY , [JFPU01, JFPX, JFPA, JALU0], 4, [2, 2, 2, 1], 3>;
|
2018-06-11 22:37:53 +08:00
|
|
|
defm : X86WriteResPairUnsupported<WriteFTestZ>;
|
2018-03-18 20:09:17 +08:00
|
|
|
defm : JWriteResFpuPair<WriteFShuffle, [JFPU01, JFPX], 1>;
|
2018-05-01 22:25:01 +08:00
|
|
|
defm : JWriteResYMMPair<WriteFShuffleY, [JFPU01, JFPX], 1, [2, 2], 2>;
|
2018-06-11 22:37:53 +08:00
|
|
|
defm : X86WriteResPairUnsupported<WriteFShuffleZ>;
|
2019-01-22 21:13:57 +08:00
|
|
|
defm : JWriteResFpuPair<WriteFVarShuffle, [JFPU01, JFPX], 3, [1, 4], 3>; // +1cy latency.
|
|
|
|
defm : JWriteResYMMPair<WriteFVarShuffleY,[JFPU01, JFPX], 4, [2, 6], 6>; // +1cy latency.
|
2018-06-11 22:37:53 +08:00
|
|
|
defm : X86WriteResPairUnsupported<WriteFVarShuffleZ>;
|
2018-03-18 20:09:17 +08:00
|
|
|
defm : JWriteResFpuPair<WriteFBlend, [JFPU01, JFPX], 1>;
|
2018-04-28 02:19:48 +08:00
|
|
|
defm : JWriteResYMMPair<WriteFBlendY, [JFPU01, JFPX], 1, [2, 2], 2>;
|
2018-06-11 22:37:53 +08:00
|
|
|
defm : X86WriteResPairUnsupported<WriteFBlendZ>;
|
2018-10-02 23:13:18 +08:00
|
|
|
defm : JWriteResFpuPair<WriteFVarBlend, [JFPU01, JFPX], 2, [4, 4], 3>;
|
|
|
|
defm : JWriteResYMMPair<WriteFVarBlendY, [JFPU01, JFPX], 3, [6, 6], 6>;
|
2018-06-11 22:37:53 +08:00
|
|
|
defm : X86WriteResPairUnsupported<WriteFVarBlendZ>;
|
2018-08-31 16:30:47 +08:00
|
|
|
defm : JWriteResFpuPair<WriteFShuffle256, [JFPU01, JFPX], 1, [2, 2], 2>;
|
2018-06-11 15:00:08 +08:00
|
|
|
defm : X86WriteResPairUnsupported<WriteFVarShuffle256>;
|
2014-09-10 04:07:07 +08:00
|
|
|
|
2018-03-13 05:35:12 +08:00
|
|
|
////////////////////////////////////////////////////////////////////////////////
// Conversions.
////////////////////////////////////////////////////////////////////////////////
defm : JWriteResFpuPair<WriteCvtSS2I, [JFPU1, JSTC, JFPU0, JFPA, JALU0], 7, [1,1,1,1,1], 2>;
defm : JWriteResFpuPair<WriteCvtPS2I, [JFPU1, JSTC], 3, [1,1], 1>;
defm : JWriteResYMMPair<WriteCvtPS2IY, [JFPU1, JSTC], 3, [2,2], 2>;
defm : X86WriteResPairUnsupported<WriteCvtPS2IZ>;
defm : JWriteResFpuPair<WriteCvtSD2I, [JFPU1, JSTC, JFPU0, JFPA, JALU0], 7, [1,1,1,1,1], 2>;
defm : JWriteResFpuPair<WriteCvtPD2I, [JFPU1, JSTC], 3, [1,1], 1>;
defm : JWriteResYMMPair<WriteCvtPD2IY, [JFPU1, JSTC, JFPX], 6, [2,2,4], 3>;
defm : X86WriteResPairUnsupported<WriteCvtPD2IZ>;

defm : X86WriteRes<WriteCvtI2SS, [JFPU1, JSTC], 4, [1,1], 2>;
defm : X86WriteRes<WriteCvtI2SSLd, [JLAGU, JFPU1, JSTC], 9, [1,1,1], 1>;
defm : JWriteResFpuPair<WriteCvtI2PS, [JFPU1, JSTC], 3, [1,1], 1>;
defm : JWriteResYMMPair<WriteCvtI2PSY, [JFPU1, JSTC], 3, [2,2], 2>;
defm : X86WriteResPairUnsupported<WriteCvtI2PSZ>;
defm : X86WriteRes<WriteCvtI2SD, [JFPU1, JSTC], 4, [1,1], 2>;
defm : X86WriteRes<WriteCvtI2SDLd, [JLAGU, JFPU1, JSTC], 9, [1,1,1], 1>;
defm : JWriteResFpuPair<WriteCvtI2PD, [JFPU1, JSTC], 3, [1,1], 1>;
defm : JWriteResYMMPair<WriteCvtI2PDY, [JFPU1, JSTC], 3, [2,2], 2>;
defm : X86WriteResPairUnsupported<WriteCvtI2PDZ>;

defm : JWriteResFpuPair<WriteCvtSS2SD, [JFPU1, JSTC], 7, [1,2], 2>;
defm : JWriteResFpuPair<WriteCvtPS2PD, [JFPU1, JSTC], 2, [1,1], 1>;
defm : JWriteResYMMPair<WriteCvtPS2PDY, [JFPU1, JSTC], 2, [2,2], 2>;
defm : X86WriteResPairUnsupported<WriteCvtPS2PDZ>;

defm : JWriteResFpuPair<WriteCvtSD2SS, [JFPU1, JSTC], 7, [1,2], 2>;
defm : JWriteResFpuPair<WriteCvtPD2PS, [JFPU1, JSTC], 3, [1,1], 1>;
defm : JWriteResYMMPair<WriteCvtPD2PSY, [JFPU1, JSTC, JFPX], 6, [2,2,4], 3>;
defm : X86WriteResPairUnsupported<WriteCvtPD2PSZ>;

defm : JWriteResFpuPair<WriteCvtPH2PS, [JFPU1, JSTC], 3, [1,1], 1>;
defm : JWriteResYMMPair<WriteCvtPH2PSY, [JFPU1, JSTC], 3, [2,2], 2>;
defm : X86WriteResPairUnsupported<WriteCvtPH2PSZ>;

defm : X86WriteRes<WriteCvtPS2PH, [JFPU1, JSTC], 3, [1,1], 1>;
defm : X86WriteRes<WriteCvtPS2PHY, [JFPU1, JSTC, JFPX], 6, [2,2,2], 3>;
defm : X86WriteResUnsupported<WriteCvtPS2PHZ>;
defm : X86WriteRes<WriteCvtPS2PHSt, [JFPU1, JSTC, JSAGU], 4, [1,1,1], 1>;
defm : X86WriteRes<WriteCvtPS2PHYSt, [JFPU1, JSTC, JFPX, JSAGU], 7, [2,2,2,1], 3>;
defm : X86WriteResUnsupported<WriteCvtPS2PHZSt>;

////////////////////////////////////////////////////////////////////////////////
// Vector integer operations.
////////////////////////////////////////////////////////////////////////////////
defm : X86WriteRes<WriteVecLoad, [JLAGU, JFPU01, JVALU], 5, [1, 1, 1], 1>;
defm : X86WriteRes<WriteVecLoadX, [JLAGU], 5, [1], 1>;
defm : X86WriteRes<WriteVecLoadY, [JLAGU], 5, [2], 2>;
defm : X86WriteRes<WriteVecLoadNT, [JLAGU, JFPU01, JVALU], 5, [1, 1, 1], 1>;
defm : X86WriteRes<WriteVecLoadNTY, [JLAGU, JFPU01, JVALU], 5, [1, 1, 1], 1>;
defm : X86WriteRes<WriteVecMaskedLoad, [JLAGU, JFPU01, JVALU], 6, [1, 2, 2], 1>;
defm : X86WriteRes<WriteVecMaskedLoadY, [JLAGU, JFPU01, JVALU], 6, [2, 4, 4], 2>;

defm : X86WriteRes<WriteVecStore, [JSAGU, JFPU1, JSTC], 2, [1, 1, 1], 1>;
defm : X86WriteRes<WriteVecStoreX, [JSAGU, JFPU1, JSTC], 1, [1, 1, 1], 1>;
defm : X86WriteRes<WriteVecStoreY, [JSAGU, JFPU1, JSTC], 1, [2, 2, 2], 2>;
defm : X86WriteRes<WriteVecStoreNT, [JSAGU, JFPU1, JSTC], 2, [1, 1, 1], 1>;
defm : X86WriteRes<WriteVecStoreNTY, [JSAGU, JFPU1, JSTC], 2, [2, 2, 2], 1>;
defm : X86WriteRes<WriteVecMaskedStore, [JSAGU, JFPU01, JVALU], 6, [1, 1, 4], 1>;
defm : X86WriteRes<WriteVecMaskedStoreY, [JSAGU, JFPU01, JVALU], 6, [2, 2, 4], 2>;

defm : X86WriteRes<WriteVecMove, [JFPU01, JVALU], 1, [1, 1], 1>;
defm : X86WriteRes<WriteVecMoveX, [JFPU01, JVALU], 1, [1, 1], 1>;
defm : X86WriteRes<WriteVecMoveY, [JFPU01, JVALU], 1, [2, 2], 2>;
defm : X86WriteRes<WriteVecMoveToGpr, [JFPU0, JFPA, JALU0], 4, [1, 1, 1], 1>;
defm : X86WriteRes<WriteVecMoveFromGpr, [JFPU01, JFPX], 8, [1, 1], 2>;

defm : JWriteResFpuPair<WriteVecALU, [JFPU01, JVALU], 1>;
defm : JWriteResFpuPair<WriteVecALUX, [JFPU01, JVALU], 1>;
defm : X86WriteResPairUnsupported<WriteVecALUY>;
defm : X86WriteResPairUnsupported<WriteVecALUZ>;
defm : JWriteResFpuPair<WriteVecShift, [JFPU01, JVALU], 1>;
defm : JWriteResFpuPair<WriteVecShiftX, [JFPU01, JVALU], 2>; // +1cy latency.
defm : X86WriteResPairUnsupported<WriteVecShiftY>;
defm : X86WriteResPairUnsupported<WriteVecShiftZ>;
defm : JWriteResFpuPair<WriteVecShiftImm, [JFPU01, JVALU], 1>;
defm : JWriteResFpuPair<WriteVecShiftImmX, [JFPU01, JVALU], 2>; // +1cy latency.
defm : X86WriteResPairUnsupported<WriteVecShiftImmY>;
defm : X86WriteResPairUnsupported<WriteVecShiftImmZ>;
defm : X86WriteResPairUnsupported<WriteVarVecShift>;
defm : X86WriteResPairUnsupported<WriteVarVecShiftY>;
defm : X86WriteResPairUnsupported<WriteVarVecShiftZ>;
defm : JWriteResFpuPair<WriteVecIMul, [JFPU0, JVIMUL], 2>;
defm : JWriteResFpuPair<WriteVecIMulX, [JFPU0, JVIMUL], 2>;
defm : X86WriteResPairUnsupported<WriteVecIMulY>;
defm : X86WriteResPairUnsupported<WriteVecIMulZ>;
defm : JWriteResFpuPair<WritePMULLD, [JFPU0, JFPU01, JVIMUL, JVALU], 4, [2, 1, 2, 1], 3>;
defm : X86WriteResPairUnsupported<WritePMULLDY>;
defm : X86WriteResPairUnsupported<WritePMULLDZ>;
defm : JWriteResFpuPair<WriteMPSAD, [JFPU0, JVIMUL], 3, [1, 2], 3>;
defm : X86WriteResPairUnsupported<WriteMPSADY>;
defm : X86WriteResPairUnsupported<WriteMPSADZ>;
defm : JWriteResFpuPair<WritePSADBW, [JFPU01, JVALU], 2>;
defm : JWriteResFpuPair<WritePSADBWX, [JFPU01, JVALU], 2>;
defm : X86WriteResPairUnsupported<WritePSADBWY>;
defm : X86WriteResPairUnsupported<WritePSADBWZ>;
defm : JWriteResFpuPair<WritePHMINPOS, [JFPU01, JVALU], 2>;
defm : JWriteResFpuPair<WriteShuffle, [JFPU01, JVALU], 1>;
defm : JWriteResFpuPair<WriteShuffleX, [JFPU01, JVALU], 1>;
defm : X86WriteResPairUnsupported<WriteShuffleY>;
defm : X86WriteResPairUnsupported<WriteShuffleZ>;
defm : JWriteResFpuPair<WriteVarShuffle, [JFPU01, JVALU], 2, [1, 1], 1>;
defm : JWriteResFpuPair<WriteVarShuffleX, [JFPU01, JVALU], 2, [1, 4], 3>;
defm : X86WriteResPairUnsupported<WriteVarShuffleY>;
defm : X86WriteResPairUnsupported<WriteVarShuffleZ>;
defm : JWriteResFpuPair<WriteBlend, [JFPU01, JVALU], 1>;
defm : X86WriteResPairUnsupported<WriteBlendY>;
defm : X86WriteResPairUnsupported<WriteBlendZ>;
defm : JWriteResFpuPair<WriteVarBlend, [JFPU01, JVALU], 2, [4, 4], 3>;
defm : X86WriteResPairUnsupported<WriteVarBlendY>;
defm : X86WriteResPairUnsupported<WriteVarBlendZ>;
defm : JWriteResFpuPair<WriteVecLogic, [JFPU01, JVALU], 1>;
defm : JWriteResFpuPair<WriteVecLogicX, [JFPU01, JVALU], 1>;
defm : X86WriteResPairUnsupported<WriteVecLogicY>;
defm : X86WriteResPairUnsupported<WriteVecLogicZ>;
defm : JWriteResFpuPair<WriteVecTest, [JFPU0, JFPA, JALU0], 3>;
defm : JWriteResYMMPair<WriteVecTestY, [JFPU01, JFPX, JFPA, JALU0], 4, [2, 2, 2, 1], 3>;
defm : X86WriteResPairUnsupported<WriteVecTestZ>;
defm : X86WriteResPairUnsupported<WriteShuffle256>;
defm : X86WriteResPairUnsupported<WriteVarShuffle256>;

////////////////////////////////////////////////////////////////////////////////
// Vector insert/extract operations.
////////////////////////////////////////////////////////////////////////////////
defm : X86WriteRes<WriteVecInsert, [JFPU01, JVALU], 1, [1,1], 2>;
defm : X86WriteRes<WriteVecInsertLd, [JFPU01, JVALU, JLAGU], 4, [1,1,1], 1>;
defm : X86WriteRes<WriteVecExtract, [JFPU0, JFPA, JALU0], 3, [1,1,1], 1>;
defm : X86WriteRes<WriteVecExtractSt, [JFPU1, JSTC, JSAGU], 3, [1,1,1], 1>;

////////////////////////////////////////////////////////////////////////////////
// SSE42 String instructions.
////////////////////////////////////////////////////////////////////////////////
defm : JWriteResFpuPair<WritePCmpIStrI, [JFPU1, JVALU1, JFPU0, JFPA, JALU0], 7, [2, 2, 1, 1, 1], 3>;
defm : JWriteResFpuPair<WritePCmpIStrM, [JFPU1, JVALU1, JFPU0, JFPA, JALU0], 8, [2, 2, 1, 1, 1], 3>;
defm : JWriteResFpuPair<WritePCmpEStrI, [JFPU1, JSAGU, JLAGU, JVALU, JVALU1, JFPA, JALU0], 14, [1, 2, 2, 6, 4, 1, 1], 9>;
defm : JWriteResFpuPair<WritePCmpEStrM, [JFPU1, JSAGU, JLAGU, JVALU, JVALU1, JFPA, JALU0], 14, [1, 2, 2, 6, 4, 1, 1], 9>;

////////////////////////////////////////////////////////////////////////////////
// MOVMSK Instructions.
////////////////////////////////////////////////////////////////////////////////
def : WriteRes<WriteFMOVMSK, [JFPU0, JFPA, JALU0]> { let Latency = 3; }
def : WriteRes<WriteVecMOVMSK, [JFPU0, JFPA, JALU0]> { let Latency = 3; }
defm : X86WriteResUnsupported<WriteVecMOVMSKY>;
def : WriteRes<WriteMMXMOVMSK, [JFPU0, JFPA, JALU0]> { let Latency = 3; }

////////////////////////////////////////////////////////////////////////////////
// AES Instructions.
////////////////////////////////////////////////////////////////////////////////
defm : JWriteResFpuPair<WriteAESIMC, [JFPU0, JVIMUL], 2>;
defm : JWriteResFpuPair<WriteAESKeyGen, [JFPU0, JVIMUL], 2>;
defm : JWriteResFpuPair<WriteAESDecEnc, [JFPU01, JVALU, JFPU0, JVIMUL], 3, [1,1,1,1], 2>;

////////////////////////////////////////////////////////////////////////////////
// Horizontal add/sub instructions.
////////////////////////////////////////////////////////////////////////////////
defm : JWriteResFpuPair<WriteFHAdd, [JFPU0, JFPA], 4>; // +1cy latency.
defm : JWriteResYMMPair<WriteFHAddY, [JFPU0, JFPA], 4, [2,2], 2>; // +1cy latency.
defm : JWriteResFpuPair<WritePHAdd, [JFPU01, JVALU], 1>;
defm : JWriteResFpuPair<WritePHAddX, [JFPU01, JVALU], 2>; // +1cy latency.
defm : X86WriteResPairUnsupported<WritePHAddY>;

////////////////////////////////////////////////////////////////////////////////
// Carry-less multiplication instructions.
////////////////////////////////////////////////////////////////////////////////
defm : JWriteResFpuPair<WriteCLMul, [JFPU0, JVIMUL], 2>;

////////////////////////////////////////////////////////////////////////////////
// SSE4A instructions.
////////////////////////////////////////////////////////////////////////////////
def JWriteINSERTQ: SchedWriteRes<[JFPU01, JVALU]> {
  let Latency = 2;
  let ResourceCycles = [1, 4];
}
def : InstRW<[JWriteINSERTQ], (instrs INSERTQ, INSERTQI)>;

////////////////////////////////////////////////////////////////////////////////
// AVX instructions.
////////////////////////////////////////////////////////////////////////////////
def JWriteVecExtractF128: SchedWriteRes<[JFPU01, JFPX]>;
def : InstRW<[JWriteVecExtractF128], (instrs VEXTRACTF128rr)>;

def JWriteVBROADCASTYLd: SchedWriteRes<[JLAGU, JFPU01, JFPX]> {
  let Latency = 6;
  let ResourceCycles = [1, 2, 4];
  let NumMicroOps = 2;
}
def : InstRW<[JWriteVBROADCASTYLd], (instrs VBROADCASTSDYrm,
                                            VBROADCASTSSYrm,
                                            VBROADCASTF128)>;

def JWriteJVZEROALL: SchedWriteRes<[]> {
  let Latency = 90;
  let NumMicroOps = 73;
}
def : InstRW<[JWriteJVZEROALL], (instrs VZEROALL)>;

def JWriteJVZEROUPPER: SchedWriteRes<[]> {
  let Latency = 46;
  let NumMicroOps = 37;
}
def : InstRW<[JWriteJVZEROUPPER], (instrs VZEROUPPER)>;

///////////////////////////////////////////////////////////////////////////////
// SSE2/AVX Store Selected Bytes of Double Quadword - (V)MASKMOVDQ
///////////////////////////////////////////////////////////////////////////////
def JWriteMASKMOVDQU: SchedWriteRes<[JFPU0, JFPA, JFPU1, JSTC, JLAGU, JSAGU, JALU01]> {
  let Latency = 34;
  let ResourceCycles = [1, 1, 2, 2, 2, 16, 42];
  let NumMicroOps = 63;
}
def : InstRW<[JWriteMASKMOVDQU], (instrs MASKMOVDQU, MASKMOVDQU64,
                                         VMASKMOVDQU, VMASKMOVDQU64)>;

///////////////////////////////////////////////////////////////////////////////
// SchedWriteVariant definitions.
///////////////////////////////////////////////////////////////////////////////
def JWriteZeroLatency : SchedWriteRes<[]> {
  let Latency = 0;
}

def JWriteZeroIdiomYmm : SchedWriteRes<[JFPU01, JFPX]> {
  let NumMicroOps = 2;
}

// Certain instructions that use the same register for both source
// operands do not have a real dependency on the previous contents of the
// register, and thus, do not have to wait before completing. They can be
// optimized out at the register renaming stage.
// Example: `XORPS %XMM0, %XMM0` always produces zero, regardless of the
// previous value of %XMM0, so it never has to wait for an earlier writer.
// Reference: Section 10.8 of the "Software Optimization Guide for AMD Family
// 15h Processors".
// Reference: Agner Fog's "The microarchitecture of Intel, AMD and VIA CPUs",
// Section 21.8 [Dependency-breaking instructions].

def JWriteZeroIdiom : SchedWriteVariant<[
  SchedVar<MCSchedPredicate<ZeroIdiomPredicate>, [JWriteZeroLatency]>,
  SchedVar<NoSchedPred, [WriteALU]>
]>;
def : InstRW<[JWriteZeroIdiom], (instrs SUB32rr, SUB64rr,
                                        XOR32rr, XOR64rr)>;

def JWriteFZeroIdiom : SchedWriteVariant<[
  SchedVar<MCSchedPredicate<ZeroIdiomPredicate>, [JWriteZeroLatency]>,
  SchedVar<NoSchedPred, [WriteFLogic]>
]>;
def : InstRW<[JWriteFZeroIdiom], (instrs XORPSrr, VXORPSrr, XORPDrr, VXORPDrr,
                                         ANDNPSrr, VANDNPSrr,
                                         ANDNPDrr, VANDNPDrr)>;

def JWriteFZeroIdiomY : SchedWriteVariant<[
  SchedVar<MCSchedPredicate<ZeroIdiomPredicate>, [JWriteZeroIdiomYmm]>,
  SchedVar<NoSchedPred, [WriteFLogicY]>
]>;
def : InstRW<[JWriteFZeroIdiomY], (instrs VXORPSYrr, VXORPDYrr,
                                          VANDNPSYrr, VANDNPDYrr)>;

def JWriteVZeroIdiomLogic : SchedWriteVariant<[
  SchedVar<MCSchedPredicate<ZeroIdiomPredicate>, [JWriteZeroLatency]>,
  SchedVar<NoSchedPred, [WriteVecLogic]>
]>;
def : InstRW<[JWriteVZeroIdiomLogic], (instrs MMX_PXORirr, MMX_PANDNirr)>;

def JWriteVZeroIdiomLogicX : SchedWriteVariant<[
  SchedVar<MCSchedPredicate<ZeroIdiomPredicate>, [JWriteZeroLatency]>,
  SchedVar<NoSchedPred, [WriteVecLogicX]>
]>;
def : InstRW<[JWriteVZeroIdiomLogicX], (instrs PXORrr, VPXORrr,
                                               PANDNrr, VPANDNrr)>;

def JWriteVZeroIdiomALU : SchedWriteVariant<[
  SchedVar<MCSchedPredicate<ZeroIdiomPredicate>, [JWriteZeroLatency]>,
  SchedVar<NoSchedPred, [WriteVecALU]>
]>;
def : InstRW<[JWriteVZeroIdiomALU], (instrs MMX_PSUBBirr, MMX_PSUBDirr,
                                            MMX_PSUBQirr, MMX_PSUBWirr,
                                            MMX_PSUBSBirr, MMX_PSUBSWirr,
                                            MMX_PSUBUSBirr, MMX_PSUBUSWirr,
                                            MMX_PCMPGTBirr, MMX_PCMPGTDirr,
                                            MMX_PCMPGTWirr)>;

def JWriteVZeroIdiomALUX : SchedWriteVariant<[
  SchedVar<MCSchedPredicate<ZeroIdiomPredicate>, [JWriteZeroLatency]>,
  SchedVar<NoSchedPred, [WriteVecALUX]>
]>;
def : InstRW<[JWriteVZeroIdiomALUX], (instrs PSUBBrr, VPSUBBrr,
                                             PSUBDrr, VPSUBDrr,
                                             PSUBQrr, VPSUBQrr,
                                             PSUBWrr, VPSUBWrr,
                                             PSUBSBrr, VPSUBSBrr,
                                             PSUBSWrr, VPSUBSWrr,
                                             PSUBUSBrr, VPSUBUSBrr,
                                             PSUBUSWrr, VPSUBUSWrr,
                                             PCMPGTBrr, VPCMPGTBrr,
                                             PCMPGTDrr, VPCMPGTDrr,
                                             PCMPGTQrr, VPCMPGTQrr,
                                             PCMPGTWrr, VPCMPGTWrr)>;

def JWriteVPERM2F128 : SchedWriteVariant<[
  SchedVar<MCSchedPredicate<ZeroIdiomVPERMPredicate>, [JWriteZeroIdiomYmm]>,
  SchedVar<NoSchedPred, [WriteFShuffle256]>
]>;
def : InstRW<[JWriteVPERM2F128], (instrs VPERM2F128rr)>;

// This write is used for slow LEA instructions.
def JWrite3OpsLEA : SchedWriteRes<[JALU1, JSAGU]> {
  let Latency = 2;
}

// On Jaguar, a slow LEA is either a 3Ops LEA (base, index, offset), or an LEA
// with a `Scale` value different than 1.
def JSlowLEAPredicate : MCSchedPredicate<
  CheckAny<[
    // A 3-operand LEA (base, index, offset).
    IsThreeOperandsLEAFn,
    // An LEA with a "Scale" different than 1.
    CheckAll<[
      CheckIsImmOperand<2>,
      CheckNot<CheckImmOperand<2, 1>>
    ]>
  ]>
>;

def JWriteLEA : SchedWriteVariant<[
  SchedVar<JSlowLEAPredicate, [JWrite3OpsLEA]>,
  SchedVar<NoSchedPred, [WriteLEA]>
]>;

def : InstRW<[JWriteLEA], (instrs LEA32r, LEA64r, LEA64_32r)>;

def JSlowLEA16r : SchedWriteRes<[JALU01]> {
  let Latency = 3;
  let ResourceCycles = [4];
}

def : InstRW<[JSlowLEA16r], (instrs LEA16r)>;

///////////////////////////////////////////////////////////////////////////////
|
|
|
|
// Dependency breaking instructions.
|
|
|
|
///////////////////////////////////////////////////////////////////////////////
|
|
|
|
|
|
|
|
def : IsZeroIdiomFunction<[
  // GPR Zero-idioms.
  DepBreakingClass<[ SUB32rr, SUB64rr, XOR32rr, XOR64rr ], ZeroIdiomPredicate>,

  // MMX Zero-idioms.
  DepBreakingClass<[
    MMX_PXORirr, MMX_PANDNirr, MMX_PSUBBirr,
    MMX_PSUBDirr, MMX_PSUBQirr, MMX_PSUBWirr,
    MMX_PSUBSBirr, MMX_PSUBSWirr, MMX_PSUBUSBirr, MMX_PSUBUSWirr,
    MMX_PCMPGTBirr, MMX_PCMPGTDirr, MMX_PCMPGTWirr
  ], ZeroIdiomPredicate>,

  // SSE Zero-idioms.
  DepBreakingClass<[
    // fp variants.
    XORPSrr, XORPDrr, ANDNPSrr, ANDNPDrr,

    // int variants.
    PXORrr, PANDNrr,
    PSUBBrr, PSUBWrr, PSUBDrr, PSUBQrr,
    PSUBSBrr, PSUBSWrr, PSUBUSBrr, PSUBUSWrr,
    PCMPGTBrr, PCMPGTDrr, PCMPGTQrr, PCMPGTWrr
  ], ZeroIdiomPredicate>,

  // AVX Zero-idioms.
  DepBreakingClass<[
    // xmm fp variants.
    VXORPSrr, VXORPDrr, VANDNPSrr, VANDNPDrr,

    // xmm int variants.
    VPXORrr, VPANDNrr,
    VPSUBBrr, VPSUBWrr, VPSUBDrr, VPSUBQrr,
    VPSUBSBrr, VPSUBSWrr, VPSUBUSBrr, VPSUBUSWrr,
    VPCMPGTBrr, VPCMPGTWrr, VPCMPGTDrr, VPCMPGTQrr,

    // ymm variants.
    VXORPSYrr, VXORPDYrr, VANDNPSYrr, VANDNPDYrr
  ], ZeroIdiomPredicate>,

  DepBreakingClass<[ VPERM2F128rr ], ZeroIdiomVPERMPredicate>
]>;
def : IsDepBreakingFunction<[
  // GPR
  DepBreakingClass<[ SBB32rr, SBB64rr ], ZeroIdiomPredicate>,
  DepBreakingClass<[ CMP32rr, CMP64rr ], CheckSameRegOperand<0, 1> >,

  // MMX
  DepBreakingClass<[
    MMX_PCMPEQBirr, MMX_PCMPEQDirr, MMX_PCMPEQWirr
  ], ZeroIdiomPredicate>,

  // SSE
  DepBreakingClass<[
    PCMPEQBrr, PCMPEQWrr, PCMPEQDrr, PCMPEQQrr
  ], ZeroIdiomPredicate>,

  // AVX
  DepBreakingClass<[
    VPCMPEQBrr, VPCMPEQWrr, VPCMPEQDrr, VPCMPEQQrr
  ], ZeroIdiomPredicate>
]>;
def : IsOptimizableRegisterMove<[
  InstructionEquivalenceClass<[
    // GPR variants.
    MOV32rr, MOV64rr,

    // MMX variants.
    MMX_MOVQ64rr,

    // SSE variants.
    MOVAPSrr, MOVUPSrr,
    MOVAPDrr, MOVUPDrr,
    MOVDQArr, MOVDQUrr,

    // AVX variants.
    VMOVAPSrr, VMOVUPSrr,
    VMOVAPDrr, VMOVUPDrr,
    VMOVDQArr, VMOVDQUrr
  ], TruePred >
]>;
} // SchedModel