//===--------------------- BottleneckAnalysis.h -----------------*- C++ -*-===//
//
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
//
//===----------------------------------------------------------------------===//
/// \file
///
/// This file implements the bottleneck analysis view.
///
/// This view internally observes backend pressure increase events in order to
/// identify problematic data dependencies and processor resource interferences.
///
/// Example of bottleneck analysis report for a dot-product on X86 btver2:
///
/// Cycles with backend pressure increase [ 40.76% ]
/// Throughput Bottlenecks:
///   Resource Pressure [ 39.34% ]
///   - JFPA  [ 39.34% ]
///   - JFPU0 [ 39.34% ]
///   Data Dependencies: [ 1.42% ]
///   - Register Dependencies [ 1.42% ]
///   - Memory Dependencies   [ 0.00% ]
///
/// According to the example, backend pressure increased during 40.76% of the
/// simulated cycles. In particular, the major cause of backend pressure
/// increases was contention on the floating point adder JFPA, which is
/// accessible from pipeline resource JFPU0.
///
/// At the end of each cycle, if pressure on the simulated out-of-order
/// buffers has increased, a backend pressure event is reported. In
/// particular, this occurs when there is a delta between the number of uOps
/// dispatched and the number of uOps issued to the underlying pipelines.
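///
/// As a simplified illustration (not necessarily the exact bookkeeping done
/// by the tool), assuming hypothetical per-cycle counters NumDispatchedOps
/// and NumIssuedOps:
///
/// \code
///   bool BackendPressureIncreased = NumDispatchedOps > NumIssuedOps;
/// \endcode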
///
/// The bottleneck analysis view is also responsible for identifying and
/// printing the most "critical" sequence of dependent instructions according
/// to the simulated run.
///
/// Below is the critical sequence computed for the dot-product example on
/// btver2:
///
///              Instruction                     Dependency Information
/// +----< 2.   vhaddps %xmm3, %xmm3, %xmm4
/// |
/// |    < loop carried >
/// |
/// |      0.   vmulps  %xmm0, %xmm0, %xmm2
/// +----> 1.   vhaddps %xmm2, %xmm2, %xmm3    ## RESOURCE interference: JFPA [ probability: 73% ]
/// +----> 2.   vhaddps %xmm3, %xmm3, %xmm4    ## REGISTER dependency: %xmm3
/// |
/// |    < loop carried >
/// |
/// +----> 1.   vhaddps %xmm2, %xmm2, %xmm3    ## RESOURCE interference: JFPA [ probability: 73% ]
///
/// The algorithm that computes the critical sequence is very similar to a
/// critical path analysis.
///
/// A dependency graph is used internally to track dependencies between nodes.
/// Nodes of the graph represent instructions from the input assembly sequence,
/// and edges of the graph represent data dependencies or processor resource
/// interferences.
///
/// Edges are dynamically 'discovered' by observing instruction state
/// transitions and backend pressure increase events. Edges are internally
/// ranked based on their "criticality". A dependency is considered to be
/// critical if it takes a long time to execute, and if it contributes to
/// backend pressure increases. Criticality is internally measured in terms of
/// cycles; it is computed for every edge in the graph as a function of the
/// edge latency and the number of backend pressure increase cycles
/// contributed by that edge.
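///
/// As a rough, illustrative approximation (the exact weighting is an internal
/// implementation detail):
///
///   cost(edge) ~ latency(edge) * pressure-increase-cycles(edge)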
///
/// At the end of simulation, costs are propagated to nodes through the edges
/// of the graph, and the most expensive path connecting the root-set (a set
/// of nodes with no predecessors) to a leaf node is reported as the critical
/// sequence.
//
//===----------------------------------------------------------------------===//

#ifndef LLVM_TOOLS_LLVM_MCA_BOTTLENECK_ANALYSIS_H
#define LLVM_TOOLS_LLVM_MCA_BOTTLENECK_ANALYSIS_H

#include "Views/View.h"
#include "llvm/ADT/DenseMap.h"
#include "llvm/ADT/SmallVector.h"
#include "llvm/MC/MCInstPrinter.h"
#include "llvm/MC/MCSchedule.h"
#include "llvm/MC/MCSubtargetInfo.h"
#include "llvm/Support/FormattedStream.h"
#include "llvm/Support/raw_ostream.h"

namespace llvm {
namespace mca {

class PressureTracker {
  const MCSchedModel &SM;

  // Resource pressure distribution. There is an element for every processor
  // resource declared by the scheduling model. Quantities are numbers of
  // cycles.
  SmallVector<unsigned, 4> ResourcePressureDistribution;

  // Each processor resource is associated with a so-called processor resource
  // mask. This vector allows correlating processor resource IDs with processor
  // resource masks. There is exactly one element for each processor resource
  // declared by the scheduling model.
  SmallVector<uint64_t, 4> ProcResID2Mask;

  // Maps processor resource state indices (returned by calls to
  // `getResourceStateIndex(Mask)`) to processor resource identifiers.
  SmallVector<unsigned, 4> ResIdx2ProcResID;

  // Maps processor resource identifiers to ResourceUsers indices.
  SmallVector<unsigned, 4> ProcResID2ResourceUsersIndex;

  // Identifies the last user of a processor resource unit.
  // This vector is updated on every instruction issued event.
  // There is one entry for every processor resource unit declared by the
  // processor model. An all_ones value is treated as an invalid instruction
  // identifier.
  using User = std::pair<unsigned, unsigned>;
  SmallVector<User, 4> ResourceUsers;

  struct InstructionPressureInfo {
    unsigned RegisterPressureCycles;
    unsigned MemoryPressureCycles;
    unsigned ResourcePressureCycles;
  };
  DenseMap<unsigned, InstructionPressureInfo> IPI;

  void updateResourcePressureDistribution(uint64_t CumulativeMask);

  User getResourceUser(unsigned ProcResID, unsigned UnitID) const {
    unsigned Index = ProcResID2ResourceUsersIndex[ProcResID];
    return ResourceUsers[Index + UnitID];
  }

public:
  PressureTracker(const MCSchedModel &Model);

  ArrayRef<unsigned> getResourcePressureDistribution() const {
    return ResourcePressureDistribution;
  }

  void getResourceUsers(uint64_t ResourceMask,
                        SmallVectorImpl<User> &Users) const;

  unsigned getRegisterPressureCycles(unsigned IID) const {
    assert(IPI.find(IID) != IPI.end() && "Instruction is not tracked!");
    const InstructionPressureInfo &Info = IPI.find(IID)->second;
    return Info.RegisterPressureCycles;
  }

  unsigned getMemoryPressureCycles(unsigned IID) const {
    assert(IPI.find(IID) != IPI.end() && "Instruction is not tracked!");
    const InstructionPressureInfo &Info = IPI.find(IID)->second;
    return Info.MemoryPressureCycles;
  }

  unsigned getResourcePressureCycles(unsigned IID) const {
    assert(IPI.find(IID) != IPI.end() && "Instruction is not tracked!");
    const InstructionPressureInfo &Info = IPI.find(IID)->second;
    return Info.ResourcePressureCycles;
  }

  const char *resolveResourceName(uint64_t ResourceMask) const {
    unsigned Index = getResourceStateIndex(ResourceMask);
    unsigned ProcResID = ResIdx2ProcResID[Index];
    const MCProcResourceDesc &PRDesc = *SM.getProcResource(ProcResID);
    return PRDesc.Name;
  }

  void onInstructionDispatched(unsigned IID);
  void onInstructionExecuted(unsigned IID);

  void handlePressureEvent(const HWPressureEvent &Event);
  void handleInstructionIssuedEvent(const HWInstructionIssuedEvent &Event);
};
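
// Illustrative sketch (not part of this header's interface): a view receiving
// simulation events could drive a PressureTracker roughly as follows, where
// `SM`, `PressureEvent`, `IssuedEvent`, and `IID` are assumed to be supplied
// by the surrounding event listener.
//
//   PressureTracker Tracker(SM);
//   Tracker.onInstructionDispatched(IID);        // on dispatch events
//   Tracker.handlePressureEvent(PressureEvent);  // on HWPressureEvent
//   Tracker.handleInstructionIssuedEvent(IssuedEvent);
//   Tracker.onInstructionExecuted(IID);          // on execution events
//   unsigned RegCycles = Tracker.getRegisterPressureCycles(IID);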

// A dependency edge.
struct DependencyEdge {
  enum DependencyType { DT_INVALID, DT_REGISTER, DT_MEMORY, DT_RESOURCE };

  // Dependency edge descriptor.
  //
  // It specifies the dependency type, as well as the edge cost in cycles.
  struct Dependency {
    DependencyType Type;
    uint64_t ResourceOrRegID;
    uint64_t Cost;
  };
  Dependency Dep;

  unsigned FromIID;
  unsigned ToIID;

  // Used by the bottleneck analysis to compute the interference
  // probability for processor resources.
  unsigned Frequency;
};
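
// Illustrative sketch (hypothetical values, not taken from a real run): a
// register dependency from instruction 0 to instruction 1 could be described
// as below, where `RegID` is assumed to be the identifier of the register
// involved in the dependency.
//
//   DependencyEdge Edge;
//   Edge.Dep = {DependencyEdge::DT_REGISTER, /*ResourceOrRegID=*/RegID,
//               /*Cost=*/3};
//   Edge.FromIID = 0;
//   Edge.ToIID = 1;
//   Edge.Frequency = 1;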

// A dependency graph used by the bottleneck analysis to describe data
// dependencies and processor resource interferences between instructions.
//
// There is a node (an instance of struct DGNode) for every instruction in the
// input assembly sequence. Edges of the graph represent dependencies between
// instructions.
//
// Each edge of the graph is associated with a cost value which is used
// internally to rank dependencies based on their impact on the runtime
// performance (see field DependencyEdge::Dependency::Cost). In general, the
// higher the cost of an edge, the higher the impact on performance.
//
// The cost of a dependency is a function of both the latency and the number of
// cycles where the dependency has been seen as critical (i.e. contributing to
// back-pressure increases).
//
// Loop carried dependencies are carefully expanded by the bottleneck analysis
// to guarantee that the graph stays acyclic. To this end, extra nodes are
// pre-allocated at construction time to describe instructions from "past and
// future" iterations. The graph is kept acyclic mainly because it simplifies
// the algorithm that computes the critical sequence.
class DependencyGraph {
|
2019-06-01 01:18:34 +08:00
|
|
|
struct DGNode {
|
|
|
|
unsigned NumPredecessors;
|
[MCA][Bottleneck Analysis] Teach how to compute a critical sequence of instructions based on the simulation.
This patch teaches the bottleneck analysis how to identify and print the most
expensive sequence of instructions according to the simulation. Fixes PR37494.
The goal is to help users identify the sequence of instruction which is most
critical for performance.
A dependency graph is internally used by the bottleneck analysis to describe
data dependencies and processor resource interferences between instructions.
There is one node in the graph for every instruction in the input assembly
sequence. The number of nodes in the graph is independent from the number of
iterations simulated by the tool. It means that a single node of the graph
represents all the possible instances of a same instruction contributed by the
simulated iterations.
Edges are dynamically "discovered" by the bottleneck analysis by observing
instruction state transitions and "backend pressure increase" events generated
by the Execute stage. Information from the events is used to identify critical
dependencies, and materialize edges in the graph. A dependency edge is uniquely
identified by a pair of node identifiers plus an instance of struct
DependencyEdge::Dependency (which provides more details about the actual
dependency kind).
The bottleneck analysis internally ranks dependency edges based on their impact
on the runtime (see field DependencyEdge::Dependency::Cost). To this end, each
edge of the graph has an associated cost. By default, the cost of an edge is a
function of its latency (in cycles). In practice, the cost of an edge is also a
function of the number of cycles where the dependency has been seen as
'contributing to backend pressure increases'. The idea is that the higher the
cost of an edge, the higher is the impact of the dependency on performance. To
put it in another way, the cost of an edge is a measure of criticality for
performance.
Note how a same edge may be found in multiple iteration of the simulated loop.
The logic that adds new edges to the graph checks if an equivalent dependency
already exists (duplicate edges are not allowed). If an equivalent dependency
edge is found, field DependencyEdge::Frequency of that edge is incremented by
one, and the new cost is cumulatively added to the existing edge cost.
At the end of simulation, costs are propagated to nodes through the edges of the
graph. The goal is to identify a critical sequence from a node of the root-set
(composed by node of the graph with no predecessors) to a 'sink node' with no
successors. Note that the graph is intentionally kept acyclic to minimize the
complexity of the critical sequence computation algorithm (complexity is
currently linear in the number of nodes in the graph).
The critical path is finally computed as a sequence of dependency edges. For
edges describing processor resource interferences, the view also prints a
so-called "interference probability" value (by dividing field
DependencyEdge::Frequency by the total number of iterations).
Examples of critical sequence computations can be found in tests added/modified
by this patch.
On output streams that support colored output, instructions from the critical
sequence are rendered with a different color.
Strictly speaking the analysis conducted by the bottleneck analysis view is not
a critical path analysis. The cost of an edge doesn't only depend on the
dependency latency. More importantly, the cost of a same edge may be computed
differently by different iterations.
The number of dependencies is discovered dynamically based on the events
generated by the simulator. However, their number is not fixed. This is
especially true for edges that model processor resource interferences; an
interference may not occur in every iteration. For that reason, it makes sense
to also print out a "probability of interference".
By construction, the accuracy of this analysis is strongly dependent on the
simulation and, therefore, on the quality of the information available in the
scheduling model.
That being said, the critical sequence effectively identifies a performance
bottleneck: instructions from that sequence are expected to have a significant
impact on performance. Users can take advantage of this information to focus
their attention on the specific interactions between those instructions.
In my experience, it works quite well in practice and produces useful output in
a reasonable amount of time.
Differential Revision: https://reviews.llvm.org/D63543
llvm-svn: 364045

    unsigned NumVisitedPredecessors;
    uint64_t Cost;
    unsigned Depth;

    DependencyEdge CriticalPredecessor;
    SmallVector<DependencyEdge, 8> OutgoingEdges;
  };
  SmallVector<DGNode, 16> Nodes;

  DependencyGraph(const DependencyGraph &) = delete;
  DependencyGraph &operator=(const DependencyGraph &) = delete;

  void addDependency(unsigned From, unsigned To,
                     DependencyEdge::Dependency &&DE);
  void pruneEdges(unsigned Iterations);
  void initializeRootSet(SmallVectorImpl<unsigned> &RootSet) const;
  void propagateThroughEdges(SmallVectorImpl<unsigned> &RootSet,
                             unsigned Iterations);

#ifndef NDEBUG
  void dumpDependencyEdge(raw_ostream &OS, const DependencyEdge &DE,
                          MCInstPrinter &MCIP) const;
#endif

public:
  DependencyGraph(unsigned Size) : Nodes(Size) {}

  void addRegisterDep(unsigned From, unsigned To, unsigned RegID,
                      unsigned Cost) {
    addDependency(From, To, {DependencyEdge::DT_REGISTER, RegID, Cost});
  }

  void addMemoryDep(unsigned From, unsigned To, unsigned Cost) {
    addDependency(From, To, {DependencyEdge::DT_MEMORY, /* unused */ 0, Cost});
  }

  void addResourceDep(unsigned From, unsigned To, uint64_t Mask,
                      unsigned Cost) {
    addDependency(From, To, {DependencyEdge::DT_RESOURCE, Mask, Cost});
  }

  // Called by the bottleneck analysis at the end of simulation to propagate
  // costs through the edges of the graph, and compute a critical path.
  void finalizeGraph(unsigned Iterations) {
    SmallVector<unsigned, 16> RootSet;
    pruneEdges(Iterations);
    initializeRootSet(RootSet);
    propagateThroughEdges(RootSet, Iterations);
  }

  // Returns a sequence of edges representing the critical sequence based on the
  // simulated run. It assumes that the graph has already been finalized (i.e.
  // method `finalizeGraph()` has already been called on this graph).
  void getCriticalSequence(SmallVectorImpl<const DependencyEdge *> &Seq) const;
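  //
  // Illustrative call sequence (hypothetical driver code, not part of this
  // header; `Source` and `SomeReg` are placeholders): edges are recorded while
  // observing the simulation, the graph is finalized once at the end, and only
  // then is the critical sequence queried.
  //
  //   DependencyGraph DG(Source.size());
  //   DG.addRegisterDep(/*From=*/0, /*To=*/1, /*RegID=*/SomeReg, /*Cost=*/3);
  //   DG.addMemoryDep(/*From=*/1, /*To=*/2, /*Cost=*/5);
  //   DG.finalizeGraph(Iterations);
  //   SmallVector<const DependencyEdge *, 8> Seq;
  //   DG.getCriticalSequence(Seq);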

#ifndef NDEBUG
  void dump(raw_ostream &OS, MCInstPrinter &MCIP) const;
#endif
};

/// A view that collects and prints a few performance numbers.
class BottleneckAnalysis : public InstructionView {
  PressureTracker Tracker;
  DependencyGraph DG;

  unsigned Iterations;
  unsigned TotalCycles;

  bool PressureIncreasedBecauseOfResources;
  bool PressureIncreasedBecauseOfRegisterDependencies;
  bool PressureIncreasedBecauseOfMemoryDependencies;

  // True if throughput was affected by dispatch stalls.
  bool SeenStallCycles;

  struct BackPressureInfo {
    // Cycles where backpressure increased.
    unsigned PressureIncreaseCycles;
    // Cycles where backpressure increased because of pipeline pressure.
    unsigned ResourcePressureCycles;
    // Cycles where backpressure increased because of data dependencies.
    unsigned DataDependencyCycles;
    // Cycles where backpressure increased because of register dependencies.
    unsigned RegisterDependencyCycles;
    // Cycles where backpressure increased because of memory dependencies.
    unsigned MemoryDependencyCycles;
  };
  BackPressureInfo BPI;

  // Used to populate the dependency graph DG.
  void addRegisterDep(unsigned From, unsigned To, unsigned RegID, unsigned Cy);
  void addMemoryDep(unsigned From, unsigned To, unsigned Cy);
  void addResourceDep(unsigned From, unsigned To, uint64_t Mask, unsigned Cy);

  void printInstruction(formatted_raw_ostream &FOS, const MCInst &MCI,
                        bool UseDifferentColor = false) const;

  // Prints a bottleneck message to OS.
  void printBottleneckHints(raw_ostream &OS) const;
  void printCriticalSequence(raw_ostream &OS) const;

public:
  BottleneckAnalysis(const MCSubtargetInfo &STI, MCInstPrinter &MCIP,
                     ArrayRef<MCInst> Sequence, unsigned Iterations);

  void onCycleEnd() override;
  void onEvent(const HWStallEvent &Event) override { SeenStallCycles = true; }
  void onEvent(const HWPressureEvent &Event) override;
  void onEvent(const HWInstructionEvent &Event) override;

  void printView(raw_ostream &OS) const override;

#ifndef NDEBUG
  void dump(raw_ostream &OS, MCInstPrinter &MCIP) const { DG.dump(OS, MCIP); }
#endif
};

} // namespace mca
} // namespace llvm

#endif