2017-08-25 05:21:39 +08:00
|
|
|
//===- MachineBlockPlacement.cpp - Basic Block Code Layout optimization ---===//
|
Implement a block placement pass based on the branch probability and
block frequency analyses. This differs substantially from the existing
block-placement pass in LLVM:
1) It operates on the Machine-IR in the CodeGen layer. This exposes much
more (and more precise) information and opportunities. Also, the
results are more stable due to fewer transforms ocurring after the
pass runs.
2) It uses the generalized probability and frequency analyses. These can
model static heuristics, code annotation derived heuristics as well
as eventual profile loading. By basing the optimization on the
analysis interface it can work from any (or a combination) of these
inputs.
3) It uses a more aggressive algorithm, both building chains from tho
bottom up to maximize benefit, and using an SCC-based walk to layout
chains of blocks in a profitable ordering without O(N^2) iterations
which the old pass involves.
The pass is currently gated behind a flag, and not enabled by default
because it still needs to grow some important features. Most notably, it
needs to support loop aligning and careful layout of loop structures
much as done by hand currently in CodePlacementOpt. Once it supports
these, and has sufficient testing and quality tuning, it should replace
both of these passes.
Thanks to Nick Lewycky and Richard Smith for help authoring & debugging
this, and to Jakob, Andy, Eric, Jim, and probably a few others I'm
forgetting for reviewing and answering all my questions. Writing
a backend pass is *sooo* much better now than it used to be. =D
llvm-svn: 142641
2011-10-21 14:46:38 +08:00
|
|
|
//
|
2019-01-19 16:50:56 +08:00
|
|
|
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
|
|
|
|
// See https://llvm.org/LICENSE.txt for license information.
|
|
|
|
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
|
Implement a block placement pass based on the branch probability and
block frequency analyses. This differs substantially from the existing
block-placement pass in LLVM:
1) It operates on the Machine-IR in the CodeGen layer. This exposes much
more (and more precise) information and opportunities. Also, the
results are more stable due to fewer transforms ocurring after the
pass runs.
2) It uses the generalized probability and frequency analyses. These can
model static heuristics, code annotation derived heuristics as well
as eventual profile loading. By basing the optimization on the
analysis interface it can work from any (or a combination) of these
inputs.
3) It uses a more aggressive algorithm, both building chains from tho
bottom up to maximize benefit, and using an SCC-based walk to layout
chains of blocks in a profitable ordering without O(N^2) iterations
which the old pass involves.
The pass is currently gated behind a flag, and not enabled by default
because it still needs to grow some important features. Most notably, it
needs to support loop aligning and careful layout of loop structures
much as done by hand currently in CodePlacementOpt. Once it supports
these, and has sufficient testing and quality tuning, it should replace
both of these passes.
Thanks to Nick Lewycky and Richard Smith for help authoring & debugging
this, and to Jakob, Andy, Eric, Jim, and probably a few others I'm
forgetting for reviewing and answering all my questions. Writing
a backend pass is *sooo* much better now than it used to be. =D
llvm-svn: 142641
2011-10-21 14:46:38 +08:00
|
|
|
//
|
|
|
|
//===----------------------------------------------------------------------===//
|
|
|
|
//
|
Completely re-write the algorithm behind MachineBlockPlacement based on
discussions with Andy. Fundamentally, the previous algorithm is both
counter productive on several fronts and prioritizing things which
aren't necessarily the most important: static branch prediction.
The new algorithm uses the existing loop CFG structure information to
walk through the CFG itself to layout blocks. It coalesces adjacent
blocks within the loop where the CFG allows based on the most likely
path taken. Finally, it topologically orders the block chains that have
been formed. This allows it to choose a (mostly) topologically valid
ordering which still priorizes fallthrough within the structural
constraints.
As a final twist in the algorithm, it does violate the CFG when it
discovers a "hot" edge, that is an edge that is more than 4x hotter than
the competing edges in the CFG. These are forcibly merged into
a fallthrough chain.
Future transformations that need te be added are rotation of loop exit
conditions to be fallthrough, and better isolation of cold block chains.
I'm also planning on adding statistics to model how well the algorithm
does at laying out blocks based on the probabilities it receives.
The old tests mostly still pass, and I have some new tests to add, but
the nested loops are still behaving very strangely. This almost seems
like working-as-intended as it rotated the exit branch to be
fallthrough, but I'm not convinced this is actually the best layout. It
is well supported by the probabilities for loops we currently get, but
those are pretty broken for nested loops, so this may change later.
llvm-svn: 142743
2011-10-23 17:18:45 +08:00
|
|
|
// This file implements basic block placement transformations using the CFG
|
|
|
|
// structure and branch probability estimates.
|
Implement a block placement pass based on the branch probability and
block frequency analyses. This differs substantially from the existing
block-placement pass in LLVM:
1) It operates on the Machine-IR in the CodeGen layer. This exposes much
more (and more precise) information and opportunities. Also, the
results are more stable due to fewer transforms ocurring after the
pass runs.
2) It uses the generalized probability and frequency analyses. These can
model static heuristics, code annotation derived heuristics as well
as eventual profile loading. By basing the optimization on the
analysis interface it can work from any (or a combination) of these
inputs.
3) It uses a more aggressive algorithm, both building chains from tho
bottom up to maximize benefit, and using an SCC-based walk to layout
chains of blocks in a profitable ordering without O(N^2) iterations
which the old pass involves.
The pass is currently gated behind a flag, and not enabled by default
because it still needs to grow some important features. Most notably, it
needs to support loop aligning and careful layout of loop structures
much as done by hand currently in CodePlacementOpt. Once it supports
these, and has sufficient testing and quality tuning, it should replace
both of these passes.
Thanks to Nick Lewycky and Richard Smith for help authoring & debugging
this, and to Jakob, Andy, Eric, Jim, and probably a few others I'm
forgetting for reviewing and answering all my questions. Writing
a backend pass is *sooo* much better now than it used to be. =D
llvm-svn: 142641
2011-10-21 14:46:38 +08:00
|
|
|
//
|
Completely re-write the algorithm behind MachineBlockPlacement based on
discussions with Andy. Fundamentally, the previous algorithm is both
counter productive on several fronts and prioritizing things which
aren't necessarily the most important: static branch prediction.
The new algorithm uses the existing loop CFG structure information to
walk through the CFG itself to layout blocks. It coalesces adjacent
blocks within the loop where the CFG allows based on the most likely
path taken. Finally, it topologically orders the block chains that have
been formed. This allows it to choose a (mostly) topologically valid
ordering which still priorizes fallthrough within the structural
constraints.
As a final twist in the algorithm, it does violate the CFG when it
discovers a "hot" edge, that is an edge that is more than 4x hotter than
the competing edges in the CFG. These are forcibly merged into
a fallthrough chain.
Future transformations that need te be added are rotation of loop exit
conditions to be fallthrough, and better isolation of cold block chains.
I'm also planning on adding statistics to model how well the algorithm
does at laying out blocks based on the probabilities it receives.
The old tests mostly still pass, and I have some new tests to add, but
the nested loops are still behaving very strangely. This almost seems
like working-as-intended as it rotated the exit branch to be
fallthrough, but I'm not convinced this is actually the best layout. It
is well supported by the probabilities for loops we currently get, but
those are pretty broken for nested loops, so this may change later.
llvm-svn: 142743
2011-10-23 17:18:45 +08:00
|
|
|
// The pass strives to preserve the structure of the CFG (that is, retain
|
2012-06-02 18:20:22 +08:00
|
|
|
// a topological ordering of basic blocks) in the absence of a *strong* signal
|
Completely re-write the algorithm behind MachineBlockPlacement based on
discussions with Andy. Fundamentally, the previous algorithm is both
counter productive on several fronts and prioritizing things which
aren't necessarily the most important: static branch prediction.
The new algorithm uses the existing loop CFG structure information to
walk through the CFG itself to layout blocks. It coalesces adjacent
blocks within the loop where the CFG allows based on the most likely
path taken. Finally, it topologically orders the block chains that have
been formed. This allows it to choose a (mostly) topologically valid
ordering which still priorizes fallthrough within the structural
constraints.
As a final twist in the algorithm, it does violate the CFG when it
discovers a "hot" edge, that is an edge that is more than 4x hotter than
the competing edges in the CFG. These are forcibly merged into
a fallthrough chain.
Future transformations that need te be added are rotation of loop exit
conditions to be fallthrough, and better isolation of cold block chains.
I'm also planning on adding statistics to model how well the algorithm
does at laying out blocks based on the probabilities it receives.
The old tests mostly still pass, and I have some new tests to add, but
the nested loops are still behaving very strangely. This almost seems
like working-as-intended as it rotated the exit branch to be
fallthrough, but I'm not convinced this is actually the best layout. It
is well supported by the probabilities for loops we currently get, but
those are pretty broken for nested loops, so this may change later.
llvm-svn: 142743
2011-10-23 17:18:45 +08:00
|
|
|
// to the contrary from probabilities. However, within the CFG structure, it
|
|
|
|
// attempts to choose an ordering which favors placing more likely sequences of
|
|
|
|
// blocks adjacent to each other.
|
|
|
|
//
|
|
|
|
// The algorithm works from the inner-most loop within a function outward, and
|
|
|
|
// at each stage walks through the basic blocks, trying to coalesce them into
|
|
|
|
// sequential chains where allowed by the CFG (or demanded by heavy
|
|
|
|
// probabilities). Finally, it walks the blocks in topological order, and the
|
|
|
|
// first time it reaches a chain of basic blocks, it schedules them in the
|
|
|
|
// function in-order.
|
Implement a block placement pass based on the branch probability and
block frequency analyses. This differs substantially from the existing
block-placement pass in LLVM:
1) It operates on the Machine-IR in the CodeGen layer. This exposes much
more (and more precise) information and opportunities. Also, the
results are more stable due to fewer transforms ocurring after the
pass runs.
2) It uses the generalized probability and frequency analyses. These can
model static heuristics, code annotation derived heuristics as well
as eventual profile loading. By basing the optimization on the
analysis interface it can work from any (or a combination) of these
inputs.
3) It uses a more aggressive algorithm, both building chains from tho
bottom up to maximize benefit, and using an SCC-based walk to layout
chains of blocks in a profitable ordering without O(N^2) iterations
which the old pass involves.
The pass is currently gated behind a flag, and not enabled by default
because it still needs to grow some important features. Most notably, it
needs to support loop aligning and careful layout of loop structures
much as done by hand currently in CodePlacementOpt. Once it supports
these, and has sufficient testing and quality tuning, it should replace
both of these passes.
Thanks to Nick Lewycky and Richard Smith for help authoring & debugging
this, and to Jakob, Andy, Eric, Jim, and probably a few others I'm
forgetting for reviewing and answering all my questions. Writing
a backend pass is *sooo* much better now than it used to be. =D
llvm-svn: 142641
2011-10-21 14:46:38 +08:00
|
|
|
//
|
|
|
|
//===----------------------------------------------------------------------===//
|
|
|
|
|
2016-06-09 23:24:29 +08:00
|
|
|
#include "BranchFolding.h"
|
2017-08-25 05:21:39 +08:00
|
|
|
#include "llvm/ADT/ArrayRef.h"
|
2012-12-04 00:50:05 +08:00
|
|
|
#include "llvm/ADT/DenseMap.h"
|
2017-08-25 05:21:39 +08:00
|
|
|
#include "llvm/ADT/STLExtras.h"
|
|
|
|
#include "llvm/ADT/SetVector.h"
|
2012-12-04 00:50:05 +08:00
|
|
|
#include "llvm/ADT/SmallPtrSet.h"
|
|
|
|
#include "llvm/ADT/SmallVector.h"
|
|
|
|
#include "llvm/ADT/Statistic.h"
|
2017-01-29 09:57:02 +08:00
|
|
|
#include "llvm/Analysis/BlockFrequencyInfoImpl.h"
|
2019-12-06 01:39:37 +08:00
|
|
|
#include "llvm/Analysis/ProfileSummaryInfo.h"
|
2011-10-21 16:57:37 +08:00
|
|
|
#include "llvm/CodeGen/MachineBasicBlock.h"
|
Implement a block placement pass based on the branch probability and
block frequency analyses. This differs substantially from the existing
block-placement pass in LLVM:
1) It operates on the Machine-IR in the CodeGen layer. This exposes much
more (and more precise) information and opportunities. Also, the
results are more stable due to fewer transforms ocurring after the
pass runs.
2) It uses the generalized probability and frequency analyses. These can
model static heuristics, code annotation derived heuristics as well
as eventual profile loading. By basing the optimization on the
analysis interface it can work from any (or a combination) of these
inputs.
3) It uses a more aggressive algorithm, both building chains from tho
bottom up to maximize benefit, and using an SCC-based walk to layout
chains of blocks in a profitable ordering without O(N^2) iterations
which the old pass involves.
The pass is currently gated behind a flag, and not enabled by default
because it still needs to grow some important features. Most notably, it
needs to support loop aligning and careful layout of loop structures
much as done by hand currently in CodePlacementOpt. Once it supports
these, and has sufficient testing and quality tuning, it should replace
both of these passes.
Thanks to Nick Lewycky and Richard Smith for help authoring & debugging
this, and to Jakob, Andy, Eric, Jim, and probably a few others I'm
forgetting for reviewing and answering all my questions. Writing
a backend pass is *sooo* much better now than it used to be. =D
llvm-svn: 142641
2011-10-21 14:46:38 +08:00
|
|
|
#include "llvm/CodeGen/MachineBlockFrequencyInfo.h"
|
|
|
|
#include "llvm/CodeGen/MachineBranchProbabilityInfo.h"
|
|
|
|
#include "llvm/CodeGen/MachineFunction.h"
|
|
|
|
#include "llvm/CodeGen/MachineFunctionPass.h"
|
2011-10-21 16:57:37 +08:00
|
|
|
#include "llvm/CodeGen/MachineLoopInfo.h"
|
|
|
|
#include "llvm/CodeGen/MachineModuleInfo.h"
|
2017-02-01 07:48:32 +08:00
|
|
|
#include "llvm/CodeGen/MachinePostDominators.h"
|
2019-12-06 01:39:37 +08:00
|
|
|
#include "llvm/CodeGen/MachineSizeOpts.h"
|
Codegen: Tail-duplicate during placement.
The tail duplication pass uses an assumed layout when making duplication
decisions. This is fine, but passes up duplication opportunities that
may arise when blocks are outlined. Because we want the updated CFG to
affect subsequent placement decisions, this change must occur during
placement.
In order to achieve this goal, TailDuplicationPass is split into a
utility class, TailDuplicator, and the pass itself. The pass delegates
nearly everything to the TailDuplicator object, except for looping over
the blocks in a function. This allows the same code to be used for tail
duplication in both places.
This change, in concert with outlining optional branches, allows
triangle shaped code to perform much better, esepecially when the
taken/untaken branches are correlated, as it creates a second spine when
the tests are small enough.
Issue from previous rollback fixed, and a new test was added for that
case as well. Issue was worklist/scheduling/taildup issue in layout.
Issue from 2nd rollback fixed, with 2 additional tests. Issue was
tail merging/loop info/tail-duplication causing issue with loops that share
a header block.
Issue with early tail-duplication of blocks that branch to a fallthrough
predecessor fixed with test case: tail-dup-branch-to-fallthrough.ll
Differential revision: https://reviews.llvm.org/D18226
llvm-svn: 283934
2016-10-12 04:36:43 +08:00
|
|
|
#include "llvm/CodeGen/TailDuplicator.h"
|
2017-11-08 09:01:31 +08:00
|
|
|
#include "llvm/CodeGen/TargetInstrInfo.h"
|
2017-11-17 09:07:10 +08:00
|
|
|
#include "llvm/CodeGen/TargetLowering.h"
|
2017-06-06 19:49:48 +08:00
|
|
|
#include "llvm/CodeGen/TargetPassConfig.h"
|
2017-11-17 09:07:10 +08:00
|
|
|
#include "llvm/CodeGen/TargetSubtargetInfo.h"
|
2017-08-25 05:21:39 +08:00
|
|
|
#include "llvm/IR/DebugLoc.h"
|
|
|
|
#include "llvm/IR/Function.h"
|
Sink all InitializePasses.h includes
This file lists every pass in LLVM, and is included by Pass.h, which is
very popular. Every time we add, remove, or rename a pass in LLVM, it
caused lots of recompilation.
I found this fact by looking at this table, which is sorted by the
number of times a file was changed over the last 100,000 git commits
multiplied by the number of object files that depend on it in the
current checkout:
recompiles touches affected_files header
342380 95 3604 llvm/include/llvm/ADT/STLExtras.h
314730 234 1345 llvm/include/llvm/InitializePasses.h
307036 118 2602 llvm/include/llvm/ADT/APInt.h
213049 59 3611 llvm/include/llvm/Support/MathExtras.h
170422 47 3626 llvm/include/llvm/Support/Compiler.h
162225 45 3605 llvm/include/llvm/ADT/Optional.h
158319 63 2513 llvm/include/llvm/ADT/Triple.h
140322 39 3598 llvm/include/llvm/ADT/StringRef.h
137647 59 2333 llvm/include/llvm/Support/Error.h
131619 73 1803 llvm/include/llvm/Support/FileSystem.h
Before this change, touching InitializePasses.h would cause 1345 files
to recompile. After this change, touching it only causes 550 compiles in
an incremental rebuild.
Reviewers: bkramer, asbirlea, bollu, jdoerfert
Differential Revision: https://reviews.llvm.org/D70211
2019-11-14 05:15:01 +08:00
|
|
|
#include "llvm/InitializePasses.h"
|
2017-08-25 05:21:39 +08:00
|
|
|
#include "llvm/Pass.h"
|
Implement a block placement pass based on the branch probability and
block frequency analyses. This differs substantially from the existing
block-placement pass in LLVM:
1) It operates on the Machine-IR in the CodeGen layer. This exposes much
more (and more precise) information and opportunities. Also, the
results are more stable due to fewer transforms ocurring after the
pass runs.
2) It uses the generalized probability and frequency analyses. These can
model static heuristics, code annotation derived heuristics as well
as eventual profile loading. By basing the optimization on the
analysis interface it can work from any (or a combination) of these
inputs.
3) It uses a more aggressive algorithm, both building chains from tho
bottom up to maximize benefit, and using an SCC-based walk to layout
chains of blocks in a profitable ordering without O(N^2) iterations
which the old pass involves.
The pass is currently gated behind a flag, and not enabled by default
because it still needs to grow some important features. Most notably, it
needs to support loop aligning and careful layout of loop structures
much as done by hand currently in CodePlacementOpt. Once it supports
these, and has sufficient testing and quality tuning, it should replace
both of these passes.
Thanks to Nick Lewycky and Richard Smith for help authoring & debugging
this, and to Jakob, Andy, Eric, Jim, and probably a few others I'm
forgetting for reviewing and answering all my questions. Writing
a backend pass is *sooo* much better now than it used to be. =D
llvm-svn: 142641
2011-10-21 14:46:38 +08:00
|
|
|
#include "llvm/Support/Allocator.h"
|
2017-08-25 05:21:39 +08:00
|
|
|
#include "llvm/Support/BlockFrequency.h"
|
|
|
|
#include "llvm/Support/BranchProbability.h"
|
|
|
|
#include "llvm/Support/CodeGen.h"
|
2013-04-12 08:48:32 +08:00
|
|
|
#include "llvm/Support/CommandLine.h"
|
2017-08-25 05:21:39 +08:00
|
|
|
#include "llvm/Support/Compiler.h"
|
Completely re-write the algorithm behind MachineBlockPlacement based on
discussions with Andy. Fundamentally, the previous algorithm is both
counter productive on several fronts and prioritizing things which
aren't necessarily the most important: static branch prediction.
The new algorithm uses the existing loop CFG structure information to
walk through the CFG itself to layout blocks. It coalesces adjacent
blocks within the loop where the CFG allows based on the most likely
path taken. Finally, it topologically orders the block chains that have
been formed. This allows it to choose a (mostly) topologically valid
ordering which still priorizes fallthrough within the structural
constraints.
As a final twist in the algorithm, it does violate the CFG when it
discovers a "hot" edge, that is an edge that is more than 4x hotter than
the competing edges in the CFG. These are forcibly merged into
a fallthrough chain.
Future transformations that need te be added are rotation of loop exit
conditions to be fallthrough, and better isolation of cold block chains.
I'm also planning on adding statistics to model how well the algorithm
does at laying out blocks based on the probabilities it receives.
The old tests mostly still pass, and I have some new tests to add, but
the nested loops are still behaving very strangely. This almost seems
like working-as-intended as it rotated the exit branch to be
fallthrough, but I'm not convinced this is actually the best layout. It
is well supported by the probabilities for loops we currently get, but
those are pretty broken for nested loops, so this may change later.
llvm-svn: 142743
2011-10-23 17:18:45 +08:00
|
|
|
#include "llvm/Support/Debug.h"
|
2015-03-24 03:32:43 +08:00
|
|
|
#include "llvm/Support/raw_ostream.h"
|
2017-08-25 05:21:39 +08:00
|
|
|
#include "llvm/Target/TargetMachine.h"
|
Implement a block placement pass based on the branch probability and
block frequency analyses. This differs substantially from the existing
block-placement pass in LLVM:
1) It operates on the Machine-IR in the CodeGen layer. This exposes much
more (and more precise) information and opportunities. Also, the
results are more stable due to fewer transforms ocurring after the
pass runs.
2) It uses the generalized probability and frequency analyses. These can
model static heuristics, code annotation derived heuristics as well
as eventual profile loading. By basing the optimization on the
analysis interface it can work from any (or a combination) of these
inputs.
3) It uses a more aggressive algorithm, both building chains from tho
bottom up to maximize benefit, and using an SCC-based walk to layout
chains of blocks in a profitable ordering without O(N^2) iterations
which the old pass involves.
The pass is currently gated behind a flag, and not enabled by default
because it still needs to grow some important features. Most notably, it
needs to support loop aligning and careful layout of loop structures
much as done by hand currently in CodePlacementOpt. Once it supports
these, and has sufficient testing and quality tuning, it should replace
both of these passes.
Thanks to Nick Lewycky and Richard Smith for help authoring & debugging
this, and to Jakob, Andy, Eric, Jim, and probably a few others I'm
forgetting for reviewing and answering all my questions. Writing
a backend pass is *sooo* much better now than it used to be. =D
llvm-svn: 142641
2011-10-21 14:46:38 +08:00
|
|
|
#include <algorithm>
|
2017-08-25 05:21:39 +08:00
|
|
|
#include <cassert>
|
|
|
|
#include <cstdint>
|
|
|
|
#include <iterator>
|
|
|
|
#include <memory>
|
|
|
|
#include <string>
|
|
|
|
#include <tuple>
|
2017-02-01 07:48:32 +08:00
|
|
|
#include <utility>
|
2017-08-25 05:21:39 +08:00
|
|
|
#include <vector>
|
|
|
|
|
Implement a block placement pass based on the branch probability and
block frequency analyses. This differs substantially from the existing
block-placement pass in LLVM:
1) It operates on the Machine-IR in the CodeGen layer. This exposes much
more (and more precise) information and opportunities. Also, the
results are more stable due to fewer transforms ocurring after the
pass runs.
2) It uses the generalized probability and frequency analyses. These can
model static heuristics, code annotation derived heuristics as well
as eventual profile loading. By basing the optimization on the
analysis interface it can work from any (or a combination) of these
inputs.
3) It uses a more aggressive algorithm, both building chains from tho
bottom up to maximize benefit, and using an SCC-based walk to layout
chains of blocks in a profitable ordering without O(N^2) iterations
which the old pass involves.
The pass is currently gated behind a flag, and not enabled by default
because it still needs to grow some important features. Most notably, it
needs to support loop aligning and careful layout of loop structures
much as done by hand currently in CodePlacementOpt. Once it supports
these, and has sufficient testing and quality tuning, it should replace
both of these passes.
Thanks to Nick Lewycky and Richard Smith for help authoring & debugging
this, and to Jakob, Andy, Eric, Jim, and probably a few others I'm
forgetting for reviewing and answering all my questions. Writing
a backend pass is *sooo* much better now than it used to be. =D
llvm-svn: 142641
2011-10-21 14:46:38 +08:00
|
|
|
using namespace llvm;
|
|
|
|
|
2015-03-05 10:28:25 +08:00
|
|
|
#define DEBUG_TYPE "block-placement"
|
2014-04-22 10:02:50 +08:00
|
|
|
|
2011-11-02 15:17:12 +08:00
|
|
|
STATISTIC(NumCondBranches, "Number of conditional branches");
|
2015-09-16 11:52:32 +08:00
|
|
|
STATISTIC(NumUncondBranches, "Number of unconditional branches");
|
2011-11-02 15:17:12 +08:00
|
|
|
STATISTIC(CondBranchTakenFreq,
|
|
|
|
"Potential frequency of taking conditional branches");
|
|
|
|
STATISTIC(UncondBranchTakenFreq,
|
|
|
|
"Potential frequency of taking unconditional branches");
|
|
|
|
|
[LLVM][Alignment] Make functions using log of alignment explicit
Summary:
This patch renames functions that takes or returns alignment as log2, this patch will help with the transition to llvm::Align.
The renaming makes it explicit that we deal with log(alignment) instead of a power of two alignment.
A few renames uncovered dubious assignments:
- `MirParser`/`MirPrinter` was expecting powers of two but `MachineFunction` and `MachineBasicBlock` were using deal with log2(align). This patch fixes it and updates the documentation.
- `MachineBlockPlacement` exposes two flags (`align-all-blocks` and `align-all-nofallthru-blocks`) supposedly interpreted as power of two alignments, internally these values are interpreted as log2(align). This patch updates the documentation,
- `MachineFunctionexposes` exposes `align-all-functions` also interpreted as power of two alignment, internally this value is interpreted as log2(align). This patch updates the documentation,
Reviewers: lattner, thegameg, courbet
Subscribers: dschuff, arsenm, jyknight, dylanmckay, sdardis, nemanjai, jvesely, nhaehnle, javed.absar, hiraditya, kbarton, fedor.sergeev, asb, rbar, johnrusso, simoncook, apazos, sabuasal, niosHD, jrtc27, MaskRay, zzheng, edward-jones, atanasyan, rogfer01, MartinMosbeck, brucehoult, the_o, dexonsmith, PkmX, jocewei, jsji, Jim, s.egerton, llvm-commits, courbet
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D65945
llvm-svn: 371045
2019-09-05 18:00:22 +08:00
|
|
|
static cl::opt<unsigned> AlignAllBlock(
|
|
|
|
"align-all-blocks",
|
|
|
|
cl::desc("Force the alignment of all blocks in the function in log2 format "
|
|
|
|
"(e.g 4 means align on 16B boundaries)."),
|
|
|
|
cl::init(0), cl::Hidden);
|
2013-04-12 08:48:32 +08:00
|
|
|
|
2016-01-22 01:25:52 +08:00
|
|
|
static cl::opt<unsigned> AlignAllNonFallThruBlocks(
|
|
|
|
"align-all-nofallthru-blocks",
|
[LLVM][Alignment] Make functions using log of alignment explicit
Summary:
This patch renames functions that takes or returns alignment as log2, this patch will help with the transition to llvm::Align.
The renaming makes it explicit that we deal with log(alignment) instead of a power of two alignment.
A few renames uncovered dubious assignments:
- `MirParser`/`MirPrinter` was expecting powers of two but `MachineFunction` and `MachineBasicBlock` were using deal with log2(align). This patch fixes it and updates the documentation.
- `MachineBlockPlacement` exposes two flags (`align-all-blocks` and `align-all-nofallthru-blocks`) supposedly interpreted as power of two alignments, internally these values are interpreted as log2(align). This patch updates the documentation,
- `MachineFunctionexposes` exposes `align-all-functions` also interpreted as power of two alignment, internally this value is interpreted as log2(align). This patch updates the documentation,
Reviewers: lattner, thegameg, courbet
Subscribers: dschuff, arsenm, jyknight, dylanmckay, sdardis, nemanjai, jvesely, nhaehnle, javed.absar, hiraditya, kbarton, fedor.sergeev, asb, rbar, johnrusso, simoncook, apazos, sabuasal, niosHD, jrtc27, MaskRay, zzheng, edward-jones, atanasyan, rogfer01, MartinMosbeck, brucehoult, the_o, dexonsmith, PkmX, jocewei, jsji, Jim, s.egerton, llvm-commits, courbet
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D65945
llvm-svn: 371045
2019-09-05 18:00:22 +08:00
|
|
|
cl::desc("Force the alignment of all blocks that have no fall-through "
|
|
|
|
"predecessors (i.e. don't add nops that are executed). In log2 "
|
|
|
|
"format (e.g 4 means align on 16B boundaries)."),
|
2016-01-22 01:25:52 +08:00
|
|
|
cl::init(0), cl::Hidden);
|
|
|
|
|
2013-11-21 03:08:44 +08:00
|
|
|
// FIXME: Find a good default for this flag and remove the flag.
|
2015-03-05 10:35:31 +08:00
|
|
|
static cl::opt<unsigned> ExitBlockBias(
|
|
|
|
"block-placement-exit-block-bias",
|
|
|
|
cl::desc("Block frequency percentage a loop exit block needs "
|
|
|
|
"over the original exit to be considered the new exit."),
|
|
|
|
cl::init(0), cl::Hidden);
|
2013-11-21 03:08:44 +08:00
|
|
|
|
2016-07-27 16:49:23 +08:00
|
|
|
// Definition:
|
|
|
|
// - Outlining: placement of a basic block outside the chain or hot path.
|
|
|
|
|
In MachineBlockPlacement, filter cold blocks off the loop chain when profile data is available.
In the current BB placement algorithm, a loop chain always contains all loop blocks. This has a drawback that cold blocks in the loop may be inserted on a hot function path, hence increasing branch cost and also reducing icache locality.
Consider a simple example shown below:
A
|
B⇆C
|
D
When B->C is quite cold, the best BB-layout should be A,B,D,C. But the current implementation produces A,C,B,D.
This patch filters those cold blocks off from the loop chain by comparing the ratio:
LoopBBFreq / LoopFreq
to 20%: if it is less than 20%, we don't include this BB to the loop chain. Here LoopFreq is the frequency of the loop when we reduce the loop into a single node. In general we have more cold blocks when the loop has few iterations. And vice versa.
Differential revision: http://reviews.llvm.org/D11662
llvm-svn: 251833
2015-11-03 05:24:00 +08:00
|
|
|
static cl::opt<unsigned> LoopToColdBlockRatio(
|
|
|
|
"loop-to-cold-block-ratio",
|
|
|
|
cl::desc("Outline loop blocks from loop chain if (frequency of loop) / "
|
|
|
|
"(frequency of block) is greater than this ratio"),
|
|
|
|
cl::init(5), cl::Hidden);
|
|
|
|
|
2017-08-05 05:13:41 +08:00
|
|
|
static cl::opt<bool> ForceLoopColdBlock(
|
|
|
|
"force-loop-cold-block",
|
|
|
|
cl::desc("Force outlining cold blocks from loops."),
|
|
|
|
cl::init(false), cl::Hidden);
|
|
|
|
|
2015-10-20 07:16:40 +08:00
|
|
|
static cl::opt<bool>
|
|
|
|
PreciseRotationCost("precise-rotation-cost",
|
|
|
|
cl::desc("Model the cost of loop rotation more "
|
|
|
|
"precisely by using profile data."),
|
|
|
|
cl::init(false), cl::Hidden);
|
2017-08-25 05:21:39 +08:00
|
|
|
|
2016-05-12 10:04:41 +08:00
|
|
|
static cl::opt<bool>
|
|
|
|
ForcePreciseRotationCost("force-precise-rotation-cost",
|
2016-05-13 00:39:02 +08:00
|
|
|
cl::desc("Force the use of precise cost "
|
|
|
|
"loop rotation strategy."),
|
2016-05-12 10:04:41 +08:00
|
|
|
cl::init(false), cl::Hidden);
|
2015-10-20 07:16:40 +08:00
|
|
|
|
|
|
|
static cl::opt<unsigned> MisfetchCost(
|
|
|
|
"misfetch-cost",
|
2016-07-16 02:41:56 +08:00
|
|
|
cl::desc("Cost that models the probabilistic risk of an instruction "
|
2015-10-20 07:16:40 +08:00
|
|
|
"misfetch due to a jump comparing to falling through, whose cost "
|
|
|
|
"is zero."),
|
|
|
|
cl::init(1), cl::Hidden);
|
|
|
|
|
|
|
|
static cl::opt<unsigned> JumpInstCost("jump-inst-cost",
|
|
|
|
cl::desc("Cost of jump instructions."),
|
|
|
|
cl::init(1), cl::Hidden);
|
Codegen: Tail-duplicate during placement.
The tail duplication pass uses an assumed layout when making duplication
decisions. This is fine, but passes up duplication opportunities that
may arise when blocks are outlined. Because we want the updated CFG to
affect subsequent placement decisions, this change must occur during
placement.
In order to achieve this goal, TailDuplicationPass is split into a
utility class, TailDuplicator, and the pass itself. The pass delegates
nearly everything to the TailDuplicator object, except for looping over
the blocks in a function. This allows the same code to be used for tail
duplication in both places.
This change, in concert with outlining optional branches, allows
triangle shaped code to perform much better, esepecially when the
taken/untaken branches are correlated, as it creates a second spine when
the tests are small enough.
Issue from previous rollback fixed, and a new test was added for that
case as well. Issue was worklist/scheduling/taildup issue in layout.
Issue from 2nd rollback fixed, with 2 additional tests. Issue was
tail merging/loop info/tail-duplication causing issue with loops that share
a header block.
Issue with early tail-duplication of blocks that branch to a fallthrough
predecessor fixed with test case: tail-dup-branch-to-fallthrough.ll
Differential revision: https://reviews.llvm.org/D18226
llvm-svn: 283934
2016-10-12 04:36:43 +08:00
|
|
|
static cl::opt<bool>
|
|
|
|
TailDupPlacement("tail-dup-placement",
|
|
|
|
cl::desc("Perform tail duplication during placement. "
|
|
|
|
"Creates more fallthrough opportunites in "
|
|
|
|
"outline branches."),
|
|
|
|
cl::init(true), cl::Hidden);
|
2015-10-20 07:16:40 +08:00
|
|
|
|
2016-06-09 23:24:29 +08:00
|
|
|
static cl::opt<bool>
|
|
|
|
BranchFoldPlacement("branch-fold-placement",
|
|
|
|
cl::desc("Perform branch folding during placement. "
|
|
|
|
"Reduces code size."),
|
|
|
|
cl::init(true), cl::Hidden);
|
|
|
|
|
Codegen: Tail-duplicate during placement.
The tail duplication pass uses an assumed layout when making duplication
decisions. This is fine, but passes up duplication opportunities that
may arise when blocks are outlined. Because we want the updated CFG to
affect subsequent placement decisions, this change must occur during
placement.
In order to achieve this goal, TailDuplicationPass is split into a
utility class, TailDuplicator, and the pass itself. The pass delegates
nearly everything to the TailDuplicator object, except for looping over
the blocks in a function. This allows the same code to be used for tail
duplication in both places.
This change, in concert with outlining optional branches, allows
triangle shaped code to perform much better, esepecially when the
taken/untaken branches are correlated, as it creates a second spine when
the tests are small enough.
Issue from previous rollback fixed, and a new test was added for that
case as well. Issue was worklist/scheduling/taildup issue in layout.
Issue from 2nd rollback fixed, with 2 additional tests. Issue was
tail merging/loop info/tail-duplication causing issue with loops that share
a header block.
Issue with early tail-duplication of blocks that branch to a fallthrough
predecessor fixed with test case: tail-dup-branch-to-fallthrough.ll
Differential revision: https://reviews.llvm.org/D18226
llvm-svn: 283934
2016-10-12 04:36:43 +08:00
|
|
|
// Heuristic for tail duplication.
|
2017-02-01 07:48:32 +08:00
|
|
|
static cl::opt<unsigned> TailDupPlacementThreshold(
|
Codegen: Tail-duplicate during placement.
The tail duplication pass uses an assumed layout when making duplication
decisions. This is fine, but passes up duplication opportunities that
may arise when blocks are outlined. Because we want the updated CFG to
affect subsequent placement decisions, this change must occur during
placement.
In order to achieve this goal, TailDuplicationPass is split into a
utility class, TailDuplicator, and the pass itself. The pass delegates
nearly everything to the TailDuplicator object, except for looping over
the blocks in a function. This allows the same code to be used for tail
duplication in both places.
This change, in concert with outlining optional branches, allows
triangle shaped code to perform much better, esepecially when the
taken/untaken branches are correlated, as it creates a second spine when
the tests are small enough.
Issue from previous rollback fixed, and a new test was added for that
case as well. Issue was worklist/scheduling/taildup issue in layout.
Issue from 2nd rollback fixed, with 2 additional tests. Issue was
tail merging/loop info/tail-duplication causing issue with loops that share
a header block.
Issue with early tail-duplication of blocks that branch to a fallthrough
predecessor fixed with test case: tail-dup-branch-to-fallthrough.ll
Differential revision: https://reviews.llvm.org/D18226
llvm-svn: 283934
2016-10-12 04:36:43 +08:00
|
|
|
"tail-dup-placement-threshold",
|
|
|
|
cl::desc("Instruction cutoff for tail duplication during layout. "
|
|
|
|
"Tail merging during layout is forced to have a threshold "
|
|
|
|
"that won't conflict."), cl::init(2),
|
|
|
|
cl::Hidden);
|
|
|
|
|
2017-05-16 01:30:47 +08:00
|
|
|
// Heuristic for aggressive tail duplication.
|
|
|
|
static cl::opt<unsigned> TailDupPlacementAggressiveThreshold(
|
|
|
|
"tail-dup-placement-aggressive-threshold",
|
|
|
|
cl::desc("Instruction cutoff for aggressive tail duplication during "
|
|
|
|
"layout. Used at -O3. Tail merging during layout is forced to "
|
2017-08-18 07:38:41 +08:00
|
|
|
"have a threshold that won't conflict."), cl::init(4),
|
2017-05-16 01:30:47 +08:00
|
|
|
cl::Hidden);
|
|
|
|
|
2017-02-01 07:48:32 +08:00
|
|
|
// Heuristic for tail duplication.
|
|
|
|
static cl::opt<unsigned> TailDupPlacementPenalty(
|
|
|
|
"tail-dup-placement-penalty",
|
|
|
|
cl::desc("Cost penalty for blocks that can avoid breaking CFG by copying. "
|
|
|
|
"Copying can increase fallthrough, but it also increases icache "
|
|
|
|
"pressure. This parameter controls the penalty to account for that. "
|
|
|
|
"Percent as integer."),
|
|
|
|
cl::init(2),
|
|
|
|
cl::Hidden);
|
|
|
|
|
2020-07-22 02:18:06 +08:00
|
|
|
// Heuristic for tail duplication if profile count is used in cost model.
|
|
|
|
static cl::opt<unsigned> TailDupProfilePercentThreshold(
|
|
|
|
"tail-dup-profile-percent-threshold",
|
|
|
|
cl::desc("If profile count information is used in tail duplication cost "
|
|
|
|
"model, the gained fall through number from tail duplication "
|
|
|
|
"should be at least this percent of hot count."),
|
|
|
|
cl::init(50), cl::Hidden);
|
|
|
|
|
2017-03-03 09:00:22 +08:00
|
|
|
// Heuristic for triangle chains.
|
|
|
|
static cl::opt<unsigned> TriangleChainCount(
|
|
|
|
"triangle-chain-count",
|
|
|
|
cl::desc("Number of triangle-shaped-CFG's that need to be in a row for the "
|
|
|
|
"triangle tail duplication heuristic to kick in. 0 to disable."),
|
2017-03-16 09:32:29 +08:00
|
|
|
cl::init(2),
|
2017-03-03 09:00:22 +08:00
|
|
|
cl::Hidden);
|
|
|
|
|
2016-06-04 07:48:36 +08:00
|
|
|
extern cl::opt<unsigned> StaticLikelyProb;
|
2016-06-15 06:27:17 +08:00
|
|
|
extern cl::opt<unsigned> ProfileLikelyProb;
|
2016-06-04 07:48:36 +08:00
|
|
|
|
2017-02-03 05:29:17 +08:00
|
|
|
// Internal option used to control BFI display only after MBP pass.
|
|
|
|
// Defined in CodeGen/MachineBlockFrequencyInfo.cpp:
|
|
|
|
// -view-block-layout-with-bfi=
|
2017-01-29 09:57:02 +08:00
|
|
|
extern cl::opt<GVDAGType> ViewBlockLayoutWithBFI;
|
2017-02-03 05:29:17 +08:00
|
|
|
|
|
|
|
// Command line option to specify the name of the function for CFG dump
|
|
|
|
// Defined in Analysis/BlockFrequencyInfo.cpp: -view-bfi-func-name=
|
2017-01-29 09:57:02 +08:00
|
|
|
extern cl::opt<std::string> ViewBlockFreqFuncName;
|
|
|
|
|
Implement a block placement pass based on the branch probability and
block frequency analyses. This differs substantially from the existing
block-placement pass in LLVM:
1) It operates on the Machine-IR in the CodeGen layer. This exposes much
more (and more precise) information and opportunities. Also, the
results are more stable due to fewer transforms ocurring after the
pass runs.
2) It uses the generalized probability and frequency analyses. These can
model static heuristics, code annotation derived heuristics as well
as eventual profile loading. By basing the optimization on the
analysis interface it can work from any (or a combination) of these
inputs.
3) It uses a more aggressive algorithm, both building chains from tho
bottom up to maximize benefit, and using an SCC-based walk to layout
chains of blocks in a profitable ordering without O(N^2) iterations
which the old pass involves.
The pass is currently gated behind a flag, and not enabled by default
because it still needs to grow some important features. Most notably, it
needs to support loop aligning and careful layout of loop structures
much as done by hand currently in CodePlacementOpt. Once it supports
these, and has sufficient testing and quality tuning, it should replace
both of these passes.
Thanks to Nick Lewycky and Richard Smith for help authoring & debugging
this, and to Jakob, Andy, Eric, Jim, and probably a few others I'm
forgetting for reviewing and answering all my questions. Writing
a backend pass is *sooo* much better now than it used to be. =D
llvm-svn: 142641
2011-10-21 14:46:38 +08:00
|
|
|
namespace {
|
2017-08-25 05:21:39 +08:00
|
|
|
|
Completely re-write the algorithm behind MachineBlockPlacement based on
discussions with Andy. Fundamentally, the previous algorithm is both
counter productive on several fronts and prioritizing things which
aren't necessarily the most important: static branch prediction.
The new algorithm uses the existing loop CFG structure information to
walk through the CFG itself to layout blocks. It coalesces adjacent
blocks within the loop where the CFG allows based on the most likely
path taken. Finally, it topologically orders the block chains that have
been formed. This allows it to choose a (mostly) topologically valid
ordering which still priorizes fallthrough within the structural
constraints.
As a final twist in the algorithm, it does violate the CFG when it
discovers a "hot" edge, that is an edge that is more than 4x hotter than
the competing edges in the CFG. These are forcibly merged into
a fallthrough chain.
Future transformations that need te be added are rotation of loop exit
conditions to be fallthrough, and better isolation of cold block chains.
I'm also planning on adding statistics to model how well the algorithm
does at laying out blocks based on the probabilities it receives.
The old tests mostly still pass, and I have some new tests to add, but
the nested loops are still behaving very strangely. This almost seems
like working-as-intended as it rotated the exit branch to be
fallthrough, but I'm not convinced this is actually the best layout. It
is well supported by the probabilities for loops we currently get, but
those are pretty broken for nested loops, so this may change later.
llvm-svn: 142743
2011-10-23 17:18:45 +08:00
|
|
|
class BlockChain;
|
2017-08-25 05:21:39 +08:00
|
|
|
|
2018-05-01 23:54:18 +08:00
|
|
|
/// Type for our function-wide basic block -> block chain mapping.
|
2017-08-25 05:21:39 +08:00
|
|
|
using BlockToChainMapType = DenseMap<const MachineBasicBlock *, BlockChain *>;
|
Implement a block placement pass based on the branch probability and
block frequency analyses. This differs substantially from the existing
block-placement pass in LLVM:
1) It operates on the Machine-IR in the CodeGen layer. This exposes much
more (and more precise) information and opportunities. Also, the
results are more stable due to fewer transforms ocurring after the
pass runs.
2) It uses the generalized probability and frequency analyses. These can
model static heuristics, code annotation derived heuristics as well
as eventual profile loading. By basing the optimization on the
analysis interface it can work from any (or a combination) of these
inputs.
3) It uses a more aggressive algorithm, both building chains from tho
bottom up to maximize benefit, and using an SCC-based walk to layout
chains of blocks in a profitable ordering without O(N^2) iterations
which the old pass involves.
The pass is currently gated behind a flag, and not enabled by default
because it still needs to grow some important features. Most notably, it
needs to support loop aligning and careful layout of loop structures
much as done by hand currently in CodePlacementOpt. Once it supports
these, and has sufficient testing and quality tuning, it should replace
both of these passes.
Thanks to Nick Lewycky and Richard Smith for help authoring & debugging
this, and to Jakob, Andy, Eric, Jim, and probably a few others I'm
forgetting for reviewing and answering all my questions. Writing
a backend pass is *sooo* much better now than it used to be. =D
llvm-svn: 142641
2011-10-21 14:46:38 +08:00
|
|
|
|
2018-05-01 23:54:18 +08:00
|
|
|
/// A chain of blocks which will be laid out contiguously.
|
Implement a block placement pass based on the branch probability and
block frequency analyses. This differs substantially from the existing
block-placement pass in LLVM:
1) It operates on the Machine-IR in the CodeGen layer. This exposes much
more (and more precise) information and opportunities. Also, the
results are more stable due to fewer transforms ocurring after the
pass runs.
2) It uses the generalized probability and frequency analyses. These can
model static heuristics, code annotation derived heuristics as well
as eventual profile loading. By basing the optimization on the
analysis interface it can work from any (or a combination) of these
inputs.
3) It uses a more aggressive algorithm, both building chains from tho
bottom up to maximize benefit, and using an SCC-based walk to layout
chains of blocks in a profitable ordering without O(N^2) iterations
which the old pass involves.
The pass is currently gated behind a flag, and not enabled by default
because it still needs to grow some important features. Most notably, it
needs to support loop aligning and careful layout of loop structures
much as done by hand currently in CodePlacementOpt. Once it supports
these, and has sufficient testing and quality tuning, it should replace
both of these passes.
Thanks to Nick Lewycky and Richard Smith for help authoring & debugging
this, and to Jakob, Andy, Eric, Jim, and probably a few others I'm
forgetting for reviewing and answering all my questions. Writing
a backend pass is *sooo* much better now than it used to be. =D
llvm-svn: 142641
2011-10-21 14:46:38 +08:00
|
|
|
///
|
|
|
|
/// This is the datastructure representing a chain of consecutive blocks that
|
|
|
|
/// are profitable to layout together in order to maximize fallthrough
|
2012-06-26 13:16:37 +08:00
|
|
|
/// probabilities and code locality. We also can use a block chain to represent
|
|
|
|
/// a sequence of basic blocks which have some external (correctness)
|
|
|
|
/// requirement for sequential layout.
|
Implement a block placement pass based on the branch probability and
block frequency analyses. This differs substantially from the existing
block-placement pass in LLVM:
1) It operates on the Machine-IR in the CodeGen layer. This exposes much
more (and more precise) information and opportunities. Also, the
results are more stable due to fewer transforms ocurring after the
pass runs.
2) It uses the generalized probability and frequency analyses. These can
model static heuristics, code annotation derived heuristics as well
as eventual profile loading. By basing the optimization on the
analysis interface it can work from any (or a combination) of these
inputs.
3) It uses a more aggressive algorithm, both building chains from tho
bottom up to maximize benefit, and using an SCC-based walk to layout
chains of blocks in a profitable ordering without O(N^2) iterations
which the old pass involves.
The pass is currently gated behind a flag, and not enabled by default
because it still needs to grow some important features. Most notably, it
needs to support loop aligning and careful layout of loop structures
much as done by hand currently in CodePlacementOpt. Once it supports
these, and has sufficient testing and quality tuning, it should replace
both of these passes.
Thanks to Nick Lewycky and Richard Smith for help authoring & debugging
this, and to Jakob, Andy, Eric, Jim, and probably a few others I'm
forgetting for reviewing and answering all my questions. Writing
a backend pass is *sooo* much better now than it used to be. =D
llvm-svn: 142641
2011-10-21 14:46:38 +08:00
|
|
|
///
|
2012-06-26 13:16:37 +08:00
|
|
|
/// Chains can be built around a single basic block and can be merged to grow
|
|
|
|
/// them. They participate in a block-to-chain mapping, which is updated
|
|
|
|
/// automatically as chains are merged together.
|
Completely re-write the algorithm behind MachineBlockPlacement based on
discussions with Andy. Fundamentally, the previous algorithm is both
counter productive on several fronts and prioritizing things which
aren't necessarily the most important: static branch prediction.
The new algorithm uses the existing loop CFG structure information to
walk through the CFG itself to layout blocks. It coalesces adjacent
blocks within the loop where the CFG allows based on the most likely
path taken. Finally, it topologically orders the block chains that have
been formed. This allows it to choose a (mostly) topologically valid
ordering which still priorizes fallthrough within the structural
constraints.
As a final twist in the algorithm, it does violate the CFG when it
discovers a "hot" edge, that is an edge that is more than 4x hotter than
the competing edges in the CFG. These are forcibly merged into
a fallthrough chain.
Future transformations that need te be added are rotation of loop exit
conditions to be fallthrough, and better isolation of cold block chains.
I'm also planning on adding statistics to model how well the algorithm
does at laying out blocks based on the probabilities it receives.
The old tests mostly still pass, and I have some new tests to add, but
the nested loops are still behaving very strangely. This almost seems
like working-as-intended as it rotated the exit branch to be
fallthrough, but I'm not convinced this is actually the best layout. It
is well supported by the probabilities for loops we currently get, but
those are pretty broken for nested loops, so this may change later.
llvm-svn: 142743
2011-10-23 17:18:45 +08:00
|
|
|
class BlockChain {
|
2018-05-01 23:54:18 +08:00
|
|
|
/// The sequence of blocks belonging to this chain.
|
Implement a block placement pass based on the branch probability and
block frequency analyses. This differs substantially from the existing
block-placement pass in LLVM:
1) It operates on the Machine-IR in the CodeGen layer. This exposes much
more (and more precise) information and opportunities. Also, the
results are more stable due to fewer transforms ocurring after the
pass runs.
2) It uses the generalized probability and frequency analyses. These can
model static heuristics, code annotation derived heuristics as well
as eventual profile loading. By basing the optimization on the
analysis interface it can work from any (or a combination) of these
inputs.
3) It uses a more aggressive algorithm, both building chains from tho
bottom up to maximize benefit, and using an SCC-based walk to layout
chains of blocks in a profitable ordering without O(N^2) iterations
which the old pass involves.
The pass is currently gated behind a flag, and not enabled by default
because it still needs to grow some important features. Most notably, it
needs to support loop aligning and careful layout of loop structures
much as done by hand currently in CodePlacementOpt. Once it supports
these, and has sufficient testing and quality tuning, it should replace
both of these passes.
Thanks to Nick Lewycky and Richard Smith for help authoring & debugging
this, and to Jakob, Andy, Eric, Jim, and probably a few others I'm
forgetting for reviewing and answering all my questions. Writing
a backend pass is *sooo* much better now than it used to be. =D
llvm-svn: 142641
2011-10-21 14:46:38 +08:00
|
|
|
///
|
Completely re-write the algorithm behind MachineBlockPlacement based on
discussions with Andy. Fundamentally, the previous algorithm is both
counter productive on several fronts and prioritizing things which
aren't necessarily the most important: static branch prediction.
The new algorithm uses the existing loop CFG structure information to
walk through the CFG itself to layout blocks. It coalesces adjacent
blocks within the loop where the CFG allows based on the most likely
path taken. Finally, it topologically orders the block chains that have
been formed. This allows it to choose a (mostly) topologically valid
ordering which still priorizes fallthrough within the structural
constraints.
As a final twist in the algorithm, it does violate the CFG when it
discovers a "hot" edge, that is an edge that is more than 4x hotter than
the competing edges in the CFG. These are forcibly merged into
a fallthrough chain.
Future transformations that need te be added are rotation of loop exit
conditions to be fallthrough, and better isolation of cold block chains.
I'm also planning on adding statistics to model how well the algorithm
does at laying out blocks based on the probabilities it receives.
The old tests mostly still pass, and I have some new tests to add, but
the nested loops are still behaving very strangely. This almost seems
like working-as-intended as it rotated the exit branch to be
fallthrough, but I'm not convinced this is actually the best layout. It
is well supported by the probabilities for loops we currently get, but
those are pretty broken for nested loops, so this may change later.
llvm-svn: 142743
2011-10-23 17:18:45 +08:00
|
|
|
/// This is the sequence of blocks for a particular chain. These will be laid
|
|
|
|
/// out in-order within the function.
|
|
|
|
SmallVector<MachineBasicBlock *, 4> Blocks;
|
Implement a block placement pass based on the branch probability and
block frequency analyses. This differs substantially from the existing
block-placement pass in LLVM:
1) It operates on the Machine-IR in the CodeGen layer. This exposes much
more (and more precise) information and opportunities. Also, the
results are more stable due to fewer transforms ocurring after the
pass runs.
2) It uses the generalized probability and frequency analyses. These can
model static heuristics, code annotation derived heuristics as well
as eventual profile loading. By basing the optimization on the
analysis interface it can work from any (or a combination) of these
inputs.
3) It uses a more aggressive algorithm, both building chains from tho
bottom up to maximize benefit, and using an SCC-based walk to layout
chains of blocks in a profitable ordering without O(N^2) iterations
which the old pass involves.
The pass is currently gated behind a flag, and not enabled by default
because it still needs to grow some important features. Most notably, it
needs to support loop aligning and careful layout of loop structures
much as done by hand currently in CodePlacementOpt. Once it supports
these, and has sufficient testing and quality tuning, it should replace
both of these passes.
Thanks to Nick Lewycky and Richard Smith for help authoring & debugging
this, and to Jakob, Andy, Eric, Jim, and probably a few others I'm
forgetting for reviewing and answering all my questions. Writing
a backend pass is *sooo* much better now than it used to be. =D
llvm-svn: 142641
2011-10-21 14:46:38 +08:00
|
|
|
|
2018-05-01 23:54:18 +08:00
|
|
|
/// A handle to the function-wide basic block to block chain mapping.
|
Implement a block placement pass based on the branch probability and
block frequency analyses. This differs substantially from the existing
block-placement pass in LLVM:
1) It operates on the Machine-IR in the CodeGen layer. This exposes much
more (and more precise) information and opportunities. Also, the
results are more stable due to fewer transforms ocurring after the
pass runs.
2) It uses the generalized probability and frequency analyses. These can
model static heuristics, code annotation derived heuristics as well
as eventual profile loading. By basing the optimization on the
analysis interface it can work from any (or a combination) of these
inputs.
3) It uses a more aggressive algorithm, both building chains from tho
bottom up to maximize benefit, and using an SCC-based walk to layout
chains of blocks in a profitable ordering without O(N^2) iterations
which the old pass involves.
The pass is currently gated behind a flag, and not enabled by default
because it still needs to grow some important features. Most notably, it
needs to support loop aligning and careful layout of loop structures
much as done by hand currently in CodePlacementOpt. Once it supports
these, and has sufficient testing and quality tuning, it should replace
both of these passes.
Thanks to Nick Lewycky and Richard Smith for help authoring & debugging
this, and to Jakob, Andy, Eric, Jim, and probably a few others I'm
forgetting for reviewing and answering all my questions. Writing
a backend pass is *sooo* much better now than it used to be. =D
llvm-svn: 142641
2011-10-21 14:46:38 +08:00
|
|
|
///
|
|
|
|
/// This is retained in each block chain to simplify the computation of child
|
|
|
|
/// block chains for SCC-formation and iteration. We store the edges to child
|
|
|
|
/// basic blocks, and map them back to their associated chains using this
|
|
|
|
/// structure.
|
|
|
|
BlockToChainMapType &BlockToChain;
|
|
|
|
|
Completely re-write the algorithm behind MachineBlockPlacement based on
discussions with Andy. Fundamentally, the previous algorithm is both
counter productive on several fronts and prioritizing things which
aren't necessarily the most important: static branch prediction.
The new algorithm uses the existing loop CFG structure information to
walk through the CFG itself to layout blocks. It coalesces adjacent
blocks within the loop where the CFG allows based on the most likely
path taken. Finally, it topologically orders the block chains that have
been formed. This allows it to choose a (mostly) topologically valid
ordering which still priorizes fallthrough within the structural
constraints.
As a final twist in the algorithm, it does violate the CFG when it
discovers a "hot" edge, that is an edge that is more than 4x hotter than
the competing edges in the CFG. These are forcibly merged into
a fallthrough chain.
Future transformations that need te be added are rotation of loop exit
conditions to be fallthrough, and better isolation of cold block chains.
I'm also planning on adding statistics to model how well the algorithm
does at laying out blocks based on the probabilities it receives.
The old tests mostly still pass, and I have some new tests to add, but
the nested loops are still behaving very strangely. This almost seems
like working-as-intended as it rotated the exit branch to be
fallthrough, but I'm not convinced this is actually the best layout. It
is well supported by the probabilities for loops we currently get, but
those are pretty broken for nested loops, so this may change later.
llvm-svn: 142743
2011-10-23 17:18:45 +08:00
|
|
|
public:
|
2018-05-01 23:54:18 +08:00
|
|
|
/// Construct a new BlockChain.
|
Implement a block placement pass based on the branch probability and
block frequency analyses. This differs substantially from the existing
block-placement pass in LLVM:
1) It operates on the Machine-IR in the CodeGen layer. This exposes much
more (and more precise) information and opportunities. Also, the
results are more stable due to fewer transforms ocurring after the
pass runs.
2) It uses the generalized probability and frequency analyses. These can
model static heuristics, code annotation derived heuristics as well
as eventual profile loading. By basing the optimization on the
analysis interface it can work from any (or a combination) of these
inputs.
3) It uses a more aggressive algorithm, both building chains from tho
bottom up to maximize benefit, and using an SCC-based walk to layout
chains of blocks in a profitable ordering without O(N^2) iterations
which the old pass involves.
The pass is currently gated behind a flag, and not enabled by default
because it still needs to grow some important features. Most notably, it
needs to support loop aligning and careful layout of loop structures
much as done by hand currently in CodePlacementOpt. Once it supports
these, and has sufficient testing and quality tuning, it should replace
both of these passes.
Thanks to Nick Lewycky and Richard Smith for help authoring & debugging
this, and to Jakob, Andy, Eric, Jim, and probably a few others I'm
forgetting for reviewing and answering all my questions. Writing
a backend pass is *sooo* much better now than it used to be. =D
llvm-svn: 142641
2011-10-21 14:46:38 +08:00
|
|
|
///
|
|
|
|
/// This builds a new block chain representing a single basic block in the
|
|
|
|
/// function. It also registers itself as the chain that block participates
|
|
|
|
/// in with the BlockToChain mapping.
|
|
|
|
BlockChain(BlockToChainMapType &BlockToChain, MachineBasicBlock *BB)
|
2017-08-25 05:21:39 +08:00
|
|
|
: Blocks(1, BB), BlockToChain(BlockToChain) {
|
Implement a block placement pass based on the branch probability and
block frequency analyses. This differs substantially from the existing
block-placement pass in LLVM:
1) It operates on the Machine-IR in the CodeGen layer. This exposes much
more (and more precise) information and opportunities. Also, the
results are more stable due to fewer transforms ocurring after the
pass runs.
2) It uses the generalized probability and frequency analyses. These can
model static heuristics, code annotation derived heuristics as well
as eventual profile loading. By basing the optimization on the
analysis interface it can work from any (or a combination) of these
inputs.
3) It uses a more aggressive algorithm, both building chains from tho
bottom up to maximize benefit, and using an SCC-based walk to layout
chains of blocks in a profitable ordering without O(N^2) iterations
which the old pass involves.
The pass is currently gated behind a flag, and not enabled by default
because it still needs to grow some important features. Most notably, it
needs to support loop aligning and careful layout of loop structures
much as done by hand currently in CodePlacementOpt. Once it supports
these, and has sufficient testing and quality tuning, it should replace
both of these passes.
Thanks to Nick Lewycky and Richard Smith for help authoring & debugging
this, and to Jakob, Andy, Eric, Jim, and probably a few others I'm
forgetting for reviewing and answering all my questions. Writing
a backend pass is *sooo* much better now than it used to be. =D
llvm-svn: 142641
2011-10-21 14:46:38 +08:00
|
|
|
assert(BB && "Cannot create a chain with a null basic block");
|
|
|
|
BlockToChain[BB] = this;
|
|
|
|
}
|
|
|
|
|
2018-05-01 23:54:18 +08:00
|
|
|
/// Iterator over blocks within the chain.
|
2017-08-25 05:21:39 +08:00
|
|
|
using iterator = SmallVectorImpl<MachineBasicBlock *>::iterator;
|
|
|
|
using const_iterator = SmallVectorImpl<MachineBasicBlock *>::const_iterator;
|
Completely re-write the algorithm behind MachineBlockPlacement based on
discussions with Andy. Fundamentally, the previous algorithm is both
counter productive on several fronts and prioritizing things which
aren't necessarily the most important: static branch prediction.
The new algorithm uses the existing loop CFG structure information to
walk through the CFG itself to layout blocks. It coalesces adjacent
blocks within the loop where the CFG allows based on the most likely
path taken. Finally, it topologically orders the block chains that have
been formed. This allows it to choose a (mostly) topologically valid
ordering which still priorizes fallthrough within the structural
constraints.
As a final twist in the algorithm, it does violate the CFG when it
discovers a "hot" edge, that is an edge that is more than 4x hotter than
the competing edges in the CFG. These are forcibly merged into
a fallthrough chain.
Future transformations that need te be added are rotation of loop exit
conditions to be fallthrough, and better isolation of cold block chains.
I'm also planning on adding statistics to model how well the algorithm
does at laying out blocks based on the probabilities it receives.
The old tests mostly still pass, and I have some new tests to add, but
the nested loops are still behaving very strangely. This almost seems
like working-as-intended as it rotated the exit branch to be
fallthrough, but I'm not convinced this is actually the best layout. It
is well supported by the probabilities for loops we currently get, but
those are pretty broken for nested loops, so this may change later.
llvm-svn: 142743
2011-10-23 17:18:45 +08:00
|
|
|
|
2018-05-01 23:54:18 +08:00
|
|
|
/// Beginning of blocks within the chain.
|
Rewrite how machine block placement handles loop rotation.
This is a complex change that resulted from a great deal of
experimentation with several different benchmarks. The one which proved
the most useful is included as a test case, but I don't know that it
captures all of the relevant changes, as I didn't have specific
regression tests for each, they were more the result of reasoning about
what the old algorithm would possibly do wrong. I'm also failing at the
moment to craft more targeted regression tests for these changes, if
anyone has ideas, it would be welcome.
The first big thing broken with the old algorithm is the idea that we
can take a basic block which has a loop-exiting successor and a looping
successor and use the looping successor as the layout top in order to
get that particular block to be the bottom of the loop after layout.
This happens to work in many cases, but not in all.
The second big thing broken was that we didn't try to select the exit
which fell into the nearest enclosing loop (to which we exit at all). As
a consequence, even if the rotation worked perfectly, it would result in
one of two bad layouts. Either the bottom of the loop would get
fallthrough, skipping across a nearer enclosing loop and thereby making
it discontiguous, or it would be forced to take an explicit jump over
the nearest enclosing loop to earch its successor. The point of the
rotation is to get fallthrough, so we need it to fallthrough to the
nearest loop it can.
The fix to the first issue is to actually layout the loop from the loop
header, and then rotate the loop such that the correct exiting edge can
be a fallthrough edge. This is actually much easier than I anticipated
because we can handle all the hard parts of finding a viable rotation
before we do the layout. We just store that, and then rotate after
layout is finished. No inner loops get split across the post-rotation
backedge because we check for them when selecting the rotation.
That fix exposed a latent problem with our exitting block selection --
we should allow the backedge to point into the middle of some inner-loop
chain as there is no real penalty to it, the whole point is that it
*won't* be a fallthrough edge. This may have blocked the rotation at all
in some cases, I have no idea and no test case as I've never seen it in
practice, it was just noticed by inspection.
Finally, all of these fixes, and studying the loops they produce,
highlighted another problem: in rotating loops like this, we sometimes
fail to align the destination of these backwards jumping edges. Fix this
by actually walking the backwards edges rather than relying on loopinfo.
This fixes regressions on heapsort if block placement is enabled as well
as lots of other cases where the previous logic would introduce an
abundance of unnecessary branches into the execution.
llvm-svn: 154783
2012-04-16 09:12:56 +08:00
|
|
|
iterator begin() { return Blocks.begin(); }
|
2017-02-04 10:26:32 +08:00
|
|
|
const_iterator begin() const { return Blocks.begin(); }
|
Completely re-write the algorithm behind MachineBlockPlacement based on
discussions with Andy. Fundamentally, the previous algorithm is both
counter productive on several fronts and prioritizing things which
aren't necessarily the most important: static branch prediction.
The new algorithm uses the existing loop CFG structure information to
walk through the CFG itself to layout blocks. It coalesces adjacent
blocks within the loop where the CFG allows based on the most likely
path taken. Finally, it topologically orders the block chains that have
been formed. This allows it to choose a (mostly) topologically valid
ordering which still priorizes fallthrough within the structural
constraints.
As a final twist in the algorithm, it does violate the CFG when it
discovers a "hot" edge, that is an edge that is more than 4x hotter than
the competing edges in the CFG. These are forcibly merged into
a fallthrough chain.
Future transformations that need te be added are rotation of loop exit
conditions to be fallthrough, and better isolation of cold block chains.
I'm also planning on adding statistics to model how well the algorithm
does at laying out blocks based on the probabilities it receives.
The old tests mostly still pass, and I have some new tests to add, but
the nested loops are still behaving very strangely. This almost seems
like working-as-intended as it rotated the exit branch to be
fallthrough, but I'm not convinced this is actually the best layout. It
is well supported by the probabilities for loops we currently get, but
those are pretty broken for nested loops, so this may change later.
llvm-svn: 142743
2011-10-23 17:18:45 +08:00
|
|
|
|
2018-05-01 23:54:18 +08:00
|
|
|
/// End of blocks within the chain.
|
Rewrite how machine block placement handles loop rotation.
This is a complex change that resulted from a great deal of
experimentation with several different benchmarks. The one which proved
the most useful is included as a test case, but I don't know that it
captures all of the relevant changes, as I didn't have specific
regression tests for each, they were more the result of reasoning about
what the old algorithm would possibly do wrong. I'm also failing at the
moment to craft more targeted regression tests for these changes, if
anyone has ideas, it would be welcome.
The first big thing broken with the old algorithm is the idea that we
can take a basic block which has a loop-exiting successor and a looping
successor and use the looping successor as the layout top in order to
get that particular block to be the bottom of the loop after layout.
This happens to work in many cases, but not in all.
The second big thing broken was that we didn't try to select the exit
which fell into the nearest enclosing loop (to which we exit at all). As
a consequence, even if the rotation worked perfectly, it would result in
one of two bad layouts. Either the bottom of the loop would get
fallthrough, skipping across a nearer enclosing loop and thereby making
it discontiguous, or it would be forced to take an explicit jump over
the nearest enclosing loop to earch its successor. The point of the
rotation is to get fallthrough, so we need it to fallthrough to the
nearest loop it can.
The fix to the first issue is to actually layout the loop from the loop
header, and then rotate the loop such that the correct exiting edge can
be a fallthrough edge. This is actually much easier than I anticipated
because we can handle all the hard parts of finding a viable rotation
before we do the layout. We just store that, and then rotate after
layout is finished. No inner loops get split across the post-rotation
backedge because we check for them when selecting the rotation.
That fix exposed a latent problem with our exitting block selection --
we should allow the backedge to point into the middle of some inner-loop
chain as there is no real penalty to it, the whole point is that it
*won't* be a fallthrough edge. This may have blocked the rotation at all
in some cases, I have no idea and no test case as I've never seen it in
practice, it was just noticed by inspection.
Finally, all of these fixes, and studying the loops they produce,
highlighted another problem: in rotating loops like this, we sometimes
fail to align the destination of these backwards jumping edges. Fix this
by actually walking the backwards edges rather than relying on loopinfo.
This fixes regressions on heapsort if block placement is enabled as well
as lots of other cases where the previous logic would introduce an
abundance of unnecessary branches into the execution.
llvm-svn: 154783
2012-04-16 09:12:56 +08:00
|
|
|
iterator end() { return Blocks.end(); }
|
2017-02-04 10:26:32 +08:00
|
|
|
const_iterator end() const { return Blocks.end(); }
|
Completely re-write the algorithm behind MachineBlockPlacement based on
discussions with Andy. Fundamentally, the previous algorithm is both
counter productive on several fronts and prioritizing things which
aren't necessarily the most important: static branch prediction.
The new algorithm uses the existing loop CFG structure information to
walk through the CFG itself to layout blocks. It coalesces adjacent
blocks within the loop where the CFG allows based on the most likely
path taken. Finally, it topologically orders the block chains that have
been formed. This allows it to choose a (mostly) topologically valid
ordering which still priorizes fallthrough within the structural
constraints.
As a final twist in the algorithm, it does violate the CFG when it
discovers a "hot" edge, that is an edge that is more than 4x hotter than
the competing edges in the CFG. These are forcibly merged into
a fallthrough chain.
Future transformations that need te be added are rotation of loop exit
conditions to be fallthrough, and better isolation of cold block chains.
I'm also planning on adding statistics to model how well the algorithm
does at laying out blocks based on the probabilities it receives.
The old tests mostly still pass, and I have some new tests to add, but
the nested loops are still behaving very strangely. This almost seems
like working-as-intended as it rotated the exit branch to be
fallthrough, but I'm not convinced this is actually the best layout. It
is well supported by the probabilities for loops we currently get, but
those are pretty broken for nested loops, so this may change later.
llvm-svn: 142743
2011-10-23 17:18:45 +08:00
|
|
|
|
Codegen: Tail-duplicate during placement.
The tail duplication pass uses an assumed layout when making duplication
decisions. This is fine, but passes up duplication opportunities that
may arise when blocks are outlined. Because we want the updated CFG to
affect subsequent placement decisions, this change must occur during
placement.
In order to achieve this goal, TailDuplicationPass is split into a
utility class, TailDuplicator, and the pass itself. The pass delegates
nearly everything to the TailDuplicator object, except for looping over
the blocks in a function. This allows the same code to be used for tail
duplication in both places.
This change, in concert with outlining optional branches, allows
triangle shaped code to perform much better, esepecially when the
taken/untaken branches are correlated, as it creates a second spine when
the tests are small enough.
Issue from previous rollback fixed, and a new test was added for that
case as well. Issue was worklist/scheduling/taildup issue in layout.
Issue from 2nd rollback fixed, with 2 additional tests. Issue was
tail merging/loop info/tail-duplication causing issue with loops that share
a header block.
Issue with early tail-duplication of blocks that branch to a fallthrough
predecessor fixed with test case: tail-dup-branch-to-fallthrough.ll
Differential revision: https://reviews.llvm.org/D18226
llvm-svn: 283934
2016-10-12 04:36:43 +08:00
|
|
|
bool remove(MachineBasicBlock* BB) {
|
|
|
|
for(iterator i = begin(); i != end(); ++i) {
|
|
|
|
if (*i == BB) {
|
|
|
|
Blocks.erase(i);
|
|
|
|
return true;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
return false;
|
|
|
|
}
|
|
|
|
|
2018-05-01 23:54:18 +08:00
|
|
|
/// Merge a block chain into this one.
|
Implement a block placement pass based on the branch probability and
block frequency analyses. This differs substantially from the existing
block-placement pass in LLVM:
1) It operates on the Machine-IR in the CodeGen layer. This exposes much
more (and more precise) information and opportunities. Also, the
results are more stable due to fewer transforms ocurring after the
pass runs.
2) It uses the generalized probability and frequency analyses. These can
model static heuristics, code annotation derived heuristics as well
as eventual profile loading. By basing the optimization on the
analysis interface it can work from any (or a combination) of these
inputs.
3) It uses a more aggressive algorithm, both building chains from tho
bottom up to maximize benefit, and using an SCC-based walk to layout
chains of blocks in a profitable ordering without O(N^2) iterations
which the old pass involves.
The pass is currently gated behind a flag, and not enabled by default
because it still needs to grow some important features. Most notably, it
needs to support loop aligning and careful layout of loop structures
much as done by hand currently in CodePlacementOpt. Once it supports
these, and has sufficient testing and quality tuning, it should replace
both of these passes.
Thanks to Nick Lewycky and Richard Smith for help authoring & debugging
this, and to Jakob, Andy, Eric, Jim, and probably a few others I'm
forgetting for reviewing and answering all my questions. Writing
a backend pass is *sooo* much better now than it used to be. =D
llvm-svn: 142641
2011-10-21 14:46:38 +08:00
|
|
|
///
|
|
|
|
/// This routine merges a block chain into this one. It takes care of forming
|
|
|
|
/// a contiguous sequence of basic blocks, updating the edge list, and
|
|
|
|
/// updating the block -> chain mapping. It does not free or tear down the
|
|
|
|
/// old chain, but the old chain's block list is no longer valid.
|
2011-12-22 07:02:08 +08:00
|
|
|
void merge(MachineBasicBlock *BB, BlockChain *Chain) {
|
2017-05-18 07:44:41 +08:00
|
|
|
assert(BB && "Can't merge a null block.");
|
|
|
|
assert(!Blocks.empty() && "Can't merge into an empty chain.");
|
Completely re-write the algorithm behind MachineBlockPlacement based on
discussions with Andy. Fundamentally, the previous algorithm is both
counter productive on several fronts and prioritizing things which
aren't necessarily the most important: static branch prediction.
The new algorithm uses the existing loop CFG structure information to
walk through the CFG itself to layout blocks. It coalesces adjacent
blocks within the loop where the CFG allows based on the most likely
path taken. Finally, it topologically orders the block chains that have
been formed. This allows it to choose a (mostly) topologically valid
ordering which still priorizes fallthrough within the structural
constraints.
As a final twist in the algorithm, it does violate the CFG when it
discovers a "hot" edge, that is an edge that is more than 4x hotter than
the competing edges in the CFG. These are forcibly merged into
a fallthrough chain.
Future transformations that need te be added are rotation of loop exit
conditions to be fallthrough, and better isolation of cold block chains.
I'm also planning on adding statistics to model how well the algorithm
does at laying out blocks based on the probabilities it receives.
The old tests mostly still pass, and I have some new tests to add, but
the nested loops are still behaving very strangely. This almost seems
like working-as-intended as it rotated the exit branch to be
fallthrough, but I'm not convinced this is actually the best layout. It
is well supported by the probabilities for loops we currently get, but
those are pretty broken for nested loops, so this may change later.
llvm-svn: 142743
2011-10-23 17:18:45 +08:00
|
|
|
|
|
|
|
// Fast path in case we don't have a chain already.
|
|
|
|
if (!Chain) {
|
2017-05-18 07:44:41 +08:00
|
|
|
assert(!BlockToChain[BB] &&
|
|
|
|
"Passed chain is null, but BB has entry in BlockToChain.");
|
Completely re-write the algorithm behind MachineBlockPlacement based on
discussions with Andy. Fundamentally, the previous algorithm is both
counter productive on several fronts and prioritizing things which
aren't necessarily the most important: static branch prediction.
The new algorithm uses the existing loop CFG structure information to
walk through the CFG itself to layout blocks. It coalesces adjacent
blocks within the loop where the CFG allows based on the most likely
path taken. Finally, it topologically orders the block chains that have
been formed. This allows it to choose a (mostly) topologically valid
ordering which still priorizes fallthrough within the structural
constraints.
As a final twist in the algorithm, it does violate the CFG when it
discovers a "hot" edge, that is an edge that is more than 4x hotter than
the competing edges in the CFG. These are forcibly merged into
a fallthrough chain.
Future transformations that need te be added are rotation of loop exit
conditions to be fallthrough, and better isolation of cold block chains.
I'm also planning on adding statistics to model how well the algorithm
does at laying out blocks based on the probabilities it receives.
The old tests mostly still pass, and I have some new tests to add, but
the nested loops are still behaving very strangely. This almost seems
like working-as-intended as it rotated the exit branch to be
fallthrough, but I'm not convinced this is actually the best layout. It
is well supported by the probabilities for loops we currently get, but
those are pretty broken for nested loops, so this may change later.
llvm-svn: 142743
2011-10-23 17:18:45 +08:00
|
|
|
Blocks.push_back(BB);
|
|
|
|
BlockToChain[BB] = this;
|
|
|
|
return;
|
Implement a block placement pass based on the branch probability and
block frequency analyses. This differs substantially from the existing
block-placement pass in LLVM:
1) It operates on the Machine-IR in the CodeGen layer. This exposes much
more (and more precise) information and opportunities. Also, the
results are more stable due to fewer transforms ocurring after the
pass runs.
2) It uses the generalized probability and frequency analyses. These can
model static heuristics, code annotation derived heuristics as well
as eventual profile loading. By basing the optimization on the
analysis interface it can work from any (or a combination) of these
inputs.
3) It uses a more aggressive algorithm, both building chains from tho
bottom up to maximize benefit, and using an SCC-based walk to layout
chains of blocks in a profitable ordering without O(N^2) iterations
which the old pass involves.
The pass is currently gated behind a flag, and not enabled by default
because it still needs to grow some important features. Most notably, it
needs to support loop aligning and careful layout of loop structures
much as done by hand currently in CodePlacementOpt. Once it supports
these, and has sufficient testing and quality tuning, it should replace
both of these passes.
Thanks to Nick Lewycky and Richard Smith for help authoring & debugging
this, and to Jakob, Andy, Eric, Jim, and probably a few others I'm
forgetting for reviewing and answering all my questions. Writing
a backend pass is *sooo* much better now than it used to be. =D
llvm-svn: 142641
2011-10-21 14:46:38 +08:00
|
|
|
}
|
|
|
|
|
2017-05-18 07:44:41 +08:00
|
|
|
assert(BB == *Chain->begin() && "Passed BB is not head of Chain.");
|
Completely re-write the algorithm behind MachineBlockPlacement based on
discussions with Andy. Fundamentally, the previous algorithm is both
counter productive on several fronts and prioritizing things which
aren't necessarily the most important: static branch prediction.
The new algorithm uses the existing loop CFG structure information to
walk through the CFG itself to layout blocks. It coalesces adjacent
blocks within the loop where the CFG allows based on the most likely
path taken. Finally, it topologically orders the block chains that have
been formed. This allows it to choose a (mostly) topologically valid
ordering which still priorizes fallthrough within the structural
constraints.
As a final twist in the algorithm, it does violate the CFG when it
discovers a "hot" edge, that is an edge that is more than 4x hotter than
the competing edges in the CFG. These are forcibly merged into
a fallthrough chain.
Future transformations that need te be added are rotation of loop exit
conditions to be fallthrough, and better isolation of cold block chains.
I'm also planning on adding statistics to model how well the algorithm
does at laying out blocks based on the probabilities it receives.
The old tests mostly still pass, and I have some new tests to add, but
the nested loops are still behaving very strangely. This almost seems
like working-as-intended as it rotated the exit branch to be
fallthrough, but I'm not convinced this is actually the best layout. It
is well supported by the probabilities for loops we currently get, but
those are pretty broken for nested loops, so this may change later.
llvm-svn: 142743
2011-10-23 17:18:45 +08:00
|
|
|
assert(Chain->begin() != Chain->end());
|
Implement a block placement pass based on the branch probability and
block frequency analyses. This differs substantially from the existing
block-placement pass in LLVM:
1) It operates on the Machine-IR in the CodeGen layer. This exposes much
more (and more precise) information and opportunities. Also, the
results are more stable due to fewer transforms ocurring after the
pass runs.
2) It uses the generalized probability and frequency analyses. These can
model static heuristics, code annotation derived heuristics as well
as eventual profile loading. By basing the optimization on the
analysis interface it can work from any (or a combination) of these
inputs.
3) It uses a more aggressive algorithm, both building chains from tho
bottom up to maximize benefit, and using an SCC-based walk to layout
chains of blocks in a profitable ordering without O(N^2) iterations
which the old pass involves.
The pass is currently gated behind a flag, and not enabled by default
because it still needs to grow some important features. Most notably, it
needs to support loop aligning and careful layout of loop structures
much as done by hand currently in CodePlacementOpt. Once it supports
these, and has sufficient testing and quality tuning, it should replace
both of these passes.
Thanks to Nick Lewycky and Richard Smith for help authoring & debugging
this, and to Jakob, Andy, Eric, Jim, and probably a few others I'm
forgetting for reviewing and answering all my questions. Writing
a backend pass is *sooo* much better now than it used to be. =D
llvm-svn: 142641
2011-10-21 14:46:38 +08:00
|
|
|
|
Completely re-write the algorithm behind MachineBlockPlacement based on
discussions with Andy. Fundamentally, the previous algorithm is both
counter productive on several fronts and prioritizing things which
aren't necessarily the most important: static branch prediction.
The new algorithm uses the existing loop CFG structure information to
walk through the CFG itself to layout blocks. It coalesces adjacent
blocks within the loop where the CFG allows based on the most likely
path taken. Finally, it topologically orders the block chains that have
been formed. This allows it to choose a (mostly) topologically valid
ordering which still priorizes fallthrough within the structural
constraints.
As a final twist in the algorithm, it does violate the CFG when it
discovers a "hot" edge, that is an edge that is more than 4x hotter than
the competing edges in the CFG. These are forcibly merged into
a fallthrough chain.
Future transformations that need te be added are rotation of loop exit
conditions to be fallthrough, and better isolation of cold block chains.
I'm also planning on adding statistics to model how well the algorithm
does at laying out blocks based on the probabilities it receives.
The old tests mostly still pass, and I have some new tests to add, but
the nested loops are still behaving very strangely. This almost seems
like working-as-intended as it rotated the exit branch to be
fallthrough, but I'm not convinced this is actually the best layout. It
is well supported by the probabilities for loops we currently get, but
those are pretty broken for nested loops, so this may change later.
llvm-svn: 142743
2011-10-23 17:18:45 +08:00
|
|
|
// Update the incoming blocks to point to this chain, and add them to the
|
|
|
|
// chain structure.
|
2015-03-05 11:19:05 +08:00
|
|
|
for (MachineBasicBlock *ChainBB : *Chain) {
|
|
|
|
Blocks.push_back(ChainBB);
|
2017-05-18 07:44:41 +08:00
|
|
|
assert(BlockToChain[ChainBB] == Chain && "Incoming blocks not in chain.");
|
2015-03-05 11:19:05 +08:00
|
|
|
BlockToChain[ChainBB] = this;
|
Completely re-write the algorithm behind MachineBlockPlacement based on
discussions with Andy. Fundamentally, the previous algorithm is both
counter productive on several fronts and prioritizing things which
aren't necessarily the most important: static branch prediction.
The new algorithm uses the existing loop CFG structure information to
walk through the CFG itself to layout blocks. It coalesces adjacent
blocks within the loop where the CFG allows based on the most likely
path taken. Finally, it topologically orders the block chains that have
been formed. This allows it to choose a (mostly) topologically valid
ordering which still priorizes fallthrough within the structural
constraints.
As a final twist in the algorithm, it does violate the CFG when it
discovers a "hot" edge, that is an edge that is more than 4x hotter than
the competing edges in the CFG. These are forcibly merged into
a fallthrough chain.
Future transformations that need te be added are rotation of loop exit
conditions to be fallthrough, and better isolation of cold block chains.
I'm also planning on adding statistics to model how well the algorithm
does at laying out blocks based on the probabilities it receives.
The old tests mostly still pass, and I have some new tests to add, but
the nested loops are still behaving very strangely. This almost seems
like working-as-intended as it rotated the exit branch to be
fallthrough, but I'm not convinced this is actually the best layout. It
is well supported by the probabilities for loops we currently get, but
those are pretty broken for nested loops, so this may change later.
llvm-svn: 142743
2011-10-23 17:18:45 +08:00
|
|
|
}
|
Implement a block placement pass based on the branch probability and
block frequency analyses. This differs substantially from the existing
block-placement pass in LLVM:
1) It operates on the Machine-IR in the CodeGen layer. This exposes much
more (and more precise) information and opportunities. Also, the
results are more stable due to fewer transforms ocurring after the
pass runs.
2) It uses the generalized probability and frequency analyses. These can
model static heuristics, code annotation derived heuristics as well
as eventual profile loading. By basing the optimization on the
analysis interface it can work from any (or a combination) of these
inputs.
3) It uses a more aggressive algorithm, both building chains from tho
bottom up to maximize benefit, and using an SCC-based walk to layout
chains of blocks in a profitable ordering without O(N^2) iterations
which the old pass involves.
The pass is currently gated behind a flag, and not enabled by default
because it still needs to grow some important features. Most notably, it
needs to support loop aligning and careful layout of loop structures
much as done by hand currently in CodePlacementOpt. Once it supports
these, and has sufficient testing and quality tuning, it should replace
both of these passes.
Thanks to Nick Lewycky and Richard Smith for help authoring & debugging
this, and to Jakob, Andy, Eric, Jim, and probably a few others I'm
forgetting for reviewing and answering all my questions. Writing
a backend pass is *sooo* much better now than it used to be. =D
llvm-svn: 142641
2011-10-21 14:46:38 +08:00
|
|
|
}
|
2011-11-13 19:20:44 +08:00
|
|
|
|
2012-04-08 22:37:01 +08:00
|
|
|
#ifndef NDEBUG
|
2018-05-01 23:54:18 +08:00
|
|
|
/// Dump the blocks in this chain.
|
2014-01-04 06:53:37 +08:00
|
|
|
LLVM_DUMP_METHOD void dump() {
|
2015-03-05 11:19:05 +08:00
|
|
|
for (MachineBasicBlock *MBB : *this)
|
|
|
|
MBB->dump();
|
2012-04-08 22:37:01 +08:00
|
|
|
}
|
|
|
|
#endif // NDEBUG
|
|
|
|
|
2018-05-01 23:54:18 +08:00
|
|
|
/// Count of predecessors of any block within the chain which have not
|
2016-03-03 08:58:43 +08:00
|
|
|
/// yet been scheduled. In general, we will delay scheduling this chain
|
|
|
|
/// until those predecessors are scheduled (or we find a sufficiently good
|
|
|
|
/// reason to override this heuristic.) Note that when forming loop chains,
|
|
|
|
/// blocks outside the loop are ignored and treated as if they were already
|
|
|
|
/// scheduled.
|
2011-11-13 19:20:44 +08:00
|
|
|
///
|
2016-03-03 08:58:43 +08:00
|
|
|
/// Note: This field is reinitialized multiple times - once for each loop,
|
|
|
|
/// and then once for the function as a whole.
|
2017-08-25 05:21:39 +08:00
|
|
|
unsigned UnscheduledPredecessors = 0;
|
Implement a block placement pass based on the branch probability and
block frequency analyses. This differs substantially from the existing
block-placement pass in LLVM:
1) It operates on the Machine-IR in the CodeGen layer. This exposes much
more (and more precise) information and opportunities. Also, the
results are more stable due to fewer transforms ocurring after the
pass runs.
2) It uses the generalized probability and frequency analyses. These can
model static heuristics, code annotation derived heuristics as well
as eventual profile loading. By basing the optimization on the
analysis interface it can work from any (or a combination) of these
inputs.
3) It uses a more aggressive algorithm, both building chains from tho
bottom up to maximize benefit, and using an SCC-based walk to layout
chains of blocks in a profitable ordering without O(N^2) iterations
which the old pass involves.
The pass is currently gated behind a flag, and not enabled by default
because it still needs to grow some important features. Most notably, it
needs to support loop aligning and careful layout of loop structures
much as done by hand currently in CodePlacementOpt. Once it supports
these, and has sufficient testing and quality tuning, it should replace
both of these passes.
Thanks to Nick Lewycky and Richard Smith for help authoring & debugging
this, and to Jakob, Andy, Eric, Jim, and probably a few others I'm
forgetting for reviewing and answering all my questions. Writing
a backend pass is *sooo* much better now than it used to be. =D
llvm-svn: 142641
2011-10-21 14:46:38 +08:00
|
|
|
};
|
|
|
|
|
|
|
|
class MachineBlockPlacement : public MachineFunctionPass {
|
2018-05-01 23:54:18 +08:00
|
|
|
/// A type for a block filter set.
|
2017-08-25 05:21:39 +08:00
|
|
|
using BlockFilterSet = SmallSetVector<const MachineBasicBlock *, 16>;
|
Completely re-write the algorithm behind MachineBlockPlacement based on
discussions with Andy. Fundamentally, the previous algorithm is both
counter productive on several fronts and prioritizing things which
aren't necessarily the most important: static branch prediction.
The new algorithm uses the existing loop CFG structure information to
walk through the CFG itself to layout blocks. It coalesces adjacent
blocks within the loop where the CFG allows based on the most likely
path taken. Finally, it topologically orders the block chains that have
been formed. This allows it to choose a (mostly) topologically valid
ordering which still priorizes fallthrough within the structural
constraints.
As a final twist in the algorithm, it does violate the CFG when it
discovers a "hot" edge, that is an edge that is more than 4x hotter than
the competing edges in the CFG. These are forcibly merged into
a fallthrough chain.
Future transformations that need te be added are rotation of loop exit
conditions to be fallthrough, and better isolation of cold block chains.
I'm also planning on adding statistics to model how well the algorithm
does at laying out blocks based on the probabilities it receives.
The old tests mostly still pass, and I have some new tests to add, but
the nested loops are still behaving very strangely. This almost seems
like working-as-intended as it rotated the exit branch to be
fallthrough, but I'm not convinced this is actually the best layout. It
is well supported by the probabilities for loops we currently get, but
those are pretty broken for nested loops, so this may change later.
llvm-svn: 142743
2011-10-23 17:18:45 +08:00
|
|
|
|
2019-01-09 13:11:10 +08:00
|
|
|
/// Pair struct containing basic block and taildup profitability
|
2017-02-01 07:48:32 +08:00
|
|
|
struct BlockAndTailDupResult {
|
Codegen: Make chains from trellis-shaped CFGs
Lay out trellis-shaped CFGs optimally.
A trellis of the shape below:
A B
|\ /|
| \ / |
| X |
| / \ |
|/ \|
C D
would be laid out A; B->C ; D by the current layout algorithm. Now we identify
trellises and lay them out either A->C; B->D or A->D; B->C. This scales with an
increasing number of predecessors. A trellis is a a group of 2 or more
predecessor blocks that all have the same successors.
because of this we can tail duplicate to extend existing trellises.
As an example consider the following CFG:
B D F H
/ \ / \ / \ / \
A---C---E---G---Ret
Where A,C,E,G are all small (Currently 2 instructions).
The CFG preserving layout is then A,B,C,D,E,F,G,H,Ret.
The current code will copy C into B, E into D and G into F and yield the layout
A,C,B(C),E,D(E),F(G),G,H,ret
define void @straight_test(i32 %tag) {
entry:
br label %test1
test1: ; A
%tagbit1 = and i32 %tag, 1
%tagbit1eq0 = icmp eq i32 %tagbit1, 0
br i1 %tagbit1eq0, label %test2, label %optional1
optional1: ; B
call void @a()
br label %test2
test2: ; C
%tagbit2 = and i32 %tag, 2
%tagbit2eq0 = icmp eq i32 %tagbit2, 0
br i1 %tagbit2eq0, label %test3, label %optional2
optional2: ; D
call void @b()
br label %test3
test3: ; E
%tagbit3 = and i32 %tag, 4
%tagbit3eq0 = icmp eq i32 %tagbit3, 0
br i1 %tagbit3eq0, label %test4, label %optional3
optional3: ; F
call void @c()
br label %test4
test4: ; G
%tagbit4 = and i32 %tag, 8
%tagbit4eq0 = icmp eq i32 %tagbit4, 0
br i1 %tagbit4eq0, label %exit, label %optional4
optional4: ; H
call void @d()
br label %exit
exit:
ret void
}
here is the layout after D27742:
straight_test: # @straight_test
; ... Prologue elided
; BB#0: # %entry ; A (merged with test1)
; ... More prologue elided
mr 30, 3
andi. 3, 30, 1
bc 12, 1, .LBB0_2
; BB#1: # %test2 ; C
rlwinm. 3, 30, 0, 30, 30
beq 0, .LBB0_3
b .LBB0_4
.LBB0_2: # %optional1 ; B (copy of C)
bl a
nop
rlwinm. 3, 30, 0, 30, 30
bne 0, .LBB0_4
.LBB0_3: # %test3 ; E
rlwinm. 3, 30, 0, 29, 29
beq 0, .LBB0_5
b .LBB0_6
.LBB0_4: # %optional2 ; D (copy of E)
bl b
nop
rlwinm. 3, 30, 0, 29, 29
bne 0, .LBB0_6
.LBB0_5: # %test4 ; G
rlwinm. 3, 30, 0, 28, 28
beq 0, .LBB0_8
b .LBB0_7
.LBB0_6: # %optional3 ; F (copy of G)
bl c
nop
rlwinm. 3, 30, 0, 28, 28
beq 0, .LBB0_8
.LBB0_7: # %optional4 ; H
bl d
nop
.LBB0_8: # %exit ; Ret
ld 30, 96(1) # 8-byte Folded Reload
addi 1, 1, 112
ld 0, 16(1)
mtlr 0
blr
The tail-duplication has produced some benefit, but it has also produced a
trellis which is not laid out optimally. With this patch, we improve the layouts
of such trellises, and decrease the cost calculation for tail-duplication
accordingly.
This patch produces the layout A,C,E,G,B,D,F,H,Ret. This layout does have
back edges, which is a negative, but it has a bigger compensating
positive, which is that it handles the case where there are long strings
of skipped blocks much better than the original layout. Both layouts
handle runs of executed blocks equally well. Branch prediction also
improves if there is any correlation between subsequent optional blocks.
Here is the resulting concrete layout:
straight_test: # @straight_test
; BB#0: # %entry ; A (merged with test1)
mr 30, 3
andi. 3, 30, 1
bc 12, 1, .LBB0_4
; BB#1: # %test2 ; C
rlwinm. 3, 30, 0, 30, 30
bne 0, .LBB0_5
.LBB0_2: # %test3 ; E
rlwinm. 3, 30, 0, 29, 29
bne 0, .LBB0_6
.LBB0_3: # %test4 ; G
rlwinm. 3, 30, 0, 28, 28
bne 0, .LBB0_7
b .LBB0_8
.LBB0_4: # %optional1 ; B (Copy of C)
bl a
nop
rlwinm. 3, 30, 0, 30, 30
beq 0, .LBB0_2
.LBB0_5: # %optional2 ; D (Copy of E)
bl b
nop
rlwinm. 3, 30, 0, 29, 29
beq 0, .LBB0_3
.LBB0_6: # %optional3 ; F (Copy of G)
bl c
nop
rlwinm. 3, 30, 0, 28, 28
beq 0, .LBB0_8
.LBB0_7: # %optional4 ; H
bl d
nop
.LBB0_8: # %exit
Differential Revision: https://reviews.llvm.org/D28522
llvm-svn: 295223
2017-02-16 03:49:14 +08:00
|
|
|
MachineBasicBlock *BB;
|
2017-02-01 07:48:32 +08:00
|
|
|
bool ShouldTailDup;
|
|
|
|
};
|
|
|
|
|
Codegen: Make chains from trellis-shaped CFGs
Lay out trellis-shaped CFGs optimally.
A trellis of the shape below:
A B
|\ /|
| \ / |
| X |
| / \ |
|/ \|
C D
would be laid out A; B->C ; D by the current layout algorithm. Now we identify
trellises and lay them out either A->C; B->D or A->D; B->C. This scales with an
increasing number of predecessors. A trellis is a a group of 2 or more
predecessor blocks that all have the same successors.
because of this we can tail duplicate to extend existing trellises.
As an example consider the following CFG:
B D F H
/ \ / \ / \ / \
A---C---E---G---Ret
Where A,C,E,G are all small (Currently 2 instructions).
The CFG preserving layout is then A,B,C,D,E,F,G,H,Ret.
The current code will copy C into B, E into D and G into F and yield the layout
A,C,B(C),E,D(E),F(G),G,H,ret
define void @straight_test(i32 %tag) {
entry:
br label %test1
test1: ; A
%tagbit1 = and i32 %tag, 1
%tagbit1eq0 = icmp eq i32 %tagbit1, 0
br i1 %tagbit1eq0, label %test2, label %optional1
optional1: ; B
call void @a()
br label %test2
test2: ; C
%tagbit2 = and i32 %tag, 2
%tagbit2eq0 = icmp eq i32 %tagbit2, 0
br i1 %tagbit2eq0, label %test3, label %optional2
optional2: ; D
call void @b()
br label %test3
test3: ; E
%tagbit3 = and i32 %tag, 4
%tagbit3eq0 = icmp eq i32 %tagbit3, 0
br i1 %tagbit3eq0, label %test4, label %optional3
optional3: ; F
call void @c()
br label %test4
test4: ; G
%tagbit4 = and i32 %tag, 8
%tagbit4eq0 = icmp eq i32 %tagbit4, 0
br i1 %tagbit4eq0, label %exit, label %optional4
optional4: ; H
call void @d()
br label %exit
exit:
ret void
}
here is the layout after D27742:
straight_test: # @straight_test
; ... Prologue elided
; BB#0: # %entry ; A (merged with test1)
; ... More prologue elided
mr 30, 3
andi. 3, 30, 1
bc 12, 1, .LBB0_2
; BB#1: # %test2 ; C
rlwinm. 3, 30, 0, 30, 30
beq 0, .LBB0_3
b .LBB0_4
.LBB0_2: # %optional1 ; B (copy of C)
bl a
nop
rlwinm. 3, 30, 0, 30, 30
bne 0, .LBB0_4
.LBB0_3: # %test3 ; E
rlwinm. 3, 30, 0, 29, 29
beq 0, .LBB0_5
b .LBB0_6
.LBB0_4: # %optional2 ; D (copy of E)
bl b
nop
rlwinm. 3, 30, 0, 29, 29
bne 0, .LBB0_6
.LBB0_5: # %test4 ; G
rlwinm. 3, 30, 0, 28, 28
beq 0, .LBB0_8
b .LBB0_7
.LBB0_6: # %optional3 ; F (copy of G)
bl c
nop
rlwinm. 3, 30, 0, 28, 28
beq 0, .LBB0_8
.LBB0_7: # %optional4 ; H
bl d
nop
.LBB0_8: # %exit ; Ret
ld 30, 96(1) # 8-byte Folded Reload
addi 1, 1, 112
ld 0, 16(1)
mtlr 0
blr
The tail-duplication has produced some benefit, but it has also produced a
trellis which is not laid out optimally. With this patch, we improve the layouts
of such trellises, and decrease the cost calculation for tail-duplication
accordingly.
This patch produces the layout A,C,E,G,B,D,F,H,Ret. This layout does have
back edges, which is a negative, but it has a bigger compensating
positive, which is that it handles the case where there are long strings
of skipped blocks much better than the original layout. Both layouts
handle runs of executed blocks equally well. Branch prediction also
improves if there is any correlation between subsequent optional blocks.
Here is the resulting concrete layout:
straight_test: # @straight_test
; BB#0: # %entry ; A (merged with test1)
mr 30, 3
andi. 3, 30, 1
bc 12, 1, .LBB0_4
; BB#1: # %test2 ; C
rlwinm. 3, 30, 0, 30, 30
bne 0, .LBB0_5
.LBB0_2: # %test3 ; E
rlwinm. 3, 30, 0, 29, 29
bne 0, .LBB0_6
.LBB0_3: # %test4 ; G
rlwinm. 3, 30, 0, 28, 28
bne 0, .LBB0_7
b .LBB0_8
.LBB0_4: # %optional1 ; B (Copy of C)
bl a
nop
rlwinm. 3, 30, 0, 30, 30
beq 0, .LBB0_2
.LBB0_5: # %optional2 ; D (Copy of E)
bl b
nop
rlwinm. 3, 30, 0, 29, 29
beq 0, .LBB0_3
.LBB0_6: # %optional3 ; F (Copy of G)
bl c
nop
rlwinm. 3, 30, 0, 28, 28
beq 0, .LBB0_8
.LBB0_7: # %optional4 ; H
bl d
nop
.LBB0_8: # %exit
Differential Revision: https://reviews.llvm.org/D28522
llvm-svn: 295223
2017-02-16 03:49:14 +08:00
|
|
|
/// Triple struct containing edge weight and the edge.
|
|
|
|
struct WeightedEdge {
|
|
|
|
BlockFrequency Weight;
|
|
|
|
MachineBasicBlock *Src;
|
|
|
|
MachineBasicBlock *Dest;
|
|
|
|
};
|
|
|
|
|
2018-05-01 23:54:18 +08:00
|
|
|
/// work lists of blocks that are ready to be laid out
|
2016-07-01 13:46:48 +08:00
|
|
|
SmallVector<MachineBasicBlock *, 16> BlockWorkList;
|
|
|
|
SmallVector<MachineBasicBlock *, 16> EHPadWorkList;
|
|
|
|
|
2017-02-24 05:22:24 +08:00
|
|
|
/// Edges that have already been computed as optimal.
|
|
|
|
DenseMap<const MachineBasicBlock *, BlockAndTailDupResult> ComputedEdges;
|
Codegen: Make chains from trellis-shaped CFGs
Lay out trellis-shaped CFGs optimally.
A trellis of the shape below:
A B
|\ /|
| \ / |
| X |
| / \ |
|/ \|
C D
would be laid out A; B->C ; D by the current layout algorithm. Now we identify
trellises and lay them out either A->C; B->D or A->D; B->C. This scales with an
increasing number of predecessors. A trellis is a a group of 2 or more
predecessor blocks that all have the same successors.
because of this we can tail duplicate to extend existing trellises.
As an example consider the following CFG:
B D F H
/ \ / \ / \ / \
A---C---E---G---Ret
Where A,C,E,G are all small (Currently 2 instructions).
The CFG preserving layout is then A,B,C,D,E,F,G,H,Ret.
The current code will copy C into B, E into D and G into F and yield the layout
A,C,B(C),E,D(E),F(G),G,H,ret
define void @straight_test(i32 %tag) {
entry:
br label %test1
test1: ; A
%tagbit1 = and i32 %tag, 1
%tagbit1eq0 = icmp eq i32 %tagbit1, 0
br i1 %tagbit1eq0, label %test2, label %optional1
optional1: ; B
call void @a()
br label %test2
test2: ; C
%tagbit2 = and i32 %tag, 2
%tagbit2eq0 = icmp eq i32 %tagbit2, 0
br i1 %tagbit2eq0, label %test3, label %optional2
optional2: ; D
call void @b()
br label %test3
test3: ; E
%tagbit3 = and i32 %tag, 4
%tagbit3eq0 = icmp eq i32 %tagbit3, 0
br i1 %tagbit3eq0, label %test4, label %optional3
optional3: ; F
call void @c()
br label %test4
test4: ; G
%tagbit4 = and i32 %tag, 8
%tagbit4eq0 = icmp eq i32 %tagbit4, 0
br i1 %tagbit4eq0, label %exit, label %optional4
optional4: ; H
call void @d()
br label %exit
exit:
ret void
}
here is the layout after D27742:
straight_test: # @straight_test
; ... Prologue elided
; BB#0: # %entry ; A (merged with test1)
; ... More prologue elided
mr 30, 3
andi. 3, 30, 1
bc 12, 1, .LBB0_2
; BB#1: # %test2 ; C
rlwinm. 3, 30, 0, 30, 30
beq 0, .LBB0_3
b .LBB0_4
.LBB0_2: # %optional1 ; B (copy of C)
bl a
nop
rlwinm. 3, 30, 0, 30, 30
bne 0, .LBB0_4
.LBB0_3: # %test3 ; E
rlwinm. 3, 30, 0, 29, 29
beq 0, .LBB0_5
b .LBB0_6
.LBB0_4: # %optional2 ; D (copy of E)
bl b
nop
rlwinm. 3, 30, 0, 29, 29
bne 0, .LBB0_6
.LBB0_5: # %test4 ; G
rlwinm. 3, 30, 0, 28, 28
beq 0, .LBB0_8
b .LBB0_7
.LBB0_6: # %optional3 ; F (copy of G)
bl c
nop
rlwinm. 3, 30, 0, 28, 28
beq 0, .LBB0_8
.LBB0_7: # %optional4 ; H
bl d
nop
.LBB0_8: # %exit ; Ret
ld 30, 96(1) # 8-byte Folded Reload
addi 1, 1, 112
ld 0, 16(1)
mtlr 0
blr
The tail-duplication has produced some benefit, but it has also produced a
trellis which is not laid out optimally. With this patch, we improve the layouts
of such trellises, and decrease the cost calculation for tail-duplication
accordingly.
This patch produces the layout A,C,E,G,B,D,F,H,Ret. This layout does have
back edges, which is a negative, but it has a bigger compensating
positive, which is that it handles the case where there are long strings
of skipped blocks much better than the original layout. Both layouts
handle runs of executed blocks equally well. Branch prediction also
improves if there is any correlation between subsequent optional blocks.
Here is the resulting concrete layout:
straight_test: # @straight_test
; BB#0: # %entry ; A (merged with test1)
mr 30, 3
andi. 3, 30, 1
bc 12, 1, .LBB0_4
; BB#1: # %test2 ; C
rlwinm. 3, 30, 0, 30, 30
bne 0, .LBB0_5
.LBB0_2: # %test3 ; E
rlwinm. 3, 30, 0, 29, 29
bne 0, .LBB0_6
.LBB0_3: # %test4 ; G
rlwinm. 3, 30, 0, 28, 28
bne 0, .LBB0_7
b .LBB0_8
.LBB0_4: # %optional1 ; B (Copy of C)
bl a
nop
rlwinm. 3, 30, 0, 30, 30
beq 0, .LBB0_2
.LBB0_5: # %optional2 ; D (Copy of E)
bl b
nop
rlwinm. 3, 30, 0, 29, 29
beq 0, .LBB0_3
.LBB0_6: # %optional3 ; F (Copy of G)
bl c
nop
rlwinm. 3, 30, 0, 28, 28
beq 0, .LBB0_8
.LBB0_7: # %optional4 ; H
bl d
nop
.LBB0_8: # %exit
Differential Revision: https://reviews.llvm.org/D28522
llvm-svn: 295223
2017-02-16 03:49:14 +08:00
|
|
|
|
2018-05-01 23:54:18 +08:00
|
|
|
/// Machine Function
|
2016-06-14 06:23:44 +08:00
|
|
|
MachineFunction *F;
|
|
|
|
|
2018-05-01 23:54:18 +08:00
|
|
|
/// A handle to the branch probability pass.
|
Implement a block placement pass based on the branch probability and
block frequency analyses. This differs substantially from the existing
block-placement pass in LLVM:
1) It operates on the Machine-IR in the CodeGen layer. This exposes much
more (and more precise) information and opportunities. Also, the
results are more stable due to fewer transforms ocurring after the
pass runs.
2) It uses the generalized probability and frequency analyses. These can
model static heuristics, code annotation derived heuristics as well
as eventual profile loading. By basing the optimization on the
analysis interface it can work from any (or a combination) of these
inputs.
3) It uses a more aggressive algorithm, both building chains from tho
bottom up to maximize benefit, and using an SCC-based walk to layout
chains of blocks in a profitable ordering without O(N^2) iterations
which the old pass involves.
The pass is currently gated behind a flag, and not enabled by default
because it still needs to grow some important features. Most notably, it
needs to support loop aligning and careful layout of loop structures
much as done by hand currently in CodePlacementOpt. Once it supports
these, and has sufficient testing and quality tuning, it should replace
both of these passes.
Thanks to Nick Lewycky and Richard Smith for help authoring & debugging
this, and to Jakob, Andy, Eric, Jim, and probably a few others I'm
forgetting for reviewing and answering all my questions. Writing
a backend pass is *sooo* much better now than it used to be. =D
llvm-svn: 142641
2011-10-21 14:46:38 +08:00
|
|
|
const MachineBranchProbabilityInfo *MBPI;
|
|
|
|
|
2018-05-01 23:54:18 +08:00
|
|
|
/// A handle to the function-wide block frequency pass.
|
2020-01-28 02:05:54 +08:00
|
|
|
std::unique_ptr<MBFIWrapper> MBFI;
|
Implement a block placement pass based on the branch probability and
block frequency analyses. This differs substantially from the existing
block-placement pass in LLVM:
1) It operates on the Machine-IR in the CodeGen layer. This exposes much
more (and more precise) information and opportunities. Also, the
results are more stable due to fewer transforms ocurring after the
pass runs.
2) It uses the generalized probability and frequency analyses. These can
model static heuristics, code annotation derived heuristics as well
as eventual profile loading. By basing the optimization on the
analysis interface it can work from any (or a combination) of these
inputs.
3) It uses a more aggressive algorithm, both building chains from tho
bottom up to maximize benefit, and using an SCC-based walk to layout
chains of blocks in a profitable ordering without O(N^2) iterations
which the old pass involves.
The pass is currently gated behind a flag, and not enabled by default
because it still needs to grow some important features. Most notably, it
needs to support loop aligning and careful layout of loop structures
much as done by hand currently in CodePlacementOpt. Once it supports
these, and has sufficient testing and quality tuning, it should replace
both of these passes.
Thanks to Nick Lewycky and Richard Smith for help authoring & debugging
this, and to Jakob, Andy, Eric, Jim, and probably a few others I'm
forgetting for reviewing and answering all my questions. Writing
a backend pass is *sooo* much better now than it used to be. =D
llvm-svn: 142641
2011-10-21 14:46:38 +08:00
|
|
|
|
2018-05-01 23:54:18 +08:00
|
|
|
/// A handle to the loop info.
|
2016-06-09 23:24:29 +08:00
|
|
|
MachineLoopInfo *MLI;
|
2011-10-21 16:57:37 +08:00
|
|
|
|
2018-05-01 23:54:18 +08:00
|
|
|
/// Preferred loop exit.
|
2016-10-28 05:37:20 +08:00
|
|
|
/// Member variable for convenience. It may be removed by duplication deep
|
|
|
|
/// in the call stack.
|
|
|
|
MachineBasicBlock *PreferredLoopExit;
|
|
|
|
|
2018-05-01 23:54:18 +08:00
|
|
|
/// A handle to the target's instruction info.
|
Implement a block placement pass based on the branch probability and
block frequency analyses. This differs substantially from the existing
block-placement pass in LLVM:
1) It operates on the Machine-IR in the CodeGen layer. This exposes much
more (and more precise) information and opportunities. Also, the
results are more stable due to fewer transforms ocurring after the
pass runs.
2) It uses the generalized probability and frequency analyses. These can
model static heuristics, code annotation derived heuristics as well
as eventual profile loading. By basing the optimization on the
analysis interface it can work from any (or a combination) of these
inputs.
3) It uses a more aggressive algorithm, both building chains from tho
bottom up to maximize benefit, and using an SCC-based walk to layout
chains of blocks in a profitable ordering without O(N^2) iterations
which the old pass involves.
The pass is currently gated behind a flag, and not enabled by default
because it still needs to grow some important features. Most notably, it
needs to support loop aligning and careful layout of loop structures
much as done by hand currently in CodePlacementOpt. Once it supports
these, and has sufficient testing and quality tuning, it should replace
both of these passes.
Thanks to Nick Lewycky and Richard Smith for help authoring & debugging
this, and to Jakob, Andy, Eric, Jim, and probably a few others I'm
forgetting for reviewing and answering all my questions. Writing
a backend pass is *sooo* much better now than it used to be. =D
llvm-svn: 142641
2011-10-21 14:46:38 +08:00
|
|
|
const TargetInstrInfo *TII;
|
|
|
|
|
2018-05-01 23:54:18 +08:00
|
|
|
/// A handle to the target's lowering info.
|
2013-01-12 04:05:37 +08:00
|
|
|
const TargetLoweringBase *TLI;
|
2011-10-21 16:57:37 +08:00
|
|
|
|
2018-05-01 23:54:18 +08:00
|
|
|
/// A handle to the post dominator tree.
|
2017-02-01 07:48:32 +08:00
|
|
|
MachinePostDominatorTree *MPDT;
|
|
|
|
|
2019-12-06 01:39:37 +08:00
|
|
|
ProfileSummaryInfo *PSI;
|
|
|
|
|
2018-05-01 23:54:18 +08:00
|
|
|
/// Duplicator used to duplicate tails during placement.
|
Codegen: Tail-duplicate during placement.
The tail duplication pass uses an assumed layout when making duplication
decisions. This is fine, but passes up duplication opportunities that
may arise when blocks are outlined. Because we want the updated CFG to
affect subsequent placement decisions, this change must occur during
placement.
In order to achieve this goal, TailDuplicationPass is split into a
utility class, TailDuplicator, and the pass itself. The pass delegates
nearly everything to the TailDuplicator object, except for looping over
the blocks in a function. This allows the same code to be used for tail
duplication in both places.
This change, in concert with outlining optional branches, allows
triangle shaped code to perform much better, esepecially when the
taken/untaken branches are correlated, as it creates a second spine when
the tests are small enough.
Issue from previous rollback fixed, and a new test was added for that
case as well. Issue was worklist/scheduling/taildup issue in layout.
Issue from 2nd rollback fixed, with 2 additional tests. Issue was
tail merging/loop info/tail-duplication causing issue with loops that share
a header block.
Issue with early tail-duplication of blocks that branch to a fallthrough
predecessor fixed with test case: tail-dup-branch-to-fallthrough.ll
Differential revision: https://reviews.llvm.org/D18226
llvm-svn: 283934
2016-10-12 04:36:43 +08:00
|
|
|
///
|
|
|
|
/// Placement decisions can open up new tail duplication opportunities, but
|
|
|
|
/// since tail duplication affects placement decisions of later blocks, it
|
|
|
|
/// must be done inline.
|
|
|
|
TailDuplicator TailDup;
|
|
|
|
|
2020-02-13 07:22:33 +08:00
|
|
|
/// Partial tail duplication threshold.
|
|
|
|
BlockFrequency DupThreshold;
|
|
|
|
|
2020-07-22 02:18:06 +08:00
|
|
|
/// True: use block profile count to compute tail duplication cost.
|
|
|
|
/// False: use block frequency to compute tail duplication cost.
|
|
|
|
bool UseProfileCount;
|
|
|
|
|
2018-05-01 23:54:18 +08:00
|
|
|
/// Allocator and owner of BlockChain structures.
|
Implement a block placement pass based on the branch probability and
block frequency analyses. This differs substantially from the existing
block-placement pass in LLVM:
1) It operates on the Machine-IR in the CodeGen layer. This exposes much
more (and more precise) information and opportunities. Also, the
results are more stable due to fewer transforms ocurring after the
pass runs.
2) It uses the generalized probability and frequency analyses. These can
model static heuristics, code annotation derived heuristics as well
as eventual profile loading. By basing the optimization on the
analysis interface it can work from any (or a combination) of these
inputs.
3) It uses a more aggressive algorithm, both building chains from tho
bottom up to maximize benefit, and using an SCC-based walk to layout
chains of blocks in a profitable ordering without O(N^2) iterations
which the old pass involves.
The pass is currently gated behind a flag, and not enabled by default
because it still needs to grow some important features. Most notably, it
needs to support loop aligning and careful layout of loop structures
much as done by hand currently in CodePlacementOpt. Once it supports
these, and has sufficient testing and quality tuning, it should replace
both of these passes.
Thanks to Nick Lewycky and Richard Smith for help authoring & debugging
this, and to Jakob, Andy, Eric, Jim, and probably a few others I'm
forgetting for reviewing and answering all my questions. Writing
a backend pass is *sooo* much better now than it used to be. =D
llvm-svn: 142641
2011-10-21 14:46:38 +08:00
|
|
|
///
|
2012-06-26 13:16:37 +08:00
|
|
|
/// We build BlockChains lazily while processing the loop structure of
|
|
|
|
/// a function. To reduce malloc traffic, we allocate them using this
|
|
|
|
/// slab-like allocator, and destroy them after the pass completes. An
|
|
|
|
/// important guarantee is that this allocator produces stable pointers to
|
|
|
|
/// the chains.
|
Implement a block placement pass based on the branch probability and
block frequency analyses. This differs substantially from the existing
block-placement pass in LLVM:
1) It operates on the Machine-IR in the CodeGen layer. This exposes much
more (and more precise) information and opportunities. Also, the
results are more stable due to fewer transforms ocurring after the
pass runs.
2) It uses the generalized probability and frequency analyses. These can
model static heuristics, code annotation derived heuristics as well
as eventual profile loading. By basing the optimization on the
analysis interface it can work from any (or a combination) of these
inputs.
3) It uses a more aggressive algorithm, both building chains from tho
bottom up to maximize benefit, and using an SCC-based walk to layout
chains of blocks in a profitable ordering without O(N^2) iterations
which the old pass involves.
The pass is currently gated behind a flag, and not enabled by default
because it still needs to grow some important features. Most notably, it
needs to support loop aligning and careful layout of loop structures
much as done by hand currently in CodePlacementOpt. Once it supports
these, and has sufficient testing and quality tuning, it should replace
both of these passes.
Thanks to Nick Lewycky and Richard Smith for help authoring & debugging
this, and to Jakob, Andy, Eric, Jim, and probably a few others I'm
forgetting for reviewing and answering all my questions. Writing
a backend pass is *sooo* much better now than it used to be. =D
llvm-svn: 142641
2011-10-21 14:46:38 +08:00
|
|
|
SpecificBumpPtrAllocator<BlockChain> ChainAllocator;
|
|
|
|
|
2018-05-01 23:54:18 +08:00
|
|
|
/// Function wide BasicBlock to BlockChain mapping.
|
Implement a block placement pass based on the branch probability and
block frequency analyses. This differs substantially from the existing
block-placement pass in LLVM:
1) It operates on the Machine-IR in the CodeGen layer. This exposes much
more (and more precise) information and opportunities. Also, the
results are more stable due to fewer transforms ocurring after the
pass runs.
2) It uses the generalized probability and frequency analyses. These can
model static heuristics, code annotation derived heuristics as well
as eventual profile loading. By basing the optimization on the
analysis interface it can work from any (or a combination) of these
inputs.
3) It uses a more aggressive algorithm, both building chains from tho
bottom up to maximize benefit, and using an SCC-based walk to layout
chains of blocks in a profitable ordering without O(N^2) iterations
which the old pass involves.
The pass is currently gated behind a flag, and not enabled by default
because it still needs to grow some important features. Most notably, it
needs to support loop aligning and careful layout of loop structures
much as done by hand currently in CodePlacementOpt. Once it supports
these, and has sufficient testing and quality tuning, it should replace
both of these passes.
Thanks to Nick Lewycky and Richard Smith for help authoring & debugging
this, and to Jakob, Andy, Eric, Jim, and probably a few others I'm
forgetting for reviewing and answering all my questions. Writing
a backend pass is *sooo* much better now than it used to be. =D
llvm-svn: 142641
2011-10-21 14:46:38 +08:00
|
|
|
///
|
|
|
|
/// This mapping allows efficiently moving from any given basic block to the
|
|
|
|
/// BlockChain it participates in, if any. We use it to, among other things,
|
|
|
|
/// allow implicitly defining edges between chains as the existing edges
|
|
|
|
/// between basic blocks.
|
2017-02-04 10:26:32 +08:00
|
|
|
DenseMap<const MachineBasicBlock *, BlockChain *> BlockToChain;
|
Implement a block placement pass based on the branch probability and
block frequency analyses. This differs substantially from the existing
block-placement pass in LLVM:
1) It operates on the Machine-IR in the CodeGen layer. This exposes much
more (and more precise) information and opportunities. Also, the
results are more stable due to fewer transforms ocurring after the
pass runs.
2) It uses the generalized probability and frequency analyses. These can
model static heuristics, code annotation derived heuristics as well
as eventual profile loading. By basing the optimization on the
analysis interface it can work from any (or a combination) of these
inputs.
3) It uses a more aggressive algorithm, both building chains from tho
bottom up to maximize benefit, and using an SCC-based walk to layout
chains of blocks in a profitable ordering without O(N^2) iterations
which the old pass involves.
The pass is currently gated behind a flag, and not enabled by default
because it still needs to grow some important features. Most notably, it
needs to support loop aligning and careful layout of loop structures
much as done by hand currently in CodePlacementOpt. Once it supports
these, and has sufficient testing and quality tuning, it should replace
both of these passes.
Thanks to Nick Lewycky and Richard Smith for help authoring & debugging
this, and to Jakob, Andy, Eric, Jim, and probably a few others I'm
forgetting for reviewing and answering all my questions. Writing
a backend pass is *sooo* much better now than it used to be. =D
llvm-svn: 142641
2011-10-21 14:46:38 +08:00
|
|
|
|
[MachineBlockPlacement] Don't make blocks "uneditable"
Summary:
This fixes an issue with MachineBlockPlacement due to a badly timed call
to `analyzeBranch` with `AllowModify` set to true. The timeline is as
follows:
1. `MachineBlockPlacement::maybeTailDuplicateBlock` calls
`TailDup.shouldTailDuplicate` on its argument, which in turn calls
`analyzeBranch` with `AllowModify` set to true.
2. This `analyzeBranch` call edits the terminator sequence of the block
based on the physical layout of the machine function, turning an
unanalyzable non-fallthrough block to a unanalyzable fallthrough
block. Normally MBP bails out of rearranging such blocks, but this
block was unanalyzable non-fallthrough (and thus rearrangeable) the
first time MBP looked at it, and so it goes ahead and decides where
it should be placed in the function.
3. When placing this block MBP fails to analyze and thus update the
block in keeping with the new physical layout.
Concretely, before (1) we have something like:
```
LBL0:
< unknown terminator op that may branch to LBL1 >
jmp LBL1
LBL1:
... A
LBL2:
... B
```
In (2), analyze branch simplifies this to
```
LBL0:
< unknown terminator op that may branch to LBL2 >
;; jmp LBL1 <- redundant jump removed
LBL1:
... A
LBL2:
... B
```
In (3), MachineBlockPlacement goes ahead with its plan of putting LBL2
after the first block since that is profitable.
```
LBL0:
< unknown terminator op that may branch to LBL2 >
;; jmp LBL1 <- redundant jump
LBL2:
... B
LBL1:
... A
```
and the program now has incorrect behavior (we no longer fall-through
from `LBL0` to `LBL1`) because MBP can no longer edit LBL0.
There are several possible solutions, but I went with removing the teeth
off of the `analyzeBranch` calls in TailDuplicator. That makes thinking
about the result of these calls easier, and breaks nothing in the lit
test suite.
I've also added some bookkeeping to the MachineBlockPlacement pass and
used that to write an assert that would have caught this.
Reviewers: chandlerc, gberry, MatzeB, iteratee
Subscribers: mcrosier, llvm-commits
Differential Revision: https://reviews.llvm.org/D27783
llvm-svn: 289764
2016-12-15 13:08:57 +08:00
|
|
|
#ifndef NDEBUG
|
|
|
|
/// The set of basic blocks that have terminators that cannot be fully
|
|
|
|
/// analyzed. These basic blocks cannot be re-ordered safely by
|
|
|
|
/// MachineBlockPlacement, and we must preserve physical layout of these
|
|
|
|
/// blocks and their successors through the pass.
|
|
|
|
SmallPtrSet<MachineBasicBlock *, 4> BlocksWithUnanalyzableExits;
|
|
|
|
#endif
|
|
|
|
|
2020-07-22 02:18:06 +08:00
|
|
|
/// Get block profile count or frequency according to UseProfileCount.
|
|
|
|
/// The return value is used to model tail duplication cost.
|
|
|
|
BlockFrequency getBlockCountOrFrequency(const MachineBasicBlock *BB) {
|
|
|
|
if (UseProfileCount) {
|
|
|
|
auto Count = MBFI->getMBFI().getBlockProfileCount(BB);
|
|
|
|
if (Count)
|
|
|
|
return *Count;
|
|
|
|
else
|
|
|
|
return 0;
|
|
|
|
} else
|
|
|
|
return MBFI->getBlockFreq(BB);
|
|
|
|
}
|
|
|
|
|
2020-02-13 07:22:33 +08:00
|
|
|
/// Scale the DupThreshold according to basic block size.
|
|
|
|
BlockFrequency scaleThreshold(MachineBasicBlock *BB);
|
|
|
|
void initDupThreshold();
|
|
|
|
|
Codegen: Tail-duplicate during placement.
The tail duplication pass uses an assumed layout when making duplication
decisions. This is fine, but passes up duplication opportunities that
may arise when blocks are outlined. Because we want the updated CFG to
affect subsequent placement decisions, this change must occur during
placement.
In order to achieve this goal, TailDuplicationPass is split into a
utility class, TailDuplicator, and the pass itself. The pass delegates
nearly everything to the TailDuplicator object, except for looping over
the blocks in a function. This allows the same code to be used for tail
duplication in both places.
This change, in concert with outlining optional branches, allows
triangle shaped code to perform much better, esepecially when the
taken/untaken branches are correlated, as it creates a second spine when
the tests are small enough.
Issue from previous rollback fixed, and a new test was added for that
case as well. Issue was worklist/scheduling/taildup issue in layout.
Issue from 2nd rollback fixed, with 2 additional tests. Issue was
tail merging/loop info/tail-duplication causing issue with loops that share
a header block.
Issue with early tail-duplication of blocks that branch to a fallthrough
predecessor fixed with test case: tail-dup-branch-to-fallthrough.ll
Differential revision: https://reviews.llvm.org/D18226
llvm-svn: 283934
2016-10-12 04:36:43 +08:00
|
|
|
/// Decrease the UnscheduledPredecessors count for all blocks in chain, and
|
|
|
|
/// if the count goes to 0, add them to the appropriate work list.
|
2017-02-04 10:26:32 +08:00
|
|
|
void markChainSuccessors(
|
|
|
|
const BlockChain &Chain, const MachineBasicBlock *LoopHeaderBB,
|
|
|
|
const BlockFilterSet *BlockFilter = nullptr);
|
Codegen: Tail-duplicate during placement.
The tail duplication pass uses an assumed layout when making duplication
decisions. This is fine, but passes up duplication opportunities that
may arise when blocks are outlined. Because we want the updated CFG to
affect subsequent placement decisions, this change must occur during
placement.
In order to achieve this goal, TailDuplicationPass is split into a
utility class, TailDuplicator, and the pass itself. The pass delegates
nearly everything to the TailDuplicator object, except for looping over
the blocks in a function. This allows the same code to be used for tail
duplication in both places.
This change, in concert with outlining optional branches, allows
triangle shaped code to perform much better, esepecially when the
taken/untaken branches are correlated, as it creates a second spine when
the tests are small enough.
Issue from previous rollback fixed, and a new test was added for that
case as well. Issue was worklist/scheduling/taildup issue in layout.
Issue from 2nd rollback fixed, with 2 additional tests. Issue was
tail merging/loop info/tail-duplication causing issue with loops that share
a header block.
Issue with early tail-duplication of blocks that branch to a fallthrough
predecessor fixed with test case: tail-dup-branch-to-fallthrough.ll
Differential revision: https://reviews.llvm.org/D18226
llvm-svn: 283934
2016-10-12 04:36:43 +08:00
|
|
|
|
|
|
|
/// Decrease the UnscheduledPredecessors count for a single block, and
|
|
|
|
/// if the count goes to 0, add them to the appropriate work list.
|
|
|
|
void markBlockSuccessors(
|
2017-02-04 10:26:32 +08:00
|
|
|
const BlockChain &Chain, const MachineBasicBlock *BB,
|
|
|
|
const MachineBasicBlock *LoopHeaderBB,
|
Codegen: Tail-duplicate during placement.
The tail duplication pass uses an assumed layout when making duplication
decisions. This is fine, but passes up duplication opportunities that
may arise when blocks are outlined. Because we want the updated CFG to
affect subsequent placement decisions, this change must occur during
placement.
In order to achieve this goal, TailDuplicationPass is split into a
utility class, TailDuplicator, and the pass itself. The pass delegates
nearly everything to the TailDuplicator object, except for looping over
the blocks in a function. This allows the same code to be used for tail
duplication in both places.
This change, in concert with outlining optional branches, allows
triangle shaped code to perform much better, esepecially when the
taken/untaken branches are correlated, as it creates a second spine when
the tests are small enough.
Issue from previous rollback fixed, and a new test was added for that
case as well. Issue was worklist/scheduling/taildup issue in layout.
Issue from 2nd rollback fixed, with 2 additional tests. Issue was
tail merging/loop info/tail-duplication causing issue with loops that share
a header block.
Issue with early tail-duplication of blocks that branch to a fallthrough
predecessor fixed with test case: tail-dup-branch-to-fallthrough.ll
Differential revision: https://reviews.llvm.org/D18226
llvm-svn: 283934
2016-10-12 04:36:43 +08:00
|
|
|
const BlockFilterSet *BlockFilter = nullptr);
|
|
|
|
|
2016-06-12 02:35:40 +08:00
|
|
|
BranchProbability
|
2017-02-04 10:26:32 +08:00
|
|
|
collectViableSuccessors(
|
|
|
|
const MachineBasicBlock *BB, const BlockChain &Chain,
|
|
|
|
const BlockFilterSet *BlockFilter,
|
|
|
|
SmallVector<MachineBasicBlock *, 4> &Successors);
|
|
|
|
bool shouldPredBlockBeOutlined(
|
|
|
|
const MachineBasicBlock *BB, const MachineBasicBlock *Succ,
|
|
|
|
const BlockChain &Chain, const BlockFilterSet *BlockFilter,
|
|
|
|
BranchProbability SuccProb, BranchProbability HotProb);
|
2020-02-13 07:22:33 +08:00
|
|
|
bool isBestSuccessor(MachineBasicBlock *BB, MachineBasicBlock *Pred,
|
|
|
|
BlockFilterSet *BlockFilter);
|
|
|
|
void findDuplicateCandidates(SmallVectorImpl<MachineBasicBlock *> &Candidates,
|
|
|
|
MachineBasicBlock *BB,
|
|
|
|
BlockFilterSet *BlockFilter);
|
Codegen: Tail-duplicate during placement.
The tail duplication pass uses an assumed layout when making duplication
decisions. This is fine, but passes up duplication opportunities that
may arise when blocks are outlined. Because we want the updated CFG to
affect subsequent placement decisions, this change must occur during
placement.
In order to achieve this goal, TailDuplicationPass is split into a
utility class, TailDuplicator, and the pass itself. The pass delegates
nearly everything to the TailDuplicator object, except for looping over
the blocks in a function. This allows the same code to be used for tail
duplication in both places.
This change, in concert with outlining optional branches, allows
triangle shaped code to perform much better, esepecially when the
taken/untaken branches are correlated, as it creates a second spine when
the tests are small enough.
Issue from previous rollback fixed, and a new test was added for that
case as well. Issue was worklist/scheduling/taildup issue in layout.
Issue from 2nd rollback fixed, with 2 additional tests. Issue was
tail merging/loop info/tail-duplication causing issue with loops that share
a header block.
Issue with early tail-duplication of blocks that branch to a fallthrough
predecessor fixed with test case: tail-dup-branch-to-fallthrough.ll
Differential revision: https://reviews.llvm.org/D18226
llvm-svn: 283934
2016-10-12 04:36:43 +08:00
|
|
|
bool repeatedlyTailDuplicateBlock(
|
|
|
|
MachineBasicBlock *BB, MachineBasicBlock *&LPred,
|
2017-02-04 10:26:32 +08:00
|
|
|
const MachineBasicBlock *LoopHeaderBB,
|
Codegen: Tail-duplicate during placement.
The tail duplication pass uses an assumed layout when making duplication
decisions. This is fine, but passes up duplication opportunities that
may arise when blocks are outlined. Because we want the updated CFG to
affect subsequent placement decisions, this change must occur during
placement.
In order to achieve this goal, TailDuplicationPass is split into a
utility class, TailDuplicator, and the pass itself. The pass delegates
nearly everything to the TailDuplicator object, except for looping over
the blocks in a function. This allows the same code to be used for tail
duplication in both places.
This change, in concert with outlining optional branches, allows
triangle shaped code to perform much better, esepecially when the
taken/untaken branches are correlated, as it creates a second spine when
the tests are small enough.
Issue from previous rollback fixed, and a new test was added for that
case as well. Issue was worklist/scheduling/taildup issue in layout.
Issue from 2nd rollback fixed, with 2 additional tests. Issue was
tail merging/loop info/tail-duplication causing issue with loops that share
a header block.
Issue with early tail-duplication of blocks that branch to a fallthrough
predecessor fixed with test case: tail-dup-branch-to-fallthrough.ll
Differential revision: https://reviews.llvm.org/D18226
llvm-svn: 283934
2016-10-12 04:36:43 +08:00
|
|
|
BlockChain &Chain, BlockFilterSet *BlockFilter,
|
|
|
|
MachineFunction::iterator &PrevUnplacedBlockIt);
|
2017-02-04 10:26:32 +08:00
|
|
|
bool maybeTailDuplicateBlock(
|
|
|
|
MachineBasicBlock *BB, MachineBasicBlock *LPred,
|
|
|
|
BlockChain &Chain, BlockFilterSet *BlockFilter,
|
|
|
|
MachineFunction::iterator &PrevUnplacedBlockIt,
|
2018-07-17 02:51:40 +08:00
|
|
|
bool &DuplicatedToLPred);
|
2017-02-04 10:26:32 +08:00
|
|
|
bool hasBetterLayoutPredecessor(
|
|
|
|
const MachineBasicBlock *BB, const MachineBasicBlock *Succ,
|
|
|
|
const BlockChain &SuccChain, BranchProbability SuccProb,
|
|
|
|
BranchProbability RealSuccProb, const BlockChain &Chain,
|
|
|
|
const BlockFilterSet *BlockFilter);
|
|
|
|
BlockAndTailDupResult selectBestSuccessor(
|
|
|
|
const MachineBasicBlock *BB, const BlockChain &Chain,
|
|
|
|
const BlockFilterSet *BlockFilter);
|
|
|
|
MachineBasicBlock *selectBestCandidateBlock(
|
|
|
|
const BlockChain &Chain, SmallVectorImpl<MachineBasicBlock *> &WorkList);
|
|
|
|
MachineBasicBlock *getFirstUnplacedBlock(
|
|
|
|
const BlockChain &PlacedChain,
|
|
|
|
MachineFunction::iterator &PrevUnplacedBlockIt,
|
|
|
|
const BlockFilterSet *BlockFilter);
|
2016-03-15 05:24:11 +08:00
|
|
|
|
2018-05-01 23:54:18 +08:00
|
|
|
/// Add a basic block to the work list if it is appropriate.
|
2016-03-15 05:24:11 +08:00
|
|
|
///
|
|
|
|
/// If the optional parameter BlockFilter is provided, only MBB
|
|
|
|
/// present in the set will be added to the worklist. If nullptr
|
|
|
|
/// is provided, no filtering occurs.
|
2017-02-04 10:26:32 +08:00
|
|
|
void fillWorkLists(const MachineBasicBlock *MBB,
|
2016-03-15 05:24:11 +08:00
|
|
|
SmallPtrSetImpl<BlockChain *> &UpdatedPreds,
|
|
|
|
const BlockFilterSet *BlockFilter);
|
2017-08-25 05:21:39 +08:00
|
|
|
|
2017-02-04 10:26:32 +08:00
|
|
|
void buildChain(const MachineBasicBlock *BB, BlockChain &Chain,
|
Codegen: Tail-duplicate during placement.
The tail duplication pass uses an assumed layout when making duplication
decisions. This is fine, but passes up duplication opportunities that
may arise when blocks are outlined. Because we want the updated CFG to
affect subsequent placement decisions, this change must occur during
placement.
In order to achieve this goal, TailDuplicationPass is split into a
utility class, TailDuplicator, and the pass itself. The pass delegates
nearly everything to the TailDuplicator object, except for looping over
the blocks in a function. This allows the same code to be used for tail
duplication in both places.
This change, in concert with outlining optional branches, allows
triangle shaped code to perform much better, esepecially when the
taken/untaken branches are correlated, as it creates a second spine when
the tests are small enough.
Issue from previous rollback fixed, and a new test was added for that
case as well. Issue was worklist/scheduling/taildup issue in layout.
Issue from 2nd rollback fixed, with 2 additional tests. Issue was
tail merging/loop info/tail-duplication causing issue with loops that share
a header block.
Issue with early tail-duplication of blocks that branch to a fallthrough
predecessor fixed with test case: tail-dup-branch-to-fallthrough.ll
Differential revision: https://reviews.llvm.org/D18226
llvm-svn: 283934
2016-10-12 04:36:43 +08:00
|
|
|
BlockFilterSet *BlockFilter = nullptr);
|
2019-01-26 03:45:13 +08:00
|
|
|
bool canMoveBottomBlockToTop(const MachineBasicBlock *BottomBlock,
|
|
|
|
const MachineBasicBlock *OldTop);
|
2019-02-23 02:04:37 +08:00
|
|
|
bool hasViableTopFallthrough(const MachineBasicBlock *Top,
|
|
|
|
const BlockFilterSet &LoopBlockSet);
|
2019-06-15 07:08:59 +08:00
|
|
|
BlockFrequency TopFallThroughFreq(const MachineBasicBlock *Top,
|
|
|
|
const BlockFilterSet &LoopBlockSet);
|
|
|
|
BlockFrequency FallThroughGains(const MachineBasicBlock *NewTop,
|
|
|
|
const MachineBasicBlock *OldTop,
|
|
|
|
const MachineBasicBlock *ExitBB,
|
|
|
|
const BlockFilterSet &LoopBlockSet);
|
|
|
|
MachineBasicBlock *findBestLoopTopHelper(MachineBasicBlock *OldTop,
|
2019-08-09 04:25:23 +08:00
|
|
|
const MachineLoop &L, const BlockFilterSet &LoopBlockSet);
|
2019-08-30 03:03:58 +08:00
|
|
|
MachineBasicBlock *findBestLoopTop(
|
2019-08-23 00:21:32 +08:00
|
|
|
const MachineLoop &L, const BlockFilterSet &LoopBlockSet);
|
2017-02-04 10:26:32 +08:00
|
|
|
MachineBasicBlock *findBestLoopExit(
|
2019-08-30 03:03:58 +08:00
|
|
|
const MachineLoop &L, const BlockFilterSet &LoopBlockSet,
|
|
|
|
BlockFrequency &ExitFreq);
|
2017-02-04 10:26:32 +08:00
|
|
|
BlockFilterSet collectLoopBlockSet(const MachineLoop &L);
|
|
|
|
void buildLoopChains(const MachineLoop &L);
|
|
|
|
void rotateLoop(
|
|
|
|
BlockChain &LoopChain, const MachineBasicBlock *ExitingBB,
|
2019-08-30 03:03:58 +08:00
|
|
|
BlockFrequency ExitFreq, const BlockFilterSet &LoopBlockSet);
|
2017-02-04 10:26:32 +08:00
|
|
|
void rotateLoopWithProfile(
|
|
|
|
BlockChain &LoopChain, const MachineLoop &L,
|
|
|
|
const BlockFilterSet &LoopBlockSet);
|
2016-06-14 06:23:44 +08:00
|
|
|
void buildCFGChains();
|
|
|
|
void optimizeBranches();
|
|
|
|
void alignBlocks();
|
Codegen: Make chains from trellis-shaped CFGs
Lay out trellis-shaped CFGs optimally.
A trellis of the shape below:
A B
|\ /|
| \ / |
| X |
| / \ |
|/ \|
C D
would be laid out A; B->C ; D by the current layout algorithm. Now we identify
trellises and lay them out either A->C; B->D or A->D; B->C. This scales with an
increasing number of predecessors. A trellis is a a group of 2 or more
predecessor blocks that all have the same successors.
because of this we can tail duplicate to extend existing trellises.
As an example consider the following CFG:
B D F H
/ \ / \ / \ / \
A---C---E---G---Ret
Where A,C,E,G are all small (Currently 2 instructions).
The CFG preserving layout is then A,B,C,D,E,F,G,H,Ret.
The current code will copy C into B, E into D and G into F and yield the layout
A,C,B(C),E,D(E),F(G),G,H,ret
define void @straight_test(i32 %tag) {
entry:
br label %test1
test1: ; A
%tagbit1 = and i32 %tag, 1
%tagbit1eq0 = icmp eq i32 %tagbit1, 0
br i1 %tagbit1eq0, label %test2, label %optional1
optional1: ; B
call void @a()
br label %test2
test2: ; C
%tagbit2 = and i32 %tag, 2
%tagbit2eq0 = icmp eq i32 %tagbit2, 0
br i1 %tagbit2eq0, label %test3, label %optional2
optional2: ; D
call void @b()
br label %test3
test3: ; E
%tagbit3 = and i32 %tag, 4
%tagbit3eq0 = icmp eq i32 %tagbit3, 0
br i1 %tagbit3eq0, label %test4, label %optional3
optional3: ; F
call void @c()
br label %test4
test4: ; G
%tagbit4 = and i32 %tag, 8
%tagbit4eq0 = icmp eq i32 %tagbit4, 0
br i1 %tagbit4eq0, label %exit, label %optional4
optional4: ; H
call void @d()
br label %exit
exit:
ret void
}
here is the layout after D27742:
straight_test: # @straight_test
; ... Prologue elided
; BB#0: # %entry ; A (merged with test1)
; ... More prologue elided
mr 30, 3
andi. 3, 30, 1
bc 12, 1, .LBB0_2
; BB#1: # %test2 ; C
rlwinm. 3, 30, 0, 30, 30
beq 0, .LBB0_3
b .LBB0_4
.LBB0_2: # %optional1 ; B (copy of C)
bl a
nop
rlwinm. 3, 30, 0, 30, 30
bne 0, .LBB0_4
.LBB0_3: # %test3 ; E
rlwinm. 3, 30, 0, 29, 29
beq 0, .LBB0_5
b .LBB0_6
.LBB0_4: # %optional2 ; D (copy of E)
bl b
nop
rlwinm. 3, 30, 0, 29, 29
bne 0, .LBB0_6
.LBB0_5: # %test4 ; G
rlwinm. 3, 30, 0, 28, 28
beq 0, .LBB0_8
b .LBB0_7
.LBB0_6: # %optional3 ; F (copy of G)
bl c
nop
rlwinm. 3, 30, 0, 28, 28
beq 0, .LBB0_8
.LBB0_7: # %optional4 ; H
bl d
nop
.LBB0_8: # %exit ; Ret
ld 30, 96(1) # 8-byte Folded Reload
addi 1, 1, 112
ld 0, 16(1)
mtlr 0
blr
The tail-duplication has produced some benefit, but it has also produced a
trellis which is not laid out optimally. With this patch, we improve the layouts
of such trellises, and decrease the cost calculation for tail-duplication
accordingly.
This patch produces the layout A,C,E,G,B,D,F,H,Ret. This layout does have
back edges, which is a negative, but it has a bigger compensating
positive, which is that it handles the case where there are long strings
of skipped blocks much better than the original layout. Both layouts
handle runs of executed blocks equally well. Branch prediction also
improves if there is any correlation between subsequent optional blocks.
Here is the resulting concrete layout:
straight_test: # @straight_test
; BB#0: # %entry ; A (merged with test1)
mr 30, 3
andi. 3, 30, 1
bc 12, 1, .LBB0_4
; BB#1: # %test2 ; C
rlwinm. 3, 30, 0, 30, 30
bne 0, .LBB0_5
.LBB0_2: # %test3 ; E
rlwinm. 3, 30, 0, 29, 29
bne 0, .LBB0_6
.LBB0_3: # %test4 ; G
rlwinm. 3, 30, 0, 28, 28
bne 0, .LBB0_7
b .LBB0_8
.LBB0_4: # %optional1 ; B (Copy of C)
bl a
nop
rlwinm. 3, 30, 0, 30, 30
beq 0, .LBB0_2
.LBB0_5: # %optional2 ; D (Copy of E)
bl b
nop
rlwinm. 3, 30, 0, 29, 29
beq 0, .LBB0_3
.LBB0_6: # %optional3 ; F (Copy of G)
bl c
nop
rlwinm. 3, 30, 0, 28, 28
beq 0, .LBB0_8
.LBB0_7: # %optional4 ; H
bl d
nop
.LBB0_8: # %exit
Differential Revision: https://reviews.llvm.org/D28522
llvm-svn: 295223
2017-02-16 03:49:14 +08:00
|
|
|
/// Returns true if a block should be tail-duplicated to increase fallthrough
|
|
|
|
/// opportunities.
|
2017-02-01 07:48:32 +08:00
|
|
|
bool shouldTailDuplicate(MachineBasicBlock *BB);
|
|
|
|
/// Check the edge frequencies to see if tail duplication will increase
|
|
|
|
/// fallthroughs.
|
|
|
|
bool isProfitableToTailDup(
|
2017-02-04 10:26:32 +08:00
|
|
|
const MachineBasicBlock *BB, const MachineBasicBlock *Succ,
|
2018-07-17 02:51:40 +08:00
|
|
|
BranchProbability QProb,
|
2017-02-04 10:26:32 +08:00
|
|
|
const BlockChain &Chain, const BlockFilterSet *BlockFilter);
|
2017-08-25 05:21:39 +08:00
|
|
|
|
Codegen: Make chains from trellis-shaped CFGs
Lay out trellis-shaped CFGs optimally.
A trellis of the shape below:
A B
|\ /|
| \ / |
| X |
| / \ |
|/ \|
C D
would be laid out A; B->C ; D by the current layout algorithm. Now we identify
trellises and lay them out either A->C; B->D or A->D; B->C. This scales with an
increasing number of predecessors. A trellis is a a group of 2 or more
predecessor blocks that all have the same successors.
because of this we can tail duplicate to extend existing trellises.
As an example consider the following CFG:
B D F H
/ \ / \ / \ / \
A---C---E---G---Ret
Where A,C,E,G are all small (Currently 2 instructions).
The CFG preserving layout is then A,B,C,D,E,F,G,H,Ret.
The current code will copy C into B, E into D and G into F and yield the layout
A,C,B(C),E,D(E),F(G),G,H,ret
define void @straight_test(i32 %tag) {
entry:
br label %test1
test1: ; A
%tagbit1 = and i32 %tag, 1
%tagbit1eq0 = icmp eq i32 %tagbit1, 0
br i1 %tagbit1eq0, label %test2, label %optional1
optional1: ; B
call void @a()
br label %test2
test2: ; C
%tagbit2 = and i32 %tag, 2
%tagbit2eq0 = icmp eq i32 %tagbit2, 0
br i1 %tagbit2eq0, label %test3, label %optional2
optional2: ; D
call void @b()
br label %test3
test3: ; E
%tagbit3 = and i32 %tag, 4
%tagbit3eq0 = icmp eq i32 %tagbit3, 0
br i1 %tagbit3eq0, label %test4, label %optional3
optional3: ; F
call void @c()
br label %test4
test4: ; G
%tagbit4 = and i32 %tag, 8
%tagbit4eq0 = icmp eq i32 %tagbit4, 0
br i1 %tagbit4eq0, label %exit, label %optional4
optional4: ; H
call void @d()
br label %exit
exit:
ret void
}
here is the layout after D27742:
straight_test: # @straight_test
; ... Prologue elided
; BB#0: # %entry ; A (merged with test1)
; ... More prologue elided
mr 30, 3
andi. 3, 30, 1
bc 12, 1, .LBB0_2
; BB#1: # %test2 ; C
rlwinm. 3, 30, 0, 30, 30
beq 0, .LBB0_3
b .LBB0_4
.LBB0_2: # %optional1 ; B (copy of C)
bl a
nop
rlwinm. 3, 30, 0, 30, 30
bne 0, .LBB0_4
.LBB0_3: # %test3 ; E
rlwinm. 3, 30, 0, 29, 29
beq 0, .LBB0_5
b .LBB0_6
.LBB0_4: # %optional2 ; D (copy of E)
bl b
nop
rlwinm. 3, 30, 0, 29, 29
bne 0, .LBB0_6
.LBB0_5: # %test4 ; G
rlwinm. 3, 30, 0, 28, 28
beq 0, .LBB0_8
b .LBB0_7
.LBB0_6: # %optional3 ; F (copy of G)
bl c
nop
rlwinm. 3, 30, 0, 28, 28
beq 0, .LBB0_8
.LBB0_7: # %optional4 ; H
bl d
nop
.LBB0_8: # %exit ; Ret
ld 30, 96(1) # 8-byte Folded Reload
addi 1, 1, 112
ld 0, 16(1)
mtlr 0
blr
The tail-duplication has produced some benefit, but it has also produced a
trellis which is not laid out optimally. With this patch, we improve the layouts
of such trellises, and decrease the cost calculation for tail-duplication
accordingly.
This patch produces the layout A,C,E,G,B,D,F,H,Ret. This layout does have
back edges, which is a negative, but it has a bigger compensating
positive, which is that it handles the case where there are long strings
of skipped blocks much better than the original layout. Both layouts
handle runs of executed blocks equally well. Branch prediction also
improves if there is any correlation between subsequent optional blocks.
Here is the resulting concrete layout:
straight_test: # @straight_test
; BB#0: # %entry ; A (merged with test1)
mr 30, 3
andi. 3, 30, 1
bc 12, 1, .LBB0_4
; BB#1: # %test2 ; C
rlwinm. 3, 30, 0, 30, 30
bne 0, .LBB0_5
.LBB0_2: # %test3 ; E
rlwinm. 3, 30, 0, 29, 29
bne 0, .LBB0_6
.LBB0_3: # %test4 ; G
rlwinm. 3, 30, 0, 28, 28
bne 0, .LBB0_7
b .LBB0_8
.LBB0_4: # %optional1 ; B (Copy of C)
bl a
nop
rlwinm. 3, 30, 0, 30, 30
beq 0, .LBB0_2
.LBB0_5: # %optional2 ; D (Copy of E)
bl b
nop
rlwinm. 3, 30, 0, 29, 29
beq 0, .LBB0_3
.LBB0_6: # %optional3 ; F (Copy of G)
bl c
nop
rlwinm. 3, 30, 0, 28, 28
beq 0, .LBB0_8
.LBB0_7: # %optional4 ; H
bl d
nop
.LBB0_8: # %exit
Differential Revision: https://reviews.llvm.org/D28522
llvm-svn: 295223
2017-02-16 03:49:14 +08:00
|
|
|
/// Check for a trellis layout.
|
|
|
|
bool isTrellis(const MachineBasicBlock *BB,
|
|
|
|
const SmallVectorImpl<MachineBasicBlock *> &ViableSuccs,
|
|
|
|
const BlockChain &Chain, const BlockFilterSet *BlockFilter);
|
2017-08-25 05:21:39 +08:00
|
|
|
|
Codegen: Make chains from trellis-shaped CFGs
Lay out trellis-shaped CFGs optimally.
A trellis of the shape below:
A B
|\ /|
| \ / |
| X |
| / \ |
|/ \|
C D
would be laid out A; B->C ; D by the current layout algorithm. Now we identify
trellises and lay them out either A->C; B->D or A->D; B->C. This scales with an
increasing number of predecessors. A trellis is a a group of 2 or more
predecessor blocks that all have the same successors.
because of this we can tail duplicate to extend existing trellises.
As an example consider the following CFG:
B D F H
/ \ / \ / \ / \
A---C---E---G---Ret
Where A,C,E,G are all small (Currently 2 instructions).
The CFG preserving layout is then A,B,C,D,E,F,G,H,Ret.
The current code will copy C into B, E into D and G into F and yield the layout
A,C,B(C),E,D(E),F(G),G,H,ret
define void @straight_test(i32 %tag) {
entry:
br label %test1
test1: ; A
%tagbit1 = and i32 %tag, 1
%tagbit1eq0 = icmp eq i32 %tagbit1, 0
br i1 %tagbit1eq0, label %test2, label %optional1
optional1: ; B
call void @a()
br label %test2
test2: ; C
%tagbit2 = and i32 %tag, 2
%tagbit2eq0 = icmp eq i32 %tagbit2, 0
br i1 %tagbit2eq0, label %test3, label %optional2
optional2: ; D
call void @b()
br label %test3
test3: ; E
%tagbit3 = and i32 %tag, 4
%tagbit3eq0 = icmp eq i32 %tagbit3, 0
br i1 %tagbit3eq0, label %test4, label %optional3
optional3: ; F
call void @c()
br label %test4
test4: ; G
%tagbit4 = and i32 %tag, 8
%tagbit4eq0 = icmp eq i32 %tagbit4, 0
br i1 %tagbit4eq0, label %exit, label %optional4
optional4: ; H
call void @d()
br label %exit
exit:
ret void
}
here is the layout after D27742:
straight_test: # @straight_test
; ... Prologue elided
; BB#0: # %entry ; A (merged with test1)
; ... More prologue elided
mr 30, 3
andi. 3, 30, 1
bc 12, 1, .LBB0_2
; BB#1: # %test2 ; C
rlwinm. 3, 30, 0, 30, 30
beq 0, .LBB0_3
b .LBB0_4
.LBB0_2: # %optional1 ; B (copy of C)
bl a
nop
rlwinm. 3, 30, 0, 30, 30
bne 0, .LBB0_4
.LBB0_3: # %test3 ; E
rlwinm. 3, 30, 0, 29, 29
beq 0, .LBB0_5
b .LBB0_6
.LBB0_4: # %optional2 ; D (copy of E)
bl b
nop
rlwinm. 3, 30, 0, 29, 29
bne 0, .LBB0_6
.LBB0_5: # %test4 ; G
rlwinm. 3, 30, 0, 28, 28
beq 0, .LBB0_8
b .LBB0_7
.LBB0_6: # %optional3 ; F (copy of G)
bl c
nop
rlwinm. 3, 30, 0, 28, 28
beq 0, .LBB0_8
.LBB0_7: # %optional4 ; H
bl d
nop
.LBB0_8: # %exit ; Ret
ld 30, 96(1) # 8-byte Folded Reload
addi 1, 1, 112
ld 0, 16(1)
mtlr 0
blr
The tail-duplication has produced some benefit, but it has also produced a
trellis which is not laid out optimally. With this patch, we improve the layouts
of such trellises, and decrease the cost calculation for tail-duplication
accordingly.
This patch produces the layout A,C,E,G,B,D,F,H,Ret. This layout does have
back edges, which is a negative, but it has a bigger compensating
positive, which is that it handles the case where there are long strings
of skipped blocks much better than the original layout. Both layouts
handle runs of executed blocks equally well. Branch prediction also
improves if there is any correlation between subsequent optional blocks.
Here is the resulting concrete layout:
straight_test: # @straight_test
; BB#0: # %entry ; A (merged with test1)
mr 30, 3
andi. 3, 30, 1
bc 12, 1, .LBB0_4
; BB#1: # %test2 ; C
rlwinm. 3, 30, 0, 30, 30
bne 0, .LBB0_5
.LBB0_2: # %test3 ; E
rlwinm. 3, 30, 0, 29, 29
bne 0, .LBB0_6
.LBB0_3: # %test4 ; G
rlwinm. 3, 30, 0, 28, 28
bne 0, .LBB0_7
b .LBB0_8
.LBB0_4: # %optional1 ; B (Copy of C)
bl a
nop
rlwinm. 3, 30, 0, 30, 30
beq 0, .LBB0_2
.LBB0_5: # %optional2 ; D (Copy of E)
bl b
nop
rlwinm. 3, 30, 0, 29, 29
beq 0, .LBB0_3
.LBB0_6: # %optional3 ; F (Copy of G)
bl c
nop
rlwinm. 3, 30, 0, 28, 28
beq 0, .LBB0_8
.LBB0_7: # %optional4 ; H
bl d
nop
.LBB0_8: # %exit
Differential Revision: https://reviews.llvm.org/D28522
llvm-svn: 295223
2017-02-16 03:49:14 +08:00
|
|
|
/// Get the best successor given a trellis layout.
|
|
|
|
BlockAndTailDupResult getBestTrellisSuccessor(
|
|
|
|
const MachineBasicBlock *BB,
|
|
|
|
const SmallVectorImpl<MachineBasicBlock *> &ViableSuccs,
|
|
|
|
BranchProbability AdjustedSumProb, const BlockChain &Chain,
|
|
|
|
const BlockFilterSet *BlockFilter);
|
2017-08-25 05:21:39 +08:00
|
|
|
|
Codegen: Make chains from trellis-shaped CFGs
Lay out trellis-shaped CFGs optimally.
A trellis of the shape below:
A B
|\ /|
| \ / |
| X |
| / \ |
|/ \|
C D
would be laid out A; B->C ; D by the current layout algorithm. Now we identify
trellises and lay them out either A->C; B->D or A->D; B->C. This scales with an
increasing number of predecessors. A trellis is a a group of 2 or more
predecessor blocks that all have the same successors.
because of this we can tail duplicate to extend existing trellises.
As an example consider the following CFG:
B D F H
/ \ / \ / \ / \
A---C---E---G---Ret
Where A,C,E,G are all small (Currently 2 instructions).
The CFG preserving layout is then A,B,C,D,E,F,G,H,Ret.
The current code will copy C into B, E into D and G into F and yield the layout
A,C,B(C),E,D(E),F(G),G,H,ret
define void @straight_test(i32 %tag) {
entry:
br label %test1
test1: ; A
%tagbit1 = and i32 %tag, 1
%tagbit1eq0 = icmp eq i32 %tagbit1, 0
br i1 %tagbit1eq0, label %test2, label %optional1
optional1: ; B
call void @a()
br label %test2
test2: ; C
%tagbit2 = and i32 %tag, 2
%tagbit2eq0 = icmp eq i32 %tagbit2, 0
br i1 %tagbit2eq0, label %test3, label %optional2
optional2: ; D
call void @b()
br label %test3
test3: ; E
%tagbit3 = and i32 %tag, 4
%tagbit3eq0 = icmp eq i32 %tagbit3, 0
br i1 %tagbit3eq0, label %test4, label %optional3
optional3: ; F
call void @c()
br label %test4
test4: ; G
%tagbit4 = and i32 %tag, 8
%tagbit4eq0 = icmp eq i32 %tagbit4, 0
br i1 %tagbit4eq0, label %exit, label %optional4
optional4: ; H
call void @d()
br label %exit
exit:
ret void
}
here is the layout after D27742:
straight_test: # @straight_test
; ... Prologue elided
; BB#0: # %entry ; A (merged with test1)
; ... More prologue elided
mr 30, 3
andi. 3, 30, 1
bc 12, 1, .LBB0_2
; BB#1: # %test2 ; C
rlwinm. 3, 30, 0, 30, 30
beq 0, .LBB0_3
b .LBB0_4
.LBB0_2: # %optional1 ; B (copy of C)
bl a
nop
rlwinm. 3, 30, 0, 30, 30
bne 0, .LBB0_4
.LBB0_3: # %test3 ; E
rlwinm. 3, 30, 0, 29, 29
beq 0, .LBB0_5
b .LBB0_6
.LBB0_4: # %optional2 ; D (copy of E)
bl b
nop
rlwinm. 3, 30, 0, 29, 29
bne 0, .LBB0_6
.LBB0_5: # %test4 ; G
rlwinm. 3, 30, 0, 28, 28
beq 0, .LBB0_8
b .LBB0_7
.LBB0_6: # %optional3 ; F (copy of G)
bl c
nop
rlwinm. 3, 30, 0, 28, 28
beq 0, .LBB0_8
.LBB0_7: # %optional4 ; H
bl d
nop
.LBB0_8: # %exit ; Ret
ld 30, 96(1) # 8-byte Folded Reload
addi 1, 1, 112
ld 0, 16(1)
mtlr 0
blr
The tail-duplication has produced some benefit, but it has also produced a
trellis which is not laid out optimally. With this patch, we improve the layouts
of such trellises, and decrease the cost calculation for tail-duplication
accordingly.
This patch produces the layout A,C,E,G,B,D,F,H,Ret. This layout does have
back edges, which is a negative, but it has a bigger compensating
positive, which is that it handles the case where there are long strings
of skipped blocks much better than the original layout. Both layouts
handle runs of executed blocks equally well. Branch prediction also
improves if there is any correlation between subsequent optional blocks.
Here is the resulting concrete layout:
straight_test: # @straight_test
; BB#0: # %entry ; A (merged with test1)
mr 30, 3
andi. 3, 30, 1
bc 12, 1, .LBB0_4
; BB#1: # %test2 ; C
rlwinm. 3, 30, 0, 30, 30
bne 0, .LBB0_5
.LBB0_2: # %test3 ; E
rlwinm. 3, 30, 0, 29, 29
bne 0, .LBB0_6
.LBB0_3: # %test4 ; G
rlwinm. 3, 30, 0, 28, 28
bne 0, .LBB0_7
b .LBB0_8
.LBB0_4: # %optional1 ; B (Copy of C)
bl a
nop
rlwinm. 3, 30, 0, 30, 30
beq 0, .LBB0_2
.LBB0_5: # %optional2 ; D (Copy of E)
bl b
nop
rlwinm. 3, 30, 0, 29, 29
beq 0, .LBB0_3
.LBB0_6: # %optional3 ; F (Copy of G)
bl c
nop
rlwinm. 3, 30, 0, 28, 28
beq 0, .LBB0_8
.LBB0_7: # %optional4 ; H
bl d
nop
.LBB0_8: # %exit
Differential Revision: https://reviews.llvm.org/D28522
llvm-svn: 295223
2017-02-16 03:49:14 +08:00
|
|
|
/// Get the best pair of non-conflicting edges.
|
|
|
|
static std::pair<WeightedEdge, WeightedEdge> getBestNonConflictingEdges(
|
|
|
|
const MachineBasicBlock *BB,
|
2017-04-12 21:26:28 +08:00
|
|
|
MutableArrayRef<SmallVector<WeightedEdge, 8>> Edges);
|
2017-08-25 05:21:39 +08:00
|
|
|
|
2017-02-01 07:48:32 +08:00
|
|
|
/// Returns true if a block can tail duplicate into all unplaced
|
|
|
|
/// predecessors. Filters based on loop.
|
|
|
|
bool canTailDuplicateUnplacedPreds(
|
2017-02-04 10:26:32 +08:00
|
|
|
const MachineBasicBlock *BB, MachineBasicBlock *Succ,
|
|
|
|
const BlockChain &Chain, const BlockFilterSet *BlockFilter);
|
2017-08-25 05:21:39 +08:00
|
|
|
|
2017-03-03 09:00:22 +08:00
|
|
|
/// Find chains of triangles to tail-duplicate where a global analysis works,
|
|
|
|
/// but a local analysis would not find them.
|
|
|
|
void precomputeTriangleChains();
|
Implement a block placement pass based on the branch probability and
block frequency analyses. This differs substantially from the existing
block-placement pass in LLVM:
1) It operates on the Machine-IR in the CodeGen layer. This exposes much
more (and more precise) information and opportunities. Also, the
results are more stable due to fewer transforms ocurring after the
pass runs.
2) It uses the generalized probability and frequency analyses. These can
model static heuristics, code annotation derived heuristics as well
as eventual profile loading. By basing the optimization on the
analysis interface it can work from any (or a combination) of these
inputs.
3) It uses a more aggressive algorithm, both building chains from tho
bottom up to maximize benefit, and using an SCC-based walk to layout
chains of blocks in a profitable ordering without O(N^2) iterations
which the old pass involves.
The pass is currently gated behind a flag, and not enabled by default
because it still needs to grow some important features. Most notably, it
needs to support loop aligning and careful layout of loop structures
much as done by hand currently in CodePlacementOpt. Once it supports
these, and has sufficient testing and quality tuning, it should replace
both of these passes.
Thanks to Nick Lewycky and Richard Smith for help authoring & debugging
this, and to Jakob, Andy, Eric, Jim, and probably a few others I'm
forgetting for reviewing and answering all my questions. Writing
a backend pass is *sooo* much better now than it used to be. =D
llvm-svn: 142641
2011-10-21 14:46:38 +08:00
|
|
|
|
|
|
|
public:
|
|
|
|
static char ID; // Pass identification, replacement for typeid
|
2017-08-25 05:21:39 +08:00
|
|
|
|
Implement a block placement pass based on the branch probability and
block frequency analyses. This differs substantially from the existing
block-placement pass in LLVM:
1) It operates on the Machine-IR in the CodeGen layer. This exposes much
more (and more precise) information and opportunities. Also, the
results are more stable due to fewer transforms ocurring after the
pass runs.
2) It uses the generalized probability and frequency analyses. These can
model static heuristics, code annotation derived heuristics as well
as eventual profile loading. By basing the optimization on the
analysis interface it can work from any (or a combination) of these
inputs.
3) It uses a more aggressive algorithm, both building chains from tho
bottom up to maximize benefit, and using an SCC-based walk to layout
chains of blocks in a profitable ordering without O(N^2) iterations
which the old pass involves.
The pass is currently gated behind a flag, and not enabled by default
because it still needs to grow some important features. Most notably, it
needs to support loop aligning and careful layout of loop structures
much as done by hand currently in CodePlacementOpt. Once it supports
these, and has sufficient testing and quality tuning, it should replace
both of these passes.
Thanks to Nick Lewycky and Richard Smith for help authoring & debugging
this, and to Jakob, Andy, Eric, Jim, and probably a few others I'm
forgetting for reviewing and answering all my questions. Writing
a backend pass is *sooo* much better now than it used to be. =D
llvm-svn: 142641
2011-10-21 14:46:38 +08:00
|
|
|
MachineBlockPlacement() : MachineFunctionPass(ID) {
|
|
|
|
initializeMachineBlockPlacementPass(*PassRegistry::getPassRegistry());
|
|
|
|
}
|
|
|
|
|
2014-03-07 17:26:03 +08:00
|
|
|
bool runOnMachineFunction(MachineFunction &F) override;
|
Implement a block placement pass based on the branch probability and
block frequency analyses. This differs substantially from the existing
block-placement pass in LLVM:
1) It operates on the Machine-IR in the CodeGen layer. This exposes much
more (and more precise) information and opportunities. Also, the
results are more stable due to fewer transforms ocurring after the
pass runs.
2) It uses the generalized probability and frequency analyses. These can
model static heuristics, code annotation derived heuristics as well
as eventual profile loading. By basing the optimization on the
analysis interface it can work from any (or a combination) of these
inputs.
3) It uses a more aggressive algorithm, both building chains from tho
bottom up to maximize benefit, and using an SCC-based walk to layout
chains of blocks in a profitable ordering without O(N^2) iterations
which the old pass involves.
The pass is currently gated behind a flag, and not enabled by default
because it still needs to grow some important features. Most notably, it
needs to support loop aligning and careful layout of loop structures
much as done by hand currently in CodePlacementOpt. Once it supports
these, and has sufficient testing and quality tuning, it should replace
both of these passes.
Thanks to Nick Lewycky and Richard Smith for help authoring & debugging
this, and to Jakob, Andy, Eric, Jim, and probably a few others I'm
forgetting for reviewing and answering all my questions. Writing
a backend pass is *sooo* much better now than it used to be. =D
llvm-svn: 142641
2011-10-21 14:46:38 +08:00
|
|
|
|
[BlockPlacement] Disable block placement tail duplciation in structured CFG.
Summary:
Tail duplication easily breaks the structure of CFG, e.g. duplicating on
a region entry. If the structure is intended to be preserved, then we
may want to configure tail duplication, or disable it for structured
CFG. From our benchmark results disabling it doesn't cause performance
regression.
Notice that this currently affects AMDGPU backend. In the next patch, I
also plan to turn on requiresStructuredCFG for NVPTX.
All unit tests still pass.
Reviewers: jlebar, arsenm
Subscribers: jholewinski, sanjoy, wdng, tpr, hiraditya, llvm-commits
Differential Revision: https://reviews.llvm.org/D45008
llvm-svn: 328884
2018-03-31 01:51:00 +08:00
|
|
|
bool allowTailDupPlacement() const {
|
|
|
|
assert(F);
|
|
|
|
return TailDupPlacement && !F->getTarget().requiresStructuredCFG();
|
|
|
|
}
|
|
|
|
|
2014-03-07 17:26:03 +08:00
|
|
|
void getAnalysisUsage(AnalysisUsage &AU) const override {
|
Implement a block placement pass based on the branch probability and
block frequency analyses. This differs substantially from the existing
block-placement pass in LLVM:
1) It operates on the Machine-IR in the CodeGen layer. This exposes much
more (and more precise) information and opportunities. Also, the
results are more stable due to fewer transforms ocurring after the
pass runs.
2) It uses the generalized probability and frequency analyses. These can
model static heuristics, code annotation derived heuristics as well
as eventual profile loading. By basing the optimization on the
analysis interface it can work from any (or a combination) of these
inputs.
3) It uses a more aggressive algorithm, both building chains from tho
bottom up to maximize benefit, and using an SCC-based walk to layout
chains of blocks in a profitable ordering without O(N^2) iterations
which the old pass involves.
The pass is currently gated behind a flag, and not enabled by default
because it still needs to grow some important features. Most notably, it
needs to support loop aligning and careful layout of loop structures
much as done by hand currently in CodePlacementOpt. Once it supports
these, and has sufficient testing and quality tuning, it should replace
both of these passes.
Thanks to Nick Lewycky and Richard Smith for help authoring & debugging
this, and to Jakob, Andy, Eric, Jim, and probably a few others I'm
forgetting for reviewing and answering all my questions. Writing
a backend pass is *sooo* much better now than it used to be. =D
llvm-svn: 142641
2011-10-21 14:46:38 +08:00
|
|
|
AU.addRequired<MachineBranchProbabilityInfo>();
|
|
|
|
AU.addRequired<MachineBlockFrequencyInfo>();
|
2017-02-01 07:48:32 +08:00
|
|
|
if (TailDupPlacement)
|
|
|
|
AU.addRequired<MachinePostDominatorTree>();
|
2011-10-21 16:57:37 +08:00
|
|
|
AU.addRequired<MachineLoopInfo>();
|
2019-12-06 01:39:37 +08:00
|
|
|
AU.addRequired<ProfileSummaryInfoWrapperPass>();
|
2016-06-09 23:24:29 +08:00
|
|
|
AU.addRequired<TargetPassConfig>();
|
Implement a block placement pass based on the branch probability and
block frequency analyses. This differs substantially from the existing
block-placement pass in LLVM:
1) It operates on the Machine-IR in the CodeGen layer. This exposes much
more (and more precise) information and opportunities. Also, the
results are more stable due to fewer transforms ocurring after the
pass runs.
2) It uses the generalized probability and frequency analyses. These can
model static heuristics, code annotation derived heuristics as well
as eventual profile loading. By basing the optimization on the
analysis interface it can work from any (or a combination) of these
inputs.
3) It uses a more aggressive algorithm, both building chains from tho
bottom up to maximize benefit, and using an SCC-based walk to layout
chains of blocks in a profitable ordering without O(N^2) iterations
which the old pass involves.
The pass is currently gated behind a flag, and not enabled by default
because it still needs to grow some important features. Most notably, it
needs to support loop aligning and careful layout of loop structures
much as done by hand currently in CodePlacementOpt. Once it supports
these, and has sufficient testing and quality tuning, it should replace
both of these passes.
Thanks to Nick Lewycky and Richard Smith for help authoring & debugging
this, and to Jakob, Andy, Eric, Jim, and probably a few others I'm
forgetting for reviewing and answering all my questions. Writing
a backend pass is *sooo* much better now than it used to be. =D
llvm-svn: 142641
2011-10-21 14:46:38 +08:00
|
|
|
MachineFunctionPass::getAnalysisUsage(AU);
|
|
|
|
}
|
|
|
|
};
|
2017-08-25 05:21:39 +08:00
|
|
|
|
|
|
|
} // end anonymous namespace
|
Implement a block placement pass based on the branch probability and
block frequency analyses. This differs substantially from the existing
block-placement pass in LLVM:
1) It operates on the Machine-IR in the CodeGen layer. This exposes much
more (and more precise) information and opportunities. Also, the
results are more stable due to fewer transforms ocurring after the
pass runs.
2) It uses the generalized probability and frequency analyses. These can
model static heuristics, code annotation derived heuristics as well
as eventual profile loading. By basing the optimization on the
analysis interface it can work from any (or a combination) of these
inputs.
3) It uses a more aggressive algorithm, both building chains from tho
bottom up to maximize benefit, and using an SCC-based walk to layout
chains of blocks in a profitable ordering without O(N^2) iterations
which the old pass involves.
The pass is currently gated behind a flag, and not enabled by default
because it still needs to grow some important features. Most notably, it
needs to support loop aligning and careful layout of loop structures
much as done by hand currently in CodePlacementOpt. Once it supports
these, and has sufficient testing and quality tuning, it should replace
both of these passes.
Thanks to Nick Lewycky and Richard Smith for help authoring & debugging
this, and to Jakob, Andy, Eric, Jim, and probably a few others I'm
forgetting for reviewing and answering all my questions. Writing
a backend pass is *sooo* much better now than it used to be. =D
llvm-svn: 142641
2011-10-21 14:46:38 +08:00
|
|
|
|
|
|
|
char MachineBlockPlacement::ID = 0;
|
2017-08-25 05:21:39 +08:00
|
|
|
|
2012-02-09 05:23:13 +08:00
|
|
|
char &llvm::MachineBlockPlacementID = MachineBlockPlacement::ID;
|
2017-08-25 05:21:39 +08:00
|
|
|
|
2017-05-26 05:26:32 +08:00
|
|
|
INITIALIZE_PASS_BEGIN(MachineBlockPlacement, DEBUG_TYPE,
|
Implement a block placement pass based on the branch probability and
block frequency analyses. This differs substantially from the existing
block-placement pass in LLVM:
1) It operates on the Machine-IR in the CodeGen layer. This exposes much
more (and more precise) information and opportunities. Also, the
results are more stable due to fewer transforms ocurring after the
pass runs.
2) It uses the generalized probability and frequency analyses. These can
model static heuristics, code annotation derived heuristics as well
as eventual profile loading. By basing the optimization on the
analysis interface it can work from any (or a combination) of these
inputs.
3) It uses a more aggressive algorithm, both building chains from tho
bottom up to maximize benefit, and using an SCC-based walk to layout
chains of blocks in a profitable ordering without O(N^2) iterations
which the old pass involves.
The pass is currently gated behind a flag, and not enabled by default
because it still needs to grow some important features. Most notably, it
needs to support loop aligning and careful layout of loop structures
much as done by hand currently in CodePlacementOpt. Once it supports
these, and has sufficient testing and quality tuning, it should replace
both of these passes.
Thanks to Nick Lewycky and Richard Smith for help authoring & debugging
this, and to Jakob, Andy, Eric, Jim, and probably a few others I'm
forgetting for reviewing and answering all my questions. Writing
a backend pass is *sooo* much better now than it used to be. =D
llvm-svn: 142641
2011-10-21 14:46:38 +08:00
|
|
|
"Branch Probability Basic Block Placement", false, false)
|
|
|
|
INITIALIZE_PASS_DEPENDENCY(MachineBranchProbabilityInfo)
|
|
|
|
INITIALIZE_PASS_DEPENDENCY(MachineBlockFrequencyInfo)
|
2017-02-01 07:48:32 +08:00
|
|
|
INITIALIZE_PASS_DEPENDENCY(MachinePostDominatorTree)
|
2011-10-21 16:57:37 +08:00
|
|
|
INITIALIZE_PASS_DEPENDENCY(MachineLoopInfo)
|
2019-12-06 01:39:37 +08:00
|
|
|
INITIALIZE_PASS_DEPENDENCY(ProfileSummaryInfoWrapperPass)
|
2017-05-26 05:26:32 +08:00
|
|
|
INITIALIZE_PASS_END(MachineBlockPlacement, DEBUG_TYPE,
|
Implement a block placement pass based on the branch probability and
block frequency analyses. This differs substantially from the existing
block-placement pass in LLVM:
1) It operates on the Machine-IR in the CodeGen layer. This exposes much
more (and more precise) information and opportunities. Also, the
results are more stable due to fewer transforms ocurring after the
pass runs.
2) It uses the generalized probability and frequency analyses. These can
model static heuristics, code annotation derived heuristics as well
as eventual profile loading. By basing the optimization on the
analysis interface it can work from any (or a combination) of these
inputs.
3) It uses a more aggressive algorithm, both building chains from tho
bottom up to maximize benefit, and using an SCC-based walk to layout
chains of blocks in a profitable ordering without O(N^2) iterations
which the old pass involves.
The pass is currently gated behind a flag, and not enabled by default
because it still needs to grow some important features. Most notably, it
needs to support loop aligning and careful layout of loop structures
much as done by hand currently in CodePlacementOpt. Once it supports
these, and has sufficient testing and quality tuning, it should replace
both of these passes.
Thanks to Nick Lewycky and Richard Smith for help authoring & debugging
this, and to Jakob, Andy, Eric, Jim, and probably a few others I'm
forgetting for reviewing and answering all my questions. Writing
a backend pass is *sooo* much better now than it used to be. =D
llvm-svn: 142641
2011-10-21 14:46:38 +08:00
|
|
|
"Branch Probability Basic Block Placement", false, false)
|
|
|
|
|
Completely re-write the algorithm behind MachineBlockPlacement based on
discussions with Andy. Fundamentally, the previous algorithm is both
counter productive on several fronts and prioritizing things which
aren't necessarily the most important: static branch prediction.
The new algorithm uses the existing loop CFG structure information to
walk through the CFG itself to layout blocks. It coalesces adjacent
blocks within the loop where the CFG allows based on the most likely
path taken. Finally, it topologically orders the block chains that have
been formed. This allows it to choose a (mostly) topologically valid
ordering which still priorizes fallthrough within the structural
constraints.
As a final twist in the algorithm, it does violate the CFG when it
discovers a "hot" edge, that is an edge that is more than 4x hotter than
the competing edges in the CFG. These are forcibly merged into
a fallthrough chain.
Future transformations that need te be added are rotation of loop exit
conditions to be fallthrough, and better isolation of cold block chains.
I'm also planning on adding statistics to model how well the algorithm
does at laying out blocks based on the probabilities it receives.
The old tests mostly still pass, and I have some new tests to add, but
the nested loops are still behaving very strangely. This almost seems
like working-as-intended as it rotated the exit branch to be
fallthrough, but I'm not convinced this is actually the best layout. It
is well supported by the probabilities for loops we currently get, but
those are pretty broken for nested loops, so this may change later.
llvm-svn: 142743
2011-10-23 17:18:45 +08:00
|
|
|
#ifndef NDEBUG
|
2018-05-01 23:54:18 +08:00
|
|
|
/// Helper to print the name of a MBB.
|
Completely re-write the algorithm behind MachineBlockPlacement based on
discussions with Andy. Fundamentally, the previous algorithm is both
counter productive on several fronts and prioritizing things which
aren't necessarily the most important: static branch prediction.
The new algorithm uses the existing loop CFG structure information to
walk through the CFG itself to layout blocks. It coalesces adjacent
blocks within the loop where the CFG allows based on the most likely
path taken. Finally, it topologically orders the block chains that have
been formed. This allows it to choose a (mostly) topologically valid
ordering which still priorizes fallthrough within the structural
constraints.
As a final twist in the algorithm, it does violate the CFG when it
discovers a "hot" edge, that is an edge that is more than 4x hotter than
the competing edges in the CFG. These are forcibly merged into
a fallthrough chain.
Future transformations that need te be added are rotation of loop exit
conditions to be fallthrough, and better isolation of cold block chains.
I'm also planning on adding statistics to model how well the algorithm
does at laying out blocks based on the probabilities it receives.
The old tests mostly still pass, and I have some new tests to add, but
the nested loops are still behaving very strangely. This almost seems
like working-as-intended as it rotated the exit branch to be
fallthrough, but I'm not convinced this is actually the best layout. It
is well supported by the probabilities for loops we currently get, but
those are pretty broken for nested loops, so this may change later.
llvm-svn: 142743
2011-10-23 17:18:45 +08:00
|
|
|
///
|
|
|
|
/// Only used by debug logging.
|
2017-02-04 10:26:32 +08:00
|
|
|
static std::string getBlockName(const MachineBasicBlock *BB) {
|
2014-06-27 06:52:05 +08:00
|
|
|
std::string Result;
|
|
|
|
raw_string_ostream OS(Result);
|
2017-12-05 01:18:51 +08:00
|
|
|
OS << printMBBReference(*BB);
|
2016-03-03 05:45:13 +08:00
|
|
|
OS << " ('" << BB->getName() << "')";
|
2014-06-27 06:52:05 +08:00
|
|
|
OS.flush();
|
|
|
|
return Result;
|
Implement a block placement pass based on the branch probability and
block frequency analyses. This differs substantially from the existing
block-placement pass in LLVM:
1) It operates on the Machine-IR in the CodeGen layer. This exposes much
more (and more precise) information and opportunities. Also, the
results are more stable due to fewer transforms ocurring after the
pass runs.
2) It uses the generalized probability and frequency analyses. These can
model static heuristics, code annotation derived heuristics as well
as eventual profile loading. By basing the optimization on the
analysis interface it can work from any (or a combination) of these
inputs.
3) It uses a more aggressive algorithm, both building chains from tho
bottom up to maximize benefit, and using an SCC-based walk to layout
chains of blocks in a profitable ordering without O(N^2) iterations
which the old pass involves.
The pass is currently gated behind a flag, and not enabled by default
because it still needs to grow some important features. Most notably, it
needs to support loop aligning and careful layout of loop structures
much as done by hand currently in CodePlacementOpt. Once it supports
these, and has sufficient testing and quality tuning, it should replace
both of these passes.
Thanks to Nick Lewycky and Richard Smith for help authoring & debugging
this, and to Jakob, Andy, Eric, Jim, and probably a few others I'm
forgetting for reviewing and answering all my questions. Writing
a backend pass is *sooo* much better now than it used to be. =D
llvm-svn: 142641
2011-10-21 14:46:38 +08:00
|
|
|
}
|
Completely re-write the algorithm behind MachineBlockPlacement based on
discussions with Andy. Fundamentally, the previous algorithm is both
counter productive on several fronts and prioritizing things which
aren't necessarily the most important: static branch prediction.
The new algorithm uses the existing loop CFG structure information to
walk through the CFG itself to layout blocks. It coalesces adjacent
blocks within the loop where the CFG allows based on the most likely
path taken. Finally, it topologically orders the block chains that have
been formed. This allows it to choose a (mostly) topologically valid
ordering which still priorizes fallthrough within the structural
constraints.
As a final twist in the algorithm, it does violate the CFG when it
discovers a "hot" edge, that is an edge that is more than 4x hotter than
the competing edges in the CFG. These are forcibly merged into
a fallthrough chain.
Future transformations that need te be added are rotation of loop exit
conditions to be fallthrough, and better isolation of cold block chains.
I'm also planning on adding statistics to model how well the algorithm
does at laying out blocks based on the probabilities it receives.
The old tests mostly still pass, and I have some new tests to add, but
the nested loops are still behaving very strangely. This almost seems
like working-as-intended as it rotated the exit branch to be
fallthrough, but I'm not convinced this is actually the best layout. It
is well supported by the probabilities for loops we currently get, but
those are pretty broken for nested loops, so this may change later.
llvm-svn: 142743
2011-10-23 17:18:45 +08:00
|
|
|
#endif
|
Implement a block placement pass based on the branch probability and
block frequency analyses. This differs substantially from the existing
block-placement pass in LLVM:
1) It operates on the Machine-IR in the CodeGen layer. This exposes much
more (and more precise) information and opportunities. Also, the
results are more stable due to fewer transforms ocurring after the
pass runs.
2) It uses the generalized probability and frequency analyses. These can
model static heuristics, code annotation derived heuristics as well
as eventual profile loading. By basing the optimization on the
analysis interface it can work from any (or a combination) of these
inputs.
3) It uses a more aggressive algorithm, both building chains from tho
bottom up to maximize benefit, and using an SCC-based walk to layout
chains of blocks in a profitable ordering without O(N^2) iterations
which the old pass involves.
The pass is currently gated behind a flag, and not enabled by default
because it still needs to grow some important features. Most notably, it
needs to support loop aligning and careful layout of loop structures
much as done by hand currently in CodePlacementOpt. Once it supports
these, and has sufficient testing and quality tuning, it should replace
both of these passes.
Thanks to Nick Lewycky and Richard Smith for help authoring & debugging
this, and to Jakob, Andy, Eric, Jim, and probably a few others I'm
forgetting for reviewing and answering all my questions. Writing
a backend pass is *sooo* much better now than it used to be. =D
llvm-svn: 142641
2011-10-21 14:46:38 +08:00
|
|
|
|
2018-05-01 23:54:18 +08:00
|
|
|
/// Mark a chain's successors as having one fewer preds.
|
2011-11-13 19:34:55 +08:00
|
|
|
///
|
|
|
|
/// When a chain is being merged into the "placed" chain, this routine will
|
|
|
|
/// quickly walk the successors of each block in the chain and mark them as
|
|
|
|
/// having one fewer active predecessor. It also adds any successors of this
|
Codegen: Tail-duplicate during placement.
The tail duplication pass uses an assumed layout when making duplication
decisions. This is fine, but passes up duplication opportunities that
may arise when blocks are outlined. Because we want the updated CFG to
affect subsequent placement decisions, this change must occur during
placement.
In order to achieve this goal, TailDuplicationPass is split into a
utility class, TailDuplicator, and the pass itself. The pass delegates
nearly everything to the TailDuplicator object, except for looping over
the blocks in a function. This allows the same code to be used for tail
duplication in both places.
This change, in concert with outlining optional branches, allows
triangle shaped code to perform much better, esepecially when the
taken/untaken branches are correlated, as it creates a second spine when
the tests are small enough.
Issue from previous rollback fixed, and a new test was added for that
case as well. Issue was worklist/scheduling/taildup issue in layout.
Issue from 2nd rollback fixed, with 2 additional tests. Issue was
tail merging/loop info/tail-duplication causing issue with loops that share
a header block.
Issue with early tail-duplication of blocks that branch to a fallthrough
predecessor fixed with test case: tail-dup-branch-to-fallthrough.ll
Differential revision: https://reviews.llvm.org/D18226
llvm-svn: 283934
2016-10-12 04:36:43 +08:00
|
|
|
/// chain which reach the zero-predecessor state to the appropriate worklist.
|
2011-11-13 19:20:44 +08:00
|
|
|
void MachineBlockPlacement::markChainSuccessors(
|
2017-02-04 10:26:32 +08:00
|
|
|
const BlockChain &Chain, const MachineBasicBlock *LoopHeaderBB,
|
2011-12-22 07:02:08 +08:00
|
|
|
const BlockFilterSet *BlockFilter) {
|
2011-11-13 19:20:44 +08:00
|
|
|
// Walk all the blocks in this chain, marking their successors as having
|
|
|
|
// a predecessor placed.
|
2015-03-05 11:19:05 +08:00
|
|
|
for (MachineBasicBlock *MBB : Chain) {
|
Codegen: Tail-duplicate during placement.
The tail duplication pass uses an assumed layout when making duplication
decisions. This is fine, but passes up duplication opportunities that
may arise when blocks are outlined. Because we want the updated CFG to
affect subsequent placement decisions, this change must occur during
placement.
In order to achieve this goal, TailDuplicationPass is split into a
utility class, TailDuplicator, and the pass itself. The pass delegates
nearly everything to the TailDuplicator object, except for looping over
the blocks in a function. This allows the same code to be used for tail
duplication in both places.
This change, in concert with outlining optional branches, allows
triangle shaped code to perform much better, esepecially when the
taken/untaken branches are correlated, as it creates a second spine when
the tests are small enough.
Issue from previous rollback fixed, and a new test was added for that
case as well. Issue was worklist/scheduling/taildup issue in layout.
Issue from 2nd rollback fixed, with 2 additional tests. Issue was
tail merging/loop info/tail-duplication causing issue with loops that share
a header block.
Issue with early tail-duplication of blocks that branch to a fallthrough
predecessor fixed with test case: tail-dup-branch-to-fallthrough.ll
Differential revision: https://reviews.llvm.org/D18226
llvm-svn: 283934
2016-10-12 04:36:43 +08:00
|
|
|
markBlockSuccessors(Chain, MBB, LoopHeaderBB, BlockFilter);
|
|
|
|
}
|
|
|
|
}
|
Codegen: Tail-duplicate during placement.
The tail duplication pass uses an assumed layout when making duplication
decisions. This is fine, but passes up duplication opportunities that
may arise when blocks are outlined. Because we want the updated CFG to
affect subsequent placement decisions, this change must occur during
placement.
In order to achieve this goal, TailDuplicationPass is split into a
utility class, TailDuplicator, and the pass itself. The pass delegates
nearly everything to the TailDuplicator object, except for looping over
the blocks in a function. This allows the same code to be used for tail
duplication in both places.
This change, in concert with outlining optional branches, allows
triangle shaped code to perform much better, esepecially when the
taken/untaken branches are correlated, as it creates a second spine when
the tests are small enough.
Issue from previous rollback fixed, and a new test was added for that
case as well. Issue was worklist/scheduling/taildup issue in layout.
Issue from 2nd rollback fixed, with 2 additional tests. Issue was
tail merging/loop info/tail-duplication causing issue with loops that share
a header block.
Differential revision: https://reviews.llvm.org/D18226
llvm-svn: 283619
2016-10-08 06:33:20 +08:00
|
|
|
|
2018-05-01 23:54:18 +08:00
|
|
|
/// Mark a single block's successors as having one fewer preds.
|
Codegen: Tail-duplicate during placement.
The tail duplication pass uses an assumed layout when making duplication
decisions. This is fine, but passes up duplication opportunities that
may arise when blocks are outlined. Because we want the updated CFG to
affect subsequent placement decisions, this change must occur during
placement.
In order to achieve this goal, TailDuplicationPass is split into a
utility class, TailDuplicator, and the pass itself. The pass delegates
nearly everything to the TailDuplicator object, except for looping over
the blocks in a function. This allows the same code to be used for tail
duplication in both places.
This change, in concert with outlining optional branches, allows
triangle shaped code to perform much better, esepecially when the
taken/untaken branches are correlated, as it creates a second spine when
the tests are small enough.
Issue from previous rollback fixed, and a new test was added for that
case as well. Issue was worklist/scheduling/taildup issue in layout.
Issue from 2nd rollback fixed, with 2 additional tests. Issue was
tail merging/loop info/tail-duplication causing issue with loops that share
a header block.
Issue with early tail-duplication of blocks that branch to a fallthrough
predecessor fixed with test case: tail-dup-branch-to-fallthrough.ll
Differential revision: https://reviews.llvm.org/D18226
llvm-svn: 283934
2016-10-12 04:36:43 +08:00
|
|
|
///
|
|
|
|
/// Under normal circumstances, this is only called by markChainSuccessors,
|
|
|
|
/// but if a block that was to be placed is completely tail-duplicated away,
|
|
|
|
/// and was duplicated into the chain end, we need to redo markBlockSuccessors
|
|
|
|
/// for just that block.
|
|
|
|
void MachineBlockPlacement::markBlockSuccessors(
|
2017-02-04 10:26:32 +08:00
|
|
|
const BlockChain &Chain, const MachineBasicBlock *MBB,
|
|
|
|
const MachineBasicBlock *LoopHeaderBB, const BlockFilterSet *BlockFilter) {
|
Codegen: Tail-duplicate during placement.
The tail duplication pass uses an assumed layout when making duplication
decisions. This is fine, but passes up duplication opportunities that
may arise when blocks are outlined. Because we want the updated CFG to
affect subsequent placement decisions, this change must occur during
placement.
In order to achieve this goal, TailDuplicationPass is split into a
utility class, TailDuplicator, and the pass itself. The pass delegates
nearly everything to the TailDuplicator object, except for looping over
the blocks in a function. This allows the same code to be used for tail
duplication in both places.
This change, in concert with outlining optional branches, allows
triangle shaped code to perform much better, esepecially when the
taken/untaken branches are correlated, as it creates a second spine when
the tests are small enough.
Issue from previous rollback fixed, and a new test was added for that
case as well. Issue was worklist/scheduling/taildup issue in layout.
Issue from 2nd rollback fixed, with 2 additional tests. Issue was
tail merging/loop info/tail-duplication causing issue with loops that share
a header block.
Issue with early tail-duplication of blocks that branch to a fallthrough
predecessor fixed with test case: tail-dup-branch-to-fallthrough.ll
Differential revision: https://reviews.llvm.org/D18226
llvm-svn: 283934
2016-10-12 04:36:43 +08:00
|
|
|
// Add any successors for which this is the only un-placed in-loop
|
|
|
|
// predecessor to the worklist as a viable candidate for CFG-neutral
|
|
|
|
// placement. No subsequent placement of this block will violate the CFG
|
|
|
|
// shape, so we get to use heuristics to choose a favorable placement.
|
|
|
|
for (MachineBasicBlock *Succ : MBB->successors()) {
|
|
|
|
if (BlockFilter && !BlockFilter->count(Succ))
|
|
|
|
continue;
|
|
|
|
BlockChain &SuccChain = *BlockToChain[Succ];
|
|
|
|
// Disregard edges within a fixed chain, or edges to the loop header.
|
|
|
|
if (&Chain == &SuccChain || Succ == LoopHeaderBB)
|
|
|
|
continue;
|
Codegen: Tail-duplicate during placement.
The tail duplication pass uses an assumed layout when making duplication
decisions. This is fine, but passes up duplication opportunities that
may arise when blocks are outlined. Because we want the updated CFG to
affect subsequent placement decisions, this change must occur during
placement.
In order to achieve this goal, TailDuplicationPass is split into a
utility class, TailDuplicator, and the pass itself. The pass delegates
nearly everything to the TailDuplicator object, except for looping over
the blocks in a function. This allows the same code to be used for tail
duplication in both places.
This change, in concert with outlining optional branches, allows
triangle shaped code to perform much better, esepecially when the
taken/untaken branches are correlated, as it creates a second spine when
the tests are small enough.
Issue from previous rollback fixed, and a new test was added for that
case as well. Issue was worklist/scheduling/taildup issue in layout.
Issue from 2nd rollback fixed, with 2 additional tests. Issue was
tail merging/loop info/tail-duplication causing issue with loops that share
a header block.
Issue with early tail-duplication of blocks that branch to a fallthrough
predecessor fixed with test case: tail-dup-branch-to-fallthrough.ll
Differential revision: https://reviews.llvm.org/D18226
llvm-svn: 283842
2016-10-11 09:20:33 +08:00
|
|
|
|
Codegen: Tail-duplicate during placement.
The tail duplication pass uses an assumed layout when making duplication
decisions. This is fine, but passes up duplication opportunities that
may arise when blocks are outlined. Because we want the updated CFG to
affect subsequent placement decisions, this change must occur during
placement.
In order to achieve this goal, TailDuplicationPass is split into a
utility class, TailDuplicator, and the pass itself. The pass delegates
nearly everything to the TailDuplicator object, except for looping over
the blocks in a function. This allows the same code to be used for tail
duplication in both places.
This change, in concert with outlining optional branches, allows
triangle shaped code to perform much better, esepecially when the
taken/untaken branches are correlated, as it creates a second spine when
the tests are small enough.
Issue from previous rollback fixed, and a new test was added for that
case as well. Issue was worklist/scheduling/taildup issue in layout.
Issue from 2nd rollback fixed, with 2 additional tests. Issue was
tail merging/loop info/tail-duplication causing issue with loops that share
a header block.
Issue with early tail-duplication of blocks that branch to a fallthrough
predecessor fixed with test case: tail-dup-branch-to-fallthrough.ll
Differential revision: https://reviews.llvm.org/D18226
llvm-svn: 283934
2016-10-12 04:36:43 +08:00
|
|
|
// This is a cross-chain edge that is within the loop, so decrement the
|
|
|
|
// loop predecessor count of the destination chain.
|
|
|
|
if (SuccChain.UnscheduledPredecessors == 0 ||
|
|
|
|
--SuccChain.UnscheduledPredecessors > 0)
|
|
|
|
continue;
|
|
|
|
|
|
|
|
auto *NewBB = *SuccChain.begin();
|
|
|
|
if (NewBB->isEHPad())
|
|
|
|
EHPadWorkList.push_back(NewBB);
|
|
|
|
else
|
|
|
|
BlockWorkList.push_back(NewBB);
|
2011-11-13 19:20:44 +08:00
|
|
|
}
|
Implement a block placement pass based on the branch probability and
block frequency analyses. This differs substantially from the existing
block-placement pass in LLVM:
1) It operates on the Machine-IR in the CodeGen layer. This exposes much
more (and more precise) information and opportunities. Also, the
results are more stable due to fewer transforms ocurring after the
pass runs.
2) It uses the generalized probability and frequency analyses. These can
model static heuristics, code annotation derived heuristics as well
as eventual profile loading. By basing the optimization on the
analysis interface it can work from any (or a combination) of these
inputs.
3) It uses a more aggressive algorithm, both building chains from tho
bottom up to maximize benefit, and using an SCC-based walk to layout
chains of blocks in a profitable ordering without O(N^2) iterations
which the old pass involves.
The pass is currently gated behind a flag, and not enabled by default
because it still needs to grow some important features. Most notably, it
needs to support loop aligning and careful layout of loop structures
much as done by hand currently in CodePlacementOpt. Once it supports
these, and has sufficient testing and quality tuning, it should replace
both of these passes.
Thanks to Nick Lewycky and Richard Smith for help authoring & debugging
this, and to Jakob, Andy, Eric, Jim, and probably a few others I'm
forgetting for reviewing and answering all my questions. Writing
a backend pass is *sooo* much better now than it used to be. =D
llvm-svn: 142641
2011-10-21 14:46:38 +08:00
|
|
|
}
|
|
|
|
|
2016-06-12 02:35:40 +08:00
|
|
|
/// This helper function collects the set of successors of block
|
|
|
|
/// \p BB that are allowed to be its layout successors, and return
|
|
|
|
/// the total branch probability of edges from \p BB to those
|
|
|
|
/// blocks.
|
|
|
|
BranchProbability MachineBlockPlacement::collectViableSuccessors(
|
2017-02-04 10:26:32 +08:00
|
|
|
const MachineBasicBlock *BB, const BlockChain &Chain,
|
|
|
|
const BlockFilterSet *BlockFilter,
|
2016-06-12 02:35:40 +08:00
|
|
|
SmallVector<MachineBasicBlock *, 4> &Successors) {
|
2015-12-01 13:29:22 +08:00
|
|
|
// Adjust edge probabilities by excluding edges pointing to blocks that is
|
|
|
|
// either not in BlockFilter or is already in the current chain. Consider the
|
|
|
|
// following CFG:
|
Improving edge probabilities computation when choosing the best successor in machine block placement.
When looking for the best successor from the outer loop for a block
belonging to an inner loop, the edge probability computation can be
improved so that edges in the inner loop are ignored. For example,
suppose we are building chains for the non-loop part of the following
code, and looking for B1's best successor. Assume the true body is very
hot, then B3 should be the best candidate. However, because of the
existence of the back edge from B1 to B0, the probability from B1 to B3
can be very small, preventing B3 to be its successor. In this patch, when
computing the probability of the edge from B1 to B3, the weight on the
back edge B1->B0 is ignored, so that B1->B3 will have 100% probability.
if (...)
do {
B0;
... // some branches
B1;
} while(...);
else
B2;
B3;
Differential revision: http://reviews.llvm.org/D10825
llvm-svn: 253414
2015-11-18 08:52:52 +08:00
|
|
|
//
|
|
|
|
// --->A
|
|
|
|
// | / \
|
|
|
|
// | B C
|
|
|
|
// | \ / \
|
|
|
|
// ----D E
|
|
|
|
//
|
|
|
|
// Assume A->C is very hot (>90%), and C->D has a 50% probability, then after
|
|
|
|
// A->C is chosen as a fall-through, D won't be selected as a successor of C
|
|
|
|
// due to CFG constraint (the probability of C->D is not greater than
|
2017-06-16 20:23:04 +08:00
|
|
|
// HotProb to break topo-order). If we exclude E that is not in BlockFilter
|
|
|
|
// when calculating the probability of C->D, D will be selected and we
|
2016-06-12 02:35:40 +08:00
|
|
|
// will get A C D B as the layout of this loop.
|
2015-12-01 13:29:22 +08:00
|
|
|
auto AdjustedSumProb = BranchProbability::getOne();
|
2015-02-18 16:19:16 +08:00
|
|
|
for (MachineBasicBlock *Succ : BB->successors()) {
|
Improving edge probabilities computation when choosing the best successor in machine block placement.
When looking for the best successor from the outer loop for a block
belonging to an inner loop, the edge probability computation can be
improved so that edges in the inner loop are ignored. For example,
suppose we are building chains for the non-loop part of the following
code, and looking for B1's best successor. Assume the true body is very
hot, then B3 should be the best candidate. However, because of the
existence of the back edge from B1 to B0, the probability from B1 to B3
can be very small, preventing B3 to be its successor. In this patch, when
computing the probability of the edge from B1 to B3, the weight on the
back edge B1->B0 is ignored, so that B1->B3 will have 100% probability.
if (...)
do {
B0;
... // some branches
B1;
} while(...);
else
B2;
B3;
Differential revision: http://reviews.llvm.org/D10825
llvm-svn: 253414
2015-11-18 08:52:52 +08:00
|
|
|
bool SkipSucc = false;
|
2016-04-08 05:29:39 +08:00
|
|
|
if (Succ->isEHPad() || (BlockFilter && !BlockFilter->count(Succ))) {
|
Improving edge probabilities computation when choosing the best successor in machine block placement.
When looking for the best successor from the outer loop for a block
belonging to an inner loop, the edge probability computation can be
improved so that edges in the inner loop are ignored. For example,
suppose we are building chains for the non-loop part of the following
code, and looking for B1's best successor. Assume the true body is very
hot, then B3 should be the best candidate. However, because of the
existence of the back edge from B1 to B0, the probability from B1 to B3
can be very small, preventing B3 to be its successor. In this patch, when
computing the probability of the edge from B1 to B3, the weight on the
back edge B1->B0 is ignored, so that B1->B3 will have 100% probability.
if (...)
do {
B0;
... // some branches
B1;
} while(...);
else
B2;
B3;
Differential revision: http://reviews.llvm.org/D10825
llvm-svn: 253414
2015-11-18 08:52:52 +08:00
|
|
|
SkipSucc = true;
|
|
|
|
} else {
|
|
|
|
BlockChain *SuccChain = BlockToChain[Succ];
|
|
|
|
if (SuccChain == &Chain) {
|
|
|
|
SkipSucc = true;
|
|
|
|
} else if (Succ != *SuccChain->begin()) {
|
2018-05-14 20:53:11 +08:00
|
|
|
LLVM_DEBUG(dbgs() << " " << getBlockName(Succ)
|
|
|
|
<< " -> Mid chain!\n");
|
Improving edge probabilities computation when choosing the best successor in machine block placement.
When looking for the best successor from the outer loop for a block
belonging to an inner loop, the edge probability computation can be
improved so that edges in the inner loop are ignored. For example,
suppose we are building chains for the non-loop part of the following
code, and looking for B1's best successor. Assume the true body is very
hot, then B3 should be the best candidate. However, because of the
existence of the back edge from B1 to B0, the probability from B1 to B3
can be very small, preventing B3 to be its successor. In this patch, when
computing the probability of the edge from B1 to B3, the weight on the
back edge B1->B0 is ignored, so that B1->B3 will have 100% probability.
if (...)
do {
B0;
... // some branches
B1;
} while(...);
else
B2;
B3;
Differential revision: http://reviews.llvm.org/D10825
llvm-svn: 253414
2015-11-18 08:52:52 +08:00
|
|
|
continue;
|
|
|
|
}
|
2011-11-19 18:26:02 +08:00
|
|
|
}
|
Improving edge probabilities computation when choosing the best successor in machine block placement.
When looking for the best successor from the outer loop for a block
belonging to an inner loop, the edge probability computation can be
improved so that edges in the inner loop are ignored. For example,
suppose we are building chains for the non-loop part of the following
code, and looking for B1's best successor. Assume the true body is very
hot, then B3 should be the best candidate. However, because of the
existence of the back edge from B1 to B0, the probability from B1 to B3
can be very small, preventing B3 to be its successor. In this patch, when
computing the probability of the edge from B1 to B3, the weight on the
back edge B1->B0 is ignored, so that B1->B3 will have 100% probability.
if (...)
do {
B0;
... // some branches
B1;
} while(...);
else
B2;
B3;
Differential revision: http://reviews.llvm.org/D10825
llvm-svn: 253414
2015-11-18 08:52:52 +08:00
|
|
|
if (SkipSucc)
|
2015-12-01 13:29:22 +08:00
|
|
|
AdjustedSumProb -= MBPI->getEdgeProbability(BB, Succ);
|
Improving edge probabilities computation when choosing the best successor in machine block placement.
When looking for the best successor from the outer loop for a block
belonging to an inner loop, the edge probability computation can be
improved so that edges in the inner loop are ignored. For example,
suppose we are building chains for the non-loop part of the following
code, and looking for B1's best successor. Assume the true body is very
hot, then B3 should be the best candidate. However, because of the
existence of the back edge from B1 to B0, the probability from B1 to B3
can be very small, preventing B3 to be its successor. In this patch, when
computing the probability of the edge from B1 to B3, the weight on the
back edge B1->B0 is ignored, so that B1->B3 will have 100% probability.
if (...)
do {
B0;
... // some branches
B1;
} while(...);
else
B2;
B3;
Differential revision: http://reviews.llvm.org/D10825
llvm-svn: 253414
2015-11-18 08:52:52 +08:00
|
|
|
else
|
|
|
|
Successors.push_back(Succ);
|
|
|
|
}
|
2011-11-13 19:34:53 +08:00
|
|
|
|
2016-06-12 02:35:40 +08:00
|
|
|
return AdjustedSumProb;
|
|
|
|
}
|
|
|
|
|
|
|
|
/// The helper function returns the branch probability that is adjusted
|
|
|
|
/// or normalized over the new total \p AdjustedSumProb.
|
|
|
|
static BranchProbability
|
|
|
|
getAdjustedProbability(BranchProbability OrigProb,
|
|
|
|
BranchProbability AdjustedSumProb) {
|
|
|
|
BranchProbability SuccProb;
|
|
|
|
uint32_t SuccProbN = OrigProb.getNumerator();
|
|
|
|
uint32_t SuccProbD = AdjustedSumProb.getNumerator();
|
|
|
|
if (SuccProbN >= SuccProbD)
|
|
|
|
SuccProb = BranchProbability::getOne();
|
|
|
|
else
|
|
|
|
SuccProb = BranchProbability(SuccProbN, SuccProbD);
|
|
|
|
|
|
|
|
return SuccProb;
|
|
|
|
}
|
|
|
|
|
Codegen: Make chains from trellis-shaped CFGs
Lay out trellis-shaped CFGs optimally.
A trellis of the shape below:
A B
|\ /|
| \ / |
| X |
| / \ |
|/ \|
C D
would be laid out A; B->C ; D by the current layout algorithm. Now we identify
trellises and lay them out either A->C; B->D or A->D; B->C. This scales with an
increasing number of predecessors. A trellis is a a group of 2 or more
predecessor blocks that all have the same successors.
because of this we can tail duplicate to extend existing trellises.
As an example consider the following CFG:
B D F H
/ \ / \ / \ / \
A---C---E---G---Ret
Where A,C,E,G are all small (Currently 2 instructions).
The CFG preserving layout is then A,B,C,D,E,F,G,H,Ret.
The current code will copy C into B, E into D and G into F and yield the layout
A,C,B(C),E,D(E),F(G),G,H,ret
define void @straight_test(i32 %tag) {
entry:
br label %test1
test1: ; A
%tagbit1 = and i32 %tag, 1
%tagbit1eq0 = icmp eq i32 %tagbit1, 0
br i1 %tagbit1eq0, label %test2, label %optional1
optional1: ; B
call void @a()
br label %test2
test2: ; C
%tagbit2 = and i32 %tag, 2
%tagbit2eq0 = icmp eq i32 %tagbit2, 0
br i1 %tagbit2eq0, label %test3, label %optional2
optional2: ; D
call void @b()
br label %test3
test3: ; E
%tagbit3 = and i32 %tag, 4
%tagbit3eq0 = icmp eq i32 %tagbit3, 0
br i1 %tagbit3eq0, label %test4, label %optional3
optional3: ; F
call void @c()
br label %test4
test4: ; G
%tagbit4 = and i32 %tag, 8
%tagbit4eq0 = icmp eq i32 %tagbit4, 0
br i1 %tagbit4eq0, label %exit, label %optional4
optional4: ; H
call void @d()
br label %exit
exit:
ret void
}
here is the layout after D27742:
straight_test: # @straight_test
; ... Prologue elided
; BB#0: # %entry ; A (merged with test1)
; ... More prologue elided
mr 30, 3
andi. 3, 30, 1
bc 12, 1, .LBB0_2
; BB#1: # %test2 ; C
rlwinm. 3, 30, 0, 30, 30
beq 0, .LBB0_3
b .LBB0_4
.LBB0_2: # %optional1 ; B (copy of C)
bl a
nop
rlwinm. 3, 30, 0, 30, 30
bne 0, .LBB0_4
.LBB0_3: # %test3 ; E
rlwinm. 3, 30, 0, 29, 29
beq 0, .LBB0_5
b .LBB0_6
.LBB0_4: # %optional2 ; D (copy of E)
bl b
nop
rlwinm. 3, 30, 0, 29, 29
bne 0, .LBB0_6
.LBB0_5: # %test4 ; G
rlwinm. 3, 30, 0, 28, 28
beq 0, .LBB0_8
b .LBB0_7
.LBB0_6: # %optional3 ; F (copy of G)
bl c
nop
rlwinm. 3, 30, 0, 28, 28
beq 0, .LBB0_8
.LBB0_7: # %optional4 ; H
bl d
nop
.LBB0_8: # %exit ; Ret
ld 30, 96(1) # 8-byte Folded Reload
addi 1, 1, 112
ld 0, 16(1)
mtlr 0
blr
The tail-duplication has produced some benefit, but it has also produced a
trellis which is not laid out optimally. With this patch, we improve the layouts
of such trellises, and decrease the cost calculation for tail-duplication
accordingly.
This patch produces the layout A,C,E,G,B,D,F,H,Ret. This layout does have
back edges, which is a negative, but it has a bigger compensating
positive, which is that it handles the case where there are long strings
of skipped blocks much better than the original layout. Both layouts
handle runs of executed blocks equally well. Branch prediction also
improves if there is any correlation between subsequent optional blocks.
Here is the resulting concrete layout:
straight_test: # @straight_test
; BB#0: # %entry ; A (merged with test1)
mr 30, 3
andi. 3, 30, 1
bc 12, 1, .LBB0_4
; BB#1: # %test2 ; C
rlwinm. 3, 30, 0, 30, 30
bne 0, .LBB0_5
.LBB0_2: # %test3 ; E
rlwinm. 3, 30, 0, 29, 29
bne 0, .LBB0_6
.LBB0_3: # %test4 ; G
rlwinm. 3, 30, 0, 28, 28
bne 0, .LBB0_7
b .LBB0_8
.LBB0_4: # %optional1 ; B (Copy of C)
bl a
nop
rlwinm. 3, 30, 0, 30, 30
beq 0, .LBB0_2
.LBB0_5: # %optional2 ; D (Copy of E)
bl b
nop
rlwinm. 3, 30, 0, 29, 29
beq 0, .LBB0_3
.LBB0_6: # %optional3 ; F (Copy of G)
bl c
nop
rlwinm. 3, 30, 0, 28, 28
beq 0, .LBB0_8
.LBB0_7: # %optional4 ; H
bl d
nop
.LBB0_8: # %exit
Differential Revision: https://reviews.llvm.org/D28522
llvm-svn: 295223
2017-02-16 03:49:14 +08:00
|
|
|
/// Check if \p BB has exactly the successors in \p Successors.
|
|
|
|
static bool
|
|
|
|
hasSameSuccessors(MachineBasicBlock &BB,
|
|
|
|
SmallPtrSetImpl<const MachineBasicBlock *> &Successors) {
|
|
|
|
if (BB.succ_size() != Successors.size())
|
|
|
|
return false;
|
|
|
|
// We don't want to count self-loops
|
|
|
|
if (Successors.count(&BB))
|
|
|
|
return false;
|
|
|
|
for (MachineBasicBlock *Succ : BB.successors())
|
|
|
|
if (!Successors.count(Succ))
|
|
|
|
return false;
|
|
|
|
return true;
|
|
|
|
}
|
|
|
|
|
|
|
|
/// Check if a block should be tail duplicated to increase fallthrough
|
|
|
|
/// opportunities.
|
2017-02-01 07:48:32 +08:00
|
|
|
/// \p BB Block to check.
|
|
|
|
bool MachineBlockPlacement::shouldTailDuplicate(MachineBasicBlock *BB) {
|
|
|
|
// Blocks with single successors don't create additional fallthrough
|
|
|
|
// opportunities. Don't duplicate them. TODO: When conditional exits are
|
|
|
|
// analyzable, allow them to be duplicated.
|
|
|
|
bool IsSimple = TailDup.isSimpleBB(BB);
|
|
|
|
|
|
|
|
if (BB->succ_size() == 1)
|
|
|
|
return false;
|
|
|
|
return TailDup.shouldTailDuplicate(IsSimple, *BB);
|
|
|
|
}
|
|
|
|
|
|
|
|
/// Compare 2 BlockFrequency's with a small penalty for \p A.
|
|
|
|
/// In order to be conservative, we apply a X% penalty to account for
|
|
|
|
/// increased icache pressure and static heuristics. For small frequencies
|
|
|
|
/// we use only the numerators to improve accuracy. For simplicity, we assume the
|
|
|
|
/// penalty is less than 100%
|
|
|
|
/// TODO(iteratee): Use 64-bit fixed point edge frequencies everywhere.
|
|
|
|
static bool greaterWithBias(BlockFrequency A, BlockFrequency B,
|
|
|
|
uint64_t EntryFreq) {
|
|
|
|
BranchProbability ThresholdProb(TailDupPlacementPenalty, 100);
|
|
|
|
BlockFrequency Gain = A - B;
|
|
|
|
return (Gain / ThresholdProb).getFrequency() >= EntryFreq;
|
|
|
|
}
|
|
|
|
|
|
|
|
/// Check the edge frequencies to see if tail duplication will increase
|
|
|
|
/// fallthroughs. It only makes sense to call this function when
|
|
|
|
/// \p Succ would not be chosen otherwise. Tail duplication of \p Succ is
|
|
|
|
/// always locally profitable if we would have picked \p Succ without
|
|
|
|
/// considering duplication.
|
|
|
|
bool MachineBlockPlacement::isProfitableToTailDup(
|
2017-02-04 10:26:32 +08:00
|
|
|
const MachineBasicBlock *BB, const MachineBasicBlock *Succ,
|
2017-02-01 07:48:32 +08:00
|
|
|
BranchProbability QProb,
|
2017-02-04 10:26:32 +08:00
|
|
|
const BlockChain &Chain, const BlockFilterSet *BlockFilter) {
|
2017-02-01 07:48:32 +08:00
|
|
|
// We need to do a probability calculation to make sure this is profitable.
|
|
|
|
// First: does succ have a successor that post-dominates? This affects the
|
|
|
|
// calculation. The 2 relevant cases are:
|
|
|
|
// BB BB
|
|
|
|
// | \Qout | \Qout
|
|
|
|
// P| C |P C
|
|
|
|
// = C' = C'
|
|
|
|
// | /Qin | /Qin
|
|
|
|
// | / | /
|
|
|
|
// Succ Succ
|
|
|
|
// / \ | \ V
|
|
|
|
// U/ =V |U \
|
|
|
|
// / \ = D
|
|
|
|
// D E | /
|
|
|
|
// | /
|
|
|
|
// |/
|
|
|
|
// PDom
|
|
|
|
// '=' : Branch taken for that CFG edge
|
|
|
|
// In the second case, Placing Succ while duplicating it into C prevents the
|
|
|
|
// fallthrough of Succ into either D or PDom, because they now have C as an
|
|
|
|
// unplaced predecessor
|
|
|
|
|
|
|
|
// Start by figuring out which case we fall into
|
|
|
|
MachineBasicBlock *PDom = nullptr;
|
|
|
|
SmallVector<MachineBasicBlock *, 4> SuccSuccs;
|
|
|
|
// Only scan the relevant successors
|
|
|
|
auto AdjustedSuccSumProb =
|
|
|
|
collectViableSuccessors(Succ, Chain, BlockFilter, SuccSuccs);
|
|
|
|
BranchProbability PProb = MBPI->getEdgeProbability(BB, Succ);
|
|
|
|
auto BBFreq = MBFI->getBlockFreq(BB);
|
|
|
|
auto SuccFreq = MBFI->getBlockFreq(Succ);
|
|
|
|
BlockFrequency P = BBFreq * PProb;
|
|
|
|
BlockFrequency Qout = BBFreq * QProb;
|
|
|
|
uint64_t EntryFreq = MBFI->getEntryFreq();
|
|
|
|
// If there are no more successors, it is profitable to copy, as it strictly
|
|
|
|
// increases fallthrough.
|
|
|
|
if (SuccSuccs.size() == 0)
|
|
|
|
return greaterWithBias(P, Qout, EntryFreq);
|
|
|
|
|
|
|
|
auto BestSuccSucc = BranchProbability::getZero();
|
|
|
|
// Find the PDom or the best Succ if no PDom exists.
|
|
|
|
for (MachineBasicBlock *SuccSucc : SuccSuccs) {
|
|
|
|
auto Prob = MBPI->getEdgeProbability(Succ, SuccSucc);
|
|
|
|
if (Prob > BestSuccSucc)
|
|
|
|
BestSuccSucc = Prob;
|
|
|
|
if (PDom == nullptr)
|
|
|
|
if (MPDT->dominates(SuccSucc, Succ)) {
|
|
|
|
PDom = SuccSucc;
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
// For the comparisons, we need to know Succ's best incoming edge that isn't
|
|
|
|
// from BB.
|
|
|
|
auto SuccBestPred = BlockFrequency(0);
|
|
|
|
for (MachineBasicBlock *SuccPred : Succ->predecessors()) {
|
|
|
|
if (SuccPred == Succ || SuccPred == BB
|
|
|
|
|| BlockToChain[SuccPred] == &Chain
|
|
|
|
|| (BlockFilter && !BlockFilter->count(SuccPred)))
|
|
|
|
continue;
|
|
|
|
auto Freq = MBFI->getBlockFreq(SuccPred)
|
|
|
|
* MBPI->getEdgeProbability(SuccPred, Succ);
|
|
|
|
if (Freq > SuccBestPred)
|
|
|
|
SuccBestPred = Freq;
|
|
|
|
}
|
|
|
|
// Qin is Succ's best unplaced incoming edge that isn't BB
|
|
|
|
BlockFrequency Qin = SuccBestPred;
|
|
|
|
// If it doesn't have a post-dominating successor, here is the calculation:
|
|
|
|
// BB BB
|
|
|
|
// | \Qout | \
|
|
|
|
// P| C | =
|
|
|
|
// = C' | C
|
|
|
|
// | /Qin | |
|
|
|
|
// | / | C' (+Succ)
|
|
|
|
// Succ Succ /|
|
|
|
|
// / \ | \/ |
|
Codegen: Make chains from trellis-shaped CFGs
Lay out trellis-shaped CFGs optimally.
A trellis of the shape below:
A B
|\ /|
| \ / |
| X |
| / \ |
|/ \|
C D
would be laid out A; B->C ; D by the current layout algorithm. Now we identify
trellises and lay them out either A->C; B->D or A->D; B->C. This scales with an
increasing number of predecessors. A trellis is a a group of 2 or more
predecessor blocks that all have the same successors.
because of this we can tail duplicate to extend existing trellises.
As an example consider the following CFG:
B D F H
/ \ / \ / \ / \
A---C---E---G---Ret
Where A,C,E,G are all small (Currently 2 instructions).
The CFG preserving layout is then A,B,C,D,E,F,G,H,Ret.
The current code will copy C into B, E into D and G into F and yield the layout
A,C,B(C),E,D(E),F(G),G,H,ret
define void @straight_test(i32 %tag) {
entry:
br label %test1
test1: ; A
%tagbit1 = and i32 %tag, 1
%tagbit1eq0 = icmp eq i32 %tagbit1, 0
br i1 %tagbit1eq0, label %test2, label %optional1
optional1: ; B
call void @a()
br label %test2
test2: ; C
%tagbit2 = and i32 %tag, 2
%tagbit2eq0 = icmp eq i32 %tagbit2, 0
br i1 %tagbit2eq0, label %test3, label %optional2
optional2: ; D
call void @b()
br label %test3
test3: ; E
%tagbit3 = and i32 %tag, 4
%tagbit3eq0 = icmp eq i32 %tagbit3, 0
br i1 %tagbit3eq0, label %test4, label %optional3
optional3: ; F
call void @c()
br label %test4
test4: ; G
%tagbit4 = and i32 %tag, 8
%tagbit4eq0 = icmp eq i32 %tagbit4, 0
br i1 %tagbit4eq0, label %exit, label %optional4
optional4: ; H
call void @d()
br label %exit
exit:
ret void
}
here is the layout after D27742:
straight_test: # @straight_test
; ... Prologue elided
; BB#0: # %entry ; A (merged with test1)
; ... More prologue elided
mr 30, 3
andi. 3, 30, 1
bc 12, 1, .LBB0_2
; BB#1: # %test2 ; C
rlwinm. 3, 30, 0, 30, 30
beq 0, .LBB0_3
b .LBB0_4
.LBB0_2: # %optional1 ; B (copy of C)
bl a
nop
rlwinm. 3, 30, 0, 30, 30
bne 0, .LBB0_4
.LBB0_3: # %test3 ; E
rlwinm. 3, 30, 0, 29, 29
beq 0, .LBB0_5
b .LBB0_6
.LBB0_4: # %optional2 ; D (copy of E)
bl b
nop
rlwinm. 3, 30, 0, 29, 29
bne 0, .LBB0_6
.LBB0_5: # %test4 ; G
rlwinm. 3, 30, 0, 28, 28
beq 0, .LBB0_8
b .LBB0_7
.LBB0_6: # %optional3 ; F (copy of G)
bl c
nop
rlwinm. 3, 30, 0, 28, 28
beq 0, .LBB0_8
.LBB0_7: # %optional4 ; H
bl d
nop
.LBB0_8: # %exit ; Ret
ld 30, 96(1) # 8-byte Folded Reload
addi 1, 1, 112
ld 0, 16(1)
mtlr 0
blr
The tail-duplication has produced some benefit, but it has also produced a
trellis which is not laid out optimally. With this patch, we improve the layouts
of such trellises, and decrease the cost calculation for tail-duplication
accordingly.
This patch produces the layout A,C,E,G,B,D,F,H,Ret. This layout does have
back edges, which is a negative, but it has a bigger compensating
positive, which is that it handles the case where there are long strings
of skipped blocks much better than the original layout. Both layouts
handle runs of executed blocks equally well. Branch prediction also
improves if there is any correlation between subsequent optional blocks.
Here is the resulting concrete layout:
straight_test: # @straight_test
; BB#0: # %entry ; A (merged with test1)
mr 30, 3
andi. 3, 30, 1
bc 12, 1, .LBB0_4
; BB#1: # %test2 ; C
rlwinm. 3, 30, 0, 30, 30
bne 0, .LBB0_5
.LBB0_2: # %test3 ; E
rlwinm. 3, 30, 0, 29, 29
bne 0, .LBB0_6
.LBB0_3: # %test4 ; G
rlwinm. 3, 30, 0, 28, 28
bne 0, .LBB0_7
b .LBB0_8
.LBB0_4: # %optional1 ; B (Copy of C)
bl a
nop
rlwinm. 3, 30, 0, 30, 30
beq 0, .LBB0_2
.LBB0_5: # %optional2 ; D (Copy of E)
bl b
nop
rlwinm. 3, 30, 0, 29, 29
beq 0, .LBB0_3
.LBB0_6: # %optional3 ; F (Copy of G)
bl c
nop
rlwinm. 3, 30, 0, 28, 28
beq 0, .LBB0_8
.LBB0_7: # %optional4 ; H
bl d
nop
.LBB0_8: # %exit
Differential Revision: https://reviews.llvm.org/D28522
llvm-svn: 295223
2017-02-16 03:49:14 +08:00
|
|
|
// U/ =V | == |
|
2017-02-01 07:48:32 +08:00
|
|
|
// / \ | / \|
|
|
|
|
// D E D E
|
|
|
|
// '=' : Branch taken for that CFG edge
|
|
|
|
// Cost in the first case is: P + V
|
|
|
|
// For this calculation, we always assume P > Qout. If Qout > P
|
|
|
|
// The result of this function will be ignored at the caller.
|
2017-04-11 06:28:18 +08:00
|
|
|
// Let F = SuccFreq - Qin
|
|
|
|
// Cost in the second case is: Qout + min(Qin, F) * U + max(Qin, F) * V
|
2017-02-01 07:48:32 +08:00
|
|
|
|
|
|
|
if (PDom == nullptr || !Succ->isSuccessor(PDom)) {
|
|
|
|
BranchProbability UProb = BestSuccSucc;
|
|
|
|
BranchProbability VProb = AdjustedSuccSumProb - UProb;
|
2017-04-11 06:28:18 +08:00
|
|
|
BlockFrequency F = SuccFreq - Qin;
|
2017-02-01 07:48:32 +08:00
|
|
|
BlockFrequency V = SuccFreq * VProb;
|
2017-04-11 06:28:18 +08:00
|
|
|
BlockFrequency QinU = std::min(Qin, F) * UProb;
|
2017-02-01 07:48:32 +08:00
|
|
|
BlockFrequency BaseCost = P + V;
|
2017-04-11 06:28:18 +08:00
|
|
|
BlockFrequency DupCost = Qout + QinU + std::max(Qin, F) * VProb;
|
2017-02-01 07:48:32 +08:00
|
|
|
return greaterWithBias(BaseCost, DupCost, EntryFreq);
|
|
|
|
}
|
|
|
|
BranchProbability UProb = MBPI->getEdgeProbability(Succ, PDom);
|
|
|
|
BranchProbability VProb = AdjustedSuccSumProb - UProb;
|
|
|
|
BlockFrequency U = SuccFreq * UProb;
|
|
|
|
BlockFrequency V = SuccFreq * VProb;
|
2017-04-11 06:28:18 +08:00
|
|
|
BlockFrequency F = SuccFreq - Qin;
|
2017-02-01 07:48:32 +08:00
|
|
|
// If there is a post-dominating successor, here is the calculation:
|
|
|
|
// BB BB BB BB
|
2017-04-11 06:28:18 +08:00
|
|
|
// | \Qout | \ | \Qout | \
|
|
|
|
// |P C | = |P C | =
|
|
|
|
// = C' |P C = C' |P C
|
|
|
|
// | /Qin | | | /Qin | |
|
|
|
|
// | / | C' (+Succ) | / | C' (+Succ)
|
|
|
|
// Succ Succ /| Succ Succ /|
|
|
|
|
// | \ V | \/ | | \ V | \/ |
|
|
|
|
// |U \ |U /\ =? |U = |U /\ |
|
|
|
|
// = D = = =?| | D | = =|
|
|
|
|
// | / |/ D | / |/ D
|
|
|
|
// | / | / | = | /
|
|
|
|
// |/ | / |/ | =
|
|
|
|
// Dom Dom Dom Dom
|
2017-02-01 07:48:32 +08:00
|
|
|
// '=' : Branch taken for that CFG edge
|
|
|
|
// The cost for taken branches in the first case is P + U
|
2017-04-11 06:28:18 +08:00
|
|
|
// Let F = SuccFreq - Qin
|
2017-02-01 07:48:32 +08:00
|
|
|
// The cost in the second case (assuming independence), given the layout:
|
2017-04-11 06:28:18 +08:00
|
|
|
// BB, Succ, (C+Succ), D, Dom or the layout:
|
|
|
|
// BB, Succ, D, Dom, (C+Succ)
|
|
|
|
// is Qout + max(F, Qin) * U + min(F, Qin)
|
Codegen: Make chains from trellis-shaped CFGs
Lay out trellis-shaped CFGs optimally.
A trellis of the shape below:
A B
|\ /|
| \ / |
| X |
| / \ |
|/ \|
C D
would be laid out A; B->C ; D by the current layout algorithm. Now we identify
trellises and lay them out either A->C; B->D or A->D; B->C. This scales with an
increasing number of predecessors. A trellis is a a group of 2 or more
predecessor blocks that all have the same successors.
because of this we can tail duplicate to extend existing trellises.
As an example consider the following CFG:
B D F H
/ \ / \ / \ / \
A---C---E---G---Ret
Where A,C,E,G are all small (Currently 2 instructions).
The CFG preserving layout is then A,B,C,D,E,F,G,H,Ret.
The current code will copy C into B, E into D and G into F and yield the layout
A,C,B(C),E,D(E),F(G),G,H,ret
define void @straight_test(i32 %tag) {
entry:
br label %test1
test1: ; A
%tagbit1 = and i32 %tag, 1
%tagbit1eq0 = icmp eq i32 %tagbit1, 0
br i1 %tagbit1eq0, label %test2, label %optional1
optional1: ; B
call void @a()
br label %test2
test2: ; C
%tagbit2 = and i32 %tag, 2
%tagbit2eq0 = icmp eq i32 %tagbit2, 0
br i1 %tagbit2eq0, label %test3, label %optional2
optional2: ; D
call void @b()
br label %test3
test3: ; E
%tagbit3 = and i32 %tag, 4
%tagbit3eq0 = icmp eq i32 %tagbit3, 0
br i1 %tagbit3eq0, label %test4, label %optional3
optional3: ; F
call void @c()
br label %test4
test4: ; G
%tagbit4 = and i32 %tag, 8
%tagbit4eq0 = icmp eq i32 %tagbit4, 0
br i1 %tagbit4eq0, label %exit, label %optional4
optional4: ; H
call void @d()
br label %exit
exit:
ret void
}
here is the layout after D27742:
straight_test: # @straight_test
; ... Prologue elided
; BB#0: # %entry ; A (merged with test1)
; ... More prologue elided
mr 30, 3
andi. 3, 30, 1
bc 12, 1, .LBB0_2
; BB#1: # %test2 ; C
rlwinm. 3, 30, 0, 30, 30
beq 0, .LBB0_3
b .LBB0_4
.LBB0_2: # %optional1 ; B (copy of C)
bl a
nop
rlwinm. 3, 30, 0, 30, 30
bne 0, .LBB0_4
.LBB0_3: # %test3 ; E
rlwinm. 3, 30, 0, 29, 29
beq 0, .LBB0_5
b .LBB0_6
.LBB0_4: # %optional2 ; D (copy of E)
bl b
nop
rlwinm. 3, 30, 0, 29, 29
bne 0, .LBB0_6
.LBB0_5: # %test4 ; G
rlwinm. 3, 30, 0, 28, 28
beq 0, .LBB0_8
b .LBB0_7
.LBB0_6: # %optional3 ; F (copy of G)
bl c
nop
rlwinm. 3, 30, 0, 28, 28
beq 0, .LBB0_8
.LBB0_7: # %optional4 ; H
bl d
nop
.LBB0_8: # %exit ; Ret
ld 30, 96(1) # 8-byte Folded Reload
addi 1, 1, 112
ld 0, 16(1)
mtlr 0
blr
The tail-duplication has produced some benefit, but it has also produced a
trellis which is not laid out optimally. With this patch, we improve the layouts
of such trellises, and decrease the cost calculation for tail-duplication
accordingly.
This patch produces the layout A,C,E,G,B,D,F,H,Ret. This layout does have
back edges, which is a negative, but it has a bigger compensating
positive, which is that it handles the case where there are long strings
of skipped blocks much better than the original layout. Both layouts
handle runs of executed blocks equally well. Branch prediction also
improves if there is any correlation between subsequent optional blocks.
Here is the resulting concrete layout:
straight_test: # @straight_test
; BB#0: # %entry ; A (merged with test1)
mr 30, 3
andi. 3, 30, 1
bc 12, 1, .LBB0_4
; BB#1: # %test2 ; C
rlwinm. 3, 30, 0, 30, 30
bne 0, .LBB0_5
.LBB0_2: # %test3 ; E
rlwinm. 3, 30, 0, 29, 29
bne 0, .LBB0_6
.LBB0_3: # %test4 ; G
rlwinm. 3, 30, 0, 28, 28
bne 0, .LBB0_7
b .LBB0_8
.LBB0_4: # %optional1 ; B (Copy of C)
bl a
nop
rlwinm. 3, 30, 0, 30, 30
beq 0, .LBB0_2
.LBB0_5: # %optional2 ; D (Copy of E)
bl b
nop
rlwinm. 3, 30, 0, 29, 29
beq 0, .LBB0_3
.LBB0_6: # %optional3 ; F (Copy of G)
bl c
nop
rlwinm. 3, 30, 0, 28, 28
beq 0, .LBB0_8
.LBB0_7: # %optional4 ; H
bl d
nop
.LBB0_8: # %exit
Differential Revision: https://reviews.llvm.org/D28522
llvm-svn: 295223
2017-02-16 03:49:14 +08:00
|
|
|
// compare P + U vs Qout + P * U + Qin.
|
2017-02-01 07:48:32 +08:00
|
|
|
//
|
|
|
|
// The 3rd and 4th cases cover when Dom would be chosen to follow Succ.
|
|
|
|
//
|
|
|
|
// For the 3rd case, the cost is P + 2 * V
|
2017-04-11 06:28:18 +08:00
|
|
|
// For the 4th case, the cost is Qout + min(Qin, F) * U + max(Qin, F) * V + V
|
|
|
|
// We choose 4 over 3 when (P + V) > Qout + min(Qin, F) * U + max(Qin, F) * V
|
Codegen: Make chains from trellis-shaped CFGs
Lay out trellis-shaped CFGs optimally.
A trellis of the shape below:
A B
|\ /|
| \ / |
| X |
| / \ |
|/ \|
C D
would be laid out A; B->C ; D by the current layout algorithm. Now we identify
trellises and lay them out either A->C; B->D or A->D; B->C. This scales with an
increasing number of predecessors. A trellis is a a group of 2 or more
predecessor blocks that all have the same successors.
because of this we can tail duplicate to extend existing trellises.
As an example consider the following CFG:
B D F H
/ \ / \ / \ / \
A---C---E---G---Ret
Where A,C,E,G are all small (Currently 2 instructions).
The CFG preserving layout is then A,B,C,D,E,F,G,H,Ret.
The current code will copy C into B, E into D and G into F and yield the layout
A,C,B(C),E,D(E),F(G),G,H,ret
define void @straight_test(i32 %tag) {
entry:
br label %test1
test1: ; A
%tagbit1 = and i32 %tag, 1
%tagbit1eq0 = icmp eq i32 %tagbit1, 0
br i1 %tagbit1eq0, label %test2, label %optional1
optional1: ; B
call void @a()
br label %test2
test2: ; C
%tagbit2 = and i32 %tag, 2
%tagbit2eq0 = icmp eq i32 %tagbit2, 0
br i1 %tagbit2eq0, label %test3, label %optional2
optional2: ; D
call void @b()
br label %test3
test3: ; E
%tagbit3 = and i32 %tag, 4
%tagbit3eq0 = icmp eq i32 %tagbit3, 0
br i1 %tagbit3eq0, label %test4, label %optional3
optional3: ; F
call void @c()
br label %test4
test4: ; G
%tagbit4 = and i32 %tag, 8
%tagbit4eq0 = icmp eq i32 %tagbit4, 0
br i1 %tagbit4eq0, label %exit, label %optional4
optional4: ; H
call void @d()
br label %exit
exit:
ret void
}
here is the layout after D27742:
straight_test: # @straight_test
; ... Prologue elided
; BB#0: # %entry ; A (merged with test1)
; ... More prologue elided
mr 30, 3
andi. 3, 30, 1
bc 12, 1, .LBB0_2
; BB#1: # %test2 ; C
rlwinm. 3, 30, 0, 30, 30
beq 0, .LBB0_3
b .LBB0_4
.LBB0_2: # %optional1 ; B (copy of C)
bl a
nop
rlwinm. 3, 30, 0, 30, 30
bne 0, .LBB0_4
.LBB0_3: # %test3 ; E
rlwinm. 3, 30, 0, 29, 29
beq 0, .LBB0_5
b .LBB0_6
.LBB0_4: # %optional2 ; D (copy of E)
bl b
nop
rlwinm. 3, 30, 0, 29, 29
bne 0, .LBB0_6
.LBB0_5: # %test4 ; G
rlwinm. 3, 30, 0, 28, 28
beq 0, .LBB0_8
b .LBB0_7
.LBB0_6: # %optional3 ; F (copy of G)
bl c
nop
rlwinm. 3, 30, 0, 28, 28
beq 0, .LBB0_8
.LBB0_7: # %optional4 ; H
bl d
nop
.LBB0_8: # %exit ; Ret
ld 30, 96(1) # 8-byte Folded Reload
addi 1, 1, 112
ld 0, 16(1)
mtlr 0
blr
The tail-duplication has produced some benefit, but it has also produced a
trellis which is not laid out optimally. With this patch, we improve the layouts
of such trellises, and decrease the cost calculation for tail-duplication
accordingly.
This patch produces the layout A,C,E,G,B,D,F,H,Ret. This layout does have
back edges, which is a negative, but it has a bigger compensating
positive, which is that it handles the case where there are long strings
of skipped blocks much better than the original layout. Both layouts
handle runs of executed blocks equally well. Branch prediction also
improves if there is any correlation between subsequent optional blocks.
Here is the resulting concrete layout:
straight_test: # @straight_test
; BB#0: # %entry ; A (merged with test1)
mr 30, 3
andi. 3, 30, 1
bc 12, 1, .LBB0_4
; BB#1: # %test2 ; C
rlwinm. 3, 30, 0, 30, 30
bne 0, .LBB0_5
.LBB0_2: # %test3 ; E
rlwinm. 3, 30, 0, 29, 29
bne 0, .LBB0_6
.LBB0_3: # %test4 ; G
rlwinm. 3, 30, 0, 28, 28
bne 0, .LBB0_7
b .LBB0_8
.LBB0_4: # %optional1 ; B (Copy of C)
bl a
nop
rlwinm. 3, 30, 0, 30, 30
beq 0, .LBB0_2
.LBB0_5: # %optional2 ; D (Copy of E)
bl b
nop
rlwinm. 3, 30, 0, 29, 29
beq 0, .LBB0_3
.LBB0_6: # %optional3 ; F (Copy of G)
bl c
nop
rlwinm. 3, 30, 0, 28, 28
beq 0, .LBB0_8
.LBB0_7: # %optional4 ; H
bl d
nop
.LBB0_8: # %exit
Differential Revision: https://reviews.llvm.org/D28522
llvm-svn: 295223
2017-02-16 03:49:14 +08:00
|
|
|
if (UProb > AdjustedSuccSumProb / 2 &&
|
|
|
|
!hasBetterLayoutPredecessor(Succ, PDom, *BlockToChain[PDom], UProb, UProb,
|
|
|
|
Chain, BlockFilter))
|
2017-02-01 07:48:32 +08:00
|
|
|
// Cases 3 & 4
|
2017-04-11 06:28:18 +08:00
|
|
|
return greaterWithBias(
|
|
|
|
(P + V), (Qout + std::max(Qin, F) * VProb + std::min(Qin, F) * UProb),
|
|
|
|
EntryFreq);
|
2017-02-01 07:48:32 +08:00
|
|
|
// Cases 1 & 2
|
2017-04-11 06:28:18 +08:00
|
|
|
return greaterWithBias((P + U),
|
|
|
|
(Qout + std::min(Qin, F) * AdjustedSuccSumProb +
|
|
|
|
std::max(Qin, F) * UProb),
|
|
|
|
EntryFreq);
|
Codegen: Make chains from trellis-shaped CFGs
Lay out trellis-shaped CFGs optimally.
A trellis of the shape below:
A B
|\ /|
| \ / |
| X |
| / \ |
|/ \|
C D
would be laid out A; B->C ; D by the current layout algorithm. Now we identify
trellises and lay them out either A->C; B->D or A->D; B->C. This scales with an
increasing number of predecessors. A trellis is a a group of 2 or more
predecessor blocks that all have the same successors.
because of this we can tail duplicate to extend existing trellises.
As an example consider the following CFG:
B D F H
/ \ / \ / \ / \
A---C---E---G---Ret
Where A,C,E,G are all small (Currently 2 instructions).
The CFG preserving layout is then A,B,C,D,E,F,G,H,Ret.
The current code will copy C into B, E into D and G into F and yield the layout
A,C,B(C),E,D(E),F(G),G,H,ret
define void @straight_test(i32 %tag) {
entry:
br label %test1
test1: ; A
%tagbit1 = and i32 %tag, 1
%tagbit1eq0 = icmp eq i32 %tagbit1, 0
br i1 %tagbit1eq0, label %test2, label %optional1
optional1: ; B
call void @a()
br label %test2
test2: ; C
%tagbit2 = and i32 %tag, 2
%tagbit2eq0 = icmp eq i32 %tagbit2, 0
br i1 %tagbit2eq0, label %test3, label %optional2
optional2: ; D
call void @b()
br label %test3
test3: ; E
%tagbit3 = and i32 %tag, 4
%tagbit3eq0 = icmp eq i32 %tagbit3, 0
br i1 %tagbit3eq0, label %test4, label %optional3
optional3: ; F
call void @c()
br label %test4
test4: ; G
%tagbit4 = and i32 %tag, 8
%tagbit4eq0 = icmp eq i32 %tagbit4, 0
br i1 %tagbit4eq0, label %exit, label %optional4
optional4: ; H
call void @d()
br label %exit
exit:
ret void
}
here is the layout after D27742:
straight_test: # @straight_test
; ... Prologue elided
; BB#0: # %entry ; A (merged with test1)
; ... More prologue elided
mr 30, 3
andi. 3, 30, 1
bc 12, 1, .LBB0_2
; BB#1: # %test2 ; C
rlwinm. 3, 30, 0, 30, 30
beq 0, .LBB0_3
b .LBB0_4
.LBB0_2: # %optional1 ; B (copy of C)
bl a
nop
rlwinm. 3, 30, 0, 30, 30
bne 0, .LBB0_4
.LBB0_3: # %test3 ; E
rlwinm. 3, 30, 0, 29, 29
beq 0, .LBB0_5
b .LBB0_6
.LBB0_4: # %optional2 ; D (copy of E)
bl b
nop
rlwinm. 3, 30, 0, 29, 29
bne 0, .LBB0_6
.LBB0_5: # %test4 ; G
rlwinm. 3, 30, 0, 28, 28
beq 0, .LBB0_8
b .LBB0_7
.LBB0_6: # %optional3 ; F (copy of G)
bl c
nop
rlwinm. 3, 30, 0, 28, 28
beq 0, .LBB0_8
.LBB0_7: # %optional4 ; H
bl d
nop
.LBB0_8: # %exit ; Ret
ld 30, 96(1) # 8-byte Folded Reload
addi 1, 1, 112
ld 0, 16(1)
mtlr 0
blr
The tail-duplication has produced some benefit, but it has also produced a
trellis which is not laid out optimally. With this patch, we improve the layouts
of such trellises, and decrease the cost calculation for tail-duplication
accordingly.
This patch produces the layout A,C,E,G,B,D,F,H,Ret. This layout does have
back edges, which is a negative, but it has a bigger compensating
positive, which is that it handles the case where there are long strings
of skipped blocks much better than the original layout. Both layouts
handle runs of executed blocks equally well. Branch prediction also
improves if there is any correlation between subsequent optional blocks.
Here is the resulting concrete layout:
straight_test: # @straight_test
; BB#0: # %entry ; A (merged with test1)
mr 30, 3
andi. 3, 30, 1
bc 12, 1, .LBB0_4
; BB#1: # %test2 ; C
rlwinm. 3, 30, 0, 30, 30
bne 0, .LBB0_5
.LBB0_2: # %test3 ; E
rlwinm. 3, 30, 0, 29, 29
bne 0, .LBB0_6
.LBB0_3: # %test4 ; G
rlwinm. 3, 30, 0, 28, 28
bne 0, .LBB0_7
b .LBB0_8
.LBB0_4: # %optional1 ; B (Copy of C)
bl a
nop
rlwinm. 3, 30, 0, 30, 30
beq 0, .LBB0_2
.LBB0_5: # %optional2 ; D (Copy of E)
bl b
nop
rlwinm. 3, 30, 0, 29, 29
beq 0, .LBB0_3
.LBB0_6: # %optional3 ; F (Copy of G)
bl c
nop
rlwinm. 3, 30, 0, 28, 28
beq 0, .LBB0_8
.LBB0_7: # %optional4 ; H
bl d
nop
.LBB0_8: # %exit
Differential Revision: https://reviews.llvm.org/D28522
llvm-svn: 295223
2017-02-16 03:49:14 +08:00
|
|
|
}
|
|
|
|
|
|
|
|
/// Check for a trellis layout. \p BB is the upper part of a trellis if its
|
|
|
|
/// successors form the lower part of a trellis. A successor set S forms the
|
|
|
|
/// lower part of a trellis if all of the predecessors of S are either in S or
|
|
|
|
/// have all of S as successors. We ignore trellises where BB doesn't have 2
|
|
|
|
/// successors because for fewer than 2, it's trivial, and for 3 or greater they
|
|
|
|
/// are very uncommon and complex to compute optimally. Allowing edges within S
|
|
|
|
/// is not strictly a trellis, but the same algorithm works, so we allow it.
|
|
|
|
bool MachineBlockPlacement::isTrellis(
|
|
|
|
const MachineBasicBlock *BB,
|
|
|
|
const SmallVectorImpl<MachineBasicBlock *> &ViableSuccs,
|
|
|
|
const BlockChain &Chain, const BlockFilterSet *BlockFilter) {
|
|
|
|
// Technically BB could form a trellis with branching factor higher than 2.
|
|
|
|
// But that's extremely uncommon.
|
|
|
|
if (BB->succ_size() != 2 || ViableSuccs.size() != 2)
|
|
|
|
return false;
|
|
|
|
|
|
|
|
SmallPtrSet<const MachineBasicBlock *, 2> Successors(BB->succ_begin(),
|
|
|
|
BB->succ_end());
|
|
|
|
// To avoid reviewing the same predecessors twice.
|
|
|
|
SmallPtrSet<const MachineBasicBlock *, 8> SeenPreds;
|
|
|
|
|
|
|
|
for (MachineBasicBlock *Succ : ViableSuccs) {
|
|
|
|
int PredCount = 0;
|
|
|
|
for (auto SuccPred : Succ->predecessors()) {
|
|
|
|
// Allow triangle successors, but don't count them.
|
2017-03-24 07:28:09 +08:00
|
|
|
if (Successors.count(SuccPred)) {
|
|
|
|
// Make sure that it is actually a triangle.
|
|
|
|
for (MachineBasicBlock *CheckSucc : SuccPred->successors())
|
|
|
|
if (!Successors.count(CheckSucc))
|
|
|
|
return false;
|
Codegen: Make chains from trellis-shaped CFGs
Lay out trellis-shaped CFGs optimally.
A trellis of the shape below:
A B
|\ /|
| \ / |
| X |
| / \ |
|/ \|
C D
would be laid out A; B->C ; D by the current layout algorithm. Now we identify
trellises and lay them out either A->C; B->D or A->D; B->C. This scales with an
increasing number of predecessors. A trellis is a a group of 2 or more
predecessor blocks that all have the same successors.
because of this we can tail duplicate to extend existing trellises.
As an example consider the following CFG:
B D F H
/ \ / \ / \ / \
A---C---E---G---Ret
Where A,C,E,G are all small (Currently 2 instructions).
The CFG preserving layout is then A,B,C,D,E,F,G,H,Ret.
The current code will copy C into B, E into D and G into F and yield the layout
A,C,B(C),E,D(E),F(G),G,H,ret
define void @straight_test(i32 %tag) {
entry:
br label %test1
test1: ; A
%tagbit1 = and i32 %tag, 1
%tagbit1eq0 = icmp eq i32 %tagbit1, 0
br i1 %tagbit1eq0, label %test2, label %optional1
optional1: ; B
call void @a()
br label %test2
test2: ; C
%tagbit2 = and i32 %tag, 2
%tagbit2eq0 = icmp eq i32 %tagbit2, 0
br i1 %tagbit2eq0, label %test3, label %optional2
optional2: ; D
call void @b()
br label %test3
test3: ; E
%tagbit3 = and i32 %tag, 4
%tagbit3eq0 = icmp eq i32 %tagbit3, 0
br i1 %tagbit3eq0, label %test4, label %optional3
optional3: ; F
call void @c()
br label %test4
test4: ; G
%tagbit4 = and i32 %tag, 8
%tagbit4eq0 = icmp eq i32 %tagbit4, 0
br i1 %tagbit4eq0, label %exit, label %optional4
optional4: ; H
call void @d()
br label %exit
exit:
ret void
}
here is the layout after D27742:
straight_test: # @straight_test
; ... Prologue elided
; BB#0: # %entry ; A (merged with test1)
; ... More prologue elided
mr 30, 3
andi. 3, 30, 1
bc 12, 1, .LBB0_2
; BB#1: # %test2 ; C
rlwinm. 3, 30, 0, 30, 30
beq 0, .LBB0_3
b .LBB0_4
.LBB0_2: # %optional1 ; B (copy of C)
bl a
nop
rlwinm. 3, 30, 0, 30, 30
bne 0, .LBB0_4
.LBB0_3: # %test3 ; E
rlwinm. 3, 30, 0, 29, 29
beq 0, .LBB0_5
b .LBB0_6
.LBB0_4: # %optional2 ; D (copy of E)
bl b
nop
rlwinm. 3, 30, 0, 29, 29
bne 0, .LBB0_6
.LBB0_5: # %test4 ; G
rlwinm. 3, 30, 0, 28, 28
beq 0, .LBB0_8
b .LBB0_7
.LBB0_6: # %optional3 ; F (copy of G)
bl c
nop
rlwinm. 3, 30, 0, 28, 28
beq 0, .LBB0_8
.LBB0_7: # %optional4 ; H
bl d
nop
.LBB0_8: # %exit ; Ret
ld 30, 96(1) # 8-byte Folded Reload
addi 1, 1, 112
ld 0, 16(1)
mtlr 0
blr
The tail-duplication has produced some benefit, but it has also produced a
trellis which is not laid out optimally. With this patch, we improve the layouts
of such trellises, and decrease the cost calculation for tail-duplication
accordingly.
This patch produces the layout A,C,E,G,B,D,F,H,Ret. This layout does have
back edges, which is a negative, but it has a bigger compensating
positive, which is that it handles the case where there are long strings
of skipped blocks much better than the original layout. Both layouts
handle runs of executed blocks equally well. Branch prediction also
improves if there is any correlation between subsequent optional blocks.
Here is the resulting concrete layout:
straight_test: # @straight_test
; BB#0: # %entry ; A (merged with test1)
mr 30, 3
andi. 3, 30, 1
bc 12, 1, .LBB0_4
; BB#1: # %test2 ; C
rlwinm. 3, 30, 0, 30, 30
bne 0, .LBB0_5
.LBB0_2: # %test3 ; E
rlwinm. 3, 30, 0, 29, 29
bne 0, .LBB0_6
.LBB0_3: # %test4 ; G
rlwinm. 3, 30, 0, 28, 28
bne 0, .LBB0_7
b .LBB0_8
.LBB0_4: # %optional1 ; B (Copy of C)
bl a
nop
rlwinm. 3, 30, 0, 30, 30
beq 0, .LBB0_2
.LBB0_5: # %optional2 ; D (Copy of E)
bl b
nop
rlwinm. 3, 30, 0, 29, 29
beq 0, .LBB0_3
.LBB0_6: # %optional3 ; F (Copy of G)
bl c
nop
rlwinm. 3, 30, 0, 28, 28
beq 0, .LBB0_8
.LBB0_7: # %optional4 ; H
bl d
nop
.LBB0_8: # %exit
Differential Revision: https://reviews.llvm.org/D28522
llvm-svn: 295223
2017-02-16 03:49:14 +08:00
|
|
|
continue;
|
2017-03-24 07:28:09 +08:00
|
|
|
}
|
Codegen: Make chains from trellis-shaped CFGs
Lay out trellis-shaped CFGs optimally.
A trellis of the shape below:
A B
|\ /|
| \ / |
| X |
| / \ |
|/ \|
C D
would be laid out A; B->C ; D by the current layout algorithm. Now we identify
trellises and lay them out either A->C; B->D or A->D; B->C. This scales with an
increasing number of predecessors. A trellis is a a group of 2 or more
predecessor blocks that all have the same successors.
because of this we can tail duplicate to extend existing trellises.
As an example consider the following CFG:
B D F H
/ \ / \ / \ / \
A---C---E---G---Ret
Where A,C,E,G are all small (Currently 2 instructions).
The CFG preserving layout is then A,B,C,D,E,F,G,H,Ret.
The current code will copy C into B, E into D and G into F and yield the layout
A,C,B(C),E,D(E),F(G),G,H,ret
define void @straight_test(i32 %tag) {
entry:
br label %test1
test1: ; A
%tagbit1 = and i32 %tag, 1
%tagbit1eq0 = icmp eq i32 %tagbit1, 0
br i1 %tagbit1eq0, label %test2, label %optional1
optional1: ; B
call void @a()
br label %test2
test2: ; C
%tagbit2 = and i32 %tag, 2
%tagbit2eq0 = icmp eq i32 %tagbit2, 0
br i1 %tagbit2eq0, label %test3, label %optional2
optional2: ; D
call void @b()
br label %test3
test3: ; E
%tagbit3 = and i32 %tag, 4
%tagbit3eq0 = icmp eq i32 %tagbit3, 0
br i1 %tagbit3eq0, label %test4, label %optional3
optional3: ; F
call void @c()
br label %test4
test4: ; G
%tagbit4 = and i32 %tag, 8
%tagbit4eq0 = icmp eq i32 %tagbit4, 0
br i1 %tagbit4eq0, label %exit, label %optional4
optional4: ; H
call void @d()
br label %exit
exit:
ret void
}
here is the layout after D27742:
straight_test: # @straight_test
; ... Prologue elided
; BB#0: # %entry ; A (merged with test1)
; ... More prologue elided
mr 30, 3
andi. 3, 30, 1
bc 12, 1, .LBB0_2
; BB#1: # %test2 ; C
rlwinm. 3, 30, 0, 30, 30
beq 0, .LBB0_3
b .LBB0_4
.LBB0_2: # %optional1 ; B (copy of C)
bl a
nop
rlwinm. 3, 30, 0, 30, 30
bne 0, .LBB0_4
.LBB0_3: # %test3 ; E
rlwinm. 3, 30, 0, 29, 29
beq 0, .LBB0_5
b .LBB0_6
.LBB0_4: # %optional2 ; D (copy of E)
bl b
nop
rlwinm. 3, 30, 0, 29, 29
bne 0, .LBB0_6
.LBB0_5: # %test4 ; G
rlwinm. 3, 30, 0, 28, 28
beq 0, .LBB0_8
b .LBB0_7
.LBB0_6: # %optional3 ; F (copy of G)
bl c
nop
rlwinm. 3, 30, 0, 28, 28
beq 0, .LBB0_8
.LBB0_7: # %optional4 ; H
bl d
nop
.LBB0_8: # %exit ; Ret
ld 30, 96(1) # 8-byte Folded Reload
addi 1, 1, 112
ld 0, 16(1)
mtlr 0
blr
The tail-duplication has produced some benefit, but it has also produced a
trellis which is not laid out optimally. With this patch, we improve the layouts
of such trellises, and decrease the cost calculation for tail-duplication
accordingly.
This patch produces the layout A,C,E,G,B,D,F,H,Ret. This layout does have
back edges, which is a negative, but it has a bigger compensating
positive, which is that it handles the case where there are long strings
of skipped blocks much better than the original layout. Both layouts
handle runs of executed blocks equally well. Branch prediction also
improves if there is any correlation between subsequent optional blocks.
Here is the resulting concrete layout:
straight_test: # @straight_test
; BB#0: # %entry ; A (merged with test1)
mr 30, 3
andi. 3, 30, 1
bc 12, 1, .LBB0_4
; BB#1: # %test2 ; C
rlwinm. 3, 30, 0, 30, 30
bne 0, .LBB0_5
.LBB0_2: # %test3 ; E
rlwinm. 3, 30, 0, 29, 29
bne 0, .LBB0_6
.LBB0_3: # %test4 ; G
rlwinm. 3, 30, 0, 28, 28
bne 0, .LBB0_7
b .LBB0_8
.LBB0_4: # %optional1 ; B (Copy of C)
bl a
nop
rlwinm. 3, 30, 0, 30, 30
beq 0, .LBB0_2
.LBB0_5: # %optional2 ; D (Copy of E)
bl b
nop
rlwinm. 3, 30, 0, 29, 29
beq 0, .LBB0_3
.LBB0_6: # %optional3 ; F (Copy of G)
bl c
nop
rlwinm. 3, 30, 0, 28, 28
beq 0, .LBB0_8
.LBB0_7: # %optional4 ; H
bl d
nop
.LBB0_8: # %exit
Differential Revision: https://reviews.llvm.org/D28522
llvm-svn: 295223
2017-02-16 03:49:14 +08:00
|
|
|
const BlockChain *PredChain = BlockToChain[SuccPred];
|
|
|
|
if (SuccPred == BB || (BlockFilter && !BlockFilter->count(SuccPred)) ||
|
|
|
|
PredChain == &Chain || PredChain == BlockToChain[Succ])
|
|
|
|
continue;
|
|
|
|
++PredCount;
|
|
|
|
// Perform the successor check only once.
|
|
|
|
if (!SeenPreds.insert(SuccPred).second)
|
|
|
|
continue;
|
|
|
|
if (!hasSameSuccessors(*SuccPred, Successors))
|
|
|
|
return false;
|
|
|
|
}
|
|
|
|
// If one of the successors has only BB as a predecessor, it is not a
|
|
|
|
// trellis.
|
|
|
|
if (PredCount < 1)
|
|
|
|
return false;
|
|
|
|
}
|
|
|
|
return true;
|
|
|
|
}
|
|
|
|
|
|
|
|
/// Pick the highest total weight pair of edges that can both be laid out.
|
|
|
|
/// The edges in \p Edges[0] are assumed to have a different destination than
|
|
|
|
/// the edges in \p Edges[1]. Simple counting shows that the best pair is either
|
|
|
|
/// the individual highest weight edges to the 2 different destinations, or in
|
|
|
|
/// case of a conflict, one of them should be replaced with a 2nd best edge.
|
|
|
|
std::pair<MachineBlockPlacement::WeightedEdge,
|
|
|
|
MachineBlockPlacement::WeightedEdge>
|
|
|
|
MachineBlockPlacement::getBestNonConflictingEdges(
|
|
|
|
const MachineBasicBlock *BB,
|
2017-04-12 21:26:28 +08:00
|
|
|
MutableArrayRef<SmallVector<MachineBlockPlacement::WeightedEdge, 8>>
|
|
|
|
Edges) {
|
Codegen: Make chains from trellis-shaped CFGs
Lay out trellis-shaped CFGs optimally.
A trellis of the shape below:
A B
|\ /|
| \ / |
| X |
| / \ |
|/ \|
C D
would be laid out A; B->C ; D by the current layout algorithm. Now we identify
trellises and lay them out either A->C; B->D or A->D; B->C. This scales with an
increasing number of predecessors. A trellis is a a group of 2 or more
predecessor blocks that all have the same successors.
because of this we can tail duplicate to extend existing trellises.
As an example consider the following CFG:
B D F H
/ \ / \ / \ / \
A---C---E---G---Ret
Where A,C,E,G are all small (Currently 2 instructions).
The CFG preserving layout is then A,B,C,D,E,F,G,H,Ret.
The current code will copy C into B, E into D and G into F and yield the layout
A,C,B(C),E,D(E),F(G),G,H,ret
define void @straight_test(i32 %tag) {
entry:
br label %test1
test1: ; A
%tagbit1 = and i32 %tag, 1
%tagbit1eq0 = icmp eq i32 %tagbit1, 0
br i1 %tagbit1eq0, label %test2, label %optional1
optional1: ; B
call void @a()
br label %test2
test2: ; C
%tagbit2 = and i32 %tag, 2
%tagbit2eq0 = icmp eq i32 %tagbit2, 0
br i1 %tagbit2eq0, label %test3, label %optional2
optional2: ; D
call void @b()
br label %test3
test3: ; E
%tagbit3 = and i32 %tag, 4
%tagbit3eq0 = icmp eq i32 %tagbit3, 0
br i1 %tagbit3eq0, label %test4, label %optional3
optional3: ; F
call void @c()
br label %test4
test4: ; G
%tagbit4 = and i32 %tag, 8
%tagbit4eq0 = icmp eq i32 %tagbit4, 0
br i1 %tagbit4eq0, label %exit, label %optional4
optional4: ; H
call void @d()
br label %exit
exit:
ret void
}
here is the layout after D27742:
straight_test: # @straight_test
; ... Prologue elided
; BB#0: # %entry ; A (merged with test1)
; ... More prologue elided
mr 30, 3
andi. 3, 30, 1
bc 12, 1, .LBB0_2
; BB#1: # %test2 ; C
rlwinm. 3, 30, 0, 30, 30
beq 0, .LBB0_3
b .LBB0_4
.LBB0_2: # %optional1 ; B (copy of C)
bl a
nop
rlwinm. 3, 30, 0, 30, 30
bne 0, .LBB0_4
.LBB0_3: # %test3 ; E
rlwinm. 3, 30, 0, 29, 29
beq 0, .LBB0_5
b .LBB0_6
.LBB0_4: # %optional2 ; D (copy of E)
bl b
nop
rlwinm. 3, 30, 0, 29, 29
bne 0, .LBB0_6
.LBB0_5: # %test4 ; G
rlwinm. 3, 30, 0, 28, 28
beq 0, .LBB0_8
b .LBB0_7
.LBB0_6: # %optional3 ; F (copy of G)
bl c
nop
rlwinm. 3, 30, 0, 28, 28
beq 0, .LBB0_8
.LBB0_7: # %optional4 ; H
bl d
nop
.LBB0_8: # %exit ; Ret
ld 30, 96(1) # 8-byte Folded Reload
addi 1, 1, 112
ld 0, 16(1)
mtlr 0
blr
The tail-duplication has produced some benefit, but it has also produced a
trellis which is not laid out optimally. With this patch, we improve the layouts
of such trellises, and decrease the cost calculation for tail-duplication
accordingly.
This patch produces the layout A,C,E,G,B,D,F,H,Ret. This layout does have
back edges, which is a negative, but it has a bigger compensating
positive, which is that it handles the case where there are long strings
of skipped blocks much better than the original layout. Both layouts
handle runs of executed blocks equally well. Branch prediction also
improves if there is any correlation between subsequent optional blocks.
Here is the resulting concrete layout:
straight_test: # @straight_test
; BB#0: # %entry ; A (merged with test1)
mr 30, 3
andi. 3, 30, 1
bc 12, 1, .LBB0_4
; BB#1: # %test2 ; C
rlwinm. 3, 30, 0, 30, 30
bne 0, .LBB0_5
.LBB0_2: # %test3 ; E
rlwinm. 3, 30, 0, 29, 29
bne 0, .LBB0_6
.LBB0_3: # %test4 ; G
rlwinm. 3, 30, 0, 28, 28
bne 0, .LBB0_7
b .LBB0_8
.LBB0_4: # %optional1 ; B (Copy of C)
bl a
nop
rlwinm. 3, 30, 0, 30, 30
beq 0, .LBB0_2
.LBB0_5: # %optional2 ; D (Copy of E)
bl b
nop
rlwinm. 3, 30, 0, 29, 29
beq 0, .LBB0_3
.LBB0_6: # %optional3 ; F (Copy of G)
bl c
nop
rlwinm. 3, 30, 0, 28, 28
beq 0, .LBB0_8
.LBB0_7: # %optional4 ; H
bl d
nop
.LBB0_8: # %exit
Differential Revision: https://reviews.llvm.org/D28522
llvm-svn: 295223
2017-02-16 03:49:14 +08:00
|
|
|
// Sort the edges, and then for each successor, find the best incoming
|
|
|
|
// predecessor. If the best incoming predecessors aren't the same,
|
|
|
|
// then that is clearly the best layout. If there is a conflict, one of the
|
|
|
|
// successors will have to fallthrough from the second best predecessor. We
|
|
|
|
// compare which combination is better overall.
|
|
|
|
|
|
|
|
// Sort for highest frequency.
|
|
|
|
auto Cmp = [](WeightedEdge A, WeightedEdge B) { return A.Weight > B.Weight; };
|
|
|
|
|
2019-04-23 22:51:27 +08:00
|
|
|
llvm::stable_sort(Edges[0], Cmp);
|
|
|
|
llvm::stable_sort(Edges[1], Cmp);
|
Codegen: Make chains from trellis-shaped CFGs
Lay out trellis-shaped CFGs optimally.
A trellis of the shape below:
A B
|\ /|
| \ / |
| X |
| / \ |
|/ \|
C D
would be laid out A; B->C ; D by the current layout algorithm. Now we identify
trellises and lay them out either A->C; B->D or A->D; B->C. This scales with an
increasing number of predecessors. A trellis is a a group of 2 or more
predecessor blocks that all have the same successors.
because of this we can tail duplicate to extend existing trellises.
As an example consider the following CFG:
B D F H
/ \ / \ / \ / \
A---C---E---G---Ret
Where A,C,E,G are all small (Currently 2 instructions).
The CFG preserving layout is then A,B,C,D,E,F,G,H,Ret.
The current code will copy C into B, E into D and G into F and yield the layout
A,C,B(C),E,D(E),F(G),G,H,ret
define void @straight_test(i32 %tag) {
entry:
br label %test1
test1: ; A
%tagbit1 = and i32 %tag, 1
%tagbit1eq0 = icmp eq i32 %tagbit1, 0
br i1 %tagbit1eq0, label %test2, label %optional1
optional1: ; B
call void @a()
br label %test2
test2: ; C
%tagbit2 = and i32 %tag, 2
%tagbit2eq0 = icmp eq i32 %tagbit2, 0
br i1 %tagbit2eq0, label %test3, label %optional2
optional2: ; D
call void @b()
br label %test3
test3: ; E
%tagbit3 = and i32 %tag, 4
%tagbit3eq0 = icmp eq i32 %tagbit3, 0
br i1 %tagbit3eq0, label %test4, label %optional3
optional3: ; F
call void @c()
br label %test4
test4: ; G
%tagbit4 = and i32 %tag, 8
%tagbit4eq0 = icmp eq i32 %tagbit4, 0
br i1 %tagbit4eq0, label %exit, label %optional4
optional4: ; H
call void @d()
br label %exit
exit:
ret void
}
here is the layout after D27742:
straight_test: # @straight_test
; ... Prologue elided
; BB#0: # %entry ; A (merged with test1)
; ... More prologue elided
mr 30, 3
andi. 3, 30, 1
bc 12, 1, .LBB0_2
; BB#1: # %test2 ; C
rlwinm. 3, 30, 0, 30, 30
beq 0, .LBB0_3
b .LBB0_4
.LBB0_2: # %optional1 ; B (copy of C)
bl a
nop
rlwinm. 3, 30, 0, 30, 30
bne 0, .LBB0_4
.LBB0_3: # %test3 ; E
rlwinm. 3, 30, 0, 29, 29
beq 0, .LBB0_5
b .LBB0_6
.LBB0_4: # %optional2 ; D (copy of E)
bl b
nop
rlwinm. 3, 30, 0, 29, 29
bne 0, .LBB0_6
.LBB0_5: # %test4 ; G
rlwinm. 3, 30, 0, 28, 28
beq 0, .LBB0_8
b .LBB0_7
.LBB0_6: # %optional3 ; F (copy of G)
bl c
nop
rlwinm. 3, 30, 0, 28, 28
beq 0, .LBB0_8
.LBB0_7: # %optional4 ; H
bl d
nop
.LBB0_8: # %exit ; Ret
ld 30, 96(1) # 8-byte Folded Reload
addi 1, 1, 112
ld 0, 16(1)
mtlr 0
blr
The tail-duplication has produced some benefit, but it has also produced a
trellis which is not laid out optimally. With this patch, we improve the layouts
of such trellises, and decrease the cost calculation for tail-duplication
accordingly.
This patch produces the layout A,C,E,G,B,D,F,H,Ret. This layout does have
back edges, which is a negative, but it has a bigger compensating
positive, which is that it handles the case where there are long strings
of skipped blocks much better than the original layout. Both layouts
handle runs of executed blocks equally well. Branch prediction also
improves if there is any correlation between subsequent optional blocks.
Here is the resulting concrete layout:
straight_test: # @straight_test
; BB#0: # %entry ; A (merged with test1)
mr 30, 3
andi. 3, 30, 1
bc 12, 1, .LBB0_4
; BB#1: # %test2 ; C
rlwinm. 3, 30, 0, 30, 30
bne 0, .LBB0_5
.LBB0_2: # %test3 ; E
rlwinm. 3, 30, 0, 29, 29
bne 0, .LBB0_6
.LBB0_3: # %test4 ; G
rlwinm. 3, 30, 0, 28, 28
bne 0, .LBB0_7
b .LBB0_8
.LBB0_4: # %optional1 ; B (Copy of C)
bl a
nop
rlwinm. 3, 30, 0, 30, 30
beq 0, .LBB0_2
.LBB0_5: # %optional2 ; D (Copy of E)
bl b
nop
rlwinm. 3, 30, 0, 29, 29
beq 0, .LBB0_3
.LBB0_6: # %optional3 ; F (Copy of G)
bl c
nop
rlwinm. 3, 30, 0, 28, 28
beq 0, .LBB0_8
.LBB0_7: # %optional4 ; H
bl d
nop
.LBB0_8: # %exit
Differential Revision: https://reviews.llvm.org/D28522
llvm-svn: 295223
2017-02-16 03:49:14 +08:00
|
|
|
auto BestA = Edges[0].begin();
|
|
|
|
auto BestB = Edges[1].begin();
|
|
|
|
// Arrange for the correct answer to be in BestA and BestB
|
|
|
|
// If the 2 best edges don't conflict, the answer is already there.
|
|
|
|
if (BestA->Src == BestB->Src) {
|
|
|
|
// Compare the total fallthrough of (Best + Second Best) for both pairs
|
|
|
|
auto SecondBestA = std::next(BestA);
|
|
|
|
auto SecondBestB = std::next(BestB);
|
|
|
|
BlockFrequency BestAScore = BestA->Weight + SecondBestB->Weight;
|
|
|
|
BlockFrequency BestBScore = BestB->Weight + SecondBestA->Weight;
|
|
|
|
if (BestAScore < BestBScore)
|
|
|
|
BestA = SecondBestA;
|
|
|
|
else
|
|
|
|
BestB = SecondBestB;
|
|
|
|
}
|
|
|
|
// Arrange for the BB edge to be in BestA if it exists.
|
|
|
|
if (BestB->Src == BB)
|
|
|
|
std::swap(BestA, BestB);
|
|
|
|
return std::make_pair(*BestA, *BestB);
|
2017-02-01 07:48:32 +08:00
|
|
|
}
|
|
|
|
|
Codegen: Make chains from trellis-shaped CFGs
Lay out trellis-shaped CFGs optimally.
A trellis of the shape below:
A B
|\ /|
| \ / |
| X |
| / \ |
|/ \|
C D
would be laid out A; B->C ; D by the current layout algorithm. Now we identify
trellises and lay them out either A->C; B->D or A->D; B->C. This scales with an
increasing number of predecessors. A trellis is a a group of 2 or more
predecessor blocks that all have the same successors.
because of this we can tail duplicate to extend existing trellises.
As an example consider the following CFG:
B D F H
/ \ / \ / \ / \
A---C---E---G---Ret
Where A,C,E,G are all small (Currently 2 instructions).
The CFG preserving layout is then A,B,C,D,E,F,G,H,Ret.
The current code will copy C into B, E into D and G into F and yield the layout
A,C,B(C),E,D(E),F(G),G,H,ret
define void @straight_test(i32 %tag) {
entry:
br label %test1
test1: ; A
%tagbit1 = and i32 %tag, 1
%tagbit1eq0 = icmp eq i32 %tagbit1, 0
br i1 %tagbit1eq0, label %test2, label %optional1
optional1: ; B
call void @a()
br label %test2
test2: ; C
%tagbit2 = and i32 %tag, 2
%tagbit2eq0 = icmp eq i32 %tagbit2, 0
br i1 %tagbit2eq0, label %test3, label %optional2
optional2: ; D
call void @b()
br label %test3
test3: ; E
%tagbit3 = and i32 %tag, 4
%tagbit3eq0 = icmp eq i32 %tagbit3, 0
br i1 %tagbit3eq0, label %test4, label %optional3
optional3: ; F
call void @c()
br label %test4
test4: ; G
%tagbit4 = and i32 %tag, 8
%tagbit4eq0 = icmp eq i32 %tagbit4, 0
br i1 %tagbit4eq0, label %exit, label %optional4
optional4: ; H
call void @d()
br label %exit
exit:
ret void
}
here is the layout after D27742:
straight_test: # @straight_test
; ... Prologue elided
; BB#0: # %entry ; A (merged with test1)
; ... More prologue elided
mr 30, 3
andi. 3, 30, 1
bc 12, 1, .LBB0_2
; BB#1: # %test2 ; C
rlwinm. 3, 30, 0, 30, 30
beq 0, .LBB0_3
b .LBB0_4
.LBB0_2: # %optional1 ; B (copy of C)
bl a
nop
rlwinm. 3, 30, 0, 30, 30
bne 0, .LBB0_4
.LBB0_3: # %test3 ; E
rlwinm. 3, 30, 0, 29, 29
beq 0, .LBB0_5
b .LBB0_6
.LBB0_4: # %optional2 ; D (copy of E)
bl b
nop
rlwinm. 3, 30, 0, 29, 29
bne 0, .LBB0_6
.LBB0_5: # %test4 ; G
rlwinm. 3, 30, 0, 28, 28
beq 0, .LBB0_8
b .LBB0_7
.LBB0_6: # %optional3 ; F (copy of G)
bl c
nop
rlwinm. 3, 30, 0, 28, 28
beq 0, .LBB0_8
.LBB0_7: # %optional4 ; H
bl d
nop
.LBB0_8: # %exit ; Ret
ld 30, 96(1) # 8-byte Folded Reload
addi 1, 1, 112
ld 0, 16(1)
mtlr 0
blr
The tail-duplication has produced some benefit, but it has also produced a
trellis which is not laid out optimally. With this patch, we improve the layouts
of such trellises, and decrease the cost calculation for tail-duplication
accordingly.
This patch produces the layout A,C,E,G,B,D,F,H,Ret. This layout does have
back edges, which is a negative, but it has a bigger compensating
positive, which is that it handles the case where there are long strings
of skipped blocks much better than the original layout. Both layouts
handle runs of executed blocks equally well. Branch prediction also
improves if there is any correlation between subsequent optional blocks.
Here is the resulting concrete layout:
straight_test: # @straight_test
; BB#0: # %entry ; A (merged with test1)
mr 30, 3
andi. 3, 30, 1
bc 12, 1, .LBB0_4
; BB#1: # %test2 ; C
rlwinm. 3, 30, 0, 30, 30
bne 0, .LBB0_5
.LBB0_2: # %test3 ; E
rlwinm. 3, 30, 0, 29, 29
bne 0, .LBB0_6
.LBB0_3: # %test4 ; G
rlwinm. 3, 30, 0, 28, 28
bne 0, .LBB0_7
b .LBB0_8
.LBB0_4: # %optional1 ; B (Copy of C)
bl a
nop
rlwinm. 3, 30, 0, 30, 30
beq 0, .LBB0_2
.LBB0_5: # %optional2 ; D (Copy of E)
bl b
nop
rlwinm. 3, 30, 0, 29, 29
beq 0, .LBB0_3
.LBB0_6: # %optional3 ; F (Copy of G)
bl c
nop
rlwinm. 3, 30, 0, 28, 28
beq 0, .LBB0_8
.LBB0_7: # %optional4 ; H
bl d
nop
.LBB0_8: # %exit
Differential Revision: https://reviews.llvm.org/D28522
llvm-svn: 295223
2017-02-16 03:49:14 +08:00
|
|
|
/// Get the best successor from \p BB based on \p BB being part of a trellis.
|
|
|
|
/// We only handle trellises with 2 successors, so the algorithm is
|
|
|
|
/// straightforward: Find the best pair of edges that don't conflict. We find
|
|
|
|
/// the best incoming edge for each successor in the trellis. If those conflict,
|
|
|
|
/// we consider which of them should be replaced with the second best.
|
|
|
|
/// Upon return the two best edges will be in \p BestEdges. If one of the edges
|
|
|
|
/// comes from \p BB, it will be in \p BestEdges[0]
|
|
|
|
MachineBlockPlacement::BlockAndTailDupResult
|
|
|
|
MachineBlockPlacement::getBestTrellisSuccessor(
|
|
|
|
const MachineBasicBlock *BB,
|
|
|
|
const SmallVectorImpl<MachineBasicBlock *> &ViableSuccs,
|
|
|
|
BranchProbability AdjustedSumProb, const BlockChain &Chain,
|
|
|
|
const BlockFilterSet *BlockFilter) {
|
|
|
|
|
|
|
|
BlockAndTailDupResult Result = {nullptr, false};
|
|
|
|
SmallPtrSet<const MachineBasicBlock *, 4> Successors(BB->succ_begin(),
|
|
|
|
BB->succ_end());
|
|
|
|
|
|
|
|
// We assume size 2 because it's common. For general n, we would have to do
|
|
|
|
// the Hungarian algorithm, but it's not worth the complexity because more
|
|
|
|
// than 2 successors is fairly uncommon, and a trellis even more so.
|
|
|
|
if (Successors.size() != 2 || ViableSuccs.size() != 2)
|
|
|
|
return Result;
|
|
|
|
|
|
|
|
// Collect the edge frequencies of all edges that form the trellis.
|
2017-04-12 21:26:28 +08:00
|
|
|
SmallVector<WeightedEdge, 8> Edges[2];
|
Codegen: Make chains from trellis-shaped CFGs
Lay out trellis-shaped CFGs optimally.
A trellis of the shape below:
A B
|\ /|
| \ / |
| X |
| / \ |
|/ \|
C D
would be laid out A; B->C ; D by the current layout algorithm. Now we identify
trellises and lay them out either A->C; B->D or A->D; B->C. This scales with an
increasing number of predecessors. A trellis is a a group of 2 or more
predecessor blocks that all have the same successors.
because of this we can tail duplicate to extend existing trellises.
As an example consider the following CFG:
B D F H
/ \ / \ / \ / \
A---C---E---G---Ret
Where A,C,E,G are all small (Currently 2 instructions).
The CFG preserving layout is then A,B,C,D,E,F,G,H,Ret.
The current code will copy C into B, E into D and G into F and yield the layout
A,C,B(C),E,D(E),F(G),G,H,ret
define void @straight_test(i32 %tag) {
entry:
br label %test1
test1: ; A
%tagbit1 = and i32 %tag, 1
%tagbit1eq0 = icmp eq i32 %tagbit1, 0
br i1 %tagbit1eq0, label %test2, label %optional1
optional1: ; B
call void @a()
br label %test2
test2: ; C
%tagbit2 = and i32 %tag, 2
%tagbit2eq0 = icmp eq i32 %tagbit2, 0
br i1 %tagbit2eq0, label %test3, label %optional2
optional2: ; D
call void @b()
br label %test3
test3: ; E
%tagbit3 = and i32 %tag, 4
%tagbit3eq0 = icmp eq i32 %tagbit3, 0
br i1 %tagbit3eq0, label %test4, label %optional3
optional3: ; F
call void @c()
br label %test4
test4: ; G
%tagbit4 = and i32 %tag, 8
%tagbit4eq0 = icmp eq i32 %tagbit4, 0
br i1 %tagbit4eq0, label %exit, label %optional4
optional4: ; H
call void @d()
br label %exit
exit:
ret void
}
here is the layout after D27742:
straight_test: # @straight_test
; ... Prologue elided
; BB#0: # %entry ; A (merged with test1)
; ... More prologue elided
mr 30, 3
andi. 3, 30, 1
bc 12, 1, .LBB0_2
; BB#1: # %test2 ; C
rlwinm. 3, 30, 0, 30, 30
beq 0, .LBB0_3
b .LBB0_4
.LBB0_2: # %optional1 ; B (copy of C)
bl a
nop
rlwinm. 3, 30, 0, 30, 30
bne 0, .LBB0_4
.LBB0_3: # %test3 ; E
rlwinm. 3, 30, 0, 29, 29
beq 0, .LBB0_5
b .LBB0_6
.LBB0_4: # %optional2 ; D (copy of E)
bl b
nop
rlwinm. 3, 30, 0, 29, 29
bne 0, .LBB0_6
.LBB0_5: # %test4 ; G
rlwinm. 3, 30, 0, 28, 28
beq 0, .LBB0_8
b .LBB0_7
.LBB0_6: # %optional3 ; F (copy of G)
bl c
nop
rlwinm. 3, 30, 0, 28, 28
beq 0, .LBB0_8
.LBB0_7: # %optional4 ; H
bl d
nop
.LBB0_8: # %exit ; Ret
ld 30, 96(1) # 8-byte Folded Reload
addi 1, 1, 112
ld 0, 16(1)
mtlr 0
blr
The tail-duplication has produced some benefit, but it has also produced a
trellis which is not laid out optimally. With this patch, we improve the layouts
of such trellises, and decrease the cost calculation for tail-duplication
accordingly.
This patch produces the layout A,C,E,G,B,D,F,H,Ret. This layout does have
back edges, which is a negative, but it has a bigger compensating
positive, which is that it handles the case where there are long strings
of skipped blocks much better than the original layout. Both layouts
handle runs of executed blocks equally well. Branch prediction also
improves if there is any correlation between subsequent optional blocks.
Here is the resulting concrete layout:
straight_test: # @straight_test
; BB#0: # %entry ; A (merged with test1)
mr 30, 3
andi. 3, 30, 1
bc 12, 1, .LBB0_4
; BB#1: # %test2 ; C
rlwinm. 3, 30, 0, 30, 30
bne 0, .LBB0_5
.LBB0_2: # %test3 ; E
rlwinm. 3, 30, 0, 29, 29
bne 0, .LBB0_6
.LBB0_3: # %test4 ; G
rlwinm. 3, 30, 0, 28, 28
bne 0, .LBB0_7
b .LBB0_8
.LBB0_4: # %optional1 ; B (Copy of C)
bl a
nop
rlwinm. 3, 30, 0, 30, 30
beq 0, .LBB0_2
.LBB0_5: # %optional2 ; D (Copy of E)
bl b
nop
rlwinm. 3, 30, 0, 29, 29
beq 0, .LBB0_3
.LBB0_6: # %optional3 ; F (Copy of G)
bl c
nop
rlwinm. 3, 30, 0, 28, 28
beq 0, .LBB0_8
.LBB0_7: # %optional4 ; H
bl d
nop
.LBB0_8: # %exit
Differential Revision: https://reviews.llvm.org/D28522
llvm-svn: 295223
2017-02-16 03:49:14 +08:00
|
|
|
int SuccIndex = 0;
|
|
|
|
for (auto Succ : ViableSuccs) {
|
|
|
|
for (MachineBasicBlock *SuccPred : Succ->predecessors()) {
|
|
|
|
// Skip any placed predecessors that are not BB
|
|
|
|
if (SuccPred != BB)
|
|
|
|
if ((BlockFilter && !BlockFilter->count(SuccPred)) ||
|
|
|
|
BlockToChain[SuccPred] == &Chain ||
|
|
|
|
BlockToChain[SuccPred] == BlockToChain[Succ])
|
|
|
|
continue;
|
|
|
|
BlockFrequency EdgeFreq = MBFI->getBlockFreq(SuccPred) *
|
|
|
|
MBPI->getEdgeProbability(SuccPred, Succ);
|
|
|
|
Edges[SuccIndex].push_back({EdgeFreq, SuccPred, Succ});
|
|
|
|
}
|
|
|
|
++SuccIndex;
|
|
|
|
}
|
|
|
|
|
|
|
|
// Pick the best combination of 2 edges from all the edges in the trellis.
|
|
|
|
WeightedEdge BestA, BestB;
|
|
|
|
std::tie(BestA, BestB) = getBestNonConflictingEdges(BB, Edges);
|
|
|
|
|
|
|
|
if (BestA.Src != BB) {
|
|
|
|
// If we have a trellis, and BB doesn't have the best fallthrough edges,
|
|
|
|
// we shouldn't choose any successor. We've already looked and there's a
|
|
|
|
// better fallthrough edge for all the successors.
|
2018-05-14 20:53:11 +08:00
|
|
|
LLVM_DEBUG(dbgs() << "Trellis, but not one of the chosen edges.\n");
|
Codegen: Make chains from trellis-shaped CFGs
Lay out trellis-shaped CFGs optimally.
A trellis of the shape below:
A B
|\ /|
| \ / |
| X |
| / \ |
|/ \|
C D
would be laid out A; B->C ; D by the current layout algorithm. Now we identify
trellises and lay them out either A->C; B->D or A->D; B->C. This scales with an
increasing number of predecessors. A trellis is a a group of 2 or more
predecessor blocks that all have the same successors.
because of this we can tail duplicate to extend existing trellises.
As an example consider the following CFG:
B D F H
/ \ / \ / \ / \
A---C---E---G---Ret
Where A,C,E,G are all small (Currently 2 instructions).
The CFG preserving layout is then A,B,C,D,E,F,G,H,Ret.
The current code will copy C into B, E into D and G into F and yield the layout
A,C,B(C),E,D(E),F(G),G,H,ret
define void @straight_test(i32 %tag) {
entry:
br label %test1
test1: ; A
%tagbit1 = and i32 %tag, 1
%tagbit1eq0 = icmp eq i32 %tagbit1, 0
br i1 %tagbit1eq0, label %test2, label %optional1
optional1: ; B
call void @a()
br label %test2
test2: ; C
%tagbit2 = and i32 %tag, 2
%tagbit2eq0 = icmp eq i32 %tagbit2, 0
br i1 %tagbit2eq0, label %test3, label %optional2
optional2: ; D
call void @b()
br label %test3
test3: ; E
%tagbit3 = and i32 %tag, 4
%tagbit3eq0 = icmp eq i32 %tagbit3, 0
br i1 %tagbit3eq0, label %test4, label %optional3
optional3: ; F
call void @c()
br label %test4
test4: ; G
%tagbit4 = and i32 %tag, 8
%tagbit4eq0 = icmp eq i32 %tagbit4, 0
br i1 %tagbit4eq0, label %exit, label %optional4
optional4: ; H
call void @d()
br label %exit
exit:
ret void
}
here is the layout after D27742:
straight_test: # @straight_test
; ... Prologue elided
; BB#0: # %entry ; A (merged with test1)
; ... More prologue elided
mr 30, 3
andi. 3, 30, 1
bc 12, 1, .LBB0_2
; BB#1: # %test2 ; C
rlwinm. 3, 30, 0, 30, 30
beq 0, .LBB0_3
b .LBB0_4
.LBB0_2: # %optional1 ; B (copy of C)
bl a
nop
rlwinm. 3, 30, 0, 30, 30
bne 0, .LBB0_4
.LBB0_3: # %test3 ; E
rlwinm. 3, 30, 0, 29, 29
beq 0, .LBB0_5
b .LBB0_6
.LBB0_4: # %optional2 ; D (copy of E)
bl b
nop
rlwinm. 3, 30, 0, 29, 29
bne 0, .LBB0_6
.LBB0_5: # %test4 ; G
rlwinm. 3, 30, 0, 28, 28
beq 0, .LBB0_8
b .LBB0_7
.LBB0_6: # %optional3 ; F (copy of G)
bl c
nop
rlwinm. 3, 30, 0, 28, 28
beq 0, .LBB0_8
.LBB0_7: # %optional4 ; H
bl d
nop
.LBB0_8: # %exit ; Ret
ld 30, 96(1) # 8-byte Folded Reload
addi 1, 1, 112
ld 0, 16(1)
mtlr 0
blr
The tail-duplication has produced some benefit, but it has also produced a
trellis which is not laid out optimally. With this patch, we improve the layouts
of such trellises, and decrease the cost calculation for tail-duplication
accordingly.
This patch produces the layout A,C,E,G,B,D,F,H,Ret. This layout does have
back edges, which is a negative, but it has a bigger compensating
positive, which is that it handles the case where there are long strings
of skipped blocks much better than the original layout. Both layouts
handle runs of executed blocks equally well. Branch prediction also
improves if there is any correlation between subsequent optional blocks.
Here is the resulting concrete layout:
straight_test: # @straight_test
; BB#0: # %entry ; A (merged with test1)
mr 30, 3
andi. 3, 30, 1
bc 12, 1, .LBB0_4
; BB#1: # %test2 ; C
rlwinm. 3, 30, 0, 30, 30
bne 0, .LBB0_5
.LBB0_2: # %test3 ; E
rlwinm. 3, 30, 0, 29, 29
bne 0, .LBB0_6
.LBB0_3: # %test4 ; G
rlwinm. 3, 30, 0, 28, 28
bne 0, .LBB0_7
b .LBB0_8
.LBB0_4: # %optional1 ; B (Copy of C)
bl a
nop
rlwinm. 3, 30, 0, 30, 30
beq 0, .LBB0_2
.LBB0_5: # %optional2 ; D (Copy of E)
bl b
nop
rlwinm. 3, 30, 0, 29, 29
beq 0, .LBB0_3
.LBB0_6: # %optional3 ; F (Copy of G)
bl c
nop
rlwinm. 3, 30, 0, 28, 28
beq 0, .LBB0_8
.LBB0_7: # %optional4 ; H
bl d
nop
.LBB0_8: # %exit
Differential Revision: https://reviews.llvm.org/D28522
llvm-svn: 295223
2017-02-16 03:49:14 +08:00
|
|
|
return Result;
|
|
|
|
}
|
|
|
|
|
|
|
|
// Did we pick the triangle edge? If tail-duplication is profitable, do
|
|
|
|
// that instead. Otherwise merge the triangle edge now while we know it is
|
|
|
|
// optimal.
|
|
|
|
if (BestA.Dest == BestB.Src) {
|
|
|
|
// The edges are BB->Succ1->Succ2, and we're looking to see if BB->Succ2
|
|
|
|
// would be better.
|
|
|
|
MachineBasicBlock *Succ1 = BestA.Dest;
|
|
|
|
MachineBasicBlock *Succ2 = BestB.Dest;
|
|
|
|
// Check to see if tail-duplication would be profitable.
|
[BlockPlacement] Disable block placement tail duplciation in structured CFG.
Summary:
Tail duplication easily breaks the structure of CFG, e.g. duplicating on
a region entry. If the structure is intended to be preserved, then we
may want to configure tail duplication, or disable it for structured
CFG. From our benchmark results disabling it doesn't cause performance
regression.
Notice that this currently affects AMDGPU backend. In the next patch, I
also plan to turn on requiresStructuredCFG for NVPTX.
All unit tests still pass.
Reviewers: jlebar, arsenm
Subscribers: jholewinski, sanjoy, wdng, tpr, hiraditya, llvm-commits
Differential Revision: https://reviews.llvm.org/D45008
llvm-svn: 328884
2018-03-31 01:51:00 +08:00
|
|
|
if (allowTailDupPlacement() && shouldTailDuplicate(Succ2) &&
|
Codegen: Make chains from trellis-shaped CFGs
Lay out trellis-shaped CFGs optimally.
A trellis of the shape below:
A B
|\ /|
| \ / |
| X |
| / \ |
|/ \|
C D
would be laid out A; B->C ; D by the current layout algorithm. Now we identify
trellises and lay them out either A->C; B->D or A->D; B->C. This scales with an
increasing number of predecessors. A trellis is a a group of 2 or more
predecessor blocks that all have the same successors.
because of this we can tail duplicate to extend existing trellises.
As an example consider the following CFG:
B D F H
/ \ / \ / \ / \
A---C---E---G---Ret
Where A,C,E,G are all small (Currently 2 instructions).
The CFG preserving layout is then A,B,C,D,E,F,G,H,Ret.
The current code will copy C into B, E into D and G into F and yield the layout
A,C,B(C),E,D(E),F(G),G,H,ret
define void @straight_test(i32 %tag) {
entry:
br label %test1
test1: ; A
%tagbit1 = and i32 %tag, 1
%tagbit1eq0 = icmp eq i32 %tagbit1, 0
br i1 %tagbit1eq0, label %test2, label %optional1
optional1: ; B
call void @a()
br label %test2
test2: ; C
%tagbit2 = and i32 %tag, 2
%tagbit2eq0 = icmp eq i32 %tagbit2, 0
br i1 %tagbit2eq0, label %test3, label %optional2
optional2: ; D
call void @b()
br label %test3
test3: ; E
%tagbit3 = and i32 %tag, 4
%tagbit3eq0 = icmp eq i32 %tagbit3, 0
br i1 %tagbit3eq0, label %test4, label %optional3
optional3: ; F
call void @c()
br label %test4
test4: ; G
%tagbit4 = and i32 %tag, 8
%tagbit4eq0 = icmp eq i32 %tagbit4, 0
br i1 %tagbit4eq0, label %exit, label %optional4
optional4: ; H
call void @d()
br label %exit
exit:
ret void
}
here is the layout after D27742:
straight_test: # @straight_test
; ... Prologue elided
; BB#0: # %entry ; A (merged with test1)
; ... More prologue elided
mr 30, 3
andi. 3, 30, 1
bc 12, 1, .LBB0_2
; BB#1: # %test2 ; C
rlwinm. 3, 30, 0, 30, 30
beq 0, .LBB0_3
b .LBB0_4
.LBB0_2: # %optional1 ; B (copy of C)
bl a
nop
rlwinm. 3, 30, 0, 30, 30
bne 0, .LBB0_4
.LBB0_3: # %test3 ; E
rlwinm. 3, 30, 0, 29, 29
beq 0, .LBB0_5
b .LBB0_6
.LBB0_4: # %optional2 ; D (copy of E)
bl b
nop
rlwinm. 3, 30, 0, 29, 29
bne 0, .LBB0_6
.LBB0_5: # %test4 ; G
rlwinm. 3, 30, 0, 28, 28
beq 0, .LBB0_8
b .LBB0_7
.LBB0_6: # %optional3 ; F (copy of G)
bl c
nop
rlwinm. 3, 30, 0, 28, 28
beq 0, .LBB0_8
.LBB0_7: # %optional4 ; H
bl d
nop
.LBB0_8: # %exit ; Ret
ld 30, 96(1) # 8-byte Folded Reload
addi 1, 1, 112
ld 0, 16(1)
mtlr 0
blr
The tail-duplication has produced some benefit, but it has also produced a
trellis which is not laid out optimally. With this patch, we improve the layouts
of such trellises, and decrease the cost calculation for tail-duplication
accordingly.
This patch produces the layout A,C,E,G,B,D,F,H,Ret. This layout does have
back edges, which is a negative, but it has a bigger compensating
positive, which is that it handles the case where there are long strings
of skipped blocks much better than the original layout. Both layouts
handle runs of executed blocks equally well. Branch prediction also
improves if there is any correlation between subsequent optional blocks.
Here is the resulting concrete layout:
straight_test: # @straight_test
; BB#0: # %entry ; A (merged with test1)
mr 30, 3
andi. 3, 30, 1
bc 12, 1, .LBB0_4
; BB#1: # %test2 ; C
rlwinm. 3, 30, 0, 30, 30
bne 0, .LBB0_5
.LBB0_2: # %test3 ; E
rlwinm. 3, 30, 0, 29, 29
bne 0, .LBB0_6
.LBB0_3: # %test4 ; G
rlwinm. 3, 30, 0, 28, 28
bne 0, .LBB0_7
b .LBB0_8
.LBB0_4: # %optional1 ; B (Copy of C)
bl a
nop
rlwinm. 3, 30, 0, 30, 30
beq 0, .LBB0_2
.LBB0_5: # %optional2 ; D (Copy of E)
bl b
nop
rlwinm. 3, 30, 0, 29, 29
beq 0, .LBB0_3
.LBB0_6: # %optional3 ; F (Copy of G)
bl c
nop
rlwinm. 3, 30, 0, 28, 28
beq 0, .LBB0_8
.LBB0_7: # %optional4 ; H
bl d
nop
.LBB0_8: # %exit
Differential Revision: https://reviews.llvm.org/D28522
llvm-svn: 295223
2017-02-16 03:49:14 +08:00
|
|
|
canTailDuplicateUnplacedPreds(BB, Succ2, Chain, BlockFilter) &&
|
|
|
|
isProfitableToTailDup(BB, Succ2, MBPI->getEdgeProbability(BB, Succ1),
|
|
|
|
Chain, BlockFilter)) {
|
2018-05-14 20:53:11 +08:00
|
|
|
LLVM_DEBUG(BranchProbability Succ2Prob = getAdjustedProbability(
|
|
|
|
MBPI->getEdgeProbability(BB, Succ2), AdjustedSumProb);
|
|
|
|
dbgs() << " Selected: " << getBlockName(Succ2)
|
|
|
|
<< ", probability: " << Succ2Prob
|
|
|
|
<< " (Tail Duplicate)\n");
|
Codegen: Make chains from trellis-shaped CFGs
Lay out trellis-shaped CFGs optimally.
A trellis of the shape below:
A B
|\ /|
| \ / |
| X |
| / \ |
|/ \|
C D
would be laid out A; B->C ; D by the current layout algorithm. Now we identify
trellises and lay them out either A->C; B->D or A->D; B->C. This scales with an
increasing number of predecessors. A trellis is a a group of 2 or more
predecessor blocks that all have the same successors.
because of this we can tail duplicate to extend existing trellises.
As an example consider the following CFG:
B D F H
/ \ / \ / \ / \
A---C---E---G---Ret
Where A,C,E,G are all small (Currently 2 instructions).
The CFG preserving layout is then A,B,C,D,E,F,G,H,Ret.
The current code will copy C into B, E into D and G into F and yield the layout
A,C,B(C),E,D(E),F(G),G,H,ret
define void @straight_test(i32 %tag) {
entry:
br label %test1
test1: ; A
%tagbit1 = and i32 %tag, 1
%tagbit1eq0 = icmp eq i32 %tagbit1, 0
br i1 %tagbit1eq0, label %test2, label %optional1
optional1: ; B
call void @a()
br label %test2
test2: ; C
%tagbit2 = and i32 %tag, 2
%tagbit2eq0 = icmp eq i32 %tagbit2, 0
br i1 %tagbit2eq0, label %test3, label %optional2
optional2: ; D
call void @b()
br label %test3
test3: ; E
%tagbit3 = and i32 %tag, 4
%tagbit3eq0 = icmp eq i32 %tagbit3, 0
br i1 %tagbit3eq0, label %test4, label %optional3
optional3: ; F
call void @c()
br label %test4
test4: ; G
%tagbit4 = and i32 %tag, 8
%tagbit4eq0 = icmp eq i32 %tagbit4, 0
br i1 %tagbit4eq0, label %exit, label %optional4
optional4: ; H
call void @d()
br label %exit
exit:
ret void
}
here is the layout after D27742:
straight_test: # @straight_test
; ... Prologue elided
; BB#0: # %entry ; A (merged with test1)
; ... More prologue elided
mr 30, 3
andi. 3, 30, 1
bc 12, 1, .LBB0_2
; BB#1: # %test2 ; C
rlwinm. 3, 30, 0, 30, 30
beq 0, .LBB0_3
b .LBB0_4
.LBB0_2: # %optional1 ; B (copy of C)
bl a
nop
rlwinm. 3, 30, 0, 30, 30
bne 0, .LBB0_4
.LBB0_3: # %test3 ; E
rlwinm. 3, 30, 0, 29, 29
beq 0, .LBB0_5
b .LBB0_6
.LBB0_4: # %optional2 ; D (copy of E)
bl b
nop
rlwinm. 3, 30, 0, 29, 29
bne 0, .LBB0_6
.LBB0_5: # %test4 ; G
rlwinm. 3, 30, 0, 28, 28
beq 0, .LBB0_8
b .LBB0_7
.LBB0_6: # %optional3 ; F (copy of G)
bl c
nop
rlwinm. 3, 30, 0, 28, 28
beq 0, .LBB0_8
.LBB0_7: # %optional4 ; H
bl d
nop
.LBB0_8: # %exit ; Ret
ld 30, 96(1) # 8-byte Folded Reload
addi 1, 1, 112
ld 0, 16(1)
mtlr 0
blr
The tail-duplication has produced some benefit, but it has also produced a
trellis which is not laid out optimally. With this patch, we improve the layouts
of such trellises, and decrease the cost calculation for tail-duplication
accordingly.
This patch produces the layout A,C,E,G,B,D,F,H,Ret. This layout does have
back edges, which is a negative, but it has a bigger compensating
positive, which is that it handles the case where there are long strings
of skipped blocks much better than the original layout. Both layouts
handle runs of executed blocks equally well. Branch prediction also
improves if there is any correlation between subsequent optional blocks.
Here is the resulting concrete layout:
straight_test: # @straight_test
; BB#0: # %entry ; A (merged with test1)
mr 30, 3
andi. 3, 30, 1
bc 12, 1, .LBB0_4
; BB#1: # %test2 ; C
rlwinm. 3, 30, 0, 30, 30
bne 0, .LBB0_5
.LBB0_2: # %test3 ; E
rlwinm. 3, 30, 0, 29, 29
bne 0, .LBB0_6
.LBB0_3: # %test4 ; G
rlwinm. 3, 30, 0, 28, 28
bne 0, .LBB0_7
b .LBB0_8
.LBB0_4: # %optional1 ; B (Copy of C)
bl a
nop
rlwinm. 3, 30, 0, 30, 30
beq 0, .LBB0_2
.LBB0_5: # %optional2 ; D (Copy of E)
bl b
nop
rlwinm. 3, 30, 0, 29, 29
beq 0, .LBB0_3
.LBB0_6: # %optional3 ; F (Copy of G)
bl c
nop
rlwinm. 3, 30, 0, 28, 28
beq 0, .LBB0_8
.LBB0_7: # %optional4 ; H
bl d
nop
.LBB0_8: # %exit
Differential Revision: https://reviews.llvm.org/D28522
llvm-svn: 295223
2017-02-16 03:49:14 +08:00
|
|
|
Result.BB = Succ2;
|
|
|
|
Result.ShouldTailDup = true;
|
|
|
|
return Result;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
// We have already computed the optimal edge for the other side of the
|
|
|
|
// trellis.
|
2017-02-24 05:22:24 +08:00
|
|
|
ComputedEdges[BestB.Src] = { BestB.Dest, false };
|
Codegen: Make chains from trellis-shaped CFGs
Lay out trellis-shaped CFGs optimally.
A trellis of the shape below:
A B
|\ /|
| \ / |
| X |
| / \ |
|/ \|
C D
would be laid out A; B->C ; D by the current layout algorithm. Now we identify
trellises and lay them out either A->C; B->D or A->D; B->C. This scales with an
increasing number of predecessors. A trellis is a a group of 2 or more
predecessor blocks that all have the same successors.
because of this we can tail duplicate to extend existing trellises.
As an example consider the following CFG:
B D F H
/ \ / \ / \ / \
A---C---E---G---Ret
Where A,C,E,G are all small (Currently 2 instructions).
The CFG preserving layout is then A,B,C,D,E,F,G,H,Ret.
The current code will copy C into B, E into D and G into F and yield the layout
A,C,B(C),E,D(E),F(G),G,H,ret
define void @straight_test(i32 %tag) {
entry:
br label %test1
test1: ; A
%tagbit1 = and i32 %tag, 1
%tagbit1eq0 = icmp eq i32 %tagbit1, 0
br i1 %tagbit1eq0, label %test2, label %optional1
optional1: ; B
call void @a()
br label %test2
test2: ; C
%tagbit2 = and i32 %tag, 2
%tagbit2eq0 = icmp eq i32 %tagbit2, 0
br i1 %tagbit2eq0, label %test3, label %optional2
optional2: ; D
call void @b()
br label %test3
test3: ; E
%tagbit3 = and i32 %tag, 4
%tagbit3eq0 = icmp eq i32 %tagbit3, 0
br i1 %tagbit3eq0, label %test4, label %optional3
optional3: ; F
call void @c()
br label %test4
test4: ; G
%tagbit4 = and i32 %tag, 8
%tagbit4eq0 = icmp eq i32 %tagbit4, 0
br i1 %tagbit4eq0, label %exit, label %optional4
optional4: ; H
call void @d()
br label %exit
exit:
ret void
}
here is the layout after D27742:
straight_test: # @straight_test
; ... Prologue elided
; BB#0: # %entry ; A (merged with test1)
; ... More prologue elided
mr 30, 3
andi. 3, 30, 1
bc 12, 1, .LBB0_2
; BB#1: # %test2 ; C
rlwinm. 3, 30, 0, 30, 30
beq 0, .LBB0_3
b .LBB0_4
.LBB0_2: # %optional1 ; B (copy of C)
bl a
nop
rlwinm. 3, 30, 0, 30, 30
bne 0, .LBB0_4
.LBB0_3: # %test3 ; E
rlwinm. 3, 30, 0, 29, 29
beq 0, .LBB0_5
b .LBB0_6
.LBB0_4: # %optional2 ; D (copy of E)
bl b
nop
rlwinm. 3, 30, 0, 29, 29
bne 0, .LBB0_6
.LBB0_5: # %test4 ; G
rlwinm. 3, 30, 0, 28, 28
beq 0, .LBB0_8
b .LBB0_7
.LBB0_6: # %optional3 ; F (copy of G)
bl c
nop
rlwinm. 3, 30, 0, 28, 28
beq 0, .LBB0_8
.LBB0_7: # %optional4 ; H
bl d
nop
.LBB0_8: # %exit ; Ret
ld 30, 96(1) # 8-byte Folded Reload
addi 1, 1, 112
ld 0, 16(1)
mtlr 0
blr
The tail-duplication has produced some benefit, but it has also produced a
trellis which is not laid out optimally. With this patch, we improve the layouts
of such trellises, and decrease the cost calculation for tail-duplication
accordingly.
This patch produces the layout A,C,E,G,B,D,F,H,Ret. This layout does have
back edges, which is a negative, but it has a bigger compensating
positive, which is that it handles the case where there are long strings
of skipped blocks much better than the original layout. Both layouts
handle runs of executed blocks equally well. Branch prediction also
improves if there is any correlation between subsequent optional blocks.
Here is the resulting concrete layout:
straight_test: # @straight_test
; BB#0: # %entry ; A (merged with test1)
mr 30, 3
andi. 3, 30, 1
bc 12, 1, .LBB0_4
; BB#1: # %test2 ; C
rlwinm. 3, 30, 0, 30, 30
bne 0, .LBB0_5
.LBB0_2: # %test3 ; E
rlwinm. 3, 30, 0, 29, 29
bne 0, .LBB0_6
.LBB0_3: # %test4 ; G
rlwinm. 3, 30, 0, 28, 28
bne 0, .LBB0_7
b .LBB0_8
.LBB0_4: # %optional1 ; B (Copy of C)
bl a
nop
rlwinm. 3, 30, 0, 30, 30
beq 0, .LBB0_2
.LBB0_5: # %optional2 ; D (Copy of E)
bl b
nop
rlwinm. 3, 30, 0, 29, 29
beq 0, .LBB0_3
.LBB0_6: # %optional3 ; F (Copy of G)
bl c
nop
rlwinm. 3, 30, 0, 28, 28
beq 0, .LBB0_8
.LBB0_7: # %optional4 ; H
bl d
nop
.LBB0_8: # %exit
Differential Revision: https://reviews.llvm.org/D28522
llvm-svn: 295223
2017-02-16 03:49:14 +08:00
|
|
|
|
|
|
|
auto TrellisSucc = BestA.Dest;
|
2018-05-14 20:53:11 +08:00
|
|
|
LLVM_DEBUG(BranchProbability SuccProb = getAdjustedProbability(
|
|
|
|
MBPI->getEdgeProbability(BB, TrellisSucc), AdjustedSumProb);
|
|
|
|
dbgs() << " Selected: " << getBlockName(TrellisSucc)
|
|
|
|
<< ", probability: " << SuccProb << " (Trellis)\n");
|
Codegen: Make chains from trellis-shaped CFGs
Lay out trellis-shaped CFGs optimally.
A trellis of the shape below:
A B
|\ /|
| \ / |
| X |
| / \ |
|/ \|
C D
would be laid out A; B->C ; D by the current layout algorithm. Now we identify
trellises and lay them out either A->C; B->D or A->D; B->C. This scales with an
increasing number of predecessors. A trellis is a a group of 2 or more
predecessor blocks that all have the same successors.
because of this we can tail duplicate to extend existing trellises.
As an example consider the following CFG:
B D F H
/ \ / \ / \ / \
A---C---E---G---Ret
Where A,C,E,G are all small (Currently 2 instructions).
The CFG preserving layout is then A,B,C,D,E,F,G,H,Ret.
The current code will copy C into B, E into D and G into F and yield the layout
A,C,B(C),E,D(E),F(G),G,H,ret
define void @straight_test(i32 %tag) {
entry:
br label %test1
test1: ; A
%tagbit1 = and i32 %tag, 1
%tagbit1eq0 = icmp eq i32 %tagbit1, 0
br i1 %tagbit1eq0, label %test2, label %optional1
optional1: ; B
call void @a()
br label %test2
test2: ; C
%tagbit2 = and i32 %tag, 2
%tagbit2eq0 = icmp eq i32 %tagbit2, 0
br i1 %tagbit2eq0, label %test3, label %optional2
optional2: ; D
call void @b()
br label %test3
test3: ; E
%tagbit3 = and i32 %tag, 4
%tagbit3eq0 = icmp eq i32 %tagbit3, 0
br i1 %tagbit3eq0, label %test4, label %optional3
optional3: ; F
call void @c()
br label %test4
test4: ; G
%tagbit4 = and i32 %tag, 8
%tagbit4eq0 = icmp eq i32 %tagbit4, 0
br i1 %tagbit4eq0, label %exit, label %optional4
optional4: ; H
call void @d()
br label %exit
exit:
ret void
}
here is the layout after D27742:
straight_test: # @straight_test
; ... Prologue elided
; BB#0: # %entry ; A (merged with test1)
; ... More prologue elided
mr 30, 3
andi. 3, 30, 1
bc 12, 1, .LBB0_2
; BB#1: # %test2 ; C
rlwinm. 3, 30, 0, 30, 30
beq 0, .LBB0_3
b .LBB0_4
.LBB0_2: # %optional1 ; B (copy of C)
bl a
nop
rlwinm. 3, 30, 0, 30, 30
bne 0, .LBB0_4
.LBB0_3: # %test3 ; E
rlwinm. 3, 30, 0, 29, 29
beq 0, .LBB0_5
b .LBB0_6
.LBB0_4: # %optional2 ; D (copy of E)
bl b
nop
rlwinm. 3, 30, 0, 29, 29
bne 0, .LBB0_6
.LBB0_5: # %test4 ; G
rlwinm. 3, 30, 0, 28, 28
beq 0, .LBB0_8
b .LBB0_7
.LBB0_6: # %optional3 ; F (copy of G)
bl c
nop
rlwinm. 3, 30, 0, 28, 28
beq 0, .LBB0_8
.LBB0_7: # %optional4 ; H
bl d
nop
.LBB0_8: # %exit ; Ret
ld 30, 96(1) # 8-byte Folded Reload
addi 1, 1, 112
ld 0, 16(1)
mtlr 0
blr
The tail-duplication has produced some benefit, but it has also produced a
trellis which is not laid out optimally. With this patch, we improve the layouts
of such trellises, and decrease the cost calculation for tail-duplication
accordingly.
This patch produces the layout A,C,E,G,B,D,F,H,Ret. This layout does have
back edges, which is a negative, but it has a bigger compensating
positive, which is that it handles the case where there are long strings
of skipped blocks much better than the original layout. Both layouts
handle runs of executed blocks equally well. Branch prediction also
improves if there is any correlation between subsequent optional blocks.
Here is the resulting concrete layout:
straight_test: # @straight_test
; BB#0: # %entry ; A (merged with test1)
mr 30, 3
andi. 3, 30, 1
bc 12, 1, .LBB0_4
; BB#1: # %test2 ; C
rlwinm. 3, 30, 0, 30, 30
bne 0, .LBB0_5
.LBB0_2: # %test3 ; E
rlwinm. 3, 30, 0, 29, 29
bne 0, .LBB0_6
.LBB0_3: # %test4 ; G
rlwinm. 3, 30, 0, 28, 28
bne 0, .LBB0_7
b .LBB0_8
.LBB0_4: # %optional1 ; B (Copy of C)
bl a
nop
rlwinm. 3, 30, 0, 30, 30
beq 0, .LBB0_2
.LBB0_5: # %optional2 ; D (Copy of E)
bl b
nop
rlwinm. 3, 30, 0, 29, 29
beq 0, .LBB0_3
.LBB0_6: # %optional3 ; F (Copy of G)
bl c
nop
rlwinm. 3, 30, 0, 28, 28
beq 0, .LBB0_8
.LBB0_7: # %optional4 ; H
bl d
nop
.LBB0_8: # %exit
Differential Revision: https://reviews.llvm.org/D28522
llvm-svn: 295223
2017-02-16 03:49:14 +08:00
|
|
|
Result.BB = TrellisSucc;
|
|
|
|
return Result;
|
|
|
|
}
|
2017-02-01 07:48:32 +08:00
|
|
|
|
[BlockPlacement] Disable block placement tail duplciation in structured CFG.
Summary:
Tail duplication easily breaks the structure of CFG, e.g. duplicating on
a region entry. If the structure is intended to be preserved, then we
may want to configure tail duplication, or disable it for structured
CFG. From our benchmark results disabling it doesn't cause performance
regression.
Notice that this currently affects AMDGPU backend. In the next patch, I
also plan to turn on requiresStructuredCFG for NVPTX.
All unit tests still pass.
Reviewers: jlebar, arsenm
Subscribers: jholewinski, sanjoy, wdng, tpr, hiraditya, llvm-commits
Differential Revision: https://reviews.llvm.org/D45008
llvm-svn: 328884
2018-03-31 01:51:00 +08:00
|
|
|
/// When the option allowTailDupPlacement() is on, this method checks if the
|
2017-02-01 07:48:32 +08:00
|
|
|
/// fallthrough candidate block \p Succ (of block \p BB) can be tail-duplicated
|
|
|
|
/// into all of its unplaced, unfiltered predecessors, that are not BB.
|
|
|
|
bool MachineBlockPlacement::canTailDuplicateUnplacedPreds(
|
2017-02-04 10:26:32 +08:00
|
|
|
const MachineBasicBlock *BB, MachineBasicBlock *Succ,
|
|
|
|
const BlockChain &Chain, const BlockFilterSet *BlockFilter) {
|
2017-02-01 07:48:32 +08:00
|
|
|
if (!shouldTailDuplicate(Succ))
|
|
|
|
return false;
|
|
|
|
|
2019-12-05 08:01:20 +08:00
|
|
|
// The result of canTailDuplicate.
|
|
|
|
bool Duplicate = true;
|
|
|
|
// Number of possible duplication.
|
|
|
|
unsigned int NumDup = 0;
|
|
|
|
|
Codegen: Make chains from trellis-shaped CFGs
Lay out trellis-shaped CFGs optimally.
A trellis of the shape below:
A B
|\ /|
| \ / |
| X |
| / \ |
|/ \|
C D
would be laid out A; B->C ; D by the current layout algorithm. Now we identify
trellises and lay them out either A->C; B->D or A->D; B->C. This scales with an
increasing number of predecessors. A trellis is a a group of 2 or more
predecessor blocks that all have the same successors.
because of this we can tail duplicate to extend existing trellises.
As an example consider the following CFG:
B D F H
/ \ / \ / \ / \
A---C---E---G---Ret
Where A,C,E,G are all small (Currently 2 instructions).
The CFG preserving layout is then A,B,C,D,E,F,G,H,Ret.
The current code will copy C into B, E into D and G into F and yield the layout
A,C,B(C),E,D(E),F(G),G,H,ret
define void @straight_test(i32 %tag) {
entry:
br label %test1
test1: ; A
%tagbit1 = and i32 %tag, 1
%tagbit1eq0 = icmp eq i32 %tagbit1, 0
br i1 %tagbit1eq0, label %test2, label %optional1
optional1: ; B
call void @a()
br label %test2
test2: ; C
%tagbit2 = and i32 %tag, 2
%tagbit2eq0 = icmp eq i32 %tagbit2, 0
br i1 %tagbit2eq0, label %test3, label %optional2
optional2: ; D
call void @b()
br label %test3
test3: ; E
%tagbit3 = and i32 %tag, 4
%tagbit3eq0 = icmp eq i32 %tagbit3, 0
br i1 %tagbit3eq0, label %test4, label %optional3
optional3: ; F
call void @c()
br label %test4
test4: ; G
%tagbit4 = and i32 %tag, 8
%tagbit4eq0 = icmp eq i32 %tagbit4, 0
br i1 %tagbit4eq0, label %exit, label %optional4
optional4: ; H
call void @d()
br label %exit
exit:
ret void
}
here is the layout after D27742:
straight_test: # @straight_test
; ... Prologue elided
; BB#0: # %entry ; A (merged with test1)
; ... More prologue elided
mr 30, 3
andi. 3, 30, 1
bc 12, 1, .LBB0_2
; BB#1: # %test2 ; C
rlwinm. 3, 30, 0, 30, 30
beq 0, .LBB0_3
b .LBB0_4
.LBB0_2: # %optional1 ; B (copy of C)
bl a
nop
rlwinm. 3, 30, 0, 30, 30
bne 0, .LBB0_4
.LBB0_3: # %test3 ; E
rlwinm. 3, 30, 0, 29, 29
beq 0, .LBB0_5
b .LBB0_6
.LBB0_4: # %optional2 ; D (copy of E)
bl b
nop
rlwinm. 3, 30, 0, 29, 29
bne 0, .LBB0_6
.LBB0_5: # %test4 ; G
rlwinm. 3, 30, 0, 28, 28
beq 0, .LBB0_8
b .LBB0_7
.LBB0_6: # %optional3 ; F (copy of G)
bl c
nop
rlwinm. 3, 30, 0, 28, 28
beq 0, .LBB0_8
.LBB0_7: # %optional4 ; H
bl d
nop
.LBB0_8: # %exit ; Ret
ld 30, 96(1) # 8-byte Folded Reload
addi 1, 1, 112
ld 0, 16(1)
mtlr 0
blr
The tail-duplication has produced some benefit, but it has also produced a
trellis which is not laid out optimally. With this patch, we improve the layouts
of such trellises, and decrease the cost calculation for tail-duplication
accordingly.
This patch produces the layout A,C,E,G,B,D,F,H,Ret. This layout does have
back edges, which is a negative, but it has a bigger compensating
positive, which is that it handles the case where there are long strings
of skipped blocks much better than the original layout. Both layouts
handle runs of executed blocks equally well. Branch prediction also
improves if there is any correlation between subsequent optional blocks.
Here is the resulting concrete layout:
straight_test: # @straight_test
; BB#0: # %entry ; A (merged with test1)
mr 30, 3
andi. 3, 30, 1
bc 12, 1, .LBB0_4
; BB#1: # %test2 ; C
rlwinm. 3, 30, 0, 30, 30
bne 0, .LBB0_5
.LBB0_2: # %test3 ; E
rlwinm. 3, 30, 0, 29, 29
bne 0, .LBB0_6
.LBB0_3: # %test4 ; G
rlwinm. 3, 30, 0, 28, 28
bne 0, .LBB0_7
b .LBB0_8
.LBB0_4: # %optional1 ; B (Copy of C)
bl a
nop
rlwinm. 3, 30, 0, 30, 30
beq 0, .LBB0_2
.LBB0_5: # %optional2 ; D (Copy of E)
bl b
nop
rlwinm. 3, 30, 0, 29, 29
beq 0, .LBB0_3
.LBB0_6: # %optional3 ; F (Copy of G)
bl c
nop
rlwinm. 3, 30, 0, 28, 28
beq 0, .LBB0_8
.LBB0_7: # %optional4 ; H
bl d
nop
.LBB0_8: # %exit
Differential Revision: https://reviews.llvm.org/D28522
llvm-svn: 295223
2017-02-16 03:49:14 +08:00
|
|
|
// For CFG checking.
|
|
|
|
SmallPtrSet<const MachineBasicBlock *, 4> Successors(BB->succ_begin(),
|
|
|
|
BB->succ_end());
|
2017-02-01 07:48:32 +08:00
|
|
|
for (MachineBasicBlock *Pred : Succ->predecessors()) {
|
|
|
|
// Make sure all unplaced and unfiltered predecessors can be
|
|
|
|
// tail-duplicated into.
|
2017-02-04 10:26:32 +08:00
|
|
|
// Skip any blocks that are already placed or not in this loop.
|
2017-02-01 07:48:32 +08:00
|
|
|
if (Pred == BB || (BlockFilter && !BlockFilter->count(Pred))
|
|
|
|
|| BlockToChain[Pred] == &Chain)
|
|
|
|
continue;
|
Codegen: Make chains from trellis-shaped CFGs
Lay out trellis-shaped CFGs optimally.
A trellis of the shape below:
A B
|\ /|
| \ / |
| X |
| / \ |
|/ \|
C D
would be laid out A; B->C ; D by the current layout algorithm. Now we identify
trellises and lay them out either A->C; B->D or A->D; B->C. This scales with an
increasing number of predecessors. A trellis is a a group of 2 or more
predecessor blocks that all have the same successors.
because of this we can tail duplicate to extend existing trellises.
As an example consider the following CFG:
B D F H
/ \ / \ / \ / \
A---C---E---G---Ret
Where A,C,E,G are all small (Currently 2 instructions).
The CFG preserving layout is then A,B,C,D,E,F,G,H,Ret.
The current code will copy C into B, E into D and G into F and yield the layout
A,C,B(C),E,D(E),F(G),G,H,ret
define void @straight_test(i32 %tag) {
entry:
br label %test1
test1: ; A
%tagbit1 = and i32 %tag, 1
%tagbit1eq0 = icmp eq i32 %tagbit1, 0
br i1 %tagbit1eq0, label %test2, label %optional1
optional1: ; B
call void @a()
br label %test2
test2: ; C
%tagbit2 = and i32 %tag, 2
%tagbit2eq0 = icmp eq i32 %tagbit2, 0
br i1 %tagbit2eq0, label %test3, label %optional2
optional2: ; D
call void @b()
br label %test3
test3: ; E
%tagbit3 = and i32 %tag, 4
%tagbit3eq0 = icmp eq i32 %tagbit3, 0
br i1 %tagbit3eq0, label %test4, label %optional3
optional3: ; F
call void @c()
br label %test4
test4: ; G
%tagbit4 = and i32 %tag, 8
%tagbit4eq0 = icmp eq i32 %tagbit4, 0
br i1 %tagbit4eq0, label %exit, label %optional4
optional4: ; H
call void @d()
br label %exit
exit:
ret void
}
here is the layout after D27742:
straight_test: # @straight_test
; ... Prologue elided
; BB#0: # %entry ; A (merged with test1)
; ... More prologue elided
mr 30, 3
andi. 3, 30, 1
bc 12, 1, .LBB0_2
; BB#1: # %test2 ; C
rlwinm. 3, 30, 0, 30, 30
beq 0, .LBB0_3
b .LBB0_4
.LBB0_2: # %optional1 ; B (copy of C)
bl a
nop
rlwinm. 3, 30, 0, 30, 30
bne 0, .LBB0_4
.LBB0_3: # %test3 ; E
rlwinm. 3, 30, 0, 29, 29
beq 0, .LBB0_5
b .LBB0_6
.LBB0_4: # %optional2 ; D (copy of E)
bl b
nop
rlwinm. 3, 30, 0, 29, 29
bne 0, .LBB0_6
.LBB0_5: # %test4 ; G
rlwinm. 3, 30, 0, 28, 28
beq 0, .LBB0_8
b .LBB0_7
.LBB0_6: # %optional3 ; F (copy of G)
bl c
nop
rlwinm. 3, 30, 0, 28, 28
beq 0, .LBB0_8
.LBB0_7: # %optional4 ; H
bl d
nop
.LBB0_8: # %exit ; Ret
ld 30, 96(1) # 8-byte Folded Reload
addi 1, 1, 112
ld 0, 16(1)
mtlr 0
blr
The tail-duplication has produced some benefit, but it has also produced a
trellis which is not laid out optimally. With this patch, we improve the layouts
of such trellises, and decrease the cost calculation for tail-duplication
accordingly.
This patch produces the layout A,C,E,G,B,D,F,H,Ret. This layout does have
back edges, which is a negative, but it has a bigger compensating
positive, which is that it handles the case where there are long strings
of skipped blocks much better than the original layout. Both layouts
handle runs of executed blocks equally well. Branch prediction also
improves if there is any correlation between subsequent optional blocks.
Here is the resulting concrete layout:
straight_test: # @straight_test
; BB#0: # %entry ; A (merged with test1)
mr 30, 3
andi. 3, 30, 1
bc 12, 1, .LBB0_4
; BB#1: # %test2 ; C
rlwinm. 3, 30, 0, 30, 30
bne 0, .LBB0_5
.LBB0_2: # %test3 ; E
rlwinm. 3, 30, 0, 29, 29
bne 0, .LBB0_6
.LBB0_3: # %test4 ; G
rlwinm. 3, 30, 0, 28, 28
bne 0, .LBB0_7
b .LBB0_8
.LBB0_4: # %optional1 ; B (Copy of C)
bl a
nop
rlwinm. 3, 30, 0, 30, 30
beq 0, .LBB0_2
.LBB0_5: # %optional2 ; D (Copy of E)
bl b
nop
rlwinm. 3, 30, 0, 29, 29
beq 0, .LBB0_3
.LBB0_6: # %optional3 ; F (Copy of G)
bl c
nop
rlwinm. 3, 30, 0, 28, 28
beq 0, .LBB0_8
.LBB0_7: # %optional4 ; H
bl d
nop
.LBB0_8: # %exit
Differential Revision: https://reviews.llvm.org/D28522
llvm-svn: 295223
2017-02-16 03:49:14 +08:00
|
|
|
if (!TailDup.canTailDuplicate(Succ, Pred)) {
|
|
|
|
if (Successors.size() > 1 && hasSameSuccessors(*Pred, Successors))
|
|
|
|
// This will result in a trellis after tail duplication, so we don't
|
|
|
|
// need to copy Succ into this predecessor. In the presence
|
|
|
|
// of a trellis tail duplication can continue to be profitable.
|
|
|
|
// For example:
|
|
|
|
// A A
|
|
|
|
// |\ |\
|
|
|
|
// | \ | \
|
|
|
|
// | C | C+BB
|
|
|
|
// | / | |
|
|
|
|
// |/ | |
|
|
|
|
// BB => BB |
|
|
|
|
// |\ |\/|
|
|
|
|
// | \ |/\|
|
|
|
|
// | D | D
|
|
|
|
// | / | /
|
|
|
|
// |/ |/
|
|
|
|
// Succ Succ
|
|
|
|
//
|
|
|
|
// After BB was duplicated into C, the layout looks like the one on the
|
|
|
|
// right. BB and C now have the same successors. When considering
|
|
|
|
// whether Succ can be duplicated into all its unplaced predecessors, we
|
|
|
|
// ignore C.
|
|
|
|
// We can do this because C already has a profitable fallthrough, namely
|
|
|
|
// D. TODO(iteratee): ignore sufficiently cold predecessors for
|
|
|
|
// duplication and for this test.
|
|
|
|
//
|
|
|
|
// This allows trellises to be laid out in 2 separate chains
|
|
|
|
// (A,B,Succ,...) and later (C,D,...) This is a reasonable heuristic
|
|
|
|
// because it allows the creation of 2 fallthrough paths with links
|
|
|
|
// between them, and we correctly identify the best layout for these
|
|
|
|
// CFGs. We want to extend trellises that the user created in addition
|
|
|
|
// to trellises created by tail-duplication, so we just look for the
|
|
|
|
// CFG.
|
|
|
|
continue;
|
2019-12-05 08:01:20 +08:00
|
|
|
Duplicate = false;
|
|
|
|
continue;
|
Codegen: Make chains from trellis-shaped CFGs
Lay out trellis-shaped CFGs optimally.
A trellis of the shape below:
A B
|\ /|
| \ / |
| X |
| / \ |
|/ \|
C D
would be laid out A; B->C ; D by the current layout algorithm. Now we identify
trellises and lay them out either A->C; B->D or A->D; B->C. This scales with an
increasing number of predecessors. A trellis is a a group of 2 or more
predecessor blocks that all have the same successors.
because of this we can tail duplicate to extend existing trellises.
As an example consider the following CFG:
B D F H
/ \ / \ / \ / \
A---C---E---G---Ret
Where A,C,E,G are all small (Currently 2 instructions).
The CFG preserving layout is then A,B,C,D,E,F,G,H,Ret.
The current code will copy C into B, E into D and G into F and yield the layout
A,C,B(C),E,D(E),F(G),G,H,ret
define void @straight_test(i32 %tag) {
entry:
br label %test1
test1: ; A
%tagbit1 = and i32 %tag, 1
%tagbit1eq0 = icmp eq i32 %tagbit1, 0
br i1 %tagbit1eq0, label %test2, label %optional1
optional1: ; B
call void @a()
br label %test2
test2: ; C
%tagbit2 = and i32 %tag, 2
%tagbit2eq0 = icmp eq i32 %tagbit2, 0
br i1 %tagbit2eq0, label %test3, label %optional2
optional2: ; D
call void @b()
br label %test3
test3: ; E
%tagbit3 = and i32 %tag, 4
%tagbit3eq0 = icmp eq i32 %tagbit3, 0
br i1 %tagbit3eq0, label %test4, label %optional3
optional3: ; F
call void @c()
br label %test4
test4: ; G
%tagbit4 = and i32 %tag, 8
%tagbit4eq0 = icmp eq i32 %tagbit4, 0
br i1 %tagbit4eq0, label %exit, label %optional4
optional4: ; H
call void @d()
br label %exit
exit:
ret void
}
here is the layout after D27742:
straight_test: # @straight_test
; ... Prologue elided
; BB#0: # %entry ; A (merged with test1)
; ... More prologue elided
mr 30, 3
andi. 3, 30, 1
bc 12, 1, .LBB0_2
; BB#1: # %test2 ; C
rlwinm. 3, 30, 0, 30, 30
beq 0, .LBB0_3
b .LBB0_4
.LBB0_2: # %optional1 ; B (copy of C)
bl a
nop
rlwinm. 3, 30, 0, 30, 30
bne 0, .LBB0_4
.LBB0_3: # %test3 ; E
rlwinm. 3, 30, 0, 29, 29
beq 0, .LBB0_5
b .LBB0_6
.LBB0_4: # %optional2 ; D (copy of E)
bl b
nop
rlwinm. 3, 30, 0, 29, 29
bne 0, .LBB0_6
.LBB0_5: # %test4 ; G
rlwinm. 3, 30, 0, 28, 28
beq 0, .LBB0_8
b .LBB0_7
.LBB0_6: # %optional3 ; F (copy of G)
bl c
nop
rlwinm. 3, 30, 0, 28, 28
beq 0, .LBB0_8
.LBB0_7: # %optional4 ; H
bl d
nop
.LBB0_8: # %exit ; Ret
ld 30, 96(1) # 8-byte Folded Reload
addi 1, 1, 112
ld 0, 16(1)
mtlr 0
blr
The tail-duplication has produced some benefit, but it has also produced a
trellis which is not laid out optimally. With this patch, we improve the layouts
of such trellises, and decrease the cost calculation for tail-duplication
accordingly.
This patch produces the layout A,C,E,G,B,D,F,H,Ret. This layout does have
back edges, which is a negative, but it has a bigger compensating
positive, which is that it handles the case where there are long strings
of skipped blocks much better than the original layout. Both layouts
handle runs of executed blocks equally well. Branch prediction also
improves if there is any correlation between subsequent optional blocks.
Here is the resulting concrete layout:
straight_test: # @straight_test
; BB#0: # %entry ; A (merged with test1)
mr 30, 3
andi. 3, 30, 1
bc 12, 1, .LBB0_4
; BB#1: # %test2 ; C
rlwinm. 3, 30, 0, 30, 30
bne 0, .LBB0_5
.LBB0_2: # %test3 ; E
rlwinm. 3, 30, 0, 29, 29
bne 0, .LBB0_6
.LBB0_3: # %test4 ; G
rlwinm. 3, 30, 0, 28, 28
bne 0, .LBB0_7
b .LBB0_8
.LBB0_4: # %optional1 ; B (Copy of C)
bl a
nop
rlwinm. 3, 30, 0, 30, 30
beq 0, .LBB0_2
.LBB0_5: # %optional2 ; D (Copy of E)
bl b
nop
rlwinm. 3, 30, 0, 29, 29
beq 0, .LBB0_3
.LBB0_6: # %optional3 ; F (Copy of G)
bl c
nop
rlwinm. 3, 30, 0, 28, 28
beq 0, .LBB0_8
.LBB0_7: # %optional4 ; H
bl d
nop
.LBB0_8: # %exit
Differential Revision: https://reviews.llvm.org/D28522
llvm-svn: 295223
2017-02-16 03:49:14 +08:00
|
|
|
}
|
2019-12-05 08:01:20 +08:00
|
|
|
NumDup++;
|
2017-02-01 07:48:32 +08:00
|
|
|
}
|
2019-12-05 08:01:20 +08:00
|
|
|
|
|
|
|
// No possible duplication in current filter set.
|
|
|
|
if (NumDup == 0)
|
|
|
|
return false;
|
|
|
|
|
2020-02-13 07:22:33 +08:00
|
|
|
// If profile information is available, findDuplicateCandidates can do more
|
|
|
|
// precise benefit analysis.
|
|
|
|
if (F->getFunction().hasProfileData())
|
|
|
|
return true;
|
|
|
|
|
2019-12-05 08:01:20 +08:00
|
|
|
// This is mainly for function exit BB.
|
|
|
|
// The integrated tail duplication is really designed for increasing
|
|
|
|
// fallthrough from predecessors from Succ to its successors. We may need
|
|
|
|
// other machanism to handle different cases.
|
|
|
|
if (Succ->succ_size() == 0)
|
|
|
|
return true;
|
|
|
|
|
|
|
|
// Plus the already placed predecessor.
|
|
|
|
NumDup++;
|
|
|
|
|
|
|
|
// If the duplication candidate has more unplaced predecessors than
|
|
|
|
// successors, the extra duplication can't bring more fallthrough.
|
|
|
|
//
|
|
|
|
// Pred1 Pred2 Pred3
|
|
|
|
// \ | /
|
|
|
|
// \ | /
|
|
|
|
// \ | /
|
|
|
|
// Dup
|
|
|
|
// / \
|
|
|
|
// / \
|
|
|
|
// Succ1 Succ2
|
|
|
|
//
|
|
|
|
// In this example Dup has 2 successors and 3 predecessors, duplication of Dup
|
|
|
|
// can increase the fallthrough from Pred1 to Succ1 and from Pred2 to Succ2,
|
|
|
|
// but the duplication into Pred3 can't increase fallthrough.
|
|
|
|
//
|
|
|
|
// A small number of extra duplication may not hurt too much. We need a better
|
|
|
|
// heuristic to handle it.
|
|
|
|
if ((NumDup > Succ->succ_size()) || !Duplicate)
|
|
|
|
return false;
|
|
|
|
|
2017-02-01 07:48:32 +08:00
|
|
|
return true;
|
|
|
|
}
|
|
|
|
|
2017-03-03 09:00:22 +08:00
|
|
|
/// Find chains of triangles where we believe it would be profitable to
|
|
|
|
/// tail-duplicate them all, but a local analysis would not find them.
|
|
|
|
/// There are 3 ways this can be profitable:
|
|
|
|
/// 1) The post-dominators marked 50% are actually taken 55% (This shrinks with
|
|
|
|
/// longer chains)
|
|
|
|
/// 2) The chains are statically correlated. Branch probabilities have a very
|
|
|
|
/// U-shaped distribution.
|
|
|
|
/// [http://nrs.harvard.edu/urn-3:HUL.InstRepos:24015805]
|
|
|
|
/// If the branches in a chain are likely to be from the same side of the
|
|
|
|
/// distribution as their predecessor, but are independent at runtime, this
|
|
|
|
/// transformation is profitable. (Because the cost of being wrong is a small
|
|
|
|
/// fixed cost, unlike the standard triangle layout where the cost of being
|
|
|
|
/// wrong scales with the # of triangles.)
|
|
|
|
/// 3) The chains are dynamically correlated. If the probability that a previous
|
|
|
|
/// branch was taken positively influences whether the next branch will be
|
|
|
|
/// taken
|
|
|
|
/// We believe that 2 and 3 are common enough to justify the small margin in 1.
|
|
|
|
void MachineBlockPlacement::precomputeTriangleChains() {
|
|
|
|
struct TriangleChain {
|
2017-04-12 21:26:28 +08:00
|
|
|
std::vector<MachineBasicBlock *> Edges;
|
2017-08-25 05:21:39 +08:00
|
|
|
|
2017-04-12 21:26:28 +08:00
|
|
|
TriangleChain(MachineBasicBlock *src, MachineBasicBlock *dst)
|
|
|
|
: Edges({src, dst}) {}
|
2017-03-03 09:00:22 +08:00
|
|
|
|
|
|
|
void append(MachineBasicBlock *dst) {
|
2017-04-12 21:26:28 +08:00
|
|
|
assert(getKey()->isSuccessor(dst) &&
|
2017-03-03 09:00:22 +08:00
|
|
|
"Attempting to append a block that is not a successor.");
|
2017-04-12 21:26:28 +08:00
|
|
|
Edges.push_back(dst);
|
2017-03-03 09:00:22 +08:00
|
|
|
}
|
|
|
|
|
2017-04-12 21:26:28 +08:00
|
|
|
unsigned count() const { return Edges.size() - 1; }
|
|
|
|
|
|
|
|
MachineBasicBlock *getKey() const {
|
|
|
|
return Edges.back();
|
2017-03-03 09:00:22 +08:00
|
|
|
}
|
|
|
|
};
|
|
|
|
|
|
|
|
if (TriangleChainCount == 0)
|
|
|
|
return;
|
|
|
|
|
2018-05-14 20:53:11 +08:00
|
|
|
LLVM_DEBUG(dbgs() << "Pre-computing triangle chains.\n");
|
2017-03-03 09:00:22 +08:00
|
|
|
// Map from last block to the chain that contains it. This allows us to extend
|
|
|
|
// chains as we find new triangles.
|
|
|
|
DenseMap<const MachineBasicBlock *, TriangleChain> TriangleChainMap;
|
|
|
|
for (MachineBasicBlock &BB : *F) {
|
|
|
|
// If BB doesn't have 2 successors, it doesn't start a triangle.
|
|
|
|
if (BB.succ_size() != 2)
|
|
|
|
continue;
|
|
|
|
MachineBasicBlock *PDom = nullptr;
|
|
|
|
for (MachineBasicBlock *Succ : BB.successors()) {
|
|
|
|
if (!MPDT->dominates(Succ, &BB))
|
|
|
|
continue;
|
|
|
|
PDom = Succ;
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
// If BB doesn't have a post-dominating successor, it doesn't form a
|
|
|
|
// triangle.
|
|
|
|
if (PDom == nullptr)
|
|
|
|
continue;
|
|
|
|
// If PDom has a hint that it is low probability, skip this triangle.
|
|
|
|
if (MBPI->getEdgeProbability(&BB, PDom) < BranchProbability(50, 100))
|
|
|
|
continue;
|
|
|
|
// If PDom isn't eligible for duplication, this isn't the kind of triangle
|
|
|
|
// we're looking for.
|
|
|
|
if (!shouldTailDuplicate(PDom))
|
|
|
|
continue;
|
|
|
|
bool CanTailDuplicate = true;
|
|
|
|
// If PDom can't tail-duplicate into it's non-BB predecessors, then this
|
|
|
|
// isn't the kind of triangle we're looking for.
|
|
|
|
for (MachineBasicBlock* Pred : PDom->predecessors()) {
|
|
|
|
if (Pred == &BB)
|
|
|
|
continue;
|
|
|
|
if (!TailDup.canTailDuplicate(PDom, Pred)) {
|
|
|
|
CanTailDuplicate = false;
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
// If we can't tail-duplicate PDom to its predecessors, then skip this
|
|
|
|
// triangle.
|
|
|
|
if (!CanTailDuplicate)
|
|
|
|
continue;
|
|
|
|
|
|
|
|
// Now we have an interesting triangle. Insert it if it's not part of an
|
2017-06-16 20:23:04 +08:00
|
|
|
// existing chain.
|
2017-03-03 09:00:22 +08:00
|
|
|
// Note: This cannot be replaced with a call insert() or emplace() because
|
|
|
|
// the find key is BB, but the insert/emplace key is PDom.
|
|
|
|
auto Found = TriangleChainMap.find(&BB);
|
|
|
|
// If it is, remove the chain from the map, grow it, and put it back in the
|
|
|
|
// map with the end as the new key.
|
|
|
|
if (Found != TriangleChainMap.end()) {
|
|
|
|
TriangleChain Chain = std::move(Found->second);
|
|
|
|
TriangleChainMap.erase(Found);
|
|
|
|
Chain.append(PDom);
|
|
|
|
TriangleChainMap.insert(std::make_pair(Chain.getKey(), std::move(Chain)));
|
|
|
|
} else {
|
|
|
|
auto InsertResult = TriangleChainMap.try_emplace(PDom, &BB, PDom);
|
2017-04-12 21:26:31 +08:00
|
|
|
assert(InsertResult.second && "Block seen twice.");
|
|
|
|
(void)InsertResult;
|
2017-03-03 09:00:22 +08:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2017-04-13 02:30:32 +08:00
|
|
|
// Iterating over a DenseMap is safe here, because the only thing in the body
|
|
|
|
// of the loop is inserting into another DenseMap (ComputedEdges).
|
|
|
|
// ComputedEdges is never iterated, so this doesn't lead to non-determinism.
|
2017-03-03 09:00:22 +08:00
|
|
|
for (auto &ChainPair : TriangleChainMap) {
|
|
|
|
TriangleChain &Chain = ChainPair.second;
|
|
|
|
// Benchmarking has shown that due to branch correlation duplicating 2 or
|
|
|
|
// more triangles is profitable, despite the calculations assuming
|
|
|
|
// independence.
|
2017-04-12 21:26:28 +08:00
|
|
|
if (Chain.count() < TriangleChainCount)
|
2017-03-03 09:00:22 +08:00
|
|
|
continue;
|
2017-04-12 21:26:28 +08:00
|
|
|
MachineBasicBlock *dst = Chain.Edges.back();
|
|
|
|
Chain.Edges.pop_back();
|
|
|
|
for (MachineBasicBlock *src : reverse(Chain.Edges)) {
|
2018-05-14 20:53:11 +08:00
|
|
|
LLVM_DEBUG(dbgs() << "Marking edge: " << getBlockName(src) << "->"
|
|
|
|
<< getBlockName(dst)
|
|
|
|
<< " as pre-computed based on triangles.\n");
|
2017-04-12 21:26:31 +08:00
|
|
|
|
|
|
|
auto InsertResult = ComputedEdges.insert({src, {dst, true}});
|
|
|
|
assert(InsertResult.second && "Block seen twice.");
|
|
|
|
(void)InsertResult;
|
|
|
|
|
2017-03-03 09:00:22 +08:00
|
|
|
dst = src;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2016-06-15 06:27:17 +08:00
|
|
|
// When profile is not present, return the StaticLikelyProb.
|
|
|
|
// When profile is available, we need to handle the triangle-shape CFG.
|
|
|
|
static BranchProbability getLayoutSuccessorProbThreshold(
|
2017-02-04 10:26:32 +08:00
|
|
|
const MachineBasicBlock *BB) {
|
2017-12-22 09:33:52 +08:00
|
|
|
if (!BB->getParent()->getFunction().hasProfileData())
|
2016-06-15 06:27:17 +08:00
|
|
|
return BranchProbability(StaticLikelyProb, 100);
|
|
|
|
if (BB->succ_size() == 2) {
|
|
|
|
const MachineBasicBlock *Succ1 = *BB->succ_begin();
|
|
|
|
const MachineBasicBlock *Succ2 = *(BB->succ_begin() + 1);
|
2016-06-15 11:03:30 +08:00
|
|
|
if (Succ1->isSuccessor(Succ2) || Succ2->isSuccessor(Succ1)) {
|
|
|
|
/* See case 1 below for the cost analysis. For BB->Succ to
|
|
|
|
* be taken with smaller cost, the following needs to hold:
|
2017-02-01 07:48:32 +08:00
|
|
|
* Prob(BB->Succ) > 2 * Prob(BB->Pred)
|
|
|
|
* So the threshold T in the calculation below
|
|
|
|
* (1-T) * Prob(BB->Succ) > T * Prob(BB->Pred)
|
|
|
|
* So T / (1 - T) = 2, Yielding T = 2/3
|
|
|
|
* Also adding user specified branch bias, we have
|
2016-06-15 11:03:30 +08:00
|
|
|
* T = (2/3)*(ProfileLikelyProb/50)
|
|
|
|
* = (2*ProfileLikelyProb)/150)
|
|
|
|
*/
|
|
|
|
return BranchProbability(2 * ProfileLikelyProb, 150);
|
|
|
|
}
|
2016-06-15 06:27:17 +08:00
|
|
|
}
|
|
|
|
return BranchProbability(ProfileLikelyProb, 100);
|
2016-06-14 04:24:19 +08:00
|
|
|
}
|
|
|
|
|
|
|
|
/// Checks to see if the layout candidate block \p Succ has a better layout
|
|
|
|
/// predecessor than \c BB. If yes, returns true.
|
2017-02-01 07:48:32 +08:00
|
|
|
/// \p SuccProb: The probability adjusted for only remaining blocks.
|
|
|
|
/// Only used for logging
|
|
|
|
/// \p RealSuccProb: The un-adjusted probability.
|
|
|
|
/// \p Chain: The chain that BB belongs to and Succ is being considered for.
|
|
|
|
/// \p BlockFilter: if non-null, the set of blocks that make up the loop being
|
|
|
|
/// considered
|
2016-06-14 04:24:19 +08:00
|
|
|
bool MachineBlockPlacement::hasBetterLayoutPredecessor(
|
2017-02-04 10:26:32 +08:00
|
|
|
const MachineBasicBlock *BB, const MachineBasicBlock *Succ,
|
|
|
|
const BlockChain &SuccChain, BranchProbability SuccProb,
|
|
|
|
BranchProbability RealSuccProb, const BlockChain &Chain,
|
|
|
|
const BlockFilterSet *BlockFilter) {
|
2016-06-14 04:24:19 +08:00
|
|
|
|
2016-07-16 02:41:56 +08:00
|
|
|
// There isn't a better layout when there are no unscheduled predecessors.
|
2016-06-14 04:24:19 +08:00
|
|
|
if (SuccChain.UnscheduledPredecessors == 0)
|
|
|
|
return false;
|
|
|
|
|
|
|
|
// There are two basic scenarios here:
|
|
|
|
// -------------------------------------
|
2016-07-16 02:41:56 +08:00
|
|
|
// Case 1: triangular shape CFG (if-then):
|
2016-06-14 04:24:19 +08:00
|
|
|
// BB
|
|
|
|
// | \
|
|
|
|
// | \
|
|
|
|
// | Pred
|
|
|
|
// | /
|
|
|
|
// Succ
|
|
|
|
// In this case, we are evaluating whether to select edge -> Succ, e.g.
|
|
|
|
// set Succ as the layout successor of BB. Picking Succ as BB's
|
2016-07-16 02:41:56 +08:00
|
|
|
// successor breaks the CFG constraints (FIXME: define these constraints).
|
|
|
|
// With this layout, Pred BB
|
2016-06-14 04:24:19 +08:00
|
|
|
// is forced to be outlined, so the overall cost will be cost of the
|
|
|
|
// branch taken from BB to Pred, plus the cost of back taken branch
|
2016-07-16 02:41:56 +08:00
|
|
|
// from Pred to Succ, as well as the additional cost associated
|
2016-06-14 04:24:19 +08:00
|
|
|
// with the needed unconditional jump instruction from Pred To Succ.
|
2016-07-16 02:41:56 +08:00
|
|
|
|
2016-06-14 04:24:19 +08:00
|
|
|
// The cost of the topological order layout is the taken branch cost
|
|
|
|
// from BB to Succ, so to make BB->Succ a viable candidate, the following
|
|
|
|
// must hold:
|
|
|
|
// 2 * freq(BB->Pred) * taken_branch_cost + unconditional_jump_cost
|
|
|
|
// < freq(BB->Succ) * taken_branch_cost.
|
|
|
|
// Ignoring unconditional jump cost, we get
|
|
|
|
// freq(BB->Succ) > 2 * freq(BB->Pred), i.e.,
|
|
|
|
// prob(BB->Succ) > 2 * prob(BB->Pred)
|
|
|
|
//
|
2016-07-16 02:41:56 +08:00
|
|
|
// When real profile data is available, we can precisely compute the
|
|
|
|
// probability threshold that is needed for edge BB->Succ to be considered.
|
|
|
|
// Without profile data, the heuristic requires the branch bias to be
|
2016-06-14 04:24:19 +08:00
|
|
|
// a lot larger to make sure the signal is very strong (e.g. 80% default).
|
|
|
|
// -----------------------------------------------------------------
|
2016-07-16 02:41:56 +08:00
|
|
|
// Case 2: diamond like CFG (if-then-else):
|
2016-06-14 04:24:19 +08:00
|
|
|
// S
|
|
|
|
// / \
|
|
|
|
// | \
|
|
|
|
// BB Pred
|
|
|
|
// \ /
|
|
|
|
// Succ
|
|
|
|
// ..
|
2016-07-16 02:41:56 +08:00
|
|
|
//
|
|
|
|
// The current block is BB and edge BB->Succ is now being evaluated.
|
|
|
|
// Note that edge S->BB was previously already selected because
|
|
|
|
// prob(S->BB) > prob(S->Pred).
|
|
|
|
// At this point, 2 blocks can be placed after BB: Pred or Succ. If we
|
|
|
|
// choose Pred, we will have a topological ordering as shown on the left
|
|
|
|
// in the picture below. If we choose Succ, we have the solution as shown
|
|
|
|
// on the right:
|
|
|
|
//
|
|
|
|
// topo-order:
|
|
|
|
//
|
|
|
|
// S----- ---S
|
|
|
|
// | | | |
|
|
|
|
// ---BB | | BB
|
|
|
|
// | | | |
|
2017-06-16 20:23:04 +08:00
|
|
|
// | Pred-- | Succ--
|
2016-07-16 02:41:56 +08:00
|
|
|
// | | | |
|
2017-06-16 20:23:04 +08:00
|
|
|
// ---Succ ---Pred--
|
2016-07-16 02:41:56 +08:00
|
|
|
//
|
|
|
|
// cost = freq(S->Pred) + freq(BB->Succ) cost = 2 * freq (S->Pred)
|
|
|
|
// = freq(S->Pred) + freq(S->BB)
|
|
|
|
//
|
|
|
|
// If we have profile data (i.e, branch probabilities can be trusted), the
|
|
|
|
// cost (number of taken branches) with layout S->BB->Succ->Pred is 2 *
|
|
|
|
// freq(S->Pred) while the cost of topo order is freq(S->Pred) + freq(S->BB).
|
|
|
|
// We know Prob(S->BB) > Prob(S->Pred), so freq(S->BB) > freq(S->Pred), which
|
|
|
|
// means the cost of topological order is greater.
|
2016-06-14 04:24:19 +08:00
|
|
|
// When profile data is not available, however, we need to be more
|
|
|
|
// conservative. If the branch prediction is wrong, breaking the topo-order
|
|
|
|
// will actually yield a layout with large cost. For this reason, we need
|
2016-07-16 02:41:56 +08:00
|
|
|
// strong biased branch at block S with Prob(S->BB) in order to select
|
|
|
|
// BB->Succ. This is equivalent to looking the CFG backward with backward
|
2016-06-14 04:24:19 +08:00
|
|
|
// edge: Prob(Succ->BB) needs to >= HotProb in order to be selected (without
|
|
|
|
// profile data).
|
2016-07-30 02:09:28 +08:00
|
|
|
// --------------------------------------------------------------------------
|
|
|
|
// Case 3: forked diamond
|
|
|
|
// S
|
|
|
|
// / \
|
|
|
|
// / \
|
|
|
|
// BB Pred
|
|
|
|
// | \ / |
|
|
|
|
// | \ / |
|
|
|
|
// | X |
|
|
|
|
// | / \ |
|
|
|
|
// | / \ |
|
|
|
|
// S1 S2
|
|
|
|
//
|
|
|
|
// The current block is BB and edge BB->S1 is now being evaluated.
|
|
|
|
// As above S->BB was already selected because
|
|
|
|
// prob(S->BB) > prob(S->Pred). Assume that prob(BB->S1) >= prob(BB->S2).
|
|
|
|
//
|
|
|
|
// topo-order:
|
|
|
|
//
|
|
|
|
// S-------| ---S
|
|
|
|
// | | | |
|
|
|
|
// ---BB | | BB
|
|
|
|
// | | | |
|
|
|
|
// | Pred----| | S1----
|
|
|
|
// | | | |
|
|
|
|
// --(S1 or S2) ---Pred--
|
Codegen: Make chains from trellis-shaped CFGs
Lay out trellis-shaped CFGs optimally.
A trellis of the shape below:
A B
|\ /|
| \ / |
| X |
| / \ |
|/ \|
C D
would be laid out A; B->C ; D by the current layout algorithm. Now we identify
trellises and lay them out either A->C; B->D or A->D; B->C. This scales with an
increasing number of predecessors. A trellis is a a group of 2 or more
predecessor blocks that all have the same successors.
because of this we can tail duplicate to extend existing trellises.
As an example consider the following CFG:
B D F H
/ \ / \ / \ / \
A---C---E---G---Ret
Where A,C,E,G are all small (Currently 2 instructions).
The CFG preserving layout is then A,B,C,D,E,F,G,H,Ret.
The current code will copy C into B, E into D and G into F and yield the layout
A,C,B(C),E,D(E),F(G),G,H,ret
define void @straight_test(i32 %tag) {
entry:
br label %test1
test1: ; A
%tagbit1 = and i32 %tag, 1
%tagbit1eq0 = icmp eq i32 %tagbit1, 0
br i1 %tagbit1eq0, label %test2, label %optional1
optional1: ; B
call void @a()
br label %test2
test2: ; C
%tagbit2 = and i32 %tag, 2
%tagbit2eq0 = icmp eq i32 %tagbit2, 0
br i1 %tagbit2eq0, label %test3, label %optional2
optional2: ; D
call void @b()
br label %test3
test3: ; E
%tagbit3 = and i32 %tag, 4
%tagbit3eq0 = icmp eq i32 %tagbit3, 0
br i1 %tagbit3eq0, label %test4, label %optional3
optional3: ; F
call void @c()
br label %test4
test4: ; G
%tagbit4 = and i32 %tag, 8
%tagbit4eq0 = icmp eq i32 %tagbit4, 0
br i1 %tagbit4eq0, label %exit, label %optional4
optional4: ; H
call void @d()
br label %exit
exit:
ret void
}
here is the layout after D27742:
straight_test: # @straight_test
; ... Prologue elided
; BB#0: # %entry ; A (merged with test1)
; ... More prologue elided
mr 30, 3
andi. 3, 30, 1
bc 12, 1, .LBB0_2
; BB#1: # %test2 ; C
rlwinm. 3, 30, 0, 30, 30
beq 0, .LBB0_3
b .LBB0_4
.LBB0_2: # %optional1 ; B (copy of C)
bl a
nop
rlwinm. 3, 30, 0, 30, 30
bne 0, .LBB0_4
.LBB0_3: # %test3 ; E
rlwinm. 3, 30, 0, 29, 29
beq 0, .LBB0_5
b .LBB0_6
.LBB0_4: # %optional2 ; D (copy of E)
bl b
nop
rlwinm. 3, 30, 0, 29, 29
bne 0, .LBB0_6
.LBB0_5: # %test4 ; G
rlwinm. 3, 30, 0, 28, 28
beq 0, .LBB0_8
b .LBB0_7
.LBB0_6: # %optional3 ; F (copy of G)
bl c
nop
rlwinm. 3, 30, 0, 28, 28
beq 0, .LBB0_8
.LBB0_7: # %optional4 ; H
bl d
nop
.LBB0_8: # %exit ; Ret
ld 30, 96(1) # 8-byte Folded Reload
addi 1, 1, 112
ld 0, 16(1)
mtlr 0
blr
The tail-duplication has produced some benefit, but it has also produced a
trellis which is not laid out optimally. With this patch, we improve the layouts
of such trellises, and decrease the cost calculation for tail-duplication
accordingly.
This patch produces the layout A,C,E,G,B,D,F,H,Ret. This layout does have
back edges, which is a negative, but it has a bigger compensating
positive, which is that it handles the case where there are long strings
of skipped blocks much better than the original layout. Both layouts
handle runs of executed blocks equally well. Branch prediction also
improves if there is any correlation between subsequent optional blocks.
Here is the resulting concrete layout:
straight_test: # @straight_test
; BB#0: # %entry ; A (merged with test1)
mr 30, 3
andi. 3, 30, 1
bc 12, 1, .LBB0_4
; BB#1: # %test2 ; C
rlwinm. 3, 30, 0, 30, 30
bne 0, .LBB0_5
.LBB0_2: # %test3 ; E
rlwinm. 3, 30, 0, 29, 29
bne 0, .LBB0_6
.LBB0_3: # %test4 ; G
rlwinm. 3, 30, 0, 28, 28
bne 0, .LBB0_7
b .LBB0_8
.LBB0_4: # %optional1 ; B (Copy of C)
bl a
nop
rlwinm. 3, 30, 0, 30, 30
beq 0, .LBB0_2
.LBB0_5: # %optional2 ; D (Copy of E)
bl b
nop
rlwinm. 3, 30, 0, 29, 29
beq 0, .LBB0_3
.LBB0_6: # %optional3 ; F (Copy of G)
bl c
nop
rlwinm. 3, 30, 0, 28, 28
beq 0, .LBB0_8
.LBB0_7: # %optional4 ; H
bl d
nop
.LBB0_8: # %exit
Differential Revision: https://reviews.llvm.org/D28522
llvm-svn: 295223
2017-02-16 03:49:14 +08:00
|
|
|
// |
|
|
|
|
// S2
|
2016-07-30 02:09:28 +08:00
|
|
|
//
|
|
|
|
// topo-cost = freq(S->Pred) + freq(BB->S1) + freq(BB->S2)
|
|
|
|
// + min(freq(Pred->S1), freq(Pred->S2))
|
|
|
|
// Non-topo-order cost:
|
|
|
|
// non-topo-cost = 2 * freq(S->Pred) + freq(BB->S2).
|
|
|
|
// To be conservative, we can assume that min(freq(Pred->S1), freq(Pred->S2))
|
|
|
|
// is 0. Then the non topo layout is better when
|
|
|
|
// freq(S->Pred) < freq(BB->S1).
|
|
|
|
// This is exactly what is checked below.
|
|
|
|
// Note there are other shapes that apply (Pred may not be a single block,
|
|
|
|
// but they all fit this general pattern.)
|
2016-06-15 06:27:17 +08:00
|
|
|
BranchProbability HotProb = getLayoutSuccessorProbThreshold(BB);
|
2016-06-14 04:24:19 +08:00
|
|
|
|
|
|
|
// Make sure that a hot successor doesn't have a globally more
|
|
|
|
// important predecessor.
|
|
|
|
BlockFrequency CandidateEdgeFreq = MBFI->getBlockFreq(BB) * RealSuccProb;
|
|
|
|
bool BadCFGConflict = false;
|
|
|
|
|
|
|
|
for (MachineBasicBlock *Pred : Succ->predecessors()) {
|
2019-12-05 08:01:20 +08:00
|
|
|
BlockChain *PredChain = BlockToChain[Pred];
|
|
|
|
if (Pred == Succ || PredChain == &SuccChain ||
|
2016-06-14 04:24:19 +08:00
|
|
|
(BlockFilter && !BlockFilter->count(Pred)) ||
|
2019-12-05 08:01:20 +08:00
|
|
|
PredChain == &Chain || Pred != *std::prev(PredChain->end()) ||
|
2017-02-01 07:48:32 +08:00
|
|
|
// This check is redundant except for look ahead. This function is
|
|
|
|
// called for lookahead by isProfitableToTailDup when BB hasn't been
|
|
|
|
// placed yet.
|
|
|
|
(Pred == BB))
|
2016-06-14 04:24:19 +08:00
|
|
|
continue;
|
2016-07-30 02:09:28 +08:00
|
|
|
// Do backward checking.
|
|
|
|
// For all cases above, we need a backward checking to filter out edges that
|
2017-02-01 07:48:32 +08:00
|
|
|
// are not 'strongly' biased.
|
2016-06-14 04:24:19 +08:00
|
|
|
// BB Pred
|
|
|
|
// \ /
|
|
|
|
// Succ
|
2016-07-16 02:41:56 +08:00
|
|
|
// We select edge BB->Succ if
|
2016-06-14 04:24:19 +08:00
|
|
|
// freq(BB->Succ) > freq(Succ) * HotProb
|
|
|
|
// i.e. freq(BB->Succ) > freq(BB->Succ) * HotProb + freq(Pred->Succ) *
|
|
|
|
// HotProb
|
|
|
|
// i.e. freq((BB->Succ) * (1 - HotProb) > freq(Pred->Succ) * HotProb
|
2016-07-30 02:09:28 +08:00
|
|
|
// Case 1 is covered too, because the first equation reduces to:
|
|
|
|
// prob(BB->Succ) > HotProb. (freq(Succ) = freq(BB) for a triangle)
|
2016-06-14 04:24:19 +08:00
|
|
|
BlockFrequency PredEdgeFreq =
|
|
|
|
MBFI->getBlockFreq(Pred) * MBPI->getEdgeProbability(Pred, Succ);
|
|
|
|
if (PredEdgeFreq * HotProb >= CandidateEdgeFreq * HotProb.getCompl()) {
|
|
|
|
BadCFGConflict = true;
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
if (BadCFGConflict) {
|
2018-05-14 20:53:11 +08:00
|
|
|
LLVM_DEBUG(dbgs() << " Not a candidate: " << getBlockName(Succ) << " -> "
|
|
|
|
<< SuccProb << " (prob) (non-cold CFG conflict)\n");
|
2016-06-14 04:24:19 +08:00
|
|
|
return true;
|
|
|
|
}
|
|
|
|
|
|
|
|
return false;
|
|
|
|
}
|
|
|
|
|
2018-05-01 23:54:18 +08:00
|
|
|
/// Select the best successor for a block.
|
2016-06-12 02:35:40 +08:00
|
|
|
///
|
|
|
|
/// This looks across all successors of a particular block and attempts to
|
|
|
|
/// select the "best" one to be the layout successor. It only considers direct
|
|
|
|
/// successors which also pass the block filter. It will attempt to avoid
|
|
|
|
/// breaking CFG structure, but cave and break such structures in the case of
|
|
|
|
/// very hot successor edges.
|
|
|
|
///
|
2017-02-01 07:48:32 +08:00
|
|
|
/// \returns The best successor block found, or null if none are viable, along
|
|
|
|
/// with a boolean indicating if tail duplication is necessary.
|
|
|
|
MachineBlockPlacement::BlockAndTailDupResult
|
2017-02-04 10:26:32 +08:00
|
|
|
MachineBlockPlacement::selectBestSuccessor(
|
|
|
|
const MachineBasicBlock *BB, const BlockChain &Chain,
|
|
|
|
const BlockFilterSet *BlockFilter) {
|
2016-06-12 02:35:40 +08:00
|
|
|
const BranchProbability HotProb(StaticLikelyProb, 100);
|
|
|
|
|
2017-02-01 07:48:32 +08:00
|
|
|
BlockAndTailDupResult BestSucc = { nullptr, false };
|
2016-06-12 02:35:40 +08:00
|
|
|
auto BestProb = BranchProbability::getZero();
|
|
|
|
|
|
|
|
SmallVector<MachineBasicBlock *, 4> Successors;
|
|
|
|
auto AdjustedSumProb =
|
|
|
|
collectViableSuccessors(BB, Chain, BlockFilter, Successors);
|
|
|
|
|
2018-05-14 20:53:11 +08:00
|
|
|
LLVM_DEBUG(dbgs() << "Selecting best successor for: " << getBlockName(BB)
|
|
|
|
<< "\n");
|
2017-02-01 07:48:32 +08:00
|
|
|
|
2017-02-24 05:22:24 +08:00
|
|
|
// if we already precomputed the best successor for BB, return that if still
|
|
|
|
// applicable.
|
|
|
|
auto FoundEdge = ComputedEdges.find(BB);
|
|
|
|
if (FoundEdge != ComputedEdges.end()) {
|
|
|
|
MachineBasicBlock *Succ = FoundEdge->second.BB;
|
|
|
|
ComputedEdges.erase(FoundEdge);
|
Codegen: Make chains from trellis-shaped CFGs
Lay out trellis-shaped CFGs optimally.
A trellis of the shape below:
A B
|\ /|
| \ / |
| X |
| / \ |
|/ \|
C D
would be laid out A; B->C ; D by the current layout algorithm. Now we identify
trellises and lay them out either A->C; B->D or A->D; B->C. This scales with an
increasing number of predecessors. A trellis is a a group of 2 or more
predecessor blocks that all have the same successors.
because of this we can tail duplicate to extend existing trellises.
As an example consider the following CFG:
B D F H
/ \ / \ / \ / \
A---C---E---G---Ret
Where A,C,E,G are all small (Currently 2 instructions).
The CFG preserving layout is then A,B,C,D,E,F,G,H,Ret.
The current code will copy C into B, E into D and G into F and yield the layout
A,C,B(C),E,D(E),F(G),G,H,ret
define void @straight_test(i32 %tag) {
entry:
br label %test1
test1: ; A
%tagbit1 = and i32 %tag, 1
%tagbit1eq0 = icmp eq i32 %tagbit1, 0
br i1 %tagbit1eq0, label %test2, label %optional1
optional1: ; B
call void @a()
br label %test2
test2: ; C
%tagbit2 = and i32 %tag, 2
%tagbit2eq0 = icmp eq i32 %tagbit2, 0
br i1 %tagbit2eq0, label %test3, label %optional2
optional2: ; D
call void @b()
br label %test3
test3: ; E
%tagbit3 = and i32 %tag, 4
%tagbit3eq0 = icmp eq i32 %tagbit3, 0
br i1 %tagbit3eq0, label %test4, label %optional3
optional3: ; F
call void @c()
br label %test4
test4: ; G
%tagbit4 = and i32 %tag, 8
%tagbit4eq0 = icmp eq i32 %tagbit4, 0
br i1 %tagbit4eq0, label %exit, label %optional4
optional4: ; H
call void @d()
br label %exit
exit:
ret void
}
here is the layout after D27742:
straight_test: # @straight_test
; ... Prologue elided
; BB#0: # %entry ; A (merged with test1)
; ... More prologue elided
mr 30, 3
andi. 3, 30, 1
bc 12, 1, .LBB0_2
; BB#1: # %test2 ; C
rlwinm. 3, 30, 0, 30, 30
beq 0, .LBB0_3
b .LBB0_4
.LBB0_2: # %optional1 ; B (copy of C)
bl a
nop
rlwinm. 3, 30, 0, 30, 30
bne 0, .LBB0_4
.LBB0_3: # %test3 ; E
rlwinm. 3, 30, 0, 29, 29
beq 0, .LBB0_5
b .LBB0_6
.LBB0_4: # %optional2 ; D (copy of E)
bl b
nop
rlwinm. 3, 30, 0, 29, 29
bne 0, .LBB0_6
.LBB0_5: # %test4 ; G
rlwinm. 3, 30, 0, 28, 28
beq 0, .LBB0_8
b .LBB0_7
.LBB0_6: # %optional3 ; F (copy of G)
bl c
nop
rlwinm. 3, 30, 0, 28, 28
beq 0, .LBB0_8
.LBB0_7: # %optional4 ; H
bl d
nop
.LBB0_8: # %exit ; Ret
ld 30, 96(1) # 8-byte Folded Reload
addi 1, 1, 112
ld 0, 16(1)
mtlr 0
blr
The tail-duplication has produced some benefit, but it has also produced a
trellis which is not laid out optimally. With this patch, we improve the layouts
of such trellises, and decrease the cost calculation for tail-duplication
accordingly.
This patch produces the layout A,C,E,G,B,D,F,H,Ret. This layout does have
back edges, which is a negative, but it has a bigger compensating
positive, which is that it handles the case where there are long strings
of skipped blocks much better than the original layout. Both layouts
handle runs of executed blocks equally well. Branch prediction also
improves if there is any correlation between subsequent optional blocks.
Here is the resulting concrete layout:
straight_test: # @straight_test
; BB#0: # %entry ; A (merged with test1)
mr 30, 3
andi. 3, 30, 1
bc 12, 1, .LBB0_4
; BB#1: # %test2 ; C
rlwinm. 3, 30, 0, 30, 30
bne 0, .LBB0_5
.LBB0_2: # %test3 ; E
rlwinm. 3, 30, 0, 29, 29
bne 0, .LBB0_6
.LBB0_3: # %test4 ; G
rlwinm. 3, 30, 0, 28, 28
bne 0, .LBB0_7
b .LBB0_8
.LBB0_4: # %optional1 ; B (Copy of C)
bl a
nop
rlwinm. 3, 30, 0, 30, 30
beq 0, .LBB0_2
.LBB0_5: # %optional2 ; D (Copy of E)
bl b
nop
rlwinm. 3, 30, 0, 29, 29
beq 0, .LBB0_3
.LBB0_6: # %optional3 ; F (Copy of G)
bl c
nop
rlwinm. 3, 30, 0, 28, 28
beq 0, .LBB0_8
.LBB0_7: # %optional4 ; H
bl d
nop
.LBB0_8: # %exit
Differential Revision: https://reviews.llvm.org/D28522
llvm-svn: 295223
2017-02-16 03:49:14 +08:00
|
|
|
BlockChain *SuccChain = BlockToChain[Succ];
|
|
|
|
if (BB->isSuccessor(Succ) && (!BlockFilter || BlockFilter->count(Succ)) &&
|
2017-02-24 05:22:24 +08:00
|
|
|
SuccChain != &Chain && Succ == *SuccChain->begin())
|
|
|
|
return FoundEdge->second;
|
Codegen: Make chains from trellis-shaped CFGs
Lay out trellis-shaped CFGs optimally.
A trellis of the shape below:
A B
|\ /|
| \ / |
| X |
| / \ |
|/ \|
C D
would be laid out A; B->C ; D by the current layout algorithm. Now we identify
trellises and lay them out either A->C; B->D or A->D; B->C. This scales with an
increasing number of predecessors. A trellis is a a group of 2 or more
predecessor blocks that all have the same successors.
because of this we can tail duplicate to extend existing trellises.
As an example consider the following CFG:
B D F H
/ \ / \ / \ / \
A---C---E---G---Ret
Where A,C,E,G are all small (Currently 2 instructions).
The CFG preserving layout is then A,B,C,D,E,F,G,H,Ret.
The current code will copy C into B, E into D and G into F and yield the layout
A,C,B(C),E,D(E),F(G),G,H,ret
define void @straight_test(i32 %tag) {
entry:
br label %test1
test1: ; A
%tagbit1 = and i32 %tag, 1
%tagbit1eq0 = icmp eq i32 %tagbit1, 0
br i1 %tagbit1eq0, label %test2, label %optional1
optional1: ; B
call void @a()
br label %test2
test2: ; C
%tagbit2 = and i32 %tag, 2
%tagbit2eq0 = icmp eq i32 %tagbit2, 0
br i1 %tagbit2eq0, label %test3, label %optional2
optional2: ; D
call void @b()
br label %test3
test3: ; E
%tagbit3 = and i32 %tag, 4
%tagbit3eq0 = icmp eq i32 %tagbit3, 0
br i1 %tagbit3eq0, label %test4, label %optional3
optional3: ; F
call void @c()
br label %test4
test4: ; G
%tagbit4 = and i32 %tag, 8
%tagbit4eq0 = icmp eq i32 %tagbit4, 0
br i1 %tagbit4eq0, label %exit, label %optional4
optional4: ; H
call void @d()
br label %exit
exit:
ret void
}
here is the layout after D27742:
straight_test: # @straight_test
; ... Prologue elided
; BB#0: # %entry ; A (merged with test1)
; ... More prologue elided
mr 30, 3
andi. 3, 30, 1
bc 12, 1, .LBB0_2
; BB#1: # %test2 ; C
rlwinm. 3, 30, 0, 30, 30
beq 0, .LBB0_3
b .LBB0_4
.LBB0_2: # %optional1 ; B (copy of C)
bl a
nop
rlwinm. 3, 30, 0, 30, 30
bne 0, .LBB0_4
.LBB0_3: # %test3 ; E
rlwinm. 3, 30, 0, 29, 29
beq 0, .LBB0_5
b .LBB0_6
.LBB0_4: # %optional2 ; D (copy of E)
bl b
nop
rlwinm. 3, 30, 0, 29, 29
bne 0, .LBB0_6
.LBB0_5: # %test4 ; G
rlwinm. 3, 30, 0, 28, 28
beq 0, .LBB0_8
b .LBB0_7
.LBB0_6: # %optional3 ; F (copy of G)
bl c
nop
rlwinm. 3, 30, 0, 28, 28
beq 0, .LBB0_8
.LBB0_7: # %optional4 ; H
bl d
nop
.LBB0_8: # %exit ; Ret
ld 30, 96(1) # 8-byte Folded Reload
addi 1, 1, 112
ld 0, 16(1)
mtlr 0
blr
The tail-duplication has produced some benefit, but it has also produced a
trellis which is not laid out optimally. With this patch, we improve the layouts
of such trellises, and decrease the cost calculation for tail-duplication
accordingly.
This patch produces the layout A,C,E,G,B,D,F,H,Ret. This layout does have
back edges, which is a negative, but it has a bigger compensating
positive, which is that it handles the case where there are long strings
of skipped blocks much better than the original layout. Both layouts
handle runs of executed blocks equally well. Branch prediction also
improves if there is any correlation between subsequent optional blocks.
Here is the resulting concrete layout:
straight_test: # @straight_test
; BB#0: # %entry ; A (merged with test1)
mr 30, 3
andi. 3, 30, 1
bc 12, 1, .LBB0_4
; BB#1: # %test2 ; C
rlwinm. 3, 30, 0, 30, 30
bne 0, .LBB0_5
.LBB0_2: # %test3 ; E
rlwinm. 3, 30, 0, 29, 29
bne 0, .LBB0_6
.LBB0_3: # %test4 ; G
rlwinm. 3, 30, 0, 28, 28
bne 0, .LBB0_7
b .LBB0_8
.LBB0_4: # %optional1 ; B (Copy of C)
bl a
nop
rlwinm. 3, 30, 0, 30, 30
beq 0, .LBB0_2
.LBB0_5: # %optional2 ; D (Copy of E)
bl b
nop
rlwinm. 3, 30, 0, 29, 29
beq 0, .LBB0_3
.LBB0_6: # %optional3 ; F (Copy of G)
bl c
nop
rlwinm. 3, 30, 0, 28, 28
beq 0, .LBB0_8
.LBB0_7: # %optional4 ; H
bl d
nop
.LBB0_8: # %exit
Differential Revision: https://reviews.llvm.org/D28522
llvm-svn: 295223
2017-02-16 03:49:14 +08:00
|
|
|
}
|
|
|
|
|
|
|
|
// if BB is part of a trellis, Use the trellis to determine the optimal
|
|
|
|
// fallthrough edges
|
|
|
|
if (isTrellis(BB, Successors, Chain, BlockFilter))
|
|
|
|
return getBestTrellisSuccessor(BB, Successors, AdjustedSumProb, Chain,
|
|
|
|
BlockFilter);
|
|
|
|
|
2017-02-01 07:48:32 +08:00
|
|
|
// For blocks with CFG violations, we may be able to lay them out anyway with
|
|
|
|
// tail-duplication. We keep this vector so we can perform the probability
|
|
|
|
// calculations the minimum number of times.
|
2020-05-03 02:18:09 +08:00
|
|
|
SmallVector<std::pair<BranchProbability, MachineBasicBlock *>, 4>
|
2017-02-01 07:48:32 +08:00
|
|
|
DupCandidates;
|
Improving edge probabilities computation when choosing the best successor in machine block placement.
When looking for the best successor from the outer loop for a block
belonging to an inner loop, the edge probability computation can be
improved so that edges in the inner loop are ignored. For example,
suppose we are building chains for the non-loop part of the following
code, and looking for B1's best successor. Assume the true body is very
hot, then B3 should be the best candidate. However, because of the
existence of the back edge from B1 to B0, the probability from B1 to B3
can be very small, preventing B3 to be its successor. In this patch, when
computing the probability of the edge from B1 to B3, the weight on the
back edge B1->B0 is ignored, so that B1->B3 will have 100% probability.
if (...)
do {
B0;
... // some branches
B1;
} while(...);
else
B2;
B3;
Differential revision: http://reviews.llvm.org/D10825
llvm-svn: 253414
2015-11-18 08:52:52 +08:00
|
|
|
for (MachineBasicBlock *Succ : Successors) {
|
2016-06-12 02:35:40 +08:00
|
|
|
auto RealSuccProb = MBPI->getEdgeProbability(BB, Succ);
|
|
|
|
BranchProbability SuccProb =
|
|
|
|
getAdjustedProbability(RealSuccProb, AdjustedSumProb);
|
2011-11-13 19:34:53 +08:00
|
|
|
|
Improving edge probabilities computation when choosing the best successor in machine block placement.
When looking for the best successor from the outer loop for a block
belonging to an inner loop, the edge probability computation can be
improved so that edges in the inner loop are ignored. For example,
suppose we are building chains for the non-loop part of the following
code, and looking for B1's best successor. Assume the true body is very
hot, then B3 should be the best candidate. However, because of the
existence of the back edge from B1 to B0, the probability from B1 to B3
can be very small, preventing B3 to be its successor. In this patch, when
computing the probability of the edge from B1 to B3, the weight on the
back edge B1->B0 is ignored, so that B1->B3 will have 100% probability.
if (...)
do {
B0;
... // some branches
B1;
} while(...);
else
B2;
B3;
Differential revision: http://reviews.llvm.org/D10825
llvm-svn: 253414
2015-11-18 08:52:52 +08:00
|
|
|
BlockChain &SuccChain = *BlockToChain[Succ];
|
2016-06-14 04:24:19 +08:00
|
|
|
// Skip the edge \c BB->Succ if block \c Succ has a better layout
|
|
|
|
// predecessor that yields lower global cost.
|
|
|
|
if (hasBetterLayoutPredecessor(BB, Succ, SuccChain, SuccProb, RealSuccProb,
|
2017-02-01 07:48:32 +08:00
|
|
|
Chain, BlockFilter)) {
|
|
|
|
// If tail duplication would make Succ profitable, place it.
|
[BlockPlacement] Disable block placement tail duplciation in structured CFG.
Summary:
Tail duplication easily breaks the structure of CFG, e.g. duplicating on
a region entry. If the structure is intended to be preserved, then we
may want to configure tail duplication, or disable it for structured
CFG. From our benchmark results disabling it doesn't cause performance
regression.
Notice that this currently affects AMDGPU backend. In the next patch, I
also plan to turn on requiresStructuredCFG for NVPTX.
All unit tests still pass.
Reviewers: jlebar, arsenm
Subscribers: jholewinski, sanjoy, wdng, tpr, hiraditya, llvm-commits
Differential Revision: https://reviews.llvm.org/D45008
llvm-svn: 328884
2018-03-31 01:51:00 +08:00
|
|
|
if (allowTailDupPlacement() && shouldTailDuplicate(Succ))
|
2020-05-03 02:18:09 +08:00
|
|
|
DupCandidates.emplace_back(SuccProb, Succ);
|
2016-06-14 04:24:19 +08:00
|
|
|
continue;
|
2017-02-01 07:48:32 +08:00
|
|
|
}
|
2011-11-13 19:34:53 +08:00
|
|
|
|
2018-05-14 20:53:11 +08:00
|
|
|
LLVM_DEBUG(
|
|
|
|
dbgs() << " Candidate: " << getBlockName(Succ)
|
|
|
|
<< ", probability: " << SuccProb
|
2016-06-14 04:24:19 +08:00
|
|
|
<< (SuccChain.UnscheduledPredecessors != 0 ? " (CFG break)" : "")
|
|
|
|
<< "\n");
|
2016-07-27 16:49:23 +08:00
|
|
|
|
2017-02-01 07:48:32 +08:00
|
|
|
if (BestSucc.BB && BestProb >= SuccProb) {
|
2018-05-14 20:53:11 +08:00
|
|
|
LLVM_DEBUG(dbgs() << " Not the best candidate, continuing\n");
|
2011-11-13 19:34:53 +08:00
|
|
|
continue;
|
2016-07-27 16:49:23 +08:00
|
|
|
}
|
|
|
|
|
2018-05-14 20:53:11 +08:00
|
|
|
LLVM_DEBUG(dbgs() << " Setting it as best candidate\n");
|
2017-02-01 07:48:32 +08:00
|
|
|
BestSucc.BB = Succ;
|
2015-12-01 13:29:22 +08:00
|
|
|
BestProb = SuccProb;
|
2011-11-13 19:34:53 +08:00
|
|
|
}
|
2017-02-01 07:48:32 +08:00
|
|
|
// Handle the tail duplication candidates in order of decreasing probability.
|
|
|
|
// Stop at the first one that is profitable. Also stop if they are less
|
|
|
|
// profitable than BestSucc. Position is important because we preserve it and
|
|
|
|
// prefer first best match. Here we aren't comparing in order, so we capture
|
|
|
|
// the position instead.
|
2019-04-23 22:51:27 +08:00
|
|
|
llvm::stable_sort(DupCandidates,
|
|
|
|
[](std::tuple<BranchProbability, MachineBasicBlock *> L,
|
|
|
|
std::tuple<BranchProbability, MachineBasicBlock *> R) {
|
|
|
|
return std::get<0>(L) > std::get<0>(R);
|
|
|
|
});
|
|
|
|
for (auto &Tup : DupCandidates) {
|
2017-02-01 07:48:32 +08:00
|
|
|
BranchProbability DupProb;
|
|
|
|
MachineBasicBlock *Succ;
|
|
|
|
std::tie(DupProb, Succ) = Tup;
|
|
|
|
if (DupProb < BestProb)
|
|
|
|
break;
|
|
|
|
if (canTailDuplicateUnplacedPreds(BB, Succ, Chain, BlockFilter)
|
2017-04-11 06:28:22 +08:00
|
|
|
&& (isProfitableToTailDup(BB, Succ, BestProb, Chain, BlockFilter))) {
|
2018-05-14 20:53:11 +08:00
|
|
|
LLVM_DEBUG(dbgs() << " Candidate: " << getBlockName(Succ)
|
|
|
|
<< ", probability: " << DupProb
|
|
|
|
<< " (Tail Duplicate)\n");
|
2017-02-01 07:48:32 +08:00
|
|
|
BestSucc.BB = Succ;
|
|
|
|
BestSucc.ShouldTailDup = true;
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
if (BestSucc.BB)
|
2018-05-14 20:53:11 +08:00
|
|
|
LLVM_DEBUG(dbgs() << " Selected: " << getBlockName(BestSucc.BB) << "\n");
|
2016-07-27 16:49:23 +08:00
|
|
|
|
2011-11-13 19:34:53 +08:00
|
|
|
return BestSucc;
|
|
|
|
}
|
|
|
|
|
2018-05-01 23:54:18 +08:00
|
|
|
/// Select the best block from a worklist.
|
2011-11-13 19:42:26 +08:00
|
|
|
///
|
|
|
|
/// This looks through the provided worklist as a list of candidate basic
|
|
|
|
/// blocks and select the most profitable one to place. The definition of
|
|
|
|
/// profitable only really makes sense in the context of a loop. This returns
|
|
|
|
/// the most frequently visited block in the worklist, which in the case of
|
|
|
|
/// a loop, is the one most desirable to be physically close to the rest of the
|
2016-07-16 02:41:56 +08:00
|
|
|
/// loop body in order to improve i-cache behavior.
|
2011-11-13 19:42:26 +08:00
|
|
|
///
|
|
|
|
/// \returns The best block found, or null if none are viable.
|
|
|
|
MachineBasicBlock *MachineBlockPlacement::selectBestCandidateBlock(
|
2017-02-04 10:26:32 +08:00
|
|
|
const BlockChain &Chain, SmallVectorImpl<MachineBasicBlock *> &WorkList) {
|
2011-11-14 17:46:33 +08:00
|
|
|
// Once we need to walk the worklist looking for a candidate, cleanup the
|
|
|
|
// worklist of already placed entries.
|
|
|
|
// FIXME: If this shows up on profiles, it could be folded (at the cost of
|
|
|
|
// some code complexity) into the loop below.
|
2017-08-25 05:21:39 +08:00
|
|
|
WorkList.erase(llvm::remove_if(WorkList,
|
|
|
|
[&](MachineBasicBlock *BB) {
|
|
|
|
return BlockToChain.lookup(BB) == &Chain;
|
|
|
|
}),
|
2011-11-14 17:46:33 +08:00
|
|
|
WorkList.end());
|
|
|
|
|
2016-04-08 05:29:39 +08:00
|
|
|
if (WorkList.empty())
|
|
|
|
return nullptr;
|
|
|
|
|
|
|
|
bool IsEHPad = WorkList[0]->isEHPad();
|
|
|
|
|
2014-04-14 08:51:57 +08:00
|
|
|
MachineBasicBlock *BestBlock = nullptr;
|
2011-11-13 19:42:26 +08:00
|
|
|
BlockFrequency BestFreq;
|
2015-03-05 11:19:05 +08:00
|
|
|
for (MachineBasicBlock *MBB : WorkList) {
|
2017-05-18 07:44:41 +08:00
|
|
|
assert(MBB->isEHPad() == IsEHPad &&
|
|
|
|
"EHPad mismatch between block and work list.");
|
2016-04-08 05:29:39 +08:00
|
|
|
|
2015-03-05 11:19:05 +08:00
|
|
|
BlockChain &SuccChain = *BlockToChain[MBB];
|
2016-03-03 06:40:51 +08:00
|
|
|
if (&SuccChain == &Chain)
|
2011-11-13 19:42:26 +08:00
|
|
|
continue;
|
2016-03-11 13:07:07 +08:00
|
|
|
|
2017-05-18 07:44:41 +08:00
|
|
|
assert(SuccChain.UnscheduledPredecessors == 0 &&
|
|
|
|
"Found CFG-violating block");
|
2011-11-13 19:42:26 +08:00
|
|
|
|
2015-03-05 11:19:05 +08:00
|
|
|
BlockFrequency CandidateFreq = MBFI->getBlockFreq(MBB);
|
2018-05-14 20:53:11 +08:00
|
|
|
LLVM_DEBUG(dbgs() << " " << getBlockName(MBB) << " -> ";
|
|
|
|
MBFI->printBlockFreq(dbgs(), CandidateFreq) << " (freq)\n");
|
2016-04-08 05:29:39 +08:00
|
|
|
|
|
|
|
// For ehpad, we layout the least probable first as to avoid jumping back
|
|
|
|
// from least probable landingpads to more probable ones.
|
|
|
|
//
|
|
|
|
// FIXME: Using probability is probably (!) not the best way to achieve
|
|
|
|
// this. We should probably have a more principled approach to layout
|
|
|
|
// cleanup code.
|
|
|
|
//
|
|
|
|
// The goal is to get:
|
|
|
|
//
|
|
|
|
// +--------------------------+
|
|
|
|
// | V
|
|
|
|
// InnerLp -> InnerCleanup OuterLp -> OuterCleanup -> Resume
|
|
|
|
//
|
|
|
|
// Rather than:
|
|
|
|
//
|
|
|
|
// +-------------------------------------+
|
|
|
|
// V |
|
|
|
|
// OuterLp -> OuterCleanup -> Resume InnerLp -> InnerCleanup
|
|
|
|
if (BestBlock && (IsEHPad ^ (BestFreq >= CandidateFreq)))
|
2011-11-13 19:42:26 +08:00
|
|
|
continue;
|
2016-04-08 05:29:39 +08:00
|
|
|
|
2015-03-05 11:19:05 +08:00
|
|
|
BestBlock = MBB;
|
2011-11-13 19:42:26 +08:00
|
|
|
BestFreq = CandidateFreq;
|
|
|
|
}
|
2016-04-08 05:29:39 +08:00
|
|
|
|
2011-11-13 19:42:26 +08:00
|
|
|
return BestBlock;
|
|
|
|
}
|
|
|
|
|
2018-05-01 23:54:18 +08:00
|
|
|
/// Retrieve the first unplaced basic block.
|
2011-11-14 08:00:35 +08:00
|
|
|
///
|
|
|
|
/// This routine is called when we are unable to use the CFG to walk through
|
|
|
|
/// all of the basic blocks and form a chain due to unnatural loops in the CFG.
|
2011-11-15 14:26:43 +08:00
|
|
|
/// We walk through the function's blocks in order, starting from the
|
|
|
|
/// LastUnplacedBlockIt. We update this iterator on each call to avoid
|
|
|
|
/// re-scanning the entire sequence on repeated calls to this routine.
|
2011-11-14 08:00:35 +08:00
|
|
|
MachineBasicBlock *MachineBlockPlacement::getFirstUnplacedBlock(
|
2016-06-14 06:23:44 +08:00
|
|
|
const BlockChain &PlacedChain,
|
2011-11-15 14:26:43 +08:00
|
|
|
MachineFunction::iterator &PrevUnplacedBlockIt,
|
2011-12-22 07:02:08 +08:00
|
|
|
const BlockFilterSet *BlockFilter) {
|
2016-06-14 06:23:44 +08:00
|
|
|
for (MachineFunction::iterator I = PrevUnplacedBlockIt, E = F->end(); I != E;
|
2011-11-15 14:26:43 +08:00
|
|
|
++I) {
|
2015-10-10 03:36:12 +08:00
|
|
|
if (BlockFilter && !BlockFilter->count(&*I))
|
2011-11-15 14:26:43 +08:00
|
|
|
continue;
|
2015-10-10 03:36:12 +08:00
|
|
|
if (BlockToChain[&*I] != &PlacedChain) {
|
2011-11-15 14:26:43 +08:00
|
|
|
PrevUnplacedBlockIt = I;
|
2011-11-23 11:03:21 +08:00
|
|
|
// Now select the head of the chain to which the unplaced block belongs
|
|
|
|
// as the block to place. This will force the entire chain to be placed,
|
|
|
|
// and satisfies the requirements of merging chains.
|
2015-10-10 03:36:12 +08:00
|
|
|
return *BlockToChain[&*I]->begin();
|
2011-11-14 08:00:35 +08:00
|
|
|
}
|
|
|
|
}
|
2014-04-14 08:51:57 +08:00
|
|
|
return nullptr;
|
2011-11-14 08:00:35 +08:00
|
|
|
}
|
|
|
|
|
2016-03-15 05:24:11 +08:00
|
|
|
void MachineBlockPlacement::fillWorkLists(
|
2017-02-04 10:26:32 +08:00
|
|
|
const MachineBasicBlock *MBB,
|
2016-03-15 05:24:11 +08:00
|
|
|
SmallPtrSetImpl<BlockChain *> &UpdatedPreds,
|
|
|
|
const BlockFilterSet *BlockFilter = nullptr) {
|
|
|
|
BlockChain &Chain = *BlockToChain[MBB];
|
|
|
|
if (!UpdatedPreds.insert(&Chain).second)
|
|
|
|
return;
|
|
|
|
|
2017-05-18 07:44:41 +08:00
|
|
|
assert(
|
|
|
|
Chain.UnscheduledPredecessors == 0 &&
|
|
|
|
"Attempting to place block with unscheduled predecessors in worklist.");
|
2016-03-15 05:24:11 +08:00
|
|
|
for (MachineBasicBlock *ChainBB : Chain) {
|
2017-05-18 07:44:41 +08:00
|
|
|
assert(BlockToChain[ChainBB] == &Chain &&
|
|
|
|
"Block in chain doesn't match BlockToChain map.");
|
2016-03-15 05:24:11 +08:00
|
|
|
for (MachineBasicBlock *Pred : ChainBB->predecessors()) {
|
|
|
|
if (BlockFilter && !BlockFilter->count(Pred))
|
|
|
|
continue;
|
|
|
|
if (BlockToChain[Pred] == &Chain)
|
|
|
|
continue;
|
|
|
|
++Chain.UnscheduledPredecessors;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2016-04-08 05:29:39 +08:00
|
|
|
if (Chain.UnscheduledPredecessors != 0)
|
|
|
|
return;
|
|
|
|
|
2017-02-04 10:26:32 +08:00
|
|
|
MachineBasicBlock *BB = *Chain.begin();
|
|
|
|
if (BB->isEHPad())
|
|
|
|
EHPadWorkList.push_back(BB);
|
2016-04-08 05:29:39 +08:00
|
|
|
else
|
2017-02-04 10:26:32 +08:00
|
|
|
BlockWorkList.push_back(BB);
|
2016-03-15 05:24:11 +08:00
|
|
|
}
|
|
|
|
|
2011-11-13 19:20:44 +08:00
|
|
|
void MachineBlockPlacement::buildChain(
|
2017-02-04 10:26:32 +08:00
|
|
|
const MachineBasicBlock *HeadBB, BlockChain &Chain,
|
Codegen: Tail-duplicate during placement.
The tail duplication pass uses an assumed layout when making duplication
decisions. This is fine, but passes up duplication opportunities that
may arise when blocks are outlined. Because we want the updated CFG to
affect subsequent placement decisions, this change must occur during
placement.
In order to achieve this goal, TailDuplicationPass is split into a
utility class, TailDuplicator, and the pass itself. The pass delegates
nearly everything to the TailDuplicator object, except for looping over
the blocks in a function. This allows the same code to be used for tail
duplication in both places.
This change, in concert with outlining optional branches, allows
triangle shaped code to perform much better, esepecially when the
taken/untaken branches are correlated, as it creates a second spine when
the tests are small enough.
Issue from previous rollback fixed, and a new test was added for that
case as well. Issue was worklist/scheduling/taildup issue in layout.
Issue from 2nd rollback fixed, with 2 additional tests. Issue was
tail merging/loop info/tail-duplication causing issue with loops that share
a header block.
Issue with early tail-duplication of blocks that branch to a fallthrough
predecessor fixed with test case: tail-dup-branch-to-fallthrough.ll
Differential revision: https://reviews.llvm.org/D18226
llvm-svn: 283934
2016-10-12 04:36:43 +08:00
|
|
|
BlockFilterSet *BlockFilter) {
|
2017-02-04 10:26:32 +08:00
|
|
|
assert(HeadBB && "BB must not be null.\n");
|
|
|
|
assert(BlockToChain[HeadBB] == &Chain && "BlockToChainMap mis-match.\n");
|
2016-06-14 06:23:44 +08:00
|
|
|
MachineFunction::iterator PrevUnplacedBlockIt = F->begin();
|
2011-11-14 08:00:35 +08:00
|
|
|
|
2017-02-04 10:26:32 +08:00
|
|
|
const MachineBasicBlock *LoopHeaderBB = HeadBB;
|
2016-07-01 13:46:48 +08:00
|
|
|
markChainSuccessors(Chain, LoopHeaderBB, BlockFilter);
|
2017-02-04 10:26:32 +08:00
|
|
|
MachineBasicBlock *BB = *std::prev(Chain.end());
|
2017-08-25 05:21:39 +08:00
|
|
|
while (true) {
|
2016-06-29 06:50:54 +08:00
|
|
|
assert(BB && "null block found at end of chain in loop.");
|
|
|
|
assert(BlockToChain[BB] == &Chain && "BlockToChainMap mis-match in loop.");
|
|
|
|
assert(*std::prev(Chain.end()) == BB && "BB Not found at end of chain.");
|
|
|
|
|
2011-11-13 20:17:28 +08:00
|
|
|
|
2011-11-19 18:26:02 +08:00
|
|
|
// Look for the best viable successor if there is one to place immediately
|
|
|
|
// after this block.
|
2017-02-01 07:48:32 +08:00
|
|
|
auto Result = selectBestSuccessor(BB, Chain, BlockFilter);
|
|
|
|
MachineBasicBlock* BestSucc = Result.BB;
|
|
|
|
bool ShouldTailDup = Result.ShouldTailDup;
|
[BlockPlacement] Disable block placement tail duplciation in structured CFG.
Summary:
Tail duplication easily breaks the structure of CFG, e.g. duplicating on
a region entry. If the structure is intended to be preserved, then we
may want to configure tail duplication, or disable it for structured
CFG. From our benchmark results disabling it doesn't cause performance
regression.
Notice that this currently affects AMDGPU backend. In the next patch, I
also plan to turn on requiresStructuredCFG for NVPTX.
All unit tests still pass.
Reviewers: jlebar, arsenm
Subscribers: jholewinski, sanjoy, wdng, tpr, hiraditya, llvm-commits
Differential Revision: https://reviews.llvm.org/D45008
llvm-svn: 328884
2018-03-31 01:51:00 +08:00
|
|
|
if (allowTailDupPlacement())
|
2019-12-05 08:01:20 +08:00
|
|
|
ShouldTailDup |= (BestSucc && canTailDuplicateUnplacedPreds(BB, BestSucc,
|
|
|
|
Chain,
|
|
|
|
BlockFilter));
|
Completely re-write the algorithm behind MachineBlockPlacement based on
discussions with Andy. Fundamentally, the previous algorithm is both
counter productive on several fronts and prioritizing things which
aren't necessarily the most important: static branch prediction.
The new algorithm uses the existing loop CFG structure information to
walk through the CFG itself to layout blocks. It coalesces adjacent
blocks within the loop where the CFG allows based on the most likely
path taken. Finally, it topologically orders the block chains that have
been formed. This allows it to choose a (mostly) topologically valid
ordering which still priorizes fallthrough within the structural
constraints.
As a final twist in the algorithm, it does violate the CFG when it
discovers a "hot" edge, that is an edge that is more than 4x hotter than
the competing edges in the CFG. These are forcibly merged into
a fallthrough chain.
Future transformations that need te be added are rotation of loop exit
conditions to be fallthrough, and better isolation of cold block chains.
I'm also planning on adding statistics to model how well the algorithm
does at laying out blocks based on the probabilities it receives.
The old tests mostly still pass, and I have some new tests to add, but
the nested loops are still behaving very strangely. This almost seems
like working-as-intended as it rotated the exit branch to be
fallthrough, but I'm not convinced this is actually the best layout. It
is well supported by the probabilities for loops we currently get, but
those are pretty broken for nested loops, so this may change later.
llvm-svn: 142743
2011-10-23 17:18:45 +08:00
|
|
|
|
2011-11-13 19:20:44 +08:00
|
|
|
// If an immediate successor isn't available, look for the best viable
|
|
|
|
// block among those we've identified as not violating the loop's CFG at
|
|
|
|
// this point. This won't be a fallthrough, but it will increase locality.
|
2011-11-13 19:42:26 +08:00
|
|
|
if (!BestSucc)
|
2016-04-07 14:34:47 +08:00
|
|
|
BestSucc = selectBestCandidateBlock(Chain, BlockWorkList);
|
2016-04-08 05:29:39 +08:00
|
|
|
if (!BestSucc)
|
|
|
|
BestSucc = selectBestCandidateBlock(Chain, EHPadWorkList);
|
2011-11-13 19:42:26 +08:00
|
|
|
|
2011-11-13 19:20:44 +08:00
|
|
|
if (!BestSucc) {
|
2016-06-14 06:23:44 +08:00
|
|
|
BestSucc = getFirstUnplacedBlock(Chain, PrevUnplacedBlockIt, BlockFilter);
|
2011-11-14 08:00:35 +08:00
|
|
|
if (!BestSucc)
|
|
|
|
break;
|
|
|
|
|
2018-05-14 20:53:11 +08:00
|
|
|
LLVM_DEBUG(dbgs() << "Unnatural loop CFG detected, forcibly merging the "
|
|
|
|
"layout successor until the CFG reduces\n");
|
2011-11-13 19:20:44 +08:00
|
|
|
}
|
Implement a block placement pass based on the branch probability and
block frequency analyses. This differs substantially from the existing
block-placement pass in LLVM:
1) It operates on the Machine-IR in the CodeGen layer. This exposes much
more (and more precise) information and opportunities. Also, the
results are more stable due to fewer transforms ocurring after the
pass runs.
2) It uses the generalized probability and frequency analyses. These can
model static heuristics, code annotation derived heuristics as well
as eventual profile loading. By basing the optimization on the
analysis interface it can work from any (or a combination) of these
inputs.
3) It uses a more aggressive algorithm, both building chains from tho
bottom up to maximize benefit, and using an SCC-based walk to layout
chains of blocks in a profitable ordering without O(N^2) iterations
which the old pass involves.
The pass is currently gated behind a flag, and not enabled by default
because it still needs to grow some important features. Most notably, it
needs to support loop aligning and careful layout of loop structures
much as done by hand currently in CodePlacementOpt. Once it supports
these, and has sufficient testing and quality tuning, it should replace
both of these passes.
Thanks to Nick Lewycky and Richard Smith for help authoring & debugging
this, and to Jakob, Andy, Eric, Jim, and probably a few others I'm
forgetting for reviewing and answering all my questions. Writing
a backend pass is *sooo* much better now than it used to be. =D
llvm-svn: 142641
2011-10-21 14:46:38 +08:00
|
|
|
|
Codegen: Tail-duplicate during placement.
The tail duplication pass uses an assumed layout when making duplication
decisions. This is fine, but passes up duplication opportunities that
may arise when blocks are outlined. Because we want the updated CFG to
affect subsequent placement decisions, this change must occur during
placement.
In order to achieve this goal, TailDuplicationPass is split into a
utility class, TailDuplicator, and the pass itself. The pass delegates
nearly everything to the TailDuplicator object, except for looping over
the blocks in a function. This allows the same code to be used for tail
duplication in both places.
This change, in concert with outlining optional branches, allows
triangle shaped code to perform much better, esepecially when the
taken/untaken branches are correlated, as it creates a second spine when
the tests are small enough.
Issue from previous rollback fixed, and a new test was added for that
case as well. Issue was worklist/scheduling/taildup issue in layout.
Issue from 2nd rollback fixed, with 2 additional tests. Issue was
tail merging/loop info/tail-duplication causing issue with loops that share
a header block.
Issue with early tail-duplication of blocks that branch to a fallthrough
predecessor fixed with test case: tail-dup-branch-to-fallthrough.ll
Differential revision: https://reviews.llvm.org/D18226
llvm-svn: 283934
2016-10-12 04:36:43 +08:00
|
|
|
// Placement may have changed tail duplication opportunities.
|
|
|
|
// Check for that now.
|
[BlockPlacement] Disable block placement tail duplciation in structured CFG.
Summary:
Tail duplication easily breaks the structure of CFG, e.g. duplicating on
a region entry. If the structure is intended to be preserved, then we
may want to configure tail duplication, or disable it for structured
CFG. From our benchmark results disabling it doesn't cause performance
regression.
Notice that this currently affects AMDGPU backend. In the next patch, I
also plan to turn on requiresStructuredCFG for NVPTX.
All unit tests still pass.
Reviewers: jlebar, arsenm
Subscribers: jholewinski, sanjoy, wdng, tpr, hiraditya, llvm-commits
Differential Revision: https://reviews.llvm.org/D45008
llvm-svn: 328884
2018-03-31 01:51:00 +08:00
|
|
|
if (allowTailDupPlacement() && BestSucc && ShouldTailDup) {
|
2020-02-13 07:22:33 +08:00
|
|
|
repeatedlyTailDuplicateBlock(BestSucc, BB, LoopHeaderBB, Chain,
|
|
|
|
BlockFilter, PrevUnplacedBlockIt);
|
|
|
|
// If the chosen successor was duplicated into BB, don't bother laying
|
|
|
|
// it out, just go round the loop again with BB as the chain end.
|
|
|
|
if (!BB->isSuccessor(BestSucc))
|
Codegen: Tail-duplicate during placement.
The tail duplication pass uses an assumed layout when making duplication
decisions. This is fine, but passes up duplication opportunities that
may arise when blocks are outlined. Because we want the updated CFG to
affect subsequent placement decisions, this change must occur during
placement.
In order to achieve this goal, TailDuplicationPass is split into a
utility class, TailDuplicator, and the pass itself. The pass delegates
nearly everything to the TailDuplicator object, except for looping over
the blocks in a function. This allows the same code to be used for tail
duplication in both places.
This change, in concert with outlining optional branches, allows
triangle shaped code to perform much better, esepecially when the
taken/untaken branches are correlated, as it creates a second spine when
the tests are small enough.
Issue from previous rollback fixed, and a new test was added for that
case as well. Issue was worklist/scheduling/taildup issue in layout.
Issue from 2nd rollback fixed, with 2 additional tests. Issue was
tail merging/loop info/tail-duplication causing issue with loops that share
a header block.
Issue with early tail-duplication of blocks that branch to a fallthrough
predecessor fixed with test case: tail-dup-branch-to-fallthrough.ll
Differential revision: https://reviews.llvm.org/D18226
llvm-svn: 283934
2016-10-12 04:36:43 +08:00
|
|
|
continue;
|
|
|
|
}
|
|
|
|
|
2011-11-13 19:20:44 +08:00
|
|
|
// Place this block, updating the datastructures to reflect its placement.
|
2011-12-22 07:02:08 +08:00
|
|
|
BlockChain &SuccChain = *BlockToChain[BestSucc];
|
2016-03-03 08:58:43 +08:00
|
|
|
// Zero out UnscheduledPredecessors for the successor we're about to merge in case
|
2011-11-14 08:00:35 +08:00
|
|
|
// we selected a successor that didn't fit naturally into the CFG.
|
2016-03-03 08:58:43 +08:00
|
|
|
SuccChain.UnscheduledPredecessors = 0;
|
2018-05-14 20:53:11 +08:00
|
|
|
LLVM_DEBUG(dbgs() << "Merging from " << getBlockName(BB) << " to "
|
|
|
|
<< getBlockName(BestSucc) << "\n");
|
2016-07-01 13:46:48 +08:00
|
|
|
markChainSuccessors(SuccChain, LoopHeaderBB, BlockFilter);
|
2011-11-13 19:20:44 +08:00
|
|
|
Chain.merge(BestSucc, &SuccChain);
|
2014-03-02 20:27:27 +08:00
|
|
|
BB = *std::prev(Chain.end());
|
2011-12-08 03:46:10 +08:00
|
|
|
}
|
2011-11-14 08:00:35 +08:00
|
|
|
|
2018-05-14 20:53:11 +08:00
|
|
|
LLVM_DEBUG(dbgs() << "Finished forming chain for header block "
|
|
|
|
<< getBlockName(*Chain.begin()) << "\n");
|
Completely re-write the algorithm behind MachineBlockPlacement based on
discussions with Andy. Fundamentally, the previous algorithm is both
counter productive on several fronts and prioritizing things which
aren't necessarily the most important: static branch prediction.
The new algorithm uses the existing loop CFG structure information to
walk through the CFG itself to layout blocks. It coalesces adjacent
blocks within the loop where the CFG allows based on the most likely
path taken. Finally, it topologically orders the block chains that have
been formed. This allows it to choose a (mostly) topologically valid
ordering which still priorizes fallthrough within the structural
constraints.
As a final twist in the algorithm, it does violate the CFG when it
discovers a "hot" edge, that is an edge that is more than 4x hotter than
the competing edges in the CFG. These are forcibly merged into
a fallthrough chain.
Future transformations that need te be added are rotation of loop exit
conditions to be fallthrough, and better isolation of cold block chains.
I'm also planning on adding statistics to model how well the algorithm
does at laying out blocks based on the probabilities it receives.
The old tests mostly still pass, and I have some new tests to add, but
the nested loops are still behaving very strangely. This almost seems
like working-as-intended as it rotated the exit branch to be
fallthrough, but I'm not convinced this is actually the best layout. It
is well supported by the probabilities for loops we currently get, but
those are pretty broken for nested loops, so this may change later.
llvm-svn: 142743
2011-10-23 17:18:45 +08:00
|
|
|
}
|
Implement a block placement pass based on the branch probability and
block frequency analyses. This differs substantially from the existing
block-placement pass in LLVM:
1) It operates on the Machine-IR in the CodeGen layer. This exposes much
more (and more precise) information and opportunities. Also, the
results are more stable due to fewer transforms ocurring after the
pass runs.
2) It uses the generalized probability and frequency analyses. These can
model static heuristics, code annotation derived heuristics as well
as eventual profile loading. By basing the optimization on the
analysis interface it can work from any (or a combination) of these
inputs.
3) It uses a more aggressive algorithm, both building chains from tho
bottom up to maximize benefit, and using an SCC-based walk to layout
chains of blocks in a profitable ordering without O(N^2) iterations
which the old pass involves.
The pass is currently gated behind a flag, and not enabled by default
because it still needs to grow some important features. Most notably, it
needs to support loop aligning and careful layout of loop structures
much as done by hand currently in CodePlacementOpt. Once it supports
these, and has sufficient testing and quality tuning, it should replace
both of these passes.
Thanks to Nick Lewycky and Richard Smith for help authoring & debugging
this, and to Jakob, Andy, Eric, Jim, and probably a few others I'm
forgetting for reviewing and answering all my questions. Writing
a backend pass is *sooo* much better now than it used to be. =D
llvm-svn: 142641
2011-10-21 14:46:38 +08:00
|
|
|
|
2019-01-26 03:45:13 +08:00
|
|
|
// If bottom of block BB has only one successor OldTop, in most cases it is
|
|
|
|
// profitable to move it before OldTop, except the following case:
|
|
|
|
//
|
|
|
|
// -->OldTop<-
|
|
|
|
// | . |
|
|
|
|
// | . |
|
|
|
|
// | . |
|
|
|
|
// ---Pred |
|
|
|
|
// | |
|
|
|
|
// BB-----
|
|
|
|
//
|
|
|
|
// If BB is moved before OldTop, Pred needs a taken branch to BB, and it can't
|
|
|
|
// layout the other successor below it, so it can't reduce taken branch.
|
|
|
|
// In this case we keep its original layout.
|
|
|
|
bool
|
|
|
|
MachineBlockPlacement::canMoveBottomBlockToTop(
|
|
|
|
const MachineBasicBlock *BottomBlock,
|
|
|
|
const MachineBasicBlock *OldTop) {
|
|
|
|
if (BottomBlock->pred_size() != 1)
|
|
|
|
return true;
|
|
|
|
MachineBasicBlock *Pred = *BottomBlock->pred_begin();
|
|
|
|
if (Pred->succ_size() != 2)
|
|
|
|
return true;
|
|
|
|
|
|
|
|
MachineBasicBlock *OtherBB = *Pred->succ_begin();
|
|
|
|
if (OtherBB == BottomBlock)
|
|
|
|
OtherBB = *Pred->succ_rbegin();
|
|
|
|
if (OtherBB == OldTop)
|
|
|
|
return false;
|
|
|
|
|
|
|
|
return true;
|
|
|
|
}
|
|
|
|
|
2019-06-15 07:08:59 +08:00
|
|
|
// Find out the possible fall through frequence to the top of a loop.
|
|
|
|
BlockFrequency
|
|
|
|
MachineBlockPlacement::TopFallThroughFreq(
|
|
|
|
const MachineBasicBlock *Top,
|
|
|
|
const BlockFilterSet &LoopBlockSet) {
|
|
|
|
BlockFrequency MaxFreq = 0;
|
|
|
|
for (MachineBasicBlock *Pred : Top->predecessors()) {
|
|
|
|
BlockChain *PredChain = BlockToChain[Pred];
|
|
|
|
if (!LoopBlockSet.count(Pred) &&
|
|
|
|
(!PredChain || Pred == *std::prev(PredChain->end()))) {
|
|
|
|
// Found a Pred block can be placed before Top.
|
|
|
|
// Check if Top is the best successor of Pred.
|
|
|
|
auto TopProb = MBPI->getEdgeProbability(Pred, Top);
|
|
|
|
bool TopOK = true;
|
|
|
|
for (MachineBasicBlock *Succ : Pred->successors()) {
|
|
|
|
auto SuccProb = MBPI->getEdgeProbability(Pred, Succ);
|
|
|
|
BlockChain *SuccChain = BlockToChain[Succ];
|
|
|
|
// Check if Succ can be placed after Pred.
|
|
|
|
// Succ should not be in any chain, or it is the head of some chain.
|
|
|
|
if (!LoopBlockSet.count(Succ) && (SuccProb > TopProb) &&
|
|
|
|
(!SuccChain || Succ == *SuccChain->begin())) {
|
|
|
|
TopOK = false;
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
if (TopOK) {
|
|
|
|
BlockFrequency EdgeFreq = MBFI->getBlockFreq(Pred) *
|
|
|
|
MBPI->getEdgeProbability(Pred, Top);
|
|
|
|
if (EdgeFreq > MaxFreq)
|
|
|
|
MaxFreq = EdgeFreq;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
return MaxFreq;
|
|
|
|
}
|
|
|
|
|
|
|
|
// Compute the fall through gains when move NewTop before OldTop.
|
|
|
|
//
|
|
|
|
// In following diagram, edges marked as "-" are reduced fallthrough, edges
|
|
|
|
// marked as "+" are increased fallthrough, this function computes
|
|
|
|
//
|
|
|
|
// SUM(increased fallthrough) - SUM(decreased fallthrough)
|
|
|
|
//
|
|
|
|
// |
|
|
|
|
// | -
|
|
|
|
// V
|
|
|
|
// --->OldTop
|
|
|
|
// | .
|
|
|
|
// | .
|
|
|
|
// +| . +
|
|
|
|
// | Pred --->
|
|
|
|
// | |-
|
|
|
|
// | V
|
|
|
|
// --- NewTop <---
|
|
|
|
// |-
|
|
|
|
// V
|
|
|
|
//
|
|
|
|
BlockFrequency
|
|
|
|
MachineBlockPlacement::FallThroughGains(
|
|
|
|
const MachineBasicBlock *NewTop,
|
|
|
|
const MachineBasicBlock *OldTop,
|
|
|
|
const MachineBasicBlock *ExitBB,
|
|
|
|
const BlockFilterSet &LoopBlockSet) {
|
|
|
|
BlockFrequency FallThrough2Top = TopFallThroughFreq(OldTop, LoopBlockSet);
|
|
|
|
BlockFrequency FallThrough2Exit = 0;
|
|
|
|
if (ExitBB)
|
|
|
|
FallThrough2Exit = MBFI->getBlockFreq(NewTop) *
|
|
|
|
MBPI->getEdgeProbability(NewTop, ExitBB);
|
|
|
|
BlockFrequency BackEdgeFreq = MBFI->getBlockFreq(NewTop) *
|
|
|
|
MBPI->getEdgeProbability(NewTop, OldTop);
|
|
|
|
|
|
|
|
// Find the best Pred of NewTop.
|
|
|
|
MachineBasicBlock *BestPred = nullptr;
|
|
|
|
BlockFrequency FallThroughFromPred = 0;
|
|
|
|
for (MachineBasicBlock *Pred : NewTop->predecessors()) {
|
|
|
|
if (!LoopBlockSet.count(Pred))
|
|
|
|
continue;
|
|
|
|
BlockChain *PredChain = BlockToChain[Pred];
|
|
|
|
if (!PredChain || Pred == *std::prev(PredChain->end())) {
|
|
|
|
BlockFrequency EdgeFreq = MBFI->getBlockFreq(Pred) *
|
|
|
|
MBPI->getEdgeProbability(Pred, NewTop);
|
|
|
|
if (EdgeFreq > FallThroughFromPred) {
|
|
|
|
FallThroughFromPred = EdgeFreq;
|
|
|
|
BestPred = Pred;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
// If NewTop is not placed after Pred, another successor can be placed
|
|
|
|
// after Pred.
|
|
|
|
BlockFrequency NewFreq = 0;
|
|
|
|
if (BestPred) {
|
|
|
|
for (MachineBasicBlock *Succ : BestPred->successors()) {
|
|
|
|
if ((Succ == NewTop) || (Succ == BestPred) || !LoopBlockSet.count(Succ))
|
|
|
|
continue;
|
|
|
|
if (ComputedEdges.find(Succ) != ComputedEdges.end())
|
|
|
|
continue;
|
|
|
|
BlockChain *SuccChain = BlockToChain[Succ];
|
|
|
|
if ((SuccChain && (Succ != *SuccChain->begin())) ||
|
|
|
|
(SuccChain == BlockToChain[BestPred]))
|
|
|
|
continue;
|
|
|
|
BlockFrequency EdgeFreq = MBFI->getBlockFreq(BestPred) *
|
|
|
|
MBPI->getEdgeProbability(BestPred, Succ);
|
|
|
|
if (EdgeFreq > NewFreq)
|
|
|
|
NewFreq = EdgeFreq;
|
|
|
|
}
|
|
|
|
BlockFrequency OrigEdgeFreq = MBFI->getBlockFreq(BestPred) *
|
|
|
|
MBPI->getEdgeProbability(BestPred, NewTop);
|
|
|
|
if (NewFreq > OrigEdgeFreq) {
|
|
|
|
// If NewTop is not the best successor of Pred, then Pred doesn't
|
|
|
|
// fallthrough to NewTop. So there is no FallThroughFromPred and
|
|
|
|
// NewFreq.
|
|
|
|
NewFreq = 0;
|
|
|
|
FallThroughFromPred = 0;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
BlockFrequency Result = 0;
|
|
|
|
BlockFrequency Gains = BackEdgeFreq + NewFreq;
|
|
|
|
BlockFrequency Lost = FallThrough2Top + FallThrough2Exit +
|
|
|
|
FallThroughFromPred;
|
|
|
|
if (Gains > Lost)
|
|
|
|
Result = Gains - Lost;
|
|
|
|
return Result;
|
|
|
|
}
|
|
|
|
|
|
|
|
/// Helper function of findBestLoopTop. Find the best loop top block
|
|
|
|
/// from predecessors of old top.
|
2011-11-27 08:38:03 +08:00
|
|
|
///
|
2019-06-15 07:08:59 +08:00
|
|
|
/// Look for a block which is strictly better than the old top for laying
|
|
|
|
/// out before the old top of the loop. This looks for only two patterns:
|
|
|
|
///
|
|
|
|
/// 1. a block has only one successor, the old loop top
|
|
|
|
///
|
|
|
|
/// Because such a block will always result in an unconditional jump,
|
|
|
|
/// rotating it in front of the old top is always profitable.
|
|
|
|
///
|
|
|
|
/// 2. a block has two successors, one is old top, another is exit
|
|
|
|
/// and it has more than one predecessors
|
|
|
|
///
|
|
|
|
/// If it is below one of its predecessors P, only P can fall through to
|
|
|
|
/// it, all other predecessors need a jump to it, and another conditional
|
|
|
|
/// jump to loop header. If it is moved before loop header, all its
|
|
|
|
/// predecessors jump to it, then fall through to loop header. So all its
|
|
|
|
/// predecessors except P can reduce one taken branch.
|
|
|
|
/// At the same time, move it before old top increases the taken branch
|
|
|
|
/// to loop exit block, so the reduced taken branch will be compared with
|
|
|
|
/// the increased taken branch to the loop exit block.
|
2012-04-16 21:33:36 +08:00
|
|
|
MachineBasicBlock *
|
2019-06-15 07:08:59 +08:00
|
|
|
MachineBlockPlacement::findBestLoopTopHelper(
|
|
|
|
MachineBasicBlock *OldTop,
|
|
|
|
const MachineLoop &L,
|
2019-08-30 03:03:58 +08:00
|
|
|
const BlockFilterSet &LoopBlockSet) {
|
2012-04-16 21:33:36 +08:00
|
|
|
// Check that the header hasn't been fused with a preheader block due to
|
|
|
|
// crazy branches. If it has, we need to start with the header at the top to
|
|
|
|
// prevent pulling the preheader into the loop body.
|
2019-06-15 07:08:59 +08:00
|
|
|
BlockChain &HeaderChain = *BlockToChain[OldTop];
|
2012-04-16 21:33:36 +08:00
|
|
|
if (!LoopBlockSet.count(*HeaderChain.begin()))
|
2019-06-15 07:08:59 +08:00
|
|
|
return OldTop;
|
2012-04-16 21:33:36 +08:00
|
|
|
|
2019-06-15 07:08:59 +08:00
|
|
|
LLVM_DEBUG(dbgs() << "Finding best loop top for: " << getBlockName(OldTop)
|
|
|
|
<< "\n");
|
2012-04-16 21:33:36 +08:00
|
|
|
|
2019-06-15 07:08:59 +08:00
|
|
|
BlockFrequency BestGains = 0;
|
2014-04-14 08:51:57 +08:00
|
|
|
MachineBasicBlock *BestPred = nullptr;
|
2019-06-15 07:08:59 +08:00
|
|
|
for (MachineBasicBlock *Pred : OldTop->predecessors()) {
|
2012-04-16 21:33:36 +08:00
|
|
|
if (!LoopBlockSet.count(Pred))
|
|
|
|
continue;
|
2019-06-15 07:08:59 +08:00
|
|
|
if (Pred == L.getHeader())
|
|
|
|
continue;
|
|
|
|
LLVM_DEBUG(dbgs() << " old top pred: " << getBlockName(Pred) << ", has "
|
2018-05-14 20:53:11 +08:00
|
|
|
<< Pred->succ_size() << " successors, ";
|
|
|
|
MBFI->printBlockFreq(dbgs(), Pred) << " freq\n");
|
2019-06-15 07:08:59 +08:00
|
|
|
if (Pred->succ_size() > 2)
|
2012-04-16 21:33:36 +08:00
|
|
|
continue;
|
|
|
|
|
2019-08-30 03:03:58 +08:00
|
|
|
MachineBasicBlock *OtherBB = nullptr;
|
|
|
|
if (Pred->succ_size() == 2) {
|
|
|
|
OtherBB = *Pred->succ_begin();
|
|
|
|
if (OtherBB == OldTop)
|
|
|
|
OtherBB = *Pred->succ_rbegin();
|
|
|
|
}
|
|
|
|
|
2019-06-15 07:08:59 +08:00
|
|
|
if (!canMoveBottomBlockToTop(Pred, OldTop))
|
2019-01-26 03:45:13 +08:00
|
|
|
continue;
|
|
|
|
|
2019-08-30 03:03:58 +08:00
|
|
|
BlockFrequency Gains = FallThroughGains(Pred, OldTop, OtherBB,
|
|
|
|
LoopBlockSet);
|
|
|
|
if ((Gains > 0) && (Gains > BestGains ||
|
|
|
|
((Gains == BestGains) && Pred->isLayoutSuccessor(OldTop)))) {
|
|
|
|
BestPred = Pred;
|
|
|
|
BestGains = Gains;
|
2012-04-16 21:33:36 +08:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
// If no direct predecessor is fine, just use the loop header.
|
2016-03-03 05:45:13 +08:00
|
|
|
if (!BestPred) {
|
2018-05-14 20:53:11 +08:00
|
|
|
LLVM_DEBUG(dbgs() << " final top unchanged\n");
|
2019-06-15 07:08:59 +08:00
|
|
|
return OldTop;
|
2016-03-03 05:45:13 +08:00
|
|
|
}
|
2012-04-16 21:33:36 +08:00
|
|
|
|
|
|
|
// Walk backwards through any straight line of predecessors.
|
|
|
|
while (BestPred->pred_size() == 1 &&
|
|
|
|
(*BestPred->pred_begin())->succ_size() == 1 &&
|
|
|
|
*BestPred->pred_begin() != L.getHeader())
|
|
|
|
BestPred = *BestPred->pred_begin();
|
|
|
|
|
2018-05-14 20:53:11 +08:00
|
|
|
LLVM_DEBUG(dbgs() << " final top: " << getBlockName(BestPred) << "\n");
|
2012-04-16 21:33:36 +08:00
|
|
|
return BestPred;
|
|
|
|
}
|
|
|
|
|
2019-08-30 03:03:58 +08:00
|
|
|
/// Find the best loop top block for layout.
|
2019-06-15 07:08:59 +08:00
|
|
|
///
|
|
|
|
/// This function iteratively calls findBestLoopTopHelper, until no new better
|
|
|
|
/// BB can be found.
|
|
|
|
MachineBasicBlock *
|
|
|
|
MachineBlockPlacement::findBestLoopTop(const MachineLoop &L,
|
|
|
|
const BlockFilterSet &LoopBlockSet) {
|
|
|
|
// Placing the latch block before the header may introduce an extra branch
|
|
|
|
// that skips this block the first time the loop is executed, which we want
|
|
|
|
// to avoid when optimising for size.
|
|
|
|
// FIXME: in theory there is a case that does not introduce a new branch,
|
|
|
|
// i.e. when the layout predecessor does not fallthrough to the loop header.
|
|
|
|
// In practice this never happens though: there always seems to be a preheader
|
|
|
|
// that can fallthrough and that is also placed before the header.
|
2019-12-06 01:39:37 +08:00
|
|
|
bool OptForSize = F->getFunction().hasOptSize() ||
|
2020-01-30 01:36:31 +08:00
|
|
|
llvm::shouldOptimizeForSize(L.getHeader(), PSI, MBFI.get());
|
2019-12-06 01:39:37 +08:00
|
|
|
if (OptForSize)
|
2019-06-15 07:08:59 +08:00
|
|
|
return L.getHeader();
|
|
|
|
|
|
|
|
MachineBasicBlock *OldTop = nullptr;
|
|
|
|
MachineBasicBlock *NewTop = L.getHeader();
|
|
|
|
while (NewTop != OldTop) {
|
|
|
|
OldTop = NewTop;
|
|
|
|
NewTop = findBestLoopTopHelper(OldTop, L, LoopBlockSet);
|
|
|
|
if (NewTop != OldTop)
|
|
|
|
ComputedEdges[NewTop] = { OldTop, false };
|
|
|
|
}
|
|
|
|
return NewTop;
|
|
|
|
}
|
|
|
|
|
2018-05-01 23:54:18 +08:00
|
|
|
/// Find the best loop exiting block for layout.
|
2012-04-16 21:33:36 +08:00
|
|
|
///
|
Take two on rotating the block ordering of loops. My previous attempt
was centered around the premise of laying out a loop in a chain, and
then rotating that chain. This is good for preserving contiguous layout,
but bad for actually making sane rotations. In order to keep it safe,
I had to essentially make it impossible to rotate deeply nested loops.
The information needed to correctly reason about a deeply nested loop is
actually available -- *before* we layout the loop. We know the inner
loops are already fused into chains, etc. We lose information the moment
we actually lay out the loop.
The solution was the other alternative for this algorithm I discussed
with Benjamin and some others: rather than rotating the loop
after-the-fact, try to pick a profitable starting block for the loop's
layout, and then use our existing layout logic. I was worried about the
complexity of this "pick" step, but it turns out such complexity is
needed to handle all the important cases I keep teasing out of benchmarks.
This is, I'm afraid, a bit of a work-in-progress. It is still
misbehaving on some likely important cases I'm investigating in Olden.
It also isn't really tested. I'm going to try to craft some interesting
nested-loop test cases, but it's likely to be extremely time consuming
and I don't want to go there until I'm sure I'm testing the correct
behavior. Sadly I can't come up with a way of getting simple, fine
grained test cases for this logic. We need complex loop structures to
even trigger much of it.
llvm-svn: 145183
2011-11-27 21:34:33 +08:00
|
|
|
/// This routine implements the logic to analyze the loop looking for the best
|
|
|
|
/// block to layout at the top of the loop. Typically this is done to maximize
|
|
|
|
/// fallthrough opportunities.
|
|
|
|
MachineBasicBlock *
|
2017-02-04 10:26:32 +08:00
|
|
|
MachineBlockPlacement::findBestLoopExit(const MachineLoop &L,
|
2019-08-30 03:03:58 +08:00
|
|
|
const BlockFilterSet &LoopBlockSet,
|
|
|
|
BlockFrequency &ExitFreq) {
|
2012-04-10 21:35:57 +08:00
|
|
|
// We don't want to layout the loop linearly in all cases. If the loop header
|
|
|
|
// is just a normal basic block in the loop, we want to look for what block
|
|
|
|
// within the loop is the best one to layout at the top. However, if the loop
|
|
|
|
// header has be pre-merged into a chain due to predecessors not having
|
|
|
|
// analyzable branches, *and* the predecessor it is merged with is *not* part
|
|
|
|
// of the loop, rotating the header into the middle of the loop will create
|
|
|
|
// a non-contiguous range of blocks which is Very Bad. So start with the
|
|
|
|
// header and only rotate if safe.
|
|
|
|
BlockChain &HeaderChain = *BlockToChain[L.getHeader()];
|
|
|
|
if (!LoopBlockSet.count(*HeaderChain.begin()))
|
2014-04-14 08:51:57 +08:00
|
|
|
return nullptr;
|
2012-04-10 21:35:57 +08:00
|
|
|
|
Take two on rotating the block ordering of loops. My previous attempt
was centered around the premise of laying out a loop in a chain, and
then rotating that chain. This is good for preserving contiguous layout,
but bad for actually making sane rotations. In order to keep it safe,
I had to essentially make it impossible to rotate deeply nested loops.
The information needed to correctly reason about a deeply nested loop is
actually available -- *before* we layout the loop. We know the inner
loops are already fused into chains, etc. We lose information the moment
we actually lay out the loop.
The solution was the other alternative for this algorithm I discussed
with Benjamin and some others: rather than rotating the loop
after-the-fact, try to pick a profitable starting block for the loop's
layout, and then use our existing layout logic. I was worried about the
complexity of this "pick" step, but it turns out such complexity is
needed to handle all the important cases I keep teasing out of benchmarks.
This is, I'm afraid, a bit of a work-in-progress. It is still
misbehaving on some likely important cases I'm investigating in Olden.
It also isn't really tested. I'm going to try to craft some interesting
nested-loop test cases, but it's likely to be extremely time consuming
and I don't want to go there until I'm sure I'm testing the correct
behavior. Sadly I can't come up with a way of getting simple, fine
grained test cases for this logic. We need complex loop structures to
even trigger much of it.
llvm-svn: 145183
2011-11-27 21:34:33 +08:00
|
|
|
BlockFrequency BestExitEdgeFreq;
|
Rewrite how machine block placement handles loop rotation.
This is a complex change that resulted from a great deal of
experimentation with several different benchmarks. The one which proved
the most useful is included as a test case, but I don't know that it
captures all of the relevant changes, as I didn't have specific
regression tests for each, they were more the result of reasoning about
what the old algorithm would possibly do wrong. I'm also failing at the
moment to craft more targeted regression tests for these changes, if
anyone has ideas, it would be welcome.
The first big thing broken with the old algorithm is the idea that we
can take a basic block which has a loop-exiting successor and a looping
successor and use the looping successor as the layout top in order to
get that particular block to be the bottom of the loop after layout.
This happens to work in many cases, but not in all.
The second big thing broken was that we didn't try to select the exit
which fell into the nearest enclosing loop (to which we exit at all). As
a consequence, even if the rotation worked perfectly, it would result in
one of two bad layouts. Either the bottom of the loop would get
fallthrough, skipping across a nearer enclosing loop and thereby making
it discontiguous, or it would be forced to take an explicit jump over
the nearest enclosing loop to earch its successor. The point of the
rotation is to get fallthrough, so we need it to fallthrough to the
nearest loop it can.
The fix to the first issue is to actually layout the loop from the loop
header, and then rotate the loop such that the correct exiting edge can
be a fallthrough edge. This is actually much easier than I anticipated
because we can handle all the hard parts of finding a viable rotation
before we do the layout. We just store that, and then rotate after
layout is finished. No inner loops get split across the post-rotation
backedge because we check for them when selecting the rotation.
That fix exposed a latent problem with our exitting block selection --
we should allow the backedge to point into the middle of some inner-loop
chain as there is no real penalty to it, the whole point is that it
*won't* be a fallthrough edge. This may have blocked the rotation at all
in some cases, I have no idea and no test case as I've never seen it in
practice, it was just noticed by inspection.
Finally, all of these fixes, and studying the loops they produce,
highlighted another problem: in rotating loops like this, we sometimes
fail to align the destination of these backwards jumping edges. Fix this
by actually walking the backwards edges rather than relying on loopinfo.
This fixes regressions on heapsort if block placement is enabled as well
as lots of other cases where the previous logic would introduce an
abundance of unnecessary branches into the execution.
llvm-svn: 154783
2012-04-16 09:12:56 +08:00
|
|
|
unsigned BestExitLoopDepth = 0;
|
2014-04-14 08:51:57 +08:00
|
|
|
MachineBasicBlock *ExitingBB = nullptr;
|
2011-11-28 04:18:00 +08:00
|
|
|
// If there are exits to outer loops, loop rotation can severely limit
|
2016-07-16 02:41:56 +08:00
|
|
|
// fallthrough opportunities unless it selects such an exit. Keep a set of
|
2011-11-28 04:18:00 +08:00
|
|
|
// blocks where rotating to exit with that block will reach an outer loop.
|
|
|
|
SmallPtrSet<MachineBasicBlock *, 4> BlocksExitingToOuterLoop;
|
|
|
|
|
2018-05-14 20:53:11 +08:00
|
|
|
LLVM_DEBUG(dbgs() << "Finding best loop exit for: "
|
|
|
|
<< getBlockName(L.getHeader()) << "\n");
|
2015-03-05 11:19:05 +08:00
|
|
|
for (MachineBasicBlock *MBB : L.getBlocks()) {
|
|
|
|
BlockChain &Chain = *BlockToChain[MBB];
|
Take two on rotating the block ordering of loops. My previous attempt
was centered around the premise of laying out a loop in a chain, and
then rotating that chain. This is good for preserving contiguous layout,
but bad for actually making sane rotations. In order to keep it safe,
I had to essentially make it impossible to rotate deeply nested loops.
The information needed to correctly reason about a deeply nested loop is
actually available -- *before* we layout the loop. We know the inner
loops are already fused into chains, etc. We lose information the moment
we actually lay out the loop.
The solution was the other alternative for this algorithm I discussed
with Benjamin and some others: rather than rotating the loop
after-the-fact, try to pick a profitable starting block for the loop's
layout, and then use our existing layout logic. I was worried about the
complexity of this "pick" step, but it turns out such complexity is
needed to handle all the important cases I keep teasing out of benchmarks.
This is, I'm afraid, a bit of a work-in-progress. It is still
misbehaving on some likely important cases I'm investigating in Olden.
It also isn't really tested. I'm going to try to craft some interesting
nested-loop test cases, but it's likely to be extremely time consuming
and I don't want to go there until I'm sure I'm testing the correct
behavior. Sadly I can't come up with a way of getting simple, fine
grained test cases for this logic. We need complex loop structures to
even trigger much of it.
llvm-svn: 145183
2011-11-27 21:34:33 +08:00
|
|
|
// Ensure that this block is at the end of a chain; otherwise it could be
|
2015-04-15 21:19:54 +08:00
|
|
|
// mid-way through an inner loop or a successor of an unanalyzable branch.
|
2015-03-05 11:19:05 +08:00
|
|
|
if (MBB != *std::prev(Chain.end()))
|
2011-11-27 08:38:03 +08:00
|
|
|
continue;
|
|
|
|
|
Take two on rotating the block ordering of loops. My previous attempt
was centered around the premise of laying out a loop in a chain, and
then rotating that chain. This is good for preserving contiguous layout,
but bad for actually making sane rotations. In order to keep it safe,
I had to essentially make it impossible to rotate deeply nested loops.
The information needed to correctly reason about a deeply nested loop is
actually available -- *before* we layout the loop. We know the inner
loops are already fused into chains, etc. We lose information the moment
we actually lay out the loop.
The solution was the other alternative for this algorithm I discussed
with Benjamin and some others: rather than rotating the loop
after-the-fact, try to pick a profitable starting block for the loop's
layout, and then use our existing layout logic. I was worried about the
complexity of this "pick" step, but it turns out such complexity is
needed to handle all the important cases I keep teasing out of benchmarks.
This is, I'm afraid, a bit of a work-in-progress. It is still
misbehaving on some likely important cases I'm investigating in Olden.
It also isn't really tested. I'm going to try to craft some interesting
nested-loop test cases, but it's likely to be extremely time consuming
and I don't want to go there until I'm sure I'm testing the correct
behavior. Sadly I can't come up with a way of getting simple, fine
grained test cases for this logic. We need complex loop structures to
even trigger much of it.
llvm-svn: 145183
2011-11-27 21:34:33 +08:00
|
|
|
// Now walk the successors. We need to establish whether this has a viable
|
|
|
|
// exiting successor and whether it has a viable non-exiting successor.
|
|
|
|
// We store the old exiting state and restore it if a viable looping
|
|
|
|
// successor isn't found.
|
|
|
|
MachineBasicBlock *OldExitingBB = ExitingBB;
|
|
|
|
BlockFrequency OldBestExitEdgeFreq = BestExitEdgeFreq;
|
Rewrite how machine block placement handles loop rotation.
This is a complex change that resulted from a great deal of
experimentation with several different benchmarks. The one which proved
the most useful is included as a test case, but I don't know that it
captures all of the relevant changes, as I didn't have specific
regression tests for each, they were more the result of reasoning about
what the old algorithm would possibly do wrong. I'm also failing at the
moment to craft more targeted regression tests for these changes, if
anyone has ideas, it would be welcome.
The first big thing broken with the old algorithm is the idea that we
can take a basic block which has a loop-exiting successor and a looping
successor and use the looping successor as the layout top in order to
get that particular block to be the bottom of the loop after layout.
This happens to work in many cases, but not in all.
The second big thing broken was that we didn't try to select the exit
which fell into the nearest enclosing loop (to which we exit at all). As
a consequence, even if the rotation worked perfectly, it would result in
one of two bad layouts. Either the bottom of the loop would get
fallthrough, skipping across a nearer enclosing loop and thereby making
it discontiguous, or it would be forced to take an explicit jump over
the nearest enclosing loop to earch its successor. The point of the
rotation is to get fallthrough, so we need it to fallthrough to the
nearest loop it can.
The fix to the first issue is to actually layout the loop from the loop
header, and then rotate the loop such that the correct exiting edge can
be a fallthrough edge. This is actually much easier than I anticipated
because we can handle all the hard parts of finding a viable rotation
before we do the layout. We just store that, and then rotate after
layout is finished. No inner loops get split across the post-rotation
backedge because we check for them when selecting the rotation.
That fix exposed a latent problem with our exitting block selection --
we should allow the backedge to point into the middle of some inner-loop
chain as there is no real penalty to it, the whole point is that it
*won't* be a fallthrough edge. This may have blocked the rotation at all
in some cases, I have no idea and no test case as I've never seen it in
practice, it was just noticed by inspection.
Finally, all of these fixes, and studying the loops they produce,
highlighted another problem: in rotating loops like this, we sometimes
fail to align the destination of these backwards jumping edges. Fix this
by actually walking the backwards edges rather than relying on loopinfo.
This fixes regressions on heapsort if block placement is enabled as well
as lots of other cases where the previous logic would introduce an
abundance of unnecessary branches into the execution.
llvm-svn: 154783
2012-04-16 09:12:56 +08:00
|
|
|
bool HasLoopingSucc = false;
|
2015-03-05 11:19:05 +08:00
|
|
|
for (MachineBasicBlock *Succ : MBB->successors()) {
|
2015-08-28 07:27:47 +08:00
|
|
|
if (Succ->isEHPad())
|
Take two on rotating the block ordering of loops. My previous attempt
was centered around the premise of laying out a loop in a chain, and
then rotating that chain. This is good for preserving contiguous layout,
but bad for actually making sane rotations. In order to keep it safe,
I had to essentially make it impossible to rotate deeply nested loops.
The information needed to correctly reason about a deeply nested loop is
actually available -- *before* we layout the loop. We know the inner
loops are already fused into chains, etc. We lose information the moment
we actually lay out the loop.
The solution was the other alternative for this algorithm I discussed
with Benjamin and some others: rather than rotating the loop
after-the-fact, try to pick a profitable starting block for the loop's
layout, and then use our existing layout logic. I was worried about the
complexity of this "pick" step, but it turns out such complexity is
needed to handle all the important cases I keep teasing out of benchmarks.
This is, I'm afraid, a bit of a work-in-progress. It is still
misbehaving on some likely important cases I'm investigating in Olden.
It also isn't really tested. I'm going to try to craft some interesting
nested-loop test cases, but it's likely to be extremely time consuming
and I don't want to go there until I'm sure I'm testing the correct
behavior. Sadly I can't come up with a way of getting simple, fine
grained test cases for this logic. We need complex loop structures to
even trigger much of it.
llvm-svn: 145183
2011-11-27 21:34:33 +08:00
|
|
|
continue;
|
2015-03-05 11:19:05 +08:00
|
|
|
if (Succ == MBB)
|
Take two on rotating the block ordering of loops. My previous attempt
was centered around the premise of laying out a loop in a chain, and
then rotating that chain. This is good for preserving contiguous layout,
but bad for actually making sane rotations. In order to keep it safe,
I had to essentially make it impossible to rotate deeply nested loops.
The information needed to correctly reason about a deeply nested loop is
actually available -- *before* we layout the loop. We know the inner
loops are already fused into chains, etc. We lose information the moment
we actually lay out the loop.
The solution was the other alternative for this algorithm I discussed
with Benjamin and some others: rather than rotating the loop
after-the-fact, try to pick a profitable starting block for the loop's
layout, and then use our existing layout logic. I was worried about the
complexity of this "pick" step, but it turns out such complexity is
needed to handle all the important cases I keep teasing out of benchmarks.
This is, I'm afraid, a bit of a work-in-progress. It is still
misbehaving on some likely important cases I'm investigating in Olden.
It also isn't really tested. I'm going to try to craft some interesting
nested-loop test cases, but it's likely to be extremely time consuming
and I don't want to go there until I'm sure I'm testing the correct
behavior. Sadly I can't come up with a way of getting simple, fine
grained test cases for this logic. We need complex loop structures to
even trigger much of it.
llvm-svn: 145183
2011-11-27 21:34:33 +08:00
|
|
|
continue;
|
2015-03-05 11:19:05 +08:00
|
|
|
BlockChain &SuccChain = *BlockToChain[Succ];
|
Take two on rotating the block ordering of loops. My previous attempt
was centered around the premise of laying out a loop in a chain, and
then rotating that chain. This is good for preserving contiguous layout,
but bad for actually making sane rotations. In order to keep it safe,
I had to essentially make it impossible to rotate deeply nested loops.
The information needed to correctly reason about a deeply nested loop is
actually available -- *before* we layout the loop. We know the inner
loops are already fused into chains, etc. We lose information the moment
we actually lay out the loop.
The solution was the other alternative for this algorithm I discussed
with Benjamin and some others: rather than rotating the loop
after-the-fact, try to pick a profitable starting block for the loop's
layout, and then use our existing layout logic. I was worried about the
complexity of this "pick" step, but it turns out such complexity is
needed to handle all the important cases I keep teasing out of benchmarks.
This is, I'm afraid, a bit of a work-in-progress. It is still
misbehaving on some likely important cases I'm investigating in Olden.
It also isn't really tested. I'm going to try to craft some interesting
nested-loop test cases, but it's likely to be extremely time consuming
and I don't want to go there until I'm sure I'm testing the correct
behavior. Sadly I can't come up with a way of getting simple, fine
grained test cases for this logic. We need complex loop structures to
even trigger much of it.
llvm-svn: 145183
2011-11-27 21:34:33 +08:00
|
|
|
// Don't split chains, either this chain or the successor's chain.
|
Rewrite how machine block placement handles loop rotation.
This is a complex change that resulted from a great deal of
experimentation with several different benchmarks. The one which proved
the most useful is included as a test case, but I don't know that it
captures all of the relevant changes, as I didn't have specific
regression tests for each, they were more the result of reasoning about
what the old algorithm would possibly do wrong. I'm also failing at the
moment to craft more targeted regression tests for these changes, if
anyone has ideas, it would be welcome.
The first big thing broken with the old algorithm is the idea that we
can take a basic block which has a loop-exiting successor and a looping
successor and use the looping successor as the layout top in order to
get that particular block to be the bottom of the loop after layout.
This happens to work in many cases, but not in all.
The second big thing broken was that we didn't try to select the exit
which fell into the nearest enclosing loop (to which we exit at all). As
a consequence, even if the rotation worked perfectly, it would result in
one of two bad layouts. Either the bottom of the loop would get
fallthrough, skipping across a nearer enclosing loop and thereby making
it discontiguous, or it would be forced to take an explicit jump over
the nearest enclosing loop to earch its successor. The point of the
rotation is to get fallthrough, so we need it to fallthrough to the
nearest loop it can.
The fix to the first issue is to actually layout the loop from the loop
header, and then rotate the loop such that the correct exiting edge can
be a fallthrough edge. This is actually much easier than I anticipated
because we can handle all the hard parts of finding a viable rotation
before we do the layout. We just store that, and then rotate after
layout is finished. No inner loops get split across the post-rotation
backedge because we check for them when selecting the rotation.
That fix exposed a latent problem with our exitting block selection --
we should allow the backedge to point into the middle of some inner-loop
chain as there is no real penalty to it, the whole point is that it
*won't* be a fallthrough edge. This may have blocked the rotation at all
in some cases, I have no idea and no test case as I've never seen it in
practice, it was just noticed by inspection.
Finally, all of these fixes, and studying the loops they produce,
highlighted another problem: in rotating loops like this, we sometimes
fail to align the destination of these backwards jumping edges. Fix this
by actually walking the backwards edges rather than relying on loopinfo.
This fixes regressions on heapsort if block placement is enabled as well
as lots of other cases where the previous logic would introduce an
abundance of unnecessary branches into the execution.
llvm-svn: 154783
2012-04-16 09:12:56 +08:00
|
|
|
if (&Chain == &SuccChain) {
|
2018-05-14 20:53:11 +08:00
|
|
|
LLVM_DEBUG(dbgs() << " exiting: " << getBlockName(MBB) << " -> "
|
|
|
|
<< getBlockName(Succ) << " (chain conflict)\n");
|
Take two on rotating the block ordering of loops. My previous attempt
was centered around the premise of laying out a loop in a chain, and
then rotating that chain. This is good for preserving contiguous layout,
but bad for actually making sane rotations. In order to keep it safe,
I had to essentially make it impossible to rotate deeply nested loops.
The information needed to correctly reason about a deeply nested loop is
actually available -- *before* we layout the loop. We know the inner
loops are already fused into chains, etc. We lose information the moment
we actually lay out the loop.
The solution was the other alternative for this algorithm I discussed
with Benjamin and some others: rather than rotating the loop
after-the-fact, try to pick a profitable starting block for the loop's
layout, and then use our existing layout logic. I was worried about the
complexity of this "pick" step, but it turns out such complexity is
needed to handle all the important cases I keep teasing out of benchmarks.
This is, I'm afraid, a bit of a work-in-progress. It is still
misbehaving on some likely important cases I'm investigating in Olden.
It also isn't really tested. I'm going to try to craft some interesting
nested-loop test cases, but it's likely to be extremely time consuming
and I don't want to go there until I'm sure I'm testing the correct
behavior. Sadly I can't come up with a way of getting simple, fine
grained test cases for this logic. We need complex loop structures to
even trigger much of it.
llvm-svn: 145183
2011-11-27 21:34:33 +08:00
|
|
|
continue;
|
|
|
|
}
|
|
|
|
|
2015-12-01 13:29:22 +08:00
|
|
|
auto SuccProb = MBPI->getEdgeProbability(MBB, Succ);
|
2015-03-05 11:19:05 +08:00
|
|
|
if (LoopBlockSet.count(Succ)) {
|
2018-05-14 20:53:11 +08:00
|
|
|
LLVM_DEBUG(dbgs() << " looping: " << getBlockName(MBB) << " -> "
|
|
|
|
<< getBlockName(Succ) << " (" << SuccProb << ")\n");
|
Rewrite how machine block placement handles loop rotation.
This is a complex change that resulted from a great deal of
experimentation with several different benchmarks. The one which proved
the most useful is included as a test case, but I don't know that it
captures all of the relevant changes, as I didn't have specific
regression tests for each, they were more the result of reasoning about
what the old algorithm would possibly do wrong. I'm also failing at the
moment to craft more targeted regression tests for these changes, if
anyone has ideas, it would be welcome.
The first big thing broken with the old algorithm is the idea that we
can take a basic block which has a loop-exiting successor and a looping
successor and use the looping successor as the layout top in order to
get that particular block to be the bottom of the loop after layout.
This happens to work in many cases, but not in all.
The second big thing broken was that we didn't try to select the exit
which fell into the nearest enclosing loop (to which we exit at all). As
a consequence, even if the rotation worked perfectly, it would result in
one of two bad layouts. Either the bottom of the loop would get
fallthrough, skipping across a nearer enclosing loop and thereby making
it discontiguous, or it would be forced to take an explicit jump over
the nearest enclosing loop to earch its successor. The point of the
rotation is to get fallthrough, so we need it to fallthrough to the
nearest loop it can.
The fix to the first issue is to actually layout the loop from the loop
header, and then rotate the loop such that the correct exiting edge can
be a fallthrough edge. This is actually much easier than I anticipated
because we can handle all the hard parts of finding a viable rotation
before we do the layout. We just store that, and then rotate after
layout is finished. No inner loops get split across the post-rotation
backedge because we check for them when selecting the rotation.
That fix exposed a latent problem with our exitting block selection --
we should allow the backedge to point into the middle of some inner-loop
chain as there is no real penalty to it, the whole point is that it
*won't* be a fallthrough edge. This may have blocked the rotation at all
in some cases, I have no idea and no test case as I've never seen it in
practice, it was just noticed by inspection.
Finally, all of these fixes, and studying the loops they produce,
highlighted another problem: in rotating loops like this, we sometimes
fail to align the destination of these backwards jumping edges. Fix this
by actually walking the backwards edges rather than relying on loopinfo.
This fixes regressions on heapsort if block placement is enabled as well
as lots of other cases where the previous logic would introduce an
abundance of unnecessary branches into the execution.
llvm-svn: 154783
2012-04-16 09:12:56 +08:00
|
|
|
HasLoopingSucc = true;
|
Take two on rotating the block ordering of loops. My previous attempt
was centered around the premise of laying out a loop in a chain, and
then rotating that chain. This is good for preserving contiguous layout,
but bad for actually making sane rotations. In order to keep it safe,
I had to essentially make it impossible to rotate deeply nested loops.
The information needed to correctly reason about a deeply nested loop is
actually available -- *before* we layout the loop. We know the inner
loops are already fused into chains, etc. We lose information the moment
we actually lay out the loop.
The solution was the other alternative for this algorithm I discussed
with Benjamin and some others: rather than rotating the loop
after-the-fact, try to pick a profitable starting block for the loop's
layout, and then use our existing layout logic. I was worried about the
complexity of this "pick" step, but it turns out such complexity is
needed to handle all the important cases I keep teasing out of benchmarks.
This is, I'm afraid, a bit of a work-in-progress. It is still
misbehaving on some likely important cases I'm investigating in Olden.
It also isn't really tested. I'm going to try to craft some interesting
nested-loop test cases, but it's likely to be extremely time consuming
and I don't want to go there until I'm sure I'm testing the correct
behavior. Sadly I can't come up with a way of getting simple, fine
grained test cases for this logic. We need complex loop structures to
even trigger much of it.
llvm-svn: 145183
2011-11-27 21:34:33 +08:00
|
|
|
continue;
|
|
|
|
}
|
|
|
|
|
Rewrite how machine block placement handles loop rotation.
This is a complex change that resulted from a great deal of
experimentation with several different benchmarks. The one which proved
the most useful is included as a test case, but I don't know that it
captures all of the relevant changes, as I didn't have specific
regression tests for each, they were more the result of reasoning about
what the old algorithm would possibly do wrong. I'm also failing at the
moment to craft more targeted regression tests for these changes, if
anyone has ideas, it would be welcome.
The first big thing broken with the old algorithm is the idea that we
can take a basic block which has a loop-exiting successor and a looping
successor and use the looping successor as the layout top in order to
get that particular block to be the bottom of the loop after layout.
This happens to work in many cases, but not in all.
The second big thing broken was that we didn't try to select the exit
which fell into the nearest enclosing loop (to which we exit at all). As
a consequence, even if the rotation worked perfectly, it would result in
one of two bad layouts. Either the bottom of the loop would get
fallthrough, skipping across a nearer enclosing loop and thereby making
it discontiguous, or it would be forced to take an explicit jump over
the nearest enclosing loop to earch its successor. The point of the
rotation is to get fallthrough, so we need it to fallthrough to the
nearest loop it can.
The fix to the first issue is to actually layout the loop from the loop
header, and then rotate the loop such that the correct exiting edge can
be a fallthrough edge. This is actually much easier than I anticipated
because we can handle all the hard parts of finding a viable rotation
before we do the layout. We just store that, and then rotate after
layout is finished. No inner loops get split across the post-rotation
backedge because we check for them when selecting the rotation.
That fix exposed a latent problem with our exitting block selection --
we should allow the backedge to point into the middle of some inner-loop
chain as there is no real penalty to it, the whole point is that it
*won't* be a fallthrough edge. This may have blocked the rotation at all
in some cases, I have no idea and no test case as I've never seen it in
practice, it was just noticed by inspection.
Finally, all of these fixes, and studying the loops they produce,
highlighted another problem: in rotating loops like this, we sometimes
fail to align the destination of these backwards jumping edges. Fix this
by actually walking the backwards edges rather than relying on loopinfo.
This fixes regressions on heapsort if block placement is enabled as well
as lots of other cases where the previous logic would introduce an
abundance of unnecessary branches into the execution.
llvm-svn: 154783
2012-04-16 09:12:56 +08:00
|
|
|
unsigned SuccLoopDepth = 0;
|
2015-03-05 11:19:05 +08:00
|
|
|
if (MachineLoop *ExitLoop = MLI->getLoopFor(Succ)) {
|
Rewrite how machine block placement handles loop rotation.
This is a complex change that resulted from a great deal of
experimentation with several different benchmarks. The one which proved
the most useful is included as a test case, but I don't know that it
captures all of the relevant changes, as I didn't have specific
regression tests for each, they were more the result of reasoning about
what the old algorithm would possibly do wrong. I'm also failing at the
moment to craft more targeted regression tests for these changes, if
anyone has ideas, it would be welcome.
The first big thing broken with the old algorithm is the idea that we
can take a basic block which has a loop-exiting successor and a looping
successor and use the looping successor as the layout top in order to
get that particular block to be the bottom of the loop after layout.
This happens to work in many cases, but not in all.
The second big thing broken was that we didn't try to select the exit
which fell into the nearest enclosing loop (to which we exit at all). As
a consequence, even if the rotation worked perfectly, it would result in
one of two bad layouts. Either the bottom of the loop would get
fallthrough, skipping across a nearer enclosing loop and thereby making
it discontiguous, or it would be forced to take an explicit jump over
the nearest enclosing loop to earch its successor. The point of the
rotation is to get fallthrough, so we need it to fallthrough to the
nearest loop it can.
The fix to the first issue is to actually layout the loop from the loop
header, and then rotate the loop such that the correct exiting edge can
be a fallthrough edge. This is actually much easier than I anticipated
because we can handle all the hard parts of finding a viable rotation
before we do the layout. We just store that, and then rotate after
layout is finished. No inner loops get split across the post-rotation
backedge because we check for them when selecting the rotation.
That fix exposed a latent problem with our exitting block selection --
we should allow the backedge to point into the middle of some inner-loop
chain as there is no real penalty to it, the whole point is that it
*won't* be a fallthrough edge. This may have blocked the rotation at all
in some cases, I have no idea and no test case as I've never seen it in
practice, it was just noticed by inspection.
Finally, all of these fixes, and studying the loops they produce,
highlighted another problem: in rotating loops like this, we sometimes
fail to align the destination of these backwards jumping edges. Fix this
by actually walking the backwards edges rather than relying on loopinfo.
This fixes regressions on heapsort if block placement is enabled as well
as lots of other cases where the previous logic would introduce an
abundance of unnecessary branches into the execution.
llvm-svn: 154783
2012-04-16 09:12:56 +08:00
|
|
|
SuccLoopDepth = ExitLoop->getLoopDepth();
|
|
|
|
if (ExitLoop->contains(&L))
|
2015-03-05 11:19:05 +08:00
|
|
|
BlocksExitingToOuterLoop.insert(MBB);
|
Rewrite how machine block placement handles loop rotation.
This is a complex change that resulted from a great deal of
experimentation with several different benchmarks. The one which proved
the most useful is included as a test case, but I don't know that it
captures all of the relevant changes, as I didn't have specific
regression tests for each, they were more the result of reasoning about
what the old algorithm would possibly do wrong. I'm also failing at the
moment to craft more targeted regression tests for these changes, if
anyone has ideas, it would be welcome.
The first big thing broken with the old algorithm is the idea that we
can take a basic block which has a loop-exiting successor and a looping
successor and use the looping successor as the layout top in order to
get that particular block to be the bottom of the loop after layout.
This happens to work in many cases, but not in all.
The second big thing broken was that we didn't try to select the exit
which fell into the nearest enclosing loop (to which we exit at all). As
a consequence, even if the rotation worked perfectly, it would result in
one of two bad layouts. Either the bottom of the loop would get
fallthrough, skipping across a nearer enclosing loop and thereby making
it discontiguous, or it would be forced to take an explicit jump over
the nearest enclosing loop to earch its successor. The point of the
rotation is to get fallthrough, so we need it to fallthrough to the
nearest loop it can.
The fix to the first issue is to actually layout the loop from the loop
header, and then rotate the loop such that the correct exiting edge can
be a fallthrough edge. This is actually much easier than I anticipated
because we can handle all the hard parts of finding a viable rotation
before we do the layout. We just store that, and then rotate after
layout is finished. No inner loops get split across the post-rotation
backedge because we check for them when selecting the rotation.
That fix exposed a latent problem with our exitting block selection --
we should allow the backedge to point into the middle of some inner-loop
chain as there is no real penalty to it, the whole point is that it
*won't* be a fallthrough edge. This may have blocked the rotation at all
in some cases, I have no idea and no test case as I've never seen it in
practice, it was just noticed by inspection.
Finally, all of these fixes, and studying the loops they produce,
highlighted another problem: in rotating loops like this, we sometimes
fail to align the destination of these backwards jumping edges. Fix this
by actually walking the backwards edges rather than relying on loopinfo.
This fixes regressions on heapsort if block placement is enabled as well
as lots of other cases where the previous logic would introduce an
abundance of unnecessary branches into the execution.
llvm-svn: 154783
2012-04-16 09:12:56 +08:00
|
|
|
}
|
|
|
|
|
2015-03-05 11:19:05 +08:00
|
|
|
BlockFrequency ExitEdgeFreq = MBFI->getBlockFreq(MBB) * SuccProb;
|
2018-05-14 20:53:11 +08:00
|
|
|
LLVM_DEBUG(dbgs() << " exiting: " << getBlockName(MBB) << " -> "
|
|
|
|
<< getBlockName(Succ) << " [L:" << SuccLoopDepth
|
|
|
|
<< "] (";
|
|
|
|
MBFI->printBlockFreq(dbgs(), ExitEdgeFreq) << ")\n");
|
2013-11-21 03:08:44 +08:00
|
|
|
// Note that we bias this toward an existing layout successor to retain
|
|
|
|
// incoming order in the absence of better information. The exit must have
|
|
|
|
// a frequency higher than the current exit before we consider breaking
|
|
|
|
// the layout.
|
|
|
|
BranchProbability Bias(100 - ExitBlockBias, 100);
|
2015-04-15 21:39:42 +08:00
|
|
|
if (!ExitingBB || SuccLoopDepth > BestExitLoopDepth ||
|
Rewrite how machine block placement handles loop rotation.
This is a complex change that resulted from a great deal of
experimentation with several different benchmarks. The one which proved
the most useful is included as a test case, but I don't know that it
captures all of the relevant changes, as I didn't have specific
regression tests for each, they were more the result of reasoning about
what the old algorithm would possibly do wrong. I'm also failing at the
moment to craft more targeted regression tests for these changes, if
anyone has ideas, it would be welcome.
The first big thing broken with the old algorithm is the idea that we
can take a basic block which has a loop-exiting successor and a looping
successor and use the looping successor as the layout top in order to
get that particular block to be the bottom of the loop after layout.
This happens to work in many cases, but not in all.
The second big thing broken was that we didn't try to select the exit
which fell into the nearest enclosing loop (to which we exit at all). As
a consequence, even if the rotation worked perfectly, it would result in
one of two bad layouts. Either the bottom of the loop would get
fallthrough, skipping across a nearer enclosing loop and thereby making
it discontiguous, or it would be forced to take an explicit jump over
the nearest enclosing loop to earch its successor. The point of the
rotation is to get fallthrough, so we need it to fallthrough to the
nearest loop it can.
The fix to the first issue is to actually layout the loop from the loop
header, and then rotate the loop such that the correct exiting edge can
be a fallthrough edge. This is actually much easier than I anticipated
because we can handle all the hard parts of finding a viable rotation
before we do the layout. We just store that, and then rotate after
layout is finished. No inner loops get split across the post-rotation
backedge because we check for them when selecting the rotation.
That fix exposed a latent problem with our exitting block selection --
we should allow the backedge to point into the middle of some inner-loop
chain as there is no real penalty to it, the whole point is that it
*won't* be a fallthrough edge. This may have blocked the rotation at all
in some cases, I have no idea and no test case as I've never seen it in
practice, it was just noticed by inspection.
Finally, all of these fixes, and studying the loops they produce,
highlighted another problem: in rotating loops like this, we sometimes
fail to align the destination of these backwards jumping edges. Fix this
by actually walking the backwards edges rather than relying on loopinfo.
This fixes regressions on heapsort if block placement is enabled as well
as lots of other cases where the previous logic would introduce an
abundance of unnecessary branches into the execution.
llvm-svn: 154783
2012-04-16 09:12:56 +08:00
|
|
|
ExitEdgeFreq > BestExitEdgeFreq ||
|
2015-03-05 11:19:05 +08:00
|
|
|
(MBB->isLayoutSuccessor(Succ) &&
|
2013-11-21 03:08:44 +08:00
|
|
|
!(ExitEdgeFreq < BestExitEdgeFreq * Bias))) {
|
Take two on rotating the block ordering of loops. My previous attempt
was centered around the premise of laying out a loop in a chain, and
then rotating that chain. This is good for preserving contiguous layout,
but bad for actually making sane rotations. In order to keep it safe,
I had to essentially make it impossible to rotate deeply nested loops.
The information needed to correctly reason about a deeply nested loop is
actually available -- *before* we layout the loop. We know the inner
loops are already fused into chains, etc. We lose information the moment
we actually lay out the loop.
The solution was the other alternative for this algorithm I discussed
with Benjamin and some others: rather than rotating the loop
after-the-fact, try to pick a profitable starting block for the loop's
layout, and then use our existing layout logic. I was worried about the
complexity of this "pick" step, but it turns out such complexity is
needed to handle all the important cases I keep teasing out of benchmarks.
This is, I'm afraid, a bit of a work-in-progress. It is still
misbehaving on some likely important cases I'm investigating in Olden.
It also isn't really tested. I'm going to try to craft some interesting
nested-loop test cases, but it's likely to be extremely time consuming
and I don't want to go there until I'm sure I'm testing the correct
behavior. Sadly I can't come up with a way of getting simple, fine
grained test cases for this logic. We need complex loop structures to
even trigger much of it.
llvm-svn: 145183
2011-11-27 21:34:33 +08:00
|
|
|
BestExitEdgeFreq = ExitEdgeFreq;
|
2015-03-05 11:19:05 +08:00
|
|
|
ExitingBB = MBB;
|
2011-11-27 17:22:53 +08:00
|
|
|
}
|
2011-11-27 08:38:03 +08:00
|
|
|
}
|
Take two on rotating the block ordering of loops. My previous attempt
was centered around the premise of laying out a loop in a chain, and
then rotating that chain. This is good for preserving contiguous layout,
but bad for actually making sane rotations. In order to keep it safe,
I had to essentially make it impossible to rotate deeply nested loops.
The information needed to correctly reason about a deeply nested loop is
actually available -- *before* we layout the loop. We know the inner
loops are already fused into chains, etc. We lose information the moment
we actually lay out the loop.
The solution was the other alternative for this algorithm I discussed
with Benjamin and some others: rather than rotating the loop
after-the-fact, try to pick a profitable starting block for the loop's
layout, and then use our existing layout logic. I was worried about the
complexity of this "pick" step, but it turns out such complexity is
needed to handle all the important cases I keep teasing out of benchmarks.
This is, I'm afraid, a bit of a work-in-progress. It is still
misbehaving on some likely important cases I'm investigating in Olden.
It also isn't really tested. I'm going to try to craft some interesting
nested-loop test cases, but it's likely to be extremely time consuming
and I don't want to go there until I'm sure I'm testing the correct
behavior. Sadly I can't come up with a way of getting simple, fine
grained test cases for this logic. We need complex loop structures to
even trigger much of it.
llvm-svn: 145183
2011-11-27 21:34:33 +08:00
|
|
|
|
Rewrite how machine block placement handles loop rotation.
This is a complex change that resulted from a great deal of
experimentation with several different benchmarks. The one which proved
the most useful is included as a test case, but I don't know that it
captures all of the relevant changes, as I didn't have specific
regression tests for each, they were more the result of reasoning about
what the old algorithm would possibly do wrong. I'm also failing at the
moment to craft more targeted regression tests for these changes, if
anyone has ideas, it would be welcome.
The first big thing broken with the old algorithm is the idea that we
can take a basic block which has a loop-exiting successor and a looping
successor and use the looping successor as the layout top in order to
get that particular block to be the bottom of the loop after layout.
This happens to work in many cases, but not in all.
The second big thing broken was that we didn't try to select the exit
which fell into the nearest enclosing loop (to which we exit at all). As
a consequence, even if the rotation worked perfectly, it would result in
one of two bad layouts. Either the bottom of the loop would get
fallthrough, skipping across a nearer enclosing loop and thereby making
it discontiguous, or it would be forced to take an explicit jump over
the nearest enclosing loop to earch its successor. The point of the
rotation is to get fallthrough, so we need it to fallthrough to the
nearest loop it can.
The fix to the first issue is to actually layout the loop from the loop
header, and then rotate the loop such that the correct exiting edge can
be a fallthrough edge. This is actually much easier than I anticipated
because we can handle all the hard parts of finding a viable rotation
before we do the layout. We just store that, and then rotate after
layout is finished. No inner loops get split across the post-rotation
backedge because we check for them when selecting the rotation.
That fix exposed a latent problem with our exitting block selection --
we should allow the backedge to point into the middle of some inner-loop
chain as there is no real penalty to it, the whole point is that it
*won't* be a fallthrough edge. This may have blocked the rotation at all
in some cases, I have no idea and no test case as I've never seen it in
practice, it was just noticed by inspection.
Finally, all of these fixes, and studying the loops they produce,
highlighted another problem: in rotating loops like this, we sometimes
fail to align the destination of these backwards jumping edges. Fix this
by actually walking the backwards edges rather than relying on loopinfo.
This fixes regressions on heapsort if block placement is enabled as well
as lots of other cases where the previous logic would introduce an
abundance of unnecessary branches into the execution.
llvm-svn: 154783
2012-04-16 09:12:56 +08:00
|
|
|
if (!HasLoopingSucc) {
|
2015-04-15 21:26:41 +08:00
|
|
|
// Restore the old exiting state, no viable looping successor was found.
|
Take two on rotating the block ordering of loops. My previous attempt
was centered around the premise of laying out a loop in a chain, and
then rotating that chain. This is good for preserving contiguous layout,
but bad for actually making sane rotations. In order to keep it safe,
I had to essentially make it impossible to rotate deeply nested loops.
The information needed to correctly reason about a deeply nested loop is
actually available -- *before* we layout the loop. We know the inner
loops are already fused into chains, etc. We lose information the moment
we actually lay out the loop.
The solution was the other alternative for this algorithm I discussed
with Benjamin and some others: rather than rotating the loop
after-the-fact, try to pick a profitable starting block for the loop's
layout, and then use our existing layout logic. I was worried about the
complexity of this "pick" step, but it turns out such complexity is
needed to handle all the important cases I keep teasing out of benchmarks.
This is, I'm afraid, a bit of a work-in-progress. It is still
misbehaving on some likely important cases I'm investigating in Olden.
It also isn't really tested. I'm going to try to craft some interesting
nested-loop test cases, but it's likely to be extremely time consuming
and I don't want to go there until I'm sure I'm testing the correct
behavior. Sadly I can't come up with a way of getting simple, fine
grained test cases for this logic. We need complex loop structures to
even trigger much of it.
llvm-svn: 145183
2011-11-27 21:34:33 +08:00
|
|
|
ExitingBB = OldExitingBB;
|
|
|
|
BestExitEdgeFreq = OldBestExitEdgeFreq;
|
|
|
|
}
|
2011-11-27 08:38:03 +08:00
|
|
|
}
|
Rewrite how machine block placement handles loop rotation.
This is a complex change that resulted from a great deal of
experimentation with several different benchmarks. The one which proved
the most useful is included as a test case, but I don't know that it
captures all of the relevant changes, as I didn't have specific
regression tests for each, they were more the result of reasoning about
what the old algorithm would possibly do wrong. I'm also failing at the
moment to craft more targeted regression tests for these changes, if
anyone has ideas, it would be welcome.
The first big thing broken with the old algorithm is the idea that we
can take a basic block which has a loop-exiting successor and a looping
successor and use the looping successor as the layout top in order to
get that particular block to be the bottom of the loop after layout.
This happens to work in many cases, but not in all.
The second big thing broken was that we didn't try to select the exit
which fell into the nearest enclosing loop (to which we exit at all). As
a consequence, even if the rotation worked perfectly, it would result in
one of two bad layouts. Either the bottom of the loop would get
fallthrough, skipping across a nearer enclosing loop and thereby making
it discontiguous, or it would be forced to take an explicit jump over
the nearest enclosing loop to earch its successor. The point of the
rotation is to get fallthrough, so we need it to fallthrough to the
nearest loop it can.
The fix to the first issue is to actually layout the loop from the loop
header, and then rotate the loop such that the correct exiting edge can
be a fallthrough edge. This is actually much easier than I anticipated
because we can handle all the hard parts of finding a viable rotation
before we do the layout. We just store that, and then rotate after
layout is finished. No inner loops get split across the post-rotation
backedge because we check for them when selecting the rotation.
That fix exposed a latent problem with our exitting block selection --
we should allow the backedge to point into the middle of some inner-loop
chain as there is no real penalty to it, the whole point is that it
*won't* be a fallthrough edge. This may have blocked the rotation at all
in some cases, I have no idea and no test case as I've never seen it in
practice, it was just noticed by inspection.
Finally, all of these fixes, and studying the loops they produce,
highlighted another problem: in rotating loops like this, we sometimes
fail to align the destination of these backwards jumping edges. Fix this
by actually walking the backwards edges rather than relying on loopinfo.
This fixes regressions on heapsort if block placement is enabled as well
as lots of other cases where the previous logic would introduce an
abundance of unnecessary branches into the execution.
llvm-svn: 154783
2012-04-16 09:12:56 +08:00
|
|
|
// Without a candidate exiting block or with only a single block in the
|
Take two on rotating the block ordering of loops. My previous attempt
was centered around the premise of laying out a loop in a chain, and
then rotating that chain. This is good for preserving contiguous layout,
but bad for actually making sane rotations. In order to keep it safe,
I had to essentially make it impossible to rotate deeply nested loops.
The information needed to correctly reason about a deeply nested loop is
actually available -- *before* we layout the loop. We know the inner
loops are already fused into chains, etc. We lose information the moment
we actually lay out the loop.
The solution was the other alternative for this algorithm I discussed
with Benjamin and some others: rather than rotating the loop
after-the-fact, try to pick a profitable starting block for the loop's
layout, and then use our existing layout logic. I was worried about the
complexity of this "pick" step, but it turns out such complexity is
needed to handle all the important cases I keep teasing out of benchmarks.
This is, I'm afraid, a bit of a work-in-progress. It is still
misbehaving on some likely important cases I'm investigating in Olden.
It also isn't really tested. I'm going to try to craft some interesting
nested-loop test cases, but it's likely to be extremely time consuming
and I don't want to go there until I'm sure I'm testing the correct
behavior. Sadly I can't come up with a way of getting simple, fine
grained test cases for this logic. We need complex loop structures to
even trigger much of it.
llvm-svn: 145183
2011-11-27 21:34:33 +08:00
|
|
|
// loop, just use the loop header to layout the loop.
|
2016-07-27 16:49:23 +08:00
|
|
|
if (!ExitingBB) {
|
2018-05-14 20:53:11 +08:00
|
|
|
LLVM_DEBUG(
|
|
|
|
dbgs() << " No other candidate exit blocks, using loop header\n");
|
2016-07-27 16:49:23 +08:00
|
|
|
return nullptr;
|
|
|
|
}
|
|
|
|
if (L.getNumBlocks() == 1) {
|
2018-05-14 20:53:11 +08:00
|
|
|
LLVM_DEBUG(dbgs() << " Loop has 1 block, using loop header as exit\n");
|
2014-04-14 08:51:57 +08:00
|
|
|
return nullptr;
|
2016-07-27 16:49:23 +08:00
|
|
|
}
|
Take two on rotating the block ordering of loops. My previous attempt
was centered around the premise of laying out a loop in a chain, and
then rotating that chain. This is good for preserving contiguous layout,
but bad for actually making sane rotations. In order to keep it safe,
I had to essentially make it impossible to rotate deeply nested loops.
The information needed to correctly reason about a deeply nested loop is
actually available -- *before* we layout the loop. We know the inner
loops are already fused into chains, etc. We lose information the moment
we actually lay out the loop.
The solution was the other alternative for this algorithm I discussed
with Benjamin and some others: rather than rotating the loop
after-the-fact, try to pick a profitable starting block for the loop's
layout, and then use our existing layout logic. I was worried about the
complexity of this "pick" step, but it turns out such complexity is
needed to handle all the important cases I keep teasing out of benchmarks.
This is, I'm afraid, a bit of a work-in-progress. It is still
misbehaving on some likely important cases I'm investigating in Olden.
It also isn't really tested. I'm going to try to craft some interesting
nested-loop test cases, but it's likely to be extremely time consuming
and I don't want to go there until I'm sure I'm testing the correct
behavior. Sadly I can't come up with a way of getting simple, fine
grained test cases for this logic. We need complex loop structures to
even trigger much of it.
llvm-svn: 145183
2011-11-27 21:34:33 +08:00
|
|
|
|
2011-11-28 04:18:00 +08:00
|
|
|
// Also, if we have exit blocks which lead to outer loops but didn't select
|
|
|
|
// one of them as the exiting block we are rotating toward, disable loop
|
|
|
|
// rotation altogether.
|
|
|
|
if (!BlocksExitingToOuterLoop.empty() &&
|
|
|
|
!BlocksExitingToOuterLoop.count(ExitingBB))
|
2014-04-14 08:51:57 +08:00
|
|
|
return nullptr;
|
2011-11-28 04:18:00 +08:00
|
|
|
|
2018-05-14 20:53:11 +08:00
|
|
|
LLVM_DEBUG(dbgs() << " Best exiting block: " << getBlockName(ExitingBB)
|
|
|
|
<< "\n");
|
2019-08-30 03:03:58 +08:00
|
|
|
ExitFreq = BestExitEdgeFreq;
|
Rewrite how machine block placement handles loop rotation.
This is a complex change that resulted from a great deal of
experimentation with several different benchmarks. The one which proved
the most useful is included as a test case, but I don't know that it
captures all of the relevant changes, as I didn't have specific
regression tests for each, they were more the result of reasoning about
what the old algorithm would possibly do wrong. I'm also failing at the
moment to craft more targeted regression tests for these changes, if
anyone has ideas, it would be welcome.
The first big thing broken with the old algorithm is the idea that we
can take a basic block which has a loop-exiting successor and a looping
successor and use the looping successor as the layout top in order to
get that particular block to be the bottom of the loop after layout.
This happens to work in many cases, but not in all.
The second big thing broken was that we didn't try to select the exit
which fell into the nearest enclosing loop (to which we exit at all). As
a consequence, even if the rotation worked perfectly, it would result in
one of two bad layouts. Either the bottom of the loop would get
fallthrough, skipping across a nearer enclosing loop and thereby making
it discontiguous, or it would be forced to take an explicit jump over
the nearest enclosing loop to earch its successor. The point of the
rotation is to get fallthrough, so we need it to fallthrough to the
nearest loop it can.
The fix to the first issue is to actually layout the loop from the loop
header, and then rotate the loop such that the correct exiting edge can
be a fallthrough edge. This is actually much easier than I anticipated
because we can handle all the hard parts of finding a viable rotation
before we do the layout. We just store that, and then rotate after
layout is finished. No inner loops get split across the post-rotation
backedge because we check for them when selecting the rotation.
That fix exposed a latent problem with our exitting block selection --
we should allow the backedge to point into the middle of some inner-loop
chain as there is no real penalty to it, the whole point is that it
*won't* be a fallthrough edge. This may have blocked the rotation at all
in some cases, I have no idea and no test case as I've never seen it in
practice, it was just noticed by inspection.
Finally, all of these fixes, and studying the loops they produce,
highlighted another problem: in rotating loops like this, we sometimes
fail to align the destination of these backwards jumping edges. Fix this
by actually walking the backwards edges rather than relying on loopinfo.
This fixes regressions on heapsort if block placement is enabled as well
as lots of other cases where the previous logic would introduce an
abundance of unnecessary branches into the execution.
llvm-svn: 154783
2012-04-16 09:12:56 +08:00
|
|
|
return ExitingBB;
|
2011-11-27 08:38:03 +08:00
|
|
|
}
|
|
|
|
|
2019-02-23 02:04:37 +08:00
|
|
|
/// Check if there is a fallthrough to loop header Top.
|
|
|
|
///
|
|
|
|
/// 1. Look for a Pred that can be layout before Top.
|
|
|
|
/// 2. Check if Top is the most possible successor of Pred.
|
|
|
|
bool
|
|
|
|
MachineBlockPlacement::hasViableTopFallthrough(
|
|
|
|
const MachineBasicBlock *Top,
|
|
|
|
const BlockFilterSet &LoopBlockSet) {
|
|
|
|
for (MachineBasicBlock *Pred : Top->predecessors()) {
|
|
|
|
BlockChain *PredChain = BlockToChain[Pred];
|
|
|
|
if (!LoopBlockSet.count(Pred) &&
|
|
|
|
(!PredChain || Pred == *std::prev(PredChain->end()))) {
|
|
|
|
// Found a Pred block can be placed before Top.
|
|
|
|
// Check if Top is the best successor of Pred.
|
|
|
|
auto TopProb = MBPI->getEdgeProbability(Pred, Top);
|
|
|
|
bool TopOK = true;
|
|
|
|
for (MachineBasicBlock *Succ : Pred->successors()) {
|
|
|
|
auto SuccProb = MBPI->getEdgeProbability(Pred, Succ);
|
|
|
|
BlockChain *SuccChain = BlockToChain[Succ];
|
|
|
|
// Check if Succ can be placed after Pred.
|
|
|
|
// Succ should not be in any chain, or it is the head of some chain.
|
|
|
|
if ((!SuccChain || Succ == *SuccChain->begin()) && SuccProb > TopProb) {
|
|
|
|
TopOK = false;
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
if (TopOK)
|
|
|
|
return true;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
return false;
|
|
|
|
}
|
|
|
|
|
2018-05-01 23:54:18 +08:00
|
|
|
/// Attempt to rotate an exiting block to the bottom of the loop.
|
2012-04-16 17:31:23 +08:00
|
|
|
///
|
|
|
|
/// Once we have built a chain, try to rotate it to line up the hot exit block
|
|
|
|
/// with fallthrough out of the loop if doing so doesn't introduce unnecessary
|
|
|
|
/// branches. For example, if the loop has fallthrough into its header and out
|
|
|
|
/// of its bottom already, don't rotate it.
|
|
|
|
void MachineBlockPlacement::rotateLoop(BlockChain &LoopChain,
|
2017-02-04 10:26:32 +08:00
|
|
|
const MachineBasicBlock *ExitingBB,
|
2019-08-30 03:03:58 +08:00
|
|
|
BlockFrequency ExitFreq,
|
2012-04-16 17:31:23 +08:00
|
|
|
const BlockFilterSet &LoopBlockSet) {
|
|
|
|
if (!ExitingBB)
|
|
|
|
return;
|
|
|
|
|
|
|
|
MachineBasicBlock *Top = *LoopChain.begin();
|
Revert Revert [MBP] do not rotate loop if it creates extra branch
This is a second attempt to land this patch.
The first one resulted in a crash of clang sanitizer buildbot.
The fix is here and regression test is added.
This is a last fix for the corner case of PR32214. Actually this is not really corner case in general.
We should not do a loop rotation if we create an additional branch due to it.
Consider the case where we have a loop chain H, M, B, C , where
H is header with viable fallthrough from pre-header and exit from the loop
M - some middle block
B - backedge to Header but with exit from the loop also.
C - some cold block of the loop.
Let's H is determined as a best exit. If we do a loop rotation M, B, C, H we can introduce the extra branch.
Let's compute the change in number of branches:
+1 branch from pre-header to header
-1 branch from header to exit
+1 branch from header to middle block if there is such
-1 branch from cold bock to header if there is one
So if C is not a predecessor of H then we introduce extra branch.
This change actually prohibits rotation of the loop if both true
Best Exit has next element in chain as successor.
Last element in chain is not a predecessor of first element of chain.
Reviewers: iteratee, xur, sammccall, chandlerc
Reviewed By: iteratee
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D34745
llvm-svn: 307631
2017-07-11 16:34:58 +08:00
|
|
|
MachineBasicBlock *Bottom = *std::prev(LoopChain.end());
|
|
|
|
|
|
|
|
// If ExitingBB is already the last one in a chain then nothing to do.
|
|
|
|
if (Bottom == ExitingBB)
|
|
|
|
return;
|
|
|
|
|
2019-02-23 02:04:37 +08:00
|
|
|
bool ViableTopFallthrough = hasViableTopFallthrough(Top, LoopBlockSet);
|
2012-04-16 17:31:23 +08:00
|
|
|
|
|
|
|
// If the header has viable fallthrough, check whether the current loop
|
|
|
|
// bottom is a viable exiting block. If so, bail out as rotating will
|
|
|
|
// introduce an unnecessary branch.
|
|
|
|
if (ViableTopFallthrough) {
|
2015-03-05 11:19:05 +08:00
|
|
|
for (MachineBasicBlock *Succ : Bottom->successors()) {
|
|
|
|
BlockChain *SuccChain = BlockToChain[Succ];
|
|
|
|
if (!LoopBlockSet.count(Succ) &&
|
|
|
|
(!SuccChain || Succ == *SuccChain->begin()))
|
2012-04-16 17:31:23 +08:00
|
|
|
return;
|
|
|
|
}
|
2019-08-30 03:03:58 +08:00
|
|
|
|
|
|
|
// Rotate will destroy the top fallthrough, we need to ensure the new exit
|
|
|
|
// frequency is larger than top fallthrough.
|
|
|
|
BlockFrequency FallThrough2Top = TopFallThroughFreq(Top, LoopBlockSet);
|
|
|
|
if (FallThrough2Top >= ExitFreq)
|
|
|
|
return;
|
2012-04-16 17:31:23 +08:00
|
|
|
}
|
|
|
|
|
2017-08-25 05:21:39 +08:00
|
|
|
BlockChain::iterator ExitIt = llvm::find(LoopChain, ExitingBB);
|
2012-04-16 17:31:23 +08:00
|
|
|
if (ExitIt == LoopChain.end())
|
|
|
|
return;
|
|
|
|
|
Revert Revert [MBP] do not rotate loop if it creates extra branch
This is a second attempt to land this patch.
The first one resulted in a crash of clang sanitizer buildbot.
The fix is here and regression test is added.
This is a last fix for the corner case of PR32214. Actually this is not really corner case in general.
We should not do a loop rotation if we create an additional branch due to it.
Consider the case where we have a loop chain H, M, B, C , where
H is header with viable fallthrough from pre-header and exit from the loop
M - some middle block
B - backedge to Header but with exit from the loop also.
C - some cold block of the loop.
Let's H is determined as a best exit. If we do a loop rotation M, B, C, H we can introduce the extra branch.
Let's compute the change in number of branches:
+1 branch from pre-header to header
-1 branch from header to exit
+1 branch from header to middle block if there is such
-1 branch from cold bock to header if there is one
So if C is not a predecessor of H then we introduce extra branch.
This change actually prohibits rotation of the loop if both true
Best Exit has next element in chain as successor.
Last element in chain is not a predecessor of first element of chain.
Reviewers: iteratee, xur, sammccall, chandlerc
Reviewed By: iteratee
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D34745
llvm-svn: 307631
2017-07-11 16:34:58 +08:00
|
|
|
// Rotating a loop exit to the bottom when there is a fallthrough to top
|
|
|
|
// trades the entry fallthrough for an exit fallthrough.
|
|
|
|
// If there is no bottom->top edge, but the chosen exit block does have
|
|
|
|
// a fallthrough, we break that fallthrough for nothing in return.
|
|
|
|
|
|
|
|
// Let's consider an example. We have a built chain of basic blocks
|
|
|
|
// B1, B2, ..., Bn, where Bk is a ExitingBB - chosen exit block.
|
|
|
|
// By doing a rotation we get
|
|
|
|
// Bk+1, ..., Bn, B1, ..., Bk
|
|
|
|
// Break of fallthrough to B1 is compensated by a fallthrough from Bk.
|
|
|
|
// If we had a fallthrough Bk -> Bk+1 it is broken now.
|
|
|
|
// It might be compensated by fallthrough Bn -> B1.
|
|
|
|
// So we have a condition to avoid creation of extra branch by loop rotation.
|
|
|
|
// All below must be true to avoid loop rotation:
|
|
|
|
// If there is a fallthrough to top (B1)
|
|
|
|
// There was fallthrough from chosen exit block (Bk) to next one (Bk+1)
|
|
|
|
// There is no fallthrough from bottom (Bn) to top (B1).
|
|
|
|
// Please note that there is no exit fallthrough from Bn because we checked it
|
|
|
|
// above.
|
|
|
|
if (ViableTopFallthrough) {
|
|
|
|
assert(std::next(ExitIt) != LoopChain.end() &&
|
|
|
|
"Exit should not be last BB");
|
|
|
|
MachineBasicBlock *NextBlockInChain = *std::next(ExitIt);
|
|
|
|
if (ExitingBB->isSuccessor(NextBlockInChain))
|
|
|
|
if (!Bottom->isSuccessor(Top))
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
|
2018-05-14 20:53:11 +08:00
|
|
|
LLVM_DEBUG(dbgs() << "Rotating loop to put exit " << getBlockName(ExitingBB)
|
|
|
|
<< " at bottom\n");
|
2014-03-02 20:27:27 +08:00
|
|
|
std::rotate(LoopChain.begin(), std::next(ExitIt), LoopChain.end());
|
2012-04-16 17:31:23 +08:00
|
|
|
}
|
|
|
|
|
2018-05-01 23:54:18 +08:00
|
|
|
/// Attempt to rotate a loop based on profile data to reduce branch cost.
|
2015-10-20 07:16:40 +08:00
|
|
|
///
|
|
|
|
/// With profile data, we can determine the cost in terms of missed fall through
|
|
|
|
/// opportunities when rotating a loop chain and select the best rotation.
|
|
|
|
/// Basically, there are three kinds of cost to consider for each rotation:
|
|
|
|
/// 1. The possibly missed fall through edge (if it exists) from BB out of
|
|
|
|
/// the loop to the loop header.
|
|
|
|
/// 2. The possibly missed fall through edges (if they exist) from the loop
|
|
|
|
/// exits to BB out of the loop.
|
|
|
|
/// 3. The missed fall through edge (if it exists) from the last BB to the
|
|
|
|
/// first BB in the loop chain.
|
|
|
|
/// Therefore, the cost for a given rotation is the sum of costs listed above.
|
|
|
|
/// We select the best rotation with the smallest cost.
|
|
|
|
void MachineBlockPlacement::rotateLoopWithProfile(
|
2017-02-04 10:26:32 +08:00
|
|
|
BlockChain &LoopChain, const MachineLoop &L,
|
|
|
|
const BlockFilterSet &LoopBlockSet) {
|
2015-10-20 07:16:40 +08:00
|
|
|
auto RotationPos = LoopChain.end();
|
|
|
|
|
|
|
|
BlockFrequency SmallestRotationCost = BlockFrequency::getMaxFrequency();
|
|
|
|
|
|
|
|
// A utility lambda that scales up a block frequency by dividing it by a
|
|
|
|
// branch probability which is the reciprocal of the scale.
|
|
|
|
auto ScaleBlockFrequency = [](BlockFrequency Freq,
|
|
|
|
unsigned Scale) -> BlockFrequency {
|
|
|
|
if (Scale == 0)
|
|
|
|
return 0;
|
|
|
|
// Use operator / between BlockFrequency and BranchProbability to implement
|
|
|
|
// saturating multiplication.
|
|
|
|
return Freq / BranchProbability(1, Scale);
|
|
|
|
};
|
|
|
|
|
|
|
|
// Compute the cost of the missed fall-through edge to the loop header if the
|
|
|
|
// chain head is not the loop header. As we only consider natural loops with
|
|
|
|
// single header, this computation can be done only once.
|
|
|
|
BlockFrequency HeaderFallThroughCost(0);
|
2019-06-15 07:08:59 +08:00
|
|
|
MachineBasicBlock *ChainHeaderBB = *LoopChain.begin();
|
|
|
|
for (auto *Pred : ChainHeaderBB->predecessors()) {
|
2015-10-20 07:16:40 +08:00
|
|
|
BlockChain *PredChain = BlockToChain[Pred];
|
|
|
|
if (!LoopBlockSet.count(Pred) &&
|
|
|
|
(!PredChain || Pred == *std::prev(PredChain->end()))) {
|
2019-06-15 07:08:59 +08:00
|
|
|
auto EdgeFreq = MBFI->getBlockFreq(Pred) *
|
|
|
|
MBPI->getEdgeProbability(Pred, ChainHeaderBB);
|
2015-10-20 07:16:40 +08:00
|
|
|
auto FallThruCost = ScaleBlockFrequency(EdgeFreq, MisfetchCost);
|
|
|
|
// If the predecessor has only an unconditional jump to the header, we
|
|
|
|
// need to consider the cost of this jump.
|
|
|
|
if (Pred->succ_size() == 1)
|
|
|
|
FallThruCost += ScaleBlockFrequency(EdgeFreq, JumpInstCost);
|
|
|
|
HeaderFallThroughCost = std::max(HeaderFallThroughCost, FallThruCost);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
// Here we collect all exit blocks in the loop, and for each exit we find out
|
|
|
|
// its hottest exit edge. For each loop rotation, we define the loop exit cost
|
|
|
|
// as the sum of frequencies of exit edges we collect here, excluding the exit
|
|
|
|
// edge from the tail of the loop chain.
|
|
|
|
SmallVector<std::pair<MachineBasicBlock *, BlockFrequency>, 4> ExitsWithFreq;
|
|
|
|
for (auto BB : LoopChain) {
|
2015-12-01 13:29:22 +08:00
|
|
|
auto LargestExitEdgeProb = BranchProbability::getZero();
|
2015-10-20 07:16:40 +08:00
|
|
|
for (auto *Succ : BB->successors()) {
|
|
|
|
BlockChain *SuccChain = BlockToChain[Succ];
|
|
|
|
if (!LoopBlockSet.count(Succ) &&
|
|
|
|
(!SuccChain || Succ == *SuccChain->begin())) {
|
2015-12-01 13:29:22 +08:00
|
|
|
auto SuccProb = MBPI->getEdgeProbability(BB, Succ);
|
|
|
|
LargestExitEdgeProb = std::max(LargestExitEdgeProb, SuccProb);
|
2015-10-20 07:16:40 +08:00
|
|
|
}
|
|
|
|
}
|
2015-12-01 13:29:22 +08:00
|
|
|
if (LargestExitEdgeProb > BranchProbability::getZero()) {
|
|
|
|
auto ExitFreq = MBFI->getBlockFreq(BB) * LargestExitEdgeProb;
|
2015-10-20 07:16:40 +08:00
|
|
|
ExitsWithFreq.emplace_back(BB, ExitFreq);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
// In this loop we iterate every block in the loop chain and calculate the
|
|
|
|
// cost assuming the block is the head of the loop chain. When the loop ends,
|
|
|
|
// we should have found the best candidate as the loop chain's head.
|
|
|
|
for (auto Iter = LoopChain.begin(), TailIter = std::prev(LoopChain.end()),
|
|
|
|
EndIter = LoopChain.end();
|
|
|
|
Iter != EndIter; Iter++, TailIter++) {
|
|
|
|
// TailIter is used to track the tail of the loop chain if the block we are
|
|
|
|
// checking (pointed by Iter) is the head of the chain.
|
|
|
|
if (TailIter == LoopChain.end())
|
|
|
|
TailIter = LoopChain.begin();
|
|
|
|
|
|
|
|
auto TailBB = *TailIter;
|
|
|
|
|
|
|
|
// Calculate the cost by putting this BB to the top.
|
|
|
|
BlockFrequency Cost = 0;
|
|
|
|
|
|
|
|
// If the current BB is the loop header, we need to take into account the
|
|
|
|
// cost of the missed fall through edge from outside of the loop to the
|
|
|
|
// header.
|
2019-06-15 07:08:59 +08:00
|
|
|
if (Iter != LoopChain.begin())
|
2015-10-20 07:16:40 +08:00
|
|
|
Cost += HeaderFallThroughCost;
|
|
|
|
|
|
|
|
// Collect the loop exit cost by summing up frequencies of all exit edges
|
|
|
|
// except the one from the chain tail.
|
|
|
|
for (auto &ExitWithFreq : ExitsWithFreq)
|
|
|
|
if (TailBB != ExitWithFreq.first)
|
|
|
|
Cost += ExitWithFreq.second;
|
|
|
|
|
|
|
|
// The cost of breaking the once fall-through edge from the tail to the top
|
|
|
|
// of the loop chain. Here we need to consider three cases:
|
|
|
|
// 1. If the tail node has only one successor, then we will get an
|
|
|
|
// additional jmp instruction. So the cost here is (MisfetchCost +
|
|
|
|
// JumpInstCost) * tail node frequency.
|
|
|
|
// 2. If the tail node has two successors, then we may still get an
|
|
|
|
// additional jmp instruction if the layout successor after the loop
|
|
|
|
// chain is not its CFG successor. Note that the more frequently executed
|
|
|
|
// jmp instruction will be put ahead of the other one. Assume the
|
|
|
|
// frequency of those two branches are x and y, where x is the frequency
|
|
|
|
// of the edge to the chain head, then the cost will be
|
|
|
|
// (x * MisfetechCost + min(x, y) * JumpInstCost) * tail node frequency.
|
|
|
|
// 3. If the tail node has more than two successors (this rarely happens),
|
|
|
|
// we won't consider any additional cost.
|
|
|
|
if (TailBB->isSuccessor(*Iter)) {
|
|
|
|
auto TailBBFreq = MBFI->getBlockFreq(TailBB);
|
|
|
|
if (TailBB->succ_size() == 1)
|
|
|
|
Cost += ScaleBlockFrequency(TailBBFreq.getFrequency(),
|
|
|
|
MisfetchCost + JumpInstCost);
|
|
|
|
else if (TailBB->succ_size() == 2) {
|
|
|
|
auto TailToHeadProb = MBPI->getEdgeProbability(TailBB, *Iter);
|
|
|
|
auto TailToHeadFreq = TailBBFreq * TailToHeadProb;
|
|
|
|
auto ColderEdgeFreq = TailToHeadProb > BranchProbability(1, 2)
|
|
|
|
? TailBBFreq * TailToHeadProb.getCompl()
|
|
|
|
: TailToHeadFreq;
|
|
|
|
Cost += ScaleBlockFrequency(TailToHeadFreq, MisfetchCost) +
|
|
|
|
ScaleBlockFrequency(ColderEdgeFreq, JumpInstCost);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2018-05-14 20:53:11 +08:00
|
|
|
LLVM_DEBUG(dbgs() << "The cost of loop rotation by making "
|
|
|
|
<< getBlockName(*Iter)
|
|
|
|
<< " to the top: " << Cost.getFrequency() << "\n");
|
2015-10-20 07:16:40 +08:00
|
|
|
|
|
|
|
if (Cost < SmallestRotationCost) {
|
|
|
|
SmallestRotationCost = Cost;
|
|
|
|
RotationPos = Iter;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
if (RotationPos != LoopChain.end()) {
|
2018-05-14 20:53:11 +08:00
|
|
|
LLVM_DEBUG(dbgs() << "Rotate loop by making " << getBlockName(*RotationPos)
|
|
|
|
<< " to the top\n");
|
2015-10-20 07:16:40 +08:00
|
|
|
std::rotate(LoopChain.begin(), RotationPos, LoopChain.end());
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2018-05-01 23:54:18 +08:00
|
|
|
/// Collect blocks in the given loop that are to be placed.
|
In MachineBlockPlacement, filter cold blocks off the loop chain when profile data is available.
In the current BB placement algorithm, a loop chain always contains all loop blocks. This has a drawback that cold blocks in the loop may be inserted on a hot function path, hence increasing branch cost and also reducing icache locality.
Consider a simple example shown below:
A
|
B⇆C
|
D
When B->C is quite cold, the best BB-layout should be A,B,D,C. But the current implementation produces A,C,B,D.
This patch filters those cold blocks off from the loop chain by comparing the ratio:
LoopBBFreq / LoopFreq
to 20%: if it is less than 20%, we don't include this BB to the loop chain. Here LoopFreq is the frequency of the loop when we reduce the loop into a single node. In general we have more cold blocks when the loop has few iterations. And vice versa.
Differential revision: http://reviews.llvm.org/D11662
llvm-svn: 251833
2015-11-03 05:24:00 +08:00
|
|
|
///
|
|
|
|
/// When profile data is available, exclude cold blocks from the returned set;
|
|
|
|
/// otherwise, collect all blocks in the loop.
|
|
|
|
MachineBlockPlacement::BlockFilterSet
|
2017-02-04 10:26:32 +08:00
|
|
|
MachineBlockPlacement::collectLoopBlockSet(const MachineLoop &L) {
|
In MachineBlockPlacement, filter cold blocks off the loop chain when profile data is available.
In the current BB placement algorithm, a loop chain always contains all loop blocks. This has a drawback that cold blocks in the loop may be inserted on a hot function path, hence increasing branch cost and also reducing icache locality.
Consider a simple example shown below:
A
|
B⇆C
|
D
When B->C is quite cold, the best BB-layout should be A,B,D,C. But the current implementation produces A,C,B,D.
This patch filters those cold blocks off from the loop chain by comparing the ratio:
LoopBBFreq / LoopFreq
to 20%: if it is less than 20%, we don't include this BB to the loop chain. Here LoopFreq is the frequency of the loop when we reduce the loop into a single node. In general we have more cold blocks when the loop has few iterations. And vice versa.
Differential revision: http://reviews.llvm.org/D11662
llvm-svn: 251833
2015-11-03 05:24:00 +08:00
|
|
|
BlockFilterSet LoopBlockSet;
|
|
|
|
|
|
|
|
// Filter cold blocks off from LoopBlockSet when profile data is available.
|
|
|
|
// Collect the sum of frequencies of incoming edges to the loop header from
|
|
|
|
// outside. If we treat the loop as a super block, this is the frequency of
|
|
|
|
// the loop. Then for each block in the loop, we calculate the ratio between
|
|
|
|
// its frequency and the frequency of the loop block. When it is too small,
|
|
|
|
// don't add it to the loop chain. If there are outer loops, then this block
|
|
|
|
// will be merged into the first outer loop chain for which this block is not
|
2020-03-25 00:23:26 +08:00
|
|
|
// cold anymore. This needs precise profile data and we only do this when
|
|
|
|
// profile data is available.
|
|
|
|
if (F->getFunction().hasProfileData() || ForceLoopColdBlock) {
|
In MachineBlockPlacement, filter cold blocks off the loop chain when profile data is available.
In the current BB placement algorithm, a loop chain always contains all loop blocks. This has a drawback that cold blocks in the loop may be inserted on a hot function path, hence increasing branch cost and also reducing icache locality.
Consider a simple example shown below:
A
|
B⇆C
|
D
When B->C is quite cold, the best BB-layout should be A,B,D,C. But the current implementation produces A,C,B,D.
This patch filters those cold blocks off from the loop chain by comparing the ratio:
LoopBBFreq / LoopFreq
to 20%: if it is less than 20%, we don't include this BB to the loop chain. Here LoopFreq is the frequency of the loop when we reduce the loop into a single node. In general we have more cold blocks when the loop has few iterations. And vice versa.
Differential revision: http://reviews.llvm.org/D11662
llvm-svn: 251833
2015-11-03 05:24:00 +08:00
|
|
|
BlockFrequency LoopFreq(0);
|
|
|
|
for (auto LoopPred : L.getHeader()->predecessors())
|
|
|
|
if (!L.contains(LoopPred))
|
|
|
|
LoopFreq += MBFI->getBlockFreq(LoopPred) *
|
|
|
|
MBPI->getEdgeProbability(LoopPred, L.getHeader());
|
|
|
|
|
|
|
|
for (MachineBasicBlock *LoopBB : L.getBlocks()) {
|
|
|
|
auto Freq = MBFI->getBlockFreq(LoopBB).getFrequency();
|
|
|
|
if (Freq == 0 || LoopFreq.getFrequency() / Freq > LoopToColdBlockRatio)
|
|
|
|
continue;
|
|
|
|
LoopBlockSet.insert(LoopBB);
|
|
|
|
}
|
|
|
|
} else
|
|
|
|
LoopBlockSet.insert(L.block_begin(), L.block_end());
|
|
|
|
|
|
|
|
return LoopBlockSet;
|
|
|
|
}
|
|
|
|
|
2018-05-01 23:54:18 +08:00
|
|
|
/// Forms basic block chains from the natural loop structures.
|
Completely re-write the algorithm behind MachineBlockPlacement based on
discussions with Andy. Fundamentally, the previous algorithm is both
counter productive on several fronts and prioritizing things which
aren't necessarily the most important: static branch prediction.
The new algorithm uses the existing loop CFG structure information to
walk through the CFG itself to layout blocks. It coalesces adjacent
blocks within the loop where the CFG allows based on the most likely
path taken. Finally, it topologically orders the block chains that have
been formed. This allows it to choose a (mostly) topologically valid
ordering which still priorizes fallthrough within the structural
constraints.
As a final twist in the algorithm, it does violate the CFG when it
discovers a "hot" edge, that is an edge that is more than 4x hotter than
the competing edges in the CFG. These are forcibly merged into
a fallthrough chain.
Future transformations that need te be added are rotation of loop exit
conditions to be fallthrough, and better isolation of cold block chains.
I'm also planning on adding statistics to model how well the algorithm
does at laying out blocks based on the probabilities it receives.
The old tests mostly still pass, and I have some new tests to add, but
the nested loops are still behaving very strangely. This almost seems
like working-as-intended as it rotated the exit branch to be
fallthrough, but I'm not convinced this is actually the best layout. It
is well supported by the probabilities for loops we currently get, but
those are pretty broken for nested loops, so this may change later.
llvm-svn: 142743
2011-10-23 17:18:45 +08:00
|
|
|
///
|
|
|
|
/// These chains are designed to preserve the existing *structure* of the code
|
|
|
|
/// as much as possible. We can then stitch the chains together in a way which
|
|
|
|
/// both preserves the topological structure and minimizes taken conditional
|
|
|
|
/// branches.
|
2017-02-04 10:26:32 +08:00
|
|
|
void MachineBlockPlacement::buildLoopChains(const MachineLoop &L) {
|
Completely re-write the algorithm behind MachineBlockPlacement based on
discussions with Andy. Fundamentally, the previous algorithm is both
counter productive on several fronts and prioritizing things which
aren't necessarily the most important: static branch prediction.
The new algorithm uses the existing loop CFG structure information to
walk through the CFG itself to layout blocks. It coalesces adjacent
blocks within the loop where the CFG allows based on the most likely
path taken. Finally, it topologically orders the block chains that have
been formed. This allows it to choose a (mostly) topologically valid
ordering which still priorizes fallthrough within the structural
constraints.
As a final twist in the algorithm, it does violate the CFG when it
discovers a "hot" edge, that is an edge that is more than 4x hotter than
the competing edges in the CFG. These are forcibly merged into
a fallthrough chain.
Future transformations that need te be added are rotation of loop exit
conditions to be fallthrough, and better isolation of cold block chains.
I'm also planning on adding statistics to model how well the algorithm
does at laying out blocks based on the probabilities it receives.
The old tests mostly still pass, and I have some new tests to add, but
the nested loops are still behaving very strangely. This almost seems
like working-as-intended as it rotated the exit branch to be
fallthrough, but I'm not convinced this is actually the best layout. It
is well supported by the probabilities for loops we currently get, but
those are pretty broken for nested loops, so this may change later.
llvm-svn: 142743
2011-10-23 17:18:45 +08:00
|
|
|
// First recurse through any nested loops, building chains for those inner
|
|
|
|
// loops.
|
2017-02-04 10:26:32 +08:00
|
|
|
for (const MachineLoop *InnerLoop : L)
|
2016-06-14 06:23:44 +08:00
|
|
|
buildLoopChains(*InnerLoop);
|
Completely re-write the algorithm behind MachineBlockPlacement based on
discussions with Andy. Fundamentally, the previous algorithm is both
counter productive on several fronts and prioritizing things which
aren't necessarily the most important: static branch prediction.
The new algorithm uses the existing loop CFG structure information to
walk through the CFG itself to layout blocks. It coalesces adjacent
blocks within the loop where the CFG allows based on the most likely
path taken. Finally, it topologically orders the block chains that have
been formed. This allows it to choose a (mostly) topologically valid
ordering which still priorizes fallthrough within the structural
constraints.
As a final twist in the algorithm, it does violate the CFG when it
discovers a "hot" edge, that is an edge that is more than 4x hotter than
the competing edges in the CFG. These are forcibly merged into
a fallthrough chain.
Future transformations that need te be added are rotation of loop exit
conditions to be fallthrough, and better isolation of cold block chains.
I'm also planning on adding statistics to model how well the algorithm
does at laying out blocks based on the probabilities it receives.
The old tests mostly still pass, and I have some new tests to add, but
the nested loops are still behaving very strangely. This almost seems
like working-as-intended as it rotated the exit branch to be
fallthrough, but I'm not convinced this is actually the best layout. It
is well supported by the probabilities for loops we currently get, but
those are pretty broken for nested loops, so this may change later.
llvm-svn: 142743
2011-10-23 17:18:45 +08:00
|
|
|
|
2017-05-18 07:44:41 +08:00
|
|
|
assert(BlockWorkList.empty() &&
|
|
|
|
"BlockWorkList not empty when starting to build loop chains.");
|
|
|
|
assert(EHPadWorkList.empty() &&
|
|
|
|
"EHPadWorkList not empty when starting to build loop chains.");
|
2016-06-14 06:23:44 +08:00
|
|
|
BlockFilterSet LoopBlockSet = collectLoopBlockSet(L);
|
Take two on rotating the block ordering of loops. My previous attempt
was centered around the premise of laying out a loop in a chain, and
then rotating that chain. This is good for preserving contiguous layout,
but bad for actually making sane rotations. In order to keep it safe,
I had to essentially make it impossible to rotate deeply nested loops.
The information needed to correctly reason about a deeply nested loop is
actually available -- *before* we layout the loop. We know the inner
loops are already fused into chains, etc. We lose information the moment
we actually lay out the loop.
The solution was the other alternative for this algorithm I discussed
with Benjamin and some others: rather than rotating the loop
after-the-fact, try to pick a profitable starting block for the loop's
layout, and then use our existing layout logic. I was worried about the
complexity of this "pick" step, but it turns out such complexity is
needed to handle all the important cases I keep teasing out of benchmarks.
This is, I'm afraid, a bit of a work-in-progress. It is still
misbehaving on some likely important cases I'm investigating in Olden.
It also isn't really tested. I'm going to try to craft some interesting
nested-loop test cases, but it's likely to be extremely time consuming
and I don't want to go there until I'm sure I'm testing the correct
behavior. Sadly I can't come up with a way of getting simple, fine
grained test cases for this logic. We need complex loop structures to
even trigger much of it.
llvm-svn: 145183
2011-11-27 21:34:33 +08:00
|
|
|
|
2015-10-20 07:16:40 +08:00
|
|
|
// Check if we have profile data for this function. If yes, we will rotate
|
|
|
|
// this loop by modeling costs more precisely which requires the profile data
|
|
|
|
// for better layout.
|
|
|
|
bool RotateLoopWithProfile =
|
2016-05-12 10:04:41 +08:00
|
|
|
ForcePreciseRotationCost ||
|
2017-12-22 09:33:52 +08:00
|
|
|
(PreciseRotationCost && F->getFunction().hasProfileData());
|
2015-10-20 07:16:40 +08:00
|
|
|
|
2012-04-16 21:33:36 +08:00
|
|
|
// First check to see if there is an obviously preferable top block for the
|
|
|
|
// loop. This will default to the header, but may end up as one of the
|
|
|
|
// predecessors to the header if there is one which will result in strictly
|
|
|
|
// fewer branches in the loop body.
|
2019-08-30 03:03:58 +08:00
|
|
|
MachineBasicBlock *LoopTop = findBestLoopTop(L, LoopBlockSet);
|
2012-04-16 21:33:36 +08:00
|
|
|
|
|
|
|
// If we selected just the header for the loop top, look for a potentially
|
|
|
|
// profitable exit block in the event that rotating the loop can eliminate
|
|
|
|
// branches by placing an exit edge at the bottom.
|
2017-10-05 05:39:25 +08:00
|
|
|
//
|
|
|
|
// Loops are processed innermost to uttermost, make sure we clear
|
|
|
|
// PreferredLoopExit before processing a new loop.
|
|
|
|
PreferredLoopExit = nullptr;
|
2019-08-30 03:03:58 +08:00
|
|
|
BlockFrequency ExitFreq;
|
2015-10-20 07:16:40 +08:00
|
|
|
if (!RotateLoopWithProfile && LoopTop == L.getHeader())
|
2019-08-30 03:03:58 +08:00
|
|
|
PreferredLoopExit = findBestLoopExit(L, LoopBlockSet, ExitFreq);
|
2012-04-16 21:33:36 +08:00
|
|
|
|
|
|
|
BlockChain &LoopChain = *BlockToChain[LoopTop];
|
Completely re-write the algorithm behind MachineBlockPlacement based on
discussions with Andy. Fundamentally, the previous algorithm is both
counter productive on several fronts and prioritizing things which
aren't necessarily the most important: static branch prediction.
The new algorithm uses the existing loop CFG structure information to
walk through the CFG itself to layout blocks. It coalesces adjacent
blocks within the loop where the CFG allows based on the most likely
path taken. Finally, it topologically orders the block chains that have
been formed. This allows it to choose a (mostly) topologically valid
ordering which still priorizes fallthrough within the structural
constraints.
As a final twist in the algorithm, it does violate the CFG when it
discovers a "hot" edge, that is an edge that is more than 4x hotter than
the competing edges in the CFG. These are forcibly merged into
a fallthrough chain.
Future transformations that need te be added are rotation of loop exit
conditions to be fallthrough, and better isolation of cold block chains.
I'm also planning on adding statistics to model how well the algorithm
does at laying out blocks based on the probabilities it receives.
The old tests mostly still pass, and I have some new tests to add, but
the nested loops are still behaving very strangely. This almost seems
like working-as-intended as it rotated the exit branch to be
fallthrough, but I'm not convinced this is actually the best layout. It
is well supported by the probabilities for loops we currently get, but
those are pretty broken for nested loops, so this may change later.
llvm-svn: 142743
2011-10-23 17:18:45 +08:00
|
|
|
|
2011-11-13 19:20:44 +08:00
|
|
|
// FIXME: This is a really lame way of walking the chains in the loop: we
|
|
|
|
// walk the blocks, and use a set to prevent visiting a particular chain
|
|
|
|
// twice.
|
2011-12-22 07:02:08 +08:00
|
|
|
SmallPtrSet<BlockChain *, 4> UpdatedPreds;
|
2017-05-18 07:44:41 +08:00
|
|
|
assert(LoopChain.UnscheduledPredecessors == 0 &&
|
|
|
|
"LoopChain should not have unscheduled predecessors.");
|
2011-12-08 03:46:10 +08:00
|
|
|
UpdatedPreds.insert(&LoopChain);
|
In MachineBlockPlacement, filter cold blocks off the loop chain when profile data is available.
In the current BB placement algorithm, a loop chain always contains all loop blocks. This has a drawback that cold blocks in the loop may be inserted on a hot function path, hence increasing branch cost and also reducing icache locality.
Consider a simple example shown below:
A
|
B⇆C
|
D
When B->C is quite cold, the best BB-layout should be A,B,D,C. But the current implementation produces A,C,B,D.
This patch filters those cold blocks off from the loop chain by comparing the ratio:
LoopBBFreq / LoopFreq
to 20%: if it is less than 20%, we don't include this BB to the loop chain. Here LoopFreq is the frequency of the loop when we reduce the loop into a single node. In general we have more cold blocks when the loop has few iterations. And vice versa.
Differential revision: http://reviews.llvm.org/D11662
llvm-svn: 251833
2015-11-03 05:24:00 +08:00
|
|
|
|
2017-02-04 10:26:32 +08:00
|
|
|
for (const MachineBasicBlock *LoopBB : LoopBlockSet)
|
2016-07-01 13:46:48 +08:00
|
|
|
fillWorkLists(LoopBB, UpdatedPreds, &LoopBlockSet);
|
2011-11-13 19:20:44 +08:00
|
|
|
|
2016-07-01 13:46:48 +08:00
|
|
|
buildChain(LoopTop, LoopChain, &LoopBlockSet);
|
2015-10-20 07:16:40 +08:00
|
|
|
|
2019-08-30 03:03:58 +08:00
|
|
|
if (RotateLoopWithProfile)
|
|
|
|
rotateLoopWithProfile(LoopChain, L, LoopBlockSet);
|
|
|
|
else
|
|
|
|
rotateLoop(LoopChain, PreferredLoopExit, ExitFreq, LoopBlockSet);
|
2011-11-13 19:20:44 +08:00
|
|
|
|
2018-05-14 20:53:11 +08:00
|
|
|
LLVM_DEBUG({
|
2011-11-14 05:39:51 +08:00
|
|
|
// Crash at the end so we get all of the debugging output first.
|
|
|
|
bool BadLoop = false;
|
2016-03-03 08:58:43 +08:00
|
|
|
if (LoopChain.UnscheduledPredecessors) {
|
2011-11-14 05:39:51 +08:00
|
|
|
BadLoop = true;
|
2011-11-13 19:20:44 +08:00
|
|
|
dbgs() << "Loop chain contains a block without its preds placed!\n"
|
|
|
|
<< " Loop header: " << getBlockName(*L.block_begin()) << "\n"
|
|
|
|
<< " Chain header: " << getBlockName(*LoopChain.begin()) << "\n";
|
2011-11-14 05:39:51 +08:00
|
|
|
}
|
2015-03-05 11:19:05 +08:00
|
|
|
for (MachineBasicBlock *ChainBB : LoopChain) {
|
|
|
|
dbgs() << " ... " << getBlockName(ChainBB) << "\n";
|
2016-11-17 04:50:06 +08:00
|
|
|
if (!LoopBlockSet.remove(ChainBB)) {
|
2011-11-14 18:55:53 +08:00
|
|
|
// We don't mark the loop as bad here because there are real situations
|
|
|
|
// where this can occur. For example, with an unanalyzable fallthrough
|
2011-11-23 18:35:36 +08:00
|
|
|
// from a loop block to a non-loop block or vice versa.
|
2011-11-13 19:20:44 +08:00
|
|
|
dbgs() << "Loop chain contains a block not contained by the loop!\n"
|
|
|
|
<< " Loop header: " << getBlockName(*L.block_begin()) << "\n"
|
|
|
|
<< " Chain header: " << getBlockName(*LoopChain.begin()) << "\n"
|
2015-03-05 11:19:05 +08:00
|
|
|
<< " Bad block: " << getBlockName(ChainBB) << "\n";
|
2011-11-14 05:39:51 +08:00
|
|
|
}
|
Rewrite how machine block placement handles loop rotation.
This is a complex change that resulted from a great deal of
experimentation with several different benchmarks. The one which proved
the most useful is included as a test case, but I don't know that it
captures all of the relevant changes, as I didn't have specific
regression tests for each, they were more the result of reasoning about
what the old algorithm would possibly do wrong. I'm also failing at the
moment to craft more targeted regression tests for these changes, if
anyone has ideas, it would be welcome.
The first big thing broken with the old algorithm is the idea that we
can take a basic block which has a loop-exiting successor and a looping
successor and use the looping successor as the layout top in order to
get that particular block to be the bottom of the loop after layout.
This happens to work in many cases, but not in all.
The second big thing broken was that we didn't try to select the exit
which fell into the nearest enclosing loop (to which we exit at all). As
a consequence, even if the rotation worked perfectly, it would result in
one of two bad layouts. Either the bottom of the loop would get
fallthrough, skipping across a nearer enclosing loop and thereby making
it discontiguous, or it would be forced to take an explicit jump over
the nearest enclosing loop to earch its successor. The point of the
rotation is to get fallthrough, so we need it to fallthrough to the
nearest loop it can.
The fix to the first issue is to actually layout the loop from the loop
header, and then rotate the loop such that the correct exiting edge can
be a fallthrough edge. This is actually much easier than I anticipated
because we can handle all the hard parts of finding a viable rotation
before we do the layout. We just store that, and then rotate after
layout is finished. No inner loops get split across the post-rotation
backedge because we check for them when selecting the rotation.
That fix exposed a latent problem with our exitting block selection --
we should allow the backedge to point into the middle of some inner-loop
chain as there is no real penalty to it, the whole point is that it
*won't* be a fallthrough edge. This may have blocked the rotation at all
in some cases, I have no idea and no test case as I've never seen it in
practice, it was just noticed by inspection.
Finally, all of these fixes, and studying the loops they produce,
highlighted another problem: in rotating loops like this, we sometimes
fail to align the destination of these backwards jumping edges. Fix this
by actually walking the backwards edges rather than relying on loopinfo.
This fixes regressions on heapsort if block placement is enabled as well
as lots of other cases where the previous logic would introduce an
abundance of unnecessary branches into the execution.
llvm-svn: 154783
2012-04-16 09:12:56 +08:00
|
|
|
}
|
2011-11-13 19:20:44 +08:00
|
|
|
|
2011-11-14 05:39:51 +08:00
|
|
|
if (!LoopBlockSet.empty()) {
|
|
|
|
BadLoop = true;
|
2017-02-04 10:26:32 +08:00
|
|
|
for (const MachineBasicBlock *LoopBB : LoopBlockSet)
|
2011-11-13 19:20:44 +08:00
|
|
|
dbgs() << "Loop contains blocks never placed into a chain!\n"
|
|
|
|
<< " Loop header: " << getBlockName(*L.block_begin()) << "\n"
|
|
|
|
<< " Chain header: " << getBlockName(*LoopChain.begin()) << "\n"
|
2015-03-05 11:19:05 +08:00
|
|
|
<< " Bad block: " << getBlockName(LoopBB) << "\n";
|
2011-11-14 05:39:51 +08:00
|
|
|
}
|
|
|
|
assert(!BadLoop && "Detected problems with the placement of this loop.");
|
2011-11-13 19:20:44 +08:00
|
|
|
});
|
2016-07-01 13:46:48 +08:00
|
|
|
|
|
|
|
BlockWorkList.clear();
|
|
|
|
EHPadWorkList.clear();
|
Completely re-write the algorithm behind MachineBlockPlacement based on
discussions with Andy. Fundamentally, the previous algorithm is both
counter productive on several fronts and prioritizing things which
aren't necessarily the most important: static branch prediction.
The new algorithm uses the existing loop CFG structure information to
walk through the CFG itself to layout blocks. It coalesces adjacent
blocks within the loop where the CFG allows based on the most likely
path taken. Finally, it topologically orders the block chains that have
been formed. This allows it to choose a (mostly) topologically valid
ordering which still priorizes fallthrough within the structural
constraints.
As a final twist in the algorithm, it does violate the CFG when it
discovers a "hot" edge, that is an edge that is more than 4x hotter than
the competing edges in the CFG. These are forcibly merged into
a fallthrough chain.
Future transformations that need te be added are rotation of loop exit
conditions to be fallthrough, and better isolation of cold block chains.
I'm also planning on adding statistics to model how well the algorithm
does at laying out blocks based on the probabilities it receives.
The old tests mostly still pass, and I have some new tests to add, but
the nested loops are still behaving very strangely. This almost seems
like working-as-intended as it rotated the exit branch to be
fallthrough, but I'm not convinced this is actually the best layout. It
is well supported by the probabilities for loops we currently get, but
those are pretty broken for nested loops, so this may change later.
llvm-svn: 142743
2011-10-23 17:18:45 +08:00
|
|
|
}
|
Implement a block placement pass based on the branch probability and
block frequency analyses. This differs substantially from the existing
block-placement pass in LLVM:
1) It operates on the Machine-IR in the CodeGen layer. This exposes much
more (and more precise) information and opportunities. Also, the
results are more stable due to fewer transforms ocurring after the
pass runs.
2) It uses the generalized probability and frequency analyses. These can
model static heuristics, code annotation derived heuristics as well
as eventual profile loading. By basing the optimization on the
analysis interface it can work from any (or a combination) of these
inputs.
3) It uses a more aggressive algorithm, both building chains from tho
bottom up to maximize benefit, and using an SCC-based walk to layout
chains of blocks in a profitable ordering without O(N^2) iterations
which the old pass involves.
The pass is currently gated behind a flag, and not enabled by default
because it still needs to grow some important features. Most notably, it
needs to support loop aligning and careful layout of loop structures
much as done by hand currently in CodePlacementOpt. Once it supports
these, and has sufficient testing and quality tuning, it should replace
both of these passes.
Thanks to Nick Lewycky and Richard Smith for help authoring & debugging
this, and to Jakob, Andy, Eric, Jim, and probably a few others I'm
forgetting for reviewing and answering all my questions. Writing
a backend pass is *sooo* much better now than it used to be. =D
llvm-svn: 142641
2011-10-21 14:46:38 +08:00
|
|
|
|
2016-06-14 06:23:44 +08:00
|
|
|
void MachineBlockPlacement::buildCFGChains() {
|
2011-11-13 19:20:44 +08:00
|
|
|
// Ensure that every BB in the function has an associated chain to simplify
|
|
|
|
// the assumptions of the remaining algorithm.
|
2020-01-21 23:47:35 +08:00
|
|
|
SmallVector<MachineOperand, 4> Cond; // For analyzeBranch.
|
2016-06-14 06:23:44 +08:00
|
|
|
for (MachineFunction::iterator FI = F->begin(), FE = F->end(); FI != FE;
|
|
|
|
++FI) {
|
2015-10-10 03:36:12 +08:00
|
|
|
MachineBasicBlock *BB = &*FI;
|
2015-03-05 10:35:31 +08:00
|
|
|
BlockChain *Chain =
|
|
|
|
new (ChainAllocator.Allocate()) BlockChain(BlockToChain, BB);
|
2011-11-19 18:26:02 +08:00
|
|
|
// Also, merge any blocks which we cannot reason about and must preserve
|
|
|
|
// the exact fallthrough behavior for.
|
2017-08-25 05:21:39 +08:00
|
|
|
while (true) {
|
2011-11-19 18:26:02 +08:00
|
|
|
Cond.clear();
|
2020-01-21 23:47:35 +08:00
|
|
|
MachineBasicBlock *TBB = nullptr, *FBB = nullptr; // For analyzeBranch.
|
2016-07-15 22:41:04 +08:00
|
|
|
if (!TII->analyzeBranch(*BB, TBB, FBB, Cond) || !FI->canFallThrough())
|
2011-11-19 18:26:02 +08:00
|
|
|
break;
|
|
|
|
|
2015-10-10 03:36:12 +08:00
|
|
|
MachineFunction::iterator NextFI = std::next(FI);
|
|
|
|
MachineBasicBlock *NextBB = &*NextFI;
|
2011-11-19 18:26:02 +08:00
|
|
|
// Ensure that the layout successor is a viable block, as we know that
|
|
|
|
// fallthrough is a possibility.
|
|
|
|
assert(NextFI != FE && "Can't fallthrough past the last block.");
|
2018-05-14 20:53:11 +08:00
|
|
|
LLVM_DEBUG(dbgs() << "Pre-merging due to unanalyzable fallthrough: "
|
|
|
|
<< getBlockName(BB) << " -> " << getBlockName(NextBB)
|
|
|
|
<< "\n");
|
2014-04-14 08:51:57 +08:00
|
|
|
Chain->merge(NextBB, nullptr);
|
2016-12-15 13:33:19 +08:00
|
|
|
#ifndef NDEBUG
|
[MachineBlockPlacement] Don't make blocks "uneditable"
Summary:
This fixes an issue with MachineBlockPlacement due to a badly timed call
to `analyzeBranch` with `AllowModify` set to true. The timeline is as
follows:
1. `MachineBlockPlacement::maybeTailDuplicateBlock` calls
`TailDup.shouldTailDuplicate` on its argument, which in turn calls
`analyzeBranch` with `AllowModify` set to true.
2. This `analyzeBranch` call edits the terminator sequence of the block
based on the physical layout of the machine function, turning an
unanalyzable non-fallthrough block to a unanalyzable fallthrough
block. Normally MBP bails out of rearranging such blocks, but this
block was unanalyzable non-fallthrough (and thus rearrangeable) the
first time MBP looked at it, and so it goes ahead and decides where
it should be placed in the function.
3. When placing this block MBP fails to analyze and thus update the
block in keeping with the new physical layout.
Concretely, before (1) we have something like:
```
LBL0:
< unknown terminator op that may branch to LBL1 >
jmp LBL1
LBL1:
... A
LBL2:
... B
```
In (2), analyze branch simplifies this to
```
LBL0:
< unknown terminator op that may branch to LBL2 >
;; jmp LBL1 <- redundant jump removed
LBL1:
... A
LBL2:
... B
```
In (3), MachineBlockPlacement goes ahead with its plan of putting LBL2
after the first block since that is profitable.
```
LBL0:
< unknown terminator op that may branch to LBL2 >
;; jmp LBL1 <- redundant jump
LBL2:
... B
LBL1:
... A
```
and the program now has incorrect behavior (we no longer fall-through
from `LBL0` to `LBL1`) because MBP can no longer edit LBL0.
There are several possible solutions, but I went with removing the teeth
off of the `analyzeBranch` calls in TailDuplicator. That makes thinking
about the result of these calls easier, and breaks nothing in the lit
test suite.
I've also added some bookkeeping to the MachineBlockPlacement pass and
used that to write an assert that would have caught this.
Reviewers: chandlerc, gberry, MatzeB, iteratee
Subscribers: mcrosier, llvm-commits
Differential Revision: https://reviews.llvm.org/D27783
llvm-svn: 289764
2016-12-15 13:08:57 +08:00
|
|
|
BlocksWithUnanalyzableExits.insert(&*BB);
|
2016-12-15 13:33:19 +08:00
|
|
|
#endif
|
2011-11-19 18:26:02 +08:00
|
|
|
FI = NextFI;
|
|
|
|
BB = NextBB;
|
|
|
|
}
|
|
|
|
}
|
2011-11-13 19:20:44 +08:00
|
|
|
|
|
|
|
// Build any loop-based chains.
|
2016-11-02 06:02:14 +08:00
|
|
|
PreferredLoopExit = nullptr;
|
2015-03-05 11:19:05 +08:00
|
|
|
for (MachineLoop *L : *MLI)
|
2016-06-14 06:23:44 +08:00
|
|
|
buildLoopChains(*L);
|
Completely re-write the algorithm behind MachineBlockPlacement based on
discussions with Andy. Fundamentally, the previous algorithm is both
counter productive on several fronts and prioritizing things which
aren't necessarily the most important: static branch prediction.
The new algorithm uses the existing loop CFG structure information to
walk through the CFG itself to layout blocks. It coalesces adjacent
blocks within the loop where the CFG allows based on the most likely
path taken. Finally, it topologically orders the block chains that have
been formed. This allows it to choose a (mostly) topologically valid
ordering which still priorizes fallthrough within the structural
constraints.
As a final twist in the algorithm, it does violate the CFG when it
discovers a "hot" edge, that is an edge that is more than 4x hotter than
the competing edges in the CFG. These are forcibly merged into
a fallthrough chain.
Future transformations that need te be added are rotation of loop exit
conditions to be fallthrough, and better isolation of cold block chains.
I'm also planning on adding statistics to model how well the algorithm
does at laying out blocks based on the probabilities it receives.
The old tests mostly still pass, and I have some new tests to add, but
the nested loops are still behaving very strangely. This almost seems
like working-as-intended as it rotated the exit branch to be
fallthrough, but I'm not convinced this is actually the best layout. It
is well supported by the probabilities for loops we currently get, but
those are pretty broken for nested loops, so this may change later.
llvm-svn: 142743
2011-10-23 17:18:45 +08:00
|
|
|
|
2017-05-18 07:44:41 +08:00
|
|
|
assert(BlockWorkList.empty() &&
|
|
|
|
"BlockWorkList should be empty before building final chain.");
|
|
|
|
assert(EHPadWorkList.empty() &&
|
|
|
|
"EHPadWorkList should be empty before building final chain.");
|
Implement a block placement pass based on the branch probability and
block frequency analyses. This differs substantially from the existing
block-placement pass in LLVM:
1) It operates on the Machine-IR in the CodeGen layer. This exposes much
more (and more precise) information and opportunities. Also, the
results are more stable due to fewer transforms ocurring after the
pass runs.
2) It uses the generalized probability and frequency analyses. These can
model static heuristics, code annotation derived heuristics as well
as eventual profile loading. By basing the optimization on the
analysis interface it can work from any (or a combination) of these
inputs.
3) It uses a more aggressive algorithm, both building chains from tho
bottom up to maximize benefit, and using an SCC-based walk to layout
chains of blocks in a profitable ordering without O(N^2) iterations
which the old pass involves.
The pass is currently gated behind a flag, and not enabled by default
because it still needs to grow some important features. Most notably, it
needs to support loop aligning and careful layout of loop structures
much as done by hand currently in CodePlacementOpt. Once it supports
these, and has sufficient testing and quality tuning, it should replace
both of these passes.
Thanks to Nick Lewycky and Richard Smith for help authoring & debugging
this, and to Jakob, Andy, Eric, Jim, and probably a few others I'm
forgetting for reviewing and answering all my questions. Writing
a backend pass is *sooo* much better now than it used to be. =D
llvm-svn: 142641
2011-10-21 14:46:38 +08:00
|
|
|
|
2011-11-13 19:20:44 +08:00
|
|
|
SmallPtrSet<BlockChain *, 4> UpdatedPreds;
|
2016-06-14 06:23:44 +08:00
|
|
|
for (MachineBasicBlock &MBB : *F)
|
2016-07-01 13:46:48 +08:00
|
|
|
fillWorkLists(&MBB, UpdatedPreds);
|
Implement a block placement pass based on the branch probability and
block frequency analyses. This differs substantially from the existing
block-placement pass in LLVM:
1) It operates on the Machine-IR in the CodeGen layer. This exposes much
more (and more precise) information and opportunities. Also, the
results are more stable due to fewer transforms ocurring after the
pass runs.
2) It uses the generalized probability and frequency analyses. These can
model static heuristics, code annotation derived heuristics as well
as eventual profile loading. By basing the optimization on the
analysis interface it can work from any (or a combination) of these
inputs.
3) It uses a more aggressive algorithm, both building chains from tho
bottom up to maximize benefit, and using an SCC-based walk to layout
chains of blocks in a profitable ordering without O(N^2) iterations
which the old pass involves.
The pass is currently gated behind a flag, and not enabled by default
because it still needs to grow some important features. Most notably, it
needs to support loop aligning and careful layout of loop structures
much as done by hand currently in CodePlacementOpt. Once it supports
these, and has sufficient testing and quality tuning, it should replace
both of these passes.
Thanks to Nick Lewycky and Richard Smith for help authoring & debugging
this, and to Jakob, Andy, Eric, Jim, and probably a few others I'm
forgetting for reviewing and answering all my questions. Writing
a backend pass is *sooo* much better now than it used to be. =D
llvm-svn: 142641
2011-10-21 14:46:38 +08:00
|
|
|
|
2016-06-14 06:23:44 +08:00
|
|
|
BlockChain &FunctionChain = *BlockToChain[&F->front()];
|
2016-07-01 13:46:48 +08:00
|
|
|
buildChain(&F->front(), FunctionChain);
|
2011-11-13 19:20:44 +08:00
|
|
|
|
2013-12-11 02:55:37 +08:00
|
|
|
#ifndef NDEBUG
|
2017-08-25 05:21:39 +08:00
|
|
|
using FunctionBlockSetType = SmallPtrSet<MachineBasicBlock *, 16>;
|
2013-12-11 02:55:37 +08:00
|
|
|
#endif
|
2018-05-14 20:53:11 +08:00
|
|
|
LLVM_DEBUG({
|
2011-11-14 05:39:51 +08:00
|
|
|
// Crash at the end so we get all of the debugging output first.
|
|
|
|
bool BadFunc = false;
|
2011-11-13 19:20:44 +08:00
|
|
|
FunctionBlockSetType FunctionBlockSet;
|
2016-06-14 06:23:44 +08:00
|
|
|
for (MachineBasicBlock &MBB : *F)
|
2015-03-05 11:19:05 +08:00
|
|
|
FunctionBlockSet.insert(&MBB);
|
2011-11-13 19:20:44 +08:00
|
|
|
|
2015-03-05 11:19:05 +08:00
|
|
|
for (MachineBasicBlock *ChainBB : FunctionChain)
|
|
|
|
if (!FunctionBlockSet.erase(ChainBB)) {
|
2011-11-14 05:39:51 +08:00
|
|
|
BadFunc = true;
|
2011-11-13 19:20:44 +08:00
|
|
|
dbgs() << "Function chain contains a block not in the function!\n"
|
2015-03-05 11:19:05 +08:00
|
|
|
<< " Bad block: " << getBlockName(ChainBB) << "\n";
|
2011-11-14 05:39:51 +08:00
|
|
|
}
|
2011-11-13 19:20:44 +08:00
|
|
|
|
2011-11-14 05:39:51 +08:00
|
|
|
if (!FunctionBlockSet.empty()) {
|
|
|
|
BadFunc = true;
|
2015-03-05 11:19:05 +08:00
|
|
|
for (MachineBasicBlock *RemainingBB : FunctionBlockSet)
|
2011-11-13 19:20:44 +08:00
|
|
|
dbgs() << "Function contains blocks never placed into a chain!\n"
|
2015-03-05 11:19:05 +08:00
|
|
|
<< " Bad block: " << getBlockName(RemainingBB) << "\n";
|
2011-11-14 05:39:51 +08:00
|
|
|
}
|
|
|
|
assert(!BadFunc && "Detected problems with the block placement.");
|
2011-11-13 19:20:44 +08:00
|
|
|
});
|
|
|
|
|
MachineBasicBlock::updateTerminator now requires an explicit layout successor.
Previously, it tried to infer the correct destination block from the
successor list, but this is a rather tricky propspect, given the
existence of successors that occur mid-block, such as invoke, and
potentially in the future, callbr/INLINEASM_BR. (INLINEASM_BR, in
particular would be problematic, because its successor blocks are not
distinct from "normal" successors, as EHPads are.)
Instead, require the caller to pass in the expected fallthrough
successor explicitly. In most callers, the correct block is
immediately clear. But, in MachineBlockPlacement, we do need to record
the original ordering, before starting to reorder blocks.
Unfortunately, the goal of decoupling the behavior of end-of-block
jumps from the successor list has not been fully accomplished in this
patch, as there is currently no other way to determine whether a block
is intended to fall-through, or end as unreachable. Further work is
needed there.
Differential Revision: https://reviews.llvm.org/D79605
2020-02-19 23:41:28 +08:00
|
|
|
// Remember original layout ordering, so we can update terminators after
|
|
|
|
// reordering to point to the original layout successor.
|
|
|
|
SmallVector<MachineBasicBlock *, 4> OriginalLayoutSuccessors(
|
|
|
|
F->getNumBlockIDs());
|
|
|
|
{
|
|
|
|
MachineBasicBlock *LastMBB = nullptr;
|
|
|
|
for (auto &MBB : *F) {
|
|
|
|
if (LastMBB != nullptr)
|
|
|
|
OriginalLayoutSuccessors[LastMBB->getNumber()] = &MBB;
|
|
|
|
LastMBB = &MBB;
|
|
|
|
}
|
|
|
|
OriginalLayoutSuccessors[F->back().getNumber()] = nullptr;
|
|
|
|
}
|
|
|
|
|
2011-11-13 19:20:44 +08:00
|
|
|
// Splice the blocks into place.
|
2016-06-14 06:23:44 +08:00
|
|
|
MachineFunction::iterator InsertPos = F->begin();
|
2018-05-14 20:53:11 +08:00
|
|
|
LLVM_DEBUG(dbgs() << "[MBP] Function: " << F->getName() << "\n");
|
2015-03-05 11:19:05 +08:00
|
|
|
for (MachineBasicBlock *ChainBB : FunctionChain) {
|
2018-05-14 20:53:11 +08:00
|
|
|
LLVM_DEBUG(dbgs() << (ChainBB == *FunctionChain.begin() ? "Placing chain "
|
|
|
|
: " ... ")
|
|
|
|
<< getBlockName(ChainBB) << "\n");
|
2015-03-05 11:19:05 +08:00
|
|
|
if (InsertPos != MachineFunction::iterator(ChainBB))
|
2016-06-14 06:23:44 +08:00
|
|
|
F->splice(InsertPos, ChainBB);
|
2011-11-13 19:20:44 +08:00
|
|
|
else
|
|
|
|
++InsertPos;
|
|
|
|
|
|
|
|
// Update the terminator of the previous block.
|
2015-03-05 11:19:05 +08:00
|
|
|
if (ChainBB == *FunctionChain.begin())
|
2011-11-13 19:20:44 +08:00
|
|
|
continue;
|
2015-10-10 03:36:12 +08:00
|
|
|
MachineBasicBlock *PrevBB = &*std::prev(MachineFunction::iterator(ChainBB));
|
2011-11-13 19:20:44 +08:00
|
|
|
|
Implement a block placement pass based on the branch probability and
block frequency analyses. This differs substantially from the existing
block-placement pass in LLVM:
1) It operates on the Machine-IR in the CodeGen layer. This exposes much
more (and more precise) information and opportunities. Also, the
results are more stable due to fewer transforms ocurring after the
pass runs.
2) It uses the generalized probability and frequency analyses. These can
model static heuristics, code annotation derived heuristics as well
as eventual profile loading. By basing the optimization on the
analysis interface it can work from any (or a combination) of these
inputs.
3) It uses a more aggressive algorithm, both building chains from tho
bottom up to maximize benefit, and using an SCC-based walk to layout
chains of blocks in a profitable ordering without O(N^2) iterations
which the old pass involves.
The pass is currently gated behind a flag, and not enabled by default
because it still needs to grow some important features. Most notably, it
needs to support loop aligning and careful layout of loop structures
much as done by hand currently in CodePlacementOpt. Once it supports
these, and has sufficient testing and quality tuning, it should replace
both of these passes.
Thanks to Nick Lewycky and Richard Smith for help authoring & debugging
this, and to Jakob, Andy, Eric, Jim, and probably a few others I'm
forgetting for reviewing and answering all my questions. Writing
a backend pass is *sooo* much better now than it used to be. =D
llvm-svn: 142641
2011-10-21 14:46:38 +08:00
|
|
|
// FIXME: It would be awesome of updateTerminator would just return rather
|
|
|
|
// than assert when the branch cannot be analyzed in order to remove this
|
|
|
|
// boiler plate.
|
|
|
|
Cond.clear();
|
2020-01-21 23:47:35 +08:00
|
|
|
MachineBasicBlock *TBB = nullptr, *FBB = nullptr; // For analyzeBranch.
|
2013-06-04 09:00:57 +08:00
|
|
|
|
[MachineBlockPlacement] Don't make blocks "uneditable"
Summary:
This fixes an issue with MachineBlockPlacement due to a badly timed call
to `analyzeBranch` with `AllowModify` set to true. The timeline is as
follows:
1. `MachineBlockPlacement::maybeTailDuplicateBlock` calls
`TailDup.shouldTailDuplicate` on its argument, which in turn calls
`analyzeBranch` with `AllowModify` set to true.
2. This `analyzeBranch` call edits the terminator sequence of the block
based on the physical layout of the machine function, turning an
unanalyzable non-fallthrough block to a unanalyzable fallthrough
block. Normally MBP bails out of rearranging such blocks, but this
block was unanalyzable non-fallthrough (and thus rearrangeable) the
first time MBP looked at it, and so it goes ahead and decides where
it should be placed in the function.
3. When placing this block MBP fails to analyze and thus update the
block in keeping with the new physical layout.
Concretely, before (1) we have something like:
```
LBL0:
< unknown terminator op that may branch to LBL1 >
jmp LBL1
LBL1:
... A
LBL2:
... B
```
In (2), analyze branch simplifies this to
```
LBL0:
< unknown terminator op that may branch to LBL2 >
;; jmp LBL1 <- redundant jump removed
LBL1:
... A
LBL2:
... B
```
In (3), MachineBlockPlacement goes ahead with its plan of putting LBL2
after the first block since that is profitable.
```
LBL0:
< unknown terminator op that may branch to LBL2 >
;; jmp LBL1 <- redundant jump
LBL2:
... B
LBL1:
... A
```
and the program now has incorrect behavior (we no longer fall-through
from `LBL0` to `LBL1`) because MBP can no longer edit LBL0.
There are several possible solutions, but I went with removing the teeth
off of the `analyzeBranch` calls in TailDuplicator. That makes thinking
about the result of these calls easier, and breaks nothing in the lit
test suite.
I've also added some bookkeeping to the MachineBlockPlacement pass and
used that to write an assert that would have caught this.
Reviewers: chandlerc, gberry, MatzeB, iteratee
Subscribers: mcrosier, llvm-commits
Differential Revision: https://reviews.llvm.org/D27783
llvm-svn: 289764
2016-12-15 13:08:57 +08:00
|
|
|
#ifndef NDEBUG
|
|
|
|
if (!BlocksWithUnanalyzableExits.count(PrevBB)) {
|
|
|
|
// Given the exact block placement we chose, we may actually not _need_ to
|
|
|
|
// be able to edit PrevBB's terminator sequence, but not being _able_ to
|
|
|
|
// do that at this point is a bug.
|
|
|
|
assert((!TII->analyzeBranch(*PrevBB, TBB, FBB, Cond) ||
|
|
|
|
!PrevBB->canFallThrough()) &&
|
|
|
|
"Unexpected block with un-analyzable fallthrough!");
|
|
|
|
Cond.clear();
|
|
|
|
TBB = FBB = nullptr;
|
|
|
|
}
|
|
|
|
#endif
|
|
|
|
|
2016-05-25 06:16:14 +08:00
|
|
|
// The "PrevBB" is not yet updated to reflect current code layout, so,
|
2016-07-16 02:41:56 +08:00
|
|
|
// o. it may fall-through to a block without explicit "goto" instruction
|
2016-05-25 06:16:14 +08:00
|
|
|
// before layout, and no longer fall-through it after layout; or
|
|
|
|
// o. just opposite.
|
|
|
|
//
|
2016-07-15 22:41:04 +08:00
|
|
|
// analyzeBranch() may return erroneous value for FBB when these two
|
2016-05-25 06:16:14 +08:00
|
|
|
// situations take place. For the first scenario FBB is mistakenly set NULL;
|
|
|
|
// for the 2nd scenario, the FBB, which is expected to be NULL, is
|
|
|
|
// mistakenly pointing to "*BI".
|
|
|
|
// Thus, if the future change needs to use FBB before the layout is set, it
|
|
|
|
// has to correct FBB first by using the code similar to the following:
|
|
|
|
//
|
|
|
|
// if (!Cond.empty() && (!FBB || FBB == ChainBB)) {
|
|
|
|
// PrevBB->updateTerminator();
|
|
|
|
// Cond.clear();
|
|
|
|
// TBB = FBB = nullptr;
|
2016-07-15 22:41:04 +08:00
|
|
|
// if (TII->analyzeBranch(*PrevBB, TBB, FBB, Cond)) {
|
2016-05-25 06:16:14 +08:00
|
|
|
// // FIXME: This should never take place.
|
|
|
|
// TBB = FBB = nullptr;
|
|
|
|
// }
|
|
|
|
// }
|
MachineBasicBlock::updateTerminator now requires an explicit layout successor.
Previously, it tried to infer the correct destination block from the
successor list, but this is a rather tricky propspect, given the
existence of successors that occur mid-block, such as invoke, and
potentially in the future, callbr/INLINEASM_BR. (INLINEASM_BR, in
particular would be problematic, because its successor blocks are not
distinct from "normal" successors, as EHPads are.)
Instead, require the caller to pass in the expected fallthrough
successor explicitly. In most callers, the correct block is
immediately clear. But, in MachineBlockPlacement, we do need to record
the original ordering, before starting to reorder blocks.
Unfortunately, the goal of decoupling the behavior of end-of-block
jumps from the successor list has not been fully accomplished in this
patch, as there is currently no other way to determine whether a block
is intended to fall-through, or end as unreachable. Further work is
needed there.
Differential Revision: https://reviews.llvm.org/D79605
2020-02-19 23:41:28 +08:00
|
|
|
if (!TII->analyzeBranch(*PrevBB, TBB, FBB, Cond)) {
|
|
|
|
PrevBB->updateTerminator(OriginalLayoutSuccessors[PrevBB->getNumber()]);
|
|
|
|
}
|
Implement a block placement pass based on the branch probability and
block frequency analyses. This differs substantially from the existing
block-placement pass in LLVM:
1) It operates on the Machine-IR in the CodeGen layer. This exposes much
more (and more precise) information and opportunities. Also, the
results are more stable due to fewer transforms ocurring after the
pass runs.
2) It uses the generalized probability and frequency analyses. These can
model static heuristics, code annotation derived heuristics as well
as eventual profile loading. By basing the optimization on the
analysis interface it can work from any (or a combination) of these
inputs.
3) It uses a more aggressive algorithm, both building chains from tho
bottom up to maximize benefit, and using an SCC-based walk to layout
chains of blocks in a profitable ordering without O(N^2) iterations
which the old pass involves.
The pass is currently gated behind a flag, and not enabled by default
because it still needs to grow some important features. Most notably, it
needs to support loop aligning and careful layout of loop structures
much as done by hand currently in CodePlacementOpt. Once it supports
these, and has sufficient testing and quality tuning, it should replace
both of these passes.
Thanks to Nick Lewycky and Richard Smith for help authoring & debugging
this, and to Jakob, Andy, Eric, Jim, and probably a few others I'm
forgetting for reviewing and answering all my questions. Writing
a backend pass is *sooo* much better now than it used to be. =D
llvm-svn: 142641
2011-10-21 14:46:38 +08:00
|
|
|
}
|
2011-11-13 19:20:44 +08:00
|
|
|
|
|
|
|
// Fixup the last block.
|
|
|
|
Cond.clear();
|
2020-01-21 23:47:35 +08:00
|
|
|
MachineBasicBlock *TBB = nullptr, *FBB = nullptr; // For analyzeBranch.
|
MachineBasicBlock::updateTerminator now requires an explicit layout successor.
Previously, it tried to infer the correct destination block from the
successor list, but this is a rather tricky propspect, given the
existence of successors that occur mid-block, such as invoke, and
potentially in the future, callbr/INLINEASM_BR. (INLINEASM_BR, in
particular would be problematic, because its successor blocks are not
distinct from "normal" successors, as EHPads are.)
Instead, require the caller to pass in the expected fallthrough
successor explicitly. In most callers, the correct block is
immediately clear. But, in MachineBlockPlacement, we do need to record
the original ordering, before starting to reorder blocks.
Unfortunately, the goal of decoupling the behavior of end-of-block
jumps from the successor list has not been fully accomplished in this
patch, as there is currently no other way to determine whether a block
is intended to fall-through, or end as unreachable. Further work is
needed there.
Differential Revision: https://reviews.llvm.org/D79605
2020-02-19 23:41:28 +08:00
|
|
|
if (!TII->analyzeBranch(F->back(), TBB, FBB, Cond)) {
|
|
|
|
MachineBasicBlock *PrevBB = &F->back();
|
|
|
|
PrevBB->updateTerminator(OriginalLayoutSuccessors[PrevBB->getNumber()]);
|
|
|
|
}
|
2016-07-01 13:46:48 +08:00
|
|
|
|
|
|
|
BlockWorkList.clear();
|
|
|
|
EHPadWorkList.clear();
|
2016-05-25 06:16:14 +08:00
|
|
|
}
|
|
|
|
|
2016-06-14 06:23:44 +08:00
|
|
|
void MachineBlockPlacement::optimizeBranches() {
|
|
|
|
BlockChain &FunctionChain = *BlockToChain[&F->front()];
|
2020-01-21 23:47:35 +08:00
|
|
|
SmallVector<MachineOperand, 4> Cond; // For analyzeBranch.
|
2016-05-03 06:58:59 +08:00
|
|
|
|
|
|
|
// Now that all the basic blocks in the chain have the proper layout,
|
2020-01-21 23:47:35 +08:00
|
|
|
// make a final call to analyzeBranch with AllowModify set.
|
2016-05-03 06:58:59 +08:00
|
|
|
// Indeed, the target may be able to optimize the branches in a way we
|
|
|
|
// cannot because all branches may not be analyzable.
|
|
|
|
// E.g., the target may be able to remove an unconditional branch to
|
|
|
|
// a fallthrough when it occurs after predicated terminators.
|
|
|
|
for (MachineBasicBlock *ChainBB : FunctionChain) {
|
|
|
|
Cond.clear();
|
2020-01-21 23:47:35 +08:00
|
|
|
MachineBasicBlock *TBB = nullptr, *FBB = nullptr; // For analyzeBranch.
|
2016-07-15 22:41:04 +08:00
|
|
|
if (!TII->analyzeBranch(*ChainBB, TBB, FBB, Cond, /*AllowModify*/ true)) {
|
2016-05-25 06:16:14 +08:00
|
|
|
// If PrevBB has a two-way branch, try to re-order the branches
|
|
|
|
// such that we branch to the successor with higher probability first.
|
|
|
|
if (TBB && !Cond.empty() && FBB &&
|
|
|
|
MBPI->getEdgeProbability(ChainBB, FBB) >
|
|
|
|
MBPI->getEdgeProbability(ChainBB, TBB) &&
|
2016-09-15 04:43:16 +08:00
|
|
|
!TII->reverseBranchCondition(Cond)) {
|
2018-05-14 20:53:11 +08:00
|
|
|
LLVM_DEBUG(dbgs() << "Reverse order of the two branches: "
|
|
|
|
<< getBlockName(ChainBB) << "\n");
|
|
|
|
LLVM_DEBUG(dbgs() << " Edge probability: "
|
|
|
|
<< MBPI->getEdgeProbability(ChainBB, FBB) << " vs "
|
|
|
|
<< MBPI->getEdgeProbability(ChainBB, TBB) << "\n");
|
2016-05-25 06:16:14 +08:00
|
|
|
DebugLoc dl; // FIXME: this is nowhere
|
2016-09-15 04:43:16 +08:00
|
|
|
TII->removeBranch(*ChainBB);
|
2016-09-15 01:24:15 +08:00
|
|
|
TII->insertBranch(*ChainBB, FBB, TBB, Cond, dl);
|
2016-05-25 06:16:14 +08:00
|
|
|
}
|
|
|
|
}
|
2016-05-03 06:58:59 +08:00
|
|
|
}
|
2016-04-30 01:06:44 +08:00
|
|
|
}
|
2011-10-21 16:57:37 +08:00
|
|
|
|
2016-06-14 06:23:44 +08:00
|
|
|
void MachineBlockPlacement::alignBlocks() {
|
Rewrite how machine block placement handles loop rotation.
This is a complex change that resulted from a great deal of
experimentation with several different benchmarks. The one which proved
the most useful is included as a test case, but I don't know that it
captures all of the relevant changes, as I didn't have specific
regression tests for each, they were more the result of reasoning about
what the old algorithm would possibly do wrong. I'm also failing at the
moment to craft more targeted regression tests for these changes, if
anyone has ideas, it would be welcome.
The first big thing broken with the old algorithm is the idea that we
can take a basic block which has a loop-exiting successor and a looping
successor and use the looping successor as the layout top in order to
get that particular block to be the bottom of the loop after layout.
This happens to work in many cases, but not in all.
The second big thing broken was that we didn't try to select the exit
which fell into the nearest enclosing loop (to which we exit at all). As
a consequence, even if the rotation worked perfectly, it would result in
one of two bad layouts. Either the bottom of the loop would get
fallthrough, skipping across a nearer enclosing loop and thereby making
it discontiguous, or it would be forced to take an explicit jump over
the nearest enclosing loop to earch its successor. The point of the
rotation is to get fallthrough, so we need it to fallthrough to the
nearest loop it can.
The fix to the first issue is to actually layout the loop from the loop
header, and then rotate the loop such that the correct exiting edge can
be a fallthrough edge. This is actually much easier than I anticipated
because we can handle all the hard parts of finding a viable rotation
before we do the layout. We just store that, and then rotate after
layout is finished. No inner loops get split across the post-rotation
backedge because we check for them when selecting the rotation.
That fix exposed a latent problem with our exitting block selection --
we should allow the backedge to point into the middle of some inner-loop
chain as there is no real penalty to it, the whole point is that it
*won't* be a fallthrough edge. This may have blocked the rotation at all
in some cases, I have no idea and no test case as I've never seen it in
practice, it was just noticed by inspection.
Finally, all of these fixes, and studying the loops they produce,
highlighted another problem: in rotating loops like this, we sometimes
fail to align the destination of these backwards jumping edges. Fix this
by actually walking the backwards edges rather than relying on loopinfo.
This fixes regressions on heapsort if block placement is enabled as well
as lots of other cases where the previous logic would introduce an
abundance of unnecessary branches into the execution.
llvm-svn: 154783
2012-04-16 09:12:56 +08:00
|
|
|
// Walk through the backedges of the function now that we have fully laid out
|
|
|
|
// the basic blocks and align the destination of each backedge. We don't rely
|
2012-08-07 17:45:24 +08:00
|
|
|
// exclusively on the loop info here so that we can align backedges in
|
|
|
|
// unnatural CFGs and backedges that were introduced purely because of the
|
|
|
|
// loop rotations done during this layout pass.
|
2019-04-05 06:40:06 +08:00
|
|
|
if (F->getFunction().hasMinSize() ||
|
|
|
|
(F->getFunction().hasOptSize() && !TLI->alignLoopsWithOptSize()))
|
2011-10-21 16:57:37 +08:00
|
|
|
return;
|
2016-06-14 06:23:44 +08:00
|
|
|
BlockChain &FunctionChain = *BlockToChain[&F->front()];
|
2012-08-07 17:45:24 +08:00
|
|
|
if (FunctionChain.begin() == FunctionChain.end())
|
2015-03-05 10:35:31 +08:00
|
|
|
return; // Empty chain.
|
2011-10-21 16:57:37 +08:00
|
|
|
|
2012-08-07 17:45:24 +08:00
|
|
|
const BranchProbability ColdProb(1, 5); // 20%
|
2016-06-14 06:23:44 +08:00
|
|
|
BlockFrequency EntryFreq = MBFI->getBlockFreq(&F->front());
|
2012-08-07 17:45:24 +08:00
|
|
|
BlockFrequency WeightedEntryFreq = EntryFreq * ColdProb;
|
2015-03-05 11:19:05 +08:00
|
|
|
for (MachineBasicBlock *ChainBB : FunctionChain) {
|
|
|
|
if (ChainBB == *FunctionChain.begin())
|
|
|
|
continue;
|
|
|
|
|
2012-08-07 17:45:24 +08:00
|
|
|
// Don't align non-looping basic blocks. These are unlikely to execute
|
|
|
|
// enough times to matter in practice. Note that we'll still handle
|
|
|
|
// unnatural CFGs inside of a natural outer loop (the common case) and
|
|
|
|
// rotated loops.
|
2015-03-05 11:19:05 +08:00
|
|
|
MachineLoop *L = MLI->getLoopFor(ChainBB);
|
2012-08-07 17:45:24 +08:00
|
|
|
if (!L)
|
|
|
|
continue;
|
|
|
|
|
2019-09-27 20:54:21 +08:00
|
|
|
const Align Align = TLI->getPrefLoopAlignment(L);
|
2019-09-10 20:00:43 +08:00
|
|
|
if (Align == 1)
|
2015-03-05 10:35:31 +08:00
|
|
|
continue; // Don't care about loop alignment.
|
2015-01-04 01:58:24 +08:00
|
|
|
|
2012-08-07 17:45:24 +08:00
|
|
|
// If the block is cold relative to the function entry don't waste space
|
|
|
|
// aligning it.
|
2015-03-05 11:19:05 +08:00
|
|
|
BlockFrequency Freq = MBFI->getBlockFreq(ChainBB);
|
2012-08-07 17:45:24 +08:00
|
|
|
if (Freq < WeightedEntryFreq)
|
|
|
|
continue;
|
|
|
|
|
|
|
|
// If the block is cold relative to its loop header, don't align it
|
|
|
|
// regardless of what edges into the block exist.
|
|
|
|
MachineBasicBlock *LoopHeader = L->getHeader();
|
|
|
|
BlockFrequency LoopHeaderFreq = MBFI->getBlockFreq(LoopHeader);
|
|
|
|
if (Freq < (LoopHeaderFreq * ColdProb))
|
|
|
|
continue;
|
|
|
|
|
2019-12-06 01:39:37 +08:00
|
|
|
// If the global profiles indicates so, don't align it.
|
2020-01-30 01:36:31 +08:00
|
|
|
if (llvm::shouldOptimizeForSize(ChainBB, PSI, MBFI.get()) &&
|
2019-12-06 01:39:37 +08:00
|
|
|
!TLI->alignLoopsWithOptSize())
|
|
|
|
continue;
|
|
|
|
|
2012-08-07 17:45:24 +08:00
|
|
|
// Check for the existence of a non-layout predecessor which would benefit
|
|
|
|
// from aligning this block.
|
2015-03-05 11:19:05 +08:00
|
|
|
MachineBasicBlock *LayoutPred =
|
|
|
|
&*std::prev(MachineFunction::iterator(ChainBB));
|
2012-08-07 17:45:24 +08:00
|
|
|
|
|
|
|
// Force alignment if all the predecessors are jumps. We already checked
|
|
|
|
// that the block isn't cold above.
|
2015-03-05 11:19:05 +08:00
|
|
|
if (!LayoutPred->isSuccessor(ChainBB)) {
|
[Alignment][NFC] Remove LogAlignment functions
Summary:
This is patch is part of a series to introduce an Alignment type.
See this thread for context: http://lists.llvm.org/pipermail/llvm-dev/2019-July/133851.html
See this patch for the introduction of the type: https://reviews.llvm.org/D64790
Reviewers: courbet
Subscribers: arsenm, sdardis, nemanjai, jvesely, nhaehnle, hiraditya, kbarton, jrtc27, MaskRay, atanasyan, jsji, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D67620
llvm-svn: 372231
2019-09-18 23:49:49 +08:00
|
|
|
ChainBB->setAlignment(Align);
|
2012-08-07 17:45:24 +08:00
|
|
|
continue;
|
|
|
|
}
|
|
|
|
|
|
|
|
// Align this block if the layout predecessor's edge into this block is
|
2013-03-30 00:34:23 +08:00
|
|
|
// cold relative to the block. When this is true, other predecessors make up
|
2012-08-07 17:45:24 +08:00
|
|
|
// all of the hot entries into the block and thus alignment is likely to be
|
|
|
|
// important.
|
2015-03-05 11:19:05 +08:00
|
|
|
BranchProbability LayoutProb =
|
|
|
|
MBPI->getEdgeProbability(LayoutPred, ChainBB);
|
2012-08-07 17:45:24 +08:00
|
|
|
BlockFrequency LayoutEdgeFreq = MBFI->getBlockFreq(LayoutPred) * LayoutProb;
|
|
|
|
if (LayoutEdgeFreq <= (Freq * ColdProb))
|
[Alignment][NFC] Remove LogAlignment functions
Summary:
This is patch is part of a series to introduce an Alignment type.
See this thread for context: http://lists.llvm.org/pipermail/llvm-dev/2019-July/133851.html
See this patch for the introduction of the type: https://reviews.llvm.org/D64790
Reviewers: courbet
Subscribers: arsenm, sdardis, nemanjai, jvesely, nhaehnle, hiraditya, kbarton, jrtc27, MaskRay, atanasyan, jsji, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D67620
llvm-svn: 372231
2019-09-18 23:49:49 +08:00
|
|
|
ChainBB->setAlignment(Align);
|
Rewrite how machine block placement handles loop rotation.
This is a complex change that resulted from a great deal of
experimentation with several different benchmarks. The one which proved
the most useful is included as a test case, but I don't know that it
captures all of the relevant changes, as I didn't have specific
regression tests for each, they were more the result of reasoning about
what the old algorithm would possibly do wrong. I'm also failing at the
moment to craft more targeted regression tests for these changes, if
anyone has ideas, it would be welcome.
The first big thing broken with the old algorithm is the idea that we
can take a basic block which has a loop-exiting successor and a looping
successor and use the looping successor as the layout top in order to
get that particular block to be the bottom of the loop after layout.
This happens to work in many cases, but not in all.
The second big thing broken was that we didn't try to select the exit
which fell into the nearest enclosing loop (to which we exit at all). As
a consequence, even if the rotation worked perfectly, it would result in
one of two bad layouts. Either the bottom of the loop would get
fallthrough, skipping across a nearer enclosing loop and thereby making
it discontiguous, or it would be forced to take an explicit jump over
the nearest enclosing loop to earch its successor. The point of the
rotation is to get fallthrough, so we need it to fallthrough to the
nearest loop it can.
The fix to the first issue is to actually layout the loop from the loop
header, and then rotate the loop such that the correct exiting edge can
be a fallthrough edge. This is actually much easier than I anticipated
because we can handle all the hard parts of finding a viable rotation
before we do the layout. We just store that, and then rotate after
layout is finished. No inner loops get split across the post-rotation
backedge because we check for them when selecting the rotation.
That fix exposed a latent problem with our exitting block selection --
we should allow the backedge to point into the middle of some inner-loop
chain as there is no real penalty to it, the whole point is that it
*won't* be a fallthrough edge. This may have blocked the rotation at all
in some cases, I have no idea and no test case as I've never seen it in
practice, it was just noticed by inspection.
Finally, all of these fixes, and studying the loops they produce,
highlighted another problem: in rotating loops like this, we sometimes
fail to align the destination of these backwards jumping edges. Fix this
by actually walking the backwards edges rather than relying on loopinfo.
This fixes regressions on heapsort if block placement is enabled as well
as lots of other cases where the previous logic would introduce an
abundance of unnecessary branches into the execution.
llvm-svn: 154783
2012-04-16 09:12:56 +08:00
|
|
|
}
|
2011-10-21 16:57:37 +08:00
|
|
|
}
|
|
|
|
|
Codegen: Tail-duplicate during placement.
The tail duplication pass uses an assumed layout when making duplication
decisions. This is fine, but passes up duplication opportunities that
may arise when blocks are outlined. Because we want the updated CFG to
affect subsequent placement decisions, this change must occur during
placement.
In order to achieve this goal, TailDuplicationPass is split into a
utility class, TailDuplicator, and the pass itself. The pass delegates
nearly everything to the TailDuplicator object, except for looping over
the blocks in a function. This allows the same code to be used for tail
duplication in both places.
This change, in concert with outlining optional branches, allows
triangle shaped code to perform much better, esepecially when the
taken/untaken branches are correlated, as it creates a second spine when
the tests are small enough.
Issue from previous rollback fixed, and a new test was added for that
case as well. Issue was worklist/scheduling/taildup issue in layout.
Issue from 2nd rollback fixed, with 2 additional tests. Issue was
tail merging/loop info/tail-duplication causing issue with loops that share
a header block.
Issue with early tail-duplication of blocks that branch to a fallthrough
predecessor fixed with test case: tail-dup-branch-to-fallthrough.ll
Differential revision: https://reviews.llvm.org/D18226
llvm-svn: 283934
2016-10-12 04:36:43 +08:00
|
|
|
/// Tail duplicate \p BB into (some) predecessors if profitable, repeating if
|
|
|
|
/// it was duplicated into its chain predecessor and removed.
|
|
|
|
/// \p BB - Basic block that may be duplicated.
|
|
|
|
///
|
|
|
|
/// \p LPred - Chosen layout predecessor of \p BB.
|
|
|
|
/// Updated to be the chain end if LPred is removed.
|
|
|
|
/// \p Chain - Chain to which \p LPred belongs, and \p BB will belong.
|
|
|
|
/// \p BlockFilter - Set of blocks that belong to the loop being laid out.
|
|
|
|
/// Used to identify which blocks to update predecessor
|
|
|
|
/// counts.
|
|
|
|
/// \p PrevUnplacedBlockIt - Iterator pointing to the last block that was
|
|
|
|
/// chosen in the given order due to unnatural CFG
|
|
|
|
/// only needed if \p BB is removed and
|
|
|
|
/// \p PrevUnplacedBlockIt pointed to \p BB.
|
|
|
|
/// @return true if \p BB was removed.
|
|
|
|
bool MachineBlockPlacement::repeatedlyTailDuplicateBlock(
|
|
|
|
MachineBasicBlock *BB, MachineBasicBlock *&LPred,
|
2017-02-04 10:26:32 +08:00
|
|
|
const MachineBasicBlock *LoopHeaderBB,
|
Codegen: Tail-duplicate during placement.
The tail duplication pass uses an assumed layout when making duplication
decisions. This is fine, but passes up duplication opportunities that
may arise when blocks are outlined. Because we want the updated CFG to
affect subsequent placement decisions, this change must occur during
placement.
In order to achieve this goal, TailDuplicationPass is split into a
utility class, TailDuplicator, and the pass itself. The pass delegates
nearly everything to the TailDuplicator object, except for looping over
the blocks in a function. This allows the same code to be used for tail
duplication in both places.
This change, in concert with outlining optional branches, allows
triangle shaped code to perform much better, esepecially when the
taken/untaken branches are correlated, as it creates a second spine when
the tests are small enough.
Issue from previous rollback fixed, and a new test was added for that
case as well. Issue was worklist/scheduling/taildup issue in layout.
Issue from 2nd rollback fixed, with 2 additional tests. Issue was
tail merging/loop info/tail-duplication causing issue with loops that share
a header block.
Issue with early tail-duplication of blocks that branch to a fallthrough
predecessor fixed with test case: tail-dup-branch-to-fallthrough.ll
Differential revision: https://reviews.llvm.org/D18226
llvm-svn: 283934
2016-10-12 04:36:43 +08:00
|
|
|
BlockChain &Chain, BlockFilterSet *BlockFilter,
|
|
|
|
MachineFunction::iterator &PrevUnplacedBlockIt) {
|
|
|
|
bool Removed, DuplicatedToLPred;
|
|
|
|
bool DuplicatedToOriginalLPred;
|
|
|
|
Removed = maybeTailDuplicateBlock(BB, LPred, Chain, BlockFilter,
|
|
|
|
PrevUnplacedBlockIt,
|
|
|
|
DuplicatedToLPred);
|
|
|
|
if (!Removed)
|
|
|
|
return false;
|
|
|
|
DuplicatedToOriginalLPred = DuplicatedToLPred;
|
|
|
|
// Iteratively try to duplicate again. It can happen that a block that is
|
|
|
|
// duplicated into is still small enough to be duplicated again.
|
|
|
|
// No need to call markBlockSuccessors in this case, as the blocks being
|
|
|
|
// duplicated from here on are already scheduled.
|
2020-04-12 03:20:12 +08:00
|
|
|
while (DuplicatedToLPred && Removed) {
|
Codegen: Tail-duplicate during placement.
The tail duplication pass uses an assumed layout when making duplication
decisions. This is fine, but passes up duplication opportunities that
may arise when blocks are outlined. Because we want the updated CFG to
affect subsequent placement decisions, this change must occur during
placement.
In order to achieve this goal, TailDuplicationPass is split into a
utility class, TailDuplicator, and the pass itself. The pass delegates
nearly everything to the TailDuplicator object, except for looping over
the blocks in a function. This allows the same code to be used for tail
duplication in both places.
This change, in concert with outlining optional branches, allows
triangle shaped code to perform much better, esepecially when the
taken/untaken branches are correlated, as it creates a second spine when
the tests are small enough.
Issue from previous rollback fixed, and a new test was added for that
case as well. Issue was worklist/scheduling/taildup issue in layout.
Issue from 2nd rollback fixed, with 2 additional tests. Issue was
tail merging/loop info/tail-duplication causing issue with loops that share
a header block.
Issue with early tail-duplication of blocks that branch to a fallthrough
predecessor fixed with test case: tail-dup-branch-to-fallthrough.ll
Differential revision: https://reviews.llvm.org/D18226
llvm-svn: 283934
2016-10-12 04:36:43 +08:00
|
|
|
MachineBasicBlock *DupBB, *DupPred;
|
|
|
|
// The removal callback causes Chain.end() to be updated when a block is
|
|
|
|
// removed. On the first pass through the loop, the chain end should be the
|
|
|
|
// same as it was on function entry. On subsequent passes, because we are
|
|
|
|
// duplicating the block at the end of the chain, if it is removed the
|
|
|
|
// chain will have shrunk by one block.
|
|
|
|
BlockChain::iterator ChainEnd = Chain.end();
|
|
|
|
DupBB = *(--ChainEnd);
|
|
|
|
// Now try to duplicate again.
|
|
|
|
if (ChainEnd == Chain.begin())
|
|
|
|
break;
|
|
|
|
DupPred = *std::prev(ChainEnd);
|
|
|
|
Removed = maybeTailDuplicateBlock(DupBB, DupPred, Chain, BlockFilter,
|
|
|
|
PrevUnplacedBlockIt,
|
|
|
|
DuplicatedToLPred);
|
|
|
|
}
|
|
|
|
// If BB was duplicated into LPred, it is now scheduled. But because it was
|
|
|
|
// removed, markChainSuccessors won't be called for its chain. Instead we
|
|
|
|
// call markBlockSuccessors for LPred to achieve the same effect. This must go
|
|
|
|
// at the end because repeating the tail duplication can increase the number
|
|
|
|
// of unscheduled predecessors.
|
|
|
|
LPred = *std::prev(Chain.end());
|
|
|
|
if (DuplicatedToOriginalLPred)
|
|
|
|
markBlockSuccessors(Chain, LPred, LoopHeaderBB, BlockFilter);
|
|
|
|
return true;
|
|
|
|
}
|
|
|
|
|
|
|
|
/// Tail duplicate \p BB into (some) predecessors if profitable.
|
|
|
|
/// \p BB - Basic block that may be duplicated
|
|
|
|
/// \p LPred - Chosen layout predecessor of \p BB
|
|
|
|
/// \p Chain - Chain to which \p LPred belongs, and \p BB will belong.
|
|
|
|
/// \p BlockFilter - Set of blocks that belong to the loop being laid out.
|
|
|
|
/// Used to identify which blocks to update predecessor
|
|
|
|
/// counts.
|
|
|
|
/// \p PrevUnplacedBlockIt - Iterator pointing to the last block that was
|
|
|
|
/// chosen in the given order due to unnatural CFG
|
|
|
|
/// only needed if \p BB is removed and
|
|
|
|
/// \p PrevUnplacedBlockIt pointed to \p BB.
|
2020-04-12 03:20:12 +08:00
|
|
|
/// \p DuplicatedToLPred - True if the block was duplicated into LPred.
|
Codegen: Tail-duplicate during placement.
The tail duplication pass uses an assumed layout when making duplication
decisions. This is fine, but passes up duplication opportunities that
may arise when blocks are outlined. Because we want the updated CFG to
affect subsequent placement decisions, this change must occur during
placement.
In order to achieve this goal, TailDuplicationPass is split into a
utility class, TailDuplicator, and the pass itself. The pass delegates
nearly everything to the TailDuplicator object, except for looping over
the blocks in a function. This allows the same code to be used for tail
duplication in both places.
This change, in concert with outlining optional branches, allows
triangle shaped code to perform much better, esepecially when the
taken/untaken branches are correlated, as it creates a second spine when
the tests are small enough.
Issue from previous rollback fixed, and a new test was added for that
case as well. Issue was worklist/scheduling/taildup issue in layout.
Issue from 2nd rollback fixed, with 2 additional tests. Issue was
tail merging/loop info/tail-duplication causing issue with loops that share
a header block.
Issue with early tail-duplication of blocks that branch to a fallthrough
predecessor fixed with test case: tail-dup-branch-to-fallthrough.ll
Differential revision: https://reviews.llvm.org/D18226
llvm-svn: 283934
2016-10-12 04:36:43 +08:00
|
|
|
/// \return - True if the block was duplicated into all preds and removed.
|
|
|
|
bool MachineBlockPlacement::maybeTailDuplicateBlock(
|
|
|
|
MachineBasicBlock *BB, MachineBasicBlock *LPred,
|
2017-02-04 10:26:32 +08:00
|
|
|
BlockChain &Chain, BlockFilterSet *BlockFilter,
|
Codegen: Tail-duplicate during placement.
The tail duplication pass uses an assumed layout when making duplication
decisions. This is fine, but passes up duplication opportunities that
may arise when blocks are outlined. Because we want the updated CFG to
affect subsequent placement decisions, this change must occur during
placement.
In order to achieve this goal, TailDuplicationPass is split into a
utility class, TailDuplicator, and the pass itself. The pass delegates
nearly everything to the TailDuplicator object, except for looping over
the blocks in a function. This allows the same code to be used for tail
duplication in both places.
This change, in concert with outlining optional branches, allows
triangle shaped code to perform much better, esepecially when the
taken/untaken branches are correlated, as it creates a second spine when
the tests are small enough.
Issue from previous rollback fixed, and a new test was added for that
case as well. Issue was worklist/scheduling/taildup issue in layout.
Issue from 2nd rollback fixed, with 2 additional tests. Issue was
tail merging/loop info/tail-duplication causing issue with loops that share
a header block.
Issue with early tail-duplication of blocks that branch to a fallthrough
predecessor fixed with test case: tail-dup-branch-to-fallthrough.ll
Differential revision: https://reviews.llvm.org/D18226
llvm-svn: 283934
2016-10-12 04:36:43 +08:00
|
|
|
MachineFunction::iterator &PrevUnplacedBlockIt,
|
|
|
|
bool &DuplicatedToLPred) {
|
|
|
|
DuplicatedToLPred = false;
|
2017-02-04 10:26:34 +08:00
|
|
|
if (!shouldTailDuplicate(BB))
|
|
|
|
return false;
|
|
|
|
|
2018-05-14 20:53:11 +08:00
|
|
|
LLVM_DEBUG(dbgs() << "Redoing tail duplication for Succ#" << BB->getNumber()
|
|
|
|
<< "\n");
|
2017-02-01 07:48:32 +08:00
|
|
|
|
Codegen: Tail-duplicate during placement.
The tail duplication pass uses an assumed layout when making duplication
decisions. This is fine, but passes up duplication opportunities that
may arise when blocks are outlined. Because we want the updated CFG to
affect subsequent placement decisions, this change must occur during
placement.
In order to achieve this goal, TailDuplicationPass is split into a
utility class, TailDuplicator, and the pass itself. The pass delegates
nearly everything to the TailDuplicator object, except for looping over
the blocks in a function. This allows the same code to be used for tail
duplication in both places.
This change, in concert with outlining optional branches, allows
triangle shaped code to perform much better, esepecially when the
taken/untaken branches are correlated, as it creates a second spine when
the tests are small enough.
Issue from previous rollback fixed, and a new test was added for that
case as well. Issue was worklist/scheduling/taildup issue in layout.
Issue from 2nd rollback fixed, with 2 additional tests. Issue was
tail merging/loop info/tail-duplication causing issue with loops that share
a header block.
Issue with early tail-duplication of blocks that branch to a fallthrough
predecessor fixed with test case: tail-dup-branch-to-fallthrough.ll
Differential revision: https://reviews.llvm.org/D18226
llvm-svn: 283934
2016-10-12 04:36:43 +08:00
|
|
|
// This has to be a callback because none of it can be done after
|
|
|
|
// BB is deleted.
|
|
|
|
bool Removed = false;
|
|
|
|
auto RemovalCallback =
|
|
|
|
[&](MachineBasicBlock *RemBB) {
|
|
|
|
// Signal to outer function
|
|
|
|
Removed = true;
|
|
|
|
|
|
|
|
// Conservative default.
|
|
|
|
bool InWorkList = true;
|
|
|
|
// Remove from the Chain and Chain Map
|
|
|
|
if (BlockToChain.count(RemBB)) {
|
|
|
|
BlockChain *Chain = BlockToChain[RemBB];
|
|
|
|
InWorkList = Chain->UnscheduledPredecessors == 0;
|
|
|
|
Chain->remove(RemBB);
|
|
|
|
BlockToChain.erase(RemBB);
|
|
|
|
}
|
|
|
|
|
|
|
|
// Handle the unplaced block iterator
|
|
|
|
if (&(*PrevUnplacedBlockIt) == RemBB) {
|
|
|
|
PrevUnplacedBlockIt++;
|
|
|
|
}
|
|
|
|
|
|
|
|
// Handle the Work Lists
|
|
|
|
if (InWorkList) {
|
|
|
|
SmallVectorImpl<MachineBasicBlock *> &RemoveList = BlockWorkList;
|
|
|
|
if (RemBB->isEHPad())
|
|
|
|
RemoveList = EHPadWorkList;
|
|
|
|
RemoveList.erase(
|
2017-08-25 05:21:39 +08:00
|
|
|
llvm::remove_if(RemoveList,
|
|
|
|
[RemBB](MachineBasicBlock *BB) {
|
|
|
|
return BB == RemBB;
|
|
|
|
}),
|
Codegen: Tail-duplicate during placement.
The tail duplication pass uses an assumed layout when making duplication
decisions. This is fine, but passes up duplication opportunities that
may arise when blocks are outlined. Because we want the updated CFG to
affect subsequent placement decisions, this change must occur during
placement.
In order to achieve this goal, TailDuplicationPass is split into a
utility class, TailDuplicator, and the pass itself. The pass delegates
nearly everything to the TailDuplicator object, except for looping over
the blocks in a function. This allows the same code to be used for tail
duplication in both places.
This change, in concert with outlining optional branches, allows
triangle shaped code to perform much better, esepecially when the
taken/untaken branches are correlated, as it creates a second spine when
the tests are small enough.
Issue from previous rollback fixed, and a new test was added for that
case as well. Issue was worklist/scheduling/taildup issue in layout.
Issue from 2nd rollback fixed, with 2 additional tests. Issue was
tail merging/loop info/tail-duplication causing issue with loops that share
a header block.
Issue with early tail-duplication of blocks that branch to a fallthrough
predecessor fixed with test case: tail-dup-branch-to-fallthrough.ll
Differential revision: https://reviews.llvm.org/D18226
llvm-svn: 283934
2016-10-12 04:36:43 +08:00
|
|
|
RemoveList.end());
|
|
|
|
}
|
|
|
|
|
|
|
|
// Handle the filter set
|
|
|
|
if (BlockFilter) {
|
2016-11-17 04:50:06 +08:00
|
|
|
BlockFilter->remove(RemBB);
|
Codegen: Tail-duplicate during placement.
The tail duplication pass uses an assumed layout when making duplication
decisions. This is fine, but passes up duplication opportunities that
may arise when blocks are outlined. Because we want the updated CFG to
affect subsequent placement decisions, this change must occur during
placement.
In order to achieve this goal, TailDuplicationPass is split into a
utility class, TailDuplicator, and the pass itself. The pass delegates
nearly everything to the TailDuplicator object, except for looping over
the blocks in a function. This allows the same code to be used for tail
duplication in both places.
This change, in concert with outlining optional branches, allows
triangle shaped code to perform much better, esepecially when the
taken/untaken branches are correlated, as it creates a second spine when
the tests are small enough.
Issue from previous rollback fixed, and a new test was added for that
case as well. Issue was worklist/scheduling/taildup issue in layout.
Issue from 2nd rollback fixed, with 2 additional tests. Issue was
tail merging/loop info/tail-duplication causing issue with loops that share
a header block.
Issue with early tail-duplication of blocks that branch to a fallthrough
predecessor fixed with test case: tail-dup-branch-to-fallthrough.ll
Differential revision: https://reviews.llvm.org/D18226
llvm-svn: 283934
2016-10-12 04:36:43 +08:00
|
|
|
}
|
|
|
|
|
|
|
|
// Remove the block from loop info.
|
|
|
|
MLI->removeBlock(RemBB);
|
2016-10-28 05:37:20 +08:00
|
|
|
if (RemBB == PreferredLoopExit)
|
|
|
|
PreferredLoopExit = nullptr;
|
Codegen: Tail-duplicate during placement.
The tail duplication pass uses an assumed layout when making duplication
decisions. This is fine, but passes up duplication opportunities that
may arise when blocks are outlined. Because we want the updated CFG to
affect subsequent placement decisions, this change must occur during
placement.
In order to achieve this goal, TailDuplicationPass is split into a
utility class, TailDuplicator, and the pass itself. The pass delegates
nearly everything to the TailDuplicator object, except for looping over
the blocks in a function. This allows the same code to be used for tail
duplication in both places.
This change, in concert with outlining optional branches, allows
triangle shaped code to perform much better, esepecially when the
taken/untaken branches are correlated, as it creates a second spine when
the tests are small enough.
Issue from previous rollback fixed, and a new test was added for that
case as well. Issue was worklist/scheduling/taildup issue in layout.
Issue from 2nd rollback fixed, with 2 additional tests. Issue was
tail merging/loop info/tail-duplication causing issue with loops that share
a header block.
Issue with early tail-duplication of blocks that branch to a fallthrough
predecessor fixed with test case: tail-dup-branch-to-fallthrough.ll
Differential revision: https://reviews.llvm.org/D18226
llvm-svn: 283934
2016-10-12 04:36:43 +08:00
|
|
|
|
2018-05-14 20:53:11 +08:00
|
|
|
LLVM_DEBUG(dbgs() << "TailDuplicator deleted block: "
|
|
|
|
<< getBlockName(RemBB) << "\n");
|
Codegen: Tail-duplicate during placement.
The tail duplication pass uses an assumed layout when making duplication
decisions. This is fine, but passes up duplication opportunities that
may arise when blocks are outlined. Because we want the updated CFG to
affect subsequent placement decisions, this change must occur during
placement.
In order to achieve this goal, TailDuplicationPass is split into a
utility class, TailDuplicator, and the pass itself. The pass delegates
nearly everything to the TailDuplicator object, except for looping over
the blocks in a function. This allows the same code to be used for tail
duplication in both places.
This change, in concert with outlining optional branches, allows
triangle shaped code to perform much better, esepecially when the
taken/untaken branches are correlated, as it creates a second spine when
the tests are small enough.
Issue from previous rollback fixed, and a new test was added for that
case as well. Issue was worklist/scheduling/taildup issue in layout.
Issue from 2nd rollback fixed, with 2 additional tests. Issue was
tail merging/loop info/tail-duplication causing issue with loops that share
a header block.
Issue with early tail-duplication of blocks that branch to a fallthrough
predecessor fixed with test case: tail-dup-branch-to-fallthrough.ll
Differential revision: https://reviews.llvm.org/D18226
llvm-svn: 283934
2016-10-12 04:36:43 +08:00
|
|
|
};
|
|
|
|
auto RemovalCallbackRef =
|
2017-08-25 05:21:39 +08:00
|
|
|
function_ref<void(MachineBasicBlock*)>(RemovalCallback);
|
Codegen: Tail-duplicate during placement.
The tail duplication pass uses an assumed layout when making duplication
decisions. This is fine, but passes up duplication opportunities that
may arise when blocks are outlined. Because we want the updated CFG to
affect subsequent placement decisions, this change must occur during
placement.
In order to achieve this goal, TailDuplicationPass is split into a
utility class, TailDuplicator, and the pass itself. The pass delegates
nearly everything to the TailDuplicator object, except for looping over
the blocks in a function. This allows the same code to be used for tail
duplication in both places.
This change, in concert with outlining optional branches, allows
triangle shaped code to perform much better, esepecially when the
taken/untaken branches are correlated, as it creates a second spine when
the tests are small enough.
Issue from previous rollback fixed, and a new test was added for that
case as well. Issue was worklist/scheduling/taildup issue in layout.
Issue from 2nd rollback fixed, with 2 additional tests. Issue was
tail merging/loop info/tail-duplication causing issue with loops that share
a header block.
Issue with early tail-duplication of blocks that branch to a fallthrough
predecessor fixed with test case: tail-dup-branch-to-fallthrough.ll
Differential revision: https://reviews.llvm.org/D18226
llvm-svn: 283934
2016-10-12 04:36:43 +08:00
|
|
|
|
|
|
|
SmallVector<MachineBasicBlock *, 8> DuplicatedPreds;
|
2017-02-01 07:48:32 +08:00
|
|
|
bool IsSimple = TailDup.isSimpleBB(BB);
|
2020-02-13 07:22:33 +08:00
|
|
|
SmallVector<MachineBasicBlock *, 8> CandidatePreds;
|
|
|
|
SmallVectorImpl<MachineBasicBlock *> *CandidatePtr = nullptr;
|
|
|
|
if (F->getFunction().hasProfileData()) {
|
|
|
|
// We can do partial duplication with precise profile information.
|
|
|
|
findDuplicateCandidates(CandidatePreds, BB, BlockFilter);
|
|
|
|
if (CandidatePreds.size() == 0)
|
|
|
|
return false;
|
|
|
|
if (CandidatePreds.size() < BB->pred_size())
|
|
|
|
CandidatePtr = &CandidatePreds;
|
|
|
|
}
|
|
|
|
TailDup.tailDuplicateAndUpdate(IsSimple, BB, LPred, &DuplicatedPreds,
|
|
|
|
&RemovalCallbackRef, CandidatePtr);
|
Codegen: Tail-duplicate during placement.
The tail duplication pass uses an assumed layout when making duplication
decisions. This is fine, but passes up duplication opportunities that
may arise when blocks are outlined. Because we want the updated CFG to
affect subsequent placement decisions, this change must occur during
placement.
In order to achieve this goal, TailDuplicationPass is split into a
utility class, TailDuplicator, and the pass itself. The pass delegates
nearly everything to the TailDuplicator object, except for looping over
the blocks in a function. This allows the same code to be used for tail
duplication in both places.
This change, in concert with outlining optional branches, allows
triangle shaped code to perform much better, esepecially when the
taken/untaken branches are correlated, as it creates a second spine when
the tests are small enough.
Issue from previous rollback fixed, and a new test was added for that
case as well. Issue was worklist/scheduling/taildup issue in layout.
Issue from 2nd rollback fixed, with 2 additional tests. Issue was
tail merging/loop info/tail-duplication causing issue with loops that share
a header block.
Issue with early tail-duplication of blocks that branch to a fallthrough
predecessor fixed with test case: tail-dup-branch-to-fallthrough.ll
Differential revision: https://reviews.llvm.org/D18226
llvm-svn: 283934
2016-10-12 04:36:43 +08:00
|
|
|
|
|
|
|
// Update UnscheduledPredecessors to reflect tail-duplication.
|
|
|
|
DuplicatedToLPred = false;
|
|
|
|
for (MachineBasicBlock *Pred : DuplicatedPreds) {
|
|
|
|
// We're only looking for unscheduled predecessors that match the filter.
|
|
|
|
BlockChain* PredChain = BlockToChain[Pred];
|
|
|
|
if (Pred == LPred)
|
|
|
|
DuplicatedToLPred = true;
|
|
|
|
if (Pred == LPred || (BlockFilter && !BlockFilter->count(Pred))
|
|
|
|
|| PredChain == &Chain)
|
|
|
|
continue;
|
|
|
|
for (MachineBasicBlock *NewSucc : Pred->successors()) {
|
|
|
|
if (BlockFilter && !BlockFilter->count(NewSucc))
|
|
|
|
continue;
|
|
|
|
BlockChain *NewChain = BlockToChain[NewSucc];
|
|
|
|
if (NewChain != &Chain && NewChain != PredChain)
|
|
|
|
NewChain->UnscheduledPredecessors++;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
return Removed;
|
|
|
|
}
|
|
|
|
|
2020-02-13 07:22:33 +08:00
|
|
|
// Count the number of actual machine instructions.
|
|
|
|
static uint64_t countMBBInstruction(MachineBasicBlock *MBB) {
|
|
|
|
uint64_t InstrCount = 0;
|
|
|
|
for (MachineInstr &MI : *MBB) {
|
|
|
|
if (!MI.isPHI() && !MI.isMetaInstruction())
|
|
|
|
InstrCount += 1;
|
|
|
|
}
|
|
|
|
return InstrCount;
|
|
|
|
}
|
|
|
|
|
|
|
|
// The size cost of duplication is the instruction size of the duplicated block.
|
|
|
|
// So we should scale the threshold accordingly. But the instruction size is not
|
|
|
|
// available on all targets, so we use the number of instructions instead.
|
|
|
|
BlockFrequency MachineBlockPlacement::scaleThreshold(MachineBasicBlock *BB) {
|
|
|
|
return DupThreshold.getFrequency() * countMBBInstruction(BB);
|
|
|
|
}
|
|
|
|
|
|
|
|
// Returns true if BB is Pred's best successor.
|
|
|
|
bool MachineBlockPlacement::isBestSuccessor(MachineBasicBlock *BB,
|
|
|
|
MachineBasicBlock *Pred,
|
|
|
|
BlockFilterSet *BlockFilter) {
|
|
|
|
if (BB == Pred)
|
|
|
|
return false;
|
|
|
|
if (BlockFilter && !BlockFilter->count(Pred))
|
|
|
|
return false;
|
|
|
|
BlockChain *PredChain = BlockToChain[Pred];
|
|
|
|
if (PredChain && (Pred != *std::prev(PredChain->end())))
|
|
|
|
return false;
|
|
|
|
|
|
|
|
// Find the successor with largest probability excluding BB.
|
|
|
|
BranchProbability BestProb = BranchProbability::getZero();
|
|
|
|
for (MachineBasicBlock *Succ : Pred->successors())
|
|
|
|
if (Succ != BB) {
|
|
|
|
if (BlockFilter && !BlockFilter->count(Succ))
|
|
|
|
continue;
|
|
|
|
BlockChain *SuccChain = BlockToChain[Succ];
|
|
|
|
if (SuccChain && (Succ != *SuccChain->begin()))
|
|
|
|
continue;
|
|
|
|
BranchProbability SuccProb = MBPI->getEdgeProbability(Pred, Succ);
|
|
|
|
if (SuccProb > BestProb)
|
|
|
|
BestProb = SuccProb;
|
|
|
|
}
|
|
|
|
|
|
|
|
BranchProbability BBProb = MBPI->getEdgeProbability(Pred, BB);
|
|
|
|
if (BBProb <= BestProb)
|
|
|
|
return false;
|
|
|
|
|
|
|
|
// Compute the number of reduced taken branches if Pred falls through to BB
|
|
|
|
// instead of another successor. Then compare it with threshold.
|
2020-07-22 02:18:06 +08:00
|
|
|
BlockFrequency PredFreq = getBlockCountOrFrequency(Pred);
|
2020-02-13 07:22:33 +08:00
|
|
|
BlockFrequency Gain = PredFreq * (BBProb - BestProb);
|
|
|
|
return Gain > scaleThreshold(BB);
|
|
|
|
}
|
|
|
|
|
|
|
|
// Find out the predecessors of BB and BB can be beneficially duplicated into
|
|
|
|
// them.
|
|
|
|
void MachineBlockPlacement::findDuplicateCandidates(
|
|
|
|
SmallVectorImpl<MachineBasicBlock *> &Candidates,
|
|
|
|
MachineBasicBlock *BB,
|
|
|
|
BlockFilterSet *BlockFilter) {
|
|
|
|
MachineBasicBlock *Fallthrough = nullptr;
|
|
|
|
BranchProbability DefaultBranchProb = BranchProbability::getZero();
|
|
|
|
BlockFrequency BBDupThreshold(scaleThreshold(BB));
|
|
|
|
SmallVector<MachineBasicBlock *, 8> Preds(BB->pred_begin(), BB->pred_end());
|
|
|
|
SmallVector<MachineBasicBlock *, 8> Succs(BB->succ_begin(), BB->succ_end());
|
|
|
|
|
|
|
|
// Sort for highest frequency.
|
|
|
|
auto CmpSucc = [&](MachineBasicBlock *A, MachineBasicBlock *B) {
|
|
|
|
return MBPI->getEdgeProbability(BB, A) > MBPI->getEdgeProbability(BB, B);
|
|
|
|
};
|
|
|
|
auto CmpPred = [&](MachineBasicBlock *A, MachineBasicBlock *B) {
|
|
|
|
return MBFI->getBlockFreq(A) > MBFI->getBlockFreq(B);
|
|
|
|
};
|
|
|
|
llvm::stable_sort(Succs, CmpSucc);
|
|
|
|
llvm::stable_sort(Preds, CmpPred);
|
|
|
|
|
|
|
|
auto SuccIt = Succs.begin();
|
|
|
|
if (SuccIt != Succs.end()) {
|
|
|
|
DefaultBranchProb = MBPI->getEdgeProbability(BB, *SuccIt).getCompl();
|
|
|
|
}
|
|
|
|
|
|
|
|
// For each predecessors of BB, compute the benefit of duplicating BB,
|
|
|
|
// if it is larger than the threshold, add it into Candidates.
|
|
|
|
//
|
|
|
|
// If we have following control flow.
|
|
|
|
//
|
|
|
|
// PB1 PB2 PB3 PB4
|
|
|
|
// \ | / /\
|
|
|
|
// \ | / / \
|
|
|
|
// \ |/ / \
|
|
|
|
// BB----/ OB
|
|
|
|
// /\
|
|
|
|
// / \
|
|
|
|
// SB1 SB2
|
|
|
|
//
|
|
|
|
// And it can be partially duplicated as
|
|
|
|
//
|
|
|
|
// PB2+BB
|
|
|
|
// | PB1 PB3 PB4
|
|
|
|
// | | / /\
|
|
|
|
// | | / / \
|
|
|
|
// | |/ / \
|
|
|
|
// | BB----/ OB
|
|
|
|
// |\ /|
|
|
|
|
// | X |
|
|
|
|
// |/ \|
|
|
|
|
// SB2 SB1
|
|
|
|
//
|
|
|
|
// The benefit of duplicating into a predecessor is defined as
|
|
|
|
// Orig_taken_branch - Duplicated_taken_branch
|
|
|
|
//
|
|
|
|
// The Orig_taken_branch is computed with the assumption that predecessor
|
|
|
|
// jumps to BB and the most possible successor is laid out after BB.
|
|
|
|
//
|
|
|
|
// The Duplicated_taken_branch is computed with the assumption that BB is
|
|
|
|
// duplicated into PB, and one successor is layout after it (SB1 for PB1 and
|
|
|
|
// SB2 for PB2 in our case). If there is no available successor, the combined
|
|
|
|
// block jumps to all BB's successor, like PB3 in this example.
|
|
|
|
//
|
|
|
|
// If a predecessor has multiple successors, so BB can't be duplicated into
|
|
|
|
// it. But it can beneficially fall through to BB, and duplicate BB into other
|
|
|
|
// predecessors.
|
|
|
|
for (MachineBasicBlock *Pred : Preds) {
|
2020-07-22 02:18:06 +08:00
|
|
|
BlockFrequency PredFreq = getBlockCountOrFrequency(Pred);
|
2020-02-13 07:22:33 +08:00
|
|
|
|
|
|
|
if (!TailDup.canTailDuplicate(BB, Pred)) {
|
|
|
|
// BB can't be duplicated into Pred, but it is possible to be layout
|
|
|
|
// below Pred.
|
|
|
|
if (!Fallthrough && isBestSuccessor(BB, Pred, BlockFilter)) {
|
|
|
|
Fallthrough = Pred;
|
|
|
|
if (SuccIt != Succs.end())
|
|
|
|
SuccIt++;
|
|
|
|
}
|
|
|
|
continue;
|
|
|
|
}
|
|
|
|
|
|
|
|
BlockFrequency OrigCost = PredFreq + PredFreq * DefaultBranchProb;
|
|
|
|
BlockFrequency DupCost;
|
|
|
|
if (SuccIt == Succs.end()) {
|
|
|
|
// Jump to all successors;
|
|
|
|
if (Succs.size() > 0)
|
|
|
|
DupCost += PredFreq;
|
|
|
|
} else {
|
|
|
|
// Fallthrough to *SuccIt, jump to all other successors;
|
|
|
|
DupCost += PredFreq;
|
|
|
|
DupCost -= PredFreq * MBPI->getEdgeProbability(BB, *SuccIt);
|
|
|
|
}
|
|
|
|
|
|
|
|
assert(OrigCost >= DupCost);
|
|
|
|
OrigCost -= DupCost;
|
|
|
|
if (OrigCost > BBDupThreshold) {
|
|
|
|
Candidates.push_back(Pred);
|
|
|
|
if (SuccIt != Succs.end())
|
|
|
|
SuccIt++;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
// No predecessors can optimally fallthrough to BB.
|
|
|
|
// So we can change one duplication into fallthrough.
|
|
|
|
if (!Fallthrough) {
|
|
|
|
if ((Candidates.size() < Preds.size()) && (Candidates.size() > 0)) {
|
|
|
|
Candidates[0] = Candidates.back();
|
|
|
|
Candidates.pop_back();
|
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
void MachineBlockPlacement::initDupThreshold() {
|
|
|
|
DupThreshold = 0;
|
|
|
|
if (!F->getFunction().hasProfileData())
|
|
|
|
return;
|
|
|
|
|
2020-07-22 02:18:06 +08:00
|
|
|
// We prefer to use prifile count.
|
|
|
|
uint64_t HotThreshold = PSI->getOrCompHotCountThreshold();
|
|
|
|
if (HotThreshold != UINT64_MAX) {
|
|
|
|
UseProfileCount = true;
|
|
|
|
DupThreshold = HotThreshold * TailDupProfilePercentThreshold / 100;
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
|
|
|
|
// Profile count is not available, we can use block frequency instead.
|
2020-02-13 07:22:33 +08:00
|
|
|
BlockFrequency MaxFreq = 0;
|
|
|
|
for (MachineBasicBlock &MBB : *F) {
|
|
|
|
BlockFrequency Freq = MBFI->getBlockFreq(&MBB);
|
|
|
|
if (Freq > MaxFreq)
|
|
|
|
MaxFreq = Freq;
|
|
|
|
}
|
|
|
|
|
|
|
|
BranchProbability ThresholdProb(TailDupPlacementPenalty, 100);
|
|
|
|
DupThreshold = MaxFreq * ThresholdProb;
|
2020-07-22 02:18:06 +08:00
|
|
|
UseProfileCount = false;
|
2020-02-13 07:22:33 +08:00
|
|
|
}
|
|
|
|
|
2016-06-14 06:23:44 +08:00
|
|
|
bool MachineBlockPlacement::runOnMachineFunction(MachineFunction &MF) {
|
2017-12-16 06:22:58 +08:00
|
|
|
if (skipFunction(MF.getFunction()))
|
2016-05-04 06:32:30 +08:00
|
|
|
return false;
|
|
|
|
|
Implement a block placement pass based on the branch probability and
block frequency analyses. This differs substantially from the existing
block-placement pass in LLVM:
1) It operates on the Machine-IR in the CodeGen layer. This exposes much
more (and more precise) information and opportunities. Also, the
results are more stable due to fewer transforms ocurring after the
pass runs.
2) It uses the generalized probability and frequency analyses. These can
model static heuristics, code annotation derived heuristics as well
as eventual profile loading. By basing the optimization on the
analysis interface it can work from any (or a combination) of these
inputs.
3) It uses a more aggressive algorithm, both building chains from tho
bottom up to maximize benefit, and using an SCC-based walk to layout
chains of blocks in a profitable ordering without O(N^2) iterations
which the old pass involves.
The pass is currently gated behind a flag, and not enabled by default
because it still needs to grow some important features. Most notably, it
needs to support loop aligning and careful layout of loop structures
much as done by hand currently in CodePlacementOpt. Once it supports
these, and has sufficient testing and quality tuning, it should replace
both of these passes.
Thanks to Nick Lewycky and Richard Smith for help authoring & debugging
this, and to Jakob, Andy, Eric, Jim, and probably a few others I'm
forgetting for reviewing and answering all my questions. Writing
a backend pass is *sooo* much better now than it used to be. =D
llvm-svn: 142641
2011-10-21 14:46:38 +08:00
|
|
|
// Check for single-block functions and skip them.
|
2016-06-14 06:23:44 +08:00
|
|
|
if (std::next(MF.begin()) == MF.end())
|
Implement a block placement pass based on the branch probability and
block frequency analyses. This differs substantially from the existing
block-placement pass in LLVM:
1) It operates on the Machine-IR in the CodeGen layer. This exposes much
more (and more precise) information and opportunities. Also, the
results are more stable due to fewer transforms ocurring after the
pass runs.
2) It uses the generalized probability and frequency analyses. These can
model static heuristics, code annotation derived heuristics as well
as eventual profile loading. By basing the optimization on the
analysis interface it can work from any (or a combination) of these
inputs.
3) It uses a more aggressive algorithm, both building chains from tho
bottom up to maximize benefit, and using an SCC-based walk to layout
chains of blocks in a profitable ordering without O(N^2) iterations
which the old pass involves.
The pass is currently gated behind a flag, and not enabled by default
because it still needs to grow some important features. Most notably, it
needs to support loop aligning and careful layout of loop structures
much as done by hand currently in CodePlacementOpt. Once it supports
these, and has sufficient testing and quality tuning, it should replace
both of these passes.
Thanks to Nick Lewycky and Richard Smith for help authoring & debugging
this, and to Jakob, Andy, Eric, Jim, and probably a few others I'm
forgetting for reviewing and answering all my questions. Writing
a backend pass is *sooo* much better now than it used to be. =D
llvm-svn: 142641
2011-10-21 14:46:38 +08:00
|
|
|
return false;
|
|
|
|
|
2016-06-14 06:23:44 +08:00
|
|
|
F = &MF;
|
Implement a block placement pass based on the branch probability and
block frequency analyses. This differs substantially from the existing
block-placement pass in LLVM:
1) It operates on the Machine-IR in the CodeGen layer. This exposes much
more (and more precise) information and opportunities. Also, the
results are more stable due to fewer transforms ocurring after the
pass runs.
2) It uses the generalized probability and frequency analyses. These can
model static heuristics, code annotation derived heuristics as well
as eventual profile loading. By basing the optimization on the
analysis interface it can work from any (or a combination) of these
inputs.
3) It uses a more aggressive algorithm, both building chains from tho
bottom up to maximize benefit, and using an SCC-based walk to layout
chains of blocks in a profitable ordering without O(N^2) iterations
which the old pass involves.
The pass is currently gated behind a flag, and not enabled by default
because it still needs to grow some important features. Most notably, it
needs to support loop aligning and careful layout of loop structures
much as done by hand currently in CodePlacementOpt. Once it supports
these, and has sufficient testing and quality tuning, it should replace
both of these passes.
Thanks to Nick Lewycky and Richard Smith for help authoring & debugging
this, and to Jakob, Andy, Eric, Jim, and probably a few others I'm
forgetting for reviewing and answering all my questions. Writing
a backend pass is *sooo* much better now than it used to be. =D
llvm-svn: 142641
2011-10-21 14:46:38 +08:00
|
|
|
MBPI = &getAnalysis<MachineBranchProbabilityInfo>();
|
2020-01-28 02:05:54 +08:00
|
|
|
MBFI = std::make_unique<MBFIWrapper>(
|
2016-06-09 23:24:29 +08:00
|
|
|
getAnalysis<MachineBlockFrequencyInfo>());
|
2011-10-21 16:57:37 +08:00
|
|
|
MLI = &getAnalysis<MachineLoopInfo>();
|
2016-06-14 06:23:44 +08:00
|
|
|
TII = MF.getSubtarget().getInstrInfo();
|
|
|
|
TLI = MF.getSubtarget().getTargetLowering();
|
2017-02-01 07:48:32 +08:00
|
|
|
MPDT = nullptr;
|
2019-12-06 01:39:37 +08:00
|
|
|
PSI = &getAnalysis<ProfileSummaryInfoWrapperPass>().getPSI();
|
2016-11-02 06:15:50 +08:00
|
|
|
|
2020-02-13 07:22:33 +08:00
|
|
|
initDupThreshold();
|
|
|
|
|
2016-11-02 06:15:50 +08:00
|
|
|
// Initialize PreferredLoopExit to nullptr here since it may never be set if
|
|
|
|
// there are no MachineLoops.
|
|
|
|
PreferredLoopExit = nullptr;
|
|
|
|
|
2017-05-18 07:44:41 +08:00
|
|
|
assert(BlockToChain.empty() &&
|
|
|
|
"BlockToChain map should be empty before starting placement.");
|
|
|
|
assert(ComputedEdges.empty() &&
|
|
|
|
"Computed Edge map should be empty before starting placement.");
|
2017-04-12 11:18:20 +08:00
|
|
|
|
2017-05-16 01:30:47 +08:00
|
|
|
unsigned TailDupSize = TailDupPlacementThreshold;
|
|
|
|
// If only the aggressive threshold is explicitly set, use it.
|
|
|
|
if (TailDupPlacementAggressiveThreshold.getNumOccurrences() != 0 &&
|
|
|
|
TailDupPlacementThreshold.getNumOccurrences() == 0)
|
|
|
|
TailDupSize = TailDupPlacementAggressiveThreshold;
|
|
|
|
|
|
|
|
TargetPassConfig *PassConfig = &getAnalysis<TargetPassConfig>();
|
2018-06-20 13:29:26 +08:00
|
|
|
// For aggressive optimization, we can adjust some thresholds to be less
|
2017-05-16 01:30:47 +08:00
|
|
|
// conservative.
|
|
|
|
if (PassConfig->getOptLevel() >= CodeGenOpt::Aggressive) {
|
|
|
|
// At O3 we should be more willing to copy blocks for tail duplication. This
|
|
|
|
// increases size pressure, so we only do it at O3
|
|
|
|
// Do this unless only the regular threshold is explicitly set.
|
|
|
|
if (TailDupPlacementThreshold.getNumOccurrences() == 0 ||
|
|
|
|
TailDupPlacementAggressiveThreshold.getNumOccurrences() != 0)
|
|
|
|
TailDupSize = TailDupPlacementAggressiveThreshold;
|
|
|
|
}
|
|
|
|
|
[BlockPlacement] Disable block placement tail duplciation in structured CFG.
Summary:
Tail duplication easily breaks the structure of CFG, e.g. duplicating on
a region entry. If the structure is intended to be preserved, then we
may want to configure tail duplication, or disable it for structured
CFG. From our benchmark results disabling it doesn't cause performance
regression.
Notice that this currently affects AMDGPU backend. In the next patch, I
also plan to turn on requiresStructuredCFG for NVPTX.
All unit tests still pass.
Reviewers: jlebar, arsenm
Subscribers: jholewinski, sanjoy, wdng, tpr, hiraditya, llvm-commits
Differential Revision: https://reviews.llvm.org/D45008
llvm-svn: 328884
2018-03-31 01:51:00 +08:00
|
|
|
if (allowTailDupPlacement()) {
|
2017-02-01 07:48:32 +08:00
|
|
|
MPDT = &getAnalysis<MachinePostDominatorTree>();
|
2019-12-06 01:39:37 +08:00
|
|
|
bool OptForSize = MF.getFunction().hasOptSize() ||
|
|
|
|
llvm::shouldOptimizeForSize(&MF, PSI, &MBFI->getMBFI());
|
|
|
|
if (OptForSize)
|
Codegen: Tail-duplicate during placement.
The tail duplication pass uses an assumed layout when making duplication
decisions. This is fine, but passes up duplication opportunities that
may arise when blocks are outlined. Because we want the updated CFG to
affect subsequent placement decisions, this change must occur during
placement.
In order to achieve this goal, TailDuplicationPass is split into a
utility class, TailDuplicator, and the pass itself. The pass delegates
nearly everything to the TailDuplicator object, except for looping over
the blocks in a function. This allows the same code to be used for tail
duplication in both places.
This change, in concert with outlining optional branches, allows
triangle shaped code to perform much better, esepecially when the
taken/untaken branches are correlated, as it creates a second spine when
the tests are small enough.
Issue from previous rollback fixed, and a new test was added for that
case as well. Issue was worklist/scheduling/taildup issue in layout.
Issue from 2nd rollback fixed, with 2 additional tests. Issue was
tail merging/loop info/tail-duplication causing issue with loops that share
a header block.
Issue with early tail-duplication of blocks that branch to a fallthrough
predecessor fixed with test case: tail-dup-branch-to-fallthrough.ll
Differential revision: https://reviews.llvm.org/D18226
llvm-svn: 283934
2016-10-12 04:36:43 +08:00
|
|
|
TailDupSize = 1;
|
2017-08-23 11:17:59 +08:00
|
|
|
bool PreRegAlloc = false;
|
2020-01-30 01:36:31 +08:00
|
|
|
TailDup.initMF(MF, PreRegAlloc, MBPI, MBFI.get(), PSI,
|
2019-12-06 01:39:37 +08:00
|
|
|
/* LayoutMode */ true, TailDupSize);
|
2017-03-03 09:00:22 +08:00
|
|
|
precomputeTriangleChains();
|
Codegen: Tail-duplicate during placement.
The tail duplication pass uses an assumed layout when making duplication
decisions. This is fine, but passes up duplication opportunities that
may arise when blocks are outlined. Because we want the updated CFG to
affect subsequent placement decisions, this change must occur during
placement.
In order to achieve this goal, TailDuplicationPass is split into a
utility class, TailDuplicator, and the pass itself. The pass delegates
nearly everything to the TailDuplicator object, except for looping over
the blocks in a function. This allows the same code to be used for tail
duplication in both places.
This change, in concert with outlining optional branches, allows
triangle shaped code to perform much better, esepecially when the
taken/untaken branches are correlated, as it creates a second spine when
the tests are small enough.
Issue from previous rollback fixed, and a new test was added for that
case as well. Issue was worklist/scheduling/taildup issue in layout.
Issue from 2nd rollback fixed, with 2 additional tests. Issue was
tail merging/loop info/tail-duplication causing issue with loops that share
a header block.
Issue with early tail-duplication of blocks that branch to a fallthrough
predecessor fixed with test case: tail-dup-branch-to-fallthrough.ll
Differential revision: https://reviews.llvm.org/D18226
llvm-svn: 283934
2016-10-12 04:36:43 +08:00
|
|
|
}
|
|
|
|
|
2016-06-14 06:23:44 +08:00
|
|
|
buildCFGChains();
|
2016-06-09 23:24:29 +08:00
|
|
|
|
|
|
|
// Changing the layout can create new tail merging opportunities.
|
|
|
|
// TailMerge can create jump into if branches that make CFG irreducible for
|
2016-07-16 02:41:56 +08:00
|
|
|
// HW that requires structured CFG.
|
2016-06-14 06:23:44 +08:00
|
|
|
bool EnableTailMerge = !MF.getTarget().requiresStructuredCFG() &&
|
2016-06-09 23:24:29 +08:00
|
|
|
PassConfig->getEnableTailMerge() &&
|
|
|
|
BranchFoldPlacement;
|
|
|
|
// No tail merging opportunities if the block number is less than four.
|
2016-06-14 06:23:44 +08:00
|
|
|
if (MF.size() > 3 && EnableTailMerge) {
|
2017-05-16 01:30:47 +08:00
|
|
|
unsigned TailMergeSize = TailDupSize + 1;
|
2016-06-09 23:24:29 +08:00
|
|
|
BranchFolder BF(/*EnableTailMerge=*/true, /*CommonHoist=*/false, *MBFI,
|
2019-12-06 01:39:37 +08:00
|
|
|
*MBPI, PSI, TailMergeSize);
|
2016-06-09 23:24:29 +08:00
|
|
|
|
2020-07-01 10:10:01 +08:00
|
|
|
if (BF.OptimizeFunction(MF, TII, MF.getSubtarget().getRegisterInfo(), MLI,
|
2019-07-16 12:46:31 +08:00
|
|
|
/*AfterPlacement=*/true)) {
|
2016-06-09 23:24:29 +08:00
|
|
|
// Redo the layout if tail merging creates/removes/moves blocks.
|
|
|
|
BlockToChain.clear();
|
2017-04-12 11:18:20 +08:00
|
|
|
ComputedEdges.clear();
|
2017-03-03 05:44:24 +08:00
|
|
|
// Must redo the post-dominator tree if blocks were changed.
|
2017-02-01 07:48:32 +08:00
|
|
|
if (MPDT)
|
|
|
|
MPDT->runOnMachineFunction(MF);
|
2016-06-09 23:24:29 +08:00
|
|
|
ChainAllocator.DestroyAll();
|
2016-06-14 06:23:44 +08:00
|
|
|
buildCFGChains();
|
2016-06-09 23:24:29 +08:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2016-06-14 06:23:44 +08:00
|
|
|
optimizeBranches();
|
|
|
|
alignBlocks();
|
Implement a block placement pass based on the branch probability and
block frequency analyses. This differs substantially from the existing
block-placement pass in LLVM:
1) It operates on the Machine-IR in the CodeGen layer. This exposes much
more (and more precise) information and opportunities. Also, the
results are more stable due to fewer transforms ocurring after the
pass runs.
2) It uses the generalized probability and frequency analyses. These can
model static heuristics, code annotation derived heuristics as well
as eventual profile loading. By basing the optimization on the
analysis interface it can work from any (or a combination) of these
inputs.
3) It uses a more aggressive algorithm, both building chains from tho
bottom up to maximize benefit, and using an SCC-based walk to layout
chains of blocks in a profitable ordering without O(N^2) iterations
which the old pass involves.
The pass is currently gated behind a flag, and not enabled by default
because it still needs to grow some important features. Most notably, it
needs to support loop aligning and careful layout of loop structures
much as done by hand currently in CodePlacementOpt. Once it supports
these, and has sufficient testing and quality tuning, it should replace
both of these passes.
Thanks to Nick Lewycky and Richard Smith for help authoring & debugging
this, and to Jakob, Andy, Eric, Jim, and probably a few others I'm
forgetting for reviewing and answering all my questions. Writing
a backend pass is *sooo* much better now than it used to be. =D
llvm-svn: 142641
2011-10-21 14:46:38 +08:00
|
|
|
|
|
|
|
BlockToChain.clear();
|
2017-04-12 11:18:20 +08:00
|
|
|
ComputedEdges.clear();
|
2011-11-14 18:57:23 +08:00
|
|
|
ChainAllocator.DestroyAll();
|
Implement a block placement pass based on the branch probability and
block frequency analyses. This differs substantially from the existing
block-placement pass in LLVM:
1) It operates on the Machine-IR in the CodeGen layer. This exposes much
more (and more precise) information and opportunities. Also, the
results are more stable due to fewer transforms ocurring after the
pass runs.
2) It uses the generalized probability and frequency analyses. These can
model static heuristics, code annotation derived heuristics as well
as eventual profile loading. By basing the optimization on the
analysis interface it can work from any (or a combination) of these
inputs.
3) It uses a more aggressive algorithm, both building chains from tho
bottom up to maximize benefit, and using an SCC-based walk to layout
chains of blocks in a profitable ordering without O(N^2) iterations
which the old pass involves.
The pass is currently gated behind a flag, and not enabled by default
because it still needs to grow some important features. Most notably, it
needs to support loop aligning and careful layout of loop structures
much as done by hand currently in CodePlacementOpt. Once it supports
these, and has sufficient testing and quality tuning, it should replace
both of these passes.
Thanks to Nick Lewycky and Richard Smith for help authoring & debugging
this, and to Jakob, Andy, Eric, Jim, and probably a few others I'm
forgetting for reviewing and answering all my questions. Writing
a backend pass is *sooo* much better now than it used to be. =D
llvm-svn: 142641
2011-10-21 14:46:38 +08:00
|
|
|
|
2013-04-12 09:24:16 +08:00
|
|
|
if (AlignAllBlock)
|
|
|
|
// Align all of the blocks in the function to a specific alignment.
|
2016-06-14 06:23:44 +08:00
|
|
|
for (MachineBasicBlock &MBB : MF)
|
2019-09-27 20:54:21 +08:00
|
|
|
MBB.setAlignment(Align(1ULL << AlignAllBlock));
|
2016-01-22 01:25:52 +08:00
|
|
|
else if (AlignAllNonFallThruBlocks) {
|
|
|
|
// Align all of the blocks that have no fall-through predecessors to a
|
|
|
|
// specific alignment.
|
2016-06-14 06:23:44 +08:00
|
|
|
for (auto MBI = std::next(MF.begin()), MBE = MF.end(); MBI != MBE; ++MBI) {
|
2016-01-22 01:25:52 +08:00
|
|
|
auto LayoutPred = std::prev(MBI);
|
|
|
|
if (!LayoutPred->isSuccessor(&*MBI))
|
2019-09-27 20:54:21 +08:00
|
|
|
MBI->setAlignment(Align(1ULL << AlignAllNonFallThruBlocks));
|
2016-01-22 01:25:52 +08:00
|
|
|
}
|
|
|
|
}
|
2017-01-29 09:57:02 +08:00
|
|
|
if (ViewBlockLayoutWithBFI != GVDT_None &&
|
|
|
|
(ViewBlockFreqFuncName.empty() ||
|
2017-12-16 06:22:58 +08:00
|
|
|
F->getFunction().getName().equals(ViewBlockFreqFuncName))) {
|
2017-02-16 03:21:04 +08:00
|
|
|
MBFI->view("MBP." + MF.getName(), false);
|
2017-01-29 09:57:02 +08:00
|
|
|
}
|
|
|
|
|
2013-04-12 09:24:16 +08:00
|
|
|
|
Implement a block placement pass based on the branch probability and
block frequency analyses. This differs substantially from the existing
block-placement pass in LLVM:
1) It operates on the Machine-IR in the CodeGen layer. This exposes much
more (and more precise) information and opportunities. Also, the
results are more stable due to fewer transforms ocurring after the
pass runs.
2) It uses the generalized probability and frequency analyses. These can
model static heuristics, code annotation derived heuristics as well
as eventual profile loading. By basing the optimization on the
analysis interface it can work from any (or a combination) of these
inputs.
3) It uses a more aggressive algorithm, both building chains from tho
bottom up to maximize benefit, and using an SCC-based walk to layout
chains of blocks in a profitable ordering without O(N^2) iterations
which the old pass involves.
The pass is currently gated behind a flag, and not enabled by default
because it still needs to grow some important features. Most notably, it
needs to support loop aligning and careful layout of loop structures
much as done by hand currently in CodePlacementOpt. Once it supports
these, and has sufficient testing and quality tuning, it should replace
both of these passes.
Thanks to Nick Lewycky and Richard Smith for help authoring & debugging
this, and to Jakob, Andy, Eric, Jim, and probably a few others I'm
forgetting for reviewing and answering all my questions. Writing
a backend pass is *sooo* much better now than it used to be. =D
llvm-svn: 142641
2011-10-21 14:46:38 +08:00
|
|
|
// We always return true as we have no way to track whether the final order
|
|
|
|
// differs from the original order.
|
|
|
|
return true;
|
|
|
|
}
|
2011-11-02 15:17:12 +08:00
|
|
|
|
|
|
|
namespace {
|
2017-08-25 05:21:39 +08:00
|
|
|
|
2018-05-01 23:54:18 +08:00
|
|
|
/// A pass to compute block placement statistics.
|
2011-11-02 15:17:12 +08:00
|
|
|
///
|
|
|
|
/// A separate pass to compute interesting statistics for evaluating block
|
|
|
|
/// placement. This is separate from the actual placement pass so that they can
|
2012-06-02 18:20:22 +08:00
|
|
|
/// be computed in the absence of any placement transformations or when using
|
2011-11-02 15:17:12 +08:00
|
|
|
/// alternative placement strategies.
|
|
|
|
class MachineBlockPlacementStats : public MachineFunctionPass {
|
2018-05-01 23:54:18 +08:00
|
|
|
/// A handle to the branch probability pass.
|
2011-11-02 15:17:12 +08:00
|
|
|
const MachineBranchProbabilityInfo *MBPI;
|
|
|
|
|
2018-05-01 23:54:18 +08:00
|
|
|
/// A handle to the function-wide block frequency pass.
|
2011-11-02 15:17:12 +08:00
|
|
|
const MachineBlockFrequencyInfo *MBFI;
|
|
|
|
|
|
|
|
public:
|
|
|
|
static char ID; // Pass identification, replacement for typeid
|
2017-08-25 05:21:39 +08:00
|
|
|
|
2011-11-02 15:17:12 +08:00
|
|
|
MachineBlockPlacementStats() : MachineFunctionPass(ID) {
|
|
|
|
initializeMachineBlockPlacementStatsPass(*PassRegistry::getPassRegistry());
|
|
|
|
}
|
|
|
|
|
2014-03-07 17:26:03 +08:00
|
|
|
bool runOnMachineFunction(MachineFunction &F) override;
|
2011-11-02 15:17:12 +08:00
|
|
|
|
2014-03-07 17:26:03 +08:00
|
|
|
void getAnalysisUsage(AnalysisUsage &AU) const override {
|
2011-11-02 15:17:12 +08:00
|
|
|
AU.addRequired<MachineBranchProbabilityInfo>();
|
|
|
|
AU.addRequired<MachineBlockFrequencyInfo>();
|
|
|
|
AU.setPreservesAll();
|
|
|
|
MachineFunctionPass::getAnalysisUsage(AU);
|
|
|
|
}
|
|
|
|
};
|
2017-08-25 05:21:39 +08:00
|
|
|
|
|
|
|
} // end anonymous namespace
|
2011-11-02 15:17:12 +08:00
|
|
|
|
|
|
|
char MachineBlockPlacementStats::ID = 0;
|
2017-08-25 05:21:39 +08:00
|
|
|
|
2012-02-09 05:23:13 +08:00
|
|
|
char &llvm::MachineBlockPlacementStatsID = MachineBlockPlacementStats::ID;
|
2017-08-25 05:21:39 +08:00
|
|
|
|
2011-11-02 15:17:12 +08:00
|
|
|
INITIALIZE_PASS_BEGIN(MachineBlockPlacementStats, "block-placement-stats",
|
|
|
|
"Basic Block Placement Stats", false, false)
|
|
|
|
INITIALIZE_PASS_DEPENDENCY(MachineBranchProbabilityInfo)
|
|
|
|
INITIALIZE_PASS_DEPENDENCY(MachineBlockFrequencyInfo)
|
|
|
|
INITIALIZE_PASS_END(MachineBlockPlacementStats, "block-placement-stats",
|
|
|
|
"Basic Block Placement Stats", false, false)
|
|
|
|
|
|
|
|
bool MachineBlockPlacementStats::runOnMachineFunction(MachineFunction &F) {
|
|
|
|
// Check for single-block functions and skip them.
|
2014-03-02 20:27:27 +08:00
|
|
|
if (std::next(F.begin()) == F.end())
|
2011-11-02 15:17:12 +08:00
|
|
|
return false;
|
|
|
|
|
|
|
|
MBPI = &getAnalysis<MachineBranchProbabilityInfo>();
|
|
|
|
MBFI = &getAnalysis<MachineBlockFrequencyInfo>();
|
|
|
|
|
2015-03-05 11:19:05 +08:00
|
|
|
for (MachineBasicBlock &MBB : F) {
|
|
|
|
BlockFrequency BlockFreq = MBFI->getBlockFreq(&MBB);
|
2015-03-05 10:35:31 +08:00
|
|
|
Statistic &NumBranches =
|
2015-03-05 11:19:05 +08:00
|
|
|
(MBB.succ_size() > 1) ? NumCondBranches : NumUncondBranches;
|
2015-03-05 10:35:31 +08:00
|
|
|
Statistic &BranchTakenFreq =
|
2015-03-05 11:19:05 +08:00
|
|
|
(MBB.succ_size() > 1) ? CondBranchTakenFreq : UncondBranchTakenFreq;
|
|
|
|
for (MachineBasicBlock *Succ : MBB.successors()) {
|
2011-11-02 15:17:12 +08:00
|
|
|
// Skip if this successor is a fallthrough.
|
2015-03-05 11:19:05 +08:00
|
|
|
if (MBB.isLayoutSuccessor(Succ))
|
2011-11-02 15:17:12 +08:00
|
|
|
continue;
|
|
|
|
|
2015-03-05 11:19:05 +08:00
|
|
|
BlockFrequency EdgeFreq =
|
|
|
|
BlockFreq * MBPI->getEdgeProbability(&MBB, Succ);
|
2011-11-02 15:17:12 +08:00
|
|
|
++NumBranches;
|
|
|
|
BranchTakenFreq += EdgeFreq.getFrequency();
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
return false;
|
|
|
|
}
|