//===- SimplifyCFGPass.cpp - CFG Simplification Pass ----------------------===//
//
//                     The LLVM Compiler Infrastructure
//
// This file is distributed under the University of Illinois Open Source
// License. See LICENSE.TXT for details.
//
//===----------------------------------------------------------------------===//
//
// This file implements dead code elimination and basic block merging, along
// with a collection of other peephole control flow optimizations. For example:
//
//   * Removes basic blocks with no predecessors.
//   * Merges a basic block into its predecessor if there is only one and the
//     predecessor only has one successor.
//   * Eliminates PHI nodes for basic blocks with a single predecessor.
//   * Eliminates a basic block that only contains an unconditional branch.
//   * Changes invoke instructions to nounwind functions to be calls.
//   * Changes things like "if (x) if (y)" into "if (x & y)" (sketch below).
//   * etc.
//
//===----------------------------------------------------------------------===//
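
// Illustrative sketch (the IR value names here are hypothetical): the
// "if (x) if (y)" fold listed above corresponds roughly to merging two
// conditional branches that share a destination, e.g.
//
//     br i1 %x, label %inner, label %exit
//   inner:
//     br i1 %y, label %body, label %exit
//
// becomes
//
//     %x.and.y = and i1 %x, %y
//     br i1 %x.and.y, label %body, label %exit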

#include "llvm/Transforms/Scalar.h"
#include "llvm/ADT/SmallPtrSet.h"
#include "llvm/ADT/SmallVector.h"
#include "llvm/ADT/Statistic.h"
#include "llvm/Analysis/AssumptionTracker.h"
#include "llvm/Analysis/TargetTransformInfo.h"
#include "llvm/IR/Attributes.h"
#include "llvm/IR/CFG.h"
#include "llvm/IR/Constants.h"
#include "llvm/IR/DataLayout.h"
#include "llvm/IR/Instructions.h"
#include "llvm/IR/IntrinsicInst.h"
#include "llvm/IR/Module.h"
#include "llvm/Pass.h"
#include "llvm/Support/CommandLine.h"
#include "llvm/Transforms/Utils/Local.h"
using namespace llvm;

#define DEBUG_TYPE "simplifycfg"

static cl::opt<unsigned>
UserBonusInstThreshold("bonus-inst-threshold", cl::Hidden, cl::init(1),
   cl::desc("Control the number of bonus instructions (default = 1)"));
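
// Usage sketch (the file names are hypothetical): when this pass is run
// standalone through 'opt', the threshold can be raised on the command line:
//
//   opt -simplifycfg -bonus-inst-threshold=2 -S input.ll -o output.ll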

STATISTIC(NumSimpl, "Number of blocks simplified");

namespace {
struct CFGSimplifyPass : public FunctionPass {
  static char ID; // Pass identification, replacement for typeid
  unsigned BonusInstThreshold;
  CFGSimplifyPass(int T = -1) : FunctionPass(ID) {
    BonusInstThreshold = (T == -1) ? UserBonusInstThreshold : unsigned(T);
    initializeCFGSimplifyPassPass(*PassRegistry::getPassRegistry());
  }
  bool runOnFunction(Function &F) override;

  void getAnalysisUsage(AnalysisUsage &AU) const override {
    AU.addRequired<AssumptionTracker>();
    AU.addRequired<TargetTransformInfo>();
  }
};
}

char CFGSimplifyPass::ID = 0;
INITIALIZE_PASS_BEGIN(CFGSimplifyPass, "simplifycfg", "Simplify the CFG", false,
                      false)
INITIALIZE_AG_DEPENDENCY(TargetTransformInfo)
INITIALIZE_PASS_DEPENDENCY(AssumptionTracker)
INITIALIZE_PASS_END(CFGSimplifyPass, "simplifycfg", "Simplify the CFG", false,
                    false)

// Public interface to the CFGSimplification pass
FunctionPass *llvm::createCFGSimplificationPass(int Threshold) {
  return new CFGSimplifyPass(Threshold);
}
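
// Usage sketch (assuming the legacy pass manager; 'M' is an llvm::Module the
// caller has already created): a client can schedule the pass with the default
// bonus-instruction threshold roughly as follows:
//
//   legacy::PassManager PM;
//   PM.add(createCFGSimplificationPass());
//   PM.run(M);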

/// mergeEmptyReturnBlocks - If we have more than one empty (other than phi
/// node) return block, merge them together to promote recursive block merging.
static bool mergeEmptyReturnBlocks(Function &F) {
  bool Changed = false;

  BasicBlock *RetBlock = nullptr;

  // Scan all the blocks in the function, looking for empty return blocks.
  for (Function::iterator BBI = F.begin(), E = F.end(); BBI != E; ) {
    BasicBlock &BB = *BBI++;

    // Only look at return blocks.
    ReturnInst *Ret = dyn_cast<ReturnInst>(BB.getTerminator());
    if (!Ret) continue;

    // Only look at the block if it is empty or the only other thing in it is a
    // single PHI node that is the operand to the return.
    if (Ret != &BB.front()) {
      // Check for something else in the block.
      BasicBlock::iterator I = Ret;
      --I;
      // Skip over debug info.
      while (isa<DbgInfoIntrinsic>(I) && I != BB.begin())
        --I;
      if (!isa<DbgInfoIntrinsic>(I) &&
          (!isa<PHINode>(I) || I != BB.begin() ||
           Ret->getNumOperands() == 0 ||
           Ret->getOperand(0) != I))
        continue;
    }

    // If this is the first returning block, remember it and keep going.
    if (!RetBlock) {
      RetBlock = &BB;
      continue;
    }

    // Otherwise, we found a duplicate return block. Merge the two.
    Changed = true;

    // The case when there is no input to the return, or when the returned
    // values agree, is trivial. Note that they can't agree if there are phis
    // in the blocks.
    if (Ret->getNumOperands() == 0 ||
        Ret->getOperand(0) ==
          cast<ReturnInst>(RetBlock->getTerminator())->getOperand(0)) {
      BB.replaceAllUsesWith(RetBlock);
      BB.eraseFromParent();
      continue;
    }

    // If the canonical return block has no PHI node, create one now.
    PHINode *RetBlockPHI = dyn_cast<PHINode>(RetBlock->begin());
    if (!RetBlockPHI) {
      Value *InVal = cast<ReturnInst>(RetBlock->getTerminator())->getOperand(0);
      pred_iterator PB = pred_begin(RetBlock), PE = pred_end(RetBlock);
      RetBlockPHI = PHINode::Create(Ret->getOperand(0)->getType(),
                                    std::distance(PB, PE), "merge",
                                    &RetBlock->front());

      for (pred_iterator PI = PB; PI != PE; ++PI)
        RetBlockPHI->addIncoming(InVal, *PI);
      RetBlock->getTerminator()->setOperand(0, RetBlockPHI);
    }

    // Turn BB into a block that just unconditionally branches to the return
    // block. This handles the case when the two return blocks have a common
    // predecessor but return different values.
    RetBlockPHI->addIncoming(Ret->getOperand(0), &BB);
    BB.getTerminator()->eraseFromParent();
    BranchInst::Create(RetBlock, &BB);
  }

  return Changed;
}
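
// Illustrative sketch of the transformation above (the IR names are
// hypothetical): given two return blocks that return different values,
//
//   a:
//     ret i32 %x
//   b:
//     ret i32 %y
//
// the first block becomes the canonical return block and the second is turned
// into an unconditional branch to it, with a phi selecting the returned value:
//
//   a:
//     %merge = phi i32 [ %x, %pred.of.a ], [ %y, %b ]
//     ret i32 %merge
//   b:
//     br label %a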

/// iterativelySimplifyCFG - Call SimplifyCFG on all the blocks in the function,
/// iterating until no more changes are made.
static bool iterativelySimplifyCFG(Function &F, const TargetTransformInfo &TTI,
                                   const DataLayout *DL,
                                   AssumptionTracker *AT,
                                   unsigned BonusInstThreshold) {
  bool Changed = false;
  bool LocalChange = true;
  while (LocalChange) {
    LocalChange = false;

    // Loop over all of the basic blocks and remove them if they are unneeded...
    //
    for (Function::iterator BBIt = F.begin(); BBIt != F.end(); ) {
      if (SimplifyCFG(BBIt++, TTI, BonusInstThreshold, DL, AT)) {
        LocalChange = true;
        ++NumSimpl;
      }
    }
    Changed |= LocalChange;
  }
  return Changed;
}

// It is possible that we may require multiple passes over the code to fully
// simplify the CFG.
//
bool CFGSimplifyPass::runOnFunction(Function &F) {
  if (skipOptnoneFunction(F))
    return false;

  AssumptionTracker *AT = &getAnalysis<AssumptionTracker>();
  const TargetTransformInfo &TTI = getAnalysis<TargetTransformInfo>();
  DataLayoutPass *DLP = getAnalysisIfAvailable<DataLayoutPass>();
  const DataLayout *DL = DLP ? &DLP->getDataLayout() : nullptr;
  bool EverChanged = removeUnreachableBlocks(F);
  EverChanged |= mergeEmptyReturnBlocks(F);
  EverChanged |= iterativelySimplifyCFG(F, TTI, DL, AT, BonusInstThreshold);

  // If none of the passes changed anything, we're done.
  if (!EverChanged) return false;

  // iterativelySimplifyCFG can (rarely) make some loops dead. If this happens,
  // removeUnreachableBlocks is needed to nuke them, which means we should
  // iterate between the two optimizations. We structure the code like this to
  // avoid rerunning iterativelySimplifyCFG if the second pass of
  // removeUnreachableBlocks doesn't do anything.
  if (!removeUnreachableBlocks(F))
    return true;

  do {
    EverChanged = iterativelySimplifyCFG(F, TTI, DL, AT, BonusInstThreshold);
    EverChanged |= removeUnreachableBlocks(F);
  } while (EverChanged);

  return true;
}