llvm-project/llvm/lib/Transforms/IPO/MergeFunctions.cpp


//===- MergeFunctions.cpp - Merge identical functions ---------------------===//
//
// The LLVM Compiler Infrastructure
//
// This file is distributed under the University of Illinois Open Source
// License. See LICENSE.TXT for details.
//
//===----------------------------------------------------------------------===//
//
// This pass looks for equivalent functions that are mergeable and folds them.
//
// An order relation is defined on the set of functions. It is built on a
// function comparison procedure that returns
// 0 when the functions are equal,
// -1 when the left function is less than the right function, and
// 1 in the opposite case. We need a total ordering, so we must maintain
// four properties on the function set:
// a <= a (reflexivity)
// if a <= b and b <= a then a = b (antisymmetry)
// if a <= b and b <= c then a <= c (transitivity).
// for all a and b: a <= b or b <= a (totality).
//
// Comparison iterates through each instruction in each basic block.
// Functions are kept in a binary tree. For each new function F we perform
// a lookup in that tree.
// In practice it works the following way:
// -- We define a Function* container class with a custom comparator
//    (FunctionNode, ordered by the FunctionNodeCmp defined below).
// -- FunctionNode instances are stored in a std::set, so every
//    std::set::insert operation gives a result in O(log N) time.
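//
// As an illustrative sketch (simplified from the actual code further down in
// this file), the 3-way comparison is adapted to the strict weak ordering
// that std::set requires roughly like this:
//
//   struct FunctionNodeCmp {
//     bool operator()(const FunctionNode &L, const FunctionNode &R) const {
//       if (L.getHash() != R.getHash())
//         return L.getHash() < R.getHash(); // cheap path, almost always taken
//       FunctionComparator FCmp(L.getFunc(), R.getFunc(), GlobalNumbers);
//       return FCmp.compare() == -1;        // full comparison on hash ties
//     }
//   };
//   std::set<FunctionNode, FunctionNodeCmp> FnTree; // insert() costs O(log N)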
//
// As an optimization, a hash of the function structure is calculated first, and
// two functions are only compared if they have the same hash. This hash is
// cheap to compute, and has the property that if function F == G according to
// the comparison function, then hash(F) == hash(G). This consistency property
// is critical to ensuring all possible merging opportunities are exploited.
// Collisions in the hash affect the speed of the pass but not the correctness
// or determinism of the resulting transformation.
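//
// A minimal sketch of how this property is exploited (hypothetical driver
// code; functionHash and FunctionComparator are the real APIs used below):
//
//   auto HF = FunctionComparator::functionHash(F);
//   auto HG = FunctionComparator::functionHash(G);
//   if (HF != HG)
//     return false;               // F == G is impossible; skip the full check
//   FunctionComparator FCmp(&F, &G, &GlobalNumbers);
//   return FCmp.compare() == 0;   // hashes tie; do the full comparison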
//
// When a match is found, the functions are folded. If both functions are
// overridable, we move the functionality into a new internal function and
// leave two overridable thunks to it.
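//
// Illustrative IR (hypothetical and heavily simplified) for this two-thunk
// case, merging two identical overridable functions F and G:
//
//   ; before
//   define weak i32 @F(i32 %x) { ...body... }
//   define weak i32 @G(i32 %x) { ...identical body... }
//
//   ; after: the body moves into one private function and both original
//   ; symbols become thunks to it
//   define private i32 @0(i32 %x) { ...body... }
//   define weak i32 @F(i32 %x) {
//     %r = tail call i32 @0(i32 %x)
//     ret i32 %r
//   }
//   define weak i32 @G(i32 %x) { ...same tail-call thunk to @0... }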
//
//===----------------------------------------------------------------------===//
//
// Future work:
//
// * virtual functions.
//
// Many functions have their address taken by the virtual function table for
// the object they belong to. However, as long as it's only used for a lookup
// and call, this is irrelevant, and we'd like to fold such functions.
//
// * be smarter about bitcasts.
//
// In order to fold functions, we will sometimes add either bitcast instructions
// or bitcast constant expressions. Unfortunately, this can confound further
// analysis since the two functions differ where one has a bitcast and the
// other doesn't. We should learn to look through bitcasts.
//
// * Compare complex types with pointer types inside.
// * Compare cross-reference cases.
// * Compare complex expressions.
//
// All three issues above can be described as the ability to prove that
// fA == fB and fE == fF == fG in the example below:
//
// void fA() {
// fB();
// }
// void fB() {
// fA();
// }
//
// void fE() {
// fF();
// }
// void fF() {
// fG();
// }
// void fG() {
// fE();
// }
//
// The simplest cross-reference case (fA <--> fB) was implemented in previous
// versions of MergeFunctions, though it appeared in only two function pairs
// in the test-suite (which contains >50k functions).
// The ability to detect complex cross-referencing (e.g. A->B->C->D->A)
// could cover many more cases.
//
//===----------------------------------------------------------------------===//
#include "llvm/ADT/Hashing.h"
#include "llvm/ADT/STLExtras.h"
#include "llvm/ADT/SmallSet.h"
#include "llvm/ADT/Statistic.h"
#include "llvm/IR/CallSite.h"
#include "llvm/IR/Constants.h"
#include "llvm/IR/DataLayout.h"
#include "llvm/IR/DebugInfo.h"
#include "llvm/IR/IRBuilder.h"
#include "llvm/IR/Instructions.h"
#include "llvm/IR/IntrinsicInst.h"
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/Module.h"
#include "llvm/IR/ValueHandle.h"
#include "llvm/IR/ValueMap.h"
#include "llvm/Pass.h"
#include "llvm/Support/CommandLine.h"
#include "llvm/Support/Debug.h"
#include "llvm/Support/ErrorHandling.h"
#include "llvm/Support/raw_ostream.h"
#include "llvm/Transforms/IPO.h"
#include "llvm/Transforms/Utils/FunctionComparator.h"
#include <vector>
using namespace llvm;
#define DEBUG_TYPE "mergefunc"
STATISTIC(NumFunctionsMerged, "Number of functions merged");
STATISTIC(NumThunksWritten, "Number of thunks generated");
STATISTIC(NumDoubleWeak, "Number of new functions created");
static cl::opt<unsigned> NumFunctionsForSanityCheck(
"mergefunc-sanity",
cl::desc("How many functions in module could be used for "
"MergeFunctions pass sanity check. "
"'0' disables this check. Works only with '-debug' key."),
cl::init(0), cl::Hidden);
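// For example (illustrative invocation; -debug requires an assertions-enabled
// build), checking the order relation over the first 100 functions:
//   opt -mergefunc -mergefunc-sanity=100 -debug input.bc -o /dev/null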
// Under option -mergefunc-preserve-debug-info we:
// - Do not create a new function for a thunk.
// - Retain the debug info for a thunk's parameters (and associated
// instructions for the debug info) from the entry block.
// Note: -debug will display the algorithm at work.
// - Create debug-info for the call (to the shared implementation) made by
// a thunk and its return value.
// - Erase the rest of the function, retaining the (minimally sized) entry
// block to create a thunk.
// - Preserve a thunk's call site to point to the thunk even when both occur
//   within the same translation unit, to aid debuggability. Note that this
//   behaviour differs from the underlying -mergefunc implementation, which
//   modifies the thunk's call site to point to the shared implementation
//   when both occur within the same translation unit.
static cl::opt<bool>
MergeFunctionsPDI("mergefunc-preserve-debug-info", cl::Hidden,
cl::init(false),
cl::desc("Preserve debug info in thunk when mergefunc "
"transformations are made."));
namespace {
class FunctionNode {
mutable AssertingVH<Function> F;
FunctionComparator::FunctionHash Hash;
public:
// Note the hash is recalculated potentially multiple times, but it is cheap.
FunctionNode(Function *F)
: F(F), Hash(FunctionComparator::functionHash(*F)) {}
Function *getFunc() const { return F; }
FunctionComparator::FunctionHash getHash() const { return Hash; }
/// Replace the reference to the function F by the function G, assuming their
/// implementations are equal.
void replaceBy(Function *G) const {
F = G;
}
void release() { F = nullptr; }
};
/// MergeFunctions finds functions which will generate identical machine code,
/// by considering all pointer types to be equivalent. Once identified,
/// MergeFunctions will fold them by replacing a call to one with a call to a
/// bitcast of the other.
///
class MergeFunctions : public ModulePass {
public:
static char ID;
MergeFunctions()
: ModulePass(ID), FnTree(FunctionNodeCmp(&GlobalNumbers)), FNodesInTree() {
initializeMergeFunctionsPass(*PassRegistry::getPassRegistry());
}
bool runOnModule(Module &M) override;
private:
// The function comparison operator is provided here so that FunctionNodes do
// not need to become larger with another pointer.
class FunctionNodeCmp {
GlobalNumberState* GlobalNumbers;
public:
FunctionNodeCmp(GlobalNumberState* GN) : GlobalNumbers(GN) {}
bool operator()(const FunctionNode &LHS, const FunctionNode &RHS) const {
// Order first by hashes, then full function comparison.
if (LHS.getHash() != RHS.getHash())
return LHS.getHash() < RHS.getHash();
FunctionComparator FCmp(LHS.getFunc(), RHS.getFunc(), GlobalNumbers);
return FCmp.compare() == -1;
}
};
typedef std::set<FunctionNode, FunctionNodeCmp> FnTreeType;
GlobalNumberState GlobalNumbers;
/// A work queue of functions that may have been modified and should be
/// analyzed again.
std::vector<WeakTrackingVH> Deferred;
/// Checks the rules of the order relation introduced among the function set.
/// Returns true if the sanity check has passed, and false if it failed.
#ifndef NDEBUG
bool doSanityCheck(std::vector<WeakTrackingVH> &Worklist);
#endif
/// Insert a Function into the FnTree, or merge it away if it's equal to one
/// that's already present.
bool insert(Function *NewFunction);
/// Remove a Function from the FnTree and queue it up for a second sweep of
/// analysis.
void remove(Function *F);
/// Find the functions that use this Value, remove them from the FnTree, and
/// queue them for reanalysis.
void removeUsers(Value *V);
/// Replace all direct calls of Old with calls of New. Will bitcast New if
/// necessary to make types match.
void replaceDirectCallers(Function *Old, Function *New);
/// Merge two equivalent functions. Upon completion, G may be deleted, or may
/// be converted into a thunk. In either case, it should never be visited
/// again.
void mergeTwoFunctions(Function *F, Function *G);
/// Fill PDIUnrelatedWL with instructions from the entry block that are
/// unrelated to parameter related debug info.
void filterInstsUnrelatedToPDI(BasicBlock *GEntryBlock,
std::vector<Instruction *> &PDIUnrelatedWL);
/// Erase the rest of the CFG (i.e. everything except the entry block).
void eraseTail(Function *G);
/// Erase from the entry block the instructions in PDIUnrelatedWL, as they
/// are unrelated to the parameter debug info.
void eraseInstsUnrelatedToPDI(std::vector<Instruction *> &PDIUnrelatedWL);
/// Replace G with a simple tail call to bitcast(F). Also (unless
/// MergeFunctionsPDI holds) replace direct uses of G with bitcast(F) and
/// delete G.
void writeThunk(Function *F, Function *G);
/// Replace function F with function G in the function tree.
void replaceFunctionInTree(const FunctionNode &FN, Function *G);
/// The set of all distinct functions. Use the insert() and remove() methods
/// to modify it. The map allows efficient lookup and deferring of Functions.
FnTreeType FnTree;
// Map functions to the iterators of the FunctionNode which contains them
// in the FnTree. This must be updated carefully whenever the FnTree is
// modified, i.e. in insert(), remove(), and replaceFunctionInTree(), to avoid
// dangling iterators into FnTree. The invariant that preserves this is that
// there is exactly one mapping F -> FN for each FunctionNode FN in FnTree.
ValueMap<Function*, FnTreeType::iterator> FNodesInTree;
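// For example, when replaceFunctionInTree(FN, G) swaps the function held by
// a node, the stale mapping (old function -> FN) must be erased and G -> FN
// inserted, so that the one-to-one invariant continues to hold.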
};
} // end anonymous namespace
char MergeFunctions::ID = 0;
INITIALIZE_PASS(MergeFunctions, "mergefunc", "Merge Functions", false, false)
ModulePass *llvm::createMergeFunctionsPass() {
return new MergeFunctions();
}
#ifndef NDEBUG
bool MergeFunctions::doSanityCheck(std::vector<WeakTrackingVH> &Worklist) {
if (const unsigned Max = NumFunctionsForSanityCheck) {
unsigned TripleNumber = 0;
bool Valid = true;
dbgs() << "MERGEFUNC-SANITY: Started for first " << Max << " functions.\n";
unsigned i = 0;
for (std::vector<WeakTrackingVH>::iterator I = Worklist.begin(),
E = Worklist.end();
I != E && i < Max; ++I, ++i) {
unsigned j = i;
for (std::vector<WeakTrackingVH>::iterator J = I; J != E && j < Max;
++J, ++j) {
Function *F1 = cast<Function>(*I);
Function *F2 = cast<Function>(*J);
int Res1 = FunctionComparator(F1, F2, &GlobalNumbers).compare();
int Res2 = FunctionComparator(F2, F1, &GlobalNumbers).compare();
// If F1 <= F2, then F2 >= F1, otherwise report failure.
if (Res1 != -Res2) {
dbgs() << "MERGEFUNC-SANITY: Non-symmetric; triple: " << TripleNumber
<< "\n";
dbgs() << *F1 << '\n' << *F2 << '\n';
Valid = false;
}
if (Res1 == 0)
continue;
unsigned k = j;
for (std::vector<WeakTrackingVH>::iterator K = J; K != E && k < Max;
++k, ++K, ++TripleNumber) {
if (K == J)
continue;
Function *F3 = cast<Function>(*K);
int Res3 = FunctionComparator(F1, F3, &GlobalNumbers).compare();
int Res4 = FunctionComparator(F2, F3, &GlobalNumbers).compare();
bool Transitive = true;
if (Res1 != 0 && Res1 == Res4) {
// F1 > F2, F2 > F3 => F1 > F3
Transitive = Res3 == Res1;
} else if (Res3 != 0 && Res3 == -Res4) {
// F1 > F3, F3 > F2 => F1 > F2
Transitive = Res3 == Res1;
} else if (Res4 != 0 && -Res3 == Res4) {
// F2 > F3, F3 > F1 => F2 > F1
Transitive = Res4 == -Res1;
}
if (!Transitive) {
dbgs() << "MERGEFUNC-SANITY: Non-transitive; triple: "
<< TripleNumber << "\n";
dbgs() << "Res1, Res3, Res4: " << Res1 << ", " << Res3 << ", "
<< Res4 << "\n";
dbgs() << *F1 << '\n' << *F2 << '\n' << *F3 << '\n';
Valid = false;
}
}
}
}
dbgs() << "MERGEFUNC-SANITY: " << (Valid ? "Passed." : "Failed.") << "\n";
return Valid;
}
return true;
}
#endif
bool MergeFunctions::runOnModule(Module &M) {
if (skipModule(M))
return false;
bool Changed = false;
// All functions in the module, ordered by hash. Functions with a unique
// hash value are easily eliminated.
std::vector<std::pair<FunctionComparator::FunctionHash, Function *>>
HashedFuncs;
for (Function &Func : M) {
if (!Func.isDeclaration() && !Func.hasAvailableExternallyLinkage()) {
HashedFuncs.push_back({FunctionComparator::functionHash(Func), &Func});
}
}
std::stable_sort(
HashedFuncs.begin(), HashedFuncs.end(),
[](const std::pair<FunctionComparator::FunctionHash, Function *> &a,
const std::pair<FunctionComparator::FunctionHash, Function *> &b) {
return a.first < b.first;
});
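// Note: std::stable_sort (rather than std::sort) preserves the original
// module order among functions with equal hashes, so the worklist built
// below, and therefore the pass output, is deterministic across runs.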
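// For example, if the sorted hash sequence is {h1, h1, h2, h3, h3, h3}, the
// function with the unique hash h2 is dropped here, while the h1 and h3
// groups are deferred for full comparison.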
auto S = HashedFuncs.begin();
for (auto I = HashedFuncs.begin(), IE = HashedFuncs.end(); I != IE; ++I) {
// If the hash value matches the previous value or the next one, we must
// consider merging it. Otherwise it is dropped and never considered again.
if ((I != S && std::prev(I)->first == I->first) ||
(std::next(I) != IE && std::next(I)->first == I->first) ) {
Deferred.push_back(WeakTrackingVH(I->second));
}
}
do {
std::vector<WeakTrackingVH> Worklist;
Deferred.swap(Worklist);
DEBUG(doSanityCheck(Worklist));
DEBUG(dbgs() << "size of module: " << M.size() << '\n');
DEBUG(dbgs() << "size of worklist: " << Worklist.size() << '\n');
// Insert functions and merge them.
for (WeakTrackingVH &I : Worklist) {
if (!I)
continue;
Function *F = cast<Function>(I);
if (!F->isDeclaration() && !F->hasAvailableExternallyLinkage()) {
Changed |= insert(F);
}
}
DEBUG(dbgs() << "size of FnTree: " << FnTree.size() << '\n');
} while (!Deferred.empty());
FnTree.clear();
GlobalNumbers.clear();
return Changed;
}
// Replace direct callers of Old with New.
void MergeFunctions::replaceDirectCallers(Function *Old, Function *New) {
Constant *BitcastNew = ConstantExpr::getBitCast(New, Old->getType());
for (auto UI = Old->use_begin(), UE = Old->use_end(); UI != UE;) {
Use *U = &*UI;
++UI;
CallSite CS(U->getUser());
if (CS && CS.isCallee(U)) {
// Transfer the called function's attributes to the call site. Due to the
// bitcast we will 'lose' ABI changing attributes because the 'called
// function' is no longer a Function* but the bitcast. Code that looks up
// the attributes from the called function will fail.
// FIXME: This is not actually true, at least not anymore. The callsite
// will always have the same ABI affecting attributes as the callee,
// because otherwise the original input has UB. Note that Old and New
// always have matching ABI, so no attributes need to be changed.
// Transferring other attributes may help other optimizations, but that
// should be done uniformly and not in this ad-hoc way.
auto &Context = New->getContext();
auto NewPAL = New->getAttributes();
SmallVector<AttributeSet, 4> NewArgAttrs;
for (unsigned argIdx = 0; argIdx < CS.arg_size(); argIdx++)
NewArgAttrs.push_back(NewPAL.getParamAttributes(argIdx));
// Don't transfer attributes from the function to the callee. Function
// attributes typically aren't relevant to the calling convention or ABI.
CS.setAttributes(AttributeList::get(Context, /*FnAttrs=*/AttributeSet(),
NewPAL.getRetAttributes(),
NewArgAttrs));
remove(CS.getInstruction()->getParent()->getParent());
U->set(BitcastNew);
}
}
}
// Helper for writeThunk.
// Selects the proper cast operation,
// but is a bit simpler than CastInst::getCastOpcode.
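// For example (hypothetical types): casting a { i8*, i64 } value to
// { i32*, i64 } extracts both elements, bitcasts the i8* element to i32*,
// leaves the i64 element as-is, and reinserts them into an undef
// { i32*, i64 } aggregate; an i64 element with an i8* destination would use
// inttoptr instead.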
static Value *createCast(IRBuilder<> &Builder, Value *V, Type *DestTy) {
Type *SrcTy = V->getType();
if (SrcTy->isStructTy()) {
assert(DestTy->isStructTy());
assert(SrcTy->getStructNumElements() == DestTy->getStructNumElements());
Value *Result = UndefValue::get(DestTy);
for (unsigned int I = 0, E = SrcTy->getStructNumElements(); I < E; ++I) {
Value *Element = createCast(
Builder, Builder.CreateExtractValue(V, makeArrayRef(I)),
DestTy->getStructElementType(I));
Result =
Builder.CreateInsertValue(Result, Element, makeArrayRef(I));
}
return Result;
}
assert(!DestTy->isStructTy());
if (SrcTy->isIntegerTy() && DestTy->isPointerTy())
return Builder.CreateIntToPtr(V, DestTy);
else if (SrcTy->isPointerTy() && DestTy->isIntegerTy())
return Builder.CreatePtrToInt(V, DestTy);
else
return Builder.CreateBitCast(V, DestTy);
}
// Erase from the entry block the instructions in PDIUnrelatedWL, as they are
// unrelated to the parameter debug info.
void MergeFunctions::eraseInstsUnrelatedToPDI(
std::vector<Instruction *> &PDIUnrelatedWL) {
DEBUG(dbgs() << " Erasing instructions (in reverse order of appearance in "
"entry block) unrelated to parameter debug info from entry "
"block: {\n");
while (!PDIUnrelatedWL.empty()) {
Instruction *I = PDIUnrelatedWL.back();
DEBUG(dbgs() << " Deleting Instruction: ");
DEBUG(I->print(dbgs()));
DEBUG(dbgs() << "\n");
I->eraseFromParent();
PDIUnrelatedWL.pop_back();
}
DEBUG(dbgs() << " } // Done erasing instructions unrelated to parameter "
"debug info from entry block. \n");
}
// Reduce G to its entry block.
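// References are dropped from all non-entry blocks before any block is
// erased, so blocks that branch to one another can be deleted without
// leaving dangling uses behind.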
void MergeFunctions::eraseTail(Function *G) {
std::vector<BasicBlock *> WorklistBB;
for (Function::iterator BBI = std::next(G->begin()), BBE = G->end();
BBI != BBE; ++BBI) {
BBI->dropAllReferences();
WorklistBB.push_back(&*BBI);
}
while (!WorklistBB.empty()) {
BasicBlock *BB = WorklistBB.back();
BB->eraseFromParent();
WorklistBB.pop_back();
}
}
// We consider the following instructions from the entry block to be related
// to parameter debug info:
// - @llvm.dbg.declare
// - stores from the incoming parameters to locations on the stack-frame
// - allocas that create these locations on the stack-frame
// - @llvm.dbg.value
// - the entry block's terminator
// The rest are unrelated to debug info for the parameters; fill up
// PDIUnrelatedWL with such instructions.
void MergeFunctions::filterInstsUnrelatedToPDI(
BasicBlock *GEntryBlock, std::vector<Instruction *> &PDIUnrelatedWL) {
std::set<Instruction *> PDIRelated;
for (BasicBlock::iterator BI = GEntryBlock->begin(), BIE = GEntryBlock->end();
BI != BIE; ++BI) {
if (auto *DVI = dyn_cast<DbgValueInst>(&*BI)) {
DEBUG(dbgs() << " Deciding: ");
DEBUG(BI->print(dbgs()));
DEBUG(dbgs() << "\n");
DILocalVariable *DILocVar = DVI->getVariable();
if (DILocVar->isParameter()) {
DEBUG(dbgs() << " Include (parameter): ");
DEBUG(BI->print(dbgs()));
DEBUG(dbgs() << "\n");
PDIRelated.insert(&*BI);
} else {
DEBUG(dbgs() << " Delete (!parameter): ");
DEBUG(BI->print(dbgs()));
DEBUG(dbgs() << "\n");
}
} else if (auto *DDI = dyn_cast<DbgDeclareInst>(&*BI)) {
DEBUG(dbgs() << " Deciding: ");
DEBUG(BI->print(dbgs()));
DEBUG(dbgs() << "\n");
DILocalVariable *DILocVar = DDI->getVariable();
if (DILocVar->isParameter()) {
DEBUG(dbgs() << " Parameter: ");
DEBUG(DILocVar->print(dbgs()));
AllocaInst *AI = dyn_cast_or_null<AllocaInst>(DDI->getAddress());
if (AI) {
DEBUG(dbgs() << " Processing alloca users: ");
DEBUG(dbgs() << "\n");
for (User *U : AI->users()) {
if (StoreInst *SI = dyn_cast<StoreInst>(U)) {
if (Value *Arg = SI->getValueOperand()) {
if (dyn_cast<Argument>(Arg)) {
DEBUG(dbgs() << " Include: ");
DEBUG(AI->print(dbgs()));
DEBUG(dbgs() << "\n");
PDIRelated.insert(AI);
DEBUG(dbgs() << " Include (parameter): ");
DEBUG(SI->print(dbgs()));
DEBUG(dbgs() << "\n");
PDIRelated.insert(SI);
DEBUG(dbgs() << " Include: ");
DEBUG(BI->print(dbgs()));
DEBUG(dbgs() << "\n");
PDIRelated.insert(&*BI);
} else {
DEBUG(dbgs() << " Delete (!parameter): ");
DEBUG(SI->print(dbgs()));
DEBUG(dbgs() << "\n");
}
}
} else {
DEBUG(dbgs() << " Defer: ");
DEBUG(U->print(dbgs()));
DEBUG(dbgs() << "\n");
}
}
} else {
DEBUG(dbgs() << " Delete (alloca NULL): ");
DEBUG(BI->print(dbgs()));
DEBUG(dbgs() << "\n");
}
} else {
DEBUG(dbgs() << " Delete (!parameter): ");
DEBUG(BI->print(dbgs()));
DEBUG(dbgs() << "\n");
}
} else if (dyn_cast<TerminatorInst>(BI) == GEntryBlock->getTerminator()) {
DEBUG(dbgs() << " Will Include Terminator: ");
DEBUG(BI->print(dbgs()));
DEBUG(dbgs() << "\n");
PDIRelated.insert(&*BI);
} else {
DEBUG(dbgs() << " Defer: ");
DEBUG(BI->print(dbgs()));
DEBUG(dbgs() << "\n");
}
}
DEBUG(dbgs()
<< " Report parameter debug info related/unrelated instructions: {\n");
for (BasicBlock::iterator BI = GEntryBlock->begin(), BE = GEntryBlock->end();
BI != BE; ++BI) {
Instruction *I = &*BI;
if (PDIRelated.find(I) == PDIRelated.end()) {
DEBUG(dbgs() << " !PDIRelated: ");
DEBUG(I->print(dbgs()));
DEBUG(dbgs() << "\n");
PDIUnrelatedWL.push_back(I);
} else {
DEBUG(dbgs() << " PDIRelated: ");
DEBUG(I->print(dbgs()));
DEBUG(dbgs() << "\n");
}
}
DEBUG(dbgs() << " }\n");
}
// Replace G with a simple tail call to bitcast(F). Also (unless
// MergeFunctionsPDI holds) replace direct uses of G with bitcast(F) and
// delete G. Under MergeFunctionsPDI, we use G itself for creating
// the thunk as we preserve the debug info (and associated instructions)
// from G's entry block pertaining to G's incoming arguments which are
// passed on as corresponding arguments in the call that G makes to F.
// For better debuggability, under MergeFunctionsPDI, we do not modify G's
// call sites to point to F even when within the same translation unit.
void MergeFunctions::writeThunk(Function *F, Function *G) {
if (!G->isInterposable() && !MergeFunctionsPDI) {
if (G->hasGlobalUnnamedAddr()) {
// If G's address is not significant, replace it entirely.
Constant *BitcastF = ConstantExpr::getBitCast(F, G->getType());
G->replaceAllUsesWith(BitcastF);
} else {
// Redirect direct callers of G to F. (See note on MergeFunctionsPDI
// above).
replaceDirectCallers(G, F);
}
}
// If G was internal then we may have replaced all uses of G with F. If so,
// stop here and delete G. There's no need for a thunk. (See note on
// MergeFunctionsPDI above).
if (G->hasLocalLinkage() && G->use_empty() && !MergeFunctionsPDI) {
G->eraseFromParent();
return;
}
BasicBlock *GEntryBlock = nullptr;
std::vector<Instruction *> PDIUnrelatedWL;
BasicBlock *BB = nullptr;
Function *NewG = nullptr;
if (MergeFunctionsPDI) {
DEBUG(dbgs() << "writeThunk: (MergeFunctionsPDI) Do not create a new "
"function as thunk; retain original: "
<< G->getName() << "()\n");
GEntryBlock = &G->getEntryBlock();
DEBUG(dbgs() << "writeThunk: (MergeFunctionsPDI) filter parameter related "
"debug info for "
<< G->getName() << "() {\n");
filterInstsUnrelatedToPDI(GEntryBlock, PDIUnrelatedWL);
GEntryBlock->getTerminator()->eraseFromParent();
BB = GEntryBlock;
} else {
NewG = Function::Create(G->getFunctionType(), G->getLinkage(), "",
G->getParent());
BB = BasicBlock::Create(F->getContext(), "", NewG);
}
IRBuilder<> Builder(BB);
Function *H = MergeFunctionsPDI ? G : NewG;
SmallVector<Value *, 16> Args;
unsigned i = 0;
FunctionType *FFTy = F->getFunctionType();
for (Argument & AI : H->args()) {
Args.push_back(createCast(Builder, &AI, FFTy->getParamType(i)));
++i;
}
CallInst *CI = Builder.CreateCall(F, Args);
ReturnInst *RI = nullptr;
CI->setTailCall();
CI->setCallingConv(F->getCallingConv());
CI->setAttributes(F->getAttributes());
if (H->getReturnType()->isVoidTy()) {
RI = Builder.CreateRetVoid();
} else {
RI = Builder.CreateRet(createCast(Builder, CI, H->getReturnType()));
}
if (MergeFunctionsPDI) {
DISubprogram *DIS = G->getSubprogram();
if (DIS) {
DebugLoc CIDbgLoc = DebugLoc::get(DIS->getScopeLine(), 0, DIS);
DebugLoc RIDbgLoc = DebugLoc::get(DIS->getScopeLine(), 0, DIS);
CI->setDebugLoc(CIDbgLoc);
RI->setDebugLoc(RIDbgLoc);
} else {
DEBUG(dbgs() << "writeThunk: (MergeFunctionsPDI) No DISubprogram for "
<< G->getName() << "()\n");
}
eraseTail(G);
eraseInstsUnrelatedToPDI(PDIUnrelatedWL);
DEBUG(dbgs() << "} // End of parameter related debug info filtering for: "
<< G->getName() << "()\n");
} else {
NewG->copyAttributesFrom(G);
NewG->takeName(G);
removeUsers(G);
G->replaceAllUsesWith(NewG);
G->eraseFromParent();
}
DEBUG(dbgs() << "writeThunk: " << H->getName() << '\n');
++NumThunksWritten;
}
// Merge two equivalent functions. Upon completion, Function G is deleted (or,
// under -mergefunc-preserve-debug-info, rewritten in place as a thunk).
void MergeFunctions::mergeTwoFunctions(Function *F, Function *G) {
if (F->isInterposable()) {
assert(G->isInterposable());
// Make them both thunks to the same internal function.
Function *H = Function::Create(F->getFunctionType(), F->getLinkage(), "",
F->getParent());
H->copyAttributesFrom(F);
H->takeName(F);
removeUsers(F);
F->replaceAllUsesWith(H);
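// F keeps the shared body: give it the stricter of the two alignments (in
// case callers of either original relied on it) and private linkage, since
// H has taken over F's name and external uses.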
unsigned MaxAlignment = std::max(G->getAlignment(), H->getAlignment());
writeThunk(F, G);
writeThunk(F, H);
F->setAlignment(MaxAlignment);
F->setLinkage(GlobalValue::PrivateLinkage);
++NumDoubleWeak;
} else {
writeThunk(F, G);
}
++NumFunctionsMerged;
}
/// Replace function F by function G in the FnTree and the FNodesInTree map.
void MergeFunctions::replaceFunctionInTree(const FunctionNode &FN,
Function *G) {
Function *F = FN.getFunc();
assert(FunctionComparator(F, G, &GlobalNumbers).compare() == 0 &&
"The two functions must be equal");
auto I = FNodesInTree.find(F);
assert(I != FNodesInTree.end() && "F should be in FNodesInTree");
assert(FNodesInTree.count(G) == 0 && "FNodesInTree should not contain G");
FnTreeType::iterator IterToFNInFnTree = I->second;
assert(&(*IterToFNInFnTree) == &FN && "F should map to FN in FNodesInTree.");
// Remove F -> FN and insert G -> FN
FNodesInTree.erase(I);
FNodesInTree.insert({G, IterToFNInFnTree});
// Replace F with G in FN, which is stored inside the FnTree.
FN.replaceBy(G);
}
// Insert a FunctionNode for NewFunction into the FnTree, or merge
// NewFunction away if it is equal to a function that was already inserted.
bool MergeFunctions::insert(Function *NewFunction) {
std::pair<FnTreeType::iterator, bool> Result =
FnTree.insert(FunctionNode(NewFunction));
if (Result.second) {
assert(FNodesInTree.count(NewFunction) == 0);
FNodesInTree.insert({NewFunction, Result.first});
DEBUG(dbgs() << "Inserting as unique: " << NewFunction->getName() << '\n');
return false;
}
const FunctionNode &OldF = *Result.first;
// Don't merge tiny functions, since doing so can just end up making the
// function larger.
// FIXME: Should still merge them if they are unnamed_addr and produce an
// alias.
if (NewFunction->size() == 1) {
if (NewFunction->front().size() <= 2) {
DEBUG(dbgs() << NewFunction->getName()
<< " is to small to bother merging\n");
return false;
}
}
// Impose a total order (by name) on the replacement of functions. This is
// important when operating on more than one module independently to prevent
// cycles of thunks calling each other when the modules are linked together.
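// For example, if one module merged f() into g() while another module
// merged g() into f(), linking the two would produce a pair of thunks that
// call each other.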
//
// First of all, we process strong functions before weak functions.
if ((OldF.getFunc()->isInterposable() && !NewFunction->isInterposable()) ||
(OldF.getFunc()->isInterposable() == NewFunction->isInterposable() &&
OldF.getFunc()->getName() > NewFunction->getName())) {
// Swap the two functions.
Function *F = OldF.getFunc();
replaceFunctionInTree(*Result.first, NewFunction);
NewFunction = F;
assert(OldF.getFunc() != F && "Must have swapped the functions.");
}
DEBUG(dbgs() << " " << OldF.getFunc()->getName()
<< " == " << NewFunction->getName() << '\n');
Function *DeleteF = NewFunction;
mergeTwoFunctions(OldF.getFunc(), DeleteF);
return true;
}
// Remove a function from FnTree. If it was already in FnTree, add
// it to Deferred so that we'll look at it in the next round.
void MergeFunctions::remove(Function *F) {
auto I = FNodesInTree.find(F);
if (I != FNodesInTree.end()) {
DEBUG(dbgs() << "Deferred " << F->getName()<< ".\n");
FnTree.erase(I->second);
// I->second has been invalidated, remove it from the FNodesInTree map to
// preserve the invariant.
FNodesInTree.erase(I);
Deferred.emplace_back(F);
}
}
// For each instruction that uses the value (possibly through a constant),
// remove() the function that contains that instruction. This should happen
// right before a call to RAUW.
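// Removing those functions from FnTree is required because the RAUW changes
// their bodies and thereby invalidates earlier comparison results; they are
// pushed onto Deferred and re-examined on the next round.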
void MergeFunctions::removeUsers(Value *V) {
std::vector<Value *> Worklist;
Worklist.push_back(V);
SmallSet<Value*, 8> Visited;
Visited.insert(V);
while (!Worklist.empty()) {
Value *V = Worklist.back();
Worklist.pop_back();
for (User *U : V->users()) {
if (Instruction *I = dyn_cast<Instruction>(U)) {
remove(I->getParent()->getParent());
} else if (isa<GlobalValue>(U)) {
// do nothing
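// Constants (e.g. a bitcast of G used in a global initializer) are not
// contained in any function; walk their users instead, queueing each user
// the first time it is seen.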
} else if (Constant *C = dyn_cast<Constant>(U)) {
for (User *UU : C->users()) {
if (Visited.insert(UU).second)
Worklist.push_back(UU);
}
}
}
}
}