//===- SelectionDAGBuilder.h - Selection-DAG building -----------*- C++ -*-===//
//
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
//
//===----------------------------------------------------------------------===//
//
// This implements routines for translating from LLVM IR into SelectionDAG IR.
//
//===----------------------------------------------------------------------===//
#ifndef LLVM_LIB_CODEGEN_SELECTIONDAG_SELECTIONDAGBUILDER_H
#define LLVM_LIB_CODEGEN_SELECTIONDAG_SELECTIONDAGBUILDER_H

#include "StatepointLowering.h"
#include "llvm/ADT/APInt.h"
#include "llvm/ADT/ArrayRef.h"
#include "llvm/ADT/DenseMap.h"
#include "llvm/ADT/SmallVector.h"
#include "llvm/Analysis/AliasAnalysis.h"
#include "llvm/CodeGen/ISDOpcodes.h"
#include "llvm/CodeGen/SelectionDAG.h"
#include "llvm/CodeGen/SelectionDAGNodes.h"
#include "llvm/CodeGen/TargetLowering.h"
#include "llvm/CodeGen/ValueTypes.h"
#include "llvm/IR/CallSite.h"
#include "llvm/IR/DebugLoc.h"
#include "llvm/IR/Instruction.h"
#include "llvm/IR/Statepoint.h"
#include "llvm/Support/BranchProbability.h"
#include "llvm/Support/CodeGen.h"
#include "llvm/Support/ErrorHandling.h"
#include "llvm/Support/MachineValueType.h"
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <utility>
#include <vector>

namespace llvm {

class AllocaInst;
class AtomicCmpXchgInst;
class AtomicRMWInst;
class BasicBlock;
class BranchInst;
class CallInst;
class CallBrInst;
class CatchPadInst;
class CatchReturnInst;
class CatchSwitchInst;
class CleanupPadInst;
class CleanupReturnInst;
class Constant;
class ConstantInt;
class ConstrainedFPIntrinsic;
class DbgValueInst;
class DataLayout;
class DIExpression;
class DILocalVariable;
class DILocation;
class FenceInst;
class FunctionLoweringInfo;
class GCFunctionInfo;
class GCRelocateInst;
class GCResultInst;
class IndirectBrInst;
class InvokeInst;
class LandingPadInst;
class LLVMContext;
class LoadInst;
class MachineBasicBlock;
class PHINode;
class ResumeInst;
class ReturnInst;
class SDDbgValue;
class StoreInst;
class SwitchInst;
class TargetLibraryInfo;
class TargetMachine;
class Type;
class VAArgInst;
class UnreachableInst;
class Use;
class User;
class Value;

//===----------------------------------------------------------------------===//
/// SelectionDAGBuilder - This is the common target-independent lowering
/// implementation that is parameterized by a TargetLowering object.
///
class SelectionDAGBuilder {
  /// CurInst - The current instruction being visited.
  const Instruction *CurInst = nullptr;

  DenseMap<const Value*, SDValue> NodeMap;

  /// UnusedArgNodeMap - Maps argument values for unused arguments. This is
  /// used to preserve debug information for incoming arguments.
  DenseMap<const Value*, SDValue> UnusedArgNodeMap;

  /// DanglingDebugInfo - Helper type for DanglingDebugInfoMap.
  class DanglingDebugInfo {
    const DbgValueInst* DI = nullptr;
    DebugLoc dl;
    unsigned SDNodeOrder = 0;

  public:
    DanglingDebugInfo() = default;
    DanglingDebugInfo(const DbgValueInst *di, DebugLoc DL, unsigned SDNO)
        : DI(di), dl(std::move(DL)), SDNodeOrder(SDNO) {}

    const DbgValueInst* getDI() { return DI; }
    DebugLoc getdl() { return dl; }
    unsigned getSDNodeOrder() { return SDNodeOrder; }
  };

  /// DanglingDebugInfoVector - Helper type for DanglingDebugInfoMap.
  using DanglingDebugInfoVector = std::vector<DanglingDebugInfo>;

  /// DanglingDebugInfoMap - Keeps track of dbg_values for which we have not
  /// yet seen the referent. We defer handling these until we do see it.
  DenseMap<const Value*, DanglingDebugInfoVector> DanglingDebugInfoMap;

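  // Illustrative sketch (comment only, not part of the interface): a
  // dbg.value intrinsic can be visited before its referent has been lowered
  // to an SDValue, e.g. when the value is produced by an instruction that
  // has not been translated yet. The builder then cannot emit an SDDbgValue
  // immediately; conceptually it does
  //
  //   if (!NodeMap.count(V))                       // referent not lowered yet
  //     DanglingDebugInfoMap[V].push_back(
  //         DanglingDebugInfo(DI, dl, SDNodeOrder));
  //
  // and resolves the deferred entries once V receives an SDValue.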
public:
  /// PendingLoads - Loads are not emitted to the program immediately. We bunch
  /// them up and then emit token factor nodes when possible. This allows us to
  /// get simple disambiguation between loads without worrying about alias
  /// analysis.
  SmallVector<SDValue, 8> PendingLoads;

  /// State used while lowering a statepoint sequence (gc_statepoint,
  /// gc_relocate, and gc_result). See StatepointLowering.h/.cpp for details.
  StatepointLoweringState StatepointLowering;

private:
  /// PendingExports - CopyToReg nodes that copy values to virtual registers
  /// for export to other blocks need to be emitted before any terminator
  /// instruction, but they have no other ordering requirements. We bunch them
  /// up and then emit a single token factor for them just before terminator
  /// instructions.
  SmallVector<SDValue, 8> PendingExports;

  /// SDNodeOrder - A unique monotonically increasing number used to order the
  /// SDNodes we create.
  unsigned SDNodeOrder;

  enum CaseClusterKind {
    /// A cluster of adjacent case labels with the same destination, or just one
    /// case.
    CC_Range,

    /// A cluster of cases suitable for jump table lowering.
    CC_JumpTable,

    /// A cluster of cases suitable for bit test lowering.
    CC_BitTests
  };

  /// A cluster of case labels.
  struct CaseCluster {
    CaseClusterKind Kind;
    const ConstantInt *Low, *High;
    union {
      MachineBasicBlock *MBB;
      unsigned JTCasesIndex;
      unsigned BTCasesIndex;
    };
    BranchProbability Prob;

    static CaseCluster range(const ConstantInt *Low, const ConstantInt *High,
                             MachineBasicBlock *MBB, BranchProbability Prob) {
      CaseCluster C;
      C.Kind = CC_Range;
      C.Low = Low;
      C.High = High;
      C.MBB = MBB;
      C.Prob = Prob;
      return C;
    }

    static CaseCluster jumpTable(const ConstantInt *Low,
                                 const ConstantInt *High, unsigned JTCasesIndex,
                                 BranchProbability Prob) {
      CaseCluster C;
      C.Kind = CC_JumpTable;
      C.Low = Low;
      C.High = High;
      C.JTCasesIndex = JTCasesIndex;
      C.Prob = Prob;
      return C;
    }

    static CaseCluster bitTests(const ConstantInt *Low, const ConstantInt *High,
                                unsigned BTCasesIndex, BranchProbability Prob) {
      CaseCluster C;
      C.Kind = CC_BitTests;
      C.Low = Low;
      C.High = High;
      C.BTCasesIndex = BTCasesIndex;
      C.Prob = Prob;
      return C;
    }
  };

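  // Illustrative sketch (comment only): a switch with cases 0..3 -> BB1 and
  // 10 -> BB2 might initially be represented as two CC_Range clusters
  // (operand names below are hypothetical):
  //
  //   CaseCluster R = CaseCluster::range(CI0, CI3, BB1,
  //                                      BranchProbability(4, 5));
  //   CaseCluster S = CaseCluster::range(CI10, CI10, BB2,
  //                                      BranchProbability(1, 5));
  //
  // Later passes over the cluster vector may replace dense runs of such
  // ranges with a single CC_JumpTable or CC_BitTests cluster.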
  using CaseClusterVector = std::vector<CaseCluster>;
  using CaseClusterIt = CaseClusterVector::iterator;

  struct CaseBits {
    uint64_t Mask = 0;
    MachineBasicBlock* BB = nullptr;
    unsigned Bits = 0;
    BranchProbability ExtraProb;

    CaseBits() = default;
    CaseBits(uint64_t mask, MachineBasicBlock* bb, unsigned bits,
             BranchProbability Prob):
      Mask(mask), BB(bb), Bits(bits), ExtraProb(Prob) {}
  };

  using CaseBitsVector = std::vector<CaseBits>;

  /// Sort Clusters and merge adjacent cases.
  void sortAndRangeify(CaseClusterVector &Clusters);

  /// CaseBlock - This structure is used to communicate between
  /// SelectionDAGBuilder and SDISel for the code generation of additional basic
  /// blocks needed by multi-case switch statements.
  struct CaseBlock {
    // CC - the condition code to use for the case block's setcc node.
    ISD::CondCode CC;

    // CmpLHS/CmpMHS/CmpRHS - The LHS/MHS/RHS of the comparison to emit.
    // Emit by default LHS op RHS. MHS is used for range comparisons:
    // If MHS is not null: (LHS <= MHS) and (MHS <= RHS).
    const Value *CmpLHS, *CmpMHS, *CmpRHS;

    // TrueBB/FalseBB - the block to branch to if the setcc is true/false.
    MachineBasicBlock *TrueBB, *FalseBB;

    // ThisBB - the block into which to emit the code for the setcc and
    // branches.
    MachineBasicBlock *ThisBB;

    /// The debug location of the instruction this CaseBlock was
    /// produced from.
    SDLoc DL;

    // TrueProb/FalseProb - branch weights.
    BranchProbability TrueProb, FalseProb;

    CaseBlock(ISD::CondCode cc, const Value *cmplhs, const Value *cmprhs,
              const Value *cmpmiddle, MachineBasicBlock *truebb,
              MachineBasicBlock *falsebb, MachineBasicBlock *me, SDLoc dl,
              BranchProbability trueprob = BranchProbability::getUnknown(),
              BranchProbability falseprob = BranchProbability::getUnknown())
        : CC(cc), CmpLHS(cmplhs), CmpMHS(cmpmiddle), CmpRHS(cmprhs),
          TrueBB(truebb), FalseBB(falsebb), ThisBB(me), DL(dl),
          TrueProb(trueprob), FalseProb(falseprob) {}
  };

  struct JumpTable {
    /// Reg - the virtual register containing the index of the jump table entry
    /// to jump to.
    unsigned Reg;

    /// JTI - the JumpTableIndex for this jump table in the function.
    unsigned JTI;

    /// MBB - the MBB into which to emit the code for the indirect jump.
    MachineBasicBlock *MBB;

    /// Default - the MBB of the default bb, which is a successor of the range
    /// check MBB. This is used when updating PHI nodes in successors.
    MachineBasicBlock *Default;

    JumpTable(unsigned R, unsigned J, MachineBasicBlock *M,
              MachineBasicBlock *D): Reg(R), JTI(J), MBB(M), Default(D) {}
  };

  struct JumpTableHeader {
    APInt First;
    APInt Last;
    const Value *SValue;
    MachineBasicBlock *HeaderBB;
    bool Emitted;

    JumpTableHeader(APInt F, APInt L, const Value *SV, MachineBasicBlock *H,
                    bool E = false)
        : First(std::move(F)), Last(std::move(L)), SValue(SV), HeaderBB(H),
          Emitted(E) {}
  };
  using JumpTableBlock = std::pair<JumpTableHeader, JumpTable>;

  struct BitTestCase {
    uint64_t Mask;
    MachineBasicBlock *ThisBB;
    MachineBasicBlock *TargetBB;
    BranchProbability ExtraProb;

    BitTestCase(uint64_t M, MachineBasicBlock* T, MachineBasicBlock* Tr,
                BranchProbability Prob):
      Mask(M), ThisBB(T), TargetBB(Tr), ExtraProb(Prob) {}
  };

  using BitTestInfo = SmallVector<BitTestCase, 3>;

  struct BitTestBlock {
    APInt First;
    APInt Range;
    const Value *SValue;
    unsigned Reg;
    MVT RegVT;
    bool Emitted;
    bool ContiguousRange;
    MachineBasicBlock *Parent;
    MachineBasicBlock *Default;
    BitTestInfo Cases;
    BranchProbability Prob;
    BranchProbability DefaultProb;

    BitTestBlock(APInt F, APInt R, const Value *SV, unsigned Rg, MVT RgVT,
                 bool E, bool CR, MachineBasicBlock *P, MachineBasicBlock *D,
                 BitTestInfo C, BranchProbability Pr)
        : First(std::move(F)), Range(std::move(R)), SValue(SV), Reg(Rg),
          RegVT(RgVT), Emitted(E), ContiguousRange(CR), Parent(P), Default(D),
          Cases(std::move(C)), Prob(Pr) {}
  };

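  // Illustrative example of the transformation BitTestBlock describes
  // (comment only): cases {0, 2, 5} all branching to the same block can be
  // decided with a single test, roughly
  //
  //   SwitchVal in [0, 63]?              // range check in the Parent block
  //   Mask = 1 << SwitchVal;
  //   if (Mask & 0b100101) goto TargetBB; else goto Default;
  //
  // where 0b100101, with bits 0, 2 and 5 set, corresponds to
  // BitTestCase::Mask.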
  /// Return the range of values in [First..Last].
  uint64_t getJumpTableRange(const CaseClusterVector &Clusters, unsigned First,
                             unsigned Last) const;

  /// Return the number of cases in [First..Last].
  uint64_t getJumpTableNumCases(const SmallVectorImpl<unsigned> &TotalCases,
                                unsigned First, unsigned Last) const;

  /// Build a jump table cluster from Clusters[First..Last]. Returns false if it
  /// decides it's not a good idea.
  bool buildJumpTable(const CaseClusterVector &Clusters, unsigned First,
                      unsigned Last, const SwitchInst *SI,
                      MachineBasicBlock *DefaultMBB, CaseCluster &JTCluster);

  /// Find clusters of cases suitable for jump table lowering.
  void findJumpTables(CaseClusterVector &Clusters, const SwitchInst *SI,
                      MachineBasicBlock *DefaultMBB);

  /// Build a bit test cluster from Clusters[First..Last]. Returns false if it
  /// decides it's not a good idea.
  bool buildBitTests(CaseClusterVector &Clusters, unsigned First, unsigned Last,
                     const SwitchInst *SI, CaseCluster &BTCluster);

  /// Find clusters of cases suitable for bit test lowering.
  void findBitTestClusters(CaseClusterVector &Clusters, const SwitchInst *SI);

  struct SwitchWorkListItem {
    MachineBasicBlock *MBB;
    CaseClusterIt FirstCluster;
    CaseClusterIt LastCluster;
    const ConstantInt *GE;
    const ConstantInt *LT;
    BranchProbability DefaultProb;
  };
  using SwitchWorkList = SmallVector<SwitchWorkListItem, 4>;

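  // Illustrative sketch (comment only; the name TotalProb is hypothetical):
  // switch lowering seeds the worklist with the whole cluster vector and the
  // switch's own block, e.g.
  //
  //   SwitchWorkList WorkList;
  //   WorkList.push_back({SwitchMBB, Clusters.begin(), Clusters.end() - 1,
  //                       nullptr, nullptr, TotalProb});
  //
  // lowerWorkItem() then either emits comparisons for a small item or calls
  // splitWorkItem() to divide it into the two subtrees of a binary search.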
  /// Determine the rank by weight of CC in [First,Last]. If CC has more weight
  /// than each cluster in the range, its rank is 0.
  static unsigned caseClusterRank(const CaseCluster &CC, CaseClusterIt First,
                                  CaseClusterIt Last);

  /// Emit comparison and split W into two subtrees.
  void splitWorkItem(SwitchWorkList &WorkList, const SwitchWorkListItem &W,
                     Value *Cond, MachineBasicBlock *SwitchMBB);

  /// Lower W.
  void lowerWorkItem(SwitchWorkListItem W, Value *Cond,
                     MachineBasicBlock *SwitchMBB,
                     MachineBasicBlock *DefaultMBB);

  /// Peel the top probability case if it exceeds the threshold.
  MachineBasicBlock *peelDominantCaseCluster(const SwitchInst &SI,
                                             CaseClusterVector &Clusters,
                                             BranchProbability &PeeledCaseProb);

block into the success basic block. Then we code-gen a new tail for
the parent basic block consisting of the two loads, the comparison,
and finally two branches to the success/failure basic blocks. We
conclude by code-gening the failure basic block if we have not
code-gened it already (all stack protector checks we generate in
the same function, use the same failure basic block).
llvm-svn: 188755
2013-08-20 15:00:16 +08:00
|
|
|
/// A class which encapsulates all of the information needed to generate a
/// stack protector check and signals to isel via its state being initialized
/// that a stack protector needs to be generated.
///
/// *NOTE* The following is a high level documentation of SelectionDAG Stack
/// Protector Generation. The reason that it is placed here is for a lack of
/// other good places to stick it.
///
/// High Level Overview of SelectionDAG Stack Protector Generation:
///
/// Previously, generation of stack protectors was done exclusively in the
/// pre-SelectionDAG Codegen LLVM IR Pass "Stack Protector". This necessitated
/// splitting basic blocks at the IR level to create the success/failure basic
/// blocks in the tail of the basic block in question. As a result of this,
/// calls that would have qualified for the sibling call optimization were no
/// longer eligible for optimization since said calls were no longer right in
/// the "tail position" (i.e. the immediate predecessor of a ReturnInst
/// instruction).
///
/// Then it was noticed that since the sibling call optimization causes the
/// callee to reuse the caller's stack, if we could delay the generation of
/// the stack protector check until later in CodeGen after the sibling call
/// decision was made, we get both the tail call optimization and the stack
/// protector check!
///
/// A few goals in solving this problem were:
///
///   1. Preserve the architecture independence of stack protector generation.
///
///   2. Preserve the normal IR level stack protector check for platforms like
///      OpenBSD for which we support platform-specific stack protector
///      generation.
///
/// The main problem that guided the present solution is that one can not
/// solve this problem in an architecture independent manner at the IR level
/// only. This is because:
///
///   1. The decision on whether or not to perform a sibling call on certain
///      platforms (for instance i386) requires lower level information
///      related to available registers that can not be known at the IR level.
///
///   2. Even if the previous point were not true, the decision on whether to
///      perform a tail call is done in LowerCallTo in SelectionDAG which
///      occurs after the Stack Protector Pass. As a result, one would need to
///      put the relevant callinst into the stack protector check success
///      basic block (where the return inst is placed) and then move it back
///      later at SelectionDAG/MI time before the stack protector check if the
///      tail call optimization failed. The MI level option was nixed
///      immediately since it would require platform-specific pattern
///      matching. The SelectionDAG level option was nixed because
///      SelectionDAG only processes one IR level basic block at a time
///      implying one could not create a DAG Combine to move the callinst.
///
/// To get around this problem a few things were realized:
///
///   1. While one can not handle multiple IR level basic blocks at the
///      SelectionDAG Level, one can generate multiple machine basic blocks
///      for one IR level basic block. This is how we handle bit tests and
///      switches.
///
///   2. At the MI level, tail calls are represented via a special return
///      MIInst called "tcreturn". Thus if we know the basic block in which we
///      wish to insert the stack protector check, we get the correct behavior
///      by always inserting the stack protector check right before the return
///      statement. This is a "magical transformation" since no matter where
///      the stack protector check intrinsic is, we always insert the stack
///      protector check code at the end of the BB.
///
/// Given the aforementioned constraints, the following solution was devised:
///
///   1. On platforms that do not support SelectionDAG stack protector check
///      generation, allow for the normal IR level stack protector check
///      generation to continue.
///
///   2. On platforms that do support SelectionDAG stack protector check
///      generation:
///
///     a. Use the IR level stack protector pass to decide if a stack
///        protector is required/which BB we insert the stack protector check
///        in by reusing the logic already therein. If we wish to generate a
///        stack protector check in a basic block, we place a special IR
///        intrinsic called llvm.stackprotectorcheck right before the BB's
///        returninst or if there is a callinst that could potentially be
///        sibling call optimized, before the call inst.
///
///     b. Then when a BB with said intrinsic is processed, we codegen the BB
///        normally via SelectBasicBlock. In said process, when we visit the
///        stack protector check, we do not actually emit anything into the
///        BB. Instead, we just initialize the stack protector descriptor
///        class (which involves stashing information/creating the success
///        MBB and the failure MBB if we have not created one for this
///        function yet) and export the guard variable that we are going to
///        compare.
///
///     c. After we finish selecting the basic block, in FinishBasicBlock if
///        the StackProtectorDescriptor attached to the SelectionDAGBuilder is
///        initialized, we produce the validation code with one of these
///        techniques:
///          1) with a call to a guard check function
///          2) with inlined instrumentation
///
///        1) We insert a call to the check function before the terminator.
///
///        2) We first find a splice point in the parent basic block
///        before the terminator and then splice the terminator of said basic
///        block into the success basic block. Then we code-gen a new tail for
///        the parent basic block consisting of the two loads, the comparison,
///        and finally two branches to the success/failure basic blocks. We
///        conclude by code-gening the failure basic block if we have not
///        code-gened it already (all stack protector checks we generate in
///        the same function use the same failure basic block).
class StackProtectorDescriptor {
public:
  StackProtectorDescriptor() = default;
  /// Returns true if all fields of the stack protector descriptor are
  /// initialized, implying that we should/are ready to emit a stack protector.
  bool shouldEmitStackProtector() const {
    return ParentMBB && SuccessMBB && FailureMBB;
  }
  /// Returns true if the stack protector check should be emitted as a call to
  /// a guard-check function (function-based instrumentation, as used for
  /// MSVC's __security_check_cookie) rather than as an inline comparison
  /// against success/failure blocks.
  bool shouldEmitFunctionBasedCheckStackProtector() const {
    return ParentMBB && !SuccessMBB && !FailureMBB;
  }
  /// Initialize the stack protector descriptor structure for a new basic
  /// block.
  void initialize(const BasicBlock *BB, MachineBasicBlock *MBB,
                  bool FunctionBasedInstrumentation) {
    // Make sure we are not initialized yet.
    assert(!shouldEmitStackProtector() && "Stack Protector Descriptor is "
                                          "already initialized!");
    ParentMBB = MBB;
|
[stack-protection] Add support for MSVC buffer security check
Summary:
This patch is adding support for the MSVC buffer security check implementation
The buffer security check is turned on with the '/GS' compiler switch.
* https://msdn.microsoft.com/en-us/library/8dbf701c.aspx
* To be added to clang here: http://reviews.llvm.org/D20347
Some overview of buffer security check feature and implementation:
* https://msdn.microsoft.com/en-us/library/aa290051(VS.71).aspx
* http://www.ksyash.com/2011/01/buffer-overflow-protection-3/
* http://blog.osom.info/2012/02/understanding-vs-c-compilers-buffer.html
For the following example:
```
int example(int offset, int index) {
char buffer[10];
memset(buffer, 0xCC, index);
return buffer[index];
}
```
The MSVC compiler is adding these instructions to perform stack integrity check:
```
push ebp
mov ebp,esp
sub esp,50h
[1] mov eax,dword ptr [__security_cookie (01068024h)]
[2] xor eax,ebp
[3] mov dword ptr [ebp-4],eax
push ebx
push esi
push edi
mov eax,dword ptr [index]
push eax
push 0CCh
lea ecx,[buffer]
push ecx
call _memset (010610B9h)
add esp,0Ch
mov eax,dword ptr [index]
movsx eax,byte ptr buffer[eax]
pop edi
pop esi
pop ebx
[4] mov ecx,dword ptr [ebp-4]
[5] xor ecx,ebp
[6] call @__security_check_cookie@4 (01061276h)
mov esp,ebp
pop ebp
ret
```
The instrumentation above is:
* [1] is loading the global security canary,
* [3] is storing the local computed ([2]) canary to the guard slot,
* [4] is loading the guard slot and ([5]) re-compute the global canary,
* [6] is validating the resulting canary with the '__security_check_cookie' and performs error handling.
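The six instrumentation points above amount to xor-ing the process-wide cookie with the frame pointer on function entry and re-deriving it on exit. A minimal sketch of that arithmetic (illustrative only; `makeLocalCanary` and `checkCanary` are hypothetical names, not MSVC runtime functions):

```cpp
#include <cstdint>

// Models instructions [1]-[3]: load the global __security_cookie and
// xor it with the frame pointer before storing it into the guard slot.
inline uint32_t makeLocalCanary(uint32_t SecurityCookie, uint32_t FramePtr) {
  return SecurityCookie ^ FramePtr; // [2] xor eax,ebp
}

// Models instructions [4]-[6]: reload the guard slot, re-xor with the
// frame pointer, and compare against the global cookie (conceptually what
// '__security_check_cookie' verifies before error handling).
inline bool checkCanary(uint32_t GuardSlot, uint32_t FramePtr,
                        uint32_t SecurityCookie) {
  return (GuardSlot ^ FramePtr) == SecurityCookie;
}
```

A corrupted guard slot makes the re-derived value differ from the cookie, which is what diverts execution into the error handler.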
Overview of the current stack-protection implementation:
* lib/CodeGen/StackProtector.cpp
* There is a default stack-protection implementation applied on intermediate representation.
* The target can overload 'getIRStackGuard' method if it has a standard location for the stack protector cookie.
* An intrinsic 'Intrinsic::stackprotector' is added to the prologue. It will be expanded by the instruction selection pass (DAG or Fast).
* Basic Blocks are added to every instrumented function to receive the code for handling stack guard validation and errors handling.
* Guard manipulation and comparison are added directly to the intermediate representation.
* lib/CodeGen/SelectionDAG/SelectionDAGISel.cpp
* lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp
* There is an implementation that adds instrumentation during instruction selection (for better handling of sibling calls).
* see long comment above 'class StackProtectorDescriptor' declaration.
* The target needs to override 'getSDagStackGuard' to activate SDAG stack protection generation. (note: getIRStackGuard MUST return nullptr).
* 'getSDagStackGuard' returns the appropriate stack guard (security cookie)
* The code is generated by 'SelectionDAGBuilder.cpp' and 'SelectionDAGISel.cpp'.
* include/llvm/Target/TargetLowering.h
* Contains a function to retrieve the default Guard 'Value'; should be overridden by each target to select which implementation is used and to provide the Guard 'Value'.
* lib/Target/X86/X86ISelLowering.cpp
* Contains the x86 specialisation; Guard 'Value' used by the SelectionDAG algorithm.
Function-based Instrumentation:
* MSVC doesn't inline the stack guard comparison in every function. Instead, a call to '__security_check_cookie' is added to the epilogue before every return instruction.
* To support function-based instrumentation, this patch is
* adding a function to get the function-based check (llvm 'Value', see include/llvm/Target/TargetLowering.h),
* If provided, the stack protection instrumentation won't be inlined and a call to that function will be added to the epilogue.
* modifying SelectionDAGISel.cpp to avoid producing the basic blocks used for inline instrumentation,
* generating the function-based instrumentation during the ISel pass (SelectionDAGBuilder.cpp),
* if FastISel is used (not SelectionDAG), falling back to the function-based implementation over the intermediate representation (StackProtector.cpp).
Modifications
* adding support for MSVC (lib/Target/X86/X86ISelLowering.cpp)
* adding support for function-based instrumentation (lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp, .h)
Results
* IR generated instrumentation:
```
clang-cl /GS test.cc /Od /c -mllvm -print-isel-input
```
```
*** Final LLVM Code input to ISel ***
; Function Attrs: nounwind sspstrong
define i32 @"\01?example@@YAHHH@Z"(i32 %offset, i32 %index) #0 {
entry:
%StackGuardSlot = alloca i8* <<<-- Allocated guard slot
%0 = call i8* @llvm.stackguard() <<<-- Loading Stack Guard value
call void @llvm.stackprotector(i8* %0, i8** %StackGuardSlot) <<<-- Prologue intrinsic call (store to Guard slot)
%index.addr = alloca i32, align 4
%offset.addr = alloca i32, align 4
%buffer = alloca [10 x i8], align 1
store i32 %index, i32* %index.addr, align 4
store i32 %offset, i32* %offset.addr, align 4
%arraydecay = getelementptr inbounds [10 x i8], [10 x i8]* %buffer, i32 0, i32 0
%1 = load i32, i32* %index.addr, align 4
call void @llvm.memset.p0i8.i32(i8* %arraydecay, i8 -52, i32 %1, i32 1, i1 false)
%2 = load i32, i32* %index.addr, align 4
%arrayidx = getelementptr inbounds [10 x i8], [10 x i8]* %buffer, i32 0, i32 %2
%3 = load i8, i8* %arrayidx, align 1
%conv = sext i8 %3 to i32
%4 = load volatile i8*, i8** %StackGuardSlot <<<-- Loading Guard slot
call void @__security_check_cookie(i8* %4) <<<-- Epilogue function-based check
ret i32 %conv
}
```
* SelectionDAG generated instrumentation:
```
clang-cl /GS test.cc /O1 /c /FA
```
```
"?example@@YAHHH@Z": # @"\01?example@@YAHHH@Z"
# BB#0: # %entry
pushl %esi
subl $16, %esp
movl ___security_cookie, %eax <<<-- Loading Stack Guard value
movl 28(%esp), %esi
movl %eax, 12(%esp) <<<-- Store to Guard slot
leal 2(%esp), %eax
pushl %esi
pushl $204
pushl %eax
calll _memset
addl $12, %esp
movsbl 2(%esp,%esi), %esi
movl 12(%esp), %ecx <<<-- Loading Guard slot
calll @__security_check_cookie@4 <<<-- Epilogue function-based check
movl %esi, %eax
addl $16, %esp
popl %esi
retl
```
Reviewers: kcc, pcc, eugenis, rnk
Subscribers: majnemer, llvm-commits, hans, thakis, rnk
Differential Revision: http://reviews.llvm.org/D20346
llvm-svn: 272053
2016-06-08 04:15:35 +08:00

    if (!FunctionBasedInstrumentation) {
      SuccessMBB = AddSuccessorMBB(BB, MBB, /* IsLikely */ true);
      FailureMBB = AddSuccessorMBB(BB, MBB, /* IsLikely */ false, FailureMBB);
    }
Teach selectiondag how to handle the stackprotectorcheck intrinsic.
Previously, generation of stack protectors was done exclusively in the
pre-SelectionDAG Codegen LLVM IR Pass "Stack Protector". This necessitated
splitting basic blocks at the IR level to create the success/failure basic
blocks in the tail of the basic block in question. As a result of this,
calls that would have qualified for the sibling call optimization were no
longer eligible for optimization since said calls were no longer right in
the "tail position" (i.e. the immediate predecessor of a ReturnInst
instruction).
Then it was noticed that since the sibling call optimization causes the
callee to reuse the caller's stack, if we could delay the generation of
the stack protector check until later in CodeGen after the sibling call
decision was made, we get both the tail call optimization and the stack
protector check!
A few goals in solving this problem were:
1. Preserve the architecture independence of stack protector generation.
2. Preserve the normal IR level stack protector check for platforms like
OpenBSD for which we support platform specific stack protector
generation.
The main problem that guided the present solution is that one can not
solve this problem in an architecture independent manner at the IR level
only. This is because:
1. The decision on whether or not to perform a sibling call on certain
platforms (for instance i386) requires lower level information
related to available registers that can not be known at the IR level.
2. Even if the previous point were not true, the decision on whether to
perform a tail call is done in LowerCallTo in SelectionDAG which
occurs after the Stack Protector Pass. As a result, one would need to
put the relevant callinst into the stack protector check success
basic block (where the return inst is placed) and then move it back
later at SelectionDAG/MI time before the stack protector check if the
tail call optimization failed. The MI level option was nixed
immediately since it would require platform specific pattern
matching. The SelectionDAG level option was nixed because
SelectionDAG only processes one IR level basic block at a time
implying one could not create a DAG Combine to move the callinst.
To get around this problem a few things were realized:
1. While one can not handle multiple IR level basic blocks at the
SelectionDAG Level, one can generate multiple machine basic blocks
for one IR level basic block. This is how we handle bit tests and
switches.
2. At the MI level, tail calls are represented via a special return
MIInst called "tcreturn". Thus if we know the basic block in which we
wish to insert the stack protector check, we get the correct behavior
by always inserting the stack protector check right before the return
statement. This is a "magical transformation" since no matter where
the stack protector check intrinsic is, we always insert the stack
protector check code at the end of the BB.
Given the aforementioned constraints, the following solution was devised:
1. On platforms that do not support SelectionDAG stack protector check
generation, allow for the normal IR level stack protector check
generation to continue.
2. On platforms that do support SelectionDAG stack protector check
generation:
a. Use the IR level stack protector pass to decide if a stack
protector is required/which BB we insert the stack protector check
in by reusing the logic already therein. If we wish to generate a
stack protector check in a basic block, we place a special IR
intrinsic called llvm.stackprotectorcheck right before the BB's
returninst or if there is a callinst that could potentially be
sibling call optimized, before the call inst.
b. Then when a BB with said intrinsic is processed, we codegen the BB
normally via SelectBasicBlock. In said process, when we visit the
stack protector check, we do not actually emit anything into the
BB. Instead, we just initialize the stack protector descriptor
class (which involves stashing information/creating the success
mbb and the failure mbb if we have not created one for this
function yet) and export the guard variable that we are going to
compare.
c. After we finish selecting the basic block, in FinishBasicBlock if
the StackProtectorDescriptor attached to the SelectionDAGBuilder is
initialized, we first find a splice point in the parent basic block
before the terminator and then splice the terminator of said basic
block into the success basic block. Then we code-gen a new tail for
the parent basic block consisting of the two loads, the comparison,
and finally two branches to the success/failure basic blocks. We
conclude by code-gening the failure basic block if we have not
code-gened it already (all stack protector checks we generate in
the same function use the same failure basic block).
llvm-svn: 188755
2013-08-20 15:00:16 +08:00
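Steps (b) and (c) above can be sketched with a toy model (plain string vectors standing in for MachineBasicBlocks; all names here are hypothetical, not the real SelectionDAGISel API):

```cpp
#include <string>
#include <vector>

// Toy stand-in for a MachineBasicBlock: an ordered list of instruction
// strings, the last one being the block's terminator.
struct ToyMBB {
  std::vector<std::string> Insts;
};

// Sketch of step (c): move the parent's terminator into the success block,
// then append the guard loads, the comparison, and the two branches as the
// parent's new tail.
inline void spliceStackProtectorCheck(ToyMBB &Parent, ToyMBB &Success) {
  // Splice point: immediately before the terminator.
  Success.Insts.insert(Success.Insts.begin(), Parent.Insts.back());
  Parent.Insts.pop_back();
  // Code-gen the new tail of the parent block.
  Parent.Insts.push_back("load guard_slot");
  Parent.Insts.push_back("load guard_global");
  Parent.Insts.push_back("cmp guard_slot, guard_global");
  Parent.Insts.push_back("br.eq SuccessMBB");
  Parent.Insts.push_back("br FailureMBB");
}
```

Because the terminator always lands at the end of the success block, a tail call that was turned into a "tcreturn" stays in tail position there, which is the point of deferring the check.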
  }

  /// Reset state that changes when we handle different basic blocks.
  ///
  /// This currently includes:
  ///
  /// 1. The specific basic block we are generating a stack protector for
  ///    (ParentMBB).
  ///
  /// 2. The successor machine basic block that will contain the tail of
  ///    parent mbb after we create the stack protector check (SuccessMBB).
  ///    This BB is visited only on stack protector check success.
  void resetPerBBState() {
    ParentMBB = nullptr;
    SuccessMBB = nullptr;
  }

  /// Reset state that only changes when we switch functions.
  ///
  /// This currently includes:
  ///
  /// 1. FailureMBB since we reuse the failure code path for all stack
  ///    protector checks created in an individual function.
  ///
  /// 2. The guard variable since the guard variable we are checking against
  ///    is always the same.
  void resetPerFunctionState() {
    FailureMBB = nullptr;
  }

  MachineBasicBlock *getParentMBB() { return ParentMBB; }
  MachineBasicBlock *getSuccessMBB() { return SuccessMBB; }
  MachineBasicBlock *getFailureMBB() { return FailureMBB; }

private:
  /// The basic block for which we are generating the stack protector.
  ///
  /// As a result of stack protector generation, we will splice the
  /// terminators of this basic block into the successor mbb SuccessMBB and
  /// replace it with a compare/branch to the successor mbbs
  /// SuccessMBB/FailureMBB depending on whether or not the stack protector
  /// was violated.
  MachineBasicBlock *ParentMBB = nullptr;

  /// A basic block visited on stack protector check success that contains
  /// the terminators of ParentMBB.
  MachineBasicBlock *SuccessMBB = nullptr;

  /// A basic block visited on stack protector check failure that will
  /// contain a call to __stack_chk_fail().
  MachineBasicBlock *FailureMBB = nullptr;
|
/// Add a successor machine basic block to ParentMBB. If the successor mbb
/// has not been created yet (i.e. if SuccMBB = 0), then the machine basic
/// block will be created. Assign a large weight if IsLikely is true.
MachineBasicBlock *AddSuccessorMBB(const BasicBlock *BB,
                                   MachineBasicBlock *ParentMBB,
                                   bool IsLikely,
                                   MachineBasicBlock *SuccMBB = nullptr);
};

private:
const TargetMachine &TM;

public:
/// Lowest valid SDNodeOrder. The special case 0 is reserved for scheduling
/// nodes without a corresponding SDNode.
static const unsigned LowestSDNodeOrder = 1;

SelectionDAG &DAG;
const DataLayout *DL = nullptr;
AliasAnalysis *AA = nullptr;
const TargetLibraryInfo *LibInfo;

/// SwitchCases - Vector of CaseBlock structures used to communicate
/// SwitchInst code generation information.
std::vector<CaseBlock> SwitchCases;

/// JTCases - Vector of JumpTable structures used to communicate
/// SwitchInst code generation information.
std::vector<JumpTableBlock> JTCases;

/// BitTestCases - Vector of BitTestBlock structures used to communicate
/// SwitchInst code generation information.
std::vector<BitTestBlock> BitTestCases;
/// A StackProtectorDescriptor structure used to communicate stack protector
/// information in between SelectBasicBlock and FinishBasicBlock.
StackProtectorDescriptor SPDescriptor;

// Emit PHI-node-operand constants only once even if used by multiple
// PHI nodes.
DenseMap<const Constant *, unsigned> ConstantsOut;

/// FuncInfo - Information about the function as a whole.
FunctionLoweringInfo &FuncInfo;

/// GFI - Garbage collection metadata for the function.
GCFunctionInfo *GFI;

/// LPadToCallSiteMap - Map a landing pad to the call site indexes.
DenseMap<MachineBasicBlock *, SmallVector<unsigned, 4>> LPadToCallSiteMap;
Major calling convention code refactoring.
Instead of awkwardly encoding calling-convention information with ISD::CALL,
ISD::FORMAL_ARGUMENTS, ISD::RET, and ISD::ARG_FLAGS nodes, TargetLowering
provides three virtual functions for targets to override:
LowerFormalArguments, LowerCall, and LowerRet, which replace the custom
lowering done on the special nodes. They provide the same information, but
in a more immediately usable format.
This also reworks much of the target-independent tail call logic. The
decision of whether or not to perform a tail call is now cleanly split
between target-independent portions, and the target dependent portion
in IsEligibleForTailCallOptimization.
This also synchronizes all in-tree targets, to help enable future
refactoring and feature work.
llvm-svn: 78142
2009-08-05 09:29:28 +08:00
/// HasTailCall - This is set to true if a call in the current
/// block has been translated as a tail call. In this case,
/// no subsequent DAG nodes should be created.
bool HasTailCall = false;
LLVMContext *Context;

SelectionDAGBuilder(SelectionDAG &dag, FunctionLoweringInfo &funcinfo,
                    CodeGenOpt::Level ol)
    : SDNodeOrder(LowestSDNodeOrder), TM(dag.getTarget()), DAG(dag),
      FuncInfo(funcinfo) {}
void init(GCFunctionInfo *gfi, AliasAnalysis *AA,
          const TargetLibraryInfo *li);

/// Clear out the current SelectionDAG and the associated state and prepare
/// this SelectionDAGBuilder object to be used for a new block. This doesn't
/// clear out information about additional blocks that are needed to complete
/// switch lowering or PHI node updating; that information is cleared out as
/// it is consumed.
void clear();

/// Clear the dangling debug information map. This function is separated from
/// the clear so that debug information that is dangling in a basic block can
/// be properly resolved in a different basic block. This allows the
/// SelectionDAG to resolve dangling debug information attached to PHI nodes.
void clearDanglingDebugInfo();
/// Return the current virtual root of the Selection DAG, flushing any
/// PendingLoad items. This must be done before emitting a store or any other
/// node that may need to be ordered after any prior load instructions.
SDValue getRoot();

/// Similar to getRoot, but instead of flushing all the PendingLoad items,
/// flush all the PendingExports items. It is necessary to do this before
/// emitting a terminator instruction.
SDValue getControlRoot();
SDLoc getCurSDLoc() const {
  return SDLoc(CurInst, SDNodeOrder);
}

DebugLoc getCurDebugLoc() const {
  return CurInst ? CurInst->getDebugLoc() : DebugLoc();
}

void CopyValueToVirtualRegister(const Value *V, unsigned Reg);
void visit(const Instruction &I);

void visit(unsigned Opcode, const User &I);

/// If there was a virtual register allocated for the value V, emit a
/// CopyFromReg of the specified type Ty. Return an empty SDValue() otherwise.
SDValue getCopyFromRegs(const Value *V, Type *Ty);
/// If we have dangling debug info that describes \p Variable, or an
/// overlapping part of the variable considering the \p Expr, then this method
/// will drop that debug info as it is no longer valid.
void dropDanglingDebugInfo(const DILocalVariable *Variable,
                           const DIExpression *Expr);

// If we saw an earlier dbg_value referring to V, generate the debug data
// structures now that we've seen its definition.
void resolveDanglingDebugInfo(const Value *V, SDValue Val);
SDValue getValue(const Value *V);
bool findValue(const Value *V) const;

/// Return the SDNode for the specified IR value if it exists.
SDNode *getNodeForIRValue(const Value *V) {
  auto It = NodeMap.find(V);
  if (It == NodeMap.end())
    return nullptr;
  return It->second.getNode();
}
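The lookup in getNodeForIRValue is worth noting: using `operator[]` on a map to probe for a key would default-construct an entry for missing keys, so a `find`-based probe is used instead. A minimal standalone sketch of the same idiom, using `std::unordered_map` as a stand-in for `llvm::DenseMap` and a string as a stand-in for `SDNode`:

```cpp
#include <cassert>
#include <string>
#include <unordered_map>

using Node = std::string; // stand-in for SDNode
std::unordered_map<int, Node *> ToyNodeMap;

// Probe without inserting: a single find(), returning nullptr on a miss.
// operator[] here would silently create a (key -> nullptr) entry.
Node *getNodeForKey(int Key) {
  auto It = ToyNodeMap.find(Key);
  return It == ToyNodeMap.end() ? nullptr : It->second;
}
```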
SDValue getNonRegisterValue(const Value *V);
SDValue getValueImpl(const Value *V);

void setValue(const Value *V, SDValue NewN) {
  SDValue &N = NodeMap[V];
  assert(!N.getNode() && "Already set a value for this node!");
  N = NewN;
}
void setUnusedArgValue(const Value *V, SDValue NewN) {
  SDValue &N = UnusedArgNodeMap[V];
  assert(!N.getNode() && "Already set a value for this node!");
  N = NewN;
}
void FindMergedConditions(const Value *Cond, MachineBasicBlock *TBB,
                          MachineBasicBlock *FBB, MachineBasicBlock *CurBB,
                          MachineBasicBlock *SwitchBB,
                          Instruction::BinaryOps Opc, BranchProbability TProb,
                          BranchProbability FProb, bool InvertCond);
void EmitBranchForMergedCondition(const Value *Cond, MachineBasicBlock *TBB,
                                  MachineBasicBlock *FBB,
                                  MachineBasicBlock *CurBB,
                                  MachineBasicBlock *SwitchBB,
                                  BranchProbability TProb,
                                  BranchProbability FProb, bool InvertCond);
bool ShouldEmitAsBranches(const std::vector<CaseBlock> &Cases);
bool isExportableFromCurrentBlock(const Value *V, const BasicBlock *FromBB);
void CopyToExportRegsIfNeeded(const Value *V);
void ExportFromCurrentBlock(const Value *V);
void LowerCallTo(ImmutableCallSite CS, SDValue Callee, bool IsTailCall,
                 const BasicBlock *EHPadBB = nullptr);
// Lower range metadata from 0 to N to assert zext to an integer of nearest
// floor power of two.
SDValue lowerRangeToAssertZExt(SelectionDAG &DAG, const Instruction &I,
                               SDValue Op);
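The "nearest floor power of two" rounding that the lowerRangeToAssertZExt comment refers to can be illustrated in isolation. This is an assumption-laden sketch: the function name `floorPowerOf2` is ours, and it only shows the rounding step, not how the real lowering builds the AssertZExt node.

```cpp
#include <cassert>
#include <cstdint>

// Round N down to the nearest power of two, e.g. 20 -> 16, 64 -> 64.
// Undefined for N == 0, matching the usual Log2-style helpers.
uint64_t floorPowerOf2(uint64_t N) {
  assert(N != 0 && "undefined for zero");
  uint64_t P = 1;
  while ((P << 1) != 0 && (P << 1) <= N)
    P <<= 1;
  return P;
}
```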
void populateCallLoweringInfo(TargetLowering::CallLoweringInfo &CLI,
                              const CallBase *Call, unsigned ArgIdx,
                              unsigned NumArgs, SDValue Callee,
                              Type *ReturnTy, bool IsPatchPoint);
std::pair<SDValue, SDValue>
lowerInvokable(TargetLowering::CallLoweringInfo &CLI,
               const BasicBlock *EHPadBB = nullptr);

/// UpdateSplitBlock - When an MBB was split during scheduling, update the
/// references that need to refer to the last resulting block.
void UpdateSplitBlock(MachineBasicBlock *First, MachineBasicBlock *Last);
/// Describes a gc.statepoint or a gc.statepoint-like thing for the purposes
/// of lowering into a STATEPOINT node.
struct StatepointLoweringInfo {
  /// Bases[i] is the base pointer for Ptrs[i]. Together they denote the set
  /// of gc pointers this STATEPOINT has to relocate.
  SmallVector<const Value *, 16> Bases;
  SmallVector<const Value *, 16> Ptrs;

  /// The set of gc.relocate calls associated with this gc.statepoint.
  SmallVector<const GCRelocateInst *, 16> GCRelocates;

  /// The full list of gc arguments to the gc.statepoint being lowered.
  ArrayRef<const Use> GCArgs;

  /// The gc.statepoint instruction.
  const Instruction *StatepointInstr = nullptr;

  /// The list of gc transition arguments present in the gc.statepoint being
  /// lowered.
  ArrayRef<const Use> GCTransitionArgs;

  /// The ID that the resulting STATEPOINT instruction has to report.
  unsigned ID = -1;

  /// Information regarding the underlying call instruction.
  TargetLowering::CallLoweringInfo CLI;

  /// The deoptimization state associated with this gc.statepoint call, if
  /// any.
  ArrayRef<const Use> DeoptState;

  /// Flags associated with the meta arguments being lowered.
  uint64_t StatepointFlags = -1;

  /// The number of patchable bytes the call needs to get lowered into.
  unsigned NumPatchBytes = -1;

  /// The exception handling unwind destination, in case this represents an
  /// invoke of gc.statepoint.
  const BasicBlock *EHPadBB = nullptr;

  explicit StatepointLoweringInfo(SelectionDAG &DAG) : CLI(DAG) {}
};

  /// Lower \p SLI into a STATEPOINT instruction.
  SDValue LowerAsSTATEPOINT(StatepointLoweringInfo &SI);

  // This function is responsible for the whole statepoint lowering process.
  // It uniformly handles invoke and call statepoints.
  void LowerStatepoint(ImmutableStatepoint ISP,
                       const BasicBlock *EHPadBB = nullptr);

  void LowerCallSiteWithDeoptBundle(const CallBase *Call, SDValue Callee,
                                    const BasicBlock *EHPadBB);

  void LowerDeoptimizeCall(const CallInst *CI);
  void LowerDeoptimizingReturn();

  void LowerCallSiteWithDeoptBundleImpl(const CallBase *Call, SDValue Callee,
                                        const BasicBlock *EHPadBB,
                                        bool VarArgDisallowed,
                                        bool ForceVoidReturnTy);

  /// Returns the type of FrameIndex and TargetFrameIndex nodes.
  MVT getFrameIndexTy() {
    return DAG.getTargetLoweringInfo().getFrameIndexTy(DAG.getDataLayout());
  }

private:
  // Terminator instructions.
  void visitRet(const ReturnInst &I);
  void visitBr(const BranchInst &I);
  void visitSwitch(const SwitchInst &I);
  void visitIndirectBr(const IndirectBrInst &I);
  void visitUnreachable(const UnreachableInst &I);
  void visitCleanupRet(const CleanupReturnInst &I);
  void visitCatchSwitch(const CatchSwitchInst &I);
  void visitCatchRet(const CatchReturnInst &I);
  void visitCatchPad(const CatchPadInst &I);
  void visitCleanupPad(const CleanupPadInst &CPI);

  BranchProbability getEdgeProbability(const MachineBasicBlock *Src,
                                       const MachineBasicBlock *Dst) const;
  void addSuccessorWithProb(
      MachineBasicBlock *Src, MachineBasicBlock *Dst,
      BranchProbability Prob = BranchProbability::getUnknown());

public:
  void visitSwitchCase(CaseBlock &CB, MachineBasicBlock *SwitchBB);
  void visitSPDescriptorParent(StackProtectorDescriptor &SPD,
                               MachineBasicBlock *ParentBB);
  void visitSPDescriptorFailure(StackProtectorDescriptor &SPD);
  void visitBitTestHeader(BitTestBlock &B, MachineBasicBlock *SwitchBB);
  void visitBitTestCase(BitTestBlock &BB, MachineBasicBlock *NextMBB,
                        BranchProbability BranchProbToNext, unsigned Reg,
                        BitTestCase &B, MachineBasicBlock *SwitchBB);
  void visitJumpTable(JumpTable &JT);
  void visitJumpTableHeader(JumpTable &JT, JumpTableHeader &JTH,
                            MachineBasicBlock *SwitchBB);

private:
  // These all get lowered before this pass.
  void visitInvoke(const InvokeInst &I);
  void visitCallBr(const CallBrInst &I);
  void visitResume(const ResumeInst &I);

  void visitUnary(const User &I, unsigned Opcode);
  void visitFNeg(const User &I) { visitUnary(I, ISD::FNEG); }

  void visitBinary(const User &I, unsigned Opcode);
  void visitShift(const User &I, unsigned Opcode);
  void visitAdd(const User &I)  { visitBinary(I, ISD::ADD); }
  void visitFAdd(const User &I) { visitBinary(I, ISD::FADD); }
  void visitSub(const User &I)  { visitBinary(I, ISD::SUB); }
  void visitFSub(const User &I);
  void visitMul(const User &I)  { visitBinary(I, ISD::MUL); }
  void visitFMul(const User &I) { visitBinary(I, ISD::FMUL); }
  void visitURem(const User &I) { visitBinary(I, ISD::UREM); }
  void visitSRem(const User &I) { visitBinary(I, ISD::SREM); }
  void visitFRem(const User &I) { visitBinary(I, ISD::FREM); }
  void visitUDiv(const User &I) { visitBinary(I, ISD::UDIV); }
  void visitSDiv(const User &I);
  void visitFDiv(const User &I) { visitBinary(I, ISD::FDIV); }
  void visitAnd (const User &I) { visitBinary(I, ISD::AND); }
  void visitOr  (const User &I) { visitBinary(I, ISD::OR); }
  void visitXor (const User &I) { visitBinary(I, ISD::XOR); }
  void visitShl (const User &I) { visitShift(I, ISD::SHL); }
  void visitLShr(const User &I) { visitShift(I, ISD::SRL); }
  void visitAShr(const User &I) { visitShift(I, ISD::SRA); }
  void visitICmp(const User &I);
  void visitFCmp(const User &I);

  // Visit the conversion instructions.
  void visitTrunc(const User &I);
  void visitZExt(const User &I);
  void visitSExt(const User &I);
  void visitFPTrunc(const User &I);
  void visitFPExt(const User &I);
  void visitFPToUI(const User &I);
  void visitFPToSI(const User &I);
  void visitUIToFP(const User &I);
  void visitSIToFP(const User &I);
  void visitPtrToInt(const User &I);
  void visitIntToPtr(const User &I);
  void visitBitCast(const User &I);
  void visitAddrSpaceCast(const User &I);

  void visitExtractElement(const User &I);
  void visitInsertElement(const User &I);
  void visitShuffleVector(const User &I);

  void visitExtractValue(const User &I);
  void visitInsertValue(const User &I);
  void visitLandingPad(const LandingPadInst &LP);

  void visitGetElementPtr(const User &I);
  void visitSelect(const User &I);

  void visitAlloca(const AllocaInst &I);
  void visitLoad(const LoadInst &I);
  void visitStore(const StoreInst &I);
  void visitMaskedLoad(const CallInst &I, bool IsExpanding = false);
  void visitMaskedStore(const CallInst &I, bool IsCompressing = false);
  void visitMaskedGather(const CallInst &I);
  void visitMaskedScatter(const CallInst &I);
  void visitAtomicCmpXchg(const AtomicCmpXchgInst &I);
  void visitAtomicRMW(const AtomicRMWInst &I);
  void visitFence(const FenceInst &I);
  void visitPHI(const PHINode &I);

  void visitCall(const CallInst &I);
  bool visitMemCmpCall(const CallInst &I);
  bool visitMemPCpyCall(const CallInst &I);
  bool visitMemChrCall(const CallInst &I);
  bool visitStrCpyCall(const CallInst &I, bool isStpcpy);
  bool visitStrCmpCall(const CallInst &I);
  bool visitStrLenCall(const CallInst &I);
  bool visitStrNLenCall(const CallInst &I);
  bool visitUnaryFloatCall(const CallInst &I, unsigned Opcode);
  bool visitBinaryFloatCall(const CallInst &I, unsigned Opcode);
  void visitAtomicLoad(const LoadInst &I);
  void visitAtomicStore(const StoreInst &I);
  void visitLoadFromSwiftError(const LoadInst &I);
  void visitStoreToSwiftError(const StoreInst &I);

  void visitInlineAsm(ImmutableCallSite CS);
  const char *visitIntrinsicCall(const CallInst &I, unsigned Intrinsic);
  void visitTargetIntrinsic(const CallInst &I, unsigned Intrinsic);
  void visitConstrainedFPIntrinsic(const ConstrainedFPIntrinsic &FPI);

  void visitVAStart(const CallInst &I);
  void visitVAArg(const VAArgInst &I);
  void visitVAEnd(const CallInst &I);
  void visitVACopy(const CallInst &I);
  void visitStackmap(const CallInst &I);
  void visitPatchpoint(ImmutableCallSite CS,
                       const BasicBlock *EHPadBB = nullptr);

  // These two are implemented in StatepointLowering.cpp
  void visitGCRelocate(const GCRelocateInst &Relocate);
  void visitGCResult(const GCResultInst &I);

  void visitVectorReduce(const CallInst &I, unsigned Intrinsic);

  void visitUserOp1(const Instruction &I) {
    llvm_unreachable("UserOp1 should not exist at instruction selection time!");
  }
  void visitUserOp2(const Instruction &I) {
    llvm_unreachable("UserOp2 should not exist at instruction selection time!");
  }

  void processIntegerCallValue(const Instruction &I,
                               SDValue Value, bool IsSigned);

  void HandlePHINodesInSuccessorBlocks(const BasicBlock *LLVMBB);

  void emitInlineAsmError(ImmutableCallSite CS, const Twine &Message);

  /// If V is a function argument then create corresponding DBG_VALUE machine
  /// instruction for it now. At the end of instruction selection, they will be
  /// inserted to the entry BB.
  bool EmitFuncArgumentDbgValue(const Value *V, DILocalVariable *Variable,
                                DIExpression *Expr, DILocation *DL,
                                bool IsDbgDeclare, const SDValue &N);

  /// Return the next block after MBB, or nullptr if there is none.
  MachineBasicBlock *NextBlock(MachineBasicBlock *MBB);

  /// Update the DAG and DAG builder with the relevant information after
  /// a new root node has been created which could be a tail call.
  void updateDAGForMaybeTailCall(SDValue MaybeTC);

  /// Return the appropriate SDDbgValue based on N.
  SDDbgValue *getDbgValue(SDValue N, DILocalVariable *Variable,
                          DIExpression *Expr, const DebugLoc &dl,
                          unsigned DbgSDNodeOrder);
};

/// RegsForValue - This struct represents the registers (physical or virtual)
/// that a particular set of values is assigned, and the type information about
/// the value. The most common situation is to represent one value at a time,
/// but struct or array values are handled element-wise as multiple values. The
/// splitting of aggregates is performed recursively, so that we never have
/// aggregate-typed registers. The values at this point do not necessarily have
/// legal types, so each value may require one or more registers of some legal
/// type.
struct RegsForValue {
  /// The value types of the values, which may not be legal, and
  /// may need to be promoted or synthesized from one or more registers.
  SmallVector<EVT, 4> ValueVTs;

  /// The value types of the registers. This is the same size as ValueVTs and it
  /// records, for each value, what the type of the assigned register or
  /// registers are. (Individual values are never synthesized from more than one
  /// type of register.)
  ///
  /// With virtual registers, the contents of RegVTs is redundant with TLI's
  /// getRegisterType member function, however with physical registers
  /// it is necessary to have a separate record of the types.
  SmallVector<MVT, 4> RegVTs;

  /// This list holds the registers assigned to the values.
  /// Each legal or promoted value requires one register, and each
  /// expanded value requires multiple registers.
  SmallVector<unsigned, 4> Regs;

  /// This list holds the number of registers for each value.
  SmallVector<unsigned, 4> RegCount;

  /// Records if this value needs to be treated in an ABI dependent manner,
  /// different to normal type legalization.
  Optional<CallingConv::ID> CallConv;

  RegsForValue() = default;

  RegsForValue(const SmallVector<unsigned, 4> &regs, MVT regvt, EVT valuevt,
               Optional<CallingConv::ID> CC = None);
  RegsForValue(LLVMContext &Context, const TargetLowering &TLI,
               const DataLayout &DL, unsigned Reg, Type *Ty,
               Optional<CallingConv::ID> CC);

  bool isABIMangled() const {
    return CallConv.hasValue();
  }

  /// Add the specified values to this one.
  void append(const RegsForValue &RHS) {
    ValueVTs.append(RHS.ValueVTs.begin(), RHS.ValueVTs.end());
    RegVTs.append(RHS.RegVTs.begin(), RHS.RegVTs.end());
    Regs.append(RHS.Regs.begin(), RHS.Regs.end());
    RegCount.push_back(RHS.Regs.size());
  }
|
|
|
|
|
2017-03-03 04:48:08 +08:00
|
|
|
/// Emit a series of CopyFromReg nodes that copies from this value and returns
|
|
|
|
/// the result as a ValueVTs value. This uses Chain/Flag as the input and
|
|
|
|
/// updates them for the output Chain/Flag. If the Flag pointer is NULL, no
|
|
|
|
/// flag is used.
|
2015-05-06 07:06:54 +08:00
|
|
|
SDValue getCopyFromRegs(SelectionDAG &DAG, FunctionLoweringInfo &FuncInfo,
|
2016-06-12 23:39:02 +08:00
|
|
|
const SDLoc &dl, SDValue &Chain, SDValue *Flag,
|
2015-05-06 07:06:54 +08:00
|
|
|
const Value *V = nullptr) const;
|
|
|
|
|
2017-03-03 04:48:08 +08:00
|
|
|
/// Emit a series of CopyToReg nodes that copies the specified value into the
|
|
|
|
/// registers specified by this object. This uses Chain/Flag as the input and
|
|
|
|
/// updates them for the output Chain/Flag. If the Flag pointer is nullptr, no
|
|
|
|
/// flag is used. If V is not nullptr, then it is used in printing better
|
|
|
|
/// diagnostic messages on error.
|
2016-06-12 23:39:02 +08:00
|
|
|
void getCopyToRegs(SDValue Val, SelectionDAG &DAG, const SDLoc &dl,
|
|
|
|
SDValue &Chain, SDValue *Flag, const Value *V = nullptr,
|
|
|
|
ISD::NodeType PreferredExtendType = ISD::ANY_EXTEND) const;
|
2015-05-06 07:06:54 +08:00
|
|
|
|
2017-03-03 04:48:08 +08:00
|
|
|
/// Add this value to the specified inlineasm node operand list. This adds the
|
|
|
|
/// code marker, matching input operand index (if applicable), and includes
|
|
|
|
/// the number of values added into it.
|
2018-07-17 02:51:40 +08:00
|
|
|
void AddInlineAsmOperands(unsigned Code, bool HasMatching,
|
2016-06-12 23:39:02 +08:00
|
|
|
unsigned MatchingIdx, const SDLoc &dl,
|
|
|
|
SelectionDAG &DAG, std::vector<SDValue> &Ops) const;
|
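  // Illustrative sketch only (not part of this header): callers inside
  // SelectionDAGBuilder typically thread a Chain through the copy entry
  // points above. Constructor arguments are elided here because they vary;
  // getCurSDLoc() is assumed to be the builder's current debug location.
  //
  //   RegsForValue RFV(/* ...see constructor above... */);
  //   SDValue Chain = DAG.getEntryNode();
  //   SDValue Result = RFV.getCopyFromRegs(DAG, FuncInfo, getCurSDLoc(),
  //                                        Chain, /*Flag=*/nullptr, V);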

  /// Check if the total RegCount is greater than one.
  bool occupiesMultipleRegs() const {
    return std::accumulate(RegCount.begin(), RegCount.end(), 0) > 1;
  }
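  // For example (illustrative, using std::vector in place of the member
  // SmallVector): a value lowered in parts with per-part register counts
  // {2, 1} sums to 3, so the query returns true.
  //
  //   std::vector<unsigned> Counts = {2, 1};
  //   bool Multi = std::accumulate(Counts.begin(), Counts.end(), 0u) > 1;
  //   // Multi == true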

  /// Return a list of registers and their sizes.
  SmallVector<std::pair<unsigned, unsigned>, 4> getRegsAndSizes() const;
};

} // end namespace llvm

#endif // LLVM_LIB_CODEGEN_SELECTIONDAG_SELECTIONDAGBUILDER_H