Remove gc.root's performCustomLowering
This is a refactoring to restructure the single user of performCustomLowering as a dedicated lowering pass and remove the custom lowering hook entirely.

Before this change, the LowerIntrinsics pass (note to self: rename!) was essentially acting as a pass manager, but without being structured in terms of passes. Instead, it proxied calls to a set of GCStrategies internally. This adds a lot of conceptual complexity (i.e. GCStrategies are stateful!) for very little benefit.

Since there's been interest in keeping the ShadowStackGC working, I extracted its custom lowering into a dedicated pass and added that to the pass order. It will only run for functions which opt in to that GC.

I wasn't able to find an easy way to preserve the runtime registration of custom lowering functionality. Given that no user of this exists that I'm aware of, I made the choice to just remove it. If someone really cares, we can look at restoring it via dynamic pass registration in the future.

Note that despite the large diff, none of the lowering code actually changes. I added the framing needed to make it a pass and renamed the class, but that's it.

Differential Revision: http://reviews.llvm.org/D7218

llvm-svn: 227351
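The shape of the refactoring can be sketched in plain C++ (hypothetical, heavily simplified stand-ins for `llvm::Function` and a `FunctionPass`, not LLVM's real API): instead of a proxying pass calling a stateful `GCStrategy::performCustomLowering` hook, the custom lowering becomes a dedicated pass that is always scheduled but runs only on functions opting in to the "shadow-stack" GC.

```cpp
#include <cassert>
#include <string>

// Hypothetical, simplified stand-in for llvm::Function; real LLVM types are
// far richer. This models only the opt-in dispatch change.
struct Function {
  std::string Name;
  std::string GC;       // value of the function's `gc "..."` attribute, if any
  bool Lowered = false; // did a lowering pass rewrite this function?
};

// Before the patch, LowerIntrinsics proxied into stateful GCStrategy objects.
// After it, the custom lowering is its own pass that self-filters by GC name.
struct ShadowStackGCLoweringPass {
  bool runOnFunction(Function &F) {
    if (F.GC != "shadow-stack")
      return false;   // functions using other (or no) collectors are untouched
    F.Lowered = true; // stand-in for the real shadow-stack rewriting
    return true;      // report that the IR changed
  }
};
```

With this structure, adding the pass to a fixed pass order is safe: it is a no-op for every function that did not opt in.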
This commit is contained in:
parent 2265acf39e
commit 23cf2e2f97
@@ -721,8 +721,9 @@ this feature should be used by all GC plugins. It is enabled by default.
 Custom lowering of intrinsics: ``CustomRoots``, ``CustomReadBarriers``, and ``CustomWriteBarriers``
 ---------------------------------------------------------------------------------------------------
 
-For GCs which use barriers or unusual treatment of stack roots, these flags
-allow the collector to perform arbitrary transformations of the LLVM IR:
+For GCs which use barriers or unusual treatment of stack roots, these
+flags allow the collector to perform arbitrary transformations of the
+LLVM IR:
 
 .. code-block:: c++
 
@@ -733,70 +734,18 @@ allow the collector to perform arbitrary transformations of the LLVM IR:
       CustomReadBarriers = true;
       CustomWriteBarriers = true;
     }
-
-    virtual bool initializeCustomLowering(Module &M);
-    virtual bool performCustomLowering(Function &F);
   };
 
-If any of these flags are set, then LLVM suppresses its default lowering for the
-corresponding intrinsics and instead calls ``performCustomLowering``.
+If any of these flags are set, LLVM suppresses its default lowering for
+the corresponding intrinsics. Instead, you must provide a custom Pass
+which lowers the intrinsics as desired. If you have opted in to custom
+lowering of a particular intrinsic your pass **must** eliminate all
+instances of the corresponding intrinsic in functions which opt in to
+your GC. The best example of such a pass is the ShadowStackGC and its
+ShadowStackGCLowering pass.
 
 LLVM's default action for each intrinsic is as follows:
 
 * ``llvm.gcroot``: Leave it alone. The code generator must see it or the stack
   map will not be computed.
 
 * ``llvm.gcread``: Substitute a ``load`` instruction.
 
 * ``llvm.gcwrite``: Substitute a ``store`` instruction.
 
-If ``CustomReadBarriers`` or ``CustomWriteBarriers`` are specified, then
-``performCustomLowering`` **must** eliminate the corresponding barriers.
-
-``performCustomLowering`` must comply with the same restrictions as
-:ref:`FunctionPass::runOnFunction <writing-an-llvm-pass-runOnFunction>`.
-Likewise, ``initializeCustomLowering`` has the same semantics as
-:ref:`Pass::doInitialization(Module&)
-<writing-an-llvm-pass-doInitialization-mod>`.
-
-The following can be used as a template:
-
-.. code-block:: c++
-
-  #include "llvm/IR/Module.h"
-  #include "llvm/IR/IntrinsicInst.h"
-
-  bool MyGC::initializeCustomLowering(Module &M) {
-    return false;
-  }
-
-  bool MyGC::performCustomLowering(Function &F) {
-    bool MadeChange = false;
-
-    for (Function::iterator BB = F.begin(), E = F.end(); BB != E; ++BB)
-      for (BasicBlock::iterator II = BB->begin(), E = BB->end(); II != E; )
-        if (IntrinsicInst *CI = dyn_cast<IntrinsicInst>(II++))
-          if (Function *F = CI->getCalledFunction())
-            switch (F->getIntrinsicID()) {
-            case Intrinsic::gcwrite:
-              // Handle llvm.gcwrite.
-              CI->eraseFromParent();
-              MadeChange = true;
-              break;
-            case Intrinsic::gcread:
-              // Handle llvm.gcread.
-              CI->eraseFromParent();
-              MadeChange = true;
-              break;
-            case Intrinsic::gcroot:
-              // Handle llvm.gcroot.
-              CI->eraseFromParent();
-              MadeChange = true;
-              break;
-            }
-
-    return MadeChange;
-  }
+There is currently no way to register such a custom lowering pass
+without building a custom copy of LLVM.
 
 .. _safe-points:
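The default actions listed in the hunk above are simple, local substitutions. A toy model in plain C++ (not LLVM's real IR or API; assumes a flat stream of named operations) of what the default lowering does when no custom flags are set:

```cpp
#include <cassert>
#include <string>
#include <vector>

// Apply the documented default lowerings to a toy instruction stream:
// gcread -> load, gcwrite -> store, while gcroot is left alone so the
// code generator can still compute the stack map.
std::vector<std::string> lowerDefault(const std::vector<std::string> &In) {
  std::vector<std::string> Out;
  for (const std::string &I : In) {
    if (I == "llvm.gcread")
      Out.push_back("load");
    else if (I == "llvm.gcwrite")
      Out.push_back("store");
    else
      Out.push_back(I); // includes llvm.gcroot, untouched
  }
  return Out;
}
```

For example, `lowerDefault({"llvm.gcroot", "llvm.gcread", "llvm.gcwrite"})` yields the stream `{"llvm.gcroot", "load", "store"}`.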
@@ -101,13 +101,13 @@ public:
   const std::string &getName() const { return Name; }
 
   /// By default, write barriers are replaced with simple store
-  /// instructions. If true, then performCustomLowering must instead lower
-  /// them.
+  /// instructions. If true, you must provide a custom pass to lower
+  /// calls to @llvm.gcwrite.
   bool customWriteBarrier() const { return CustomWriteBarriers; }
 
   /// By default, read barriers are replaced with simple load
-  /// instructions. If true, then performCustomLowering must instead lower
-  /// them.
+  /// instructions. If true, you must provide a custom pass to lower
+  /// calls to @llvm.gcread.
   bool customReadBarrier() const { return CustomReadBarriers; }
 
   /// Returns true if this strategy is expecting the use of gc.statepoints,
@@ -143,7 +143,8 @@ public:
   }
 
   /// By default, roots are left for the code generator so it can generate a
-  /// stack map. If true, then performCustomLowering must delete them.
+  /// stack map. If true, you must provide a custom pass to lower
+  /// calls to @llvm.gcroot.
   bool customRoots() const { return CustomRoots; }
 
   /// If set, gcroot intrinsics should initialize their allocas to null
@@ -158,23 +159,6 @@ public:
   bool usesMetadata() const { return UsesMetadata; }
 
   ///@}
-
-  /// initializeCustomLowering/performCustomLowering - If any of the actions
-  /// are set to custom, performCustomLowering must be overriden to transform
-  /// the corresponding actions to LLVM IR. initializeCustomLowering is
-  /// optional to override. These are the only GCStrategy methods through
-  /// which the LLVM IR can be modified. These methods apply mostly to
-  /// gc.root based implementations, but can be overriden to provide custom
-  /// barrier lowerings with gc.statepoint as well.
-  ///@{
-  virtual bool initializeCustomLowering(Module &F) {
-    // No changes made
-    return false;
-  }
-  virtual bool performCustomLowering(Function &F) {
-    llvm_unreachable("GCStrategy subclass specified a configuration which"
-                     "requires a custom lowering without providing one");
-  }
 };
 
 /// Subclasses of GCStrategy are made available for use during compilation by
@@ -517,11 +517,15 @@ namespace llvm {
   /// information.
   extern char &MachineBlockPlacementStatsID;
 
-  /// GCLowering Pass - Performs target-independent LLVM IR transformations for
-  /// highly portable strategies.
-  ///
+  /// GCLowering Pass - Used by gc.root to perform its default lowering
+  /// operations.
   FunctionPass *createGCLoweringPass();
 
+  /// ShadowStackGCLowering - Implements the custom lowering mechanism
+  /// used by the shadow stack GC. Only runs on functions which opt in to
+  /// the shadow stack collector.
+  FunctionPass *createShadowStackGCLoweringPass();
+
   /// GCMachineCodeAnalysis - Target-independent pass to mark safe points
   /// in machine code. Must be added very late during code generation, just
   /// prior to output, and importantly after all CFG transformations (such as
@@ -245,6 +245,7 @@ void initializeSROA_SSAUpPass(PassRegistry&);
 void initializeScalarEvolutionAliasAnalysisPass(PassRegistry&);
 void initializeScalarEvolutionPass(PassRegistry&);
 void initializeSimpleInlinerPass(PassRegistry&);
+void initializeShadowStackGCLoweringPass(PassRegistry&);
 void initializeRegisterCoalescerPass(PassRegistry&);
 void initializeSingleLoopExtractorPass(PassRegistry&);
 void initializeSinkingPass(PassRegistry&);
@@ -96,6 +96,7 @@ add_llvm_library(LLVMCodeGen
   ScheduleDAGPrinter.cpp
   ScoreboardHazardRecognizer.cpp
   ShadowStackGC.cpp
+  ShadowStackGCLowering.cpp
   SjLjEHPrepare.cpp
   SlotIndexes.cpp
   SpillPlacement.cpp
@@ -111,30 +111,15 @@ static bool NeedsDefaultLoweringPass(const GCStrategy &C) {
          C.initializeRoots();
 }
 
-static bool NeedsCustomLoweringPass(const GCStrategy &C) {
-  // Custom lowering is only necessary if enabled for some action.
-  return C.customWriteBarrier() || C.customReadBarrier() || C.customRoots();
-}
-
 /// doInitialization - If this module uses the GC intrinsics, find them now.
 bool LowerIntrinsics::doInitialization(Module &M) {
-  // FIXME: This is rather antisocial in the context of a JIT since it performs
-  //        work against the entire module. But this cannot be done at
-  //        runFunction time (initializeCustomLowering likely needs to change
-  //        the module).
   GCModuleInfo *MI = getAnalysisIfAvailable<GCModuleInfo>();
   assert(MI && "LowerIntrinsics didn't require GCModuleInfo!?");
   for (Module::iterator I = M.begin(), E = M.end(); I != E; ++I)
     if (!I->isDeclaration() && I->hasGC())
       MI->getFunctionInfo(*I); // Instantiate the GC strategy.
 
-  bool MadeChange = false;
-  for (GCModuleInfo::iterator I = MI->begin(), E = MI->end(); I != E; ++I)
-    if (NeedsCustomLoweringPass(**I))
-      if ((*I)->initializeCustomLowering(M))
-        MadeChange = true;
-
-  return MadeChange;
+  return false;
 }
 
 /// CouldBecomeSafePoint - Predicate to conservatively determine whether the
@@ -211,17 +196,6 @@ bool LowerIntrinsics::runOnFunction(Function &F) {
   if (NeedsDefaultLoweringPass(S))
     MadeChange |= PerformDefaultLowering(F, S);
 
-  bool UseCustomLoweringPass = NeedsCustomLoweringPass(S);
-  if (UseCustomLoweringPass)
-    MadeChange |= S.performCustomLowering(F);
-
-  // Custom lowering may modify the CFG, so dominators must be recomputed.
-  if (UseCustomLoweringPass) {
-    if (DominatorTreeWrapperPass *DTWP =
-            getAnalysisIfAvailable<DominatorTreeWrapperPass>())
-      DTWP->getDomTree().recalculate(F);
-  }
-
   return MadeChange;
 }
 
@@ -419,7 +419,10 @@ void TargetPassConfig::addIRPasses() {
     addPass(createPrintFunctionPass(dbgs(), "\n\n*** Code after LSR ***\n"));
   }
 
+  // Run GC lowering passes for builtin collectors
+  // TODO: add a pass insertion point here
   addPass(createGCLoweringPass());
+  addPass(createShadowStackGCLoweringPass());
 
   // Make sure that no unreachable blocks are instruction selected.
   addPass(createUnreachableBlockEliminationPass());
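The ordering above can be sketched as a plain list of pass names (a hypothetical `Pipeline` type, not LLVM's actual `TargetPassConfig`): both GC lowering passes are scheduled unconditionally before unreachable-block elimination, relying on the shadow-stack pass to no-op for functions using other collectors.

```cpp
#include <cassert>
#include <string>
#include <vector>

// Minimal stand-in for a pass pipeline: order is all that matters here.
struct Pipeline {
  std::vector<std::string> Passes;
  void addPass(const std::string &Name) { Passes.push_back(Name); }
};

// Mirrors the ordering added by the hunk above, with simplified names.
Pipeline buildIRPasses() {
  Pipeline P;
  P.addPass("GCLowering");            // default gc.root lowering
  P.addPass("ShadowStackGCLowering"); // opt-in; self-filters by GC name
  P.addPass("UnreachableBlockElim");  // runs after all GC lowering
  return P;
}
```

The design choice here is that pass *ordering* is static while per-function applicability is decided inside each pass, which keeps the pipeline free of stateful strategy dispatch.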
@@ -38,412 +38,18 @@ using namespace llvm;
 #define DEBUG_TYPE "shadowstackgc"
 
 namespace {
 
 class ShadowStackGC : public GCStrategy {
-  /// RootChain - This is the global linked-list that contains the chain of GC
-  /// roots.
-  GlobalVariable *Head;
-
-  /// StackEntryTy - Abstract type of a link in the shadow stack.
-  ///
-  StructType *StackEntryTy;
-  StructType *FrameMapTy;
-
-  /// Roots - GC roots in the current function. Each is a pair of the
-  /// intrinsic call and its corresponding alloca.
-  std::vector<std::pair<CallInst *, AllocaInst *>> Roots;
-
 public:
   ShadowStackGC();
-
-  bool initializeCustomLowering(Module &M) override;
-  bool performCustomLowering(Function &F) override;
-
-private:
-  bool IsNullValue(Value *V);
-  Constant *GetFrameMap(Function &F);
-  Type *GetConcreteStackEntryType(Function &F);
-  void CollectRoots(Function &F);
-  static GetElementPtrInst *CreateGEP(LLVMContext &Context, IRBuilder<> &B,
-                                      Value *BasePtr, int Idx1,
-                                      const char *Name);
-  static GetElementPtrInst *CreateGEP(LLVMContext &Context, IRBuilder<> &B,
-                                      Value *BasePtr, int Idx1, int Idx2,
-                                      const char *Name);
 };
 }
 
 static GCRegistry::Add<ShadowStackGC>
     X("shadow-stack", "Very portable GC for uncooperative code generators");
 
-namespace {
-/// EscapeEnumerator - This is a little algorithm to find all escape points
-/// from a function so that "finally"-style code can be inserted. In addition
-/// to finding the existing return and unwind instructions, it also (if
-/// necessary) transforms any call instructions into invokes and sends them to
-/// a landing pad.
-///
-/// It's wrapped up in a state machine using the same transform C# uses for
-/// 'yield return' enumerators, This transform allows it to be non-allocating.
-class EscapeEnumerator {
-  Function &F;
-  const char *CleanupBBName;
-
-  // State.
-  int State;
-  Function::iterator StateBB, StateE;
-  IRBuilder<> Builder;
-
-public:
-  EscapeEnumerator(Function &F, const char *N = "cleanup")
-      : F(F), CleanupBBName(N), State(0), Builder(F.getContext()) {}
-
-  IRBuilder<> *Next() {
-    switch (State) {
-    default:
-      return nullptr;
-
-    case 0:
-      StateBB = F.begin();
-      StateE = F.end();
-      State = 1;
-
-    case 1:
-      // Find all 'return', 'resume', and 'unwind' instructions.
-      while (StateBB != StateE) {
-        BasicBlock *CurBB = StateBB++;
-
-        // Branches and invokes do not escape, only unwind, resume, and return
-        // do.
-        TerminatorInst *TI = CurBB->getTerminator();
-        if (!isa<ReturnInst>(TI) && !isa<ResumeInst>(TI))
-          continue;
-
-        Builder.SetInsertPoint(TI->getParent(), TI);
-        return &Builder;
-      }
-
-      State = 2;
-
-      // Find all 'call' instructions.
-      SmallVector<Instruction *, 16> Calls;
-      for (Function::iterator BB = F.begin(), E = F.end(); BB != E; ++BB)
-        for (BasicBlock::iterator II = BB->begin(), EE = BB->end(); II != EE;
-             ++II)
-          if (CallInst *CI = dyn_cast<CallInst>(II))
-            if (!CI->getCalledFunction() ||
-                !CI->getCalledFunction()->getIntrinsicID())
-              Calls.push_back(CI);
-
-      if (Calls.empty())
-        return nullptr;
-
-      // Create a cleanup block.
-      LLVMContext &C = F.getContext();
-      BasicBlock *CleanupBB = BasicBlock::Create(C, CleanupBBName, &F);
-      Type *ExnTy =
-          StructType::get(Type::getInt8PtrTy(C), Type::getInt32Ty(C), nullptr);
-      Constant *PersFn = F.getParent()->getOrInsertFunction(
-          "__gcc_personality_v0", FunctionType::get(Type::getInt32Ty(C), true));
-      LandingPadInst *LPad =
-          LandingPadInst::Create(ExnTy, PersFn, 1, "cleanup.lpad", CleanupBB);
-      LPad->setCleanup(true);
-      ResumeInst *RI = ResumeInst::Create(LPad, CleanupBB);
-
-      // Transform the 'call' instructions into 'invoke's branching to the
-      // cleanup block. Go in reverse order to make prettier BB names.
-      SmallVector<Value *, 16> Args;
-      for (unsigned I = Calls.size(); I != 0;) {
-        CallInst *CI = cast<CallInst>(Calls[--I]);
-
-        // Split the basic block containing the function call.
-        BasicBlock *CallBB = CI->getParent();
-        BasicBlock *NewBB =
-            CallBB->splitBasicBlock(CI, CallBB->getName() + ".cont");
-
-        // Remove the unconditional branch inserted at the end of CallBB.
-        CallBB->getInstList().pop_back();
-        NewBB->getInstList().remove(CI);
-
-        // Create a new invoke instruction.
-        Args.clear();
-        CallSite CS(CI);
-        Args.append(CS.arg_begin(), CS.arg_end());
-
-        InvokeInst *II =
-            InvokeInst::Create(CI->getCalledValue(), NewBB, CleanupBB, Args,
-                               CI->getName(), CallBB);
-        II->setCallingConv(CI->getCallingConv());
-        II->setAttributes(CI->getAttributes());
-        CI->replaceAllUsesWith(II);
-        delete CI;
-      }
-
-      Builder.SetInsertPoint(RI->getParent(), RI);
-      return &Builder;
-    }
-  }
-};
-}
-
-// -----------------------------------------------------------------------------
-
 void llvm::linkShadowStackGC() {}
 
-ShadowStackGC::ShadowStackGC() : Head(nullptr), StackEntryTy(nullptr) {
+ShadowStackGC::ShadowStackGC() {
   InitRoots = true;
   CustomRoots = true;
 }
-
-Constant *ShadowStackGC::GetFrameMap(Function &F) {
-  // doInitialization creates the abstract type of this value.
-  Type *VoidPtr = Type::getInt8PtrTy(F.getContext());
-
-  // Truncate the ShadowStackDescriptor if some metadata is null.
-  unsigned NumMeta = 0;
-  SmallVector<Constant *, 16> Metadata;
-  for (unsigned I = 0; I != Roots.size(); ++I) {
-    Constant *C = cast<Constant>(Roots[I].first->getArgOperand(1));
-    if (!C->isNullValue())
-      NumMeta = I + 1;
-    Metadata.push_back(ConstantExpr::getBitCast(C, VoidPtr));
-  }
-  Metadata.resize(NumMeta);
-
-  Type *Int32Ty = Type::getInt32Ty(F.getContext());
-
-  Constant *BaseElts[] = {
-      ConstantInt::get(Int32Ty, Roots.size(), false),
-      ConstantInt::get(Int32Ty, NumMeta, false),
-  };
-
-  Constant *DescriptorElts[] = {
-      ConstantStruct::get(FrameMapTy, BaseElts),
-      ConstantArray::get(ArrayType::get(VoidPtr, NumMeta), Metadata)};
-
-  Type *EltTys[] = {DescriptorElts[0]->getType(), DescriptorElts[1]->getType()};
-  StructType *STy = StructType::create(EltTys, "gc_map." + utostr(NumMeta));
-
-  Constant *FrameMap = ConstantStruct::get(STy, DescriptorElts);
-
-  // FIXME: Is this actually dangerous as WritingAnLLVMPass.html claims? Seems
-  //        that, short of multithreaded LLVM, it should be safe; all that is
-  //        necessary is that a simple Module::iterator loop not be invalidated.
-  //        Appending to the GlobalVariable list is safe in that sense.
-  //
-  //        All of the output passes emit globals last. The ExecutionEngine
-  //        explicitly supports adding globals to the module after
-  //        initialization.
-  //
-  //        Still, if it isn't deemed acceptable, then this transformation needs
-  //        to be a ModulePass (which means it cannot be in the 'llc' pipeline
-  //        (which uses a FunctionPassManager (which segfaults (not asserts) if
-  //        provided a ModulePass))).
-  Constant *GV = new GlobalVariable(*F.getParent(), FrameMap->getType(), true,
-                                    GlobalVariable::InternalLinkage, FrameMap,
-                                    "__gc_" + F.getName());
-
-  Constant *GEPIndices[2] = {
-      ConstantInt::get(Type::getInt32Ty(F.getContext()), 0),
-      ConstantInt::get(Type::getInt32Ty(F.getContext()), 0)};
-  return ConstantExpr::getGetElementPtr(GV, GEPIndices);
-}
-
-Type *ShadowStackGC::GetConcreteStackEntryType(Function &F) {
-  // doInitialization creates the generic version of this type.
-  std::vector<Type *> EltTys;
-  EltTys.push_back(StackEntryTy);
-  for (size_t I = 0; I != Roots.size(); I++)
-    EltTys.push_back(Roots[I].second->getAllocatedType());
-
-  return StructType::create(EltTys, "gc_stackentry." + F.getName().str());
-}
-
-/// doInitialization - If this module uses the GC intrinsics, find them now. If
-/// not, exit fast.
-bool ShadowStackGC::initializeCustomLowering(Module &M) {
-  // struct FrameMap {
-  //   int32_t NumRoots; // Number of roots in stack frame.
-  //   int32_t NumMeta;  // Number of metadata descriptors. May be < NumRoots.
-  //   void *Meta[];     // May be absent for roots without metadata.
-  // };
-  std::vector<Type *> EltTys;
-  // 32 bits is ok up to a 32GB stack frame. :)
-  EltTys.push_back(Type::getInt32Ty(M.getContext()));
-  // Specifies length of variable length array.
-  EltTys.push_back(Type::getInt32Ty(M.getContext()));
-  FrameMapTy = StructType::create(EltTys, "gc_map");
-  PointerType *FrameMapPtrTy = PointerType::getUnqual(FrameMapTy);
-
-  // struct StackEntry {
-  //   ShadowStackEntry *Next; // Caller's stack entry.
-  //   FrameMap *Map;          // Pointer to constant FrameMap.
-  //   void *Roots[];          // Stack roots (in-place array, so we pretend).
-  // };
-
-  StackEntryTy = StructType::create(M.getContext(), "gc_stackentry");
-
-  EltTys.clear();
-  EltTys.push_back(PointerType::getUnqual(StackEntryTy));
-  EltTys.push_back(FrameMapPtrTy);
-  StackEntryTy->setBody(EltTys);
-  PointerType *StackEntryPtrTy = PointerType::getUnqual(StackEntryTy);
-
-  // Get the root chain if it already exists.
-  Head = M.getGlobalVariable("llvm_gc_root_chain");
-  if (!Head) {
-    // If the root chain does not exist, insert a new one with linkonce
-    // linkage!
-    Head = new GlobalVariable(
-        M, StackEntryPtrTy, false, GlobalValue::LinkOnceAnyLinkage,
-        Constant::getNullValue(StackEntryPtrTy), "llvm_gc_root_chain");
-  } else if (Head->hasExternalLinkage() && Head->isDeclaration()) {
-    Head->setInitializer(Constant::getNullValue(StackEntryPtrTy));
-    Head->setLinkage(GlobalValue::LinkOnceAnyLinkage);
-  }
-
-  return true;
-}
-
-bool ShadowStackGC::IsNullValue(Value *V) {
-  if (Constant *C = dyn_cast<Constant>(V))
-    return C->isNullValue();
-  return false;
-}
-
-void ShadowStackGC::CollectRoots(Function &F) {
-  // FIXME: Account for original alignment. Could fragment the root array.
-  //   Approach 1: Null initialize empty slots at runtime. Yuck.
-  //   Approach 2: Emit a map of the array instead of just a count.
-
-  assert(Roots.empty() && "Not cleaned up?");
-
-  SmallVector<std::pair<CallInst *, AllocaInst *>, 16> MetaRoots;
-
-  for (Function::iterator BB = F.begin(), E = F.end(); BB != E; ++BB)
-    for (BasicBlock::iterator II = BB->begin(), E = BB->end(); II != E;)
-      if (IntrinsicInst *CI = dyn_cast<IntrinsicInst>(II++))
-        if (Function *F = CI->getCalledFunction())
-          if (F->getIntrinsicID() == Intrinsic::gcroot) {
-            std::pair<CallInst *, AllocaInst *> Pair = std::make_pair(
-                CI,
-                cast<AllocaInst>(CI->getArgOperand(0)->stripPointerCasts()));
-            if (IsNullValue(CI->getArgOperand(1)))
-              Roots.push_back(Pair);
-            else
-              MetaRoots.push_back(Pair);
-          }
-
-  // Number roots with metadata (usually empty) at the beginning, so that the
-  // FrameMap::Meta array can be elided.
-  Roots.insert(Roots.begin(), MetaRoots.begin(), MetaRoots.end());
-}
-
-GetElementPtrInst *ShadowStackGC::CreateGEP(LLVMContext &Context,
-                                            IRBuilder<> &B, Value *BasePtr,
-                                            int Idx, int Idx2,
-                                            const char *Name) {
-  Value *Indices[] = {ConstantInt::get(Type::getInt32Ty(Context), 0),
-                      ConstantInt::get(Type::getInt32Ty(Context), Idx),
-                      ConstantInt::get(Type::getInt32Ty(Context), Idx2)};
-  Value *Val = B.CreateGEP(BasePtr, Indices, Name);
-
-  assert(isa<GetElementPtrInst>(Val) && "Unexpected folded constant");
-
-  return dyn_cast<GetElementPtrInst>(Val);
-}
-
-GetElementPtrInst *ShadowStackGC::CreateGEP(LLVMContext &Context,
-                                            IRBuilder<> &B, Value *BasePtr,
-                                            int Idx, const char *Name) {
-  Value *Indices[] = {ConstantInt::get(Type::getInt32Ty(Context), 0),
-                      ConstantInt::get(Type::getInt32Ty(Context), Idx)};
-  Value *Val = B.CreateGEP(BasePtr, Indices, Name);
-
-  assert(isa<GetElementPtrInst>(Val) && "Unexpected folded constant");
-
-  return dyn_cast<GetElementPtrInst>(Val);
-}
-
-/// runOnFunction - Insert code to maintain the shadow stack.
-bool ShadowStackGC::performCustomLowering(Function &F) {
-  LLVMContext &Context = F.getContext();
-
-  // Find calls to llvm.gcroot.
-  CollectRoots(F);
-
-  // If there are no roots in this function, then there is no need to add a
-  // stack map entry for it.
-  if (Roots.empty())
-    return false;
-
-  // Build the constant map and figure the type of the shadow stack entry.
-  Value *FrameMap = GetFrameMap(F);
-  Type *ConcreteStackEntryTy = GetConcreteStackEntryType(F);
-
-  // Build the shadow stack entry at the very start of the function.
-  BasicBlock::iterator IP = F.getEntryBlock().begin();
-  IRBuilder<> AtEntry(IP->getParent(), IP);
-
-  Instruction *StackEntry =
-      AtEntry.CreateAlloca(ConcreteStackEntryTy, nullptr, "gc_frame");
-
-  while (isa<AllocaInst>(IP))
-    ++IP;
-  AtEntry.SetInsertPoint(IP->getParent(), IP);
-
-  // Initialize the map pointer and load the current head of the shadow stack.
-  Instruction *CurrentHead = AtEntry.CreateLoad(Head, "gc_currhead");
-  Instruction *EntryMapPtr =
-      CreateGEP(Context, AtEntry, StackEntry, 0, 1, "gc_frame.map");
-  AtEntry.CreateStore(FrameMap, EntryMapPtr);
-
-  // After all the allocas...
-  for (unsigned I = 0, E = Roots.size(); I != E; ++I) {
-    // For each root, find the corresponding slot in the aggregate...
-    Value *SlotPtr = CreateGEP(Context, AtEntry, StackEntry, 1 + I, "gc_root");
-
-    // And use it in lieu of the alloca.
-    AllocaInst *OriginalAlloca = Roots[I].second;
-    SlotPtr->takeName(OriginalAlloca);
-    OriginalAlloca->replaceAllUsesWith(SlotPtr);
-  }
-
-  // Move past the original stores inserted by GCStrategy::InitRoots. This isn't
-  // really necessary (the collector would never see the intermediate state at
-  // runtime), but it's nicer not to push the half-initialized entry onto the
-  // shadow stack.
-  while (isa<StoreInst>(IP))
-    ++IP;
-  AtEntry.SetInsertPoint(IP->getParent(), IP);
-
-  // Push the entry onto the shadow stack.
-  Instruction *EntryNextPtr =
-      CreateGEP(Context, AtEntry, StackEntry, 0, 0, "gc_frame.next");
-  Instruction *NewHeadVal =
-      CreateGEP(Context, AtEntry, StackEntry, 0, "gc_newhead");
-  AtEntry.CreateStore(CurrentHead, EntryNextPtr);
-  AtEntry.CreateStore(NewHeadVal, Head);
-
-  // For each instruction that escapes...
-  EscapeEnumerator EE(F, "gc_cleanup");
-  while (IRBuilder<> *AtExit = EE.Next()) {
-    // Pop the entry from the shadow stack. Don't reuse CurrentHead from
-    // AtEntry, since that would make the value live for the entire function.
-    Instruction *EntryNextPtr2 =
-        CreateGEP(Context, *AtExit, StackEntry, 0, 0, "gc_frame.next");
-    Value *SavedHead = AtExit->CreateLoad(EntryNextPtr2, "gc_savedhead");
-    AtExit->CreateStore(SavedHead, Head);
-  }
-
-  // Delete the original allocas (which are no longer used) and the intrinsic
-  // calls (which are no longer valid). Doing this last avoids invalidating
-  // iterators.
-  for (unsigned I = 0, E = Roots.size(); I != E; ++I) {
-    Roots[I].first->eraseFromParent();
-    Roots[I].second->eraseFromParent();
-  }
-
-  Roots.clear();
-  return true;
-}
@ -0,0 +1,457 @@
|
|||
//===-- ShadowStackGCLowering.cpp - Custom lowering for shadow-stack gc ---===//
|
||||
//
|
||||
// The LLVM Compiler Infrastructure
|
||||
//
|
||||
// This file is distributed under the University of Illinois Open Source
|
||||
// License. See LICENSE.TXT for details.
|
||||
//
|
||||
//===----------------------------------------------------------------------===//
|
||||
//
|
||||
// This file contains the custom lowering code required by the shadow-stack GC
|
||||
// strategy.
|
||||
//
|
||||
//===----------------------------------------------------------------------===//
|
||||
|
||||
#include "llvm/CodeGen/Passes.h"
|
||||
#include "llvm/CodeGen/GCStrategy.h"
|
||||
#include "llvm/ADT/StringExtras.h"
|
||||
#include "llvm/IR/CallSite.h"
|
||||
#include "llvm/IR/IRBuilder.h"
|
||||
#include "llvm/IR/IntrinsicInst.h"
|
||||
#include "llvm/IR/Module.h"
|
||||
|
||||
using namespace llvm;
|
||||
|
||||
#define DEBUG_TYPE "shadowstackgclowering"
|
||||
|
||||
namespace {
|
||||
|
||||
class ShadowStackGCLowering : public FunctionPass {
|
||||
/// RootChain - This is the global linked-list that contains the chain of GC
|
||||
/// roots.
|
||||
GlobalVariable *Head;
|
||||
|
||||
/// StackEntryTy - Abstract type of a link in the shadow stack.
|
||||
///
|
||||
StructType *StackEntryTy;
|
||||
StructType *FrameMapTy;
|
||||
|
||||
/// Roots - GC roots in the current function. Each is a pair of the
|
||||
/// intrinsic call and its corresponding alloca.
|
||||
std::vector<std::pair<CallInst *, AllocaInst *>> Roots;
|
||||
|
||||
public:
|
||||
static char ID;
|
||||
ShadowStackGCLowering();
|
||||
|
||||
bool doInitialization(Module &M) override;
|
||||
bool runOnFunction(Function &F) override;
|
||||
|
||||
private:
|
||||
bool IsNullValue(Value *V);
|
||||
Constant *GetFrameMap(Function &F);
|
||||
Type *GetConcreteStackEntryType(Function &F);
|
||||
void CollectRoots(Function &F);
|
||||
static GetElementPtrInst *CreateGEP(LLVMContext &Context, IRBuilder<> &B,
|
||||
Value *BasePtr, int Idx1,
|
||||
const char *Name);
|
||||
static GetElementPtrInst *CreateGEP(LLVMContext &Context, IRBuilder<> &B,
|
||||
Value *BasePtr, int Idx1, int Idx2,
|
||||
const char *Name);
|
||||
};
|
||||
}
|
||||
|
||||
INITIALIZE_PASS_BEGIN(ShadowStackGCLowering, "shadow-stack-gc-lowering",
|
||||
"Shadow Stack GC Lowering", false, false)
|
||||
INITIALIZE_PASS_DEPENDENCY(GCModuleInfo)
|
||||
INITIALIZE_PASS_END(ShadowStackGCLowering, "shadow-stack-gc-lowering",
|
||||
"Shadow Stack GC Lowering", false, false)
|
||||
|
||||
FunctionPass *llvm::createShadowStackGCLoweringPass() { return new ShadowStackGCLowering(); }
|
||||
|
||||
char ShadowStackGCLowering::ID = 0;
|
||||
|
||||
ShadowStackGCLowering::ShadowStackGCLowering()
|
||||
: FunctionPass(ID), Head(nullptr), StackEntryTy(nullptr),
|
||||
FrameMapTy(nullptr) {
|
||||
initializeShadowStackGCLoweringPass(*PassRegistry::getPassRegistry());
|
||||
}

namespace {
/// EscapeEnumerator - This is a little algorithm to find all escape points
/// from a function so that "finally"-style code can be inserted. In addition
/// to finding the existing return and unwind instructions, it also (if
/// necessary) transforms any call instructions into invokes and sends them to
/// a landing pad.
///
/// It's wrapped up in a state machine using the same transform C# uses for
/// 'yield return' enumerators. This transform allows it to be non-allocating.
class EscapeEnumerator {
  Function &F;
  const char *CleanupBBName;

  // State.
  int State;
  Function::iterator StateBB, StateE;
  IRBuilder<> Builder;

public:
  EscapeEnumerator(Function &F, const char *N = "cleanup")
      : F(F), CleanupBBName(N), State(0), Builder(F.getContext()) {}

  IRBuilder<> *Next() {
    switch (State) {
    default:
      return nullptr;

    case 0:
      StateBB = F.begin();
      StateE = F.end();
      State = 1;

    case 1:
      // Find all 'return', 'resume', and 'unwind' instructions.
      while (StateBB != StateE) {
        BasicBlock *CurBB = StateBB++;

        // Branches and invokes do not escape, only unwind, resume, and return
        // do.
        TerminatorInst *TI = CurBB->getTerminator();
        if (!isa<ReturnInst>(TI) && !isa<ResumeInst>(TI))
          continue;

        Builder.SetInsertPoint(TI->getParent(), TI);
        return &Builder;
      }

      State = 2;

      // Find all 'call' instructions.
      SmallVector<Instruction *, 16> Calls;
      for (Function::iterator BB = F.begin(), E = F.end(); BB != E; ++BB)
        for (BasicBlock::iterator II = BB->begin(), EE = BB->end(); II != EE;
             ++II)
          if (CallInst *CI = dyn_cast<CallInst>(II))
            if (!CI->getCalledFunction() ||
                !CI->getCalledFunction()->getIntrinsicID())
              Calls.push_back(CI);

      if (Calls.empty())
        return nullptr;

      // Create a cleanup block.
      LLVMContext &C = F.getContext();
      BasicBlock *CleanupBB = BasicBlock::Create(C, CleanupBBName, &F);
      Type *ExnTy =
          StructType::get(Type::getInt8PtrTy(C), Type::getInt32Ty(C), nullptr);
      Constant *PersFn = F.getParent()->getOrInsertFunction(
          "__gcc_personality_v0", FunctionType::get(Type::getInt32Ty(C), true));
      LandingPadInst *LPad =
          LandingPadInst::Create(ExnTy, PersFn, 1, "cleanup.lpad", CleanupBB);
      LPad->setCleanup(true);
      ResumeInst *RI = ResumeInst::Create(LPad, CleanupBB);

      // Transform the 'call' instructions into 'invoke's branching to the
      // cleanup block. Go in reverse order to make prettier BB names.
      SmallVector<Value *, 16> Args;
      for (unsigned I = Calls.size(); I != 0;) {
        CallInst *CI = cast<CallInst>(Calls[--I]);

        // Split the basic block containing the function call.
        BasicBlock *CallBB = CI->getParent();
        BasicBlock *NewBB =
            CallBB->splitBasicBlock(CI, CallBB->getName() + ".cont");

        // Remove the unconditional branch inserted at the end of CallBB.
        CallBB->getInstList().pop_back();
        NewBB->getInstList().remove(CI);

        // Create a new invoke instruction.
        Args.clear();
        CallSite CS(CI);
        Args.append(CS.arg_begin(), CS.arg_end());

        InvokeInst *II =
            InvokeInst::Create(CI->getCalledValue(), NewBB, CleanupBB, Args,
                               CI->getName(), CallBB);
        II->setCallingConv(CI->getCallingConv());
        II->setAttributes(CI->getAttributes());
        CI->replaceAllUsesWith(II);
        delete CI;
      }

      Builder.SetInsertPoint(RI->getParent(), RI);
      return &Builder;
    }
  }
};
}

Constant *ShadowStackGCLowering::GetFrameMap(Function &F) {
  // doInitialization creates the abstract type of this value.
  Type *VoidPtr = Type::getInt8PtrTy(F.getContext());

  // Truncate the ShadowStackDescriptor if some metadata is null.
  unsigned NumMeta = 0;
  SmallVector<Constant *, 16> Metadata;
  for (unsigned I = 0; I != Roots.size(); ++I) {
    Constant *C = cast<Constant>(Roots[I].first->getArgOperand(1));
    if (!C->isNullValue())
      NumMeta = I + 1;
    Metadata.push_back(ConstantExpr::getBitCast(C, VoidPtr));
  }
  Metadata.resize(NumMeta);

  Type *Int32Ty = Type::getInt32Ty(F.getContext());

  Constant *BaseElts[] = {
      ConstantInt::get(Int32Ty, Roots.size(), false),
      ConstantInt::get(Int32Ty, NumMeta, false),
  };

  Constant *DescriptorElts[] = {
      ConstantStruct::get(FrameMapTy, BaseElts),
      ConstantArray::get(ArrayType::get(VoidPtr, NumMeta), Metadata)};

  Type *EltTys[] = {DescriptorElts[0]->getType(), DescriptorElts[1]->getType()};
  StructType *STy = StructType::create(EltTys, "gc_map." + utostr(NumMeta));

  Constant *FrameMap = ConstantStruct::get(STy, DescriptorElts);

  // FIXME: Is this actually dangerous as WritingAnLLVMPass.html claims? Seems
  // that, short of multithreaded LLVM, it should be safe; all that is
  // necessary is that a simple Module::iterator loop not be invalidated.
  // Appending to the GlobalVariable list is safe in that sense.
  //
  // All of the output passes emit globals last. The ExecutionEngine
  // explicitly supports adding globals to the module after
  // initialization.
  //
  // Still, if it isn't deemed acceptable, then this transformation needs
  // to be a ModulePass (which means it cannot be in the 'llc' pipeline
  // (which uses a FunctionPassManager (which segfaults (not asserts) if
  // provided a ModulePass))).
  Constant *GV = new GlobalVariable(*F.getParent(), FrameMap->getType(), true,
                                    GlobalVariable::InternalLinkage, FrameMap,
                                    "__gc_" + F.getName());

  Constant *GEPIndices[2] = {
      ConstantInt::get(Type::getInt32Ty(F.getContext()), 0),
      ConstantInt::get(Type::getInt32Ty(F.getContext()), 0)};
  return ConstantExpr::getGetElementPtr(GV, GEPIndices);
}

Type *ShadowStackGCLowering::GetConcreteStackEntryType(Function &F) {
  // doInitialization creates the generic version of this type.
  std::vector<Type *> EltTys;
  EltTys.push_back(StackEntryTy);
  for (size_t I = 0; I != Roots.size(); I++)
    EltTys.push_back(Roots[I].second->getAllocatedType());

  return StructType::create(EltTys, "gc_stackentry." + F.getName().str());
}

/// doInitialization - If this module uses the GC intrinsics, find them now. If
/// not, exit fast.
bool ShadowStackGCLowering::doInitialization(Module &M) {
  bool Active = false;
  for (Function &F : M) {
    if (F.hasGC() && F.getGC() == std::string("shadow-stack")) {
      Active = true;
      break;
    }
  }
  if (!Active)
    return false;

  // struct FrameMap {
  //   int32_t NumRoots; // Number of roots in stack frame.
  //   int32_t NumMeta;  // Number of metadata descriptors. May be < NumRoots.
  //   void *Meta[];     // May be absent for roots without metadata.
  // };
  std::vector<Type *> EltTys;
  // 32 bits is ok up to a 32GB stack frame. :)
  EltTys.push_back(Type::getInt32Ty(M.getContext()));
  // Specifies length of variable length array.
  EltTys.push_back(Type::getInt32Ty(M.getContext()));
  FrameMapTy = StructType::create(EltTys, "gc_map");
  PointerType *FrameMapPtrTy = PointerType::getUnqual(FrameMapTy);

  // struct StackEntry {
  //   ShadowStackEntry *Next; // Caller's stack entry.
  //   FrameMap *Map;          // Pointer to constant FrameMap.
  //   void *Roots[];          // Stack roots (in-place array, so we pretend).
  // };

  StackEntryTy = StructType::create(M.getContext(), "gc_stackentry");

  EltTys.clear();
  EltTys.push_back(PointerType::getUnqual(StackEntryTy));
  EltTys.push_back(FrameMapPtrTy);
  StackEntryTy->setBody(EltTys);
  PointerType *StackEntryPtrTy = PointerType::getUnqual(StackEntryTy);

  // Get the root chain if it already exists.
  Head = M.getGlobalVariable("llvm_gc_root_chain");
  if (!Head) {
    // If the root chain does not exist, insert a new one with linkonce
    // linkage!
    Head = new GlobalVariable(
        M, StackEntryPtrTy, false, GlobalValue::LinkOnceAnyLinkage,
        Constant::getNullValue(StackEntryPtrTy), "llvm_gc_root_chain");
  } else if (Head->hasExternalLinkage() && Head->isDeclaration()) {
    Head->setInitializer(Constant::getNullValue(StackEntryPtrTy));
    Head->setLinkage(GlobalValue::LinkOnceAnyLinkage);
  }

  return true;
}

bool ShadowStackGCLowering::IsNullValue(Value *V) {
  if (Constant *C = dyn_cast<Constant>(V))
    return C->isNullValue();
  return false;
}

void ShadowStackGCLowering::CollectRoots(Function &F) {
  // FIXME: Account for original alignment. Could fragment the root array.
  //   Approach 1: Null initialize empty slots at runtime. Yuck.
  //   Approach 2: Emit a map of the array instead of just a count.

  assert(Roots.empty() && "Not cleaned up?");

  SmallVector<std::pair<CallInst *, AllocaInst *>, 16> MetaRoots;

  for (Function::iterator BB = F.begin(), E = F.end(); BB != E; ++BB)
    for (BasicBlock::iterator II = BB->begin(), E = BB->end(); II != E;)
      if (IntrinsicInst *CI = dyn_cast<IntrinsicInst>(II++))
        if (Function *F = CI->getCalledFunction())
          if (F->getIntrinsicID() == Intrinsic::gcroot) {
            std::pair<CallInst *, AllocaInst *> Pair = std::make_pair(
                CI,
                cast<AllocaInst>(CI->getArgOperand(0)->stripPointerCasts()));
            if (IsNullValue(CI->getArgOperand(1)))
              Roots.push_back(Pair);
            else
              MetaRoots.push_back(Pair);
          }

  // Number roots with metadata (usually empty) at the beginning, so that the
  // FrameMap::Meta array can be elided.
  Roots.insert(Roots.begin(), MetaRoots.begin(), MetaRoots.end());
}

GetElementPtrInst *ShadowStackGCLowering::CreateGEP(LLVMContext &Context,
                                                    IRBuilder<> &B,
                                                    Value *BasePtr, int Idx,
                                                    int Idx2,
                                                    const char *Name) {
  Value *Indices[] = {ConstantInt::get(Type::getInt32Ty(Context), 0),
                      ConstantInt::get(Type::getInt32Ty(Context), Idx),
                      ConstantInt::get(Type::getInt32Ty(Context), Idx2)};
  Value *Val = B.CreateGEP(BasePtr, Indices, Name);

  assert(isa<GetElementPtrInst>(Val) && "Unexpected folded constant");

  return dyn_cast<GetElementPtrInst>(Val);
}

GetElementPtrInst *ShadowStackGCLowering::CreateGEP(LLVMContext &Context,
                                                    IRBuilder<> &B,
                                                    Value *BasePtr, int Idx,
                                                    const char *Name) {
  Value *Indices[] = {ConstantInt::get(Type::getInt32Ty(Context), 0),
                      ConstantInt::get(Type::getInt32Ty(Context), Idx)};
  Value *Val = B.CreateGEP(BasePtr, Indices, Name);

  assert(isa<GetElementPtrInst>(Val) && "Unexpected folded constant");

  return dyn_cast<GetElementPtrInst>(Val);
}

/// runOnFunction - Insert code to maintain the shadow stack.
bool ShadowStackGCLowering::runOnFunction(Function &F) {
  // Quick exit for functions that do not use the shadow stack GC.
  if (!F.hasGC() || F.getGC() != std::string("shadow-stack"))
    return false;

  LLVMContext &Context = F.getContext();

  // Find calls to llvm.gcroot.
  CollectRoots(F);

  // If there are no roots in this function, then there is no need to add a
  // stack map entry for it.
  if (Roots.empty())
    return false;

  // Build the constant map and figure the type of the shadow stack entry.
  Value *FrameMap = GetFrameMap(F);
  Type *ConcreteStackEntryTy = GetConcreteStackEntryType(F);

  // Build the shadow stack entry at the very start of the function.
  BasicBlock::iterator IP = F.getEntryBlock().begin();
  IRBuilder<> AtEntry(IP->getParent(), IP);

  Instruction *StackEntry =
      AtEntry.CreateAlloca(ConcreteStackEntryTy, nullptr, "gc_frame");

  while (isa<AllocaInst>(IP))
    ++IP;
  AtEntry.SetInsertPoint(IP->getParent(), IP);

  // Initialize the map pointer and load the current head of the shadow stack.
  Instruction *CurrentHead = AtEntry.CreateLoad(Head, "gc_currhead");
  Instruction *EntryMapPtr =
      CreateGEP(Context, AtEntry, StackEntry, 0, 1, "gc_frame.map");
  AtEntry.CreateStore(FrameMap, EntryMapPtr);

  // After all the allocas...
  for (unsigned I = 0, E = Roots.size(); I != E; ++I) {
    // For each root, find the corresponding slot in the aggregate...
    Value *SlotPtr = CreateGEP(Context, AtEntry, StackEntry, 1 + I, "gc_root");

    // And use it in lieu of the alloca.
    AllocaInst *OriginalAlloca = Roots[I].second;
    SlotPtr->takeName(OriginalAlloca);
    OriginalAlloca->replaceAllUsesWith(SlotPtr);
  }

  // Move past the original stores inserted by GCStrategy::InitRoots. This
  // isn't really necessary (the collector would never see the intermediate
  // state at runtime), but it's nicer not to push the half-initialized entry
  // onto the shadow stack.
  while (isa<StoreInst>(IP))
    ++IP;
  AtEntry.SetInsertPoint(IP->getParent(), IP);

  // Push the entry onto the shadow stack.
  Instruction *EntryNextPtr =
      CreateGEP(Context, AtEntry, StackEntry, 0, 0, "gc_frame.next");
  Instruction *NewHeadVal =
      CreateGEP(Context, AtEntry, StackEntry, 0, "gc_newhead");
  AtEntry.CreateStore(CurrentHead, EntryNextPtr);
  AtEntry.CreateStore(NewHeadVal, Head);

  // For each instruction that escapes...
  EscapeEnumerator EE(F, "gc_cleanup");
  while (IRBuilder<> *AtExit = EE.Next()) {
    // Pop the entry from the shadow stack. Don't reuse CurrentHead from
    // AtEntry, since that would make the value live for the entire function.
    Instruction *EntryNextPtr2 =
        CreateGEP(Context, *AtExit, StackEntry, 0, 0, "gc_frame.next");
    Value *SavedHead = AtExit->CreateLoad(EntryNextPtr2, "gc_savedhead");
    AtExit->CreateStore(SavedHead, Head);
  }

  // Delete the original allocas (which are no longer used) and the intrinsic
  // calls (which are no longer valid). Doing this last avoids invalidating
  // iterators.
  for (unsigned I = 0, E = Roots.size(); I != E; ++I) {
    Roots[I].first->eraseFromParent();
    Roots[I].second->eraseFromParent();
  }

  Roots.clear();
  return true;
}