
//===-- FunctionLoweringInfo.cpp ------------------------------------------===//
//
// The LLVM Compiler Infrastructure
//
// This file is distributed under the University of Illinois Open Source
// License. See LICENSE.TXT for details.
//
//===----------------------------------------------------------------------===//
//
// This implements routines for translating functions from LLVM IR into
// Machine IR.
//
//===----------------------------------------------------------------------===//
#include "llvm/CodeGen/FunctionLoweringInfo.h"
#include "llvm/CodeGen/Analysis.h"
#include "llvm/CodeGen/MachineFrameInfo.h"
#include "llvm/CodeGen/MachineFunction.h"
#include "llvm/CodeGen/MachineInstrBuilder.h"
#include "llvm/CodeGen/MachineModuleInfo.h"
#include "llvm/CodeGen/MachineRegisterInfo.h"
#include "llvm/CodeGen/WinEHFuncInfo.h"
#include "llvm/IR/DataLayout.h"
#include "llvm/IR/DebugInfo.h"
#include "llvm/IR/DerivedTypes.h"
#include "llvm/IR/Function.h"
#include "llvm/IR/Instructions.h"
#include "llvm/IR/IntrinsicInst.h"
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/Module.h"
#include "llvm/Support/Debug.h"
#include "llvm/Support/ErrorHandling.h"
#include "llvm/Support/MathExtras.h"
#include "llvm/Support/raw_ostream.h"
#include "llvm/Target/TargetFrameLowering.h"
#include "llvm/Target/TargetInstrInfo.h"
#include "llvm/Target/TargetLowering.h"
#include "llvm/Target/TargetOptions.h"
#include "llvm/Target/TargetRegisterInfo.h"
#include "llvm/Target/TargetSubtargetInfo.h"
#include <algorithm>

using namespace llvm;

#define DEBUG_TYPE "function-lowering-info"

/// isUsedOutsideOfDefiningBlock - Return true if this instruction is used by
/// PHI nodes or outside of the basic block that defines it, or used by a
/// switch or atomic instruction, which may expand to multiple basic blocks.
static bool isUsedOutsideOfDefiningBlock(const Instruction *I) {
  if (I->use_empty()) return false;
  if (isa<PHINode>(I)) return true;
  const BasicBlock *BB = I->getParent();
  for (const User *U : I->users())
    if (cast<Instruction>(U)->getParent() != BB || isa<PHINode>(U))
      return true;
  return false;
}

static ISD::NodeType getPreferredExtendForValue(const Value *V) {
  // If the value is used by more signed comparisons than unsigned ones, we
  // prefer SIGN_EXTEND; this lets us drop some redundant sign or zero
  // extensions and exposes more machine CSE opportunities.
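  //
  // For example, given (hypothetical IR, for illustration only):
  //   %c1 = icmp slt i32 %v, 10
  //   %c2 = icmp sgt i32 %v, 0
  //   %c3 = icmp ult i32 %v, 100
  // %v has two signed compare users and one unsigned one, so SIGN_EXTEND
  // is chosen.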
  ISD::NodeType ExtendKind = ISD::ANY_EXTEND;
  unsigned NumOfSigned = 0, NumOfUnsigned = 0;
  for (const User *U : V->users()) {
    if (const auto *CI = dyn_cast<CmpInst>(U)) {
      NumOfSigned += CI->isSigned();
      NumOfUnsigned += CI->isUnsigned();
    }
  }
  if (NumOfSigned > NumOfUnsigned)
    ExtendKind = ISD::SIGN_EXTEND;
  return ExtendKind;
}
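
/// set - Initialize this FunctionLoweringInfo with the given Function and its
/// associated MachineFunction.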
void FunctionLoweringInfo::set(const Function &fn, MachineFunction &mf,
                               SelectionDAG *DAG) {
  Fn = &fn;
  MF = &mf;
  TLI = MF->getSubtarget().getTargetLowering();
  RegInfo = &MF->getRegInfo();
  MachineModuleInfo &MMI = MF->getMMI();
  const TargetFrameLowering *TFI = MF->getSubtarget().getFrameLowering();
  unsigned StackAlign = TFI->getStackAlignment();

  // Check whether the function can return without sret-demotion.
  SmallVector<ISD::OutputArg, 4> Outs;
  GetReturnInfo(Fn->getReturnType(), Fn->getAttributes(), Outs, *TLI,
                mf.getDataLayout());
  CanLowerReturn = TLI->CanLowerReturn(Fn->getCallingConv(), *MF,
                                       Fn->isVarArg(), Outs, Fn->getContext());

  // If this personality uses funclets, we need to do a bit more work.
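  // Track, for each catch object alloca, the slots in the WinEH data whose
  // frame index must be filled in once the alloca is assigned a frame index
  // below.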
  DenseMap<const AllocaInst *, TinyPtrVector<int *>> CatchObjects;
  EHPersonality Personality = classifyEHPersonality(
      Fn->hasPersonalityFn() ? Fn->getPersonalityFn() : nullptr);
  if (isFuncletEHPersonality(Personality)) {
    // Calculate state numbers if we haven't already.
    WinEHFuncInfo &EHInfo = *MF->getWinEHFuncInfo();
    if (Personality == EHPersonality::MSVC_CXX)
      calculateWinCXXEHStateNumbers(&fn, EHInfo);
    else if (isAsynchronousEHPersonality(Personality))
      calculateSEHStateNumbers(&fn, EHInfo);
    else if (Personality == EHPersonality::CoreCLR)
      calculateClrEHStateNumbers(&fn, EHInfo);

    // Map all BB references in the WinEH data to MBBs.
    for (WinEHTryBlockMapEntry &TBME : EHInfo.TryBlockMap) {
      for (WinEHHandlerType &H : TBME.HandlerArray) {
        if (const AllocaInst *AI = H.CatchObj.Alloca)
          CatchObjects.insert({AI, {}}).first->second.push_back(
              &H.CatchObj.FrameIndex);
        else
          H.CatchObj.FrameIndex = INT_MAX;
      }
    }
  }

  // Initialize the mapping of values to registers. This is only set up for
  // instruction values that are used outside of the block that defines
  // them.
  Function::const_iterator BB = Fn->begin(), EB = Fn->end();
  for (; BB != EB; ++BB)
    for (BasicBlock::const_iterator I = BB->begin(), E = BB->end();
         I != E; ++I) {
      if (const AllocaInst *AI = dyn_cast<AllocaInst>(I)) {
        Type *Ty = AI->getAllocatedType();
        unsigned Align =
            std::max((unsigned)MF->getDataLayout().getPrefTypeAlignment(Ty),
                     AI->getAlignment());

        // Static allocas can be folded into the initial stack frame
        // adjustment. For targets that don't realign the stack, don't
        // do this if there is an extra alignment requirement.
        if (AI->isStaticAlloca() &&
            (TFI->isStackRealignable() || (Align <= StackAlign))) {
          const ConstantInt *CUI = cast<ConstantInt>(AI->getArraySize());
          uint64_t TySize = MF->getDataLayout().getTypeAllocSize(Ty);

          TySize *= CUI->getZExtValue();   // Get total allocated size.
          if (TySize == 0) TySize = 1; // Don't create zero-sized stack objects.
          int FrameIndex = INT_MAX;
          auto Iter = CatchObjects.find(AI);
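          // Some targets require catch objects to sit at a fixed offset in
          // the frame so the runtime can locate them; give those a fixed
          // object instead of an ordinary static stack object. (Rationale
          // inferred from the needsFixedCatchObjects() hook.)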
          if (Iter != CatchObjects.end() && TLI->needsFixedCatchObjects()) {
            FrameIndex = MF->getFrameInfo().CreateFixedObject(
                TySize, 0, /*Immutable=*/false, /*isAliased=*/true);
            MF->getFrameInfo().setObjectAlignment(FrameIndex, Align);
          } else {
            FrameIndex =
                MF->getFrameInfo().CreateStackObject(TySize, Align, false, AI);
          }

          StaticAllocaMap[AI] = FrameIndex;

          // Update the catch handler information.
          if (Iter != CatchObjects.end()) {
            for (int *CatchObjPtr : Iter->second)
              *CatchObjPtr = FrameIndex;
          }
        } else {
          // FIXME: Overaligned static allocas should be grouped into
          // a single dynamic allocation instead of using a separate
          // stack allocation for each one.
          if (Align <= StackAlign)
            Align = 0;
          // Inform the Frame Information that we have variable-sized objects.
          MF->getFrameInfo().CreateVariableSizedObject(Align ? Align : 1, AI);
        }
      }

      // Look for inline asm that clobbers the SP register.
      if (isa<CallInst>(I) || isa<InvokeInst>(I)) {
        ImmutableCallSite CS(&*I);
        if (isa<InlineAsm>(CS.getCalledValue())) {
          unsigned SP = TLI->getStackPointerRegisterToSaveRestore();
          const TargetRegisterInfo *TRI = MF->getSubtarget().getRegisterInfo();
          std::vector<TargetLowering::AsmOperandInfo> Ops =
              TLI->ParseConstraints(Fn->getParent()->getDataLayout(), TRI, CS);
          for (size_t I = 0, E = Ops.size(); I != E; ++I) {
            TargetLowering::AsmOperandInfo &Op = Ops[I];
            if (Op.Type == InlineAsm::isClobber) {
              // Clobbers don't have SDValue operands, hence SDValue().
              TLI->ComputeConstraintToUse(Op, SDValue(), DAG);
              std::pair<unsigned, const TargetRegisterClass *> PhysReg =
                  TLI->getRegForInlineAsmConstraint(TRI, Op.ConstraintCode,
                                                    Op.ConstraintVT);
              if (PhysReg.first == SP)
                MF->getFrameInfo().setHasOpaqueSPAdjustment(true);
            }
          }
        }
      }

      // Look for calls to the @llvm.va_start intrinsic. We can omit some
      // prologue boilerplate for variadic functions that don't examine their
      // arguments.
      if (const auto *II = dyn_cast<IntrinsicInst>(I)) {
        if (II->getIntrinsicID() == Intrinsic::vastart)
          MF->getFrameInfo().setHasVAStart(true);
      }

      // If we have a musttail call in a variadic function, we need to ensure
      // we forward implicit register parameters.
      if (const auto *CI = dyn_cast<CallInst>(I)) {
        if (CI->isMustTailCall() && Fn->isVarArg())
          MF->getFrameInfo().setHasMustTailInVarArgFunc(true);
      }

      // Mark values used outside their block as exported, by allocating
      // a virtual register for them.
      if (isUsedOutsideOfDefiningBlock(&*I))
        if (!isa<AllocaInst>(I) || !StaticAllocaMap.count(cast<AllocaInst>(I)))
          InitializeRegForValue(&*I);

      // Collect llvm.dbg.declare information. This is done now instead of
      // during the initial isel pass through the IR so that it is done
      // in a predictable order.
      if (const DbgDeclareInst *DI = dyn_cast<DbgDeclareInst>(I)) {
        assert(DI->getVariable() && "Missing variable");
        assert(DI->getDebugLoc() && "Missing location");
        if (MMI.hasDebugInfo()) {
          // Don't handle byval struct arguments or VLAs, for example.
          // Non-byval arguments are handled here (they refer to the stack
          // temporary alloca at this point).
          const Value *Address = DI->getAddress();
          if (Address) {
            if (const BitCastInst *BCI = dyn_cast<BitCastInst>(Address))
              Address = BCI->getOperand(0);
            if (const AllocaInst *AI = dyn_cast<AllocaInst>(Address)) {
              DenseMap<const AllocaInst *, int>::iterator SI =
                  StaticAllocaMap.find(AI);
              if (SI != StaticAllocaMap.end()) { // Check for VLAs.
                int FI = SI->second;
                MF->setVariableDbgInfo(DI->getVariable(), DI->getExpression(),
                                       FI, DI->getDebugLoc());
              }
            }
          }
        }
      }

      // Decide the preferred extend type for a value.
      PreferredExtendType[&*I] = getPreferredExtendForValue(&*I);
    }

  // Create an initial MachineBasicBlock for each LLVM BasicBlock in F. This
  // also creates the initial PHI MachineInstrs, though none of the input
  // operands are populated.
  for (BB = Fn->begin(); BB != EB; ++BB) {
    // Don't create MachineBasicBlocks for imaginary EH pad blocks. These
    // blocks are really data, and no instructions can live here.
    if (BB->isEHPad()) {
      const Instruction *I = BB->getFirstNonPHI();
      // If this is a non-landingpad EH pad, mark this function as using
      // funclets.
      // FIXME: SEH catchpads do not create funclets, so we could avoid setting
      // this in such cases in order to improve frame layout.
      if (!isa<LandingPadInst>(I)) {
        MF->setHasEHFunclets(true);
        MF->getFrameInfo().setHasOpaqueSPAdjustment(true);
      }
      if (isa<CatchSwitchInst>(I)) {
        assert(&*BB->begin() == I &&
               "WinEHPrepare failed to remove PHIs from imaginary BBs");
        continue;
      }
      if (isa<FuncletPadInst>(I))
        assert(&*BB->begin() == I && "WinEHPrepare failed to demote PHIs");
    }

    MachineBasicBlock *MBB = mf.CreateMachineBasicBlock(&*BB);
    MBBMap[&*BB] = MBB;
    MF->push_back(MBB);

    // Transfer the address-taken flag. This is necessary because there could
    // be multiple MachineBasicBlocks corresponding to one BasicBlock, and only
    // the first one should be marked.
    if (BB->hasAddressTaken())
      MBB->setHasAddressTaken();

    // Create Machine PHI nodes for LLVM PHI nodes, lowering them as
    // appropriate.
    for (BasicBlock::const_iterator I = BB->begin();
         const PHINode *PN = dyn_cast<PHINode>(I); ++I) {
      if (PN->use_empty()) continue;

      // Skip empty types
      if (PN->getType()->isEmptyTy())
        continue;

      DebugLoc DL = PN->getDebugLoc();
      unsigned PHIReg = ValueMap[PN];
      assert(PHIReg && "PHI node does not have an assigned virtual register!");

      SmallVector<EVT, 4> ValueVTs;
      ComputeValueVTs(*TLI, MF->getDataLayout(), PN->getType(), ValueVTs);
      for (unsigned vti = 0, vte = ValueVTs.size(); vti != vte; ++vti) {
        EVT VT = ValueVTs[vti];
        unsigned NumRegisters = TLI->getNumRegisters(Fn->getContext(), VT);
        const TargetInstrInfo *TII = MF->getSubtarget().getInstrInfo();
        for (unsigned i = 0; i != NumRegisters; ++i)
          BuildMI(MBB, DL, TII->get(TargetOpcode::PHI), PHIReg + i);
        PHIReg += NumRegisters;
      }
    }
  }

  // Mark landing pad blocks.
  SmallVector<const LandingPadInst *, 4> LPads;
  for (BB = Fn->begin(); BB != EB; ++BB) {
    const Instruction *FNP = BB->getFirstNonPHI();
    if (BB->isEHPad() && MBBMap.count(&*BB))
      MBBMap[&*BB]->setIsEHPad();
    if (const auto *LPI = dyn_cast<LandingPadInst>(FNP))
      LPads.push_back(LPI);
  }
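
  // Everything past this point applies only to funclet-based personalities.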
  if (!isFuncletEHPersonality(Personality))
    return;

  WinEHFuncInfo &EHInfo = *MF->getWinEHFuncInfo();

  // Map all BB references in the WinEH data to MBBs.
  for (WinEHTryBlockMapEntry &TBME : EHInfo.TryBlockMap) {
    for (WinEHHandlerType &H : TBME.HandlerArray) {
      if (H.Handler)
        H.Handler = MBBMap[H.Handler.get<const BasicBlock *>()];
    }
  }
  for (CxxUnwindMapEntry &UME : EHInfo.CxxUnwindMap)
    if (UME.Cleanup)
      UME.Cleanup = MBBMap[UME.Cleanup.get<const BasicBlock *>()];
  for (SEHUnwindMapEntry &UME : EHInfo.SEHUnwindMap) {
    const BasicBlock *BB = UME.Handler.get<const BasicBlock *>();
    UME.Handler = MBBMap[BB];
  }
  for (ClrEHUnwindMapEntry &CME : EHInfo.ClrEHUnwindMap) {
    const BasicBlock *BB = CME.Handler.get<const BasicBlock *>();
    CME.Handler = MBBMap[BB];
  }
}

/// clear - Clear out all the function-specific state. This returns this
/// FunctionLoweringInfo to an empty state, ready to be used for a
/// different function.
void FunctionLoweringInfo::clear() {
  MBBMap.clear();
  ValueMap.clear();
  StaticAllocaMap.clear();
  LiveOutRegInfo.clear();
  VisitedBBs.clear();
  ArgDbgValues.clear();
  ByValArgFrameIndexMap.clear();
  RegFixups.clear();
  StatepointStackSlots.clear();
  StatepointSpillMaps.clear();
  PreferredExtendType.clear();
}

/// CreateReg - Allocate a single virtual register for the given type.
unsigned FunctionLoweringInfo::CreateReg(MVT VT) {
  return RegInfo->createVirtualRegister(
      MF->getSubtarget().getTargetLowering()->getRegClassFor(VT));
}

/// CreateRegs - Allocate the appropriate number of virtual registers of
/// the correctly promoted or expanded types. Assign these registers
/// consecutive vreg numbers and return the first assigned number.
///
/// In the case that the given value has struct or array type, this function
/// will assign registers for each member or element.
///
unsigned FunctionLoweringInfo::CreateRegs(Type *Ty) {
  const TargetLowering *TLI = MF->getSubtarget().getTargetLowering();

  SmallVector<EVT, 4> ValueVTs;
  ComputeValueVTs(*TLI, MF->getDataLayout(), Ty, ValueVTs);

  unsigned FirstReg = 0;
  for (unsigned Value = 0, e = ValueVTs.size(); Value != e; ++Value) {
    EVT ValueVT = ValueVTs[Value];
    MVT RegisterVT = TLI->getRegisterType(Ty->getContext(), ValueVT);

    unsigned NumRegs = TLI->getNumRegisters(Ty->getContext(), ValueVT);
    for (unsigned i = 0; i != NumRegs; ++i) {
      unsigned R = CreateReg(RegisterVT);
      if (!FirstReg) FirstReg = R;
    }
  }
  return FirstReg;
}

/// GetLiveOutRegInfo - Gets LiveOutInfo for a register, returning NULL if the
/// register is a PHI destination and the PHI's LiveOutInfo is not valid. If
/// the register's LiveOutInfo is for a smaller bit width, it is extended to
/// the larger bit width by zero extension. The bit width must be no smaller
/// than the LiveOutInfo's existing bit width.
const FunctionLoweringInfo::LiveOutInfo *
FunctionLoweringInfo::GetLiveOutRegInfo(unsigned Reg, unsigned BitWidth) {
  if (!LiveOutRegInfo.inBounds(Reg))
    return nullptr;

  LiveOutInfo *LOI = &LiveOutRegInfo[Reg];
  if (!LOI->IsValid)
    return nullptr;

  if (BitWidth > LOI->KnownZero.getBitWidth()) {
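    // Widen the masks conservatively: the zero-extended high bits are unknown
    // in both KnownZero and KnownOne, and the sign-bit count drops to 1.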
    LOI->NumSignBits = 1;
    LOI->KnownZero = LOI->KnownZero.zextOrTrunc(BitWidth);
    LOI->KnownOne = LOI->KnownOne.zextOrTrunc(BitWidth);
  }

  return LOI;
}

/// ComputePHILiveOutRegInfo - Compute LiveOutInfo for a PHI's destination
/// register based on the LiveOutInfo of its operands.
void FunctionLoweringInfo::ComputePHILiveOutRegInfo(const PHINode *PN) {
  Type *Ty = PN->getType();
  if (!Ty->isIntegerTy() || Ty->isVectorTy())
    return;

  SmallVector<EVT, 1> ValueVTs;
  ComputeValueVTs(*TLI, MF->getDataLayout(), Ty, ValueVTs);
  assert(ValueVTs.size() == 1 &&
         "PHIs with non-vector integer types should have a single VT.");
  EVT IntVT = ValueVTs[0];

  if (TLI->getNumRegisters(PN->getContext(), IntVT) != 1)
    return;
  IntVT = TLI->getTypeToTransformTo(PN->getContext(), IntVT);
  unsigned BitWidth = IntVT.getSizeInBits();

  unsigned DestReg = ValueMap[PN];
  if (!TargetRegisterInfo::isVirtualRegister(DestReg))
    return;
  LiveOutRegInfo.grow(DestReg);
  LiveOutInfo &DestLOI = LiveOutRegInfo[DestReg];
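
  // Seed DestLOI from the first incoming value; the loop below intersects it
  // with the information from each remaining incoming value.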
  Value *V = PN->getIncomingValue(0);
  if (isa<UndefValue>(V) || isa<ConstantExpr>(V)) {
    DestLOI.NumSignBits = 1;
    APInt Zero(BitWidth, 0);
    DestLOI.KnownZero = Zero;
    DestLOI.KnownOne = Zero;
    return;
  }

  if (ConstantInt *CI = dyn_cast<ConstantInt>(V)) {
    APInt Val = CI->getValue().zextOrTrunc(BitWidth);
    DestLOI.NumSignBits = Val.getNumSignBits();
    DestLOI.KnownZero = ~Val;
    DestLOI.KnownOne = Val;
  } else {
    assert(ValueMap.count(V) && "V should have been placed in ValueMap when its"
                                "CopyToReg node was created.");
    unsigned SrcReg = ValueMap[V];
    if (!TargetRegisterInfo::isVirtualRegister(SrcReg)) {
      DestLOI.IsValid = false;
      return;
    }
    const LiveOutInfo *SrcLOI = GetLiveOutRegInfo(SrcReg, BitWidth);
    if (!SrcLOI) {
      DestLOI.IsValid = false;
      return;
    }
    DestLOI = *SrcLOI;
  }

  assert(DestLOI.KnownZero.getBitWidth() == BitWidth &&
         DestLOI.KnownOne.getBitWidth() == BitWidth &&
         "Masks should have the same bit width as the type.");

  for (unsigned i = 1, e = PN->getNumIncomingValues(); i != e; ++i) {
    Value *V = PN->getIncomingValue(i);
    if (isa<UndefValue>(V) || isa<ConstantExpr>(V)) {
      DestLOI.NumSignBits = 1;
      APInt Zero(BitWidth, 0);
      DestLOI.KnownZero = Zero;
      DestLOI.KnownOne = Zero;
      return;
    }

    if (ConstantInt *CI = dyn_cast<ConstantInt>(V)) {
      APInt Val = CI->getValue().zextOrTrunc(BitWidth);
      DestLOI.NumSignBits = std::min(DestLOI.NumSignBits, Val.getNumSignBits());
      DestLOI.KnownZero &= ~Val;
      DestLOI.KnownOne &= Val;
      continue;
    }

    assert(ValueMap.count(V) && "V should have been placed in ValueMap when "
                                "its CopyToReg node was created.");
    unsigned SrcReg = ValueMap[V];
    if (!TargetRegisterInfo::isVirtualRegister(SrcReg)) {
      DestLOI.IsValid = false;
      return;
    }
    const LiveOutInfo *SrcLOI = GetLiveOutRegInfo(SrcReg, BitWidth);
    if (!SrcLOI) {
      DestLOI.IsValid = false;
      return;
    }
    DestLOI.NumSignBits = std::min(DestLOI.NumSignBits, SrcLOI->NumSignBits);
    DestLOI.KnownZero &= SrcLOI->KnownZero;
    DestLOI.KnownOne &= SrcLOI->KnownOne;
  }
}

/// setArgumentFrameIndex - Record frame index for the byval
/// argument. This overrides previous frame index entry for this argument,
/// if any.
void FunctionLoweringInfo::setArgumentFrameIndex(const Argument *A,
                                                 int FI) {
  ByValArgFrameIndexMap[A] = FI;
}

/// getArgumentFrameIndex - Get frame index for the byval argument.
/// If the argument does not have any assigned frame index then 0 is
/// returned.
int FunctionLoweringInfo::getArgumentFrameIndex(const Argument *A) {
  DenseMap<const Argument *, int>::iterator I =
      ByValArgFrameIndexMap.find(A);
  if (I != ByValArgFrameIndexMap.end())
    return I->second;
  DEBUG(dbgs() << "Argument does not have assigned frame index!\n");
  return 0;
}
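
/// getCatchPadExceptionPointerVReg - Return the virtual register that holds
/// the exception pointer for the given catchpad, creating it on first use.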
unsigned FunctionLoweringInfo::getCatchPadExceptionPointerVReg(
    const Value *CPI, const TargetRegisterClass *RC) {
  MachineRegisterInfo &MRI = MF->getRegInfo();
  auto I = CatchPadExceptionPointers.insert({CPI, 0});
  unsigned &VReg = I.first->second;
  if (I.second)
    VReg = MRI.createVirtualRegister(RC);
  assert(VReg && "null vreg in exception pointer table!");
  return VReg;
}
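
/// getOrCreateSwiftErrorVReg - Return the virtual register currently assigned
/// to the swifterror value Val in MBB, creating a placeholder register for an
/// upwards-exposed use if this is the first use in the block.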
unsigned
FunctionLoweringInfo::getOrCreateSwiftErrorVReg(const MachineBasicBlock *MBB,
                                                const Value *Val) {
  auto Key = std::make_pair(MBB, Val);
  auto It = SwiftErrorVRegDefMap.find(Key);
  // If this is the first use of this swifterror value in this basic block,
  // create a new virtual register.
  // After we processed all basic blocks we will satisfy this "upwards exposed
  // use" by inserting a copy or phi at the beginning of this block.
  if (It == SwiftErrorVRegDefMap.end()) {
    auto &DL = MF->getDataLayout();
    const TargetRegisterClass *RC = TLI->getRegClassFor(TLI->getPointerTy(DL));
    auto VReg = MF->getRegInfo().createVirtualRegister(RC);
    SwiftErrorVRegDefMap[Key] = VReg;
    SwiftErrorVRegUpwardsUse[Key] = VReg;
    return VReg;
  } else return It->second;
}
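
/// setCurrentSwiftErrorVReg - Record VReg as the current virtual register
/// assignment for the swifterror value Val in MBB.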
void FunctionLoweringInfo::setCurrentSwiftErrorVReg(
    const MachineBasicBlock *MBB, const Value *Val, unsigned VReg) {
  SwiftErrorVRegDefMap[std::make_pair(MBB, Val)] = VReg;
}