//===- StatepointLowering.cpp - SDAGBuilder's statepoint code -------------===//
//
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
//
//===----------------------------------------------------------------------===//
//
// This file includes support code used by SelectionDAGBuilder when lowering a
// statepoint sequence in SelectionDAG IR.
//
//===----------------------------------------------------------------------===//

#include "StatepointLowering.h"
#include "SelectionDAGBuilder.h"
#include "llvm/ADT/ArrayRef.h"
#include "llvm/ADT/None.h"
#include "llvm/ADT/Optional.h"
#include "llvm/ADT/STLExtras.h"
#include "llvm/ADT/SetVector.h"
#include "llvm/ADT/SmallBitVector.h"
#include "llvm/ADT/SmallSet.h"
#include "llvm/ADT/SmallVector.h"
#include "llvm/ADT/Statistic.h"
#include "llvm/CodeGen/FunctionLoweringInfo.h"
#include "llvm/CodeGen/GCMetadata.h"
#include "llvm/CodeGen/ISDOpcodes.h"
#include "llvm/CodeGen/MachineFrameInfo.h"
#include "llvm/CodeGen/MachineFunction.h"
#include "llvm/CodeGen/MachineMemOperand.h"
#include "llvm/CodeGen/RuntimeLibcalls.h"
#include "llvm/CodeGen/SelectionDAG.h"
#include "llvm/CodeGen/SelectionDAGNodes.h"
#include "llvm/CodeGen/StackMaps.h"
#include "llvm/CodeGen/TargetLowering.h"
#include "llvm/CodeGen/TargetOpcodes.h"
#include "llvm/IR/CallingConv.h"
#include "llvm/IR/DerivedTypes.h"
#include "llvm/IR/GCStrategy.h"
#include "llvm/IR/Instruction.h"
#include "llvm/IR/Instructions.h"
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/Statepoint.h"
#include "llvm/IR/Type.h"
#include "llvm/Support/Casting.h"
#include "llvm/Support/CommandLine.h"
#include "llvm/Support/MachineValueType.h"
#include "llvm/Target/TargetMachine.h"
#include "llvm/Target/TargetOptions.h"
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <iterator>
#include <tuple>
#include <utility>

using namespace llvm;

#define DEBUG_TYPE "statepoint-lowering"

STATISTIC(NumSlotsAllocatedForStatepoints,
          "Number of stack slots allocated for statepoints");
STATISTIC(NumOfStatepoints, "Number of statepoint nodes encountered");
STATISTIC(StatepointMaxSlotsRequired,
          "Maximum number of stack slots required for a single statepoint");
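
// Command-line knobs that allow more of the statepoint operands (deopt values,
// gc pointers) to be kept in registers instead of being spilled; all of them
// default to the conservative setting (false / 0).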
cl::opt<bool> UseRegistersForDeoptValues(
    "use-registers-for-deopt-values", cl::Hidden, cl::init(false),
    cl::desc("Allow using registers for non pointer deopt args"));

cl::opt<bool> UseRegistersForGCPointersInLandingPad(
    "use-registers-for-gc-values-in-landing-pad", cl::Hidden, cl::init(false),
    cl::desc("Allow using registers for gc pointer in landing pad"));

cl::opt<unsigned> MaxRegistersForGCPointers(
    "max-registers-for-gc-values", cl::Hidden, cl::init(0),
    cl::desc("Max number of VRegs allowed to pass GC pointer meta args in"));

typedef FunctionLoweringInfo::StatepointRelocationRecord RecordType;
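
/// Append a constant operand to \p Ops in the form the stackmap/statepoint
/// machinery expects: a StackMaps::ConstantOp marker followed by the 64-bit
/// value itself. For example, pushStackMapConstant(Ops, Builder, Flags)
/// appends two i64 TargetConstant nodes: ConstantOp and Flags.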
static void pushStackMapConstant(SmallVectorImpl<SDValue>& Ops,
                                 SelectionDAGBuilder &Builder, uint64_t Value) {
  SDLoc L = Builder.getCurSDLoc();
  Ops.push_back(Builder.DAG.getTargetConstant(StackMaps::ConstantOp, L,
                                              MVT::i64));
  Ops.push_back(Builder.DAG.getTargetConstant(Value, L, MVT::i64));
}
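
/// Reset the per-statepoint bookkeeping before lowering a new statepoint, and
/// resize AllocatedStackSlots so it stays in sync with the function-wide list
/// of statepoint stack slots in FunctionLoweringInfo.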
void StatepointLoweringState::startNewStatepoint(SelectionDAGBuilder &Builder) {
  // Consistency check
  assert(PendingGCRelocateCalls.empty() &&
         "Trying to visit statepoint before finished processing previous one");
  Locations.clear();
  NextSlotToAllocate = 0;
  // Need to resize this on each safepoint - we need the two to stay in sync and
  // the clear patterns of a SelectionDAGBuilder have no relation to
  // FunctionLoweringInfo.  Also need to ensure used bits get cleared.
  AllocatedStackSlots.clear();
  AllocatedStackSlots.resize(Builder.FuncInfo.StatepointStackSlots.size());
}
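
/// Clear the per-statepoint state once the statepoint sequence (the statepoint
/// and all of its pending gc.relocates) has been fully processed.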
void StatepointLoweringState::clear() {
  Locations.clear();
  AllocatedStackSlots.clear();
  assert(PendingGCRelocateCalls.empty() &&
         "cleared before statepoint sequence completed");
}
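
/// Allocate a stack slot suitable for spilling a value of type \p ValueType
/// across the current statepoint. Previously created statepoint spill slots of
/// the right size are reused when not already taken by this statepoint;
/// otherwise a fresh spill slot is created and recorded in
/// FunctionLoweringInfo::StatepointStackSlots.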
SDValue
StatepointLoweringState::allocateStackSlot(EVT ValueType,
                                           SelectionDAGBuilder &Builder) {
  NumSlotsAllocatedForStatepoints++;
  MachineFrameInfo &MFI = Builder.DAG.getMachineFunction().getFrameInfo();

  unsigned SpillSize = ValueType.getStoreSize();
  assert((SpillSize * 8) ==
             (-8u & (7 + ValueType.getSizeInBits())) && // Round up modulo 8.
         "Size not in bytes?");

  // First look for a previously created stack slot which is not in
  // use (accounting for the fact that arbitrary slots may already be
  // reserved), or create a new stack slot and use it.

  const size_t NumSlots = AllocatedStackSlots.size();
  assert(NextSlotToAllocate <= NumSlots && "Broken invariant");

  assert(AllocatedStackSlots.size() ==
             Builder.FuncInfo.StatepointStackSlots.size() &&
         "Broken invariant");

  for (; NextSlotToAllocate < NumSlots; NextSlotToAllocate++) {
    if (!AllocatedStackSlots.test(NextSlotToAllocate)) {
      const int FI = Builder.FuncInfo.StatepointStackSlots[NextSlotToAllocate];
      if (MFI.getObjectSize(FI) == SpillSize) {
        AllocatedStackSlots.set(NextSlotToAllocate);
        // TODO: Is ValueType the right thing to use here?
        return Builder.DAG.getFrameIndex(FI, ValueType);
      }
    }
  }

  // Couldn't find a free slot, so create a new one:

  SDValue SpillSlot = Builder.DAG.CreateStackTemporary(ValueType);
  const unsigned FI = cast<FrameIndexSDNode>(SpillSlot)->getIndex();
  MFI.markAsStatepointSpillSlotObjectIndex(FI);

  Builder.FuncInfo.StatepointStackSlots.push_back(FI);
  AllocatedStackSlots.resize(AllocatedStackSlots.size() + 1, true);
  assert(AllocatedStackSlots.size() ==
             Builder.FuncInfo.StatepointStackSlots.size() &&
         "Broken invariant");

  StatepointMaxSlotsRequired.updateMax(
      Builder.FuncInfo.StatepointStackSlots.size());

  return SpillSlot;
}

/// Utility function for reservePreviousStackSlotForValue. Tries to find the
/// stack slot index to which we have spilled the value for previous
/// statepoints. LookUpDepth specifies the maximum DFS depth this function is
/// allowed to look.
static Optional<int> findPreviousSpillSlot(const Value *Val,
                                           SelectionDAGBuilder &Builder,
                                           int LookUpDepth) {
  // Cannot look any further - give up now
  if (LookUpDepth <= 0)
    return None;

  // Spill location is known for gc relocates
  if (const auto *Relocate = dyn_cast<GCRelocateInst>(Val)) {
    const auto &RelocationMap =
        Builder.FuncInfo.StatepointRelocationMaps[Relocate->getStatepoint()];

    auto It = RelocationMap.find(Relocate->getDerivedPtr());
    if (It == RelocationMap.end())
      return None;

    auto &Record = It->second;
    if (Record.type != RecordType::Spill)
      return None;

    return Record.payload.FI;
  }

  // Look through bitcast instructions.
  if (const BitCastInst *Cast = dyn_cast<BitCastInst>(Val))
    return findPreviousSpillSlot(Cast->getOperand(0), Builder, LookUpDepth - 1);

  // Look through phi nodes: all incoming values must have the same known stack
  // slot, otherwise the result is unknown.
  if (const PHINode *Phi = dyn_cast<PHINode>(Val)) {
    Optional<int> MergedResult = None;

    for (auto &IncomingValue : Phi->incoming_values()) {
      Optional<int> SpillSlot =
          findPreviousSpillSlot(IncomingValue, Builder, LookUpDepth - 1);
      if (!SpillSlot.hasValue())
        return None;

      if (MergedResult.hasValue() && *MergedResult != *SpillSlot)
        return None;

      MergedResult = SpillSlot;
    }
    return MergedResult;
  }

  // TODO: We can do better for PHI nodes. In cases like this:
  //  ptr = phi(relocated_pointer, not_relocated_pointer)
  //  statepoint(ptr)
  // We will return that the stack slot for ptr is unknown, and later we might
  // assign different stack slots for ptr and relocated_pointer. This limits
  // llvm's ability to remove redundant stores.
  // Unfortunately it's hard to accomplish in the current infrastructure.
  // We use this function to eliminate the spill store completely, while in the
  // example above we would still need to emit a store, just to a specific
  // "preferred" location rather than to an arbitrary one.

  // TODO: handle simple updates. If a value is modified and the original
  // value is no longer live, it would be nice to put the modified value in the
  // same slot. This allows folding of the memory accesses for some
  // instruction types (like an increment).
  //  statepoint (i)
  //  i1 = i+1
  //  statepoint (i1)
  // However we need to be careful for cases like this:
  //  statepoint(i)
  //  i1 = i+1
  //  statepoint(i, i1)
  // Here we want to reserve a spill slot for 'i', but not for 'i+1'. If we
  // just put handling of simple modifications in this function like it's done
  // for bitcasts, we might end up reserving i's slot for 'i+1' because the
  // order in which we visit values is unspecified.

  // Don't know any information about this instruction
  return None;
}

/// Return true if-and-only-if the given SDValue can be lowered as either a
/// constant argument or a stack reference. The key point is that the value
/// doesn't need to be spilled or tracked as a vreg use.
static bool willLowerDirectly(SDValue Incoming) {
  // We are making an unchecked assumption that the frame size <= 2^16 as that
  // is the largest offset which can be encoded in the stackmap format.
  if (isa<FrameIndexSDNode>(Incoming))
    return true;

  // The largest constant describable in the StackMap format is 64 bits.
  // Potential Optimization: Constant values are sign extended by the consumer,
  // and thus there are many constants of static type > 64 bits whose value
  // happens to be sext(Con64) and could thus be lowered directly.
  if (Incoming.getValueType().getSizeInBits() > 64)
    return false;

  return (isa<ConstantSDNode>(Incoming) || isa<ConstantFPSDNode>(Incoming) ||
          Incoming.isUndef());
}
|
|
|
|
|
[Statepoints 3/4] Statepoint infrastructure for garbage collection: SelectionDAGBuilder
This is the third patch in a small series. It contains the CodeGen support for lowering the gc.statepoint intrinsic sequences (223078) to the STATEPOINT pseudo machine instruction (223085). The change also includes the set of helper routines and classes for working with gc.statepoints, gc.relocates, and gc.results since the lowering code uses them.
With this change, gc.statepoints should be functionally complete. The documentation will follow in the fourth change, and there will likely be some cleanup changes, but interested parties can start experimenting now.
I'm not particularly happy with the amount of code or complexity involved with the lowering step, but at least it's fairly well isolated. The statepoint lowering code is split into it's own files and anyone not working on the statepoint support itself should be able to ignore it.
During the lowering process, we currently spill aggressively to stack. This is not entirely ideal (and we have plans to do better), but it's functional, relatively straight forward, and matches closely the implementations of the patchpoint intrinsics. Most of the complexity comes from trying to keep relocated copies of values in the same stack slots across statepoints. Doing so avoids the insertion of pointless load and store instructions to reshuffle the stack. The current implementation isn't as effective as I'd like, but it is functional and 'good enough' for many common use cases.
In the long term, I'd like to figure out how to integrate the statepoint lowering with the register allocator. In principal, we shouldn't need to eagerly spill at all. The register allocator should do any spilling required and the statepoint should simply record that fact. Depending on how challenging that turns out to be, we may invest in a smarter global stack slot assignment mechanism as a stop gap measure.
Reviewed by: atrick, ributzka
llvm-svn: 223137
2014-12-03 02:50:36 +08:00
|
|
|
/// Try to find existing copies of the incoming values in stack slots used for
|
|
|
|
/// statepoint spilling. If we can find a spill slot for the incoming value,
|
|
|
|
/// mark that slot as allocated, and reuse the same slot for this safepoint.
|
2015-08-09 02:27:36 +08:00
|
|
|
/// This helps to avoid series of loads and stores that only serve to reshuffle
|
[Statepoints 3/4] Statepoint infrastructure for garbage collection: SelectionDAGBuilder
This is the third patch in a small series. It contains the CodeGen support for lowering the gc.statepoint intrinsic sequences (223078) to the STATEPOINT pseudo machine instruction (223085). The change also includes the set of helper routines and classes for working with gc.statepoints, gc.relocates, and gc.results since the lowering code uses them.
With this change, gc.statepoints should be functionally complete. The documentation will follow in the fourth change, and there will likely be some cleanup changes, but interested parties can start experimenting now.
I'm not particularly happy with the amount of code or complexity involved with the lowering step, but at least it's fairly well isolated. The statepoint lowering code is split into it's own files and anyone not working on the statepoint support itself should be able to ignore it.
During the lowering process, we currently spill aggressively to stack. This is not entirely ideal (and we have plans to do better), but it's functional, relatively straight forward, and matches closely the implementations of the patchpoint intrinsics. Most of the complexity comes from trying to keep relocated copies of values in the same stack slots across statepoints. Doing so avoids the insertion of pointless load and store instructions to reshuffle the stack. The current implementation isn't as effective as I'd like, but it is functional and 'good enough' for many common use cases.
In the long term, I'd like to figure out how to integrate the statepoint lowering with the register allocator. In principal, we shouldn't need to eagerly spill at all. The register allocator should do any spilling required and the statepoint should simply record that fact. Depending on how challenging that turns out to be, we may invest in a smarter global stack slot assignment mechanism as a stop gap measure.
Reviewed by: atrick, ributzka
llvm-svn: 223137
2014-12-03 02:50:36 +08:00
|
|
|
/// values on the stack between calls.
|
2015-06-10 20:31:53 +08:00
|
|
|
static void reservePreviousStackSlotForValue(const Value *IncomingValue,
|
[Statepoints 3/4] Statepoint infrastructure for garbage collection: SelectionDAGBuilder
This is the third patch in a small series. It contains the CodeGen support for lowering the gc.statepoint intrinsic sequences (223078) to the STATEPOINT pseudo machine instruction (223085). The change also includes the set of helper routines and classes for working with gc.statepoints, gc.relocates, and gc.results since the lowering code uses them.
With this change, gc.statepoints should be functionally complete. The documentation will follow in the fourth change, and there will likely be some cleanup changes, but interested parties can start experimenting now.
I'm not particularly happy with the amount of code or complexity involved with the lowering step, but at least it's fairly well isolated. The statepoint lowering code is split into it's own files and anyone not working on the statepoint support itself should be able to ignore it.
During the lowering process, we currently spill aggressively to stack. This is not entirely ideal (and we have plans to do better), but it's functional, relatively straight forward, and matches closely the implementations of the patchpoint intrinsics. Most of the complexity comes from trying to keep relocated copies of values in the same stack slots across statepoints. Doing so avoids the insertion of pointless load and store instructions to reshuffle the stack. The current implementation isn't as effective as I'd like, but it is functional and 'good enough' for many common use cases.
In the long term, I'd like to figure out how to integrate the statepoint lowering with the register allocator. In principal, we shouldn't need to eagerly spill at all. The register allocator should do any spilling required and the statepoint should simply record that fact. Depending on how challenging that turns out to be, we may invest in a smarter global stack slot assignment mechanism as a stop gap measure.
Reviewed by: atrick, ributzka
llvm-svn: 223137
2014-12-03 02:50:36 +08:00
|
|
|
                                             SelectionDAGBuilder &Builder) {
  SDValue Incoming = Builder.getValue(IncomingValue);

  // If we won't spill this, we don't need to check for previously allocated
  // stack slots.
  if (willLowerDirectly(Incoming))
    return;

  SDValue OldLocation = Builder.StatepointLowering.getLocation(Incoming);
  if (OldLocation.getNode())
    // Duplicates in input
    return;
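
  // Try to reuse a stack slot that this value was spilled to by an earlier
  // statepoint; LookUpDepth bounds how far back findPreviousSpillSlot will
  // search.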
  const int LookUpDepth = 6;
  Optional<int> Index =
      findPreviousSpillSlot(IncomingValue, Builder, LookUpDepth);
  if (!Index.hasValue())
    return;

  const auto &StatepointSlots = Builder.FuncInfo.StatepointStackSlots;

  auto SlotIt = find(StatepointSlots, *Index);
  assert(SlotIt != StatepointSlots.end() &&
         "Value spilled to the unknown stack slot");

  // This is one of our dedicated lowering slots
  const int Offset = std::distance(StatepointSlots.begin(), SlotIt);
  if (Builder.StatepointLowering.isStackSlotAllocated(Offset)) {
    // stack slot already assigned to someone else, can't use it!
    // TODO: currently we reserve space for gc arguments after doing
    // normal allocation for deopt arguments. We should reserve for
    // _all_ deopt and gc arguments, then start allocating. This
    // will prevent some moves being inserted when vm state changes,
    // but gc state doesn't between two calls.
    return;
  }

  // Reserve this stack slot
  Builder.StatepointLowering.reserveStackSlot(Offset);

  // Cache this slot so we find it when going through the normal
  // assignment loop.
  SDValue Loc =
      Builder.DAG.getTargetFrameIndex(*Index, Builder.getFrameIndexTy());
  Builder.StatepointLowering.setLocation(Incoming, Loc);
}

/// Extract call from statepoint, lower it and return pointer to the
/// call node. Also update NodeMap so that getValue(statepoint) will
/// reference the lowered call result.
static std::pair<SDValue, SDNode *> lowerCallFromStatepointLoweringInfo(
    SelectionDAGBuilder::StatepointLoweringInfo &SI,
    SelectionDAGBuilder &Builder, SmallVectorImpl<SDValue> &PendingExports) {
  SDValue ReturnValue, CallEndVal;
  std::tie(ReturnValue, CallEndVal) =
      Builder.lowerInvokable(SI.CLI, SI.EHPadBB);
  SDNode *CallEnd = CallEndVal.getNode();

  // Get a call instruction from the call sequence chain. Tail calls are not
  // allowed. The following code is essentially reverse engineering X86's
  // LowerCallTo.
  //
  // We are expecting DAG to have the following form:
  //
  // ch = eh_label (only in case of invoke statepoint)
  // ch, glue = callseq_start ch
  // ch, glue = X86::Call ch, glue
  // ch, glue = callseq_end ch, glue
  // get_return_value ch, glue
  //
  // get_return_value can either be a sequence of CopyFromReg instructions
  // to grab the return value from the return register(s), or it can be a LOAD
  // to load a value returned by reference via a stack slot.

  bool HasDef = !SI.CLI.RetTy->isVoidTy();
  if (HasDef) {
    if (CallEnd->getOpcode() == ISD::LOAD)
      CallEnd = CallEnd->getOperand(0).getNode();
    else
      while (CallEnd->getOpcode() == ISD::CopyFromReg)
        CallEnd = CallEnd->getOperand(0).getNode();
  }

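  // At this point CallEnd should be the CALLSEQ_END node; its chain operand
  // (operand 0) comes from the call itself, which is the node we hand back.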
  assert(CallEnd->getOpcode() == ISD::CALLSEQ_END && "expected!");
  return std::make_pair(ReturnValue, CallEnd->getOperand(0).getNode());
}

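/// Build a MachineMemOperand describing an access to the entire fixed stack
/// slot \p FI. The operand is flagged as both MOLoad and MOStore and as
/// MOVolatile, matching how the statepoint spill slots are referenced below.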
static MachineMemOperand* getMachineMemOperand(MachineFunction &MF,
                                               FrameIndexSDNode &FI) {
  auto PtrInfo = MachinePointerInfo::getFixedStack(MF, FI.getIndex());
  auto MMOFlags = MachineMemOperand::MOStore |
                  MachineMemOperand::MOLoad | MachineMemOperand::MOVolatile;
  auto &MFI = MF.getFrameInfo();
  return MF.getMachineMemOperand(PtrInfo, MMOFlags,
                                 MFI.getObjectSize(FI.getIndex()),
                                 MFI.getObjectAlign(FI.getIndex()));
}

/// Spill a value incoming to the statepoint. It might be either part of
/// vmstate or gcstate. In both cases unconditionally spill it on the stack
/// unless it is a null constant. Return a tuple whose first element is the
/// frame index containing the saved value, whose second element is the
/// outgoing chain from the emitted store, and whose third element is the
/// MachineMemOperand for that store (null if no new store was emitted).
static std::tuple<SDValue, SDValue, MachineMemOperand*>
spillIncomingStatepointValue(SDValue Incoming, SDValue Chain,
                             SelectionDAGBuilder &Builder) {
  SDValue Loc = Builder.StatepointLowering.getLocation(Incoming);
  MachineMemOperand* MMO = nullptr;

  // Emit new store if we didn't do it for this ptr before
  if (!Loc.getNode()) {
    Loc = Builder.StatepointLowering.allocateStackSlot(Incoming.getValueType(),
                                                       Builder);
    int Index = cast<FrameIndexSDNode>(Loc)->getIndex();
    // We use TargetFrameIndex so that isel will not select it into LEA
    Loc = Builder.DAG.getTargetFrameIndex(Index, Builder.getFrameIndexTy());

    // Right now we always allocate spill slots that are of the same
    // size as the value we're about to spill (the size of spillee can
    // vary since we spill vectors of pointers too). At some point we
    // can consider allowing spills of smaller values to larger slots
    // (i.e. change the '==' in the assert below to a '>=').
    MachineFrameInfo &MFI = Builder.DAG.getMachineFunction().getFrameInfo();
    assert((MFI.getObjectSize(Index) * 8) ==
               (-8 & (7 + // Round up modulo 8.
                      (int64_t)Incoming.getValueSizeInBits())) &&
           "Bad spill: stack slot does not match!");

    // Note: Using the alignment of the spill slot (rather than the ABI or
    // preferred alignment) is required for correctness when dealing with spill
    // slots with preferred alignments larger than frame alignment.
    auto &MF = Builder.DAG.getMachineFunction();
    auto PtrInfo = MachinePointerInfo::getFixedStack(MF, Index);
    auto *StoreMMO = MF.getMachineMemOperand(
        PtrInfo, MachineMemOperand::MOStore, MFI.getObjectSize(Index),
        MFI.getObjectAlign(Index));
    Chain = Builder.DAG.getStore(Chain, Builder.getCurSDLoc(), Incoming, Loc,
                                 StoreMMO);

    MMO = getMachineMemOperand(MF, *cast<FrameIndexSDNode>(Loc));

    Builder.StatepointLowering.setLocation(Incoming, Loc);
  }

  assert(Loc.getNode());
  return std::make_tuple(Loc, Chain, MMO);
}

/// Lower a single value incoming to a statepoint node. This value can be
/// either a deopt value or a gc value; the handling is the same. We special
/// case constants and allocas, then fall back to spilling if required.
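/// In short: frame indices are passed through as TargetFrameIndex operands,
/// small constants (and undef) are encoded via pushStackMapConstant, values
/// that don't require a spill slot are appended to Ops as-is, and everything
/// else is spilled and the frame index of its spill slot is appended instead.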
static void
lowerIncomingStatepointValue(SDValue Incoming, bool RequireSpillSlot,
                             SmallVectorImpl<SDValue> &Ops,
                             SmallVectorImpl<MachineMemOperand *> &MemRefs,
                             SelectionDAGBuilder &Builder) {

  if (willLowerDirectly(Incoming)) {
    if (FrameIndexSDNode *FI = dyn_cast<FrameIndexSDNode>(Incoming)) {
      // This handles allocas as arguments to the statepoint (this is only
      // really meaningful for a deopt value. For GC, we'd be trying to
      // relocate the address of the alloca itself?)
      assert(Incoming.getValueType() == Builder.getFrameIndexTy() &&
             "Incoming value is a frame index!");
      Ops.push_back(Builder.DAG.getTargetFrameIndex(FI->getIndex(),
                                                    Builder.getFrameIndexTy()));

      auto &MF = Builder.DAG.getMachineFunction();
      auto *MMO = getMachineMemOperand(MF, *FI);
      MemRefs.push_back(MMO);
      return;
    }

    assert(Incoming.getValueType().getSizeInBits() <= 64);

    if (Incoming.isUndef()) {
      // Put an easily recognized constant that's unlikely to be a valid
      // value so that uses of undef by the consumer of the stackmap are
      // easily recognized. This is legal since the compiler is always
      // allowed to choose an arbitrary value for undef.
      pushStackMapConstant(Ops, Builder, 0xFEFEFEFE);
      return;
    }

    // If the original value was a constant, make sure it gets recorded as
    // such in the stackmap. This is required so that the consumer can
    // parse any internal format to the deopt state. It also handles null
    // pointers and other constant pointers in GC states.
    if (ConstantSDNode *C = dyn_cast<ConstantSDNode>(Incoming)) {
      pushStackMapConstant(Ops, Builder, C->getSExtValue());
      return;
    } else if (ConstantFPSDNode *C = dyn_cast<ConstantFPSDNode>(Incoming)) {
      pushStackMapConstant(Ops, Builder,
                           C->getValueAPF().bitcastToAPInt().getZExtValue());
      return;
    }

    llvm_unreachable("unhandled direct lowering case");
  }

  if (!RequireSpillSlot) {
    // If this value is live in (not live-on-return, or live-through), we can
    // treat it the same way patchpoint treats its "live in" values. We'll
    // end up folding some of these into stack references, but they'll be
    // handled by the register allocator. Note that we do not have the notion
    // of a late use so these values might be placed in registers which are
    // clobbered by the call. This is fine for live-in. For live-through, a
    // fix-up pass should be executed to force spilling of such registers.
    Ops.push_back(Incoming);
  } else {
    // Otherwise, locate a spill slot and explicitly spill it so it can be
    // found by the runtime later. Note: We know all of these spills are
    // independent, but don't bother to exploit that chain wise. DAGCombine
    // will happily do so as needed, so doing it here would be a small compile
    // time win at most.
    SDValue Chain = Builder.getRoot();
    auto Res = spillIncomingStatepointValue(Incoming, Chain, Builder);
    Ops.push_back(std::get<0>(Res));
    if (auto *MMO = std::get<2>(Res))
      MemRefs.push_back(MMO);
    Chain = std::get<1>(Res);
    Builder.DAG.setRoot(Chain);
  }
}

/// Return true if value V represents a GC value. The check is conservative:
/// if it cannot prove that the value is not a GC pointer, it returns true.
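/// For example, a non-pointer value such as an i64 is never a GC value, while
/// a pointer whose type the active GC strategy does not classify is
/// conservatively treated as one.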
static bool isGCValue(const Value *V, SelectionDAGBuilder &Builder) {
  auto *Ty = V->getType();
  if (!Ty->isPtrOrPtrVectorTy())
    return false;
  if (auto *GFI = Builder.GFI)
    if (auto IsManaged = GFI->getStrategy().isGCManagedPointer(Ty))
      return *IsManaged;
  return true; // conservative
}

/// Lower deopt state and gc pointer arguments of the statepoint. The actual
/// lowering is described in lowerIncomingStatepointValue. This function is
/// responsible for lowering everything in the right position and playing some
/// tricks to avoid redundant stack manipulation where possible. On
/// completion, 'Ops' will contain ready to use operands for machine code
/// statepoint. The chain nodes will have already been created and the DAG
/// root will be set to the last value spilled (if any were).
static void
lowerStatepointMetaArgs(SmallVectorImpl<SDValue> &Ops,
                        SmallVectorImpl<MachineMemOperand *> &MemRefs,
                        SmallVectorImpl<SDValue> &GCPtrs,
|
[Statepoints] Support lowering gc relocations to virtual registers
(Disabled under flag for the moment)
This is part of a larger project wherein we are finally integrating lowering of gc live operands with the register allocator. Today, we force spill all operands in SelectionDAG. The code to do so is distinctly non-optimal. The approach this patch is working towards is to instead lower the relocations directly into the MI form, and let the register allocator pick which ones get spilled and which stack slots they get spilled to. In terms of performance, the later part is actually more important as it avoids redundant shuffling of values between stack slots.
This particular change adds ISEL support to produce the variadic def STATEPOINT form required by the above. In particular, the first N are lowered to variadic tied def/use pairs. So new statepoint looks like this:
reloc1,reloc2,... = STATEPOINT ..., base1, derived1<tied-def0>, base2, derived2<tied-def1>, ...
N is limited by the maximal number of tied registers machine instruction can have (15 at the moment).
The current patch is restricted to handling relocations within a single basic block. Cross block relocations (e.g. invokes) are handled via the legacy mechanism. This restriction will be relaxed in future patches.
Patch By: dantrushin
Differential Revision: https://reviews.llvm.org/D81648
2020-07-12 01:50:34 +08:00
|
|
|
DenseMap<SDValue, int> &LowerAsVReg,
|
|
|
|
SelectionDAGBuilder::StatepointLoweringInfo &SI,
|
2016-03-17 07:08:00 +08:00
|
|
|
SelectionDAGBuilder &Builder) {
  // Lower the deopt and gc arguments for this statepoint. Layout will be:
  // deopt argument length, deopt arguments.., gc arguments...
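  // In the final STATEPOINT machine instruction these meta arguments end up
  // after the call target, call arguments, calling convention and flags,
  // roughly:
  //   <StackMaps::ConstantOp>, <num deopt args>, [deopt args...],
  //   <StackMaps::ConstantOp>, <num gc pointers>, [gc pointers...],
  //   <StackMaps::ConstantOp>, <num gc allocas>, [gc allocas...],
  //   <StackMaps::ConstantOp>, <num entries in gc map>, [base/derived indices...]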
#ifndef NDEBUG
  if (auto *GFI = Builder.GFI) {
    // Check that each of the gc pointers and bases we've gotten out of the
    // safepoint is something the strategy thinks might be a pointer (or vector
    // of pointers) into the GC heap. This is basically just here to help catch
    // errors during statepoint insertion. TODO: This should actually be in the
    // Verifier, but we can't get to the GCStrategy from there (yet).
    GCStrategy &S = GFI->getStrategy();
    for (const Value *V : SI.Bases) {
      auto Opt = S.isGCManagedPointer(V->getType()->getScalarType());
      if (Opt.hasValue()) {
        assert(Opt.getValue() &&
               "non gc managed base pointer found in statepoint");
      }
    }
    for (const Value *V : SI.Ptrs) {
      auto Opt = S.isGCManagedPointer(V->getType()->getScalarType());
      if (Opt.hasValue()) {
        assert(Opt.getValue() &&
               "non gc managed derived pointer found in statepoint");
      }
    }
    assert(SI.Bases.size() == SI.Ptrs.size() && "Pointer without base!");
  } else {
    assert(SI.Bases.empty() && "No gc specified, so cannot relocate pointers!");
    assert(SI.Ptrs.empty() && "No gc specified, so cannot relocate pointers!");
  }
#endif

  // Figure out what lowering strategy we're going to use for each part.
  // Note: It is conservatively correct to lower both "live-in" and "live-out"
  // as "live-through". A "live-through" variable is one which is "live-in",
  // "live-out", and live throughout the lifetime of the call (i.e. we can find
  // it from any PC within the transitive callee of the statepoint). In
  // particular, if the callee spills callee preserved registers we may not
  // be able to find a value placed in that register during the call. This is
  // fine for live-out, but not for live-through. If we were willing to make
  // assumptions about the code generator producing the callee, we could
  // potentially allow live-through values in callee saved registers.
  const bool LiveInDeopt =
      SI.StatepointFlags & (uint64_t)StatepointFlags::DeoptLiveIn;
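  // When this flag is set (typically requested via the "deopt-lowering" =
  // "live-in" call site attribute; "live-through" is the default), deopt
  // operands may be reported in registers rather than being forced into
  // spill slots by requireSpillSlot() below.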

  // Decide which derived pointers will go on VRegs.
  unsigned MaxVRegPtrs = MaxRegistersForGCPointers.getValue();
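  // This cap is a command line knob; it has to stay within the number of tied
  // register defs a single machine instruction can carry (15 at the time the
  // vreg lowering was added), since each pointer lowered this way becomes a
  // tied def/use pair on the STATEPOINT node.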

  // Pointers used on the exceptional path of an invoke statepoint.
  // We cannot assign them to VRegs.
  SmallSet<SDValue, 8> LPadPointers;
  if (!UseRegistersForGCPointersInLandingPad)
    if (auto *StInvoke = dyn_cast_or_null<InvokeInst>(SI.StatepointInstr)) {
      LandingPadInst *LPI = StInvoke->getLandingPadInst();
      for (auto *Relocate : SI.GCRelocates)
        if (Relocate->getOperand(0) == LPI) {
          LPadPointers.insert(Builder.getValue(Relocate->getBasePtr()));
          LPadPointers.insert(Builder.getValue(Relocate->getDerivedPtr()));
        }
    }
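  // For example, given "invoke token ... @llvm.experimental.gc.statepoint(...)
  // to label %normal unwind label %lpad", the gc.relocates emitted on the
  // exceptional path use the landingpad instruction in %lpad as their token
  // operand, which is what the getOperand(0) == LPI check above matches.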

  LLVM_DEBUG(dbgs() << "Deciding how to lower GC Pointers:\n");

  // List of unique lowered GC Pointer values.
  SmallSetVector<SDValue, 16> LoweredGCPtrs;
  // Map lowered GC Pointer value to the index in above vector
  DenseMap<SDValue, unsigned> GCPtrIndexMap;

  unsigned CurNumVRegs = 0;

  auto canPassGCPtrOnVReg = [&](SDValue SD) {
    if (SD.getValueType().isVector())
      return false;
    if (LPadPointers.count(SD))
      return false;
    return !willLowerDirectly(SD);
  };
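  // willLowerDirectly() weeds out values that never need a vreg or a spill
  // slot in the first place, such as small constants (including null) and
  // undef, which can be materialized directly where they are used.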

  auto processGCPtr = [&](const Value *V) {
    SDValue PtrSD = Builder.getValue(V);
    if (!LoweredGCPtrs.insert(PtrSD))
      return; // skip duplicates
    GCPtrIndexMap[PtrSD] = LoweredGCPtrs.size() - 1;

    assert(!LowerAsVReg.count(PtrSD) && "must not have been seen");
    if (LowerAsVReg.size() == MaxVRegPtrs)
      return;
    assert(V->getType()->isVectorTy() == PtrSD.getValueType().isVector() &&
           "IR and SD types disagree");
    if (!canPassGCPtrOnVReg(PtrSD)) {
      LLVM_DEBUG(dbgs() << "direct/spill "; PtrSD.dump(&Builder.DAG));
      return;
    }
    LLVM_DEBUG(dbgs() << "vreg "; PtrSD.dump(&Builder.DAG));
    LowerAsVReg[PtrSD] = CurNumVRegs++;
  };

  // Process derived pointers first to give them a better chance of ending up
  // in VRegs.
  for (const Value *V : SI.Ptrs)
    processGCPtr(V);
  for (const Value *V : SI.Bases)
    processGCPtr(V);
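  // E.g. with SI.Ptrs = {%d1, %d2} and SI.Bases = {%b, %b} (two derived
  // pointers sharing one base), LoweredGCPtrs ends up with three unique
  // SDValues, the shared base is only considered once, and the derived
  // pointers got first pick of the MaxVRegPtrs budget.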

  LLVM_DEBUG(dbgs() << LowerAsVReg.size() << " pointers will go in vregs\n");

  auto requireSpillSlot = [&](const Value *V) {
    if (!Builder.DAG.getTargetLoweringInfo().isTypeLegal(
            Builder.getValue(V).getValueType()))
      return true;
    if (isGCValue(V, Builder))
      return !LowerAsVReg.count(Builder.getValue(V));
    return !(LiveInDeopt || UseRegistersForDeoptValues);
  };
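  // In other words: values of illegal types always get a spill slot, GC values
  // get one only if they were not assigned a vreg above, and the remaining
  // deopt values get one unless live-in deopt lowering was requested or forced
  // by the UseRegistersForDeoptValues flag.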

  // Before we actually start lowering (and allocating spill slots for values),
  // reserve any stack slots which we judge to be profitable to reuse for a
  // particular value. This is purely an optimization over the code below and
  // doesn't change semantics at all. It is important for performance that we
  // reserve slots for both deopt and gc values before lowering either.
  for (const Value *V : SI.DeoptState) {
    if (requireSpillSlot(V))
      reservePreviousStackSlotForValue(V, Builder);
  }

  for (const Value *V : SI.Ptrs) {
    SDValue SDV = Builder.getValue(V);
    if (!LowerAsVReg.count(SDV))
      reservePreviousStackSlotForValue(V, Builder);
  }

  for (const Value *V : SI.Bases) {
    SDValue SDV = Builder.getValue(V);
    if (!LowerAsVReg.count(SDV))
      reservePreviousStackSlotForValue(V, Builder);
  }

  // First, prefix the list with the number of unique values to be
  // lowered. Note that this is the number of *Values* not the
  // number of SDValues required to lower them.
  const int NumVMSArgs = SI.DeoptState.size();
  pushStackMapConstant(Ops, Builder, NumVMSArgs);
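  // pushStackMapConstant() emits this as a <StackMaps::ConstantOp, value> pair
  // of target constants, so the stackmap machinery can tell the count apart
  // from an actual live value.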

  // The vm state arguments are lowered in an opaque manner. We do not know
  // what type of values are contained within.
  LLVM_DEBUG(dbgs() << "Lowering deopt state\n");
  for (const Value *V : SI.DeoptState) {
    SDValue Incoming;
    // If this is a function argument at a static frame index, generate it as
    // the frame index.
    if (const Argument *Arg = dyn_cast<Argument>(V)) {
      int FI = Builder.FuncInfo.getArgumentFrameIndex(Arg);
      if (FI != INT_MAX)
        Incoming = Builder.DAG.getFrameIndex(FI, Builder.getFrameIndexTy());
    }
    if (!Incoming.getNode())
      Incoming = Builder.getValue(V);
    LLVM_DEBUG(dbgs() << "Value " << *V
                      << " requireSpillSlot = " << requireSpillSlot(V) << "\n");
    lowerIncomingStatepointValue(Incoming, requireSpillSlot(V), Ops, MemRefs,
                                 Builder);
  }
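
  // Illustrative sketch (not emitted by this code): with a deopt state of
  // {i32 7, i32 %a}, the deopt section of Ops ends up roughly as
  //   ConstantOp, 2,                    ; length, pushed above the loop
  //   ConstantOp, 7,                    ; small constants stay literal
  //   <frame index or direct value %a>  ; depending on requireSpillSlot(%a)
  // with the exact per-value encoding chosen by lowerIncomingStatepointValue.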

  // Finally, go ahead and lower all the gc arguments.
  pushStackMapConstant(Ops, Builder, LoweredGCPtrs.size());
  for (SDValue SDV : LoweredGCPtrs)
    lowerIncomingStatepointValue(SDV, !LowerAsVReg.count(SDV), Ops, MemRefs,
                                 Builder);
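  // GC pointers chosen for virtual-register relocation (those recorded in
  // LowerAsVReg) are passed without requiring a spill slot; the rest are
  // spilled so their relocated values can be reloaded from the same slots.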

  // Copy to out vector. LoweredGCPtrs will be empty after this point.
  GCPtrs = LoweredGCPtrs.takeVector();

  // If there are any explicit spill slots passed to the statepoint, record
  // them, but otherwise do not do anything special. These are user provided
  // allocas and give control over placement to the consumer. In this case,
  // it is the contents of the slot which may get updated, not the pointer to
  // the alloca
  SmallVector<SDValue, 4> Allocas;
  for (Value *V : SI.GCArgs) {
    SDValue Incoming = Builder.getValue(V);
    if (FrameIndexSDNode *FI = dyn_cast<FrameIndexSDNode>(Incoming)) {
      // This handles allocas as arguments to the statepoint
      assert(Incoming.getValueType() == Builder.getFrameIndexTy() &&
             "Incoming value is a frame index!");
      Allocas.push_back(Builder.DAG.getTargetFrameIndex(
          FI->getIndex(), Builder.getFrameIndexTy()));

      auto &MF = Builder.DAG.getMachineFunction();
      auto *MMO = getMachineMemOperand(MF, *FI);
      MemRefs.push_back(MMO);
    }
  }
  pushStackMapConstant(Ops, Builder, Allocas.size());
  Ops.append(Allocas.begin(), Allocas.end());
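  // Resulting section of Ops: ConstantOp, <number of allocas>, then one
  // TargetFrameIndex per recorded alloca.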

  // Now construct GC base/derived map.
  pushStackMapConstant(Ops, Builder, SI.Ptrs.size());
  SDLoc L = Builder.getCurSDLoc();
  for (unsigned i = 0; i < SI.Ptrs.size(); ++i) {
    SDValue Base = Builder.getValue(SI.Bases[i]);
    assert(GCPtrIndexMap.count(Base) && "base not found in index map");
    Ops.push_back(
        Builder.DAG.getTargetConstant(GCPtrIndexMap[Base], L, MVT::i64));
    SDValue Derived = Builder.getValue(SI.Ptrs[i]);
    assert(GCPtrIndexMap.count(Derived) && "derived not found in index map");
    Ops.push_back(
        Builder.DAG.getTargetConstant(GCPtrIndexMap[Derived], L, MVT::i64));
  }
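
  // Shape of the section just emitted (sketch): ConstantOp, SI.Ptrs.size(),
  // then one (base index, derived index) pair per derived pointer. The indices
  // are logical positions within the flat GC pointer list above, not machine
  // operand numbers, which lets the stackmap side rebuild base/derived pairing
  // without listing any base pointer twice.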
}

SDValue SelectionDAGBuilder::LowerAsSTATEPOINT(
    SelectionDAGBuilder::StatepointLoweringInfo &SI) {
  // The basic scheme here is that information about both the original call and
  // the safepoint is encoded in the CallInst. We create a temporary call and
  // lower it, then reverse engineer the calling sequence.

  NumOfStatepoints++;
  // Clear state
  StatepointLowering.startNewStatepoint(*this);
  assert(SI.Bases.size() == SI.Ptrs.size());

  LLVM_DEBUG(dbgs() << "Lowering statepoint " << *SI.StatepointInstr << "\n");
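
  // The #ifndef NDEBUG block below is sanity-check bookkeeping: each
  // gc.relocate in the statepoint's own block is registered via
  // scheduleRelocCall so StatepointLowering can later assert that every one of
  // them was actually visited during lowering.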
#ifndef NDEBUG
  for (auto *Reloc : SI.GCRelocates)
    if (Reloc->getParent() == SI.StatepointInstr->getParent())
      StatepointLowering.scheduleRelocCall(*Reloc);
#endif

  // Lower statepoint vmstate and gcstate arguments

  // All lowered meta args.
  SmallVector<SDValue, 10> LoweredMetaArgs;
  // Lowered GC pointers (subset of above).
  SmallVector<SDValue, 16> LoweredGCArgs;
  SmallVector<MachineMemOperand*, 16> MemRefs;
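  // MemRefs accumulates machine memory operands for the spill slots and allocas
  // referenced by this statepoint; they are attached to the STATEPOINT machine
  // node so later passes can see its memory effects.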
  // Maps derived pointer SDValue to statepoint result of relocated pointer.
  DenseMap<SDValue, int> LowerAsVReg;
  lowerStatepointMetaArgs(LoweredMetaArgs, MemRefs, LoweredGCArgs, LowerAsVReg,
                          SI, *this);

  // Now that we've emitted the spills, we need to update the root so that the
  // call sequence is ordered correctly.
  SI.CLI.setChain(getRoot());
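  // (The spill stores created inside lowerStatepointMetaArgs were threaded onto
  // the DAG root, so taking getRoot() here makes the upcoming call depend on
  // them.)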

  // Get the call node; we will replace it later with a statepoint.
  SDValue ReturnVal;
  SDNode *CallNode;
  std::tie(ReturnVal, CallNode) =
      lowerCallFromStatepointLoweringInfo(SI, *this, PendingExports);
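  // ReturnVal is the value produced by the temporary call (if any); CallNode is
  // the lowered call itself, whose chain, glue and register-mask operands are
  // reused below when the STATEPOINT node is built in its place.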

  // Construct the actual GC_TRANSITION_START, STATEPOINT, and GC_TRANSITION_END
  // nodes with all the appropriate arguments and return values.

  // Call Node: Chain, Target, {Args}, RegMask, [Glue]
  SDValue Chain = CallNode->getOperand(0);

  SDValue Glue;
  bool CallHasIncomingGlue = CallNode->getGluedNode();
  if (CallHasIncomingGlue) {
    // Glue is always last operand
    Glue = CallNode->getOperand(CallNode->getNumOperands() - 1);
  }

  // Build the GC_TRANSITION_START node if necessary.
  //
  // The operands to the GC_TRANSITION_{START,END} nodes are laid out in the
  // order in which they appear in the call to the statepoint intrinsic. If
  // any of the operands is pointer-typed, that operand is immediately
  // followed by a SRCVALUE for the pointer that may be used during lowering
  // (e.g. to form MachinePointerInfo values for loads/stores).
  const bool IsGCTransition =
      (SI.StatepointFlags & (uint64_t)StatepointFlags::GCTransition) ==
      (uint64_t)StatepointFlags::GCTransition;
|
if (IsGCTransition) {
SmallVector<SDValue, 8> TSOps;

// Add chain
TSOps.push_back(Chain);

// Add GC transition arguments
for (const Value *V : SI.GCTransitionArgs) {
TSOps.push_back(getValue(V));
if (V->getType()->isPointerTy())
TSOps.push_back(DAG.getSrcValue(V));
}

// Add glue if necessary
if (CallHasIncomingGlue)
TSOps.push_back(Glue);

SDVTList NodeTys = DAG.getVTList(MVT::Other, MVT::Glue);

SDValue GCTransitionStart =
DAG.getNode(ISD::GC_TRANSITION_START, getCurSDLoc(), NodeTys, TSOps);

Chain = GCTransitionStart.getValue(0);
Glue = GCTransitionStart.getValue(1);
}
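// At this point Chain and Glue are the GC_TRANSITION_START results; the
// operand list built next hands them to the STATEPOINT node.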

// TODO: Currently, all of these operands are being marked as read/write in
// PrologEpilogInserter.cpp; we should special case the VMState arguments
// and flags to be read-only.
SmallVector<SDValue, 40> Ops;

// Add the <id> and <numBytes> constants.
Ops.push_back(DAG.getTargetConstant(SI.ID, getCurSDLoc(), MVT::i64));
Ops.push_back(
DAG.getTargetConstant(SI.NumPatchBytes, getCurSDLoc(), MVT::i32));
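// <id> is the statepoint ID that ends up in the generated stackmap record;
// <numBytes> reserves patchable bytes at the call site, mirroring patchpoint.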
// Calculate and push starting position of vmstate arguments
// Get number of arguments incoming directly into call node
unsigned NumCallRegArgs =
CallNode->getNumOperands() - (CallHasIncomingGlue ? 4 : 3);
Ops.push_back(DAG.getTargetConstant(NumCallRegArgs, getCurSDLoc(), MVT::i32));

// Add call target
SDValue CallTarget = SDValue(CallNode->getOperand(1).getNode(), 0);
Ops.push_back(CallTarget);
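// Note: NumCallRegArgs excludes the CALL node's chain (operand 0), the call
// target (operand 1), the register mask and, when present, the incoming glue,
// hence the 3-or-4 adjustment above.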

// Add call arguments
// Get position of register mask in the call
SDNode::op_iterator RegMaskIt;
if (CallHasIncomingGlue)
RegMaskIt = CallNode->op_end() - 2;
else
RegMaskIt = CallNode->op_end() - 1;
Ops.insert(Ops.end(), CallNode->op_begin() + 2, RegMaskIt);
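// op_begin() + 2 skips the chain and call target; RegMaskIt stops just short
// of the register mask (and trailing glue, if any), so only the actual
// register arguments are copied.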

// Add a constant argument for the calling convention
pushStackMapConstant(Ops, *this, SI.CLI.CallConv);

// Add a constant argument for the flags
uint64_t Flags = SI.StatepointFlags;
assert(((Flags & ~(uint64_t)StatepointFlags::MaskAll) == 0) &&
"Unknown flag used");
pushStackMapConstant(Ops, *this, Flags);
// Insert all vmstate and gcstate arguments
llvm::append_range(Ops, LoweredMetaArgs);

// Add register mask from call node
Ops.push_back(*RegMaskIt);

// Add chain
Ops.push_back(Chain);

// Same for the glue, but we add it only if original call had it
if (Glue.getNode())
Ops.push_back(Glue);

// Compute return values. Provide a glue output since we consume one as
// input. This allows someone else to chain off us as needed.
[Statepoints] Support lowering gc relocations to virtual registers
(Disabled under flag for the moment)
This is part of a larger project wherein we are finally integrating lowering of gc live operands with the register allocator. Today, we force spill all operands in SelectionDAG. The code to do so is distinctly non-optimal. The approach this patch is working towards is to instead lower the relocations directly into the MI form, and let the register allocator pick which ones get spilled and which stack slots they get spilled to. In terms of performance, the latter part is actually more important as it avoids redundant shuffling of values between stack slots.
This particular change adds ISEL support to produce the variadic def STATEPOINT form required by the above. In particular, the first N are lowered to variadic tied def/use pairs. So new statepoint looks like this:
reloc1,reloc2,... = STATEPOINT ..., base1, derived1<tied-def0>, base2, derived2<tied-def1>, ...
N is limited by the maximal number of tied registers machine instruction can have (15 at the moment).
The current patch is restricted to handling relocations within a single basic block. Cross block relocations (e.g. invokes) are handled via the legacy mechanism. This restriction will be relaxed in future patches.
Patch By: dantrushin
Differential Revision: https://reviews.llvm.org/D81648
2020-07-12 01:50:34 +08:00
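// Background note: a tied def/use pair constrains the register allocator to
// give the def the same location as its tied use, which is what lets the
// allocator (rather than ISel) decide whether each relocated pointer lives in
// a register or a spill slot.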
SmallVector<EVT, 8> NodeTys;
[Statepoints] Change statepoint machine instr format to better suit VReg lowering.
Current Statepoint MI format is this:
STATEPOINT
<id>, <num patch bytes >, <num call arguments>, <call target>,
[call arguments...],
<StackMaps::ConstantOp>, <calling convention>,
<StackMaps::ConstantOp>, <statepoint flags>,
<StackMaps::ConstantOp>, <num deopt args>, [deopt args...],
<gc base/derived pairs...> <gc allocas...>
Note that GC pointers are listed in pairs <base,derived>.
This causes base pointers to appear many times (at least twice) in
the instruction, which is bad for us when VReg lowering is ON.
The problem is that machine operand tiedness is 1-1 relation, so
it might look like this:
%vr2 = STATEPOINT ... %vr1, %vr1(tied-def0)
Since only one instance of %vr1 is tied, that may lead to incorrect
codegen (see PR46917 for more details), so we have to always spill
base pointers. This mostly defeats new VReg lowering scheme.
This patch changes statepoint instruction format so that every
gc pointer appears only once in operand list. That way they all can
be tied. Additional set of operands is added to preserve base-derived
relation required to build stackmap.
New statepoint has following format:
STATEPOINT
<id>, <num patch bytes>, <num call arguments>, <call target>,
[call arguments...],
<StackMaps::ConstantOp>, <calling convention>,
<StackMaps::ConstantOp>, <statepoint flags>,
<StackMaps::ConstantOp>, <num deopt args>, [deopt args...],
<StackMaps::ConstantOp>, <num gc pointers>, [gc pointers...],
<StackMaps::ConstantOp>, <num gc allocas>, [gc allocas...]
<StackMaps::ConstantOp>, <num entries in gc map>, [base/derived indices...]
Changes are:
- every gc pointer is listed only once in a flat length-prefixed list;
- alloca list is prefixed with its length too;
- the alloca list is followed by a length-prefixed list of base-derived
indices of pointers from the gc pointer list. Note that indices are
logical (number of pointer), not absolute (index of machine operand).
Differential Revision: https://reviews.llvm.org/D87154
2020-09-05 01:45:41 +08:00
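A small worked example of the trailing gc map (illustrative only):
// If the gc pointer list is [p0, p1, p2] and p1 is derived from p0, then p1's
// map entry records the pair (0, 1): positions within the gc pointer list,
// not machine-operand indices, so extra constant operands do not shift them.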
for (auto SD : LoweredGCArgs) {
if (!LowerAsVReg.count(SD))
continue;
NodeTys.push_back(SD.getValueType());
}

LLVM_DEBUG(dbgs() << "Statepoint has " << NodeTys.size() << " results\n");
assert(NodeTys.size() == LowerAsVReg.size() && "Inconsistent GC Ptr lowering");
NodeTys.push_back(MVT::Other);
NodeTys.push_back(MVT::Glue);
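// The result list is therefore [vreg-lowered gc pointer types..., MVT::Other,
// MVT::Glue]; the chain and glue are always the last two results, which is
// what the NumResults - 2 / NumResults - 1 indexing below relies on.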
unsigned NumResults = NodeTys.size();
MachineSDNode *StatepointMCNode =
DAG.getMachineNode(TargetOpcode::STATEPOINT, getCurSDLoc(), NodeTys, Ops);
DAG.setNodeMemRefs(StatepointMCNode, MemRefs);
// For values lowered to tied-defs, create the virtual registers. Note that
// for simplicity, we *always* create a vreg even within a single block.
DenseMap<SDValue, Register> VirtRegs;
for (const auto *Relocate : SI.GCRelocates) {
Value *Derived = Relocate->getDerivedPtr();
SDValue SD = getValue(Derived);
if (!LowerAsVReg.count(SD))
continue;
[Statepoint] When using the tied def lowering, unconditionally use vregs [almost NFC]
This builds on 3da1a96 on the path towards supporting invokes and cross block relocations. The actual change attempts to be NFC, but does fail in one corner-case explained below.
The change itself is fairly mechanical. Rather than remember SDValues - which are inherently block local - immediately produce a virtual register copy and remember that.
Once this lands, we'll update the FunctionLoweringInfo::StatepointSpillMap map to allow register based lowerings, delete VirtRegs from StatepointLowering, and drop the restriction against cross block relocations. I deliberately separate the semantic part into its own change for ease of understanding and fault isolation.
The corner-case which isn't quite NFC is that the old implementation implicitly CSEd gc.relocates of the same SDValue regardless of type. The new implementation still only relocates once, but it produces distinct vregs for the bitcast and its source, whereas SelectionDAG's generic CSE was able to remove the bitcast in the old implementation. Note that the final assembly doesn't change (at least in the test), as our MI level optimizations catch the duplication.
I assert that this is an uninteresting corner-case. It's functionally correct, and if we find a case where this influences performance, we should really be canonicalizing types to i8* at the IR level.
Differential Revision: https://reviews.llvm.org/D84692
2020-07-30 00:13:15 +08:00

// Handle multiple gc.relocates of the same input efficiently.
if (VirtRegs.count(SD))
continue;

SDValue Relocated = SDValue(StatepointMCNode, LowerAsVReg[SD]);

auto *RetTy = Relocate->getType();
Register Reg = FuncInfo.CreateRegs(RetTy);
RegsForValue RFV(*DAG.getContext(), DAG.getTargetLoweringInfo(),
DAG.getDataLayout(), Reg, RetTy, None);
SDValue Chain = DAG.getRoot();
RFV.getCopyToRegs(Relocated, DAG, getCurSDLoc(), Chain, nullptr);
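// Queue the CopyToReg chain as a pending export so the new vreg is emitted
// before any use of the relocation, including uses in other blocks.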
PendingExports.push_back(Chain);

VirtRegs[SD] = Reg;
}
// Record for later use how each relocation was lowered. This is needed to
// allow later gc.relocates to mirror the lowering chosen.
const Instruction *StatepointInstr = SI.StatepointInstr;
auto &RelocationMap = FuncInfo.StatepointRelocationMaps[StatepointInstr];
for (const GCRelocateInst *Relocate : SI.GCRelocates) {
const Value *V = Relocate->getDerivedPtr();
SDValue SDV = getValue(V);
SDValue Loc = StatepointLowering.getLocation(SDV);

RecordType Record;
if (LowerAsVReg.count(SDV)) {
Record.type = RecordType::VReg;
assert(VirtRegs.count(SDV));
Record.payload.Reg = VirtRegs[SDV];
} else if (Loc.getNode()) {
Record.type = RecordType::Spill;
Record.payload.FI = cast<FrameIndexSDNode>(Loc)->getIndex();
} else {
Record.type = RecordType::NoRelocate;
// If we didn't relocate a value, we'll essentially end up inserting an
// additional use of the original value when lowering the gc.relocate.
// We need to make sure the value is available at the new use, which
// might be in another block.
if (Relocate->getParent() != StatepointInstr->getParent())
ExportFromCurrentBlock(V);
}
RelocationMap[V] = Record;
}

SDNode *SinkNode = StatepointMCNode;
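// SinkNode is whichever node ends up producing the final chain/glue: the
// STATEPOINT itself, or the GC_TRANSITION_END wrapper built below for GC
// transition statepoints. Its last two values replace the original call.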
// Build the GC_TRANSITION_END node if necessary.
//
// See the comment above regarding GC_TRANSITION_START for the layout of
// the operands to the GC_TRANSITION_END node.
if (IsGCTransition) {
SmallVector<SDValue, 8> TEOps;

// Add chain
TEOps.push_back(SDValue(StatepointMCNode, NumResults - 2));
// Add GC transition arguments
for (const Value *V : SI.GCTransitionArgs) {
TEOps.push_back(getValue(V));
if (V->getType()->isPointerTy())
TEOps.push_back(DAG.getSrcValue(V));
}

// Add glue
TEOps.push_back(SDValue(StatepointMCNode, NumResults - 1));

SDVTList NodeTys = DAG.getVTList(MVT::Other, MVT::Glue);

SDValue GCTransitionEnd =
DAG.getNode(ISD::GC_TRANSITION_END, getCurSDLoc(), NodeTys, TEOps);

SinkNode = GCTransitionEnd.getNode();
}
  // Replace original call
  // Call: ch,glue = CALL ...
  // Statepoint: [gc relocates],ch,glue = STATEPOINT ...
  unsigned NumSinkValues = SinkNode->getNumValues();
  SDValue StatepointValues[2] = {SDValue(SinkNode, NumSinkValues - 2),
                                 SDValue(SinkNode, NumSinkValues - 1)};
  DAG.ReplaceAllUsesWith(CallNode, StatepointValues);
  // Remove original call node
  DAG.DeleteNode(CallNode);

  // Since we always emit CopyToRegs (even for local relocates), we must
  // update root, so that they are emitted before any local uses.
  (void)getControlRoot();
  // TODO: A better future implementation would be to emit a single variable
  // argument, variable return value STATEPOINT node here and then hookup the
  // return value of each gc.relocate to the respective output of the
  // previously emitted STATEPOINT value.  Unfortunately, this doesn't appear
  // to actually be possible today.

  return ReturnVal;
}
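// Illustrative IR sketch (intrinsic type suffixes and most operands omitted,
// value names invented) of the two shapes the helper below distinguishes:
//
//   %tok = call token @llvm.experimental.gc.statepoint(...)
//   %ret = call i32 @llvm.experimental.gc.result(token %tok)   ; block local
//
//   %tok = invoke token @llvm.experimental.gc.statepoint(...)
//             to label %normal unwind label %unwind
// normal:
//   %ret = call i32 @llvm.experimental.gc.result(token %tok)   ; non-block local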
/// Return two gc.results if present.  First result is a block local
/// gc.result, second result is a non-block local gc.result.  Corresponding
/// entry will be nullptr if not present.
static std::pair<const GCResultInst*, const GCResultInst*>
getGCResultLocality(const GCStatepointInst &S) {
  std::pair<const GCResultInst *, const GCResultInst*> Res(nullptr, nullptr);
  for (auto *U : S.users()) {
    auto *GRI = dyn_cast<GCResultInst>(U);
    if (!GRI)
      continue;
    if (GRI->getParent() == S.getParent())
      Res.first = GRI;
    else
      Res.second = GRI;
  }
  return Res;
}
void
SelectionDAGBuilder::LowerStatepoint(const GCStatepointInst &I,
                                     const BasicBlock *EHPadBB /*= nullptr*/) {
  assert(I.getCallingConv() != CallingConv::AnyReg &&
         "anyregcc is not supported on statepoints!");

#ifndef NDEBUG
  // Check that the associated GCStrategy expects to encounter statepoints.
  assert(GFI->getStrategy().useStatepoints() &&
         "GCStrategy does not expect to encounter statepoints");
#endif

  SDValue ActualCallee;
  SDValue Callee = getValue(I.getActualCalledOperand());

  if (I.getNumPatchBytes() > 0) {
    // If we've been asked to emit a nop sequence instead of a call instruction
    // for this statepoint then don't lower the call target, but use a constant
    // `undef` instead.  Not lowering the call target lets statepoint clients
    // get away without providing a physical address for the symbolic call
    // target at link time.
    ActualCallee = DAG.getUNDEF(Callee.getValueType());
  } else {
    ActualCallee = Callee;
  }

  StatepointLoweringInfo SI(DAG);
  populateCallLoweringInfo(SI.CLI, &I, GCStatepointInst::CallArgsBeginPos,
                           I.getNumCallArgs(), ActualCallee,
                           I.getActualReturnType(), false /* IsPatchPoint */);
  // There may be duplication in the gc.relocate list, such as two copies of
  // each relocation on the normal and exceptional paths for an invoke.  We
  // only need to spill once and record one copy in the stackmap, but we need
  // to reload once per gc.relocate.  (Dedupping gc.relocates is trickier and
  // best handled as a CSE problem elsewhere.)
  // TODO: There are a couple of major stackmap size optimizations we could do
  // here if we wished.
  // 1) If we've encountered a derived pair {B, D}, we don't need to actually
  // record {B,B} if it's seen later.
  // 2) Due to rematerialization, actual derived pointers are somewhat rare;
  // given that, we could change the format to record base pointer relocations
  // separately with half the space.  This would require a format rev and a
  // fairly major rework of the STATEPOINT node though.
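  // Illustrative sketch: for
  //   invoke ... gc.statepoint ... to label %normal unwind label %unwind
  // both %normal and %unwind usually carry a gc.relocate for the same live
  // pointer %p.  Both relocates are recorded in SI.GCRelocates (one reload
  // each), but the (base, derived) pair for %p only enters Bases/Ptrs once,
  // via the Seen set below.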
  SmallSet<SDValue, 8> Seen;
  for (const GCRelocateInst *Relocate : I.getGCRelocates()) {
    SI.GCRelocates.push_back(Relocate);

    SDValue DerivedSD = getValue(Relocate->getDerivedPtr());
    if (Seen.insert(DerivedSD).second) {
      SI.Bases.push_back(Relocate->getBasePtr());
      SI.Ptrs.push_back(Relocate->getDerivedPtr());
    }
  }
  // If we find a deopt value which isn't explicitly added, we need to
  // ensure it gets lowered such that gc cycles occurring before the
  // deoptimization event during the lifetime of the call don't invalidate
  // the pointer we're deopting with.  Note that we assume that all
  // pointers passed to deopt are base pointers; relaxing that assumption
  // would require relatively large changes to how we represent relocations.
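  // Illustrative sketch (operands invented): given a statepoint carrying
  //   [ "deopt"(i32 42, ptr addrspace(1) %obj) ]
  // the GC-managed %obj was not listed as an explicit gc arg, so it is added
  // below as its own base/derived pair to keep it valid across the call.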
  for (Value *V : I.deopt_operands()) {
    if (!isGCValue(V, *this))
      continue;
    if (Seen.insert(getValue(V)).second) {
      SI.Bases.push_back(V);
      SI.Ptrs.push_back(V);
    }
  }
  SI.GCArgs = ArrayRef<const Use>(I.gc_args_begin(), I.gc_args_end());
  SI.StatepointInstr = &I;
  SI.ID = I.getID();

  SI.DeoptState = ArrayRef<const Use>(I.deopt_begin(), I.deopt_end());
  SI.GCTransitionArgs = ArrayRef<const Use>(I.gc_transition_args_begin(),
                                            I.gc_transition_args_end());

  SI.StatepointFlags = I.getFlags();
  SI.NumPatchBytes = I.getNumPatchBytes();
  SI.EHPadBB = EHPadBB;

  SDValue ReturnValue = LowerAsSTATEPOINT(SI);
  // Export the result value if needed
  const auto GCResultLocality = getGCResultLocality(I);

  if (!GCResultLocality.first && !GCResultLocality.second) {
    // The return value is not needed; just generate a placeholder value.
    // Note: This covers the void return case.
    setValue(&I, DAG.getIntPtrConstant(-1, getCurSDLoc()));
    return;
  }
  if (GCResultLocality.first) {
    // The result value will be used in the same basic block.  Don't export it
    // or perform any explicit register copies.  The gc_result will simply grab
    // this value.
    setValue(&I, ReturnValue);
  }
  if (!GCResultLocality.second)
    return;

  // The result value will be used in a different basic block, so we need to
  // export it now.  The default exporting mechanism will not work here because
  // the statepoint call has a different type than the actual call.  That means
  // llvm would create an export register of the wrong type (always i32 in our
  // case), so instead we create the export register with the correct type
  // manually.
  // TODO: To eliminate this problem we could remove gc.result intrinsics
  // completely and make the statepoint call return a tuple.
  Type *RetTy = GCResultLocality.second->getType();
  unsigned Reg = FuncInfo.CreateRegs(RetTy);
  RegsForValue RFV(*DAG.getContext(), DAG.getTargetLoweringInfo(),
                   DAG.getDataLayout(), Reg, RetTy,
                   I.getCallingConv());
  SDValue Chain = DAG.getEntryNode();

  RFV.getCopyToRegs(ReturnValue, DAG, getCurSDLoc(), Chain, nullptr);
  PendingExports.push_back(Chain);
  FuncInfo.ValueMap[&I] = Reg;
}
void SelectionDAGBuilder::LowerCallSiteWithDeoptBundleImpl(
    const CallBase *Call, SDValue Callee, const BasicBlock *EHPadBB,
    bool VarArgDisallowed, bool ForceVoidReturnTy) {
  StatepointLoweringInfo SI(DAG);
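  // The offset of the first call argument within the call's operand list; it
  // is passed to populateCallLoweringInfo below together with Call->arg_size()
  // so that only the argument operands are lowered.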
  unsigned ArgBeginIndex = Call->arg_begin() - Call->op_begin();
  populateCallLoweringInfo(
      SI.CLI, Call, ArgBeginIndex, Call->arg_size(), Callee,
      ForceVoidReturnTy ? Type::getVoidTy(*DAG.getContext()) : Call->getType(),
      false);
  if (!VarArgDisallowed)
    SI.CLI.IsVarArg = Call->getFunctionType()->isVarArg();

  auto DeoptBundle = *Call->getOperandBundle(LLVMContext::OB_deopt);

  unsigned DefaultID = StatepointDirectives::DeoptBundleStatepointID;

  auto SD = parseStatepointDirectivesFromAttrs(Call->getAttributes());
  SI.ID = SD.StatepointID.getValueOr(DefaultID);
  SI.NumPatchBytes = SD.NumPatchBytes.getValueOr(0);

  SI.DeoptState =
      ArrayRef<const Use>(DeoptBundle.Inputs.begin(), DeoptBundle.Inputs.end());
  SI.StatepointFlags = static_cast<uint64_t>(StatepointFlags::None);
  SI.EHPadBB = EHPadBB;
  // NB! The GC arguments are deliberately left empty.

  if (SDValue ReturnVal = LowerAsSTATEPOINT(SI)) {
    ReturnVal = lowerRangeToAssertZExt(DAG, *Call, ReturnVal);
    setValue(Call, ReturnVal);
  }
}
void SelectionDAGBuilder::LowerCallSiteWithDeoptBundle(
    const CallBase *Call, SDValue Callee, const BasicBlock *EHPadBB) {
  LowerCallSiteWithDeoptBundleImpl(Call, Callee, EHPadBB,
                                   /* VarArgDisallowed = */ false,
                                   /* ForceVoidReturnTy = */ false);
}
void SelectionDAGBuilder::visitGCResult(const GCResultInst &CI) {
  // The result value of the gc_result is simply the result of the actual
  // call.  We've already emitted this, so just grab the value.
  const GCStatepointInst *SI = CI.getStatepoint();

  if (SI->getParent() == CI.getParent()) {
    setValue(&CI, getValue(SI));
    return;
  }
  // The statepoint is in a different basic block, so we should have stored the
  // call result in a virtual register.
  // We cannot use the default getValue() functionality to copy the value from
  // this register because the statepoint and the actual call return types can
  // be different, and getValue() would use a CopyFromReg of the wrong type,
  // which is always i32 in our case.
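  // That register was created in LowerStatepoint (FuncInfo.ValueMap[&I] = Reg)
  // with the gc.result's own type, so the copy below uses CI.getType() to get
  // a value of the matching type.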
  Type *RetTy = CI.getType();
  SDValue CopyFromReg = getCopyFromRegs(SI, RetTy);

  assert(CopyFromReg.getNode());
  setValue(&CI, CopyFromReg);
}
void SelectionDAGBuilder::visitGCRelocate(const GCRelocateInst &Relocate) {
#ifndef NDEBUG
  // Consistency check
  // We skip this check for relocates not in the same basic block as their
  // statepoint.  It would be too expensive to preserve validation info through
  // different basic blocks.
  if (Relocate.getStatepoint()->getParent() == Relocate.getParent())
    StatepointLowering.relocCallVisited(Relocate);

  auto *Ty = Relocate.getType()->getScalarType();
  if (auto IsManaged = GFI->getStrategy().isGCManagedPointer(Ty))
    assert(*IsManaged && "Non gc managed pointer relocated!");
#endif

  const Value *DerivedPtr = Relocate.getDerivedPtr();
  auto &RelocationMap =
      FuncInfo.StatepointRelocationMaps[Relocate.getStatepoint()];
  auto SlotIt = RelocationMap.find(DerivedPtr);
  assert(SlotIt != RelocationMap.end() && "Relocating not lowered gc value");
  const RecordType &Record = SlotIt->second;
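  // Record describes how the statepoint lowering chose to make this pointer
  // available after the call: RecordType::VReg (relocated through a virtual
  // register), RecordType::Spill (reloaded from a stack slot), or
  // RecordType::NoRelocate (constants and allocas that never needed
  // relocation).  Each case is handled in turn below.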
  // If the relocation was done via a virtual register...
  if (Record.type == RecordType::VReg) {
    Register InReg = Record.payload.Reg;
    RegsForValue RFV(*DAG.getContext(), DAG.getTargetLoweringInfo(),
                     DAG.getDataLayout(), InReg, Relocate.getType(),
                     None); // This is not an ABI copy.
    // We generate copy to/from regs even for local uses, hence we must
    // chain with the current root to ensure proper ordering of copies w.r.t.
    // the statepoint.
    SDValue Chain = DAG.getRoot();
    SDValue Relocation = RFV.getCopyFromRegs(DAG, FuncInfo, getCurSDLoc(),
                                             Chain, nullptr, nullptr);
    setValue(&Relocate, Relocation);
    return;
  }
  if (Record.type == RecordType::Spill) {
    unsigned Index = Record.payload.FI;
    SDValue SpillSlot = DAG.getTargetFrameIndex(Index, getFrameIndexTy());

    // All the reloads are independent and are reading memory only modified by
    // statepoints (i.e. no other aliasing stores); informing SelectionDAG of
    // this lets CSE kick in for free and allows reordering of instructions if
    // possible.  The lowering for statepoint sets the root, so this is
    // ordering all reloads with either
    // a) the statepoint node itself, or
    // b) the entry of the current block for an invoke statepoint.
    const SDValue Chain = DAG.getRoot(); // != Builder.getRoot()

    auto &MF = DAG.getMachineFunction();
    auto &MFI = MF.getFrameInfo();
    auto PtrInfo = MachinePointerInfo::getFixedStack(MF, Index);
    auto *LoadMMO = MF.getMachineMemOperand(PtrInfo, MachineMemOperand::MOLoad,
                                            MFI.getObjectSize(Index),
                                            MFI.getObjectAlign(Index));

    auto LoadVT = DAG.getTargetLoweringInfo().getValueType(DAG.getDataLayout(),
                                                           Relocate.getType());

    SDValue SpillLoad =
        DAG.getLoad(LoadVT, getCurSDLoc(), Chain, SpillSlot, LoadMMO);
    PendingLoads.push_back(SpillLoad.getValue(1));

    assert(SpillLoad.getNode());
    setValue(&Relocate, SpillLoad);
    return;
  }
  assert(Record.type == RecordType::NoRelocate);
  SDValue SD = getValue(DerivedPtr);

  if (SD.isUndef() && SD.getValueType().getSizeInBits() <= 64) {
    // Lowering relocate(undef) as an arbitrary constant.  The constant value
    // is chosen such that it's unlikely to be a valid pointer.
    setValue(&Relocate, DAG.getTargetConstant(0xFEFEFEFE, SDLoc(SD), MVT::i64));
    return;
  }

  // We didn't need to spill these special cases (constants and allocas).
  // See the handling in spillIncomingValueForStatepoint for details.
  setValue(&Relocate, SD);
}
void SelectionDAGBuilder::LowerDeoptimizeCall(const CallInst *CI) {
  const auto &TLI = DAG.getTargetLoweringInfo();
  SDValue Callee = DAG.getExternalSymbol(TLI.getLibcallName(RTLIB::DEOPTIMIZE),
                                         TLI.getPointerTy(DAG.getDataLayout()));

  // We don't lower calls to __llvm_deoptimize as varargs, but as a regular
  // call.  We also do not lower the return value to any virtual register, and
  // change the immediately following return to a trap instruction.
  LowerCallSiteWithDeoptBundleImpl(CI, Callee, /* EHPadBB = */ nullptr,
                                   /* VarArgDisallowed = */ true,
                                   /* ForceVoidReturnTy = */ true);
}
void SelectionDAGBuilder::LowerDeoptimizingReturn() {
  // We do not lower the return value from llvm.deoptimize to any virtual
  // register, and change the immediately following return to a trap
  // instruction.
  if (DAG.getTarget().Options.TrapUnreachable)
    DAG.setRoot(
        DAG.getNode(ISD::TRAP, getCurSDLoc(), MVT::Other, DAG.getRoot()));
}