//===-- SILowerI1Copies.cpp - Lower I1 Copies -----------------------------===//
//
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
//
//===----------------------------------------------------------------------===//
//
// This pass lowers all occurrences of i1 values (with a vreg_1 register class)
// to lane masks (32 / 64-bit scalar registers). The pass assumes machine SSA
// form and a wave-level control flow graph.
//
// Before this pass, values that are semantically i1 and are defined and used
// within the same basic block are already represented as lane masks in scalar
// registers. However, values that cross basic blocks are always transferred
// between basic blocks in vreg_1 virtual registers and are lowered by this
// pass.
//
// The only instructions that use or define vreg_1 virtual registers are COPY,
// PHI, and IMPLICIT_DEF.
//
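//
// As an illustrative sketch (not taken verbatim from the code below), merging
// a newly defined lane mask %cur into a previously defined mask %prev under
// the current EXEC in wave64 mode uses bitwise operations of the form:
//
//   %tmp0 = S_ANDN2_B64 %prev, $exec  // keep lanes inactive on this path
//   %tmp1 = S_AND_B64 %cur, $exec     // take the currently active lanes
//   %dst = S_OR_B64 %tmp0, %tmp1      // combined, wave-consistent mask
//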
//===----------------------------------------------------------------------===//
#include "AMDGPU.h"
#include "AMDGPUSubtarget.h"
#include "MCTargetDesc/AMDGPUMCTargetDesc.h"
#include "SIInstrInfo.h"
#include "llvm/CodeGen/MachineDominators.h"
#include "llvm/CodeGen/MachineFunctionPass.h"
#include "llvm/CodeGen/MachineInstrBuilder.h"
#include "llvm/CodeGen/MachinePostDominators.h"
#include "llvm/CodeGen/MachineRegisterInfo.h"
#include "llvm/CodeGen/MachineSSAUpdater.h"
#include "llvm/IR/Function.h"
#include "llvm/IR/LLVMContext.h"
#include "llvm/Support/Debug.h"
#include "llvm/Target/TargetMachine.h"

#define DEBUG_TYPE "si-i1-copies"

using namespace llvm;

static unsigned createLaneMaskReg(MachineFunction &MF);
static unsigned insertUndefLaneMask(MachineBasicBlock &MBB);

namespace {

class SILowerI1Copies : public MachineFunctionPass {
public:
  static char ID;

private:
  bool IsWave32 = false;
  MachineFunction *MF = nullptr;
  MachineDominatorTree *DT = nullptr;
  MachinePostDominatorTree *PDT = nullptr;
  MachineRegisterInfo *MRI = nullptr;
  const GCNSubtarget *ST = nullptr;
  const SIInstrInfo *TII = nullptr;

  unsigned ExecReg;
  unsigned MovOp;
  unsigned AndOp;
  unsigned OrOp;
  unsigned XorOp;
  unsigned AndN2Op;
  unsigned OrN2Op;

  DenseSet<unsigned> ConstrainRegs;

public:
  SILowerI1Copies() : MachineFunctionPass(ID) {
    initializeSILowerI1CopiesPass(*PassRegistry::getPassRegistry());
  }

  bool runOnMachineFunction(MachineFunction &MF) override;

  StringRef getPassName() const override { return "SI Lower i1 Copies"; }

  void getAnalysisUsage(AnalysisUsage &AU) const override {
    AU.setPreservesCFG();
    AU.addRequired<MachineDominatorTree>();
    AU.addRequired<MachinePostDominatorTree>();
    MachineFunctionPass::getAnalysisUsage(AU);
  }

private:
  void lowerCopiesFromI1();
  void lowerPhis();
  void lowerCopiesToI1();
  bool isConstantLaneMask(unsigned Reg, bool &Val) const;
  void buildMergeLaneMasks(MachineBasicBlock &MBB,
                           MachineBasicBlock::iterator I, const DebugLoc &DL,
                           unsigned DstReg, unsigned PrevReg, unsigned CurReg);
  MachineBasicBlock::iterator
  getSaluInsertionAtEnd(MachineBasicBlock &MBB) const;

  bool isLaneMaskReg(unsigned Reg) const {
    return TII->getRegisterInfo().isSGPRReg(*MRI, Reg) &&
           TII->getRegisterInfo().getRegSizeInBits(Reg, *MRI) ==
               ST->getWavefrontSize();
  }
};

/// Helper class that determines the relationship between incoming values of a
/// phi in the control flow graph to determine where an incoming value can
/// simply be taken as a scalar lane mask as-is, and where it needs to be
/// merged with another, previously defined lane mask.
///
/// The approach is as follows:
///  - Determine all basic blocks which, starting from the incoming blocks,
///    a wave may reach before entering the def block (the block containing the
///    phi).
///  - If an incoming block has no predecessors in this set, we can take the
///    incoming value as a scalar lane mask as-is.
///   -- A special case of this is when the def block has a self-loop.
///  - Otherwise, the incoming value needs to be merged with a previously
///    defined lane mask.
///  - If there is a path into the set of reachable blocks that does _not_ go
///    through an incoming block where we can take the scalar lane mask as-is,
///    we need to invent an available value for the SSAUpdater. Choices are
///    0 and undef, with differing consequences for how to merge values etc.
///
/// TODO: We could use region analysis to quickly skip over SESE regions during
/// the traversal.
///
class PhiIncomingAnalysis {
  MachinePostDominatorTree &PDT;

  // For each reachable basic block, whether it is a source in the induced
  // subgraph of the CFG.
  DenseMap<MachineBasicBlock *, bool> ReachableMap;
  SmallVector<MachineBasicBlock *, 4> ReachableOrdered;
  SmallVector<MachineBasicBlock *, 4> Stack;
  SmallVector<MachineBasicBlock *, 4> Predecessors;

public:
  PhiIncomingAnalysis(MachinePostDominatorTree &PDT) : PDT(PDT) {}

  /// Returns whether \p MBB is a source in the induced subgraph of reachable
  /// blocks.
  bool isSource(MachineBasicBlock &MBB) const {
    return ReachableMap.find(&MBB)->second;
  }

  ArrayRef<MachineBasicBlock *> predecessors() const { return Predecessors; }

  void analyze(MachineBasicBlock &DefBlock,
               ArrayRef<MachineBasicBlock *> IncomingBlocks) {
    assert(Stack.empty());
    ReachableMap.clear();
    ReachableOrdered.clear();
    Predecessors.clear();

    // Insert the def block first, so that it acts as an end point for the
    // traversal.
    ReachableMap.try_emplace(&DefBlock, false);
    ReachableOrdered.push_back(&DefBlock);

    for (MachineBasicBlock *MBB : IncomingBlocks) {
      if (MBB == &DefBlock) {
        ReachableMap[&DefBlock] = true; // self-loop on DefBlock
        continue;
      }

      ReachableMap.try_emplace(MBB, false);
      ReachableOrdered.push_back(MBB);

      // If this block has a divergent terminator and the def block is its
      // post-dominator, the wave may first visit the other successors.
      bool Divergent = false;
      for (MachineInstr &MI : MBB->terminators()) {
        if (MI.getOpcode() == AMDGPU::SI_NON_UNIFORM_BRCOND_PSEUDO ||
            MI.getOpcode() == AMDGPU::SI_IF ||
            MI.getOpcode() == AMDGPU::SI_ELSE ||
            MI.getOpcode() == AMDGPU::SI_LOOP) {
          Divergent = true;
          break;
        }
      }

      if (Divergent && PDT.dominates(&DefBlock, MBB)) {
        for (MachineBasicBlock *Succ : MBB->successors())
          Stack.push_back(Succ);
      }
    }

    while (!Stack.empty()) {
      MachineBasicBlock *MBB = Stack.pop_back_val();
      if (!ReachableMap.try_emplace(MBB, false).second)
        continue;
      ReachableOrdered.push_back(MBB);

      for (MachineBasicBlock *Succ : MBB->successors())
        Stack.push_back(Succ);
    }

    for (MachineBasicBlock *MBB : ReachableOrdered) {
      bool HaveReachablePred = false;
      for (MachineBasicBlock *Pred : MBB->predecessors()) {
        if (ReachableMap.count(Pred)) {
          HaveReachablePred = true;
        } else {
          Stack.push_back(Pred);
        }
      }
      if (!HaveReachablePred)
        ReachableMap[MBB] = true;
      if (HaveReachablePred) {
        for (MachineBasicBlock *UnreachablePred : Stack) {
          if (llvm::find(Predecessors, UnreachablePred) == Predecessors.end())
            Predecessors.push_back(UnreachablePred);
        }
      }
      Stack.clear();
    }
  }
};

/// Helper class that detects loops which require us to lower an i1 COPY into
/// bitwise manipulation.
///
/// Unfortunately, we cannot use LoopInfo because LoopInfo does not distinguish
/// between loops with the same header. Consider this example:
///
///  A-+-+
///  | | |
///  B-+ |
///  |   |
///  C---+
///
/// A is the header of a loop containing A, B, and C as far as LoopInfo is
/// concerned. However, an i1 COPY in B that is used in C must be lowered to
/// bitwise operations to combine results from different loop iterations when
/// B has a divergent branch (since by default we will compile this code such
/// that threads in a wave are merged at the entry of C).
///
/// The following rule is implemented to determine whether bitwise operations
/// are required: use the bitwise lowering for a def in block B if a backward
/// edge to B is reachable without going through the nearest common
/// post-dominator of B and all uses of the def.
///
/// TODO: This rule is conservative because it does not check whether the
/// relevant branches are actually divergent.
///
/// The class is designed to cache the CFG traversal so that it can be re-used
/// for multiple defs within the same basic block.
///
/// TODO: We could use region analysis to quickly skip over SESE regions during
/// the traversal.
///
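/// A typical use when lowering a def in block MBB looks roughly like this
/// (sketch only; the names `LF`, `PostDomOfAllUses`, and `SSAUpdater` are
/// illustrative, not taken from the code below):
///
///   LF.initialize(MBB);
///   if (unsigned Level = LF.findLoop(PostDomOfAllUses))
///     LF.addLoopEntries(Level, SSAUpdater);
///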
class LoopFinder {
  MachineDominatorTree &DT;
  MachinePostDominatorTree &PDT;

  // All visited / reachable blocks, tagged by level (level 0 is the def block,
  // level 1 are all blocks reachable including but not going through the def
  // block's IPDOM, etc.).
  DenseMap<MachineBasicBlock *, unsigned> Visited;

  // Nearest common dominator of all visited blocks by level (level 0 is the
  // def block). Used for seeding the SSAUpdater.
  SmallVector<MachineBasicBlock *, 4> CommonDominators;

  // Post-dominator of all visited blocks.
  MachineBasicBlock *VisitedPostDom = nullptr;

  // Level at which a loop was found: 0 is not possible; 1 = a backward edge is
  // reachable without going through the IPDOM of the def block (if the IPDOM
  // itself has an edge to the def block, the loop level is 2), etc.
  unsigned FoundLoopLevel = ~0u;

  MachineBasicBlock *DefBlock = nullptr;
  SmallVector<MachineBasicBlock *, 4> Stack;
  SmallVector<MachineBasicBlock *, 4> NextLevel;

public:
  LoopFinder(MachineDominatorTree &DT, MachinePostDominatorTree &PDT)
      : DT(DT), PDT(PDT) {}

  void initialize(MachineBasicBlock &MBB) {
    Visited.clear();
    CommonDominators.clear();
    Stack.clear();
    NextLevel.clear();
    VisitedPostDom = nullptr;
    FoundLoopLevel = ~0u;

    DefBlock = &MBB;
  }

  /// Check whether a backward edge can be reached without going through the
  /// given \p PostDom of the def block.
  ///
  /// Return the level of \p PostDom if a loop was found, or 0 otherwise.
  unsigned findLoop(MachineBasicBlock *PostDom) {
    MachineDomTreeNode *PDNode = PDT.getNode(DefBlock);

    if (!VisitedPostDom)
      advanceLevel();

    unsigned Level = 0;
    while (PDNode->getBlock() != PostDom) {
      if (PDNode->getBlock() == VisitedPostDom)
        advanceLevel();
      PDNode = PDNode->getIDom();
      Level++;
      if (FoundLoopLevel == Level)
        return Level;
    }

    return 0;
  }

  /// Add undef values dominating the loop and the optionally given additional
  /// blocks, so that the SSA updater doesn't have to search all the way to the
  /// function entry.
  void addLoopEntries(unsigned LoopLevel, MachineSSAUpdater &SSAUpdater,
                      ArrayRef<MachineBasicBlock *> Blocks = {}) {
    assert(LoopLevel < CommonDominators.size());

    MachineBasicBlock *Dom = CommonDominators[LoopLevel];
    for (MachineBasicBlock *MBB : Blocks)
      Dom = DT.findNearestCommonDominator(Dom, MBB);

    if (!inLoopLevel(*Dom, LoopLevel, Blocks)) {
      SSAUpdater.AddAvailableValue(Dom, insertUndefLaneMask(*Dom));
    } else {
      // The dominator is part of the loop or the given blocks, so add the
      // undef value to unreachable predecessors instead.
      for (MachineBasicBlock *Pred : Dom->predecessors()) {
        if (!inLoopLevel(*Pred, LoopLevel, Blocks))
          SSAUpdater.AddAvailableValue(Pred, insertUndefLaneMask(*Pred));
      }
    }
  }

private:
  bool inLoopLevel(MachineBasicBlock &MBB, unsigned LoopLevel,
                   ArrayRef<MachineBasicBlock *> Blocks) const {
    auto DomIt = Visited.find(&MBB);
    if (DomIt != Visited.end() && DomIt->second <= LoopLevel)
      return true;

    if (llvm::find(Blocks, &MBB) != Blocks.end())
      return true;

    return false;
  }

  void advanceLevel() {
    MachineBasicBlock *VisitedDom;

    if (!VisitedPostDom) {
      VisitedPostDom = DefBlock;
      VisitedDom = DefBlock;
      Stack.push_back(DefBlock);
    } else {
      VisitedPostDom = PDT.getNode(VisitedPostDom)->getIDom()->getBlock();
      VisitedDom = CommonDominators.back();

      for (unsigned i = 0; i < NextLevel.size();) {
        if (PDT.dominates(VisitedPostDom, NextLevel[i])) {
          Stack.push_back(NextLevel[i]);

          NextLevel[i] = NextLevel.back();
          NextLevel.pop_back();
        } else {
          i++;
        }
      }
    }

    unsigned Level = CommonDominators.size();
    while (!Stack.empty()) {
      MachineBasicBlock *MBB = Stack.pop_back_val();
      if (!PDT.dominates(VisitedPostDom, MBB))
        NextLevel.push_back(MBB);

      Visited[MBB] = Level;
      VisitedDom = DT.findNearestCommonDominator(VisitedDom, MBB);

      for (MachineBasicBlock *Succ : MBB->successors()) {
        if (Succ == DefBlock) {
          if (MBB == VisitedPostDom)
            FoundLoopLevel = std::min(FoundLoopLevel, Level + 1);
          else
            FoundLoopLevel = std::min(FoundLoopLevel, Level);
          continue;
        }

        if (Visited.try_emplace(Succ, ~0u).second) {
          if (MBB == VisitedPostDom)
            NextLevel.push_back(Succ);
          else
            Stack.push_back(Succ);
        }
      }
    }

    CommonDominators.push_back(VisitedDom);
  }
};
|
|
|
|
|
|
|
|
} // End anonymous namespace.
|
|
|
|
|
INITIALIZE_PASS_BEGIN(SILowerI1Copies, DEBUG_TYPE, "SI Lower i1 Copies", false,
                      false)
INITIALIZE_PASS_DEPENDENCY(MachineDominatorTree)
INITIALIZE_PASS_DEPENDENCY(MachinePostDominatorTree)
INITIALIZE_PASS_END(SILowerI1Copies, DEBUG_TYPE, "SI Lower i1 Copies", false,
                    false)

char SILowerI1Copies::ID = 0;

char &llvm::SILowerI1CopiesID = SILowerI1Copies::ID;

FunctionPass *llvm::createSILowerI1CopiesPass() {
  return new SILowerI1Copies();
}

static unsigned createLaneMaskReg(MachineFunction &MF) {
  const GCNSubtarget &ST = MF.getSubtarget<GCNSubtarget>();
  MachineRegisterInfo &MRI = MF.getRegInfo();
  return MRI.createVirtualRegister(ST.isWave32() ? &AMDGPU::SReg_32RegClass
                                                 : &AMDGPU::SReg_64RegClass);
}

static unsigned insertUndefLaneMask(MachineBasicBlock &MBB) {
  MachineFunction &MF = *MBB.getParent();
  const GCNSubtarget &ST = MF.getSubtarget<GCNSubtarget>();
  const SIInstrInfo *TII = ST.getInstrInfo();
  unsigned UndefReg = createLaneMaskReg(MF);
  BuildMI(MBB, MBB.getFirstTerminator(), {}, TII->get(AMDGPU::IMPLICIT_DEF),
          UndefReg);
  return UndefReg;
}

/// Lower all instructions that def or use vreg_1 registers.
///
/// In a first pass, we lower COPYs from vreg_1 to vector registers, as can
/// occur around inline assembly. We do this first, before vreg_1 registers
/// are changed to scalar mask registers.
///
/// Then we lower all defs of vreg_1 registers. Phi nodes are lowered before
/// all others, because phi lowering looks through copies and can therefore
/// often make copy lowering unnecessary.
bool SILowerI1Copies::runOnMachineFunction(MachineFunction &TheMF) {
  MF = &TheMF;
  MRI = &MF->getRegInfo();
  DT = &getAnalysis<MachineDominatorTree>();
  PDT = &getAnalysis<MachinePostDominatorTree>();

  ST = &MF->getSubtarget<GCNSubtarget>();
  TII = ST->getInstrInfo();
  IsWave32 = ST->isWave32();

  if (IsWave32) {
    ExecReg = AMDGPU::EXEC_LO;
    MovOp = AMDGPU::S_MOV_B32;
    AndOp = AMDGPU::S_AND_B32;
    OrOp = AMDGPU::S_OR_B32;
    XorOp = AMDGPU::S_XOR_B32;
    AndN2Op = AMDGPU::S_ANDN2_B32;
    OrN2Op = AMDGPU::S_ORN2_B32;
  } else {
    ExecReg = AMDGPU::EXEC;
    MovOp = AMDGPU::S_MOV_B64;
    AndOp = AMDGPU::S_AND_B64;
    OrOp = AMDGPU::S_OR_B64;
    XorOp = AMDGPU::S_XOR_B64;
    AndN2Op = AMDGPU::S_ANDN2_B64;
    OrN2Op = AMDGPU::S_ORN2_B64;
  }

  lowerCopiesFromI1();
  lowerPhis();
  lowerCopiesToI1();

  for (unsigned Reg : ConstrainRegs)
    MRI->constrainRegClass(Reg, &AMDGPU::SReg_1_XEXECRegClass);
  ConstrainRegs.clear();

  return true;
}

void SILowerI1Copies::lowerCopiesFromI1() {
  SmallVector<MachineInstr *, 4> DeadCopies;

  for (MachineBasicBlock &MBB : *MF) {
    for (MachineInstr &MI : MBB) {
      if (MI.getOpcode() != AMDGPU::COPY)
        continue;

      unsigned DstReg = MI.getOperand(0).getReg();
      unsigned SrcReg = MI.getOperand(1).getReg();
      if (!TargetRegisterInfo::isVirtualRegister(SrcReg) ||
          MRI->getRegClass(SrcReg) != &AMDGPU::VReg_1RegClass)
        continue;

      if (isLaneMaskReg(DstReg) ||
          (TargetRegisterInfo::isVirtualRegister(DstReg) &&
           MRI->getRegClass(DstReg) == &AMDGPU::VReg_1RegClass))
        continue;

      // Copy into a 32-bit vector register.
      LLVM_DEBUG(dbgs() << "Lower copy from i1: " << MI);
      DebugLoc DL = MI.getDebugLoc();

      assert(TII->getRegisterInfo().getRegSizeInBits(DstReg, *MRI) == 32);
      assert(!MI.getOperand(0).getSubReg());

      ConstrainRegs.insert(SrcReg);
      BuildMI(MBB, MI, DL, TII->get(AMDGPU::V_CNDMASK_B32_e64), DstReg)
          .addImm(0)
          .addImm(0)
          .addImm(0)
          .addImm(-1)
          .addReg(SrcReg);
      DeadCopies.push_back(&MI);
    }

    for (MachineInstr *MI : DeadCopies)
      MI->eraseFromParent();
    DeadCopies.clear();
  }
}

void SILowerI1Copies::lowerPhis() {
|
|
|
|
MachineSSAUpdater SSAUpdater(*MF);
|
|
|
|
LoopFinder LF(*DT, *PDT);
|
|
|
|
PhiIncomingAnalysis PIA(*PDT);
|
|
|
|
SmallVector<MachineInstr *, 4> DeadPhis;
|
|
|
|
SmallVector<MachineBasicBlock *, 4> IncomingBlocks;
|
|
|
|
SmallVector<unsigned, 4> IncomingRegs;
|
|
|
|
SmallVector<unsigned, 4> IncomingUpdated;
|
2019-04-23 21:12:52 +08:00
|
|
|
#ifndef NDEBUG
|
|
|
|
DenseSet<unsigned> PhiRegisters;
|
|
|
|
#endif
|
AMDGPU: Rewrite SILowerI1Copies to always stay on SALU
Summary:
Instead of writing boolean values temporarily into 32-bit VGPRs
if they are involved in PHIs or are observed from outside a loop,
we use bitwise masking operations to combine lane masks in a way
that is consistent with wave control flow.
Move SIFixSGPRCopies to before this pass, since that pass
incorrectly attempts to move SGPR phis to VGPRs.
This should recover most of the code quality that was lost with
the bug fix in "AMDGPU: Remove PHI loop condition optimization".
There are still some relevant cases where code quality could be
improved, in particular:
- We often introduce redundant masks with EXEC. Ideally, we'd
have a generic computeKnownBits-like analysis to determine
whether masks are already masked by EXEC, so we can avoid this
masking both here and when lowering uniform control flow.
- The criterion we use to determine whether a def is observed
from outside a loop is conservative: it doesn't check whether
(loop) branch conditions are uniform.
Change-Id: Ibabdb373a7510e426b90deef00f5e16c5d56e64b
Reviewers: arsenm, rampitec, tpr
Subscribers: kzhuravl, jvesely, wdng, mgorny, yaxunl, dstuttard, t-tye, eraman, llvm-commits
Differential Revision: https://reviews.llvm.org/D53496
llvm-svn: 345719
2018-10-31 21:27:08 +08:00
|
|
|
|
|
|
|
for (MachineBasicBlock &MBB : *MF) {
|
|
|
|
LF.initialize(MBB);
|
|
|
|
|
|
|
|
for (MachineInstr &MI : MBB.phis()) {
|
|
|
|
unsigned DstReg = MI.getOperand(0).getReg();
|
|
|
|
if (MRI->getRegClass(DstReg) != &AMDGPU::VReg_1RegClass)
|
|
|
|
continue;
|
|
|
|
|
|
|
|
LLVM_DEBUG(dbgs() << "Lower PHI: " << MI);
|
|
|
|
|
2019-06-17 01:13:09 +08:00
|
|
|
MRI->setRegClass(DstReg, IsWave32 ? &AMDGPU::SReg_32RegClass
|
|
|
|
: &AMDGPU::SReg_64RegClass);
|
AMDGPU: Rewrite SILowerI1Copies to always stay on SALU
Summary:
Instead of writing boolean values temporarily into 32-bit VGPRs
if they are involved in PHIs or are observed from outside a loop,
we use bitwise masking operations to combine lane masks in a way
that is consistent with wave control flow.
Move SIFixSGPRCopies to before this pass, since that pass
incorrectly attempts to move SGPR phis to VGPRs.
This should recover most of the code quality that was lost with
the bug fix in "AMDGPU: Remove PHI loop condition optimization".
There are still some relevant cases where code quality could be
improved, in particular:
- We often introduce redundant masks with EXEC. Ideally, we'd
have a generic computeKnownBits-like analysis to determine
whether masks are already masked by EXEC, so we can avoid this
masking both here and when lowering uniform control flow.
- The criterion we use to determine whether a def is observed
from outside a loop is conservative: it doesn't check whether
(loop) branch conditions are uniform.
Change-Id: Ibabdb373a7510e426b90deef00f5e16c5d56e64b
Reviewers: arsenm, rampitec, tpr
Subscribers: kzhuravl, jvesely, wdng, mgorny, yaxunl, dstuttard, t-tye, eraman, llvm-commits
Differential Revision: https://reviews.llvm.org/D53496
llvm-svn: 345719
2018-10-31 21:27:08 +08:00
|
|
|
|
|
|
|
// Collect incoming values.
|
|
|
|
for (unsigned i = 1; i < MI.getNumOperands(); i += 2) {
|
|
|
|
assert(i + 1 < MI.getNumOperands());
|
|
|
|
unsigned IncomingReg = MI.getOperand(i).getReg();
|
|
|
|
MachineBasicBlock *IncomingMBB = MI.getOperand(i + 1).getMBB();
|
|
|
|
MachineInstr *IncomingDef = MRI->getUniqueVRegDef(IncomingReg);
|
|
|
|
|
|
|
|
if (IncomingDef->getOpcode() == AMDGPU::COPY) {
|
|
|
|
IncomingReg = IncomingDef->getOperand(1).getReg();
|
|
|
|
assert(isLaneMaskReg(IncomingReg));
|
|
|
|
assert(!IncomingDef->getOperand(1).getSubReg());
|
|
|
|
} else if (IncomingDef->getOpcode() == AMDGPU::IMPLICIT_DEF) {
|
|
|
|
continue;
|
|
|
|
} else {
|
2019-04-23 21:12:52 +08:00
|
|
|
assert(IncomingDef->isPHI() || PhiRegisters.count(IncomingReg));
|
AMDGPU: Rewrite SILowerI1Copies to always stay on SALU
Summary:
Instead of writing boolean values temporarily into 32-bit VGPRs
if they are involved in PHIs or are observed from outside a loop,
we use bitwise masking operations to combine lane masks in a way
that is consistent with wave control flow.
Move SIFixSGPRCopies to before this pass, since that pass
incorrectly attempts to move SGPR phis to VGPRs.
This should recover most of the code quality that was lost with
the bug fix in "AMDGPU: Remove PHI loop condition optimization".
There are still some relevant cases where code quality could be
improved, in particular:
- We often introduce redundant masks with EXEC. Ideally, we'd
have a generic computeKnownBits-like analysis to determine
whether masks are already masked by EXEC, so we can avoid this
masking both here and when lowering uniform control flow.
- The criterion we use to determine whether a def is observed
from outside a loop is conservative: it doesn't check whether
(loop) branch conditions are uniform.
Change-Id: Ibabdb373a7510e426b90deef00f5e16c5d56e64b
Reviewers: arsenm, rampitec, tpr
Subscribers: kzhuravl, jvesely, wdng, mgorny, yaxunl, dstuttard, t-tye, eraman, llvm-commits
Differential Revision: https://reviews.llvm.org/D53496
llvm-svn: 345719
2018-10-31 21:27:08 +08:00
|
|
|
}
|
|
|
|
|
|
|
|
IncomingBlocks.push_back(IncomingMBB);
|
|
|
|
IncomingRegs.push_back(IncomingReg);
|
|
|
|
}
|
|
|
|
|
2019-04-23 21:12:52 +08:00
|
|
|
#ifndef NDEBUG
|
|
|
|
PhiRegisters.insert(DstReg);
|
|
|
|
#endif

      // Phis in a loop that are observed outside the loop receive a simple but
      // conservatively correct treatment.
      MachineBasicBlock *PostDomBound = &MBB;
      for (MachineInstr &Use : MRI->use_instructions(DstReg)) {
        PostDomBound =
            PDT->findNearestCommonDominator(PostDomBound, Use.getParent());
      }

      unsigned FoundLoopLevel = LF.findLoop(PostDomBound);

      SSAUpdater.Initialize(DstReg);

      if (FoundLoopLevel) {
        LF.addLoopEntries(FoundLoopLevel, SSAUpdater, IncomingBlocks);

        for (unsigned i = 0; i < IncomingRegs.size(); ++i) {
          IncomingUpdated.push_back(createLaneMaskReg(*MF));
          SSAUpdater.AddAvailableValue(IncomingBlocks[i],
                                       IncomingUpdated.back());
        }

        for (unsigned i = 0; i < IncomingRegs.size(); ++i) {
          MachineBasicBlock &IMBB = *IncomingBlocks[i];
          buildMergeLaneMasks(
              IMBB, getSaluInsertionAtEnd(IMBB), {}, IncomingUpdated[i],
              SSAUpdater.GetValueInMiddleOfBlock(&IMBB), IncomingRegs[i]);
        }
      } else {
        // The phi is not observed from outside a loop. Use a more accurate
        // lowering.
        PIA.analyze(MBB, IncomingBlocks);

        for (MachineBasicBlock *MBB : PIA.predecessors())
          SSAUpdater.AddAvailableValue(MBB, insertUndefLaneMask(*MBB));

        for (unsigned i = 0; i < IncomingRegs.size(); ++i) {
          MachineBasicBlock &IMBB = *IncomingBlocks[i];
          if (PIA.isSource(IMBB)) {
            IncomingUpdated.push_back(0);
            SSAUpdater.AddAvailableValue(&IMBB, IncomingRegs[i]);
          } else {
            IncomingUpdated.push_back(createLaneMaskReg(*MF));
            SSAUpdater.AddAvailableValue(&IMBB, IncomingUpdated.back());
          }
        }

        for (unsigned i = 0; i < IncomingRegs.size(); ++i) {
          if (!IncomingUpdated[i])
            continue;

          MachineBasicBlock &IMBB = *IncomingBlocks[i];
          buildMergeLaneMasks(
              IMBB, getSaluInsertionAtEnd(IMBB), {}, IncomingUpdated[i],
              SSAUpdater.GetValueInMiddleOfBlock(&IMBB), IncomingRegs[i]);
        }
      }

      unsigned NewReg = SSAUpdater.GetValueInMiddleOfBlock(&MBB);
      if (NewReg != DstReg) {
        MRI->replaceRegWith(NewReg, DstReg);

        // Ensure that DstReg has a single def and mark the old PHI node for
        // deletion.
        MI.getOperand(0).setReg(NewReg);
        DeadPhis.push_back(&MI);
      }

      IncomingBlocks.clear();
      IncomingRegs.clear();
      IncomingUpdated.clear();
    }

    for (MachineInstr *MI : DeadPhis)
      MI->eraseFromParent();
    DeadPhis.clear();
  }
}
void SILowerI1Copies::lowerCopiesToI1() {
  MachineSSAUpdater SSAUpdater(*MF);
  LoopFinder LF(*DT, *PDT);
  SmallVector<MachineInstr *, 4> DeadCopies;

  for (MachineBasicBlock &MBB : *MF) {
    LF.initialize(MBB);

    for (MachineInstr &MI : MBB) {
      if (MI.getOpcode() != AMDGPU::IMPLICIT_DEF &&
          MI.getOpcode() != AMDGPU::COPY)
        continue;

      unsigned DstReg = MI.getOperand(0).getReg();
      if (!TargetRegisterInfo::isVirtualRegister(DstReg) ||
          MRI->getRegClass(DstReg) != &AMDGPU::VReg_1RegClass)
        continue;

      if (MRI->use_empty(DstReg)) {
        DeadCopies.push_back(&MI);
        continue;
      }

      LLVM_DEBUG(dbgs() << "Lower Other: " << MI);

      MRI->setRegClass(DstReg, IsWave32 ? &AMDGPU::SReg_32RegClass
                                        : &AMDGPU::SReg_64RegClass);
      if (MI.getOpcode() == AMDGPU::IMPLICIT_DEF)
        continue;

      DebugLoc DL = MI.getDebugLoc();
      unsigned SrcReg = MI.getOperand(1).getReg();
      assert(!MI.getOperand(1).getSubReg());

      if (!TargetRegisterInfo::isVirtualRegister(SrcReg) ||
          !isLaneMaskReg(SrcReg)) {
        assert(TII->getRegisterInfo().getRegSizeInBits(SrcReg, *MRI) == 32);
        unsigned TmpReg = createLaneMaskReg(*MF);
        BuildMI(MBB, MI, DL, TII->get(AMDGPU::V_CMP_NE_U32_e64), TmpReg)
            .addReg(SrcReg)
            .addImm(0);
        MI.getOperand(1).setReg(TmpReg);
        SrcReg = TmpReg;
      }

      // Defs in a loop that are observed outside the loop must be transformed
      // into appropriate bit manipulation.
      MachineBasicBlock *PostDomBound = &MBB;
      for (MachineInstr &Use : MRI->use_instructions(DstReg)) {
        PostDomBound =
            PDT->findNearestCommonDominator(PostDomBound, Use.getParent());
      }

      unsigned FoundLoopLevel = LF.findLoop(PostDomBound);
      if (FoundLoopLevel) {
        SSAUpdater.Initialize(DstReg);
        SSAUpdater.AddAvailableValue(&MBB, DstReg);
        LF.addLoopEntries(FoundLoopLevel, SSAUpdater);

        buildMergeLaneMasks(MBB, MI, DL, DstReg,
                            SSAUpdater.GetValueInMiddleOfBlock(&MBB), SrcReg);
        DeadCopies.push_back(&MI);
      }
    }

    for (MachineInstr *MI : DeadCopies)
      MI->eraseFromParent();
    DeadCopies.clear();
  }
}

bool SILowerI1Copies::isConstantLaneMask(unsigned Reg, bool &Val) const {
  const MachineInstr *MI;
  for (;;) {
    MI = MRI->getUniqueVRegDef(Reg);
    if (MI->getOpcode() != AMDGPU::COPY)
      break;

    Reg = MI->getOperand(1).getReg();
    if (!TargetRegisterInfo::isVirtualRegister(Reg))
      return false;
    if (!isLaneMaskReg(Reg))
      return false;
  }

  if (MI->getOpcode() != MovOp)
    return false;

  if (!MI->getOperand(1).isImm())
    return false;

  int64_t Imm = MI->getOperand(1).getImm();
  if (Imm == 0) {
    Val = false;
    return true;
  }
  if (Imm == -1) {
    Val = true;
    return true;
  }

  return false;
}

static void instrDefsUsesSCC(const MachineInstr &MI, bool &Def, bool &Use) {
  Def = false;
  Use = false;

  for (const MachineOperand &MO : MI.operands()) {
    if (MO.isReg() && MO.getReg() == AMDGPU::SCC) {
      if (MO.isUse())
        Use = true;
      else
        Def = true;
    }
  }
}

/// Return a point at the end of the given \p MBB to insert SALU instructions
/// for lane mask calculation. Take terminators and SCC into account.
MachineBasicBlock::iterator
SILowerI1Copies::getSaluInsertionAtEnd(MachineBasicBlock &MBB) const {
  auto InsertionPt = MBB.getFirstTerminator();
  bool TerminatorsUseSCC = false;
  for (auto I = InsertionPt, E = MBB.end(); I != E; ++I) {
    bool DefsSCC;
    instrDefsUsesSCC(*I, DefsSCC, TerminatorsUseSCC);
    if (TerminatorsUseSCC || DefsSCC)
      break;
  }

  if (!TerminatorsUseSCC)
    return InsertionPt;

  while (InsertionPt != MBB.begin()) {
    InsertionPt--;

    bool DefSCC, UseSCC;
    instrDefsUsesSCC(*InsertionPt, DefSCC, UseSCC);
    if (DefSCC)
      return InsertionPt;
  }

  // We should have at least seen an IMPLICIT_DEF or COPY
  llvm_unreachable("SCC used by terminator but no def in block");
}

void SILowerI1Copies::buildMergeLaneMasks(MachineBasicBlock &MBB,
                                          MachineBasicBlock::iterator I,
                                          const DebugLoc &DL, unsigned DstReg,
                                          unsigned PrevReg, unsigned CurReg) {
  bool PrevVal;
  bool PrevConstant = isConstantLaneMask(PrevReg, PrevVal);
  bool CurVal;
  bool CurConstant = isConstantLaneMask(CurReg, CurVal);

  if (PrevConstant && CurConstant) {
    if (PrevVal == CurVal) {
      BuildMI(MBB, I, DL, TII->get(AMDGPU::COPY), DstReg).addReg(CurReg);
    } else if (CurVal) {
      BuildMI(MBB, I, DL, TII->get(AMDGPU::COPY), DstReg).addReg(ExecReg);
    } else {
      BuildMI(MBB, I, DL, TII->get(XorOp), DstReg)
          .addReg(ExecReg)
          .addImm(-1);
    }
    return;
  }

  unsigned PrevMaskedReg = 0;
  unsigned CurMaskedReg = 0;
  if (!PrevConstant) {
    if (CurConstant && CurVal) {
      PrevMaskedReg = PrevReg;
    } else {
      PrevMaskedReg = createLaneMaskReg(*MF);
      BuildMI(MBB, I, DL, TII->get(AndN2Op), PrevMaskedReg)
          .addReg(PrevReg)
          .addReg(ExecReg);
    }
  }
  if (!CurConstant) {
    // TODO: check whether CurReg is already masked by EXEC
    if (PrevConstant && PrevVal) {
      CurMaskedReg = CurReg;
    } else {
      CurMaskedReg = createLaneMaskReg(*MF);
      BuildMI(MBB, I, DL, TII->get(AndOp), CurMaskedReg)
          .addReg(CurReg)
          .addReg(ExecReg);
    }
  }

  if (PrevConstant && !PrevVal) {
    BuildMI(MBB, I, DL, TII->get(AMDGPU::COPY), DstReg)
        .addReg(CurMaskedReg);
  } else if (CurConstant && !CurVal) {
    BuildMI(MBB, I, DL, TII->get(AMDGPU::COPY), DstReg)
        .addReg(PrevMaskedReg);
  } else if (PrevConstant && PrevVal) {
    BuildMI(MBB, I, DL, TII->get(OrN2Op), DstReg)
        .addReg(CurMaskedReg)
        .addReg(ExecReg);
  } else {
    BuildMI(MBB, I, DL, TII->get(OrOp), DstReg)
        .addReg(PrevMaskedReg)
        .addReg(CurMaskedReg ? CurMaskedReg : ExecReg);
Subscribers: kzhuravl, jvesely, wdng, mgorny, yaxunl, dstuttard, t-tye, eraman, llvm-commits
Differential Revision: https://reviews.llvm.org/D53496
llvm-svn: 345719
2018-10-31 21:27:08 +08:00
|
|
|
}
|
|
|
|
}
|