//===-- MIMGInstructions.td - MIMG Instruction Definitions ----------------===//
//
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
//
//===----------------------------------------------------------------------===//

// MIMG-specific encoding families to distinguish between semantically
// equivalent machine instructions with different encoding.
//
// - MIMGEncGfx6: encoding introduced with gfx6 (obsoleted for atomics in gfx8)
// - MIMGEncGfx8: encoding introduced with gfx8 for atomics
// - MIMGEncGfx10Default: gfx10 default (non-NSA) encoding
// - MIMGEncGfx10NSA: gfx10 NSA encoding
class MIMGEncoding;

def MIMGEncGfx6 : MIMGEncoding;
def MIMGEncGfx8 : MIMGEncoding;
def MIMGEncGfx10Default : MIMGEncoding;
def MIMGEncGfx10NSA : MIMGEncoding;

def MIMGEncoding : GenericEnum {
  let FilterClass = "MIMGEncoding";
}

// Represent an ISA-level opcode, independent of the encoding and the
// vdata/vaddr size.
class MIMGBaseOpcode {
  MIMGBaseOpcode BaseOpcode = !cast<MIMGBaseOpcode>(NAME);
  bit Store = 0;
  bit Atomic = 0;
  bit AtomicX2 = 0; // (f)cmpswap
  bit Sampler = 0;
  bit Gather4 = 0;
  bits<8> NumExtraArgs = 0;
  bit Gradients = 0;
  bit Coordinates = 1;
  bit LodOrClampOrMip = 0;
  bit HasD16 = 0;
}

def MIMGBaseOpcode : GenericEnum {
  let FilterClass = "MIMGBaseOpcode";
}

def MIMGBaseOpcodesTable : GenericTable {
  let FilterClass = "MIMGBaseOpcode";
  let CppTypeName = "MIMGBaseOpcodeInfo";
  let Fields = ["BaseOpcode", "Store", "Atomic", "AtomicX2", "Sampler", "Gather4",
                "NumExtraArgs", "Gradients", "Coordinates", "LodOrClampOrMip",
                "HasD16"];
  GenericEnum TypeOf_BaseOpcode = MIMGBaseOpcode;

  let PrimaryKey = ["BaseOpcode"];
  let PrimaryKeyName = "getMIMGBaseOpcodeInfo";
}
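
// For orientation: the searchable-tables backend turns the GenericTable above
// into a C++ record type and a lookup keyed on the primary key. A minimal
// sketch of the intended use on the C++ side (the exact generated/declared
// signature lives outside this file, so the form below is illustrative, not
// authoritative):
//
//   // Query the per-base-opcode metadata when lowering an image intrinsic.
//   const MIMGBaseOpcodeInfo *Info = getMIMGBaseOpcodeInfo(BaseOpcode);
//   if (Info && Info->Sampler) {
//     // lowering path that also needs an ssamp operand
//   }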

def MIMGDim : GenericEnum {
  let FilterClass = "AMDGPUDimProps";
}

def MIMGDimInfoTable : GenericTable {
  let FilterClass = "AMDGPUDimProps";
  let CppTypeName = "MIMGDimInfo";
  let Fields = ["Dim", "NumCoords", "NumGradients", "DA", "Encoding", "AsmSuffix"];
  GenericEnum TypeOf_Dim = MIMGDim;

  let PrimaryKey = ["Dim"];
  let PrimaryKeyName = "getMIMGDimInfo";
}

def getMIMGDimInfoByEncoding : SearchIndex {
  let Table = MIMGDimInfoTable;
  let Key = ["Encoding"];
}

def getMIMGDimInfoByAsmSuffix : SearchIndex {
  let Table = MIMGDimInfoTable;
  let Key = ["AsmSuffix"];
}

class mimg <bits<7> si_gfx10, bits<7> vi = si_gfx10> {
  field bits<7> SI_GFX10 = si_gfx10;
  field bits<7> VI = vi;
}
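
// The two template parameters carry the hardware opcode for the SI/GFX10 and
// VI encodings, which occasionally differ. Purely illustrative sketch of an
// instantiation for an atomic whose VI encoding is shifted by one (the opcode
// numbers here are assumptions, not taken from this file):
//
//   defm IMAGE_ATOMIC_SWAP : MIMG_Atomic <mimg<0x0f, 0x10>, "image_atomic_swap">;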

class MIMGLZMapping<MIMGBaseOpcode l, MIMGBaseOpcode lz> {
  MIMGBaseOpcode L = l;
  MIMGBaseOpcode LZ = lz;
}

def MIMGLZMappingTable : GenericTable {
  let FilterClass = "MIMGLZMapping";
  let CppTypeName = "MIMGLZMappingInfo";
  let Fields = ["L", "LZ"];
  GenericEnum TypeOf_L = MIMGBaseOpcode;
  GenericEnum TypeOf_LZ = MIMGBaseOpcode;

  let PrimaryKey = ["L"];
  let PrimaryKeyName = "getMIMGLZMappingInfo";
}
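
// The L->LZ table is consumed during instruction selection: when an image
// sample/gather intrinsic carries an lod that is known to be zero, the lod
// operand is dropped and the opcode is rewritten to the mapped _LZ variant.
// A hedged example of what an entry is expected to look like (the real opcode
// defs live further down, alongside the instruction definitions):
//
//   def : MIMGLZMapping<IMAGE_SAMPLE_L, IMAGE_SAMPLE_LZ>;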

class MIMGMIPMapping<MIMGBaseOpcode mip, MIMGBaseOpcode nonmip> {
  MIMGBaseOpcode MIP = mip;
  MIMGBaseOpcode NONMIP = nonmip;
}

def MIMGMIPMappingTable : GenericTable {
  let FilterClass = "MIMGMIPMapping";
  let CppTypeName = "MIMGMIPMappingInfo";
  let Fields = ["MIP", "NONMIP"];
  GenericEnum TypeOf_MIP = MIMGBaseOpcode;
  GenericEnum TypeOf_NONMIP = MIMGBaseOpcode;

  let PrimaryKey = ["MIP"];
  let PrimaryKeyName = "getMIMGMIPMappingInfo";
}
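
// Analogously to the LZ table, the MIP->non-MIP mapping lets the compiler
// replace image_load_mip/image_store_mip with plain image_load/image_store
// when the mip level is zero. Hedged example entry (actual entries are added
// next to the opcode definitions, not here):
//
//   def : MIMGMIPMapping<IMAGE_LOAD_MIP, IMAGE_LOAD>;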

class MIMG <dag outs, string dns = "">
  : InstSI <outs, (ins), "", []> {

  let VM_CNT = 1;
  let EXP_CNT = 1;
  let MIMG = 1;
  let Uses = [EXEC];
  let mayLoad = 1;
  let mayStore = 0;
  let hasPostISelHook = 1;
  let SchedRW = [WriteVMEM];
  let UseNamedOperandTable = 1;
  let hasSideEffects = 0; // XXX ????

  let DecoderNamespace = dns;
  let isAsmParserOnly = !if(!eq(dns,""), 1, 0);
  let AsmMatchConverter = "cvtMIMG";
  let usesCustomInserter = 1;

  Instruction Opcode = !cast<Instruction>(NAME);
  MIMGBaseOpcode BaseOpcode;
  MIMGEncoding MIMGEncoding;
  bits<8> VDataDwords;
  bits<8> VAddrDwords;
}

def MIMGInfoTable : GenericTable {
  let FilterClass = "MIMG";
  let CppTypeName = "MIMGInfo";
  let Fields = ["Opcode", "BaseOpcode", "MIMGEncoding", "VDataDwords", "VAddrDwords"];
  GenericEnum TypeOf_BaseOpcode = MIMGBaseOpcode;
  GenericEnum TypeOf_MIMGEncoding = MIMGEncoding;

  let PrimaryKey = ["BaseOpcode", "MIMGEncoding", "VDataDwords", "VAddrDwords"];
  let PrimaryKeyName = "getMIMGOpcodeHelper";
}

def getMIMGInfo : SearchIndex {
  let Table = MIMGInfoTable;
  let Key = ["Opcode"];
}
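
// Taken together, getMIMGInfo (keyed on the machine opcode) and
// getMIMGOpcodeHelper (keyed on the tuple above) let C++ code swap an MIMG
// instruction for the semantically equivalent variant with a different
// vdata/vaddr size. Rough sketch of the intended pattern (the surrounding C++
// wrapper is an assumption for illustration, not defined in this file):
//
//   const MIMGInfo *Info = getMIMGInfo(MI.getOpcode());
//   const MIMGInfo *New = getMIMGOpcodeHelper(Info->BaseOpcode,
//                                             Info->MIMGEncoding,
//                                             NewVDataDwords,
//                                             Info->VAddrDwords);
//   if (New)
//     MI.setDesc(TII->get(New->Opcode));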

// This is a separate class so that TableGen memoizes the computations.
class MIMGNSAHelper<int num_addrs> {
  list<string> AddrAsmNames =
    !foldl([]<string>, [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], lhs, i,
      !if(!lt(i, num_addrs), !listconcat(lhs, ["vaddr"#!size(lhs)]), lhs));
  dag AddrIns = !dag(ins, !foreach(arg, AddrAsmNames, VGPR_32), AddrAsmNames);
  string AddrAsm = "[" # !foldl("$" # !head(AddrAsmNames), !tail(AddrAsmNames), lhs, rhs,
                                lhs # ", $" # rhs) # "]";

  int NSA = !if(!le(num_addrs, 1), ?,
            !if(!le(num_addrs, 5), 1,
            !if(!le(num_addrs, 9), 2,
            !if(!le(num_addrs, 13), 3, ?))));
}
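
// Worked example (derived from the definitions above): MIMGNSAHelper<7> yields
//   AddrAsmNames = ["vaddr0", ..., "vaddr6"]
//   AddrIns      = (ins VGPR_32:$vaddr0, ..., VGPR_32:$vaddr6)
//   AddrAsm      = "[$vaddr0, $vaddr1, $vaddr2, $vaddr3, $vaddr4, $vaddr5, $vaddr6]"
//   NSA          = 2   // 6..9 address registers fit in two extra NSA dwords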

// Base class of all pre-gfx10 MIMG instructions.
class MIMG_gfx6789<bits<7> op, dag outs, string dns = "">
  : MIMG<outs, dns>, MIMGe_gfx6789<op> {
  let SubtargetPredicate = isGFX6GFX7GFX8GFX9;
  let AssemblerPredicates = [isGFX6GFX7GFX8GFX9];

  let MIMGEncoding = MIMGEncGfx6;

  let d16 = !if(BaseOpcode.HasD16, ?, 0);
}

// Base class of all non-NSA gfx10 MIMG instructions.
class MIMG_gfx10<int op, dag outs, string dns = "">
  : MIMG<outs, dns>, MIMGe_gfx10<op> {
  let SubtargetPredicate = isGFX10Plus;
  let AssemblerPredicates = [isGFX10Plus];

  let MIMGEncoding = MIMGEncGfx10Default;

  let d16 = !if(BaseOpcode.HasD16, ?, 0);
  let nsa = 0;
}

// Base class for all NSA MIMG instructions. Note that 1-dword addresses always
// use non-NSA variants.
class MIMG_nsa_gfx10<int op, dag outs, int num_addrs, string dns="">
  : MIMG<outs, dns>, MIMGe_gfx10<op> {
  let SubtargetPredicate = isGFX10Plus;
  let AssemblerPredicates = [isGFX10Plus];

  let MIMGEncoding = MIMGEncGfx10NSA;

  MIMGNSAHelper nsah = MIMGNSAHelper<num_addrs>;
  dag AddrIns = nsah.AddrIns;
  string AddrAsm = nsah.AddrAsm;

  let d16 = !if(BaseOpcode.HasD16, ?, 0);
  let nsa = nsah.NSA;
}

class MIMG_NoSampler_Helper <bits<7> op, string asm,
                             RegisterClass dst_rc,
                             RegisterClass addr_rc,
                             string dns="">
  : MIMG_gfx6789 <op, (outs dst_rc:$vdata), dns> {
  let InOperandList = !con((ins addr_rc:$vaddr, SReg_256:$srsrc,
                                DMask:$dmask, UNorm:$unorm, GLC:$glc, SLC:$slc,
                                R128A16:$r128, TFE:$tfe, LWE:$lwe, DA:$da),
                           !if(BaseOpcode.HasD16, (ins D16:$d16), (ins)));
  let AsmString = asm#" $vdata, $vaddr, $srsrc$dmask$unorm$glc$slc$r128$tfe$lwe$da"
                      #!if(BaseOpcode.HasD16, "$d16", "");
}

class MIMG_NoSampler_gfx10<int op, string opcode,
                           RegisterClass DataRC, RegisterClass AddrRC,
                           string dns="">
  : MIMG_gfx10<op, (outs DataRC:$vdata), dns> {
  let InOperandList = !con((ins AddrRC:$vaddr0, SReg_256:$srsrc, DMask:$dmask,
                                Dim:$dim, UNorm:$unorm, DLC:$dlc, GLC:$glc,
                                SLC:$slc, R128A16:$r128, TFE:$tfe, LWE:$lwe),
                           !if(BaseOpcode.HasD16, (ins D16:$d16), (ins)));
  let AsmString = opcode#" $vdata, $vaddr0, $srsrc$dmask$dim$unorm$dlc$glc$slc$r128$tfe$lwe"
                    #!if(BaseOpcode.HasD16, "$d16", "");
}

class MIMG_NoSampler_nsa_gfx10<int op, string opcode,
                               RegisterClass DataRC, int num_addrs,
                               string dns="">
  : MIMG_nsa_gfx10<op, (outs DataRC:$vdata), num_addrs, dns> {
  let InOperandList = !con(AddrIns,
                           (ins SReg_256:$srsrc, DMask:$dmask,
                                Dim:$dim, UNorm:$unorm, DLC:$dlc, GLC:$glc,
                                SLC:$slc, R128A16:$r128, TFE:$tfe, LWE:$lwe),
                           !if(BaseOpcode.HasD16, (ins D16:$d16), (ins)));
  let AsmString = opcode#" $vdata, "#AddrAsm#", $srsrc$dmask$dim$unorm$dlc$glc$slc$r128$tfe$lwe"
                    #!if(BaseOpcode.HasD16, "$d16", "");
}

multiclass MIMG_NoSampler_Src_Helper <bits<7> op, string asm,
                                      RegisterClass dst_rc,
                                      bit enableDisasm> {
  let ssamp = 0 in {
    let VAddrDwords = 1 in {
      def _V1 : MIMG_NoSampler_Helper <op, asm, dst_rc, VGPR_32,
                                       !if(enableDisasm, "AMDGPU", "")>;
      def _V1_gfx10 : MIMG_NoSampler_gfx10<op, asm, dst_rc, VGPR_32,
                                           !if(enableDisasm, "AMDGPU", "")>;
    }

    let VAddrDwords = 2 in {
      def _V2 : MIMG_NoSampler_Helper <op, asm, dst_rc, VReg_64>;
      def _V2_gfx10 : MIMG_NoSampler_gfx10<op, asm, dst_rc, VReg_64>;
      def _V2_nsa_gfx10 : MIMG_NoSampler_nsa_gfx10<op, asm, dst_rc, 2>;
    }

    let VAddrDwords = 3 in {
      def _V3 : MIMG_NoSampler_Helper <op, asm, dst_rc, VReg_96>;
      def _V3_gfx10 : MIMG_NoSampler_gfx10<op, asm, dst_rc, VReg_96>;
      def _V3_nsa_gfx10 : MIMG_NoSampler_nsa_gfx10<op, asm, dst_rc, 3>;
    }

    let VAddrDwords = 4 in {
      def _V4 : MIMG_NoSampler_Helper <op, asm, dst_rc, VReg_128>;
      def _V4_gfx10 : MIMG_NoSampler_gfx10<op, asm, dst_rc, VReg_128>;
      def _V4_nsa_gfx10 : MIMG_NoSampler_nsa_gfx10<op, asm, dst_rc, 4,
                                                   !if(enableDisasm, "AMDGPU", "")>;
    }
  }
}

multiclass MIMG_NoSampler <bits<7> op, string asm, bit has_d16, bit mip = 0,
                           bit isResInfo = 0> {
  def "" : MIMGBaseOpcode {
    let Coordinates = !if(isResInfo, 0, 1);
    let LodOrClampOrMip = mip;
    let HasD16 = has_d16;
  }

  let BaseOpcode = !cast<MIMGBaseOpcode>(NAME),
      mayLoad = !if(isResInfo, 0, 1) in {
    let VDataDwords = 1 in
    defm _V1 : MIMG_NoSampler_Src_Helper <op, asm, VGPR_32, 1>;
    let VDataDwords = 2 in
    defm _V2 : MIMG_NoSampler_Src_Helper <op, asm, VReg_64, 0>;
    let VDataDwords = 3 in
    defm _V3 : MIMG_NoSampler_Src_Helper <op, asm, VReg_96, 0>;
    let VDataDwords = 4 in
    defm _V4 : MIMG_NoSampler_Src_Helper <op, asm, VReg_128, 0>;
    let VDataDwords = 5 in
    defm _V5 : MIMG_NoSampler_Src_Helper <op, asm, VReg_160, 0>;
  }
}
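
// Each MIMG_NoSampler instantiation therefore fans out into the full
// VDataDwords x VAddrDwords matrix of machine instructions. Hedged example of
// how an opcode is expected to be declared (the actual opcode list belongs in
// the "MIMG Instructions" section at the end of this file; the numeric opcode
// shown here is illustrative):
//
//   defm IMAGE_LOAD : MIMG_NoSampler <0x00000000, "image_load", 1>;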

class MIMG_Store_Helper <bits<7> op, string asm,
                         RegisterClass data_rc,
                         RegisterClass addr_rc,
                         string dns = "">
  : MIMG_gfx6789<op, (outs), dns> {
  let InOperandList = !con((ins data_rc:$vdata, addr_rc:$vaddr, SReg_256:$srsrc,
                                DMask:$dmask, UNorm:$unorm, GLC:$glc, SLC:$slc,
                                R128A16:$r128, TFE:$tfe, LWE:$lwe, DA:$da),
                           !if(BaseOpcode.HasD16, (ins D16:$d16), (ins)));
  let AsmString = asm#" $vdata, $vaddr, $srsrc$dmask$unorm$glc$slc$r128$tfe$lwe$da"
                      #!if(BaseOpcode.HasD16, "$d16", "");
}

class MIMG_Store_gfx10<int op, string opcode,
                       RegisterClass DataRC, RegisterClass AddrRC,
                       string dns="">
  : MIMG_gfx10<op, (outs), dns> {
  let InOperandList = !con((ins DataRC:$vdata, AddrRC:$vaddr0, SReg_256:$srsrc,
                                DMask:$dmask, Dim:$dim, UNorm:$unorm, DLC:$dlc,
                                GLC:$glc, SLC:$slc, R128A16:$r128, TFE:$tfe, LWE:$lwe),
                           !if(BaseOpcode.HasD16, (ins D16:$d16), (ins)));
  let AsmString = opcode#" $vdata, $vaddr0, $srsrc$dmask$dim$unorm$dlc$glc$slc$r128$tfe$lwe"
                    #!if(BaseOpcode.HasD16, "$d16", "");
}

class MIMG_Store_nsa_gfx10<int op, string opcode,
                           RegisterClass DataRC, int num_addrs,
                           string dns="">
  : MIMG_nsa_gfx10<op, (outs), num_addrs, dns> {
  let InOperandList = !con((ins DataRC:$vdata),
                           AddrIns,
                           (ins SReg_256:$srsrc, DMask:$dmask,
                                Dim:$dim, UNorm:$unorm, DLC:$dlc, GLC:$glc,
                                SLC:$slc, R128A16:$r128, TFE:$tfe, LWE:$lwe),
                           !if(BaseOpcode.HasD16, (ins D16:$d16), (ins)));
  let AsmString = opcode#" $vdata, "#AddrAsm#", $srsrc$dmask$dim$unorm$dlc$glc$slc$r128$tfe$lwe"
                    #!if(BaseOpcode.HasD16, "$d16", "");
}

multiclass MIMG_Store_Addr_Helper <int op, string asm,
                                   RegisterClass data_rc,
                                   bit enableDisasm> {
  let mayLoad = 0, mayStore = 1, hasSideEffects = 0, hasPostISelHook = 0,
      DisableWQM = 1, ssamp = 0 in {
    let VAddrDwords = 1 in {
      def _V1 : MIMG_Store_Helper <op, asm, data_rc, VGPR_32,
                                   !if(enableDisasm, "AMDGPU", "")>;
      def _V1_gfx10 : MIMG_Store_gfx10 <op, asm, data_rc, VGPR_32,
                                        !if(enableDisasm, "AMDGPU", "")>;
    }
    let VAddrDwords = 2 in {
      def _V2 : MIMG_Store_Helper <op, asm, data_rc, VReg_64>;
      def _V2_gfx10 : MIMG_Store_gfx10 <op, asm, data_rc, VReg_64>;
      def _V2_nsa_gfx10 : MIMG_Store_nsa_gfx10 <op, asm, data_rc, 2>;
    }
    let VAddrDwords = 3 in {
      def _V3 : MIMG_Store_Helper <op, asm, data_rc, VReg_96>;
      def _V3_gfx10 : MIMG_Store_gfx10 <op, asm, data_rc, VReg_96>;
      def _V3_nsa_gfx10 : MIMG_Store_nsa_gfx10 <op, asm, data_rc, 3>;
    }
    let VAddrDwords = 4 in {
      def _V4 : MIMG_Store_Helper <op, asm, data_rc, VReg_128>;
      def _V4_gfx10 : MIMG_Store_gfx10 <op, asm, data_rc, VReg_128>;
      def _V4_nsa_gfx10 : MIMG_Store_nsa_gfx10 <op, asm, data_rc, 4,
                                                 !if(enableDisasm, "AMDGPU", "")>;
    }
  }
}

multiclass MIMG_Store <bits<7> op, string asm, bit has_d16, bit mip = 0> {
  def "" : MIMGBaseOpcode {
    let Store = 1;
    let LodOrClampOrMip = mip;
    let HasD16 = has_d16;
  }

  let BaseOpcode = !cast<MIMGBaseOpcode>(NAME) in {
    let VDataDwords = 1 in
    defm _V1 : MIMG_Store_Addr_Helper <op, asm, VGPR_32, 1>;
    let VDataDwords = 2 in
    defm _V2 : MIMG_Store_Addr_Helper <op, asm, VReg_64, 0>;
    let VDataDwords = 3 in
    defm _V3 : MIMG_Store_Addr_Helper <op, asm, VReg_96, 0>;
    let VDataDwords = 4 in
    defm _V4 : MIMG_Store_Addr_Helper <op, asm, VReg_128, 0>;
  }
}

class MIMG_Atomic_gfx6789_base <bits<7> op, string asm, RegisterClass data_rc,
                                RegisterClass addr_rc, string dns="">
  : MIMG_gfx6789 <op, (outs data_rc:$vdst), dns> {
  let Constraints = "$vdst = $vdata";
  let AsmMatchConverter = "cvtMIMGAtomic";

  let InOperandList = (ins data_rc:$vdata, addr_rc:$vaddr, SReg_256:$srsrc,
                           DMask:$dmask, UNorm:$unorm, GLC:$glc, SLC:$slc,
                           R128A16:$r128, TFE:$tfe, LWE:$lwe, DA:$da);
  let AsmString = asm#" $vdst, $vaddr, $srsrc$dmask$unorm$glc$slc$r128$tfe$lwe$da";
}

class MIMG_Atomic_si<mimg op, string asm, RegisterClass data_rc,
                     RegisterClass addr_rc, bit enableDasm = 0>
  : MIMG_Atomic_gfx6789_base<op.SI_GFX10, asm, data_rc, addr_rc,
                             !if(enableDasm, "GFX6GFX7", "")> {
  let AssemblerPredicates = [isGFX6GFX7];
}

class MIMG_Atomic_vi<mimg op, string asm, RegisterClass data_rc,
                     RegisterClass addr_rc, bit enableDasm = 0>
  : MIMG_Atomic_gfx6789_base<op.VI, asm, data_rc, addr_rc, !if(enableDasm, "GFX8", "")> {
  let AssemblerPredicates = [isGFX8GFX9];
  let MIMGEncoding = MIMGEncGfx8;
}

class MIMG_Atomic_gfx10<mimg op, string opcode,
                        RegisterClass DataRC, RegisterClass AddrRC,
                        bit enableDisasm = 0>
  : MIMG_gfx10<!cast<int>(op.SI_GFX10), (outs DataRC:$vdst),
               !if(enableDisasm, "AMDGPU", "")> {
  let Constraints = "$vdst = $vdata";
  let AsmMatchConverter = "cvtMIMGAtomic";

  let InOperandList = (ins DataRC:$vdata, AddrRC:$vaddr0, SReg_256:$srsrc,
                           DMask:$dmask, Dim:$dim, UNorm:$unorm, DLC:$dlc,
                           GLC:$glc, SLC:$slc, R128A16:$r128, TFE:$tfe, LWE:$lwe);
  let AsmString = opcode#" $vdst, $vaddr0, $srsrc$dmask$dim$unorm$dlc$glc$slc$r128$tfe$lwe";
}

class MIMG_Atomic_nsa_gfx10<mimg op, string opcode,
                            RegisterClass DataRC, int num_addrs,
                            bit enableDisasm = 0>
  : MIMG_nsa_gfx10<!cast<int>(op.SI_GFX10), (outs DataRC:$vdst), num_addrs,
                   !if(enableDisasm, "AMDGPU", "")> {
  let Constraints = "$vdst = $vdata";
  let AsmMatchConverter = "cvtMIMGAtomic";

  let InOperandList = !con((ins DataRC:$vdata),
                           AddrIns,
                           (ins SReg_256:$srsrc, DMask:$dmask,
                                Dim:$dim, UNorm:$unorm, DLC:$dlc, GLC:$glc,
                                SLC:$slc, R128A16:$r128, TFE:$tfe, LWE:$lwe));
  let AsmString = opcode#" $vdata, "#AddrAsm#", $srsrc$dmask$dim$unorm$dlc$glc$slc$r128$tfe$lwe";
}

multiclass MIMG_Atomic_Addr_Helper_m <mimg op, string asm,
                                      RegisterClass data_rc,
                                      bit enableDasm = 0> {
  let hasSideEffects = 1, // FIXME: remove this
      mayLoad = 1, mayStore = 1, hasPostISelHook = 0, DisableWQM = 1,
      ssamp = 0 in {
    let VAddrDwords = 1 in {
      def _V1_si : MIMG_Atomic_si <op, asm, data_rc, VGPR_32, enableDasm>;
      def _V1_vi : MIMG_Atomic_vi <op, asm, data_rc, VGPR_32, enableDasm>;
      def _V1_gfx10 : MIMG_Atomic_gfx10 <op, asm, data_rc, VGPR_32, enableDasm>;
    }
    let VAddrDwords = 2 in {
      def _V2_si : MIMG_Atomic_si <op, asm, data_rc, VReg_64, 0>;
      def _V2_vi : MIMG_Atomic_vi <op, asm, data_rc, VReg_64, 0>;
      def _V2_gfx10 : MIMG_Atomic_gfx10 <op, asm, data_rc, VReg_64, 0>;
      def _V2_nsa_gfx10 : MIMG_Atomic_nsa_gfx10 <op, asm, data_rc, 2, 0>;
    }
    let VAddrDwords = 3 in {
      def _V3_si : MIMG_Atomic_si <op, asm, data_rc, VReg_96, 0>;
      def _V3_vi : MIMG_Atomic_vi <op, asm, data_rc, VReg_96, 0>;
      def _V3_gfx10 : MIMG_Atomic_gfx10 <op, asm, data_rc, VReg_96, 0>;
      def _V3_nsa_gfx10 : MIMG_Atomic_nsa_gfx10 <op, asm, data_rc, 3, 0>;
    }
    let VAddrDwords = 4 in {
      def _V4_si : MIMG_Atomic_si <op, asm, data_rc, VReg_128, 0>;
      def _V4_vi : MIMG_Atomic_vi <op, asm, data_rc, VReg_128, 0>;
      def _V4_gfx10 : MIMG_Atomic_gfx10 <op, asm, data_rc, VReg_128, 0>;
      def _V4_nsa_gfx10 : MIMG_Atomic_nsa_gfx10 <op, asm, data_rc, 4, enableDasm>;
    }
  }
}

multiclass MIMG_Atomic <mimg op, string asm, bit isCmpSwap = 0> { // 64-bit atomics
  def "" : MIMGBaseOpcode {
    let Atomic = 1;
    let AtomicX2 = isCmpSwap;
  }

  let BaseOpcode = !cast<MIMGBaseOpcode>(NAME) in {
    // _V* variants have different dst size, but the size is encoded implicitly,
    // using dmask and tfe. Only the 32-bit variant is registered with the
    // disassembler; the other variants are reconstructed by the disassembler
    // using dmask and tfe.
    let VDataDwords = !if(isCmpSwap, 2, 1) in
    defm _V1 : MIMG_Atomic_Addr_Helper_m <op, asm, !if(isCmpSwap, VReg_64, VGPR_32), 1>;
    let VDataDwords = !if(isCmpSwap, 4, 2) in
    defm _V2 : MIMG_Atomic_Addr_Helper_m <op, asm, !if(isCmpSwap, VReg_128, VReg_64)>;
  }
}

class MIMG_Sampler_Helper <bits<7> op, string asm, RegisterClass dst_rc,
                           RegisterClass src_rc, string dns="">
  : MIMG_gfx6789 <op, (outs dst_rc:$vdata), dns> {
  let InOperandList = !con((ins src_rc:$vaddr, SReg_256:$srsrc, SReg_128:$ssamp,
                                DMask:$dmask, UNorm:$unorm, GLC:$glc, SLC:$slc,
                                R128A16:$r128, TFE:$tfe, LWE:$lwe, DA:$da),
                           !if(BaseOpcode.HasD16, (ins D16:$d16), (ins)));
  let AsmString = asm#" $vdata, $vaddr, $srsrc, $ssamp$dmask$unorm$glc$slc$r128$tfe$lwe$da"
                      #!if(BaseOpcode.HasD16, "$d16", "");
}

class MIMG_Sampler_gfx10<int op, string opcode,
                         RegisterClass DataRC, RegisterClass AddrRC,
                         string dns="">
  : MIMG_gfx10<op, (outs DataRC:$vdata), dns> {
  let InOperandList = !con((ins AddrRC:$vaddr0, SReg_256:$srsrc, SReg_128:$ssamp,
                                DMask:$dmask, Dim:$dim, UNorm:$unorm, DLC:$dlc,
                                GLC:$glc, SLC:$slc, R128A16:$r128, TFE:$tfe, LWE:$lwe),
                           !if(BaseOpcode.HasD16, (ins D16:$d16), (ins)));
  let AsmString = opcode#" $vdata, $vaddr0, $srsrc, $ssamp$dmask$dim$unorm"
                    #"$dlc$glc$slc$r128$tfe$lwe"
                    #!if(BaseOpcode.HasD16, "$d16", "");
}

class MIMG_Sampler_nsa_gfx10<int op, string opcode,
                             RegisterClass DataRC, int num_addrs,
                             string dns="">
  : MIMG_nsa_gfx10<op, (outs DataRC:$vdata), num_addrs, dns> {
  let InOperandList = !con(AddrIns,
                           (ins SReg_256:$srsrc, SReg_128:$ssamp, DMask:$dmask,
                                Dim:$dim, UNorm:$unorm, DLC:$dlc, GLC:$glc,
                                SLC:$slc, R128A16:$r128, TFE:$tfe, LWE:$lwe),
                           !if(BaseOpcode.HasD16, (ins D16:$d16), (ins)));
  let AsmString = opcode#" $vdata, "#AddrAsm#", $srsrc, $ssamp$dmask$dim$unorm"
                    #"$dlc$glc$slc$r128$tfe$lwe"
                    #!if(BaseOpcode.HasD16, "$d16", "");
}

class MIMGAddrSize<int dw, bit enable_disasm> {
  int NumWords = dw;

  RegisterClass RegClass = !if(!le(NumWords, 0), ?,
                           !if(!eq(NumWords, 1), VGPR_32,
                           !if(!eq(NumWords, 2), VReg_64,
                           !if(!eq(NumWords, 3), VReg_96,
                           !if(!eq(NumWords, 4), VReg_128,
                           !if(!le(NumWords, 8), VReg_256,
                           !if(!le(NumWords, 16), VReg_512, ?)))))));

  // Whether the instruction variant with this vaddr size should be enabled for
  // the auto-generated disassembler.
  bit Disassemble = enable_disasm;
}

// Return whether x is in lst.
class isIntInList<int x, list<int> lst> {
  bit ret = !foldl(0, lst, lhs, y, !or(lhs, !eq(x, y)));
}

// Return whether a value inside the range [min, max] (endpoints inclusive)
// is in the given list.
class isRangeInList<int min, int max, list<int> lst> {
  bit ret = !foldl(0, lst, lhs, y, !or(lhs, !and(!le(min, y), !le(y, max))));
}

class MIMGAddrSizes_tmp<list<MIMGAddrSize> lst, int min> {
  list<MIMGAddrSize> List = lst;
  int Min = min;
}

class MIMG_Sampler_AddrSizes<AMDGPUSampleVariant sample> {
  // List of all possible numbers of address words, taking all combinations of
  // A16 and image dimension into account (note: no MSAA, since this is for
  // sample/gather ops).
  list<int> AllNumAddrWords =
    !foreach(dw, !if(sample.Gradients,
                     !if(!eq(sample.LodOrClamp, ""),
                         [2, 3, 4, 5, 6, 7, 9],
                         [2, 3, 4, 5, 7, 8, 10]),
                     !if(!eq(sample.LodOrClamp, ""),
                         [1, 2, 3],
                         [1, 2, 3, 4])),
             !add(dw, !size(sample.ExtraAddrArgs)));

  // Generate machine instructions based on possible register classes for the
  // required numbers of address words. The disassembler defaults to the
  // smallest register class.
  list<MIMGAddrSize> MachineInstrs =
    !foldl(MIMGAddrSizes_tmp<[], 0>, [1, 2, 3, 4, 8, 16], lhs, dw,
           !if(isRangeInList<lhs.Min, dw, AllNumAddrWords>.ret,
               MIMGAddrSizes_tmp<
                 !listconcat(lhs.List, [MIMGAddrSize<dw, !empty(lhs.List)>]),
                 !if(!eq(dw, 3), 3, !add(dw, 1))>, // we still need _V4 for codegen w/ 3 dwords
               lhs)).List;

  // For NSA, generate machine instructions for all possible numbers of words
  // except 1 (which is already covered by the non-NSA case).
  // The disassembler defaults to the largest number of arguments among the
  // variants with the same number of NSA words, and custom code then derives
  // the exact variant based on the sample variant and the image dimension.
  list<MIMGAddrSize> NSAInstrs =
    !foldl([]<MIMGAddrSize>, [[12, 11, 10], [9, 8, 7, 6], [5, 4, 3, 2]], prev, nsa_group,
           !listconcat(prev,
                       !foldl([]<MIMGAddrSize>, nsa_group, lhs, dw,
                              !if(isIntInList<dw, AllNumAddrWords>.ret,
                                  !listconcat(lhs, [MIMGAddrSize<dw, !empty(lhs)>]),
                                  lhs))));
}
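
// Worked example (derived from the definitions above) for a plain sample
// variant with no gradients, no lod/clamp and no extra address arguments:
//   AllNumAddrWords = [1, 2, 3]
//   MachineInstrs   = [V1, V2, V3, V4]   // V4 kept for codegen with 3 dwords
//   NSAInstrs       = [V3, V2]           // 1-dword addresses stay non-NSA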

multiclass MIMG_Sampler_Src_Helper <bits<7> op, string asm,
                                    AMDGPUSampleVariant sample, RegisterClass dst_rc,
                                    bit enableDisasm = 0> {
  foreach addr = MIMG_Sampler_AddrSizes<sample>.MachineInstrs in {
    let VAddrDwords = addr.NumWords in {
      def _V # addr.NumWords
        : MIMG_Sampler_Helper <op, asm, dst_rc, addr.RegClass,
                               !if(!and(enableDisasm, addr.Disassemble), "AMDGPU", "")>;
      def _V # addr.NumWords # _gfx10
        : MIMG_Sampler_gfx10 <op, asm, dst_rc, addr.RegClass,
                              !if(!and(enableDisasm, addr.Disassemble), "AMDGPU", "")>;
    }
  }

  foreach addr = MIMG_Sampler_AddrSizes<sample>.NSAInstrs in {
    let VAddrDwords = addr.NumWords in {
      def _V # addr.NumWords # _nsa_gfx10
        : MIMG_Sampler_nsa_gfx10<op, asm, dst_rc, addr.NumWords,
                                 !if(!and(enableDisasm, addr.Disassemble), "AMDGPU", "")>;
    }
  }
}

class MIMG_Sampler_BaseOpcode<AMDGPUSampleVariant sample>
  : MIMGBaseOpcode {
  let Sampler = 1;
  let NumExtraArgs = !size(sample.ExtraAddrArgs);
  let Gradients = sample.Gradients;
  let LodOrClampOrMip = !ne(sample.LodOrClamp, "");
}

multiclass MIMG_Sampler <bits<7> op, AMDGPUSampleVariant sample, bit wqm = 0,
                         bit isGetLod = 0,
                         string asm = "image_sample"#sample.LowerCaseMod> {
  def "" : MIMG_Sampler_BaseOpcode<sample> {
    let HasD16 = !if(isGetLod, 0, 1);
  }

  let BaseOpcode = !cast<MIMGBaseOpcode>(NAME), WQM = wqm,
      mayLoad = !if(isGetLod, 0, 1) in {
    let VDataDwords = 1 in
    defm _V1 : MIMG_Sampler_Src_Helper<op, asm, sample, VGPR_32, 1>;
    let VDataDwords = 2 in
    defm _V2 : MIMG_Sampler_Src_Helper<op, asm, sample, VReg_64>;
    let VDataDwords = 3 in
    defm _V3 : MIMG_Sampler_Src_Helper<op, asm, sample, VReg_96>;
    let VDataDwords = 4 in
    defm _V4 : MIMG_Sampler_Src_Helper<op, asm, sample, VReg_128>;
    let VDataDwords = 5 in
    defm _V5 : MIMG_Sampler_Src_Helper<op, asm, sample, VReg_160>;
  }
}

multiclass MIMG_Sampler_WQM <bits<7> op, AMDGPUSampleVariant sample>
  : MIMG_Sampler<op, sample, 1>;
|
2018-01-19 06:08:53 +08:00
|
|
|
|
2018-06-21 21:36:13 +08:00
|
|
|
multiclass MIMG_Gather <bits<7> op, AMDGPUSampleVariant sample, bit wqm = 0,
                        string asm = "image_gather4"#sample.LowerCaseMod> {
  def "" : MIMG_Sampler_BaseOpcode<sample> {
    let HasD16 = 1;
    let Gather4 = 1;
  }

  let BaseOpcode = !cast<MIMGBaseOpcode>(NAME), WQM = wqm,
      Gather4 = 1, hasPostISelHook = 0 in {
    let VDataDwords = 2 in
    defm _V2 : MIMG_Sampler_Src_Helper<op, asm, sample, VReg_64>; /* for packed D16 only */
    let VDataDwords = 4 in
    defm _V4 : MIMG_Sampler_Src_Helper<op, asm, sample, VReg_128, 1>;
    let VDataDwords = 5 in
    defm _V5 : MIMG_Sampler_Src_Helper<op, asm, sample, VReg_160>;
  }
}

multiclass MIMG_Gather_WQM <bits<7> op, AMDGPUSampleVariant sample>
  : MIMG_Gather<op, sample, 1>;

//===----------------------------------------------------------------------===//
// MIMG Instructions
//===----------------------------------------------------------------------===//

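// Note: the trailing MIMG_NoSampler arguments select (roughly) D16 support,
// the MIP flavor, and the resinfo flavor; see the multiclass definition
// earlier in this file for the authoritative parameter list.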
defm IMAGE_LOAD : MIMG_NoSampler <0x00000000, "image_load", 1>;
defm IMAGE_LOAD_MIP : MIMG_NoSampler <0x00000001, "image_load_mip", 1, 1>;
defm IMAGE_LOAD_PCK : MIMG_NoSampler <0x00000002, "image_load_pck", 0>;
defm IMAGE_LOAD_PCK_SGN : MIMG_NoSampler <0x00000003, "image_load_pck_sgn", 0>;
defm IMAGE_LOAD_MIP_PCK : MIMG_NoSampler <0x00000004, "image_load_mip_pck", 0, 1>;
defm IMAGE_LOAD_MIP_PCK_SGN : MIMG_NoSampler <0x00000005, "image_load_mip_pck_sgn", 0, 1>;

defm IMAGE_STORE : MIMG_Store <0x00000008, "image_store", 1>;
defm IMAGE_STORE_MIP : MIMG_Store <0x00000009, "image_store_mip", 1, 1>;
defm IMAGE_STORE_PCK : MIMG_Store <0x0000000a, "image_store_pck", 0>;
defm IMAGE_STORE_MIP_PCK : MIMG_Store <0x0000000b, "image_store_mip_pck", 0, 1>;

defm IMAGE_GET_RESINFO : MIMG_NoSampler <0x0000000e, "image_get_resinfo", 0, 1, 1>;

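// Image atomics: the two-argument mimg<> helper supplies separate opcode
// values because the atomic opcode numbering changed between the original
// and the VI (gfx8) MIMG encodings.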
defm IMAGE_ATOMIC_SWAP : MIMG_Atomic <mimg<0x0f, 0x10>, "image_atomic_swap">;
defm IMAGE_ATOMIC_CMPSWAP : MIMG_Atomic <mimg<0x10, 0x11>, "image_atomic_cmpswap", 1>;
defm IMAGE_ATOMIC_ADD : MIMG_Atomic <mimg<0x11, 0x12>, "image_atomic_add">;
defm IMAGE_ATOMIC_SUB : MIMG_Atomic <mimg<0x12, 0x13>, "image_atomic_sub">;
//def IMAGE_ATOMIC_RSUB : MIMG_NoPattern_ <"image_atomic_rsub", 0x00000013>; -- not on VI
defm IMAGE_ATOMIC_SMIN : MIMG_Atomic <mimg<0x14>, "image_atomic_smin">;
defm IMAGE_ATOMIC_UMIN : MIMG_Atomic <mimg<0x15>, "image_atomic_umin">;
defm IMAGE_ATOMIC_SMAX : MIMG_Atomic <mimg<0x16>, "image_atomic_smax">;
defm IMAGE_ATOMIC_UMAX : MIMG_Atomic <mimg<0x17>, "image_atomic_umax">;
defm IMAGE_ATOMIC_AND : MIMG_Atomic <mimg<0x18>, "image_atomic_and">;
defm IMAGE_ATOMIC_OR : MIMG_Atomic <mimg<0x19>, "image_atomic_or">;
defm IMAGE_ATOMIC_XOR : MIMG_Atomic <mimg<0x1a>, "image_atomic_xor">;
defm IMAGE_ATOMIC_INC : MIMG_Atomic <mimg<0x1b>, "image_atomic_inc">;
defm IMAGE_ATOMIC_DEC : MIMG_Atomic <mimg<0x1c>, "image_atomic_dec">;
//def IMAGE_ATOMIC_FCMPSWAP : MIMG_NoPattern_ <"image_atomic_fcmpswap", 0x0000001d, 1>; -- not on VI
//def IMAGE_ATOMIC_FMIN : MIMG_NoPattern_ <"image_atomic_fmin", 0x0000001e>; -- not on VI
//def IMAGE_ATOMIC_FMAX : MIMG_NoPattern_ <"image_atomic_fmax", 0x0000001f>; -- not on VI

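// Sample variants without an explicit LOD or derivatives (_l/_lz/_d) go
// through MIMG_Sampler_WQM: they compute the LOD implicitly and therefore
// need whole-quad mode to keep helper lanes live.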
defm IMAGE_SAMPLE : MIMG_Sampler_WQM <0x00000020, AMDGPUSample>;
defm IMAGE_SAMPLE_CL : MIMG_Sampler_WQM <0x00000021, AMDGPUSample_cl>;
defm IMAGE_SAMPLE_D : MIMG_Sampler <0x00000022, AMDGPUSample_d>;
defm IMAGE_SAMPLE_D_CL : MIMG_Sampler <0x00000023, AMDGPUSample_d_cl>;
defm IMAGE_SAMPLE_L : MIMG_Sampler <0x00000024, AMDGPUSample_l>;
defm IMAGE_SAMPLE_B : MIMG_Sampler_WQM <0x00000025, AMDGPUSample_b>;
defm IMAGE_SAMPLE_B_CL : MIMG_Sampler_WQM <0x00000026, AMDGPUSample_b_cl>;
defm IMAGE_SAMPLE_LZ : MIMG_Sampler <0x00000027, AMDGPUSample_lz>;
defm IMAGE_SAMPLE_C : MIMG_Sampler_WQM <0x00000028, AMDGPUSample_c>;
defm IMAGE_SAMPLE_C_CL : MIMG_Sampler_WQM <0x00000029, AMDGPUSample_c_cl>;
defm IMAGE_SAMPLE_C_D : MIMG_Sampler <0x0000002a, AMDGPUSample_c_d>;
defm IMAGE_SAMPLE_C_D_CL : MIMG_Sampler <0x0000002b, AMDGPUSample_c_d_cl>;
defm IMAGE_SAMPLE_C_L : MIMG_Sampler <0x0000002c, AMDGPUSample_c_l>;
defm IMAGE_SAMPLE_C_B : MIMG_Sampler_WQM <0x0000002d, AMDGPUSample_c_b>;
defm IMAGE_SAMPLE_C_B_CL : MIMG_Sampler_WQM <0x0000002e, AMDGPUSample_c_b_cl>;
defm IMAGE_SAMPLE_C_LZ : MIMG_Sampler <0x0000002f, AMDGPUSample_c_lz>;
defm IMAGE_SAMPLE_O : MIMG_Sampler_WQM <0x00000030, AMDGPUSample_o>;
defm IMAGE_SAMPLE_CL_O : MIMG_Sampler_WQM <0x00000031, AMDGPUSample_cl_o>;
defm IMAGE_SAMPLE_D_O : MIMG_Sampler <0x00000032, AMDGPUSample_d_o>;
defm IMAGE_SAMPLE_D_CL_O : MIMG_Sampler <0x00000033, AMDGPUSample_d_cl_o>;
defm IMAGE_SAMPLE_L_O : MIMG_Sampler <0x00000034, AMDGPUSample_l_o>;
defm IMAGE_SAMPLE_B_O : MIMG_Sampler_WQM <0x00000035, AMDGPUSample_b_o>;
defm IMAGE_SAMPLE_B_CL_O : MIMG_Sampler_WQM <0x00000036, AMDGPUSample_b_cl_o>;
defm IMAGE_SAMPLE_LZ_O : MIMG_Sampler <0x00000037, AMDGPUSample_lz_o>;
defm IMAGE_SAMPLE_C_O : MIMG_Sampler_WQM <0x00000038, AMDGPUSample_c_o>;
defm IMAGE_SAMPLE_C_CL_O : MIMG_Sampler_WQM <0x00000039, AMDGPUSample_c_cl_o>;
defm IMAGE_SAMPLE_C_D_O : MIMG_Sampler <0x0000003a, AMDGPUSample_c_d_o>;
defm IMAGE_SAMPLE_C_D_CL_O : MIMG_Sampler <0x0000003b, AMDGPUSample_c_d_cl_o>;
defm IMAGE_SAMPLE_C_L_O : MIMG_Sampler <0x0000003c, AMDGPUSample_c_l_o>;
defm IMAGE_SAMPLE_C_B_CL_O : MIMG_Sampler_WQM <0x0000003e, AMDGPUSample_c_b_cl_o>;
defm IMAGE_SAMPLE_C_B_O : MIMG_Sampler_WQM <0x0000003d, AMDGPUSample_c_b_o>;
defm IMAGE_SAMPLE_C_LZ_O : MIMG_Sampler <0x0000003f, AMDGPUSample_c_lz_o>;

defm IMAGE_GATHER4 : MIMG_Gather_WQM <0x00000040, AMDGPUSample>;
defm IMAGE_GATHER4_CL : MIMG_Gather_WQM <0x00000041, AMDGPUSample_cl>;
defm IMAGE_GATHER4_L : MIMG_Gather <0x00000044, AMDGPUSample_l>;
defm IMAGE_GATHER4_B : MIMG_Gather_WQM <0x00000045, AMDGPUSample_b>;
defm IMAGE_GATHER4_B_CL : MIMG_Gather_WQM <0x00000046, AMDGPUSample_b_cl>;
defm IMAGE_GATHER4_LZ : MIMG_Gather <0x00000047, AMDGPUSample_lz>;
defm IMAGE_GATHER4_C : MIMG_Gather_WQM <0x00000048, AMDGPUSample_c>;
defm IMAGE_GATHER4_C_CL : MIMG_Gather_WQM <0x00000049, AMDGPUSample_c_cl>;
defm IMAGE_GATHER4_C_L : MIMG_Gather <0x0000004c, AMDGPUSample_c_l>;
defm IMAGE_GATHER4_C_B : MIMG_Gather_WQM <0x0000004d, AMDGPUSample_c_b>;
defm IMAGE_GATHER4_C_B_CL : MIMG_Gather_WQM <0x0000004e, AMDGPUSample_c_b_cl>;
defm IMAGE_GATHER4_C_LZ : MIMG_Gather <0x0000004f, AMDGPUSample_c_lz>;
defm IMAGE_GATHER4_O : MIMG_Gather_WQM <0x00000050, AMDGPUSample_o>;
defm IMAGE_GATHER4_CL_O : MIMG_Gather_WQM <0x00000051, AMDGPUSample_cl_o>;
defm IMAGE_GATHER4_L_O : MIMG_Gather <0x00000054, AMDGPUSample_l_o>;
defm IMAGE_GATHER4_B_O : MIMG_Gather_WQM <0x00000055, AMDGPUSample_b_o>;
defm IMAGE_GATHER4_B_CL_O : MIMG_Gather <0x00000056, AMDGPUSample_b_cl_o>;
defm IMAGE_GATHER4_LZ_O : MIMG_Gather <0x00000057, AMDGPUSample_lz_o>;
defm IMAGE_GATHER4_C_O : MIMG_Gather_WQM <0x00000058, AMDGPUSample_c_o>;
defm IMAGE_GATHER4_C_CL_O : MIMG_Gather_WQM <0x00000059, AMDGPUSample_c_cl_o>;
defm IMAGE_GATHER4_C_L_O : MIMG_Gather <0x0000005c, AMDGPUSample_c_l_o>;
defm IMAGE_GATHER4_C_B_O : MIMG_Gather_WQM <0x0000005d, AMDGPUSample_c_b_o>;
defm IMAGE_GATHER4_C_B_CL_O : MIMG_Gather_WQM <0x0000005e, AMDGPUSample_c_b_cl_o>;
defm IMAGE_GATHER4_C_LZ_O : MIMG_Gather <0x0000005f, AMDGPUSample_c_lz_o>;

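// image_get_lod is modelled as a sampler instruction with isGetLod set: it
// returns LOD values rather than texel data, so it has no D16 form and is not
// marked mayLoad.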
defm IMAGE_GET_LOD : MIMG_Sampler <0x00000060, AMDGPUSample, 1, 1, "image_get_lod">;

defm IMAGE_SAMPLE_CD : MIMG_Sampler <0x00000068, AMDGPUSample_cd>;
defm IMAGE_SAMPLE_CD_CL : MIMG_Sampler <0x00000069, AMDGPUSample_cd_cl>;
defm IMAGE_SAMPLE_C_CD : MIMG_Sampler <0x0000006a, AMDGPUSample_c_cd>;
defm IMAGE_SAMPLE_C_CD_CL : MIMG_Sampler <0x0000006b, AMDGPUSample_c_cd_cl>;
defm IMAGE_SAMPLE_CD_O : MIMG_Sampler <0x0000006c, AMDGPUSample_cd_o>;
defm IMAGE_SAMPLE_CD_CL_O : MIMG_Sampler <0x0000006d, AMDGPUSample_cd_cl_o>;
defm IMAGE_SAMPLE_C_CD_O : MIMG_Sampler <0x0000006e, AMDGPUSample_c_cd_o>;
defm IMAGE_SAMPLE_C_CD_CL_O : MIMG_Sampler <0x0000006f, AMDGPUSample_c_cd_cl_o>;

//def IMAGE_RSRC256 : MIMG_NoPattern_RSRC256 <"image_rsrc256", 0x0000007e>;
//def IMAGE_SAMPLER : MIMG_NoPattern_ <"image_sampler", 0x0000007f>;

/********** ========================================= **********/
/********** Table of dimension-aware image intrinsics **********/
/********** ========================================= **********/

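// Each ImageDimIntrinsicInfo record ties a dimension-aware image intrinsic
// (which encodes the texture dimension in its name and takes each address
// component as a separate operand) to the MIMG base opcode and dimension
// properties used to select the machine instruction.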
class ImageDimIntrinsicInfo<AMDGPUImageDimIntrinsic I> {
  Intrinsic Intr = I;
  MIMGBaseOpcode BaseOpcode = !cast<MIMGBaseOpcode>(!strconcat("IMAGE_", I.P.OpMod));
  AMDGPUDimProps Dim = I.P.Dim;
}
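// The GenericTable backend emits this as a C++ table searched by intrinsic ID
// via the generated getImageDimIntrinsicInfo() helper; PrimaryKeyEarlyOut lets
// lookups for IDs outside the table's range fail fast.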
def ImageDimIntrinsicTable : GenericTable {
  let FilterClass = "ImageDimIntrinsicInfo";
  let Fields = ["Intr", "BaseOpcode", "Dim"];
  GenericEnum TypeOf_BaseOpcode = MIMGBaseOpcode;
  GenericEnum TypeOf_Dim = MIMGDim;

  let PrimaryKey = ["Intr"];
  let PrimaryKeyName = "getImageDimIntrinsicInfo";
  let PrimaryKeyEarlyOut = 1;
}
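// Instantiate one ImageDimIntrinsicInfo record for every dimension-aware image
// load/store/sample intrinsic and every dimension-aware image atomic.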
foreach intr = !listconcat(AMDGPUImageDimIntrinsics,
                           AMDGPUImageDimAtomicIntrinsics) in {
def : ImageDimIntrinsicInfo<intr>;
}
// L to LZ Optimization Mapping
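// When the LOD operand of one of these _L opcodes is a constant zero, lowering
// can drop the operand and switch to the matching _LZ opcode.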
def : MIMGLZMapping<IMAGE_SAMPLE_L, IMAGE_SAMPLE_LZ>;
def : MIMGLZMapping<IMAGE_SAMPLE_C_L, IMAGE_SAMPLE_C_LZ>;
def : MIMGLZMapping<IMAGE_SAMPLE_L_O, IMAGE_SAMPLE_LZ_O>;
def : MIMGLZMapping<IMAGE_SAMPLE_C_L_O, IMAGE_SAMPLE_C_LZ_O>;
def : MIMGLZMapping<IMAGE_GATHER4_L, IMAGE_GATHER4_LZ>;
def : MIMGLZMapping<IMAGE_GATHER4_C_L, IMAGE_GATHER4_C_LZ>;
def : MIMGLZMapping<IMAGE_GATHER4_L_O, IMAGE_GATHER4_LZ_O>;
def : MIMGLZMapping<IMAGE_GATHER4_C_L_O, IMAGE_GATHER4_C_LZ_O>;

// MIP to NONMIP Optimization Mapping
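// Likewise, image_load_mip / image_store_mip with a constant-zero LOD can be
// replaced by the corresponding non-MIP load/store.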
def : MIMGMIPMapping<IMAGE_LOAD_MIP, IMAGE_LOAD>;
def : MIMGMIPMapping<IMAGE_STORE_MIP, IMAGE_STORE>;