# NOTE: Assertions have been autogenerated by utils/update_mir_test_checks.py
# RUN: llc -march=amdgcn -mcpu=tahiti -run-pass=instruction-select -verify-machineinstrs -global-isel-abort=0 -o - %s | FileCheck -check-prefix=GFX6 %s
# RUN: llc -march=amdgcn -mcpu=gfx900 -run-pass=instruction-select -verify-machineinstrs -global-isel-abort=0 -o - %s | FileCheck -check-prefix=GFX9 %s
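
# These tests exercise GlobalISel instruction selection of G_LOAD from the
# private (scratch, addrspace 5) address space. The 32-bit scalar cases below
# select to MUBUF BUFFER_LOAD_*_OFFEN instructions that use the scratch
# resource descriptor and wave offset registers named in machineFunctionInfo.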
---
name: load_private_s32_from_4
legalized: true
regBankSelected: true
tracksRegLiveness: true
machineFunctionInfo:
scratchRSrcReg: $sgpr0_sgpr1_sgpr2_sgpr3
scratchWaveOffsetReg: $sgpr4
stackPtrOffsetReg: $sgpr32
body: |
bb.0:
liveins: $vgpr0
; GFX6-LABEL: name: load_private_s32_from_4
; GFX6: liveins: $vgpr0
; GFX6: [[COPY:%[0-9]+]]:vgpr_32 = COPY $vgpr0
; GFX6: [[BUFFER_LOAD_DWORD_OFFEN:%[0-9]+]]:vgpr_32 = BUFFER_LOAD_DWORD_OFFEN [[COPY]], $sgpr0_sgpr1_sgpr2_sgpr3, $sgpr4, 0, 0, 0, 0, 0, 0, implicit $exec :: (load 4, addrspace 5)
; GFX6: $vgpr0 = COPY [[BUFFER_LOAD_DWORD_OFFEN]]
; GFX9-LABEL: name: load_private_s32_from_4
; GFX9: liveins: $vgpr0
; GFX9: [[COPY:%[0-9]+]]:vgpr_32 = COPY $vgpr0
; GFX9: [[BUFFER_LOAD_DWORD_OFFEN:%[0-9]+]]:vgpr_32 = BUFFER_LOAD_DWORD_OFFEN [[COPY]], $sgpr0_sgpr1_sgpr2_sgpr3, $sgpr4, 0, 0, 0, 0, 0, 0, implicit $exec :: (load 4, addrspace 5)
; GFX9: $vgpr0 = COPY [[BUFFER_LOAD_DWORD_OFFEN]]
%0:vgpr(p5) = COPY $vgpr0
%1:vgpr(s32) = G_LOAD %0 :: (load 4, align 4, addrspace 5)
$vgpr0 = COPY %1
...
---
name: load_private_s32_from_2
legalized: true
regBankSelected: true
tracksRegLiveness: true
machineFunctionInfo:
scratchRSrcReg: $sgpr0_sgpr1_sgpr2_sgpr3
scratchWaveOffsetReg: $sgpr4
stackPtrOffsetReg: $sgpr32
body: |
bb.0:
liveins: $vgpr0
; GFX6-LABEL: name: load_private_s32_from_2
; GFX6: liveins: $vgpr0
; GFX6: [[COPY:%[0-9]+]]:vgpr_32 = COPY $vgpr0
; GFX6: [[BUFFER_LOAD_USHORT_OFFEN:%[0-9]+]]:vgpr_32 = BUFFER_LOAD_USHORT_OFFEN [[COPY]], $sgpr0_sgpr1_sgpr2_sgpr3, $sgpr4, 0, 0, 0, 0, 0, 0, implicit $exec :: (load 2, addrspace 5)
; GFX6: $vgpr0 = COPY [[BUFFER_LOAD_USHORT_OFFEN]]
; GFX9-LABEL: name: load_private_s32_from_2
; GFX9: liveins: $vgpr0
; GFX9: [[COPY:%[0-9]+]]:vgpr_32 = COPY $vgpr0
; GFX9: [[BUFFER_LOAD_USHORT_OFFEN:%[0-9]+]]:vgpr_32 = BUFFER_LOAD_USHORT_OFFEN [[COPY]], $sgpr0_sgpr1_sgpr2_sgpr3, $sgpr4, 0, 0, 0, 0, 0, 0, implicit $exec :: (load 2, addrspace 5)
; GFX9: $vgpr0 = COPY [[BUFFER_LOAD_USHORT_OFFEN]]
%0:vgpr(p5) = COPY $vgpr0
%1:vgpr(s32) = G_LOAD %0 :: (load 2, align 2, addrspace 5)
$vgpr0 = COPY %1
...
---
name: load_private_s32_from_1
legalized: true
regBankSelected: true
tracksRegLiveness: true
machineFunctionInfo:
scratchRSrcReg: $sgpr0_sgpr1_sgpr2_sgpr3
scratchWaveOffsetReg: $sgpr4
stackPtrOffsetReg: $sgpr32
body: |
bb.0:
liveins: $vgpr0
; GFX6-LABEL: name: load_private_s32_from_1
; GFX6: liveins: $vgpr0
; GFX6: [[COPY:%[0-9]+]]:vgpr_32 = COPY $vgpr0
; GFX6: [[BUFFER_LOAD_UBYTE_OFFEN:%[0-9]+]]:vgpr_32 = BUFFER_LOAD_UBYTE_OFFEN [[COPY]], $sgpr0_sgpr1_sgpr2_sgpr3, $sgpr4, 0, 0, 0, 0, 0, 0, implicit $exec :: (load 1, addrspace 5)
; GFX6: $vgpr0 = COPY [[BUFFER_LOAD_UBYTE_OFFEN]]
; GFX9-LABEL: name: load_private_s32_from_1
; GFX9: liveins: $vgpr0
; GFX9: [[COPY:%[0-9]+]]:vgpr_32 = COPY $vgpr0
; GFX9: [[BUFFER_LOAD_UBYTE_OFFEN:%[0-9]+]]:vgpr_32 = BUFFER_LOAD_UBYTE_OFFEN [[COPY]], $sgpr0_sgpr1_sgpr2_sgpr3, $sgpr4, 0, 0, 0, 0, 0, 0, implicit $exec :: (load 1, addrspace 5)
; GFX9: $vgpr0 = COPY [[BUFFER_LOAD_UBYTE_OFFEN]]
%0:vgpr(p5) = COPY $vgpr0
%1:vgpr(s32) = G_LOAD %0 :: (load 1, align 1, addrspace 5)
$vgpr0 = COPY %1
...
---
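# Pointer- and vector-typed results (p3, p5, <2 x s16>) are not selected in
# the next three tests: the checks still expect the generic G_LOAD, which the
# RUN lines tolerate via -global-isel-abort=0.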
name: load_private_p3_from_4
legalized: true
regBankSelected: true
tracksRegLiveness: true
body: |
bb.0:
liveins: $vgpr0
; GFX6-LABEL: name: load_private_p3_from_4
; GFX6: liveins: $vgpr0
; GFX6: [[COPY:%[0-9]+]]:vgpr(p5) = COPY $vgpr0
; GFX6: [[LOAD:%[0-9]+]]:vgpr_32(p3) = G_LOAD [[COPY]](p5) :: (load 4, addrspace 5)
; GFX6: $vgpr0 = COPY [[LOAD]](p3)
; GFX9-LABEL: name: load_private_p3_from_4
; GFX9: liveins: $vgpr0
; GFX9: [[COPY:%[0-9]+]]:vgpr(p5) = COPY $vgpr0
; GFX9: [[LOAD:%[0-9]+]]:vgpr_32(p3) = G_LOAD [[COPY]](p5) :: (load 4, addrspace 5)
; GFX9: $vgpr0 = COPY [[LOAD]](p3)
%0:vgpr(p5) = COPY $vgpr0
%1:vgpr(p3) = G_LOAD %0 :: (load 4, align 4, addrspace 5)
$vgpr0 = COPY %1
...
---
name: load_private_p5_from_4
legalized: true
regBankSelected: true
tracksRegLiveness: true
machineFunctionInfo:
scratchRSrcReg: $sgpr0_sgpr1_sgpr2_sgpr3
scratchWaveOffsetReg: $sgpr4
stackPtrOffsetReg: $sgpr32
body: |
bb.0:
liveins: $vgpr0
; GFX6-LABEL: name: load_private_p5_from_4
; GFX6: liveins: $vgpr0
; GFX6: [[COPY:%[0-9]+]]:vgpr(p5) = COPY $vgpr0
; GFX6: [[LOAD:%[0-9]+]]:vgpr_32(p5) = G_LOAD [[COPY]](p5) :: (load 4, addrspace 5)
; GFX6: $vgpr0 = COPY [[LOAD]](p5)
; GFX9-LABEL: name: load_private_p5_from_4
; GFX9: liveins: $vgpr0
; GFX9: [[COPY:%[0-9]+]]:vgpr(p5) = COPY $vgpr0
; GFX9: [[LOAD:%[0-9]+]]:vgpr_32(p5) = G_LOAD [[COPY]](p5) :: (load 4, addrspace 5)
; GFX9: $vgpr0 = COPY [[LOAD]](p5)
%0:vgpr(p5) = COPY $vgpr0
%1:vgpr(p5) = G_LOAD %0 :: (load 4, align 4, addrspace 5)
$vgpr0 = COPY %1
...
---
name: load_private_v2s16
legalized: true
regBankSelected: true
tracksRegLiveness: true
machineFunctionInfo:
scratchRSrcReg: $sgpr0_sgpr1_sgpr2_sgpr3
scratchWaveOffsetReg: $sgpr4
stackPtrOffsetReg: $sgpr32
body: |
bb.0:
liveins: $vgpr0
; GFX6-LABEL: name: load_private_v2s16
; GFX6: liveins: $vgpr0
; GFX6: [[COPY:%[0-9]+]]:vgpr(p5) = COPY $vgpr0
; GFX6: [[LOAD:%[0-9]+]]:vgpr_32(<2 x s16>) = G_LOAD [[COPY]](p5) :: (load 4, addrspace 5)
; GFX6: $vgpr0 = COPY [[LOAD]](<2 x s16>)
; GFX9-LABEL: name: load_private_v2s16
; GFX9: liveins: $vgpr0
; GFX9: [[COPY:%[0-9]+]]:vgpr(p5) = COPY $vgpr0
; GFX9: [[LOAD:%[0-9]+]]:vgpr_32(<2 x s16>) = G_LOAD [[COPY]](p5) :: (load 4, addrspace 5)
; GFX9: $vgpr0 = COPY [[LOAD]](<2 x s16>)
%0:vgpr(p5) = COPY $vgpr0
%1:vgpr(<2 x s16>) = G_LOAD %0 :: (load 4, align 4, addrspace 5)
$vgpr0 = COPY %1
...
################################################################################
### Stress addressing modes
################################################################################
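
# The MUBUF immediate offset field is a 12-bit unsigned value (0..4095). The
# tests below walk a G_GEP offset across that range: as the checks show, GFX9
# folds in-range positive offsets into the offset operand, while GFX6,
# negative offsets, and offsets above 4095 fall back to a V_MOV_B32 of the
# constant plus a vector add on the base pointer.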
---
name: load_private_s32_from_1_gep_2047
legalized: true
regBankSelected: true
tracksRegLiveness: true
machineFunctionInfo:
scratchRSrcReg: $sgpr0_sgpr1_sgpr2_sgpr3
scratchWaveOffsetReg: $sgpr4
stackPtrOffsetReg: $sgpr32
body: |
bb.0:
liveins: $vgpr0
; GFX6-LABEL: name: load_private_s32_from_1_gep_2047
; GFX6: liveins: $vgpr0
; GFX6: [[COPY:%[0-9]+]]:vgpr_32 = COPY $vgpr0
; GFX6: [[V_MOV_B32_e32_:%[0-9]+]]:vgpr_32 = V_MOV_B32_e32 2047, implicit $exec
; GFX6: %2:vgpr_32, dead %4:sreg_64_xexec = V_ADD_I32_e64 [[COPY]], [[V_MOV_B32_e32_]], 0, implicit $exec
; GFX6: [[BUFFER_LOAD_UBYTE_OFFEN:%[0-9]+]]:vgpr_32 = BUFFER_LOAD_UBYTE_OFFEN %2, $sgpr0_sgpr1_sgpr2_sgpr3, $sgpr4, 0, 0, 0, 0, 0, 0, implicit $exec :: (load 1, addrspace 5)
; GFX6: $vgpr0 = COPY [[BUFFER_LOAD_UBYTE_OFFEN]]
; GFX9-LABEL: name: load_private_s32_from_1_gep_2047
; GFX9: liveins: $vgpr0
; GFX9: [[COPY:%[0-9]+]]:vgpr_32 = COPY $vgpr0
; GFX9: [[BUFFER_LOAD_UBYTE_OFFEN:%[0-9]+]]:vgpr_32 = BUFFER_LOAD_UBYTE_OFFEN [[COPY]], $sgpr0_sgpr1_sgpr2_sgpr3, $sgpr4, 2047, 0, 0, 0, 0, 0, implicit $exec :: (load 1, addrspace 5)
; GFX9: $vgpr0 = COPY [[BUFFER_LOAD_UBYTE_OFFEN]]
%0:vgpr(p5) = COPY $vgpr0
%1:vgpr(s32) = G_CONSTANT i32 2047
%2:vgpr(p5) = G_GEP %0, %1
%3:vgpr(s32) = G_LOAD %2 :: (load 1, align 1, addrspace 5)
$vgpr0 = COPY %3
...
---
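# Same 2047-byte offset as gep_2047, but the base pointer is first masked
# with 0x7fffffff, so its sign bit is known to be zero. With that knowledge
# the selector folds the offset into the immediate operand even on GFX6.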
name: load_private_s32_from_1_gep_2047_known_bits
legalized: true
regBankSelected: true
tracksRegLiveness: true
machineFunctionInfo:
scratchRSrcReg: $sgpr0_sgpr1_sgpr2_sgpr3
scratchWaveOffsetReg: $sgpr4
stackPtrOffsetReg: $sgpr32
body: |
bb.0:
liveins: $vgpr0
; GFX6-LABEL: name: load_private_s32_from_1_gep_2047_known_bits
; GFX6: liveins: $vgpr0
; GFX6: [[COPY:%[0-9]+]]:vgpr_32 = COPY $vgpr0
; GFX6: [[V_MOV_B32_e32_:%[0-9]+]]:vgpr_32 = V_MOV_B32_e32 2147483647, implicit $exec
; GFX6: [[V_AND_B32_e64_:%[0-9]+]]:vgpr_32 = V_AND_B32_e64 [[COPY]], [[V_MOV_B32_e32_]], implicit $exec
; GFX6: [[BUFFER_LOAD_UBYTE_OFFEN:%[0-9]+]]:vgpr_32 = BUFFER_LOAD_UBYTE_OFFEN [[V_AND_B32_e64_]], $sgpr0_sgpr1_sgpr2_sgpr3, $sgpr4, 2047, 0, 0, 0, 0, 0, implicit $exec :: (load 1, addrspace 5)
; GFX6: $vgpr0 = COPY [[BUFFER_LOAD_UBYTE_OFFEN]]
; GFX9-LABEL: name: load_private_s32_from_1_gep_2047_known_bits
; GFX9: liveins: $vgpr0
; GFX9: [[COPY:%[0-9]+]]:vgpr_32 = COPY $vgpr0
; GFX9: [[V_MOV_B32_e32_:%[0-9]+]]:vgpr_32 = V_MOV_B32_e32 2147483647, implicit $exec
; GFX9: [[V_AND_B32_e64_:%[0-9]+]]:vgpr_32 = V_AND_B32_e64 [[COPY]], [[V_MOV_B32_e32_]], implicit $exec
; GFX9: [[BUFFER_LOAD_UBYTE_OFFEN:%[0-9]+]]:vgpr_32 = BUFFER_LOAD_UBYTE_OFFEN [[V_AND_B32_e64_]], $sgpr0_sgpr1_sgpr2_sgpr3, $sgpr4, 2047, 0, 0, 0, 0, 0, implicit $exec :: (load 1, addrspace 5)
; GFX9: $vgpr0 = COPY [[BUFFER_LOAD_UBYTE_OFFEN]]
%0:vgpr(s32) = COPY $vgpr0
%1:vgpr(s32) = G_CONSTANT i32 2147483647
%2:vgpr(s32) = G_AND %0, %1
%3:vgpr(p5) = G_INTTOPTR %2
%4:vgpr(s32) = G_CONSTANT i32 2047
%5:vgpr(p5) = G_GEP %3, %4
%6:vgpr(s32) = G_LOAD %5 :: (load 1, align 1, addrspace 5)
$vgpr0 = COPY %6
...
---
name: load_private_s32_from_1_gep_2048
legalized: true
regBankSelected: true
tracksRegLiveness: true
machineFunctionInfo:
scratchRSrcReg: $sgpr0_sgpr1_sgpr2_sgpr3
scratchWaveOffsetReg: $sgpr4
stackPtrOffsetReg: $sgpr32
body: |
bb.0:
liveins: $vgpr0
; GFX6-LABEL: name: load_private_s32_from_1_gep_2048
; GFX6: liveins: $vgpr0
; GFX6: [[COPY:%[0-9]+]]:vgpr_32 = COPY $vgpr0
; GFX6: [[V_MOV_B32_e32_:%[0-9]+]]:vgpr_32 = V_MOV_B32_e32 2048, implicit $exec
; GFX6: %2:vgpr_32, dead %4:sreg_64_xexec = V_ADD_I32_e64 [[COPY]], [[V_MOV_B32_e32_]], 0, implicit $exec
; GFX6: [[BUFFER_LOAD_UBYTE_OFFEN:%[0-9]+]]:vgpr_32 = BUFFER_LOAD_UBYTE_OFFEN %2, $sgpr0_sgpr1_sgpr2_sgpr3, $sgpr4, 0, 0, 0, 0, 0, 0, implicit $exec :: (load 1, addrspace 5)
; GFX6: $vgpr0 = COPY [[BUFFER_LOAD_UBYTE_OFFEN]]
; GFX9-LABEL: name: load_private_s32_from_1_gep_2048
; GFX9: liveins: $vgpr0
; GFX9: [[COPY:%[0-9]+]]:vgpr_32 = COPY $vgpr0
; GFX9: [[BUFFER_LOAD_UBYTE_OFFEN:%[0-9]+]]:vgpr_32 = BUFFER_LOAD_UBYTE_OFFEN [[COPY]], $sgpr0_sgpr1_sgpr2_sgpr3, $sgpr4, 2048, 0, 0, 0, 0, 0, implicit $exec :: (load 1, addrspace 5)
; GFX9: $vgpr0 = COPY [[BUFFER_LOAD_UBYTE_OFFEN]]
%0:vgpr(p5) = COPY $vgpr0
%1:vgpr(s32) = G_CONSTANT i32 2048
%2:vgpr(p5) = G_GEP %0, %1
%3:vgpr(s32) = G_LOAD %2 :: (load 1, align 1, addrspace 5)
$vgpr0 = COPY %3
...
---
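# Negative offsets never fit the unsigned immediate field; both targets
# materialize the two's-complement constant (4294965249 == -2047 as a u32)
# and add it to the base pointer.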
name: load_private_s32_from_1_gep_m2047
legalized: true
regBankSelected: true
tracksRegLiveness: true
machineFunctionInfo:
scratchRSrcReg: $sgpr0_sgpr1_sgpr2_sgpr3
scratchWaveOffsetReg: $sgpr4
stackPtrOffsetReg: $sgpr32
body: |
bb.0:
liveins: $vgpr0
; GFX6-LABEL: name: load_private_s32_from_1_gep_m2047
; GFX6: liveins: $vgpr0
; GFX6: [[COPY:%[0-9]+]]:vgpr_32 = COPY $vgpr0
; GFX6: [[V_MOV_B32_e32_:%[0-9]+]]:vgpr_32 = V_MOV_B32_e32 4294965249, implicit $exec
; GFX6: %2:vgpr_32, dead %4:sreg_64_xexec = V_ADD_I32_e64 [[COPY]], [[V_MOV_B32_e32_]], 0, implicit $exec
; GFX6: [[BUFFER_LOAD_UBYTE_OFFEN:%[0-9]+]]:vgpr_32 = BUFFER_LOAD_UBYTE_OFFEN %2, $sgpr0_sgpr1_sgpr2_sgpr3, $sgpr4, 0, 0, 0, 0, 0, 0, implicit $exec :: (load 1, addrspace 5)
; GFX6: $vgpr0 = COPY [[BUFFER_LOAD_UBYTE_OFFEN]]
; GFX9-LABEL: name: load_private_s32_from_1_gep_m2047
; GFX9: liveins: $vgpr0
; GFX9: [[COPY:%[0-9]+]]:vgpr_32 = COPY $vgpr0
; GFX9: [[V_MOV_B32_e32_:%[0-9]+]]:vgpr_32 = V_MOV_B32_e32 4294965249, implicit $exec
; GFX9: [[V_ADD_U32_e64_:%[0-9]+]]:vgpr_32 = V_ADD_U32_e64 [[COPY]], [[V_MOV_B32_e32_]], 0, implicit $exec
; GFX9: [[BUFFER_LOAD_UBYTE_OFFEN:%[0-9]+]]:vgpr_32 = BUFFER_LOAD_UBYTE_OFFEN [[V_ADD_U32_e64_]], $sgpr0_sgpr1_sgpr2_sgpr3, $sgpr4, 0, 0, 0, 0, 0, 0, implicit $exec :: (load 1, addrspace 5)
; GFX9: $vgpr0 = COPY [[BUFFER_LOAD_UBYTE_OFFEN]]
%0:vgpr(p5) = COPY $vgpr0
%1:vgpr(s32) = G_CONSTANT i32 -2047
%2:vgpr(p5) = G_GEP %0, %1
%3:vgpr(s32) = G_LOAD %2 :: (load 1, align 1, addrspace 5)
$vgpr0 = COPY %3
...
---
name: load_private_s32_from_1_gep_m2048
legalized: true
regBankSelected: true
tracksRegLiveness: true
machineFunctionInfo:
scratchRSrcReg: $sgpr0_sgpr1_sgpr2_sgpr3
scratchWaveOffsetReg: $sgpr4
stackPtrOffsetReg: $sgpr32
body: |
bb.0:
liveins: $vgpr0
; GFX6-LABEL: name: load_private_s32_from_1_gep_m2048
; GFX6: liveins: $vgpr0
; GFX6: [[COPY:%[0-9]+]]:vgpr_32 = COPY $vgpr0
; GFX6: [[V_MOV_B32_e32_:%[0-9]+]]:vgpr_32 = V_MOV_B32_e32 4294965248, implicit $exec
; GFX6: %2:vgpr_32, dead %4:sreg_64_xexec = V_ADD_I32_e64 [[COPY]], [[V_MOV_B32_e32_]], 0, implicit $exec
; GFX6: [[BUFFER_LOAD_UBYTE_OFFEN:%[0-9]+]]:vgpr_32 = BUFFER_LOAD_UBYTE_OFFEN %2, $sgpr0_sgpr1_sgpr2_sgpr3, $sgpr4, 0, 0, 0, 0, 0, 0, implicit $exec :: (load 1, addrspace 5)
; GFX6: $vgpr0 = COPY [[BUFFER_LOAD_UBYTE_OFFEN]]
; GFX9-LABEL: name: load_private_s32_from_1_gep_m2048
; GFX9: liveins: $vgpr0
; GFX9: [[COPY:%[0-9]+]]:vgpr_32 = COPY $vgpr0
; GFX9: [[V_MOV_B32_e32_:%[0-9]+]]:vgpr_32 = V_MOV_B32_e32 4294965248, implicit $exec
; GFX9: [[V_ADD_U32_e64_:%[0-9]+]]:vgpr_32 = V_ADD_U32_e64 [[COPY]], [[V_MOV_B32_e32_]], 0, implicit $exec
; GFX9: [[BUFFER_LOAD_UBYTE_OFFEN:%[0-9]+]]:vgpr_32 = BUFFER_LOAD_UBYTE_OFFEN [[V_ADD_U32_e64_]], $sgpr0_sgpr1_sgpr2_sgpr3, $sgpr4, 0, 0, 0, 0, 0, 0, implicit $exec :: (load 1, addrspace 5)
; GFX9: $vgpr0 = COPY [[BUFFER_LOAD_UBYTE_OFFEN]]
%0:vgpr(p5) = COPY $vgpr0
%1:vgpr(s32) = G_CONSTANT i32 -2048
%2:vgpr(p5) = G_GEP %0, %1
%3:vgpr(s32) = G_LOAD %2 :: (load 1, align 1, addrspace 5)
$vgpr0 = COPY %3
...
---
name: load_private_s32_from_1_gep_4095
legalized: true
regBankSelected: true
tracksRegLiveness: true
machineFunctionInfo:
scratchRSrcReg: $sgpr0_sgpr1_sgpr2_sgpr3
scratchWaveOffsetReg: $sgpr4
stackPtrOffsetReg: $sgpr32
body: |
bb.0:
liveins: $vgpr0
; GFX6-LABEL: name: load_private_s32_from_1_gep_4095
; GFX6: liveins: $vgpr0
; GFX6: [[COPY:%[0-9]+]]:vgpr_32 = COPY $vgpr0
; GFX6: [[V_MOV_B32_e32_:%[0-9]+]]:vgpr_32 = V_MOV_B32_e32 4095, implicit $exec
; GFX6: %2:vgpr_32, dead %4:sreg_64_xexec = V_ADD_I32_e64 [[COPY]], [[V_MOV_B32_e32_]], 0, implicit $exec
; GFX6: [[BUFFER_LOAD_UBYTE_OFFEN:%[0-9]+]]:vgpr_32 = BUFFER_LOAD_UBYTE_OFFEN %2, $sgpr0_sgpr1_sgpr2_sgpr3, $sgpr4, 0, 0, 0, 0, 0, 0, implicit $exec :: (load 1, addrspace 5)
; GFX6: $vgpr0 = COPY [[BUFFER_LOAD_UBYTE_OFFEN]]
; GFX9-LABEL: name: load_private_s32_from_1_gep_4095
; GFX9: liveins: $vgpr0
; GFX9: [[COPY:%[0-9]+]]:vgpr_32 = COPY $vgpr0
; GFX9: [[BUFFER_LOAD_UBYTE_OFFEN:%[0-9]+]]:vgpr_32 = BUFFER_LOAD_UBYTE_OFFEN [[COPY]], $sgpr0_sgpr1_sgpr2_sgpr3, $sgpr4, 4095, 0, 0, 0, 0, 0, implicit $exec :: (load 1, addrspace 5)
; GFX9: $vgpr0 = COPY [[BUFFER_LOAD_UBYTE_OFFEN]]
%0:vgpr(p5) = COPY $vgpr0
%1:vgpr(s32) = G_CONSTANT i32 4095
%2:vgpr(p5) = G_GEP %0, %1
%3:vgpr(s32) = G_LOAD %2 :: (load 1, align 1, addrspace 5)
$vgpr0 = COPY %3
...
---
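# 4096 is one past the largest representable immediate offset, so even GFX9
# emits the add here.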
name: load_private_s32_from_1_gep_4096
legalized: true
regBankSelected: true
tracksRegLiveness: true
machineFunctionInfo:
scratchRSrcReg: $sgpr0_sgpr1_sgpr2_sgpr3
scratchWaveOffsetReg: $sgpr4
stackPtrOffsetReg: $sgpr32
body: |
bb.0:
liveins: $vgpr0
; GFX6-LABEL: name: load_private_s32_from_1_gep_4096
; GFX6: liveins: $vgpr0
; GFX6: [[COPY:%[0-9]+]]:vgpr_32 = COPY $vgpr0
; GFX6: [[V_MOV_B32_e32_:%[0-9]+]]:vgpr_32 = V_MOV_B32_e32 4096, implicit $exec
; GFX6: %2:vgpr_32, dead %4:sreg_64_xexec = V_ADD_I32_e64 [[COPY]], [[V_MOV_B32_e32_]], 0, implicit $exec
; GFX6: [[BUFFER_LOAD_UBYTE_OFFEN:%[0-9]+]]:vgpr_32 = BUFFER_LOAD_UBYTE_OFFEN %2, $sgpr0_sgpr1_sgpr2_sgpr3, $sgpr4, 0, 0, 0, 0, 0, 0, implicit $exec :: (load 1, addrspace 5)
; GFX6: $vgpr0 = COPY [[BUFFER_LOAD_UBYTE_OFFEN]]
; GFX9-LABEL: name: load_private_s32_from_1_gep_4096
; GFX9: liveins: $vgpr0
; GFX9: [[COPY:%[0-9]+]]:vgpr_32 = COPY $vgpr0
; GFX9: [[V_MOV_B32_e32_:%[0-9]+]]:vgpr_32 = V_MOV_B32_e32 4096, implicit $exec
; GFX9: [[V_ADD_U32_e64_:%[0-9]+]]:vgpr_32 = V_ADD_U32_e64 [[COPY]], [[V_MOV_B32_e32_]], 0, implicit $exec
; GFX9: [[BUFFER_LOAD_UBYTE_OFFEN:%[0-9]+]]:vgpr_32 = BUFFER_LOAD_UBYTE_OFFEN [[V_ADD_U32_e64_]], $sgpr0_sgpr1_sgpr2_sgpr3, $sgpr4, 0, 0, 0, 0, 0, 0, implicit $exec :: (load 1, addrspace 5)
; GFX9: $vgpr0 = COPY [[BUFFER_LOAD_UBYTE_OFFEN]]
%0:vgpr(p5) = COPY $vgpr0
%1:vgpr(s32) = G_CONSTANT i32 4096
%2:vgpr(p5) = G_GEP %0, %1
%3:vgpr(s32) = G_LOAD %2 :: (load 1, align 1, addrspace 5)
$vgpr0 = COPY %3
...
---
name: load_private_s32_from_1_gep_m4095
legalized: true
regBankSelected: true
tracksRegLiveness: true
machineFunctionInfo:
scratchRSrcReg: $sgpr0_sgpr1_sgpr2_sgpr3
scratchWaveOffsetReg: $sgpr4
stackPtrOffsetReg: $sgpr32
body: |
bb.0:
liveins: $vgpr0
; GFX6-LABEL: name: load_private_s32_from_1_gep_m4095
; GFX6: liveins: $vgpr0
; GFX6: [[COPY:%[0-9]+]]:vgpr_32 = COPY $vgpr0
; GFX6: [[V_MOV_B32_e32_:%[0-9]+]]:vgpr_32 = V_MOV_B32_e32 4294963201, implicit $exec
; GFX6: %2:vgpr_32, dead %4:sreg_64_xexec = V_ADD_I32_e64 [[COPY]], [[V_MOV_B32_e32_]], 0, implicit $exec
; GFX6: [[BUFFER_LOAD_UBYTE_OFFEN:%[0-9]+]]:vgpr_32 = BUFFER_LOAD_UBYTE_OFFEN %2, $sgpr0_sgpr1_sgpr2_sgpr3, $sgpr4, 0, 0, 0, 0, 0, 0, implicit $exec :: (load 1, addrspace 5)
; GFX6: $vgpr0 = COPY [[BUFFER_LOAD_UBYTE_OFFEN]]
; GFX9-LABEL: name: load_private_s32_from_1_gep_m4095
; GFX9: liveins: $vgpr0
; GFX9: [[COPY:%[0-9]+]]:vgpr_32 = COPY $vgpr0
; GFX9: [[V_MOV_B32_e32_:%[0-9]+]]:vgpr_32 = V_MOV_B32_e32 4294963201, implicit $exec
; GFX9: [[V_ADD_U32_e64_:%[0-9]+]]:vgpr_32 = V_ADD_U32_e64 [[COPY]], [[V_MOV_B32_e32_]], 0, implicit $exec
; GFX9: [[BUFFER_LOAD_UBYTE_OFFEN:%[0-9]+]]:vgpr_32 = BUFFER_LOAD_UBYTE_OFFEN [[V_ADD_U32_e64_]], $sgpr0_sgpr1_sgpr2_sgpr3, $sgpr4, 0, 0, 0, 0, 0, 0, implicit $exec :: (load 1, addrspace 5)
; GFX9: $vgpr0 = COPY [[BUFFER_LOAD_UBYTE_OFFEN]]
%0:vgpr(p5) = COPY $vgpr0
%1:vgpr(s32) = G_CONSTANT i32 -4095
%2:vgpr(p5) = G_GEP %0, %1
%3:vgpr(s32) = G_LOAD %2 :: (load 1, align 1, addrspace 5)
$vgpr0 = COPY %3
...
---
name: load_private_s32_from_1_gep_m4096
legalized: true
regBankSelected: true
tracksRegLiveness: true
machineFunctionInfo:
scratchRSrcReg: $sgpr0_sgpr1_sgpr2_sgpr3
scratchWaveOffsetReg: $sgpr4
stackPtrOffsetReg: $sgpr32
body: |
bb.0:
liveins: $vgpr0
; GFX6-LABEL: name: load_private_s32_from_1_gep_m4096
; GFX6: liveins: $vgpr0
; GFX6: [[COPY:%[0-9]+]]:vgpr_32 = COPY $vgpr0
; GFX6: [[V_MOV_B32_e32_:%[0-9]+]]:vgpr_32 = V_MOV_B32_e32 4294963200, implicit $exec
; GFX6: %2:vgpr_32, dead %4:sreg_64_xexec = V_ADD_I32_e64 [[COPY]], [[V_MOV_B32_e32_]], 0, implicit $exec
; GFX6: [[BUFFER_LOAD_UBYTE_OFFEN:%[0-9]+]]:vgpr_32 = BUFFER_LOAD_UBYTE_OFFEN %2, $sgpr0_sgpr1_sgpr2_sgpr3, $sgpr4, 0, 0, 0, 0, 0, 0, implicit $exec :: (load 1, addrspace 5)
; GFX6: $vgpr0 = COPY [[BUFFER_LOAD_UBYTE_OFFEN]]
; GFX9-LABEL: name: load_private_s32_from_1_gep_m4096
; GFX9: liveins: $vgpr0
; GFX9: [[COPY:%[0-9]+]]:vgpr_32 = COPY $vgpr0
; GFX9: [[V_MOV_B32_e32_:%[0-9]+]]:vgpr_32 = V_MOV_B32_e32 4294963200, implicit $exec
; GFX9: [[V_ADD_U32_e64_:%[0-9]+]]:vgpr_32 = V_ADD_U32_e64 [[COPY]], [[V_MOV_B32_e32_]], 0, implicit $exec
; GFX9: [[BUFFER_LOAD_UBYTE_OFFEN:%[0-9]+]]:vgpr_32 = BUFFER_LOAD_UBYTE_OFFEN [[V_ADD_U32_e64_]], $sgpr0_sgpr1_sgpr2_sgpr3, $sgpr4, 0, 0, 0, 0, 0, 0, implicit $exec :: (load 1, addrspace 5)
; GFX9: $vgpr0 = COPY [[BUFFER_LOAD_UBYTE_OFFEN]]
%0:vgpr(p5) = COPY $vgpr0
%1:vgpr(s32) = G_CONSTANT i32 -4096
%2:vgpr(p5) = G_GEP %0, %1
%3:vgpr(s32) = G_LOAD %2 :: (load 1, align 1, addrspace 5)
$vgpr0 = COPY %3
...
---
name: load_private_s32_from_1_gep_8191
legalized: true
regBankSelected: true
tracksRegLiveness: true
machineFunctionInfo:
scratchRSrcReg: $sgpr0_sgpr1_sgpr2_sgpr3
scratchWaveOffsetReg: $sgpr4
stackPtrOffsetReg: $sgpr32
body: |
bb.0:
liveins: $vgpr0
; GFX6-LABEL: name: load_private_s32_from_1_gep_8191
; GFX6: liveins: $vgpr0
; GFX6: [[COPY:%[0-9]+]]:vgpr_32 = COPY $vgpr0
; GFX6: [[V_MOV_B32_e32_:%[0-9]+]]:vgpr_32 = V_MOV_B32_e32 8191, implicit $exec
; GFX6: %2:vgpr_32, dead %4:sreg_64_xexec = V_ADD_I32_e64 [[COPY]], [[V_MOV_B32_e32_]], 0, implicit $exec
|
[AMDGPU] Extend buffer intrinsics with swizzling
Summary:
Extend cachepolicy operand in the new VMEM buffer intrinsics
to supply information whether the buffer data is swizzled.
Also, propagate this information to MIR.
Intrinsics updated:
int_amdgcn_raw_buffer_load
int_amdgcn_raw_buffer_load_format
int_amdgcn_raw_buffer_store
int_amdgcn_raw_buffer_store_format
int_amdgcn_raw_tbuffer_load
int_amdgcn_raw_tbuffer_store
int_amdgcn_struct_buffer_load
int_amdgcn_struct_buffer_load_format
int_amdgcn_struct_buffer_store
int_amdgcn_struct_buffer_store_format
int_amdgcn_struct_tbuffer_load
int_amdgcn_struct_tbuffer_store
Furthermore, disable merging of VMEM buffer instructions
in SI Load/Store optimizer, if the "swizzled" bit on the instruction
is on.
The default value of the bit is 0, meaning that data in buffer
is linear and buffer instructions can be merged.
There is no difference in the generated code with this commit.
However, in the future it will be expected that front-ends
use buffer intrinsics with correct "swizzled" bit set.
Reviewers: arsenm, nhaehnle, tpr
Reviewed By: nhaehnle
Subscribers: arsenm, kzhuravl, jvesely, wdng, nhaehnle, yaxunl, dstuttard, tpr, t-tye, arphaman, jfb, Petar.Avramovic, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D68200
llvm-svn: 373491
2019-10-03 01:22:36 +08:00
|
|
|
; GFX6: [[BUFFER_LOAD_UBYTE_OFFEN:%[0-9]+]]:vgpr_32 = BUFFER_LOAD_UBYTE_OFFEN %2, $sgpr0_sgpr1_sgpr2_sgpr3, $sgpr4, 0, 0, 0, 0, 0, 0, implicit $exec :: (load 1, addrspace 5)
|
2019-07-17 03:22:21 +08:00
|
|
|
; GFX6: $vgpr0 = COPY [[BUFFER_LOAD_UBYTE_OFFEN]]
|
|
|
|
; GFX9-LABEL: name: load_private_s32_from_1_gep_8191
|
|
|
|
; GFX9: liveins: $vgpr0
|
|
|
|
; GFX9: [[COPY:%[0-9]+]]:vgpr_32 = COPY $vgpr0
|
|
|
|
; GFX9: [[V_MOV_B32_e32_:%[0-9]+]]:vgpr_32 = V_MOV_B32_e32 8191, implicit $exec
|
|
|
|
; GFX9: [[V_ADD_U32_e64_:%[0-9]+]]:vgpr_32 = V_ADD_U32_e64 [[COPY]], [[V_MOV_B32_e32_]], 0, implicit $exec
|
[AMDGPU] Extend buffer intrinsics with swizzling
Summary:
Extend cachepolicy operand in the new VMEM buffer intrinsics
to supply information whether the buffer data is swizzled.
Also, propagate this information to MIR.
Intrinsics updated:
int_amdgcn_raw_buffer_load
int_amdgcn_raw_buffer_load_format
int_amdgcn_raw_buffer_store
int_amdgcn_raw_buffer_store_format
int_amdgcn_raw_tbuffer_load
int_amdgcn_raw_tbuffer_store
int_amdgcn_struct_buffer_load
int_amdgcn_struct_buffer_load_format
int_amdgcn_struct_buffer_store
int_amdgcn_struct_buffer_store_format
int_amdgcn_struct_tbuffer_load
int_amdgcn_struct_tbuffer_store
Furthermore, disable merging of VMEM buffer instructions
in SI Load/Store optimizer, if the "swizzled" bit on the instruction
is on.
The default value of the bit is 0, meaning that data in buffer
is linear and buffer instructions can be merged.
There is no difference in the generated code with this commit.
However, in the future it will be expected that front-ends
use buffer intrinsics with correct "swizzled" bit set.
Reviewers: arsenm, nhaehnle, tpr
Reviewed By: nhaehnle
Subscribers: arsenm, kzhuravl, jvesely, wdng, nhaehnle, yaxunl, dstuttard, tpr, t-tye, arphaman, jfb, Petar.Avramovic, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D68200
llvm-svn: 373491
2019-10-03 01:22:36 +08:00
|
|
|
; GFX9: [[BUFFER_LOAD_UBYTE_OFFEN:%[0-9]+]]:vgpr_32 = BUFFER_LOAD_UBYTE_OFFEN [[V_ADD_U32_e64_]], $sgpr0_sgpr1_sgpr2_sgpr3, $sgpr4, 0, 0, 0, 0, 0, 0, implicit $exec :: (load 1, addrspace 5)
|
2019-07-17 03:22:21 +08:00
|
|
|
; GFX9: $vgpr0 = COPY [[BUFFER_LOAD_UBYTE_OFFEN]]
|
|
|
|
%0:vgpr(p5) = COPY $vgpr0
|
|
|
|
%1:vgpr(s32) = G_CONSTANT i32 8191
|
|
|
|
%2:vgpr(p5) = G_GEP %0, %1
|
|
|
|
%3:vgpr(s32) = G_LOAD %2 :: (load 1, align 1, addrspace 5)
|
|
|
|
$vgpr0 = COPY %3
|
|
|
|
|
|
|
|
...
|
|
|
|
|
|
|
|
---
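# As with 8191, an 8192-byte offset exceeds the immediate range, so a
# separate vector add of the offset is expected on both targets.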
name: load_private_s32_from_1_gep_8192
legalized: true
regBankSelected: true
tracksRegLiveness: true
machineFunctionInfo:
  scratchRSrcReg: $sgpr0_sgpr1_sgpr2_sgpr3
  scratchWaveOffsetReg: $sgpr4
  stackPtrOffsetReg: $sgpr32

body: |
  bb.0:
    liveins: $vgpr0

    ; GFX6-LABEL: name: load_private_s32_from_1_gep_8192
    ; GFX6: liveins: $vgpr0
    ; GFX6: [[COPY:%[0-9]+]]:vgpr_32 = COPY $vgpr0
    ; GFX6: [[V_MOV_B32_e32_:%[0-9]+]]:vgpr_32 = V_MOV_B32_e32 8192, implicit $exec
    ; GFX6: %2:vgpr_32, dead %4:sreg_64_xexec = V_ADD_I32_e64 [[COPY]], [[V_MOV_B32_e32_]], 0, implicit $exec
    ; GFX6: [[BUFFER_LOAD_UBYTE_OFFEN:%[0-9]+]]:vgpr_32 = BUFFER_LOAD_UBYTE_OFFEN %2, $sgpr0_sgpr1_sgpr2_sgpr3, $sgpr4, 0, 0, 0, 0, 0, 0, implicit $exec :: (load 1, addrspace 5)
    ; GFX6: $vgpr0 = COPY [[BUFFER_LOAD_UBYTE_OFFEN]]
    ; GFX9-LABEL: name: load_private_s32_from_1_gep_8192
    ; GFX9: liveins: $vgpr0
    ; GFX9: [[COPY:%[0-9]+]]:vgpr_32 = COPY $vgpr0
    ; GFX9: [[V_MOV_B32_e32_:%[0-9]+]]:vgpr_32 = V_MOV_B32_e32 8192, implicit $exec
    ; GFX9: [[V_ADD_U32_e64_:%[0-9]+]]:vgpr_32 = V_ADD_U32_e64 [[COPY]], [[V_MOV_B32_e32_]], 0, implicit $exec
    ; GFX9: [[BUFFER_LOAD_UBYTE_OFFEN:%[0-9]+]]:vgpr_32 = BUFFER_LOAD_UBYTE_OFFEN [[V_ADD_U32_e64_]], $sgpr0_sgpr1_sgpr2_sgpr3, $sgpr4, 0, 0, 0, 0, 0, 0, implicit $exec :: (load 1, addrspace 5)
    ; GFX9: $vgpr0 = COPY [[BUFFER_LOAD_UBYTE_OFFEN]]
    %0:vgpr(p5) = COPY $vgpr0
    %1:vgpr(s32) = G_CONSTANT i32 8192
    %2:vgpr(p5) = G_GEP %0, %1
    %3:vgpr(s32) = G_LOAD %2 :: (load 1, align 1, addrspace 5)
    $vgpr0 = COPY %3

...
---
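# Negative offsets cannot use the unsigned immediate field; the constant
# should be materialized as its 32-bit two's-complement value
# (4294959105 == -8191) and added to the pointer.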
name: load_private_s32_from_1_gep_m8191
legalized: true
regBankSelected: true
tracksRegLiveness: true
machineFunctionInfo:
  scratchRSrcReg: $sgpr0_sgpr1_sgpr2_sgpr3
  scratchWaveOffsetReg: $sgpr4
  stackPtrOffsetReg: $sgpr32

body: |
  bb.0:
    liveins: $vgpr0

    ; GFX6-LABEL: name: load_private_s32_from_1_gep_m8191
    ; GFX6: liveins: $vgpr0
    ; GFX6: [[COPY:%[0-9]+]]:vgpr_32 = COPY $vgpr0
    ; GFX6: [[V_MOV_B32_e32_:%[0-9]+]]:vgpr_32 = V_MOV_B32_e32 4294959105, implicit $exec
    ; GFX6: %2:vgpr_32, dead %4:sreg_64_xexec = V_ADD_I32_e64 [[COPY]], [[V_MOV_B32_e32_]], 0, implicit $exec
    ; GFX6: [[BUFFER_LOAD_UBYTE_OFFEN:%[0-9]+]]:vgpr_32 = BUFFER_LOAD_UBYTE_OFFEN %2, $sgpr0_sgpr1_sgpr2_sgpr3, $sgpr4, 0, 0, 0, 0, 0, 0, implicit $exec :: (load 1, addrspace 5)
    ; GFX6: $vgpr0 = COPY [[BUFFER_LOAD_UBYTE_OFFEN]]
    ; GFX9-LABEL: name: load_private_s32_from_1_gep_m8191
    ; GFX9: liveins: $vgpr0
    ; GFX9: [[COPY:%[0-9]+]]:vgpr_32 = COPY $vgpr0
    ; GFX9: [[V_MOV_B32_e32_:%[0-9]+]]:vgpr_32 = V_MOV_B32_e32 4294959105, implicit $exec
    ; GFX9: [[V_ADD_U32_e64_:%[0-9]+]]:vgpr_32 = V_ADD_U32_e64 [[COPY]], [[V_MOV_B32_e32_]], 0, implicit $exec
    ; GFX9: [[BUFFER_LOAD_UBYTE_OFFEN:%[0-9]+]]:vgpr_32 = BUFFER_LOAD_UBYTE_OFFEN [[V_ADD_U32_e64_]], $sgpr0_sgpr1_sgpr2_sgpr3, $sgpr4, 0, 0, 0, 0, 0, 0, implicit $exec :: (load 1, addrspace 5)
    ; GFX9: $vgpr0 = COPY [[BUFFER_LOAD_UBYTE_OFFEN]]
    %0:vgpr(p5) = COPY $vgpr0
    %1:vgpr(s32) = G_CONSTANT i32 -8191
    %2:vgpr(p5) = G_GEP %0, %1
    %3:vgpr(s32) = G_LOAD %2 :: (load 1, align 1, addrspace 5)
    $vgpr0 = COPY %3

...
---
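# Same as the -8191 case: the negative offset is expected to be
# materialized (4294959104 == -8192) and added with a vector add.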
name: load_private_s32_from_1_gep_m8192
legalized: true
regBankSelected: true
tracksRegLiveness: true
machineFunctionInfo:
  scratchRSrcReg: $sgpr0_sgpr1_sgpr2_sgpr3
  scratchWaveOffsetReg: $sgpr4
  stackPtrOffsetReg: $sgpr32

body: |
  bb.0:
    liveins: $vgpr0

    ; GFX6-LABEL: name: load_private_s32_from_1_gep_m8192
    ; GFX6: liveins: $vgpr0
    ; GFX6: [[COPY:%[0-9]+]]:vgpr_32 = COPY $vgpr0
    ; GFX6: [[V_MOV_B32_e32_:%[0-9]+]]:vgpr_32 = V_MOV_B32_e32 4294959104, implicit $exec
    ; GFX6: %2:vgpr_32, dead %4:sreg_64_xexec = V_ADD_I32_e64 [[COPY]], [[V_MOV_B32_e32_]], 0, implicit $exec
    ; GFX6: [[BUFFER_LOAD_UBYTE_OFFEN:%[0-9]+]]:vgpr_32 = BUFFER_LOAD_UBYTE_OFFEN %2, $sgpr0_sgpr1_sgpr2_sgpr3, $sgpr4, 0, 0, 0, 0, 0, 0, implicit $exec :: (load 1, addrspace 5)
    ; GFX6: $vgpr0 = COPY [[BUFFER_LOAD_UBYTE_OFFEN]]
    ; GFX9-LABEL: name: load_private_s32_from_1_gep_m8192
    ; GFX9: liveins: $vgpr0
    ; GFX9: [[COPY:%[0-9]+]]:vgpr_32 = COPY $vgpr0
    ; GFX9: [[V_MOV_B32_e32_:%[0-9]+]]:vgpr_32 = V_MOV_B32_e32 4294959104, implicit $exec
    ; GFX9: [[V_ADD_U32_e64_:%[0-9]+]]:vgpr_32 = V_ADD_U32_e64 [[COPY]], [[V_MOV_B32_e32_]], 0, implicit $exec
    ; GFX9: [[BUFFER_LOAD_UBYTE_OFFEN:%[0-9]+]]:vgpr_32 = BUFFER_LOAD_UBYTE_OFFEN [[V_ADD_U32_e64_]], $sgpr0_sgpr1_sgpr2_sgpr3, $sgpr4, 0, 0, 0, 0, 0, 0, implicit $exec :: (load 1, addrspace 5)
    ; GFX9: $vgpr0 = COPY [[BUFFER_LOAD_UBYTE_OFFEN]]
    %0:vgpr(p5) = COPY $vgpr0
    %1:vgpr(s32) = G_CONSTANT i32 -8192
    %2:vgpr(p5) = G_GEP %0, %1
    %3:vgpr(s32) = G_LOAD %2 :: (load 1, align 1, addrspace 5)
    $vgpr0 = COPY %3

...
---
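# A load from constant address 0 should select the OFFSET (no vaddr)
# addressing mode with a 0 immediate.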
name: load_private_s32_from_4_constant_0
legalized: true
regBankSelected: true
tracksRegLiveness: true
machineFunctionInfo:
  scratchRSrcReg: $sgpr0_sgpr1_sgpr2_sgpr3
  scratchWaveOffsetReg: $sgpr4
  stackPtrOffsetReg: $sgpr32

body: |
  bb.0:

    ; GFX6-LABEL: name: load_private_s32_from_4_constant_0
    ; GFX6: [[BUFFER_LOAD_DWORD_OFFSET:%[0-9]+]]:vgpr_32 = BUFFER_LOAD_DWORD_OFFSET $sgpr0_sgpr1_sgpr2_sgpr3, $sgpr4, 0, 0, 0, 0, 0, 0, implicit $exec :: (load 4, addrspace 5)
    ; GFX6: $vgpr0 = COPY [[BUFFER_LOAD_DWORD_OFFSET]]
    ; GFX9-LABEL: name: load_private_s32_from_4_constant_0
    ; GFX9: [[BUFFER_LOAD_DWORD_OFFSET:%[0-9]+]]:vgpr_32 = BUFFER_LOAD_DWORD_OFFSET $sgpr0_sgpr1_sgpr2_sgpr3, $sgpr4, 0, 0, 0, 0, 0, 0, implicit $exec :: (load 4, addrspace 5)
    ; GFX9: $vgpr0 = COPY [[BUFFER_LOAD_DWORD_OFFSET]]
    %0:vgpr(p5) = G_CONSTANT i32 0
    %1:vgpr(s32) = G_LOAD %0 :: (load 4, align 4, addrspace 5)
    $vgpr0 = COPY %1

...
---
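# The constant address lives on the SGPR bank here; selection is still
# expected to use the OFFSET form with 16 folded into the immediate.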
name: load_private_s32_from_4_constant_sgpr_16
legalized: true
regBankSelected: true
tracksRegLiveness: true
machineFunctionInfo:
  scratchRSrcReg: $sgpr0_sgpr1_sgpr2_sgpr3
  scratchWaveOffsetReg: $sgpr4
  stackPtrOffsetReg: $sgpr32

body: |
  bb.0:

    ; GFX6-LABEL: name: load_private_s32_from_4_constant_sgpr_16
    ; GFX6: [[BUFFER_LOAD_DWORD_OFFSET:%[0-9]+]]:vgpr_32 = BUFFER_LOAD_DWORD_OFFSET $sgpr0_sgpr1_sgpr2_sgpr3, $sgpr4, 16, 0, 0, 0, 0, 0, implicit $exec :: (load 4, addrspace 5)
    ; GFX6: $vgpr0 = COPY [[BUFFER_LOAD_DWORD_OFFSET]]
    ; GFX9-LABEL: name: load_private_s32_from_4_constant_sgpr_16
    ; GFX9: [[BUFFER_LOAD_DWORD_OFFSET:%[0-9]+]]:vgpr_32 = BUFFER_LOAD_DWORD_OFFSET $sgpr0_sgpr1_sgpr2_sgpr3, $sgpr4, 16, 0, 0, 0, 0, 0, implicit $exec :: (load 4, addrspace 5)
    ; GFX9: $vgpr0 = COPY [[BUFFER_LOAD_DWORD_OFFSET]]
    %0:sgpr(p5) = G_CONSTANT i32 16
    %1:vgpr(s32) = G_LOAD %0 :: (load 4, align 4, addrspace 5)
    $vgpr0 = COPY %1

...
---
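# 4095 is the largest value that should fit the MUBUF immediate offset
# field, so no vaddr is needed.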
name: load_private_s32_from_1_constant_4095
legalized: true
regBankSelected: true
tracksRegLiveness: true
machineFunctionInfo:
  scratchRSrcReg: $sgpr0_sgpr1_sgpr2_sgpr3
  scratchWaveOffsetReg: $sgpr4
  stackPtrOffsetReg: $sgpr32

body: |
  bb.0:

    ; GFX6-LABEL: name: load_private_s32_from_1_constant_4095
    ; GFX6: [[BUFFER_LOAD_UBYTE_OFFSET:%[0-9]+]]:vgpr_32 = BUFFER_LOAD_UBYTE_OFFSET $sgpr0_sgpr1_sgpr2_sgpr3, $sgpr4, 4095, 0, 0, 0, 0, 0, implicit $exec :: (load 1, addrspace 5)
    ; GFX6: $vgpr0 = COPY [[BUFFER_LOAD_UBYTE_OFFSET]]
    ; GFX9-LABEL: name: load_private_s32_from_1_constant_4095
    ; GFX9: [[BUFFER_LOAD_UBYTE_OFFSET:%[0-9]+]]:vgpr_32 = BUFFER_LOAD_UBYTE_OFFSET $sgpr0_sgpr1_sgpr2_sgpr3, $sgpr4, 4095, 0, 0, 0, 0, 0, implicit $exec :: (load 1, addrspace 5)
    ; GFX9: $vgpr0 = COPY [[BUFFER_LOAD_UBYTE_OFFSET]]
    %0:vgpr(p5) = G_CONSTANT i32 4095
    %1:vgpr(s32) = G_LOAD %0 :: (load 1, align 1, addrspace 5)
    $vgpr0 = COPY %1

...
---
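# 4096 no longer fits the immediate field, so the address is expected to
# be moved into a VGPR and the OFFEN form used instead.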
name: load_private_s32_from_1_constant_4096
legalized: true
regBankSelected: true
tracksRegLiveness: true
machineFunctionInfo:
  scratchRSrcReg: $sgpr0_sgpr1_sgpr2_sgpr3
  scratchWaveOffsetReg: $sgpr4
  stackPtrOffsetReg: $sgpr32

body: |
  bb.0:

    ; GFX6-LABEL: name: load_private_s32_from_1_constant_4096
    ; GFX6: [[V_MOV_B32_e32_:%[0-9]+]]:vgpr_32 = V_MOV_B32_e32 4096, implicit $exec
    ; GFX6: [[BUFFER_LOAD_UBYTE_OFFEN:%[0-9]+]]:vgpr_32 = BUFFER_LOAD_UBYTE_OFFEN [[V_MOV_B32_e32_]], $sgpr0_sgpr1_sgpr2_sgpr3, $sgpr4, 0, 0, 0, 0, 0, 0, implicit $exec :: (load 1, addrspace 5)
    ; GFX6: $vgpr0 = COPY [[BUFFER_LOAD_UBYTE_OFFEN]]
    ; GFX9-LABEL: name: load_private_s32_from_1_constant_4096
    ; GFX9: [[V_MOV_B32_e32_:%[0-9]+]]:vgpr_32 = V_MOV_B32_e32 4096, implicit $exec
    ; GFX9: [[BUFFER_LOAD_UBYTE_OFFEN:%[0-9]+]]:vgpr_32 = BUFFER_LOAD_UBYTE_OFFEN [[V_MOV_B32_e32_]], $sgpr0_sgpr1_sgpr2_sgpr3, $sgpr4, 0, 0, 0, 0, 0, 0, implicit $exec :: (load 1, addrspace 5)
    ; GFX9: $vgpr0 = COPY [[BUFFER_LOAD_UBYTE_OFFEN]]
    %0:vgpr(p5) = G_CONSTANT i32 4096
    %1:vgpr(s32) = G_LOAD %0 :: (load 1, align 1, addrspace 5)
    $vgpr0 = COPY %1

...
---
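# A plain frame-index address should be usable directly as vaddr, with
# the stack pointer ($sgpr32) as the soffset operand.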
name: load_private_s32_from_fi
legalized: true
regBankSelected: true
tracksRegLiveness: true
machineFunctionInfo:
  scratchRSrcReg: $sgpr0_sgpr1_sgpr2_sgpr3
  scratchWaveOffsetReg: $sgpr4
  stackPtrOffsetReg: $sgpr32
stack:
  - { id: 0, size: 4, alignment: 4 }

body: |
  bb.0:

    ; GFX6-LABEL: name: load_private_s32_from_fi
    ; GFX6: [[BUFFER_LOAD_DWORD_OFFEN:%[0-9]+]]:vgpr_32 = BUFFER_LOAD_DWORD_OFFEN %stack.0, $sgpr0_sgpr1_sgpr2_sgpr3, $sgpr32, 0, 0, 0, 0, 0, 0, implicit $exec :: (load 4, addrspace 5)
    ; GFX6: $vgpr0 = COPY [[BUFFER_LOAD_DWORD_OFFEN]]
    ; GFX9-LABEL: name: load_private_s32_from_fi
    ; GFX9: [[BUFFER_LOAD_DWORD_OFFEN:%[0-9]+]]:vgpr_32 = BUFFER_LOAD_DWORD_OFFEN %stack.0, $sgpr0_sgpr1_sgpr2_sgpr3, $sgpr32, 0, 0, 0, 0, 0, 0, implicit $exec :: (load 4, addrspace 5)
    ; GFX9: $vgpr0 = COPY [[BUFFER_LOAD_DWORD_OFFEN]]
    %0:vgpr(p5) = G_FRAME_INDEX %stack.0
    %1:vgpr(s32) = G_LOAD %0 :: (load 4, align 4, addrspace 5)
    $vgpr0 = COPY %1

...
---
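# With a frame index plus 4095, GFX6 is expected to emit the add, while
# GFX9 should fold the offset into the immediate and keep the frame index
# as vaddr.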
name: load_private_s32_from_1_fi_offset_4095
legalized: true
regBankSelected: true
tracksRegLiveness: true
machineFunctionInfo:
  scratchRSrcReg: $sgpr0_sgpr1_sgpr2_sgpr3
  scratchWaveOffsetReg: $sgpr4
  stackPtrOffsetReg: $sgpr32
stack:
  - { id: 0, size: 4096, alignment: 4 }

body: |
  bb.0:

    ; GFX6-LABEL: name: load_private_s32_from_1_fi_offset_4095
    ; GFX6: [[V_MOV_B32_e32_:%[0-9]+]]:vgpr_32 = V_MOV_B32_e32 %stack.0, implicit $exec
    ; GFX6: [[V_MOV_B32_e32_1:%[0-9]+]]:vgpr_32 = V_MOV_B32_e32 4095, implicit $exec
    ; GFX6: %2:vgpr_32, dead %4:sreg_64_xexec = V_ADD_I32_e64 [[V_MOV_B32_e32_]], [[V_MOV_B32_e32_1]], 0, implicit $exec
    ; GFX6: [[BUFFER_LOAD_UBYTE_OFFEN:%[0-9]+]]:vgpr_32 = BUFFER_LOAD_UBYTE_OFFEN %2, $sgpr0_sgpr1_sgpr2_sgpr3, $sgpr4, 0, 0, 0, 0, 0, 0, implicit $exec :: (load 1, addrspace 5)
    ; GFX6: $vgpr0 = COPY [[BUFFER_LOAD_UBYTE_OFFEN]]
    ; GFX9-LABEL: name: load_private_s32_from_1_fi_offset_4095
    ; GFX9: [[BUFFER_LOAD_UBYTE_OFFEN:%[0-9]+]]:vgpr_32 = BUFFER_LOAD_UBYTE_OFFEN %stack.0, $sgpr0_sgpr1_sgpr2_sgpr3, $sgpr32, 4095, 0, 0, 0, 0, 0, implicit $exec :: (load 1, addrspace 5)
    ; GFX9: $vgpr0 = COPY [[BUFFER_LOAD_UBYTE_OFFEN]]
    %0:vgpr(p5) = G_FRAME_INDEX %stack.0
    %1:vgpr(s32) = G_CONSTANT i32 4095
    %2:vgpr(p5) = G_GEP %0, %1
    %3:vgpr(s32) = G_LOAD %2 :: (load 1, align 1, addrspace 5)
    $vgpr0 = COPY %3

...
---
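# At 4096 the offset exceeds the immediate range on both targets, so both
# are expected to emit the vector add of the frame index and offset.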
name: load_private_s32_from_1_fi_offset_4096
legalized: true
regBankSelected: true
tracksRegLiveness: true
machineFunctionInfo:
  scratchRSrcReg: $sgpr0_sgpr1_sgpr2_sgpr3
  scratchWaveOffsetReg: $sgpr4
  stackPtrOffsetReg: $sgpr32
stack:
  - { id: 0, size: 8192, alignment: 4 }

body: |
  bb.0:

    ; GFX6-LABEL: name: load_private_s32_from_1_fi_offset_4096
    ; GFX6: [[V_MOV_B32_e32_:%[0-9]+]]:vgpr_32 = V_MOV_B32_e32 %stack.0, implicit $exec
    ; GFX6: [[V_MOV_B32_e32_1:%[0-9]+]]:vgpr_32 = V_MOV_B32_e32 4096, implicit $exec
    ; GFX6: %2:vgpr_32, dead %4:sreg_64_xexec = V_ADD_I32_e64 [[V_MOV_B32_e32_]], [[V_MOV_B32_e32_1]], 0, implicit $exec
    ; GFX6: [[BUFFER_LOAD_UBYTE_OFFEN:%[0-9]+]]:vgpr_32 = BUFFER_LOAD_UBYTE_OFFEN %2, $sgpr0_sgpr1_sgpr2_sgpr3, $sgpr4, 0, 0, 0, 0, 0, 0, implicit $exec :: (load 1, addrspace 5)
    ; GFX6: $vgpr0 = COPY [[BUFFER_LOAD_UBYTE_OFFEN]]
    ; GFX9-LABEL: name: load_private_s32_from_1_fi_offset_4096
    ; GFX9: [[V_MOV_B32_e32_:%[0-9]+]]:vgpr_32 = V_MOV_B32_e32 %stack.0, implicit $exec
    ; GFX9: [[V_MOV_B32_e32_1:%[0-9]+]]:vgpr_32 = V_MOV_B32_e32 4096, implicit $exec
    ; GFX9: [[V_ADD_U32_e64_:%[0-9]+]]:vgpr_32 = V_ADD_U32_e64 [[V_MOV_B32_e32_]], [[V_MOV_B32_e32_1]], 0, implicit $exec
    ; GFX9: [[BUFFER_LOAD_UBYTE_OFFEN:%[0-9]+]]:vgpr_32 = BUFFER_LOAD_UBYTE_OFFEN [[V_ADD_U32_e64_]], $sgpr0_sgpr1_sgpr2_sgpr3, $sgpr4, 0, 0, 0, 0, 0, 0, implicit $exec :: (load 1, addrspace 5)
    ; GFX9: $vgpr0 = COPY [[BUFFER_LOAD_UBYTE_OFFEN]]
    %0:vgpr(p5) = G_FRAME_INDEX %stack.0
    %1:vgpr(s32) = G_CONSTANT i32 4096
    %2:vgpr(p5) = G_GEP %0, %1
    %3:vgpr(s32) = G_LOAD %2 :: (load 1, align 1, addrspace 5)
    $vgpr0 = COPY %3

...