# NOTE: Assertions have been autogenerated by utils/update_mir_test_checks.py
# RUN: llc -march=amdgcn -verify-machineinstrs -run-pass=si-optimize-exec-masking-pre-ra %s -o - | FileCheck -check-prefix=GCN %s
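# The pass folds the saved-exec copy into the S_AND_B64 of an inner if and
# collapses the two adjacent exec restores (s_or_b64) left by nested end_cf
# into the single outer restore, where the tests below allow it.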
# Make sure dbg_value doesn't change codegen when collapsing end_cf
---
name: simple_nested_if_dbg_value
tracksRegLiveness: true
liveins:
  - { reg: '$vgpr0', virtual-reg: '%0' }
  - { reg: '$sgpr0_sgpr1', virtual-reg: '%1' }
machineFunctionInfo:
  isEntryFunction: true
body: |
  ; GCN-LABEL: name: simple_nested_if_dbg_value
  ; GCN: bb.0:
  ; GCN: successors: %bb.1(0x40000000), %bb.4(0x40000000)
  ; GCN: liveins: $vgpr0, $sgpr0_sgpr1
  ; GCN: [[COPY:%[0-9]+]]:sgpr_64 = COPY $sgpr0_sgpr1
  ; GCN: [[COPY1:%[0-9]+]]:vgpr_32 = COPY $vgpr0
  ; GCN: [[V_CMP_LT_U32_e64_:%[0-9]+]]:sreg_64 = V_CMP_LT_U32_e64 1, [[COPY1]], implicit $exec
  ; GCN: [[COPY2:%[0-9]+]]:sreg_64 = COPY $exec, implicit-def $exec
  ; GCN: [[S_AND_B64_:%[0-9]+]]:sreg_64 = S_AND_B64 [[COPY2]], [[V_CMP_LT_U32_e64_]], implicit-def dead $scc
  ; GCN: $exec = S_MOV_B64_term [[S_AND_B64_]]
  ; GCN: SI_MASK_BRANCH %bb.4, implicit $exec
  ; GCN: S_BRANCH %bb.1
  ; GCN: bb.1:
  ; GCN: successors: %bb.2(0x40000000), %bb.3(0x40000000)
  ; GCN: undef %5.sub0_sub1:sgpr_128 = S_LOAD_DWORDX2_IMM [[COPY]], 9, 0, 0 :: (dereferenceable invariant load 8, align 4, addrspace 4)
  ; GCN: undef %6.sub0:vreg_64 = V_LSHLREV_B32_e32 2, [[COPY1]], implicit $exec
  ; GCN: %6.sub1:vreg_64 = V_MOV_B32_e32 0, implicit $exec
  ; GCN: [[COPY3:%[0-9]+]]:vgpr_32 = COPY %5.sub1
  ; GCN: undef %8.sub0:vreg_64, %9:sreg_64_xexec = V_ADD_I32_e64 %5.sub0, %6.sub0, 0, implicit $exec
  ; GCN: %8.sub1:vreg_64, dead %10:sreg_64_xexec = V_ADDC_U32_e64 0, [[COPY3]], %9, 0, implicit $exec
  ; GCN: %5.sub3:sgpr_128 = S_MOV_B32 61440
  ; GCN: %5.sub2:sgpr_128 = S_MOV_B32 0
  ; GCN: BUFFER_STORE_DWORD_ADDR64 %6.sub1, %6, %5, 0, 0, 0, 0, 0, 0, 0, implicit $exec :: (store 4, addrspace 1)
  ; GCN: [[V_CMP_NE_U32_e64_:%[0-9]+]]:sreg_64 = V_CMP_NE_U32_e64 2, [[COPY1]], implicit $exec
  ; GCN: [[S_AND_B64_1:%[0-9]+]]:sreg_64 = S_AND_B64 $exec, [[V_CMP_NE_U32_e64_]], implicit-def dead $scc
  ; GCN: $exec = S_MOV_B64_term [[S_AND_B64_1]]
  ; GCN: SI_MASK_BRANCH %bb.3, implicit $exec
  ; GCN: S_BRANCH %bb.2
  ; GCN: bb.2:
  ; GCN: successors: %bb.3(0x80000000)
  ; GCN: %5.sub0:sgpr_128 = COPY %5.sub2
  ; GCN: %5.sub1:sgpr_128 = COPY %5.sub2
  ; GCN: [[V_MOV_B32_e32_:%[0-9]+]]:vgpr_32 = V_MOV_B32_e32 1, implicit $exec
  ; GCN: BUFFER_STORE_DWORD_ADDR64 [[V_MOV_B32_e32_]], %8, %5, 0, 4, 0, 0, 0, 0, 0, implicit $exec :: (store 4, addrspace 1)
  ; GCN: bb.3:
  ; GCN: successors: %bb.4(0x80000000)
  ; GCN: DBG_VALUE
  ; GCN: bb.4:
  ; GCN: DBG_VALUE
  ; GCN: $exec = S_OR_B64 $exec, [[COPY2]], implicit-def $scc
  ; GCN: [[V_MOV_B32_e32_1:%[0-9]+]]:vgpr_32 = V_MOV_B32_e32 3, implicit $exec
  ; GCN: [[V_MOV_B32_e32_2:%[0-9]+]]:vgpr_32 = V_MOV_B32_e32 0, implicit $exec
  ; GCN: $m0 = S_MOV_B32 -1
  ; GCN: DS_WRITE_B32 [[V_MOV_B32_e32_2]], [[V_MOV_B32_e32_1]], 0, 0, implicit $m0, implicit $exec :: (store 4, addrspace 3)
  ; GCN: S_ENDPGM 0
  bb.0:
    successors: %bb.1, %bb.4
    liveins: $vgpr0, $sgpr0_sgpr1

    %1:sgpr_64 = COPY $sgpr0_sgpr1
    %0:vgpr_32 = COPY $vgpr0
    %2:sreg_64 = V_CMP_LT_U32_e64 1, %0, implicit $exec
    %3:sreg_64 = COPY $exec, implicit-def $exec
    %4:sreg_64 = S_AND_B64 %3, %2, implicit-def dead $scc
    $exec = S_MOV_B64_term %4
    SI_MASK_BRANCH %bb.4, implicit $exec
    S_BRANCH %bb.1

  bb.1:
    successors: %bb.2, %bb.3

    undef %5.sub0_sub1:sgpr_128 = S_LOAD_DWORDX2_IMM %1, 9, 0, 0 :: (dereferenceable invariant load 8, align 4, addrspace 4)
    undef %6.sub0:vreg_64 = V_LSHLREV_B32_e32 2, %0, implicit $exec
    %6.sub1:vreg_64 = V_MOV_B32_e32 0, implicit $exec
    %7:vgpr_32 = COPY %5.sub1
    undef %8.sub0:vreg_64, %9:sreg_64_xexec = V_ADD_I32_e64 %5.sub0, %6.sub0, 0, implicit $exec
    %8.sub1:vreg_64, dead %10:sreg_64_xexec = V_ADDC_U32_e64 0, %7, %9, 0, implicit $exec
    %5.sub3:sgpr_128 = S_MOV_B32 61440
    %5.sub2:sgpr_128 = S_MOV_B32 0
    BUFFER_STORE_DWORD_ADDR64 %6.sub1, %6, %5, 0, 0, 0, 0, 0, 0, 0, implicit $exec :: (store 4, addrspace 1)
    %11:sreg_64 = V_CMP_NE_U32_e64 2, %0, implicit $exec
    %12:sreg_64 = COPY $exec, implicit-def $exec
    %13:sreg_64 = S_AND_B64 %12, %11, implicit-def dead $scc
    $exec = S_MOV_B64_term %13
    SI_MASK_BRANCH %bb.3, implicit $exec
    S_BRANCH %bb.2

  bb.2:
    %5.sub0:sgpr_128 = COPY %5.sub2
    %5.sub1:sgpr_128 = COPY %5.sub2
    %14:vgpr_32 = V_MOV_B32_e32 1, implicit $exec
    BUFFER_STORE_DWORD_ADDR64 %14, %8, %5, 0, 4, 0, 0, 0, 0, 0, implicit $exec :: (store 4, addrspace 1)

  bb.3:
    $exec = S_OR_B64 $exec, %12, implicit-def $scc
    DBG_VALUE

  bb.4:
    DBG_VALUE
    $exec = S_OR_B64 $exec, %3, implicit-def $scc
    %15:vgpr_32 = V_MOV_B32_e32 3, implicit $exec
    %16:vgpr_32 = V_MOV_B32_e32 0, implicit $exec
    $m0 = S_MOV_B32 -1
    DS_WRITE_B32 %16, %15, 0, 0, implicit $m0, implicit $exec :: (store 4, addrspace 3)
    S_ENDPGM 0

...
# Empty block separates the collapsible s_or_b64
---
name: simple_nested_if_empty_block_between
tracksRegLiveness: true
liveins:
  - { reg: '$vgpr0', virtual-reg: '%0' }
  - { reg: '$sgpr0_sgpr1', virtual-reg: '%1' }
machineFunctionInfo:
  isEntryFunction: true
body: |
  ; GCN-LABEL: name: simple_nested_if_empty_block_between
  ; GCN: bb.0:
  ; GCN: successors: %bb.1(0x40000000), %bb.5(0x40000000)
  ; GCN: liveins: $vgpr0, $sgpr0_sgpr1
  ; GCN: [[COPY:%[0-9]+]]:sgpr_64 = COPY $sgpr0_sgpr1
  ; GCN: [[COPY1:%[0-9]+]]:vgpr_32 = COPY $vgpr0
  ; GCN: [[V_CMP_LT_U32_e64_:%[0-9]+]]:sreg_64 = V_CMP_LT_U32_e64 1, [[COPY1]], implicit $exec
  ; GCN: [[COPY2:%[0-9]+]]:sreg_64 = COPY $exec, implicit-def $exec
  ; GCN: [[S_AND_B64_:%[0-9]+]]:sreg_64 = S_AND_B64 [[COPY2]], [[V_CMP_LT_U32_e64_]], implicit-def dead $scc
  ; GCN: $exec = S_MOV_B64_term [[S_AND_B64_]]
  ; GCN: SI_MASK_BRANCH %bb.5, implicit $exec
  ; GCN: S_BRANCH %bb.1
  ; GCN: bb.1:
  ; GCN: successors: %bb.2(0x40000000), %bb.3(0x40000000)
  ; GCN: undef %5.sub0_sub1:sgpr_128 = S_LOAD_DWORDX2_IMM [[COPY]], 9, 0, 0 :: (dereferenceable invariant load 8, align 4, addrspace 4)
  ; GCN: undef %6.sub0:vreg_64 = V_LSHLREV_B32_e32 2, [[COPY1]], implicit $exec
  ; GCN: %6.sub1:vreg_64 = V_MOV_B32_e32 0, implicit $exec
  ; GCN: [[COPY3:%[0-9]+]]:vgpr_32 = COPY %5.sub1
  ; GCN: undef %8.sub0:vreg_64, %9:sreg_64_xexec = V_ADD_I32_e64 %5.sub0, %6.sub0, 0, implicit $exec
  ; GCN: %8.sub1:vreg_64, dead %10:sreg_64_xexec = V_ADDC_U32_e64 0, [[COPY3]], %9, 0, implicit $exec
  ; GCN: %5.sub3:sgpr_128 = S_MOV_B32 61440
  ; GCN: %5.sub2:sgpr_128 = S_MOV_B32 0
  ; GCN: BUFFER_STORE_DWORD_ADDR64 %6.sub1, %6, %5, 0, 0, 0, 0, 0, 0, 0, implicit $exec :: (store 4, addrspace 1)
  ; GCN: [[V_CMP_NE_U32_e64_:%[0-9]+]]:sreg_64 = V_CMP_NE_U32_e64 2, [[COPY1]], implicit $exec
  ; GCN: [[S_AND_B64_1:%[0-9]+]]:sreg_64 = S_AND_B64 $exec, [[V_CMP_NE_U32_e64_]], implicit-def dead $scc
  ; GCN: $exec = S_MOV_B64_term [[S_AND_B64_1]]
  ; GCN: SI_MASK_BRANCH %bb.3, implicit $exec
  ; GCN: S_BRANCH %bb.2
  ; GCN: bb.2:
  ; GCN: successors: %bb.3(0x80000000)
  ; GCN: %5.sub0:sgpr_128 = COPY %5.sub2
  ; GCN: %5.sub1:sgpr_128 = COPY %5.sub2
  ; GCN: [[V_MOV_B32_e32_:%[0-9]+]]:vgpr_32 = V_MOV_B32_e32 1, implicit $exec
  ; GCN: BUFFER_STORE_DWORD_ADDR64 [[V_MOV_B32_e32_]], %8, %5, 0, 4, 0, 0, 0, 0, 0, implicit $exec :: (store 4, addrspace 1)
  ; GCN: bb.3:
  ; GCN: successors: %bb.4(0x80000000)
  ; GCN: bb.4:
  ; GCN: successors: %bb.5(0x80000000)
  ; GCN: bb.5:
  ; GCN: $exec = S_OR_B64 $exec, [[COPY2]], implicit-def $scc
  ; GCN: [[V_MOV_B32_e32_1:%[0-9]+]]:vgpr_32 = V_MOV_B32_e32 3, implicit $exec
  ; GCN: [[V_MOV_B32_e32_2:%[0-9]+]]:vgpr_32 = V_MOV_B32_e32 0, implicit $exec
  ; GCN: $m0 = S_MOV_B32 -1
  ; GCN: DS_WRITE_B32 [[V_MOV_B32_e32_2]], [[V_MOV_B32_e32_1]], 0, 0, implicit $m0, implicit $exec :: (store 4, addrspace 3)
  ; GCN: S_ENDPGM 0
  bb.0:
    successors: %bb.1, %bb.4
    liveins: $vgpr0, $sgpr0_sgpr1

    %1:sgpr_64 = COPY $sgpr0_sgpr1
    %0:vgpr_32 = COPY $vgpr0
    %2:sreg_64 = V_CMP_LT_U32_e64 1, %0, implicit $exec
    %3:sreg_64 = COPY $exec, implicit-def $exec
    %4:sreg_64 = S_AND_B64 %3, %2, implicit-def dead $scc
    $exec = S_MOV_B64_term %4
    SI_MASK_BRANCH %bb.4, implicit $exec
    S_BRANCH %bb.1

  bb.1:
    successors: %bb.2, %bb.3

    undef %5.sub0_sub1:sgpr_128 = S_LOAD_DWORDX2_IMM %1, 9, 0, 0 :: (dereferenceable invariant load 8, align 4, addrspace 4)
    undef %6.sub0:vreg_64 = V_LSHLREV_B32_e32 2, %0, implicit $exec
    %6.sub1:vreg_64 = V_MOV_B32_e32 0, implicit $exec
    %7:vgpr_32 = COPY %5.sub1
    undef %8.sub0:vreg_64, %9:sreg_64_xexec = V_ADD_I32_e64 %5.sub0, %6.sub0, 0, implicit $exec
    %8.sub1:vreg_64, dead %10:sreg_64_xexec = V_ADDC_U32_e64 0, %7, %9, 0, implicit $exec
    %5.sub3:sgpr_128 = S_MOV_B32 61440
    %5.sub2:sgpr_128 = S_MOV_B32 0
    BUFFER_STORE_DWORD_ADDR64 %6.sub1, %6, %5, 0, 0, 0, 0, 0, 0, 0, implicit $exec :: (store 4, addrspace 1)
    %11:sreg_64 = V_CMP_NE_U32_e64 2, %0, implicit $exec
    %12:sreg_64 = COPY $exec, implicit-def $exec
    %13:sreg_64 = S_AND_B64 %12, %11, implicit-def dead $scc
    $exec = S_MOV_B64_term %13
    SI_MASK_BRANCH %bb.3, implicit $exec
    S_BRANCH %bb.2

  bb.2:
    %5.sub0:sgpr_128 = COPY %5.sub2
    %5.sub1:sgpr_128 = COPY %5.sub2
    %14:vgpr_32 = V_MOV_B32_e32 1, implicit $exec
    BUFFER_STORE_DWORD_ADDR64 %14, %8, %5, 0, 4, 0, 0, 0, 0, 0, implicit $exec :: (store 4, addrspace 1)

  bb.3:
    $exec = S_OR_B64 $exec, %12, implicit-def $scc

  bb.5:

  bb.4:
    $exec = S_OR_B64 $exec, %3, implicit-def $scc
    %15:vgpr_32 = V_MOV_B32_e32 3, implicit $exec
    %16:vgpr_32 = V_MOV_B32_e32 0, implicit $exec
    $m0 = S_MOV_B32 -1
    DS_WRITE_B32 %16, %15, 0, 0, implicit $m0, implicit $exec :: (store 4, addrspace 3)
    S_ENDPGM 0

...
# Effectively empty block separates the collapsible s_or_b64
---
name: simple_nested_if_empty_block_dbg_between
tracksRegLiveness: true
liveins:
  - { reg: '$vgpr0', virtual-reg: '%0' }
  - { reg: '$sgpr0_sgpr1', virtual-reg: '%1' }
machineFunctionInfo:
  isEntryFunction: true
body: |
  ; GCN-LABEL: name: simple_nested_if_empty_block_dbg_between
  ; GCN: bb.0:
  ; GCN: successors: %bb.1(0x40000000), %bb.5(0x40000000)
  ; GCN: liveins: $vgpr0, $sgpr0_sgpr1
  ; GCN: [[COPY:%[0-9]+]]:sgpr_64 = COPY $sgpr0_sgpr1
  ; GCN: [[COPY1:%[0-9]+]]:vgpr_32 = COPY $vgpr0
  ; GCN: [[V_CMP_LT_U32_e64_:%[0-9]+]]:sreg_64 = V_CMP_LT_U32_e64 1, [[COPY1]], implicit $exec
  ; GCN: [[COPY2:%[0-9]+]]:sreg_64 = COPY $exec, implicit-def $exec
  ; GCN: [[S_AND_B64_:%[0-9]+]]:sreg_64 = S_AND_B64 [[COPY2]], [[V_CMP_LT_U32_e64_]], implicit-def dead $scc
  ; GCN: $exec = S_MOV_B64_term [[S_AND_B64_]]
  ; GCN: SI_MASK_BRANCH %bb.5, implicit $exec
  ; GCN: S_BRANCH %bb.1
  ; GCN: bb.1:
  ; GCN: successors: %bb.2(0x40000000), %bb.3(0x40000000)
  ; GCN: undef %5.sub0_sub1:sgpr_128 = S_LOAD_DWORDX2_IMM [[COPY]], 9, 0, 0 :: (dereferenceable invariant load 8, align 4, addrspace 4)
  ; GCN: undef %6.sub0:vreg_64 = V_LSHLREV_B32_e32 2, [[COPY1]], implicit $exec
  ; GCN: %6.sub1:vreg_64 = V_MOV_B32_e32 0, implicit $exec
  ; GCN: [[COPY3:%[0-9]+]]:vgpr_32 = COPY %5.sub1
  ; GCN: undef %8.sub0:vreg_64, %9:sreg_64_xexec = V_ADD_I32_e64 %5.sub0, %6.sub0, 0, implicit $exec
  ; GCN: %8.sub1:vreg_64, dead %10:sreg_64_xexec = V_ADDC_U32_e64 0, [[COPY3]], %9, 0, implicit $exec
  ; GCN: %5.sub3:sgpr_128 = S_MOV_B32 61440
  ; GCN: %5.sub2:sgpr_128 = S_MOV_B32 0
  ; GCN: BUFFER_STORE_DWORD_ADDR64 %6.sub1, %6, %5, 0, 0, 0, 0, 0, 0, 0, implicit $exec :: (store 4, addrspace 1)
  ; GCN: [[V_CMP_NE_U32_e64_:%[0-9]+]]:sreg_64 = V_CMP_NE_U32_e64 2, [[COPY1]], implicit $exec
  ; GCN: [[S_AND_B64_1:%[0-9]+]]:sreg_64 = S_AND_B64 $exec, [[V_CMP_NE_U32_e64_]], implicit-def dead $scc
  ; GCN: $exec = S_MOV_B64_term [[S_AND_B64_1]]
  ; GCN: SI_MASK_BRANCH %bb.3, implicit $exec
  ; GCN: S_BRANCH %bb.2
  ; GCN: bb.2:
  ; GCN: successors: %bb.3(0x80000000)
  ; GCN: %5.sub0:sgpr_128 = COPY %5.sub2
  ; GCN: %5.sub1:sgpr_128 = COPY %5.sub2
  ; GCN: [[V_MOV_B32_e32_:%[0-9]+]]:vgpr_32 = V_MOV_B32_e32 1, implicit $exec
  ; GCN: BUFFER_STORE_DWORD_ADDR64 [[V_MOV_B32_e32_]], %8, %5, 0, 4, 0, 0, 0, 0, 0, implicit $exec :: (store 4, addrspace 1)
  ; GCN: bb.3:
  ; GCN: successors: %bb.4(0x80000000)
  ; GCN: bb.4:
  ; GCN: successors: %bb.5(0x80000000)
  ; GCN: DBG_VALUE
  ; GCN: bb.5:
  ; GCN: $exec = S_OR_B64 $exec, [[COPY2]], implicit-def $scc
  ; GCN: [[V_MOV_B32_e32_1:%[0-9]+]]:vgpr_32 = V_MOV_B32_e32 3, implicit $exec
  ; GCN: [[V_MOV_B32_e32_2:%[0-9]+]]:vgpr_32 = V_MOV_B32_e32 0, implicit $exec
  ; GCN: $m0 = S_MOV_B32 -1
  ; GCN: DS_WRITE_B32 [[V_MOV_B32_e32_2]], [[V_MOV_B32_e32_1]], 0, 0, implicit $m0, implicit $exec :: (store 4, addrspace 3)
  ; GCN: S_ENDPGM 0
  bb.0:
    successors: %bb.1, %bb.4
    liveins: $vgpr0, $sgpr0_sgpr1

    %1:sgpr_64 = COPY $sgpr0_sgpr1
    %0:vgpr_32 = COPY $vgpr0
    %2:sreg_64 = V_CMP_LT_U32_e64 1, %0, implicit $exec
    %3:sreg_64 = COPY $exec, implicit-def $exec
    %4:sreg_64 = S_AND_B64 %3, %2, implicit-def dead $scc
    $exec = S_MOV_B64_term %4
    SI_MASK_BRANCH %bb.4, implicit $exec
    S_BRANCH %bb.1

  bb.1:
    successors: %bb.2, %bb.3

    undef %5.sub0_sub1:sgpr_128 = S_LOAD_DWORDX2_IMM %1, 9, 0, 0 :: (dereferenceable invariant load 8, align 4, addrspace 4)
    undef %6.sub0:vreg_64 = V_LSHLREV_B32_e32 2, %0, implicit $exec
    %6.sub1:vreg_64 = V_MOV_B32_e32 0, implicit $exec
    %7:vgpr_32 = COPY %5.sub1
    undef %8.sub0:vreg_64, %9:sreg_64_xexec = V_ADD_I32_e64 %5.sub0, %6.sub0, 0, implicit $exec
    %8.sub1:vreg_64, dead %10:sreg_64_xexec = V_ADDC_U32_e64 0, %7, %9, 0, implicit $exec
    %5.sub3:sgpr_128 = S_MOV_B32 61440
    %5.sub2:sgpr_128 = S_MOV_B32 0
    BUFFER_STORE_DWORD_ADDR64 %6.sub1, %6, %5, 0, 0, 0, 0, 0, 0, 0, implicit $exec :: (store 4, addrspace 1)
    %11:sreg_64 = V_CMP_NE_U32_e64 2, %0, implicit $exec
    %12:sreg_64 = COPY $exec, implicit-def $exec
    %13:sreg_64 = S_AND_B64 %12, %11, implicit-def dead $scc
    $exec = S_MOV_B64_term %13
    SI_MASK_BRANCH %bb.3, implicit $exec
    S_BRANCH %bb.2

  bb.2:
    %5.sub0:sgpr_128 = COPY %5.sub2
    %5.sub1:sgpr_128 = COPY %5.sub2
    %14:vgpr_32 = V_MOV_B32_e32 1, implicit $exec
    BUFFER_STORE_DWORD_ADDR64 %14, %8, %5, 0, 4, 0, 0, 0, 0, 0, implicit $exec :: (store 4, addrspace 1)

  bb.3:
    $exec = S_OR_B64 $exec, %12, implicit-def $scc

  bb.5:
    DBG_VALUE

  bb.4:
    $exec = S_OR_B64 $exec, %3, implicit-def $scc
    %15:vgpr_32 = V_MOV_B32_e32 3, implicit $exec
    %16:vgpr_32 = V_MOV_B32_e32 0, implicit $exec
    $m0 = S_MOV_B32 -1
    DS_WRITE_B32 %16, %15, 0, 0, implicit $m0, implicit $exec :: (store 4, addrspace 3)
    S_ENDPGM 0

...
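# SALU and meta instructions precede the first s_or_b64 in the block.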
---
name: skip_salu_and_meta_insts_find_first
tracksRegLiveness: true
liveins:
  - { reg: '$vgpr0', virtual-reg: '%0' }
  - { reg: '$sgpr0_sgpr1', virtual-reg: '%1' }
machineFunctionInfo:
  isEntryFunction: true
body: |
  ; GCN-LABEL: name: skip_salu_and_meta_insts_find_first
  ; GCN: bb.0:
  ; GCN: successors: %bb.1(0x40000000), %bb.4(0x40000000)
  ; GCN: liveins: $vgpr0, $sgpr0_sgpr1
  ; GCN: [[COPY:%[0-9]+]]:sgpr_64 = COPY $sgpr0_sgpr1
  ; GCN: [[COPY1:%[0-9]+]]:vgpr_32 = COPY $vgpr0
  ; GCN: [[V_CMP_LT_U32_e64_:%[0-9]+]]:sreg_64 = V_CMP_LT_U32_e64 1, [[COPY1]], implicit $exec
  ; GCN: [[COPY2:%[0-9]+]]:sreg_64 = COPY $exec, implicit-def $exec
  ; GCN: [[S_AND_B64_:%[0-9]+]]:sreg_64 = S_AND_B64 [[COPY2]], [[V_CMP_LT_U32_e64_]], implicit-def dead $scc
  ; GCN: $exec = S_MOV_B64_term [[S_AND_B64_]]
  ; GCN: SI_MASK_BRANCH %bb.4, implicit $exec
  ; GCN: S_BRANCH %bb.1
  ; GCN: bb.1:
  ; GCN: successors: %bb.2(0x40000000), %bb.3(0x40000000)
  ; GCN: undef %5.sub0_sub1:sgpr_128 = S_LOAD_DWORDX2_IMM [[COPY]], 9, 0, 0 :: (dereferenceable invariant load 8, align 4, addrspace 4)
  ; GCN: undef %6.sub0:vreg_64 = V_LSHLREV_B32_e32 2, [[COPY1]], implicit $exec
  ; GCN: %6.sub1:vreg_64 = V_MOV_B32_e32 0, implicit $exec
  ; GCN: [[COPY3:%[0-9]+]]:vgpr_32 = COPY %5.sub1
  ; GCN: undef %8.sub0:vreg_64, %9:sreg_64_xexec = V_ADD_I32_e64 %5.sub0, %6.sub0, 0, implicit $exec
  ; GCN: %8.sub1:vreg_64, dead %10:sreg_64_xexec = V_ADDC_U32_e64 0, [[COPY3]], %9, 0, implicit $exec
  ; GCN: %5.sub3:sgpr_128 = S_MOV_B32 61440
  ; GCN: %5.sub2:sgpr_128 = S_MOV_B32 0
  ; GCN: BUFFER_STORE_DWORD_ADDR64 %6.sub1, %6, %5, 0, 0, 0, 0, 0, 0, 0, implicit $exec :: (store 4, addrspace 1)
  ; GCN: [[V_CMP_NE_U32_e64_:%[0-9]+]]:sreg_64 = V_CMP_NE_U32_e64 2, [[COPY1]], implicit $exec
  ; GCN: [[COPY4:%[0-9]+]]:sreg_64 = COPY $exec, implicit-def $exec
  ; GCN: [[S_AND_B64_1:%[0-9]+]]:sreg_64 = S_AND_B64 [[COPY4]], [[V_CMP_NE_U32_e64_]], implicit-def dead $scc
  ; GCN: $exec = S_MOV_B64_term [[S_AND_B64_1]]
  ; GCN: SI_MASK_BRANCH %bb.3, implicit $exec
  ; GCN: S_BRANCH %bb.2
  ; GCN: bb.2:
  ; GCN: successors: %bb.3(0x80000000)
  ; GCN: %5.sub0:sgpr_128 = COPY %5.sub2
  ; GCN: %5.sub1:sgpr_128 = COPY %5.sub2
  ; GCN: [[V_MOV_B32_e32_:%[0-9]+]]:vgpr_32 = V_MOV_B32_e32 1, implicit $exec
  ; GCN: BUFFER_STORE_DWORD_ADDR64 [[V_MOV_B32_e32_]], %8, %5, 0, 4, 0, 0, 0, 0, 0, implicit $exec :: (store 4, addrspace 1)
  ; GCN: bb.3:
  ; GCN: successors: %bb.4(0x80000000)
  ; GCN: [[DEF:%[0-9]+]]:sgpr_32 = IMPLICIT_DEF
  ; GCN: dead %16:sgpr_32 = S_BREV_B32 [[DEF]]
  ; GCN: KILL [[DEF]]
  ; GCN: $exec = S_OR_B64 $exec, [[COPY4]], implicit-def $scc
  ; GCN: bb.4:
  ; GCN: $exec = S_OR_B64 $exec, [[COPY2]], implicit-def $scc
  ; GCN: [[V_MOV_B32_e32_1:%[0-9]+]]:vgpr_32 = V_MOV_B32_e32 3, implicit $exec
  ; GCN: [[V_MOV_B32_e32_2:%[0-9]+]]:vgpr_32 = V_MOV_B32_e32 0, implicit $exec
  ; GCN: $m0 = S_MOV_B32 -1
  ; GCN: DS_WRITE_B32 [[V_MOV_B32_e32_2]], [[V_MOV_B32_e32_1]], 0, 0, implicit $m0, implicit $exec :: (store 4, addrspace 3)
  ; GCN: S_ENDPGM 0
  bb.0:
    successors: %bb.1, %bb.4
    liveins: $vgpr0, $sgpr0_sgpr1

    %1:sgpr_64 = COPY $sgpr0_sgpr1
    %0:vgpr_32 = COPY $vgpr0
    %2:sreg_64 = V_CMP_LT_U32_e64 1, %0, implicit $exec
    %3:sreg_64 = COPY $exec, implicit-def $exec
    %4:sreg_64 = S_AND_B64 %3, %2, implicit-def dead $scc
    $exec = S_MOV_B64_term %4
    SI_MASK_BRANCH %bb.4, implicit $exec
    S_BRANCH %bb.1

  bb.1:
    successors: %bb.2, %bb.3

    undef %5.sub0_sub1:sgpr_128 = S_LOAD_DWORDX2_IMM %1, 9, 0, 0 :: (dereferenceable invariant load 8, align 4, addrspace 4)
    undef %6.sub0:vreg_64 = V_LSHLREV_B32_e32 2, %0, implicit $exec
    %6.sub1:vreg_64 = V_MOV_B32_e32 0, implicit $exec
    %7:vgpr_32 = COPY %5.sub1
    undef %8.sub0:vreg_64, %9:sreg_64_xexec = V_ADD_I32_e64 %5.sub0, %6.sub0, 0, implicit $exec
    %8.sub1:vreg_64, dead %10:sreg_64_xexec = V_ADDC_U32_e64 0, %7, %9, 0, implicit $exec
    %5.sub3:sgpr_128 = S_MOV_B32 61440
    %5.sub2:sgpr_128 = S_MOV_B32 0
    BUFFER_STORE_DWORD_ADDR64 %6.sub1, %6, %5, 0, 0, 0, 0, 0, 0, 0, implicit $exec :: (store 4, addrspace 1)
    %11:sreg_64 = V_CMP_NE_U32_e64 2, %0, implicit $exec
    %12:sreg_64 = COPY $exec, implicit-def $exec
    %13:sreg_64 = S_AND_B64 %12, %11, implicit-def dead $scc
    $exec = S_MOV_B64_term %13
    SI_MASK_BRANCH %bb.3, implicit $exec
    S_BRANCH %bb.2

  bb.2:
    %5.sub0:sgpr_128 = COPY %5.sub2
    %5.sub1:sgpr_128 = COPY %5.sub2
    %14:vgpr_32 = V_MOV_B32_e32 1, implicit $exec
    BUFFER_STORE_DWORD_ADDR64 %14, %8, %5, 0, 4, 0, 0, 0, 0, 0, implicit $exec :: (store 4, addrspace 1)

  bb.3:
    %15:sgpr_32 = IMPLICIT_DEF
    %16:sgpr_32 = S_BREV_B32 %15
    KILL %15
    $exec = S_OR_B64 $exec, %12, implicit-def $scc

  bb.4:
    $exec = S_OR_B64 $exec, %3, implicit-def $scc
    %17:vgpr_32 = V_MOV_B32_e32 3, implicit $exec
    %18:vgpr_32 = V_MOV_B32_e32 0, implicit $exec
    $m0 = S_MOV_B32 -1
    DS_WRITE_B32 %18, %17, 0, 0, implicit $m0, implicit $exec :: (store 4, addrspace 3)
    S_ENDPGM 0

...
# Make sure SALU instructions, meta instructions, and SGPR->SGPR
# copies are skipped.
---
name: skip_salu_and_meta_insts_after
tracksRegLiveness: true
liveins:
  - { reg: '$vgpr0', virtual-reg: '%0' }
  - { reg: '$sgpr0_sgpr1', virtual-reg: '%1' }
machineFunctionInfo:
  isEntryFunction: true
body: |
  ; GCN-LABEL: name: skip_salu_and_meta_insts_after
  ; GCN: bb.0:
  ; GCN: successors: %bb.1(0x40000000), %bb.4(0x40000000)
  ; GCN: liveins: $vgpr0, $sgpr0_sgpr1
  ; GCN: [[COPY:%[0-9]+]]:sgpr_64 = COPY $sgpr0_sgpr1
  ; GCN: [[COPY1:%[0-9]+]]:vgpr_32 = COPY $vgpr0
  ; GCN: [[V_CMP_LT_U32_e64_:%[0-9]+]]:sreg_64 = V_CMP_LT_U32_e64 1, [[COPY1]], implicit $exec
  ; GCN: [[COPY2:%[0-9]+]]:sreg_64 = COPY $exec, implicit-def $exec
  ; GCN: [[S_AND_B64_:%[0-9]+]]:sreg_64 = S_AND_B64 [[COPY2]], [[V_CMP_LT_U32_e64_]], implicit-def dead $scc
  ; GCN: $exec = S_MOV_B64_term [[S_AND_B64_]]
  ; GCN: SI_MASK_BRANCH %bb.4, implicit $exec
  ; GCN: S_BRANCH %bb.1
  ; GCN: bb.1:
  ; GCN: successors: %bb.2(0x40000000), %bb.3(0x40000000)
  ; GCN: undef %5.sub0_sub1:sgpr_128 = S_LOAD_DWORDX2_IMM [[COPY]], 9, 0, 0 :: (dereferenceable invariant load 8, align 4, addrspace 4)
  ; GCN: undef %6.sub0:vreg_64 = V_LSHLREV_B32_e32 2, [[COPY1]], implicit $exec
  ; GCN: %6.sub1:vreg_64 = V_MOV_B32_e32 0, implicit $exec
  ; GCN: [[COPY3:%[0-9]+]]:vgpr_32 = COPY %5.sub1
  ; GCN: undef %8.sub0:vreg_64, %9:sreg_64_xexec = V_ADD_I32_e64 %5.sub0, %6.sub0, 0, implicit $exec
  ; GCN: %8.sub1:vreg_64, dead %10:sreg_64_xexec = V_ADDC_U32_e64 0, [[COPY3]], %9, 0, implicit $exec
  ; GCN: %5.sub3:sgpr_128 = S_MOV_B32 61440
  ; GCN: %5.sub2:sgpr_128 = S_MOV_B32 0
  ; GCN: BUFFER_STORE_DWORD_ADDR64 %6.sub1, %6, %5, 0, 0, 0, 0, 0, 0, 0, implicit $exec :: (store 4, addrspace 1)
  ; GCN: [[V_CMP_NE_U32_e64_:%[0-9]+]]:sreg_64 = V_CMP_NE_U32_e64 2, [[COPY1]], implicit $exec
  ; GCN: [[S_AND_B64_1:%[0-9]+]]:sreg_64 = S_AND_B64 $exec, [[V_CMP_NE_U32_e64_]], implicit-def dead $scc
  ; GCN: $exec = S_MOV_B64_term [[S_AND_B64_1]]
  ; GCN: SI_MASK_BRANCH %bb.3, implicit $exec
  ; GCN: S_BRANCH %bb.2
  ; GCN: bb.2:
  ; GCN: successors: %bb.3(0x80000000)
  ; GCN: %5.sub0:sgpr_128 = COPY %5.sub2
  ; GCN: %5.sub1:sgpr_128 = COPY %5.sub2
  ; GCN: [[V_MOV_B32_e32_:%[0-9]+]]:vgpr_32 = V_MOV_B32_e32 1, implicit $exec
  ; GCN: BUFFER_STORE_DWORD_ADDR64 [[V_MOV_B32_e32_]], %8, %5, 0, 4, 0, 0, 0, 0, 0, implicit $exec :: (store 4, addrspace 1)
  ; GCN: bb.3:
  ; GCN: successors: %bb.4(0x80000000)
  ; GCN: [[DEF:%[0-9]+]]:sgpr_32 = IMPLICIT_DEF
  ; GCN: [[S_BREV_B32_:%[0-9]+]]:sgpr_32 = S_BREV_B32 [[DEF]]
  ; GCN: KILL [[DEF]]
  ; GCN: dead %17:sgpr_32 = COPY [[S_BREV_B32_]]
  ; GCN: bb.4:
  ; GCN: $exec = S_OR_B64 $exec, [[COPY2]], implicit-def $scc
  ; GCN: [[V_MOV_B32_e32_1:%[0-9]+]]:vgpr_32 = V_MOV_B32_e32 3, implicit $exec
  ; GCN: [[V_MOV_B32_e32_2:%[0-9]+]]:vgpr_32 = V_MOV_B32_e32 0, implicit $exec
  ; GCN: $m0 = S_MOV_B32 -1
  ; GCN: DS_WRITE_B32 [[V_MOV_B32_e32_2]], [[V_MOV_B32_e32_1]], 0, 0, implicit $m0, implicit $exec :: (store 4, addrspace 3)
  ; GCN: S_ENDPGM 0
  bb.0:
    successors: %bb.1, %bb.4
    liveins: $vgpr0, $sgpr0_sgpr1

    %1:sgpr_64 = COPY $sgpr0_sgpr1
    %0:vgpr_32 = COPY $vgpr0
    %2:sreg_64 = V_CMP_LT_U32_e64 1, %0, implicit $exec
    %3:sreg_64 = COPY $exec, implicit-def $exec
    %4:sreg_64 = S_AND_B64 %3, %2, implicit-def dead $scc
    $exec = S_MOV_B64_term %4
    SI_MASK_BRANCH %bb.4, implicit $exec
    S_BRANCH %bb.1

  bb.1:
    successors: %bb.2, %bb.3

    undef %5.sub0_sub1:sgpr_128 = S_LOAD_DWORDX2_IMM %1, 9, 0, 0 :: (dereferenceable invariant load 8, align 4, addrspace 4)
    undef %6.sub0:vreg_64 = V_LSHLREV_B32_e32 2, %0, implicit $exec
    %6.sub1:vreg_64 = V_MOV_B32_e32 0, implicit $exec
    %7:vgpr_32 = COPY %5.sub1
    undef %8.sub0:vreg_64, %9:sreg_64_xexec = V_ADD_I32_e64 %5.sub0, %6.sub0, 0, implicit $exec
    %8.sub1:vreg_64, dead %10:sreg_64_xexec = V_ADDC_U32_e64 0, %7, %9, 0, implicit $exec
    %5.sub3:sgpr_128 = S_MOV_B32 61440
    %5.sub2:sgpr_128 = S_MOV_B32 0
    BUFFER_STORE_DWORD_ADDR64 %6.sub1, %6, %5, 0, 0, 0, 0, 0, 0, 0, implicit $exec :: (store 4, addrspace 1)
    %11:sreg_64 = V_CMP_NE_U32_e64 2, %0, implicit $exec
    %12:sreg_64 = COPY $exec, implicit-def $exec
    %13:sreg_64 = S_AND_B64 %12, %11, implicit-def dead $scc
    $exec = S_MOV_B64_term %13
    SI_MASK_BRANCH %bb.3, implicit $exec
    S_BRANCH %bb.2

  bb.2:
    %5.sub0:sgpr_128 = COPY %5.sub2
    %5.sub1:sgpr_128 = COPY %5.sub2
    %14:vgpr_32 = V_MOV_B32_e32 1, implicit $exec
    BUFFER_STORE_DWORD_ADDR64 %14, %8, %5, 0, 4, 0, 0, 0, 0, 0, implicit $exec :: (store 4, addrspace 1)

  bb.3:
    $exec = S_OR_B64 $exec, %12, implicit-def $scc
    %15:sgpr_32 = IMPLICIT_DEF
    %16:sgpr_32 = S_BREV_B32 %15
    KILL %15
    %19:sgpr_32 = COPY %16

  bb.4:
    $exec = S_OR_B64 $exec, %3, implicit-def $scc
    %17:vgpr_32 = V_MOV_B32_e32 3, implicit $exec
    %18:vgpr_32 = V_MOV_B32_e32 0, implicit $exec
    $m0 = S_MOV_B32 -1
    DS_WRITE_B32 %18, %17, 0, 0, implicit $m0, implicit $exec :: (store 4, addrspace 3)
    S_ENDPGM 0

...
# SALU instruction depends on exec through a normal operand.
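#
# A rough sketch of the pattern under test (assuming the usual end_cf shapes
# this pass rewrites; the authoritative legality checks live in
# si-optimize-exec-masking-pre-ra — register names below are illustrative):
#
#   $exec = S_OR_B64 $exec, %inner_saved, implicit-def $scc  ; end of inner if
#   dead %n:sreg_64 = S_BREV_B64 $exec                       ; reads exec
#   $exec = S_OR_B64 $exec, %outer_saved, implicit-def $scc  ; end of outer if
#
# The two restores could otherwise be folded into a single restore of the
# outer mask, but the intervening SALU read of $exec must block the fold.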
---
name: salu_exec_dependency
tracksRegLiveness: true
liveins:
  - { reg: '$vgpr0', virtual-reg: '%0' }
  - { reg: '$sgpr0_sgpr1', virtual-reg: '%1' }
machineFunctionInfo:
  isEntryFunction: true
body: |
  ; GCN-LABEL: name: salu_exec_dependency
  ; GCN: bb.0:
  ; GCN: successors: %bb.1(0x40000000), %bb.4(0x40000000)
  ; GCN: liveins: $vgpr0, $sgpr0_sgpr1
  ; GCN: [[COPY:%[0-9]+]]:sgpr_64 = COPY $sgpr0_sgpr1
  ; GCN: [[COPY1:%[0-9]+]]:vgpr_32 = COPY $vgpr0
  ; GCN: [[V_CMP_LT_U32_e64_:%[0-9]+]]:sreg_64 = V_CMP_LT_U32_e64 1, [[COPY1]], implicit $exec
  ; GCN: [[COPY2:%[0-9]+]]:sreg_64 = COPY $exec, implicit-def $exec
  ; GCN: [[S_AND_B64_:%[0-9]+]]:sreg_64 = S_AND_B64 [[COPY2]], [[V_CMP_LT_U32_e64_]], implicit-def dead $scc
  ; GCN: $exec = S_MOV_B64_term [[S_AND_B64_]]
  ; GCN: SI_MASK_BRANCH %bb.4, implicit $exec
  ; GCN: S_BRANCH %bb.1
  ; GCN: bb.1:
  ; GCN: successors: %bb.2(0x40000000), %bb.3(0x40000000)
  ; GCN: undef %5.sub0_sub1:sgpr_128 = S_LOAD_DWORDX2_IMM [[COPY]], 9, 0, 0 :: (dereferenceable invariant load 8, align 4, addrspace 4)
  ; GCN: undef %6.sub0:vreg_64 = V_LSHLREV_B32_e32 2, [[COPY1]], implicit $exec
  ; GCN: %6.sub1:vreg_64 = V_MOV_B32_e32 0, implicit $exec
  ; GCN: [[COPY3:%[0-9]+]]:vgpr_32 = COPY %5.sub1
  ; GCN: undef %8.sub0:vreg_64, %9:sreg_64_xexec = V_ADD_I32_e64 %5.sub0, %6.sub0, 0, implicit $exec
  ; GCN: %8.sub1:vreg_64, dead %10:sreg_64_xexec = V_ADDC_U32_e64 0, [[COPY3]], %9, 0, implicit $exec
  ; GCN: %5.sub3:sgpr_128 = S_MOV_B32 61440
  ; GCN: %5.sub2:sgpr_128 = S_MOV_B32 0
  ; GCN: BUFFER_STORE_DWORD_ADDR64 %6.sub1, %6, %5, 0, 0, 0, 0, 0, 0, 0, implicit $exec :: (store 4, addrspace 1)
  ; GCN: [[V_CMP_NE_U32_e64_:%[0-9]+]]:sreg_64 = V_CMP_NE_U32_e64 2, [[COPY1]], implicit $exec
  ; GCN: [[COPY4:%[0-9]+]]:sreg_64 = COPY $exec, implicit-def $exec
  ; GCN: [[S_AND_B64_1:%[0-9]+]]:sreg_64 = S_AND_B64 [[COPY4]], [[V_CMP_NE_U32_e64_]], implicit-def dead $scc
  ; GCN: $exec = S_MOV_B64_term [[S_AND_B64_1]]
  ; GCN: SI_MASK_BRANCH %bb.3, implicit $exec
  ; GCN: S_BRANCH %bb.2
  ; GCN: bb.2:
  ; GCN: successors: %bb.3(0x80000000)
  ; GCN: %5.sub0:sgpr_128 = COPY %5.sub2
  ; GCN: %5.sub1:sgpr_128 = COPY %5.sub2
  ; GCN: [[V_MOV_B32_e32_:%[0-9]+]]:vgpr_32 = V_MOV_B32_e32 1, implicit $exec
  ; GCN: BUFFER_STORE_DWORD_ADDR64 [[V_MOV_B32_e32_]], %8, %5, 0, 4, 0, 0, 0, 0, 0, implicit $exec :: (store 4, addrspace 1)
  ; GCN: bb.3:
  ; GCN: successors: %bb.4(0x80000000)
  ; GCN: $exec = S_OR_B64 $exec, [[COPY4]], implicit-def $scc
  ; GCN: dead %15:sreg_64 = S_BREV_B64 $exec
  ; GCN: bb.4:
  ; GCN: $exec = S_OR_B64 $exec, [[COPY2]], implicit-def $scc
  ; GCN: [[V_MOV_B32_e32_1:%[0-9]+]]:vgpr_32 = V_MOV_B32_e32 3, implicit $exec
  ; GCN: [[V_MOV_B32_e32_2:%[0-9]+]]:vgpr_32 = V_MOV_B32_e32 0, implicit $exec
  ; GCN: $m0 = S_MOV_B32 -1
  ; GCN: DS_WRITE_B32 [[V_MOV_B32_e32_2]], [[V_MOV_B32_e32_1]], 0, 0, implicit $m0, implicit $exec :: (store 4, addrspace 3)
  ; GCN: S_ENDPGM 0
  bb.0:
    successors: %bb.1, %bb.4
    liveins: $vgpr0, $sgpr0_sgpr1

    %1:sgpr_64 = COPY $sgpr0_sgpr1
    %0:vgpr_32 = COPY $vgpr0
    %2:sreg_64 = V_CMP_LT_U32_e64 1, %0, implicit $exec
    %3:sreg_64 = COPY $exec, implicit-def $exec
    %4:sreg_64 = S_AND_B64 %3, %2, implicit-def dead $scc
    $exec = S_MOV_B64_term %4
    SI_MASK_BRANCH %bb.4, implicit $exec
    S_BRANCH %bb.1

  bb.1:
    successors: %bb.2, %bb.3

    undef %5.sub0_sub1:sgpr_128 = S_LOAD_DWORDX2_IMM %1, 9, 0, 0 :: (dereferenceable invariant load 8, align 4, addrspace 4)
    undef %6.sub0:vreg_64 = V_LSHLREV_B32_e32 2, %0, implicit $exec
    %6.sub1:vreg_64 = V_MOV_B32_e32 0, implicit $exec
    %7:vgpr_32 = COPY %5.sub1
    undef %8.sub0:vreg_64, %9:sreg_64_xexec = V_ADD_I32_e64 %5.sub0, %6.sub0, 0, implicit $exec
    %8.sub1:vreg_64, dead %10:sreg_64_xexec = V_ADDC_U32_e64 0, %7, %9, 0, implicit $exec
    %5.sub3:sgpr_128 = S_MOV_B32 61440
    %5.sub2:sgpr_128 = S_MOV_B32 0
    BUFFER_STORE_DWORD_ADDR64 %6.sub1, %6, %5, 0, 0, 0, 0, 0, 0, 0, implicit $exec :: (store 4, addrspace 1)
    %11:sreg_64 = V_CMP_NE_U32_e64 2, %0, implicit $exec
    %12:sreg_64 = COPY $exec, implicit-def $exec
    %13:sreg_64 = S_AND_B64 %12, %11, implicit-def dead $scc
    $exec = S_MOV_B64_term %13
    SI_MASK_BRANCH %bb.3, implicit $exec
    S_BRANCH %bb.2

  bb.2:
    %5.sub0:sgpr_128 = COPY %5.sub2
    %5.sub1:sgpr_128 = COPY %5.sub2
    %14:vgpr_32 = V_MOV_B32_e32 1, implicit $exec
    BUFFER_STORE_DWORD_ADDR64 %14, %8, %5, 0, 4, 0, 0, 0, 0, 0, implicit $exec :: (store 4, addrspace 1)

  bb.3:
    $exec = S_OR_B64 $exec, %12, implicit-def $scc
    %15:sreg_64 = S_BREV_B64 $exec

  bb.4:
    $exec = S_OR_B64 $exec, %3, implicit-def $scc
    %17:vgpr_32 = V_MOV_B32_e32 3, implicit $exec
    %18:vgpr_32 = V_MOV_B32_e32 0, implicit $exec
    $m0 = S_MOV_B32 -1
    DS_WRITE_B32 %18, %17, 0, 0, implicit $m0, implicit $exec :: (store 4, addrspace 3)
    S_ENDPGM 0

...

# Copy to / from VGPR should be assumed to read exec
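#
# Rationale (expanding on the comment above): a generic COPY into or out of
# a VGPR is eventually lowered to V_MOV-style instructions, which execute
# per-lane and so implicitly read exec, even though the MIR COPY carries no
# explicit 'implicit $exec' operand. The pass is therefore expected to treat
# the 'COPY %5.sub2' in bb.3 below as an exec read and keep the two
# S_OR_B64 exec restores separate.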
---
name: copy_no_explicit_exec_dependency
tracksRegLiveness: true
liveins:
  - { reg: '$vgpr0', virtual-reg: '%0' }
  - { reg: '$sgpr0_sgpr1', virtual-reg: '%1' }
machineFunctionInfo:
  isEntryFunction: true
body: |
  ; GCN-LABEL: name: copy_no_explicit_exec_dependency
  ; GCN: bb.0:
  ; GCN: successors: %bb.1(0x40000000), %bb.4(0x40000000)
  ; GCN: liveins: $vgpr0, $sgpr0_sgpr1
  ; GCN: [[COPY:%[0-9]+]]:sgpr_64 = COPY $sgpr0_sgpr1
  ; GCN: [[COPY1:%[0-9]+]]:vgpr_32 = COPY $vgpr0
  ; GCN: [[V_CMP_LT_U32_e64_:%[0-9]+]]:sreg_64 = V_CMP_LT_U32_e64 1, [[COPY1]], implicit $exec
  ; GCN: [[COPY2:%[0-9]+]]:sreg_64 = COPY $exec, implicit-def $exec
  ; GCN: [[S_AND_B64_:%[0-9]+]]:sreg_64 = S_AND_B64 [[COPY2]], [[V_CMP_LT_U32_e64_]], implicit-def dead $scc
  ; GCN: $exec = S_MOV_B64_term [[S_AND_B64_]]
  ; GCN: SI_MASK_BRANCH %bb.4, implicit $exec
  ; GCN: S_BRANCH %bb.1
  ; GCN: bb.1:
  ; GCN: successors: %bb.2(0x40000000), %bb.3(0x40000000)
  ; GCN: undef %5.sub0_sub1:sgpr_128 = S_LOAD_DWORDX2_IMM [[COPY]], 9, 0, 0 :: (dereferenceable invariant load 8, align 4, addrspace 4)
  ; GCN: undef %6.sub0:vreg_64 = V_LSHLREV_B32_e32 2, [[COPY1]], implicit $exec
  ; GCN: %6.sub1:vreg_64 = V_MOV_B32_e32 0, implicit $exec
  ; GCN: [[COPY3:%[0-9]+]]:vgpr_32 = COPY %5.sub1
  ; GCN: undef %8.sub0:vreg_64, %9:sreg_64_xexec = V_ADD_I32_e64 %5.sub0, %6.sub0, 0, implicit $exec
  ; GCN: %8.sub1:vreg_64, dead %10:sreg_64_xexec = V_ADDC_U32_e64 0, [[COPY3]], %9, 0, implicit $exec
  ; GCN: %5.sub3:sgpr_128 = S_MOV_B32 61440
  ; GCN: %5.sub2:sgpr_128 = S_MOV_B32 0
  ; GCN: BUFFER_STORE_DWORD_ADDR64 %6.sub1, %6, %5, 0, 0, 0, 0, 0, 0, 0, implicit $exec :: (store 4, addrspace 1)
  ; GCN: [[V_CMP_NE_U32_e64_:%[0-9]+]]:sreg_64 = V_CMP_NE_U32_e64 2, [[COPY1]], implicit $exec
  ; GCN: [[COPY4:%[0-9]+]]:sreg_64 = COPY $exec, implicit-def $exec
  ; GCN: [[S_AND_B64_1:%[0-9]+]]:sreg_64 = S_AND_B64 [[COPY4]], [[V_CMP_NE_U32_e64_]], implicit-def dead $scc
  ; GCN: $exec = S_MOV_B64_term [[S_AND_B64_1]]
  ; GCN: SI_MASK_BRANCH %bb.3, implicit $exec
  ; GCN: S_BRANCH %bb.2
  ; GCN: bb.2:
  ; GCN: successors: %bb.3(0x80000000)
  ; GCN: %5.sub0:sgpr_128 = COPY %5.sub2
  ; GCN: %5.sub1:sgpr_128 = COPY %5.sub2
  ; GCN: [[V_MOV_B32_e32_:%[0-9]+]]:vgpr_32 = V_MOV_B32_e32 1, implicit $exec
  ; GCN: BUFFER_STORE_DWORD_ADDR64 [[V_MOV_B32_e32_]], %8, %5, 0, 4, 0, 0, 0, 0, 0, implicit $exec :: (store 4, addrspace 1)
  ; GCN: bb.3:
  ; GCN: successors: %bb.4(0x80000000)
  ; GCN: $exec = S_OR_B64 $exec, [[COPY4]], implicit-def $scc
  ; GCN: dead %15:vgpr_32 = COPY %5.sub2
  ; GCN: bb.4:
  ; GCN: $exec = S_OR_B64 $exec, [[COPY2]], implicit-def $scc
  ; GCN: [[V_MOV_B32_e32_1:%[0-9]+]]:vgpr_32 = V_MOV_B32_e32 3, implicit $exec
  ; GCN: [[V_MOV_B32_e32_2:%[0-9]+]]:vgpr_32 = V_MOV_B32_e32 0, implicit $exec
  ; GCN: $m0 = S_MOV_B32 -1
  ; GCN: DS_WRITE_B32 [[V_MOV_B32_e32_2]], [[V_MOV_B32_e32_1]], 0, 0, implicit $m0, implicit $exec :: (store 4, addrspace 3)
  ; GCN: S_ENDPGM 0
  bb.0:
    successors: %bb.1, %bb.4
    liveins: $vgpr0, $sgpr0_sgpr1

    %1:sgpr_64 = COPY $sgpr0_sgpr1
    %0:vgpr_32 = COPY $vgpr0
    %2:sreg_64 = V_CMP_LT_U32_e64 1, %0, implicit $exec
    %3:sreg_64 = COPY $exec, implicit-def $exec
    %4:sreg_64 = S_AND_B64 %3, %2, implicit-def dead $scc
    $exec = S_MOV_B64_term %4
    SI_MASK_BRANCH %bb.4, implicit $exec
    S_BRANCH %bb.1

  bb.1:
    successors: %bb.2, %bb.3

    undef %5.sub0_sub1:sgpr_128 = S_LOAD_DWORDX2_IMM %1, 9, 0, 0 :: (dereferenceable invariant load 8, align 4, addrspace 4)
    undef %6.sub0:vreg_64 = V_LSHLREV_B32_e32 2, %0, implicit $exec
    %6.sub1:vreg_64 = V_MOV_B32_e32 0, implicit $exec
    %7:vgpr_32 = COPY %5.sub1
    undef %8.sub0:vreg_64, %9:sreg_64_xexec = V_ADD_I32_e64 %5.sub0, %6.sub0, 0, implicit $exec
    %8.sub1:vreg_64, dead %10:sreg_64_xexec = V_ADDC_U32_e64 0, %7, %9, 0, implicit $exec
    %5.sub3:sgpr_128 = S_MOV_B32 61440
    %5.sub2:sgpr_128 = S_MOV_B32 0
    BUFFER_STORE_DWORD_ADDR64 %6.sub1, %6, %5, 0, 0, 0, 0, 0, 0, 0, implicit $exec :: (store 4, addrspace 1)
    %11:sreg_64 = V_CMP_NE_U32_e64 2, %0, implicit $exec
    %12:sreg_64 = COPY $exec, implicit-def $exec
    %13:sreg_64 = S_AND_B64 %12, %11, implicit-def dead $scc
    $exec = S_MOV_B64_term %13
    SI_MASK_BRANCH %bb.3, implicit $exec
    S_BRANCH %bb.2

  bb.2:
    %5.sub0:sgpr_128 = COPY %5.sub2
    %5.sub1:sgpr_128 = COPY %5.sub2
    %14:vgpr_32 = V_MOV_B32_e32 1, implicit $exec
    BUFFER_STORE_DWORD_ADDR64 %14, %8, %5, 0, 4, 0, 0, 0, 0, 0, implicit $exec :: (store 4, addrspace 1)

  bb.3:
    $exec = S_OR_B64 $exec, %12, implicit-def $scc
    %15:vgpr_32 = COPY %5.sub2

  bb.4:
    $exec = S_OR_B64 $exec, %3, implicit-def $scc
    %17:vgpr_32 = V_MOV_B32_e32 3, implicit $exec
    %18:vgpr_32 = V_MOV_B32_e32 0, implicit $exec
    $m0 = S_MOV_B32 -1
    DS_WRITE_B32 %18, %17, 0, 0, implicit $m0, implicit $exec :: (store 4, addrspace 3)
    S_ENDPGM 0

...

# There's no real reason this case can't be handled, but it isn't currently.
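#
# Sketch of why the fold is skipped here (an inference from the test name):
# the inner restore in bb.3 reaches the outer restore in bb.4 only via
# 'S_BRANCH %bb.5' and then 'S_BRANCH %bb.4', so bb.4 is not bb.3's layout
# successor, and the pass does not see the back-to-back restore pattern it
# knows how to rewrite.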
---
name: simple_nested_if_not_layout_successor
tracksRegLiveness: true
liveins:
  - { reg: '$vgpr0', virtual-reg: '%0' }
  - { reg: '$sgpr0_sgpr1', virtual-reg: '%1' }
machineFunctionInfo:
  isEntryFunction: true
body: |
  ; GCN-LABEL: name: simple_nested_if_not_layout_successor
  ; GCN: bb.0:
  ; GCN: successors: %bb.1(0x40000000), %bb.4(0x40000000)
  ; GCN: liveins: $vgpr0, $sgpr0_sgpr1
  ; GCN: [[COPY:%[0-9]+]]:sgpr_64 = COPY $sgpr0_sgpr1
  ; GCN: [[COPY1:%[0-9]+]]:vgpr_32 = COPY $vgpr0
  ; GCN: [[V_CMP_LT_U32_e64_:%[0-9]+]]:sreg_64 = V_CMP_LT_U32_e64 1, [[COPY1]], implicit $exec
  ; GCN: [[COPY2:%[0-9]+]]:sreg_64 = COPY $exec, implicit-def $exec
  ; GCN: [[S_AND_B64_:%[0-9]+]]:sreg_64 = S_AND_B64 [[COPY2]], [[V_CMP_LT_U32_e64_]], implicit-def dead $scc
  ; GCN: $exec = S_MOV_B64_term [[S_AND_B64_]]
  ; GCN: SI_MASK_BRANCH %bb.4, implicit $exec
  ; GCN: S_BRANCH %bb.1
  ; GCN: bb.1:
  ; GCN: successors: %bb.2(0x40000000), %bb.3(0x40000000)
  ; GCN: undef %5.sub0_sub1:sgpr_128 = S_LOAD_DWORDX2_IMM [[COPY]], 9, 0, 0 :: (dereferenceable invariant load 8, align 4, addrspace 4)
  ; GCN: undef %6.sub0:vreg_64 = V_LSHLREV_B32_e32 2, [[COPY1]], implicit $exec
  ; GCN: %6.sub1:vreg_64 = V_MOV_B32_e32 0, implicit $exec
  ; GCN: [[COPY3:%[0-9]+]]:vgpr_32 = COPY %5.sub1
  ; GCN: undef %8.sub0:vreg_64, %9:sreg_64_xexec = V_ADD_I32_e64 %5.sub0, %6.sub0, 0, implicit $exec
  ; GCN: %8.sub1:vreg_64, dead %10:sreg_64_xexec = V_ADDC_U32_e64 0, [[COPY3]], %9, 0, implicit $exec
  ; GCN: %5.sub3:sgpr_128 = S_MOV_B32 61440
  ; GCN: %5.sub2:sgpr_128 = S_MOV_B32 0
  ; GCN: BUFFER_STORE_DWORD_ADDR64 %6.sub1, %6, %5, 0, 0, 0, 0, 0, 0, 0, implicit $exec :: (store 4, addrspace 1)
  ; GCN: [[V_CMP_NE_U32_e64_:%[0-9]+]]:sreg_64 = V_CMP_NE_U32_e64 2, [[COPY1]], implicit $exec
  ; GCN: [[COPY4:%[0-9]+]]:sreg_64 = COPY $exec, implicit-def $exec
  ; GCN: [[S_AND_B64_1:%[0-9]+]]:sreg_64 = S_AND_B64 [[COPY4]], [[V_CMP_NE_U32_e64_]], implicit-def dead $scc
  ; GCN: $exec = S_MOV_B64_term [[S_AND_B64_1]]
  ; GCN: SI_MASK_BRANCH %bb.3, implicit $exec
  ; GCN: S_BRANCH %bb.2
  ; GCN: bb.2:
  ; GCN: successors: %bb.3(0x80000000)
  ; GCN: %5.sub0:sgpr_128 = COPY %5.sub2
  ; GCN: %5.sub1:sgpr_128 = COPY %5.sub2
  ; GCN: [[V_MOV_B32_e32_:%[0-9]+]]:vgpr_32 = V_MOV_B32_e32 1, implicit $exec
  ; GCN: BUFFER_STORE_DWORD_ADDR64 [[V_MOV_B32_e32_]], %8, %5, 0, 4, 0, 0, 0, 0, 0, implicit $exec :: (store 4, addrspace 1)
  ; GCN: bb.3:
  ; GCN: successors: %bb.5(0x80000000)
  ; GCN: $exec = S_OR_B64 $exec, [[COPY4]], implicit-def $scc
  ; GCN: S_BRANCH %bb.5
  ; GCN: bb.4:
  ; GCN: $exec = S_OR_B64 $exec, [[COPY2]], implicit-def $scc
  ; GCN: [[V_MOV_B32_e32_1:%[0-9]+]]:vgpr_32 = V_MOV_B32_e32 3, implicit $exec
  ; GCN: [[V_MOV_B32_e32_2:%[0-9]+]]:vgpr_32 = V_MOV_B32_e32 0, implicit $exec
  ; GCN: $m0 = S_MOV_B32 -1
  ; GCN: DS_WRITE_B32 [[V_MOV_B32_e32_2]], [[V_MOV_B32_e32_1]], 0, 0, implicit $m0, implicit $exec :: (store 4, addrspace 3)
  ; GCN: S_ENDPGM 0
  ; GCN: bb.5:
  ; GCN: successors: %bb.4(0x80000000)
  ; GCN: S_BRANCH %bb.4
  bb.0:
    successors: %bb.1, %bb.4
    liveins: $vgpr0, $sgpr0_sgpr1

    %1:sgpr_64 = COPY $sgpr0_sgpr1
    %0:vgpr_32 = COPY $vgpr0
    %2:sreg_64 = V_CMP_LT_U32_e64 1, %0, implicit $exec
    %3:sreg_64 = COPY $exec, implicit-def $exec
    %4:sreg_64 = S_AND_B64 %3, %2, implicit-def dead $scc
    $exec = S_MOV_B64_term %4
    SI_MASK_BRANCH %bb.4, implicit $exec
    S_BRANCH %bb.1

  bb.1:
    successors: %bb.2, %bb.3

    undef %5.sub0_sub1:sgpr_128 = S_LOAD_DWORDX2_IMM %1, 9, 0, 0 :: (dereferenceable invariant load 8, align 4, addrspace 4)
    undef %6.sub0:vreg_64 = V_LSHLREV_B32_e32 2, %0, implicit $exec
    %6.sub1:vreg_64 = V_MOV_B32_e32 0, implicit $exec
    %7:vgpr_32 = COPY %5.sub1
    undef %8.sub0:vreg_64, %9:sreg_64_xexec = V_ADD_I32_e64 %5.sub0, %6.sub0, 0, implicit $exec
    %8.sub1:vreg_64, dead %10:sreg_64_xexec = V_ADDC_U32_e64 0, %7, %9, 0, implicit $exec
    %5.sub3:sgpr_128 = S_MOV_B32 61440
    %5.sub2:sgpr_128 = S_MOV_B32 0
    BUFFER_STORE_DWORD_ADDR64 %6.sub1, %6, %5, 0, 0, 0, 0, 0, 0, 0, implicit $exec :: (store 4, addrspace 1)
    %11:sreg_64 = V_CMP_NE_U32_e64 2, %0, implicit $exec
    %12:sreg_64 = COPY $exec, implicit-def $exec
    %13:sreg_64 = S_AND_B64 %12, %11, implicit-def dead $scc
    $exec = S_MOV_B64_term %13
    SI_MASK_BRANCH %bb.3, implicit $exec
    S_BRANCH %bb.2

  bb.2:
    %5.sub0:sgpr_128 = COPY %5.sub2
    %5.sub1:sgpr_128 = COPY %5.sub2
    %14:vgpr_32 = V_MOV_B32_e32 1, implicit $exec
    BUFFER_STORE_DWORD_ADDR64 %14, %8, %5, 0, 4, 0, 0, 0, 0, 0, implicit $exec :: (store 4, addrspace 1)

  bb.3:
    $exec = S_OR_B64 $exec, %12, implicit-def $scc
    S_BRANCH %bb.5

  bb.4:
    $exec = S_OR_B64 $exec, %3, implicit-def $scc
    %15:vgpr_32 = V_MOV_B32_e32 3, implicit $exec
    %16:vgpr_32 = V_MOV_B32_e32 0, implicit $exec
    $m0 = S_MOV_B32 -1
    DS_WRITE_B32 %16, %15, 0, 0, implicit $m0, implicit $exec :: (store 4, addrspace 3)
    S_ENDPGM 0

  bb.5:
    S_BRANCH %bb.4

...