# RUN: llc -march=amdgcn -run-pass peephole-opt -verify-machineinstrs %s -o - | FileCheck -check-prefix=GCN %s
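# The tests below check that peephole-opt does not fold the V_MOV_B32 literal
# (1090519040 = 0x41000000) into the V_MAC_F32/V_MAD_F32 that uses it, since each
# use sets a clamp or omod output modifier, which the VOP2 madak/madmk forms do
# not support.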
...
# GCN-LABEL: name: no_fold_imm_madak_mac_clamp_f32
# GCN: %23:vgpr_32 = V_MOV_B32_e32 1090519040, implicit $exec
# GCN-NEXT: %24:vgpr_32 = nofpexcept V_MAC_F32_e64 0, killed %19, 0, killed %21, 0, %23, 1, 0, implicit $mode, implicit $exec

name: no_fold_imm_madak_mac_clamp_f32
tracksRegLiveness: true
registers:
  - { id: 0, class: sgpr_64 }
  - { id: 1, class: sreg_32_xm0 }
  - { id: 2, class: sgpr_32 }
  - { id: 3, class: vgpr_32 }
  - { id: 4, class: sreg_64_xexec }
  - { id: 5, class: sreg_64_xexec }
  - { id: 6, class: sreg_64_xexec }
  - { id: 7, class: sreg_32 }
  - { id: 8, class: sreg_32 }
  - { id: 9, class: sreg_32_xm0 }
  - { id: 10, class: sreg_64 }
  - { id: 11, class: sreg_32_xm0 }
  - { id: 12, class: sreg_32_xm0 }
  - { id: 13, class: sgpr_64 }
  - { id: 14, class: sgpr_128 }
  - { id: 15, class: sreg_32_xm0 }
  - { id: 16, class: sreg_64 }
  - { id: 17, class: sgpr_128 }
  - { id: 18, class: sgpr_128 }
  - { id: 19, class: vgpr_32 }
  - { id: 20, class: vreg_64 }
  - { id: 21, class: vgpr_32 }
  - { id: 22, class: vreg_64 }
  - { id: 23, class: vgpr_32 }
  - { id: 24, class: vgpr_32 }
  - { id: 25, class: vgpr_32 }
  - { id: 26, class: vreg_64 }
  - { id: 27, class: vgpr_32 }
  - { id: 28, class: vreg_64 }
  - { id: 29, class: vreg_64 }
liveins:
  - { reg: '$sgpr0_sgpr1', virtual-reg: '%0' }
  - { reg: '$vgpr0', virtual-reg: '%3' }
body: |
  bb.0:
    liveins: $sgpr0_sgpr1, $vgpr0

    %3 = COPY $vgpr0
    %0 = COPY $sgpr0_sgpr1
    %4 = S_LOAD_DWORDX2_IMM %0, 9, 0, 0
    %5 = S_LOAD_DWORDX2_IMM %0, 11, 0, 0
    %6 = S_LOAD_DWORDX2_IMM %0, 13, 0, 0
    %27 = V_ASHRREV_I32_e32 31, %3, implicit $exec
    %28 = REG_SEQUENCE %3, 1, %27, 2
    %11 = S_MOV_B32 61440
    %12 = S_MOV_B32 0
    %13 = REG_SEQUENCE killed %12, 1, killed %11, 2
    %14 = REG_SEQUENCE killed %5, 17, %13, 18
    %15 = S_MOV_B32 2
    %29 = V_LSHL_B64 killed %28, killed %15, implicit $exec
    %17 = REG_SEQUENCE killed %6, 17, %13, 18
    %18 = REG_SEQUENCE killed %4, 17, %13, 18
    %20 = COPY %29
    %19 = BUFFER_LOAD_DWORD_ADDR64 %20, killed %14, 0, 0, 0, 0, 0, 0, 0, implicit $exec
    %22 = COPY %29
    %21 = BUFFER_LOAD_DWORD_ADDR64 %22, killed %17, 0, 0, 0, 0, 0, 0, 0, implicit $exec
    %23 = V_MOV_B32_e32 1090519040, implicit $exec
    %24 = nofpexcept V_MAC_F32_e64 0, killed %19, 0, killed %21, 0, %23, 1, 0, implicit $mode, implicit $exec
    %26 = COPY %29
    BUFFER_STORE_DWORD_ADDR64 killed %24, %26, killed %18, 0, 0, 0, 0, 0, 0, 0, implicit $exec
    S_ENDPGM 0

...
---
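# As above, but with an omod output modifier instead of clamp on the V_MAC_F32.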
# GCN-LABEL: name: no_fold_imm_madak_mac_omod_f32
# GCN: %23:vgpr_32 = V_MOV_B32_e32 1090519040, implicit $exec
# GCN: %24:vgpr_32 = nofpexcept V_MAC_F32_e64 0, killed %19, 0, killed %21, 0, %23, 0, 2, implicit $mode, implicit $exec

name: no_fold_imm_madak_mac_omod_f32
tracksRegLiveness: true
registers:
  - { id: 0, class: sgpr_64 }
  - { id: 1, class: sreg_32_xm0 }
  - { id: 2, class: sgpr_32 }
  - { id: 3, class: vgpr_32 }
  - { id: 4, class: sreg_64_xexec }
  - { id: 5, class: sreg_64_xexec }
  - { id: 6, class: sreg_64_xexec }
  - { id: 7, class: sreg_32 }
  - { id: 8, class: sreg_32 }
  - { id: 9, class: sreg_32_xm0 }
  - { id: 10, class: sreg_64 }
  - { id: 11, class: sreg_32_xm0 }
  - { id: 12, class: sreg_32_xm0 }
  - { id: 13, class: sgpr_64 }
  - { id: 14, class: sgpr_128 }
  - { id: 15, class: sreg_32_xm0 }
  - { id: 16, class: sreg_64 }
  - { id: 17, class: sgpr_128 }
  - { id: 18, class: sgpr_128 }
  - { id: 19, class: vgpr_32 }
  - { id: 20, class: vreg_64 }
  - { id: 21, class: vgpr_32 }
  - { id: 22, class: vreg_64 }
  - { id: 23, class: vgpr_32 }
  - { id: 24, class: vgpr_32 }
  - { id: 25, class: vgpr_32 }
  - { id: 26, class: vreg_64 }
  - { id: 27, class: vgpr_32 }
  - { id: 28, class: vreg_64 }
  - { id: 29, class: vreg_64 }
liveins:
  - { reg: '$sgpr0_sgpr1', virtual-reg: '%0' }
  - { reg: '$vgpr0', virtual-reg: '%3' }
body: |
  bb.0:
    liveins: $sgpr0_sgpr1, $vgpr0

    %3 = COPY $vgpr0
    %0 = COPY $sgpr0_sgpr1
    %4 = S_LOAD_DWORDX2_IMM %0, 9, 0, 0
    %5 = S_LOAD_DWORDX2_IMM %0, 11, 0, 0
    %6 = S_LOAD_DWORDX2_IMM %0, 13, 0, 0
    %27 = V_ASHRREV_I32_e32 31, %3, implicit $exec
    %28 = REG_SEQUENCE %3, 1, %27, 2
    %11 = S_MOV_B32 61440
    %12 = S_MOV_B32 0
    %13 = REG_SEQUENCE killed %12, 1, killed %11, 2
    %14 = REG_SEQUENCE killed %5, 17, %13, 18
    %15 = S_MOV_B32 2
    %29 = V_LSHL_B64 killed %28, killed %15, implicit $exec
    %17 = REG_SEQUENCE killed %6, 17, %13, 18
    %18 = REG_SEQUENCE killed %4, 17, %13, 18
    %20 = COPY %29
    %19 = BUFFER_LOAD_DWORD_ADDR64 %20, killed %14, 0, 0, 0, 0, 0, 0, 0, implicit $exec
    %22 = COPY %29
    %21 = BUFFER_LOAD_DWORD_ADDR64 %22, killed %17, 0, 0, 0, 0, 0, 0, 0, implicit $exec
    %23 = V_MOV_B32_e32 1090519040, implicit $exec
    %24 = nofpexcept V_MAC_F32_e64 0, killed %19, 0, killed %21, 0, %23, 0, 2, implicit $mode, implicit $exec
    %26 = COPY %29
    BUFFER_STORE_DWORD_ADDR64 killed %24, %26, killed %18, 0, 0, 0, 0, 0, 0, 0, implicit $exec
    S_ENDPGM 0

...
---
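# Same check for V_MAD_F32 with the clamp modifier set.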
# GCN: name: no_fold_imm_madak_mad_clamp_f32
# GCN: %23:vgpr_32 = V_MOV_B32_e32 1090519040, implicit $exec
# GCN: %24:vgpr_32 = nofpexcept V_MAD_F32 0, killed %19, 0, killed %21, 0, %23, 1, 0, implicit $mode, implicit $exec

name: no_fold_imm_madak_mad_clamp_f32
tracksRegLiveness: true
registers:
  - { id: 0, class: sgpr_64 }
  - { id: 1, class: sreg_32_xm0 }
  - { id: 2, class: sgpr_32 }
  - { id: 3, class: vgpr_32 }
  - { id: 4, class: sreg_64_xexec }
  - { id: 5, class: sreg_64_xexec }
  - { id: 6, class: sreg_64_xexec }
  - { id: 7, class: sreg_32 }
  - { id: 8, class: sreg_32 }
  - { id: 9, class: sreg_32_xm0 }
  - { id: 10, class: sreg_64 }
  - { id: 11, class: sreg_32_xm0 }
  - { id: 12, class: sreg_32_xm0 }
  - { id: 13, class: sgpr_64 }
  - { id: 14, class: sgpr_128 }
  - { id: 15, class: sreg_32_xm0 }
  - { id: 16, class: sreg_64 }
  - { id: 17, class: sgpr_128 }
  - { id: 18, class: sgpr_128 }
  - { id: 19, class: vgpr_32 }
  - { id: 20, class: vreg_64 }
  - { id: 21, class: vgpr_32 }
  - { id: 22, class: vreg_64 }
  - { id: 23, class: vgpr_32 }
  - { id: 24, class: vgpr_32 }
  - { id: 25, class: vgpr_32 }
  - { id: 26, class: vreg_64 }
  - { id: 27, class: vgpr_32 }
  - { id: 28, class: vreg_64 }
  - { id: 29, class: vreg_64 }
liveins:
  - { reg: '$sgpr0_sgpr1', virtual-reg: '%0' }
  - { reg: '$vgpr0', virtual-reg: '%3' }
body: |
  bb.0:
    liveins: $sgpr0_sgpr1, $vgpr0

    %3 = COPY $vgpr0
    %0 = COPY $sgpr0_sgpr1
    %4 = S_LOAD_DWORDX2_IMM %0, 9, 0, 0
    %5 = S_LOAD_DWORDX2_IMM %0, 11, 0, 0
    %6 = S_LOAD_DWORDX2_IMM %0, 13, 0, 0
    %27 = V_ASHRREV_I32_e32 31, %3, implicit $exec
    %28 = REG_SEQUENCE %3, 1, %27, 2
    %11 = S_MOV_B32 61440
    %12 = S_MOV_B32 0
    %13 = REG_SEQUENCE killed %12, 1, killed %11, 2
    %14 = REG_SEQUENCE killed %5, 17, %13, 18
    %15 = S_MOV_B32 2
    %29 = V_LSHL_B64 killed %28, killed %15, implicit $exec
    %17 = REG_SEQUENCE killed %6, 17, %13, 18
    %18 = REG_SEQUENCE killed %4, 17, %13, 18
    %20 = COPY %29
    %19 = BUFFER_LOAD_DWORD_ADDR64 %20, killed %14, 0, 0, 0, 0, 0, 0, 0, implicit $exec
    %22 = COPY %29
    %21 = BUFFER_LOAD_DWORD_ADDR64 %22, killed %17, 0, 0, 0, 0, 0, 0, 0, implicit $exec
    %23 = V_MOV_B32_e32 1090519040, implicit $exec
    %24 = nofpexcept V_MAD_F32 0, killed %19, 0, killed %21, 0, %23, 1, 0, implicit $mode, implicit $exec
    %26 = COPY %29
    BUFFER_STORE_DWORD_ADDR64 killed %24, %26, killed %18, 0, 0, 0, 0, 0, 0, 0, implicit $exec
    S_ENDPGM 0

...
---
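# Same check for V_MAD_F32 with an omod output modifier set.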
# GCN: name: no_fold_imm_madak_mad_omod_f32
# GCN: %23:vgpr_32 = V_MOV_B32_e32 1090519040, implicit $exec
# GCN: %24:vgpr_32 = nofpexcept V_MAD_F32 0, killed %19, 0, killed %21, 0, %23, 0, 1, implicit $mode, implicit $exec

name: no_fold_imm_madak_mad_omod_f32
tracksRegLiveness: true
registers:
  - { id: 0, class: sgpr_64 }
  - { id: 1, class: sreg_32_xm0 }
  - { id: 2, class: sgpr_32 }
  - { id: 3, class: vgpr_32 }
  - { id: 4, class: sreg_64_xexec }
  - { id: 5, class: sreg_64_xexec }
  - { id: 6, class: sreg_64_xexec }
  - { id: 7, class: sreg_32 }
  - { id: 8, class: sreg_32 }
  - { id: 9, class: sreg_32_xm0 }
  - { id: 10, class: sreg_64 }
  - { id: 11, class: sreg_32_xm0 }
  - { id: 12, class: sreg_32_xm0 }
  - { id: 13, class: sgpr_64 }
  - { id: 14, class: sgpr_128 }
  - { id: 15, class: sreg_32_xm0 }
  - { id: 16, class: sreg_64 }
  - { id: 17, class: sgpr_128 }
  - { id: 18, class: sgpr_128 }
  - { id: 19, class: vgpr_32 }
  - { id: 20, class: vreg_64 }
  - { id: 21, class: vgpr_32 }
  - { id: 22, class: vreg_64 }
  - { id: 23, class: vgpr_32 }
  - { id: 24, class: vgpr_32 }
  - { id: 25, class: vgpr_32 }
  - { id: 26, class: vreg_64 }
  - { id: 27, class: vgpr_32 }
  - { id: 28, class: vreg_64 }
  - { id: 29, class: vreg_64 }
liveins:
  - { reg: '$sgpr0_sgpr1', virtual-reg: '%0' }
  - { reg: '$vgpr0', virtual-reg: '%3' }
body: |
  bb.0:
    liveins: $sgpr0_sgpr1, $vgpr0

    %3 = COPY $vgpr0
    %0 = COPY $sgpr0_sgpr1
    %4 = S_LOAD_DWORDX2_IMM %0, 9, 0, 0
    %5 = S_LOAD_DWORDX2_IMM %0, 11, 0, 0
    %6 = S_LOAD_DWORDX2_IMM %0, 13, 0, 0
    %27 = V_ASHRREV_I32_e32 31, %3, implicit $exec
    %28 = REG_SEQUENCE %3, 1, %27, 2
    %11 = S_MOV_B32 61440
    %12 = S_MOV_B32 0
    %13 = REG_SEQUENCE killed %12, 1, killed %11, 2
    %14 = REG_SEQUENCE killed %5, 17, %13, 18
    %15 = S_MOV_B32 2
    %29 = V_LSHL_B64 killed %28, killed %15, implicit $exec
    %17 = REG_SEQUENCE killed %6, 17, %13, 18
    %18 = REG_SEQUENCE killed %4, 17, %13, 18
    %20 = COPY %29
    %19 = BUFFER_LOAD_DWORD_ADDR64 %20, killed %14, 0, 0, 0, 0, 0, 0, 0, implicit $exec
    %22 = COPY %29
    %21 = BUFFER_LOAD_DWORD_ADDR64 %22, killed %17, 0, 0, 0, 0, 0, 0, 0, implicit $exec
    %23 = V_MOV_B32_e32 1090519040, implicit $exec
    %24 = nofpexcept V_MAD_F32 0, killed %19, 0, killed %21, 0, %23, 0, 1, implicit $mode, implicit $exec
    %26 = COPY %29
    BUFFER_STORE_DWORD_ADDR64 killed %24, %26, killed %18, 0, 0, 0, 0, 0, 0, 0, implicit $exec
    S_ENDPGM 0

...