llvm-project/llvm/test/CodeGen/AArch64/GlobalISel/select-sextload.mir

[globalisel] Update GlobalISel emitter to match new representation of extending loads

Summary:
Previously, an extending load was represented as (G_*EXT (G_LOAD x)). This had a few drawbacks:

* G_LOAD had to be legal for all sizes you could extend from, even if registers didn't naturally hold those sizes.
* All sizes you could extend from had to be allocatable just in case the extend went missing (e.g. by optimization).
* At minimum, G_*EXT and G_TRUNC had to be legal for these sizes. As we improve optimization of extends and truncates, this legality requirement would spread without considerable care w.r.t. when certain combines were permitted.
* The SelectionDAG importer required some ugly and fragile pattern rewriting to translate patterns into this style.

This patch changes the representation to:

* (G_[SZ]EXTLOAD x)
* (G_LOAD x) any-extends when MMO.getSize() * 8 < ResultTy.getSizeInBits()

which resolves these issues by allowing targets to work entirely in their native register sizes, and by having a more direct translation from SelectionDAG patterns.

Each extending load can be lowered by the legalizer into separate extends and loads; however, a target that supports s1 will need the any-extending load to extend to at least s8, since LLVM does not represent memory accesses smaller than 8 bits. The legalizer can widenScalar G_LOAD into an any-extending load, but sign/zero-extending loads need help from something else, such as a combiner pass. A follow-up patch that adds combiner helpers for this will follow.

The new representation requires that the MMO correctly reflect the memory access, so this has been corrected in a couple of tests. I've also moved the extending loads to their own tests since they are (mostly) separate opcodes now. Additionally, the rewrite appears to have invalidated two tests from select-with-no-legality-check.mir since the matcher table no longer contains loads that result in s1's, and they aren't legal in AArch64 anymore.

Depends on D45540

Reviewers: ab, aditya_nandakumar, bogner, rtereshin, volkan, rovka, javed.absar

Reviewed By: rtereshin

Subscribers: javed.absar, llvm-commits, kristof.beyls

Differential Revision: https://reviews.llvm.org/D45541

llvm-svn: 331601
2018-05-06 04:53:24 +08:00
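For illustration, a minimal generic-MIR sketch of the forms described above, using the same 2-byte load and s32 result as the tests in this file (the virtual register numbers and the %ir.addr name are only illustrative):

  ; Old representation: the extend was a separate generic instruction.
  %1:_(s16) = G_LOAD %0(p0) :: (load 2 from %ir.addr)
  %2:_(s32) = G_SEXT %1(s16)

  ; New representation: a single extending-load opcode; the MMO records the
  ; 2-byte memory access.
  %2:_(s32) = G_SEXTLOAD %0(p0) :: (load 2 from %ir.addr)

  ; A plain G_LOAD whose result is wider than its MMO any-extends:
  ; MMO.getSize() * 8 = 16 < 32 = ResultTy.getSizeInBits().
  %3:_(s32) = G_LOAD %0(p0) :: (load 2 from %ir.addr)
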
# NOTE: Assertions have been autogenerated by utils/update_mir_test_checks.py
# RUN: llc -mtriple=aarch64-- -run-pass=instruction-select -verify-machineinstrs %s -o - | FileCheck %s
--- |
  target datalayout = "e-m:o-i64:64-i128:128-n32:64-S128"

  define void @sextload_s32_from_s16(i16 *%addr) { ret void }
  define void @sextload_s32_from_s16_not_combined(i16 *%addr) { ret void }
...
---
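# A G_SEXTLOAD of a 2-byte value into an s32 result should select directly to
# the sign-extending load LDRSHWui.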
name: sextload_s32_from_s16
legalized: true
regBankSelected: true
body: |
  bb.0:
    liveins: $x0

    ; CHECK-LABEL: name: sextload_s32_from_s16
    ; CHECK: [[COPY:%[0-9]+]]:gpr64sp = COPY $x0
    ; CHECK: [[T0:%[0-9]+]]:gpr32 = LDRSHWui [[COPY]], 0 :: (load 2 from %ir.addr)
    ; CHECK: $w0 = COPY [[T0]]
    %0:gpr(p0) = COPY $x0
    %1:gpr(s32) = G_SEXTLOAD %0 :: (load 2 from %ir.addr)
    $w0 = COPY %1(s32)
...
---
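# If the sign-extend was not combined into the load, the G_LOAD and G_SEXT
# select separately: a zero-extending LDRHHui followed by a sign-extend of the
# low 16 bits via SBFMWri.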
name: sextload_s32_from_s16_not_combined
legalized: true
regBankSelected: true
body: |
  bb.0:
    liveins: $x0

    ; CHECK-LABEL: name: sextload_s32_from_s16_not_combined
    ; CHECK: [[COPY:%[0-9]+]]:gpr64sp = COPY $x0
    ; CHECK: [[T0:%[0-9]+]]:gpr32 = LDRHHui [[COPY]], 0 :: (load 2 from %ir.addr)
    ; CHECK: [[T1:%[0-9]+]]:gpr32 = SBFMWri [[T0]], 0, 15
    ; CHECK: $w0 = COPY [[T1]]
    %0:gpr(p0) = COPY $x0
    %1:gpr(s16) = G_LOAD %0 :: (load 2 from %ir.addr)
    %2:gpr(s32) = G_SEXT %1
    $w0 = COPY %2(s32)
...