llvm-project/llvm/test/Transforms/CanonicalizeFreezeInLoops/aarch64.ll

; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py
; RUN: llc -mtriple=arm64-unknown-unknown -O3 < %s | FileCheck %s
; This test should show that @f and @f_without_freeze generate equivalent
; assembly
; REQUIRES: aarch64-registered-target
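
; A minimal sketch (comments only, not compiled or checked by FileCheck) of the
; form @f is expected to take once CanonicalizeFreezeInLoops has run, assuming
; the pass pushes the freeze off the induction increment and onto the
; loop-invariant start and step. Both are constants (0 and 1) here, so the
; freezes fold away and the loop body matches @f_without_freeze, which is why
; the two functions should produce equivalent assembly:
;
;   loop:
;     %i = phi i32 [0, %entry], [%i.next, %loop]
;     %i.next = add i32 %i, 1        ; no freeze left inside the loop
;     %j = add i32 %m, %i.next
;     %q = getelementptr i8, i8* %p, i32 %j
;     store i8 0, i8* %q
;     %cond = icmp eq i32 %i.next, %n
;     br i1 %cond, label %exit, label %loop
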
define void @f(i8* %p, i32 %n, i32 %m) {
; CHECK-LABEL: f:
; CHECK: // %bb.0: // %entry
; CHECK-NEXT: add w8, w2, #1
; CHECK-NEXT: .LBB0_1: // %loop
; CHECK-NEXT: // =>This Inner Loop Header: Depth=1
; CHECK-NEXT: strb wzr, [x0, w8, sxtw]
; CHECK-NEXT: add w8, w8, #1
; CHECK-NEXT: subs w1, w1, #1
; CHECK-NEXT: b.ne .LBB0_1
; CHECK-NEXT: // %bb.2: // %exit
; CHECK-NEXT: ret
entry:
  br label %loop

loop:
  %i = phi i32 [0, %entry], [%i.next, %loop]
  %i.next = add i32 %i, 1
  %i.next.fr = freeze i32 %i.next
  %j = add i32 %m, %i.next.fr
  %q = getelementptr i8, i8* %p, i32 %j
  store i8 0, i8* %q
  %cond = icmp eq i32 %i.next.fr, %n
  br i1 %cond, label %exit, label %loop

exit:
  ret void
}

define void @f_without_freeze(i8* %p, i32 %n, i32 %m) {
; CHECK-LABEL: f_without_freeze:
; CHECK: // %bb.0: // %entry
; CHECK-NEXT: add w8, w2, #1
; CHECK-NEXT: .LBB1_1: // %loop
; CHECK-NEXT: // =>This Inner Loop Header: Depth=1
; CHECK-NEXT: strb wzr, [x0, w8, sxtw]
; CHECK-NEXT: subs w1, w1, #1
; CHECK-NEXT: add w8, w8, #1
; CHECK-NEXT: b.ne .LBB1_1
; CHECK-NEXT: // %bb.2: // %exit
; CHECK-NEXT: ret
entry:
  br label %loop

loop:
  %i = phi i32 [0, %entry], [%i.next, %loop]
  %i.next = add i32 %i, 1
  %j = add i32 %m, %i.next
  %q = getelementptr i8, i8* %p, i32 %j
  store i8 0, i8* %q
  %cond = icmp eq i32 %i.next, %n
  br i1 %cond, label %exit, label %loop

exit:
  ret void
}
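
; The CHECK lines above can be regenerated after a codegen change with
; utils/update_llc_test_checks.py (see the NOTE at the top of the file), for
; example from a built llvm-project tree (the build directory below is only
; illustrative):
;   llvm/utils/update_llc_test_checks.py --llc-binary=build/bin/llc \
;     llvm/test/Transforms/CanonicalizeFreezeInLoops/aarch64.ll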