llvm-project/llvm/test/CodeGen/RISCV/umulo-128-legalisation-lowe...

[SelectionDAG] Improve the legalisation lowering of UMULO.

There is no way in the universe that doing a full-width division in
software will be faster than doing the overflowing multiplication in
software in the first place, especially given that this same full-width
multiplication needs to be done anyway.

This patch replaces the previous implementation with a direct lowering
into an overflowing multiplication algorithm based on half-width
operations.

Correctness of the algorithm was verified by exhaustively checking the
output of this algorithm for overflowing multiplication of 16-bit
integers against an obviously correct widening multiplication. Barring
any oversights introduced by porting the algorithm to DAG, confidence in
the correctness of this algorithm is extremely high.

The following table shows the change in both t = runtime and s = space.
The change is expressed as a multiplier of the original, so anything
under 1 is "better" and anything above 1 is worse.

+-------+-----------+-----------+-------------+-------------+
| Arch  | u64*u64 t | u64*u64 s | u128*u128 t | u128*u128 s |
+-------+-----------+-----------+-------------+-------------+
| X64   |     -     |     -     |    ~0.5     |    ~0.64    |
| i686  |   ~0.5    |  ~0.6666  |    ~0.05    |    ~0.9     |
| armv7 |     -     |   ~0.75   |      -      |    ~1.4     |
+-------+-----------+-----------+-------------+-------------+

Performance numbers were collected by running the overflowing
multiplication in a loop under `perf` on two x86_64 machines (one Intel
Haswell, the other AMD Ryzen). Size numbers were collected by looking at
the size of the function containing an overflowing multiply in a loop.

All in all, both performance and size have improved, except on armv7,
where code size has regressed for the 128-bit multiply. The u128*u128
overflowing multiply on 32-bit platforms benefits from this change the
most, taking only 5% of the time the original algorithm needed to
calculate the same thing.

The final benefit of this change is that LLVM is now capable of lowering
the overflowing unsigned multiply for integers of any bit-width, as long
as the target is capable of lowering regular multiplication for the same
bit-width. Previously, the 128-bit overflowing multiply was the widest
possible.

Patch by Simonas Kazlauskas!

Differential Revision: https://reviews.llvm.org/D50310

llvm-svn: 339922
2018-08-17 02:39:39 +08:00
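The commit message describes the half-width decomposition only in prose.
The C sketch below reproduces the idea at 16 bits (the width the commit
says was exhaustively verified), building the overflowing multiply purely
from 8-bit-half operations and then running the exhaustive check against
a widening multiply. This is an illustrative sketch, not LLVM's actual
DAG lowering; the function name umulo16 and all local variable names are
invented for this example.

#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical sketch: a 16-bit overflowing unsigned multiply built only
 * from 8-bit half-width operations. Writing a = ahi:alo, b = bhi:blo,
 *   a*b = (ahi*bhi << 16) + ((ahi*blo + alo*bhi) << 8) + alo*blo,
 * so anything landing at or above bit 16 signals overflow. */
static bool umulo16(uint16_t a, uint16_t b, uint16_t *out) {
    uint8_t alo = a & 0xff, ahi = a >> 8;
    uint8_t blo = b & 0xff, bhi = b >> 8;

    uint16_t lolo = (uint16_t)alo * blo;  /* low product, bits 0..15   */
    uint16_t hilo = (uint16_t)ahi * blo;  /* cross product, bits 8..23 */
    uint16_t lohi = (uint16_t)alo * bhi;  /* cross product, bits 8..23 */

    /* ahi*bhi contributes only at bit 16 and above, so it overflows
     * whenever both high halves are nonzero; the upper bytes of the
     * cross products likewise land above bit 15. */
    bool ovf = (ahi != 0 && bhi != 0) || (hilo >> 8) || (lohi >> 8);

    /* Sum the surviving byte of each cross product into the high byte
     * of the result and catch any carry out of bit 15. */
    uint16_t mid = (uint16_t)((uint8_t)hilo + (uint8_t)lohi + (lolo >> 8));
    ovf |= mid >> 8;

    *out = (uint16_t)((mid & 0xff) << 8) | (lolo & 0xff);
    return ovf;
}

int main(void) {
    /* The exhaustive verification the commit message describes: compare
     * every 16x16-bit pair against an obviously correct 32-bit widening
     * multiply. (All 2^32 pairs, so this takes a while.) */
    for (uint32_t a = 0; a <= 0xffff; a++) {
        for (uint32_t b = 0; b <= 0xffff; b++) {
            uint16_t got;
            bool ovf = umulo16((uint16_t)a, (uint16_t)b, &got);
            uint32_t wide = a * b;
            if (got != (uint16_t)wide || ovf != (wide > 0xffff)) {
                printf("mismatch: %u * %u\n", a, b);
                return 1;
            }
        }
    }
    puts("all cases match");
    return 0;
}

The RISCV32 assembly checked below follows the same shape at 32-bit
granularity: one __multi3 call for the low product, mul/mulhu for the
cross products, and snez/or chains accumulating the overflow bit.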
; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py
; RUN: llc < %s -mtriple=riscv32 -mattr=+m | FileCheck %s --check-prefixes=RISCV32
define { i128, i8 } @muloti_test(i128 %l, i128 %r) #0 {
; RISCV32-LABEL: muloti_test:
; RISCV32: # %bb.0: # %start
; RISCV32-NEXT: addi sp, sp, -96
; RISCV32-NEXT: sw ra, 92(sp) # 4-byte Folded Spill
; RISCV32-NEXT: sw s0, 88(sp) # 4-byte Folded Spill
; RISCV32-NEXT: sw s1, 84(sp) # 4-byte Folded Spill
; RISCV32-NEXT: sw s2, 80(sp) # 4-byte Folded Spill
; RISCV32-NEXT: sw s3, 76(sp) # 4-byte Folded Spill
; RISCV32-NEXT: sw s4, 72(sp) # 4-byte Folded Spill
; RISCV32-NEXT: sw s5, 68(sp) # 4-byte Folded Spill
; RISCV32-NEXT: sw s6, 64(sp) # 4-byte Folded Spill
; RISCV32-NEXT: sw s7, 60(sp) # 4-byte Folded Spill
; RISCV32-NEXT: sw s8, 56(sp) # 4-byte Folded Spill
; RISCV32-NEXT: lw s2, 12(a1)
; RISCV32-NEXT: lw s6, 8(a1)
; RISCV32-NEXT: lw s3, 12(a2)
; RISCV32-NEXT: lw s7, 8(a2)
; RISCV32-NEXT: lw s0, 0(a1)
; RISCV32-NEXT: lw s8, 4(a1)
; RISCV32-NEXT: lw s1, 0(a2)
; RISCV32-NEXT: lw s5, 4(a2)
; RISCV32-NEXT: mv s4, a0
; RISCV32-NEXT: sw zero, 20(sp)
; RISCV32-NEXT: sw zero, 16(sp)
; RISCV32-NEXT: sw zero, 36(sp)
; RISCV32-NEXT: sw zero, 32(sp)
; RISCV32-NEXT: sw s5, 12(sp)
; RISCV32-NEXT: sw s1, 8(sp)
; RISCV32-NEXT: sw s8, 28(sp)
; RISCV32-NEXT: addi a0, sp, 40
; RISCV32-NEXT: addi a1, sp, 24
; RISCV32-NEXT: addi a2, sp, 8
; RISCV32-NEXT: sw s0, 24(sp)
; RISCV32-NEXT: call __multi3@plt
; RISCV32-NEXT: mul a0, s8, s7
; RISCV32-NEXT: mul a1, s3, s0
; RISCV32-NEXT: add a0, a1, a0
; RISCV32-NEXT: mulhu a5, s7, s0
; RISCV32-NEXT: add a0, a5, a0
; RISCV32-NEXT: mul a1, s5, s6
; RISCV32-NEXT: mul a2, s2, s1
; RISCV32-NEXT: add a1, a2, a1
; RISCV32-NEXT: mulhu t0, s6, s1
; RISCV32-NEXT: add t1, t0, a1
; RISCV32-NEXT: add a6, t1, a0
; RISCV32-NEXT: mul a1, s7, s0
; RISCV32-NEXT: mul a3, s6, s1
; RISCV32-NEXT: add a4, a3, a1
; RISCV32-NEXT: lw a1, 52(sp)
; RISCV32-NEXT: lw a2, 48(sp)
; RISCV32-NEXT: sltu a3, a4, a3
; RISCV32-NEXT: add a3, a6, a3
; RISCV32-NEXT: add a3, a1, a3
; RISCV32-NEXT: add a6, a2, a4
; RISCV32-NEXT: sltu a2, a6, a2
; RISCV32-NEXT: add a7, a3, a2
; RISCV32-NEXT: beq a7, a1, .LBB0_2
; RISCV32-NEXT: # %bb.1: # %start
; RISCV32-NEXT: sltu a2, a7, a1
; RISCV32-NEXT: .LBB0_2: # %start
; RISCV32-NEXT: sltu a0, a0, a5
; RISCV32-NEXT: snez a1, s8
; RISCV32-NEXT: snez a3, s3
; RISCV32-NEXT: and a1, a3, a1
; RISCV32-NEXT: mulhu a3, s3, s0
; RISCV32-NEXT: snez a3, a3
; RISCV32-NEXT: or a1, a1, a3
; RISCV32-NEXT: mulhu a3, s8, s7
; RISCV32-NEXT: snez a3, a3
; RISCV32-NEXT: or a1, a1, a3
; RISCV32-NEXT: or a0, a1, a0
; RISCV32-NEXT: sltu a1, t1, t0
; RISCV32-NEXT: snez a3, s5
; RISCV32-NEXT: snez a4, s2
; RISCV32-NEXT: and a3, a4, a3
; RISCV32-NEXT: mulhu a4, s2, s1
; RISCV32-NEXT: snez a4, a4
; RISCV32-NEXT: or a3, a3, a4
; RISCV32-NEXT: mulhu a4, s5, s6
; RISCV32-NEXT: snez a4, a4
; RISCV32-NEXT: or a3, a3, a4
; RISCV32-NEXT: or a1, a3, a1
; RISCV32-NEXT: or a3, s7, s3
; RISCV32-NEXT: snez a3, a3
; RISCV32-NEXT: or a4, s6, s2
; RISCV32-NEXT: snez a4, a4
; RISCV32-NEXT: and a3, a4, a3
; RISCV32-NEXT: or a1, a3, a1
; RISCV32-NEXT: or a0, a1, a0
; RISCV32-NEXT: lw a1, 44(sp)
; RISCV32-NEXT: lw a3, 40(sp)
; RISCV32-NEXT: or a0, a0, a2
; RISCV32-NEXT: andi a0, a0, 1
; RISCV32-NEXT: sw a1, 4(s4)
; RISCV32-NEXT: sw a3, 0(s4)
; RISCV32-NEXT: sw a6, 8(s4)
; RISCV32-NEXT: sw a7, 12(s4)
; RISCV32-NEXT: sb a0, 16(s4)
; RISCV32-NEXT: lw s8, 56(sp) # 4-byte Folded Reload
; RISCV32-NEXT: lw s7, 60(sp) # 4-byte Folded Reload
; RISCV32-NEXT: lw s6, 64(sp) # 4-byte Folded Reload
; RISCV32-NEXT: lw s5, 68(sp) # 4-byte Folded Reload
; RISCV32-NEXT: lw s4, 72(sp) # 4-byte Folded Reload
; RISCV32-NEXT: lw s3, 76(sp) # 4-byte Folded Reload
; RISCV32-NEXT: lw s2, 80(sp) # 4-byte Folded Reload
; RISCV32-NEXT: lw s1, 84(sp) # 4-byte Folded Reload
; RISCV32-NEXT: lw s0, 88(sp) # 4-byte Folded Reload
; RISCV32-NEXT: lw ra, 92(sp) # 4-byte Folded Reload
; RISCV32-NEXT: addi sp, sp, 96
; RISCV32-NEXT: ret
start:
%0 = tail call { i128, i1 } @llvm.umul.with.overflow.i128(i128 %l, i128 %r) #2
%1 = extractvalue { i128, i1 } %0, 0
%2 = extractvalue { i128, i1 } %0, 1
%3 = zext i1 %2 to i8
%4 = insertvalue { i128, i8 } undef, i128 %1, 0
%5 = insertvalue { i128, i8 } %4, i8 %3, 1
ret { i128, i8 } %5
}
; Function Attrs: nounwind readnone speculatable
declare { i128, i1 } @llvm.umul.with.overflow.i128(i128, i128) #1
attributes #0 = { nounwind readnone }
attributes #1 = { nounwind readnone speculatable }
attributes #2 = { nounwind }