; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py
; RUN: llc -relocation-model=static -verify-machineinstrs -mcpu=pwr7 \
; RUN:   -mtriple=powerpc64-unknown-linux-gnu -mattr=+vsx \
; RUN:   -ppc-vsr-nums-as-vr -ppc-asm-full-reg-names < %s | FileCheck %s
; RUN: llc -relocation-model=static -verify-machineinstrs -mcpu=pwr7 \
; RUN:   -mtriple=powerpc64-unknown-linux-gnu -mattr=+vsx \
; RUN:   -ppc-vsr-nums-as-vr -ppc-asm-full-reg-names < %s | FileCheck \
; RUN:   -check-prefix=CHECK-REG %s
; RUN: llc -relocation-model=static -verify-machineinstrs -mcpu=pwr7 \
; RUN:   -mtriple=powerpc64-unknown-linux-gnu -mattr=+vsx -fast-isel -O0 \
; RUN:   -ppc-vsr-nums-as-vr -ppc-asm-full-reg-names < %s | FileCheck \
; RUN:   -check-prefix=CHECK-FISL %s
; RUN: llc -relocation-model=static -verify-machineinstrs -mcpu=pwr8 \
; RUN:   -mtriple=powerpc64le-unknown-linux-gnu -mattr=+vsx \
; RUN:   -ppc-vsr-nums-as-vr -ppc-asm-full-reg-names < %s | FileCheck \
; RUN:   -check-prefix=CHECK-LE %s
define double @test1(double %a, double %b) {
; CHECK-LABEL: test1:
; CHECK: # %bb.0: # %entry
; CHECK-NEXT: xsmuldp f1, f1, f2
; CHECK-NEXT: blr
;
; CHECK-REG-LABEL: test1:
; CHECK-REG: # %bb.0: # %entry
; CHECK-REG-NEXT: xsmuldp f1, f1, f2
; CHECK-REG-NEXT: blr
;
; CHECK-FISL-LABEL: test1:
; CHECK-FISL: # %bb.0: # %entry
; CHECK-FISL-NEXT: xsmuldp f1, f1, f2
; CHECK-FISL-NEXT: blr
;
; CHECK-LE-LABEL: test1:
; CHECK-LE: # %bb.0: # %entry
; CHECK-LE-NEXT: xsmuldp f1, f1, f2
; CHECK-LE-NEXT: blr
entry:
  %v = fmul double %a, %b
  ret double %v
}
define double @test2(double %a, double %b) {
; CHECK-LABEL: test2:
; CHECK: # %bb.0: # %entry
; CHECK-NEXT: xsdivdp f1, f1, f2
; CHECK-NEXT: blr
;
; CHECK-REG-LABEL: test2:
; CHECK-REG: # %bb.0: # %entry
; CHECK-REG-NEXT: xsdivdp f1, f1, f2
; CHECK-REG-NEXT: blr
;
; CHECK-FISL-LABEL: test2:
; CHECK-FISL: # %bb.0: # %entry
; CHECK-FISL-NEXT: xsdivdp f1, f1, f2
; CHECK-FISL-NEXT: blr
;
; CHECK-LE-LABEL: test2:
; CHECK-LE: # %bb.0: # %entry
; CHECK-LE-NEXT: xsdivdp f1, f1, f2
; CHECK-LE-NEXT: blr
entry:
  %v = fdiv double %a, %b
  ret double %v
}
define double @test3(double %a, double %b) {
; CHECK-LABEL: test3:
; CHECK: # %bb.0: # %entry
; CHECK-NEXT: xsadddp f1, f1, f2
; CHECK-NEXT: blr
;
; CHECK-REG-LABEL: test3:
; CHECK-REG: # %bb.0: # %entry
; CHECK-REG-NEXT: xsadddp f1, f1, f2
; CHECK-REG-NEXT: blr
;
; CHECK-FISL-LABEL: test3:
; CHECK-FISL: # %bb.0: # %entry
; CHECK-FISL-NEXT: xsadddp f1, f1, f2
; CHECK-FISL-NEXT: blr
;
; CHECK-LE-LABEL: test3:
; CHECK-LE: # %bb.0: # %entry
; CHECK-LE-NEXT: xsadddp f1, f1, f2
; CHECK-LE-NEXT: blr
entry:
  %v = fadd double %a, %b
  ret double %v
}
define <2 x double> @test4(<2 x double> %a, <2 x double> %b) {
; CHECK-LABEL: test4:
; CHECK: # %bb.0: # %entry
; CHECK-NEXT: xvadddp v2, v2, v3
; CHECK-NEXT: blr
;
; CHECK-REG-LABEL: test4:
; CHECK-REG: # %bb.0: # %entry
; CHECK-REG-NEXT: xvadddp v2, v2, v3
; CHECK-REG-NEXT: blr
;
; CHECK-FISL-LABEL: test4:
; CHECK-FISL: # %bb.0: # %entry
; CHECK-FISL-NEXT: xvadddp v2, v2, v3
; CHECK-FISL-NEXT: blr
;
; CHECK-LE-LABEL: test4:
; CHECK-LE: # %bb.0: # %entry
; CHECK-LE-NEXT: xvadddp v2, v2, v3
; CHECK-LE-NEXT: blr
entry:
  %v = fadd <2 x double> %a, %b
  ret <2 x double> %v
}
define <4 x i32> @test5(<4 x i32> %a, <4 x i32> %b) {
; CHECK-LABEL: test5:
; CHECK: # %bb.0: # %entry
; CHECK-NEXT: xxlxor v2, v2, v3
; CHECK-NEXT: blr
;
; CHECK-REG-LABEL: test5:
; CHECK-REG: # %bb.0: # %entry
; CHECK-REG-NEXT: xxlxor v2, v2, v3
; CHECK-REG-NEXT: blr
;
; CHECK-FISL-LABEL: test5:
; CHECK-FISL: # %bb.0: # %entry
; CHECK-FISL-NEXT: xxlxor v2, v2, v3
; CHECK-FISL-NEXT: blr
;
; CHECK-LE-LABEL: test5:
; CHECK-LE: # %bb.0: # %entry
; CHECK-LE-NEXT: xxlxor v2, v2, v3
; CHECK-LE-NEXT: blr
entry:
  %v = xor <4 x i32> %a, %b
  ret <4 x i32> %v
}
define <8 x i16> @test6(<8 x i16> %a, <8 x i16> %b) {
; CHECK-LABEL: test6:
; CHECK: # %bb.0: # %entry
; CHECK-NEXT: xxlxor v2, v2, v3
; CHECK-NEXT: blr
;
; CHECK-REG-LABEL: test6:
; CHECK-REG: # %bb.0: # %entry
; CHECK-REG-NEXT: xxlxor v2, v2, v3
; CHECK-REG-NEXT: blr
;
; CHECK-FISL-LABEL: test6:
; CHECK-FISL: # %bb.0: # %entry
; CHECK-FISL-NEXT: xxlxor v2, v2, v3
; CHECK-FISL-NEXT: blr
;
; CHECK-LE-LABEL: test6:
; CHECK-LE: # %bb.0: # %entry
; CHECK-LE-NEXT: xxlxor v2, v2, v3
; CHECK-LE-NEXT: blr
entry:
  %v = xor <8 x i16> %a, %b
  ret <8 x i16> %v
}
define <16 x i8> @test7(<16 x i8> %a, <16 x i8> %b) {
; CHECK-LABEL: test7:
; CHECK: # %bb.0: # %entry
; CHECK-NEXT: xxlxor v2, v2, v3
; CHECK-NEXT: blr
;
; CHECK-REG-LABEL: test7:
; CHECK-REG: # %bb.0: # %entry
; CHECK-REG-NEXT: xxlxor v2, v2, v3
; CHECK-REG-NEXT: blr
;
; CHECK-FISL-LABEL: test7:
; CHECK-FISL: # %bb.0: # %entry
; CHECK-FISL-NEXT: xxlxor v2, v2, v3
; CHECK-FISL-NEXT: blr
;
; CHECK-LE-LABEL: test7:
; CHECK-LE: # %bb.0: # %entry
; CHECK-LE-NEXT: xxlxor v2, v2, v3
; CHECK-LE-NEXT: blr
entry:
  %v = xor <16 x i8> %a, %b
  ret <16 x i8> %v
}
define <4 x i32> @test8(<4 x i32> %a, <4 x i32> %b) {
; CHECK-LABEL: test8:
; CHECK: # %bb.0: # %entry
; CHECK-NEXT: xxlor v2, v2, v3
; CHECK-NEXT: blr
;
; CHECK-REG-LABEL: test8:
; CHECK-REG: # %bb.0: # %entry
; CHECK-REG-NEXT: xxlor v2, v2, v3
; CHECK-REG-NEXT: blr
;
; CHECK-FISL-LABEL: test8:
; CHECK-FISL: # %bb.0: # %entry
; CHECK-FISL-NEXT: xxlor v2, v2, v3
; CHECK-FISL-NEXT: blr
;
; CHECK-LE-LABEL: test8:
; CHECK-LE: # %bb.0: # %entry
; CHECK-LE-NEXT: xxlor v2, v2, v3
; CHECK-LE-NEXT: blr
entry:
  %v = or <4 x i32> %a, %b
  ret <4 x i32> %v
}
define <8 x i16> @test9(<8 x i16> %a, <8 x i16> %b) {
; CHECK-LABEL: test9:
; CHECK: # %bb.0: # %entry
; CHECK-NEXT: xxlor v2, v2, v3
; CHECK-NEXT: blr
;
; CHECK-REG-LABEL: test9:
; CHECK-REG: # %bb.0: # %entry
; CHECK-REG-NEXT: xxlor v2, v2, v3
; CHECK-REG-NEXT: blr
;
; CHECK-FISL-LABEL: test9:
; CHECK-FISL: # %bb.0: # %entry
; CHECK-FISL-NEXT: xxlor v2, v2, v3
; CHECK-FISL-NEXT: blr
;
; CHECK-LE-LABEL: test9:
; CHECK-LE: # %bb.0: # %entry
; CHECK-LE-NEXT: xxlor v2, v2, v3
; CHECK-LE-NEXT: blr
entry:
  %v = or <8 x i16> %a, %b
  ret <8 x i16> %v
}
define <16 x i8> @test10(<16 x i8> %a, <16 x i8> %b) {
; CHECK-LABEL: test10:
; CHECK: # %bb.0: # %entry
; CHECK-NEXT: xxlor v2, v2, v3
; CHECK-NEXT: blr
;
; CHECK-REG-LABEL: test10:
; CHECK-REG: # %bb.0: # %entry
; CHECK-REG-NEXT: xxlor v2, v2, v3
; CHECK-REG-NEXT: blr
;
; CHECK-FISL-LABEL: test10:
; CHECK-FISL: # %bb.0: # %entry
; CHECK-FISL-NEXT: xxlor v2, v2, v3
; CHECK-FISL-NEXT: blr
;
; CHECK-LE-LABEL: test10:
; CHECK-LE: # %bb.0: # %entry
; CHECK-LE-NEXT: xxlor v2, v2, v3
; CHECK-LE-NEXT: blr
entry:
  %v = or <16 x i8> %a, %b
  ret <16 x i8> %v
}
define <4 x i32> @test11(<4 x i32> %a, <4 x i32> %b) {
; CHECK-LABEL: test11:
; CHECK: # %bb.0: # %entry
; CHECK-NEXT: xxland v2, v2, v3
; CHECK-NEXT: blr
;
; CHECK-REG-LABEL: test11:
; CHECK-REG: # %bb.0: # %entry
; CHECK-REG-NEXT: xxland v2, v2, v3
; CHECK-REG-NEXT: blr
;
; CHECK-FISL-LABEL: test11:
; CHECK-FISL: # %bb.0: # %entry
; CHECK-FISL-NEXT: xxland v2, v2, v3
; CHECK-FISL-NEXT: blr
;
; CHECK-LE-LABEL: test11:
; CHECK-LE: # %bb.0: # %entry
; CHECK-LE-NEXT: xxland v2, v2, v3
; CHECK-LE-NEXT: blr
entry:
  %v = and <4 x i32> %a, %b
  ret <4 x i32> %v
}
define <8 x i16> @test12(<8 x i16> %a, <8 x i16> %b) {
; CHECK-LABEL: test12:
; CHECK: # %bb.0: # %entry
; CHECK-NEXT: xxland v2, v2, v3
; CHECK-NEXT: blr
;
; CHECK-REG-LABEL: test12:
; CHECK-REG: # %bb.0: # %entry
; CHECK-REG-NEXT: xxland v2, v2, v3
; CHECK-REG-NEXT: blr
;
; CHECK-FISL-LABEL: test12:
; CHECK-FISL: # %bb.0: # %entry
; CHECK-FISL-NEXT: xxland v2, v2, v3
; CHECK-FISL-NEXT: blr
;
; CHECK-LE-LABEL: test12:
; CHECK-LE: # %bb.0: # %entry
; CHECK-LE-NEXT: xxland v2, v2, v3
; CHECK-LE-NEXT: blr
entry:
  %v = and <8 x i16> %a, %b
  ret <8 x i16> %v
}
define <16 x i8> @test13(<16 x i8> %a, <16 x i8> %b) {
; CHECK-LABEL: test13:
; CHECK: # %bb.0: # %entry
; CHECK-NEXT: xxland v2, v2, v3
; CHECK-NEXT: blr
;
; CHECK-REG-LABEL: test13:
; CHECK-REG: # %bb.0: # %entry
; CHECK-REG-NEXT: xxland v2, v2, v3
; CHECK-REG-NEXT: blr
;
; CHECK-FISL-LABEL: test13:
; CHECK-FISL: # %bb.0: # %entry
; CHECK-FISL-NEXT: xxland v2, v2, v3
; CHECK-FISL-NEXT: blr
;
; CHECK-LE-LABEL: test13:
; CHECK-LE: # %bb.0: # %entry
; CHECK-LE-NEXT: xxland v2, v2, v3
; CHECK-LE-NEXT: blr
entry:
  %v = and <16 x i8> %a, %b
  ret <16 x i8> %v
}
define <4 x i32> @test14(<4 x i32> %a, <4 x i32> %b) {
; CHECK-LABEL: test14:
; CHECK: # %bb.0: # %entry
; CHECK-NEXT: xxlnor v2, v2, v3
; CHECK-NEXT: blr
;
; CHECK-REG-LABEL: test14:
; CHECK-REG: # %bb.0: # %entry
; CHECK-REG-NEXT: xxlnor v2, v2, v3
; CHECK-REG-NEXT: blr
;
; CHECK-FISL-LABEL: test14:
; CHECK-FISL: # %bb.0: # %entry
; CHECK-FISL-NEXT: xxlor vs0, v2, v3
; CHECK-FISL-NEXT: xxlnor v2, v2, v3
; CHECK-FISL-NEXT: blr
;
; CHECK-LE-LABEL: test14:
; CHECK-LE: # %bb.0: # %entry
; CHECK-LE-NEXT: xxlnor v2, v2, v3
; CHECK-LE-NEXT: blr
entry:
  %v = or <4 x i32> %a, %b
  %w = xor <4 x i32> %v, <i32 -1, i32 -1, i32 -1, i32 -1>
  ret <4 x i32> %w
}
|
|
|
|
|
|
|
|
define <8 x i16> @test15(<8 x i16> %a, <8 x i16> %b) {
; CHECK-LABEL: test15:
; CHECK: # %bb.0: # %entry
; CHECK-NEXT: xxlnor v2, v2, v3
; CHECK-NEXT: blr
;
; CHECK-REG-LABEL: test15:
; CHECK-REG: # %bb.0: # %entry
; CHECK-REG-NEXT: xxlnor v2, v2, v3
; CHECK-REG-NEXT: blr
;
; CHECK-FISL-LABEL: test15:
; CHECK-FISL: # %bb.0: # %entry
; CHECK-FISL-NEXT: xxlor v4, v2, v3
; CHECK-FISL-NEXT: xxlnor v2, v2, v3
; CHECK-FISL-NEXT: blr
;
; CHECK-LE-LABEL: test15:
; CHECK-LE: # %bb.0: # %entry
; CHECK-LE-NEXT: xxlnor v2, v2, v3
; CHECK-LE-NEXT: blr
entry:
  %v = or <8 x i16> %a, %b
  %w = xor <8 x i16> %v, <i16 -1, i16 -1, i16 -1, i16 -1, i16 -1, i16 -1, i16 -1, i16 -1>
  ret <8 x i16> %w
}

define <16 x i8> @test16(<16 x i8> %a, <16 x i8> %b) {
; CHECK-LABEL: test16:
; CHECK: # %bb.0: # %entry
; CHECK-NEXT: xxlnor v2, v2, v3
; CHECK-NEXT: blr
;
; CHECK-REG-LABEL: test16:
; CHECK-REG: # %bb.0: # %entry
; CHECK-REG-NEXT: xxlnor v2, v2, v3
; CHECK-REG-NEXT: blr
;
; CHECK-FISL-LABEL: test16:
; CHECK-FISL: # %bb.0: # %entry
; CHECK-FISL-NEXT: xxlor v4, v2, v3
; CHECK-FISL-NEXT: xxlnor v2, v2, v3
; CHECK-FISL-NEXT: blr
;
; CHECK-LE-LABEL: test16:
; CHECK-LE: # %bb.0: # %entry
; CHECK-LE-NEXT: xxlnor v2, v2, v3
; CHECK-LE-NEXT: blr
entry:
  %v = or <16 x i8> %a, %b
  %w = xor <16 x i8> %v, <i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1>
  ret <16 x i8> %w
}

define <4 x i32> @test17(<4 x i32> %a, <4 x i32> %b) {
; CHECK-LABEL: test17:
; CHECK: # %bb.0: # %entry
; CHECK-NEXT: xxlandc v2, v2, v3
; CHECK-NEXT: blr
;
; CHECK-REG-LABEL: test17:
; CHECK-REG: # %bb.0: # %entry
; CHECK-REG-NEXT: xxlandc v2, v2, v3
; CHECK-REG-NEXT: blr
;
; CHECK-FISL-LABEL: test17:
; CHECK-FISL: # %bb.0: # %entry
; CHECK-FISL-NEXT: xxlnor vs0, v3, v3
; CHECK-FISL-NEXT: xxland v2, v2, vs0
; CHECK-FISL-NEXT: blr
;
; CHECK-LE-LABEL: test17:
; CHECK-LE: # %bb.0: # %entry
; CHECK-LE-NEXT: xxlandc v2, v2, v3
; CHECK-LE-NEXT: blr
entry:
  %w = xor <4 x i32> %b, <i32 -1, i32 -1, i32 -1, i32 -1>
  %v = and <4 x i32> %a, %w
  ret <4 x i32> %v
}

define <8 x i16> @test18(<8 x i16> %a, <8 x i16> %b) {
; CHECK-LABEL: test18:
; CHECK: # %bb.0: # %entry
; CHECK-NEXT: xxlandc v2, v2, v3
; CHECK-NEXT: blr
;
; CHECK-REG-LABEL: test18:
; CHECK-REG: # %bb.0: # %entry
; CHECK-REG-NEXT: xxlandc v2, v2, v3
; CHECK-REG-NEXT: blr
;
; CHECK-FISL-LABEL: test18:
; CHECK-FISL: # %bb.0: # %entry
; CHECK-FISL-NEXT: xxlnor v4, v3, v3
; CHECK-FISL-NEXT: xxlandc v2, v2, v3
; CHECK-FISL-NEXT: blr
;
; CHECK-LE-LABEL: test18:
; CHECK-LE: # %bb.0: # %entry
; CHECK-LE-NEXT: xxlandc v2, v2, v3
; CHECK-LE-NEXT: blr
entry:
  %w = xor <8 x i16> %b, <i16 -1, i16 -1, i16 -1, i16 -1, i16 -1, i16 -1, i16 -1, i16 -1>
  %v = and <8 x i16> %a, %w
  ret <8 x i16> %v
}

define <16 x i8> @test19(<16 x i8> %a, <16 x i8> %b) {
; CHECK-LABEL: test19:
; CHECK: # %bb.0: # %entry
; CHECK-NEXT: xxlandc v2, v2, v3
; CHECK-NEXT: blr
;
; CHECK-REG-LABEL: test19:
; CHECK-REG: # %bb.0: # %entry
; CHECK-REG-NEXT: xxlandc v2, v2, v3
; CHECK-REG-NEXT: blr
;
; CHECK-FISL-LABEL: test19:
; CHECK-FISL: # %bb.0: # %entry
; CHECK-FISL-NEXT: xxlnor v4, v3, v3
; CHECK-FISL-NEXT: xxlandc v2, v2, v3
; CHECK-FISL-NEXT: blr
;
; CHECK-LE-LABEL: test19:
; CHECK-LE: # %bb.0: # %entry
; CHECK-LE-NEXT: xxlandc v2, v2, v3
; CHECK-LE-NEXT: blr
entry:
  %w = xor <16 x i8> %b, <i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1>
  %v = and <16 x i8> %a, %w
  ret <16 x i8> %v
}

define <4 x i32> @test20(<4 x i32> %a, <4 x i32> %b, <4 x i32> %c, <4 x i32> %d) {
; CHECK-LABEL: test20:
; CHECK: # %bb.0: # %entry
; CHECK-NEXT: vcmpequw v4, v4, v5
; CHECK-NEXT: xxsel v2, v3, v2, v4
; CHECK-NEXT: blr
;
; CHECK-REG-LABEL: test20:
; CHECK-REG: # %bb.0: # %entry
; CHECK-REG-NEXT: vcmpequw v4, v4, v5
; CHECK-REG-NEXT: xxsel v2, v3, v2, v4
; CHECK-REG-NEXT: blr
;
; CHECK-FISL-LABEL: test20:
; CHECK-FISL: # %bb.0: # %entry
; CHECK-FISL-NEXT: vcmpequw v4, v4, v5
; CHECK-FISL-NEXT: xxsel v2, v3, v2, v4
; CHECK-FISL-NEXT: blr
;
; CHECK-LE-LABEL: test20:
; CHECK-LE: # %bb.0: # %entry
; CHECK-LE-NEXT: vcmpequw v4, v4, v5
; CHECK-LE-NEXT: xxsel v2, v3, v2, v4
; CHECK-LE-NEXT: blr
entry:
  %m = icmp eq <4 x i32> %c, %d
  %v = select <4 x i1> %m, <4 x i32> %a, <4 x i32> %b
  ret <4 x i32> %v
}

define <4 x float> @test21(<4 x float> %a, <4 x float> %b, <4 x float> %c, <4 x float> %d) {
; CHECK-LABEL: test21:
; CHECK: # %bb.0: # %entry
; CHECK-NEXT: xvcmpeqsp vs0, v4, v5
; CHECK-NEXT: xxsel v2, v3, v2, vs0
; CHECK-NEXT: blr
;
; CHECK-REG-LABEL: test21:
; CHECK-REG: # %bb.0: # %entry
; CHECK-REG-NEXT: xvcmpeqsp vs0, v4, v5
; CHECK-REG-NEXT: xxsel v2, v3, v2, vs0
; CHECK-REG-NEXT: blr
;
; CHECK-FISL-LABEL: test21:
; CHECK-FISL: # %bb.0: # %entry
; CHECK-FISL-NEXT: xvcmpeqsp vs0, v4, v5
; CHECK-FISL-NEXT: xxsel v2, v3, v2, vs0
; CHECK-FISL-NEXT: blr
;
; CHECK-LE-LABEL: test21:
; CHECK-LE: # %bb.0: # %entry
; CHECK-LE-NEXT: xvcmpeqsp vs0, v4, v5
; CHECK-LE-NEXT: xxsel v2, v3, v2, vs0
; CHECK-LE-NEXT: blr
entry:
  %m = fcmp oeq <4 x float> %c, %d
  %v = select <4 x i1> %m, <4 x float> %a, <4 x float> %b
  ret <4 x float> %v
}

define <4 x float> @test22(<4 x float> %a, <4 x float> %b, <4 x float> %c, <4 x float> %d) {
; CHECK-LABEL: test22:
; CHECK: # %bb.0: # %entry
; CHECK-NEXT: xvcmpgtsp vs0, v5, v4
; CHECK-NEXT: xvcmpgtsp vs1, v4, v5
; CHECK-NEXT: xxlor vs0, vs1, vs0
; CHECK-NEXT: xxsel v2, v2, v3, vs0
; CHECK-NEXT: blr
;
; CHECK-REG-LABEL: test22:
; CHECK-REG: # %bb.0: # %entry
; CHECK-REG-NEXT: xvcmpgtsp vs0, v5, v4
; CHECK-REG-NEXT: xvcmpgtsp vs1, v4, v5
; CHECK-REG-NEXT: xxlor vs0, vs1, vs0
; CHECK-REG-NEXT: xxsel v2, v2, v3, vs0
; CHECK-REG-NEXT: blr
;
; CHECK-FISL-LABEL: test22:
; CHECK-FISL: # %bb.0: # %entry
; CHECK-FISL-NEXT: xvcmpgtsp vs1, v5, v4
; CHECK-FISL-NEXT: xvcmpgtsp vs0, v4, v5
; CHECK-FISL-NEXT: xxlor vs0, vs0, vs1
; CHECK-FISL-NEXT: xxsel v2, v2, v3, vs0
; CHECK-FISL-NEXT: blr
;
; CHECK-LE-LABEL: test22:
; CHECK-LE: # %bb.0: # %entry
; CHECK-LE-NEXT: xvcmpgtsp vs0, v5, v4
; CHECK-LE-NEXT: xvcmpgtsp vs1, v4, v5
; CHECK-LE-NEXT: xxlor vs0, vs1, vs0
; CHECK-LE-NEXT: xxsel v2, v2, v3, vs0
; CHECK-LE-NEXT: blr
entry:
  %m = fcmp ueq <4 x float> %c, %d
  %v = select <4 x i1> %m, <4 x float> %a, <4 x float> %b
  ret <4 x float> %v
}

define <8 x i16> @test23(<8 x i16> %a, <8 x i16> %b, <8 x i16> %c, <8 x i16> %d) {
; CHECK-LABEL: test23:
; CHECK: # %bb.0: # %entry
; CHECK-NEXT: vcmpequh v4, v4, v5
; CHECK-NEXT: xxsel v2, v3, v2, v4
; CHECK-NEXT: blr
;
; CHECK-REG-LABEL: test23:
; CHECK-REG: # %bb.0: # %entry
; CHECK-REG-NEXT: vcmpequh v4, v4, v5
; CHECK-REG-NEXT: xxsel v2, v3, v2, v4
; CHECK-REG-NEXT: blr
;
; CHECK-FISL-LABEL: test23:
; CHECK-FISL: # %bb.0: # %entry
; CHECK-FISL-NEXT: vcmpequh v4, v4, v5
; CHECK-FISL-NEXT: xxlor vs0, v4, v4
; CHECK-FISL-NEXT: xxsel v2, v3, v2, vs0
; CHECK-FISL-NEXT: blr
;
; CHECK-LE-LABEL: test23:
; CHECK-LE: # %bb.0: # %entry
; CHECK-LE-NEXT: vcmpequh v4, v4, v5
; CHECK-LE-NEXT: xxsel v2, v3, v2, v4
; CHECK-LE-NEXT: blr
entry:
  %m = icmp eq <8 x i16> %c, %d
  %v = select <8 x i1> %m, <8 x i16> %a, <8 x i16> %b
  ret <8 x i16> %v
}

define <16 x i8> @test24(<16 x i8> %a, <16 x i8> %b, <16 x i8> %c, <16 x i8> %d) {
; CHECK-LABEL: test24:
; CHECK: # %bb.0: # %entry
; CHECK-NEXT: vcmpequb v4, v4, v5
; CHECK-NEXT: xxsel v2, v3, v2, v4
; CHECK-NEXT: blr
;
; CHECK-REG-LABEL: test24:
; CHECK-REG: # %bb.0: # %entry
; CHECK-REG-NEXT: vcmpequb v4, v4, v5
; CHECK-REG-NEXT: xxsel v2, v3, v2, v4
; CHECK-REG-NEXT: blr
;
; CHECK-FISL-LABEL: test24:
; CHECK-FISL: # %bb.0: # %entry
; CHECK-FISL-NEXT: vcmpequb v4, v4, v5
; CHECK-FISL-NEXT: xxlor vs0, v4, v4
; CHECK-FISL-NEXT: xxsel v2, v3, v2, vs0
; CHECK-FISL-NEXT: blr
;
; CHECK-LE-LABEL: test24:
; CHECK-LE: # %bb.0: # %entry
; CHECK-LE-NEXT: vcmpequb v4, v4, v5
; CHECK-LE-NEXT: xxsel v2, v3, v2, v4
; CHECK-LE-NEXT: blr
entry:
  %m = icmp eq <16 x i8> %c, %d
  %v = select <16 x i1> %m, <16 x i8> %a, <16 x i8> %b
  ret <16 x i8> %v
}

define <2 x double> @test25(<2 x double> %a, <2 x double> %b, <2 x double> %c, <2 x double> %d) {
; CHECK-LABEL: test25:
; CHECK: # %bb.0: # %entry
; CHECK-NEXT: xvcmpeqdp vs0, v4, v5
; CHECK-NEXT: xxsel v2, v3, v2, vs0
; CHECK-NEXT: blr
;
; CHECK-REG-LABEL: test25:
; CHECK-REG: # %bb.0: # %entry
; CHECK-REG-NEXT: xvcmpeqdp vs0, v4, v5
; CHECK-REG-NEXT: xxsel v2, v3, v2, vs0
; CHECK-REG-NEXT: blr
;
; CHECK-FISL-LABEL: test25:
; CHECK-FISL: # %bb.0: # %entry
; CHECK-FISL-NEXT: xvcmpeqdp vs0, v4, v5
; CHECK-FISL-NEXT: xxsel v2, v3, v2, vs0
; CHECK-FISL-NEXT: blr
;
; CHECK-LE-LABEL: test25:
; CHECK-LE: # %bb.0: # %entry
; CHECK-LE-NEXT: xvcmpeqdp v4, v4, v5
; CHECK-LE-NEXT: xxsel v2, v3, v2, v4
; CHECK-LE-NEXT: blr
entry:
  %m = fcmp oeq <2 x double> %c, %d
  %v = select <2 x i1> %m, <2 x double> %a, <2 x double> %b
  ret <2 x double> %v
}

define <2 x i64> @test26(<2 x i64> %a, <2 x i64> %b) {
; CHECK-LABEL: test26:
; CHECK: # %bb.0:
; CHECK-NEXT: addi r3, r1, -32
; CHECK-NEXT: addi r4, r1, -48
; CHECK-NEXT: stxvd2x v3, 0, r3
; CHECK-NEXT: stxvd2x v2, 0, r4
; CHECK-NEXT: ld r3, -24(r1)
; CHECK-NEXT: ld r4, -40(r1)
; CHECK-NEXT: add r3, r4, r3
; CHECK-NEXT: ld r4, -48(r1)
; CHECK-NEXT: std r3, -8(r1)
; CHECK-NEXT: ld r3, -32(r1)
; CHECK-NEXT: add r3, r4, r3
; CHECK-NEXT: std r3, -16(r1)
; CHECK-NEXT: addi r3, r1, -16
; CHECK-NEXT: lxvd2x v2, 0, r3
; CHECK-NEXT: blr
;
; CHECK-REG-LABEL: test26:
; CHECK-REG: # %bb.0:
; CHECK-REG-NEXT: addi r3, r1, -32
; CHECK-REG-NEXT: addi r4, r1, -48
; CHECK-REG-NEXT: stxvd2x v3, 0, r3
; CHECK-REG-NEXT: stxvd2x v2, 0, r4
; CHECK-REG-NEXT: ld r3, -24(r1)
; CHECK-REG-NEXT: ld r4, -40(r1)
; CHECK-REG-NEXT: add r3, r4, r3
; CHECK-REG-NEXT: ld r4, -48(r1)
; CHECK-REG-NEXT: std r3, -8(r1)
; CHECK-REG-NEXT: ld r3, -32(r1)
; CHECK-REG-NEXT: add r3, r4, r3
; CHECK-REG-NEXT: std r3, -16(r1)
; CHECK-REG-NEXT: addi r3, r1, -16
; CHECK-REG-NEXT: lxvd2x v2, 0, r3
; CHECK-REG-NEXT: blr
;
; CHECK-FISL-LABEL: test26:
; CHECK-FISL: # %bb.0:
; CHECK-FISL-NEXT: addi r3, r1, -32
; CHECK-FISL-NEXT: stxvd2x v3, 0, r3
; CHECK-FISL-NEXT: addi r3, r1, -48
; CHECK-FISL-NEXT: stxvd2x v2, 0, r3
; CHECK-FISL-NEXT: ld r4, -24(r1)
; CHECK-FISL-NEXT: ld r3, -40(r1)
; CHECK-FISL-NEXT: add r3, r3, r4
; CHECK-FISL-NEXT: std r3, -8(r1)
; CHECK-FISL-NEXT: ld r4, -32(r1)
; CHECK-FISL-NEXT: ld r3, -48(r1)
; CHECK-FISL-NEXT: add r3, r3, r4
; CHECK-FISL-NEXT: std r3, -16(r1)
; CHECK-FISL-NEXT: addi r3, r1, -16
; CHECK-FISL-NEXT: lxvd2x v2, 0, r3
; CHECK-FISL-NEXT: blr
;
; CHECK-LE-LABEL: test26:
; CHECK-LE: # %bb.0:
; CHECK-LE-NEXT: vaddudm v2, v2, v3
; CHECK-LE-NEXT: blr
  %v = add <2 x i64> %a, %b
  ret <2 x i64> %v

; Make sure we use only two stores (one for each operand).
; FIXME: The code quality here is not good; just make sure we do something for now.
}

define <2 x i64> @test27(<2 x i64> %a, <2 x i64> %b) {
; CHECK-LABEL: test27:
; CHECK: # %bb.0:
; CHECK-NEXT: xxland v2, v2, v3
; CHECK-NEXT: blr
;
; CHECK-REG-LABEL: test27:
; CHECK-REG: # %bb.0:
; CHECK-REG-NEXT: xxland v2, v2, v3
; CHECK-REG-NEXT: blr
;
; CHECK-FISL-LABEL: test27:
; CHECK-FISL: # %bb.0:
; CHECK-FISL-NEXT: xxland v2, v2, v3
; CHECK-FISL-NEXT: blr
;
; CHECK-LE-LABEL: test27:
; CHECK-LE: # %bb.0:
; CHECK-LE-NEXT: xxland v2, v2, v3
; CHECK-LE-NEXT: blr
  %v = and <2 x i64> %a, %b
  ret <2 x i64> %v
}

define <2 x double> @test28(<2 x double>* %a) {
; CHECK-LABEL: test28:
; CHECK: # %bb.0:
; CHECK-NEXT: lxvd2x v2, 0, r3
; CHECK-NEXT: blr
;
; CHECK-REG-LABEL: test28:
; CHECK-REG: # %bb.0:
; CHECK-REG-NEXT: lxvd2x v2, 0, r3
; CHECK-REG-NEXT: blr
;
; CHECK-FISL-LABEL: test28:
; CHECK-FISL: # %bb.0:
; CHECK-FISL-NEXT: lxvd2x v2, 0, r3
; CHECK-FISL-NEXT: blr
;
; CHECK-LE-LABEL: test28:
; CHECK-LE: # %bb.0:
; CHECK-LE-NEXT: lxvd2x vs0, 0, r3
; CHECK-LE-NEXT: xxswapd v2, vs0
; CHECK-LE-NEXT: blr
  %v = load <2 x double>, <2 x double>* %a, align 16
  ret <2 x double> %v
}

define void @test29(<2 x double>* %a, <2 x double> %b) {
; CHECK-LABEL: test29:
; CHECK: # %bb.0:
; CHECK-NEXT: stxvd2x v2, 0, r3
; CHECK-NEXT: blr
;
; CHECK-REG-LABEL: test29:
; CHECK-REG: # %bb.0:
; CHECK-REG-NEXT: stxvd2x v2, 0, r3
; CHECK-REG-NEXT: blr
;
; CHECK-FISL-LABEL: test29:
; CHECK-FISL: # %bb.0:
; CHECK-FISL-NEXT: stxvd2x v2, 0, r3
; CHECK-FISL-NEXT: blr
;
; CHECK-LE-LABEL: test29:
; CHECK-LE: # %bb.0:
; CHECK-LE-NEXT: xxswapd vs0, v2
; CHECK-LE-NEXT: stxvd2x vs0, 0, r3
; CHECK-LE-NEXT: blr
  store <2 x double> %b, <2 x double>* %a, align 16
  ret void
}

define <2 x double> @test28u(<2 x double>* %a) {
; CHECK-LABEL: test28u:
; CHECK: # %bb.0:
; CHECK-NEXT: lxvd2x v2, 0, r3
; CHECK-NEXT: blr
;
; CHECK-REG-LABEL: test28u:
; CHECK-REG: # %bb.0:
; CHECK-REG-NEXT: lxvd2x v2, 0, r3
; CHECK-REG-NEXT: blr
;
; CHECK-FISL-LABEL: test28u:
; CHECK-FISL: # %bb.0:
; CHECK-FISL-NEXT: lxvd2x v2, 0, r3
; CHECK-FISL-NEXT: blr
;
; CHECK-LE-LABEL: test28u:
; CHECK-LE: # %bb.0:
; CHECK-LE-NEXT: lxvd2x vs0, 0, r3
; CHECK-LE-NEXT: xxswapd v2, vs0
; CHECK-LE-NEXT: blr
  %v = load <2 x double>, <2 x double>* %a, align 8
  ret <2 x double> %v
}

define void @test29u(<2 x double>* %a, <2 x double> %b) {
; CHECK-LABEL: test29u:
; CHECK: # %bb.0:
; CHECK-NEXT: stxvd2x v2, 0, r3
; CHECK-NEXT: blr
;
; CHECK-REG-LABEL: test29u:
; CHECK-REG: # %bb.0:
; CHECK-REG-NEXT: stxvd2x v2, 0, r3
; CHECK-REG-NEXT: blr
;
; CHECK-FISL-LABEL: test29u:
; CHECK-FISL: # %bb.0:
; CHECK-FISL-NEXT: stxvd2x v2, 0, r3
; CHECK-FISL-NEXT: blr
;
; CHECK-LE-LABEL: test29u:
; CHECK-LE: # %bb.0:
; CHECK-LE-NEXT: xxswapd vs0, v2
; CHECK-LE-NEXT: stxvd2x vs0, 0, r3
; CHECK-LE-NEXT: blr
  store <2 x double> %b, <2 x double>* %a, align 8
  ret void
}

define <2 x i64> @test30(<2 x i64>* %a) {
; CHECK-LABEL: test30:
; CHECK: # %bb.0:
; CHECK-NEXT: lxvd2x v2, 0, r3
; CHECK-NEXT: blr
;
; CHECK-REG-LABEL: test30:
; CHECK-REG: # %bb.0:
; CHECK-REG-NEXT: lxvd2x v2, 0, r3
; CHECK-REG-NEXT: blr
;
; CHECK-FISL-LABEL: test30:
; CHECK-FISL: # %bb.0:
; CHECK-FISL-NEXT: lxvd2x v2, 0, r3
; CHECK-FISL-NEXT: blr
;
; CHECK-LE-LABEL: test30:
; CHECK-LE: # %bb.0:
; CHECK-LE-NEXT: lxvd2x vs0, 0, r3
; CHECK-LE-NEXT: xxswapd v2, vs0
; CHECK-LE-NEXT: blr
  %v = load <2 x i64>, <2 x i64>* %a, align 16
  ret <2 x i64> %v
}

define void @test31(<2 x i64>* %a, <2 x i64> %b) {
; CHECK-LABEL: test31:
; CHECK: # %bb.0:
; CHECK-NEXT: stxvd2x v2, 0, r3
; CHECK-NEXT: blr
;
; CHECK-REG-LABEL: test31:
; CHECK-REG: # %bb.0:
; CHECK-REG-NEXT: stxvd2x v2, 0, r3
; CHECK-REG-NEXT: blr
;
; CHECK-FISL-LABEL: test31:
; CHECK-FISL: # %bb.0:
; CHECK-FISL-NEXT: stxvd2x v2, 0, r3
; CHECK-FISL-NEXT: blr
;
; CHECK-LE-LABEL: test31:
; CHECK-LE: # %bb.0:
; CHECK-LE-NEXT: xxswapd vs0, v2
; CHECK-LE-NEXT: stxvd2x vs0, 0, r3
; CHECK-LE-NEXT: blr
  store <2 x i64> %b, <2 x i64>* %a, align 16
  ret void
}

define <4 x float> @test32(<4 x float>* %a) {
; CHECK-LABEL: test32:
; CHECK: # %bb.0:
; CHECK-NEXT: lxvw4x v2, 0, r3
; CHECK-NEXT: blr
;
; CHECK-REG-LABEL: test32:
; CHECK-REG: # %bb.0:
; CHECK-REG-NEXT: lxvw4x v2, 0, r3
; CHECK-REG-NEXT: blr
;
; CHECK-FISL-LABEL: test32:
; CHECK-FISL: # %bb.0:
; CHECK-FISL-NEXT: lxvw4x v2, 0, r3
; CHECK-FISL-NEXT: blr
;
; CHECK-LE-LABEL: test32:
; CHECK-LE: # %bb.0:
; CHECK-LE-NEXT: lvx v2, 0, r3
; CHECK-LE-NEXT: blr
  %v = load <4 x float>, <4 x float>* %a, align 16
  ret <4 x float> %v
}

define void @test33(<4 x float>* %a, <4 x float> %b) {
; CHECK-LABEL: test33:
; CHECK: # %bb.0:
; CHECK-NEXT: stxvw4x v2, 0, r3
; CHECK-NEXT: blr
;
; CHECK-REG-LABEL: test33:
; CHECK-REG: # %bb.0:
; CHECK-REG-NEXT: stxvw4x v2, 0, r3
; CHECK-REG-NEXT: blr
;
; CHECK-FISL-LABEL: test33:
; CHECK-FISL: # %bb.0:
; CHECK-FISL-NEXT: stxvw4x v2, 0, r3
; CHECK-FISL-NEXT: blr
;
; CHECK-LE-LABEL: test33:
; CHECK-LE: # %bb.0:
; CHECK-LE-NEXT: stvx v2, 0, r3
; CHECK-LE-NEXT: blr
  store <4 x float> %b, <4 x float>* %a, align 16
  ret void
}

define <4 x float> @test32u(<4 x float>* %a) {
; CHECK-LABEL: test32u:
; CHECK:       # %bb.0:
; CHECK-NEXT:    li r4, 15
; CHECK-NEXT:    lvsl v3, 0, r3
; CHECK-NEXT:    lvx v2, r3, r4
; CHECK-NEXT:    lvx v4, 0, r3
; CHECK-NEXT:    vperm v2, v4, v2, v3
; CHECK-NEXT:    blr
;
; CHECK-REG-LABEL: test32u:
; CHECK-REG:       # %bb.0:
; CHECK-REG-NEXT:    li r4, 15
; CHECK-REG-NEXT:    lvsl v3, 0, r3
; CHECK-REG-NEXT:    lvx v2, r3, r4
; CHECK-REG-NEXT:    lvx v4, 0, r3
; CHECK-REG-NEXT:    vperm v2, v4, v2, v3
; CHECK-REG-NEXT:    blr
;
; CHECK-FISL-LABEL: test32u:
; CHECK-FISL:       # %bb.0:
; CHECK-FISL-NEXT:    li r4, 15
; CHECK-FISL-NEXT:    lvx v3, r3, r4
; CHECK-FISL-NEXT:    lvsl v4, 0, r3
; CHECK-FISL-NEXT:    lvx v2, 0, r3
; CHECK-FISL-NEXT:    vperm v2, v2, v3, v4
; CHECK-FISL-NEXT:    blr
;
; CHECK-LE-LABEL: test32u:
; CHECK-LE:       # %bb.0:
; CHECK-LE-NEXT:    lxvd2x vs0, 0, r3
; CHECK-LE-NEXT:    xxswapd v2, vs0
; CHECK-LE-NEXT:    blr
  %v = load <4 x float>, <4 x float>* %a, align 8
  ret <4 x float> %v
}
define void @test33u(<4 x float>* %a, <4 x float> %b) {
; CHECK-LABEL: test33u:
; CHECK:       # %bb.0:
; CHECK-NEXT:    stxvw4x v2, 0, r3
; CHECK-NEXT:    blr
;
; CHECK-REG-LABEL: test33u:
; CHECK-REG:       # %bb.0:
; CHECK-REG-NEXT:    stxvw4x v2, 0, r3
; CHECK-REG-NEXT:    blr
;
; CHECK-FISL-LABEL: test33u:
; CHECK-FISL:       # %bb.0:
; CHECK-FISL-NEXT:    stxvw4x v2, 0, r3
; CHECK-FISL-NEXT:    blr
;
; CHECK-LE-LABEL: test33u:
; CHECK-LE:       # %bb.0:
; CHECK-LE-NEXT:    xxswapd vs0, v2
; CHECK-LE-NEXT:    stxvd2x vs0, 0, r3
; CHECK-LE-NEXT:    blr
  store <4 x float> %b, <4 x float>* %a, align 8
  ret void
}
define <4 x i32> @test34(<4 x i32>* %a) {
; CHECK-LABEL: test34:
; CHECK:       # %bb.0:
; CHECK-NEXT:    lxvw4x v2, 0, r3
; CHECK-NEXT:    blr
;
; CHECK-REG-LABEL: test34:
; CHECK-REG:       # %bb.0:
; CHECK-REG-NEXT:    lxvw4x v2, 0, r3
; CHECK-REG-NEXT:    blr
;
; CHECK-FISL-LABEL: test34:
; CHECK-FISL:       # %bb.0:
; CHECK-FISL-NEXT:    lxvw4x v2, 0, r3
; CHECK-FISL-NEXT:    blr
;
; CHECK-LE-LABEL: test34:
; CHECK-LE:       # %bb.0:
; CHECK-LE-NEXT:    lvx v2, 0, r3
; CHECK-LE-NEXT:    blr
  %v = load <4 x i32>, <4 x i32>* %a, align 16
  ret <4 x i32> %v
}
define void @test35(<4 x i32>* %a, <4 x i32> %b) {
; CHECK-LABEL: test35:
; CHECK:       # %bb.0:
; CHECK-NEXT:    stxvw4x v2, 0, r3
; CHECK-NEXT:    blr
;
; CHECK-REG-LABEL: test35:
; CHECK-REG:       # %bb.0:
; CHECK-REG-NEXT:    stxvw4x v2, 0, r3
; CHECK-REG-NEXT:    blr
;
; CHECK-FISL-LABEL: test35:
; CHECK-FISL:       # %bb.0:
; CHECK-FISL-NEXT:    stxvw4x v2, 0, r3
; CHECK-FISL-NEXT:    blr
;
; CHECK-LE-LABEL: test35:
; CHECK-LE:       # %bb.0:
; CHECK-LE-NEXT:    stvx v2, 0, r3
; CHECK-LE-NEXT:    blr
  store <4 x i32> %b, <4 x i32>* %a, align 16
  ret void
}
define <2 x double> @test40(<2 x i64> %a) {
; CHECK-LABEL: test40:
; CHECK:       # %bb.0:
; CHECK-NEXT:    xvcvuxddp v2, v2
; CHECK-NEXT:    blr
;
; CHECK-REG-LABEL: test40:
; CHECK-REG:       # %bb.0:
; CHECK-REG-NEXT:    xvcvuxddp v2, v2
; CHECK-REG-NEXT:    blr
;
; CHECK-FISL-LABEL: test40:
; CHECK-FISL:       # %bb.0:
; CHECK-FISL-NEXT:    xvcvuxddp v2, v2
; CHECK-FISL-NEXT:    blr
;
; CHECK-LE-LABEL: test40:
; CHECK-LE:       # %bb.0:
; CHECK-LE-NEXT:    xvcvuxddp v2, v2
; CHECK-LE-NEXT:    blr
  %v = uitofp <2 x i64> %a to <2 x double>
  ret <2 x double> %v
}
define <2 x double> @test41(<2 x i64> %a) {
; CHECK-LABEL: test41:
; CHECK:       # %bb.0:
; CHECK-NEXT:    xvcvsxddp v2, v2
; CHECK-NEXT:    blr
;
; CHECK-REG-LABEL: test41:
; CHECK-REG:       # %bb.0:
; CHECK-REG-NEXT:    xvcvsxddp v2, v2
; CHECK-REG-NEXT:    blr
;
; CHECK-FISL-LABEL: test41:
; CHECK-FISL:       # %bb.0:
; CHECK-FISL-NEXT:    xvcvsxddp v2, v2
; CHECK-FISL-NEXT:    blr
;
; CHECK-LE-LABEL: test41:
; CHECK-LE:       # %bb.0:
; CHECK-LE-NEXT:    xvcvsxddp v2, v2
; CHECK-LE-NEXT:    blr
  %v = sitofp <2 x i64> %a to <2 x double>
  ret <2 x double> %v
}
define <2 x i64> @test42(<2 x double> %a) {
; CHECK-LABEL: test42:
; CHECK:       # %bb.0:
; CHECK-NEXT:    xvcvdpuxds v2, v2
; CHECK-NEXT:    blr
;
; CHECK-REG-LABEL: test42:
; CHECK-REG:       # %bb.0:
; CHECK-REG-NEXT:    xvcvdpuxds v2, v2
; CHECK-REG-NEXT:    blr
;
; CHECK-FISL-LABEL: test42:
; CHECK-FISL:       # %bb.0:
; CHECK-FISL-NEXT:    xvcvdpuxds v2, v2
; CHECK-FISL-NEXT:    blr
;
; CHECK-LE-LABEL: test42:
; CHECK-LE:       # %bb.0:
; CHECK-LE-NEXT:    xvcvdpuxds v2, v2
; CHECK-LE-NEXT:    blr
  %v = fptoui <2 x double> %a to <2 x i64>
  ret <2 x i64> %v
}
define <2 x i64> @test43(<2 x double> %a) {
; CHECK-LABEL: test43:
; CHECK:       # %bb.0:
; CHECK-NEXT:    xvcvdpsxds v2, v2
; CHECK-NEXT:    blr
;
; CHECK-REG-LABEL: test43:
; CHECK-REG:       # %bb.0:
; CHECK-REG-NEXT:    xvcvdpsxds v2, v2
; CHECK-REG-NEXT:    blr
;
; CHECK-FISL-LABEL: test43:
; CHECK-FISL:       # %bb.0:
; CHECK-FISL-NEXT:    xvcvdpsxds v2, v2
; CHECK-FISL-NEXT:    blr
;
; CHECK-LE-LABEL: test43:
; CHECK-LE:       # %bb.0:
; CHECK-LE-NEXT:    xvcvdpsxds v2, v2
; CHECK-LE-NEXT:    blr
  %v = fptosi <2 x double> %a to <2 x i64>
  ret <2 x i64> %v
}
define <2 x float> @test44(<2 x i64> %a) {
; CHECK-LABEL: test44:
; CHECK:       # %bb.0:
; CHECK-NEXT:    addi r3, r1, -16
; CHECK-NEXT:    addi r4, r1, -64
; CHECK-NEXT:    stxvd2x v2, 0, r3
; CHECK-NEXT:    ld r3, -8(r1)
; CHECK-NEXT:    std r3, -24(r1)
; CHECK-NEXT:    ld r3, -16(r1)
; CHECK-NEXT:    lfd f0, -24(r1)
; CHECK-NEXT:    std r3, -32(r1)
; CHECK-NEXT:    addi r3, r1, -48
; CHECK-NEXT:    fcfidus f0, f0
; CHECK-NEXT:    stfs f0, -48(r1)
; CHECK-NEXT:    lfd f0, -32(r1)
; CHECK-NEXT:    fcfidus f0, f0
; CHECK-NEXT:    stfs f0, -64(r1)
; CHECK-NEXT:    lxvw4x v2, 0, r3
; CHECK-NEXT:    lxvw4x v3, 0, r4
; CHECK-NEXT:    vmrghw v2, v3, v2
; CHECK-NEXT:    blr
;
; CHECK-REG-LABEL: test44:
; CHECK-REG:       # %bb.0:
; CHECK-REG-NEXT:    addi r3, r1, -16
; CHECK-REG-NEXT:    addi r4, r1, -64
; CHECK-REG-NEXT:    stxvd2x v2, 0, r3
; CHECK-REG-NEXT:    ld r3, -8(r1)
; CHECK-REG-NEXT:    std r3, -24(r1)
; CHECK-REG-NEXT:    ld r3, -16(r1)
; CHECK-REG-NEXT:    lfd f0, -24(r1)
; CHECK-REG-NEXT:    std r3, -32(r1)
; CHECK-REG-NEXT:    addi r3, r1, -48
; CHECK-REG-NEXT:    fcfidus f0, f0
; CHECK-REG-NEXT:    stfs f0, -48(r1)
; CHECK-REG-NEXT:    lfd f0, -32(r1)
; CHECK-REG-NEXT:    fcfidus f0, f0
; CHECK-REG-NEXT:    stfs f0, -64(r1)
; CHECK-REG-NEXT:    lxvw4x v2, 0, r3
; CHECK-REG-NEXT:    lxvw4x v3, 0, r4
; CHECK-REG-NEXT:    vmrghw v2, v3, v2
; CHECK-REG-NEXT:    blr
;
; CHECK-FISL-LABEL: test44:
; CHECK-FISL:       # %bb.0:
; CHECK-FISL-NEXT:    addi r3, r1, -16
; CHECK-FISL-NEXT:    stxvd2x v2, 0, r3
; CHECK-FISL-NEXT:    ld r3, -8(r1)
; CHECK-FISL-NEXT:    std r3, -24(r1)
; CHECK-FISL-NEXT:    ld r3, -16(r1)
; CHECK-FISL-NEXT:    std r3, -32(r1)
; CHECK-FISL-NEXT:    lfd f0, -24(r1)
; CHECK-FISL-NEXT:    fcfidus f0, f0
; CHECK-FISL-NEXT:    stfs f0, -48(r1)
; CHECK-FISL-NEXT:    lfd f0, -32(r1)
; CHECK-FISL-NEXT:    fcfidus f0, f0
; CHECK-FISL-NEXT:    stfs f0, -64(r1)
; CHECK-FISL-NEXT:    addi r3, r1, -48
; CHECK-FISL-NEXT:    lxvw4x v3, 0, r3
; CHECK-FISL-NEXT:    addi r3, r1, -64
; CHECK-FISL-NEXT:    lxvw4x v2, 0, r3
; CHECK-FISL-NEXT:    vmrghw v2, v2, v3
; CHECK-FISL-NEXT:    blr
;
; CHECK-LE-LABEL: test44:
; CHECK-LE:       # %bb.0:
; CHECK-LE-NEXT:    xxswapd vs0, v2
; CHECK-LE-NEXT:    xscvuxdsp f1, v2
; CHECK-LE-NEXT:    xscvuxdsp f0, f0
; CHECK-LE-NEXT:    xscvdpspn v3, f1
; CHECK-LE-NEXT:    xscvdpspn v2, f0
; CHECK-LE-NEXT:    vmrghw v2, v3, v2
; CHECK-LE-NEXT:    blr
  %v = uitofp <2 x i64> %a to <2 x float>
  ret <2 x float> %v
; FIXME: The code quality here looks pretty bad.
}
define <2 x float> @test45(<2 x i64> %a) {
; CHECK-LABEL: test45:
; CHECK:       # %bb.0:
; CHECK-NEXT:    addi r3, r1, -16
; CHECK-NEXT:    addi r4, r1, -64
; CHECK-NEXT:    stxvd2x v2, 0, r3
; CHECK-NEXT:    ld r3, -8(r1)
; CHECK-NEXT:    std r3, -24(r1)
; CHECK-NEXT:    ld r3, -16(r1)
; CHECK-NEXT:    lfd f0, -24(r1)
; CHECK-NEXT:    std r3, -32(r1)
; CHECK-NEXT:    addi r3, r1, -48
; CHECK-NEXT:    fcfids f0, f0
; CHECK-NEXT:    stfs f0, -48(r1)
; CHECK-NEXT:    lfd f0, -32(r1)
; CHECK-NEXT:    fcfids f0, f0
; CHECK-NEXT:    stfs f0, -64(r1)
; CHECK-NEXT:    lxvw4x v2, 0, r3
; CHECK-NEXT:    lxvw4x v3, 0, r4
; CHECK-NEXT:    vmrghw v2, v3, v2
; CHECK-NEXT:    blr
;
; CHECK-REG-LABEL: test45:
; CHECK-REG:       # %bb.0:
; CHECK-REG-NEXT:    addi r3, r1, -16
; CHECK-REG-NEXT:    addi r4, r1, -64
; CHECK-REG-NEXT:    stxvd2x v2, 0, r3
; CHECK-REG-NEXT:    ld r3, -8(r1)
; CHECK-REG-NEXT:    std r3, -24(r1)
; CHECK-REG-NEXT:    ld r3, -16(r1)
; CHECK-REG-NEXT:    lfd f0, -24(r1)
; CHECK-REG-NEXT:    std r3, -32(r1)
; CHECK-REG-NEXT:    addi r3, r1, -48
; CHECK-REG-NEXT:    fcfids f0, f0
; CHECK-REG-NEXT:    stfs f0, -48(r1)
; CHECK-REG-NEXT:    lfd f0, -32(r1)
; CHECK-REG-NEXT:    fcfids f0, f0
; CHECK-REG-NEXT:    stfs f0, -64(r1)
; CHECK-REG-NEXT:    lxvw4x v2, 0, r3
; CHECK-REG-NEXT:    lxvw4x v3, 0, r4
; CHECK-REG-NEXT:    vmrghw v2, v3, v2
; CHECK-REG-NEXT:    blr
;
; CHECK-FISL-LABEL: test45:
; CHECK-FISL:       # %bb.0:
; CHECK-FISL-NEXT:    addi r3, r1, -16
; CHECK-FISL-NEXT:    stxvd2x v2, 0, r3
; CHECK-FISL-NEXT:    ld r3, -8(r1)
; CHECK-FISL-NEXT:    std r3, -24(r1)
; CHECK-FISL-NEXT:    ld r3, -16(r1)
; CHECK-FISL-NEXT:    std r3, -32(r1)
; CHECK-FISL-NEXT:    lfd f0, -24(r1)
; CHECK-FISL-NEXT:    fcfids f0, f0
; CHECK-FISL-NEXT:    stfs f0, -48(r1)
; CHECK-FISL-NEXT:    lfd f0, -32(r1)
; CHECK-FISL-NEXT:    fcfids f0, f0
; CHECK-FISL-NEXT:    stfs f0, -64(r1)
; CHECK-FISL-NEXT:    addi r3, r1, -48
; CHECK-FISL-NEXT:    lxvw4x v3, 0, r3
; CHECK-FISL-NEXT:    addi r3, r1, -64
; CHECK-FISL-NEXT:    lxvw4x v2, 0, r3
; CHECK-FISL-NEXT:    vmrghw v2, v2, v3
; CHECK-FISL-NEXT:    blr
;
; CHECK-LE-LABEL: test45:
; CHECK-LE:       # %bb.0:
; CHECK-LE-NEXT:    xxswapd vs0, v2
; CHECK-LE-NEXT:    xscvsxdsp f1, v2
; CHECK-LE-NEXT:    xscvsxdsp f0, f0
; CHECK-LE-NEXT:    xscvdpspn v3, f1
; CHECK-LE-NEXT:    xscvdpspn v2, f0
; CHECK-LE-NEXT:    vmrghw v2, v3, v2
; CHECK-LE-NEXT:    blr
  %v = sitofp <2 x i64> %a to <2 x float>
  ret <2 x float> %v
; FIXME: The code quality here looks pretty bad.
}
define <2 x i64> @test46(<2 x float> %a) {
; CHECK-LABEL: test46:
; CHECK:       # %bb.0:
; CHECK-NEXT:    addi r3, r1, -48
; CHECK-NEXT:    stxvw4x v2, 0, r3
; CHECK-NEXT:    lfs f0, -44(r1)
; CHECK-NEXT:    xscvdpuxds f0, f0
; CHECK-NEXT:    stfd f0, -32(r1)
; CHECK-NEXT:    lfs f0, -48(r1)
; CHECK-NEXT:    ld r3, -32(r1)
; CHECK-NEXT:    xscvdpuxds f0, f0
; CHECK-NEXT:    std r3, -8(r1)
; CHECK-NEXT:    stfd f0, -24(r1)
; CHECK-NEXT:    ld r3, -24(r1)
; CHECK-NEXT:    std r3, -16(r1)
; CHECK-NEXT:    addi r3, r1, -16
; CHECK-NEXT:    lxvd2x v2, 0, r3
; CHECK-NEXT:    blr
;
; CHECK-REG-LABEL: test46:
; CHECK-REG:       # %bb.0:
; CHECK-REG-NEXT:    addi r3, r1, -48
; CHECK-REG-NEXT:    stxvw4x v2, 0, r3
; CHECK-REG-NEXT:    lfs f0, -44(r1)
; CHECK-REG-NEXT:    xscvdpuxds f0, f0
; CHECK-REG-NEXT:    stfd f0, -32(r1)
; CHECK-REG-NEXT:    lfs f0, -48(r1)
; CHECK-REG-NEXT:    ld r3, -32(r1)
; CHECK-REG-NEXT:    xscvdpuxds f0, f0
; CHECK-REG-NEXT:    std r3, -8(r1)
; CHECK-REG-NEXT:    stfd f0, -24(r1)
; CHECK-REG-NEXT:    ld r3, -24(r1)
; CHECK-REG-NEXT:    std r3, -16(r1)
; CHECK-REG-NEXT:    addi r3, r1, -16
; CHECK-REG-NEXT:    lxvd2x v2, 0, r3
; CHECK-REG-NEXT:    blr
;
; CHECK-FISL-LABEL: test46:
; CHECK-FISL:       # %bb.0:
; CHECK-FISL-NEXT:    addi r3, r1, -48
; CHECK-FISL-NEXT:    stxvw4x v2, 0, r3
; CHECK-FISL-NEXT:    lfs f0, -44(r1)
; CHECK-FISL-NEXT:    xscvdpuxds f0, f0
; CHECK-FISL-NEXT:    stfd f0, -32(r1)
; CHECK-FISL-NEXT:    lfs f0, -48(r1)
; CHECK-FISL-NEXT:    xscvdpuxds f0, f0
; CHECK-FISL-NEXT:    stfd f0, -24(r1)
; CHECK-FISL-NEXT:    ld r3, -32(r1)
; CHECK-FISL-NEXT:    std r3, -8(r1)
; CHECK-FISL-NEXT:    ld r3, -24(r1)
; CHECK-FISL-NEXT:    std r3, -16(r1)
; CHECK-FISL-NEXT:    addi r3, r1, -16
; CHECK-FISL-NEXT:    lxvd2x v2, 0, r3
; CHECK-FISL-NEXT:    blr
;
; CHECK-LE-LABEL: test46:
; CHECK-LE:       # %bb.0:
; CHECK-LE-NEXT:    xxmrglw vs0, v2, v2
; CHECK-LE-NEXT:    xvcvspdp vs0, vs0
; CHECK-LE-NEXT:    xvcvdpuxds v2, vs0
; CHECK-LE-NEXT:    blr
  %v = fptoui <2 x float> %a to <2 x i64>
  ret <2 x i64> %v
; FIXME: The code quality here looks pretty bad.
}
define <2 x i64> @test47(<2 x float> %a) {
; CHECK-LABEL: test47:
; CHECK:       # %bb.0:
; CHECK-NEXT:    addi r3, r1, -48
; CHECK-NEXT:    stxvw4x v2, 0, r3
; CHECK-NEXT:    lfs f0, -44(r1)
; CHECK-NEXT:    xscvdpsxds f0, f0
; CHECK-NEXT:    stfd f0, -32(r1)
; CHECK-NEXT:    lfs f0, -48(r1)
; CHECK-NEXT:    ld r3, -32(r1)
; CHECK-NEXT:    xscvdpsxds f0, f0
; CHECK-NEXT:    std r3, -8(r1)
; CHECK-NEXT:    stfd f0, -24(r1)
; CHECK-NEXT:    ld r3, -24(r1)
; CHECK-NEXT:    std r3, -16(r1)
; CHECK-NEXT:    addi r3, r1, -16
; CHECK-NEXT:    lxvd2x v2, 0, r3
; CHECK-NEXT:    blr
;
; CHECK-REG-LABEL: test47:
; CHECK-REG:       # %bb.0:
; CHECK-REG-NEXT:    addi r3, r1, -48
; CHECK-REG-NEXT:    stxvw4x v2, 0, r3
; CHECK-REG-NEXT:    lfs f0, -44(r1)
; CHECK-REG-NEXT:    xscvdpsxds f0, f0
; CHECK-REG-NEXT:    stfd f0, -32(r1)
; CHECK-REG-NEXT:    lfs f0, -48(r1)
; CHECK-REG-NEXT:    ld r3, -32(r1)
; CHECK-REG-NEXT:    xscvdpsxds f0, f0
; CHECK-REG-NEXT:    std r3, -8(r1)
; CHECK-REG-NEXT:    stfd f0, -24(r1)
; CHECK-REG-NEXT:    ld r3, -24(r1)
; CHECK-REG-NEXT:    std r3, -16(r1)
; CHECK-REG-NEXT:    addi r3, r1, -16
; CHECK-REG-NEXT:    lxvd2x v2, 0, r3
; CHECK-REG-NEXT:    blr
;
; CHECK-FISL-LABEL: test47:
; CHECK-FISL:       # %bb.0:
; CHECK-FISL-NEXT:    addi r3, r1, -48
; CHECK-FISL-NEXT:    stxvw4x v2, 0, r3
; CHECK-FISL-NEXT:    lfs f0, -44(r1)
; CHECK-FISL-NEXT:    xscvdpsxds f0, f0
; CHECK-FISL-NEXT:    stfd f0, -32(r1)
; CHECK-FISL-NEXT:    lfs f0, -48(r1)
; CHECK-FISL-NEXT:    xscvdpsxds f0, f0
; CHECK-FISL-NEXT:    stfd f0, -24(r1)
; CHECK-FISL-NEXT:    ld r3, -32(r1)
; CHECK-FISL-NEXT:    std r3, -8(r1)
; CHECK-FISL-NEXT:    ld r3, -24(r1)
; CHECK-FISL-NEXT:    std r3, -16(r1)
; CHECK-FISL-NEXT:    addi r3, r1, -16
; CHECK-FISL-NEXT:    lxvd2x v2, 0, r3
; CHECK-FISL-NEXT:    blr
;
; CHECK-LE-LABEL: test47:
; CHECK-LE:       # %bb.0:
; CHECK-LE-NEXT:    xxmrglw vs0, v2, v2
; CHECK-LE-NEXT:    xvcvspdp vs0, vs0
; CHECK-LE-NEXT:    xvcvdpsxds v2, vs0
; CHECK-LE-NEXT:    blr
  %v = fptosi <2 x float> %a to <2 x i64>
  ret <2 x i64> %v
; FIXME: The code quality here looks pretty bad.
}
define <2 x double> @test50(double* %a) {
; CHECK-LABEL: test50:
; CHECK:       # %bb.0:
; CHECK-NEXT:    lxvdsx v2, 0, r3
; CHECK-NEXT:    blr
;
; CHECK-REG-LABEL: test50:
; CHECK-REG:       # %bb.0:
; CHECK-REG-NEXT:    lxvdsx v2, 0, r3
; CHECK-REG-NEXT:    blr
;
; CHECK-FISL-LABEL: test50:
; CHECK-FISL:       # %bb.0:
; CHECK-FISL-NEXT:    lxvdsx v2, 0, r3
; CHECK-FISL-NEXT:    blr
;
; CHECK-LE-LABEL: test50:
; CHECK-LE:       # %bb.0:
; CHECK-LE-NEXT:    lxvdsx v2, 0, r3
; CHECK-LE-NEXT:    blr
  %v = load double, double* %a, align 8
  %w = insertelement <2 x double> undef, double %v, i32 0
  %x = insertelement <2 x double> %w, double %v, i32 1
  ret <2 x double> %x
}
define <2 x double> @test51(<2 x double> %a, <2 x double> %b) {
; CHECK-LABEL: test51:
; CHECK:       # %bb.0:
; CHECK-NEXT:    xxspltd v2, v2, 0
; CHECK-NEXT:    blr
;
; CHECK-REG-LABEL: test51:
; CHECK-REG:       # %bb.0:
; CHECK-REG-NEXT:    xxspltd v2, v2, 0
; CHECK-REG-NEXT:    blr
;
; CHECK-FISL-LABEL: test51:
; CHECK-FISL:       # %bb.0:
; CHECK-FISL-NEXT:    xxspltd v2, v2, 0
; CHECK-FISL-NEXT:    blr
;
; CHECK-LE-LABEL: test51:
; CHECK-LE:       # %bb.0:
; CHECK-LE-NEXT:    xxspltd v2, v2, 1
; CHECK-LE-NEXT:    blr
  %v = shufflevector <2 x double> %a, <2 x double> %b, <2 x i32> <i32 0, i32 0>
  ret <2 x double> %v
}
|
|
|
|
|
|
|
|
define <2 x double> @test52(<2 x double> %a, <2 x double> %b) {
; CHECK-LABEL: test52:
; CHECK: # %bb.0:
; CHECK-NEXT: xxmrghd v2, v2, v3
; CHECK-NEXT: blr
;
; CHECK-REG-LABEL: test52:
; CHECK-REG: # %bb.0:
; CHECK-REG-NEXT: xxmrghd v2, v2, v3
; CHECK-REG-NEXT: blr
;
; CHECK-FISL-LABEL: test52:
; CHECK-FISL: # %bb.0:
; CHECK-FISL-NEXT: xxmrghd v2, v2, v3
; CHECK-FISL-NEXT: blr
;
; CHECK-LE-LABEL: test52:
; CHECK-LE: # %bb.0:
; CHECK-LE-NEXT: xxmrgld v2, v3, v2
; CHECK-LE-NEXT: blr
  %v = shufflevector <2 x double> %a, <2 x double> %b, <2 x i32> <i32 0, i32 2>
  ret <2 x double> %v
}

define <2 x double> @test53(<2 x double> %a, <2 x double> %b) {
; CHECK-LABEL: test53:
; CHECK: # %bb.0:
; CHECK-NEXT: xxmrghd v2, v3, v2
; CHECK-NEXT: blr
;
; CHECK-REG-LABEL: test53:
; CHECK-REG: # %bb.0:
; CHECK-REG-NEXT: xxmrghd v2, v3, v2
; CHECK-REG-NEXT: blr
;
; CHECK-FISL-LABEL: test53:
; CHECK-FISL: # %bb.0:
; CHECK-FISL-NEXT: xxmrghd v2, v3, v2
; CHECK-FISL-NEXT: blr
;
; CHECK-LE-LABEL: test53:
; CHECK-LE: # %bb.0:
; CHECK-LE-NEXT: xxmrgld v2, v2, v3
; CHECK-LE-NEXT: blr
  %v = shufflevector <2 x double> %a, <2 x double> %b, <2 x i32> <i32 2, i32 0>
  ret <2 x double> %v
}

define <2 x double> @test54(<2 x double> %a, <2 x double> %b) {
; CHECK-LABEL: test54:
; CHECK: # %bb.0:
; CHECK-NEXT: xxpermdi v2, v2, v3, 2
; CHECK-NEXT: blr
;
; CHECK-REG-LABEL: test54:
; CHECK-REG: # %bb.0:
; CHECK-REG-NEXT: xxpermdi v2, v2, v3, 2
; CHECK-REG-NEXT: blr
;
; CHECK-FISL-LABEL: test54:
; CHECK-FISL: # %bb.0:
; CHECK-FISL-NEXT: xxpermdi v2, v2, v3, 2
; CHECK-FISL-NEXT: blr
;
; CHECK-LE-LABEL: test54:
; CHECK-LE: # %bb.0:
; CHECK-LE-NEXT: xxpermdi v2, v3, v2, 2
; CHECK-LE-NEXT: blr
  %v = shufflevector <2 x double> %a, <2 x double> %b, <2 x i32> <i32 1, i32 2>
  ret <2 x double> %v
}

define <2 x double> @test55(<2 x double> %a, <2 x double> %b) {
; CHECK-LABEL: test55:
; CHECK: # %bb.0:
; CHECK-NEXT: xxmrgld v2, v2, v3
; CHECK-NEXT: blr
;
; CHECK-REG-LABEL: test55:
; CHECK-REG: # %bb.0:
; CHECK-REG-NEXT: xxmrgld v2, v2, v3
; CHECK-REG-NEXT: blr
;
; CHECK-FISL-LABEL: test55:
; CHECK-FISL: # %bb.0:
; CHECK-FISL-NEXT: xxmrgld v2, v2, v3
; CHECK-FISL-NEXT: blr
;
; CHECK-LE-LABEL: test55:
; CHECK-LE: # %bb.0:
; CHECK-LE-NEXT: xxmrghd v2, v3, v2
; CHECK-LE-NEXT: blr
  %v = shufflevector <2 x double> %a, <2 x double> %b, <2 x i32> <i32 1, i32 3>
  ret <2 x double> %v
}

define <2 x i64> @test56(<2 x i64> %a, <2 x i64> %b) {
; CHECK-LABEL: test56:
; CHECK: # %bb.0:
; CHECK-NEXT: xxmrgld v2, v2, v3
; CHECK-NEXT: blr
;
; CHECK-REG-LABEL: test56:
; CHECK-REG: # %bb.0:
; CHECK-REG-NEXT: xxmrgld v2, v2, v3
; CHECK-REG-NEXT: blr
;
; CHECK-FISL-LABEL: test56:
; CHECK-FISL: # %bb.0:
; CHECK-FISL-NEXT: xxmrgld v2, v2, v3
; CHECK-FISL-NEXT: blr
;
; CHECK-LE-LABEL: test56:
; CHECK-LE: # %bb.0:
; CHECK-LE-NEXT: xxmrghd v2, v3, v2
; CHECK-LE-NEXT: blr
  %v = shufflevector <2 x i64> %a, <2 x i64> %b, <2 x i32> <i32 1, i32 3>
  ret <2 x i64> %v
}

define <2 x i64> @test60(<2 x i64> %a, <2 x i64> %b) {
; CHECK-LABEL: test60:
; CHECK: # %bb.0:
; CHECK-NEXT: addi r3, r1, -32
; CHECK-NEXT: addi r4, r1, -48
; CHECK-NEXT: stxvd2x v3, 0, r3
; CHECK-NEXT: stxvd2x v2, 0, r4
; CHECK-NEXT: lwz r3, -20(r1)
; CHECK-NEXT: ld r4, -40(r1)
; CHECK-NEXT: sld r3, r4, r3
; CHECK-NEXT: ld r4, -48(r1)
; CHECK-NEXT: std r3, -8(r1)
; CHECK-NEXT: lwz r3, -28(r1)
; CHECK-NEXT: sld r3, r4, r3
; CHECK-NEXT: std r3, -16(r1)
; CHECK-NEXT: addi r3, r1, -16
; CHECK-NEXT: lxvd2x v2, 0, r3
; CHECK-NEXT: blr
;
; CHECK-REG-LABEL: test60:
; CHECK-REG: # %bb.0:
; CHECK-REG-NEXT: addi r3, r1, -32
; CHECK-REG-NEXT: addi r4, r1, -48
; CHECK-REG-NEXT: stxvd2x v3, 0, r3
; CHECK-REG-NEXT: stxvd2x v2, 0, r4
; CHECK-REG-NEXT: lwz r3, -20(r1)
; CHECK-REG-NEXT: ld r4, -40(r1)
; CHECK-REG-NEXT: sld r3, r4, r3
; CHECK-REG-NEXT: ld r4, -48(r1)
; CHECK-REG-NEXT: std r3, -8(r1)
; CHECK-REG-NEXT: lwz r3, -28(r1)
; CHECK-REG-NEXT: sld r3, r4, r3
; CHECK-REG-NEXT: std r3, -16(r1)
; CHECK-REG-NEXT: addi r3, r1, -16
; CHECK-REG-NEXT: lxvd2x v2, 0, r3
; CHECK-REG-NEXT: blr
;
; CHECK-FISL-LABEL: test60:
; CHECK-FISL: # %bb.0:
; CHECK-FISL-NEXT: addi r3, r1, -32
; CHECK-FISL-NEXT: stxvd2x v3, 0, r3
; CHECK-FISL-NEXT: addi r3, r1, -48
; CHECK-FISL-NEXT: stxvd2x v2, 0, r3
; CHECK-FISL-NEXT: lwz r4, -20(r1)
; CHECK-FISL-NEXT: ld r3, -40(r1)
; CHECK-FISL-NEXT: sld r3, r3, r4
; CHECK-FISL-NEXT: std r3, -8(r1)
; CHECK-FISL-NEXT: lwz r4, -28(r1)
; CHECK-FISL-NEXT: ld r3, -48(r1)
; CHECK-FISL-NEXT: sld r3, r3, r4
; CHECK-FISL-NEXT: std r3, -16(r1)
; CHECK-FISL-NEXT: addi r3, r1, -16
; CHECK-FISL-NEXT: lxvd2x v2, 0, r3
; CHECK-FISL-NEXT: blr
;
; CHECK-LE-LABEL: test60:
; CHECK-LE: # %bb.0:
; CHECK-LE-NEXT: vsld v2, v2, v3
; CHECK-LE-NEXT: blr
  %v = shl <2 x i64> %a, %b
  ret <2 x i64> %v
; This should scalarize, and the current code quality is not good.
}

define <2 x i64> @test61(<2 x i64> %a, <2 x i64> %b) {
; CHECK-LABEL: test61:
; CHECK: # %bb.0:
; CHECK-NEXT: addi r3, r1, -32
; CHECK-NEXT: addi r4, r1, -48
; CHECK-NEXT: stxvd2x v3, 0, r3
; CHECK-NEXT: stxvd2x v2, 0, r4
; CHECK-NEXT: lwz r3, -20(r1)
; CHECK-NEXT: ld r4, -40(r1)
; CHECK-NEXT: srd r3, r4, r3
; CHECK-NEXT: ld r4, -48(r1)
; CHECK-NEXT: std r3, -8(r1)
; CHECK-NEXT: lwz r3, -28(r1)
; CHECK-NEXT: srd r3, r4, r3
; CHECK-NEXT: std r3, -16(r1)
; CHECK-NEXT: addi r3, r1, -16
; CHECK-NEXT: lxvd2x v2, 0, r3
; CHECK-NEXT: blr
;
; CHECK-REG-LABEL: test61:
; CHECK-REG: # %bb.0:
; CHECK-REG-NEXT: addi r3, r1, -32
; CHECK-REG-NEXT: addi r4, r1, -48
; CHECK-REG-NEXT: stxvd2x v3, 0, r3
; CHECK-REG-NEXT: stxvd2x v2, 0, r4
; CHECK-REG-NEXT: lwz r3, -20(r1)
; CHECK-REG-NEXT: ld r4, -40(r1)
; CHECK-REG-NEXT: srd r3, r4, r3
; CHECK-REG-NEXT: ld r4, -48(r1)
; CHECK-REG-NEXT: std r3, -8(r1)
; CHECK-REG-NEXT: lwz r3, -28(r1)
; CHECK-REG-NEXT: srd r3, r4, r3
; CHECK-REG-NEXT: std r3, -16(r1)
; CHECK-REG-NEXT: addi r3, r1, -16
; CHECK-REG-NEXT: lxvd2x v2, 0, r3
; CHECK-REG-NEXT: blr
;
; CHECK-FISL-LABEL: test61:
; CHECK-FISL: # %bb.0:
; CHECK-FISL-NEXT: addi r3, r1, -32
; CHECK-FISL-NEXT: stxvd2x v3, 0, r3
; CHECK-FISL-NEXT: addi r3, r1, -48
; CHECK-FISL-NEXT: stxvd2x v2, 0, r3
; CHECK-FISL-NEXT: lwz r4, -20(r1)
; CHECK-FISL-NEXT: ld r3, -40(r1)
; CHECK-FISL-NEXT: srd r3, r3, r4
; CHECK-FISL-NEXT: std r3, -8(r1)
; CHECK-FISL-NEXT: lwz r4, -28(r1)
; CHECK-FISL-NEXT: ld r3, -48(r1)
; CHECK-FISL-NEXT: srd r3, r3, r4
; CHECK-FISL-NEXT: std r3, -16(r1)
; CHECK-FISL-NEXT: addi r3, r1, -16
; CHECK-FISL-NEXT: lxvd2x v2, 0, r3
; CHECK-FISL-NEXT: blr
;
; CHECK-LE-LABEL: test61:
; CHECK-LE: # %bb.0:
; CHECK-LE-NEXT: vsrd v2, v2, v3
; CHECK-LE-NEXT: blr
  %v = lshr <2 x i64> %a, %b
  ret <2 x i64> %v
; This should scalarize, and the current code quality is not good.
}

define <2 x i64> @test62(<2 x i64> %a, <2 x i64> %b) {
; CHECK-LABEL: test62:
; CHECK: # %bb.0:
; CHECK-NEXT: addi r3, r1, -32
; CHECK-NEXT: addi r4, r1, -48
; CHECK-NEXT: stxvd2x v3, 0, r3
; CHECK-NEXT: stxvd2x v2, 0, r4
; CHECK-NEXT: lwz r3, -20(r1)
; CHECK-NEXT: ld r4, -40(r1)
; CHECK-NEXT: srad r3, r4, r3
; CHECK-NEXT: ld r4, -48(r1)
; CHECK-NEXT: std r3, -8(r1)
; CHECK-NEXT: lwz r3, -28(r1)
; CHECK-NEXT: srad r3, r4, r3
; CHECK-NEXT: std r3, -16(r1)
; CHECK-NEXT: addi r3, r1, -16
; CHECK-NEXT: lxvd2x v2, 0, r3
; CHECK-NEXT: blr
;
; CHECK-REG-LABEL: test62:
; CHECK-REG: # %bb.0:
; CHECK-REG-NEXT: addi r3, r1, -32
; CHECK-REG-NEXT: addi r4, r1, -48
; CHECK-REG-NEXT: stxvd2x v3, 0, r3
; CHECK-REG-NEXT: stxvd2x v2, 0, r4
; CHECK-REG-NEXT: lwz r3, -20(r1)
; CHECK-REG-NEXT: ld r4, -40(r1)
; CHECK-REG-NEXT: srad r3, r4, r3
; CHECK-REG-NEXT: ld r4, -48(r1)
; CHECK-REG-NEXT: std r3, -8(r1)
; CHECK-REG-NEXT: lwz r3, -28(r1)
; CHECK-REG-NEXT: srad r3, r4, r3
; CHECK-REG-NEXT: std r3, -16(r1)
; CHECK-REG-NEXT: addi r3, r1, -16
; CHECK-REG-NEXT: lxvd2x v2, 0, r3
; CHECK-REG-NEXT: blr
;
; CHECK-FISL-LABEL: test62:
; CHECK-FISL: # %bb.0:
; CHECK-FISL-NEXT: addi r3, r1, -32
; CHECK-FISL-NEXT: stxvd2x v3, 0, r3
; CHECK-FISL-NEXT: addi r3, r1, -48
; CHECK-FISL-NEXT: stxvd2x v2, 0, r3
; CHECK-FISL-NEXT: lwz r4, -20(r1)
; CHECK-FISL-NEXT: ld r3, -40(r1)
; CHECK-FISL-NEXT: srad r3, r3, r4
; CHECK-FISL-NEXT: std r3, -8(r1)
; CHECK-FISL-NEXT: lwz r4, -28(r1)
; CHECK-FISL-NEXT: ld r3, -48(r1)
; CHECK-FISL-NEXT: srad r3, r3, r4
; CHECK-FISL-NEXT: std r3, -16(r1)
; CHECK-FISL-NEXT: addi r3, r1, -16
; CHECK-FISL-NEXT: lxvd2x v2, 0, r3
; CHECK-FISL-NEXT: blr
;
; CHECK-LE-LABEL: test62:
; CHECK-LE: # %bb.0:
; CHECK-LE-NEXT: vsrad v2, v2, v3
; CHECK-LE-NEXT: blr
  %v = ashr <2 x i64> %a, %b
  ret <2 x i64> %v
; This should scalarize, and the current code quality is not good.
}

define double @test63(<2 x double> %a) {
; CHECK-LABEL: test63:
; CHECK: # %bb.0:
; CHECK-NEXT: xxlor f1, v2, v2
; CHECK-NEXT: blr
;
; CHECK-REG-LABEL: test63:
; CHECK-REG: # %bb.0:
; CHECK-REG-NEXT: xxlor f1, v2, v2
; CHECK-REG-NEXT: blr
;
; CHECK-FISL-LABEL: test63:
; CHECK-FISL: # %bb.0:
; CHECK-FISL-NEXT: xxlor f1, v2, v2
; CHECK-FISL-NEXT: blr
;
; CHECK-LE-LABEL: test63:
; CHECK-LE: # %bb.0:
; CHECK-LE-NEXT: xxswapd vs1, v2
; CHECK-LE-NEXT: # kill: def $f1 killed $f1 killed $vsl1
; CHECK-LE-NEXT: blr
  %v = extractelement <2 x double> %a, i32 0
  ret double %v
}

define double @test64(<2 x double> %a) {
; CHECK-LABEL: test64:
; CHECK: # %bb.0:
; CHECK-NEXT: xxswapd vs1, v2
; CHECK-NEXT: # kill: def $f1 killed $f1 killed $vsl1
; CHECK-NEXT: blr
;
; CHECK-REG-LABEL: test64:
; CHECK-REG: # %bb.0:
; CHECK-REG-NEXT: xxswapd vs1, v2
; CHECK-REG-NEXT: # kill: def $f1 killed $f1 killed $vsl1
; CHECK-REG-NEXT: blr
;
; CHECK-FISL-LABEL: test64:
; CHECK-FISL: # %bb.0:
; CHECK-FISL-NEXT: xxswapd vs0, v2
; CHECK-FISL-NEXT: fmr f1, f0
; CHECK-FISL-NEXT: blr
;
; CHECK-LE-LABEL: test64:
; CHECK-LE: # %bb.0:
; CHECK-LE-NEXT: xxlor f1, v2, v2
; CHECK-LE-NEXT: blr
  %v = extractelement <2 x double> %a, i32 1
  ret double %v
}

define <2 x i1> @test65(<2 x i64> %a, <2 x i64> %b) {
; CHECK-LABEL: test65:
; CHECK: # %bb.0:
; CHECK-NEXT: vcmpequw v2, v2, v3
; CHECK-NEXT: blr
;
; CHECK-REG-LABEL: test65:
; CHECK-REG: # %bb.0:
; CHECK-REG-NEXT: vcmpequw v2, v2, v3
; CHECK-REG-NEXT: blr
;
; CHECK-FISL-LABEL: test65:
; CHECK-FISL: # %bb.0:
; CHECK-FISL-NEXT: vcmpequw v2, v2, v3
; CHECK-FISL-NEXT: blr
;
; CHECK-LE-LABEL: test65:
; CHECK-LE: # %bb.0:
; CHECK-LE-NEXT: vcmpequd v2, v2, v3
; CHECK-LE-NEXT: blr
  %w = icmp eq <2 x i64> %a, %b
  ret <2 x i1> %w
}

define <2 x i1> @test66(<2 x i64> %a, <2 x i64> %b) {
; CHECK-LABEL: test66:
; CHECK: # %bb.0:
; CHECK-NEXT: vcmpequw v2, v2, v3
; CHECK-NEXT: xxlnor v2, v2, v2
; CHECK-NEXT: blr
;
; CHECK-REG-LABEL: test66:
; CHECK-REG: # %bb.0:
; CHECK-REG-NEXT: vcmpequw v2, v2, v3
; CHECK-REG-NEXT: xxlnor v2, v2, v2
; CHECK-REG-NEXT: blr
;
; CHECK-FISL-LABEL: test66:
; CHECK-FISL: # %bb.0:
; CHECK-FISL-NEXT: vcmpequw v2, v2, v3
; CHECK-FISL-NEXT: xxlnor v2, v2, v2
; CHECK-FISL-NEXT: blr
;
; CHECK-LE-LABEL: test66:
; CHECK-LE: # %bb.0:
; CHECK-LE-NEXT: vcmpequd v2, v2, v3
; CHECK-LE-NEXT: xxlnor v2, v2, v2
; CHECK-LE-NEXT: blr
  %w = icmp ne <2 x i64> %a, %b
  ret <2 x i1> %w
}

define <2 x i1> @test67(<2 x i64> %a, <2 x i64> %b) {
; CHECK-LABEL: test67:
; CHECK: # %bb.0:
; CHECK-NEXT: addi r3, r1, -32
; CHECK-NEXT: addi r4, r1, -48
; CHECK-NEXT: stxvd2x v3, 0, r3
; CHECK-NEXT: stxvd2x v2, 0, r4
; CHECK-NEXT: ld r3, -24(r1)
; CHECK-NEXT: ld r4, -40(r1)
; CHECK-NEXT: ld r6, -48(r1)
; CHECK-NEXT: cmpld r4, r3
; CHECK-NEXT: li r3, 0
; CHECK-NEXT: li r4, -1
; CHECK-NEXT: isellt r5, r4, r3
; CHECK-NEXT: std r5, -8(r1)
; CHECK-NEXT: ld r5, -32(r1)
; CHECK-NEXT: cmpld r6, r5
; CHECK-NEXT: isellt r3, r4, r3
; CHECK-NEXT: std r3, -16(r1)
; CHECK-NEXT: addi r3, r1, -16
; CHECK-NEXT: lxvd2x v2, 0, r3
; CHECK-NEXT: blr
;
; CHECK-REG-LABEL: test67:
; CHECK-REG: # %bb.0:
; CHECK-REG-NEXT: addi r3, r1, -32
; CHECK-REG-NEXT: addi r4, r1, -48
; CHECK-REG-NEXT: stxvd2x v3, 0, r3
; CHECK-REG-NEXT: stxvd2x v2, 0, r4
; CHECK-REG-NEXT: ld r3, -24(r1)
; CHECK-REG-NEXT: ld r4, -40(r1)
; CHECK-REG-NEXT: ld r6, -48(r1)
; CHECK-REG-NEXT: cmpld r4, r3
; CHECK-REG-NEXT: li r3, 0
; CHECK-REG-NEXT: li r4, -1
; CHECK-REG-NEXT: isellt r5, r4, r3
; CHECK-REG-NEXT: std r5, -8(r1)
; CHECK-REG-NEXT: ld r5, -32(r1)
; CHECK-REG-NEXT: cmpld r6, r5
; CHECK-REG-NEXT: isellt r3, r4, r3
; CHECK-REG-NEXT: std r3, -16(r1)
; CHECK-REG-NEXT: addi r3, r1, -16
; CHECK-REG-NEXT: lxvd2x v2, 0, r3
; CHECK-REG-NEXT: blr
;
; CHECK-FISL-LABEL: test67:
; CHECK-FISL: # %bb.0:
; CHECK-FISL-NEXT: addi r3, r1, -32
; CHECK-FISL-NEXT: stxvd2x v3, 0, r3
; CHECK-FISL-NEXT: addi r3, r1, -48
; CHECK-FISL-NEXT: stxvd2x v2, 0, r3
; CHECK-FISL-NEXT: ld r4, -24(r1)
; CHECK-FISL-NEXT: ld r3, -40(r1)
; CHECK-FISL-NEXT: cmpld r3, r4
; CHECK-FISL-NEXT: li r4, 0
; CHECK-FISL-NEXT: li r3, -1
; CHECK-FISL-NEXT: isellt r5, r3, r4
; CHECK-FISL-NEXT: std r5, -8(r1)
; CHECK-FISL-NEXT: ld r6, -32(r1)
; CHECK-FISL-NEXT: ld r5, -48(r1)
; CHECK-FISL-NEXT: cmpld r5, r6
; CHECK-FISL-NEXT: isellt r3, r3, r4
; CHECK-FISL-NEXT: std r3, -16(r1)
; CHECK-FISL-NEXT: addi r3, r1, -16
; CHECK-FISL-NEXT: lxvd2x v2, 0, r3
; CHECK-FISL-NEXT: blr
;
; CHECK-LE-LABEL: test67:
; CHECK-LE: # %bb.0:
; CHECK-LE-NEXT: vcmpgtud v2, v3, v2
; CHECK-LE-NEXT: blr
  %w = icmp ult <2 x i64> %a, %b
  ret <2 x i1> %w
; This should scalarize, and the current code quality is not good.
}

define <2 x double> @test68(<2 x i32> %a) {
; CHECK-LABEL: test68:
; CHECK: # %bb.0:
; CHECK-NEXT: xxmrghw vs0, v2, v2
; CHECK-NEXT: xvcvsxwdp v2, vs0
; CHECK-NEXT: blr
;
; CHECK-REG-LABEL: test68:
; CHECK-REG: # %bb.0:
; CHECK-REG-NEXT: xxmrghw vs0, v2, v2
; CHECK-REG-NEXT: xvcvsxwdp v2, vs0
; CHECK-REG-NEXT: blr
;
; CHECK-FISL-LABEL: test68:
; CHECK-FISL: # %bb.0:
; CHECK-FISL-NEXT: xxmrghw vs0, v2, v2
; CHECK-FISL-NEXT: xvcvsxwdp v2, vs0
; CHECK-FISL-NEXT: blr
;
; CHECK-LE-LABEL: test68:
; CHECK-LE: # %bb.0:
; CHECK-LE-NEXT: xxmrglw v2, v2, v2
; CHECK-LE-NEXT: xvcvsxwdp v2, v2
; CHECK-LE-NEXT: blr
  %w = sitofp <2 x i32> %a to <2 x double>
  ret <2 x double> %w
}

; This gets scalarized so the code isn't great
define <2 x double> @test69(<2 x i16> %a) {
; CHECK-LABEL: test69:
; CHECK: # %bb.0:
; CHECK-NEXT: addis r3, r2, .LCPI63_0@toc@ha
; CHECK-NEXT: addi r3, r3, .LCPI63_0@toc@l
; CHECK-NEXT: lxvw4x v3, 0, r3
; CHECK-NEXT: addi r3, r1, -32
; CHECK-NEXT: vperm v2, v2, v2, v3
; CHECK-NEXT: stxvd2x v2, 0, r3
; CHECK-NEXT: lha r3, -18(r1)
; CHECK-NEXT: std r3, -8(r1)
; CHECK-NEXT: lha r3, -26(r1)
; CHECK-NEXT: std r3, -16(r1)
; CHECK-NEXT: addi r3, r1, -16
; CHECK-NEXT: lxvd2x v2, 0, r3
; CHECK-NEXT: xvcvsxddp v2, v2
; CHECK-NEXT: blr
;
; CHECK-REG-LABEL: test69:
; CHECK-REG: # %bb.0:
; CHECK-REG-NEXT: addis r3, r2, .LCPI63_0@toc@ha
; CHECK-REG-NEXT: addi r3, r3, .LCPI63_0@toc@l
; CHECK-REG-NEXT: lxvw4x v3, 0, r3
; CHECK-REG-NEXT: addi r3, r1, -32
; CHECK-REG-NEXT: vperm v2, v2, v2, v3
; CHECK-REG-NEXT: stxvd2x v2, 0, r3
; CHECK-REG-NEXT: lha r3, -18(r1)
; CHECK-REG-NEXT: std r3, -8(r1)
; CHECK-REG-NEXT: lha r3, -26(r1)
; CHECK-REG-NEXT: std r3, -16(r1)
; CHECK-REG-NEXT: addi r3, r1, -16
; CHECK-REG-NEXT: lxvd2x v2, 0, r3
; CHECK-REG-NEXT: xvcvsxddp v2, v2
; CHECK-REG-NEXT: blr
;
; CHECK-FISL-LABEL: test69:
; CHECK-FISL: # %bb.0:
; CHECK-FISL-NEXT: addis r3, r2, .LCPI63_0@toc@ha
; CHECK-FISL-NEXT: addi r3, r3, .LCPI63_0@toc@l
; CHECK-FISL-NEXT: lxvw4x v3, 0, r3
; CHECK-FISL-NEXT: vperm v2, v2, v2, v3
; CHECK-FISL-NEXT: xxlor vs0, v2, v2
; CHECK-FISL-NEXT: addi r3, r1, -32
; CHECK-FISL-NEXT: stxvd2x vs0, 0, r3
; CHECK-FISL-NEXT: lha r3, -18(r1)
; CHECK-FISL-NEXT: std r3, -8(r1)
; CHECK-FISL-NEXT: lha r3, -26(r1)
; CHECK-FISL-NEXT: std r3, -16(r1)
; CHECK-FISL-NEXT: addi r3, r1, -16
; CHECK-FISL-NEXT: lxvd2x v2, 0, r3
; CHECK-FISL-NEXT: xvcvsxddp v2, v2
; CHECK-FISL-NEXT: blr
;
; CHECK-LE-LABEL: test69:
; CHECK-LE: # %bb.0:
; CHECK-LE-NEXT: addis r3, r2, .LCPI63_0@toc@ha
; CHECK-LE-NEXT: addi r3, r3, .LCPI63_0@toc@l
; CHECK-LE-NEXT: lvx v3, 0, r3
; CHECK-LE-NEXT: addis r3, r2, .LCPI63_1@toc@ha
; CHECK-LE-NEXT: addi r3, r3, .LCPI63_1@toc@l
; CHECK-LE-NEXT: lxvd2x vs0, 0, r3
; CHECK-LE-NEXT: vperm v2, v2, v2, v3
; CHECK-LE-NEXT: xxswapd v3, vs0
; CHECK-LE-NEXT: vsld v2, v2, v3
; CHECK-LE-NEXT: vsrad v2, v2, v3
; CHECK-LE-NEXT: xvcvsxddp v2, v2
; CHECK-LE-NEXT: blr
  %w = sitofp <2 x i16> %a to <2 x double>
  ret <2 x double> %w
}

; This gets scalarized so the code isn't great
define <2 x double> @test70(<2 x i8> %a) {
; CHECK-LABEL: test70:
; CHECK: # %bb.0:
; CHECK-NEXT: addis r3, r2, .LCPI64_0@toc@ha
; CHECK-NEXT: addi r3, r3, .LCPI64_0@toc@l
; CHECK-NEXT: lxvw4x v3, 0, r3
; CHECK-NEXT: addi r3, r1, -32
; CHECK-NEXT: vperm v2, v2, v2, v3
; CHECK-NEXT: stxvd2x v2, 0, r3
; CHECK-NEXT: ld r3, -24(r1)
; CHECK-NEXT: extsb r3, r3
; CHECK-NEXT: std r3, -8(r1)
; CHECK-NEXT: ld r3, -32(r1)
; CHECK-NEXT: extsb r3, r3
; CHECK-NEXT: std r3, -16(r1)
; CHECK-NEXT: addi r3, r1, -16
; CHECK-NEXT: lxvd2x v2, 0, r3
; CHECK-NEXT: xvcvsxddp v2, v2
; CHECK-NEXT: blr
;
; CHECK-REG-LABEL: test70:
; CHECK-REG: # %bb.0:
; CHECK-REG-NEXT: addis r3, r2, .LCPI64_0@toc@ha
; CHECK-REG-NEXT: addi r3, r3, .LCPI64_0@toc@l
; CHECK-REG-NEXT: lxvw4x v3, 0, r3
; CHECK-REG-NEXT: addi r3, r1, -32
; CHECK-REG-NEXT: vperm v2, v2, v2, v3
; CHECK-REG-NEXT: stxvd2x v2, 0, r3
; CHECK-REG-NEXT: ld r3, -24(r1)
; CHECK-REG-NEXT: extsb r3, r3
; CHECK-REG-NEXT: std r3, -8(r1)
; CHECK-REG-NEXT: ld r3, -32(r1)
; CHECK-REG-NEXT: extsb r3, r3
; CHECK-REG-NEXT: std r3, -16(r1)
; CHECK-REG-NEXT: addi r3, r1, -16
; CHECK-REG-NEXT: lxvd2x v2, 0, r3
; CHECK-REG-NEXT: xvcvsxddp v2, v2
; CHECK-REG-NEXT: blr
;
; CHECK-FISL-LABEL: test70:
; CHECK-FISL: # %bb.0:
; CHECK-FISL-NEXT: addis r3, r2, .LCPI64_0@toc@ha
; CHECK-FISL-NEXT: addi r3, r3, .LCPI64_0@toc@l
; CHECK-FISL-NEXT: lxvw4x v3, 0, r3
; CHECK-FISL-NEXT: vperm v2, v2, v2, v3
; CHECK-FISL-NEXT: xxlor vs0, v2, v2
; CHECK-FISL-NEXT: addi r3, r1, -32
; CHECK-FISL-NEXT: stxvd2x vs0, 0, r3
; CHECK-FISL-NEXT: ld r3, -24(r1)
; CHECK-FISL-NEXT: extsb r3, r3
; CHECK-FISL-NEXT: std r3, -8(r1)
; CHECK-FISL-NEXT: ld r3, -32(r1)
; CHECK-FISL-NEXT: extsb r3, r3
; CHECK-FISL-NEXT: std r3, -16(r1)
; CHECK-FISL-NEXT: addi r3, r1, -16
; CHECK-FISL-NEXT: lxvd2x v2, 0, r3
; CHECK-FISL-NEXT: xvcvsxddp v2, v2
; CHECK-FISL-NEXT: blr
;
; CHECK-LE-LABEL: test70:
; CHECK-LE: # %bb.0:
; CHECK-LE-NEXT: addis r3, r2, .LCPI64_0@toc@ha
; CHECK-LE-NEXT: addi r3, r3, .LCPI64_0@toc@l
; CHECK-LE-NEXT: lvx v3, 0, r3
; CHECK-LE-NEXT: addis r3, r2, .LCPI64_1@toc@ha
; CHECK-LE-NEXT: addi r3, r3, .LCPI64_1@toc@l
; CHECK-LE-NEXT: lxvd2x vs0, 0, r3
; CHECK-LE-NEXT: vperm v2, v2, v2, v3
; CHECK-LE-NEXT: xxswapd v3, vs0
; CHECK-LE-NEXT: vsld v2, v2, v3
; CHECK-LE-NEXT: vsrad v2, v2, v3
; CHECK-LE-NEXT: xvcvsxddp v2, v2
; CHECK-LE-NEXT: blr
  %w = sitofp <2 x i8> %a to <2 x double>
  ret <2 x double> %w
}

; This gets scalarized so the code isn't great
define <2 x i32> @test80(i32 %v) {
|
2019-03-20 03:01:34 +08:00
|
|
|
; CHECK-LABEL: test80:
|
|
|
|
; CHECK: # %bb.0:
|
|
|
|
; CHECK-NEXT: addi r4, r1, -16
|
|
|
|
; CHECK-NEXT: stw r3, -16(r1)
|
|
|
|
; CHECK-NEXT: addis r3, r2, .LCPI65_0@toc@ha
|
|
|
|
; CHECK-NEXT: lxvw4x vs0, 0, r4
|
|
|
|
; CHECK-NEXT: addi r3, r3, .LCPI65_0@toc@l
|
|
|
|
; CHECK-NEXT: lxvw4x v3, 0, r3
|
|
|
|
; CHECK-NEXT: xxspltw v2, vs0, 0
|
|
|
|
; CHECK-NEXT: vadduwm v2, v2, v3
|
|
|
|
; CHECK-NEXT: blr
|
|
|
|
;
|
|
|
|
; CHECK-REG-LABEL: test80:
|
|
|
|
; CHECK-REG: # %bb.0:
|
|
|
|
; CHECK-REG-NEXT: addi r4, r1, -16
|
|
|
|
; CHECK-REG-NEXT: stw r3, -16(r1)
|
|
|
|
; CHECK-REG-NEXT: addis r3, r2, .LCPI65_0@toc@ha
|
|
|
|
; CHECK-REG-NEXT: lxvw4x vs0, 0, r4
|
|
|
|
; CHECK-REG-NEXT: addi r3, r3, .LCPI65_0@toc@l
|
|
|
|
; CHECK-REG-NEXT: lxvw4x v3, 0, r3
|
|
|
|
; CHECK-REG-NEXT: xxspltw v2, vs0, 0
|
|
|
|
; CHECK-REG-NEXT: vadduwm v2, v2, v3
|
|
|
|
; CHECK-REG-NEXT: blr
|
|
|
|
;
|
|
|
|
; CHECK-FISL-LABEL: test80:
|
|
|
|
; CHECK-FISL: # %bb.0:
|
2019-05-16 20:50:39 +08:00
|
|
|
; CHECK-FISL-NEXT: # kill: def $r3 killed $r3 killed $x3
|
|
|
|
; CHECK-FISL-NEXT: stw r3, -16(r1)
|
2020-09-15 21:16:14 +08:00
|
|
|
; CHECK-FISL-NEXT: addi r3, r1, -16
|
|
|
|
; CHECK-FISL-NEXT: lxvw4x vs0, 0, r3
|
2019-03-20 03:01:34 +08:00
|
|
|
; CHECK-FISL-NEXT: xxspltw v2, vs0, 0
|
2020-09-15 21:16:14 +08:00
|
|
|
; CHECK-FISL-NEXT: addis r3, r2, .LCPI65_0@toc@ha
|
|
|
|
; CHECK-FISL-NEXT: addi r3, r3, .LCPI65_0@toc@l
|
|
|
|
; CHECK-FISL-NEXT: lxvw4x v3, 0, r3
|
2019-03-20 03:01:34 +08:00
|
|
|
; CHECK-FISL-NEXT: vadduwm v2, v2, v3
|
|
|
|
; CHECK-FISL-NEXT: blr
|
|
|
|
;
|
|
|
|
; CHECK-LE-LABEL: test80:
|
|
|
|
; CHECK-LE: # %bb.0:
|
2020-06-19 10:53:50 +08:00
|
|
|
; CHECK-LE-NEXT: mtfprwz f0, r3
|
2019-03-20 03:01:34 +08:00
|
|
|
; CHECK-LE-NEXT: addis r4, r2, .LCPI65_0@toc@ha
|
|
|
|
; CHECK-LE-NEXT: addi r3, r4, .LCPI65_0@toc@l
|
2020-06-19 10:53:50 +08:00
|
|
|
; CHECK-LE-NEXT: xxspltw v2, vs0, 1
|
2019-03-20 03:01:34 +08:00
|
|
|
; CHECK-LE-NEXT: lvx v3, 0, r3
|
|
|
|
; CHECK-LE-NEXT: vadduwm v2, v2, v3
|
|
|
|
; CHECK-LE-NEXT: blr
  %b1 = insertelement <2 x i32> undef, i32 %v, i32 0
  %b2 = shufflevector <2 x i32> %b1, <2 x i32> undef, <2 x i32> zeroinitializer
  %i = add <2 x i32> %b2, <i32 2, i32 3>
  ret <2 x i32> %i
}

define <2 x double> @test81(<4 x float> %b) {
; CHECK-LABEL: test81:
; CHECK: # %bb.0:
; CHECK-NEXT: blr
;
; CHECK-REG-LABEL: test81:
; CHECK-REG: # %bb.0:
; CHECK-REG-NEXT: blr
;
; CHECK-FISL-LABEL: test81:
; CHECK-FISL: # %bb.0:
; CHECK-FISL-NEXT: blr
;
; CHECK-LE-LABEL: test81:
; CHECK-LE: # %bb.0:
; CHECK-LE-NEXT: blr
  %w = bitcast <4 x float> %b to <2 x double>
  ret <2 x double> %w
}

define double @test82(double %a, double %b, double %c, double %d) {
; CHECK-LABEL: test82:
; CHECK: # %bb.0: # %entry
; CHECK-NEXT: xscmpudp cr0, f3, f4
; CHECK-NEXT: beqlr cr0
; CHECK-NEXT: # %bb.1: # %entry
; CHECK-NEXT: fmr f1, f2
; CHECK-NEXT: blr
;
; CHECK-REG-LABEL: test82:
; CHECK-REG: # %bb.0: # %entry
; CHECK-REG-NEXT: xscmpudp cr0, f3, f4
; CHECK-REG-NEXT: beqlr cr0
; CHECK-REG-NEXT: # %bb.1: # %entry
; CHECK-REG-NEXT: fmr f1, f2
; CHECK-REG-NEXT: blr
;
; CHECK-FISL-LABEL: test82:
; CHECK-FISL: # %bb.0: # %entry
; CHECK-FISL-NEXT: stfd f2, -16(r1) # 8-byte Folded Spill
; CHECK-FISL-NEXT: fmr f2, f1
; CHECK-FISL-NEXT: xscmpudp cr0, f3, f4
; CHECK-FISL-NEXT: stfd f2, -8(r1) # 8-byte Folded Spill
; CHECK-FISL-NEXT: beq cr0, .LBB67_2
; CHECK-FISL-NEXT: # %bb.1: # %entry
; CHECK-FISL-NEXT: lfd f0, -16(r1) # 8-byte Folded Reload
; CHECK-FISL-NEXT: stfd f0, -8(r1) # 8-byte Folded Spill
; CHECK-FISL-NEXT: .LBB67_2: # %entry
; CHECK-FISL-NEXT: lfd f1, -8(r1) # 8-byte Folded Reload
; CHECK-FISL-NEXT: blr
;
; CHECK-LE-LABEL: test82:
; CHECK-LE: # %bb.0: # %entry
; CHECK-LE-NEXT: xscmpudp cr0, f3, f4
; CHECK-LE-NEXT: beqlr cr0
; CHECK-LE-NEXT: # %bb.1: # %entry
; CHECK-LE-NEXT: fmr f1, f2
; CHECK-LE-NEXT: blr
entry:
  %m = fcmp oeq double %c, %d
  %v = select i1 %m, double %a, double %b
  ret double %v
}