; RUN: llc -relocation-model=static -verify-machineinstrs -mcpu=pwr7 \
; RUN:   -mtriple=powerpc64-unknown-linux-gnu -mattr=+vsx \
; RUN:   -ppc-vsr-nums-as-vr -ppc-asm-full-reg-names < %s | FileCheck %s
; RUN: llc -relocation-model=static -verify-machineinstrs -mcpu=pwr7 \
; RUN:   -mtriple=powerpc64-unknown-linux-gnu -mattr=+vsx \
; RUN:   -ppc-vsr-nums-as-vr -ppc-asm-full-reg-names < %s | FileCheck \
; RUN:   -check-prefix=CHECK-REG %s
; RUN: llc -relocation-model=static -verify-machineinstrs -mcpu=pwr7 \
; RUN:   -mtriple=powerpc64-unknown-linux-gnu -mattr=+vsx -fast-isel -O0 \
; RUN:   -ppc-vsr-nums-as-vr -ppc-asm-full-reg-names < %s | FileCheck \
; RUN:   -check-prefix=CHECK-FISL %s
; RUN: llc -relocation-model=static -verify-machineinstrs -mcpu=pwr8 \
; RUN:   -mtriple=powerpc64le-unknown-linux-gnu -mattr=+vsx \
; RUN:   -ppc-vsr-nums-as-vr -ppc-asm-full-reg-names < %s | FileCheck \
; RUN:   -check-prefix=CHECK-LE %s

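; test1-test3: scalar double-precision fmul/fdiv/fadd should select the
; VSX scalar arithmetic instructions (xsmuldp/xsdivdp/xsadddp) on both
; big- and little-endian targets.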
define double @test1(double %a, double %b) {
entry:
  %v = fmul double %a, %b
  ret double %v

; CHECK-LABEL: @test1
; CHECK: xsmuldp f1, f1, f2
; CHECK: blr

; CHECK-LE-LABEL: @test1
; CHECK-LE: xsmuldp f1, f1, f2
; CHECK-LE: blr
}

define double @test2(double %a, double %b) {
entry:
  %v = fdiv double %a, %b
  ret double %v

; CHECK-LABEL: @test2
; CHECK: xsdivdp f1, f1, f2
; CHECK: blr

; CHECK-LE-LABEL: @test2
; CHECK-LE: xsdivdp f1, f1, f2
; CHECK-LE: blr
}

define double @test3(double %a, double %b) {
entry:
  %v = fadd double %a, %b
  ret double %v

; CHECK-LABEL: @test3
; CHECK: xsadddp f1, f1, f2
; CHECK: blr

; CHECK-LE-LABEL: @test3
; CHECK-LE: xsadddp f1, f1, f2
; CHECK-LE: blr
}

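; test4: a <2 x double> fadd should select the VSX vector xvadddp.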
define <2 x double> @test4(<2 x double> %a, <2 x double> %b) {
entry:
  %v = fadd <2 x double> %a, %b
  ret <2 x double> %v

; CHECK-LABEL: @test4
; CHECK: xvadddp v2, v2, v3
; CHECK: blr

; CHECK-LE-LABEL: @test4
; CHECK-LE: xvadddp v2, v2, v3
; CHECK-LE: blr
}

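; test5-test13: vector xor/or/and are bit-parallel, so every element type
; (v4i32, v8i16, v16i8) should map to the same VSX logical instructions:
; xxlxor, xxlor, and xxland.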
define <4 x i32> @test5(<4 x i32> %a, <4 x i32> %b) {
entry:
  %v = xor <4 x i32> %a, %b
  ret <4 x i32> %v

; CHECK-REG-LABEL: @test5
; CHECK-REG: xxlxor v2, v2, v3
; CHECK-REG: blr

; CHECK-FISL-LABEL: @test5
; CHECK-FISL: xxlxor v2, v2, v3
; CHECK-FISL: blr

; CHECK-LE-LABEL: @test5
; CHECK-LE: xxlxor v2, v2, v3
; CHECK-LE: blr
}

define <8 x i16> @test6(<8 x i16> %a, <8 x i16> %b) {
entry:
  %v = xor <8 x i16> %a, %b
  ret <8 x i16> %v

; CHECK-REG-LABEL: @test6
; CHECK-REG: xxlxor v2, v2, v3
; CHECK-REG: blr

; CHECK-FISL-LABEL: @test6
; CHECK-FISL: xxlxor v2, v2, v3
; CHECK-FISL: blr

; CHECK-LE-LABEL: @test6
; CHECK-LE: xxlxor v2, v2, v3
; CHECK-LE: blr
}

define <16 x i8> @test7(<16 x i8> %a, <16 x i8> %b) {
entry:
  %v = xor <16 x i8> %a, %b
  ret <16 x i8> %v

; CHECK-REG-LABEL: @test7
; CHECK-REG: xxlxor v2, v2, v3
; CHECK-REG: blr

; CHECK-FISL-LABEL: @test7
; CHECK-FISL: xxlxor v2, v2, v3
; CHECK-FISL: blr

; CHECK-LE-LABEL: @test7
; CHECK-LE: xxlxor v2, v2, v3
; CHECK-LE: blr
}

define <4 x i32> @test8(<4 x i32> %a, <4 x i32> %b) {
entry:
  %v = or <4 x i32> %a, %b
  ret <4 x i32> %v

; CHECK-REG-LABEL: @test8
; CHECK-REG: xxlor v2, v2, v3
; CHECK-REG: blr

; CHECK-FISL-LABEL: @test8
; CHECK-FISL: xxlor v2, v2, v3
; CHECK-FISL: blr

; CHECK-LE-LABEL: @test8
; CHECK-LE: xxlor v2, v2, v3
; CHECK-LE: blr
}

define <8 x i16> @test9(<8 x i16> %a, <8 x i16> %b) {
entry:
  %v = or <8 x i16> %a, %b
  ret <8 x i16> %v

; CHECK-REG-LABEL: @test9
; CHECK-REG: xxlor v2, v2, v3
; CHECK-REG: blr

; CHECK-FISL-LABEL: @test9
; CHECK-FISL: xxlor v2, v2, v3
; CHECK-FISL: blr

; CHECK-LE-LABEL: @test9
; CHECK-LE: xxlor v2, v2, v3
; CHECK-LE: blr
}

define <16 x i8> @test10(<16 x i8> %a, <16 x i8> %b) {
entry:
  %v = or <16 x i8> %a, %b
  ret <16 x i8> %v

; CHECK-REG-LABEL: @test10
; CHECK-REG: xxlor v2, v2, v3
; CHECK-REG: blr

; CHECK-FISL-LABEL: @test10
; CHECK-FISL: xxlor v2, v2, v3
; CHECK-FISL: blr

; CHECK-LE-LABEL: @test10
; CHECK-LE: xxlor v2, v2, v3
; CHECK-LE: blr
}

define <4 x i32> @test11(<4 x i32> %a, <4 x i32> %b) {
entry:
  %v = and <4 x i32> %a, %b
  ret <4 x i32> %v

; CHECK-REG-LABEL: @test11
; CHECK-REG: xxland v2, v2, v3
; CHECK-REG: blr

; CHECK-FISL-LABEL: @test11
; CHECK-FISL: xxland v2, v2, v3
; CHECK-FISL: blr

; CHECK-LE-LABEL: @test11
; CHECK-LE: xxland v2, v2, v3
; CHECK-LE: blr
}

define <8 x i16> @test12(<8 x i16> %a, <8 x i16> %b) {
entry:
  %v = and <8 x i16> %a, %b
  ret <8 x i16> %v

; CHECK-REG-LABEL: @test12
; CHECK-REG: xxland v2, v2, v3
; CHECK-REG: blr

; CHECK-FISL-LABEL: @test12
; CHECK-FISL: xxland v2, v2, v3
; CHECK-FISL: blr

; CHECK-LE-LABEL: @test12
; CHECK-LE: xxland v2, v2, v3
; CHECK-LE: blr
}

define <16 x i8> @test13(<16 x i8> %a, <16 x i8> %b) {
entry:
  %v = and <16 x i8> %a, %b
  ret <16 x i8> %v

; CHECK-REG-LABEL: @test13
; CHECK-REG: xxland v2, v2, v3
; CHECK-REG: blr

; CHECK-FISL-LABEL: @test13
; CHECK-FISL: xxland v2, v2, v3
; CHECK-FISL: blr

; CHECK-LE-LABEL: @test13
; CHECK-LE: xxland v2, v2, v3
; CHECK-LE: blr
}

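; test14-test16: an or feeding a xor with all-ones should fold into a
; single xxlnor. Fast-isel still emits redundant xxlor copies and a spill
; to the stack, but must not materialize the constant with lis/ori.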
define <4 x i32> @test14(<4 x i32> %a, <4 x i32> %b) {
entry:
  %v = or <4 x i32> %a, %b
  %w = xor <4 x i32> %v, <i32 -1, i32 -1, i32 -1, i32 -1>
  ret <4 x i32> %w

; CHECK-REG-LABEL: @test14
; CHECK-REG: xxlnor v2, v2, v3
; CHECK-REG: blr

; CHECK-FISL-LABEL: @test14
; CHECK-FISL: xxlor vs0, v2, v3
; CHECK-FISL: xxlnor v2, v2, v3
; CHECK-FISL-NOT: lis
; CHECK-FISL-NOT: ori
; CHECK-FISL: li r3, -16
; CHECK-FISL-NOT: lis
; CHECK-FISL-NOT: ori
; CHECK-FISL: stxvd2x vs0, r1, r3
; CHECK-FISL: blr

; CHECK-LE-LABEL: @test14
; CHECK-LE: xxlnor v2, v2, v3
; CHECK-LE: blr
}

define <8 x i16> @test15(<8 x i16> %a, <8 x i16> %b) {
entry:
  %v = or <8 x i16> %a, %b
  %w = xor <8 x i16> %v, <i16 -1, i16 -1, i16 -1, i16 -1, i16 -1, i16 -1, i16 -1, i16 -1>
  ret <8 x i16> %w

; CHECK-REG-LABEL: @test15
; CHECK-REG: xxlnor v2, v2, v3
; CHECK-REG: blr

; CHECK-FISL-LABEL: @test15
; CHECK-FISL: xxlor vs0, v2, v3
; CHECK-FISL: xxlor v4, vs0, vs0
; CHECK-FISL: xxlnor vs0, v2, v3
; CHECK-FISL: xxlor v2, vs0, vs0
; CHECK-FISL-NOT: lis
; CHECK-FISL-NOT: ori
; CHECK-FISL: li r3, -16
; CHECK-FISL-NOT: lis
; CHECK-FISL-NOT: ori
; CHECK-FISL: stxvd2x v4, r1, r3
; CHECK-FISL: blr

; CHECK-LE-LABEL: @test15
; CHECK-LE: xxlnor v2, v2, v3
; CHECK-LE: blr
}

define <16 x i8> @test16(<16 x i8> %a, <16 x i8> %b) {
entry:
  %v = or <16 x i8> %a, %b
  %w = xor <16 x i8> %v, <i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1>
  ret <16 x i8> %w

; CHECK-REG-LABEL: @test16
; CHECK-REG: xxlnor v2, v2, v3
; CHECK-REG: blr

; CHECK-FISL-LABEL: @test16
; CHECK-FISL: xxlor vs0, v2, v3
; CHECK-FISL: xxlor v4, vs0, vs0
; CHECK-FISL: xxlnor vs0, v2, v3
; CHECK-FISL: xxlor v2, vs0, vs0
; CHECK-FISL-NOT: lis
; CHECK-FISL-NOT: ori
; CHECK-FISL: li r3, -16
; CHECK-FISL-NOT: lis
; CHECK-FISL-NOT: ori
; CHECK-FISL: stxvd2x v4, r1, r3
; CHECK-FISL: blr

; CHECK-LE-LABEL: @test16
; CHECK-LE: xxlnor v2, v2, v3
; CHECK-LE: blr
}

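; test17-test19: an and with a complemented operand should fold into a
; single xxlandc; fast-isel instead emits an explicit xxlnor first.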
define <4 x i32> @test17(<4 x i32> %a, <4 x i32> %b) {
entry:
  %w = xor <4 x i32> %b, <i32 -1, i32 -1, i32 -1, i32 -1>
  %v = and <4 x i32> %a, %w
  ret <4 x i32> %v

; CHECK-REG-LABEL: @test17
; CHECK-REG: xxlandc v2, v2, v3
; CHECK-REG: blr

; CHECK-FISL-LABEL: @test17
; CHECK-FISL: xxlnor v3, v3, v3
; CHECK-FISL: xxland v2, v2, v3
; CHECK-FISL: blr

; CHECK-LE-LABEL: @test17
; CHECK-LE: xxlandc v2, v2, v3
; CHECK-LE: blr
}

define <8 x i16> @test18(<8 x i16> %a, <8 x i16> %b) {
entry:
  %w = xor <8 x i16> %b, <i16 -1, i16 -1, i16 -1, i16 -1, i16 -1, i16 -1, i16 -1, i16 -1>
  %v = and <8 x i16> %a, %w
  ret <8 x i16> %v

; CHECK-REG-LABEL: @test18
; CHECK-REG: xxlandc v2, v2, v3
; CHECK-REG: blr

; CHECK-FISL-LABEL: @test18
; CHECK-FISL: xxlnor vs0, v3, v3
; CHECK-FISL: xxlor v4, vs0, vs0
; CHECK-FISL: xxlandc vs0, v2, v3
; CHECK-FISL: xxlor v2, vs0, vs0
; CHECK-FISL-NOT: lis
; CHECK-FISL-NOT: ori
; CHECK-FISL: li r3, -16
; CHECK-FISL-NOT: lis
; CHECK-FISL-NOT: ori
; CHECK-FISL: stxvd2x v4, r1, r3
; CHECK-FISL: blr

; CHECK-LE-LABEL: @test18
; CHECK-LE: xxlandc v2, v2, v3
; CHECK-LE: blr
}

define <16 x i8> @test19(<16 x i8> %a, <16 x i8> %b) {
entry:
  %w = xor <16 x i8> %b, <i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1>
  %v = and <16 x i8> %a, %w
  ret <16 x i8> %v

; CHECK-REG-LABEL: @test19
; CHECK-REG: xxlandc v2, v2, v3
; CHECK-REG: blr

; CHECK-FISL-LABEL: @test19
; CHECK-FISL: xxlnor vs0, v3, v3
; CHECK-FISL: xxlor v4, vs0, vs0
; CHECK-FISL: xxlandc vs0, v2, v3
; CHECK-FISL: xxlor v2, vs0, vs0
; CHECK-FISL-NOT: lis
; CHECK-FISL-NOT: ori
; CHECK-FISL: li r3, -16
; CHECK-FISL-NOT: lis
; CHECK-FISL-NOT: ori
; CHECK-FISL: stxvd2x v4, r1, r3
; CHECK-FISL: blr

; CHECK-LE-LABEL: @test19
; CHECK-LE: xxlandc v2, v2, v3
; CHECK-LE: blr
}

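; test20-test25: vector selects lower to a vector compare (vcmpequ* or
; xvcmpeq*p) feeding an xxsel. The unordered ueq compare in test22 is
; built from three xvcmpeqsp results combined with xxlnor/xxlor.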
define <4 x i32> @test20(<4 x i32> %a, <4 x i32> %b, <4 x i32> %c, <4 x i32> %d) {
entry:
  %m = icmp eq <4 x i32> %c, %d
  %v = select <4 x i1> %m, <4 x i32> %a, <4 x i32> %b
  ret <4 x i32> %v

; CHECK-REG-LABEL: @test20
; CHECK-REG: vcmpequw v4, v4, v5
; CHECK-REG: xxsel v2, v3, v2, v4
; CHECK-REG: blr

; CHECK-FISL-LABEL: @test20
; CHECK-FISL: vcmpequw v4, v4, v5
; CHECK-FISL: xxsel v2, v3, v2, v4
; CHECK-FISL: blr

; CHECK-LE-LABEL: @test20
; CHECK-LE: vcmpequw v4, v4, v5
; CHECK-LE: xxsel v2, v3, v2, v4
; CHECK-LE: blr
}

define <4 x float> @test21(<4 x float> %a, <4 x float> %b, <4 x float> %c, <4 x float> %d) {
entry:
  %m = fcmp oeq <4 x float> %c, %d
  %v = select <4 x i1> %m, <4 x float> %a, <4 x float> %b
  ret <4 x float> %v

; CHECK-REG-LABEL: @test21
; CHECK-REG: xvcmpeqsp vs0, v4, v5
; CHECK-REG: xxsel v2, v3, v2, vs0
; CHECK-REG: blr

; CHECK-FISL-LABEL: @test21
; CHECK-FISL: xvcmpeqsp v4, v4, v5
; CHECK-FISL: xxsel v2, v3, v2, v4
; CHECK-FISL: blr

; CHECK-LE-LABEL: @test21
; CHECK-LE: xvcmpeqsp vs0, v4, v5
; CHECK-LE: xxsel v2, v3, v2, vs0
; CHECK-LE: blr
}

define <4 x float> @test22(<4 x float> %a, <4 x float> %b, <4 x float> %c, <4 x float> %d) {
entry:
  %m = fcmp ueq <4 x float> %c, %d
  %v = select <4 x i1> %m, <4 x float> %a, <4 x float> %b
  ret <4 x float> %v

; CHECK-REG-LABEL: @test22
; CHECK-REG-DAG: xvcmpeqsp vs0, v5, v5
; CHECK-REG-DAG: xvcmpeqsp vs1, v4, v4
; CHECK-REG-DAG: xvcmpeqsp vs2, v4, v5
; CHECK-REG-DAG: xxlnor vs0, vs0, vs0
; CHECK-REG-DAG: xxlnor vs1, vs1, vs1
; CHECK-REG-DAG: xxlor vs0, vs1, vs0
; CHECK-REG-DAG: xxlor vs0, vs2, vs0
; CHECK-REG: xxsel v2, v3, v2, vs0
; CHECK-REG: blr

; CHECK-FISL-LABEL: @test22
; CHECK-FISL-DAG: xvcmpeqsp vs0, v4, v5
; CHECK-FISL-DAG: xvcmpeqsp v5, v5, v5
; CHECK-FISL-DAG: xvcmpeqsp v4, v4, v4
; CHECK-FISL-DAG: xxlnor v5, v5, v5
; CHECK-FISL-DAG: xxlnor v4, v4, v4
; CHECK-FISL-DAG: xxlor v4, v4, v5
; CHECK-FISL-DAG: xxlor vs0, vs0, v4
; CHECK-FISL: xxsel v2, v3, v2, vs0
; CHECK-FISL: blr

; CHECK-LE-LABEL: @test22
; CHECK-LE-DAG: xvcmpeqsp vs0, v5, v5
; CHECK-LE-DAG: xvcmpeqsp vs1, v4, v4
; CHECK-LE-DAG: xvcmpeqsp vs2, v4, v5
; CHECK-LE-DAG: xxlnor vs0, vs0, vs0
; CHECK-LE-DAG: xxlnor vs1, vs1, vs1
; CHECK-LE-DAG: xxlor vs0, vs1, vs0
; CHECK-LE-DAG: xxlor vs0, vs2, vs0
; CHECK-LE: xxsel v2, v3, v2, vs0
; CHECK-LE: blr
}

define <8 x i16> @test23(<8 x i16> %a, <8 x i16> %b, <8 x i16> %c, <8 x i16> %d) {
entry:
  %m = icmp eq <8 x i16> %c, %d
  %v = select <8 x i1> %m, <8 x i16> %a, <8 x i16> %b
  ret <8 x i16> %v

; CHECK-REG-LABEL: @test23
; CHECK-REG: vcmpequh v4, v4, v5
; CHECK-REG: xxsel v2, v3, v2, v4
; CHECK-REG: blr

; CHECK-FISL-LABEL: @test23
; CHECK-FISL: vcmpequh v4, v4, v5
; CHECK-FISL: xxsel v2, v3, v2, v4
; CHECK-FISL: blr

; CHECK-LE-LABEL: @test23
; CHECK-LE: vcmpequh v4, v4, v5
; CHECK-LE: xxsel v2, v3, v2, v4
; CHECK-LE: blr
}

define <16 x i8> @test24(<16 x i8> %a, <16 x i8> %b, <16 x i8> %c, <16 x i8> %d) {
entry:
  %m = icmp eq <16 x i8> %c, %d
  %v = select <16 x i1> %m, <16 x i8> %a, <16 x i8> %b
  ret <16 x i8> %v

; CHECK-REG-LABEL: @test24
; CHECK-REG: vcmpequb v4, v4, v5
; CHECK-REG: xxsel v2, v3, v2, v4
; CHECK-REG: blr

; CHECK-FISL-LABEL: @test24
; CHECK-FISL: vcmpequb v4, v4, v5
; CHECK-FISL: xxsel v2, v3, v2, v4
; CHECK-FISL: blr

; CHECK-LE-LABEL: @test24
; CHECK-LE: vcmpequb v4, v4, v5
; CHECK-LE: xxsel v2, v3, v2, v4
; CHECK-LE: blr
}

define <2 x double> @test25(<2 x double> %a, <2 x double> %b, <2 x double> %c, <2 x double> %d) {
entry:
  %m = fcmp oeq <2 x double> %c, %d
  %v = select <2 x i1> %m, <2 x double> %a, <2 x double> %b
  ret <2 x double> %v

; CHECK-LABEL: @test25
; CHECK: xvcmpeqdp vs0, v4, v5
; CHECK: xxsel v2, v3, v2, vs0
; CHECK: blr

; CHECK-LE-LABEL: @test25
; CHECK-LE: xvcmpeqdp v4, v4, v5
; CHECK-LE: xxsel v2, v3, v2, v4
; CHECK-LE: blr
}

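; test26/test27: basic <2 x i64> arithmetic. On big-endian pwr7 the add is
; still scalarized through the stack (see the FIXME below), while the
; little-endian pwr8 run selects vaddudm; the and goes directly to xxland.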
define <2 x i64> @test26(<2 x i64> %a, <2 x i64> %b) {
  %v = add <2 x i64> %a, %b
  ret <2 x i64> %v

; CHECK-LABEL: @test26

; Make sure we use only two stores (one for each operand).
; CHECK: stxvd2x v3, 0, r3
; CHECK: stxvd2x v2, 0, r4
; CHECK-NOT: stxvd2x

; FIXME: The code quality here is not good; just make sure we do something for now.
; CHECK: add r3, r4, r3
; CHECK: add r3, r4, r3
; CHECK: blr

; CHECK-LE: vaddudm v2, v2, v3
; CHECK-LE: blr
}

define <2 x i64> @test27(<2 x i64> %a, <2 x i64> %b) {
  %v = and <2 x i64> %a, %b
  ret <2 x i64> %v

; CHECK-LABEL: @test27
; CHECK: xxland v2, v2, v3
; CHECK: blr

; CHECK-LE-LABEL: @test27
; CHECK-LE: xxland v2, v2, v3
; CHECK-LE: blr
}

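; test28-test31: aligned and unaligned <2 x double> and <2 x i64> loads and
; stores should use lxvd2x/stxvd2x. Little-endian targets need an xxswapd
; around each lxvd2x/stxvd2x to restore the element order.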
define <2 x double> @test28(<2 x double>* %a) {
  %v = load <2 x double>, <2 x double>* %a, align 16
  ret <2 x double> %v

; CHECK-LABEL: @test28
; CHECK: lxvd2x v2, 0, r3
; CHECK: blr

; CHECK-LE-LABEL: @test28
; CHECK-LE: lxvd2x vs0, 0, r3
; CHECK-LE: xxswapd v2, vs0
; CHECK-LE: blr
}

define void @test29(<2 x double>* %a, <2 x double> %b) {
  store <2 x double> %b, <2 x double>* %a, align 16
  ret void

; CHECK-LABEL: @test29
; CHECK: stxvd2x v2, 0, r3
; CHECK: blr

; CHECK-LE-LABEL: @test29
; CHECK-LE: xxswapd vs0, v2
; CHECK-LE: stxvd2x vs0, 0, r3
; CHECK-LE: blr
}

define <2 x double> @test28u(<2 x double>* %a) {
  %v = load <2 x double>, <2 x double>* %a, align 8
  ret <2 x double> %v

; CHECK-LABEL: @test28u
; CHECK: lxvd2x v2, 0, r3
; CHECK: blr

; CHECK-LE-LABEL: @test28u
; CHECK-LE: lxvd2x vs0, 0, r3
; CHECK-LE: xxswapd v2, vs0
; CHECK-LE: blr
}

define void @test29u(<2 x double>* %a, <2 x double> %b) {
  store <2 x double> %b, <2 x double>* %a, align 8
  ret void

; CHECK-LABEL: @test29u
; CHECK: stxvd2x v2, 0, r3
; CHECK: blr

; CHECK-LE-LABEL: @test29u
; CHECK-LE: xxswapd vs0, v2
; CHECK-LE: stxvd2x vs0, 0, r3
; CHECK-LE: blr
}

define <2 x i64> @test30(<2 x i64>* %a) {
  %v = load <2 x i64>, <2 x i64>* %a, align 16
  ret <2 x i64> %v

; CHECK-REG-LABEL: @test30
; CHECK-REG: lxvd2x v2, 0, r3
; CHECK-REG: blr

; CHECK-FISL-LABEL: @test30
; CHECK-FISL: lxvd2x vs0, 0, r3
; CHECK-FISL: xxlor v2, vs0, vs0
; CHECK-FISL: blr

; CHECK-LE-LABEL: @test30
; CHECK-LE: lxvd2x vs0, 0, r3
; CHECK-LE: xxswapd v2, vs0
; CHECK-LE: blr
}

define void @test31(<2 x i64>* %a, <2 x i64> %b) {
  store <2 x i64> %b, <2 x i64>* %a, align 16
  ret void

; CHECK-LABEL: @test31
; CHECK: stxvd2x v2, 0, r3
; CHECK: blr

; CHECK-LE-LABEL: @test31
; CHECK-LE: xxswapd vs0, v2
; CHECK-LE: stxvd2x vs0, 0, r3
; CHECK-LE: blr
}

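; test32-test34: <4 x float> and <4 x i32> accesses should use
; lxvw4x/stxvw4x, which need no swap on little-endian (the LE pwr8 run
; selects lvx/stvx for the aligned cases). The unaligned big-endian load
; in test32u still goes through the lvsl/lvx/vperm sequence.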
define <4 x float> @test32(<4 x float>* %a) {
  %v = load <4 x float>, <4 x float>* %a, align 16
  ret <4 x float> %v

; CHECK-REG-LABEL: @test32
; CHECK-REG: lxvw4x v2, 0, r3
; CHECK-REG: blr

; CHECK-FISL-LABEL: @test32
; CHECK-FISL: lxvw4x v2, 0, r3
; CHECK-FISL: blr

; CHECK-LE-LABEL: @test32
; CHECK-LE: lvx v2, 0, r3
; CHECK-LE-NOT: xxswapd
; CHECK-LE: blr
}

define void @test33(<4 x float>* %a, <4 x float> %b) {
  store <4 x float> %b, <4 x float>* %a, align 16
  ret void

; CHECK-REG-LABEL: @test33
; CHECK-REG: stxvw4x v2, 0, r3
; CHECK-REG: blr

; CHECK-FISL-LABEL: @test33
; CHECK-FISL: stxvw4x v2, 0, r3
; CHECK-FISL: blr

; CHECK-LE-LABEL: @test33
; CHECK-LE-NOT: xxswapd
; CHECK-LE: stvx v2, 0, r3
; CHECK-LE: blr
}

define <4 x float> @test32u(<4 x float>* %a) {
|
2015-02-28 05:17:42 +08:00
|
|
|
%v = load <4 x float>, <4 x float>* %a, align 8
|
[PowerPC] Enable use of lxvw4x/stxvw4x in VSX code generation
Currently the VSX support enables use of lxvd2x and stxvd2x for 2x64
  ret <4 x float> %v

; CHECK-LABEL: @test32u
; CHECK-DAG: lvsl v3, 0, r3
; CHECK-DAG: lvx v2, r3, r4
; CHECK-DAG: lvx v4, 0, r3
; CHECK: vperm v2, v4, v2, v3
; CHECK: blr

; CHECK-LE-LABEL: @test32u
; CHECK-LE: lxvd2x vs0, 0, r3
; CHECK-LE: xxswapd v2, vs0
; CHECK-LE: blr
}
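
; test33u: an unaligned (align 8) <4 x float> store. With VSX this should
; still be a single vector store: stxvw4x on big-endian, and on little-endian
; an xxswapd followed by stxvd2x.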
define void @test33u(<4 x float>* %a, <4 x float> %b) {
  store <4 x float> %b, <4 x float>* %a, align 8
  ret void

; CHECK-REG-LABEL: @test33u
; CHECK-REG: stxvw4x v2, 0, r3
; CHECK-REG: blr

; CHECK-FISL-LABEL: @test33u
; CHECK-FISL: stxvw4x v2, 0, r3
; CHECK-FISL: blr

; CHECK-LE-LABEL: @test33u
; CHECK-LE: xxswapd vs0, v2
; CHECK-LE: stxvd2x vs0, 0, r3
; CHECK-LE: blr
}
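
; test34/test35: aligned <4 x i32> load and store. Big-endian should use
; lxvw4x/stxvw4x; little-endian (pwr8) should prefer plain lvx/stvx with no
; xxswapd.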
define <4 x i32> @test34(<4 x i32>* %a) {
  %v = load <4 x i32>, <4 x i32>* %a, align 16
  ret <4 x i32> %v

; CHECK-REG-LABEL: @test34
; CHECK-REG: lxvw4x v2, 0, r3
; CHECK-REG: blr

; CHECK-FISL-LABEL: @test34
; CHECK-FISL: lxvw4x v2, 0, r3
; CHECK-FISL: blr

; CHECK-LE-LABEL: @test34
; CHECK-LE: lvx v2, 0, r3
; CHECK-LE-NOT: xxswapd
; CHECK-LE: blr
}

define void @test35(<4 x i32>* %a, <4 x i32> %b) {
  store <4 x i32> %b, <4 x i32>* %a, align 16
  ret void

; CHECK-REG-LABEL: @test35
; CHECK-REG: stxvw4x v2, 0, r3
; CHECK-REG: blr

; CHECK-FISL-LABEL: @test35
; CHECK-FISL: stxvw4x v2, 0, r3
; CHECK-FISL: blr

; CHECK-LE-LABEL: @test35
; CHECK-LE-NOT: xxswapd
; CHECK-LE: stvx v2, 0, r3
; CHECK-LE: blr
}
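
; test40-test43: <2 x i64> <-> <2 x double> conversions should each lower to a
; single VSX instruction (xvcvuxddp, xvcvsxddp, xvcvdpuxds, xvcvdpsxds) on
; both endiannesses.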
define <2 x double> @test40(<2 x i64> %a) {
  %v = uitofp <2 x i64> %a to <2 x double>
  ret <2 x double> %v

; CHECK-LABEL: @test40
; CHECK: xvcvuxddp v2, v2
; CHECK: blr

; CHECK-LE-LABEL: @test40
; CHECK-LE: xvcvuxddp v2, v2
; CHECK-LE: blr
}

define <2 x double> @test41(<2 x i64> %a) {
  %v = sitofp <2 x i64> %a to <2 x double>
  ret <2 x double> %v

; CHECK-LABEL: @test41
; CHECK: xvcvsxddp v2, v2
; CHECK: blr

; CHECK-LE-LABEL: @test41
; CHECK-LE: xvcvsxddp v2, v2
; CHECK-LE: blr
}

define <2 x i64> @test42(<2 x double> %a) {
  %v = fptoui <2 x double> %a to <2 x i64>
  ret <2 x i64> %v

; CHECK-LABEL: @test42
; CHECK: xvcvdpuxds v2, v2
; CHECK: blr

; CHECK-LE-LABEL: @test42
; CHECK-LE: xvcvdpuxds v2, v2
; CHECK-LE: blr
}

define <2 x i64> @test43(<2 x double> %a) {
  %v = fptosi <2 x double> %a to <2 x i64>
  ret <2 x i64> %v

; CHECK-LABEL: @test43
; CHECK: xvcvdpsxds v2, v2
; CHECK: blr

; CHECK-LE-LABEL: @test43
; CHECK-LE: xvcvdpsxds v2, v2
; CHECK-LE: blr
}
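
; test44-test47: the same conversions for <2 x float> are not handled well yet
; (see the FIXMEs below), so only the return is checked.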
define <2 x float> @test44(<2 x i64> %a) {
  %v = uitofp <2 x i64> %a to <2 x float>
  ret <2 x float> %v

; CHECK-LABEL: @test44
; FIXME: The code quality here looks pretty bad.
; CHECK: blr
}

define <2 x float> @test45(<2 x i64> %a) {
  %v = sitofp <2 x i64> %a to <2 x float>
  ret <2 x float> %v

; CHECK-LABEL: @test45
; FIXME: The code quality here looks pretty bad.
; CHECK: blr
}

define <2 x i64> @test46(<2 x float> %a) {
  %v = fptoui <2 x float> %a to <2 x i64>
  ret <2 x i64> %v

; CHECK-LABEL: @test46
; FIXME: The code quality here looks pretty bad.
; CHECK: blr
}

define <2 x i64> @test47(<2 x float> %a) {
  %v = fptosi <2 x float> %a to <2 x i64>
  ret <2 x i64> %v

; CHECK-LABEL: @test47
; FIXME: The code quality here looks pretty bad.
; CHECK: blr
}
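
; test50: loading a double and splatting it into both lanes should fold into a
; single load-and-splat, lxvdsx.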
define <2 x double> @test50(double* %a) {
  %v = load double, double* %a, align 8
  %w = insertelement <2 x double> undef, double %v, i32 0
  %x = insertelement <2 x double> %w, double %v, i32 1
  ret <2 x double> %x

; CHECK-LABEL: @test50
; CHECK: lxvdsx v2, 0, r3
; CHECK: blr

; CHECK-LE-LABEL: @test50
; CHECK-LE: lxvdsx v2, 0, r3
; CHECK-LE: blr
}
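
; test51-test56: two-element shuffles should map onto the xxpermdi family
; (xxspltd, xxmrghd, xxmrgld, xxpermdi). Little-endian numbers the elements in
; the opposite order, so the immediates and the operand order flip relative to
; big-endian.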
define <2 x double> @test51(<2 x double> %a, <2 x double> %b) {
  %v = shufflevector <2 x double> %a, <2 x double> %b, <2 x i32> <i32 0, i32 0>
  ret <2 x double> %v

; CHECK-LABEL: @test51
; CHECK: xxspltd v2, v2, 0
; CHECK: blr

; CHECK-LE-LABEL: @test51
; CHECK-LE: xxspltd v2, v2, 1
; CHECK-LE: blr
}

define <2 x double> @test52(<2 x double> %a, <2 x double> %b) {
  %v = shufflevector <2 x double> %a, <2 x double> %b, <2 x i32> <i32 0, i32 2>
  ret <2 x double> %v

; CHECK-LABEL: @test52
; CHECK: xxmrghd v2, v2, v3
; CHECK: blr

; CHECK-LE-LABEL: @test52
; CHECK-LE: xxmrgld v2, v3, v2
; CHECK-LE: blr
}

define <2 x double> @test53(<2 x double> %a, <2 x double> %b) {
  %v = shufflevector <2 x double> %a, <2 x double> %b, <2 x i32> <i32 2, i32 0>
  ret <2 x double> %v

; CHECK-LABEL: @test53
; CHECK: xxmrghd v2, v3, v2
; CHECK: blr

; CHECK-LE-LABEL: @test53
; CHECK-LE: xxmrgld v2, v2, v3
; CHECK-LE: blr
}

define <2 x double> @test54(<2 x double> %a, <2 x double> %b) {
  %v = shufflevector <2 x double> %a, <2 x double> %b, <2 x i32> <i32 1, i32 2>
  ret <2 x double> %v

; CHECK-LABEL: @test54
; CHECK: xxpermdi v2, v2, v3, 2
; CHECK: blr

; CHECK-LE-LABEL: @test54
; CHECK-LE: xxpermdi v2, v3, v2, 2
; CHECK-LE: blr
}

define <2 x double> @test55(<2 x double> %a, <2 x double> %b) {
  %v = shufflevector <2 x double> %a, <2 x double> %b, <2 x i32> <i32 1, i32 3>
  ret <2 x double> %v

; CHECK-LABEL: @test55
; CHECK: xxmrgld v2, v2, v3
; CHECK: blr

; CHECK-LE-LABEL: @test55
; CHECK-LE: xxmrghd v2, v3, v2
; CHECK-LE: blr
}

define <2 x i64> @test56(<2 x i64> %a, <2 x i64> %b) {
  %v = shufflevector <2 x i64> %a, <2 x i64> %b, <2 x i32> <i32 1, i32 3>
  ret <2 x i64> %v

; CHECK-LABEL: @test56
; CHECK: xxmrgld v2, v2, v3
; CHECK: blr

; CHECK-LE-LABEL: @test56
; CHECK-LE: xxmrghd v2, v3, v2
; CHECK-LE: blr
}
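
; test60-test62: there is no v2i64 shift instruction on pwr7, so these should
; scalarize through the stack: store the vectors, shift with scalar
; sld/srd/srad, and reload the result.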
define <2 x i64> @test60(<2 x i64> %a, <2 x i64> %b) {
  %v = shl <2 x i64> %a, %b
  ret <2 x i64> %v

; CHECK-LABEL: @test60
; This should scalarize, and the current code quality is not good.
; CHECK: stxvd2x v3, 0, r3
; CHECK: stxvd2x v2, 0, r4
; CHECK: sld r3, r4, r3
; CHECK: sld r3, r4, r3
; CHECK: lxvd2x v2, 0, r3
; CHECK: blr
}

define <2 x i64> @test61(<2 x i64> %a, <2 x i64> %b) {
  %v = lshr <2 x i64> %a, %b
  ret <2 x i64> %v

; CHECK-LABEL: @test61
; This should scalarize, and the current code quality is not good.
; CHECK: stxvd2x v3, 0, r3
; CHECK: stxvd2x v2, 0, r4
; CHECK: srd r3, r4, r3
; CHECK: srd r3, r4, r3
; CHECK: lxvd2x v2, 0, r3
; CHECK: blr
}

define <2 x i64> @test62(<2 x i64> %a, <2 x i64> %b) {
  %v = ashr <2 x i64> %a, %b
  ret <2 x i64> %v

; CHECK-LABEL: @test62
; This should scalarize, and the current code quality is not good.
; CHECK: stxvd2x v3, 0, r3
; CHECK: stxvd2x v2, 0, r4
; CHECK: srad r3, r4, r3
; CHECK: srad r3, r4, r3
; CHECK: lxvd2x v2, 0, r3
; CHECK: blr
}
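
; test63/test64: extracting a <2 x double> element. The element that already
; occupies the scalar FPR position needs only a copy (xxlor); the other lane
; needs an xxswapd first, and the two cases trade places on little-endian.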
define double @test63(<2 x double> %a) {
  %v = extractelement <2 x double> %a, i32 0
  ret double %v

; CHECK-REG-LABEL: @test63
; CHECK-REG: xxlor f1, v2, v2
; CHECK-REG: blr

; CHECK-FISL-LABEL: @test63
; CHECK-FISL: xxlor f0, v2, v2
; CHECK-FISL: fmr f1, f0
; CHECK-FISL: blr

; CHECK-LE-LABEL: @test63
; CHECK-LE: xxswapd vs1, v2
; CHECK-LE: blr
}

define double @test64(<2 x double> %a) {
  %v = extractelement <2 x double> %a, i32 1
  ret double %v

; CHECK-REG-LABEL: @test64
; CHECK-REG: xxswapd vs1, v2
; CHECK-REG: blr

; CHECK-FISL-LABEL: @test64
; CHECK-FISL: xxswapd v2, v2
; CHECK-FISL: xxlor f0, v2, v2
; CHECK-FISL: fmr f1, f0
; CHECK-FISL: blr

; CHECK-LE-LABEL: @test64
; CHECK-LE: xxlor f1, v2, v2
}
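
; test65-test67: v2i64 compares. pwr7 has no doubleword vector compare, so eq
; goes through vcmpequw (with an xxlnor added for ne) and unsigned less-than
; is scalarized; pwr8 little-endian uses vcmpequd/vcmpgtud directly.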
define <2 x i1> @test65(<2 x i64> %a, <2 x i64> %b) {
  %w = icmp eq <2 x i64> %a, %b
  ret <2 x i1> %w

; CHECK-REG-LABEL: @test65
; CHECK-REG: vcmpequw v2, v2, v3
; CHECK-REG: blr

; CHECK-FISL-LABEL: @test65
; CHECK-FISL: vcmpequw v2, v2, v3
; CHECK-FISL: blr

; CHECK-LE-LABEL: @test65
; CHECK-LE: vcmpequd v2, v2, v3
; CHECK-LE: blr
}

define <2 x i1> @test66(<2 x i64> %a, <2 x i64> %b) {
  %w = icmp ne <2 x i64> %a, %b
  ret <2 x i1> %w

; CHECK-REG-LABEL: @test66
; CHECK-REG: vcmpequw v2, v2, v3
; CHECK-REG: xxlnor v2, v2, v2
; CHECK-REG: blr

; CHECK-FISL-LABEL: @test66
; CHECK-FISL: vcmpequw v2, v2, v3
; CHECK-FISL: xxlnor v2, v2, v2
; CHECK-FISL: blr

; CHECK-LE-LABEL: @test66
; CHECK-LE: vcmpequd v2, v2, v3
; CHECK-LE: xxlnor v2, v2, v2
; CHECK-LE: blr
}

define <2 x i1> @test67(<2 x i64> %a, <2 x i64> %b) {
  %w = icmp ult <2 x i64> %a, %b
  ret <2 x i1> %w

; CHECK-LABEL: @test67
; This should scalarize, and the current code quality is not good.
; CHECK: stxvd2x v3, 0, r3
; CHECK: stxvd2x v2, 0, r4
; CHECK: cmpld r4, r3
; CHECK: cmpld r6, r5
; CHECK: lxvd2x v2, 0, r3
; CHECK: blr

; CHECK-LE-LABEL: @test67
; CHECK-LE: vcmpgtud v2, v3, v2
; CHECK-LE: blr
}
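
; test68: sitofp of <2 x i32> should merge the words into doubleword lanes
; (xxmrghw on big-endian, xxmrglw on little-endian) and convert with
; xvcvsxwdp.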
define <2 x double> @test68(<2 x i32> %a) {
  %w = sitofp <2 x i32> %a to <2 x double>
  ret <2 x double> %w

; CHECK-LABEL: @test68
; CHECK: xxmrghw vs0, v2, v2
; CHECK: xvcvsxwdp v2, vs0
; CHECK: blr

; CHECK-LE-LABEL: @test68
; CHECK-LE: xxmrglw v2, v2, v2
; CHECK-LE: xvcvsxwdp v2, v2
; CHECK-LE: blr
}

; This gets scalarized so the code isn't great
define <2 x double> @test69(<2 x i16> %a) {
  %w = sitofp <2 x i16> %a to <2 x double>
  ret <2 x double> %w

; CHECK-LABEL: @test69
; CHECK-DAG: lxvd2x v2, 0, r3
; CHECK-DAG: xvcvsxddp v2, v2
; CHECK: blr

; CHECK-LE-LABEL: @test69
; CHECK-LE: vperm
; CHECK-LE: vsld
; CHECK-LE: vsrad
; CHECK-LE: xvcvsxddp v2, v2
; CHECK-LE: blr
}

; This gets scalarized so the code isn't great
define <2 x double> @test70(<2 x i8> %a) {
  %w = sitofp <2 x i8> %a to <2 x double>
  ret <2 x double> %w

; CHECK-LABEL: @test70
; CHECK-DAG: lxvd2x v2, 0, r3
; CHECK-DAG: xvcvsxddp v2, v2
; CHECK: blr

; CHECK-LE-LABEL: @test70
; CHECK-LE: vperm
; CHECK-LE: vsld
; CHECK-LE: vsrad
; CHECK-LE: xvcvsxddp v2, v2
; CHECK-LE: blr
}
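
; test80: a splat built from insertelement+shufflevector feeding an add that
; gets scalarized. The extract_vector_elt nodes should look through the
; shuffle to the build_vector, so the result is not expected to round-trip
; through the stack (hence the negative stxvw4x checks below).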
; This gets scalarized so the code isn't great
define <2 x i32> @test80(i32 %v) {
  %b1 = insertelement <2 x i32> undef, i32 %v, i32 0
  %b2 = shufflevector <2 x i32> %b1, <2 x i32> undef, <2 x i32> zeroinitializer
  %i = add <2 x i32> %b2, <i32 2, i32 3>
  ret <2 x i32> %i

; CHECK-REG-LABEL: @test80
; CHECK-REG-DAG: stw r3, -16(r1)
; CHECK-REG-DAG: addi r4, r1, -16
; CHECK-REG: addis r3, r2, .LCPI65_0@toc@ha
; CHECK-REG-DAG: addi r3, r3, .LCPI65_0@toc@l
; CHECK-REG-DAG: lxvw4x vs0, 0, r4
; CHECK-REG-DAG: lxvw4x v3, 0, r3
; CHECK-REG: xxspltw v2, vs0, 0
; CHECK-REG: vadduwm v2, v2, v3
; CHECK-REG-NOT: stxvw4x
; CHECK-REG: blr

; CHECK-FISL-LABEL: @test80
; CHECK-FISL: mr r4, r3
; CHECK-FISL: stw r4, -16(r1)
; CHECK-FISL: addi r3, r1, -16
; CHECK-FISL-DAG: lxvw4x vs0, 0, r3
; CHECK-FISL-DAG: xxspltw v2, vs0, 0
; CHECK-FISL: addis r3, r2, .LCPI65_0@toc@ha
; CHECK-FISL: addi r3, r3, .LCPI65_0@toc@l
; CHECK-FISL-DAG: lxvw4x v3, 0, r3
; CHECK-FISL: vadduwm
; CHECK-FISL-NOT: stxvw4x
; CHECK-FISL: blr

; CHECK-LE-LABEL: @test80
; CHECK-LE-DAG: mtvsrd f0, r3
; CHECK-LE-DAG: xxswapd vs0, vs0
; CHECK-LE-DAG: addi r3, r4, .LCPI65_0@toc@l
; CHECK-LE-DAG: lvx v3, 0, r3
; CHECK-LE-DAG: xxspltw v2, vs0, 3
; CHECK-LE-NOT: xxswapd v3,
; CHECK-LE: vadduwm v2, v2, v3
; CHECK-LE: blr
}
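
; test81: a bitcast between vector types is free at the register level, so
; only the return is checked.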
define <2 x double> @test81(<4 x float> %b) {
  %w = bitcast <4 x float> %b to <2 x double>
  ret <2 x double> %w

; CHECK-LABEL: @test81
; CHECK: blr

; CHECK-LE-LABEL: @test81
; CHECK-LE: blr
}
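
; test82: a scalar floating-point select should become an xscmpudp compare
; followed by a conditional return/branch on cr0.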
define double @test82(double %a, double %b, double %c, double %d) {
entry:
  %m = fcmp oeq double %c, %d
  %v = select i1 %m, double %a, double %b
  ret double %v

; CHECK-REG-LABEL: @test82
; CHECK-REG: xscmpudp cr0, f3, f4
; CHECK-REG: beqlr cr0

; CHECK-FISL-LABEL: @test82
; CHECK-FISL: xscmpudp cr0, f3, f4
; CHECK-FISL: beq cr0

; CHECK-LE-LABEL: @test82
; CHECK-LE: xscmpudp cr0, f3, f4
; CHECK-LE: beqlr cr0
}