[ARM,MVE] Add intrinsics and isel for MVE fused multiply-add.
Summary:
This adds the ACLE intrinsic family for the VFMA and VFMS
instructions, which perform fused multiply-add on vectors of floats.
I've represented the unpredicated versions in IR using the cross-
platform `@llvm.fma` IR intrinsic. We already had isel rules to
convert one of those into a vector VFMA in the simplest possible way;
but we didn't have rules to detect a negated argument and turn it into
VFMS, or rules to detect a splat argument and turn it into one of the
two vector/scalar forms of the instruction. Now we have all of those.
The predicated form uses a target-specific intrinsic as usual, but
I've stuck to just one, for a predicated FMA. The subtraction and
splat versions are code-generated by passing an fneg or a splat as one
of its operands, the same way as the unpredicated version.
In arm_mve_defs.h, I've had to introduce a tiny extra piece of
infrastructure: a record `id` for use in codegen dags which implements
the identity function. (Just because you can't declare a Tablegen
value of type dag which is //only// a `$varname`: you have to wrap it
in something. Now I can write `(id $varname)` to get the same effect.)
Reviewers: dmgreen, MarkMurrayARM, miyuki, ostannard
Reviewed By: dmgreen
Subscribers: kristof.beyls, hiraditya, danielkiss, cfe-commits, llvm-commits
Tags: #clang, #llvm
Differential Revision: https://reviews.llvm.org/D75998
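To make the isel mapping concrete, here is a rough per-lane model in
plain C of what the unpredicated intrinsics compute (a sketch only: the
`ref_*` helper names are mine, not part of the patch, and I use
single-precision `fmaf` to stand in for whichever lane type applies):

```
#include <math.h>

/* vfmaq(a,b,c):   a + b*c per lane, with a single rounding (fused). */
static inline float ref_vfmaq_lane(float a, float b, float c) {
  return fmaf(b, c, a);
}
/* vfmsq(a,b,c):   a - b*c per lane, i.e. fma with a negated
 * multiplicand; this is the fneg that isel now folds into VFMS. */
static inline float ref_vfmsq_lane(float a, float b, float c) {
  return fmaf(b, -c, a);
}
/* vfmaq_n(a,b,s): the scalar s is splat across the lanes first. */
static inline float ref_vfmaq_n_lane(float a, float b, float s) {
  return fmaf(b, s, a);
}
```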
// NOTE: Assertions have been autogenerated by utils/update_cc_test_checks.py
// RUN: %clang_cc1 -triple thumbv8.1m.main-none-none-eabi -target-feature +mve.fp -mfloat-abi hard -fallow-half-arguments-and-returns -O0 -disable-O0-optnone -S -emit-llvm -o - %s | opt -S -sroa | FileCheck %s
// RUN: %clang_cc1 -triple thumbv8.1m.main-none-none-eabi -target-feature +mve.fp -mfloat-abi hard -fallow-half-arguments-and-returns -O0 -disable-O0-optnone -DPOLYMORPHIC -S -emit-llvm -o - %s | opt -S -sroa | FileCheck %s
#include <arm_mve.h>
// CHECK-LABEL: @test_vfmaq_f16(
// CHECK-NEXT: entry:
// CHECK-NEXT: [[TMP0:%.*]] = call <8 x half> @llvm.fma.v8f16(<8 x half> [[B:%.*]], <8 x half> [[C:%.*]], <8 x half> [[A:%.*]])
// CHECK-NEXT: ret <8 x half> [[TMP0]]
//
float16x8_t test_vfmaq_f16(float16x8_t a, float16x8_t b, float16x8_t c) {
#ifdef POLYMORPHIC
  return vfmaq(a, b, c);
#else /* POLYMORPHIC */
  return vfmaq_f16(a, b, c);
#endif /* POLYMORPHIC */
}
// CHECK-LABEL: @test_vfmaq_f32(
// CHECK-NEXT: entry:
// CHECK-NEXT: [[TMP0:%.*]] = call <4 x float> @llvm.fma.v4f32(<4 x float> [[B:%.*]], <4 x float> [[C:%.*]], <4 x float> [[A:%.*]])
// CHECK-NEXT: ret <4 x float> [[TMP0]]
//
float32x4_t test_vfmaq_f32(float32x4_t a, float32x4_t b, float32x4_t c) {
#ifdef POLYMORPHIC
  return vfmaq(a, b, c);
#else /* POLYMORPHIC */
  return vfmaq_f32(a, b, c);
#endif /* POLYMORPHIC */
}
// CHECK-LABEL: @test_vfmaq_n_f16(
// CHECK-NEXT: entry:
// CHECK-NEXT: [[DOTSPLATINSERT:%.*]] = insertelement <8 x half> poison, half [[C:%.*]], i32 0
// CHECK-NEXT: [[DOTSPLAT:%.*]] = shufflevector <8 x half> [[DOTSPLATINSERT]], <8 x half> poison, <8 x i32> zeroinitializer
// CHECK-NEXT: [[TMP0:%.*]] = call <8 x half> @llvm.fma.v8f16(<8 x half> [[B:%.*]], <8 x half> [[DOTSPLAT]], <8 x half> [[A:%.*]])
// CHECK-NEXT: ret <8 x half> [[TMP0]]
//
float16x8_t test_vfmaq_n_f16(float16x8_t a, float16x8_t b, float16_t c) {
#ifdef POLYMORPHIC
  return vfmaq(a, b, c);
#else /* POLYMORPHIC */
  return vfmaq_n_f16(a, b, c);
#endif /* POLYMORPHIC */
}
// CHECK-LABEL: @test_vfmaq_n_f32(
// CHECK-NEXT: entry:
// CHECK-NEXT: [[DOTSPLATINSERT:%.*]] = insertelement <4 x float> poison, float [[C:%.*]], i32 0
// CHECK-NEXT: [[DOTSPLAT:%.*]] = shufflevector <4 x float> [[DOTSPLATINSERT]], <4 x float> poison, <4 x i32> zeroinitializer
// CHECK-NEXT: [[TMP0:%.*]] = call <4 x float> @llvm.fma.v4f32(<4 x float> [[B:%.*]], <4 x float> [[DOTSPLAT]], <4 x float> [[A:%.*]])
// CHECK-NEXT: ret <4 x float> [[TMP0]]
//
float32x4_t test_vfmaq_n_f32(float32x4_t a, float32x4_t b, float32_t c) {
#ifdef POLYMORPHIC
  return vfmaq(a, b, c);
#else /* POLYMORPHIC */
  return vfmaq_n_f32(a, b, c);
#endif /* POLYMORPHIC */
}
// CHECK-LABEL: @test_vfmasq_n_f16(
// CHECK-NEXT: entry:
// CHECK-NEXT: [[DOTSPLATINSERT:%.*]] = insertelement <8 x half> poison, half [[C:%.*]], i32 0
// CHECK-NEXT: [[DOTSPLAT:%.*]] = shufflevector <8 x half> [[DOTSPLATINSERT]], <8 x half> poison, <8 x i32> zeroinitializer
// CHECK-NEXT: [[TMP0:%.*]] = call <8 x half> @llvm.fma.v8f16(<8 x half> [[A:%.*]], <8 x half> [[B:%.*]], <8 x half> [[DOTSPLAT]])
// CHECK-NEXT: ret <8 x half> [[TMP0]]
//
float16x8_t test_vfmasq_n_f16(float16x8_t a, float16x8_t b, float16_t c) {
#ifdef POLYMORPHIC
  return vfmasq(a, b, c);
#else /* POLYMORPHIC */
  return vfmasq_n_f16(a, b, c);
#endif /* POLYMORPHIC */
}
// CHECK-LABEL: @test_vfmasq_n_f32(
// CHECK-NEXT: entry:
// CHECK-NEXT: [[DOTSPLATINSERT:%.*]] = insertelement <4 x float> poison, float [[C:%.*]], i32 0
// CHECK-NEXT: [[DOTSPLAT:%.*]] = shufflevector <4 x float> [[DOTSPLATINSERT]], <4 x float> poison, <4 x i32> zeroinitializer
// CHECK-NEXT: [[TMP0:%.*]] = call <4 x float> @llvm.fma.v4f32(<4 x float> [[A:%.*]], <4 x float> [[B:%.*]], <4 x float> [[DOTSPLAT]])
// CHECK-NEXT: ret <4 x float> [[TMP0]]
//
float32x4_t test_vfmasq_n_f32(float32x4_t a, float32x4_t b, float32_t c) {
#ifdef POLYMORPHIC
  return vfmasq(a, b, c);
#else /* POLYMORPHIC */
  return vfmasq_n_f32(a, b, c);
#endif /* POLYMORPHIC */
}
// CHECK-LABEL: @test_vfmsq_f16(
// CHECK-NEXT: entry:
// CHECK-NEXT: [[TMP0:%.*]] = fneg <8 x half> [[C:%.*]]
// CHECK-NEXT: [[TMP1:%.*]] = call <8 x half> @llvm.fma.v8f16(<8 x half> [[B:%.*]], <8 x half> [[TMP0]], <8 x half> [[A:%.*]])
// CHECK-NEXT: ret <8 x half> [[TMP1]]
//
float16x8_t test_vfmsq_f16(float16x8_t a, float16x8_t b, float16x8_t c) {
#ifdef POLYMORPHIC
  return vfmsq(a, b, c);
#else /* POLYMORPHIC */
  return vfmsq_f16(a, b, c);
#endif /* POLYMORPHIC */
}
// CHECK-LABEL: @test_vfmsq_f32(
// CHECK-NEXT: entry:
// CHECK-NEXT: [[TMP0:%.*]] = fneg <4 x float> [[C:%.*]]
// CHECK-NEXT: [[TMP1:%.*]] = call <4 x float> @llvm.fma.v4f32(<4 x float> [[B:%.*]], <4 x float> [[TMP0]], <4 x float> [[A:%.*]])
// CHECK-NEXT: ret <4 x float> [[TMP1]]
//
float32x4_t test_vfmsq_f32(float32x4_t a, float32x4_t b, float32x4_t c) {
#ifdef POLYMORPHIC
  return vfmsq(a, b, c);
#else /* POLYMORPHIC */
  return vfmsq_f32(a, b, c);
#endif /* POLYMORPHIC */
}
[ARM,MVE] Add intrinsics and isel for MVE integer VMLA.
Summary:
These instructions compute multiply+add in integers, with one of the
operands being a splat of a scalar. (VMLA and VMLAS differ in whether
the splat operand is a multiplier or the addend.)
I've represented these in IR using existing standard IR operations for
the unpredicated forms. The predicated forms are done with target-
specific intrinsics, as usual.
When operating on n-bit vector lanes, only the bottom n bits of the
i32 scalar operand are used. So we have to tell that to isel lowering,
to allow it to remove a pointless sign- or zero-extension instruction
on that input register. That's done in `PerformIntrinsicCombine`, but
first I had to enable `PerformIntrinsicCombine` for MVE targets
(previously all the intrinsics it handled were for NEON), and make it
a method of `ARMTargetLowering` so that it can get at
`SimplifyDemandedBits`.
Reviewers: dmgreen, MarkMurrayARM, miyuki, ostannard
Reviewed By: dmgreen
Subscribers: kristof.beyls, hiraditya, danielkiss, cfe-commits
Tags: #clang
Differential Revision: https://reviews.llvm.org/D76122
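For reference, the unpredicated semantics per lane, sketched in plain C
for the signed 8-bit case (the `ref_*` helpers are illustrative names of
mine, not part of the patch):

```
#include <stdint.h>

/* vmlaq(a,b,c):  a + b * splat(c); the scalar is the multiplier.
 * vmlasq(a,b,c): a * b + splat(c); the scalar is the addend.
 * Only the low 8 bits of the i32 scalar are observable per lane,
 * which is why isel may drop a sign- or zero-extension of it. */
static inline int8_t ref_vmlaq_n_s8_lane(int8_t a, int8_t b, int32_t c) {
  return (int8_t)(a + b * (int8_t)c);
}
static inline int8_t ref_vmlasq_n_s8_lane(int8_t a, int8_t b, int32_t c) {
  return (int8_t)(a * b + (int8_t)c);
}
```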
// CHECK-LABEL: @test_vmlaq_n_s8(
// CHECK-NEXT: entry:
// CHECK-NEXT: [[DOTSPLATINSERT:%.*]] = insertelement <16 x i8> poison, i8 [[C:%.*]], i32 0
// CHECK-NEXT: [[DOTSPLAT:%.*]] = shufflevector <16 x i8> [[DOTSPLATINSERT]], <16 x i8> poison, <16 x i32> zeroinitializer
// CHECK-NEXT: [[TMP0:%.*]] = mul <16 x i8> [[B:%.*]], [[DOTSPLAT]]
// CHECK-NEXT: [[TMP1:%.*]] = add <16 x i8> [[TMP0]], [[A:%.*]]
// CHECK-NEXT: ret <16 x i8> [[TMP1]]
//
int8x16_t test_vmlaq_n_s8(int8x16_t a, int8x16_t b, int8_t c) {
#ifdef POLYMORPHIC
  return vmlaq(a, b, c);
#else /* POLYMORPHIC */
  return vmlaq_n_s8(a, b, c);
#endif /* POLYMORPHIC */
}
// CHECK-LABEL: @test_vmlaq_n_s16(
// CHECK-NEXT: entry:
// CHECK-NEXT: [[DOTSPLATINSERT:%.*]] = insertelement <8 x i16> poison, i16 [[C:%.*]], i32 0
// CHECK-NEXT: [[DOTSPLAT:%.*]] = shufflevector <8 x i16> [[DOTSPLATINSERT]], <8 x i16> poison, <8 x i32> zeroinitializer
// CHECK-NEXT: [[TMP0:%.*]] = mul <8 x i16> [[B:%.*]], [[DOTSPLAT]]
// CHECK-NEXT: [[TMP1:%.*]] = add <8 x i16> [[TMP0]], [[A:%.*]]
// CHECK-NEXT: ret <8 x i16> [[TMP1]]
//
int16x8_t test_vmlaq_n_s16(int16x8_t a, int16x8_t b, int16_t c) {
#ifdef POLYMORPHIC
  return vmlaq(a, b, c);
#else /* POLYMORPHIC */
  return vmlaq_n_s16(a, b, c);
#endif /* POLYMORPHIC */
}
// CHECK-LABEL: @test_vmlaq_n_s32(
// CHECK-NEXT: entry:
// CHECK-NEXT: [[DOTSPLATINSERT:%.*]] = insertelement <4 x i32> poison, i32 [[C:%.*]], i32 0
// CHECK-NEXT: [[DOTSPLAT:%.*]] = shufflevector <4 x i32> [[DOTSPLATINSERT]], <4 x i32> poison, <4 x i32> zeroinitializer
// CHECK-NEXT: [[TMP0:%.*]] = mul <4 x i32> [[B:%.*]], [[DOTSPLAT]]
// CHECK-NEXT: [[TMP1:%.*]] = add <4 x i32> [[TMP0]], [[A:%.*]]
// CHECK-NEXT: ret <4 x i32> [[TMP1]]
//
int32x4_t test_vmlaq_n_s32(int32x4_t a, int32x4_t b, int32_t c) {
#ifdef POLYMORPHIC
  return vmlaq(a, b, c);
#else /* POLYMORPHIC */
  return vmlaq_n_s32(a, b, c);
#endif /* POLYMORPHIC */
}
// CHECK-LABEL: @test_vmlaq_n_u8(
// CHECK-NEXT: entry:
// CHECK-NEXT: [[DOTSPLATINSERT:%.*]] = insertelement <16 x i8> poison, i8 [[C:%.*]], i32 0
// CHECK-NEXT: [[DOTSPLAT:%.*]] = shufflevector <16 x i8> [[DOTSPLATINSERT]], <16 x i8> poison, <16 x i32> zeroinitializer
// CHECK-NEXT: [[TMP0:%.*]] = mul <16 x i8> [[B:%.*]], [[DOTSPLAT]]
// CHECK-NEXT: [[TMP1:%.*]] = add <16 x i8> [[TMP0]], [[A:%.*]]
// CHECK-NEXT: ret <16 x i8> [[TMP1]]
//
uint8x16_t test_vmlaq_n_u8(uint8x16_t a, uint8x16_t b, uint8_t c) {
#ifdef POLYMORPHIC
  return vmlaq(a, b, c);
#else /* POLYMORPHIC */
  return vmlaq_n_u8(a, b, c);
#endif /* POLYMORPHIC */
}
// CHECK-LABEL: @test_vmlaq_n_u16(
// CHECK-NEXT: entry:
// CHECK-NEXT: [[DOTSPLATINSERT:%.*]] = insertelement <8 x i16> poison, i16 [[C:%.*]], i32 0
// CHECK-NEXT: [[DOTSPLAT:%.*]] = shufflevector <8 x i16> [[DOTSPLATINSERT]], <8 x i16> poison, <8 x i32> zeroinitializer
// CHECK-NEXT: [[TMP0:%.*]] = mul <8 x i16> [[B:%.*]], [[DOTSPLAT]]
// CHECK-NEXT: [[TMP1:%.*]] = add <8 x i16> [[TMP0]], [[A:%.*]]
// CHECK-NEXT: ret <8 x i16> [[TMP1]]
//
uint16x8_t test_vmlaq_n_u16(uint16x8_t a, uint16x8_t b, uint16_t c) {
#ifdef POLYMORPHIC
  return vmlaq(a, b, c);
#else /* POLYMORPHIC */
  return vmlaq_n_u16(a, b, c);
#endif /* POLYMORPHIC */
}
// CHECK-LABEL: @test_vmlaq_n_u32(
// CHECK-NEXT: entry:
// CHECK-NEXT: [[DOTSPLATINSERT:%.*]] = insertelement <4 x i32> poison, i32 [[C:%.*]], i32 0
// CHECK-NEXT: [[DOTSPLAT:%.*]] = shufflevector <4 x i32> [[DOTSPLATINSERT]], <4 x i32> poison, <4 x i32> zeroinitializer
// CHECK-NEXT: [[TMP0:%.*]] = mul <4 x i32> [[B:%.*]], [[DOTSPLAT]]
// CHECK-NEXT: [[TMP1:%.*]] = add <4 x i32> [[TMP0]], [[A:%.*]]
// CHECK-NEXT: ret <4 x i32> [[TMP1]]
//
uint32x4_t test_vmlaq_n_u32(uint32x4_t a, uint32x4_t b, uint32_t c) {
#ifdef POLYMORPHIC
  return vmlaq(a, b, c);
#else /* POLYMORPHIC */
  return vmlaq_n_u32(a, b, c);
#endif /* POLYMORPHIC */
}
// CHECK-LABEL: @test_vmlasq_n_s8(
// CHECK-NEXT: entry:
// CHECK-NEXT: [[TMP0:%.*]] = mul <16 x i8> [[A:%.*]], [[B:%.*]]
// CHECK-NEXT: [[DOTSPLATINSERT:%.*]] = insertelement <16 x i8> poison, i8 [[C:%.*]], i32 0
// CHECK-NEXT: [[DOTSPLAT:%.*]] = shufflevector <16 x i8> [[DOTSPLATINSERT]], <16 x i8> poison, <16 x i32> zeroinitializer
// CHECK-NEXT: [[TMP1:%.*]] = add <16 x i8> [[TMP0]], [[DOTSPLAT]]
// CHECK-NEXT: ret <16 x i8> [[TMP1]]
//
int8x16_t test_vmlasq_n_s8(int8x16_t a, int8x16_t b, int8_t c) {
#ifdef POLYMORPHIC
  return vmlasq(a, b, c);
#else /* POLYMORPHIC */
  return vmlasq_n_s8(a, b, c);
#endif /* POLYMORPHIC */
}
// CHECK-LABEL: @test_vmlasq_n_s16(
// CHECK-NEXT: entry:
// CHECK-NEXT: [[TMP0:%.*]] = mul <8 x i16> [[A:%.*]], [[B:%.*]]
// CHECK-NEXT: [[DOTSPLATINSERT:%.*]] = insertelement <8 x i16> poison, i16 [[C:%.*]], i32 0
// CHECK-NEXT: [[DOTSPLAT:%.*]] = shufflevector <8 x i16> [[DOTSPLATINSERT]], <8 x i16> poison, <8 x i32> zeroinitializer
// CHECK-NEXT: [[TMP1:%.*]] = add <8 x i16> [[TMP0]], [[DOTSPLAT]]
// CHECK-NEXT: ret <8 x i16> [[TMP1]]
//
int16x8_t test_vmlasq_n_s16(int16x8_t a, int16x8_t b, int16_t c) {
#ifdef POLYMORPHIC
  return vmlasq(a, b, c);
#else /* POLYMORPHIC */
  return vmlasq_n_s16(a, b, c);
#endif /* POLYMORPHIC */
}
// CHECK-LABEL: @test_vmlasq_n_s32(
// CHECK-NEXT: entry:
// CHECK-NEXT: [[TMP0:%.*]] = mul <4 x i32> [[A:%.*]], [[B:%.*]]
// CHECK-NEXT: [[DOTSPLATINSERT:%.*]] = insertelement <4 x i32> poison, i32 [[C:%.*]], i32 0
// CHECK-NEXT: [[DOTSPLAT:%.*]] = shufflevector <4 x i32> [[DOTSPLATINSERT]], <4 x i32> poison, <4 x i32> zeroinitializer
// CHECK-NEXT: [[TMP1:%.*]] = add <4 x i32> [[TMP0]], [[DOTSPLAT]]
// CHECK-NEXT: ret <4 x i32> [[TMP1]]
//
int32x4_t test_vmlasq_n_s32(int32x4_t a, int32x4_t b, int32_t c) {
#ifdef POLYMORPHIC
  return vmlasq(a, b, c);
#else /* POLYMORPHIC */
  return vmlasq_n_s32(a, b, c);
#endif /* POLYMORPHIC */
}
// CHECK-LABEL: @test_vmlasq_n_u8(
// CHECK-NEXT: entry:
// CHECK-NEXT: [[TMP0:%.*]] = mul <16 x i8> [[A:%.*]], [[B:%.*]]
// CHECK-NEXT: [[DOTSPLATINSERT:%.*]] = insertelement <16 x i8> poison, i8 [[C:%.*]], i32 0
// CHECK-NEXT: [[DOTSPLAT:%.*]] = shufflevector <16 x i8> [[DOTSPLATINSERT]], <16 x i8> poison, <16 x i32> zeroinitializer
// CHECK-NEXT: [[TMP1:%.*]] = add <16 x i8> [[TMP0]], [[DOTSPLAT]]
// CHECK-NEXT: ret <16 x i8> [[TMP1]]
//
uint8x16_t test_vmlasq_n_u8(uint8x16_t a, uint8x16_t b, uint8_t c) {
#ifdef POLYMORPHIC
  return vmlasq(a, b, c);
#else /* POLYMORPHIC */
  return vmlasq_n_u8(a, b, c);
#endif /* POLYMORPHIC */
}
// CHECK-LABEL: @test_vmlasq_n_u16(
// CHECK-NEXT: entry:
// CHECK-NEXT: [[TMP0:%.*]] = mul <8 x i16> [[A:%.*]], [[B:%.*]]
// CHECK-NEXT: [[DOTSPLATINSERT:%.*]] = insertelement <8 x i16> poison, i16 [[C:%.*]], i32 0
// CHECK-NEXT: [[DOTSPLAT:%.*]] = shufflevector <8 x i16> [[DOTSPLATINSERT]], <8 x i16> poison, <8 x i32> zeroinitializer
// CHECK-NEXT: [[TMP1:%.*]] = add <8 x i16> [[TMP0]], [[DOTSPLAT]]
// CHECK-NEXT: ret <8 x i16> [[TMP1]]
//
uint16x8_t test_vmlasq_n_u16(uint16x8_t a, uint16x8_t b, uint16_t c) {
#ifdef POLYMORPHIC
  return vmlasq(a, b, c);
#else /* POLYMORPHIC */
  return vmlasq_n_u16(a, b, c);
#endif /* POLYMORPHIC */
}
// CHECK-LABEL: @test_vmlasq_n_u32(
// CHECK-NEXT: entry:
// CHECK-NEXT: [[TMP0:%.*]] = mul <4 x i32> [[A:%.*]], [[B:%.*]]
// CHECK-NEXT: [[DOTSPLATINSERT:%.*]] = insertelement <4 x i32> poison, i32 [[C:%.*]], i32 0
// CHECK-NEXT: [[DOTSPLAT:%.*]] = shufflevector <4 x i32> [[DOTSPLATINSERT]], <4 x i32> poison, <4 x i32> zeroinitializer
// CHECK-NEXT: [[TMP1:%.*]] = add <4 x i32> [[TMP0]], [[DOTSPLAT]]
// CHECK-NEXT: ret <4 x i32> [[TMP1]]
//
uint32x4_t test_vmlasq_n_u32(uint32x4_t a, uint32x4_t b, uint32_t c) {
#ifdef POLYMORPHIC
  return vmlasq(a, b, c);
#else /* POLYMORPHIC */
  return vmlasq_n_u32(a, b, c);
#endif /* POLYMORPHIC */
}
// CHECK-LABEL: @test_vqdmlahq_n_s8(
// CHECK-NEXT: entry:
// CHECK-NEXT: [[TMP0:%.*]] = zext i8 [[C:%.*]] to i32
// CHECK-NEXT: [[TMP1:%.*]] = call <16 x i8> @llvm.arm.mve.vqdmlah.v16i8(<16 x i8> [[A:%.*]], <16 x i8> [[B:%.*]], i32 [[TMP0]])
// CHECK-NEXT: ret <16 x i8> [[TMP1]]
//
int8x16_t test_vqdmlahq_n_s8(int8x16_t a, int8x16_t b, int8_t c) {
#ifdef POLYMORPHIC
  return vqdmlahq(a, b, c);
#else /* POLYMORPHIC */
  return vqdmlahq_n_s8(a, b, c);
#endif /* POLYMORPHIC */
}
// CHECK-LABEL: @test_vqdmlahq_n_s16(
// CHECK-NEXT: entry:
// CHECK-NEXT: [[TMP0:%.*]] = zext i16 [[C:%.*]] to i32
// CHECK-NEXT: [[TMP1:%.*]] = call <8 x i16> @llvm.arm.mve.vqdmlah.v8i16(<8 x i16> [[A:%.*]], <8 x i16> [[B:%.*]], i32 [[TMP0]])
// CHECK-NEXT: ret <8 x i16> [[TMP1]]
//
int16x8_t test_vqdmlahq_n_s16(int16x8_t a, int16x8_t b, int16_t c) {
#ifdef POLYMORPHIC
  return vqdmlahq(a, b, c);
#else /* POLYMORPHIC */
  return vqdmlahq_n_s16(a, b, c);
#endif /* POLYMORPHIC */
}
// CHECK-LABEL: @test_vqdmlahq_n_s32(
// CHECK-NEXT: entry:
// CHECK-NEXT: [[TMP0:%.*]] = call <4 x i32> @llvm.arm.mve.vqdmlah.v4i32(<4 x i32> [[A:%.*]], <4 x i32> [[B:%.*]], i32 [[C:%.*]])
// CHECK-NEXT: ret <4 x i32> [[TMP0]]
//
int32x4_t test_vqdmlahq_n_s32(int32x4_t a, int32x4_t b, int32_t c) {
#ifdef POLYMORPHIC
  return vqdmlahq(a, b, c);
#else /* POLYMORPHIC */
  return vqdmlahq_n_s32(a, b, c);
#endif /* POLYMORPHIC */
}
// CHECK-LABEL: @test_vqdmlashq_n_s8(
// CHECK-NEXT: entry:
// CHECK-NEXT: [[TMP0:%.*]] = zext i8 [[ADD:%.*]] to i32
// CHECK-NEXT: [[TMP1:%.*]] = call <16 x i8> @llvm.arm.mve.vqdmlash.v16i8(<16 x i8> [[M1:%.*]], <16 x i8> [[M2:%.*]], i32 [[TMP0]])
// CHECK-NEXT: ret <16 x i8> [[TMP1]]
//
int8x16_t test_vqdmlashq_n_s8(int8x16_t m1, int8x16_t m2, int8_t add) {
#ifdef POLYMORPHIC
  return vqdmlashq(m1, m2, add);
#else /* POLYMORPHIC */
  return vqdmlashq_n_s8(m1, m2, add);
#endif /* POLYMORPHIC */
}
// CHECK-LABEL: @test_vqdmlashq_n_s16(
// CHECK-NEXT: entry:
// CHECK-NEXT: [[TMP0:%.*]] = zext i16 [[ADD:%.*]] to i32
// CHECK-NEXT: [[TMP1:%.*]] = call <8 x i16> @llvm.arm.mve.vqdmlash.v8i16(<8 x i16> [[M1:%.*]], <8 x i16> [[M2:%.*]], i32 [[TMP0]])
// CHECK-NEXT: ret <8 x i16> [[TMP1]]
//
int16x8_t test_vqdmlashq_n_s16(int16x8_t m1, int16x8_t m2, int16_t add) {
#ifdef POLYMORPHIC
  return vqdmlashq(m1, m2, add);
#else /* POLYMORPHIC */
  return vqdmlashq_n_s16(m1, m2, add);
#endif /* POLYMORPHIC */
}
// CHECK-LABEL: @test_vqdmlashq_n_s32(
// CHECK-NEXT: entry:
// CHECK-NEXT: [[TMP0:%.*]] = call <4 x i32> @llvm.arm.mve.vqdmlash.v4i32(<4 x i32> [[M1:%.*]], <4 x i32> [[M2:%.*]], i32 [[ADD:%.*]])
// CHECK-NEXT: ret <4 x i32> [[TMP0]]
//
int32x4_t test_vqdmlashq_n_s32(int32x4_t m1, int32x4_t m2, int32_t add) {
#ifdef POLYMORPHIC
  return vqdmlashq(m1, m2, add);
#else /* POLYMORPHIC */
  return vqdmlashq_n_s32(m1, m2, add);
#endif /* POLYMORPHIC */
}
// CHECK-LABEL: @test_vqrdmlahq_n_s8(
// CHECK-NEXT: entry:
// CHECK-NEXT: [[TMP0:%.*]] = zext i8 [[C:%.*]] to i32
// CHECK-NEXT: [[TMP1:%.*]] = call <16 x i8> @llvm.arm.mve.vqrdmlah.v16i8(<16 x i8> [[A:%.*]], <16 x i8> [[B:%.*]], i32 [[TMP0]])
// CHECK-NEXT: ret <16 x i8> [[TMP1]]
//
int8x16_t test_vqrdmlahq_n_s8(int8x16_t a, int8x16_t b, int8_t c) {
#ifdef POLYMORPHIC
  return vqrdmlahq(a, b, c);
#else /* POLYMORPHIC */
  return vqrdmlahq_n_s8(a, b, c);
#endif /* POLYMORPHIC */
}
// CHECK-LABEL: @test_vqrdmlahq_n_s16(
// CHECK-NEXT: entry:
// CHECK-NEXT: [[TMP0:%.*]] = zext i16 [[C:%.*]] to i32
// CHECK-NEXT: [[TMP1:%.*]] = call <8 x i16> @llvm.arm.mve.vqrdmlah.v8i16(<8 x i16> [[A:%.*]], <8 x i16> [[B:%.*]], i32 [[TMP0]])
// CHECK-NEXT: ret <8 x i16> [[TMP1]]
//
int16x8_t test_vqrdmlahq_n_s16(int16x8_t a, int16x8_t b, int16_t c) {
#ifdef POLYMORPHIC
  return vqrdmlahq(a, b, c);
#else /* POLYMORPHIC */
  return vqrdmlahq_n_s16(a, b, c);
#endif /* POLYMORPHIC */
}
// CHECK-LABEL: @test_vqrdmlahq_n_s32(
// CHECK-NEXT: entry:
// CHECK-NEXT: [[TMP0:%.*]] = call <4 x i32> @llvm.arm.mve.vqrdmlah.v4i32(<4 x i32> [[A:%.*]], <4 x i32> [[B:%.*]], i32 [[C:%.*]])
// CHECK-NEXT: ret <4 x i32> [[TMP0]]
//
int32x4_t test_vqrdmlahq_n_s32(int32x4_t a, int32x4_t b, int32_t c) {
#ifdef POLYMORPHIC
  return vqrdmlahq(a, b, c);
#else /* POLYMORPHIC */
  return vqrdmlahq_n_s32(a, b, c);
#endif /* POLYMORPHIC */
}
// CHECK-LABEL: @test_vqrdmlashq_n_s8(
// CHECK-NEXT: entry:
// CHECK-NEXT: [[TMP0:%.*]] = zext i8 [[C:%.*]] to i32
// CHECK-NEXT: [[TMP1:%.*]] = call <16 x i8> @llvm.arm.mve.vqrdmlash.v16i8(<16 x i8> [[A:%.*]], <16 x i8> [[B:%.*]], i32 [[TMP0]])
// CHECK-NEXT: ret <16 x i8> [[TMP1]]
//
int8x16_t test_vqrdmlashq_n_s8(int8x16_t a, int8x16_t b, int8_t c) {
#ifdef POLYMORPHIC
  return vqrdmlashq(a, b, c);
#else /* POLYMORPHIC */
  return vqrdmlashq_n_s8(a, b, c);
#endif /* POLYMORPHIC */
}
// CHECK-LABEL: @test_vqrdmlashq_n_s16(
// CHECK-NEXT: entry:
// CHECK-NEXT: [[TMP0:%.*]] = zext i16 [[C:%.*]] to i32
// CHECK-NEXT: [[TMP1:%.*]] = call <8 x i16> @llvm.arm.mve.vqrdmlash.v8i16(<8 x i16> [[A:%.*]], <8 x i16> [[B:%.*]], i32 [[TMP0]])
// CHECK-NEXT: ret <8 x i16> [[TMP1]]
//
int16x8_t test_vqrdmlashq_n_s16(int16x8_t a, int16x8_t b, int16_t c) {
#ifdef POLYMORPHIC
  return vqrdmlashq(a, b, c);
#else /* POLYMORPHIC */
  return vqrdmlashq_n_s16(a, b, c);
#endif /* POLYMORPHIC */
}
// CHECK-LABEL: @test_vqrdmlashq_n_s32(
// CHECK-NEXT: entry:
// CHECK-NEXT: [[TMP0:%.*]] = call <4 x i32> @llvm.arm.mve.vqrdmlash.v4i32(<4 x i32> [[A:%.*]], <4 x i32> [[B:%.*]], i32 [[C:%.*]])
// CHECK-NEXT: ret <4 x i32> [[TMP0]]
//
int32x4_t test_vqrdmlashq_n_s32(int32x4_t a, int32x4_t b, int32_t c) {
#ifdef POLYMORPHIC
  return vqrdmlashq(a, b, c);
#else /* POLYMORPHIC */
  return vqrdmlashq_n_s32(a, b, c);
#endif /* POLYMORPHIC */
}
// CHECK-LABEL: @test_vfmaq_m_f16(
// CHECK-NEXT: entry:
// CHECK-NEXT: [[TMP0:%.*]] = zext i16 [[P:%.*]] to i32
// CHECK-NEXT: [[TMP1:%.*]] = call <8 x i1> @llvm.arm.mve.pred.i2v.v8i1(i32 [[TMP0]])
// CHECK-NEXT: [[TMP2:%.*]] = call <8 x half> @llvm.arm.mve.fma.predicated.v8f16.v8i1(<8 x half> [[B:%.*]], <8 x half> [[C:%.*]], <8 x half> [[A:%.*]], <8 x i1> [[TMP1]])
// CHECK-NEXT: ret <8 x half> [[TMP2]]
//
float16x8_t test_vfmaq_m_f16(float16x8_t a, float16x8_t b, float16x8_t c, mve_pred16_t p) {
#ifdef POLYMORPHIC
  return vfmaq_m(a, b, c, p);
#else /* POLYMORPHIC */
  return vfmaq_m_f16(a, b, c, p);
#endif /* POLYMORPHIC */
}
// CHECK-LABEL: @test_vfmaq_m_f32(
// CHECK-NEXT: entry:
// CHECK-NEXT: [[TMP0:%.*]] = zext i16 [[P:%.*]] to i32
// CHECK-NEXT: [[TMP1:%.*]] = call <4 x i1> @llvm.arm.mve.pred.i2v.v4i1(i32 [[TMP0]])
// CHECK-NEXT: [[TMP2:%.*]] = call <4 x float> @llvm.arm.mve.fma.predicated.v4f32.v4i1(<4 x float> [[B:%.*]], <4 x float> [[C:%.*]], <4 x float> [[A:%.*]], <4 x i1> [[TMP1]])
// CHECK-NEXT: ret <4 x float> [[TMP2]]
//
float32x4_t test_vfmaq_m_f32(float32x4_t a, float32x4_t b, float32x4_t c, mve_pred16_t p) {
#ifdef POLYMORPHIC
  return vfmaq_m(a, b, c, p);
#else /* POLYMORPHIC */
  return vfmaq_m_f32(a, b, c, p);
#endif /* POLYMORPHIC */
}
// CHECK-LABEL: @test_vfmaq_m_n_f16(
// CHECK-NEXT: entry:
// CHECK-NEXT: [[DOTSPLATINSERT:%.*]] = insertelement <8 x half> poison, half [[C:%.*]], i32 0
// CHECK-NEXT: [[DOTSPLAT:%.*]] = shufflevector <8 x half> [[DOTSPLATINSERT]], <8 x half> poison, <8 x i32> zeroinitializer
// CHECK-NEXT: [[TMP0:%.*]] = zext i16 [[P:%.*]] to i32
// CHECK-NEXT: [[TMP1:%.*]] = call <8 x i1> @llvm.arm.mve.pred.i2v.v8i1(i32 [[TMP0]])
// CHECK-NEXT: [[TMP2:%.*]] = call <8 x half> @llvm.arm.mve.fma.predicated.v8f16.v8i1(<8 x half> [[B:%.*]], <8 x half> [[DOTSPLAT]], <8 x half> [[A:%.*]], <8 x i1> [[TMP1]])
// CHECK-NEXT: ret <8 x half> [[TMP2]]
//
float16x8_t test_vfmaq_m_n_f16(float16x8_t a, float16x8_t b, float16_t c, mve_pred16_t p) {
#ifdef POLYMORPHIC
  return vfmaq_m(a, b, c, p);
#else /* POLYMORPHIC */
  return vfmaq_m_n_f16(a, b, c, p);
#endif /* POLYMORPHIC */
}
// CHECK-LABEL: @test_vfmaq_m_n_f32(
// CHECK-NEXT: entry:
// CHECK-NEXT: [[DOTSPLATINSERT:%.*]] = insertelement <4 x float> poison, float [[C:%.*]], i32 0
// CHECK-NEXT: [[DOTSPLAT:%.*]] = shufflevector <4 x float> [[DOTSPLATINSERT]], <4 x float> poison, <4 x i32> zeroinitializer
// CHECK-NEXT: [[TMP0:%.*]] = zext i16 [[P:%.*]] to i32
// CHECK-NEXT: [[TMP1:%.*]] = call <4 x i1> @llvm.arm.mve.pred.i2v.v4i1(i32 [[TMP0]])
// CHECK-NEXT: [[TMP2:%.*]] = call <4 x float> @llvm.arm.mve.fma.predicated.v4f32.v4i1(<4 x float> [[B:%.*]], <4 x float> [[DOTSPLAT]], <4 x float> [[A:%.*]], <4 x i1> [[TMP1]])
// CHECK-NEXT: ret <4 x float> [[TMP2]]
//
float32x4_t test_vfmaq_m_n_f32(float32x4_t a, float32x4_t b, float32_t c, mve_pred16_t p) {
#ifdef POLYMORPHIC
  return vfmaq_m(a, b, c, p);
#else /* POLYMORPHIC */
  return vfmaq_m_n_f32(a, b, c, p);
#endif /* POLYMORPHIC */
}
// CHECK-LABEL: @test_vfmasq_m_n_f16(
// CHECK-NEXT: entry:
// CHECK-NEXT: [[DOTSPLATINSERT:%.*]] = insertelement <8 x half> poison, half [[C:%.*]], i32 0
// CHECK-NEXT: [[DOTSPLAT:%.*]] = shufflevector <8 x half> [[DOTSPLATINSERT]], <8 x half> poison, <8 x i32> zeroinitializer
// CHECK-NEXT: [[TMP0:%.*]] = zext i16 [[P:%.*]] to i32
// CHECK-NEXT: [[TMP1:%.*]] = call <8 x i1> @llvm.arm.mve.pred.i2v.v8i1(i32 [[TMP0]])
// CHECK-NEXT: [[TMP2:%.*]] = call <8 x half> @llvm.arm.mve.fma.predicated.v8f16.v8i1(<8 x half> [[A:%.*]], <8 x half> [[B:%.*]], <8 x half> [[DOTSPLAT]], <8 x i1> [[TMP1]])
// CHECK-NEXT: ret <8 x half> [[TMP2]]
//
float16x8_t test_vfmasq_m_n_f16(float16x8_t a, float16x8_t b, float16_t c, mve_pred16_t p) {
#ifdef POLYMORPHIC
  return vfmasq_m(a, b, c, p);
#else /* POLYMORPHIC */
  return vfmasq_m_n_f16(a, b, c, p);
#endif /* POLYMORPHIC */
}
// CHECK-LABEL: @test_vfmasq_m_n_f32(
// CHECK-NEXT: entry:
// CHECK-NEXT: [[DOTSPLATINSERT:%.*]] = insertelement <4 x float> poison, float [[C:%.*]], i32 0
// CHECK-NEXT: [[DOTSPLAT:%.*]] = shufflevector <4 x float> [[DOTSPLATINSERT]], <4 x float> poison, <4 x i32> zeroinitializer
// CHECK-NEXT: [[TMP0:%.*]] = zext i16 [[P:%.*]] to i32
// CHECK-NEXT: [[TMP1:%.*]] = call <4 x i1> @llvm.arm.mve.pred.i2v.v4i1(i32 [[TMP0]])
// CHECK-NEXT: [[TMP2:%.*]] = call <4 x float> @llvm.arm.mve.fma.predicated.v4f32.v4i1(<4 x float> [[A:%.*]], <4 x float> [[B:%.*]], <4 x float> [[DOTSPLAT]], <4 x i1> [[TMP1]])
// CHECK-NEXT: ret <4 x float> [[TMP2]]
//
float32x4_t test_vfmasq_m_n_f32(float32x4_t a, float32x4_t b, float32_t c, mve_pred16_t p) {
#ifdef POLYMORPHIC
  return vfmasq_m(a, b, c, p);
#else /* POLYMORPHIC */
  return vfmasq_m_n_f32(a, b, c, p);
#endif /* POLYMORPHIC */
}
// CHECK-LABEL: @test_vfmsq_m_f16(
// CHECK-NEXT: entry:
// CHECK-NEXT: [[TMP0:%.*]] = fneg <8 x half> [[C:%.*]]
// CHECK-NEXT: [[TMP1:%.*]] = zext i16 [[P:%.*]] to i32
// CHECK-NEXT: [[TMP2:%.*]] = call <8 x i1> @llvm.arm.mve.pred.i2v.v8i1(i32 [[TMP1]])
// CHECK-NEXT: [[TMP3:%.*]] = call <8 x half> @llvm.arm.mve.fma.predicated.v8f16.v8i1(<8 x half> [[B:%.*]], <8 x half> [[TMP0]], <8 x half> [[A:%.*]], <8 x i1> [[TMP2]])
// CHECK-NEXT: ret <8 x half> [[TMP3]]
//
float16x8_t test_vfmsq_m_f16(float16x8_t a, float16x8_t b, float16x8_t c, mve_pred16_t p) {
#ifdef POLYMORPHIC
  return vfmsq_m(a, b, c, p);
#else /* POLYMORPHIC */
  return vfmsq_m_f16(a, b, c, p);
#endif /* POLYMORPHIC */
}
// CHECK-LABEL: @test_vfmsq_m_f32(
// CHECK-NEXT: entry:
// CHECK-NEXT: [[TMP0:%.*]] = fneg <4 x float> [[C:%.*]]
// CHECK-NEXT: [[TMP1:%.*]] = zext i16 [[P:%.*]] to i32
// CHECK-NEXT: [[TMP2:%.*]] = call <4 x i1> @llvm.arm.mve.pred.i2v.v4i1(i32 [[TMP1]])
// CHECK-NEXT: [[TMP3:%.*]] = call <4 x float> @llvm.arm.mve.fma.predicated.v4f32.v4i1(<4 x float> [[B:%.*]], <4 x float> [[TMP0]], <4 x float> [[A:%.*]], <4 x i1> [[TMP2]])
// CHECK-NEXT: ret <4 x float> [[TMP3]]
//
float32x4_t test_vfmsq_m_f32(float32x4_t a, float32x4_t b, float32x4_t c, mve_pred16_t p) {
#ifdef POLYMORPHIC
  return vfmsq_m(a, b, c, p);
#else /* POLYMORPHIC */
  return vfmsq_m_f32(a, b, c, p);
#endif /* POLYMORPHIC */
}
[ARM,MVE] Add intrinsics and isel for MVE integer VMLA.
Summary:
These instructions compute multiply+add in integers, with one of the
operands being a splat of a scalar. (VMLA and VMLAS differ in whether
the splat operand is a multiplier or the addend.)
I've represented these in IR using existing standard IR operations for
the unpredicated forms. The predicated forms are done with target-
specific intrinsics, as usual.
When operating on n-bit vector lanes, only the bottom n bits of the
i32 scalar operand are used. So we have to tell that to isel lowering,
to allow it to remove a pointless sign- or zero-extension instruction
on that input register. That's done in `PerformIntrinsicCombine`, but
first I had to enable `PerformIntrinsicCombine` for MVE targets
(previously all the intrinsics it handled were for NEON), and make it
a method of `ARMTargetLowering` so that it can get at
`SimplifyDemandedBits`.
Reviewers: dmgreen, MarkMurrayARM, miyuki, ostannard
Reviewed By: dmgreen
Subscribers: kristof.beyls, hiraditya, danielkiss, cfe-commits
Tags: #clang
Differential Revision: https://reviews.llvm.org/D76122
2020-03-11 20:48:36 +08:00
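For reference, the unpredicated forms described above can be written out at the C level in terms of existing ACLE arithmetic intrinsics, which is the source-level counterpart of "existing standard IR operations". The two helpers below are a minimal sketch for the s16 case only; the names vmlaq_reference and vmlasq_reference are hypothetical, and they are not part of the test file below or of arm_mve.h.

// Sketch (hypothetical helpers, not checked by FileCheck): VMLA splats
// the scalar as a multiplier, VMLAS splats it as the addend.
static inline int16x8_t vmlaq_reference(int16x8_t add, int16x8_t mul, int16_t scalar) {
  // vmlaq_n_s16(add, mul, scalar) == add + mul * splat(scalar); for
  // 16-bit lanes, only the bottom 16 bits of the i32 scalar matter.
  return vaddq_s16(add, vmulq_n_s16(mul, scalar));
}
static inline int16x8_t vmlasq_reference(int16x8_t m1, int16x8_t m2, int16_t scalar) {
  // vmlasq_n_s16(m1, m2, scalar) == m1 * m2 + splat(scalar).
  return vaddq_n_s16(vmulq_s16(m1, m2), scalar);
}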
// CHECK-LABEL: @test_vmlaq_m_n_s8(
// CHECK-NEXT: entry:
// CHECK-NEXT: [[TMP0:%.*]] = zext i8 [[C:%.*]] to i32
// CHECK-NEXT: [[TMP1:%.*]] = zext i16 [[P:%.*]] to i32
// CHECK-NEXT: [[TMP2:%.*]] = call <16 x i1> @llvm.arm.mve.pred.i2v.v16i1(i32 [[TMP1]])
// CHECK-NEXT: [[TMP3:%.*]] = call <16 x i8> @llvm.arm.mve.vmla.n.predicated.v16i8.v16i1(<16 x i8> [[A:%.*]], <16 x i8> [[B:%.*]], i32 [[TMP0]], <16 x i1> [[TMP2]])
// CHECK-NEXT: ret <16 x i8> [[TMP3]]
//
int8x16_t test_vmlaq_m_n_s8(int8x16_t a, int8x16_t b, int8_t c, mve_pred16_t p) {
#ifdef POLYMORPHIC
  return vmlaq_m(a, b, c, p);
#else /* POLYMORPHIC */
  return vmlaq_m_n_s8(a, b, c, p);
#endif /* POLYMORPHIC */
}
// CHECK-LABEL: @test_vmlaq_m_n_s16(
// CHECK-NEXT: entry:
// CHECK-NEXT: [[TMP0:%.*]] = zext i16 [[C:%.*]] to i32
// CHECK-NEXT: [[TMP1:%.*]] = zext i16 [[P:%.*]] to i32
// CHECK-NEXT: [[TMP2:%.*]] = call <8 x i1> @llvm.arm.mve.pred.i2v.v8i1(i32 [[TMP1]])
// CHECK-NEXT: [[TMP3:%.*]] = call <8 x i16> @llvm.arm.mve.vmla.n.predicated.v8i16.v8i1(<8 x i16> [[A:%.*]], <8 x i16> [[B:%.*]], i32 [[TMP0]], <8 x i1> [[TMP2]])
// CHECK-NEXT: ret <8 x i16> [[TMP3]]
//
int16x8_t test_vmlaq_m_n_s16(int16x8_t a, int16x8_t b, int16_t c, mve_pred16_t p) {
#ifdef POLYMORPHIC
  return vmlaq_m(a, b, c, p);
#else /* POLYMORPHIC */
  return vmlaq_m_n_s16(a, b, c, p);
#endif /* POLYMORPHIC */
}
// CHECK-LABEL: @test_vmlaq_m_n_s32(
// CHECK-NEXT: entry:
// CHECK-NEXT: [[TMP0:%.*]] = zext i16 [[P:%.*]] to i32
// CHECK-NEXT: [[TMP1:%.*]] = call <4 x i1> @llvm.arm.mve.pred.i2v.v4i1(i32 [[TMP0]])
// CHECK-NEXT: [[TMP2:%.*]] = call <4 x i32> @llvm.arm.mve.vmla.n.predicated.v4i32.v4i1(<4 x i32> [[A:%.*]], <4 x i32> [[B:%.*]], i32 [[C:%.*]], <4 x i1> [[TMP1]])
// CHECK-NEXT: ret <4 x i32> [[TMP2]]
//
int32x4_t test_vmlaq_m_n_s32(int32x4_t a, int32x4_t b, int32_t c, mve_pred16_t p) {
#ifdef POLYMORPHIC
  return vmlaq_m(a, b, c, p);
#else /* POLYMORPHIC */
  return vmlaq_m_n_s32(a, b, c, p);
#endif /* POLYMORPHIC */
}
// CHECK-LABEL: @test_vmlaq_m_n_u8(
// CHECK-NEXT: entry:
// CHECK-NEXT: [[TMP0:%.*]] = zext i8 [[C:%.*]] to i32
// CHECK-NEXT: [[TMP1:%.*]] = zext i16 [[P:%.*]] to i32
// CHECK-NEXT: [[TMP2:%.*]] = call <16 x i1> @llvm.arm.mve.pred.i2v.v16i1(i32 [[TMP1]])
// CHECK-NEXT: [[TMP3:%.*]] = call <16 x i8> @llvm.arm.mve.vmla.n.predicated.v16i8.v16i1(<16 x i8> [[A:%.*]], <16 x i8> [[B:%.*]], i32 [[TMP0]], <16 x i1> [[TMP2]])
// CHECK-NEXT: ret <16 x i8> [[TMP3]]
//
uint8x16_t test_vmlaq_m_n_u8(uint8x16_t a, uint8x16_t b, uint8_t c, mve_pred16_t p) {
#ifdef POLYMORPHIC
  return vmlaq_m(a, b, c, p);
#else /* POLYMORPHIC */
  return vmlaq_m_n_u8(a, b, c, p);
#endif /* POLYMORPHIC */
}
// CHECK-LABEL: @test_vmlaq_m_n_u16(
// CHECK-NEXT: entry:
// CHECK-NEXT: [[TMP0:%.*]] = zext i16 [[C:%.*]] to i32
// CHECK-NEXT: [[TMP1:%.*]] = zext i16 [[P:%.*]] to i32
// CHECK-NEXT: [[TMP2:%.*]] = call <8 x i1> @llvm.arm.mve.pred.i2v.v8i1(i32 [[TMP1]])
// CHECK-NEXT: [[TMP3:%.*]] = call <8 x i16> @llvm.arm.mve.vmla.n.predicated.v8i16.v8i1(<8 x i16> [[A:%.*]], <8 x i16> [[B:%.*]], i32 [[TMP0]], <8 x i1> [[TMP2]])
// CHECK-NEXT: ret <8 x i16> [[TMP3]]
//
uint16x8_t test_vmlaq_m_n_u16(uint16x8_t a, uint16x8_t b, uint16_t c, mve_pred16_t p) {
#ifdef POLYMORPHIC
  return vmlaq_m(a, b, c, p);
#else /* POLYMORPHIC */
  return vmlaq_m_n_u16(a, b, c, p);
#endif /* POLYMORPHIC */
}
// CHECK-LABEL: @test_vmlaq_m_n_u32(
// CHECK-NEXT: entry:
// CHECK-NEXT: [[TMP0:%.*]] = zext i16 [[P:%.*]] to i32
// CHECK-NEXT: [[TMP1:%.*]] = call <4 x i1> @llvm.arm.mve.pred.i2v.v4i1(i32 [[TMP0]])
// CHECK-NEXT: [[TMP2:%.*]] = call <4 x i32> @llvm.arm.mve.vmla.n.predicated.v4i32.v4i1(<4 x i32> [[A:%.*]], <4 x i32> [[B:%.*]], i32 [[C:%.*]], <4 x i1> [[TMP1]])
// CHECK-NEXT: ret <4 x i32> [[TMP2]]
//
uint32x4_t test_vmlaq_m_n_u32(uint32x4_t a, uint32x4_t b, uint32_t c, mve_pred16_t p) {
#ifdef POLYMORPHIC
  return vmlaq_m(a, b, c, p);
#else /* POLYMORPHIC */
  return vmlaq_m_n_u32(a, b, c, p);
#endif /* POLYMORPHIC */
}
// CHECK-LABEL: @test_vmlasq_m_n_s8(
// CHECK-NEXT: entry:
// CHECK-NEXT: [[TMP0:%.*]] = zext i8 [[C:%.*]] to i32
// CHECK-NEXT: [[TMP1:%.*]] = zext i16 [[P:%.*]] to i32
// CHECK-NEXT: [[TMP2:%.*]] = call <16 x i1> @llvm.arm.mve.pred.i2v.v16i1(i32 [[TMP1]])
// CHECK-NEXT: [[TMP3:%.*]] = call <16 x i8> @llvm.arm.mve.vmlas.n.predicated.v16i8.v16i1(<16 x i8> [[A:%.*]], <16 x i8> [[B:%.*]], i32 [[TMP0]], <16 x i1> [[TMP2]])
// CHECK-NEXT: ret <16 x i8> [[TMP3]]
//
int8x16_t test_vmlasq_m_n_s8(int8x16_t a, int8x16_t b, int8_t c, mve_pred16_t p) {
#ifdef POLYMORPHIC
  return vmlasq_m(a, b, c, p);
#else /* POLYMORPHIC */
  return vmlasq_m_n_s8(a, b, c, p);
#endif /* POLYMORPHIC */
}
// CHECK-LABEL: @test_vmlasq_m_n_s16(
// CHECK-NEXT: entry:
// CHECK-NEXT: [[TMP0:%.*]] = zext i16 [[C:%.*]] to i32
// CHECK-NEXT: [[TMP1:%.*]] = zext i16 [[P:%.*]] to i32
// CHECK-NEXT: [[TMP2:%.*]] = call <8 x i1> @llvm.arm.mve.pred.i2v.v8i1(i32 [[TMP1]])
// CHECK-NEXT: [[TMP3:%.*]] = call <8 x i16> @llvm.arm.mve.vmlas.n.predicated.v8i16.v8i1(<8 x i16> [[A:%.*]], <8 x i16> [[B:%.*]], i32 [[TMP0]], <8 x i1> [[TMP2]])
// CHECK-NEXT: ret <8 x i16> [[TMP3]]
//
int16x8_t test_vmlasq_m_n_s16(int16x8_t a, int16x8_t b, int16_t c, mve_pred16_t p) {
#ifdef POLYMORPHIC
  return vmlasq_m(a, b, c, p);
#else /* POLYMORPHIC */
  return vmlasq_m_n_s16(a, b, c, p);
#endif /* POLYMORPHIC */
}
// CHECK-LABEL: @test_vmlasq_m_n_s32(
// CHECK-NEXT: entry:
// CHECK-NEXT: [[TMP0:%.*]] = zext i16 [[P:%.*]] to i32
// CHECK-NEXT: [[TMP1:%.*]] = call <4 x i1> @llvm.arm.mve.pred.i2v.v4i1(i32 [[TMP0]])
// CHECK-NEXT: [[TMP2:%.*]] = call <4 x i32> @llvm.arm.mve.vmlas.n.predicated.v4i32.v4i1(<4 x i32> [[A:%.*]], <4 x i32> [[B:%.*]], i32 [[C:%.*]], <4 x i1> [[TMP1]])
// CHECK-NEXT: ret <4 x i32> [[TMP2]]
//
int32x4_t test_vmlasq_m_n_s32(int32x4_t a, int32x4_t b, int32_t c, mve_pred16_t p) {
#ifdef POLYMORPHIC
  return vmlasq_m(a, b, c, p);
#else /* POLYMORPHIC */
  return vmlasq_m_n_s32(a, b, c, p);
#endif /* POLYMORPHIC */
}
// CHECK-LABEL: @test_vmlasq_m_n_u8(
// CHECK-NEXT: entry:
// CHECK-NEXT: [[TMP0:%.*]] = zext i8 [[C:%.*]] to i32
// CHECK-NEXT: [[TMP1:%.*]] = zext i16 [[P:%.*]] to i32
// CHECK-NEXT: [[TMP2:%.*]] = call <16 x i1> @llvm.arm.mve.pred.i2v.v16i1(i32 [[TMP1]])
// CHECK-NEXT: [[TMP3:%.*]] = call <16 x i8> @llvm.arm.mve.vmlas.n.predicated.v16i8.v16i1(<16 x i8> [[A:%.*]], <16 x i8> [[B:%.*]], i32 [[TMP0]], <16 x i1> [[TMP2]])
// CHECK-NEXT: ret <16 x i8> [[TMP3]]
//
uint8x16_t test_vmlasq_m_n_u8(uint8x16_t a, uint8x16_t b, uint8_t c, mve_pred16_t p) {
#ifdef POLYMORPHIC
  return vmlasq_m(a, b, c, p);
#else /* POLYMORPHIC */
  return vmlasq_m_n_u8(a, b, c, p);
#endif /* POLYMORPHIC */
}
// CHECK-LABEL: @test_vmlasq_m_n_u16(
// CHECK-NEXT: entry:
// CHECK-NEXT: [[TMP0:%.*]] = zext i16 [[C:%.*]] to i32
// CHECK-NEXT: [[TMP1:%.*]] = zext i16 [[P:%.*]] to i32
// CHECK-NEXT: [[TMP2:%.*]] = call <8 x i1> @llvm.arm.mve.pred.i2v.v8i1(i32 [[TMP1]])
// CHECK-NEXT: [[TMP3:%.*]] = call <8 x i16> @llvm.arm.mve.vmlas.n.predicated.v8i16.v8i1(<8 x i16> [[A:%.*]], <8 x i16> [[B:%.*]], i32 [[TMP0]], <8 x i1> [[TMP2]])
// CHECK-NEXT: ret <8 x i16> [[TMP3]]
//
uint16x8_t test_vmlasq_m_n_u16(uint16x8_t a, uint16x8_t b, uint16_t c, mve_pred16_t p) {
#ifdef POLYMORPHIC
  return vmlasq_m(a, b, c, p);
#else /* POLYMORPHIC */
  return vmlasq_m_n_u16(a, b, c, p);
#endif /* POLYMORPHIC */
}
// CHECK-LABEL: @test_vmlasq_m_n_u32(
// CHECK-NEXT: entry:
// CHECK-NEXT: [[TMP0:%.*]] = zext i16 [[P:%.*]] to i32
// CHECK-NEXT: [[TMP1:%.*]] = call <4 x i1> @llvm.arm.mve.pred.i2v.v4i1(i32 [[TMP0]])
// CHECK-NEXT: [[TMP2:%.*]] = call <4 x i32> @llvm.arm.mve.vmlas.n.predicated.v4i32.v4i1(<4 x i32> [[A:%.*]], <4 x i32> [[B:%.*]], i32 [[C:%.*]], <4 x i1> [[TMP1]])
// CHECK-NEXT: ret <4 x i32> [[TMP2]]
//
uint32x4_t test_vmlasq_m_n_u32(uint32x4_t a, uint32x4_t b, uint32_t c, mve_pred16_t p) {
#ifdef POLYMORPHIC
  return vmlasq_m(a, b, c, p);
#else /* POLYMORPHIC */
  return vmlasq_m_n_u32(a, b, c, p);
#endif /* POLYMORPHIC */
}
// CHECK-LABEL: @test_vqdmlahq_m_n_s8(
// CHECK-NEXT: entry:
// CHECK-NEXT: [[TMP0:%.*]] = zext i8 [[C:%.*]] to i32
// CHECK-NEXT: [[TMP1:%.*]] = zext i16 [[P:%.*]] to i32
// CHECK-NEXT: [[TMP2:%.*]] = call <16 x i1> @llvm.arm.mve.pred.i2v.v16i1(i32 [[TMP1]])
// CHECK-NEXT: [[TMP3:%.*]] = call <16 x i8> @llvm.arm.mve.vqdmlah.predicated.v16i8.v16i1(<16 x i8> [[A:%.*]], <16 x i8> [[B:%.*]], i32 [[TMP0]], <16 x i1> [[TMP2]])
// CHECK-NEXT: ret <16 x i8> [[TMP3]]
//
int8x16_t test_vqdmlahq_m_n_s8(int8x16_t a, int8x16_t b, int8_t c, mve_pred16_t p) {
#ifdef POLYMORPHIC
  return vqdmlahq_m(a, b, c, p);
#else /* POLYMORPHIC */
  return vqdmlahq_m_n_s8(a, b, c, p);
#endif /* POLYMORPHIC */
}
// CHECK-LABEL: @test_vqdmlahq_m_n_s16(
// CHECK-NEXT: entry:
// CHECK-NEXT: [[TMP0:%.*]] = zext i16 [[C:%.*]] to i32
// CHECK-NEXT: [[TMP1:%.*]] = zext i16 [[P:%.*]] to i32
// CHECK-NEXT: [[TMP2:%.*]] = call <8 x i1> @llvm.arm.mve.pred.i2v.v8i1(i32 [[TMP1]])
// CHECK-NEXT: [[TMP3:%.*]] = call <8 x i16> @llvm.arm.mve.vqdmlah.predicated.v8i16.v8i1(<8 x i16> [[A:%.*]], <8 x i16> [[B:%.*]], i32 [[TMP0]], <8 x i1> [[TMP2]])
// CHECK-NEXT: ret <8 x i16> [[TMP3]]
//
int16x8_t test_vqdmlahq_m_n_s16(int16x8_t a, int16x8_t b, int16_t c, mve_pred16_t p) {
#ifdef POLYMORPHIC
  return vqdmlahq_m(a, b, c, p);
#else /* POLYMORPHIC */
  return vqdmlahq_m_n_s16(a, b, c, p);
#endif /* POLYMORPHIC */
}
// CHECK-LABEL: @test_vqdmlahq_m_n_s32(
// CHECK-NEXT: entry:
// CHECK-NEXT: [[TMP0:%.*]] = zext i16 [[P:%.*]] to i32
// CHECK-NEXT: [[TMP1:%.*]] = call <4 x i1> @llvm.arm.mve.pred.i2v.v4i1(i32 [[TMP0]])
// CHECK-NEXT: [[TMP2:%.*]] = call <4 x i32> @llvm.arm.mve.vqdmlah.predicated.v4i32.v4i1(<4 x i32> [[A:%.*]], <4 x i32> [[B:%.*]], i32 [[C:%.*]], <4 x i1> [[TMP1]])
// CHECK-NEXT: ret <4 x i32> [[TMP2]]
//
int32x4_t test_vqdmlahq_m_n_s32(int32x4_t a, int32x4_t b, int32_t c, mve_pred16_t p) {
#ifdef POLYMORPHIC
  return vqdmlahq_m(a, b, c, p);
#else /* POLYMORPHIC */
  return vqdmlahq_m_n_s32(a, b, c, p);
#endif /* POLYMORPHIC */
}
// CHECK-LABEL: @test_vqdmlashq_m_n_s8(
// CHECK-NEXT: entry:
// CHECK-NEXT: [[TMP0:%.*]] = zext i8 [[ADD:%.*]] to i32
// CHECK-NEXT: [[TMP1:%.*]] = zext i16 [[P:%.*]] to i32
// CHECK-NEXT: [[TMP2:%.*]] = call <16 x i1> @llvm.arm.mve.pred.i2v.v16i1(i32 [[TMP1]])
// CHECK-NEXT: [[TMP3:%.*]] = call <16 x i8> @llvm.arm.mve.vqdmlash.predicated.v16i8.v16i1(<16 x i8> [[M1:%.*]], <16 x i8> [[M2:%.*]], i32 [[TMP0]], <16 x i1> [[TMP2]])
// CHECK-NEXT: ret <16 x i8> [[TMP3]]
//
int8x16_t test_vqdmlashq_m_n_s8(int8x16_t m1, int8x16_t m2, int8_t add, mve_pred16_t p) {
#ifdef POLYMORPHIC
  return vqdmlashq_m(m1, m2, add, p);
#else /* POLYMORPHIC */
  return vqdmlashq_m_n_s8(m1, m2, add, p);
#endif /* POLYMORPHIC */
}
// CHECK-LABEL: @test_vqdmlashq_m_n_s16(
// CHECK-NEXT: entry:
// CHECK-NEXT: [[TMP0:%.*]] = zext i16 [[ADD:%.*]] to i32
// CHECK-NEXT: [[TMP1:%.*]] = zext i16 [[P:%.*]] to i32
// CHECK-NEXT: [[TMP2:%.*]] = call <8 x i1> @llvm.arm.mve.pred.i2v.v8i1(i32 [[TMP1]])
// CHECK-NEXT: [[TMP3:%.*]] = call <8 x i16> @llvm.arm.mve.vqdmlash.predicated.v8i16.v8i1(<8 x i16> [[M1:%.*]], <8 x i16> [[M2:%.*]], i32 [[TMP0]], <8 x i1> [[TMP2]])
// CHECK-NEXT: ret <8 x i16> [[TMP3]]
//
int16x8_t test_vqdmlashq_m_n_s16(int16x8_t m1, int16x8_t m2, int16_t add, mve_pred16_t p) {
#ifdef POLYMORPHIC
  return vqdmlashq_m(m1, m2, add, p);
#else /* POLYMORPHIC */
  return vqdmlashq_m_n_s16(m1, m2, add, p);
#endif /* POLYMORPHIC */
}
// CHECK-LABEL: @test_vqdmlashq_m_n_s32(
// CHECK-NEXT: entry:
// CHECK-NEXT: [[TMP0:%.*]] = zext i16 [[P:%.*]] to i32
// CHECK-NEXT: [[TMP1:%.*]] = call <4 x i1> @llvm.arm.mve.pred.i2v.v4i1(i32 [[TMP0]])
// CHECK-NEXT: [[TMP2:%.*]] = call <4 x i32> @llvm.arm.mve.vqdmlash.predicated.v4i32.v4i1(<4 x i32> [[M1:%.*]], <4 x i32> [[M2:%.*]], i32 [[ADD:%.*]], <4 x i1> [[TMP1]])
// CHECK-NEXT: ret <4 x i32> [[TMP2]]
//
int32x4_t test_vqdmlashq_m_n_s32(int32x4_t m1, int32x4_t m2, int32_t add, mve_pred16_t p) {
#ifdef POLYMORPHIC
  return vqdmlashq_m(m1, m2, add, p);
#else /* POLYMORPHIC */
  return vqdmlashq_m_n_s32(m1, m2, add, p);
#endif /* POLYMORPHIC */
}
// CHECK-LABEL: @test_vqrdmlahq_m_n_s8(
// CHECK-NEXT: entry:
// CHECK-NEXT: [[TMP0:%.*]] = zext i8 [[C:%.*]] to i32
// CHECK-NEXT: [[TMP1:%.*]] = zext i16 [[P:%.*]] to i32
// CHECK-NEXT: [[TMP2:%.*]] = call <16 x i1> @llvm.arm.mve.pred.i2v.v16i1(i32 [[TMP1]])
// CHECK-NEXT: [[TMP3:%.*]] = call <16 x i8> @llvm.arm.mve.vqrdmlah.predicated.v16i8.v16i1(<16 x i8> [[A:%.*]], <16 x i8> [[B:%.*]], i32 [[TMP0]], <16 x i1> [[TMP2]])
// CHECK-NEXT: ret <16 x i8> [[TMP3]]
//
int8x16_t test_vqrdmlahq_m_n_s8(int8x16_t a, int8x16_t b, int8_t c, mve_pred16_t p) {
#ifdef POLYMORPHIC
  return vqrdmlahq_m(a, b, c, p);
#else /* POLYMORPHIC */
  return vqrdmlahq_m_n_s8(a, b, c, p);
#endif /* POLYMORPHIC */
}
// CHECK-LABEL: @test_vqrdmlahq_m_n_s16(
// CHECK-NEXT: entry:
// CHECK-NEXT: [[TMP0:%.*]] = zext i16 [[C:%.*]] to i32
// CHECK-NEXT: [[TMP1:%.*]] = zext i16 [[P:%.*]] to i32
// CHECK-NEXT: [[TMP2:%.*]] = call <8 x i1> @llvm.arm.mve.pred.i2v.v8i1(i32 [[TMP1]])
// CHECK-NEXT: [[TMP3:%.*]] = call <8 x i16> @llvm.arm.mve.vqrdmlah.predicated.v8i16.v8i1(<8 x i16> [[A:%.*]], <8 x i16> [[B:%.*]], i32 [[TMP0]], <8 x i1> [[TMP2]])
// CHECK-NEXT: ret <8 x i16> [[TMP3]]
//
int16x8_t test_vqrdmlahq_m_n_s16(int16x8_t a, int16x8_t b, int16_t c, mve_pred16_t p) {
#ifdef POLYMORPHIC
  return vqrdmlahq_m(a, b, c, p);
#else /* POLYMORPHIC */
  return vqrdmlahq_m_n_s16(a, b, c, p);
#endif /* POLYMORPHIC */
}
// CHECK-LABEL: @test_vqrdmlahq_m_n_s32(
// CHECK-NEXT: entry:
// CHECK-NEXT: [[TMP0:%.*]] = zext i16 [[P:%.*]] to i32
// CHECK-NEXT: [[TMP1:%.*]] = call <4 x i1> @llvm.arm.mve.pred.i2v.v4i1(i32 [[TMP0]])
// CHECK-NEXT: [[TMP2:%.*]] = call <4 x i32> @llvm.arm.mve.vqrdmlah.predicated.v4i32.v4i1(<4 x i32> [[A:%.*]], <4 x i32> [[B:%.*]], i32 [[C:%.*]], <4 x i1> [[TMP1]])
// CHECK-NEXT: ret <4 x i32> [[TMP2]]
//
int32x4_t test_vqrdmlahq_m_n_s32(int32x4_t a, int32x4_t b, int32_t c, mve_pred16_t p) {
#ifdef POLYMORPHIC
  return vqrdmlahq_m(a, b, c, p);
#else /* POLYMORPHIC */
  return vqrdmlahq_m_n_s32(a, b, c, p);
#endif /* POLYMORPHIC */
}
// CHECK-LABEL: @test_vqrdmlashq_m_n_s8(
// CHECK-NEXT: entry:
// CHECK-NEXT: [[TMP0:%.*]] = zext i8 [[C:%.*]] to i32
// CHECK-NEXT: [[TMP1:%.*]] = zext i16 [[P:%.*]] to i32
// CHECK-NEXT: [[TMP2:%.*]] = call <16 x i1> @llvm.arm.mve.pred.i2v.v16i1(i32 [[TMP1]])
// CHECK-NEXT: [[TMP3:%.*]] = call <16 x i8> @llvm.arm.mve.vqrdmlash.predicated.v16i8.v16i1(<16 x i8> [[A:%.*]], <16 x i8> [[B:%.*]], i32 [[TMP0]], <16 x i1> [[TMP2]])
// CHECK-NEXT: ret <16 x i8> [[TMP3]]
//
int8x16_t test_vqrdmlashq_m_n_s8(int8x16_t a, int8x16_t b, int8_t c, mve_pred16_t p) {
#ifdef POLYMORPHIC
  return vqrdmlashq_m(a, b, c, p);
#else /* POLYMORPHIC */
  return vqrdmlashq_m_n_s8(a, b, c, p);
#endif /* POLYMORPHIC */
}
// CHECK-LABEL: @test_vqrdmlashq_m_n_s16(
// CHECK-NEXT: entry:
// CHECK-NEXT: [[TMP0:%.*]] = zext i16 [[C:%.*]] to i32
// CHECK-NEXT: [[TMP1:%.*]] = zext i16 [[P:%.*]] to i32
// CHECK-NEXT: [[TMP2:%.*]] = call <8 x i1> @llvm.arm.mve.pred.i2v.v8i1(i32 [[TMP1]])
// CHECK-NEXT: [[TMP3:%.*]] = call <8 x i16> @llvm.arm.mve.vqrdmlash.predicated.v8i16.v8i1(<8 x i16> [[A:%.*]], <8 x i16> [[B:%.*]], i32 [[TMP0]], <8 x i1> [[TMP2]])
// CHECK-NEXT: ret <8 x i16> [[TMP3]]
//
int16x8_t test_vqrdmlashq_m_n_s16(int16x8_t a, int16x8_t b, int16_t c, mve_pred16_t p) {
#ifdef POLYMORPHIC
  return vqrdmlashq_m(a, b, c, p);
#else /* POLYMORPHIC */
  return vqrdmlashq_m_n_s16(a, b, c, p);
#endif /* POLYMORPHIC */
}
// CHECK-LABEL: @test_vqrdmlashq_m_n_s32(
// CHECK-NEXT: entry:
// CHECK-NEXT: [[TMP0:%.*]] = zext i16 [[P:%.*]] to i32
// CHECK-NEXT: [[TMP1:%.*]] = call <4 x i1> @llvm.arm.mve.pred.i2v.v4i1(i32 [[TMP0]])
// CHECK-NEXT: [[TMP2:%.*]] = call <4 x i32> @llvm.arm.mve.vqrdmlash.predicated.v4i32.v4i1(<4 x i32> [[A:%.*]], <4 x i32> [[B:%.*]], i32 [[C:%.*]], <4 x i1> [[TMP1]])
// CHECK-NEXT: ret <4 x i32> [[TMP2]]
//
int32x4_t test_vqrdmlashq_m_n_s32(int32x4_t a, int32x4_t b, int32_t c, mve_pred16_t p) {
#ifdef POLYMORPHIC
  return vqrdmlashq_m(a, b, c, p);
#else /* POLYMORPHIC */
  return vqrdmlashq_m_n_s32(a, b, c, p);
#endif /* POLYMORPHIC */
}