[ARM] Code-generation infrastructure for MVE.

This provides the low-level support to start using MVE vector types in
LLVM IR: loading and storing them, passing them to __asm__ statements
containing hand-written MVE vector instructions, and, *if* you have the
hard-float ABI turned on, using them as function parameters.

(In the soft-float ABI, vector types are passed in integer registers,
and combining all those 32-bit integers into a q-reg requires support
for selection DAG nodes like insert_vector_elt and build_vector, which
aren't implemented yet for MVE. In fact, I've also had to add
`arm_aapcs_vfpcc` to a couple of existing tests to avoid that
problem.)
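For illustration, here is a minimal sketch of the kind of IR this enables
(the function is hypothetical, not one of the tests in this patch; MVE
itself would be enabled with something like `-mtriple=thumbv8.1m.main
-mattr=+mve`):

    define arm_aapcs_vfpcc <4 x i32> @mve_passthrough(<4 x i32> %v, <4 x i32>* %p) {
      ; arm_aapcs_vfpcc (the hard-float ABI) lets %v arrive in a q register;
      ; the 128-bit store and load below go through the new MVE patterns.
      store <4 x i32> %v, <4 x i32>* %p, align 8
      %r = load <4 x i32>, <4 x i32>* %p, align 8
      ret <4 x i32> %r
    }

Under the soft-float ABI the same value would arrive in four integer
registers and need build_vector support to reassemble, which is exactly
the not-yet-implemented case described above.
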
Specifically, this commit adds support for:
* spills, reloads and register moves for MVE vector registers
* ditto for the VPT predication mask that lives in VPR.P0
* making all the MVE vector types legal in ISel, and providing
  selection DAG patterns for BITCAST, LOAD and STORE (see the sketch
  after this list)
* making loads and stores of scalar FP types conditional on
  `hasFPRegs()` rather than `hasVFP2Base()`. As a result, a few
  existing tests needed their llc command lines updated to use
  `-mattr=-fpregs` as their method of turning off all hardware FP
  support.
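As a rough sketch of what the new ISel coverage means in practice (again
a hypothetical function, not one of the tests touched here): with MVE
enabled, a 128-bit LOAD, a BITCAST between two legal MVE vector types,
and a STORE can all be selected directly.

    define arm_aapcs_vfpcc void @q_bitcast(<8 x i16>* %src, <4 x i32>* %dst) {
      %v = load <8 x i16>, <8 x i16>* %src, align 8   ; LOAD of a legal 128-bit MVE type
      %c = bitcast <8 x i16> %v to <4 x i32>          ; BITCAST between two legal MVE types
      store <4 x i32> %c, <4 x i32>* %dst, align 8    ; STORE of the reinterpreted value
      ret void
    }
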
Reviewers: dmgreen, samparker, SjoerdMeijer
Subscribers: javed.absar, kristof.beyls, hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D60708
llvm-svn: 364329

; RUN: llc < %s -mtriple=armv7-none-gnueabi -mattr=-neon,-fpregs | FileCheck --check-prefix=NONEON-NOVFP %s
; RUN: llc < %s -mtriple=armv7-none-gnueabi -mattr=-neon | FileCheck --check-prefix=NONEON %s
; RUN: llc < %s -mtriple=armv7-none-gnueabi -mattr=-fpregs | FileCheck --check-prefix=NOVFP %s
; RUN: llc < %s -mtriple=armv7-none-gnueabi -mattr=-neon,+vfp2 | FileCheck --check-prefix=NONEON-VFP %s

; Check that no NEON instructions are selected when the feature is disabled.
define void @neonop(i64* nocapture readonly %a, i64* nocapture %b) #0 {
%1 = bitcast i64* %a to <2 x i64>*
%wide.load = load <2 x i64>, <2 x i64>* %1, align 8
; NONEON-NOVFP-NOT: vld1.64
; NONEON-NOT: vld1.64
%add = add <2 x i64> %wide.load, %wide.load
; NONEON-NOVFP-NOT: vadd.i64
; NONEON-NOT: vadd.i64
%2 = bitcast i64* %b to <2 x i64>*
store <2 x i64> %add, <2 x i64>* %2, align 8
; NONEON-NOVFP-NOT: vst1.64
; NONEON-NOT: vst1.64
ret void
}

; Likewise with VFP instructions.
define double @fpmult(double %a, double %b) {
%res = fmul double %a, %b
; NONEON-NOVFP-NOT: vmov
; NONEON-NOVFP-NOT: vmul.f64
; NOVFP-NOT: vmov
; NOVFP-NOT: vmul.f64
; NONEON-VFP: vmov
; NONEON-VFP: vmul.f64
ret double %res
}