Andrzej Warzynski | 65651f197a
[AArch64][SVE] Add DAG combine rules for gather loads and sext/zext
Summary:
These changes allow us to support sign-extending gather loads with the
existing intrinsics (i.e. @llvm.aarch64.sve.ld1.gather.*).
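As an illustration, here is a minimal IR sketch of the pattern this combine targets, modelled on the shape of the in-tree tests (the function name and the nxv4i16 variant of the intrinsic are illustrative assumptions): a gather load of half-words followed by a sext, which can now be selected as a single sign-extending gather such as ld1sh.

  ; Gather half-words, then sign-extend to 32 bits; the combine lets this
  ; pair be matched to one sign-extending gather load (e.g. ld1sh).
  define <vscale x 4 x i32> @gather_sext_sketch(<vscale x 4 x i1> %pg, i16* %base, <vscale x 4 x i32> %offsets) {
    %load = call <vscale x 4 x i16> @llvm.aarch64.sve.ld1.gather.sxtw.nxv4i16(<vscale x 4 x i1> %pg, i16* %base, <vscale x 4 x i32> %offsets)
    %ext = sext <vscale x 4 x i16> %load to <vscale x 4 x i32>
    ret <vscale x 4 x i32> %ext
  }

  declare <vscale x 4 x i16> @llvm.aarch64.sve.ld1.gather.sxtw.nxv4i16(<vscale x 4 x i1>, i16*, <vscale x 4 x i32>)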
Reviewers: sdesmalen, huntergr, kmclaughlin, efriedma, rengolin, rovka, dancgr, mgudim
Reviewed By: sdesmalen
Subscribers: tschuett, kristof.beyls, hiraditya, rkruppe, psnobl, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D70812
2019-12-11 12:56:18 +00:00
Sander de Smalen | 8bf31e28d7
[AArch64][SVE] Add intrinsics for gather loads with 32-bit offsets
This patch adds intrinsics for SVE gather loads for which the offsets are 32 bits wide and are:
* unscaled
  * @llvm.aarch64.sve.ld1.gather.sxtw
  * @llvm.aarch64.sve.ld1.gather.uxtw
* scaled (offsets become indices)
  * @llvm.aarch64.sve.ld1.gather.sxtw.index
  * @llvm.aarch64.sve.ld1.gather.uxtw.index
The offsets are either zero-extended (uxtw) or sign-extended (sxtw) to 64 bits.
These intrinsics map one-to-one to the corresponding SVE instructions (half-word examples below; an IR usage sketch follows the list):
* unscaled
  * ld1h { z0.s }, p0/z, [x0, z0.s, sxtw]
  * ld1h { z0.s }, p0/z, [x0, z0.s, uxtw]
* scaled
  * ld1h { z0.s }, p0/z, [x0, z0.s, sxtw #1]
  * ld1h { z0.s }, p0/z, [x0, z0.s, uxtw #1]
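For completeness, a minimal IR sketch of how such an intrinsic is used directly (the function name and the nxv4i32 variant are illustrative assumptions, following the shape of the in-tree tests). A word-sized gather with zero-extended, scaled 32-bit offsets is expected to select to ld1w { z0.s }, p0/z, [x0, z0.s, uxtw #2]:

  ; Word gather: each 32-bit offset is zero-extended (uxtw) and scaled by
  ; the element size (#2), i.e. address = base + (zext(offset) << 2).
  define <vscale x 4 x i32> @gather_uxtw_index_sketch(<vscale x 4 x i1> %pg, i32* %base, <vscale x 4 x i32> %indices) {
    %load = call <vscale x 4 x i32> @llvm.aarch64.sve.ld1.gather.uxtw.index.nxv4i32(<vscale x 4 x i1> %pg, i32* %base, <vscale x 4 x i32> %indices)
    ret <vscale x 4 x i32> %load
  }

  declare <vscale x 4 x i32> @llvm.aarch64.sve.ld1.gather.uxtw.index.nxv4i32(<vscale x 4 x i1>, i32*, <vscale x 4 x i32>)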
Committed on behalf of Andrzej Warzynski (andwar)
Reviewers: sdesmalen, kmclaughlin, eli.friedman, rengolin, rovka, huntergr, dancgr, mgudim, efriedma
Reviewed By: sdesmalen
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D70782
2019-12-03 14:48:29 +00:00