Forked from OSchip/llvm-project, commit b70026c43c
X86 at least is able to use movmsk or kmov to move the mask to the scalar domain. Then we can just use test instructions to test individual bits. This is more efficient than extracting each mask element individually.

I special-cased v1i1 to use the previous behavior. This avoids poor type legalization of a bitcast of v1i1 to i1.

I've skipped expandload/compressstore, as I think we need to handle constant masks for those better first.

Many tests end up with duplicate test instructions due to tail duplication in the branch folding pass. But the same thing happens when constructing similar code in C, so it's not unique to the scalarization.

Not sure if this lowering code will also be good for other targets, but we're only testing X86 today.

Differential Revision: https://reviews.llvm.org/D65319

llvm-svn: 367489
Changed test files:
- expand-masked-compressstore.ll
- expand-masked-expandload.ll
- expand-masked-gather.ll
- expand-masked-load.ll
- expand-masked-store.ll