The EVEX to VEX pass already assumes this is true under AVX512VL. We had special patterns to use zmm instructions if VLX and F16C weren't available.
Instead just make AVX512 imply F16C to make the EVEX to VEX behavior explicitly legal and remove the extra patterns.
All known CPUs with AVX512 have F16C, so this should be safe for now.
llvm-svn: 317521
Summary:
Print %subreg.<subregidxname> instead of just the subregister
index when printing immediate operands corresponding to subreg
indices in INSERT_SUBREG, EXTRACT_SUBREG, SUBREG_TO_REG and
REG_SEQUENCE.
Reviewers: qcolombet, MatzeB
Reviewed By: MatzeB
Subscribers: nhaehnle, javed.absar, llvm-commits
Differential Revision: https://reviews.llvm.org/D39696
llvm-svn: 317513
The backend assumes a pointer in the default address space is 32 bits, which is not
true for the new address space mapping and causes an assertion for unresolved
functions.
This patch fixes that.
Differential Revision: https://reviews.llvm.org/D39643
llvm-svn: 317476
Added TESTM and TESTNM to the list of instructions that already zero the unused upper bits
and do not need the redundant shift left and shift right instructions afterwards.
Added a pattern for TESTM and TESTNM in ISelLowering, so now icmp(ne, and(X,Y), 0) folds into TESTM
and icmp(eq, and(X,Y), 0) folds into TESTNM.
This commit is a preparation for lowering the test and testn X86 intrinsics to IR.
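For reference, a source-level sketch of the pattern being folded (a hedged illustration using the standard AVX-512 intrinsics, which are not part of this patch):

    #include <immintrin.h>

    // Per-element icmp(ne, and(X,Y), 0) corresponds to VPTESTMD (TESTM),
    // per-element icmp(eq, and(X,Y), 0) corresponds to VPTESTNMD (TESTNM).
    __mmask16 test_nonzero(__m512i x, __m512i y) {
      return _mm512_test_epi32_mask(x, y);   // (x & y) != 0 per element
    }
    __mmask16 test_zero(__m512i x, __m512i y) {
      return _mm512_testn_epi32_mask(x, y);  // (x & y) == 0 per element
    }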
Differential Revision: https://reviews.llvm.org/D38732
llvm-svn: 317465
Summary:
Try to lower a BUILD_VECTOR composed of extract-extract chains that can be
reasoned to be a permutation of a vector by indices in a non-constant vector.
We saw this pattern created by ISPC, which resorts to creating it due to the
requirement that shufflevector's mask operand be a *constant* vector.
I didn't check this, but we could possibly use this pattern for lowering the X86 permute
C-intrinsics instead of llvm.x86 intrinsics.
This change can be followed by more improvements:
1. Handle vectors with undef elements.
2. Utilize pshufb and zero-mask-blending to support more efficient
construction of vectors with constant-0 elements.
3. Use smaller-element vectors of the same width, and "interpolate" the indices,
when no native operation is available.
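To illustrate the pattern being targeted, here is a hedged sketch (hypothetical function, written with the Clang/GCC vector extension; whether the frontend emits exactly this extract/insert chain depends on unrolling):

    typedef int v8si __attribute__((vector_size(32)));

    // Every result lane is an extractelement from 'src' at a non-constant index,
    // reassembled with insertelement -- a BUILD_VECTOR of extract chains.
    // shufflevector cannot express this because its mask must be constant, so the
    // new lowering can select a variable permute (e.g. VPERMD on AVX2) instead of
    // scalarizing. Indices are assumed to be in [0, 7].
    v8si permute_by_indices(v8si src, v8si idx) {
      v8si r;
      for (int i = 0; i < 8; ++i)
        r[i] = src[idx[i]];
      return r;
    }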
Reviewers: RKSimon, craig.topper
Reviewed By: RKSimon
Subscribers: chandlerc, DavidKreitzer
Differential Revision: https://reviews.llvm.org/D39126
llvm-svn: 317463
This patch, together with a matching clang patch (https://reviews.llvm.org/D38683), implements the lowering of X86 broadcastm intrinsics to IR.
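For context, a hedged sketch of what broadcastm does at the source level (using the standard AVX-512CD intrinsic; the emitted IR is roughly a zero-extension of the mask followed by a splat):

    #include <immintrin.h>

    // VPBROADCASTMW2D: zero-extend the 16-bit mask register and replicate it
    // into every 32-bit element of the result.
    __m512i broadcast_mask(__mmask16 k) {
      return _mm512_broadcastmw_epi32(k);
    }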
Differential Revision: https://reviews.llvm.org/D38684
Change-Id: I709ac0b34641095397e994c8ff7e15d1315b3540
llvm-svn: 317458
Use feature names instead of CPU names.
A future commit will add avx512vl command lines to demonstrate missed use of EVEX instructions.
llvm-svn: 317451
Summary:
AVX512 added RCP14 and RSQRT14 instructions, which improve accuracy over the legacy RCP and RSQRT instructions, but not by enough to remove the need for a Newton-Raphson refinement.
Currently we use these new instructions for the legacy packed SSE intrinsics, but not the scalar intrinsics. We also use them for the fast-math optimization of division and reciprocal sqrt.
I think switching the legacy intrinsics may be surprising to the user, since it changes the answer based on which processor you're using regardless of any fast-math settings. It's also odd that we did something different between scalar and packed.
As far as the reciprocal estimation goes, I think it creates unnecessary deltas in our output behavior (and prevents EVEX->VEX). A little playing around with gcc and icc on godbolt suggests they don't change which instructions they use here.
This patch adds new X86ISD nodes for RCP14/RSQRT14 and uses those for the new intrinsics, leaving the old intrinsics to use the old instructions.
Going forward I think our focus should be on
-Supporting 512-bit vectors, which will have to use RCP14/RSQRT14.
-Using RSQRT28/RCP28 to remove the Newton-Raphson step on processors with AVX512ER
-Supporting double precision.
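For context, the Newton-Raphson refinement step mentioned above, written out as a hedged scalar sketch (one iteration applied to a hardware reciprocal square root estimate):

    // One Newton-Raphson iteration for y ~= 1/sqrt(a):
    //   y1 = y0 * (1.5 - 0.5 * a * y0 * y0)
    // RSQRT/RSQRT14 provide the initial estimate y0; the extra multiplies
    // recover the remaining accuracy.
    float refined_rsqrt(float a, float y0 /* from rsqrtss/vrsqrt14ss */) {
      return y0 * (1.5f - 0.5f * a * y0 * y0);
    }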
Reviewers: zvi, DavidKreitzer, RKSimon
Reviewed By: RKSimon
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D39583
llvm-svn: 317413
AMDGPULibFunc hardcodes address space values of the old address space mapping,
which causes invalid addrspacecast instructions and undefined functions in
APPSDK sample MonteCarloAsianDP.
This patch fixes that.
Differential Revision: https://reviews.llvm.org/D39616
llvm-svn: 317409
This allows masked operations to be used and allows the register allocator to use YMM16-31 if necessary.
As a follow-up I'll look into teaching EVEX->VEX how to turn this back into PERM2X128 if any of the additional features don't end up being used.
llvm-svn: 317403
The number of iterations was incorrectly determined for DP FP vector types
and the tests were insufficient to flag this issue.
Differential revision: https://reviews.llvm.org/D39507
llvm-svn: 317349
Summary:
The current LICM allows sinking an instruction only when it is exposed to exit
blocks through a trivially replaceable PHI of which all incoming values are the
same instruction. This change enhances LICM to sink a sinkable instruction
through non-trivially replaceable PHIs by splitting the predecessors of loop
exits.
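A hedged illustration of the kind of loop this enables (hypothetical function; whether the exit PHI ends up trivially replaceable depends on the exact CFG the frontend produces):

    // 'last' is recomputed on every iteration but only used after the loop, so it
    // is a candidate for sinking into the exit blocks. With the early exit there
    // are multiple exiting paths, so the value can reach its use through a PHI
    // whose incoming values are not all the same instruction; splitting the exit
    // predecessors lets LICM still sink the multiply.
    int sink_example(const int *a, int n, int limit) {
      int last = 0;
      for (int i = 0; i < n; ++i) {
        last = a[i] * 3;      // only used after the loop
        if (a[i] > limit)     // early exit
          break;
      }
      return last;
    }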
Reviewers: hfinkel, majnemer, davidxl, bmakam, mcrosier, danielcdh, efriedma, jtony
Reviewed By: efriedma
Subscribers: nemanjai, dberlin, llvm-commits
Differential Revision: https://reviews.llvm.org/D37163
llvm-svn: 317335
Change the ISel matching of 'ins', 'dins[mu]' from tablegen code to
C++ code. This resolves an issue where ISel would select 'dins' instead
of 'dinsm' when the instruction's size and position were individually in
range but their sum was out of range according to the ISA specification.
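A hedged example (hypothetical constants) of the failing case: the position and size are each individually valid for 'dins', but their sum exceeds 32, so 'dinsm' must be used:

    #include <stdint.h>

    // Insert a 20-bit field at bit position 20 of a 64-bit value. pos = 20 and
    // size = 20 are each in range for 'dins', but pos + size = 40 > 32, which per
    // the ISA requires 'dinsm'. Whether ISel matches this exact C form to the
    // dins family is a codegen detail, not guaranteed by this sketch.
    uint64_t insert_field(uint64_t dst, uint64_t field) {
      const uint64_t mask = ((1ULL << 20) - 1) << 20;
      return (dst & ~mask) | ((field << 20) & mask);
    }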
Reviewers: atanasyan
Differential Revision: https://reviews.llvm.org/D39117
llvm-svn: 317331
Similar to the existing code to lower to PACKSS, we can use PACKUS if the input vector's leading zero bits extend all the way to the packed/truncated value.
We have to account for pre-SSE4.1 targets not supporting PACKUSDW.
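A hedged source-level sketch (hypothetical function) of the kind of truncation this enables: the upper 16 bits of every 32-bit lane are known to be zero, so the vectorized pack can use PACKUSDW on SSE4.1 and later:

    #include <stdint.h>

    // The mask guarantees the upper 16 bits of each 32-bit element are zero,
    // so the truncation to 16 bits can be performed with PACKUSDW once the
    // loop is vectorized.
    void truncate_to_u16(const uint32_t *src, uint16_t *dst, int n) {
      for (int i = 0; i < n; ++i)
        dst[i] = (uint16_t)(src[i] & 0xFFFFu);
    }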
llvm-svn: 317315
The GlobalISel TableGen backend didn't check for predicates on the
source children. This caused it to generate code for ARM patterns such
as SMLABB or similar, but without properly checking for the sext_16_node
part of the operands. This in turn meant that we would select SMLABB
instead of MLA for simple sequences such as s32 + s32 * s32, which is
wrong (we want a MLA on the full operands, not just their bottom 16
bits).
This patch forces TableGen to skip patterns with predicates on the src
children, so it doesn't generate code for SMLABB and other similar ARM
instructions at all anymore. AArch64 and X86 are not affected.
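A hedged illustration (hypothetical function) of the miscompile this avoids: a plain s32 + s32 * s32 needs the full-width multiply (MLA), not a multiply of the bottom 16 bits (SMLABB):

    // Correct codegen must multiply all 32 bits of a and b before accumulating.
    // SMLABB would multiply only the sign-extended bottom halves, which is valid
    // only when both operands are known to be sign-extended from 16 bits.
    int multiply_accumulate(int acc, int a, int b) {
      return acc + a * b;   // should select MLA, not SMLABB
    }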
Differential Revision: https://reviews.llvm.org/D39554
llvm-svn: 317313
Ideally we should probably produce WinEH here as well, but until
then, we can use DWARF exceptions, without any further changes
required in clang, libunwind or libcxxabi.
Differential Revision: https://reviews.llvm.org/D39535
llvm-svn: 317304
This fixes http://llvm.org/PR32560. We were missing a description for the
half floating-point type and as a result were using the FPR32 mapping.
Because of the size mismatch the generic code was complaining that the
default mapping is not appropriate. Fix the mapping description so that
the default mapping can be properly applied.
llvm-svn: 317287
Reverting due to an undefined reference to `llvm::TargetPassConfig::ID' on
clang-ppc64le-linux-multistage.
This reverts commit eea333c33fa73ad225ef28607795984829f65688.
llvm-svn: 317213
Summary:
This is mostly a noop (most of the test diffs are renamed blocks).
There are a few temporary register renames (eax<->ecx) and a few blocks are
shuffled around.
See the discussion in PR33325 for more details.
Reviewers: spatel
Subscribers: mgorny
Differential Revision: https://reviews.llvm.org/D39456
llvm-svn: 317211
When splitting a large load into smaller legally-typed loads, the last load should be padded to reach the size of the previous one so a CONCAT_VECTORS node could reunite them again.
The code currently pads the last load to reach the size of the first load (instead of the previous one).
Differential Revision: https://reviews.llvm.org/D38495
Change-Id: Ib60b55ed26ce901fabf68108daf52683fbd5013f
llvm-svn: 317206
MSA stores and loads to the stack are more likely to require an
emergency GPR spill slot due to the smaller offsets available
with those instructions.
Handle this by overestimating the size of the stack: when determining whether
an emergency spill slot is required, compute the largest offset presuming that
all callee-saved registers are spilled and account for the incoming arguments.
Reviewers: atanasyan
Differential Revision: https://reviews.llvm.org/D39056
llvm-svn: 317204
As of today we only use .cfi_offset to specify the offset of a CSR, but
we never use .cfi_restore when the CSR is restored.
If we want to perform a more advanced type of shrink-wrapping, we need
to use .cfi_restore in order to switch the CFI state between blocks.
This patch only aims at adding support for the directive.
Differential Revision: https://reviews.llvm.org/D36114
llvm-svn: 317199
The generic DAG combiner will fold:
(shl (add x, c1), c2) -> (add (shl x, c2), c1 << c2)
(shl (or x, c1), c2) -> (or (shl x, c2), c1 << c2)
This can create constants which are too large to use as an immediate.
Many ALU operations are also able to perform the shl, so we can
unfold the transformation to prevent a mov imm instruction from being
generated.
Other patterns, such as b + ((a << 1) | 510), can also be simplified
in the same manner.
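A hedged example (hypothetical constants) of the problem: after the generic fold, the shifted constant no longer fits an AArch64 add immediate and forces an extra mov; unfolding avoids it:

    #include <stdint.h>

    // Generic combine: (x + 0xFFF) << 20  ->  (x << 20) + 0xFFF00000.
    // 0xFFF fits an AArch64 add immediate, but 0xFFF00000 does not and would
    // require a separate mov-immediate; unfolding keeps the add+shift sequence.
    uint64_t shifted_add(uint64_t x) {
      return (x + 0xFFFu) << 20;
    }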
Differential Revision: https://reviews.llvm.org/D38084
llvm-svn: 317197