Summary:
Also make return calls terminator instructions so epilogues are
inserted before them rather than after them. Together, these changes
make WebAssembly's tail call optimization more stack-safe.
Reviewers: aheejin, dschuff
Subscribers: sbc100, jgravelle-google, hiraditya, sunfish, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D73943
If we widen the compare we might trigger a spurious exception from
the garbage data.
We have two choices here: explicitly force the upper bits to zero, or use a
legacy VEX vcmpps/pd instruction and convert the XMM/YMM result to a mask
register.
I've chosen to go with the second option. I'm not sure which is
really best. In some cases we could get rid of the zeroing since
the producing instruction probably already zeroed it. But we lose
the ability to fold a load. So which is best is dependent on
surrounding code.
Differential Revision: https://reviews.llvm.org/D74522
Summary:
Fixes a crash in the backend where optimizations produce calls to the
cbrt runtime functions. Fixes PR44227.
Reviewers: aheejin
Subscribers: dschuff, sbc100, jgravelle-google, hiraditya, sunfish, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D74259
This allows it to recognize more loads as being consecutive when the loads' addresses are complex at the start.
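A rough sketch of the kind of access pattern this helps with (the function below is made up purely for illustration):

  // Hypothetical example: two adjacent loads whose shared base address is a
  // non-trivial expression. p[0] and p[1] should still be recognized as
  // consecutive even though the address computation at the start is complex.
  int sumPair(int *base, long i, long j) {
    int *p = &base[i * 7 + j + 3];
    return p[0] + p[1];
  }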
Differential Revision: https://reviews.llvm.org/D74444
Also greatly improve i64 lowering. LegalizeIntegerTypes does the
correct narrowing if i64 isn't legal. Just work around this for
SelectionDAG by making i64 legal and splitting in the patterns.
If the target has FP64 but not FP16 then we have custom lowering for FP_EXTEND
and STRICT_FP_EXTEND with type f64. However, if the extend is from f32 to f64, the
current implementation will cause an infinite loop for STRICT_FP_EXTEND due to
emitting a merge_values of the original node which after replacement becomes a
merge_values of itself.
Fix this by not doing anything for f32 to f64 extend when we have FP64, though
for STRICT_FP_EXTEND we have to do the strict-to-nonstrict mutation as that
doesn't happen automatically for opcodes with custom lowering.
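A minimal reproducer sketch of the problematic path, assuming a target with FP64 but no FP16 and strict exception behavior requested (e.g. via clang's -ffp-exception-behavior=strict):

  // Sketch only: an f32 -> f64 extend that lowers to (STRICT_)FP_EXTEND f64.
  // Under strict FP semantics this is the node that previously looped forever.
  double widen(float x) {
    return static_cast<double>(x);
  }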
Differential Revision: https://reviews.llvm.org/D74559
Skip the loop over the CalleeSavedInfos in 'restoreCalleeSavedRegisters' when
the register is a CR field and we are not targeting 32-bit ELF. This is safe
because:
1) The helper function 'restoreCRs' returns if the target is not 32-bit ELF,
making all the code in the loop related to CR fields dead for every other
subtarget. This code is only called on ELF right now, but the patch
to extend it for AIX also needs to skip 'restoreCRs'.
2) The loop will not otherwise modify the iterator, so the iterator
manipulations at the bottom of the loop end up setting 'I' to its
current value.
This simplification allows us to remove one argument from 'restoreCRs'.
Also add a helper function to determine if a register is one of the
callee saved condition register fields.
Exploit the native VSX rounding instructions, x(v|s)r(d|s)pic, which do the
rounding using the current rounding mode.
According to the C standard library, rint may raise the INEXACT exception while
nearbyint won't.
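A small host-side sketch of the library contract being relied on here (not PowerPC-specific; strictly speaking FENV_ACCESS would be needed for fully defined behavior, and the volatile qualifiers just keep the calls from being folded away):

  #include <cfenv>
  #include <cmath>
  #include <cstdio>

  int main() {
    volatile double in = 1.5;
    std::feclearexcept(FE_ALL_EXCEPT);
    volatile double n = std::nearbyint(in);  // must not raise FE_INEXACT
    int afterNearbyint = std::fetestexcept(FE_INEXACT);
    volatile double r = std::rint(in);       // may raise FE_INEXACT
    int afterRint = std::fetestexcept(FE_INEXACT);
    std::printf("nearbyint raised inexact: %d, rint raised inexact: %d (%f %f)\n",
                afterNearbyint != 0, afterRint != 0, n, r);
    return 0;
  }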
Reviewed By: lkail
Differential Revision: https://reviews.llvm.org/D72685
In some cases a BTI landing pad is inserted even though a compatible instruction
was already there. Meta instructions do not count in this case, therefore skip
them when checking the first instructions in the function.
Differential Revision: https://reviews.llvm.org/D74492
Simon pointed out that this function is doing a bitcast, which can be
incorrect for big endian. That makes the lowering of VMOVN in MVE
wrong, but the function is shared between Neon and MVE so both can
be incorrect.
This attempts to fix things by using the newly added VECTOR_REG_CAST
instead of the BITCAST. As it may now be used on Neon, I've added the
relevant patterns for it there too. I've also added a quick dag combine
for it to remove them where possible.
Differential Revision: https://reviews.llvm.org/D74485
Currently, BPF does not support dynamic stack allocation.
For a program like below:
  extern void bar(int *);
  void foo(int n) {
    int a[n];
    bar(a);
  }
The current error message looks like:
  unimplemented operand
  UNREACHABLE executed at /.../llvm/lib/Target/BPF/BPFISelLowering.cpp:199!
Let us make the error message explicit so it is clear to the user what the
problem is. With this patch, the error message looks like:
  fatal error: error in backend: Unsupported dynamic stack allocation
  ...
Differential Revision: https://reviews.llvm.org/D74521
When SI_IF is inserted, it constrains the source register with a
register class, which was quite likely a G_ICMP. This was incorrectly
treating it as a scalar, and then applyMappingImpl would end up
producing invalid MIR since this was unexpected.
Also fix not using all VGPR sources for vcc outputs.
Summary:
This was a very odd API, where you had to pass a flag into a zext
function to say whether the extended bits really were zero or not. All
callers passed in a literal true or false.
I think it's much clearer to make the function name reflect the
operation being performed on the value we're tracking (rather than on
the KnownBits Zero and One fields), so zext means the value is being
zero extended and the new function anyext means the value is being extended
with unknown bits.
NFC.
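A hedged sketch of how the renamed API reads at a call site (signatures as described above; check them against the LLVM revision you are building against):

  #include "llvm/Support/KnownBits.h"

  using namespace llvm;

  // Extend an 8-bit tracked value to 32 bits both ways. Previously this was a
  // single zext(Width, /*ExtendedBitsAreKnownZero=*/true|false) call.
  void extendExamples(const KnownBits &Known8) {
    KnownBits Z = Known8.zext(32);   // the new high bits are known zero
    KnownBits A = Known8.anyext(32); // the new high bits are unknown
    (void)Z;
    (void)A;
  }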
Subscribers: hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D74482
When we have to widen to a 64-bit register, we have to emit a SUBREG_TO_REG.
Add a general-purpose widening helper which emits the correct SUBREG_TO_REG
instruction based on the desired size, and add a testcase.
Also remove some asserts which are technically incorrect in `emitTestBit`.
- p0 doesn't count as a scalar type, so we need to check `!Ty.isVector()`
instead
- Whenever we have an s1, the Size/Bit checks are too conservative, so just
remove them
Replace these asserts with less conservative ones where applicable.
Differential Revision: https://reviews.llvm.org/D74427
This has a really interesting side effect in that it improves some UMAX/UMIN reduction code which had redundant XOR(SHUFFLE(XOR(X,SIGNMASK)),SIGNMASK) patterns - getNegatibleCost recognises them as FNEG(SHUFFLE(FNEG(X))). We have a lot of FNEG patterns bitcast to the integer domain for XOR sign-bit twiddling, which is similar to what we do to allow UMAX/UMIN to be lowered using SMAX/SMIN.
Differential Revision: https://reviews.llvm.org/D74231
An option is added for PowerPC to disable use of non-volatile CR
register fields and avoid CR spilling in the prologue.
Differential Revision: https://reviews.llvm.org/D69835
Added support for the intrinsics llvm.ppc.dcbfl and llvm.ppc.dcbflp.
These will be used for emitting the cache control instructions dcbfl and dcbflp,
which are actually mnemonics for using the dcbf instruction with different
immediate arguments:
  dcbfl ra, rb  -> dcbf ra, rb, 1
  dcbflp ra, rb -> dcbf ra, rb, 3
Differential Revision: https://reviews.llvm.org/D68411
Load extra bits if suitably aligned. This allows using widened
3-vector loads on SI, and fixes legalization for <9 x s32> (which LSV
apparently forms frequently on lowered kernel argument lists).
Fix incorrectly treating these as legal on SI. This should emit a
64-bit store and a 32-bit store.
I think all of the load and store rules are just about complete, but
due for a rewrite.
The isNegatibleForFree/getNegatedExpression methods currently rely on a raw char value to indicate whether a negation is beneficial or not.
This patch replaces the char return value with a NegatibleCost enum to more
clearly demonstrate what is implied.
It also renames isNegatibleForFree to getNegatibleCost to more accurately
reflect what's going on.
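A hedged sketch of the shape of the new return type (illustrative only; the real enumerators live alongside TargetLowering and may be named slightly differently):

  // A tri-state cost replaces the old raw char return value.
  enum class NegatibleCost {
    Cheaper,   // negated form is cheaper than the original expression
    Neutral,   // negated form costs about the same
    Expensive  // negated form is more expensive
  };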
Differential Revision: https://reviews.llvm.org/D74221
This patch enables the debug entry values feature.
- Remove the (CC1) experimental -femit-debug-entry-values option
- Enable it for x86, arm and aarch64 targets
- Resolve the test failures
- Leave the llc experimental option for targets that do not
support the CallSiteInfo yet
Differential Revision: https://reviews.llvm.org/D73534
Summary:
Consider:
%r = call i32 @llvm.amdgcn.writelane(i32 0, i32 1, i32 2)
This produces a value that is 0 on lane 1, and 2 everywhere else; i.e.,
it is divergent.
Reported-by: Marek Olsak <Marek.Olsak@amd.com>
Reviewers: arsenm, foad, mareko
Subscribers: kzhuravl, jvesely, wdng, yaxunl, dstuttard, tpr, t-tye, hiraditya, kerbowa, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D74400
These should require AVX512VL not AVX512F. The legacy VEX patterns
will match first unless AVX512VL is enabled so this doesn't cause
a functional issue.
This adds a strict version of FP16_TO_FP and FP_TO_FP16 and uses
them to implement soft promotion for the half type. This is
enough to provide basic support for __fp16 with strictfp.
Add the necessary X86 support to use VCVTPS2PH/VCVTPH2PS when F16C
is enabled.
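A small sketch of the kind of source this enables, assuming a target where __fp16 is a storage-only type (e.g. x86) and strict exception semantics are requested (e.g. -ffp-exception-behavior=strict):

  // __fp16 arithmetic is soft-promoted: operands are extended to float
  // (FP16_TO_FP), the multiply happens in float, and the result is truncated
  // back (FP_TO_FP16). With F16C these conversions map to VCVTPH2PS/VCVTPS2PH.
  __fp16 scale(__fp16 a, __fp16 b) {
    return a * b;
  }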
This was creating a select on true/false values, and then comparing
that later. This produced more work for later combines, which can be
avoided by just using the boolean values. This was copied from the
original DAG expansion, which also has the same problem. This doesn't
have an observable change using SelectionDAG, but since GlobalISel is
missing these optimizations, the final code was noticeably longer.
These have nicer expansions implemented in the DAG. Ideally we would
either directly implement all of these special expansions, or stop
expanding division in the IR.
This is apparently worse than 1-byte alignment. This does not attempt
to decompose 2-byte aligned wide stores, but will stop trying to
produce them.
Also fix bug in LoadStoreVectorizer which was decreasing the alignment
and vectorizing stack accesses. It was assuming a stack object was an
alloca that could have its base alignment changed, which is not true
if the pointer is derived from a function argument.
Since natural fdiv lowering is now more conservative even with
denormals disabled, we get a slower expansion from just a plain
1.0/fdiv. Directly emit the rcp intrinsic when using it to implement
integer division to avoid a pointlessly complex sequence.
We aren't doing a good job of optimizing AVX512 outside of this code, so remove the AVX512 bail-out and replace it with a FIXME. This at least gets us the AVX2 codegen.
Differential Revision: https://reviews.llvm.org/D74431