We were trying to guess at the original IR type for image intrinsics
after legalization to figure out if they were d16, but this didn't
work. Explicitly track if this is a d16 operation or not in the
opcode, as is done for the buffer intrinsics.
The OpenCL library is using f32 image writes with a dmask of 15 for
some reason, and this was incorrectly switching them to use d16. Fixes
image failures in the OpenCL conformance test. The equivalent dmask
for loads doesn't even select in either selector.
If we know the source is a valid object, we do not need to insert a
null check. This still misses a lot of opportunities, since metadata and
attributes are not tracked in codegen.
As discussed in https://github.com/llvm/llvm-project/issues/53020 / https://reviews.llvm.org/D116692,
SCEV is forbidden from reasoning about 'backedge taken count'
if the branch condition is a poison-safe logical operation,
which is conservatively correct, but is severely limiting.
Instead, we should have a way to express those
poison blocking properties in SCEV expressions.
The proposed semantics is:
```
Sequential/in-order min/max SCEV expressions are non-commutative variants
of commutative min/max SCEV expressions. If none of their operands
are poison, then they are functionally equivalent; otherwise,
if the operand that represents the saturation point* of the given expression
comes before the first poison operand, then the whole expression is not poison,
but is said saturation point.
```
* saturation point - the maximal/minimal possible integer value for the given type. For example, the saturation point of umin is 0, so umin_seq(0, poison) is 0, while umin_seq(poison, 0) is poison.
The lowering is straightforward:
```
compare each operand to the saturation point,
perform a sequential in-order logical-or (poison-safe!) reduction
over those checks; if that reduction returned true then return the
saturation point, else return the naive min/max reduction over the operands
```
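As a rough sketch of that expansion for a two-operand umin_seq (hand-written IR, not lifted from the patch; for umin the saturation point is 0, and per the note below the last operand needs no check):
```
declare i32 @llvm.umin.i32(i32, i32)

define i32 @umin_seq(i32 %a, i32 %b) {
  %sat = icmp eq i32 %a, 0                          ; compare the (non-last) operand to the saturation point
  %naive = call i32 @llvm.umin.i32(i32 %a, i32 %b)  ; naive min reduction over the operands
  %res = select i1 %sat, i32 0, i32 %naive          ; saturate before %b (possibly poison) can be chosen
  ret i32 %res
}
```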
https://alive2.llvm.org/ce/z/Q7jxvH (2 ops)
https://alive2.llvm.org/ce/z/QCRrhk (3 ops)
Note that we don't need to check the last operand: https://alive2.llvm.org/ce/z/abvHQS
Note that this is not commutative: https://alive2.llvm.org/ce/z/FK9e97
That allows us to handle the patterns in question.
Reviewed By: nikic, reames
Differential Revision: https://reviews.llvm.org/D116766
(Split from the original patch to separate the non-NFC part and add coverage. I typoed when adding the new test, so this change includes the typo fix to let libfunc recognize the signature. Didn't figure it was worth another separate commit.)
Differential Revision: https://reviews.llvm.org/D116851 (part 2 of 2)
There are a few places where the alignment argument for AlignedAllocLike functions was previously hardcoded. This patch adds a getAllocAlignment function, and a change to the MemoryBuiltin table, to allow alignment arguments to be found generically.
This will shortly allow alignment inference on calls to operator new with align_val params, and an extension to Attributor's HeapToStack. The former will follow shortly - I split Bryce's patch for the purpose of having the large change be NFC. The latter will be reviewed separately.
Differential Revision: https://reviews.llvm.org/D116851 (part 1 of 2)
This change removes a previous restriction where we had to prove the allocation performed by aligned_alloc was non-zero in size before using the align parameter to annotate the result. I believe this was conservatism around the C11 specification of this routine, which allowed UB when the size was not a multiple of the alignment, but if so, it was a partial one at best. (e.g. align 32 with size 16 was equally UB, but was not restricted.) The spec has since been clarified to require a nullptr return, not UB.
A nullptr - the documented return of this function on failure in all cases, now that the UB mentioned above has been removed - is trivially aligned for any power of two. This isn't totally new behavior even for this transform: we'd previously annotate potentially failing allocs (e.g. huge sizes), meaning we were putting align on potentially null pointers anyway. This change simply does the same for all failure modes.
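For illustration, here's a hand-written sketch (not one of the patch's tests) of the annotation this now permits, with a size that is not a multiple of the alignment:
```
declare i8* @aligned_alloc(i64, i64)

define i8* @f() {
  ; the result is either a 32-byte-aligned allocation or nullptr,
  ; and nullptr is trivially aligned, so align 32 holds either way
  %p = call align 32 i8* @aligned_alloc(i64 32, i64 16)
  ret i8* %p
}
```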
If we look at potentially interfering accesses we need to ensure the
"IsExact" flag is set appropriately. Accesses that have an "unknown"
size or offset cannot be exact matches, and we previously failed to flag that.
Error and test reported by Serguei N. Dmitriev.
All paths (that actually do anything) require a successful dyn_cast<CallBase> - so just early-out if the cast fails.
Fixes a static analyzer nullptr dereference warning.
The code in VPWidenCanonicalIVRecipe::execute only worked for fixed-width
vectors due to the way we generate the values per lane. This patch changes
the code to use a combination of vector splats and step vectors to get
the same result. This then works for both fixed-width and scalable vectors.
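A sketch of the new expansion for a <vscale x 4 x i64> canonical IV (hand-written, illustrative IR; %iv stands for the scalar canonical IV):
```
declare <vscale x 4 x i64> @llvm.experimental.stepvector.nxv4i64()

; splat the scalar IV, then add a step vector so lane i holds %iv + i
%iv.ins = insertelement <vscale x 4 x i64> poison, i64 %iv, i32 0
%iv.splat = shufflevector <vscale x 4 x i64> %iv.ins, <vscale x 4 x i64> poison, <vscale x 4 x i32> zeroinitializer
%step = call <vscale x 4 x i64> @llvm.experimental.stepvector.nxv4i64()
%vec.iv = add <vscale x 4 x i64> %iv.splat, %step
```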
Tests that exercise this code path for scalable vectors have been added here:
Transforms/LoopVectorize/AArch64/sve-tail-folding.ll
Differential Revision: https://reviews.llvm.org/D113180
SROA has 3 data-structures where it stores sets of instructions that should
be deleted:
- DeadUsers -> instructions that are UB or have no users
- DeadOperands -> instructions that are UB or operands of useless phis
- DeadInsts -> "dead" instructions, including loads of uninitialized memory
with users
The first 2 sets can be RAUW'd with poison instead of undef. This is a no-brainer,
as UB can be replaced with poison, and for instructions with no users RAUW is a
NOP.
The 3rd case cannot currently be replaced with poison because the set mixes in
the loads of uninitialized memory. I leave that alone for now.
Another case where we can use poison is in the construction of vectors from
multiple loads. The base vector for the first insertelement is now poison, since
its value doesn't matter: it is fully overwritten by the inserts.
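Illustrative IR for that case (hand-written, not from the tests; %ld0 and %ld1 stand for the loaded values):
```
%v0 = insertelement <2 x i32> poison, i32 %ld0, i32 0   ; poison base, lane 0 overwritten
%v1 = insertelement <2 x i32> %v0, i32 %ld1, i32 1      ; lane 1 overwritten; no lane of the base survives
```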
Differential Revision: https://reviews.llvm.org/D116887
We need to explicitly visit a number of types, as these are no
longer reachable through the pointer type if opaque pointers are
enabled. This is similar to ValueEnumerator changes that have
been done previously.
Use G_MERGE_VALUES and G_UNMERGE_VALUES on vector elements instead of
G_EXTRACT and G_INSERT when doing custom legalization for
G_EXTRACT_VECTOR_ELT and G_INSERT_VECTOR_ELT.
With this approach, the legalization artifact combiner gets direct access
to all vector elements.
Differential Revision: https://reviews.llvm.org/D116115
9345ab3a45 updated generateOverflowCheck to skip creating checks that
always evaluate to false. This in turn means that we only need to
create TruncTripCount if it is actually used.
Sink the TruncTripCount creation into ComputeEndCheck, so it is only
created when there's an actual check.
This commit caused performance regressions due to differences in the
expected code during loop flattening. Reverting it until the fix is
ready, which hopefully won't take too long.
This reverts commit 86825fc2fb.
This patch fixes up an issue with InnerLoopVectorizer::getOrCreateVectorTripCount
whereby we weren't correctly generating the runtime trip count
for scalable vectors when tail-folding.
It also removes some asserts in the tail-folding path for cases when
the VF is not scalable.
In this patch I have only permitted tail-folding to be enabled
explicitly for scalable vectors when the user has specified one
of the following flags:
-prefer-predicate-over-epilogue=predicate-dont-vectorize
-prefer-predicate-over-epilogue=predicate-else-scalar-epilogue
For now it's best not to enable tail-folding with scalable vectors for
low trip counts or when optimising for code size, since there has been
no analysis on whether this is worth it.
Various tests have been added here:
Transforms/LoopVectorize/AArch64/sve-tail-folding.ll
Transforms/LoopVectorize/AArch64/sve-tail-folding-forced.ll
The tests cannot be target independent because they require masked
load/store support, i.e. TTI.isLegalMaskedLoad and TTI.isLegalMaskedStore
need to return true.
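For reference, a hand-written sketch (not taken from the added tests) of the masked operations a tail-folded scalable-vector loop is built around:
```
declare <vscale x 4 x i1> @llvm.get.active.lane.mask.nxv4i1.i64(i64, i64)
declare <vscale x 4 x i32> @llvm.masked.load.nxv4i32.p0nxv4i32(<vscale x 4 x i32>*, i32 immarg, <vscale x 4 x i1>, <vscale x 4 x i32>)

; lanes at or past the trip count %n are disabled, so no scalar epilogue is needed
%mask = call <vscale x 4 x i1> @llvm.get.active.lane.mask.nxv4i1.i64(i64 %index, i64 %n)
%wide = call <vscale x 4 x i32> @llvm.masked.load.nxv4i32.p0nxv4i32(<vscale x 4 x i32>* %ptr, i32 4, <vscale x 4 x i1> %mask, <vscale x 4 x i32> poison)
```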
Differential Revision: https://reviews.llvm.org/D113003
Codegen of the added testcase before this patch:
```
ptrue p0.s
cmpgt p1.s, p0/z, z0.s, z1.s
cmpge p2.s, p0/z, z2.s, z1.s
and p0.b, p0/z, p1.b, p2.b
ret
```
Patterns originally authored by Will Lovett.
Reviewed By: david-arm
Differential Revision: https://reviews.llvm.org/D116749
9345ab3a45 updated generateOverflowCheck to skip creating checks that
always evaluate to false. This in turn means that we only need to
compute |Step| * Trip count if the result of the multiplication is
actually used.
Sink the multiplication into ComputeEndCheck, so it is only created
when there's an actual check.
We currently have two similar implementations of this concept:
isNoAliasCall() only checks for the noalias return attribute.
isNoAliasFn() also checks for allocation functions.
We should switch to only checking the attribute. SLC is responsible
for inferring the noalias return attribute for non-new allocation
functions (with a missing case fixed in
348bc76e35).
For new, clang is responsible for setting the attribute,
if -fno-assume-sane-operator-new is not passed.
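For reference, the attribute in question, as it appears on a known allocation function (illustrative):
```
declare noalias i8* @malloc(i64)
```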
Differential Revision: https://reviews.llvm.org/D116800
runFinalizeActions takes an AllocActions vector and attempts to run its finalize
actions. If any finalize action fails then all paired dealloc actions up to the
failing pair are run, and the error(s) returned. If all finalize actions succeed
then a vector containing the dealloc actions is returned.
runDeallocActions takes a vector<WrapperFunctionCall> containing dealloc action
calls and runs them all, returning any error(s).
These helpers are intended to simplify the implementation of
JITLinkMemoryManager::InFlightAlloc::finalize and
JITLinkMemoryManager::deallocate overrides by taking care of execution (and
potential roll-back) of allocation actions.
This can be generalized to (srl (and X, C2), C) ->
(srli (slli X, XLen-C3), (XLen-C3) + C), where C2 is a mask with
C3 trailing ones. For example, with XLen=64, C2=0xff (so C3=8) and C=3,
(srl (and X, 0xff), 3) becomes (srli (slli X, 56), 59).
This can avoid constant materialization for C2. This is beneficial
even when C2 can be selected to ANDI because the SLLI can become
C.SLLI, but C.ANDI cannot cover all the immediates of ANDI.
This also enables CSE in some cases of i8 sdiv by constant codegen.
This patch fixes the scheduling of FP load instructions with pre/post-increment by adding WriteAdr for the address operand.
Reviewed By: dmgreen
Differential Revision: https://reviews.llvm.org/D116361
OS Laboratory. Huawei Russian Research Institute. Saint-Petersburg
Lower global symbols such as call/external symbols.
Lower other leaf DAG nodes such as frame address/block address/jumptable/vastart.
Normally, some leaf symbols need to reside in the constant pool, as the ABI prefers, and are
addressed by lrw or jsri instructions.
Every symbol in the constant pool is lowered as one entry in the target constant pool. The
entry has a different type corresponding to each kind of leaf node, such as blockaddress,
jumptable, or global value.
Similar for (sra (sext_inreg X, i8), C).
With Zbb, sext_inreg of i8 and i16 is legal, using sext.b and sext.h.
This transform makes the Zbb codegen the same as without Zbb. The
shifts are more compressible. This also exposes an opportunity for
CSE with another slli in the i16 sdiv by constant codegen.
Summary: This patch adds support to yaml2obj for parsing auxiliary
symbols for XCOFF. Since the test cases of this patch are
interdependent with D113825 ([llvm-readobj][XCOFF] dump
auxiliary symbols), test cases of this patch will be committed
after D113825 is committed.
Reviewed By: jhenderson, DiggerLin
Differential Revision: https://reviews.llvm.org/D113552
These nodes should saturate to their saturating VT. We can use this
information to know the bits past the VT are all zeros or all sign bits.
I think we might only have test coverage for the unsigned case. I'll
verify and add tests.
Reviewed By: nikic
Differential Revision: https://reviews.llvm.org/D116870
This diff renames emitCalleeSavedFrameMoves to avoid conflicts with
non-virtual methods of derived classes having the same name but different semantics.
E.g. the class AArch64FrameLowering used to have (non-virtual) "emitCalleeSavedFrameMoves"
but it started to override TargetFrameLowering::emitCalleeSavedFrameMoves after
https://github.com/llvm/llvm-project/commit/c3e6555616 though its usage and semantics didn't change.
P.S. for x86 there was no conflict because the signature of
non-virtual X86FrameLowering::emitCalleeSavedFrameMoves is different.
Test plan: make check-all
Differential revision: https://reviews.llvm.org/D114140
There is no need to sort inserted instructions by dominance, as the
deletion loop still requires RAUW with undef before deleting. Removing
instructions in reverse insertion order should still ensure that the
number of uselist updates is kept to a minimum.
9345ab3a45 updated generateOverflowCheck to skip creating checks that
always evaluate to false. This in turn means that we only need to check
for overflows if the result of the multiplication is actually used.
Sink the Or for the overflow check into ComputeEndCheck, so it is only
created when there's an actual check.
This is a suggested follow-up to D116765.
This removes a clear of the register operand, so it is better
for code size, but it does potentially create a false register
dependency on surrounding code. If that is a problem, it should
be solvable using dependency-breaking code that is used for
other instructions.
Differential Revision: https://reviews.llvm.org/D116804
If we have multiple references into a map, we need to ensure the ones
created late do not invalidate the ones created early. To do that, we
need to make sure that all but the first do not modify the map, hence
for them the keys have to be present already.
Fixes #52875.
The goal is to remove the use of isOpNewLike. I looked at a couple of approaches to this, and this turned out to be the cheapest one. Just letting deref_or_null be generated causes a bunch of test diffs, and I couldn't convince myself there wasn't a real regression somewhere. A generic instcombine to convert deref_or_null + nonnull to deref is annoyingly complicated, since you have to mix facts from the call site and the declaration while manipulating only existing call site attributes. It just wasn't worth the code complexity.
Note that the change in new-delete-itanium.ll is a real regression. If you have a callsite which overrides the builtin status of a nobuiltin declaration, *and* you don't put the appropriate attributes on that callsite, you may lose the deref fact. I decided this didn't matter; if anyone disagrees, you can add this case to the generic non-null inference.
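For reference, the inference that the rejected generic instcombine would have performed amounts to this (illustrative; @get is a made-up function, and this is not part of this change):
```
; nonnull plus dereferenceable_or_null(8) together imply dereferenceable(8)
declare nonnull dereferenceable_or_null(8) i8* @get()
```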