byval arguments should mostly get the same treatment as noalias
arguments in alias analysis. This was not the case for the
isIdentifiedFunctionLocal() function.
Marking byval arguments as identified function local means that
they cannot alias with other arguments, which I believe is correct.
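For illustration, a minimal IR sketch (function and value names invented):

  define void @f(i32* byval(i32) %a, i32* %b) {
    ; with this change %a is an identified function local, so the two
    ; stores below can be reported as NoAlias
    store i32 1, i32* %a
    store i32 2, i32* %b
    ret void
  }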
Differential Revision: https://reviews.llvm.org/D93602
Temporarily revert commit 8b1c4e310c.
After 8b1c4e310c the compile-time for `MultiSource/Benchmarks/MiBench/consumer-lame`
dramatically increases with -O3 & LTO, causing issues for builders with
that configuration.
I filed PR48553 with a smallish reproducer that shows a 10-100x compile
time increase.
Adds cost model support for the new llvm.experimental.vector.{extract,insert}
intrinsics, using the existing getExtractSubvectorOverhead and
getInsertSubvectorOverhead functions for shuffles.
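For reference, the intrinsics in question look like this (a sketch; the
concrete type manglings are only examples):

  %sub = call <4 x i32> @llvm.experimental.vector.extract.v4i32.nxv4i32(<vscale x 4 x i32> %vec, i64 0)
  %new = call <vscale x 4 x i32> @llvm.experimental.vector.insert.nxv4i32.v4i32(<vscale x 4 x i32> %vec, <4 x i32> %sub, i64 0)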
Previously this case would hit an assertion failure.
Differential Revision: https://reviews.llvm.org/D93043
This patch replaces FixedVectorType by VectorType in getIntrinsicInstrCost
in BasicTTIImpl.h. It re-arranges the code so that the scalable-type
check returns early, and adds tests for scalable types.
Depends on D91532
Differential Revision: https://reviews.llvm.org/D92094
BasicAA currently handles cases like Scale*V0 + (-Scale)*V1 where
V0 != V1, but does not handle the simpler case of Scale*V with
V != 0. Add it based on an isKnownNonZero() call.
I'm not passing a context instruction for now, because the existing
approach of always using GEP1 for context could result in symmetry
issues.
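A sketch of the kind of case this enables (names invented):

  %i = or i64 %x, 1                              ; known non-zero
  %q = getelementptr inbounds i8, i8* %p, i64 %i
  ; the offset between %p and %q is 1*%i with %i != 0, so byte-sized
  ; accesses to %p and %q cannot alias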
Differential Revision: https://reviews.llvm.org/D93162
This adds some basic MVE masked load/store costs, notably changing the
cost of legal loads/stores to the MVECostFactor and the cost of
scalarized instructions to 8*NumElts.
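For example, a legal masked load such as (a sketch)

  %v = call <4 x i32> @llvm.masked.load.v4i32.p0v4i32(<4 x i32>* %p, i32 4, <4 x i1> %m, <4 x i32> undef)

is costed at the MVECostFactor, while one on an illegal type that has to
be scalarized is costed at 8*NumElts.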
Differential Revision: https://reviews.llvm.org/D86538
If we have two unknown sizes and one GEP operand and one non-GEP
operand, then we currently simply return MayAlias. The comment says
we can't do anything useful ... but we can! We can still check that
the underlying objects are different (and do so for the GEP-GEP case).
To reduce the compile-time impact, the patch a) performs the check
early, before doing the relatively expensive GEP decomposition that
would not be used, and b) skips the check if the other operand is a phi or
select. In that case, the phi/select will already recurse, so this
would just do two slightly different recursive walks that arrive at
the same roots.
Compile-time is still a bit of a mixed bag: https://llvm-compile-time-tracker.com/compare.php?from=624af932a808b363a888139beca49f57313d9a3b&to=845356e14adbe651a553ed11318ddb5e79a24bcd&stat=instructions
On average this is a small improvement, but sqlite with ThinLTO has
a 0.5% regression (lencod has a 1% improvement).
The BasicAA test case checks this by using two memsets with unknown
size. However, the more interesting case where this is useful is
the LoopVectorize test case, as analysis of accesses in loops tends
to always use unknown sizes.
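A sketch of the GEP vs. non-GEP case with unknown sizes (names invented):

  declare void @llvm.memset.p0i8.i64(i8*, i8, i64, i1)

  define void @f(i8* %q, i64 %n) {
    %a = alloca [8 x i8]
    %p = getelementptr inbounds [8 x i8], [8 x i8]* %a, i64 0, i64 0
    ; %p is a GEP, %q is not; both memsets have unknown size, but the
    ; underlying objects (the alloca vs. the argument) cannot alias
    call void @llvm.memset.p0i8.i64(i8* %p, i8 0, i64 %n, i1 false)
    call void @llvm.memset.p0i8.i64(i8* %q, i8 1, i64 %n, i1 false)
    ret void
  }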
Differential Revision: https://reviews.llvm.org/D92401
Build on the work started in 8f07629, and add the multiply case. In the process, more clearly describe the requirement for the operation we're looking through.
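A sketch of the multiply case, as I read the requirement (names invented):

  %ax = mul nsw i64 %x, 3
  %ay = mul nsw i64 %y, 3
  ; with a no-wrap flag and a known non-zero multiplier, mul is
  ; injective, so %x != %y implies %ax != %ay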
Differential Revision: https://reviews.llvm.org/D92726
Noticed while looking at D92701 - we only really handle TCK_RecipThroughput gather/scatter costs - for now drop back to the default implementation for non-legal gathers/scatters.
BasicAA has some special bit of logic for "same base pointer" GEPs
that performs a structural comparison: It only looks at two GEPs
with the same base (as opposed to two GEP chains with a MustAlias
base) and compares their indexes in a limited way. I generalized
part of this code in D91027, and this patch merges the remainder
into the normal decomposed GEP logic.
What this code ultimately wants to do is to determine that
gep %base, %idx1 and gep %base, %idx2 don't alias if %idx1 != %idx2,
and the access size fits within the stride.
We can express this in terms of a decomposed GEP expression with
two indexes scale*%idx1 + -scale*%idx2 where %idx1 != %idx2, and
some appropriate checks for sizes and offsets.
This makes the reasoning slightly more powerful, and more
importantly brings all the GEP logic under a common umbrella.
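For example (a sketch):

  %idx2 = add nsw i64 %idx1, 1                  ; %idx1 != %idx2
  %p1 = getelementptr inbounds i32, i32* %base, i64 %idx1
  %p2 = getelementptr inbounds i32, i32* %base, i64 %idx2
  ; 4-byte accesses to %p1 and %p2 are NoAlias: the indices are provably
  ; unequal and the access size fits within the 4-byte stride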
Differential Revision: https://reviews.llvm.org/D92723
The basic idea is that, by looking through operand instructions which don't change the equality result, we can push the existing known-bits comparison down past instructions which would otherwise obscure it.
We have analogous handling in InstSimplify for most - though weirdly not all - of these cases starting from an icmp root. It's a bit unfortunate to duplicate logic, but since my actual goal is to extend BasicAA, the icmp logic doesn't help. (And just makes it hard to test here.) The BasicAA change will be posted separately for review.
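A sketch of what the look-through enables (names invented):

  ; suppose known bits show %x is even and %y is odd, so %x != %y
  %ax = add i64 %x, 42
  %ay = add i64 %y, 42
  ; adding the same operand preserves inequality, so %ax != %ay too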
Differential Revision: https://reviews.llvm.org/D92698
Due to the recursion through phis that BasicAA does, the code needs to be extremely careful not to reason about equality between values which might represent distinct loop iterations. I'm generally skeptical of the correctness of the whole scheme, but this particular patch fixes one instance which is demonstrably incorrect.
Interestingly, this appears to be the second attempted fix for the same issue; the former fix was incomplete and did not address the actual problem.
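A sketch of the hazard (names invented):

  loop:
    %p = phi i8* [ %base, %entry ], [ %p.next, %loop ]
    %p.next = getelementptr inbounds i8, i8* %p, i64 1
    br label %loop

  ; a query that recurses through %p may end up comparing values taken
  ; from different iterations, so seeing the "same" SSA value on both
  ; sides does not prove the two addresses are equal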
Differential Revision: https://reviews.llvm.org/D92694
When MemCpyOpt performs call slot optimization it will concatenate the `alias.scope` metadata between the function call and the memcpy. However, scoped AA relies on the domains in metadata to be maintained in a caller-callee relationship. Naive concatenation breaks this assumption leading to bad AA results.
The fix is to take the intersection of domains then union the scopes within those domains.
The original bug came from a case of bad codegen in Rust, which used this incorrect aliasing to perform additional memcpy optimizations. As shown in the added test case, `%src` got forwarded past its lifetime, leading to a dereference of garbage data.
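A sketch of the metadata shape involved (names invented):

  !0 = distinct !{!0, !"caller domain"}
  !1 = distinct !{!1, !"callee domain"}
  !2 = distinct !{!2, !0, !"caller scope"}
  !3 = distinct !{!3, !1, !"callee scope"}
  ; when merging !alias.scope lists, keep only the domains present in
  ; both lists, and union the scopes within those common domains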
Testing
ninja check-llvm
Reviewed By: jeroen.dobbelaere
Differential Revision: https://reviews.llvm.org/D91576
Without FMF, we lower these intrinsics into something like this:
vmaxsd %xmm0, %xmm1, %xmm2
vcmpunordsd %xmm0, %xmm0, %xmm0
vblendvpd %xmm0, %xmm1, %xmm2, %xmm0
But if we can ignore NaNs, the single min/max instruction is enough
because there is no need to fix up the x86 logic that corresponds to
X > Y ? X : Y.
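For example, assuming these are the llvm.maxnum/llvm.minnum family (a sketch):

  %r = call nnan double @llvm.maxnum.f64(double %x, double %y)
  ; with nnan there is no NaN fixup to perform, so this can lower to a
  ; single vmaxsd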
We probably want to make other adjustments for FP intrinsics with FMF
to account for specialized codegen (for example, FSQRT).
Differential Revision: https://reviews.llvm.org/D92337
This was modeled to have a cost of 1, but since we do not have a MUL.2d this is
scalarized into vector inserts/extracts and scalar muls.
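The instruction in question (a sketch):

  %r = mul <2 x i64> %a, %b
  ; there is no MUL.2d, so this becomes lane extracts, two scalar
  ; multiplies, and lane inserts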
Motivating precommitted test is test/Transforms/SLPVectorizer/AArch64/mul.ll,
which we don't want to SLP vectorize.
Test Transforms/LoopVectorize/AArch64/extractvalue-no-scalarization-required.ll
unfortunately needed changing, but the reason is documented in
LoopVectorize.cpp:6855:
// The cost of executing VF copies of the scalar instruction. This opcode
// is unknown. Assume that it is the same as 'mul'.
which I will address next as a follow-up to this patch.
Differential Revision: https://reviews.llvm.org/D92208
For recursive phis, we skip the recursive operands and check that
the remaining operands are NoAlias with an unknown size. Currently,
this is limited to inbounds GEPs with positive offsets, to
guarantee that the recursion only ever increases the pointer.
Make this more general by only requiring that the underlying object
of the phi operand is the phi itself, i.e. it is based on itself in
some way. To compensate, we need to use a beforeOrAfterPointer()
location size, as we no longer have the guarantee that the pointer
is strictly increasing.
This allows us to handle some additional cases like negative geps,
geps with dynamic offsets or geps that aren't inbounds.
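For example, a decrementing pointer recursion is now handled (a sketch,
names invented):

  loop:
    %p = phi i8* [ %base, %entry ], [ %p.dec, %loop ]
    %p.dec = getelementptr i8, i8* %p, i64 -1
    br label %loop

  ; the underlying object of %p.dec is the phi %p itself, so the phi
  ; still qualifies even though the pointer decreases; the
  ; beforeOrAfterPointer() size accounts for that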
Differential Revision: https://reviews.llvm.org/D91914
The size requirement on V2 was present because it was not clear
whether an unknown size would allow an access before the start of
V2, which could then overlap. This is clarified since D91649: In
this part of BasicAA, all accesses can occur only after the base
pointer, even if they have unknown size.
This makes the positive and negative offset cases symmetric.
Differential Revision: https://reviews.llvm.org/D91482
If usubsat() is legal, this is likely to result in a smaller codegen expansion than the default cmp+select expansion.
Allows us to move the x86-specific lowering to the generic expansion code.
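To illustrate the kind of identity involved, assuming the umin case
(a sketch):

  ; usubsat(a, b) == max(a - b, 0), so a - usubsat(a, b) == umin(a, b)
  %d = call i32 @llvm.usub.sat.i32(i32 %a, i32 %b)
  %umin = sub i32 %a, %d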
Differential Revision: https://reviews.llvm.org/D92183
Currently, we have some confusion in the codebase regarding the
meaning of LocationSize::unknown(): Some parts (including most of
BasicAA) assume that LocationSize::unknown() only allows accesses
after the base pointer. Some parts (various callers of AA) assume
that LocationSize::unknown() allows accesses both before and after
the base pointer (but within the underlying object).
This patch splits up LocationSize::unknown() into
LocationSize::afterPointer() and LocationSize::beforeOrAfterPointer()
to make this completely unambiguous. I tried my best to determine
which one is appropriate for all the existing uses.
The test changes in cs-cs.ll in particular illustrate a previously
clearly incorrect AA result: We were effectively assuming that
argmemonly functions were only allowed to access their arguments
after the passed pointer, but not before it. I'm pretty sure that
this was not intentional, and it's certainly not specified by
LangRef that way.
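For illustration (a sketch, names invented):

  declare void @g(i8*) argmemonly

  define void @f(i8* %buf) {
    %mid = getelementptr i8, i8* %buf, i64 4
    ; with beforeOrAfterPointer(), AA must assume @g may access bytes of
    ; %buf both before and after %mid
    call void @g(i8* %mid)
    ret void
  }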
Differential Revision: https://reviews.llvm.org/D91649
The cost-model is not getting the cost right for a mul with <2 x i64>
operands, i.e. we don't have a MUL.2d, and this is precommitting some
tests before adjusting this.
Add a basic implementation of getGatherScatterOpCost to BasicTTIImpl.
The implementation estimates the cost of scalarizing the loads/stores,
the cost of packing/extracting the individual lanes and the cost of
only selecting enabled lanes.
This more accurately reflects the current cost on targets like AArch64.
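For example, for a gather such as (a sketch)

  %g = call <4 x i32> @llvm.masked.gather.v4i32.v4p0i32(<4 x i32*> %ptrs, i32 4, <4 x i1> %m, <4 x i32> undef)

the estimate sums four scalar loads, the extraction of the pointer
lanes, the insertion of the loaded lanes, and the per-lane mask
selection.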
Reviewed By: dmgreen
Differential Revision: https://reviews.llvm.org/D91984
Update costs now that D92095 and D92102 have tweaked the SSE2 implementation.
The SSE42 BLENDVPD cost can actually be used on SSE41, as we don't attempt to generate PCMPGT anymore.
Add scalar i16/i32/i64 costs, as we can do this cheaply with CMOV.
The TypeSize warning would occur because RuntimePointerChecking::insert
was not scalable vector aware. The fix is to use
ScalarEvolution::getSizeOfExpr to grab the size of types.
Differential Revision: https://reviews.llvm.org/D90171
SCEV makes a logical mistake when handling EitherMayExit in the case
when both conditions must be met to exit the loop. The mistake reads as
follows: "if condition `A` fails within at most `X` first iterations,
and `B` fails within at most `Y` first iterations, then `A & B` fails
within at most `min(X, Y)` first iterations". This is wrong, because
both of them must fail at the same time.
A simple example illustrating this: we have an IV with step 1,
condition `A` = "IV is even", condition `B` = "IV is odd". Both `A` and `B`
will fail within the first two iterations. But that does not mean that
both of them will fail at the same time within the first two iterations;
that would mean the IV is neither even nor odd at some point within the
first 2 iterations.
We can only take the minimum for known exact BE counts, but not for max.
Differential Revision: https://reviews.llvm.org/D91942
Reviewed By: nikic
This might be a regression for some ARM targets, but that should
be changed in the target-specific overrides.
There is apparently still no default lowering for these nodes,
so I am assuming these intrinsics are not in common use.
X86, PowerPC, and RISC-V, for example, just crash given the most
basic IR.
Test a few more variations:
* NoAlias with different strides
* MustAlias without loop
* MustAlias with same stride
* MustAlias base pointers with different stride
This is re-applying a combination of f7eac51b9b and 8ec7ea3ddc as one patch
to avoid regressions now that we have better testing in place.
Those were reverted with 32dd5870ee because of crashing in experimental intrinsics.
That bug should be fixed with 7ae346434.
Paraphrased original commit messages:
This is the last step in removing cost-kind as a consideration in the
basic class model for intrinsics.
See D89461 for the start of that.
Subsequent commits dealt with each of the special-case intrinsics that
had customization here in the basic class. This should remove a barrier
to retrying D87188 (canonicalization to the abs intrinsic).
The ARM and x86 cost diffs seen here may be wrong because the
target-specific overrides have their own bugs, but we hope this is
less wrong - if something has a significant throughput cost, then it
should have a significant size / blended cost too by default.
The only behavioral diff in current regression tests is shown in the
x86 scatter-gather test (which is misplaced or broken because it runs
the entire -O3 pipeline) - we unrolled less, and we assume that is
an improvement.
Exception: in general, we want the *size* cost for a scalar call to be
cheap even if the other costs are expensive - we expect it to just be
a branch with some optional stack manipulation.
It is likely that we will want to carve out some
exceptions/overrides to this rule as follow-up patches for
calls that have some general and/or target-specific difference
to the expected lowering.
This was noticed as a regression in unrolling, so we have a test
for that now along with a couple of direct cost model tests.
If the assumed scalarization costs for the oversized vector
calls are not realistic, that would be another follow-up
refinement of the cost models.
Differential Revision: https://reviews.llvm.org/D90554
The constrained intrinsics have metadata arguments, so the
tests here were crashing as noted in D90554 (and that was
reverted even though this bug exists independently of that
change).
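For reference, constrained intrinsics carry metadata operands, e.g.
(a sketch):

  %r = call double @llvm.experimental.constrained.sqrt.f64(double %x, metadata !"round.dynamic", metadata !"fpexcept.strict")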
This is a partial un-revert of 32dd5870ee (originally df09f82599).
I'm adding back the baseline tests first, so we don't have
to back-track as much in case there are still problems.