This was pointed out in review of D97520 by Nikita, but existed in the original code as well.
The basic issue is that a decomposed GEP expression describes (potentially) more than one getelementptr. The "inbounds" derived UB which justifies this aliasing rule requires that the entire offset be composed of "inbounds" geps. Otherwise, as can be seen in the recently added test and the tests changed in this patch, we can end up with a large cumulative offset of which only a small sub-offset is actually "inbounds". If that small sub-offset lies within the object, the result was unsound.
We could potentially be fancier here, but for the moment, simply be conservative when any of the GEPs parsed aren't inbounds.
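For illustration, a made-up IR sketch of the problematic shape (not
the committed test):

    define i8 @example(i8* %obj, i64 %big) {
      ; Only the inner GEP is inbounds. The decomposition sums both
      ; offsets, so treating the whole cumulative offset as
      ; inbounds-derived would be unsound.
      %inner = getelementptr inbounds i8, i8* %obj, i64 1
      %outer = getelementptr i8, i8* %inner, i64 %big
      %v = load i8, i8* %outer
      ret i8 %v
    }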
When one of two clobbering loads is fully contained in the other,
`BasicAAResult::aliasGEP` returns just `PartialAlias`, the same answer
as the much more common case of partial overlap, and says nothing
about the actual overlap sizes.
AA users such as GVN and DSE have no functionality to estimate aliasing of GEPs
with non-constant offsets. The change stores estimated relative offsets so they
can be used further.
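For illustration, a made-up IR example where one access is fully
contained in the other; the stored relative offset (4 bytes here) is
what a client like GVN can make use of:

    define i32 @contained(i64* %p) {
      %wide = load i64, i64* %p          ; covers bytes [0, 8)
      %q = bitcast i64* %p to i32*
      %q4 = getelementptr inbounds i32, i32* %q, i64 1
      %narrow = load i32, i32* %q4       ; covers bytes [4, 8)
      ret i32 %narrow
    }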
Reviewed By: nikic
Differential Revision: https://reviews.llvm.org/D93529
This is a simpler variant of D96647. It just adds a straightforward
depth limit with a high cutoff, without introducing complex logic
for BatchAA consistency. It accepts that we may cache a sub-optimal
result if the depth limit is hit.
Eventually this should be more fully addressed by D96647 or similar,
but in the meantime this avoids stack overflows in a cheap way.
Differential Revision: https://reviews.llvm.org/D96996
We can always look through single-argument (LCSSA) phi nodes when
performing alias analysis. getUnderlyingObject() already does this,
but stripPointerCastsAndInvariantGroups() does not. We still look
through these phi nodes with the usual aliasPhi() logic, but
sometimes get sub-optimal results due to the restrictions on value
equivalence when looking through arbitrary phi nodes. I think it's
generally beneficial to keep the underlying object logic and the
pointer cast stripping logic in sync, insofar as it is possible.
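For example (illustrative IR, not from the test suite), %p.lcssa below
is a single-argument LCSSA phi that is trivially equal to %p and can
be looked through directly:

    define void @lcssa(i8* %p, i1 %c) {
    entry:
      br label %loop
    loop:
      br i1 %c, label %loop, label %exit
    exit:
      %p.lcssa = phi i8* [ %p, %loop ]
      store i8 0, i8* %p.lcssa
      ret void
    }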
With this patch we get marginally better results (before | after):
aa.NumMayAlias | 5010069 | 5009861
aa.NumMustAlias | 347518 | 347674
aa.NumNoAlias | 27201336 | 27201528
...
licm.NumPromoted | 1293 | 1296
I've renamed the relevant strip method to stripPointerCastsForAliasAnalysis(),
as we're past the point where we can explicitly spell out everything
that's getting stripped.
Differential Revision: https://reviews.llvm.org/D96668
At this point, we can treat the case of GEP/GEP aliasing and
GEP/non-GEP aliasing in essentially the same way. The only
differences are that we need to do an additional negative GEP base
check, and that we perform a bailout on unknown sizes for the
GEP/non-GEP case (the latter exists only to limit compile-time).
This change is not quite NFC due to the peculiar effect that
the DecomposedGEP for V2 can actually be non-trivial even if V2
is not a GEP. The reason for this is that getUnderlyingObject()
can look through LCSSA phi nodes, while stripPointerCasts() doesn't.
This can lead to slightly better results if single-entry phi nodes
occur inside a loop, where looking through the phi node via aliasPhi()
would subject it to phi cycle equivalence restrictions. It would
probably make sense to adjust pointer cast stripping (for AA) to
handle this case, and ensure consistent results.
For two GEPs with identical offsets, we currently first perform
a base address query without size information, and then if it is
MayAlias, perform another with size information. This is pointless,
as the latter query should produce strictly better results.
This was not quite true historically due to the way that NoAlias
assumptions were handled, but that issue has since been resolved.
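For example (illustrative IR), the two GEPs below have identical
offsets, so only the sized base query between %a and %b is needed:

    define void @same_offsets(i32* %a, i32* %b, i64 %i) {
      %g1 = getelementptr inbounds i32, i32* %a, i64 %i
      %g2 = getelementptr inbounds i32, i32* %b, i64 %i
      store i32 0, i32* %g1
      store i32 1, i32* %g2
      ret void
    }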
We currently detect GEPs that have exactly the same indexes by
comparing the Offsets and VarIndices. However, the latter implicitly
performs equality comparisons between two values, which is not
generally legal inside BasicAA, due to the possibility of comparisons
across phi cycles.
I believe that in this particular instance this actually ends up being
unproblematic; at least, I wasn't able to come up with any case that
could result in an incorrect root query result.
In the interest of being defensive, call GetIndexDifference (which
knows how to handle phi cycles properly) earlier, and use its result
to determine whether the offsets are identical.
Rather than storing the query depth in AAResults, store it in AAQI.
This makes more sense, as it is a property of the query. This
sidesteps the issue of D94363, fixing slightly inaccurate AA
statistics. Additionally, I plan to use the Depth from BasicAA in
the future, where fetching it from AAResults would be unreliable.
This change is not quite as straightforward as it seems, because
we need to preserve the depth when creating a new AAQI for recursive
queries across phis. I'm adding a new method for this, as we may
need to preserve additional information here in the future.
We tend to assume that the AA pipeline is by default the default AA
pipeline, and it's confusing when it's empty instead.
PR48779
Initially reverted because BasicAA fetched its analyses via multiple
function calls passed as constructor arguments, whose evaluation order
is unspecified; fixed by fetching the analyses before the call that
constructs BasicAA.
Reviewed By: asbirlea
Differential Revision: https://reviews.llvm.org/D95117
There are no changes relative to the original commit. However, an issue
this exposed in BasicAA assumption tracking has been fixed in the
previous commit.
-----
An alias query currently works out roughly like this:
* Look up location pair in cache.
* Perform BasicAA logic (including cache lookup and insertion...)
* Perform a recursive query using BestAAResults.
  * Look up location pair in cache (and thus do not recurse into BasicAA)
  * Query all the other AA providers.
* Query all the other AA providers.
This is a lot of unnecessary work, all ultimately caused by the
BestAAResults query at the end of aliasCheck(). The reason we perform
it is that aliasCheck() is called recursively, and we of
course want those recursive queries to also make use of other AA
providers, not just BasicAA. We can solve this by making the recursive
queries directly use BestAAResults (which will check both BasicAA
and other providers), rather than recursing into aliasCheck().
There are some tradeoffs:
* We can no longer pass through the precomputed underlying object
to aliasCheck(). This is not a major concern, because nowadays
getUnderlyingObject() is quite cheap.
* Results from other AA providers are no longer cached inside
BasicAA. The way this worked was already a bit iffy, in that a
result could be cached, but if it was MayAlias, we'd still end
up re-querying other providers anyway. If we want to cache
non-BasicAA results, we should do that in a more principled manner.
In any case, despite those tradeoffs, this works out to be a decent
compile-time improvement. I think it also simplifies the mental model
of how BasicAA works. It took me quite a while to fully understand
how these things interact.
Differential Revision: https://reviews.llvm.org/D90094
D91936 placed the tracking for the assumptions into BasicAA.
However, when recursing over phis, we may use fresh AAQI instances.
In this case, AssumptionBasedResults from an inner AAQI can result
in the removal of an element from the outer AAQI.
To avoid this, move the tracking into AAQI. This generally makes
more sense, as the NoAlias assumptions themselves are also stored
in AAQI.
The test case only produces an assertion failure with D90094
reapplied. I think the issue exists independently of that change
as well, but I wasn't able to come up with a reproducer.
This reverts commit a3904cc77f.
It causes the compiler to crash while building Harfbuzz for ARM in
Chromium; reduced reproducer forthcoming:
https://crbug.com/1167305
An alias query currently works out roughly like this:
* Look up location pair in cache.
* Perform BasicAA logic (including cache lookup and insertion...)
* Perform a recursive query using BestAAResults.
  * Look up location pair in cache (and thus do not recurse into BasicAA)
  * Query all the other AA providers.
* Query all the other AA providers.
This is a lot of unnecessary work, all ultimately caused by the
BestAAResults query at the end of aliasCheck(). The reason we perform
it is that aliasCheck() is called recursively, and we of
course want those recursive queries to also make use of other AA
providers, not just BasicAA. We can solve this by making the recursive
queries directly use BestAAResults (which will check both BasicAA
and other providers), rather than recursing into aliasCheck().
There are some tradeoffs:
* We can no longer pass through the precomputed underlying object
to aliasCheck(). This is not a major concern, because nowadays
getUnderlyingObject() is quite cheap.
* Results from other AA providers are no longer cached inside
BasicAA. The way this worked was already a bit iffy, in that a
result could be cached, but if it was MayAlias, we'd still end
up re-querying other providers anyway. If we want to cache
non-BasicAA results, we should do that in a more principled manner.
In any case, despite those tradeoffs, this works out to be a decent
compile-time improvement. I think it also simplifies the mental model
of how BasicAA works. It took me quite a while to fully understand
how these things interact.
Differential Revision: https://reviews.llvm.org/D90094
This patch fixes a bug that could result in miscompiles (at least
in an OOT target). The problem could be seen by adding checks that
the DominatorTree used in BasicAliasAnalysis and ValueTracking was
valid (e.g. by adding DT->verify() call before every DT dereference
and then running all tests in test/CodeGen).
The problem was that the LegacyPassManager calculated "last user"
incorrectly for passes such as the DominatorTree when not telling
the pass manager that there was a transitive dependency between
the different analyses. And then it could happen that an incorrect
dominator tree was used when doing alias analysis (which was a pretty
serious bug as the alias analysis result could be invalid).
Fixes: https://bugs.llvm.org/show_bug.cgi?id=48709
Reviewed By: nikic
Differential Revision: https://reviews.llvm.org/D94138
Change the way NoAlias assumptions in BasicAA are handled. Instead of
handling this inside the phi-phi code, always initially insert a
NoAlias result into the map and keep track whether it is used.
If it is used, then we require that we also get back NoAlias from
the recursive queries. Otherwise, the entry is changed to MayAlias.
Additionally, keep track of all location pairs we inserted that may
still be based on assumptions higher up. If it turns out one of those
assumptions is incorrect, we flush them from the cache.
The compile-time impact for the new implementation is significantly
higher than the previous iteration of this patch:
https://llvm-compile-time-tracker.com/compare.php?from=c0bb9859de6991cc233e2dedb978dd118da8c382&to=c07112373279143e37568b5bcd293daf81a35973&stat=instructions
However, it should avoid the exponential runtime cases we run into
if we don't cache assumption-based results entirely.
This also produces better results in some cases, because NoAlias
assumptions can now start at any root, rather than just phi-phi pairs.
This is not just relevant for analysis quality, but also for BatchAA
consistency: Otherwise, results would once again depend on query order,
though at least they wouldn't be wrong.
This ended up both more complicated and more expensive than I hoped,
but I wasn't able to come up with another solution that satisfies all
the constraints.
Differential Revision: https://reviews.llvm.org/D91936
D71264 started using a context instruction in a computeKnownBits()
call. However, if aliasing between two GEPs is checked, then the
choice of context instruction will be different for alias(GEP1, GEP2)
and alias(GEP2, GEP1), which is not supposed to happen.
Resolve this by remembering which GEP a certain VarIndex belongs to,
and use that as the context instruction. This makes the choice of
context instruction predictable and symmetric.
It should be noted that this choice of context instruction is
non-optimal (just like the previous choice): The AA query result is
only valid at points that are reachable from *both* instructions.
Using either one of them is conservatively correct, but a larger
context may also be valid to use.
Differential Revision: https://reviews.llvm.org/D93183
Temporarily revert commit 8b1c4e310c.
After 8b1c4e310c the compile-time for `MultiSource/Benchmarks/MiBench/consumer-lame`
dramatically increases with -O3 & LTO, causing issues for builders with
that configuration.
I filed PR48553 with a smallish reproducer that shows a 10-100x compile
time increase.
BasicAA currently handles cases like Scale*V0 + (-Scale)*V1 where
V0 != V1, but does not handle the simpler case of Scale*V with
V != 0. Add it based on an isKnownNonZero() call.
I'm not passing a context instruction for now, because the existing
approach of always using GEP1 for context could result in symmetry
issues.
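Illustrative IR for the new case (assuming isKnownNonZero() can prove
%nz != 0 from the `or`):

    define void @scale_nonzero(i32* %p, i64 %v) {
      %nz = or i64 %v, 1
      %gep = getelementptr inbounds i32, i32* %p, i64 %nz
      ; The decomposed difference is 4*%nz with %nz != 0, so the two
      ; 4-byte accesses cannot overlap: NoAlias.
      store i32 0, i32* %gep
      store i32 1, i32* %p
      ret void
    }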
Differential Revision: https://reviews.llvm.org/D93162
If we have two unknown sizes and one GEP operand and one non-GEP
operand, then we currently simply return MayAlias. The comment says
we can't do anything useful ... but we can! We can still check that
the underlying objects are different (and do so for the GEP-GEP case).
To reduce the compile-time impact, this a) checks this early, before
doing the relatively expensive GEP decomposition that will not be
used and b) doesn't do the check if the other operand is a phi or
select. In that case, the phi/select will already recurse, so this
would just do two slightly different recursive walks that arrive at
the same roots.
Compile-time is still a bit of a mixed bag: https://llvm-compile-time-tracker.com/compare.php?from=624af932a808b363a888139beca49f57313d9a3b&to=845356e14adbe651a553ed11318ddb5e79a24bcd&stat=instructions
On average this is a small improvement, but sqlite with ThinLTO has
a 0.5% regression (lencod has a 1% improvement).
The BasicAA test case checks this by using two memsets with unknown
size. However, the more interesting case where this is useful is
the LoopVectorize test case, as analysis of accesses in loops tends
to always use unknown sizes.
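A sketch in the spirit of the BasicAA test (hypothetical IR): two
memsets of unknown size, one through a GEP operand and one not, whose
underlying allocas are distinct:

    declare void @llvm.memset.p0i8.i64(i8* nocapture, i8, i64, i1)

    define void @unknown_sizes(i64 %n) {
      %a = alloca [8 x i8]
      %b = alloca [8 x i8]
      %a.gep = getelementptr inbounds [8 x i8], [8 x i8]* %a, i64 0, i64 0
      %b.ptr = bitcast [8 x i8]* %b to i8*
      ; Both sizes are unknown, but the underlying objects differ:
      ; NoAlias.
      call void @llvm.memset.p0i8.i64(i8* %a.gep, i8 0, i64 %n, i1 false)
      call void @llvm.memset.p0i8.i64(i8* %b.ptr, i8 1, i64 %n, i1 false)
      ret void
    }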
Differential Revision: https://reviews.llvm.org/D92401
BasicAA has some special bit of logic for "same base pointer" GEPs
that performs a structural comparison: It only looks at two GEPs
with the same base (as opposed to two GEP chains with a MustAlias
base) and compares their indexes in a limited way. I generalized
part of this code in D91027, and this patch merges the remainder
into the normal decomposed GEP logic.
What this code ultimately wants to do is to determine that
gep %base, %idx1 and gep %base, %idx2 don't alias if %idx1 != %idx2,
and the access size fits within the stride.
We can express this in terms of a decomposed GEP expression with
two indexes scale*%idx1 + -scale*%idx2 where %idx1 != %idx2, and
some appropriate checks for sizes and offsets.
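For example (illustrative IR, assuming isKnownNonEqual() proves
%i != %i1):

    define void @stride(i32* %base, i64 %i) {
      %i1 = add i64 %i, 1
      %g1 = getelementptr inbounds i32, i32* %base, i64 %i
      %g2 = getelementptr inbounds i32, i32* %base, i64 %i1
      ; Decomposed difference: 4*%i + -4*%i1 with %i != %i1; the 4-byte
      ; accesses fit within the 4-byte stride, so the GEPs are NoAlias.
      store i32 0, i32* %g1
      store i32 1, i32* %g2
      ret void
    }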
This makes the reasoning slightly more powerful, and more
importantly brings all the GEP logic under a common umbrella.
Differential Revision: https://reviews.llvm.org/D92723
Due to the recursion through phis BasicAA does, the code needs to be extremely careful not to reason about equality between values which might represent distinct iterations. I'm generally skeptical of the correctness of the whole scheme, but this particular patch fixes one particular instance which is demonstrably incorrect.
Interestingly, this appears to be the second attempted fix for the same issue. The former fix is incomplete and doesn't address the actual issue.
Differential Revision: https://reviews.llvm.org/D92694
For recursive phis, we skip the recursive operands and check that
the remaining operands are NoAlias with an unknown size. Currently,
this is limited to inbounds GEPs with positive offsets, to
guarantee that the recursion only ever increases the pointer.
Make this more general by only requiring that the underlying object
of the phi operand is the phi itself, i.e. it is based on itself in
some way. To compensate, we need to use a beforeOrAfterPointer()
location size, as we no longer have the guarantee that the pointer
is strictly increasing.
This allows us to handle some additional cases like negative geps,
geps with dynamic offsets or geps that aren't inbounds.
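Illustrative IR for a newly handled case (a negative, non-inbounds
step; %a is disambiguated against the phi via its non-recursive
operand %p, checked with a beforeOrAfterPointer() size):

    define void @recursive_phi(i32* %p, i1 %c) {
    entry:
      %a = alloca i32
      br label %loop
    loop:
      %cur = phi i32* [ %p, %entry ], [ %next, %loop ]
      ; The underlying object of %next is the phi itself.
      %next = getelementptr i32, i32* %cur, i64 -1
      store i32 0, i32* %cur
      store i32 1, i32* %a
      br i1 %c, label %loop, label %exit
    exit:
      ret void
    }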
Differential Revision: https://reviews.llvm.org/D91914
The size requirement on V2 was present because it was not clear
whether an unknown size would allow an access before the start of
V2, which could then overlap. This has been clarified since D91649: in
this part of BasicAA, all accesses can occur only after the base
pointer, even if they have unknown size.
This makes the positive and negative offset cases symmetric.
Differential Revision: https://reviews.llvm.org/D91482
Add a flag that disables caching when computing aliasing results
potentially based on a phi-phi NoAlias assumption. We'll still
insert cache entries temporarily to catch infinite recursion,
but will drop them afterwards, so they won't persist in BatchAA.
Differential Revision: https://reviews.llvm.org/D91936
Currently, we have some confusion in the codebase regarding the
meaning of LocationSize::unknown(): Some parts (including most of
BasicAA) assume that LocationSize::unknown() only allows accesses
after the base pointer. Some parts (various callers of AA) assume
that LocationSize::unknown() allows accesses both before and after
the base pointer (but within the underlying object).
This patch splits up LocationSize::unknown() into
LocationSize::afterPointer() and LocationSize::beforeOrAfterPointer()
to make this completely unambiguous. I tried my best to determine
which one is appropriate for all the existing uses.
The test changes in cs-cs.ll in particular illustrate a previously
clearly incorrect AA result: We were effectively assuming that
argmemonly functions were only allowed to access their arguments
after the passed pointer, but not before it. I'm pretty sure that
this was not intentional, and it's certainly not specified by
LangRef that way.
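An illustrative sketch of the argmemonly situation (made-up IR):

    declare void @callee(i8* nocapture) argmemonly

    define void @caller(i8* %p) {
      %before = getelementptr i8, i8* %p, i64 -1
      store i8 0, i8* %before
      ; The callee may access bytes before %p as well, so its pointer
      ; argument must be modeled with beforeOrAfterPointer(), not
      ; afterPointer(); the call may clobber the store above.
      call void @callee(i8* %p)
      ret void
    }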
Differential Revision: https://reviews.llvm.org/D91649
We are doing a sextOrTrunc directly afterwards, so this seems
useless. There is a multiplication in between, but truncating
before or after the multiplication should not make a difference.
Instead of requiring the caller to initialize the DecomposedGEP
structure and then passing it in by reference, make
DecomposeGEPExpression() responsible for initializing and returning
the structure.
Use DecompGEP1.Offset instead of GEP1BaseOffset, etc. I found the
asymmetry of modifying DecompGEP1.VarIndices but not
DecompGEP1.Offset odd here.
When constructing a MemoryLocation by hand, require that a
LocationSize is explicitly specified. D91649 will split up
LocationSize::unknown() into two different states, and callers
should make an explicit choice regarding the kind of MemoryLocation
they want to have.
Similarly to assumes and guards, deoptimize intrinsics are marked as
writing, to ensure proper control dependencies, but they never modify
any particular memory location.
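For example (illustrative IR; the call is modeled as writing, but must
not be treated as clobbering *%p):

    declare void @llvm.experimental.deoptimize.isVoid(...)

    define void @deopt(i32* %p, i1 %c) {
      store i32 42, i32* %p
      br i1 %c, label %bail, label %cont
    bail:
      call void (...) @llvm.experimental.deoptimize.isVoid() [ "deopt"() ]
      ret void
    cont:
      ret void
    }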
Differential Revision: https://reviews.llvm.org/D91658
The GEP aliasing implementation currently has two pieces of code
that solve two different subsets of the same basic problem: If you
have GEPs with offsets 4*x + 0 and 4*y + 1 (assuming access size 1),
then they do not alias regardless of whether x and y are the same.
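For concreteness (illustrative IR):

    define void @mod4(i8* %base, i64 %x, i64 %y) {
      %x4 = shl i64 %x, 2
      %g1 = getelementptr i8, i8* %base, i64 %x4    ; offset 4*x + 0
      %y4 = shl i64 %y, 2
      %y41 = add i64 %y4, 1
      %g2 = getelementptr i8, i8* %base, i64 %y41   ; offset 4*y + 1
      ; Offsets 0 and 1 modulo 4 cannot collide for 1-byte accesses,
      ; regardless of the values of %x and %y: NoAlias.
      store i8 0, i8* %g1
      store i8 1, i8* %g2
      ret void
    }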
One implementation is in aliasSameBasePointerGEPs(), which looks at
this in a limited structural way. It requires both GEP base pointers
to be exactly the same, then (optionally) a number of equal indexes,
then an unknown index, then a non-equal index into a struct. This
set of limitations works, but it's overly restrictive and hides the
core property we're trying to exploit.
The second implementation is part of aliasGEP() itself and tries to
find a common modulus in the scales, so it can then check that the
constant offset doesn't overlap under modular arithmetic. The second
implementation has the right idea of what the general problem is,
but effectively only considers power of two factors in the scales
(while aliasSameBasePointerGEPs also works with non-pow2 struct sizes).
What this patch does is to adjust the aliasGEP() implementation to
instead find the largest common factor in all the scales (i.e. the GCD)
and use that as the modulus.
Differential Revision: https://reviews.llvm.org/D91027
aliasGEP() currently implements some special handling for the case
where all variable offsets are positive, in which case the constant
offset can be taken as the minimal offset. However, it does not
perform the same handling for the all-negative case. This means that
the alias-analysis result between two GEPs is asymmetric:
If GEP1 - GEP2 is all-positive, then GEP2 - GEP1 is all-negative,
and the first will result in NoAlias, while the second will result
in MayAlias.
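For example (illustrative IR): alias(%g1, %p) sees the all-positive
difference 1 + %idx with %idx non-negative and returns NoAlias for
the 1-byte accesses, while alias(%p, %g1) sees the all-negative form
and previously fell through to MayAlias:

    define void @sign(i8* %p, i32 %n) {
      %idx = zext i32 %n to i64    ; provably non-negative
      %g0 = getelementptr inbounds i8, i8* %p, i64 1
      %g1 = getelementptr inbounds i8, i8* %g0, i64 %idx
      store i8 0, i8* %g1
      store i8 1, i8* %p
      ret void
    }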
Apart from producing sub-optimal results for one order, this also
violates our caching assumption. In particular, if BatchAA is used,
the cached result depends on the order of the GEPs in the first query.
This results in an inconsistency in BatchAA and AA results, which
is how I noticed this issue in the first place.
Differential Revision: https://reviews.llvm.org/D91383
The GEP aliasing code currently checks for the GEP decomposition
limit being reached (i.e., we did not reach the "final" underlying
object). As far as I can see, these checks are not necessary. It is
perfectly fine to work with a GEP whose base can still be further
decomposed.
Looking back through the commit history, these checks were originally
introduced in 1a444489e9. However, I
believe that the problem this was intended to address was later
properly fixed with 1726fc698c, and
the checks are no longer necessary since then (and were not the
right fix in the first place).
Differential Revision: https://reviews.llvm.org/D91010
The distinction between StructOffset and OtherOffset was originally
introduced by 82069c44ca,
which applied different reasoning to both offset kinds. However,
this distinction was not actually correct, and has been fixed by
c84e77aeae. Since then, we only ever
consider the sum StructOffset + OtherOffset, so we may as well
store it in that form directly.
Instead of performing the multiplication in double the bit width
and using active bits to determine overflow, use the existing
smul_ov() APInt method to detect overflow.
The smul_ov() implementation is not particularly efficient, but
it's still better than doing this in a wide, usually 128-bit, type.
Rather than performing the cache lookup with both possible orders
for the locations, use the same canonicalization as the other
AliasCache lookups in BasicAA.
Any time we insert a block into VisitedPhiBBs, previously cached
values may no longer be valid for the recursive alias queries. As
such, perform them using an empty AAQueryInfo.
Note that if we recurse to the same phi, the block will already
be inserted, so we reuse the old AAQueryInfo, and thus still
protect against infinite recursion.
This problem can appear both with and without BatchAA, but is more
likely to occur with BatchAA, as more values are cached.
Differential Revision: https://reviews.llvm.org/D90066
Visited phi blocks only need to be added for the duration of the
recursive alias queries; they should not leak into the following code.
Once again, while this also improves analysis precision, this is
mainly intended to clarify the applicability scope of VisitedPhiBBs.
We only need the VisitedPhiBBs to disambiguate comparisons of
values from two different loop iterations. If we're comparing
two phis from the same basic block in lock-step, the compared
values will always be on the same iteration.
While this also increases precision, this is mainly intended
to clarify the scope of VisitedPhiBBs.
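For example (illustrative IR), %p1 and %p2 advance in lock-step; when
they are compared against each other, both values are always from the
same iteration, so their block need not be added to VisitedPhiBBs:

    define void @lockstep(i32* %a, i32* %b, i1 %c) {
    entry:
      br label %loop
    loop:
      %p1 = phi i32* [ %a, %entry ], [ %p1.next, %loop ]
      %p2 = phi i32* [ %b, %entry ], [ %p2.next, %loop ]
      %p1.next = getelementptr inbounds i32, i32* %p1, i64 1
      %p2.next = getelementptr inbounds i32, i32* %p2, i64 1
      store i32 0, i32* %p1
      store i32 1, i32* %p2
      br i1 %c, label %loop, label %exit
    exit:
      ret void
    }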