Commit Graph

2512 Commits

Author SHA1 Message Date
Gil Rapaport d475030dc2 [SCEV] Apply loop guards to divisibility tests
Extend applyLoopGuards() to take into account conditions/assumes proving some
value %v to be divisible by D by rewriting %v to (%v / D) * D. This lets the
loop unroller and the loop vectorizer identify more loops as not requiring
remainder loops.

Differential Revision: https://reviews.llvm.org/D95521
2021-02-02 08:09:39 +02:00
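As a rough illustration of the kind of guard this exploits (hypothetical IR, not the patch's own test): a dominating check that %n is a non-zero multiple of 8 lets applyLoopGuards() rewrite %n to (%n /u 8) * 8, so an unroll factor of 8 needs no remainder loop.

```
define void @guarded(i32* %p, i64 %n) {
entry:
  %rem = urem i64 %n, 8
  %divisible = icmp eq i64 %rem, 0
  %nonzero = icmp ne i64 %n, 0
  %enter = and i1 %divisible, %nonzero
  br i1 %enter, label %loop, label %exit      ; guard: %n is a non-zero multiple of 8

loop:
  %iv = phi i64 [ 0, %entry ], [ %iv.next, %loop ]
  %addr = getelementptr inbounds i32, i32* %p, i64 %iv
  store i32 0, i32* %addr
  %iv.next = add nuw nsw i64 %iv, 1
  %continue = icmp ne i64 %iv.next, %n
  br i1 %continue, label %loop, label %exit   ; trip count %n is known divisible by 8

exit:
  ret void
}
```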
Florian Hahn f1e8136115 [SCEV] Bail out if URem operand cannot be zero-extended.
In some cases, LHS is larger than the target expression type. Bail out
in that case for now, to avoid crashing.
2021-02-01 13:50:54 +00:00
Mindong Chen 00fcc03687 [SCEV] Fix incorrect loop exit count analysis.
In computeLoadConstantCompareExitLimit, the addrec used to compute the
exit count should come from the loop to which the exiting block belongs.

Reviewed by: mkazantsev

Differential Revision: https://reviews.llvm.org/D92367
2021-01-27 19:36:05 +08:00
David Green 0175cd00a1 [AArch64] Add vector saturating add intrinsic costs
This adds sadd.sat, uadd.sat, ssub.sat and usub.sat costs for AArch64,
similar to how they were recently added for ARM.

Differential Revision: https://reviews.llvm.org/D95292
2021-01-27 10:38:32 +00:00
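For context, a hypothetical snippet of the kind of IR these costs apply to (not the committed test); with an AArch64 target the cost model now reports costs reflecting the native saturating instructions rather than a generic expansion.

```
declare <4 x i32> @llvm.sadd.sat.v4i32(<4 x i32>, <4 x i32>)
declare <8 x i16> @llvm.usub.sat.v8i16(<8 x i16>, <8 x i16>)

define <4 x i32> @sat_ops(<4 x i32> %a, <4 x i32> %b, <8 x i16> %c, <8 x i16> %d) {
  ; Legal NEON types such as <4 x i32> and <8 x i16> map directly to the
  ; saturating instructions and therefore get a low cost.
  %s = call <4 x i32> @llvm.sadd.sat.v4i32(<4 x i32> %a, <4 x i32> %b)
  %u = call <8 x i16> @llvm.usub.sat.v8i16(<8 x i16> %c, <8 x i16> %d)
  ret <4 x i32> %s
}
```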
Sander de Smalen b9417c3616 [CostModel] Handle CTLZ and CTTZ in getTypeBasedIntrinsicInstrCost
Reviewed By: dmgreen

Differential Revision: https://reviews.llvm.org/D95355
2021-01-26 14:37:51 +00:00
David Green 06ab7953e9 [AArch64] Saturating add cost tests. NFC 2021-01-24 13:49:17 +00:00
David Sherwood 83e7a96c06 Fix build failure caused by 2e080eb00a 2021-01-22 09:56:53 +00:00
David Sherwood 2e080eb00a [SVE] Add support for scalable vectorization of loops with selects and cmps
I have removed an unnecessary assert in LoopVectorizationCostModel::getInstructionCost
that prevented a cost from being calculated for select instructions when using
scalable vectors. In addition, I have changed AArch64TTIImpl::getCmpSelInstrCost
to only do special cost calculations for fixed width vectors and fall
back to the base version for scalable vectors.

I have added a simple cost model test for cmps and selects:

  test/Analysis/CostModel/sve-cmpsel.ll

and some simple tests that show we vectorize loops with cmp and select:

  test/Transforms/LoopVectorize/AArch64/sve-basic-vec.ll

Differential Revision: https://reviews.llvm.org/D95039
2021-01-22 09:48:13 +00:00
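A minimal sketch of the cmp/select shape this enables (assumed IR, not the committed sve-cmpsel.ll):

```
define <vscale x 4 x i32> @cmpsel(<vscale x 4 x i32> %a, <vscale x 4 x i32> %b) {
  ; icmp and select on scalable vectors now get a cost from the base
  ; implementation instead of tripping the removed assert.
  %cmp = icmp sgt <vscale x 4 x i32> %a, %b
  %sel = select <vscale x 4 x i1> %cmp, <vscale x 4 x i32> %a, <vscale x 4 x i32> %b
  ret <vscale x 4 x i32> %sel
}
```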
Arthur Eubanks f374138058 [test] Make incorrect-exit-count.ll work under NPM 2021-01-21 21:45:32 -08:00
Arthur Eubanks 6699029b67 [NewPM][opt] Run the "default" AA pipeline by default
We tend to assume that the AA pipeline used by default is the "default" AA
pipeline, and it is confusing when it is empty instead.

PR48779

Initially reverted due to BasicAA running analyses in an unspecified
order (multiple function calls as parameters), fixed by fetching
analyses before the call to construct BasicAA.

Reviewed By: asbirlea

Differential Revision: https://reviews.llvm.org/D95117
2021-01-21 21:08:54 -08:00
Mircea Trofin c042aff886 [NFC] Disallow unused prefixes under llvm/test
This patch sets the default for llvm tests, with the exception of tests
under Reduce, because quite a few of them use 'FileCheck' as a parameter
to a tool, and including a flag as that parameter would complicate
matters.

The rest of the patch undoes the lit.local.cfg changes we progressively
introduced as temporary measure to avoid regressions under various
directories.

Differential Revision: https://reviews.llvm.org/D95111
2021-01-21 20:31:52 -08:00
Arthur Eubanks ba9b4ea4ee Revert "[NewPM][opt] Run the "default" AA pipeline by default"
This reverts commit be611431cd.

Other/new-pm-lto-defaults.ll failing
2021-01-21 20:16:34 -08:00
Arthur Eubanks be611431cd [NewPM][opt] Run the "default" AA pipeline by default
We tend to assume that the AA pipeline used by default is the "default" AA
pipeline, and it is confusing when it is empty instead.

PR48779

Reviewed By: asbirlea

Differential Revision: https://reviews.llvm.org/D95117
2021-01-21 19:46:38 -08:00
Nikita Popov 65fd034b95 [FunctionAttrs] Infer willreturn for functions without loops
If a function doesn't contain loops and does not call non-willreturn
functions, then it is willreturn. Loops are detected by checking
for backedges in the function. We don't attempt to handle finite
loops at this point.

Differential Revision: https://reviews.llvm.org/D94633
2021-01-21 20:29:33 +01:00
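A hypothetical example of the inference (function names are illustrative): no backedges and only willreturn callees, so FunctionAttrs can now add willreturn.

```
declare void @callee() nounwind willreturn

; @straight_line has no backedges and only calls willreturn functions,
; so FunctionAttrs can now infer willreturn for it.
define void @straight_line(i1 %c) {
entry:
  br i1 %c, label %then, label %join
then:
  call void @callee()
  br label %join
join:
  ret void
}
```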
Mindong Chen 5d718374a6 [SCEV] Add a test with wrong exit counts. (NFC)
This patch pre-commits a test case with wrong exit count
analysis for D92367.

Reviewed by: mkazantsev

Differential Revision: https://reviews.llvm.org/D94657
2021-01-20 20:58:34 +08:00
Juneyoung Lee 4479c0c2c0 Allow nonnull/align attribute to accept poison
Currently LLVM relies on ValueTracking's `isKnownNonZero` to attach `nonnull`, which can return true when the value is poison.
To make the semantics of `nonnull` consistent with the behavior of `isKnownNonZero`, this changes the semantics of `nonnull` to accept poison, and to yield poison if the input pointer is null.
This makes many transformations like the one below legal:

```
%p = gep inbounds %x, 1 ; %p is a non-null pointer or poison
call void @f(%p)        ; instcombine converts this to call void @f(nonnull %p)
```

On the other hand, this semantics makes propagating `nonnull` to a caller illegal.
The reason is that passing poison to a `nonnull` argument no longer immediately raises UB, so such a program is still well defined if the callee does not use the argument.
Having the `noundef` attribute there re-allows this propagation.

```
define void @f(i8* %p) {       ; FunctionAttrs cannot mark %p nonnull here anymore
  call void @g(i8* nonnull %p) ; ... because @g never raises UB if it never uses %p.
  ret void
}
```

Another attribute that needs to be updated is `align`. This patch updates the semantics of align to accept poison as well.

Reviewed By: jdoerfert

Differential Revision: https://reviews.llvm.org/D90529
2021-01-20 11:31:23 +09:00
Jeroen Dobbelaere 121cac01e8 [noalias.decl] Look through llvm.experimental.noalias.scope.decl
Just like llvm.assume, there are a lot of cases where we can just ignore llvm.experimental.noalias.scope.decl.

Reviewed By: nikic

Differential Revision: https://reviews.llvm.org/D93042
2021-01-19 20:09:42 +01:00
David Green 6a563eef13 [ARM] Expand vXi1 VSELECT's
We have no lowering for VSELECT vXi1, vXi1, vXi1, so mark them as
expanded to turn them into a series of logical operations.

Differential Revision: https://reviews.llvm.org/D94946
2021-01-19 17:56:50 +00:00
David Green f373b30923 [ARM] Add MVE add.sat costs
This adds some basic MVE sadd_sat/ssub_sat/uadd_sat/usub_sat costs,
based on whether the instruction is legal. With smaller-than-legal types
that are promoted we generate shr(qadd(shl, shl)), so a cost of 4 is
appropriate.

Differential Revision: https://reviews.llvm.org/D94958
2021-01-19 15:38:46 +00:00
David Green 54e38440e7 [ARM] Expand add.sat/sub.sat cost checks. NFC 2021-01-19 15:06:06 +00:00
Caroline Concatto 172f1f8952 [AArch64][SVE] Add cost model for vector reduce for scalable vector
This patch computes the cost for vector.reduce<operand> for scalable vectors.
The cost is split into two parts: the legalization cost and the cost of the
horizontal reduction itself.

Differential Revision: https://reviews.llvm.org/D93639
2021-01-19 11:54:16 +00:00
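For reference, a hypothetical reduction over a scalable vector of the kind this now costs:

```
declare i32 @llvm.vector.reduce.add.nxv4i32(<vscale x 4 x i32>)

define i32 @reduce(<vscale x 4 x i32> %v) {
  ; Cost = cost of legalizing <vscale x 4 x i32> for the target
  ;        + cost of the horizontal reduction itself.
  %sum = call i32 @llvm.vector.reduce.add.nxv4i32(<vscale x 4 x i32> %v)
  ret i32 %sum
}
```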
David Green dcefcd51e0 [ARM] Update trunc costs
We did not have specific costs for larger-than-legal truncates that were
not otherwise cheap (where they were next to stores, for example). As
MVE does not have a dedicated instruction for them (and we do not use
loads/stores yet), they should be expensive as they get expanded to a
series of lane moves.

Differential Revision: https://reviews.llvm.org/D94260
2021-01-11 08:59:28 +00:00
David Green 0c8b748f32 [ARM] Additional trunc cost tests. NFC 2021-01-11 08:35:16 +00:00
David Blaikie 4a3c2ba890 Fix print-dot-ddg.ll so it doesn't try to write to the source tree (& uses the test temp paths instead) 2021-01-07 19:57:14 -08:00
Bardia Mahjour 573d578248 [DDG] Data Dependence Graph - DOT printer tests
Adds some tests to check the formatting of the dot
file produced when using -dot-ddg.

Reviewed By: Meinersbur

Differential Revision: https://reviews.llvm.org/D93949
2021-01-07 10:51:14 -05:00
Caroline Concatto 01c190e907 [AArch64][CostModel] Fix gather scatter cost model
This patch fixes a bug introduced in the patch:
https://reviews.llvm.org/D93030

This patch moves the check for scalable vectors so that it is performed first.
This prevents the AArch64 gather/scatter cost model from computing the number
of vector elements for something that is not a vector, which previously
crashed.
2021-01-07 14:02:08 +00:00
Nikita Popov f6f6f6375d [BasicAA] Fix BatchAA results for phi-phi assumptions
Change the way NoAlias assumptions in BasicAA are handled. Instead of
handling this inside the phi-phi code, always initially insert a
NoAlias result into the map and keep track whether it is used.
If it is used, then we require that we also get back NoAlias from
the recursive queries. Otherwise, the entry is changed to MayAlias.

Additionally, keep track of all location pairs we inserted that may
still be based on assumptions higher up. If it turns out one of those
assumptions is incorrect, we flush them from the cache.

The compile-time impact for the new implementation is significantly
higher than the previous iteration of this patch:
https://llvm-compile-time-tracker.com/compare.php?from=c0bb9859de6991cc233e2dedb978dd118da8c382&to=c07112373279143e37568b5bcd293daf81a35973&stat=instructions
However, it should avoid the exponential runtime cases we run into
if we don't cache assumption-based results entirely.

This also produces better results in some cases, because NoAlias
assumptions can now start at any root, rather than just phi-phi pairs.
This is not just relevant for analysis quality, but also for BatchAA
consistency: Otherwise, results would once again depend on query order,
though at least they wouldn't be wrong.

This ended up both more complicated and more expensive than I hoped,
but I wasn't able to come up with another solution that satisfies all
the constraints.

Differential Revision: https://reviews.llvm.org/D91936
2021-01-06 22:15:30 +01:00
Peter Waller 3e357ecd44 [llvm][NFC] Disallow all warnings in TypeSize tests
This is a follow-up to a request from a reviewer [0]. The text may change in
the future and these tests should not produce any warning output.

[0] https://reviews.llvm.org/D91806#inline-879243

Reviewed By: sdesmalen, david-arm

Differential Revision: https://reviews.llvm.org/D94161
2021-01-06 17:17:07 +00:00
Whitney Tsang c005518936 [LoopNest] Allow empty basic blocks without loops
Allow loop nests that contain empty basic blocks without loops at different
levels to still be considered perfect.

Reviewers: Meinersbur

Differential Revision: https://reviews.llvm.org/D93665
2021-01-05 15:09:38 +00:00
Whitney Tsang de6d43f16c Revert "[LoopNest] Allow empty basic blocks without loops"
This reverts commit 9a17bff4f7.
2021-01-04 20:42:21 +00:00
Whitney Tsang 9a17bff4f7 [LoopNest] Allow empty basic blocks without loops
Allow loop nests that contain empty basic blocks without loops at different
levels to still be considered perfect.

Reviewers: Meinersbur

Differential Revision: https://reviews.llvm.org/D93665
2021-01-04 19:59:50 +00:00
Caroline Concatto 060cfd9795 [AArch64][SVE] Add cost model for masked gather and scatter for scalable vector.
A new TTI interface, 'Optional<unsigned> getMaxVScale()', has been added that
returns the maximum vscale for a given target. When known, getMaxVScale is used
to compute the cost of masked gather and scatter for scalable vectors.

Depends on D92094

Differential Revision: https://reviews.llvm.org/D93030
2021-01-04 13:59:58 +00:00
Gil Rapaport d9c0b128e3 [SCEV] Simplify trunc to zero based on known bits
Let getTruncateExpr() short-circuit to zero when the value being truncated is
known to have at least as many trailing zeros as the width of the target type.

Differential Revision: https://reviews.llvm.org/D93973
2021-01-03 13:57:12 +02:00
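Roughly the pattern this enables (illustrative IR): the truncated value carries at least 32 trailing zero bits, so the SCEV of the i32 trunc folds to zero.

```
define i32 @trunc_is_zero(i64 %a) {
  ; %x has at least 32 trailing zeros, so SCEV now folds the trunc to 0.
  %x = shl i64 %a, 32
  %t = trunc i64 %x to i32
  ret i32 %t
}
```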
Florian Hahn 4a17b9a39b [LAA] Add tests with uncomputable BTCs. 2021-01-01 13:57:02 +00:00
Juneyoung Lee 509fa8e02e [SCEV] recognize logical and/or pattern
This patch makes SCEV recognize 'select A, B, false' and 'select A, true, B'.
This is a performance improvement that will be helpful after the unsound select -> and/or transformation is removed, as discussed in D93065.

SCEV's answers for the select form should be a bit more conservative than the equivalent `and A, B` / `or A, B`.
Take this example: https://alive2.llvm.org/ce/z/NsP9ue .
To check whether it is valid for SCEV's computeExitLimit to return min(n, m) as the ExactNotTaken value, I put llvm.assume at tgt.
It fails because the exit limit becomes poison if n is zero and m is poison. This is problematic if e.g. the exit value of i is replaced with min(n, m).
If either n or m is constant, we can revive the analysis again. I added relevant tests and put alive2 links there.

If and is used instead, this is okay: https://alive2.llvm.org/ce/z/K9rbJk . Hence the existing analysis is sound.

Reviewed By: nikic

Differential Revision: https://reviews.llvm.org/D93882
2021-01-01 04:37:57 +09:00
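A sketch of the exit-condition shape this covers (hypothetical IR): the poison-safe select form of `and` that remains once the unsound select -> and/or transform is gone.

```
define void @latch_select(i64 %n, i64 %m) {
entry:
  br label %loop

loop:
  %iv = phi i64 [ 0, %entry ], [ %iv.next, %loop ]
  %iv.next = add nuw nsw i64 %iv, 1
  %c1 = icmp ult i64 %iv.next, %n
  %c2 = icmp ult i64 %iv.next, %m
  ; Poison-safe form of "and i1 %c1, %c2"; SCEV now recognizes it when
  ; computing exit limits, though more conservatively than a plain "and".
  %cont = select i1 %c1, i1 %c2, i1 false
  br i1 %cont, label %loop, label %exit

exit:
  ret void
}
```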
Juneyoung Lee c1f3033697 Update inselt tests at llvm/test/Analysis to have poison as shufflevector's placeholder (NFC)
Files listed by:

grep -R -E "^[^;]*shufflevector <.*> .*, <.*> undef" . | grep inseltpoison

Updated with:

sed -i -E 's/shufflevector <(.*)> (.*), <(.*)> undef/shufflevector <\1> \2, <\3> poison/g' $1
2020-12-31 17:12:37 +09:00
Nikita Popov dcd21572f9 [ValueTracking] Fix isKnownNonEqual() with constexpr mul
Confusingly, BinaryOperator is not an Operator,
OverflowingBinaryOperator is... We were implicitly assuming that
the multiply is an Instruction here.

This fixes the assertion failure reported in
https://reviews.llvm.org/D92726#2472827.
2020-12-28 18:32:57 +01:00
Arthur Eubanks 8791949f55 [test] Pin some tests to legacy PM
These all have NPM RUN lines.
2020-12-26 13:46:02 -08:00
Nikita Popov c795dd1926 [BasicAA] Pass AC/DT to isKnownNonEqual()
This allows us to handle assumes etc in the recursive
isKnownNonZero() checks.
2020-12-25 18:29:20 +01:00
Nikita Popov a3614a31c4 [BasicAA] Pass context instruction to isKnownNonZero()
This allows us to handle additional cases like assumes.
2020-12-25 12:58:19 +01:00
Nikita Popov b96a6ea0a9 [BasicAA] Make sure context instruction is symmetric
D71264 started using a context instruction in a computeKnownBits()
call. However, if aliasing between two GEPs is checked, then the
choice of context instruction will be different for alias(GEP1, GEP2)
and alias(GEP2, GEP1), which is not supposed to happen.

Resolve this by remembering which GEP a certain VarIndex belongs to,
and use that as the context instruction. This makes the choice of
context instruction predictable and symmetric.

It should be noted that this choice of context instruction is
non-optimal (just like the previous choice): The AA query result is
only valid at points that are reachable from *both* instructions.
Using either one of them is conservatively correct, but a larger
context may also be valid to use.

Differential Revision: https://reviews.llvm.org/D93183
2020-12-25 11:35:46 +01:00
Juneyoung Lee 3036547248 Precommit analysis/etc tests for inselt poison placeholder
This adds tests in directories missing from https://reviews.llvm.org/rGdb7a2f347f132b3920415013d62d1adfb18d8d58
2020-12-24 12:14:24 +09:00
Evgeniy Brevnov 9fb074e7bb [BPI] Improve static heuristics for "cold" paths.
The current approach doesn't work well in cases where multiple paths are predicted to be "cold". By "cold" paths I mean those containing an "unreachable" instruction, a call marked with the 'cold' attribute, or the 'unwind' handler of an 'invoke' instruction. The issue is that heuristics are applied one by one until the first match and essentially ignore the relative hotness/coldness of other paths.

The new approach unifies processing of "cold" paths by assigning a predefined absolute weight to each block estimated to be "cold". Then we propagate these weights up/down the IR similarly to the existing approach. Finally, edge probabilities are set up based on the estimated block weights.

One important difference is how we propagate weight up. The existing approach propagates the same weight to all blocks that are post-dominated by a block with some "known" weight. This is useless at least because it always gives a 50/50 distribution, which is assumed by default anyway. Worse, it causes the algorithm to skip further heuristics and can miss setting a more accurate probability. The new algorithm propagates the weight up only to blocks that dominate and are post-dominated by a block with some "known" weight; in other words, blocks that are either always executed or not executed together.

In addition, the new approach processes loops in a uniform way as well. Essentially, loop exit edges are estimated as "cold" paths relative to back edges and should be considered uniformly with other coldness/hotness markers.

Reviewed By: yrouban

Differential Revision: https://reviews.llvm.org/D79485
2020-12-23 22:47:36 +07:00
Nikita Popov 82bd64fff6 [AA] byval argument is identified function local
byval arguments should mostly get the same treatment as noalias
arguments in alias analysis. This was not the case for the
isIdentifiedFunctionLocal() function.

Marking byval arguments as identified function local means that
they cannot alias with other arguments, which I believe is correct.

Differential Revision: https://reviews.llvm.org/D93602
2020-12-21 20:18:23 +01:00
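A hypothetical illustration: with byval treated as an identified function-local object, BasicAA can conclude that the two stores below do not alias.

```
define void @byval_vs_arg(i32* byval(i32) %b, i32* %q) {
  ; %b is a fresh, function-local copy of the caller's memory, so it cannot
  ; alias the unrelated pointer argument %q.
  store i32 0, i32* %b
  store i32 1, i32* %q
  ret void
}
```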
Nikita Popov bfa95b4ac7 [BasicAA] Add test for byval argument (NFC) 2020-12-20 21:58:22 +01:00
Florian Hahn a74941da71 Revert "[BasicAA] Handle two unknown sizes for GEPs"
Temporarily revert commit 8b1c4e310c.

After 8b1c4e310c the compile-time for `MultiSource/Benchmarks/MiBench/consumer-lame`
dramatically increases with -O3 & LTO, causing issues for builders with
that configuration.

I filed PR48553 with a smallish reproducer that shows a 10-100x compile
time increase.
2020-12-18 17:59:12 +00:00
Bradley Smith e0b9c5df26 [CostModel] Add costs for llvm.experimental.vector.{extract,insert} intrinsics
Adds cost model support for the new llvm.experimental.vector.{extract,insert}
intrinsics, using the existing getExtractSubvectorOverhead and
getInsertSubvectorOverhead functions for shuffles.

Previously this case would trigger an assertion failure.

Differential Revision: https://reviews.llvm.org/D93043
2020-12-16 13:39:04 +00:00
Caroline Concatto 60e4698b9a [CostModel] Replace FixedVectorType by VectorType in getIntrinsicInstrCost
This patch replaces FixedVectorType by VectorType in getIntrinsicInstrCost
in BasicTTIImpl.h. It rearranges the scalable-type check to return earlier
and adds tests for scalable types.

Depends on D91532

Differential Revision: https://reviews.llvm.org/D92094
2020-12-16 13:06:23 +00:00
Nikita Popov bb939ebfd7 [BasicAA] Handle known non-zero variable index
BasicAA currently handles cases like Scale*V0 + (-Scale)*V1 where
V0 != V1, but does not handle the simpler case of Scale*V with
V != 0. Add it based on an isKnownNonZero() call.

I'm not passing a context instruction for now, because the existing
approach of always using GEP1 for context could result in symmetry
issues.

Differential Revision: https://reviews.llvm.org/D93162
2020-12-13 13:20:05 +01:00
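A hypothetical example in the spirit of the change: the index is known non-zero (its low bit is forced to one), so the two accesses below cannot overlap.

```
define void @nonzero_index(i8* %p, i64 %j) {
  ; %i != 0, so %gep is a non-zero byte offset away from %p and the
  ; two single-byte accesses cannot alias.
  %i = or i64 %j, 1
  %gep = getelementptr i8, i8* %p, i64 %i
  store i8 0, i8* %gep
  store i8 1, i8* %p
  ret void
}
```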
David Green a4823377fd [ARM] Add basic masked load/store costs
This adds some basic MVE masked load/store costs, notably changing the
cost of legal loads/stores to the MVECostFactor and the cost of
scalarized instructions to 8*NumElts.

Differential Revision: https://reviews.llvm.org/D86538
2020-12-12 15:26:32 +00:00