Commit Graph

29820 Commits

Author SHA1 Message Date
Philip Reames 319265328c [SLP] Remove field unused after 33ce97f to silence buildbots [NFC] 2022-02-27 10:18:10 -08:00
Florian Hahn ff93260bf6
Revert "[VPlan] Introduce recipe to build scalar steps."
This reverts commit 49b23f451c.

This appears to break some PPC build bots. Revert while I investigate.
2022-02-27 17:51:19 +00:00
Philip Reames 33ce97f413 [SLP] Use BatchAA to reduce capture analysis cost [NFC]
SLP makes very heavy use of aliasing queries to construct pointer dependencies for scheduling purposes.  AA internally uses pointerMayBeCaptured to prove some noalias results.  In a local profile, we were spending about 4% of total O2 time in capture tracking.  By using the BatchAA interface - which caches capture results - this drops to 2%.

Note that there is no invalidation of BatchAA here.  This assumes that no transformation done by SLP invalidates alias or capture results.  This is the same assumption made by the existing AliasCache, so this is not a new assumption in the code.
2022-02-27 09:47:24 -08:00
Florian Hahn 49b23f451c
[VPlan] Introduce recipe to build scalar steps.
This patch adds a new VPScalarIVStepsRecipe to handle building scalar
steps.

In the first patch, it only handles the case where there is no vector
induction variable needed.

Reviewed By: Ayal

Differential Revision: https://reviews.llvm.org/D115953
2022-02-27 17:32:41 +00:00
Florian Hahn 9bc866cc6f
[VPlan] Add recipe to handle SCEV expansion (NFC).
This can be used to explicitly model VPValues that depend on SCEV
expansion, like the step for inductions.

Reviewed By: Ayal

Differential Revision: https://reviews.llvm.org/D116288
2022-02-27 12:47:02 +00:00
Florian Hahn da740492b0
[VPlan] Remove dead header-phi recipes.
This patch adds a new transform to remove dead recipes. For now, it only
removes dead recipes in the header, to keep the number of tests that require
updating manageable. Future patches will extend this to remove dead
recipes across the whole plan.

Reviewed By: Ayal

Differential Revision: https://reviews.llvm.org/D118051
2022-02-26 16:26:39 +00:00
Craig Topper 1b1f8d6eff [SeparateConstOffsetFromGEP] Remove TargetMachine.h include. NFC
This doesn't appear to be used and it would be a layering violation
if it was.
2022-02-25 21:40:00 -08:00
Evgeniy Brevnov 10e99eb7e4 [SLP] "Normal" instructions should not go between PHI and Landing pad
Currently, SLP can insert a "shuffle" instruction between a PHI and a landing pad instruction. The problem is demonstrated by a LIT test. The solution is to adjust the insertion point once we are done with PHI generation.

Differential Revision: https://reviews.llvm.org/D120552
2022-02-26 11:44:26 +07:00
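For context, a minimal sketch (assumed example, not from the patch or its test) of the invariant involved: in a landing-pad block the landingpad must be the first non-PHI instruction, so any shuffle SLP emits has to be inserted after it rather than between the PHIs and the landingpad.
```
declare i32 @__gxx_personality_v0(...)
declare void @may_throw()

define <2 x double> @f(<2 x double> %a, <2 x double> %b) personality ptr @__gxx_personality_v0 {
entry:
  invoke void @may_throw()
          to label %cont unwind label %lpad
cont:
  ret <2 x double> %a
lpad:
  ; PHIs first, then the landingpad: it must be the first non-PHI
  ; instruction, so a vectorizer-built shuffle may only go below it.
  %phi = phi <2 x double> [ %b, %entry ]
  %lp = landingpad { ptr, i32 }
          cleanup
  %shuf = shufflevector <2 x double> %phi, <2 x double> poison, <2 x i32> <i32 1, i32 0>
  ret <2 x double> %shuf
}
```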
Nikita Popov e7fb1c15cb [MergeICmps] Don't require GEP
With opaque pointers, the zero-offset load will generally not use
a GEP. Allow a direct load without GEP, which is treated the same
way as a zero-offset GEP.
2022-02-25 17:38:02 +01:00
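An illustrative fragment (assumed, not taken from the patch) of the access shape that changes with opaque pointers; the full MergeICmps pattern, a chain of field-by-field equality compares, is omitted.
```
; Typed pointers: the first field was reached through a zero-offset GEP.
;   %ga = getelementptr inbounds %S, %S* %a, i64 0, i32 0
;   %va = load i32, i32* %ga
; Opaque pointers: the same zero-offset access is a plain load.
define i1 @first_field_eq(ptr %a, ptr %b) {
  %va = load i32, ptr %a
  %vb = load i32, ptr %b
  %eq = icmp eq i32 %va, %vb
  ret i1 %eq
}
```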
Simon Pilgrim 3b422455dd [IPO] AAFunctionReachabilityFunction.updateImpl - reduce AAReachability scope. NFCI.
We already have a check for !InstQueries.empty(), so move the for-range loop over InstQueries inside it to avoid static analysis warnings about the uninitialized AAReachability variable.
2022-02-25 14:42:31 +00:00
Nikita Popov 4736e57199 [IndVars] Use phis() (NFC) 2022-02-25 12:08:12 +01:00
Nikita Popov e1608a9df8 [InstCombine] Remove SPF min/max canonicalization
Now that we canonicalize SPF min/max to intrinsics, there's no
need to canonicalize the structure of the SPF min/max itself
anymore. This is conceptually NFC, but in practice does slightly
impact results due to folding order differences.
2022-02-25 11:24:09 +01:00
Nikita Popov 16a2d5f885 [SCEVExpander] Use early returns in FindValueInExprValueMap() (NFC) 2022-02-25 10:09:16 +01:00
Nikita Popov 2d0fc3e46f [SCEV] Return ArrayRef from getSCEVValues() (NFC)
Return a read-only view on this set. For the one internal use,
directly access ExprValueMap.
2022-02-25 09:32:22 +01:00
Nikita Popov d9715a7266 [SCEV] Don't try to reuse expressions with offset
SCEVs ExprValueMap currently tracks not only which IR Values
correspond to a given SCEV expression, but additionally stores that
it may be expanded in the form X+Offset. In theory, this allows
reusing existing IR Values in more cases.

In practice, this doesn't seem to be particularly useful (the test
changes are rather underwhelming) and adds a good bit of complexity.
Per https://github.com/llvm/llvm-project/issues/53905, we have an
invalidation issue with these offseted expressions.

Differential Revision: https://reviews.llvm.org/D120311
2022-02-25 09:16:48 +01:00
Anton Afanasyev 904a00d17a [AggressiveInstCombine] Fix `TruncInstCombine` (fix f84d732f)
Erase phi nodes from `InstInfoMap` before erasing the nodes themselves
2022-02-25 08:04:11 +03:00
Anton Afanasyev 0dd8401371 [AggressiveInstCombine] Add `phi` nodes support to `TruncInstCombine`
Expand `TruncInstCombine` to handle loops by adding `phi` nodes
to the expression graph.

Reviewed by: RKSimon, lebedev.ri

(recommit of fixed f84d732f, reverted by 8ad6d5e after sanitizer breakage)

Differential Revision: https://reviews.llvm.org/D109817
2022-02-25 07:57:35 +03:00
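A sketch of the kind of loop this enables (hypothetical example, not from D109817): the widened i32 accumulator is only ever consumed as i16, so with phi support the whole expression graph, including the phi, can be narrowed.
```
define i16 @sum16(ptr %p, i32 %n) {
entry:
  br label %loop
loop:
  %iv  = phi i32 [ 0, %entry ], [ %iv.next, %loop ]
  %acc = phi i32 [ 0, %entry ], [ %acc.next, %loop ]
  %gep = getelementptr inbounds i16, ptr %p, i32 %iv
  %v   = load i16, ptr %gep
  %ext = zext i16 %v to i32
  %acc.next = add i32 %acc, %ext
  %iv.next  = add i32 %iv, 1
  %cmp = icmp slt i32 %iv.next, %n
  br i1 %cmp, label %loop, label %exit
exit:
  ; the only use of the widened accumulator is this trunc
  %res = trunc i32 %acc.next to i16
  ret i16 %res
}
```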
Vasileios Porpodas 4bbc3290a2 [SLP] Fix for the min/max intrinsic cost.
The min/max intrinsic cost is currently too low because in the cost calculation
we subtract the cost of the vector compare as we will not emit it.
For the cost of the vector compare we are currently passing BAD_ICMP_PREDICATE
which returns 3, the worst case cost.
I think we should be passing VecPred instead, since we know the predicates of
the compare instruction.

I think this is related to commit b3b993a7ad which introduced the predicate
argument to getCmpSelInstrCost().
https://reviews.llvm.org/rGb3b993a7ad817c3c5801341fa78f34332900eb83

Differential Revision: https://reviews.llvm.org/D120439
2022-02-24 18:08:40 -08:00
Joseph Huber 7aef8b3754 [OpenMP] Make section variable external to prevent collisions
Summary:
We use a section to embed offloading code into the host for later
linking. This is normally unique to the translation unit as it is thrown
away during linking. However, if the user performs a relocatable link
the sections will be merged and we won't be able to access the files
stored inside. This patch changes the section variables to have external
linkage and a name defined by the section name, so if two sections are
combined during linking we get an error.
2022-02-24 10:57:09 -05:00
Sanjay Patel 5379f76e63 [InstCombine] try harder to preserve 'nsz' in fneg-of-select transform
The corner case where 'nsz' needs to be removed is very narrow
as discussed here:
https://reviews.llvm.org/rG3cdd05e519dd

If the select condition is not undef, there's no problem with
propagating 'nsz':
https://alive2.llvm.org/ce/z/4GWJdq
2022-02-24 10:43:53 -05:00
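A minimal before/after sketch of the transform in question (assumed shape based on the description above; whether 'nsz' is kept follows the rule discussed in the message).
```
define float @fneg_of_select(i1 %c, float %x, float %y) {
  %sel = select i1 %c, float %x, float %y
  %neg = fneg nsz float %sel
  ret float %neg
}
; -> the fneg is sunk into both arms; the question settled here is when
;    'nsz' may ride along on the new select:
;  %neg.x = fneg float %x
;  %neg.y = fneg float %y
;  %r     = select nsz i1 %c, float %neg.x, float %neg.y
```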
Nikita Popov a266af7211 [InstCombine] Canonicalize SPF to min/max intrinsics
Now that integer min/max intrinsics have good support in both
InstCombine and other passes, start canonicalizing SPF min/max
to intrinsic min/max.

Once this sticks, we can stop matching SPF min/max in various
places, and can remove hacks we have for preventing infinite loops
and breaking of SPF canonicalization.

Differential Revision: https://reviews.llvm.org/D98152
2022-02-24 09:01:20 +01:00
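A minimal before/after sketch of the canonicalization (illustrative example, not a test from the patch).
```
define i32 @smax_spf(i32 %a, i32 %b) {
  ; select-of-compare form of signed max (an "SPF" max)
  %cmp = icmp sgt i32 %a, %b
  %max = select i1 %cmp, i32 %a, i32 %b
  ret i32 %max
}
; -> after canonicalization the pattern becomes the intrinsic:
;  %max = call i32 @llvm.smax.i32(i32 %a, i32 %b)
```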
Nikita Popov aa551ad198 Revert "[InstCombine] Remove one-use limitation from X-Y==0 fold"
This reverts commit 65dc78d63e.

This caused a major code-size regression on tramp3d-v4; reverting
until I can investigate.
2022-02-24 08:50:40 +01:00
Matthias Braun 6a383369f9 PGOInstrumentation, GCOVProfiling: Split indirectbr critical edges regardless of PHIs
The `SplitIndirectBrCriticalEdges` function was originally designed for
`CodeGenPrepare` and skipped splitting of edges when the destination
block didn't contain any `PHI` instructions. This only makes sense when
reducing COPYs like `CodeGenPrepare`. In the case of
`PGOInstrumentation` or `GCOVProfiling` it would result in missed
counters and wrong results in functions with computed goto.

Differential Revision: https://reviews.llvm.org/D120096
2022-02-23 16:27:37 -08:00
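A small assumed example of the situation described: the edge from the indirectbr to %bb1 is critical and %bb1 has no PHIs, so the pre-patch SplitIndirectBrCriticalEdges skipped it and profiling had no block in which to place the edge counter.
```
define void @computed_goto(ptr %addr, i1 %c) {
entry:
  ; entry -> bb1 is a critical edge: the indirectbr has two successors
  ; and bb1 has two predecessors, yet bb1 contains no PHI nodes.
  indirectbr ptr %addr, [label %bb1, label %bb2]
bb1:
  ret void
bb2:
  br i1 %c, label %bb1, label %bb3
bb3:
  ret void
}
```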
minglotus-6 142cedc283 [SampleProf][Inliner] Add an option to turn off inliner in sample-profile pass.
Use case is offline evaluation (for inliner effectiveness) or debugging.

Differential Revision: https://reviews.llvm.org/D120344
2022-02-23 14:21:33 -08:00
Philip Reames ed54296ea3 [SLP] Fastpath instructions not in block being scheduled [nfc] 2022-02-23 13:51:36 -08:00
Philip Reames a4541fdfe4 [SLP] Replace an impossible branch condition with an assert [NFC]
An entire bundle must be inside the scheduling window.  Assert that this property holds as opposed to checking it at runtime.
2022-02-23 13:43:45 -08:00
Philip Reames 9a40f9f681 [SLP] Make it clear ScheduleDataMap is keyed by instructions [NFC] 2022-02-23 13:31:36 -08:00
Philip Reames 9392c0d4ef Revert "[SLP] Remove cap on schedule window size"
This reverts commit 6adf4b039e.  Reverting while investigating https://github.com/llvm/llvm-project/issues/54029
2022-02-23 13:12:07 -08:00
Philip Reames a83441e8cd Revert "[SLP] Simplify extendSchedulingRegion"
This reverts commit 8c85f3a052.
2022-02-23 13:12:07 -08:00
Philip Reames 222e8610f1 [SLP] Rearrange fields in ScheduleData for density [NFC] 2022-02-23 12:33:43 -08:00
Philip Reames a3e9b32c00 [SLP] Remove SchedulingPriority from ScheduleData [NFC]
First step in trying to shrink the memory footprint of ScheduleData to improve cache locality.
2022-02-23 11:43:46 -08:00
Philip Reames 8c85f3a052 [SLP] Simplify extendSchedulingRegion
This change uses Instruction's comesBefore method to simplify the code significantly. There's little compile time concern here because getSpillCost already calls comesBefore on every basic block which contains a vectorization candidate. The only additional time we'll build basic block ordering is when we can't schedule a vector candidate anywhere in the containing block.

Differential Revision: https://reviews.llvm.org/D120364
2022-02-23 11:23:38 -08:00
Augie Fackler 95f3cc222a AttributorAttributes: avoid crashing on bad alignments
Prior to this change, LLVM would attempt to optimize an
aligned_alloc(33, ...) call to the stack. This flunked an assertion when
trying to emit the alloca, which crashed LLVM. Avoid that with extra
checks.

Differential Revision: https://reviews.llvm.org/D119604
2022-02-23 14:21:02 -05:00
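A sketch of the problematic input in IR form (hypothetical function; the C-level shape aligned_alloc(33, ...) is taken from the message above).
```
declare ptr @aligned_alloc(i64, i64)
declare void @free(ptr)

define void @bad_alignment() {
  ; 33 is not a power of two, so this call must not be turned into an
  ; alloca with alignment 33; heap-to-stack now skips it instead of
  ; tripping the power-of-two assertion.
  %p = call ptr @aligned_alloc(i64 33, i64 128)
  call void @free(ptr %p)
  ret void
}
```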
Arthur Eubanks 1fd980de04 Revert "AttributorAttributes: avoid crashing on bad alignments"
This reverts commit 70ff6fbeb9.

Breaks bots, e.g. http://45.33.8.238/linux/69375/step_12.txt.
2022-02-23 09:08:03 -08:00
Augie Fackler 70ff6fbeb9 AttributorAttributes: avoid crashing on bad alignments
Prior to this change, LLVM would attempt to optimize an
aligned_alloc(33, ...) call to the stack. This flunked an assertion when
trying to emit the alloca, which crashed LLVM. Avoid that with extra
checks.

Differential Revision: https://reviews.llvm.org/D119604
2022-02-23 11:46:15 -05:00
Philip Reames 6adf4b039e [SLP] Remove cap on schedule window size
This cap was first added in 848c1aa45 (back in 2015).  Per the original commit message, the purpose was to avoid a compile time explosion in long basic blocks.  The algorithmic problem in scheduling has now been fixed in 0539a26d.

In the meantime, the code has rotten fairly badly.  Some intermediate refactoring caused the size to only be incremented if *both* iterators advance in the window search.  This causes the size to be badly undercounted when near one end of a basic block.  We no longer have any test which exercises the logic in an intentional way; there's one test which differs with this change, but the changes appear fairly orthogonal to the purpose of the test file.

Unfortunately, we no longer have the original motivating example, so it's possible that it also hits some other issue.  I tested locally with a large example, but even at its worst, that one doesn't demonstrate anything too extreme even without the algorithmic fix.  It's clearly faster with the fix, but only by ~20%, which doesn't seem in line with the original commit message.   If regressions with this patch are seen, please file a bug and I'll try to fix any other algorithmic problems which fall out.
2022-02-23 08:27:45 -08:00
Nikita Popov 587c7ff15c [InstCombine] Support min/max intrinsics in udiv->lshr fold
This complements the existing fold for selects. This fold is a bit
more conservative, requiring one-use. The other folds here should
probably also be subjected to a one-use restriction.

https://alive2.llvm.org/ce/z/Q9eCDU
https://alive2.llvm.org/ce/z/8YK2CJ
2022-02-23 15:51:36 +01:00
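A hedged sketch of the new pattern (assumed example built from the description and the alive2 links; the exact matching conditions, including the one-use restriction, live in the fold itself).
```
declare i32 @llvm.umax.i32(i32, i32)

define i32 @udiv_umax_pow2(i32 %x, i32 %a, i32 %b) {
  ; both umax operands are known powers of two
  %p1 = shl nuw i32 1, %a
  %p2 = shl nuw i32 1, %b
  %m  = call i32 @llvm.umax.i32(i32 %p1, i32 %p2)
  %d  = udiv i32 %x, %m
  ret i32 %d
}
; -> (sketch) the division becomes a shift by the log2 of the divisor:
;  %s = call i32 @llvm.umax.i32(i32 %a, i32 %b)
;  %d = lshr i32 %x, %s
```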
Nikita Popov 03e6efb8c2 [InstCombine] Further simplify udiv -> lshr folding
Rather than queuing up actions, have one function that does the
log2() fold in the obvious way, but with a flag that allows us
to check whether the fold will succeed without actually performing
it.
2022-02-23 15:29:21 +01:00
Nikita Popov 5ccb0582c2 [InstCombine] Simplify udiv -> lshr folding
What we're really doing here is converting Op0 udiv Op1 into
Op0 lshr log2(Op1), so phrase it in that way. Actually pushing
the lshr into the log2(Op1) expression should be seen as a separate
transform.
2022-02-23 14:55:23 +01:00
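For reference, a minimal instance of the Op0 udiv Op1 -> Op0 lshr log2(Op1) fold described above, using the long-standing select-of-constants case (illustrative example, not a test from the patch).
```
define i32 @udiv_select_pow2(i32 %x, i1 %c) {
  %divisor = select i1 %c, i32 16, i32 32
  %d = udiv i32 %x, %divisor
  ret i32 %d
}
; -> the log2 is pushed into the select:
;  %shift = select i1 %c, i32 4, i32 5
;  %d     = lshr i32 %x, %shift
```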
Anton Afanasyev 8ad6d5e465 Revert "[AggressiveInstCombine] Add `phi` nodes support to `TruncInstCombine`"
This reverts commit f84d732f8c.
This broke the "sanitizer-x86_64-linux-fast" bot.
2022-02-23 15:56:11 +03:00
Nikita Popov 5fb65557e3 [InstCombine] Remove unused visitUDivOperand() argument (NFC)
This function only works on the RHS operand.
2022-02-23 13:16:44 +01:00
Anton Afanasyev f84d732f8c [AggressiveInstCombine] Add `phi` nodes support to `TruncInstCombine`
Expand `TruncInstCombine` to handle loops by adding `phi` nodes
to the expression graph.

Reviewed by: RKSimon, lebedev.ri

Differential Revision: https://reviews.llvm.org/D109817
2022-02-23 14:01:55 +03:00
Nikita Popov e2f627e5e3 [InstCombine] Fold sub of umin to usub.sat
We were handling sub of umax, but not the conjugated umin case.

https://alive2.llvm.org/ce/z/4fdZfy
https://alive2.llvm.org/ce/z/BhUQBM
2022-02-23 12:00:34 +01:00
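A small sketch of the umin case now handled (assumed example matching the alive2 links above).
```
declare i32 @llvm.umin.i32(i32, i32)

define i32 @sub_of_umin(i32 %a, i32 %b) {
  ; a - umin(a, b) never wraps and equals the saturating difference
  %m = call i32 @llvm.umin.i32(i32 %a, i32 %b)
  %s = sub i32 %a, %m
  ret i32 %s
}
; -> %s = call i32 @llvm.usub.sat.i32(i32 %a, i32 %b)
```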
Bill Wendling a5bbc6ef99 [NFC] Remove unnecessary "#include"s from header files 2022-02-23 01:20:48 -08:00
Nikita Popov 65dc78d63e [InstCombine] Remove one-use limitation from X-Y==0 fold
This one-use limitation is artificial, we do not increase
instruction count if we perform the fold with multiple uses. The
motivating case is shown in @sub_eq_zero_select, where the one-use
limitation causes us to miss a subsequent select fold.

I believe the backend is pretty good about reusing flag-producing
subs for cmps with same operands, so I think doing this is fine.

Differential Revision: https://reviews.llvm.org/D120337
2022-02-23 09:37:30 +01:00
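A minimal sketch of the multi-use situation the relaxed fold targets (hypothetical example; the referenced @sub_eq_zero_select test is not reproduced here).
```
define i1 @sub_eq_zero_multiuse(i32 %x, i32 %y, ptr %p) {
  %sub = sub i32 %x, %y
  store i32 %sub, ptr %p        ; second use previously blocked the fold
  %cmp = icmp eq i32 %sub, 0
  ret i1 %cmp
}
; -> %cmp = icmp eq i32 %x, %y, while the sub remains for its other use,
;    so the instruction count does not increase.
```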
minglotus-6 f415d74d1d [SampleProfile] Handle the case when the option `MaxNumPromotions` is zero.
In places where `MaxNumPromotions` is used to allocate an array, bail out early to prevent allocating an array of length 0.

Differential Revision: https://reviews.llvm.org/D120295
2022-02-22 21:44:32 -08:00
Brendon Cahoon 3cc15e2cb6 [SLP] Fix assert from non-constant index in insertelement
A call to getInsertIndex() in getTreeCost() is returning None,
which causes an assert because a non-constant index value for
insertelement was not expected. This case occurs when the
insertelement index value is defined with a PHI.

Differential Revision: https://reviews.llvm.org/D120223
2022-02-22 15:57:14 -06:00
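An assumed illustration of an insertelement whose index is defined by a PHI, the shape for which getInsertIndex() returns None.
```
define <4 x float> @insert_with_phi_index(<4 x float> %v, float %x, i1 %c) {
entry:
  br i1 %c, label %left, label %right
left:
  br label %merge
right:
  br label %merge
merge:
  ; non-constant index: getInsertIndex() has no fixed position to return
  %idx = phi i32 [ 0, %left ], [ 1, %right ]
  %ins = insertelement <4 x float> %v, float %x, i32 %idx
  ret <4 x float> %ins
}
```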
Dmitry Vassiliev 90a3b31091 [Transforms] Enhance CorrelatedValuePropagation to handle both values of select
The "Correlated Value Propagation" pass was missing a case when handling select instructions. It was only handling the "false" constant value, while in NVPTX the select may have the condition (and thus the branches) inverted, for example:
```
loop:
	%phi = phi i32* [ %sel, %loop ], [ %x, %entry ]
	%f = tail call i32* @f(i32* %phi)
	%cmp1 = icmp ne i32* %f, %y
	%sel = select i1 %cmp1, i32* %f, i32* null
	%cmp2 = icmp eq i32* %sel, null
	br i1 %cmp2, label %return, label %loop
```
But the select condition can be inverted:
```
	%cmp1 = icmp eq i32* %f, %y
	%sel = select i1 %cmp1, i32* null, i32* %f
```
The fix is to enhance "Correlated Value Propagation" to handle both branches of the select instruction.

Reviewed By: nikic, lebedev.ri

Differential Revision: https://reviews.llvm.org/D119643
2022-02-23 00:11:20 +04:00
Philip Reames 8612b11c86 [SLP] Use isInSchedulingRegion consistently [NFC] 2022-02-22 10:27:16 -08:00
Philip Reames 0539a26d91 [SLP] Schedule only sub-graph of vectorizable instructions
SLP currently schedules all instructions within a scheduling window which stretches from the first instruction potentially vectorized to the last. This window can include a very large number of unrelated instructions which are not being considered for vectorization. This change switches the code to only schedule the sub-graph consisting of the instructions being vectorized and their transitive users.

This has the effect of greatly reducing the amount of work performed in large basic blocks, and thus greatly improves compile time on degenerate examples. To understand the effects, I added some statistics (not planned for upstream contribution). Here's an illustration from my motivating example:

Before this patch:

 704357 SLP                          - Number of calcDeps actions
 699021 SLP                          - Number of schedule calls
   5598 SLP                          - Number of ReSchedule actions
     59 SLP                          - Number of ReScheduleOnFail actions
  10084 SLP                          - Number of schedule resets
   8523 SLP                          - Number of vector instructions generated

After this patch:

 102895 SLP                          - Number of calcDeps actions
 161916 SLP                          - Number of schedule calls
   5637 SLP                          - Number of ReSchedule actions
     55 SLP                          - Number of ReScheduleOnFail actions
  10083 SLP                          - Number of schedule resets
   8403 SLP                          - Number of vector instructions generated

I do want to highlight that there is a small difference in the number of generated vector instructions. This example is hitting the bailout due to the maximum window size, and the change in scheduling slightly perturbs when and how we hit it. This can be seen in the ReScheduleOnFail counter change. Given that, I think we can safely ignore it.

The downside of this change can be seen in the large test diff. We group all vectorizable instructions together at the bottom of the scheduling region. This means that vector instructions can move quite far from their original point in code. While maybe undesirable, I don't see this as being a major problem as this pass is not intended to be a general scheduling pass.

For context, it's worth noting that the pre-scheduling that SLP does while building the vector tree is exactly the sub-graph scheduling implemented by this patch.

Differential Revision: https://reviews.llvm.org/D118538
2022-02-22 10:15:55 -08:00