Commit Graph

16320 Commits

Author SHA1 Message Date
Reid Kleckner 940d7aaea9 Port StripGCRelocates pass to NPM
Fixes one test under NPM

Differential Revision: https://reviews.llvm.org/D88766
2020-10-07 14:41:29 -07:00
Reid Kleckner da48fe1732 [NPM] Port strip nonlinetable debuginfo pass to the new pass manager
Fixes a few tests in llvm/test/Transforms/Utils.

Differential Revision: https://reviews.llvm.org/D88762
2020-10-07 14:35:36 -07:00
Simon Pilgrim fe0197e194 [InstCombine] Add checks for and(logicalshift(zext(x),undef),y) cases
Prep work before some cleanup in narrowMaskedBinOp
2020-10-07 20:59:31 +01:00
Florian Hahn a73166a452 [LAA] Use DL to get element size for bound computation.
Currently LAA uses getScalarSizeInBits to compute the size of an element
when computing the end bound of an access.

This does not work as expected for pointers to pointers, because
getScalarSizeInBits will return 0 for pointer types.

By using DataLayout to get the size of the element we can also correctly
handle pointer element types.
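
For illustration, a minimal (hypothetical) loop whose accessed element type
is itself a pointer: `getScalarSizeInBits(i16*)` returns 0, while DataLayout
reports the actual store size (8 bytes on a typical 64-bit target), which is
what the end-bound computation needs:

```
define void @copy_ptrs(i16** %dst, i16** %src, i64 %n) {
entry:
  br label %loop

loop:
  %iv = phi i64 [ 0, %entry ], [ %iv.next, %loop ]
  %src.gep = getelementptr inbounds i16*, i16** %src, i64 %iv
  %p = load i16*, i16** %src.gep
  %dst.gep = getelementptr inbounds i16*, i16** %dst, i64 %iv
  store i16* %p, i16** %dst.gep
  %iv.next = add nuw i64 %iv, 1
  %exitcond = icmp eq i64 %iv.next, %n
  br i1 %exitcond, label %exit, label %loop

exit:
  ret void
}
```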

Note the changes to the existing test, which seems to also use the wrong
offset for the end.

Fixes PR47751.

Reviewed By: anemet

Differential Revision: https://reviews.llvm.org/D88953
2020-10-07 18:57:07 +01:00
Amara Emerson 322d0afd87 [llvm][mlir] Promote the experimental reduction intrinsics to be first class intrinsics.
This change renames the intrinsics to not have "experimental" in the name.

The autoupgrader will handle legacy intrinsics.
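
As a rough illustration of the rename (the legacy mangling in the comment is
from memory and may not be exact), the `add` reduction moves out of the
`experimental` namespace:

```
; before: %r = call i32 @llvm.experimental.vector.reduce.add.v4i32(<4 x i32> %v)
declare i32 @llvm.vector.reduce.add.v4i32(<4 x i32>)

define i32 @reduce(<4 x i32> %v) {
  %r = call i32 @llvm.vector.reduce.add.v4i32(<4 x i32> %v)
  ret i32 %r
}
```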

Relevant ML thread: http://lists.llvm.org/pipermail/llvm-dev/2020-April/140729.html

Differential Revision: https://reviews.llvm.org/D88787
2020-10-07 10:36:44 -07:00
Nikita Popov 7a01fc5abe [MemCpyOpt] Add additional callslot test cases (NFC)
For cases where the destination is captured.
2020-10-07 18:06:29 +02:00
Roman Lebedev bef27e50b9 [NFC][InstCombine] Autogenerate a few tests being affected by upcoming patch 2020-10-07 19:00:08 +03:00
Philip Reames 14d5ee63e3 [Tests] Precommit test showing gap around load forwarding of vectors in instcombine 2020-10-07 08:57:24 -07:00
Roman Lebedev fed0f890e5 InstCombine: Negator: don't rely on complexity sorting already being performed (PR47752)
In some cases, we can negate an instruction if only one of its operands
needs to be negated. Previously, we assumed that constants would have been
canonicalized to the RHS already, but that isn't guaranteed to happen,
because of InstCombine worklist visitation order,
as the added (previously-hanging) test shows.

So if we only need to negate a single operand,
we should ensure that we try the constant operand first.
Do that by re-doing the complexity sorting ourselves,
when we actually care about it.
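
A rough sketch of the kind of pattern involved (illustrative, not the test
from this patch): the negation can be sunk into the multiply by negating its
constant operand, even when that constant has not yet been canonicalized to
the RHS:

```
define i8 @negate_mul_by_constant(i8 %x) {
  %m = mul i8 42, %x   ; constant still on the LHS
  %n = sub i8 0, %m
  ret i8 %n            ; -> mul i8 %x, -42
}
```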

Fixes https://bugs.llvm.org/show_bug.cgi?id=47752
2020-10-07 15:09:50 +03:00
Simon Pilgrim dce03e3059 [InstCombine] Tweak funnel by constant tests for better shl/lshr commutation coverage 2020-10-07 11:47:03 +01:00
Florian Hahn 20cfd5fa33 [LAA] Add test for PR47751, which currently uses wrong bounds. 2020-10-07 11:22:22 +01:00
Max Kazantsev 85a6f8fc96 [Test] Add one more test where we can avoid creating trunc 2020-10-07 15:06:38 +07:00
Roman Lebedev 7fa503ef4a [SROA] rewritePartition()/findCommonType(): if uses have conflicting type, try getTypePartition() before falling back to largest integral use type (PR47592)
Another step towards transforms not introducing inttoptr and/or
ptrtoint casts that weren't there already.

In this case, when load/store uses have conflicting types,
instead of falling back to iN, we can try to use the allocated sub-type.
As discussed, this isn't the best idea overall (we shouldn't rely on the
allocated type), but it works fine as a temporary measure.
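
A minimal sketch of the situation (hypothetical, not a test from this patch):
the first slice of the alloca is stored to as `i8*` but loaded as `i64`, so
the use types conflict; with this change SROA first tries the allocated
sub-type (`i8*`) instead of immediately falling back to `i64`:

```
define i64 @conflicting_slice_types(i8* %p) {
  %a = alloca { i8*, i32 }
  %fld = getelementptr inbounds { i8*, i32 }, { i8*, i32 }* %a, i32 0, i32 0
  store i8* %p, i8** %fld
  %raw = bitcast { i8*, i32 }* %a to i64*
  %v = load i64, i64* %raw
  ret i64 %v
}
```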

I've measured, and @ `-O3`, as of vanilla llvm test-suite + RawSpeed,
this results in +0.05% more bitcasts, 5.51% fewer inttoptr casts,
and 1.05% fewer ptrtoint casts (at the end of the middle-end opt pipeline).

See https://bugs.llvm.org/show_bug.cgi?id=47592

Reviewed By: efriedma

Differential Revision: https://reviews.llvm.org/D88788
2020-10-07 09:20:19 +03:00
Max Kazantsev 0c009e092e [Test] Add test showing that we can avoid inserting trunc/zext 2020-10-07 12:19:01 +07:00
Chen Zheng f05608707c [PowerPC] implement target hook getTgtMemIntrinsic
This patch lets passes recognize PowerPC-related memory intrinsics.

Reviewed By: steven.zhang

Differential Revision: https://reviews.llvm.org/D88373
2020-10-07 00:02:44 -04:00
Johannes Doerfert 7993d61177 [Attributor] Use smarter way to determine alignment of GEPs
Use the same logic existing in other places to deal with base-case GEPs.
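
A small illustrative case (hypothetical): with the base known to be `align 16`
and a constant byte offset of 4, the GEP result can be deduced to be at least
`align 4`:

```
define i32 @load_at_offset_4(i32* align 16 %p) {
  %q = getelementptr inbounds i32, i32* %p, i64 1   ; byte offset 4 from an align-16 base
  %v = load i32, i32* %q                            ; %q can be deduced align 4
  ret i32 %v
}
```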

Add the original Attributor talk example.
2020-10-06 19:31:08 -05:00
Johannes Doerfert c4cfe7a435 [Attributor] Ignore read accesses to constant memory
The old function attribute deduction pass ignores reads of constant
memory, and we need to copy this behavior to replace the pass completely.
The first step is constant globals. TBAA can also describe constant
accesses and there are other possibilities. We might want to consider
asking the alias analyses that are available, but for now this is simpler
and cheaper.
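
A minimal sketch of that first step (constant globals): the load below only
reads constant memory, so it should not block deducing the stronger memory
attribute for the function:

```
@g = constant i32 42

define i32 @only_reads_constant_global() {
  %v = load i32, i32* @g   ; read of constant memory can be ignored
  ret i32 %v
}
```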
2020-10-06 19:31:07 -05:00
Johannes Doerfert 3f540c05df [Attributor] Give up early on AANoReturn::initialize
If the function is not assumed `noreturn` we should not wait for an
update to mark the call site as "may-return".

This has two kinds of consequences:
  - We have fewer iterations in many tests.
  - We have fewer deductions based on "known information" (since we ask
    earlier, point 1, and therefore assumed information is not "known"
    yet).
The latter is an artifact that we might want to tackle properly at some
point but which is not easily fixable right now.
2020-10-06 19:31:07 -05:00
Nikita Popov 616f545048 [MemCpyOpt] Use dereferenceable pointer helper
The call slot optimization has some home-grown code for checking
whether the destination is dereferenceable. Replace this with the
generic isDereferenceableAndAlignedPointer() helper.

I'm not checking alignment here, because that is currently handled
separately and may be an enforced alignment for allocas. The clean
way of integrating that part would probably be to accept a callback
in isDereferenceableAndAlignedPointer() for the actual isAligned check,
which would then have a chance to use an enforced alignment instead.

This allows the destination to be a GEP (among other things), though
the two open TODOs may prevent it from working in practice.

Differential Revision: https://reviews.llvm.org/D88805
2020-10-06 18:41:19 +02:00
Nikita Popov 6b441ca523 [MemCpyOpt] Check for throwing calls during call slot optimization
When performing call slot optimization for a non-local destination,
we need to check whether there may be throwing calls between the
call and the copy. Otherwise, the early write to the destination
may be observable by the caller.
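
A rough sketch of the problematic shape (hypothetical names): if `@may_throw`
unwinds, the caller must not yet observe a write to `%dst`, so performing the
call-slot optimization (having `@fill` write to `%dst` directly) would be
unsound here:

```
declare void @fill(i8* nocapture)
declare void @may_throw()
declare void @llvm.memcpy.p0i8.p0i8.i64(i8*, i8*, i64, i1)

define void @caller(i8* noalias dereferenceable(16) %dst) {
  %tmp = alloca [16 x i8]
  %tmp.ptr = getelementptr inbounds [16 x i8], [16 x i8]* %tmp, i64 0, i64 0
  call void @fill(i8* %tmp.ptr)
  call void @may_throw()   ; an early write to %dst would be visible if this unwinds
  call void @llvm.memcpy.p0i8.p0i8.i64(i8* %dst, i8* %tmp.ptr, i64 16, i1 false)
  ret void
}
```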

This was already done for call slot optimization of load/store,
but not for memcpys. For the sake of clarity, I'm moving this check
into the common optimization function, even if that does need an
additional instruction scan for the load/store case.

As efriedma pointed out, this check is not sufficient due to
potential accesses from another thread. This case is left as a TODO.

Differential Revision: https://reviews.llvm.org/D88799
2020-10-06 18:24:40 +02:00
Dávid Bolvanský 86429c4eaf [SimplifyLibCalls] Optimize mempcpy_chk to mempcpy 2020-10-06 17:08:46 +02:00
Arthur Eubanks 8df17b4dc1 [test][InstCombine][NewPM] Fix InstCombine tests under NPM
Some of these depended on analyses being present that aren't provided
automatically in NPM.

early_dce_clobbers_callgraph.ll was previously inlining a noinline function?

cast-call-combine.ll relied on the legacy always-inline pass being a
CGSCC pass and getting rerun.

Reviewed By: asbirlea

Differential Revision: https://reviews.llvm.org/D88187
2020-10-06 07:39:00 -07:00
Johannes Doerfert 4a7a988442 [Attributor][FIX] Move assertion to make it not trivially fail
The idea of this assertion was to check the simplified value before we
assign it, not after, which caused this to trivially fail all the time.
2020-10-06 09:32:18 -05:00
Johannes Doerfert 04f6951397 [Attributor][FIX] Dead return values are not `noundef`
When we assume a return value is dead we might still visit return
instructions via `Attributor::checkForAllReturnedValuesAndReturnInsts(..)`.
When we do so the "returned value" is potentially simplified to `undef`
as it is the assumed "returned value". This is a problem if there was a
preexisting `noundef` attribute that will only be removed as we manifest
the `undef` return value. We should not use this combination to derive
`unreachable` though. Two test cases fixed.
2020-10-06 09:32:18 -05:00
Johannes Doerfert 957094e31b [Attributor][NFC] Ignore benign uses in AAMemoryBehaviorFloating
In AAMemoryBehaviorFloating we used to track benign uses in a SetVector.
With this change we look through benign uses eagerly to reduce the
number of elements (=Uses) we look at during an update.

The test does not actually fail prior to this commit, but I already wrote
it, so I kept it.
2020-10-06 09:32:18 -05:00
Mauri Mustonen cef0de5eb5 [VPlan] Add vplan native path vectorization test case for inner loop reduction
Regarding this bug I posted earlier: https://bugs.llvm.org/show_bug.cgi?id=47035

After reading through the LLVM source code and getting familiar with VPlan, I was able to vectorize the code by enabling the VPlan native path. After talking with @fhahn, he suggested that I contribute this as a test case. So here it is. I tried to follow the available guides on how to do this as best I could. I modified the IR code by hand to use clearer variable names instead of numbers.

One thing I'd like input on: are the current CHECK lines sufficient to verify that the inner loop has been vectorized properly?

Reviewed By: fhahn

Differential Revision: https://reviews.llvm.org/D87564
2020-10-06 10:11:58 +01:00
Johannes Doerfert ef48436e62 [AttributeFuncs] Consider `noundef` in `typeIncompatible`
Drop `noundef` for return values that are replaced by void and make it
illegal to put `noundef` on a void value.
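
A tiny sketch of the rule (hypothetical function names): `noundef` stays valid
on an actual return value but has to be dropped once the return type becomes
void:

```
define noundef i32 @returns_value() {
  ret i32 0
}
; define noundef void @returns_nothing()   ; now rejected by the verifier
```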

Reviewed By: fhahn

Differential Revision: https://reviews.llvm.org/D87306
2020-10-05 23:23:06 -05:00
Johannes Doerfert 2a078c3072 [AttributeFuncs] Consider `align` in `typeIncompatible`
Alignment attributes need to be dropped for non-pointer values.
This also introduces a check into the verifier to ensure you don't use
`align` on anything but a pointer. Test needed to be adjusted
accordingly.
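
Similarly, a tiny sketch (hypothetical names): `align` remains fine on
pointer-typed values but is now rejected elsewhere:

```
define align 8 i8* @returns_aligned_ptr(i8* align 8 %p) {
  ret i8* %p
}
; define align 8 i32 @returns_int()   ; now rejected by the verifier
```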

Reviewed By: fhahn

Differential Revision: https://reviews.llvm.org/D87304
2020-10-05 23:23:05 -05:00
Serguei Katkov b988898013 [GVN LoadPRE] Extend the scope of optimization by using context to prove safety of speculation
Use context to prove that a load can be safely executed at the point where it is being hoisted.

Postpone the decision about the safety of speculative load execution until we know
where we will hoist the load, and check safety in that context.

Reviewers: nikic, fhahn, mkazantsev, lebedev.ri, efriedma, reames
Reviewed By: reames, mkazantsev
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D88725
2020-10-06 09:25:16 +07:00
Mircea Trofin 36bb1fb1fe [MLInliner] Factor out logging
Factored out the logging facility, to allow its reuse outside the
inliner.

Differential Revision: https://reviews.llvm.org/D88770
2020-10-05 18:09:17 -07:00
Vedant Kumar 9afb1c566e Revert "Outline non returning functions unless a longjmp"
This reverts commit 20797989ea.

This patch (https://reviews.llvm.org/D69257) cannot complete a stage2
build due to the change:

```
CI->getCalledFunction()->getName().contains("longjmp")
```

There are several concrete issues here:

  - The callee may not be a function, so `getCalledFunction` can assert.
  - The called value may not have a name, so `getName` can assert.
  - There's no distinction made between "my_longjmp_test_helper" and the
    actual longjmp libcall.

At a higher level, there's a serious layering problem here. The
splitting pass makes policy decisions in a general way (e.g. based on
attributes or profile data). Special-casing certain names breaks the
layering. It subverts the work of library maintainers (who may now need
to opt-out of unexpected optimization behavior for any affected
functions) and can lead to inconsistent optimization behavior (as not
all llvm passes special-case ".*longjmp.*" in the same way).

The patch may need significant revision to address these issues.

But the immediate issue is that this crashes while compiling llvm's unit
tests in a stage2 build (due to the `getName` problem).
2020-10-05 14:10:25 -07:00
Roman Lebedev e00f189d39 [InstCombine] Revert rL226781 "Teach InstCombine to canonicalize loads which are only ever stored to always use a legal integer type if one is available." (PR47592)
(it was introduced in https://lists.llvm.org/pipermail/llvm-dev/2015-January/080956.html)

This canonicalization seems dubious.

Most importantly, while it does not create `inttoptr` casts by itself,
it may cause them to appear later, see e.g. D88788.

I think it's pretty obvious that this is an undesirable outcome;
by now we've established that seemingly no-op `inttoptr`/`ptrtoint` casts
are not actually no-ops, and we are no longer eager to look past them.
Which e.g. means that given
```
%a = load i32, i32* %p
%b = inttoptr i32 %a to i8*
%c = inttoptr i32 %a to i8*
```
we likely won't be able to tell that `%b` and `%c` are the same thing.

As we can see in D88789 / D88788 / D88806 / D75505,
we can't really teach SCEV about this (not without https://bugs.llvm.org/show_bug.cgi?id=47592, at least),
and we can't recover the situation post-inlining in InstCombine.

So it really does look like this fold is actively breaking
otherwise-good IR, in a way that is not recoverable.
And that means, this fold isn't helpful in exposing the passes
that are otherwise unaware of these patterns it produces.

Thus, I propose to simply not perform such a canonicalization.
The original motivational RFC does not state what larger problem
that canonicalization was trying to solve, so I'm not sure
how this plays out in the larger picture.

On vanilla llvm test-suite + RawSpeed, this results in an
increase of asm instructions and final object size by ~+0.05%,
a decrease in the final count of bitcasts by 4.79% (-28990),
ptrtoint casts by 15.41% (-3423),
and inttoptr casts by 25.59% (-6919, *sic*).
Overall, there are 0.04% fewer IR blocks and 0.39% fewer instructions.

See https://bugs.llvm.org/show_bug.cgi?id=47592

Differential Revision: https://reviews.llvm.org/D88789
2020-10-06 00:00:30 +03:00
Dávid Bolvanský a4bae56ab8 Revert "[SLC] Optimize mempcpy_chk to mempcpy"
This reverts commit 3f1fd59de3.
2020-10-05 22:27:14 +02:00
Dávid Bolvanský 3f1fd59de3 [SLC] Optimize mempcpy_chk to mempcpy
As reported in PR46735:

```
void* f(void *d, const void *s, size_t l)
{
    return __builtin___mempcpy_chk(d, s, l, __builtin_object_size(d, 0));
}
```

This can be optimized to `return mempcpy(d, s, l);`.

Reviewed By: efriedma

Differential Revision: https://reviews.llvm.org/D86019
2020-10-05 22:18:36 +02:00
Nikita Popov 3641d375f6 [InstCombine] Handle GEP inbounds in select op replacement (PR47730)
When retrying the "simplify with operand replaced" select
optimization without poison flags, also handle inbounds on GEPs.

Of course, this particular example would also be safe to transform
while keeping inbounds, but the underlying machinery does not
know this (yet).
2020-10-05 21:13:02 +02:00
Nikita Popov 0f8e4a5ed0 [InstCombine] Add test for PR47730 2020-10-05 21:09:53 +02:00
Nikita Popov 9d63029770 Revert "[DebugInfo] Improve dbg preservation in LSR."
This reverts commit a3caf7f610.

The ReleaseLTO-g test-suite configuration has been failing
to build since this commit, because clang segfaults while
building 7zip.
2020-10-05 19:02:30 +02:00
Simon Pilgrim 5ba084c42f [InstCombine] Extend 'shift with constants' vector tests
Added missing test coverage for shl(add(and(lshr(x,c1),c2),y),c1) -> add(and(x,c2<<c1),shl(y,c1)) combine
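
For reference, a scalar instance of that combine (constants picked here purely
for illustration):

```
define i32 @shl_add_and_lshr(i32 %x, i32 %y) {
  %l = lshr i32 %x, 4
  %m = and i32 %l, 15
  %a = add i32 %m, %y
  %s = shl i32 %a, 4
  ret i32 %s           ; -> add (and i32 %x, 240), (shl i32 %y, 4)
}
```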

Renamed tests, as 'foo' and 'bar' aren't very extensible.

Added vector tests with undefs and nonuniform constants
2020-10-05 17:22:15 +01:00
Simon Pilgrim 2efd9fd699 [InstCombine] Add or(shl(v,and(x,bw-1)),lshr(v,bw-and(x,bw-1))) funnel shift tests
If we know the shift amount is less than the bitwidth we should be able to convert this to a funnel shift
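
A scalar sketch of the pattern (illustrative): once the shift amount is masked
below the bitwidth, the or of the two shifts is a rotate and can become a
funnel-shift intrinsic:

```
define i32 @rotl_by_masked_amount(i32 %v, i32 %x) {
  %amt = and i32 %x, 31
  %shl = shl i32 %v, %amt
  %sub = sub i32 32, %amt
  %shr = lshr i32 %v, %sub
  %or = or i32 %shl, %shr
  ret i32 %or          ; -> call i32 @llvm.fshl.i32(i32 %v, i32 %v, i32 %amt)
}
```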
2020-10-05 17:22:14 +01:00
Wenlei He 89e8a8b223 Revert SVML support for sqrt
As was brought up in D87169 by @craig.topper, we shouldn't map llvm.sqrt to SVML since there is a faster native instruction.
https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_sqrt_p&expand=5824,5823,5356,5823,5825,5365,5356

Reviewed By: craig.topper

Differential Revision: https://reviews.llvm.org/D88620
2020-10-05 08:13:11 -07:00
David Green ff86acbb79 [LV] Regenerate test. NFC
This just reruns the update script to add the new
[[LOOP0:!llvm.loop !.*]] checks here, so that they do not
show up in other diffs.
2020-10-05 13:46:15 +01:00
Simon Pilgrim 2cd7b0e130 [ValueTracking] canCreateUndefOrPoison - use APInt to check bounds instead of getZExtValue().
Fixes OSS Fuzz #26135
2020-10-05 13:45:27 +01:00
Markus Lavin a3caf7f610 [DebugInfo] Improve dbg preservation in LSR.
Use SCEV to salvage additional @llvm.dbg.value intrinsics that have turned
into referencing undef after transformation (and traditional
salvageDebugInfo). Before the transformation, compute the SCEV for each
@llvm.dbg.value in the loop body and store it (alongside its current
DIExpression). After the transformation, update those @llvm.dbg.value
intrinsics now referencing undef by comparing their stored SCEV to the SCEV
of the current loop-header PHI nodes. Allow matching with an offset by
inserting compensation code in the DIExpression.

Fixes: PR38815

Differential Revision: https://reviews.llvm.org/D87494
2020-10-05 09:55:16 +02:00
Arthur Eubanks 37010d4ddf [Coroutines][NewPM] Fix coroutine tests under new pass manager
Some new function parameter attributes are derived under NPM.

Reviewed By: rjmccall

Differential Revision: https://reviews.llvm.org/D88760
2020-10-04 14:19:29 -07:00
Nikita Popov 8aaa731349 [MemCpyOpt] Add tests for call slot optimization with GEPs (NFC) 2020-10-04 22:26:05 +02:00
Nikita Popov 22664a3251 [MemCpyOpt] Don't use array allocas in tests (NFC)
Apparently querying dereferenceability of array allocations is
being intentionally penalized (https://reviews.llvm.org/D41398),
so avoid using them in tests.
2020-10-04 21:50:27 +02:00
Nikita Popov 2c48dd7c3a [MemCpyOpt] Add additional call slot tests (NFC)
The case of a destination read between call and memcpy was not
covered anywhere (but is handled correctly).

However, a potentially throwing call between the call and the
memcpy appears to be miscompiled.
2020-10-04 17:26:46 +02:00
Roman Lebedev 03bd5198b6 [OldPM] Pass manager: run SROA after (simple) loop unrolling
I stumbled into this pretty much by accident, when rewriting
some spaghetti-like code into something more structured,
which involved using some `std::array<>`s. And to my surprise,
the `alloca`s remained, causing about a `+160%` perf regression.

https://llvm-compile-time-tracker.com/compare.php?from=bb6f4d32aac3eecb51909f4facc625219307ee68&to=d563e66f40f9d4d145cb2050e41cb961e2b37785&stat=instructions
suggests that this has geomean compile-time cost of `+0.08%`.

Note that D68593 / cecc0d27ad
already did this change for the NewPM, but left the OldPM in a pessimized state.

This fixes [[ https://bugs.llvm.org/show_bug.cgi?id=40011 | PR40011 ]], [[ https://bugs.llvm.org/show_bug.cgi?id=42794 | PR42794 ]] and probably some other reports.

Reviewed By: nikic, xbolva00

Differential Revision: https://reviews.llvm.org/D87972
2020-10-04 11:53:50 +03:00
Roman Lebedev cd20c26622 [NFC][InstCombine] Autogenerate a few tests being affected by an upcoming patch 2020-10-03 22:49:58 +03:00
Roman Lebedev 1038ce4b6b [NFC][PhaseOrdering] Add a test showing new inttoptr casts after SROA due to InstCombine (PR47592)
We could either try to make SROA more picky about the new type,
and/or prevent InstCombine from creating the original problem (converting loads/stores to operate on ints),
and/or make InstCombine recover the situation by cleaning up all that cruft.
2020-10-03 22:49:58 +03:00