Commit Graph

141643 Commits

Author SHA1 Message Date
Rahman Lavaee e0bf234930 Let .llvm_bb_addr_map section use the same unique id as its associated .text section.
Currently, `llvm_bb_addr_map` sections are generated per section name because we use
the `LinkedToSymbol` argument of getELFSection. This causes the address map tables of functions
to be grouped into the same section when `-function-sections=true -unique-section-names=false`,
which is not the intended behaviour. This patch lets the unique id of every `.text` section
propagate to the associated `.llvm_bb_addr_map` section.

Reviewed By: MaskRay

Differential Revision: https://reviews.llvm.org/D92113
2020-12-01 09:21:00 -08:00
Bardia Mahjour 9c5504adce [LV] Epilogue Vectorization with Optimal Control Flow
This is yet another attempt at providing support for epilogue
vectorization following discussions raised in RFC http://llvm.1065342.n5.nabble.com/llvm-dev-Proposal-RFC-Epilog-loop-vectorization-tt106322.html#none
and reviews D30247 and D88819.

Similar to D88819, this patch achieves epilogue vectorization by
executing a single vplan twice: once on the main loop and a second
time on the epilogue loop (using a different VF). However, it is able
to handle more loops and generates more optimal control flow for
cases where the trip count is too small to execute any code in vector
form.

Reviewed By: SjoerdMeijer

Differential Revision: https://reviews.llvm.org/D89566
2020-12-01 12:04:29 -05:00
Nikita Popov 624af932a8 [MemCpyOpt] Port to MemorySSA
This is a straightforward port of MemCpyOpt to MemorySSA following
the approach of D26739. MemDep queries are replaced with MSSA queries
without changing the overall structure of the pass. Some care has
to be taken to account for differences between these APIs
(MemDep also returns reads, MSSA doesn't).

Differential Revision: https://reviews.llvm.org/D89207
2020-12-01 17:57:41 +01:00
Fangrui Song f0659c0673 [X86] Support modifier @PLTOFF for R_X86_64_PLTOFF64
`gcc -mcmodel=large` can emit @PLTOFF.

Reviewed By: grimar

Differential Revision: https://reviews.llvm.org/D92294
2020-12-01 08:39:01 -08:00
Clement Courbet 735e6c888e [MergeICmps] Fix missing split.
We were not correctly splitting blocks for chains of length 1.

Before that change, additional instructions for blocks in chains of
length 1 were not split off from the block before removing it (this was
done correctly for chains of longer size).
If this first block contained an instruction referenced elsewhere,
deleting the block would result in invalidation of the produced value.

This caused a miscompile which motivated D92297 (before D17993,
nonnull and dereferenceable attributes were not added, so MergeICmps was
not triggered.) The new test gep-references-bb.ll demonstrates the issue.

The regression was introduced in
rG0efadbbcdeb82f5c14f38fbc2826107063ca48b2.

This supersedes D92364.

Test case by MaskRay (Fangrui Song).

Differential Revision: https://reviews.llvm.org/D92375
2020-12-01 16:50:55 +01:00
Sanjay Patel 136f98e523 [x86] adjust cost model values for minnum/maxnum with fast-math-flags
Without FMF, we lower these intrinsics into something like this:

vmaxsd	%xmm0, %xmm1, %xmm2
vcmpunordsd	%xmm0, %xmm0, %xmm0
vblendvpd	%xmm0, %xmm1, %xmm2, %xmm0

But if we can ignore NANs, the single min/max instruction is enough
because there is no need to fix up the x86 logic that corresponds to
X > Y ? X : Y.

We probably want to make other adjustments for FP intrinsics with FMF
to account for specialized codegen (for example, FSQRT).
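
For illustration, a minimal IR sketch (hypothetical, not taken from this patch's
tests) of a maxnum call carrying the nnan flag that permits the single-instruction
lowering:

```
declare double @llvm.maxnum.f64(double, double)

; 'nnan' says NaNs can be ignored, so no unord/blend fix-up is needed and
; this can lower to a single vmaxsd
define double @fmax_fast(double %x, double %y) {
  %m = call nnan double @llvm.maxnum.f64(double %x, double %y)
  ret double %m
}
```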

Differential Revision: https://reviews.llvm.org/D92337
2020-12-01 10:45:53 -05:00
Benjamin Kramer 107e92dff8 [DAG] Remove unused variable. NFC. 2020-12-01 16:29:02 +01:00
David Green eedf0ed63e [ARM] Mark select and selectcc of MVE vector operations as expand.
We already expand select and select_cc in codegenprepare, but they can
still be generated in some situations, leading to a failure to select the
nodes. Explicitly mark them as expand to ensure they are not produced.

Differential Revision: https://reviews.llvm.org/D92373
2020-12-01 15:05:55 +00:00
Sanjay Patel 9f60b8b3d2 [InstCombine] canonicalize sign-bit-shift of difference to ext(icmp)
icmp is the preferred spelling in IR because icmp analysis is
expected to be better than any other analysis. This should
lead to more follow-on folding potential.

It's difficult to say exactly what we should do in codegen to
compensate. For example on AArch64, which of these is preferred:
	sub	w8, w0, w1
	lsr	w0, w8, #31

vs:
	cmp	w0, w1
	cset	w0, lt

If there are perf regressions, then we should deal with those in
codegen on a case-by-case basis.

A possible motivating example for better optimization is shown in:
https://llvm.org/PR43198 but that will require other transforms
before anything changes there.

Alive proof:
https://rise4fun.com/Alive/o4E

  Name: sign-bit splat
  Pre: C1 == (width(%x) - 1)
  %s = sub nsw %x, %y
  %r = ashr %s, C1
  =>
  %c = icmp slt %x, %y
  %r = sext %c

  Name: sign-bit LSB
  Pre: C1 == (width(%x) - 1)
  %s = sub nsw %x, %y
  %r = lshr %s, C1
  =>
  %c = icmp slt %x, %y
  %r = zext %c
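
As a concrete instance of the first proof above (a hypothetical example, not taken
from the patch's tests):

```
define i32 @sign_splat(i32 %x, i32 %y) {
  %s = sub nsw i32 %x, %y
  %r = ashr i32 %s, 31
  ret i32 %r
}
; canonicalizes to:
;   %c = icmp slt i32 %x, %y
;   %r = sext i1 %c to i32
```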
2020-12-01 09:58:11 -05:00
Simon Pilgrim 1b209ff9e3 [DAG] Move vselect(icmp_ult, 0, sub(x,y)) -> usubsat(x,y) to DAGCombine (PR40111)
Move the X86 VSELECT->USUBSAT fold to DAGCombiner - there's nothing target specific about these folds.
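
A minimal IR-level sketch (hypothetical, not from the patch) of the kind of pattern
that reaches the DAG as vselect(icmp_ult, 0, sub(x,y)) and can now become usubsat on
any target where USUBSAT is legal:

```
define <8 x i16> @usubsat_like(<8 x i16> %x, <8 x i16> %y) {
  %cmp = icmp ult <8 x i16> %x, %y
  %sub = sub <8 x i16> %x, %y
  ; x < y ? 0 : x - y  is exactly usubsat(x, y)
  %r = select <8 x i1> %cmp, <8 x i16> zeroinitializer, <8 x i16> %sub
  ret <8 x i16> %r
}
```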
2020-12-01 14:25:29 +00:00
Florian Hahn 7a4f1d59b8 [ConstraintElimination] Decompose GEP %ptr, ZEXT(SHL()).
Add support to decompose a GEP with a ZEXT(SHL()) operand.
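
A hypothetical sketch (not taken from the patch's tests) of the GEP shape the
decomposition now understands, feeding a pointer comparison that the pass can
reason about:

```
define i1 @gep_zext_shl(i8* %p, i16 %i) {
  %shl = shl nuw nsw i16 %i, 2
  %ext = zext i16 %shl to i64
  ; the GEP index is zext(shl(%i, 2))
  %gep = getelementptr inbounds i8, i8* %p, i64 %ext
  %cmp = icmp ult i8* %gep, %p
  ret i1 %cmp
}
```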
2020-12-01 14:23:21 +00:00
Kazushi (Jam) Marukawa 10b164d2f7 [VE] Add vmul and vdiv intrinsic instructions
Add vmul and vdiv intrinsic instructions and regression tests.

Reviewed By: simoll

Differential Revision: https://reviews.llvm.org/D92377
2020-12-01 23:03:49 +09:00
Bhramar Vatsa fd679107d6
[InstCombine] Optimize away the unnecessary multi-use sign-extend
C.f. https://bugs.llvm.org/show_bug.cgi?id=47765

Added a case to handle the sign-extend pattern (Shl+AShr) with multiple uses,
optimizing it away for an individual use
when that use's demanded bits aren't affected by the sign-extend.

https://rise4fun.com/Alive/lgf

Reviewed By: lebedev.ri

Differential Revision: https://reviews.llvm.org/D91343
2020-12-01 16:54:00 +03:00
Roman Lebedev 94ead0190f
[InstCombine] Improve vector undef handling for sext(ashr(shl(trunc()))) fold, 2
If the shift amount was undef for some lane, the shift amount in the opposite
shift is irrelevant for that lane, and the new shift amount for that lane
can be undef.
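
For illustration (hypothetical, not from the patch's tests): lane 1's shift amount
is undef below, so lane 1 of the shift amount computed for the wide-type replacement
may also be undef:

```
define <2 x i64> @src(<2 x i64> %x) {
  %t = trunc <2 x i64> %x to <2 x i32>
  %s = shl <2 x i32> %t, <i32 8, i32 undef>
  %a = ashr <2 x i32> %s, <i32 8, i32 undef>
  %r = sext <2 x i32> %a to <2 x i64>
  ret <2 x i64> %r
}
```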
2020-12-01 16:54:00 +03:00
Roman Lebedev 52533b52b8
Revert "[InstCombine] Improve vector undef handling for sext(ashr(shl(trunc()))) fold"
It seems I have missed check lines; temporarily reverting,
will reland momentarily.

This reverts commit aa1aa13509.
2020-12-01 15:47:04 +03:00
Roman Lebedev aa1aa13509
[InstCombine] Improve vector undef handling for sext(ashr(shl(trunc()))) fold
If the shift amount was undef for some lane, the shift amount in the opposite
shift is irrelevant for that lane, and the new shift amount for that lane
can be undef.
2020-12-01 15:13:08 +03:00
Roman Lebedev 8e29e20e0d
[InstCombine] Evaluate new shift amount for sext(ashr(shl(trunc()))) fold in wide type (PR48343)
It is not correct to compute that new shift amount in its narrow type
and only then extend it into the wide type:

----------------------------------------
Optimization: PR48343 good
Precondition: (width(%X) == width(%r))
  %o0 = trunc %X
  %o1 = shl %o0, %Y
  %o2 = ashr %o1, %Y
  %r = sext %o2
=>
  %n0 = sext %Y
  %n1 = sub width(%o0), %n0
  %n2 = sub width(%X), %n1
  %n3 = shl %X, %n2
  %r = ashr %n3, %n2

Done: 2016
Optimization is correct!

----------------------------------------
Optimization: PR48343 bad
Precondition: (width(%X) == width(%r))
  %o0 = trunc %X
  %o1 = shl %o0, %Y
  %o2 = ashr %o1, %Y
  %r = sext %o2
=>
  %n0 = sub width(%o0), %Y
  %n1 = sub width(%X), %n0
  %n2 = sext %n1
  %n3 = shl %X, %n2
  %r = ashr %n3, %n2

Done: 1
ERROR: Domain of definedness of Target is smaller than Source's for i9 %r

Example:
%X i9 = 0x000 (0)
%Y i4 = 0x3 (3)
%o0 i4 = 0x0 (0)
%o1 i4 = 0x0 (0)
%o2 i4 = 0x0 (0)
%n0 i4 = 0x1 (1)
%n1 i4 = 0x8 (8, -8)
%n2 i9 = 0x1F8 (504, -8)
%n3 i9 = 0x000 (0)
Source value: 0x000 (0)
Target value: undef


I.e. we should be computing it in the wide type from the beginning.

Fixes https://bugs.llvm.org/show_bug.cgi?id=48343
2020-12-01 15:13:07 +03:00
Roman Lebedev 15f8060f6f
[SimplifyCFG] FoldBranchToCommonDest: don't require that cmp of br is last instruction
There is no correctness need for that, and since we allow live-out
uses, this could theoretically happen, because currently nothing
will move the cond to right before the branch in those tests.
But regardless, lifting that restriction even makes the transform
easier to understand.

This makes the transform happen in 81 more cases (+0.55%).
2020-12-01 15:13:06 +03:00
Simon Pilgrim 6dbd0d36a1 [DAG] Move vselect(icmp_ult, -1, add(x,y)) -> uaddsat(x,y) to DAGCombine (PR40111)
Move the X86 VSELECT->UADDSAT fold to DAGCombiner - there's nothing target specific about these folds.

The SSE42 test diffs are relatively benign - it's avoiding an extra constant load in exchange for an extra xor operation - there are extra register moves, which is annoying as all those operations should commute them away.

Differential Revision: https://reviews.llvm.org/D91876
2020-12-01 11:56:26 +00:00
Cullen Rhodes cba4accda0 [LV] Clamp VF hint when unsafe
In the following loop the dependence distance is 2, so it can only be
vectorized if the vector length is no larger than this.

  void foo(int *a, int *b, int N) {
    #pragma clang loop vectorize(enable) vectorize_width(4)
    for (int i=0; i<N; ++i) {
      a[i + 2] = a[i] + b[i];
    }
  }

However, when specifying a VF of 4 via a loop hint this loop is
vectorized. According to [1][2], loop hints are ignored if the
optimization is not safe to apply.

This patch introduces a check to bail out of vectorization if the user
specified VF is greater than the maximum feasible VF, unless explicitly
forced with '-force-vector-width=X'.

[1] https://llvm.org/docs/LangRef.html#llvm-loop-vectorize-and-llvm-loop-interleave
[2] https://clang.llvm.org/docs/LanguageExtensions.html#extensions-for-loop-hint-optimizations

Reviewed By: sdesmalen, fhahn, Meinersbur

Differential Revision: https://reviews.llvm.org/D90687
2020-12-01 11:30:34 +00:00
Simon Pilgrim c63799fc52 [InstCombine][X86] Fold addsub intrinsic to fadd/fsub depending on demanded elts (PR46277) 2020-12-01 11:27:40 +00:00
Caroline Concatto 4b0ef2b075 [NFC][CostModel]Extend class IntrinsicCostAttributes to use ElementCount Type
This patch replaces the attribute `unsigned VF` in the class
IntrinsicCostAttributes with `ElementCount VF`.
This is a non-functional change to help upcoming patches compute the cost
model for scalable vectors inside this class.

Differential Revision: https://reviews.llvm.org/D91532
2020-12-01 11:12:51 +00:00
Florian Hahn efa9728a50 [ConstraintElimination] Decompose GEP %ptr, SHL().
Add support to decompose a GEP with an SHL operand.
2020-12-01 10:58:36 +00:00
Kazushi (Jam) Marukawa c3fe6ea22e [VE] Add vadd and vsub intrinsic instructions
Add vadd and vsub intrinsic instructions and regression tests.

Reviewed By: simoll

Differential Revision: https://reviews.llvm.org/D92332
2020-12-01 19:57:22 +09:00
Sjoerd Meijer f44ba25135 ExtractValue instruction costs
Instruction ExtractValue wasn't handled in
LoopVectorizationCostModel::getInstructionCost(). As a result, it was modeled
as a mul which is not really accurate. Since it is free (most of the time),
this now gets a cost of 0 using getInstructionCost.

This is a follow-up of D92208, that required changing this regression test.
In a follow up I will look at InsertValue which also isn't handled yet.
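
For reference, a trivial sketch of the instruction in question; projecting a struct
member like this usually generates no code, hence the cost of 0:

```
define i64 @first({ i64, i64 } %agg) {
  %v = extractvalue { i64, i64 } %agg, 0
  ret i64 %v
}
```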

Differential Revision: https://reviews.llvm.org/D92317
2020-12-01 10:42:23 +00:00
David Green 7923d71b4a [ARM] PREDICATE_CAST demanded bits
The PREDICATE_CAST node is used to model moves between MVE predicate
registers and GPRs, and eventually becomes a VMSR p0, rn. When moving to
a predicate, only the bottom 16 bits of the source register are demanded.
demanded. This adds a simple fold for that, allowing it to potentially
remove instructions like uxth.

Differential Revision: https://reviews.llvm.org/D92213
2020-12-01 10:32:24 +00:00
Jay Foad 839c9635ed [AMDGPU] Simplify some generation checks. NFC. 2020-12-01 10:15:32 +00:00
Georgii Rymar ea8c8a5097 [obj2yaml] - Teach tool to emit the "SectionHeaderTable" key and sort sections by file offset.
Currently when we dump sections, we dump them in the order
specified in the section header table.

With that, the order in the output might not match the order in the file.
This patch starts sorting them by file offset when dumping.

When the order in the section header table doesn't match the order
in the file, we should emit the "SectionHeaderTable" key. This patch does that.

Differential revision: https://reviews.llvm.org/D91249
2020-12-01 12:59:15 +03:00
Kazu Hirata e785379aff [CodeView] Remove unused declaration collectInlineSiteChildren (NFC)
The function definition was removed on Sep 7, 2016 in commit
a9f4cc9510.  The declaration seems to be
unused since then.
2020-11-30 22:28:26 -08:00
Wei Wang 93dc1b5b8c [Remarks][2/2] Expand remarks hotness threshold option support in more tools
This is the #2 of 2 changes that make remarks hotness threshold option
available in more tools. The changes also allow the threshold to sync with
hotness threshold from profile summary with special value 'auto'.

This change expands remarks hotness threshold option
-fdiagnostics-hotness-threshold in clang and *-remarks-hotness-threshold in
other tools to utilize hotness threshold from profile summary.

Remarks hotness filtering relies on several driver options. The table below lists
how different options are correlated and affect the final remarks output:

| profile | hotness | threshold | remarks printed |
|---------|---------|-----------|-----------------|
| No      | No      | No        | All             |
| No      | No      | Yes       | None            |
| No      | Yes     | No        | All             |
| No      | Yes     | Yes       | None            |
| Yes     | No      | No        | All             |
| Yes     | No      | Yes       | None            |
| Yes     | Yes     | No        | All             |
| Yes     | Yes     | Yes       | >=threshold     |

In the presence of a profile summary, it is often more desirable to directly use
the hotness threshold from the profile summary. The new argument value 'auto'
indicates that the threshold will be synced with the hotness threshold from the
profile summary during compilation. The "auto" threshold relies on the availability
of a profile summary; if that information is missing, no remarks will be generated.

Differential Revision: https://reviews.llvm.org/D85808
2020-11-30 21:55:50 -08:00
Wei Wang 3acda91742 [Remarks][1/2] Expand remarks hotness threshold option support in more tools
This is the #1 of 2 changes that make remarks hotness threshold option
available in more tools. The changes also allow the threshold to sync with
hotness threshold from profile summary with special value 'auto'.

This change modifies the interface of lto::setupLLVMOptimizationRemarks() to
accept remarks hotness threshold. Update all the tools that use it with remarks
hotness threshold options:

* lld: '--opt-remarks-hotness-threshold='
* llvm-lto2: '--pass-remarks-hotness-threshold='
* llvm-lto: '--lto-pass-remarks-hotness-threshold='
* gold plugin: '-plugin-opt=opt-remarks-hotness-threshold='

Differential Revision: https://reviews.llvm.org/D85809
2020-11-30 21:55:49 -08:00
Greg Parker bcc802fa36 [DSE] Remove a redundant call to getLocForWriteEx()
Differential Revision: https://reviews.llvm.org/D92263
2020-11-30 21:12:24 -08:00
Craig Topper 40659cd2c6 [RISCV] Rename RISCVGenSystemOperands.inc to RISCVGenSearchableTables.inc to prepare for more tables. NFC
D89449 adds more tables so renaming as a pre-commit for that.
2020-11-30 20:47:58 -08:00
Hendrik Greving d4ba5e15f4 Add MachineModuleInfo constructor with external MCContext
Adds a constructor to MachineModuleInfo and MachineModuleInfoWrapperPass that
takes an external MCContext. If provided, the external context will be used
throughout codegen instead of MMI's default one.

This enables external drivers to take ownership of data put on the MMI's context
during codegen. The internal context is used otherwise and destroyed upon
finish.

Differential Revision: https://reviews.llvm.org/D91313
2020-11-30 20:28:13 -08:00
Fangrui Song d928dfc6f9 [GlobalISel] Fix -Wunused-variable in -DLLVM_ENABLE_ASSERTIONS=off builds 2020-11-30 18:31:42 -08:00
Fangrui Song 36fe1a9dea [GlobalISel] Fix -Wunused-variable 2020-11-30 18:25:54 -08:00
Amy Huang efd1ec0dec Recommit "[llvm-symbolizer] Switch to using native symbolizer by default on Windows"
This reverts commit 1b63177a56.
2020-11-30 17:36:12 -08:00
Leonard Chan 4d7f3d68f1 [llvm] Fix for failing test from cf8ff75bad
Handle null values when handling operand changes for DSOLocalEquivalent.
2020-11-30 17:22:28 -08:00
Amara Emerson 87ff156414 [AArch64][GlobalISel] Fix crash during legalization of a vector G_SELECT with scalar mask.
The lowering of vector selects needs to first splat the scalar mask into a
vector.

This was causing a crash when building oggenc in the test suite.

Differential Revision: https://reviews.llvm.org/D91655
2020-11-30 16:37:49 -08:00
Nick Desaulniers 91aff1d8ba [InlineCost] prefer range-for. NFC
Prefer range-for over iterators when such methods exist. Precommitted
from https://reviews.llvm.org/D91816.

Signed-off-by: Nick Desaulniers <ndesaulniers@google.com>

Reviewed By: dblaikie

Differential Revision: https://reviews.llvm.org/D92350
2020-11-30 16:07:40 -08:00
Amy Huang 00bbef2bb2 [llvm-symbolizer] Fix native symbolization on windows for inline sites.
The existing code handles this correctly and I checked that the code
in NativeInlineSiteSymbol also handles this correctly, but it was
wrong in the NativeFunctionSymbol code.

Differential Revision: https://reviews.llvm.org/D92134
2020-11-30 14:27:35 -08:00
Paul Robinson 3fd39d3694 [FastISel] NFC: Clean up unnecessary bookkeeping
Now that we flush the local value map for every instruction, we don't
need any extra flushes for specific cases.  Also, LastFlushPoint is
not used for anything.  Follow-ups to #dc35368 (D91734).

Differential Revision: https://reviews.llvm.org/D92338
2020-11-30 12:27:50 -08:00
Matt Arsenault 29bd6519d2 SplitKit: Use Register 2020-11-30 15:09:33 -05:00
Mircea Trofin 5fe10263ab [llvm][inliner] Reuse the inliner pass to implement 'always inliner'
Enable performing mandatory inlinings upfront, by reusing the same logic
as the full inliner, instead of the AlwaysInliner. This has the
following benefits:
- reduce code duplication - one inliner codebase
- open the opportunity to help the full inliner by performing additional
function passes after the mandatory inlinings, but before the full
inliner. Performing the mandatory inlinings first simplifies the problem
the full inliner needs to solve: fewer call sites, more contextualization, and,
depending on the additional function optimization passes run between the
2 inliners, higher accuracy of cost models / decision policies.

Note that this patch does not yet enable much in terms of post-always
inline function optimization.

Differential Revision: https://reviews.llvm.org/D91567
2020-11-30 12:03:39 -08:00
Nikita Popov b5f23189fb [DL] Inline getAlignmentInfo() implementation (NFC)
Apart from getting the entry in the table (which is already a
separate function), the remaining logic is different for all
alignment types and is better combined with getAlignment().

This is a minor efficiency improvement, and should make further
improvements like using separate storage for different alignment
types simpler.
2020-11-30 20:56:15 +01:00
Nick Lewycky fe43168348 Creating a named struct requires only a Context and a name, but looking up a struct by name requires a Module. The method on Module merely accesses the LLVMContextImpl and no data from the module itself, so this patch moves getTypeByName to a static method on StructType that takes a Context and a name.
There's a small number of users of this function; they are all updated.

This updates the C API adding a new method LLVMGetTypeByName2 that takes a context and a name.

Differential Revision: https://reviews.llvm.org/D78793
2020-11-30 11:34:12 -08:00
Eric Astor abef659a45 [ms] [llvm-ml] Implement the statement expansion operator
If prefaced with a %, expand text macros and macro functions in any statement.

Also, prevent expanding text macros in the message of an ECHO directive unless expanded explicitly by the statement expansion operator.

Reviewed By: thakis

Differential Revision: https://reviews.llvm.org/D89740
2020-11-30 14:33:24 -05:00
Sjoerd Meijer 630d37dc1b [AArch64] Enable Cortex-A55 schedmodel
The model was committed in 4b8ade837e
but not yet enabled to allow for a few fix-ups. This adds a few
of these fixes, and also an LLVM MCA test to check most instructions.
While I do have plans to look into some more tuning, it's time to
enable this as it is better than using the A53 schedule.

Differential Revision: https://reviews.llvm.org/D88017
2020-11-30 19:28:34 +00:00
Paul Robinson a474657e30 [FastISel] NFC: Remove obsolete -fast-isel-sink-local-values option
This option is not used for anything after #dc35368 (D91734).
2020-11-30 10:55:49 -08:00
Harald van Dijk cdac34bd47
[X86] Zero-extend pointers to i64 for x86_64
For LP64 mode, this has no effect as pointers are already 64 bits.
For ILP32 mode (x32), this extension is specified by the ABI.

Reviewed By: pengfei

Differential Revision: https://reviews.llvm.org/D91338
2020-11-30 18:51:23 +00:00
Simon Pilgrim e425d0b92a [InstCombine][X86] Add basic addsub intrinsic SimplifyDemandedVectorElts support (PR46277)
Pass through the demanded elts mask to the source operands.

The next step will be to add support for folding to add/sub if we only demand odd/even elements.
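
A hypothetical sketch (not from the patch's tests) where only element 0 of the
addsub result is demanded, so only element 0 of each source operand needs to be
demanded as well:

```
declare <2 x double> @llvm.x86.sse3.addsub.pd(<2 x double>, <2 x double>)

define double @only_elt0(<2 x double> %a, <2 x double> %b) {
  %r = call <2 x double> @llvm.x86.sse3.addsub.pd(<2 x double> %a, <2 x double> %b)
  ; only lane 0 of %r is used
  %e = extractelement <2 x double> %r, i32 0
  ret double %e
}
```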
2020-11-30 18:40:16 +00:00
Hongtao Yu c083fededf [CSSPGO] A Clang switch -fpseudo-probe-for-profiling for pseudo-probe instrumentation.
This change introduces a new clang switch `-fpseudo-probe-for-profiling` to enable AutoFDO with pseudo instrumentation. Please refer to https://reviews.llvm.org/D86193 for the whole story.

One implication from pseudo-probe instrumentation is that the profile is now sensitive to CFG changes. We perform the pseudo instrumentation very early in the pre-LTO pipeline, before any CFG transformation. This ensures that the instrumented and annotated CFG is stable and optimization-resilient.

The early instrumentation also allows the inliner to duplicate probes for inlined instances. When a probe, along with the other instructions of a callee function, is inlined into its caller function, the GUID of the callee function goes with the probe. This allows samples collected on inlined probes to be reported for the original callee function.

Reviewed By: wmi

Differential Revision: https://reviews.llvm.org/D86502
2020-11-30 10:16:54 -08:00
Hongtao Yu 64fa8cce22 [CSSPGO] Pseudo probe instrumentation pass
This change introduces a pseudo probe instrumentation pass for block instrumentation. Please refer to https://reviews.llvm.org/D86193 for the whole story.

Given the following LLVM IR:

```
define internal void @foo2(i32 %x, void (i32)* %f) !dbg !4 {
bb0:
  %cmp = icmp eq i32 %x, 0
   br i1 %cmp, label %bb1, label %bb2
bb1:
   br label %bb3
bb2:
   br label %bb3
bb3:
   ret void
}
```

The instrumented IR will look like below. Note that each llvm.pseudoprobe intrinsic call represents a pseudo probe at a block, of which the first parameter is the GUID of the probe’s owner function and the second parameter is the probe’s ID.

```
define internal void @foo2(i32 %x, void (i32)* %f) !dbg !4 {
bb0:
   %cmp = icmp eq i32 %x, 0
   call void @llvm.pseudoprobe(i64 837061429793323041, i64 1)
   br i1 %cmp, label %bb1, label %bb2
bb1:
   call void @llvm.pseudoprobe(i64 837061429793323041, i64 2)
   br label %bb3
bb2:
   call void @llvm.pseudoprobe(i64 837061429793323041, i64 3)
   br label %bb3
bb3:
   call void @llvm.pseudoprobe(i64 837061429793323041, i64 4)
   ret void
}
```

Reviewed By: wmi

Differential Revision: https://reviews.llvm.org/D86499
2020-11-30 10:16:54 -08:00
Fangrui Song 7c4555f60d [PowerPC] Delete remnant Darwin code in PPCAsmParser
Continue the work started at D50989.
The code has been long dead since the triple has been removed (D75494).

Reviewed By: nickdesaulniers, void

Differential Revision: https://reviews.llvm.org/D91836
2020-11-30 10:16:19 -08:00
Kazushi (Jam) Marukawa 3d872cbc2f [VE][NFC] Update comments
Update comments.  I forgot to update them previously when I modified the code.
2020-12-01 02:56:16 +09:00
Francesco Petrogalli f6150aa41a [SelectionDAGBuilder] Update signature of `getRegsAndSizes()`.
The mapping between registers and relative size has been updated to
use TypeSize to account for the size of scalable EVTs.

The patch is an NFCI, except that with this change the
function `getUnderlyingArgRegs` no longer raises a warning for implicit
conversion of `TypeSize` to `unsigned` when generating machine code
from the test added to the patch.

Reviewed By: arsenm

Differential Revision: https://reviews.llvm.org/D92096
2020-11-30 17:38:51 +00:00
Kazushi (Jam) Marukawa 6834b3d6d5 [VE] Optimize prologue/epilogue instructions about GOT
Optimize prologue/epilogue instructions by eliminating FP if a given function
uses GOT but does not call other functions.  Previously, we had wrong
implementations taken from other architectures.  Update regression tests
also.

Reviewed By: simoll

Differential Revision: https://reviews.llvm.org/D92313
2020-12-01 02:22:31 +09:00
Kazushi (Jam) Marukawa 6fe610535f [VE] Clean check routines of branch types
Previously, these check routines accepted non-generatable instructions.
This time, I clean them up and add asserts for those non-generatable
instructions.

Reviewed By: simoll

Differential Revision: https://reviews.llvm.org/D92254
2020-12-01 02:19:37 +09:00
Craig Topper bfc4f29f46 [RISCV] Combine (GORCI (GORCI x, C2), C1) -> (GORCI x, C1|C2).
Unlike GREVI, GORCI stages can't be undone, but they are
redundant if done more than once.

Differential Revision: https://reviews.llvm.org/D92295
2020-11-30 08:42:46 -08:00
Sanjay Patel 9eb2c0113d [IR][LoopRotate] remove assertion that phi must have at least one operand
This was suggested in D92247 - I initially committed an alternate
fix ( bfd2c216ea ) to avoid the crash/assert shown in
https://llvm.org/PR48296 ,
but that was reverted because it caused msan failures on other
tests. We can try to revive that patch using the test included
here, but I do not have an immediate plan to isolate that problem.
2020-11-30 11:32:42 -05:00
Craig Topper 76d1026b59 [RISCV] Custom legalize bswap/bitreverse to GREVI with Zbp extension to enable them to combine with other GREVI instructions
This enables bswap/bitreverse to combine with other GREVI patterns or each other without needing to add more special cases to the DAG combine or new DAG combines.

I've also enabled the existing GREVI combine for GREVIW so that it can pick up the i32 bswap/bitreverse on RV64 after they've been type legalized to GREVIW.

Differential Revision: https://reviews.llvm.org/D92253
2020-11-30 08:30:40 -08:00
Fangrui Song 25c8fbb3d9 [X86] Don't emit R_X86_64_[REX_]GOTPCRELX for a GOT load with an offset
clang may produce `movl x@GOTPCREL+4(%rip), %eax` when loading the high
32 bits of the address of a global variable in -fpic/-fpie mode.

If assembled by GNU as, the fixup emits R_X86_64_GOTPCRELX with an addend != -4.
The instruction loads from the GOT entry with an offset and thus it is incorrect
to relax the instruction.

This patch does not emit a relaxable relocation for a GOT load with an offset
because R_X86_64_[REX_]GOTPCRELX do not make sense for instructions which cannot
be relaxed.  The result is good enough for LLD to work. GNU ld relaxes
mov+GOTPCREL as well, but it suppresses the relaxation if addend != -4.

Reviewed By: jhenderson

Differential Revision: https://reviews.llvm.org/D92114
2020-11-30 08:27:31 -08:00
Craig Topper cbbd7021f1 [RISCV] Only combine (or (GREVI x, shamt), x) -> GORCI if shamt is a power of 2.
GORCI performs an OR between each stage. So we need to ensure only
one stage is active before doing this combine.

Initial attempts at finding a test case for this failed due to
the order things get combined. It's most likely that we'll form
one stage of GREVI then combine to GORCI before the two stages of
GREVI are able to be formed and combined with each other to form
a multi stage GREVI.

Differential Revision: https://reviews.llvm.org/D92289
2020-11-30 08:10:39 -08:00
Sanjay Patel 1dc38f8cfb [IR] improve code comment/logic in removePredecessor(); NFC
This was suggested in the post-commit review of ce134da4b1.
2020-11-30 10:51:30 -05:00
Sanjay Patel 355aee3dcd Revert "[IR][LoopRotate] avoid leaving phi with no operands (PR48296)"
This reverts commit bfd2c216ea.
This appears to be causing stage2 msan failures on buildbots:
  FAIL: LLVM :: Transforms/SimplifyCFG/X86/bug-25299.ll (65872 of 71835)
  ******************** TEST 'LLVM :: Transforms/SimplifyCFG/X86/bug-25299.ll' FAILED ********************
  Script:
  --
  : 'RUN: at line 1';   /b/sanitizer-x86_64-linux-fast/build/llvm_build_msan/bin/opt < /b/sanitizer-x86_64-linux-fast/build/llvm-project/llvm/test/Transforms/SimplifyCFG/X86/bug-25299.ll -simplifycfg -S | /b/sanitizer-x86_64-linux-fast/build/llvm_build_msan/bin/FileCheck /b/sanitizer-x86_64-linux-fast/build/llvm-project/llvm/test/Transforms/SimplifyCFG/X86/bug-25299.ll
  --
  Exit Code: 2
  Command Output (stderr):
  --
  ==87374==WARNING: MemorySanitizer: use-of-uninitialized-value
      #0 0x9de47b6 in getBasicBlockIndex /b/sanitizer-x86_64-linux-fast/build/llvm-project/llvm/include/llvm/IR/Instructions.h:2749:5
      #1 0x9de47b6 in simplifyCommonResume /b/sanitizer-x86_64-linux-fast/build/llvm-project/llvm/lib/Transforms/Utils/SimplifyCFG.cpp:4112:23
      #2 0x9de47b6 in simplifyResume /b/sanitizer-x86_64-linux-fast/build/llvm-project/llvm/lib/Transforms/Utils/SimplifyCFG.cpp:4039:12
      #3 0x9de47b6 in (anonymous namespace)::SimplifyCFGOpt::simplifyOnce(llvm::BasicBlock*) /b/sanitizer-x86_64-linux-fast/build/llvm-project/llvm/lib/Transforms/Utils/SimplifyCFG.cpp:6330:16
      #4 0x9dcca13 in run /b/sanitizer-x86_64-linux-fast/build/llvm-project/llvm/lib/Transforms/Utils/SimplifyCFG.cpp:6358:16
      #5 0x9dcca13 in llvm::simplifyCFG(llvm::BasicBlock*, llvm::TargetTransformInfo const&, llvm::SimplifyCFGOptions const&, llvm::SmallPtrSetImpl<llvm::BasicBlock*>*) /b/sanitizer-x86_64-linux-fast/build/llvm-project/llvm/lib/Transforms/Utils/SimplifyCFG.cpp:6369:8
      #6 0x974643d in iterativelySimplifyCFG(
2020-11-30 10:15:42 -05:00
Sanjay Patel bfd2c216ea [IR][LoopRotate] avoid leaving phi with no operands (PR48296)
https://llvm.org/PR48296 shows an example where we delete all of the operands
of a phi without actually deleting the phi, and that is currently considered
invalid IR. The reduced test included here would crash for that reason.

A suggested follow-up is to loosen the assert to allow 0-operand phis
in unreachable blocks.

Differential Revision: https://reviews.llvm.org/D92247
2020-11-30 09:28:45 -05:00
Juneyoung Lee 9c49dcc356 [ConstantFold] Don't fold and/or i1 poison to poison (NFC)
.. because it causes miscompilation when combined with select i1 -> and/or.

It is the select fold which is incorrect; but it is costly to disable the fold, so hack this one.
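
To illustrate the interaction (a hypothetical example, not from the patch): when %c
is true, the select below is a well-defined 'true' even though its unselected arm is
poison; the select->or fold rewrites it to 'or i1 %c, poison', and folding an or with
a poison operand to poison would then turn a defined value into poison.

```
define i1 @sel_or(i1 %c) {
  ; %c == true  =>  %r == true (the poison arm is never selected)
  %r = select i1 %c, i1 true, i1 poison
  ret i1 %r
}
```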

D92270
2020-11-30 22:58:31 +09:00
Kazushi (Jam) Marukawa 686988a50f [VE] Optimize prologue/epilogue instructions
Optimize the FP elimination mechanism.  This time, optimize functions which have
no calls but do have fixed stack objects.  LLVM eliminates FP on such functions now.
Also, optimize GOT/PLT register save/restore instructions if a given
function doesn't use them.  In addition, remove the generation mechanism for
`.cfi` instructions since those are taken from other architectures and not
inspected yet.  Update regression tests, also.

Reviewed By: simoll

Differential Revision: https://reviews.llvm.org/D92251
2020-11-30 22:22:33 +09:00
Kazushi (Jam) Marukawa 44a679eaa4 [VE] Change the behaviour of truncate
Change the way we truncate i64 to i32 in I64 registers.  VE assumed
sext values previously.  Change it to zext values this time to make
it match the LLVM behaviour.

Reviewed By: simoll

Differential Revision: https://reviews.llvm.org/D92226
2020-11-30 22:12:45 +09:00
Florian Hahn fe83adb05a
[VPlan] Use VPUser to manage VPPredInstPHIRecipe operand (NFC).
VPPredInstPHIRecipe is one of the recipes that was missed during the
initial conversion. This patch adjusts the recipe to also manage its
operand using VPUser.
2020-11-30 13:09:58 +00:00
Kazushi (Jam) Marukawa 33eac0f283 [VE] Specify vector alignments
Specify alignments for all vector types.  Update a regression test also.

Reviewed By: simoll

Differential Revision: https://reviews.llvm.org/D92256
2020-11-30 22:09:21 +09:00
Sjoerd Meijer 5110ff0817 [AArch64][CostModel] Fix cost for mul <2 x i64>
This was modeled to have a cost of 1, but since we do not have a MUL.2d this is
scalarized into vector inserts/extracts and scalar muls.

Motivating precommitted test is test/Transforms/SLPVectorizer/AArch64/mul.ll,
which we don't want to SLP vectorize.

Test Transforms/LoopVectorize/AArch64/extractvalue-no-scalarization-required.ll
unfortunately needed changing, but the reason is documented in
LoopVectorize.cpp:6855:

  // The cost of executing VF copies of the scalar instruction. This opcode
  // is unknown. Assume that it is the same as 'mul'.

which I will address next as a follow up of this.

Differential Revision: https://reviews.llvm.org/D92208
2020-11-30 11:36:55 +00:00
Simon Pilgrim 83d79ca5bf [X86][AVX512] Only lower to VPALIGNR if we have BWI (PR48322) 2020-11-30 10:51:24 +00:00
Jay Foad e20efa3dd5 [LegacyPM] Simplify PMTopLevelManager::collectLastUses. NFC. 2020-11-30 10:36:19 +00:00
Roman Lebedev b0e9b7c59f
[NFC][SimplifyCFG] Add STATISTIC() to the FoldValueComparisonIntoPredecessors() fold 2020-11-30 12:27:16 +03:00
Evgeny Leviant 112b3cb6ba [TableGen][SchedModels] Fix read/write variant substitution
Patch fixes multiple issues related to expansion of variant sched reads and
writes.

Differential revision: https://reviews.llvm.org/D90844
2020-11-30 11:55:55 +03:00
Max Kazantsev 0c9c6ddf17 [IndVars] ICmpInst should not prevent IV widening
If we decided to widen an IV with zext, then unsigned comparisons
should not prevent widening (same for sext and signed comparisons).
The result of the comparison in the wider type does not change in this case.
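
A hypothetical loop shape (not from the patch's tests): if %iv is widened to i64
with zext, the unsigned compare can be rewritten on the widened values without
changing its result:

```
define void @count(i32 %n) {
entry:
  br label %loop
loop:
  %iv = phi i32 [ 0, %entry ], [ %iv.next, %loop ]
  %iv.next = add nuw i32 %iv, 1
  %cmp = icmp ult i32 %iv.next, %n
  br i1 %cmp, label %loop, label %exit
exit:
  ret void
}
```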

Differential Revision: https://reviews.llvm.org/D92207
Reviewed By: nikic
2020-11-30 10:51:31 +07:00
Fangrui Song e6db1416ae [RISCV] Remove unused Addend parameter from classifySymbolRef. NFC
It is confusing as well since in the case of A - B + Cst, the returned Addend is not Cst.
2020-11-29 19:17:59 -08:00
Fangrui Song e6c1777685 [MC] Copy visibility for .symver created symbols 2020-11-29 16:51:48 -08:00
Nikita Popov 891170e863 [DL] Optimize address space zero lookup (NFC)
Information for pointer size/alignment/etc is queried a lot, but
the binary search based implementation makes this fairly slow.

Add an explicit check for address space zero and skip the search
in that case -- we need to specially handle the zero address space
anyway, as it serves as the fallback for all address spaces that
were not explicitly defined.

I initially wanted to simply replace the binary search with a
linear search, which would handle both address space zero and the
general case efficiently, but I was not sure whether there are
any degenerate targets that use more than a handful of declared
address spaces (in-tree, even AMDGPU only declares six).
2020-11-29 22:49:55 +01:00
Craig Topper 84aad9b5da [RISCV] Change predicate on InstAliases for GORCI/GREVI/SHFLI/UNSHFLI to HasStdExtZbp instead of HasStdExtZbbOrZbp.
This matches the predicate on the instructions. Though I think
some specific encodings are valid in Zbb, not all of them are.
2020-11-29 11:23:23 -08:00
Fangrui Song 5408fdcd78 [VPlan] Fix -Wunused-variable after a813090072 2020-11-29 10:38:01 -08:00
Florian Hahn 4bc9b909d7
[VPlan] Use VPValue and VPUser ops to print VPReplicateRecipe. 2020-11-29 18:28:27 +00:00
Florian Hahn a813090072
[VPlan] Manage stored values of interleave groups using VPUser (NFC)
Interleave groups also depend on the values they store. Manage the
stored values as VPUser operands. This is currently an NFC, but is
required to allow VPlan transforms and to manage generated vector values
exclusively in VPTransformState.
2020-11-29 17:24:36 +00:00
Sanjay Patel ce134da4b1 [IR] simplify code in removePredecessor(); NFCI
As suggested in D92247 (and independent of whatever we decide to do there),
this code is confusing as-is. Hopefully, this is at least mildly better.

We might be able to do better still, but we have a function called
"removePredecessor" with this behavior:
"Note that this function does not actually remove the predecessor." (!)
2020-11-29 09:55:04 -05:00
Sanjay Patel 2cebad702c [IR] remove redundant code comments; NFC
As noted in D92247 (and independent of that patch):

http://llvm.org/docs/CodingStandards.html#doxygen-use-in-documentation-comments

"Don’t duplicate the documentation comment in the header file and in the
implementation file. Put the documentation comments for public APIs into
the header file."
2020-11-29 09:29:59 -05:00
Juneyoung Lee 53040a968d [ConstantFold] Fold more operations to poison
This patch folds more operations to poison.

Alive2 proof: https://alive2.llvm.org/ce/z/mxcb9G (it does not contain tests about div/rem because they fold to poison when raising UB)

Reviewed By: nikic

Differential Revision: https://reviews.llvm.org/D92270
2020-11-29 21:19:48 +09:00
Nikita Popov e987fbdd85 [BasicAA] Generalize recursive phi alias analysis
For recursive phis, we skip the recursive operands and check that
the remaining operands are NoAlias with an unknown size. Currently,
this is limited to inbounds GEPs with positive offsets, to
guarantee that the recursion only ever increases the pointer.

Make this more general by only requiring that the underlying object
of the phi operand is the phi itself, i.e. it is based on itself in
some way. To compensate, we need to use a beforeOrAfterPointer()
location size, as we no longer have the guarantee that the pointer
is strictly increasing.

This allows us to handle some additional cases like negative geps,
geps with dynamic offsets or geps that aren't inbounds.
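
A hypothetical shape that the generalization can now handle (not taken from the
patch's tests): the underlying object of %p's non-start operand is %p itself, via a
negative, non-inbounds GEP:

```
define i8 @walk(i8* %start, i8* %other) {
entry:
  br label %loop
loop:
  %p = phi i8* [ %start, %entry ], [ %p.next, %loop ]
  ; negative offset and not inbounds
  %p.next = getelementptr i8, i8* %p, i64 -1
  %v = load i8, i8* %p.next
  store i8 0, i8* %other
  %c = icmp ne i8 %v, 0
  br i1 %c, label %loop, label %exit
exit:
  ret i8 %v
}
```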

Differential Revision: https://reviews.llvm.org/D91914
2020-11-29 10:25:23 +01:00
Dimitry Andric d989ffd109 Implement computeHostNumHardwareThreads() for FreeBSD
This retrieves CPU affinity via FreeBSD's cpuset(2) API, and makes LLVM
respect affinity settings configured by the user via the cpuset(1)
command.

In particular, this allows to reduce the number of threads used on
machines with high core counts, which can interact badly with
parallelized build systems. This is particularly noticeable with lld,
which spawns lots of threads even for linking e.g. hello_world!

This fix is related to PR48193, but does not address the more fundamental
problem, which is that LLVM by default grabs as many CPUs and/or threads
as possible.

Reviewed By: MaskRay

Differential Revision: https://reviews.llvm.org/D92271
2020-11-29 00:49:39 +01:00
LemonBoy f502b14d40 [ARMAttributeParser] Correctly parse and print Tag_THUMB_ISA_use=3
I took the "Permitted"/"Not Permitted" combo from the `Tag_ARM_ISA_use` case (GNU tools print "Yes").

Reviewed By: compnerd, MaskRay, simon_tatham

Differential Revision: https://reviews.llvm.org/D90305
2020-11-28 12:28:22 -08:00
Harald van Dijk 47e2fafbf3
[X86] Do not allow FixupSetCC to relax constraints
The build bots caught two additional pre-existing problems exposed by the test change part of my change https://reviews.llvm.org/D91339, when expensive checks are enabled. https://reviews.llvm.org/D91924 fixes one of them, this fixes the other.

FixupSetCC will change code in the form of

  %setcc = SETCCr ...
  %ext1 = MOVZX32rr8 %setcc

to

  %zero = MOV32r0
  %setcc = SETCCr ...
  %ext2 = INSERT_SUBREG %zero, %setcc, %subreg.sub_8bit

and replace uses of %ext1 with %ext2.

The register class for %ext2 did not take into account any constraints on %ext1, which may have been required by its uses. This change ensures that the original constraints are honoured, by instead of creating a new %ext2 register, reusing %ext1 and further constraining it as needed. This requires a slight reorganisation to account for the fact that it is possible for the constraining to fail, in which case no changes should be made.

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D91933
2020-11-28 17:46:56 +00:00
Juneyoung Lee c6b62efb91 [ConstantFold] Fold operations to poison if possible
This patch updates ConstantFold, so operations are folded into poison if possible.

<alive2 proofs>
casts: https://alive2.llvm.org/ce/z/WSj7rw
binary operations (arithmetic): https://alive2.llvm.org/ce/z/_7dEyJ
binary operations (bitwise): https://alive2.llvm.org/ce/z/cezjVN
vector/aggregate operations: https://alive2.llvm.org/ce/z/BQ7hWz
unary ops: https://alive2.llvm.org/ce/z/yBRs4q
other ops: https://alive2.llvm.org/ce/z/iXbcFD

Reviewed By: nikic

Differential Revision: https://reviews.llvm.org/D92203
2020-11-29 02:28:40 +09:00
Harald van Dijk 47c902ba84
[X86] Have indirect calls take 64-bit operands in 64-bit modes
The build bots caught two additional pre-existing problems exposed by the test change part of my change https://reviews.llvm.org/D91339, when expensive checks are enabled. This fixes one of them.

X86 has CALL64r and CALL32r opcodes, where CALL64r takes a 64-bit register, and CALL32r takes a 32-bit register. CALL64r can only be used in 64-bit mode, CALL32r can only be used in 32-bit mode. LLVM would assume that after picking the appropriate CALLr opcode, a pointer-sized register would be a valid operand, but in x32 mode, a 64-bit mode, pointers are 32 bits. In this mode, it is invalid to directly pass a pointer to CALL64r, it needs to be extended to 64 bits first.

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D91924
2020-11-28 16:46:30 +00:00
Paul C. Anagnostopoulos 0aeaec13e7 [Timer] Add a command option to enable/disable timer sorting.
Add one more timer to DAGISelEmitter to test the option.

Differential Revision: https://reviews.llvm.org/D92146
2020-11-28 11:43:38 -05:00
Nikita Popov 1dea8ed8b7 [BasicAA] Remove unnecessary known size requirement
The size requirement on V2 was present because it was not clear
whether an unknown size would allow an access before the start of
V2, which could then overlap. This is clarified since D91649: In
this part of BasicAA, all accesses can occur only after the base
pointer, even if they have unknown size.

This makes the positive and negative offset cases symmetric.

Differential Revision: https://reviews.llvm.org/D91482
2020-11-28 10:17:12 +01:00
Craig Topper 6ee22ca6ce [RISCV] Add tests for existing (rotr (bswap X), (i32 16))->grevi pattern for RV32. Extend same pattern to rotl and GREVIW.
Not sure why bswap was treated specially. This also applies to bitreverse
or generic grevi. We can improve this in future patches.
For now I just wanted to get the consistency and the test coverage
as I plan to make some other changes around bswap.
2020-11-27 18:09:01 -08:00
Andrew Litteken a8a43b6338 Revert "[IRSim][IROutliner] Adding the extraction basics for the IROutliner."
Reverting commit due to address sanitizer errors.

> Extracting the similar regions is the first step in the IROutliner.
> 
> Using the IRSimilarityIdentifier, we collect the SimilarityGroups and
> sort them by how many instructions will be removed.  Each
> IRSimilarityCandidate is used to define an OutlinableRegion.  Each
> region is ordered by their occurrence in the Module and the regions that
> are not compatible with previously outlined regions are discarded.
> 
> Each region is then extracted with the CodeExtractor into its own
> function.
> 
> We test that we correctly extract in:
> test/Transforms/IROutliner/extraction.ll
> test/Transforms/IROutliner/address-taken.ll
> test/Transforms/IROutliner/outlining-same-globals.ll
> test/Transforms/IROutliner/outlining-same-constants.ll
> test/Transforms/IROutliner/outlining-different-structure.ll
> 
> Reviewers: paquette, jroelofs, yroux
> 
> Differential Revision: https://reviews.llvm.org/D86975

This reverts commit bf899e8913.
2020-11-27 19:55:57 -06:00
Andrew Litteken bf899e8913 [IRSim][IROutliner] Adding the extraction basics for the IROutliner.
Extracting the similar regions is the first step in the IROutliner.

Using the IRSimilarityIdentifier, we collect the SimilarityGroups and
sort them by how many instructions will be removed.  Each
IRSimilarityCandidate is used to define an OutlinableRegion.  Each
region is ordered by their occurrence in the Module and the regions that
are not compatible with previously outlined regions are discarded.

Each region is then extracted with the CodeExtractor into its own
function.

We test that we correctly extract in:
test/Transforms/IROutliner/extraction.ll
test/Transforms/IROutliner/address-taken.ll
test/Transforms/IROutliner/outlining-same-globals.ll
test/Transforms/IROutliner/outlining-same-constants.ll
test/Transforms/IROutliner/outlining-different-structure.ll

Reviewers: paquette, jroelofs, yroux

Differential Revision: https://reviews.llvm.org/D86975
2020-11-27 19:08:29 -06:00
Kazushi (Jam) Marukawa 3bd78b7cc0 [VE] Optimize emitSPAdjustment function
Optimize the emitSPAdjustment function to generate the smallest possible
instruction sequence to adjust SP.

Reviewed By: simoll

Differential Revision: https://reviews.llvm.org/D92174
2020-11-28 08:06:31 +09:00
Craig Topper 8709d9d872 [RISCV] Replace getSimpleValueType() with getValueType() in DAG combines to prevent asserts with weird types. 2020-11-27 12:49:12 -08:00