Commit Graph

132505 Commits

Author SHA1 Message Date
Florian Hahn 4a58996dd2 [SCCP] Use constant ranges for PHI nodes.
For PHIs with multiple incoming values, we can improve precision by
using constant ranges for integers. We can over-approximate phis
by merging the incoming values.
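
As a hypothetical illustration (not taken from the patch): a PHI whose incoming values are the constants 1 and 7 can be tracked as the range [1, 8) instead of overdefined, which is enough to fold a later bounds check.

```
define i1 @f(i1 %c) {
entry:
  br i1 %c, label %a, label %b
a:
  br label %merge
b:
  br label %merge
merge:
  ; merging the incoming constants 1 and 7 yields the range [1, 8)
  %p = phi i32 [ 1, %a ], [ 7, %b ]
  ; ... which lets this comparison fold to true
  %cmp = icmp ult i32 %p, 8
  ret i1 %cmp
}
```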

Reviewers: davide, efriedma, mssimpso

Reviewed By: efriedma

Differential Revision: https://reviews.llvm.org/D71933
2020-03-19 12:45:33 +00:00
Simon Pilgrim 0b458d4dca [ValueTracking] Add computeKnownBits DemandedElts support to ADD/SUB/MUL instructions (PR36319) 2020-03-19 12:41:29 +00:00
Cullen Rhodes 5ce38fcbac [ValueTypes] Add support for scalable EVTs
Summary:
* Remove a bunch of asserts checking for unsupported scalable types and
  add some more now that they are supported.
* Propagate the scalable flag where necessary.
* Add another `EVT::getExtendedVectorVT` method that takes an
  ElementCount parameter.
* Add `EVT::isExtendedScalableVector` and
  `EVT::getExtendedVectorElementCount` - the latter is currently unused.

Reviewers: sdesmalen, efriedma, rengolin, craig.topper, huntergr

Reviewed By: efriedma

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D75672
2020-03-19 11:04:15 +00:00
Cullen Rhodes 5c296df0c0 [ValueTypes] Add EVT::isFixedLengthVector
Summary:
Related to D75672, this patch adds EVT::isFixedLengthVector to determine
if the underlying vector type is of fixed length.

An assert is introduced in EVT::getVectorNumElements that triggers for
types that aren't fixed length. This is currently guarded by a flag
added in D75297 that is off by default and has been renamed to the more
generic ENABLE_STRICT_FIXED_SIZE_VECTORS.

Ideally we want to get rid of getVectorNumElements but a quick grep
shows there are >350 uses in lib/CodeGen and 75 in lib/Target/AArch64
alone. All of these probably aren't EVT::getVectorNumElements (some may
be the MVT equivalent), but there are many places to fix up and having
the assert on by default would make the SVE upstreaming effort
difficult.

Reviewers: sdesmalen, efriedma, ctetreau, huntergr, rengolin

Reviewed By: efriedma

Subscribers: mgorny, kristof.beyls, hiraditya, danielkiss, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D76376
2020-03-19 10:08:17 +00:00
Simon Moll 733b319948 [VP,Integer,#1] Vector-predicated integer intrinsics
Summary:
This patch adds IR intrinsics for vector-predicated integer arithmetic.

It is subpatch #1 of the [integer
slice](https://reviews.llvm.org/D57504#1732277) of
[LLVM-VP](https://reviews.llvm.org/D57504).  LLVM-VP is a larger effort to bring
native vector predication to LLVM.
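
As a minimal sketch (exact intrinsic mangling and operand order per the definitions added in this patch), a vector-predicated add takes a mask and an explicit vector length alongside the data operands:

```
declare <4 x i32> @llvm.vp.add.v4i32(<4 x i32>, <4 x i32>, <4 x i1>, i32)

define <4 x i32> @vp_add(<4 x i32> %a, <4 x i32> %b, <4 x i1> %mask, i32 %evl) {
  ; lanes that are masked off or beyond the explicit vector length %evl
  ; do not produce defined results
  %r = call <4 x i32> @llvm.vp.add.v4i32(<4 x i32> %a, <4 x i32> %b,
                                         <4 x i1> %mask, i32 %evl)
  ret <4 x i32> %r
}
```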

Reviewed By: andrew.w.kaylor

Differential Revision: https://reviews.llvm.org/D69891
2020-03-19 10:51:47 +01:00
Florian Hahn 8a36594a7e [SCCP] Use constant ranges for binary operators.
If one of the operands of a binary operator is a constant range, we can
use ConstantRange::binaryOp to approximate the result.

We still handle single element constant ranges as we did previously,
with ConstantExpr::get(), because ConstantRange::binaryOp still gives
worse results in a few cases for single element ranges.

Also note that we bail out early if any of the operands is still unknown.
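
As a hypothetical illustration of the range arithmetic (how the input range is established is beside the point here): if %x is known to lie in [0, 8), ConstantRange::binaryOp approximates the add below as [3, 11).

```
define i1 @known_range(i32 %v) {
  ; urem by 8 constrains %x to the range [0, 8)
  %x = urem i32 %v, 8
  ; adding the constant 3 is approximated as the range [3, 11) ...
  %y = add i32 %x, 3
  ; ... which is enough to fold this comparison to true
  %c = icmp ult i32 %y, 11
  ret i1 %c
}
```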

Reviewers: davide, efriedma, mssimpso

Reviewed By: efriedma

Differential Revision: https://reviews.llvm.org/D71936
2020-03-19 09:35:48 +00:00
Chen Zheng 3f85134d71 [PowerPC] implement target hook isProfitableToHoist
On PowerPC, fma is faster than fadd + fmul for some types
(PPCTargetLowering::isFMAFasterThanFMulAndFAdd). We should implement the target
hook isProfitableToHoist to prevent the SimplifyCFG pass from breaking the fma
pattern by hoisting the fmul into the predecessor block.
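
A sketch of the kind of pattern this protects (illustrative IR, not from the patch): if SimplifyCFG hoists the common fmul out of the two successors, neither fadd can be fused into an fma any longer.

```
define double @f(i1 %c, double %a, double %b, double %x, double %y) {
entry:
  br i1 %c, label %then, label %else
then:
  ; fmul + fadd in the same block can become a single fma
  %m1 = fmul contract double %a, %b
  %r1 = fadd contract double %m1, %x
  br label %exit
else:
  %m2 = fmul contract double %a, %b
  %r2 = fadd contract double %m2, %y
  br label %exit
exit:
  %r = phi double [ %r1, %then ], [ %r2, %else ]
  ret double %r
}
```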

Reviewed By: nemanjai

Differential Revision: https://reviews.llvm.org/D76207
2020-03-19 00:17:25 -04:00
Huihui Zhang 2ea5495759 [InstCombine][SVE] Fix InstCombiner::visitAllocaInst for scalable vector.
Summary:
DataLayout::getTypeAllocSize() returns TypeSize. For cases where the scalable
property doesn't matter (the check for a zero-sized alloca), we should explicitly
call getKnownMinSize() to avoid implicit type conversion to uint64_t, which is
invalid for scalable vector type.

Reviewers: sdesmalen, efriedma, spatel, apazos

Reviewed By: efriedma

Subscribers: tschuett, hiraditya, rkruppe, psnobl, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D76386
2020-03-18 20:57:14 -07:00
Chen Zheng aacf022cd5 [PowerPC] add IR level isFMAFasterThanFMulAndFAdd - NFC
And also refactor legacy MIR level isFMAFasterThanFMulAndFAdd.

Reviewed By: steven.zhang

Differential Revision: https://reviews.llvm.org/D76265
2020-03-18 23:24:40 -04:00
Craig Topper c69a4d6bef [SelectionDAG] When splitting gathers/scatters in type legalization, set MMO size to UnknownSize
Gather/scatter don't access one memory location; they access multiple disjoint locations, so using a fixed size isn't accurate. But we don't have a way to represent the true behavior, so just use UnknownSize.

Previously we "split" the memory VT and used that size for the MMO of each half. But the memory VT is scalar, so splitting usually just returned the original scalar VT; on 32-bit X86, if the scalar VT was i64, it probably returned i32?

Differential Revision: https://reviews.llvm.org/D76388
2020-03-18 16:07:15 -07:00
Simon Pilgrim 99336bf95a [ValueTracking] Add computeKnownBits DemandedElts support to masked add instructions (PR36319) 2020-03-18 21:50:56 +00:00
Florian Hahn fd2c15e602 [VPlan] Do not print mapping for Value2VPValue.
The latest improvements to VPValue printing make this mapping clear when
printing the operand. Printing the mapping separately is not required
any longer.

Reviewers: rengolin, hsaito, Ayal, gilr

Reviewed By: Ayal

Differential Revision: https://reviews.llvm.org/D76375
2020-03-18 21:44:07 +00:00
Florian Hahn 00c1cd1934 [VPlan] Record underlying value for VPValues created by addVPValue (NFC).
Now that printing VPValues uses the underlying IR value name, if
available, recording the underlying value here improves printing.

Reviewers: rengolin, hsaito, Ayal, gilr

Reviewed By: Ayal

Differential Revision: https://reviews.llvm.org/D76374
2020-03-18 21:30:58 +00:00
Eli Friedman e24e95fe90 Remove CompositeType class.
The existence of the class is more confusing than helpful, I think; the
commonality is mostly just "GEP is legal", which can be queried using
APIs on GetElementPtrInst.

Differential Revision: https://reviews.llvm.org/D75660
2020-03-18 13:53:17 -07:00
Craig Topper 498b53890d [SelectionDAGBuilder][FPEnv] Take into account SelectionDAG continuous CSE when setting the nofpexcept flag for constrained intrinsics
SelectionDAG CSEs nodes based on their result type and operands, but not their flags. The flags are expected to be intersected when they are CSEd. In SelectionDAGBuilder, for FP nodes we manage both the fast math flags and the nofpexcept flag after the nodes have already been CSEd when they were created with getNode. The management of the fastmath flags before the constrained nodes prevents the nofpexcept management from working correctly.

This commit moves the FMF handling for constrained intrinsics into their visitor and disables the common FMF handling for these nodes.

Differential Revision: https://reviews.llvm.org/D75224
2020-03-18 13:37:17 -07:00
Simon Pilgrim 9d40292a64 [ValueTracking] Add computeKnownBits DemandedElts support to XOR instructions (PR36319) 2020-03-18 20:24:14 +00:00
lewis-revill e9f22fd429 [TableGen][GlobalISel] Account for HwMode in RegisterBank register sizes
This patch generates TableGen descriptions for the specified register
banks which contain a list of register sizes corresponding to the
available HwModes. The appropriate size is used during codegen according
to the current HwMode. As the HwMode is not available at generation time,
it is set upon construction of the RegisterBankInfo class. Targets
simply need to provide the HwMode argument to the
<target>GenRegisterBankInfo constructor.

The RISC-V RegisterBankInfo constructor has been updated accordingly
(plus an unused argument removed).

Differential Revision: https://reviews.llvm.org/D76007
2020-03-18 19:52:23 +00:00
Eli Friedman ebec984e14 [AliasAnalysis] Misc fixes for checking aliasing with scalable types.
This is fixing up various places that use the implicit
TypeSize->uint64_t conversion.

The new overloads in MemoryLocation.h are already used in various places
that construct a MemoryLocation from a TypeSize, including MemorySSA.
(They were using the implicit conversion before.)

Differential Revision: https://reviews.llvm.org/D76249
2020-03-18 12:28:47 -07:00
Simon Pilgrim 1010c44b4c [ValueTracking] Add computeKnownBits DemandedElts support to EXTRACTELEMENT/OR/BSWAP/BITREVERSE instructions (PR36319)
These are all covered by the bswap/bitreverse vector tests.
2020-03-18 18:49:58 +00:00
Nemanja Ivanovic e009fad342 [PowerPC] Remove UB from PPCInstrInfo when handling rotates fed by constants
As pointed out in https://bugs.llvm.org/show_bug.cgi?id=45232 this code can
end up shifting a 64-bit unsigned value left by 64 bits. Although this works
as expected on some platforms it is definitely UB. This patch removes the UB
and adds the associated test case.

Fixes: https://bugs.llvm.org/show_bug.cgi?id=45232
2020-03-18 13:40:39 -05:00
Simon Pilgrim 746bd860c9 Replace get*Alignment() methods with get*Align() equivalents.
Fixes deprecation warning in EXPENSIVE_CHECKS builds.
2020-03-18 18:25:07 +00:00
Jessica Paquette dc5f982639 [GlobalISel] Port some basic undef combines from DAGCombiner.cpp
This ports some combines from DAGCombiner.cpp which perform some trivial
transformations on instructions with undef operands.

Not having these can make it extremely annoying to find out where we differ
from SelectionDAG by looking at existing lit tests. Without them, we tend to
produce pretty bad code generation when we run into instructions which use
undef operands.

Also remove the nonpow2_store_narrowing testcase from arm64-fallback.ll, since
we no longer fall back on the add.

Differential Revision: https://reviews.llvm.org/D76339
2020-03-18 11:05:44 -07:00
Jin Lin 0d896278c8 Support repeated machine outlining
Summary: This change allows machine outlining to be applied N times, where N is specified by a compiler option; by default N is 1. The motivation is that repeated machine outlining can further reduce code size. Please refer to the presentation "Improving Swift Binary Size via Link Time Optimization" from the 2019 LLVM Developers' Meeting.

Reviewers: aschwaighofer, tellenbach, paquette

Reviewed By: paquette

Subscribers: tellenbach, hiraditya, llvm-commits, jinlin

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D71027
2020-03-18 10:48:52 -07:00
Florian Hahn e6a74803d4 [VPlan] Use underlying value for printing, if available.
When an underlying value is available, we can use its name for
printing, as discussed in D73078.

Reviewers: rengolin, hsaito, Ayal, gilr

Reviewed By: Ayal

Differential Revision: https://reviews.llvm.org/D76200
2020-03-18 17:46:57 +00:00
Simon Tatham e13d153c1b [ARM,MVE] Add intrinsics for the VQDMLAD family.
Summary:
This is another set of instructions too complicated to be sensibly
expressed in IR by anything short of a target-specific intrinsic.
Given input vectors a,b, the instruction generates intermediate values
2*(a[0]*b[0]+a[1]*b[1]), 2*(a[2]*b[2]+a[3]*b[3]), etc; takes the high
half of each double-width value, and overwrites half the lanes in the
output vector c, which you therefore have to provide the input value
of. Optionally you can swap the elements of b so that there are things
like a[0]*b[1]+a[1]*b[0]; optionally you can round to nearest when
taking the high half; and optionally you can take the difference
rather than the sum of the two products. Finally, saturation is applied
when converting back to a single-width vector lane.

Reviewers: dmgreen, MarkMurrayARM, miyuki, ostannard

Reviewed By: miyuki

Subscribers: kristof.beyls, hiraditya, cfe-commits

Tags: #clang

Differential Revision: https://reviews.llvm.org/D76359
2020-03-18 17:11:22 +00:00
Matt Arsenault 4ea1baf6a0 AMDGPU: Initial, crude support for indirect calls
This isn't really usable, and requires using the
-amdgpu-fixed-function-abi flag to work.

Assumes a uniform call target, and will hit a verifier error if the
call target ends up in a VGPR. Also doesn't attempt to do anything
sensible for the reported register/stack usage.
2020-03-18 12:03:48 -04:00
Matt Arsenault ea4597eef1 Reapply "AMDGPU/GlobalISel: Fully handle 0 dmask case during legalize"
This reverts commit 9bca8fc4cf.

Rearrange handling to avoid changing the instruction in the case where
it's going to be erased and replaced with undef.
2020-03-18 12:01:22 -04:00
Piotr Sobczak d1a7bfca74 [AMDGPU] Fix AMDGPUUnifyDivergentExitNodes
Summary:
For the case where "done" bits on existing exports are removed
by unifyReturnBlockSet(), unify all return blocks - even the
uniformly reached ones. We do not want to end up with a non-unified,
uniformly reached block containing a normal export with the "done"
bit cleared.

That case is believed to be rare - possible with infinite loops
in pixel shaders.

This is a fix for D71192.

Subscribers: arsenm, kzhuravl, jvesely, wdng, nhaehnle, yaxunl, dstuttard, tpr, t-tye, hiraditya, kerbowa, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D76364
2020-03-18 16:49:30 +01:00
Simon Pilgrim 06150e8356 [ValueTracking] Add computeKnownBits DemandedElts support to AND instructions (PR36319) 2020-03-18 15:38:15 +00:00
Sander de Smalen ef64ba8311 [InstCombine] GEPOperator::accumulateConstantOffset does not support scalable vectors
Avoid transforming:

 %0 = bitcast i8* %base to <vscale x 16 x i8>*
 %1 = getelementptr <vscale x 16 x i8>, <vscale x 16 x i8>* %0, i64 1

into:

 %0 = getelementptr i8, i8* %base, i64 16
 %1 = bitcast i8* %0 to <vscale x 16 x i8>*

Reviewers: efriedma, ctetreau

Reviewed By: efriedma

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D76236
2020-03-18 14:58:46 +00:00
Chris Bowler c21866476e [PowerPC][AIX] Implement by-val caller arguments in a single register.
This is the first of a series of patches that adds caller support for
by-value arguments. This patch adds support for arguments that are passed in a
single GPR.

There are 3 limitation cases:
-The by-value argument is larger than a single register.
-There are no remaining GPRs even though the by-value argument would
otherwise fit in a single GPR.
-The by-value argument requires alignment greater than register width.

Future patches will be required to add support for these cases as well
as for the callee handling (in LowerFormalArguments_AIX) that
corresponds to this work.

Differential Revision: https://reviews.llvm.org/D75863
2020-03-18 10:57:28 -04:00
Roman Lebedev 85334b030a
[NFCI][SCEV] Avoid recursion in SCEVExpander::isHighCostExpansion*()
Summary:
As noted in [[ https://bugs.llvm.org/show_bug.cgi?id=45201 | PR45201 ]],
[[ https://bugs.llvm.org/show_bug.cgi?id=10090 | PR10090 ]] SCEV doesn't
always avoid recursive algorithms, and that causes issues with
large expression depths and/or smaller stack sizes.

In `SCEVExpander::isHighCostExpansion*()` case, the refactoring to avoid
recursion is rather idiomatic. We simply need to place the root expr
into a vector, and iterate over vector elements accounting for the cost
of each one, adding new exprs at the end of the vector,
thus achieving recursion-less traversal.

The order in which we will visit exprs doesn't matter here,
so we will be fine with the most basic approach of using SmallVector
and inserting/extracting from the back, which accidentally is the same
depth-first traversal that we were doing previously recursively.

Reviewers: mkazantsev, reames, wmi, ekatz

Reviewed By: mkazantsev

Subscribers: hiraditya, javed.absar, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D76273
2020-03-18 17:10:54 +03:00
Oliver Stannard 73cea83a6f [IPRA][ARM] Spill extra registers at -Oz
When optimising for code size at the expense of performance, it is often
worth saving and restoring some of r0-r3, if IPRA will be able to take
advantage of them. This doesn't cost any extra code size if we already
have a PUSH/POP pair, and increases the number of available registers
across any calls to the function.

We already have an optimisation which tries to fold the subtract/add of the
SP into the PUSH/POP by using extra registers, which somewhat conflicts
with this. I've made the new optimisation less aggressive in cases where
the existing one is likely to trigger, which gives better results than
either of these optimisations by themselves.

Differential revision: https://reviews.llvm.org/D69936
2020-03-18 13:51:16 +00:00
Guillaume Chatelet d000655a8c [Alignment][NFC] Deprecate getMaxAlignment
Summary:
This is patch is part of a series to introduce an Alignment type.
See this thread for context: http://lists.llvm.org/pipermail/llvm-dev/2019-July/133851.html
See this patch for the introduction of the type: https://reviews.llvm.org/D64790

Reviewers: courbet

Subscribers: jholewinski, arsenm, dschuff, jyknight, sdardis, nemanjai, jvesely, nhaehnle, sbc100, jgravelle-google, hiraditya, aheejin, kbarton, fedor.sergeev, asb, rbar, johnrusso, simoncook, sabuasal, niosHD, jrtc27, MaskRay, zzheng, edward-jones, atanasyan, rogfer01, MartinMosbeck, brucehoult, the_o, PkmX, jocewei, Jim, lenary, s.egerton, pzheng, sameer.abuasal, apazos, luismarques, kerbowa, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D76348
2020-03-18 14:48:45 +01:00
Oliver Stannard 6739805e24 [ARM] Track epilogue instructions with FrameDestroy flag (NFC)
Rather than trying to work out which instructions are part of the
epilogue by examining them, we can just mark them with the FrameDestroy
flag, like we do in the AArch64 backend.
2020-03-18 13:32:59 +00:00
Francesco Petrogalli 9bdcd9bf44 [llvm][SVE] Addressing mode for FF/NF loads.
Summary:
This patch adds addressing mode computation for the following SVE
instructions:

* ldff1{s}<T1> { <Zt>.<T2> }, <Pg>/Z, [<Xn|SP>{, <Xm>{, lsl #imm}}]
* ldnf1{s}<T1> { <Zt>.<T2> }, <Pg>/Z, [<Xn|SP>{, #<imm>, mul vl}]

Reviewers: andwar, sdesmalen, rengolin, efriedma

Subscribers: tschuett, hiraditya, rkruppe, psnobl, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D76209
2020-03-18 12:46:07 +00:00
Sander de Smalen 4788ca450f [AArch64][SVE] Change pointer type of nontemporal load/store intrinsics
Summary:
This fixes a discrepancy between the non-temporal load/store
intrinsics and other SVE load intrinsics (such as nf/ff), so
that Clang can use the same code to generate these intrinsics.

Reviewers: andwar, kmclaughlin, rengolin, efriedma

Reviewed By: efriedma

Subscribers: tschuett, kristof.beyls, hiraditya, rkruppe, psnobl, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D76237
2020-03-18 12:44:51 +00:00
Danila Malyutin 940ba1465b Fix possible assertion when using PBQP with debug info
Skip debug instructions before calling functions not expecting them.
In particular, LIS.getInstructionIndex(*mi) would fail if mi was a debug instruction.

Differential Revision: https://reviews.llvm.org/D76129
2020-03-18 15:29:42 +03:00
Simon Pilgrim f4e495a18e [InstCombine][X86] simplifyX86varShift - convert variable in-range per-element shift amounts to generic shifts (PR40391)
AVX2/AVX512 per-element shifts can be replaced with generic shifts if the shift amounts are guaranteed to be in-range (upper bits are known zero).
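
For example (hypothetical IR), masking every per-element shift amount with 31 guarantees the amounts are in range, so the target intrinsic can be rewritten as a plain shl:

```
declare <8 x i32> @llvm.x86.avx2.psllv.d.256(<8 x i32>, <8 x i32>)

define <8 x i32> @f(<8 x i32> %v, <8 x i32> %amt) {
  ; the upper bits of each shift amount are known zero after this mask ...
  %inrange = and <8 x i32> %amt, <i32 31, i32 31, i32 31, i32 31,
                                  i32 31, i32 31, i32 31, i32 31>
  ; ... so this per-element variable shift can be replaced with
  ;   %r = shl <8 x i32> %v, %inrange
  %r = call <8 x i32> @llvm.x86.avx2.psllv.d.256(<8 x i32> %v, <8 x i32> %inrange)
  ret <8 x i32> %r
}
```
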
2020-03-18 11:26:54 +00:00
Simon Tatham 928776de92 [ARM,MVE] Add intrinsics for the VQDMLAH family.
Summary:
These are complicated integer multiply+add instructions with extra
saturation, taking the high half of a double-width product, and
optional rounding. There's no sensible way to represent that in
standard IR, so I've converted the clang builtins directly to
target-specific intrinsics.

Reviewers: dmgreen, MarkMurrayARM, miyuki, ostannard

Reviewed By: miyuki

Subscribers: kristof.beyls, hiraditya, cfe-commits

Tags: #clang

Differential Revision: https://reviews.llvm.org/D76123
2020-03-18 10:55:04 +00:00
Simon Tatham 28c5d97bee [ARM,MVE] Add intrinsics and isel for MVE integer VMLA.
Summary:
These instructions compute multiply+add in integers, with one of the
operands being a splat of a scalar. (VMLA and VMLAS differ in whether
the splat operand is a multiplier or the addend.)

I've represented these in IR using existing standard IR operations for
the unpredicated forms. The predicated forms are done with target-
specific intrinsics, as usual.

When operating on n-bit vector lanes, only the bottom n bits of the
i32 scalar operand are used. So we have to tell that to isel lowering,
to allow it to remove a pointless sign- or zero-extension instruction
on that input register. That's done in `PerformIntrinsicCombine`, but
first I had to enable `PerformIntrinsicCombine` for MVE targets
(previously all the intrinsics it handled were for NEON), and make it
a method of `ARMTargetLowering` so that it can get at
`SimplifyDemandedBits`.

Reviewers: dmgreen, MarkMurrayARM, miyuki, ostannard

Reviewed By: dmgreen

Subscribers: kristof.beyls, hiraditya, danielkiss, cfe-commits

Tags: #clang

Differential Revision: https://reviews.llvm.org/D76122
2020-03-18 10:55:04 +00:00
Florian Hahn 5672ae8d86 [SCCP] Use constant ranges for select, if cond is overdefined.
For selects with an unknown condition, we can approximate the result by
merging the state of both options. This automatically takes care of
the case where one operand is undef.

Reviewers: davide, efriedma, mssimpso

Reviewed By: efriedma

Differential Revision: https://reviews.llvm.org/D71935
2020-03-18 09:26:02 +00:00
Guillaume Chatelet c3df69faa0 [Alignment][NFC] Deprecate getTransientStackAlignment
Summary:
This is patch is part of a series to introduce an Alignment type.
See this thread for context: http://lists.llvm.org/pipermail/llvm-dev/2019-July/133851.html
See this patch for the introduction of the type: https://reviews.llvm.org/D64790

Reviewers: courbet

Subscribers: jholewinski, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D76301
2020-03-18 09:02:48 +01:00
Pengfei Wang 974d649f8e CET for Exception Handle
Summary:
Bug fix for https://bugs.llvm.org/show_bug.cgi?id=45182
The exception handler may indirectly jump to a catch pad, so we should add an ENDBR instruction before the catch pad instructions.

Reviewers: craig.topper, hjl.tools, LuoYuanke, annita.zhang, pengfei

Reviewed By: LuoYuanke

Subscribers: hiraditya, llvm-commits

Patch By: Xiang Zhang (xiangzhangllvm)

Differential Revision: https://reviews.llvm.org/D76190
2020-03-17 22:35:05 -07:00
Vitaly Buka 9bca8fc4cf Revert "AMDGPU/GlobalISel: Fully handle 0 dmask case during legalize"
The patch introduced use-after-poison.

This reverts commit d0fe13ecf9.
2020-03-17 22:04:14 -07:00
QingShan Zhang d577193c0f [DAGCombine] Respect the uses when combine FMA for a*b+/-c*d
If it is a*b-c*d, it could also be folded into fma(a, b, -c*d) or fma(-c, d, a*b).
This patch is trying to respect the uses of a*b and c*d to make the best choice.

Differential Revision: https://reviews.llvm.org/D75982
2020-03-18 03:34:27 +00:00
Jin Lin 7b166d5182 Revert "Support repeated machine outlining"
This reverts commit ab2dcff309.
2020-03-17 18:33:55 -07:00
Jin Lin ab2dcff309 Support repeated machine outlining
Summary: This change allows machine outlining to be applied N times, where N is specified by a compiler option; by default N is 1. The motivation is that repeated machine outlining can further reduce code size. Please refer to the presentation "Improving Swift Binary Size via Link Time Optimization" from the 2019 LLVM Developers' Meeting.

Reviewers: aschwaighofer, tellenbach, paquette

Reviewed By: paquette

Subscribers: tellenbach, hiraditya, llvm-commits, jinlin

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D71027
2020-03-17 18:11:08 -07:00
Nico Weber 4e0fe038f4 Revert "Avoid emitting unreachable SP adjustments after `throw`"
This reverts commit 65b21282c7.
Breaks sanitizer bots (https://reviews.llvm.org/D75712#1927668)
and causes https://crbug.com/1062021 (which may or may not
be a compiler bug, not clear yet).
2020-03-17 20:49:22 -04:00
Michael Liao f2f8bdc2b1 Fix `-Wunused-variable` warning. NFC. 2020-03-17 20:15:50 -04:00
Matt Arsenault c9b454a1b7 AMDGPU/GlobalISel: Fix verifier errors on image atomics 2020-03-17 20:06:25 -04:00
Scott Linder 68f163df0e [AMDGPU] Print DWARF register numbers in AMDGPUInstPrinter
Summary:
Explanation is in a comment in the diff, but essentially printing a
physical register name here is ambiguous. Until we can implement
printing a DWARF register name here just use the encoding directly.

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D76253
2020-03-17 19:42:10 -04:00
Jian Cai 6a38e0e4f5 [MC] Recalculate fragment offsets after relaxation
Summary:
The current relaxation implementation does not correctly adjust the
size and offsets of fragments in one section based on changes in the size
of another if the layout order of the two happened to be such that the
former was visited before the latter. Therefore, we need to invalidate
the fragments in all sections after each iteration of relaxation, and
possibly further relax some of them in the next iteration. This fixes
PR45190.

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D76114
2020-03-17 14:48:05 -07:00
Simon Pilgrim 68224c1952 [TargetLowering] Only demand a rotation's modulo amount bits
ISD::ROTL/ROTR rotation values are guaranteed to act as a modulo amount, so for power-of-2 bitwidths we only need the lowest bits.

Differential Revision: https://reviews.llvm.org/D76201
2020-03-17 21:23:46 +00:00
Florian Hahn a72ae99cf9 [SCCP] Split up callsite handling, only propagate result on change (NFC)
Functions include their arguments in the use-list. Changed function
values mean that the result of the function changed. We only need
to update the call sites with the new function result and do not
have to propagate the call arguments.

To do so, this patch splits up the visitCallSite into handleCallResult
and handleCallArguments and updates markUsersAsChanged to only update
call results for functions.

Reviewers: efriedma, davide

Reviewed By: efriedma

Differential Revision: https://reviews.llvm.org/D75846
2020-03-17 20:05:35 +00:00
Vedant Kumar 526c51e6fd [DwarfDebug] Fix an assertion error when emitting call site info that combines two DW_OP_stack_values
When compiling

```
struct S {
  float w;
};
void f(long w, long b);
void g(struct S s) {
  int w = s.w;
  f(w, w*4);
}
```

I get Assertion failed: ((!CombinedExpr || CombinedExpr->isValid()) && "Combined debug expression is invalid").

That's because we combine two expressions that both end in DW_OP_stack_value:

```
(lldb) p Expr->dump()
!DIExpression(DW_OP_LLVM_convert, 32, DW_ATE_signed, DW_OP_LLVM_convert, 64, DW_ATE_signed, DW_OP_stack_value)
(lldb) p Param.Expr->dump()
!DIExpression(DW_OP_constu, 4, DW_OP_mul, DW_OP_LLVM_convert, 32, DW_ATE_signed, DW_OP_LLVM_convert, 64, DW_ATE_signed, DW_OP_stack_value)
(lldb) p CombinedExpr->isValid()
(bool) $0 = false
(lldb) p CombinedExpr->dump()
!DIExpression(4097, 32, 5, 4097, 64, 5, 16, 4, 30, 4097, 32, 5, 4097, 64, 5, 159, 159)
```

I believe that in this particular case combining two stack values is
safe, but I didn't want to sink the special handling into
DIExpression::append() because I do want everyone to think about what
they are doing.

Patch by Adrian Prantl.

Fixes PR45181.
rdar://problem/60383095

Differential Revision: https://reviews.llvm.org/D76164
2020-03-17 12:51:49 -07:00
Scott Constable 080dd10f7d Move RDF from Hexagon to Codegen
RDF is designed to be target agnostic. Therefore it would be useful to have it available for other targets, such as X86.

Based on a previous patch by Krzysztof Parzyszek

Differential Revision: https://reviews.llvm.org/D75932
2020-03-17 12:43:14 -07:00
Sanjay Patel be9e3d9416 [InstCombine] reduce demand-limited bool math to logic, part 2
Follow-on suggested in:
D75961
2020-03-17 15:18:18 -04:00
Huihui Zhang 1bf0c99375 [ValueTracking][SVE] Fix isGEPKnownNonNull for scalable vector.
Summary:
DataLayout::getTypeAllocSize() return TypeSize. For cases where the
scalable property doesn't matter, we should explicitly call getKnownMinSize()
to avoid implicit type conversion to uint64_t, which is not valid for scalable
vector type.

Reviewers: sdesmalen, efriedma, apazos, reames

Reviewed By: efriedma

Subscribers: tschuett, hiraditya, rkruppe, psnobl, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D76260
2020-03-17 11:31:30 -07:00
Craig Topper 98369178bc [SelectionDAGBuilder] Don't set MachinePointerInfo for gather when we find a uniform base
I believe we were previously calculating a pointer info with the scalar base and an offset of 0. But that's not really where the gather is pointing. The offset is a function of the indices of the GEP we looked through.

Also set the size of the MachineMemOperand to UnknownSize

Differential Revision: https://reviews.llvm.org/D76157
2020-03-17 11:03:45 -07:00
Jin Lin b9f1b8be1c Revert "Support repeated machine outlining"
This reverts commit 1f93b162fc.
2020-03-17 10:03:27 -07:00
Sebastian Neubauer 6e29846b29 [AMDGPU] Fix whole wavefront mode
We cannot move wwm over exec copies because the exec register needs an exact exec mask.

Differential Revision: https://reviews.llvm.org/D76232
2020-03-17 17:23:23 +01:00
Jin Lin 1f93b162fc Support repeated machine outlining
Summary: This change allows machine outlining to be applied N times, where N is specified by a compiler option; by default N is 1. The motivation is that repeated machine outlining can further reduce code size. Please refer to the presentation "Improving Swift Binary Size via Link Time Optimization" from the 2019 LLVM Developers' Meeting.

Reviewers: aschwaighofer, tellenbach, paquette

Reviewed By: paquette

Subscribers: tellenbach, hiraditya, llvm-commits, jinlin

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D71027
2020-03-17 09:16:11 -07:00
Simon Pilgrim c9656a3b31 [DAGCombiner] matchRotateSub - handle shift amount truncation
Under certain circumstances we'll end up in the position where the negated shift amount will get truncated to the type specified by getScalarShiftAmountTy(), so we need to test for a truncated version of the shift amount as well.

This allows us to remove half of the remaining patterns tested for by X86ISelLowering's combineOrShiftToFunnelShift.
2020-03-17 16:01:23 +00:00
Simon Pilgrim 34b8a0d599 [Verifier] Silence static analyzer null dereference warning (PR45118)
As discussed on PR45118, getInlinedAtScope() shouldn't ever return null. So we can simplify the logic to an assertion and remove all other null tests.
2020-03-17 16:01:23 +00:00
Matt Arsenault 039c917b43 AMDGPU/GlobalISel: Fix asserting on gather4 intrinsics 2020-03-17 11:07:30 -04:00
Tyker e8ac825f5b [AssumeBundles] Detection of Empty bundles
Summary: Prevent InstCombine from removing llvm.assume calls whose argument is true when they have operand bundles with useful information.

Reviewers: jdoerfert, nikic, lebedev.ri

Reviewed By: jdoerfert

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D76147
2020-03-17 15:50:15 +01:00
alex-t 48a9cf9043 [AMDGPU] Enable SEXT divergence driven selection.
Summary: This change enables the divergence-driven selection for the SEXT DAG opcode.

Reviewers: vpykhtin, rampitec

Reviewed By: vpykhtin

Subscribers: arsenm, kzhuravl, jvesely, wdng, nhaehnle, yaxunl, dstuttard, tpr, t-tye, hiraditya, kerbowa, llvm-commits

Differential Revision: https://reviews.llvm.org/D76230
2020-03-17 17:30:11 +03:00
Simon Atanasyan 73b1da1605 [MIPS] Implement MIPS3D vector instructions
Patch by Michael Roe.

Differential Revision: https://reviews.llvm.org/D76247
2020-03-17 17:17:51 +03:00
Matt Arsenault d0fe13ecf9 AMDGPU/GlobalISel: Fully handle 0 dmask case during legalize
For normal loads, fully eliminate the load. For the TFE case, adjust
the dmask value in the instruction so the selector doesn't need to
handle it. For the TFE special case, I guess it would be possible to
replace the loaded data register with undef, but as-is this will start
treating it as a well defined value.
2020-03-17 10:15:30 -04:00
Matt Arsenault d9a012ed8a AMDGPU/GlobalISel: Adjust image load register type based on dmask
Trim elements that won't be written. The equivalent still needs to be
done for writes. Also start widening 3 elements to 4
elements. Selection will get the count from the dmask.
2020-03-17 10:09:18 -04:00
Matt Arsenault 83ffbf2618 AMDGPU/GlobalISel: Legalize non-a16 non-NSA images 2020-03-17 10:02:09 -04:00
Matt Arsenault 2aba9b6cf8 AMDGPU/GlobalISel: Legalize a16 images
Pack the address registers in the legalizer. Avoid introducing a huge
family of new intermediate operations by filling dead operands with
noreg.
2020-03-17 10:02:09 -04:00
serge-sans-paille ac1d23ed7d Replace MCTargetOptionsCommandFlags.inc and CommandFlags.inc by runtime registration
MCTargetOptionsCommandFlags.inc and CommandFlags.inc are headers which contain
cl::opt with static storage.
These headers are meant to be included by tools to make it easier to parametrize
codegen/mc.

However, these headers are also included in at least two libraries: lldCommon
and handle-llvm. As a result, when creating DYLIB, clang-cpp holds a reference
to the options, and lldCommon holds another reference. Linking the two in a
single executable, as zig does[0], results in a double registration.

This patch explores another approach: the .inc files are moved to regular
files, and the registration happens on-demand through static declaration of
options in the constructor of a static object.

[0] https://bugzilla.redhat.com/show_bug.cgi?id=1756977#c5

Differential Revision: https://reviews.llvm.org/D75579
2020-03-17 14:01:30 +01:00
Florian Hahn 1d6f919df2 [SCCP] Explicitly mark values as overdefined (NFC).
This was part of D60582 but can be committed separately.
2020-03-17 12:13:30 +00:00
John Brawn c09368313c [StackProtector] Catch direct out-of-bounds when checking address-takenness
With -fstack-protector-strong we check if a non-array variable has its address
taken in a way that could cause a potential out-of-bounds access. However what
we don't catch is when the address is directly used to create an out-of-bounds
memory access.

Fix this by examining the offsets of GEPs that are ultimately derived from
allocas and checking if the resulting address is out-of-bounds, and by checking
that any memory operations using such addresses are not over-large.
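
A minimal sketch (hypothetical IR) of the kind of direct out-of-bounds access that is now caught:

```
define void @f() sspstrong {
  %buf = alloca i32
  ; this GEP derived from the alloca points past the single i32 slot, so the
  ; store through it is out of bounds and the function now gets a protector
  %oob = getelementptr i32, i32* %buf, i64 4
  store i32 0, i32* %oob
  ret void
}
```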

Fixes PR43478.

Differential revision: https://reviews.llvm.org/D75695
2020-03-17 12:09:07 +00:00
Georgii Rymar 4dd5f1ca9b [yaml2obj] - Add `ELFYAML::YAMLIntUInt` to fix how we parse a relocation `Addend` key.
This patch changes `Relocation::Addend` to be `ELFYAML::YAMLIntUInt` instead of `int64_t`.

`ELFYAML::YAMLIntUInt` it is a new type and it has the following benefits/features:

1) For a 64-bit object, any hex/decimal addend
   in the range [INT64_MIN, UINT64_MAX] is accepted.
2) For a 32-bit object, any hex/decimal addend
   in the range [INT32_MIN, UINT32_MAX] is accepted.
3) Negative hex numbers like -0xffffffff are not accepted.
4) It is printed as decimal. I.e. obj2yaml will print
   something like "Addend: 125", this matches the current behavior.

This fixes all FIXMEs in `relocation-addend.yaml`.

Differential revision: https://reviews.llvm.org/D75527
2020-03-17 14:22:19 +03:00
Kazushi (Jam) Marukawa 6bbbead7be [VE] Move VEInstPrinter.cpp and VEInstPrinter.h into MCTargetDesc
Summary:
Move them into MCTargetDesc to follow other architectures (a263aa2).

Reviewed By: simoll

Differential Revision: https://reviews.llvm.org/D76270
2020-03-17 11:33:54 +01:00
QingShan Zhang b83490bdb7 [PowerPC] Fix a typo of the condition of checking the fusion candidate 2020-03-17 10:04:18 +00:00
Roman Lebedev 398b497cd0
[NFC] LoopRotate: do issue debug message when not rotating due to instr count
It is somewhat problematic to notice this issue otherwise.
2020-03-17 09:26:09 +03:00
QingShan Zhang 0b126eec6d [NFC][PowerPC] Simplify the logic in lower select_cc
The logic in select_cc is messy and hard to follow. This is a NFC patch to simplify the logic.

Differential Revision: https://reviews.llvm.org/D75834
2020-03-17 03:47:39 +00:00
Serguei Katkov 80c351cdb6 [InstCombine] Transform to undef incorrect atomic unordered mem intrinsics
According to LangRef:
If len is not a positive integer multiple of element_size, then the behaviour of the intrinsic is undefined.

Add an InstCombine rule to transform such intrinsic calls into an undef operation.

This is a follow-up for D76116.
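
For example (illustrative only; the exact replacement is whatever the new rule emits), the call below has a length that is not a multiple of the element size, so its behaviour is undefined and InstCombine may now replace it:

```
declare void @llvm.memcpy.element.unordered.atomic.p0i8.p0i8.i32(
    i8* nocapture writeonly, i8* nocapture readonly, i32, i32)

define void @f(i8* %dst, i8* %src) {
  ; len = 7 is not a positive integer multiple of element_size = 4,
  ; which LangRef declares to be undefined behaviour
  call void @llvm.memcpy.element.unordered.atomic.p0i8.p0i8.i32(
      i8* align 4 %dst, i8* align 4 %src, i32 7, i32 4)
  ret void
}
```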

Reviewers: reames
Reviewed By: reames
Subscribers: hiraditya, jfb, dantrushin, llvm-commits
Differential Revision: https://reviews.llvm.org/D76215
2020-03-17 10:20:16 +07:00
Vitaly Buka f20dcc31e3 Fix unused function warning 2020-03-16 19:45:36 -07:00
Michael Liao d00d6a19dd Fix `-Wpedantic` warning. NFC. 2020-03-16 22:06:23 -04:00
Shengchen Kan 39bcc76a92 [X86] Disable nop padding before instruction following hardcode
Reviewers: reames, MaskRay, craig.topper, LuoYuanke, jyknight

Reviewed By: LuoYuanke

Subscribers: annita.zhang, llvm-commits, hiraditya

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D76176
2020-03-17 09:45:12 +08:00
Douglas Gliner ac48b1885a [Support] Change isatty to is_displayed
Currently, when building with the Unix support library and `isatty` does
not exist for the target platform (i.e. `HAVE_ISATTY` is false),
compilation of the file `raw_ostream.cpp` will fail due to direct use of
`isatty` in the function `raw_fd_ostream::preferred_buffer_size()`.

Use is_displayed() to fix the problem.

Reviewed By: probinson, MaskRay

Differential Revision: https://reviews.llvm.org/D75278
2020-03-16 18:27:43 -07:00
Craig Topper 85726bbcba [X86] Disable fast-isel call lowering for functions with vXi1 arguments on avx512.
This fails an assert because the type is marked in the calling
convention td file as needing promotion, but the code doesn't know
how to do it.

It's also much more complicated because we try to pass these in
xmm/ymm/zmm registers. As of a few weeks ago we do this promotion
from getRegisterTypeForCallingConv before the td file generated
code gets involved.
2020-03-16 18:20:42 -07:00
Eric Christopher 8b3b04eb41 Make isValidImmForSVEVecImmAddrMode inline static rather than just static.
Fixes -Werror builds.
2020-03-16 17:39:01 -07:00
Evgenii Stepanov 2a3723ef11 [memtag] Plug in stack safety analysis.
Summary:
Run StackSafetyAnalysis at the end of the IR pipeline and annotate
proven safe allocas with !stack-safe metadata. Do not instrument such
allocas in the AArch64StackTagging pass.

Reviewers: pcc, vitalybuka, ostannard

Reviewed By: vitalybuka

Subscribers: merge_guards_bot, kristof.beyls, hiraditya, cfe-commits, gilang, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D73513
2020-03-16 16:35:25 -07:00
Sriraman Tallam df082ac45a Basic Block Sections support in LLVM.
This is the second patch in a series of patches to enable basic block
sections support.

This patch adds support for:

* Creating direct jumps at the end of basic blocks that have fall
through instructions.
* New pass, bbsections-prepare, that analyzes placement of basic blocks
in sections.
* Actual placing of a basic block in a unique section with special
handling of exception handling blocks.
* Supports placing a subset of basic blocks in a unique section.
* Support for MIR serialization and deserialization with basic block
sections.

Parent patch : D68063
Differential Revision: https://reviews.llvm.org/D73674
2020-03-16 16:06:54 -07:00
Craig Topper 378b1e6080 [X86] Assign avx512bf16 instructions to the SSEPackedSingle ExeDomain. 2020-03-16 14:07:01 -07:00
Benjamin Kramer 05ff3323e0 [AArch64] Remove unused variable 2020-03-16 21:59:55 +01:00
Nico Weber 623cb95eb3 Revert "[InstSimplify] Simplify calls with "returned" attribute"
This reverts commit 45555c3819.
Causes clang crashes in some cases; see comments on
https://reviews.llvm.org/D75815 for details (including
repro steps).
2020-03-16 15:21:30 -04:00
Francesco Petrogalli 0f2b68d9c7 Implement IR intrinsics for gather prefetch.
Summary:
Intrinsics and relative codegen has been implemented for the following
SVE instructions:

1. PRF<T> <prfop>, <Pg>, [<Xn|SP>, <Zm>.S, <mod>] -> 32-bit          scaled offset
2. PRF<T> <prfop>, <Pg>, [<Xn|SP>, <Zm>.D, <mod>] -> 32-bit unpacked scaled offset
3. PRF<T> <prfop>, <Pg>, [<Xn|SP>, <Zm>.D]        -> 64-bit          scaled offset
4. PRF<T> <prfop>, <Pg>, [<Zn>.S{, #<imm>}]       -> 32-bit element
5. PRF<T> <prfop>, <Pg>, [<Zn>.D{, #<imm>}]       -> 64-bit element

The instructions are associated with the following intrinsics, respectively:

1. void @llvm.aarch64.sve.gather.prf<T>.scaled.<mod>.nx4vi32(
          i8* %base,
          <vscale x 4 x i32> %offset,
          <vscale x 4 x i1> %Pg,
          i32 %prfop)

2. void @llvm.aarch64.sve.gather.prf<T>.scaled.<mod>.nx2vi32(
          i8* %base,
          <vscale x 2 x i32> %offset,
          <vscale x 2 x i1> %Pg,
          i32 %prfop)

3. void @llvm.aarch64.sve.gather.prf<T>.scaled.nx2vi64(
          i8* %base,
          <vscale x 2 x i64> %offset,
          <vscale x 2 x i1> %Pg,
          i32 %prfop)

4. void @llvm.aarch64.sve.gather.prf<T>.nx4vi32(
          <vscale x 4 x i32> %bases,
          i64 %imm,
          <vscale x 4 x i1> %Pg,
          i32 %prfop)

5. void @llvm.aarch64.sve.gather.prf<T>.nx2vi64(
          <vscale x 2 x i64> %bases,
          i64 %imm,
          <vscale x 2 x i1> %Pg,
          i32 %prfop)

The intrinsics are the IR counterpart of the following SVE ACLE functions:

* void svprf<T>(svbool_t pg, const void *base, svprfop op)
* void svprf<T>_vnum(svbool_t pg, const void *base, int64_t vnum, svprfop op)
* void svprf<T>_gather[_u32base](svbool_t pg, svuint32_t bases, svprfop op)
* void svprf<T>_gather[_u64base](svbool_t pg, svuint64_t bases, svprfop op)
* void svprf<T>_gather_[s32]offset(svbool_t pg, const void *base, svint32_t offsets, svprfop op)
* void svprf<T>_gather_[u32]offset(svbool_t pg, const void *base, svint32_t offsets, svprfop op)
* void svprf<T>_gather_[s64]offset(svbool_t pg, const void *base, svint64_t offsets, svprfop op)
* void svprf<T>_gather_[u64]offset(svbool_t pg, const void *base, svint64_t offsets, svprfop op)
* void svprf<T>_gather[_u32base]_offset(svbool_t pg, svuint32_t bases, int64_t offset, svprfop op)
* void svprf<T>_gather[_u64base]_offset(svbool_t pg, svuint64_t bases,int64_t offset, svprfop op)

Reviewers: andwar, sdesmalen, efriedma, rengolin

Subscribers: tschuett, hiraditya, rkruppe, psnobl, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D75580
2020-03-16 18:52:35 +00:00
Huihui Zhang 0616e9964b [InstSimplify][SVE] Fix SimplifyGEPInst for scalable vector.
Summary:
Skip folds that rely on DataLayout::getTypeAllocSize(). For scalable
vectors, only the minimal type alloc size is known at compile time.

Reviewers: sdesmalen, efriedma, spatel, apazos

Reviewed By: efriedma

Subscribers: tschuett, hiraditya, rkruppe, psnobl, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D75892
2020-03-16 11:46:12 -07:00
Matt Arsenault b0bdb186f5 Utils: Always set alignment when expanding mem intrinsics
This was creating naturally aligned loads and stores, which may not be
correct: the target could request a load of a wider type with less
alignment.
2020-03-16 14:34:29 -04:00
Matt Arsenault 05e7d8d6ce TTI: Add addrspace parameters to memcpy lowering functions 2020-03-16 14:34:29 -04:00
Nico Weber 9e48422035 Revert "[llvm-objdump] Display locations of variables alongside disassembly"
Makes tests fail on Windows, see https://reviews.llvm.org/D70720#1924542

This reverts commit 3a5ddedadb, and
follow-ups:
f4cb9c919e
042eb0482a
c0cf5f5da9
18649f4813
f62b898c1f
2020-03-16 14:04:25 -04:00
Simon Pilgrim ebb181cf40 [X86] matchScalarReduction - add support for partial reductions
Add optional support for opt-in partial reduction cases by providing an optional partial mask to indicate which elements have been extracted for the scalar reduction.
2020-03-16 18:01:02 +00:00
Matt Arsenault 2e77362626 GlobalISel: Fix lower bswap for vectors
This would hit an assertion from trying to use the wrong bitwidth for
the constants.
2020-03-16 13:59:08 -04:00
Matt Arsenault 80b627d69d AMDGPU/GlobalISel: Fix handling of G_ANYEXT with s1 source
We were letting G_ANYEXT with a vcc register bank through, which was
incorrect and would select to an invalid copy. Fix this up like G_ZEXT
and G_SEXT. Also drop old code to fixup the non-boolean case in
RegBankSelect. We now have to perform that expansion during selection,
so there's no benefit to doing it during RegBankSelect.
2020-03-16 12:59:54 -04:00
Matt Arsenault c460dc6eeb AMDGPU/GlobalISel: Fix some illegal scalar argument types
Fixes integers that don't evenly divide to i32 pieces. We should
probably extract some of the code in the legalizer to start handling
argument breakdowns. I'm dissatisfied with the argument lowering's
handling of vectors for example, and we should not be producing the
weird G_EXTRACTs we do now.
2020-03-16 12:51:23 -04:00
Juneyoung Lee 07a41544fd Minor fix to a comment in CodeGenPrepare.cpp 2020-03-17 01:10:26 +09:00
Matt Arsenault 84386b2d8a AMDGPU: Drop special case f64 fround lowering
The result is better if ftrunc is emitted and separately legalized
when unavailable.
2020-03-16 12:09:30 -04:00
Matt Arsenault 19a0350187 GlobalISel: Fix round lowering
I used the implementation for floor instead of round. It also turns
out the OpenCL builtin library wasn't using the round builtin, but
implemented the expanded form.
2020-03-16 11:37:30 -04:00
Dominik Montada 8ff2dcb18b [GlobalISel] add additional lowering support for G_INSERT
Summary: Add lowering support for inserting pointers or scalars into scalars, vectors or pointers

Reviewers: arsenm, dsanders

Reviewed By: arsenm

Subscribers: jvesely, wdng, nhaehnle, rovka, hiraditya, volkan, kerbowa, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D75994
2020-03-16 16:27:17 +01:00
Matt Arsenault 57d896e838 AMDGPU/GlobalISel: Make some large merges legal
We allow up to 1024-bit registers, so we should support merges all the
way to the maximum.
2020-03-16 10:49:10 -04:00
Fangrui Song 536ba6373f [Object] Change ELFObjectFile<ELFT>::getFileFormatName() to use BFD names
Follow-up for D74433

What the function returns are almost standard BFD names, except that "ELF" is
in uppercase instead of lowercase.

This patch changes "ELF" to "elf" and changes ARM/AArch64 to use their BFD names.
MIPS and PPC64 have endianness differences as well, but this patch does not intend to address them.

Advantages:

* llvm-objdump: the "file format " line matches GNU objdump on ARM/AArch64 objects
* "file format " line can be extracted and fed into llvm-objcopy -O literally.
  (https://github.com/ClangBuiltLinux/linux/issues/779 has such a use case)

Affected tools: llvm-readobj, llvm-objdump, llvm-dwarfdump, MCJIT (internal implementation detail, not exposed)

Reviewed By: jhenderson

Differential Revision: https://reviews.llvm.org/D76046
2020-03-16 07:42:04 -07:00
Simon Pilgrim 2b3b453a82 [TargetLowering] Only demand a funnelshift's modulo amount bits
ISD::FSHL/FSHR shift amount values are guaranteed to act as a modulo amount, so for power-of-2 bitwidths we only need the lowest bits.
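
The change itself is on the ISD nodes, but roughly the same property at the IR level looks like this (hypothetical example): an i32 funnel shift only demands the low 5 bits of its amount, so an explicit mask like the one below is redundant.

```
declare i32 @llvm.fshl.i32(i32, i32, i32)

define i32 @rot(i32 %x, i32 %amt) {
  ; only the low 5 bits of %amt affect an i32 funnel shift, so this mask
  ; contributes nothing that SimplifyDemandedBits needs to keep
  %masked = and i32 %amt, 31
  %r = call i32 @llvm.fshl.i32(i32 %x, i32 %x, i32 %masked)
  ret i32 %r
}
```
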
2020-03-16 13:52:17 +00:00
Juneyoung Lee 7aecf2323c [ExpandMemCmp] Correctly set alignment of generated loads
Summary:
This is a part of the series of efforts for correcting alignment of memory operations.
(Another related bugs: https://bugs.llvm.org/show_bug.cgi?id=44388 , https://bugs.llvm.org/show_bug.cgi?id=44543 )

This fixes https://bugs.llvm.org/show_bug.cgi?id=43880 by giving the generated loads a default alignment of 1.

The test CodeGen/AArch64/bcmp-inline-small.ll should have been changed; it was introduced by https://reviews.llvm.org/D64805 . I talked with @evandro, and confirmed that the test is okay to be changed.
Two other tests from PowerPC needed changes as well, but the fixes were straightforward.
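
A rough sketch (assumed shape, not copied from the pass) of an equality-only 4-byte memcmp expansion; the point of the fix is that the widened loads keep alignment 1 instead of assuming the natural alignment of i32:

```
define i1 @is_eq(i8* %a, i8* %b) {
  %pa = bitcast i8* %a to i32*
  %pb = bitcast i8* %b to i32*
  ; the expanded loads must keep the (unknown, hence 1) alignment of the
  ; original byte pointers
  %va = load i32, i32* %pa, align 1
  %vb = load i32, i32* %pb, align 1
  %eq = icmp eq i32 %va, %vb
  ret i1 %eq
}
```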

Reviewers: courbet

Reviewed By: courbet

Subscribers: nlopes, gchatelet, wuzish, nemanjai, kristof.beyls, hiraditya, steven.zhang, danielkiss, llvm-commits, evandro

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D76113
2020-03-16 22:39:48 +09:00
Simon Pilgrim e43a085781 [X86] X86::isConstantSplat - enable partial undef bit handling by default.
We currently only ever use this for lowering constant uniform values (shift/rotate by immediate) so we can safely enable it by default (it treats the undef bits as zero when extracting constants).

This is necessary for an upcoming patch that will use SimplifyDemandedBits more aggressively on funnel shift amounts and would cause regressions for vXi64 constants without it.
2020-03-16 12:56:24 +00:00
Simon Pilgrim ac4609cb1d [X86] LowerRotate - use X86::isConstantSplat to detect constant splat rotation amounts.
Avoid code duplication and matches what we do for the similar LowerFunnelShift and LowerScalarImmediateShift methods.
2020-03-16 12:56:23 +00:00
Jonas Paulsson 132f25bcca [SystemZ] Avoid scalarization of [SU]INT_TO_FP ISD-nodes.
The type legalizer will scalarize vector conversions from integer to floating
point if the source element size is less than that of the result.

This is avoided now by inserting a zero/sign-extension of the source vector
before type legalization.
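
For example (illustrative IR), instead of letting `sitofp <4 x i16> %v to <4 x float>` be scalarized by the type legalizer, the source vector is widened first:

```
define <4 x float> @conv(<4 x i16> %v) {
  ; sign-extend the source to the result element width before legalization
  ; so the conversion stays a vector operation
  %ext = sext <4 x i16> %v to <4 x i32>
  %res = sitofp <4 x i32> %ext to <4 x float>
  ret <4 x float> %res
}
```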

Review: Ulrich Weigand

Differential revision: https://reviews.llvm.org/D75978
2020-03-16 13:07:42 +01:00
David Stenberg 02b6a3c349 [DebugInfo] Handle generic type DW_OP_convert ops in dsymutil
Summary:
This is a preparatory change for allowing LLVM to emit DW_OP_convert
operations converting to the generic type.

If DW_OP_convert's operand is 0, it converts the top of the stack to the
generic type, as specified by DWARFv5 section 2.5.1.6:

"[...] takes one operand, which is an unsigned LEB128 integer that
 represents the offset of a debugging information entry in the current
 compilation unit, or value 0 which represents the generic type."

This adds support for such operations to dsymutil.

Reviewers: aprantl, markus, friss, JDevlieghere

Reviewed By: aprantl, JDevlieghere

Subscribers: hiraditya, llvm-commits

Tags: #debug-info, #llvm

Differential Revision: https://reviews.llvm.org/D76142
2020-03-16 12:16:37 +01:00
Oliver Stannard 18649f4813 [llvm-objdump] Add entry_value and stack_value opcodes
Add the DW_OP_entry_value and DW_OP_stack_value opcodes to the DWARF
expression printer.

Differential revision: https://reviews.llvm.org/D74843
2020-03-16 10:54:41 +00:00
Oliver Stannard c0cf5f5da9 [llvm-objdump] Add simple memory expressions to variable display
Add the DW_OP_breg0..DW_OP_breg31 and DW_OP_bregx opcodes to the DWARF
expression printer.

Differential revision: https://reviews.llvm.org/D74841
2020-03-16 10:54:41 +00:00
Oliver Stannard 3a5ddedadb [llvm-objdump] Display locations of variables alongside disassembly
This adds the --debug-vars option to llvm-objdump, which prints
locations (registers/memory) of source-level variables alongside the
disassembly based on DWARF info. A vertical line is printed for each
live-range, with a label at the top giving the variable name and
location, and the position and length of the line indicating the program
counter range in which it is valid.

Currently, this only works for object files, not executables or shared
libraries.

Differential revision: https://reviews.llvm.org/D70720
2020-03-16 10:54:40 +00:00
David Stenberg c93652517c [DebugInfo] Handle generic type DW_OP_convert ops in llvm-dwarfdump
Summary:
This is a preparatory change for allowing LLVM to emit DW_OP_convert
operations converting to the generic type.

If DW_OP_convert's operand is 0, it converts the top of stack to the
generic type, as specified by DWARFv5 section 2.5.1.6:

"[...] takes one operand, which is an unsigned LEB128 integer that
 represents the offset of a debugging information entry in the current
 compilation unit, or value 0 which represents the generic type."

This adds support for such operations to llvm-dwarfdump.

Reviewers: aprantl, markus, jdoerfert, jhenderson

Reviewed By: aprantl

Subscribers: hiraditya, llvm-commits

Tags: #debug-info, #llvm

Differential Revision: https://reviews.llvm.org/D76141
2020-03-16 11:24:01 +01:00
Shengchen Kan b1a7a245ec [NFC][MC] Rename alignBranches* to emitInstruction*
alignBranches is X86-specific; change the name to a more general one,
since other targets can do some state change before and after emitting
the instruction.
2020-03-16 17:13:14 +08:00
Simon Atanasyan e0ab0e6a28 [MIPS] Implement PUL.PS and PUU.PS instructions
Patch by Michael Roe.

Differential Revision: https://reviews.llvm.org/D75812
2020-03-16 09:39:47 +03:00
Serguei Katkov ad643d5e93 [Verifier] Remove invalid verifier check
According to LangRef for unordered atomic memory transfer intrinsics
"The first three arguments are the same as they are in the @llvm.memcpy intrinsic, with the added constraint that
 len is required to be a positive integer multiple of the element_size. If len is not a positive integer multiple
 of element_size, then the behaviour of the intrinsic is undefined."

So a len that is not a multiple of the element size is just undefined behavior, and the verifier should not complain about that,
as undefined behavior is allowed in LLVM IR.

This change removes the verifier check for this condition

Reviewers: reames
Reviewed By: reames
Subscribers: dantrushin, hiraditya, jfb, llvm-commits
Differential Revision: https://reviews.llvm.org/D76116
2020-03-16 12:00:08 +07:00
Juneyoung Lee 6ad63606ea [CodeGenPrepare] Freeze condition when transforming select to br
Summary:
This is a simple fix for CodeGenPrepare that freezes branch condition when transforming select to branch.
If it is not frozen, instsimplify or the later pipeline can potentially exploit undefined behavior.

The diff shows the optimized form because D75859 and D76048 already made a few changes to CodeGenPrepare for optimizing freeze(cmp).
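
Roughly (hypothetical IR), `%v = select i1 %cond, i32 %a, i32 %b` now becomes a branch on a frozen copy of the condition rather than on %cond itself:

```
define i32 @sel(i1 %cond, i32 %a, i32 %b) {
entry:
  ; freezing %cond pins down any undef/poison before it is branched on,
  ; so later passes cannot exploit undefined behaviour in the condition
  %cond.fr = freeze i1 %cond
  br i1 %cond.fr, label %select.true, label %select.end
select.true:
  br label %select.end
select.end:
  %v = phi i32 [ %a, %select.true ], [ %b, %entry ]
  ret i32 %v
}
```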

Reviewers: jdoerfert, spatel, lebedev.ri, efriedma

Reviewed By: lebedev.ri

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D76179
2020-03-16 12:46:20 +09:00
Juneyoung Lee 4ffe3ac729 Revert "[CodeGenPrepare] Freeze condition when transforming select to br"
This reverts commit 10aa7ea951.
2020-03-16 12:45:54 +09:00
Philip Reames a79863f2f7 Support prefix padding for alignment purposes (Relaxable instructions only)
Now that D75203 has landed and baked for a few days, extend the basic approach to prefix padding as well. The patch itself is fairly straight forward.

For the moment, this patch adds the functional support and some basic testing thereof, but defaults to not enabling prefix padding. I want to be able to phrase a separate patch which adds the target-specific reasoning and test it cleanly. I haven't decided whether I want to common it with the nop logic or not.

Differential Revision: https://reviews.llvm.org/D75300
2020-03-15 19:53:41 -07:00
Craig Topper b2da1ddaef [X86] Add a non-zero cost for truncating v32i16->v32i8 on avx512bw. 2020-03-15 17:18:46 -07:00
Lang Hames 9c5771710e Revert "[ORC] Enable JITEventListeners in the RTDyldObjectLinkingLayer."
This reverts commit 98f2bb4461.

Reverting while I investigate bot failures.
2020-03-15 15:35:08 -07:00
Lang Hames 98f2bb4461 [ORC] Enable JITEventListeners in the RTDyldObjectLinkingLayer.
Enable use of ExecutionEngine JITEventListeners in RTDyldObjectLinkingLayer.
This allows existing MCJIT clients to more easily migrate to LLJIT / ORCv2.

Example usage in llvm/examples/OrcV2Examples/LLJITWithGDBRegistrationListener.

Differential Revision: https://reviews.llvm.org/D75838
2020-03-15 15:14:46 -07:00
Benjamin Kramer caef4a81c9 [AVR] Make helper functions static. NFC. 2020-03-15 16:50:15 +01:00
Sander de Smalen 8105935d3a [TypeSize] Allow returning scalable size in implicit conversion to uint64_t
This patch removes compiler runtime assertions that ensured the implicit
conversion is only used for fixed-width vectors, where it is guaranteed to work.

With the assert it would be impossible to get _anything_ to build until the
entire codebase has been upgraded, even when the indiscriminate uses of
the size as uint64_t would work fine for both scalable and fixed-width
types.

This issue will need to be addressed differently, with build-time errors
rather than assertion failures, but that effort falls beyond the scope
of this patch.

Returning the scalable size and avoiding the assert in getFixedSize()
is a temporary stop-gap in order to use LLVM for compiling and using
the SVE ACLE intrinsics.
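A minimal sketch of the kind of call site this unblocks (assuming a DataLayout query that returns a TypeSize in this tree; names are illustrative only):
```
#include "llvm/IR/DataLayout.h"
#include "llvm/IR/Type.h"
#include "llvm/Support/TypeSize.h"

using namespace llvm;

// Sketch: with the assertion removed, the implicit conversion below no
// longer aborts for scalable vectors; it yields the known minimum size,
// which is the exact size for fixed-width types.
static uint64_t allocSizeInBytes(const DataLayout &DL, Type *Ty) {
  TypeSize TS = DL.getTypeAllocSize(Ty);
  uint64_t Bytes = TS; // implicit conversion to uint64_t
  return Bytes;
}
```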

Reviewers: efriedma, huntergr, rovka, ctetreau, rengolin

Reviewed By: efriedma

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D75297
2020-03-15 13:48:49 +00:00
Simon Pilgrim 5641804298 [DAG] MatchRotate - Add funnel shift by variable support
Followup to D75114, this patch reuses the existing MatchRotate ROTL/ROTR rotation pattern code to also recognize the more general FSHL/FSHR funnel shift patterns when we have variable shift amounts, matched with MatchFunnelPosNeg which acts in an (almost) equivalent manner to MatchRotatePosNeg.
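For reference, a plain C++ model of what a 32-bit funnel shift left (llvm.fshl) computes with a variable amount; this is only to clarify the pattern being matched, not code from the patch:
```
#include <cstdint>

// fshl concatenates Hi:Lo, shifts left by Amt modulo the bit width, and
// returns the most significant half.
uint32_t fshl32(uint32_t Hi, uint32_t Lo, uint32_t Amt) {
  Amt &= 31;            // the shift amount is taken modulo 32
  if (Amt == 0)
    return Hi;          // avoids the out-of-range shift by 32 below
  return (Hi << Amt) | (Lo >> (32 - Amt));
}
```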
2020-03-15 11:50:45 +00:00
Florian Hahn 650f363bd7 [ValueLattice] Add singlecrfromundef lattice value.
This patch adds a new singlecrfromundef lattice value, indicating a
single element constant range which was merged with undef at some point.
Merging it with another constant range results in overdefined, as we
won't be able to replace all users with a single value.

This patch uses a ConstantRange instead of a Constant*, because regular
integer constants are represented as single element constant ranges as
well, and this allows the existing code to work without additional
changes.

Reviewers: efriedma, nikic, reames, davide

Reviewed By: efriedma

Differential Revision: https://reviews.llvm.org/D75845
2020-03-15 11:23:46 +00:00
Juneyoung Lee 10aa7ea951 [CodeGenPrepare] Freeze condition when transforming select to br
Summary:
This is a simple fix for CodeGenPrepare that freezes the branch condition when transforming a select to a branch.
If it is not frozen, instsimplify or a later pipeline stage can potentially exploit undefined behavior.

The diff shows the optimized form because D75859 and D76048 already made a few changes to CodeGenPrepare for optimizing freeze(cmp).

Reviewers: jdoerfert, spatel, lebedev.ri, efriedma

Reviewed By: lebedev.ri

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D76179
2020-03-15 11:10:46 +09:00
Lang Hames 981f017c5c [ORC] Print symbol flags and materializer name in ExecutionSession::dump.
The extra information can be helpful in diagnosing JIT bugs.
2020-03-14 18:52:10 -07:00
Lang Hames 9c9eb60b4b [JITLink][MachO] Re-apply b64afadf30, MachO linker-private support, with fixes.
Global symbols with linker-private prefixes should be resolvable across object
boundaries, but internal symbols with linker-private prefixes should not.
2020-03-14 18:36:15 -07:00
Lang Hames a7d187d9c0 Revert "[JITLink][MachO] Treat linker private symbols as hidden rather than private."
This reverts commit b64afadf30.

Reverting while I investigate bot failures.
2020-03-14 16:52:25 -07:00
Lang Hames b64afadf30 [JITLink][MachO] Treat linker private symbols as hidden rather than private.
Linker-private symbols should be resolvable across object file boundaries.
2020-03-14 16:33:15 -07:00
Lang Hames 633ea07200 [Orc] Add basic OrcV2 C bindings and example.
Renames the llvm/examples/LLJITExamples directory to llvm/examples/OrcV2Examples
since it is becoming a home for all OrcV2 examples, not just LLJIT.

See http://llvm.org/PR31103.
2020-03-14 14:41:22 -07:00
Krzysztof Parzyszek 3656558cec [Hexagon] Only allow single HVX vector loads/stores in lowering
This will prevent store widening from forming vector pair stores,
which eventually end up broken up into single stores.
2020-03-14 14:26:01 -05:00
Simon Pilgrim ee862adf60 Fix signed/unsigned comparison warning. 2020-03-14 18:42:27 +00:00
Simon Pilgrim 0cb2f089c1 [X86] getFauxShuffleMask - pull out repeated byte size variables. NFC. 2020-03-14 17:36:17 +00:00
Florian Hahn 4878aa36d4 [ValueLattice] Add new state for undef constants.
This patch adds a new undef lattice state, which is used to represent
UndefValue constants or instructions producing undef.

The main difference to the unknown state is that merging undef values
with constants (or single element constant ranges) produces the
constant/constant range, assuming all uses of the merge result will be
replaced by the found constant.

In contrast, merging non-single element ranges with undef needs to go to
overdefined. Using unknown for UndefValues currently causes miscompiles
in CVP/LVI (PR44949) and will become problematic once we use
ValueLatticeElement for SCCP.
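A deliberately simplified C++ sketch of the merge rules described above; the enum and helper are hypothetical and do not reflect the real ValueLatticeElement interface:
```
// Hypothetical, simplified lattice used only to illustrate the transitions.
enum class Lattice { Unknown, Undef, Constant, ConstantRange, Overdefined };

Lattice mergeUndefWith(Lattice Other) {
  switch (Other) {
  case Lattice::Constant:
    // undef merged with a constant (or single-element range) keeps the
    // constant, assuming all users of the result get that constant.
    return Lattice::Constant;
  case Lattice::ConstantRange:
    // undef merged with a multi-element range goes to overdefined, since
    // no single replacement value exists.
    return Lattice::Overdefined;
  case Lattice::Unknown:
  case Lattice::Undef:
    return Lattice::Undef;
  case Lattice::Overdefined:
    return Lattice::Overdefined;
  }
  return Lattice::Overdefined; // unreachable; silences compiler warnings
}
```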

Reviewers: efriedma, reames, davide, nikic

Reviewed By: efriedma

Differential Revision: https://reviews.llvm.org/D75120
2020-03-14 17:19:59 +00:00
Georgii Rymar b236b4cb43 [yaml2obj] - Set a default value for `PAddr` property of a program header to a value of `VAddr`
`PAddr` corresponds to `p_paddr` of a program header, which is the segment's physical
address for systems in which physical addressing is relevant. `p_paddr` is often equal
to `p_vaddr`, which is the virtual address of a segment.

This patch changes the default for `PAddr` from 0 to a value of `VAddr`.

Differential revision: https://reviews.llvm.org/D76131
2020-03-14 17:44:57 +03:00
Simon Pilgrim f47f4c137b [X86] getFauxShuffleMask - merge insertelement paths
Merge the INSERT_VECTOR_ELT/SCALAR_TO_VECTOR and PINSRW/PINSRB shuffle mask paths - they both do the same thing (find source vector + handle implicit zero extension). The PINSRW/PINSRB path also handled the insertion-of-zero case, which needed to be added to the general case as well.
2020-03-14 13:11:03 +00:00
Shengchen Kan e6f1dd40bd [X86] Disable nop padding before instruction following a prefix
Reviewers: reames, MaskRay, craig.topper, LuoYuanke, jyknight

Reviewed By: LuoYuanke

Subscribers: hiraditya, llvm-commits, annita.zhang

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D76052
2020-03-14 13:15:30 +08:00
Diogo Sampaio 83cdb654e4 [AArch64][Fix] LdSt optimization generate premature stack-popping
Summary:
When moving add and sub to memory operand instructions,
aarch64-ldst-opt would prematurely pop the stack pointer
before memory instructions that access the stack using
indirect loads.
e.g.
```
int foo(int offset){
    int local[4] = {0};
    return local[offset];
}
```
would generate:
```
sub     sp, sp, #16            ; Push the stack
mov     x8, sp                 ; Save stack in register
stp     xzr, xzr, [sp], #16    ; Zero initialize stack, and post-increment, making it invalid
------ If an exception goes here, the stack value might be corrupted
ldr     w0, [x8, w0, sxtw #2]  ; Access correct position, but it is not guarded by SP
```

Reviewers: fhahn, foad, thegameg, eli.friedman, efriedma

Reviewed By: efriedma

Subscribers: efriedma, kristof.beyls, hiraditya, danielkiss, llvm-commits, simon_tatham

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D75755
2020-03-14 02:03:10 +00:00
Craig Topper 755e00876c [X86] Remove isel patterns for X86VBroadcast+trunc+extload. Replace with DAG combines.
This is a little more complicated than I'd like it to be. We have
to manually match a trunc+srl+load pattern that generic DAG
combine won't do for us due to isTypeDesirableForOp.
2020-03-13 18:12:16 -07:00
Whitney Tsang aca7167535 [NFC][LoopUnrollAndJam] clang-format.
I am currently working on this file.
2020-03-14 00:04:10 +00:00
Philip Reames b4c8608eba Adjust debug output for MCRelaxableFragment to include the size so that sanity checking relaxation offsets from -debug output is easier 2020-03-13 16:22:46 -07:00
Eli Friedman 65fc706ddf [SCEV] Add support for GEPs over scalable vectors.
Because we have to use a ConstantExpr at some point, the canonical form
isn't set in stone, but this seems reasonable.

The pretty sizeof(<vscale x 4 x i32>) dumping is a relic of ancient
LLVM; I didn't have to touch that code. :)

Differential Revision: https://reviews.llvm.org/D75887
2020-03-13 16:12:45 -07:00
Brian Cain ad7b930bd1 Initialize IsFast* values
We must initialize these values in case some targets do not assign to
them in allowsMemoryAccess().
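A small C++ sketch of the pattern involved (signatures abbreviated and names illustrative): the out-parameter must be pre-initialized precisely because some targets never write to it.
```
#include "llvm/CodeGen/MachineMemOperand.h"
#include "llvm/CodeGen/TargetLowering.h"

using namespace llvm;

// Sketch: allowsMemoryAccess() reports "fast" through a bool out-parameter
// that some target overrides never assign, so callers must initialize it.
static bool accessIsFastAndLegal(const TargetLowering &TLI, LLVMContext &Ctx,
                                 const DataLayout &DL, EVT VT,
                                 const MachineMemOperand &MMO) {
  bool IsFast = false; // initialized up front; may be left untouched
  return TLI.allowsMemoryAccess(Ctx, DL, VT, MMO, &IsFast) && IsFast;
}
```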
2020-03-13 17:46:32 -05:00
Jan Korous b7ce8fa91e [LLJIT] Add std::move() as a workaround for older compilers
Clang 3.8 isn't able to bind the variable to an rvalue reference, which breaks the build.
2020-03-13 15:25:25 -07:00
Craig Topper 431df3d873 [SelectionDAGBuilder] Simplify the struct type handling in getUniformBase. 2020-03-13 14:00:21 -07:00
Craig Topper 1d192e09d8 [IR] Fix formatting. NFC 2020-03-13 14:00:20 -07:00
Lang Hames 906a91aa4d [MCJIT] Check for RuntimeDyld errors in MCJIT::finalizeLoadedModules.
Patch based on https://reviews.llvm.org/D75912 by Alexander Shishkin. Thanks
Alexander!

To minimize disruption to existing clients, who may be relying on the fact that
unused references to unresolved symbols do not generate an error, this patch
makes error checking opt-in: Clients can call ExecutionEngine::hasError or
LLVMExecutionEngineGetError to check whether and error has occurred.

Differential revision: https://reviews.llvm.org/D75912
2020-03-13 13:58:41 -07:00
Richard Smith b5aaa60962 Fix "unused variable" warning in NDEBUG builds. 2020-03-13 13:56:57 -07:00
Akira Hatanaka c6f1713c46 [ObjC][ARC] Don't remove autoreleaseRV/retainRV pairs if the call isn't
a tail call

This reapplies the patch in https://reviews.llvm.org/rG1f5b471b8bf4,
which was reverted because it was causing crashes.

https://bugs.chromium.org/p/chromium/issues/detail?id=1061289#c2

Check that HasSafePathToCall is true before checking the call is a tail
call.

Original commit message:

Previously the ARC optimizer removed the autoreleaseRV/retainRV pair in the
following code, which caused the object returned by @something to be
placed in the autorelease pool because the call to @something isn't a
tail call:

```
  %call = call i8* @something(...)
  %2 = call i8* @objc_retainAutoreleasedReturnValue(i8* %call)
  %3 = call i8* @objc_autoreleaseReturnValue(i8* %2)
  ret i8* %3
```

Fix the bug by checking whether @something is a tail call.

rdar://problem/59275894
2020-03-13 13:52:14 -07:00
Stanislav Mekhanoshin c262b69dcc [AMDGPU] Fix endcf collapse
Only collapse inner endcf if the outer one belongs to SI_IF.
If it does belong to SI_ELSE then the mask being restored is in fact
a partial inverse of what we need.

Differential Revision: https://reviews.llvm.org/D76154
2020-03-13 13:50:21 -07:00
Martin Storsjö 8f540dad61 [COFF] Assign unique names to autogenerated .weak.<name>.default symbols
These symbols need to be external (MSVC tools error out if a weak
external points at a symbol that isn't external; this was tried before
but had to be reverted in bc5b7217dc,
and this was originally explicitly fixed in
732eeaf2a9).

If multiple object files have weak symbols with defaults, their
defaults could cause linker errors due to duplicate definitions,
unless the names of the defaults are unique.

GNU binutils handles this by appending the name of another symbol
from the same object file to the name of the default symbol. Try
to implement something similar; before writing the object file,
locate a symbol that should have a unique name and use the name of
that one for making the weak defaults unique.
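An illustrative C++ sketch of the naming scheme (the helper and the exact separator are assumptions, not the patch's actual code):
```
#include <string>

// Hypothetical sketch: make an autogenerated ".weak.<name>.default" symbol
// unique per object file by appending the name of another symbol from the
// same object, mirroring the GNU binutils approach described above.
std::string makeUniqueWeakDefaultName(const std::string &WeakName,
                                      const std::string &UniquingSymbol) {
  return ".weak." + WeakName + ".default." + UniquingSymbol;
}
// e.g. makeUniqueWeakDefaultName("foo", "main") == ".weak.foo.default.main"
```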

Differential Revision: https://reviews.llvm.org/D75989
2020-03-13 22:44:55 +02:00
Matt Arsenault 015b640be4 AMDGPU: Add flag to used fixed function ABI
Pass all arguments to every function, rather than only passing the
minimum set of inputs needed for the call graph.
2020-03-13 13:27:05 -07:00
Alexey Zhikhartsev f71abec661 [LoopInterchange] Fix interchanging contents of preheader BBs
Summary:
Previously LCSSA was getting broken by placing instructions into the
(newly) inner *header* instead of the *pre*header.

Fixes PR43474

Reviewers: fhahn

Reviewed By: fhahn

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D75943
2020-03-13 15:59:37 -04:00
Matt Arsenault bb8622094d AMDGPU: Don't handle kernarg.segment.ptr in functions
Just lower this to null. Pass implicitarg.ptr in its place in the
argument list.
2020-03-13 12:51:12 -07:00
Nico Weber f82b32a51e Revert "Reland "[DebugInfo] Enable the debug entry values feature by default""
This reverts commit 5aa5c943f7.
Causes clang to assert, see
https://bugs.chromium.org/p/chromium/issues/detail?id=1061533#c4
for a repro.
2020-03-13 15:37:44 -04:00
Stanislav Mekhanoshin 32e90cbcd1 [AMDGPU] Disable endcf collapse
There are some functional regressions and I suspect our
scopes are not as perfectly enclosed as I expected.
Disable it for now.

Differential Revision: https://reviews.llvm.org/D76148
2020-03-13 12:33:22 -07:00
Reid Kleckner 478b06e687 Revert "[ObjC][ARC] Check the basic block size before calling DominatorTree::dominate"
This reverts commit 5c3117b0a9

This should not be necessary after
7593a480db, and Florian Hahn has confirmed
that the problem no longer reproduces with this patch.

I happened to notice this code because the FIXME talks about
OrderedBasicBlock.

Reviewed By: fhahn, dexonsmith

Differential Revision: https://reviews.llvm.org/D76075
2020-03-13 11:57:55 -07:00
Simon Pilgrim 05c0d34918 [X86][SSE] Prefer trunc(movd(x)) to pextrb(x,0)
If we're extracting the 0'th index of a v16i8 vector we're better off using MOVD than PEXTRB, unless we're storing the value or we require the implicit zero extension of PEXTRB.

The biggest perf diff is on SLM targets where MOVD (uops=1, lat=3 tp=1) is notably faster than PEXTRB (uops=2, lat=5, tp=4).

This matches what we already do for PEXTRW.

Differential Revision: https://reviews.llvm.org/D76138
2020-03-13 18:43:04 +00:00
Huihui Zhang fc1f205745 [SLPVectorizer][SVE] Bail out early for scalable vector.
Summary:
SLPVectorizer tries to vectorize a list of scalar instructions of the same type;
instructions that are already vectorized are rejected through isValidElementType().

Without this patch, tryToVectorizeList() will first try to determine the vectorization
factor of a list of Instructions before checking whether each instruction has an unsupported
type or not. For instructions already vectorized for SVE, it will crash at getVectorElementSize(),
where it tries to return a fixed size.

This patch makes sure invalid element types are rejected before trying to get the vectorization
factor. This makes sure we are not trying to vectorize instructions that are already vectorized.

Reviewers: sdesmalen, efriedma, spatel, RKSimon, ABataev, apazos, rengolin

Reviewed By: efriedma

Subscribers: tschuett, hiraditya, rkruppe, psnobl, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D76017
2020-03-13 11:23:31 -07:00
Sanjay Patel 94f5d73182 [SimplifyCFG] fix formatting; NFC 2020-03-13 14:12:28 -04:00
Sanjay Patel 51e53af11c [SimplifyCFG] fix debug print formatting; NFC 2020-03-13 14:12:28 -04:00
Philip Reames 1b86ad27a7 Use 15 byte long nops on modern Intel processors
Back in D42616, we switched our default nop length from 15 to 10 bytes because some platforms have painful decode stalls when encountering multiple instruction prefixes. (10 byte long nops come from the fact that prefixes are used to pad after 8 bytes, and some platforms have issues w/more than two prefixes.)

Based on Agner's guides, it appears to be the case that modern Intel (SandyBridge and later) can decode an arbitrary number of prefixes without issue. Intel's guide only provides up to 9 bytes; I read that as providing a safe default for all their chips. Older chips and Atom series have serious decode stalls. I can't find a conclusive reference beyond those two.

Differential Revision: https://reviews.llvm.org/D75945
2020-03-13 10:51:09 -07:00
Simon Cook a26bd4ec16 [TableGen] Support combining AssemblerPredicates with ORs
For context, the proposed RISC-V bit manipulation extension has a subset
of instructions which require one of two SubtargetFeatures to be
enabled, 'zbb' or 'zbp', and there is no defined feature which both of
these can imply to use as a constraint either (see comments in D65649).

AssemblerPredicates allow multiple SubtargetFeatures to be declared in
the "AssemblerCondString" field, separated by commas, and this means
that the two features must both be enabled. There is no equivalent to
say that _either_ feature X or feature Y must be enabled, short of
creating a dummy SubtargetFeature for this purpose and having features X
and Y imply the new feature.

To solve the case where X or Y is needed without adding a new feature,
and to better match a typical TableGen style, this replaces the existing
"AssemblerCondString" with a dag "AssemblerCondDag" which represents the
same information. Two operators are defined for use with
AssemblerCondDag, "all_of", which matches the current behaviour, and
"any_of", which adds the new proposed ORing features functionality.

This was originally proposed in the RFC at
http://lists.llvm.org/pipermail/llvm-dev/2020-February/139138.html

Changes to all current backends are mechanical to support the replaced
functionality, and are NFCI.

At this stage, it is illegal to combine features with ands and ors in a
single AssemblerCondDag. I suspect this case is sufficiently rare that
adding more complex changes to support it are unnecessary.

Differential Revision: https://reviews.llvm.org/D74338
2020-03-13 17:13:51 +00:00
Florian Hahn 0c5b6e2ea5 Recommit "[SCCP] Use ValueLatticeElement instead of LatticeVal (NFCI)"
This patch should fix the cause of the stage2 failures and
PR45185.

This reverts the revert commit c52f839e72.
2020-03-13 17:03:22 +00:00
Simon Pilgrim a2db388dce [CostModel][X86] Improve ISD::CTTZ costs accounting for BSF/TZCNT implementations 2020-03-13 16:51:13 +00:00
Ehud Katz 18eae33122 [SCEV] Fix usage of invalid IP with FoldingSet
Fix the use of an invalid Insertion Point pointer with the UniqueSCEVs FoldingSet,
which caused memory corruption.
2020-03-13 18:36:58 +02:00
Tyker 2543567c41 [AssumeBundles] filter useful attributes to preserve
Summary:
This patch will filter attributes to only preserve those that are useful.
In the case of NoAlias it is filtered out not because it isn't useful
but because it is incorrect to preserve it, as it is only valid for the
duration of the function.

Reviewers: jdoerfert

Reviewed By: jdoerfert

Subscribers: jdoerfert, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D75828
2020-03-13 17:35:47 +01:00
Tyker 69375fd0a3 [AssumeBundles] Preserve Information in the inliner
Summary:
During inlining, create and insert an llvm.assume with attributes to preserve them.
To prevent any changes for now, generation of llvm.assume is under a flag that is disabled by default.

Reviewers: jdoerfert

Reviewed By: jdoerfert

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D75825
2020-03-13 17:35:47 +01:00
Alexandre Ganea a7325298e1 [CodeView] Align type records on 4-bytes when emitting PDBs
When emitting PDBs, the TypeStreamMerger class is used to merge .debug$T records from the input .OBJ files into the output .PDB stream.
Records in .OBJs are not required to be aligned on 4-byte boundaries, and "The Netwide Assembler 2.14" generates non-aligned records.

When compiling with -DLLVM_ENABLE_ASSERTIONS=ON, an assert was triggered in MergingTypeTableBuilder when non-ghash merging was used.
With ghash merging there was no assert.
As a result, LLD could potentially generate a non-aligned TPI stream.

We now align records on 4-byte boundaries when record indices are remapped, in TypeStreamMerger::remapIndices().
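For clarity, the amount of padding needed to reach the next 4-byte boundary can be computed with llvm::alignTo; this is just a sketch of the arithmetic, not the code in TypeStreamMerger:
```
#include "llvm/Support/MathExtras.h"
#include <cstdint>

// Sketch: bytes of padding required so a type record ends on a 4-byte
// boundary, keeping the emitted TPI stream well aligned.
uint64_t paddingForRecord(uint64_t RecordLength) {
  return llvm::alignTo(RecordLength, 4) - RecordLength;
}
```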

Differential Revision: https://reviews.llvm.org/D75081
2020-03-13 12:22:19 -04:00
omarahmed1111 b285b333dc [Attributor] Detect possibly unbounded cycles in functions
This patch adds a mayContainUnboundedCycle helper function which checks whether a function has any cycle that is not known to be bounded.
Loops with a maximum trip count are considered bounded; any other cycle is not.
It also fixes some tests and adds tests containing bounded and
unbounded loops as well as non-loop cycles.
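A simplified C++ sketch of the bounded-loop part of that check (illustrative only; the real helper must also treat irreducible, non-loop cycles as potentially unbounded, which this sketch ignores):
```
#include "llvm/Analysis/LoopInfo.h"
#include "llvm/Analysis/ScalarEvolution.h"

using namespace llvm;

// Sketch: a loop counts as bounded if ScalarEvolution knows a constant
// maximum trip count for it; any loop without one is possibly unbounded.
static bool anyLoopPossiblyUnbounded(LoopInfo &LI, ScalarEvolution &SE) {
  for (Loop *L : LI.getLoopsInPreorder())
    if (SE.getSmallConstantMaxTripCount(L) == 0)
      return true; // no known bound for this loop
  return false;
}
```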

Reviewed By: jdoerfert, uenoku, baziotis

Differential Revision: https://reviews.llvm.org/D74691
2020-03-13 11:17:33 -05:00
Pankaj Gode bf990530ae [Attributor] Improve noalias preservation using reachability
Resolution for below fixme:
(ii) Check whether the value is captured in the scope using AANoCapture.
FIXME: This is conservative though, it is better to look at CFG and
             check only uses possibly executed before this callsite.

Propagates caller argument's noalias attribute to callee.

Reviewed by: jdoerfert, uenoku

Reviewers: jdoerfert, sstefan1, uenoku

Subscribers: uenoku, sstefan1, hiraditya, llvm-commits

Differential Revision: https://reviews.llvm.org/D71617
2020-03-13 21:09:08 +05:30
Simon Pilgrim 846c614f54 [X86] combineExtractWithShuffle - pull out repeated getSizeInBits() call. NFC. 2020-03-13 15:36:04 +00:00
Simon Pilgrim fe047fbccc [X86] LowerEXTRACT_VECTOR_ELT - pull out repeated getOperand() calls. NFC.
Also, cleanup LowerEXTRACT_VECTOR_ELT_SSE4 comments which had references to non-constant extraction indices.
2020-03-13 15:36:02 +00:00
Sanjay Patel cbeffa3f6c [SimplifyCFG] convert if-else chain to switch; NFC
Fix formatting of related function names while changing the code.
2020-03-13 10:28:41 -04:00
Nico Weber 86eb2c3991 Revert "[ObjC][ARC] Don't remove autoreleaseRV/retainRV pairs if the call isn't"
This reverts commit 1f5b471b8b.
Causes asserts when building code with arc. See
https://bugs.chromium.org/p/chromium/issues/detail?id=1061289#c2
for a full repro. Will post a creduced repro once creduce is done
running.
2020-03-13 10:16:02 -04:00
Ehud Katz fcc2238b8b [SCEV] Add missing cache queries
Calculating SCEVs can be cumbersome, and may take a very long time (even
hours, for very long expressions). To prevent recalculating expressions
over and over again, we cache them.
This change adds cache queries at key positions, to prevent recalculation
of the expressions.

Fix PR43571.

Differential Revision: https://reviews.llvm.org/D70097
2020-03-13 15:32:43 +02:00
Andrzej Warzynski a0c15ed460 [AArch64][SVE] Add the @llvm.aarch64.sve.dup.x intrinsic
Summary:
This intrinsic implements the unpredicated duplication of scalar values
and is mapped to (through ISD::SPLAT_VECTOR):
  * DUP <Zd>.<T>, #<imm>
  * DUP <Zd>.<T>, <R><n|SP>

Reviewed by: sdesmalen

Differential Revision: https://reviews.llvm.org/D75900
2020-03-13 12:40:22 +00:00
Alexandre Ganea 28ad9fc208 [Clang][Driver] In -fintegrated-cc1 mode, avoid crashing on exit after a compiler crash
After a crash caught by the CrashRecoveryContext, this patch prevents accessing dangling pointers in TimerGroup structures before the clang tool exits. Previously, the default TimerGroup had internal linked lists which were still pointing to old Timer or TimerGroup instances, which lived in stack frames released by the CrashRecoveryContext.

Fixes PR45164.

Differential Revision: https://reviews.llvm.org/D76099
2020-03-13 08:15:35 -04:00
David Green 2c6c169dbd [ARM] Optimise ASRL/LSRL to smaller shifts using demand bits.
The ASRL/LSRL long shifts are generated from 64-bit shifts. Once we have
them, it might turn out that enough of the 64-bit result is not required,
so we can use a smaller shift to produce the same result. As the
smaller shift can in general be folded in more ways, such as into add
instructions in one of the test cases here, we can use the demanded-bits
analysis to prefer the smaller shifts where we can.

Differential Revision: https://reviews.llvm.org/D75371
2020-03-13 10:09:03 +00:00
David Green f67d93dc23 [ARM] Constant long shift combines
This changes the way that asrl and lsrl intrinsics are lowered, going
via the ISEL ASRL and LSLL nodes instead of straight to machine nodes.
On top of that, it adds some constant folds for long shifts, in case it
turns out that the shift amount was either constant or 0.

Differential Revision: https://reviews.llvm.org/D75553
2020-03-13 08:54:59 +00:00
Juneyoung Lee c39cb1c0dd [CodeGenPrepare] Expand freeze conversion to support fcmp and icmp with null
Summary:
This is a simple patch that expands https://reviews.llvm.org/D75859 to pointer comparison and fcmp

Checked with Alive2

Reviewers: reames, jdoerfert

Reviewed By: jdoerfert

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D76048
2020-03-13 17:21:33 +09:00
QingShan Zhang d0fb34dc09 [PowerPC] Replace the PPCISD:: SExtVElems with ISD::SIGN_EXTEND_INREG to leverage the combine rules
The PPCISD::SExtVElems was added by commit https://reviews.llvm.org/D34009. However,
we have another ISD node ISD::SIGN_EXTEND_INREG that perfectly matches the semantics
of SExtVElems. And the DAGCombiner has some combine rules for SIGN_EXTEND_INREG
that produce better code.
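For context, a sketch of how a SIGN_EXTEND_INREG node is typically constructed in SelectionDAG code (illustrative, not taken from the patch):
```
#include "llvm/CodeGen/SelectionDAG.h"

using namespace llvm;

// Sketch: sign-extend Op "in register" from NarrowVT. The trailing
// value-type operand records the type being extended from, which is what
// lets the generic DAGCombiner rules reason about the node.
static SDValue buildSExtInReg(SelectionDAG &DAG, const SDLoc &DL, SDValue Op,
                              EVT NarrowVT) {
  return DAG.getNode(ISD::SIGN_EXTEND_INREG, DL, Op.getValueType(), Op,
                     DAG.getValueType(NarrowVT));
}
```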

Differential Revision: https://reviews.llvm.org/D70771
2020-03-13 07:28:28 +00:00
Craig Topper 09c8f38924 [X86] Add isel patterns for X86VBroadcast with i16 truncates from i16->i64 zextload/extload.
We can form vpbroadcastw with a folded load.

We had patterns for i16->i32 zextload/extload, but nothing prevents
i64 from occurring.

I'd like to move this all to DAG combine to fix more cases, but
this is a trivial fix to minimize test diffs when moving to a combine.
2020-03-13 00:10:48 -07:00
Johannes Doerfert a198adb490 [Attributor] IPO across definition boundary of a function marked alwaysinline
Reviewed By: jdoerfert

Differential Revision: https://reviews.llvm.org/D75590
2020-03-13 01:06:12 -05:00
Amy Kwan 1ba3d2639d [PowerPC][NFC] Rename instruction formats in PPCInstrPrefix.td
This patch renames some of the instruction formats within PPCInstrPrefix.td to
adopt a more uniform naming convention. It also adds the naming convention
extension, `_MEM` to indicate instruction formats for memory ops.

Differential Revision: https://reviews.llvm.org/D75819
2020-03-13 00:50:08 -05:00
rathod-sahaab 263c4a3c75 Fix compiler warning when compiling without asserts
This patch aims to prevent warning-as-error failures in release builds.
As suggested in this comment
https://reviews.llvm.org/D69930#1910922

Reviewed By: jdoerfert

Differential Revision: https://reviews.llvm.org/D75970
2020-03-13 00:26:49 -05:00
QingShan Zhang e601196833 [NFC][DAGCombine] Move the fold of a*b-c and a-b*c into lambda function
This will help the review of https://reviews.llvm.org/D75982. It is
a simple code refactor.
2020-03-13 02:35:46 +00:00
Arlo Siemsen 1478ed69d3 Add support for SHA256 source file checksums in debug info
LLVM currently supports CSK_MD5 and CSK_SHA1 source file checksums in
debug info. This change adds support for CSK_SHA256 checksums.

The SHA256 checksums are supported by the CodeView debug format.

Reviewed By: aprantl

Differential Revision: https://reviews.llvm.org/D75785
2020-03-12 16:32:05 -07:00
Huihui Zhang f4f2706572 [ConstantFold][SVE] Fix constant folding for scalable vector compare instruction.
Summary:
Do not iterate over scalable vectors. Also do not return a constant scalable vector
from ConstantInt::get().
Fix the result type by using getElementCount() instead of getNumElements().

Reviewers: sdesmalen, efriedma, apazos, huntergr, willlovett

Reviewed By: efriedma

Subscribers: tschuett, hiraditya, rkruppe, psnobl, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D73753
2020-03-12 16:15:38 -07:00
Matt Arsenault ccc6e780c8 AMDGPU: Directly annotate functions if they have calls
Currently we infer whether the flat-scratch-init kernel input should
be enabled based on calls. Move this handling, so we can decide if the
full set of ABI inputs is needed in kernels. Ideally we would have an
analysis of some sort, rather than the function attributes.
2020-03-12 19:10:59 -04:00
Lang Hames 7266a8bfeb [ORC] Enable exception handling in JIT'd code when using LLJIT on Darwin.
This patch enables exception handling in code added to LLJIT on Darwin by
adding an orc::EHFrameRegistrationPlugin instance to the ObjectLinkingLayer
(which is currently used on Darwin only).
2020-03-12 15:33:56 -07:00
Lang Hames 214a9f0dd4 [ORC] Add a mutex to guard EHFrameRegistrationPlugin data structures.
These may be accessed from multiple threads if concurrent materialization is
enabled in ORC.

Testcase coming in a follow-up patch that enables eh-frame registration for
LLJIT.
2020-03-12 15:33:56 -07:00
Stanislav Mekhanoshin a73528649c [AMDGPU] Simplify exec copies
The patch removes late endcf handling and only leaves the
related portion with redundant exec mask copy elimination.

Differential Revision: https://reviews.llvm.org/D76095
2020-03-12 14:54:19 -07:00