Commit Graph

141974 Commits

Author SHA1 Message Date
Kazushi (Jam) Marukawa aefedb1707 [VE] Add logical mask intrinsic instructions
Add andm, orm, xorm, eqvm, nndm, negm, pcvm, lzvm, and tovm intrinsic
instructions, a few pseudo instructions to expand logical intrinsics
using VM512, a mechanism to expand such pseudo instructions, and
regression tests.  Also, assign vector mask types and vector mask
register classes correctly.  This is required to use VM512 registers
as function arguments.

Reviewed By: simoll

Differential Revision: https://reviews.llvm.org/D93093
2020-12-15 01:34:31 +09:00
Simon Pilgrim 5f5a2547c1 [X86] LowerBUILD_VECTOR - track zero/nonzero elements with APInt masks. NFCI.
Prep work for undef/zero 'upper elements' handling as proposed in D92645.
2020-12-14 16:28:45 +00:00
Kazushi (Jam) Marukawa c9213e1b29 [VE] Correct addRegisterClass calls
Correct addRegisterClass calls for vector mask registers.

Reviewed By: simoll

Differential Revision: https://reviews.llvm.org/D93212
2020-12-15 01:16:56 +09:00
diggerlin 15f2d4f198 [AIX] Fixed "comparison of unsigned expression >= 0 is always true" gcc warnings.
Summary:

Fixed "comparison of unsigned expression >= 0 is always true" gcc warnings reported at:
http://lab.llvm.org:8011/#/builders/5/builds/2407/steps/2/logs/stdio

The error was caused by patch https://reviews.llvm.org/D92398.
2020-12-14 11:08:40 -05:00
Markus Lavin 2a6782bb9f Reland [DebugInfo] Improve dbg preservation in LSR.
Use SCEV to salvage additional @llvm.dbg.value intrinsics that have
turned into referencing undef after the transformation (and after
traditional salvageDebugInfo).  Before the rewrite (but after the
introduction of new induction variables), use SCEV to compute an
equivalent set of values for each @llvm.dbg.value in the loop body
(among the loop header PHI-nodes).  After the rewrite (and dead PHI
elimination), update each @llvm.dbg.value now referencing undef by
picking a remaining value from its equivalence set.  Allow matching
with an offset by inserting compensation code in the DIExpression.

Fixes: PR38815

Differential Revision: https://reviews.llvm.org/D87494
2020-12-14 16:15:18 +01:00
Florian Hahn e42e5263bd [VPlan] Make VPWidenMemoryInstructionRecipe a VPDef.
This patch updates VPWidenMemoryInstructionRecipe to use VPDef
to manage the value it produces instead of inheriting from VPValue.

Reviewed By: gilr

Differential Revision: https://reviews.llvm.org/D90563
2020-12-14 14:13:59 +00:00
Anton Afanasyev fac7c7ec3c [SLP] Fix vector element size for the store chains
The vector element size can differ between store chains.
This patch prevents incorrect computation of the maximum number of
vector elements in that case.

Differential Revision: https://reviews.llvm.org/D93192
2020-12-14 15:51:43 +03:00
Simon Pilgrim 6c8ded0d8c [TableGen] Don't dereference from dyn_cast<> - use cast<> instead. NFCI.
dyn_cast<> can return null if the cast fails, resulting in null dereferences and static analyzer warnings. We should use cast<> instead.
2020-12-14 12:12:08 +00:00
Kerry McLaughlin c5ced82c8e [SVE][CodeGen] Lower scalable floating-point vector reductions
Changes in this patch:
-  Minor changes to the LowerVECREDUCE_SEQ_FADD function added by @cameron.mcinally
   to also work for scalable types
- Added TableGen patterns for FP reductions with unpacked types (nxv2f16, nxv4f16 & nxv2f32)
- Asserts added to expandFMINNUM_FMAXNUM & expandVecReduceSeq for scalable types

Reviewed By: cameron.mcinally

Differential Revision: https://reviews.llvm.org/D93050
2020-12-14 11:45:42 +00:00
David Green 1de3e7fd62 [ARM] Improve handling of empty VPT blocks in tail predicated loops
A VPT block that contains just VPST;VCTP or VPT;VCTP becomes invalid
once the VCTP is removed. This patch fixes the first case by removing
the now-empty block, and bails out on the second, as we have no simple
way of converting a VPT to a VCMP.

Differential Revision: https://reviews.llvm.org/D92369
2020-12-14 11:17:01 +00:00
Carl Ritson 62c246eda2 [AMDGPU][NFC] Rename opsel/opsel_hi/neg_lo/neg_hi with suffix 0
These parameters set a default value of 0, so I believe they
should include a 0 suffix. This allows for versions which do not
set a default value in the future.

Reviewed By: foad

Differential Revision: https://reviews.llvm.org/D93187
2020-12-14 20:01:56 +09:00
Carl Ritson af4570cd3a [AMDGPU][NFC] Remove unused VOP3Mods0Clamp
This is unused and the selection function does not exist.

Reviewed By: foad

Differential Revision: https://reviews.llvm.org/D93188
2020-12-14 20:00:58 +09:00
Sebastian Neubauer 5733167f54 [AMDGPU] Mark amdgpu_gfx functions as module entry function
- Allows lds allocations
- Writes resource usage into COMPUTE_PGM_RSRC1 registers in PAL metadata

Differential Revision: https://reviews.llvm.org/D92946
2020-12-14 10:43:39 +01:00
QingShan Zhang 08e287aaf3 [PowerPC][FP128] Fix the incorrect signature for math library call
The runtime library has two families of implementations, for ppc_fp128 and fp128.
For IBM long double (ppc_fp128), calls are suffixed with 'l' (e.g. sqrtl). For
IEEE long double (fp128), they are suffixed with "ieee128" or "f128".
We were missing mappings for several libcalls for IEEE long double.
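
For illustration only, a tiny standalone C++ sketch of the suffix convention described above (the enum and helper are invented for this example, not taken from the patch):

```cpp
#include <cstdio>
#include <string>

// Sketch of the naming convention described above: the same math routine has
// an 'l'-suffixed entry point for IBM long double (ppc_fp128) and an
// "f128"-suffixed one for IEEE long double (fp128).  The bug was that several
// fp128 operations were still being mapped to the 'l' names.
enum class LongDoubleKind { IBM_PPCF128, IEEE_F128 };

static std::string libcallName(const char *Base, LongDoubleKind K) {
  return K == LongDoubleKind::IBM_PPCF128 ? std::string(Base) + "l"
                                          : std::string(Base) + "f128";
}

int main() {
  std::printf("%s\n", libcallName("sqrt", LongDoubleKind::IBM_PPCF128).c_str()); // sqrtl
  std::printf("%s\n", libcallName("sqrt", LongDoubleKind::IEEE_F128).c_str());   // sqrtf128
}
```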

Reviewed By: qiucf

Differential Revision: https://reviews.llvm.org/D91675
2020-12-14 07:52:56 +00:00
Chen Zheng 4830d458dd [MachineCombiner][NFC] Add MustReduceRegisterPressure goal
Add a new goal, MustReduceRegisterPressure, for the machine combiner pass.

PowerPC will use this new goal to do some register-pressure-related optimization.

Reviewed By: spatel

Differential Revision: https://reviews.llvm.org/D92068
2020-12-14 00:02:42 -05:00
Kazu Hirata ee5b5b7a35 [CodeGen] Use llvm::erase_value (NFC) 2020-12-13 20:05:48 -08:00
Kazu Hirata 913515e465 [Target] Use llvm::is_contained (NFC) 2020-12-13 19:35:10 -08:00
Lang Hames 04795ab836 Re-apply 8904ee8ac7 with missing header included this time. 2020-12-14 13:39:33 +11:00
Nico Weber 5b112bcc0d Revert "[JITLink] Add JITLinkDylib type, thread through JITLinkMemoryManager APIs."
This reverts commit 8904ee8ac7.
Didn't `git add` llvm/ExecutionEngine/JITLink/JITLinkDylib.h and hence doesn't
build anywhere.
2020-12-13 21:30:38 -05:00
Lang Hames 8904ee8ac7 [JITLink] Add JITLinkDylib type, thread through JITLinkMemoryManager APIs.
JITLinkDylib represents a target dylib for a JITLink link. By representing this
explicitly we can:
  - Enable JITLinkMemoryManagers to manage allocations on a per-dylib basis
    (e.g. by maintaining a separate allocation pool for each JITLinkDylib).
  - Enable new features and diagnostics that require information about the
    target dylib (not implemented in this patch).
2020-12-14 12:29:16 +11:00
Lang Hames 0207de0bfe [ORC] Prefer preincrement on iterator. 2020-12-14 12:00:21 +11:00
Craig Topper 0261ce9e17 [X86] Add ExeDomain = SSEPackedSingle to cvtss2sd and cvtsd2ss instructions.
Prep for D92993
2020-12-13 12:35:33 -08:00
Craig Topper fa31f337a2 [X86] Add isel patterns to form VPDPWSSD from (add (vpmaddwd X, Y), Z) when AVXVNNI is enabled.
We already have these patterns for AVX512VNNI.
2020-12-13 12:02:07 -08:00
Nikita Popov 22dba707b0 [AC] Handle (X+C1)<C2 assumes (PR48408)
InstCombine canonicalizes X>C && X<C' style comparisons into
(X+C1)<C2. This type of expression is recognized by some analyses
like LVI, but currently not when used inside assumptions, because
AssumptionCache does not track affected values for it.
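
For illustration, a minimal standalone C++ sketch of the canonical form mentioned above (the function names are invented for this example; the real transform operates on LLVM IR):

```cpp
#include <cassert>
#include <cstdint>

// Illustration only (hypothetical functions, not from the patch): the range
// check x > 5 && x < 10 can be rewritten as the single unsigned comparison
// (x + (-6)) u< 4, i.e. the (X+C1)<C2 form that InstCombine canonicalizes to
// and that assumes carrying it can now feed into analyses like LVI.
static bool inRangeTwoCmps(int32_t x) { return x > 5 && x < 10; }

static bool inRangeCanonical(int32_t x) {
  // Adding -6 maps the accepted values 6..9 onto 0..3; everything else wraps
  // to a large unsigned value and fails the comparison.
  return (uint32_t)(x - 6) < 4u;
}

int main() {
  for (int32_t x = -100; x <= 100; ++x)
    assert(inRangeTwoCmps(x) == inRangeCanonical(x));
  return 0;
}
```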
2020-12-13 21:00:32 +01:00
Kazu Hirata 5891ad4e22 [Transforms] Use llvm::erase_value (NFC) 2020-12-13 09:48:47 -08:00
Simon Pilgrim d5c434d7dd [X86][SSE] combineX86ShufflesRecursively - add basic handling for combining shuffles of different widths (PR45974)
If a faux shuffle uses smaller shuffle inputs, try to recursively combine with those inputs directly instead of widening them immediately. Then widen all smaller inputs at the bottom of the recursion.

This will still mean we're generating nodes on the fly (PR45974) even if we don't combine to a new shuffle but it does help AVX2+ targets combine across xmm/ymm/zmm types, mainly as variable shuffles.
2020-12-13 17:18:07 +00:00
Florian Hahn 533f85767c [VPlan] Use interleaveComma in printOperands() (NFC). 2020-12-13 16:29:16 +00:00
Florian Hahn 46bc40e502 Recommit "[AArch64] Lower calls with rv_marker attribute."
This recommits a87fccb3ff with a fix to mark the destination operand
of the marker instruction as def, to fix a machine verifier failure.

This reverts the revert commit c0f2cea7c0.
2020-12-13 16:20:39 +00:00
Simon Pilgrim 47321c311b [X86][SSE] combineReductionToHorizontal - add vXi8 ISD::MUL reduction handling (PR39709)
Default expansion leads to repeated extensions/truncations to/from vXi16 which shuffle combining and demanded elts can't completely unravel.

Better just to promote (any_extend) the input and perform a vXi16 reduction.

We'll be able to remove a lot of this if we ever get decent legalization support for reduction intrinsics in SelectionDAG.
2020-12-13 15:22:54 +00:00
Nikita Popov bb939ebfd7 [BasicAA] Handle known non-zero variable index
BasicAA currently handles cases like Scale*V0 + (-Scale)*V1 where
V0 != V1, but does not handle the simpler case of Scale*V with
V != 0. Add it based on an isKnownNonZero() call.

I'm not passing a context instruction for now, because the existing
approach of always using GEP1 for context could result in symmetry
issues.
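
A hypothetical C++ example of the access pattern this enables BasicAA to disambiguate (not taken from the patch or its tests):

```cpp
#include <cassert>
#include <cstddef>

// Hypothetical example of the pattern described above: the two stores below
// differ by the variable byte offset 4*i (assuming 4-byte int), and once i is
// known to be non-zero the accesses cannot overlap, so a pass querying alias
// analysis can fold the final load to 1.
int firstElementAfterStores(int *p, std::size_t i) {
  if (i == 0) // establishes i != 0 on the path below
    return 0;
  p[0] = 1;   // access at offset 0
  p[i] = 2;   // access at offset 4*i, with i != 0 -> NoAlias with p[0]
  return p[0];
}

int main() {
  int buf[4] = {0, 0, 0, 0};
  assert(firstElementAfterStores(buf, 2) == 1);
}
```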

Differential Revision: https://reviews.llvm.org/D93162
2020-12-13 13:20:05 +01:00
Chris Sears 36a23b33aa X86: Correcting X86OutgoingValueHandler typo (NFC)
https://reviews.llvm.org/D92631
2020-12-12 20:28:37 -05:00
Nico Weber cf16437e05 fix typos to cycle bots 2020-12-12 20:19:33 -05:00
Amara Emerson 21de99d43c [GlobalISel][IRTranslator] Fix a crash when the use of an extractvalue is a non-dominated metadata use.
We don't expect uses to come before defs in the CFG, so allocateVRegs() asserted.

Fixes PR48211
2020-12-12 14:58:54 -08:00
Roman Lebedev d38205144f [SimplifyCFG] FoldBranchToCommonDest(): bonus instrns must only be used by PHI nodes in successors (PR48450)
In particular, if the successor block, which is about to get a new
predecessor block, currently only has a single predecessor,
then the bonus instructions will be directly used within said successor,
which is fine, since the block with bonus instructions dominates that
successor. But once there's a new predecessor, the IR is no longer valid,
and we don't fix it, because we only update PHI nodes.

Which means the live-out bonus instructions must be exclusively used
by the PHI nodes in successor blocks. So we have to form trivial PHI nodes,
which will then be successfully updated to receive the cloned bonus
instructions.

This all works fine, except for the fact that we don't have access to
the dominator tree, and we don't ignore unreachable code,
so we sometimes do end up having to deal with some weird IR.

Fixes https://bugs.llvm.org/show_bug.cgi?id=48450
2020-12-13 00:06:57 +03:00
Zarko Todorovski ce4040a43d [PPC] Check for PPC64 when emitting 64bit specific VSX nodes when pattern matching built vectors
Some of the pattern matching in PPCInstrVSX.td and node lowering involving vectors assumes 64-bit mode.  This patch disables some of the unsafe pattern matching and lowering of BUILD_VECTOR in 32-bit mode.

Reviewed By: Xiangling_L

Differential Revision: https://reviews.llvm.org/D92789
2020-12-12 15:28:28 -05:00
Nikita Popov afbb6d97b5 [CVP] Simplify and generalize switch handling
CVP currently handles switches by checking an equality predicate
on all edges from predecessor blocks. Of course, this can only
work if the value being switched over is defined in a different block.

Replace this implementation with a call to getPredicateAt(), which
also does the predecessor edge predicate check (if not defined in
the same block), but can also do quite a bit more: It can reason
about phi-nodes by checking edge predicates for incoming values,
it can reason about assumes, and it can reason about block values.

As such, this makes the implementation both simpler and more
powerful. The compile-time impact on CTMark is in the noise.
2020-12-12 21:12:27 +01:00
Krzysztof Parzyszek baf931a842 [Hexagon] Reconsider getMask fix, return original mask, convert later
The getPayload/getMask/getPassThrough functions should return values
that could be composed into a masked load/store without any additional
type casts. The previous fix violated that.
Instead, convert the scalar mask to a vector right before rescaling.
2020-12-12 13:27:22 -06:00
Kazu Hirata 9293b251b5 [Analysis/Interval] Remove isLoop (NFC)
The last use of isLoop was removed on Apr 29, 2002 in commit
09bbb5c015 as part of an effort to
remove "old induction varaible cannonicalization pass built on top of
interval analysis".
2020-12-12 10:09:35 -08:00
Kazu Hirata 215c1b1935 [Transforms] Use is_contained (NFC) 2020-12-12 09:37:49 -08:00
Krzysztof Parzyszek 2cf5310471 [Hexagon] Create vector masks for scalar loads/stores
AlignVectors treats all loaded/stored values as vectors of bytes,
and masks as corresponding vectors of booleans, so make getMask
produce a 1-element vector for scalars from the start.
2020-12-12 11:12:17 -06:00
Harald van Dijk f61e5ecb91 [X86] Avoid data16 prefix for lea in x32 mode
The ABI demands a data16 prefix for lea in 64-bit LP64 mode, but not in
64-bit ILP32 mode. In both modes this prefix would ordinarily be
ignored, but the instructions may be changed by the linker to
instructions that are affected by the prefix.

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D93157
2020-12-12 17:05:24 +00:00
David Green a4823377fd [ARM] Add basic masked load/store costs
This adds some basic MVE masked load/store costs, notably changing the
cost of legal loads/stores to the MVECostFactor and the cost of
scalarized instructions to 8*NumElts.
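
As a rough, hypothetical sketch of the cost shape described above (the class and the factor value below are invented for illustration, not the actual ARM TTI hook):

```cpp
#include <cassert>

// Illustrative cost shape only: a legal MVE masked load/store is charged one
// vector operation scaled by an MVE cost factor, while an illegal one that
// must be scalarized is charged per element.
struct MaskedMemCostModel {
  unsigned MVECostFactor = 2; // assumed value, purely for illustration

  unsigned cost(bool Legal, unsigned NumElts) const {
    return Legal ? MVECostFactor : 8 * NumElts;
  }
};

int main() {
  MaskedMemCostModel M;
  assert(M.cost(/*Legal=*/true, 4) == 2);   // legal 4-element masked load
  assert(M.cost(/*Legal=*/false, 4) == 32); // scalarized: 8 * 4 elements
}
```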

Differential Revision: https://reviews.llvm.org/D86538
2020-12-12 15:26:32 +00:00
David Green ab97c9bdb7 [LV] Fix scalar cost for tail predicated loops
When it comes to the scalar cost of any predicated block, the loop
vectorizer by default regards this predication as a sign that it is
looking at an if-conversion and divides the scalar cost of the block by
2, assuming it would only be executed half the time. This however makes
no sense if the predication has been introduced to tail predicate the
loop.

Original patch by Anna Welker

Differential Revision: https://reviews.llvm.org/D86452
2020-12-12 14:21:40 +00:00
Nikita Popov d716eab197 [BasicAA] Make non-equal index handling simpler to extend (NFC) 2020-12-12 15:00:47 +01:00
Luo, Yuanke e52bc1d2bb [X86] Add chain in ISel for x86_tdpbssd_internal intrinsic. 2020-12-12 21:14:38 +08:00
Nathan James 0e5bfffb13 [YAML] Support extended spellings when parsing bools.
Support all the spellings of boolean datatypes according to https://yaml.org/type/bool.html
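
For reference, a small standalone C++ sketch of the extended spelling set from that page (an illustration, not the actual llvm::yaml implementation):

```cpp
#include <cassert>
#include <optional>
#include <string_view>

// Sketch only: map the extended YAML 1.1 boolean spellings listed at
// https://yaml.org/type/bool.html to a value, or nullopt if unrecognized.
static std::optional<bool> parseYamlBool(std::string_view S) {
  for (std::string_view T : {"y", "Y", "yes", "Yes", "YES",
                             "true", "True", "TRUE", "on", "On", "ON"})
    if (S == T)
      return true;
  for (std::string_view F : {"n", "N", "no", "No", "NO",
                             "false", "False", "FALSE", "off", "Off", "OFF"})
    if (S == F)
      return false;
  return std::nullopt; // not a recognized boolean spelling
}

int main() {
  assert(*parseYamlBool("Yes"));
  assert(!*parseYamlBool("OFF"));
  assert(!parseYamlBool("maybe").has_value());
}
```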

Reviewed By: silvas

Differential Revision: https://reviews.llvm.org/D92755
2020-12-12 12:50:34 +00:00
Kazu Hirata eb44682d67 [Analysis] Use is_contained (NFC) 2020-12-11 21:19:31 -08:00
Mircea Trofin f76b7f22f0 [MLGO] Fix build break as result of new InstructionCost (D91174) 2020-12-11 20:28:39 -08:00
Fangrui Song 7698a01808 [llvm-cov gcov] Replace Donald B. Johnson's cycle enumeration with iterative cycle finding
gcov computes the line execution count as the sum of (a) counts from
predecessors on other lines and (b) the sum of loop execution counts of blocks
on the same line (think of loops on one line).

For (b), we use Donald B. Johnson's cycle enumeration algorithm and perform
cycle cancelling for each cycle. The number of candidate cycles was
exponential, and D93036 made it polynomial by skipping zero-count cycles.  The
time complexity is still high (O(V*E^2); it could be O(E^2), but the linear
`Blocks` check made it higher) and the implementation is complex.

We could just identify loops and sum all back edges. However, this requires a
dominator tree construction which is more complex. The time complexity can be
decreased to almost linear, though.

This patch just performs cycle cancelling iteratively. Add two members
`traversable` and `incoming` to GCOVArc. There are 3 states:

* `!traversable`: blocks not on this line or explored blocks
* `traversable && incoming == nullptr`: unexplored blocks
* `traversable && incoming != nullptr`: blocks which are being explored (on the stack)

If an arc points to a block being explored, a cycle has been found.

Let E be the number of arcs. Every time a cycle is found, at least one arc is
saturated (`edgeCount` reduced to 0), so there are at most E cycles. Finding one
cycle takes O(E) time, so the overall time complexity is O(E^2). Note that we
always augment through a back edge and never need to augment its reverse edge so
reverse edges in traditional flow networks are not needed.
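
A simplified, self-contained C++ sketch of this scheme (an illustration of the description above, not the actual llvm-cov GCOVBlock/GCOVArc code; it uses recursion where the patch uses an explicit stack, and all names are invented):

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Blocks on one source line form a small subgraph.  Repeatedly look for a
// cycle whose arcs still carry a positive residual count, add the cycle's
// bottleneck count to the running total, and subtract it from every arc on
// the cycle.  Each round saturates at least one arc, so with E arcs there are
// at most E rounds, matching the O(E^2) bound mentioned above.
struct Arc {
  unsigned Src, Dst;
  uint64_t Count; // residual count, reduced as cycles are cancelled
};

struct Block {
  std::vector<Arc *> Out;  // outgoing arcs to blocks on the same line
  Arc *Incoming = nullptr; // arc used to enter this block on the DFS path
  bool OnStack = false;    // block is currently being explored
  bool Explored = false;   // block has been fully explored this round
};

// Depth-first search from block U; on finding a cycle, collect its arcs.
static bool findCycle(unsigned U, std::vector<Block> &Blocks,
                      std::vector<Arc *> &CycleArcs) {
  Blocks[U].OnStack = true;
  for (Arc *A : Blocks[U].Out) {
    if (A->Count == 0)
      continue;
    Block &W = Blocks[A->Dst];
    if (W.OnStack) {
      // Cycle found: this arc plus the chain of incoming arcs back to A->Dst.
      CycleArcs.push_back(A);
      for (unsigned B = U; B != A->Dst; B = Blocks[B].Incoming->Src)
        CycleArcs.push_back(Blocks[B].Incoming);
      return true;
    }
    if (!W.Explored) {
      W.Incoming = A;
      if (findCycle(A->Dst, Blocks, CycleArcs))
        return true;
    }
  }
  Blocks[U].OnStack = false;
  Blocks[U].Explored = true;
  return false;
}

// Sum the execution counts of all cycles reachable from block Start.
static uint64_t cancelCycles(std::vector<Block> &Blocks, unsigned Start) {
  uint64_t Total = 0;
  for (;;) {
    for (Block &B : Blocks) {
      B.Incoming = nullptr;
      B.OnStack = B.Explored = false;
    }
    std::vector<Arc *> CycleArcs;
    if (!findCycle(Start, Blocks, CycleArcs))
      return Total;
    uint64_t Min = UINT64_MAX;
    for (Arc *A : CycleArcs)
      Min = std::min(Min, A->Count);
    for (Arc *A : CycleArcs)
      A->Count -= Min; // at least one arc drops to zero
    Total += Min;
  }
}

int main() {
  // Two blocks on the same line forming a loop executed five times.
  std::vector<Arc> Arcs = {{0, 1, 5}, {1, 0, 5}};
  std::vector<Block> Blocks(2);
  Blocks[0].Out = {&Arcs[0]};
  Blocks[1].Out = {&Arcs[1]};
  return cancelCycles(Blocks, /*Start=*/0) == 5 ? 0 : 1;
}
```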

Reviewed By: xinhaoyuan

Differential Revision: https://reviews.llvm.org/D93073
2020-12-11 18:28:16 -08:00
Jonas Paulsson 42f628c842 Reapply "[SystemZFrameLowering] Don't overrwrite R1D (backchain) when probing."
Fixed to properly compute the live-in lists of new blocks.

Review: Ulrich Weigand

Differential Revision: https://reviews.llvm.org/D92803
2020-12-11 18:25:47 -06:00