Commit Graph

28297 Commits

Author SHA1 Message Date
Nico Weber f82b32a51e Revert "Reland "[DebugInfo] Enable the debug entry values feature by default""
This reverts commit 5aa5c943f7.
Causes clang to assert, see
https://bugs.chromium.org/p/chromium/issues/detail?id=1061533#c4
for a repro.
2020-03-13 15:37:44 -04:00
Juneyoung Lee c39cb1c0dd [CodeGenPrepare] Expand freeze conversion to support fcmp and icmp with null
Summary:
This is a simple patch that expands https://reviews.llvm.org/D75859 to pointer comparison and fcmp

Checked with Alive2

Reviewers: reames, jdoerfert

Reviewed By: jdoerfert

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D76048
2020-03-13 17:21:33 +09:00
QingShan Zhang e601196833 [NFC][DAGCombine] Move the fold of a*b-c and a-b*c into lambda function
This will help the review of https://reviews.llvm.org/D75982. It is
a simple code refactor.
2020-03-13 02:35:46 +00:00
Arlo Siemsen 1478ed69d3 Add support for SHA256 source file checksums in debug info
LLVM currently supports CSK_MD5 and CSK_SHA1 source file checksums in
debug info. This change adds support for CSK_SHA256 checksums.

The SHA256 checksums are supported by the CodeView debug format.

Reviewed By: aprantl

Differential Revision: https://reviews.llvm.org/D75785
2020-03-12 16:32:05 -07:00
Huihui Zhang 118abf2017 [SVE] Update API ConstantVector::getSplat() to use ElementCount.
Summary:
Support ConstantInt::get() and Constant::getAllOnesValue() for scalable
vector type, this requires ConstantVector::getSplat() to take in 'ElementCount',
instead of 'unsigned' number of element count.

This change is needed for D73753.
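
As a rough illustration of the new API shape (not code from the patch; the exact ElementCount construction differs between LLVM versions), a caller can now request a splat for either a fixed or a scalable vector:

```
#include "llvm/IR/Constants.h"
#include "llvm/IR/Type.h"
#include "llvm/Support/TypeSize.h"

// Illustrative only: build an all-ones splat with 'MinElts' (minimum) elements,
// optionally scalable, now that getSplat() takes an ElementCount.
llvm::Constant *makeAllOnesSplat(llvm::LLVMContext &Ctx, unsigned MinElts,
                                 bool Scalable) {
  llvm::Constant *AllOnes =
      llvm::Constant::getAllOnesValue(llvm::Type::getInt32Ty(Ctx));
  llvm::ElementCount EC(MinElts, Scalable); // {min element count, scalable?}
  return llvm::ConstantVector::getSplat(EC, AllOnes);
}
```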

Reviewers: sdesmalen, efriedma, apazos, spatel, huntergr, willlovett

Reviewed By: efriedma

Subscribers: tschuett, hiraditya, rkruppe, psnobl, cfe-commits, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D74386
2020-03-12 13:22:41 -07:00
Simon Pilgrim 2a2d242017 [DAGCombine] foldVSelectOfConstants - ensure constants are same type
Fix bug identified by https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=21167, foldVSelectOfConstants must ensure that the 2 build vectors have scalars of the same type before trying to compare APInt values.
2020-03-12 20:02:05 +00:00
Thomas Lively 4e589e6c26 [WebAssembly] Fix SIMD shift unrolling to avoid assertion failure
Summary:
Using the default DAG.UnrollVectorOp on v16i8 and v8i16 vectors
results in i8 or i16 nodes being inserted into the SelectionDAG. Since
those are illegal types, this causes a legalization assertion failure
for some code patterns, as uncovered by PR45178. This change unrolls
shifts manually to avoid this issue by adding and using a new optional
EVT argument to DAG.ExtractVectorElements to control the type of the
extract_element nodes.

Reviewers: aheejin, dschuff

Subscribers: sbc100, jgravelle-google, hiraditya, sunfish, zzheng, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D76043
2020-03-12 12:20:14 -07:00
Marcello Maggioni ba5500f27a [RAGreedy] Fix minor typo in comment. NFC 2020-03-12 08:15:04 -07:00
Andrzej Warzynski 46b9f14d71 [AArch64][SVE] Add intrinsics for non-temporal scatters/gathers
Summary:
This patch adds the following intrinsics for non-temporal gather loads
and scatter stores:
  * aarch64_sve_ldnt1_gather_index
  * aarch64_sve_stnt1_scatter_index
These intrinsics implement the "scalar + vector of indices" addressing
mode.

As opposed to regular and first-faulting gathers/scatters, there's no
instruction that would take indices and then scale them. Instead, the
indices for non-temporal gathers/scatters are scaled before the
intrinsics are lowered to `ldnt1` instructions.

The new ISD nodes, GLDNT1_INDEX and SSTNT1_INDEX, are only used as
placeholders so that we can easily identify the cases implemented in
this patch in performGatherLoadCombine and performScatterStoreCombined.
Once encountered, they are replaced with:
  * GLDNT1_INDEX -> SPLAT_VECTOR + SHL + GLDNT1
  * SSTNT1_INDEX -> SPLAT_VECTOR + SHL + SSTNT1

The patterns for lowering ISD::SHL for scalable vectors (required by
this patch) were missing, so these are added too.
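
As a scalar model of the scaling step (illustrative only; the function and parameter names are made up, and the real lowering does this per lane via SPLAT_VECTOR + SHL):

```
#include "llvm/Support/MathExtras.h"
#include <cstdint>

// The non-temporal forms take byte offsets rather than scaled indices, so each
// index gets shifted left by log2 of the element size before lowering.
uint64_t scaleIndexToByteOffset(uint64_t Index, unsigned EltSizeInBytes) {
  return Index << llvm::Log2_32(EltSizeInBytes);
}
```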

Reviewed By: sdesmalen

Differential Revision: https://reviews.llvm.org/D75601
2020-03-12 13:55:56 +00:00
Dominik Montada 6b96623dcb [GlobalISel] fix crash in narrowScalarExtract if DstRegs only has one register
Summary: When narrowing a scalar G_EXTRACT where the destination lines up perfectly with a single result of the emitted G_UNMERGE_VALUES a COPY should be emitted instead of unconditionally trying to emit a G_MERGE_VALUES.

Reviewers: arsenm, dsanders

Reviewed By: arsenm

Subscribers: wdng, rovka, hiraditya, volkan, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D75743
2020-03-12 09:14:35 +01:00
Tres Popp bbe6764711 Remove unused variable.
Delete dead code from 8fffa40400.
2020-03-12 08:42:57 +01:00
Philip Reames 8fffa40400 [GC] Remove redundant entries in stackmap section (and test it this time)
This is a reimplementation of the optimization removed in D75964. The actual spill/fill optimization is handled by D76013; this one just worries about reducing the stackmap section size itself by eliminating redundant entries. As noted in the comments, we could go a lot further here, but avoiding the degenerate invoke case as we did before is probably "enough" in practice.

Differential Revision: https://reviews.llvm.org/D76021
2020-03-11 21:24:48 -07:00
Bill Wendling 6aebf0ee56 Specify branch probabilities for callbr dests
Summary:
callbr's indirect branches aren't expected to be taken, so reduce their
probabilities to 0 while increasing the default destination to 1. This
allows some code improvements through block placement.

Reviewers: nickdesaulniers

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D72656
2020-03-11 20:33:48 -07:00
Adrian Prantl d5180ea134 Add debug info support for Swift/Clang APINotes.
In order for dsymutil to collect .apinotes files (which capture
attributes such as nullability, Swift import names, and availability),
I want to propose adding an apinotes: field to DIModule that gets
translated into a DW_AT_LLVM_apinotes (path) nested inside
DW_TAG_module. This will be primarily used by LLDB to indirectly
extract the Swift names of Clang declarations that were deserialized
from DWARF.

<rdar://problem/59514626>

Differential Revision: https://reviews.llvm.org/D75585
2020-03-11 18:47:30 -07:00
Adrian Prantl e4e7e44765 Add an SDK attribute to DICompileUnit
This is part of PR44213 https://bugs.llvm.org/show_bug.cgi?id=44213

When importing (system) Clang modules, LLDB needs to know which SDK
(e.g., MacOSX, iPhoneSimulator, ...) they came from. While the sysroot
attribute contains the absolute path to the SDK, this doesn't work
well when the debugger is run on a different machine than the
compiler, and the SDKs are installed in different directories. It thus
makes sense to just store the name of the SDK instead of the absolute
path, so it can be found relative to LLDB.

rdar://problem/51645582

Differential Revision: https://reviews.llvm.org/D75646
2020-03-11 14:14:06 -07:00
Jin Lin a0cacb6054 Fix conflict value for metadata "Objective-C Garbage Collection" in the mix of swift and Objective-C bitcode
Summary:
The change fixes the conflicting value for the metadata "Objective-C Garbage Collection" when mixing Swift and Objective-C bitcode.
The purpose is to support LTO for mixed Swift and Objective-C projects.

Reviewers: rjmccall, ahatanak, steven_wu

Reviewed By: rjmccall, steven_wu

Subscribers: manmanren, mehdi_amini, hiraditya, dexonsmith, llvm-commits, jinlin

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D71219
2020-03-11 13:26:06 -07:00
Philip Reames 8f997b4f01 [GC] Loosen ordering on statepoint reloads to allow CSE
We just removed a broken duplicate elimination algorithm in D75964, and after landing that it occurred to me that duplicate elimination is simply CSE. SelectionDAG has a built-in CSE, so why wasn't that triggering? Well, it turns out we were overly conservative in the memory states for our reloads, and CSE (rightly) considers the incoming memory state for a load part of the identity of the load.

By loosening the chain and allowing reordering, we also allow CSE. As shown in the test case, doing iterative CSE as we go is enough to eliminate duplicate stores in later statepoints as well. We key our (block local) slot map by SDValue, so commoning a previous pair of loads at construction time means we also common following stores.

Differential Revision: https://reviews.llvm.org/D76013
2020-03-11 12:30:06 -07:00
Simon Pilgrim d8f9416fdc [DAG] MatchRotate - Add funnel shift by immediate support
This patch reuses the existing MatchRotate ROTL/ROTR rotation pattern code to also recognize the more general FSHL/FSHR funnel shift patterns when we have constant shift amounts.

Differential Revision: https://reviews.llvm.org/D75114
2020-03-11 18:55:18 +00:00
Juneyoung Lee 8eb2f865c3 [CodeGenPrepare] Fold br(freeze(icmp x, const)) to br(icmp(freeze x, const))
Summary:
This patch helps CodeGenPrepare move freeze into the icmp when it is used by branch.
It reenables generation of efficient conditional jumps.

This is only done when at least one of icmp's operands is constant to prevent the transformation from increasing # of freeze instructions.

Performance degradation of MultiSource/Benchmarks/Ptrdist/yacr2/yacr2.test is resolved with this patch.

Checked with Alive2

Reviewers: reames, fhahn, nlopes

Reviewed By: reames

Subscribers: jdoerfert, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D75859
2020-03-12 03:16:15 +09:00
Philip Reames e671641844 [GC] Remove buggy untested optimization from statepoint lowering
A downstream test case (see included reduced test) revealed that we have a bug in how we handle duplicate relocations. If we have the same SDValue relocated twice, and that value happens to be a constant (such as null), we only export one of the two llvm::Values. Exporting on a per llvm::Value basis is required to allow lowering of gc.relocates in following basic blocks (e.g. invokes). Without it, we end up with a use of an undefined vreg and bad things happen.

Rather than fixing the optimization - which appears to be hard - I propose we simply remove it. There are no tests in tree that change with this code removed. If we find out later that this did matter for something, we can reimplement a variation of this in CodeGenPrepare to catch the easy cases without complicating the lowering code.

Thanks to Denis and Serguei who did all the hard work of figuring out what went wrong here. The patch is by far the easy part. :)

Differential Revision: https://reviews.llvm.org/D75964
2020-03-11 10:03:24 -07:00
Matt Arsenault c0ad75e758 GlobalISel: Don't try to narrow extending loads/trunc store
If the loaded memory size was smaller than the result size, this would
produce out of bounds memory accesses. I'm wondering if we need a
distinct narrow memory legalize action type, since a case I care about
is decomposing a 4-byte unaligned access into 4 extending loads, which
would leave the original result register type. I'm currently awkwardly
using narrowScalar to handle unaligned accesses that need to be split.
2020-03-10 23:34:10 -04:00
Matt Arsenault b17a81f8b2 GlobalISel: Add missing add/sub with carries to MachineIRBuilder 2020-03-10 22:39:55 -04:00
Matt Arsenault ce8a1f7294 GlobalISel: Implement fewerElementsVector for G_TRUNC
Extend fewerElementsVectorBasic to handle operands with different
element types.
2020-03-10 15:17:20 -07:00
Benjamin Kramer 247a177cf7 Give helpers internal linkage. NFC. 2020-03-10 18:27:42 +01:00
Kazushi (Jam) Marukawa 3dabad1af3 [VE] Target-specific bit size for sjljehprepare
Summary:
This patch extends the TargetMachine to let targets specify the integer size
used by the sjljehprepare pass. This is 64bit for the VE target and otherwise
defaults to 32bit for all targets, which was hard-wired before.

Reviewed By: arsenm

Differential Revision: https://reviews.llvm.org/D71337
2020-03-10 17:51:16 +01:00
Simon Pilgrim e71fb46a8f [TargetLowering] SimplifyDemandedVectorElts - add DemandedElts mask to ISD::BITCAST SimplifyDemandedBits call.
This fixes most of the regressions introduced in the rG4bc6f6332028 bugfix. The vector-trunc.ll issue should be fixed by D66004.
2020-03-10 13:39:10 +00:00
Djordje Todorovic 5aa5c943f7 Reland "[DebugInfo] Enable the debug entry values feature by default"
Differential Revision: https://reviews.llvm.org/D73534
2020-03-10 09:15:06 +01:00
Puyan Lotfi 4b8af31f63 [llvm][MIRVRegNamer] Avoid collisions across constant pool indices.
When hashing on MachineOperand::MO_ConstantPoolIndex, MIR-Canon and
MIRVRegNamer no longer produce hash collisions.

Differential Revision: https://reviews.llvm.org/D74449
2020-03-10 01:13:20 -04:00
Marcello Maggioni e5205074df Move Spiller.h from lib/ directory path to include/CodeGen. NFC
This allows Spiller.h to be used and included outside of
the lib/CodeGen directory, for example from the
lib/Target directory or other places.
2020-03-09 10:52:28 -07:00
Djordje Todorovic c15c68abdc [CallSiteInfo] Enable the call site info only for -g + optimizations
Emit call site info only when '-g' is combined with an optimization level above 'O0'.

Differential Revision: https://reviews.llvm.org/D75175
2020-03-09 12:12:44 +01:00
Clement Courbet 6518b72f93 [ExpandMemCmp] Properly constant-fold all compares.
Summary:
This gets rid of duplicated code and diverging behaviour w.r.t.
constants.
Fixes PR45086.

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D75519
2020-03-09 10:40:52 +01:00
Clement Courbet f7e6f5f8e3 [ExpandMemCmp] Properly constant-fold all compares.
Summary:
This gets rid of duplicated code and diverging behaviour w.r.t.
constants.
Fixes PR45086.

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D75519
2020-03-09 09:10:34 +01:00
Matt Arsenault a4e71f01c0 Assume ieee behavior without denormal-fp-math attribute 2020-03-07 12:10:56 -05:00
Amara Emerson c1a97e992d Revert "Revert "[GlobalISel][Localizer] Enable intra-block localization of already-local uses.""
This reverts commit 5583c2f2fb.

The lldb bot failure was a test that was fragile and sensitive to irrelevant
changes in instruction ordering. Re-committing this as the test should have
been skipped for AArch64 now.

Differential Revision: https://reviews.llvm.org/D75555
2020-03-06 21:35:08 -08:00
Jin Lin fc6fda90f7 Fix incorrect logic in maintaining the side-effect of compiler generated outliner functions
Summary: Fix incorrect logic in maintaining the side-effect of compiler generated outliner functions by adding the up-exposed uses.

Reviewers: paquette, tellenbach

Reviewed By: paquette

Subscribers: aemerson, lebedev.ri, hiraditya, llvm-commits, jinlin

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D71217
2020-03-06 09:13:20 -08:00
Xiangling Liao 362456bc53 [AIX] Handle LinkOnceODRLinkage and AppendingLinkage for static init global arrays
Handle LinkOnceODRLinkage, and handle the AppendingLinkage type for the llvm.global_ctors/dtors static init global arrays.

Differential Revision: https://reviews.llvm.org/D75305
2020-03-06 09:26:55 -05:00
Simon Pilgrim 7202d9cde9 [DAG] Combine fshl/fshr(load1,load0,c) if we have consecutive loads
As noted on D75114, if both arguments of a funnel shift are consecutive loads we are missing the opportunity to combine them into a single load.

Differential Revision: https://reviews.llvm.org/D75624
2020-03-06 11:36:18 +00:00
Dominik Montada feb20a1594 [GlobalISel] add missing libcalls and 128-bit support for floating points
Add libcall support for G_FMINNUM, G_FMAXNUM, G_FSQRT, G_FRINT, G_FNEARBYINT.
Add 128-bit libcall support for all simple libcalls.

Reviewers: arsenm, Petar.Avramovic, dsanders, petarj, paquette

Subscribers: wdng, rovka, hiraditya, volkan, llvm-commits

Differential Revision: https://reviews.llvm.org/D75516
2020-03-06 09:06:13 +01:00
Hiroshi Yamauchi 76b9901fb1 [PGO][PGSO] Use IsColdXNthPercentile for sample PGO.
Summary:
This performs better for sample PGO.
NFC as PGSOColdCodeOnlyForSamplePGO is still true.

Reviewers: davidxl

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D75550
2020-03-05 09:54:54 -08:00
QingShan Zhang 3906ae387f [DAGCombine] Check the uses of negated floating constant and remove the hack
PowerPC hits an assertion for much the same reason as https://reviews.llvm.org/D70975.
Although there is already a hack for this, it still fails in some cases: when operand 0 is NOT
a constant FP but another FMA whose constant FP operand is negated, and that negation results in multiple uses.

A better fix is to check the uses of the negated constant FP. If there is already a use of its negated
value, we benefit because no extra node is added.

Differential revision: https://reviews.llvm.org/D75501
2020-03-05 03:42:50 +00:00
Muhammad Omair Javaid 5583c2f2fb Revert "[GlobalISel][Localizer] Enable intra-block localization of already-local uses."
This reverts commit e91e1df6ab.
2020-03-05 03:12:28 +05:00
Matt Arsenault b71203a751 GlobalISel: Move some legalizer functions to utils 2020-03-04 16:40:00 -05:00
Matt Arsenault fb0c35fa34 GlobalISel: Set alignment on function argument stack load/store 2020-03-04 16:38:46 -05:00
Wei Mi 3c96d01d2e Generate Callee Saved Register (CSR) related cfi directives like .cfi_restore.
https://reviews.llvm.org/D42848 only handled CFA-related cfi directives but
didn't handle CSR-related cfi. This patch adds the CSR part. Basically it reuses
the framework created in D42848. For each basic block, the patch tracks which
CSRs have been saved at its CFG predecessors' exits, and compares that CSR
set with the set at its previous basic block's exit (the previous block is the
block laid out before the current block). If the saved CSR set at the previous
basic block's exit is larger, .cfi_restore directives will be inserted.

The patch also generates proper .cfi_restore directives in the epilogue to make sure the
saved CSR set is consistent for the incoming edges of each block.

Differential Revision: https://reviews.llvm.org/D74303
2020-03-04 11:18:37 -08:00
Guozhi Wei ee9a3eba76 [CodeGenPrepare] Handle ExtractValueInst in dupRetToEnableTailCallOpts
As the test case shows, if there is an ExtractValueInst in the return block, the function dupRetToEnableTailCallOpts can't duplicate it into the block containing the call, so no tail call is generated later in CodeGen.

This patch adds ExtractValueInst handling code to dupRetToEnableTailCallOpts and FoldReturnIntoUncondBranch, so a tail call can now be generated for this case.
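
A source-level sketch of the shape this enables (hand-written illustration, not the test case from the patch):

```
// The call returns an aggregate; extracting one field in the return block
// previously blocked dupRetToEnableTailCallOpts, so no tail call was formed.
struct Pair { int First, Second; };
Pair compute(int X); // defined elsewhere

int caller(int X) {
  Pair P = compute(X); // extract of the returned aggregate sits in the ret block
  return P.First;      // a candidate for tail-call lowering after this change
}
```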

Differential Revision: https://reviews.llvm.org/D74242
2020-03-04 11:10:32 -08:00
Nikita Popov 0e890cd4d4 [ConstantFolding] Always return something from ConstantFoldConstant
Spin-off from D75407. As described there, ConstantFoldConstant()
currently returns null for non-ConstantExpr/ConstantVector inputs,
but otherwise always returns non-null, independently of whether
any folding has happened or not.

This is confusing and makes consumer code more complicated.
I would expect either that ConstantFoldConstant() returns only if
it actually folded something, or that it always returns non-null.
I'm going with the latter possibility here, which appears to be more
useful considering existing usage.
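
A hedged sketch of the consumer-side simplification (the helper name is illustrative; the always-non-null result is the new contract described above):

```
#include "llvm/Analysis/ConstantFolding.h"
#include "llvm/IR/Constant.h"
#include "llvm/IR/DataLayout.h"

// With the new contract there is no null check needed to fall back to the
// original constant: the result is the folded constant or C itself.
llvm::Constant *foldOrKeep(const llvm::Constant *C, const llvm::DataLayout &DL) {
  // Previously: Constant *Folded = ConstantFoldConstant(C, DL);
  //             return Folded ? Folded : const_cast<Constant *>(C);
  return llvm::ConstantFoldConstant(C, DL);
}
```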

Differential Revision: https://reviews.llvm.org/D75543
2020-03-04 18:24:47 +01:00
Sanjay Patel 29a2b20ab3 [SDAG] simplify FP binops to undef
As discussed in the commit thread for rGa253a2a and D73978, we can do more undef folding for FP ops.
The nnan and ninf fast-math-flags specify that if an operand is the disallowed value, the result is
poison, so we can produce an undef result.

But this doesn't work as expected (the undef operand cases remain) because of a Flags propagation
problem in SelectionDAGBuilder.

I've added DAGCombiner calls to enable these for the other cases because we've shown in other
patches that (because of the limited way that SDAG iterates), it is possible to miss simplifications
like this if they are done only at node creation time.

Several potential follow-ups to expand on this patch are possible.

Differential Revision: https://reviews.llvm.org/D75576
2020-03-04 10:42:16 -05:00
Amara Emerson e91e1df6ab [GlobalISel][Localizer] Enable intra-block localization of already-local uses.
This changes the localizer to attempt intra-block localization of instructions
that have local uses. This is useful because sometimes the entry block itself
has many uses of constant-like instructions, which would benefit from shortening
live ranges. Previously if an inst had no non-local uses, we wouldn't add it to
the list of instructions to attempt further intra-block localization.

This gives a 0.7% geomean code size improvement on CTMark.

Differential Revision: https://reviews.llvm.org/D75555
2020-03-03 18:14:57 -08:00
Fangrui Song 90acc505ed [MCDwarf] Change emitListsTableHeaderStart to use a reference and fold Start/End symbols generation into it
Apply @dblaikie's suggestions in a post-commit review for D75375

Reviewed By: dblaikie

Differential Revision: https://reviews.llvm.org/D75568
2020-03-03 16:20:40 -08:00
Amy Huang 5b3b21f025 [DebugInfo] Fix for adding "returns cxx udt" option to functions in CodeView.
Summary:
This change checks for the return type in the frontend and adds a flag
to the DISubroutineType to indicate that the option should be added in
CodeViewDebug.

Previously function types sometimes appeared twice in the PDB: once with
"returns cxx udt" and once without.
See https://bugs.llvm.org/show_bug.cgi?id=44785.

Reviewers: rnk, asmith

Subscribers: hiraditya, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D75215
2020-03-03 14:00:08 -08:00
Vedant Kumar f002ee55c7 [MachineVerifier] Remove placement rule exception for debug entry values
There should not be an exception allowing debug entry values to be
placed after a terminator.

Differential Revision: https://reviews.llvm.org/D75559
2020-03-03 13:02:18 -08:00
Vedant Kumar 2bf496620c [LiveDebugValues] Do not insert DBG_VALUEs after a MBB terminator
This fixes a miscompile that happened because a DBG_VALUE interfered
with the MachineOutliner's liveness analysis.

Inserting a DBG_VALUE after a terminator breaks predicates on MBB such
as isReturnBlock(). And the resulting DBG_VALUE cannot be "live".

I plan to introduce a MachineVerifier check for this situation in a
follow up.

rdar://59859175

Testing: check-llvm, LNT build with a stage2 compiler & entry values
enabled

Differential Revision: https://reviews.llvm.org/D75548
2020-03-03 13:00:52 -08:00
Fangrui Song 55a56041d1 [MCDwarf] Generate DWARF v5 .debug_rnglists for assembly files
```
// clang -c -gdwarf-5 a.s -o a.o
.section .init; ret
.text; ret
```

.debug_info contains DW_AT_ranges and llvm-dwarfdump will report
a verification error because .debug_rnglists does not exist (not
implemented).

This patch generates .debug_rnglists for assembly files.
emitListsTableHeaderStart() in DwarfDebug.cpp can be shared with
MCDwarf.cpp. Because CodeGen depends on MC, I moved the function to
MCDwarf.cpp

Reviewed By: probinson

Differential Revision: https://reviews.llvm.org/D75375
2020-03-03 09:03:34 -08:00
Craig Topper d8ad7cc088 [DAGCombiner][X86] Improve narrowExtractedVectorLoad to handle cases where the element size isn't byte sized but the subvector is.
Summary:
Follow up from D75377. If the subvector is byte sized and the
index is aligned to the subvector size, we can shrink the load.

Reviewers: spatel, RKSimon

Reviewed By: RKSimon

Subscribers: dbabokin, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D75434
2020-03-03 08:41:31 -08:00
Sam Parker 5618e9be37 [RDA][ARM] collectKilledOperands across multiple blocks
Use MIOperand in collectLocalKilledOperands to make the search
global, as we already have to search for global uses too. This
allows us to delete more dead code when tail predicating.

Differential Revision: https://reviews.llvm.org/D75167
2020-03-03 15:23:05 +00:00
Sam Parker dfe8f5da4c [ARM][RDA] Allow multiple killed users
In RDA, check against the already decided dead instructions when
looking at users. This allows an instruction to be removed if it
has multiple users, but they're all dead.

This means that IT instructions can be considered killed once all
the itstate using instructions are dead.

Differential Revision: https://reviews.llvm.org/D75245
2020-03-03 15:12:29 +00:00
Clement Courbet b0ae20d92e [ExpandMemCmp][NFC] Fix typo in comment. 2020-03-03 11:07:13 +01:00
Awanish Pandey 1cb0e01e42 [DebugInfo][DWARF5]: Added support for debuginfo generation for defaulted parameters
This patch adds support for the DWARF emission/dumping part of debug info
generation for defaulted parameters.

Reviewers: probinson, aprantl, dblaikie

Reviewed By: aprantl, dblaikie

Differential Revision: https://reviews.llvm.org/D73462
2020-03-03 13:09:53 +05:30
Vedant Kumar d64a22a2ad [LiveDebugValues] Prevent some misuse of LocIndex::fromRawInteger, NFC
Make it a compile-time error to pass an int/unsigned/etc to
fromRawInteger.

Hopefully this prevents errors of the form:

```
for (unsigned ID : getVarLocs()) {
  auto VL = LocMap[LocIndex::fromRawInteger(ID)];
  ...
```
2020-03-02 16:59:09 -08:00
Jordan Rupprecht d7803c3832 Add default case to fix -Wswitch errors 2020-03-02 14:23:46 -08:00
Craig Topper adc69729ec [TargetLowering] Fix what look like copy/paste mistakes in compare with infinity handling SimplifySetCC.
I expect that the isCondCodeLegal checks should match the CC of
the node that we're going to create.

Rewriting to a switch to minimize repeated mentions of the same
constants.
2020-03-02 14:12:16 -08:00
Stanislav Mekhanoshin 1bacdcf48d Extend LaneBitmask to 64 bit
This is needed for D74873; AMDGPU is going to have 16-bit subregs
and the largest tuple is 32 VGPRs, which results in 64 lanes.

Differential Revision: https://reviews.llvm.org/D75378
2020-03-02 12:10:52 -08:00
Volkan Keles 4167645d1e GlobalISel: Move Localizer::shouldLocalize(..) to TargetLowering
Add a new target hook for shouldLocalize so that
targets can customize the logic.

https://reviews.llvm.org/D75207
2020-03-02 09:15:40 -08:00
Simon Pilgrim d20fb7ea13 Fix shadow variable warning. NFC. 2020-03-02 11:41:20 +00:00
Simon Pilgrim e4380b07cc Fix operator precedence warning. NFCI. 2020-03-02 10:56:58 +00:00
Serguei Katkov 496e0a99c7 [InlineSpiller] Relax re-materialization restriction for statepoint
We should be careful to allow the number of re-materialized operands to be less
than the number of physical registers.

The STATEPOINT instruction has a variable number of operands and can be very large.
So re-materialization of all operands is disabled at the moment if restrict-statepoint-remat is true.

The patch relaxes the re-materialization restriction for the STATEPOINT instruction, allowing it for
fixed operands, specifically the call target.

Reviewers: reames
Reviewed By: reames
Subscribers: llvm-commits, qcolombet, hiraditya
Differential Revision: https://reviews.llvm.org/D75335
2020-03-02 11:25:44 +07:00
Craig Topper 0cd6712a7a [DAGCombiner][X86] Disable narrowExtractedVectorLoad if the element type size isn't byte sized
The address calculation for the offset assumes that you can calculate the offset by multiplying the index by the store size of the element. But that only works if the element's store size is exactly its real size since we store vectors tightly packed in memory. There are improvements we could make to this like special casing extracting element 0. I think we could also handle cases where the extracted VT is byte sized and the index is aligned with the extract element count.

Differential Revision: https://reviews.llvm.org/D75377
2020-03-01 18:13:25 -08:00
Craig Topper b6e2796114 [X86][TwoAddressInstructionPass] Teach tryInstructionCommute to continue checking for commutable FMA operands in more cases.
Previously we would only check for another commutable operand if the first commute was an aggressive commute.

But if we have two kill operands and neither is tied to the def at the start, we should consider both operands as the one to use as the new def.

This improves the loop in the fma-commute-loop.ll test. This test is derived from a post from discourse here https://llvm.discourse.group/t/unnecessary-vmovapd-instructions-generated-can-you-hint-in-favor-of-vfmadd231pd/582

Differential Revision: https://reviews.llvm.org/D75016
2020-03-01 16:38:08 -08:00
Craig Topper 211fb91f10 [DAGCombiner] Don't emit select_cc from visitSINT_TO_FP/visitUINT_TO_FP. Use plain select instead.
Select_cc isn't used by all targets. X86 doesn't have optimizations
for it.

Since we already know the input to the sint_to_fp/uint_to_fp is
a setcc we can just emit a plain select using that setcc as the
condition. Other DAG combines can turn that into a select_cc on
targets that support it.

Differential Revision: https://reviews.llvm.org/D75415
2020-03-01 10:52:17 -08:00
Sanjay Patel 619d7dc39a [DAGCombiner] recognize shuffle (shuffle X, Mask0), Mask --> splat X
We get the simple cases of this via demanded elements and other folds,
but that doesn't work if the values have >1 use, so add a dedicated
match for the pattern.

We already have this transform in IR, but it doesn't help the
motivating x86 tests (based on PR42024) because the shuffles don't
exist until after legalization and other combines have happened.
The AArch64 test shows a minimal IR example of the problem.

Differential Revision: https://reviews.llvm.org/D75348
2020-03-01 09:10:25 -05:00
Simon Pilgrim d955b221cb [MachineInst] Remove dead code. NFCI.
The MachineFunction MF value is not used any more and is always null.
2020-02-29 19:25:02 +00:00
Simon Pilgrim 6e7a768354 Make argument const to silence cppcheck warning. NFCI. 2020-02-29 19:25:01 +00:00
Fangrui Song 692e0c9648 [MC] Add MCStreamer::emitInt{8,16,32,64}
Similar to AsmPrinter::emitInt{8,16,32,64}.
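
A minimal usage sketch (the function name is illustrative; the helpers are thin wrappers over emitIntValue, mirroring the AsmPrinter versions):

```
#include "llvm/MC/MCStreamer.h"

// Emit a few fixed-width integers into the current section.
void emitSketchHeader(llvm::MCStreamer &S) {
  S.emitInt8(0x2a);   // 1 byte
  S.emitInt16(2020);  // 2 bytes
  S.emitInt32(1);     // 4 bytes
  S.emitInt64(0);     // 8 bytes
}
```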
2020-02-29 09:40:21 -08:00
Vedant Kumar dd1ea9de2e Reland: [Coverage] Revise format to reduce binary size
Try again with an up-to-date version of D69471 (99317124 was a stale
revision).

---

Revise the coverage mapping format to reduce binary size by:

1. Naming function records and marking them `linkonce_odr`, and
2. Compressing filenames.

This shrinks the size of llc's coverage segment by 82% (334MB -> 62MB)
and speeds up end-to-end single-threaded report generation by 10%. For
reference the compressed name data in llc is 81MB (__llvm_prf_names).

Rationale for changes to the format:

- With the current format, most coverage function records are discarded.
  E.g., more than 97% of the records in llc are *duplicate* placeholders
  for functions visible-but-not-used in TUs. Placeholders *are* used to
  show under-covered functions, but duplicate placeholders waste space.

- We reached general consensus about giving (1) a try at the 2017 code
  coverage BoF [1]. The thinking was that using `linkonce_odr` to merge
  duplicates is simpler than alternatives like teaching build systems
  about a coverage-aware database/module/etc on the side.

- Revising the format is expensive due to the backwards compatibility
  requirement, so we might as well compress filenames while we're at it.
  This shrinks the encoded filenames in llc by 86% (12MB -> 1.6MB).

See CoverageMappingFormat.rst for the details on what exactly has
changed.

Fixes PR34533 [2], hopefully.

[1] http://lists.llvm.org/pipermail/llvm-dev/2017-October/118428.html
[2] https://bugs.llvm.org/show_bug.cgi?id=34533

Differential Revision: https://reviews.llvm.org/D69471
2020-02-28 18:12:04 -08:00
Vedant Kumar 3388871714 Revert "[Coverage] Revise format to reduce binary size"
This reverts commit 99317124e1. This is
still busted on Windows:

http://lab.llvm.org:8011/builders/lld-x86_64-win7/builds/40873

The llvm-cov tests report 'error: Could not load coverage information'.
2020-02-28 18:03:15 -08:00
Vedant Kumar 99317124e1 [Coverage] Revise format to reduce binary size
Revise the coverage mapping format to reduce binary size by:

1. Naming function records and marking them `linkonce_odr`, and
2. Compressing filenames.

This shrinks the size of llc's coverage segment by 82% (334MB -> 62MB)
and speeds up end-to-end single-threaded report generation by 10%. For
reference the compressed name data in llc is 81MB (__llvm_prf_names).

Rationale for changes to the format:

- With the current format, most coverage function records are discarded.
  E.g., more than 97% of the records in llc are *duplicate* placeholders
  for functions visible-but-not-used in TUs. Placeholders *are* used to
  show under-covered functions, but duplicate placeholders waste space.

- We reached general consensus about giving (1) a try at the 2017 code
  coverage BoF [1]. The thinking was that using `linkonce_odr` to merge
  duplicates is simpler than alternatives like teaching build systems
  about a coverage-aware database/module/etc on the side.

- Revising the format is expensive due to the backwards compatibility
  requirement, so we might as well compress filenames while we're at it.
  This shrinks the encoded filenames in llc by 86% (12MB -> 1.6MB).

See CoverageMappingFormat.rst for the details on what exactly has
changed.

Fixes PR34533 [2], hopefully.

[1] http://lists.llvm.org/pipermail/llvm-dev/2017-October/118428.html
[2] https://bugs.llvm.org/show_bug.cgi?id=34533

Differential Revision: https://reviews.llvm.org/D69471
2020-02-28 17:33:25 -08:00
Vedant Kumar 0368b42295 [entry values] ARM: Add a describeLoadedValue override (PR45025)
As a narrow stopgap for the assertion failure described in PR45025, add
a describeLoadedValue override to ARMBaseInstrInfo and use it to detect
copies in which the forwarding reg is a super/sub reg of the copy
destination. For the moment this is unsupported.

Several follow ups are possible:

1) Handle VORRq. At the moment, we do not, because isCopyInstrImpl
   returns early when !MI.isMoveReg().

2) In the case where forwarding reg is a super-reg of the copy
   destination, we should be able to describe the forwarding reg as a
   subreg within the copy destination. I'm not 100% sure about this, but
   it looks like that's what's done in AArch64InstrInfo.

3) In the case where the forwarding reg is a sub-reg of the copy
   destination, maybe we could describe the forwarding reg using the
   copy destination and a DW_OP_LLVM_fragment (I guess this should be
   possible after D75036).

https://bugs.llvm.org/show_bug.cgi?id=45025
rdar://59772698

Differential Revision: https://reviews.llvm.org/D75273
2020-02-28 14:30:40 -08:00
David Green 1de1070559 [DAGCombine] Fix alias analysis for unaligned accesses
The alias analysis in DAG Combine looks at the BaseAlign, the Offset and
the Size of two accesses, and determines if they are known to access
different parts of memory by the fact that they are different offsets
from inside that "alignment window". It does not seem to account for
accesses that are not a multiple of the size, and may overflow from one
alignment window into another.

For example, in the test case we have a 19-byte memset that is split into
a 16-byte neon store and an unaligned 4-byte store with a 15-byte
offset. This 15-byte offset (with a base align of 8) wraps around into the
next alignment window. When compared to an access at a 16-byte
offset (of the same 4-byte size and 8-byte base align), the two accesses
are said not to alias.

I've fixed this here by just ensuring that the offsets are a multiple of
the size, so that they cannot overlap by wrapping. Fixes PR45035,
which was exposed by the UseAA changes in the ARM backend.
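
A simplified scalar model of the tightened check (illustrative only, not the actual DAGCombiner code; it assumes both accesses have unknown bases that are each aligned to `Align`, non-negative offsets, and equal sizes):

```
#include <cstdint>

// Accesses can only be proven disjoint via "alignment window" reasoning when
// neither access straddles a window boundary. Requiring each offset to be a
// multiple of the access size rules out the wrapping case from the commit
// message (a 4-byte store at offset 15 vs. one at offset 16, base align 8).
bool provablyNoAlias(uint64_t Off0, uint64_t Off1, uint64_t Size,
                     uint64_t Align) {
  if (Size == 0 || Align % Size != 0)
    return false;                            // keep the model simple and sound
  if (Off0 % Size != 0 || Off1 % Size != 0)
    return false;                            // may wrap into the next window
  uint64_t W0 = Off0 % Align, W1 = Off1 % Align;
  return W0 + Size <= W1 || W1 + Size <= W0; // disjoint within every window
}
```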

Differential Revision: https://reviews.llvm.org/D75238
2020-02-28 18:44:36 +00:00
Simon Pilgrim 4bc6f63320 [TargetLowering] SimplifyDemandedBits - fix SCALAR_TO_VECTOR knownbits bug
We can only report the knownbits for a SCALAR_TO_VECTOR node if we only demand the 0'th element - the upper elements are undefined and shouldn't be trusted.

This is causing a number of regressions that need addressing but we need to get the bugfix in first.
2020-02-28 15:23:37 +00:00
Jeremy Morse 6af859dcca [DebugInfo] Re-implement LexicalScopes dominance method, add unit tests
Way back in D24994, the combination of LexicalScopes::dominates and
LiveDebugValues was identified as having worst-case quadratic complexity,
but it wasn't triggered by any code path at the time. I've since run into a
scenario where this occurs, in a very large basic block where large numbers
of inlined DBG_VALUEs are present.

The quadratic-ness comes from LiveDebugValues::join calling "dominates" on
every variable location, and LexicalScopes::dominates potentially touching
every instruction in a block to test for the presence of a scope. We have,
however, already computed the presence of scopes in blocks, in the
"InstrRanges" of each scope. This patch switches the dominates method to
examine whether a block is present in a scope's InsnRanges, avoiding
walking through the whole block.

At the same time, fix getMachineBasicBlocks to account for the fact that
InsnRanges can cover multiple blocks, and add some unit tests, as Lexical
Scopes didn't have any.

Differential revision: https://reviews.llvm.org/D73725
2020-02-28 11:41:28 +00:00
Sam Parker bf61421a02 [RDA] Track implicit-defs
Ensure that we're recording implicit defs, as well as visiting implicit
uses and implicit defs when we're walking through operands.

Differential Revision: https://reviews.llvm.org/D75185
2020-02-28 11:14:42 +00:00
serge-sans-paille 6d15c4deab No longer generate calls to *_finite
According to Joseph Myers, a libm maintainer

> They were only ever an ABI (selected by use of -ffinite-math-only or
> options implying it, which resulted in the headers using "asm" to redirect
> calls to some libm functions), not an API. The change means that ABI has
> turned into compat symbols (only available for existing binaries, not for
> anything newly linked, not included in static libm at all, not included in
> shared libm for future glibc ports such as RV32), so, yes, in any case
> where tools generate direct calls to those functions (rather than just
> following the "asm" annotations on function declarations in the headers),
> they need to stop doing so.

As a consequence, we should no longer assume these symbols are available on the
target system.

Still keep the TargetLibraryInfo for constant folding.

Differential Revision: https://reviews.llvm.org/D74712
2020-02-28 10:07:37 +01:00
Vedant Kumar a993720397 [LiveDebugValues] Encode register location within VarLoc IDs [3/3]
This is part 3 of a 3-part series to address a compile-time explosion
issue in LiveDebugValues.

---

Start encoding register locations within VarLoc IDs, and take advantage
of this encoding to speed up transferRegisterDef.

There is no fundamental algorithmic change: this patch simply swaps out
SparseBitVector in favor of CoalescingBitVector. That changes iteration
order (hence the test updates), but otherwise this patch is NFCI.

The only interesting change is in transferRegisterDef. Instead of doing:

```
KillSet = {}
for (ID : OpenRanges.getVarLocs())
  if (DeadRegs.count(ID))
    KillSet.add(ID)
```

We now do:

```
KillSet = {}
for (Reg : DeadRegs)
  for (ID : intervalsReservedForReg(Reg, OpenRanges.getVarLocs()))
    KillSet.add(ID)
```

By not visiting each open location every time we visit an instruction,
this eliminates some potentially quadratic behavior. The new
implementation basically does a constant amount of work per instruction
because the interval map lookups are very fast.

For a file in WebKit, this brings the time spent in LiveDebugValues down
from ~2.5 minutes to 4 seconds, reducing compile time spent in that pass
from 28% of the total to just over 1%.

Before:

```
2.49 min   27.8%	0 s	LiveDebugValues::process
2.41 min   27.0%	5.40 s	LiveDebugValues::transferRegisterDef
1.51 min   16.9%	1.51 min LiveDebugValues::VarLoc::isDescribedByReg() const
32.73 s    6.1%		8.70 s	 llvm::SparseBitVector<128u>::SparseBitVectorIterator::operator++()
```

After:

```
4.53 s	1.1%	0 s	LiveDebugValues::process
3.00 s	0.7%	107.00 ms		LiveDebugValues::transferRegisterCopy
892.00 ms	0.2%	406.00 ms	LiveDebugValues::transferSpillOrRestoreInst
404.00 ms	0.1%	32.00 ms	LiveDebugValues::transferRegisterDef
110.00 ms	0.0%	2.00 ms		  LiveDebugValues::getUsedRegs
57.00 ms	0.0%	1.00 ms		  std::__1::vector<>::push_back
40.00 ms	0.0%	1.00 ms		  llvm::CoalescingBitVector<>::find(unsigned long long)
```

FWIW, I tried the same approach using SparseBitVector, but got bad
results. To do that, I had to extend SparseBitVector to support 64-bit
indices and expose its lower bound operation. The problem with this is
that the performance is very hard to predict: SparseBitVector's lower
bound operation falls back to O(n) linear scans in a std::list if you're
not /very/ careful about managing iteration order. When I profiled this
the performance looked worse than the baseline.

You can see the full CoalescingBitVector-based implementation here:

  https://github.com/vedantk/llvm-project/commits/try-coalescing

You can see the full SparseBitVector-based implementation here:

  https://github.com/vedantk/llvm-project/commits/try-sparsebitvec-find

Depends on D74984 and D74985.

Differential Revision: https://reviews.llvm.org/D74986
2020-02-27 12:39:47 -08:00
Vedant Kumar 210c4853de [LiveDebugValues] Encode a location in VarLoc IDs, NFC [2/3]
This is part 2 of a 3-part series to address a compile-time explosion
issue in LiveDebugValues.

---

Each VarLoc has a unique ID: this ID is used to look up a VarLoc in the
VarLocMap, and to virtually insert a VarLoc into a VarLocSet. Instead of
inserting the VarLoc /itself/ into the VarLocSet, we insert just the ID,
because this can be represented efficiently with a SparseBitVector.

This change introduces LocIndex, a layer of abstraction on top of VarLoc
IDs. Prior to this change, an ID was just an index into a vector. With
this change, an ID encodes both an index /and/ a register location. The
type-checker ensures that conversions to and from LocIndex are correct.

For the moment the register location is always 0 (undef). We have plenty
of bits left over to encode physregs, stack slots, and other locations
in the future.
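
A rough sketch of the encoding idea (field widths and names are illustrative, not necessarily what LiveDebugValues uses):

```
#include <cstdint>

// A single 64-bit ID packs a register location together with an index into
// that location's VarLocs, so IDs for the same location form one contiguous
// numeric range.
struct LocIndexSketch {
  uint32_t Location; // 0 = undef for now; physregs/spill slots come later
  uint32_t Index;    // position within the per-location VarLoc list

  uint64_t getAsRawInteger() const {
    return (uint64_t(Location) << 32) | Index;
  }
  static LocIndexSketch fromRawInteger(uint64_t ID) {
    return {uint32_t(ID >> 32), uint32_t(ID)};
  }
};
```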

Differential Revision: https://reviews.llvm.org/D74985
2020-02-27 12:39:47 -08:00
Sanjay Patel 90fd859f51 [x86] use instruction-level fast-math-flags to drive MachineCombiner
The code changes here are hopefully straightforward:

1. Use MachineInstruction flags to decide if FP ops can be reassociated
   (use both "reassoc" and "nsz" to be consistent with IR transforms;
   we probably don't need "nsz", but that's a safer interpretation of
   the FMF).
2. Check that both nodes allow reassociation to change instructions.
   This is a stronger requirement than we've usually implemented in
   IR/DAG, but this is needed to solve the motivating bug (see below),
   and it seems unlikely to impede optimization at this late stage.
3. Intersect/propagate MachineIR flags to enable further reassociation
   in MachineCombiner.

We managed to make MachineCombiner flexible enough that no changes are
needed to that pass itself. So this patch should only affect x86
(assuming no other targets have implemented the hooks using MachineIR
flags yet).

The motivating example in PR43609 is another case of fast-math transforms
interacting badly with special FP ops created during lowering:
https://bugs.llvm.org/show_bug.cgi?id=43609
The special fadd ops used for converting int to FP assume that they will
not be altered, so those are created without FMF.

However, the MachineCombiner pass was being enabled for FP ops using the
global/function-level TargetOption for "UnsafeFPMath". We managed to run
instruction/node-level FMF all the way down to MachineIR sometime in the
last 1-2 years though, so we can do better now.

The test diffs require some explanation:

1. llvm/test/CodeGen/X86/fmf-flags.ll - no target option for unsafe math was
   specified here, so MachineCombiner kicks in where it did not previously;
   to make it behave consistently, we need to specify a CPU schedule model,
   so use the default model, and there are no code diffs.
2. llvm/test/CodeGen/X86/machine-combiner.ll - replace the target option for
   unsafe math with the equivalent IR-level flags, and there are no code diffs;
   we can't remove the NaN/nsz options because those are still used to drive
   x86 fmin/fmax codegen (special SDAG opcodes).
3. llvm/test/CodeGen/X86/pow.ll - similar to #1
4. llvm/test/CodeGen/X86/sqrt-fastmath.ll - similar to #1, but MachineCombiner
   does some reassociation of the estimate sequence ops; presumably these are
   perf wins based on latency/throughput (and we get some reduction of move
   instructions too); I'm not sure how it affects numerical accuracy, but the
   test reflects reality better now because we would expect MachineCombiner to
   be enabled if the IR was generated via something like "-ffast-math" with clang.
5. llvm/test/CodeGen/X86/vec_int_to_fp.ll - this is the test added to model PR43609;
   the fadds are not reassociated now, so we should get the expected results.
6. llvm/test/CodeGen/X86/vector-reduce-fadd-fast.ll - similar to #1
7. llvm/test/CodeGen/X86/vector-reduce-fmul-fast.ll - similar to #1

Differential Revision: https://reviews.llvm.org/D74851
2020-02-27 15:19:37 -05:00
Djordje Todorovic 016d91ccbd [CallSiteInfo] Handle bundles when updating call site info
This will address the issue: P8198 and P8199 (from D73534).

The methods was not handle bundles properly.

Differential Revision: https://reviews.llvm.org/D74904
2020-02-27 13:57:06 +01:00
David Stenberg 6d857166d2 [DebugInfo] Describe call site values for chains of expression producing instrs
Summary:
If the describeLoadedValue() hook produced a DIExpression when
describing an instruction, and it was not possible to emit a call site
entry directly (the value operand was not an immediate nor a preserved
register), then that described value could not be inserted into the
worklist, and would instead be dropped, meaning that the parameter's
call site value couldn't be described.

This patch extends the worklist so that each entry has an DIExpression
that is built up when iterating through the instructions.

This allows us to describe instruction chains like this:

  $reg0 = mv $fp
  $reg0 = add $reg0, offset
  call @call_with_offseted_fp

Since DW_OP_LLVM_entry_value operations can't be combined with any other
expression, such call site entries will not be emitted. I have added a
test, dbgcall-site-expr-entry-value.mir, which verifies that we don't
assert or emit broken DWARF in such cases.

Reviewers: djtodoro, aprantl, vsk

Reviewed By: djtodoro, vsk

Subscribers: hiraditya, llvm-commits

Tags: #debug-info, #llvm

Differential Revision: https://reviews.llvm.org/D75036
2020-02-27 11:18:51 +01:00
David Stenberg ff574ff291 [DebugInfo][NFC] Move out lambdas from collectCallSiteParameters()
Summary:
This is a preparatory patch for D75036, in which a debug expression is
associated with each parameter register in the worklist. In that patch
the two lambda functions addToWorklist() and finishCallSiteParams() grow
a bit, so move those out to separate functions. This patch also prepares
for each parameter register having its own expression by moving the
creation of the DbgValueLoc into finishCallSiteParams().

Reviewers: djtodoro, vsk

Reviewed By: djtodoro, vsk

Subscribers: hiraditya, llvm-commits

Tags: #debug-info, #llvm

Differential Revision: https://reviews.llvm.org/D75050
2020-02-27 11:18:51 +01:00
Matt Arsenault 6fc0d00823 GlobalISel: Fix lowering for G_UADDE/G_USUBE
The type parameter passed into lower is invalid and should be removed
from the function.
2020-02-26 19:10:52 -08:00
Matt Arsenault c7e8d8b13e GlobalISel: Cleanup code with MachineIRBuilder features 2020-02-26 19:10:34 -08:00
Krzysztof Parzyszek fd7c2e24c1 [SDAG] Add SDNode::values() = make_range(values_begin(), values_end())
Also use it in a few places to simplify code a little bit.  NFC
2020-02-26 12:07:38 -06:00
Sanjay Patel b3d0c79836 [DAGCombiner] avoid narrowing fake fneg vector op
This may inhibit vector narrowing in general, but there's
already an inconsistency in the way that we deal with this
pattern as shown by the test diff.

We may want to add a dedicated function for narrowing fneg.
It's often folded into some other op, so moving it away from
other math ops may cause regressions that we would not see
for normal binops.

See D73978 for more details.
2020-02-26 11:25:56 -05:00
Simon Pilgrim bbb0933e3d [DAG] visitRotate - modulo non-uniform constant rotation amounts 2020-02-26 15:43:12 +00:00
Sam Parker 1d06e75df2 [ARM][RDA] add getUniqueReachingMIDef
Add getUniqueReachingMIDef to RDA which performs a global search for
a machine instruction that produces a unique definition of a given
register at a given point. Also add two helper functions
(getMIOperand) that wrap around this functionality to get the
incoming definition uses of a given instruction. These now replace
the uses of getReachingMIDef in ARMLowOverheadLoops. getReachingMIDef
has been renamed to getReachingLocalMIDef and has been made private
along with getInstFromId.

Differential Revision: https://reviews.llvm.org/D74605
2020-02-26 11:15:26 +00:00
Fangrui Song b61a4aaca5 [MC] Default MCContext::UseNamesOnTempLabels to false and only set it to true for MCAsmStreamer
Only MCAsmStreamer (assembly output) needs to keep names of temporary labels created by
MCContext::createTempSymbol().

This change made the rL236642 optimization available for cc1as and
probably some other users.

This eliminates a behavior difference between llvm-mc -filetype=obj and cc1as, which caused
https://reviews.llvm.org/D74006#1890487

Reviewed By: efriedma

Differential Revision: https://reviews.llvm.org/D75097
2020-02-25 18:23:10 -08:00
Craig Topper 735d27dc40 [SelectionDAG][PowerPC][AArch64][X86][ARM] Add chain input and output the ISD::FLT_ROUNDS_
This node reads the rounding control which means it needs to be ordered properly with operations that change the rounding control. So it needs to be chained to maintain order.

This patch adds a chain input and output to the node and connects it to the chain in SelectionDAGBuilder. I've updated all in-tree targets to connect their chain through their lowering code.

Differential Revision: https://reviews.llvm.org/D75132
2020-02-25 16:58:23 -08:00
Quentin Colombet 5bf0023b0d [GISel][KnownBits] Update a comment regarding the effect of cache on PHIs
Unlike what I claimed in my previous commit, the caching is
actually not NFC on PHIs.

When we put a big enough max depth, we end up simulating loops.
The cache is effectively cutting the simulation short and we
get less information as a result.
E.g.,
```
v0 = G_CONSTANT i8 0xC0
jump
v1 = G_PHI i8 v0, v2
v2 = G_LSHR i8 v1, 1
```

Let's say we want the known bits of v1.
- With cache:
Set v1 cache to we know nothing
v1 is v0 & v2
v0 gives us 0xC0
v2 gives us known bits of v1 >> 1
v1 is in the cache
=> v1 is 0, thus v2 is 0x80
Finally v1 is v0 & v2 => 0x80

- Without cache and enough depth to do two iteration of the loop:
v1 is v0 & v2
v0 gives us 0xC0
v2 gives us known bits of v1 >> 1
v1 is v0 & v2
v0 is 0xC0
v2 is v1 >> 1
Reach the max depth for v1...
unwinding
v1 is know nothing
v2 is 0x80
v0 is 0xC0
v1 is 0x80
v2 is 0xC0
v0 is 0xC0
v1 is 0xC0

Thus now v1 is 0xC0 instead of 0x80.

I've added a unittest demonstrating that.

NFC
2020-02-25 15:56:15 -08:00
Scott Linder 915b4aa139 Support emitting .cfi_undefined in CodeGen
This will be used by AMDGPU.

Differential Revision: https://reviews.llvm.org/D74914
2020-02-25 14:00:01 -05:00
Quentin Colombet a12f1d6a52 [MachineInstr] Add a dumpr method
Add a dump method that recursively prints an instruction and all
the instructions defining its operands and so on.

This is helpful when looking at combiner issues.

NFC

Differential Revision: https://reviews.llvm.org/D75094
2020-02-25 10:46:29 -08:00
Roman Lebedev d20907d1de
[Codegen] Revert rL354676/rL354677 and followups - introduced PR43446 miscompile
This reverts https://reviews.llvm.org/D58468
(rL354676, 44037d7a63),
and all and any follow-ups to that code block.

https://bugs.llvm.org/show_bug.cgi?id=43446
2020-02-25 20:30:12 +03:00
Jay Foad ccee390767 GlobalISel: NFC minor cleanup to avoid a couple of fixed size local arrays 2020-02-25 09:49:19 +00:00
Roman Tereshin b3bce6a3dd [MachineVerifier] Doing ::calcRegsPassed over faster sets: ~15-20% faster MV, NFC
MachineVerifier still takes 45-50% of total compile time with
-verify-machineinstrs, with calcRegsPassed dataflow taking ~50-60% of
MachineVerifier.

The majority of that time is spent in BBInfo::addPassed, mostly within
DenseSet implementing the sets the dataflow is operating over.

In particular, 1/4 of that DenseSet time is spent just iterating over it
(operator++), 40-50% on insertions, and most of the rest in ::count.

Given that, we're implementing custom sets just for this analysis here,
focusing on cheap insertions and O(n) iteration time (as opposed to
O(U), where U is the universe).

As it's based _mostly_ on BitVector for sparse and SmallVector for
dense, it may remotely resemble SparseSet. The difference is, our
solution is a lot less clever, doesn't have constant time `clear` that
we won't use anyway as reusing these sets across analyses is cumbersome,
and thus more space efficient and safer (got a resizable Universe and a
fallback to DenseSet for sparse if it gets too big).
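
A sketch of the kind of set described above (illustrative, not the actual MachineVerifier code): a BitVector answers membership queries while a SmallVector preserves insertion order, so iteration is O(n) in the number of inserted elements rather than O(U):

```
#include "llvm/ADT/BitVector.h"
#include "llvm/ADT/SmallVector.h"

// Cheap insert/count, O(n) iteration; no cheap clear(), which this analysis
// does not need anyway.
class RegSetSketch {
  llvm::BitVector Present;               // sparse membership test
  llvm::SmallVector<unsigned, 16> Elems; // dense, in insertion order
public:
  explicit RegSetSketch(unsigned Universe) : Present(Universe) {}
  bool insert(unsigned Reg) {
    if (Present.test(Reg))
      return false;
    Present.set(Reg);
    Elems.push_back(Reg);
    return true;
  }
  bool count(unsigned Reg) const { return Present.test(Reg); }
  auto begin() const { return Elems.begin(); }
  auto end() const { return Elems.end(); }
};
```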

With this patch MachineVerifier gets ~15-20% faster, its contribution to
total compile time drops from 45-50% to ~35%, while contribution of
calcRegsPassed to MachineVerifier drops from 50-60% to ~35% as well.

calcRegsPassed itself gets another 2x faster here.

All measured on a large suite of shaders targeting a number of GPUs.

Reviewers: bogner, stoklund, rudkx, qcolombet

Reviewed By: rudkx

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D75033
2020-02-24 19:01:21 -08:00
Bill Wendling 23c2a5ce33 Allow "callbr" to return non-void values
Summary:
Terminators in LLVM aren't prohibited from returning values. This means that
the "callbr" instruction, which is used for "asm goto", can support "asm goto
with outputs."

This patch removes all restrictions against "callbr" returning values. The
heavy lifting is done by the code generator. The "INLINEASM_BR" instruction's
a terminator, and the code generator doesn't allow non-terminator instructions
after a terminator. In order to correctly model the feature, we need to copy
outputs from "INLINEASM_BR" into virtual registers. Of course, those copies
aren't terminators.

To get around this issue, we split the block containing the "INLINEASM_BR"
right before the "COPY" instructions. This results in two cheats:

  - Any physical registers defined by "INLINEASM_BR" need to be marked as
    live-in into the block with the "COPY" instructions. This violates an
    assumption that physical registers aren't marked as "live-in" until after
    register allocation. But it seems as if the live-in information only
    needs to be correct after register allocation. So we're able to get away
    with this.

  - The indirect branches from the "INLINEASM_BR" are moved to the "COPY"
    block. This is to satisfy PHI nodes.

I've been told that MLIR can support this handily, but until we're able to
use it, we'll have to stick with the above.

Reviewers: jyknight, nickdesaulniers, hfinkel, MaskRay, lattner

Reviewed By: nickdesaulniers, MaskRay, lattner

Subscribers: rriddle, qcolombet, jdoerfert, MatzeB, echristo, MaskRay, xbolva00, aaron.ballman, cfe-commits, JonChesterfield, hiraditya, llvm-commits, rnk, craig.topper

Tags: #llvm, #clang

Differential Revision: https://reviews.llvm.org/D69868
2020-02-24 18:29:06 -08:00
Matt Arsenault 11e3dde625 GlobalISel: Reimplement fewerElementsVectorBasic
Changes the handling of odd breakdowns, and avoids using
G_EXTRACT/G_INSERT. Pad with undef to a wider size, and unmerge. Also
avoid introducing instructions for the fully undef components.
2020-02-24 21:19:47 -05:00
Craig Topper a5fa778882 [LegalizeTypes] Scalarize non-byte sized loads in WidenRecRes_Load and SplitVecResLoad
Should fix PR42803 and PR44902

Differential Revision: https://reviews.llvm.org/D74590
2020-02-24 15:14:33 -08:00
Roman Tereshin 6f87b162e6 [MachineVerifier] Doing ::calcRegsPassed in RPO: ~35% faster MV, NFC
Depending on the target, test suite, pipeline config, and perhaps other
factors, the machine verifier, when forced on with -verify-machineinstrs, can
increase compile time by 2-2.5x (Release, Asserts On), taking up
~60% of the time. An invaluable tool, it significantly slows down
machine verifier-enabled testing.

MachineVerifier spends nearly 75% of its time in the calcRegsPassed
method. It's a classic forward dataflow analysis executed over sets, but
it visits MBBs in arbitrary order. We switch that to reverse post-order
(RPO) here.
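
As a hedged sketch (not the code from this patch), a reverse post-order walk
over a MachineFunction looks roughly like this; visiting most predecessors
before their successors lets the dataflow fixpoint converge in fewer passes:

```
#include "llvm/ADT/PostOrderIterator.h"
#include "llvm/CodeGen/MachineFunction.h"

void visitInRPO(llvm::MachineFunction &MF) {
  llvm::ReversePostOrderTraversal<llvm::MachineFunction *> RPOT(&MF);
  for (llvm::MachineBasicBlock *MBB : RPOT) {
    // Merge the sets flowing in from MBB's predecessors, then propagate
    // the result towards its successors.
    (void)MBB;
  }
}
```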

This speeds up MachineVerifier by about 35%, decreasing the overall
compile time with -verify-machineinstrs by 20-25% or so.

calcRegsPassed itself gets 2x faster here.

All measured on a large suite of shaders targeting a number of GPUs.

Reviewers: bogner, stoklund, rudkx, qcolombet

Reviewed By: bogner

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D75032
2020-02-24 13:30:01 -08:00
Simon Pilgrim 53b597cfa2 [SelectionDAG] Merge constant SDNode arithmetic into foldConstantArithmetic
This is the second patch as part of https://bugs.llvm.org/show_bug.cgi?id=36544

Merging in the ConstantSDNode variant of FoldConstantArithmetic. After this, I will begin merging in FoldConstantVectorArithmetic

I've ensured this patch can build & pass all lit tests in Windows and Linux environments.

Patch by @justice_adams (Justice Adams)

Differential Revision: https://reviews.llvm.org/D74881
2020-02-24 18:54:22 +00:00
Sjoerd Meijer 7efabe5c7d [MIR][ARM] MachineOperand comments
This adds infrastructure to print and parse MIR MachineOperand comments.
The motivation for the ARM backend is to print condition code names instead of
magic constants that are difficult to read (for human beings). For example,
instead of this:

  dead renamable $r2, $cpsr = tEOR killed renamable $r2, renamable $r1, 14, $noreg
  t2Bcc %bb.4, 0, killed $cpsr

we now print this:

  dead renamable $r2, $cpsr = tEOR killed renamable $r2, renamable $r1, 14 /* CC::always */, $noreg
  t2Bcc %bb.4, 0 /* CC::eq */, killed $cpsr

This shows that MachineOperand comments are enclosed between /* and */. In this
example, the EOR instruction is not conditionally executed (i.e. it is "always
executed"), which is encoded by the 14 immediate machine operand. Thus, now
this machine operand has /* CC::always */ as a comment. The 0 on the next
conditional branch instruction represents the equal condition code, thus now
this operand has /* CC::eq */ as a comment.

As it is a comment, the MI lexer/parser completely ignores it. The benefit is
that this keeps the change in the lexer extremely minimal, and no
target-specific parsing needs to be done. The changes on the MIPrinter
side are also minimal, as there is only one target hook that is used to
create the machine operand comments.

Differential Revision: https://reviews.llvm.org/D74306
2020-02-24 14:19:21 +00:00
Sam Parker a67eb221e2 [RDA][ARM][LowOverheadLoops] Iteration count IT blocks
Change the way that we remove the redundant iteration count code in
the presence of IT blocks. collectLocalKilledOperands has been
introduced to scan an instruction's operands, collecting the killed
instructions and then visiting them too. This is used to delete the
code in the preheader which calculates the iteration count. We also
track any IT blocks within the preheader and, if we remove all the
instructions from the IT block, we also remove the IT instruction.
isSafeToRemove is used to remove any redundant uses of the iteration
count within the loop body.

Differential Revision: https://reviews.llvm.org/D74975
2020-02-24 13:51:03 +00:00
Bevin Hansson 6e561d1c94 [Intrinsic] Add fixed point saturating division intrinsics.
Summary:
This patch adds intrinsics and ISelDAG nodes for signed
and unsigned fixed-point division:

```
llvm.sdiv.fix.sat.*
llvm.udiv.fix.sat.*
```

These intrinsics perform scaled, saturating division
on two integers or vectors of integers. They are
required for the implementation of the Embedded-C
fixed-point arithmetic in Clang.

Reviewers: bjope, leonardchan, craig.topper

Subscribers: hiraditya, jdoerfert, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D71550
2020-02-24 10:50:52 +01:00
Bevin Hansson c3f36acc92 [MC] Widen the functional unit type from 32 to 64 bits.
Summary:
The type used to represent functional units in MC is
'unsigned', which is 32 bits wide. This is currently
not a problem in any upstream target as no one seems
to have hit the limit on this yet, but in our
downstream one, we need to define more than 32
functional units.

Increasing the size does not seem to cause a huge
size increase in the binary (an llc debug build went
from 1366497672 to 1366523984, a difference of 26k),
so perhaps it would be acceptable to have this patch
applied upstream as well.

Subscribers: hiraditya, jsji, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D71210
2020-02-24 09:37:00 +01:00
Craig Topper 3a6bb32bd2 [SelectionDAG] Remove ISD::LIFETIME_START/LIFETIME_END from assert in getMemIntrinsicNode.
These appear to have their own SDNode type and shouldn't use
MemIntrinsicSDNode.
2020-02-23 22:32:36 -08:00
Florian Hahn 7769030b93 Recommit "[PatternMatch] Match XOR variant of unsigned-add overflow check."
This version fixes a buildbot failure caused by picking the wrong insert
point for XORs. We cannot pick the XOR binary operator as insert point,
as it is not guaranteed that both input operands for the overflow
intrinsic are defined before it.

This reverts the revert commit
c7fc0e5da6.
2020-02-23 18:33:18 +00:00
Sanjay Patel a253a2a793 [SDAG] fold fsub -0.0, undef to undef rather than NaN
A question about this behavior came up on llvm-dev:
http://lists.llvm.org/pipermail/llvm-dev/2020-February/139003.html
...and as part of backend improvements in D73978.

We decided not to implement a more general change that would have
folded any FP binop with nearly arbitrary constant + undef operand
to undef because that is not theoretically correct (even if it is
practically correct).

This is the SDAG-equivalent to the IR change in D74713.
2020-02-23 11:36:53 -05:00
Quentin Colombet b6d63c92ec [GISel][KnownBits] Suppress unused warning on the dump method
NFC
2020-02-21 21:07:04 -08:00
Quentin Colombet 618dec2aef [GISel][KnownBits] Add a cache mechanism to speed compile time
This patch adds a cache that is valid only for the duration of a call
to getKnownBits. With such a short-lived cache we avoid all the problems
of cache invalidation while still getting the benefits of reusing
the information we already computed.

This cache is useful whenever an instruction occurs more than once
in a chain of computation.
E.g.,
v0 = G_ADD v1, v2
v3 = G_ADD v0, v1

Previously we would compute the known bits for:
v1, v2, v0, then v1 again and finally v3.

With the patch, now we won't have to recompute v1 again.
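
A hedged sketch of the idea (the names and structure are assumptions, not the
exact code of this patch): the cache lives inside a single query and is
dropped afterwards, so there is nothing to invalidate.

```
#include "llvm/ADT/DenseMap.h"
#include "llvm/CodeGen/Register.h"
#include "llvm/Support/KnownBits.h"

struct KnownBitsQuerySketch {
  // Lives only for the duration of one top-level getKnownBits() call.
  llvm::DenseMap<llvm::Register, llvm::KnownBits> Cache;

  llvm::KnownBits compute(llvm::Register R, unsigned Width) {
    auto It = Cache.find(R);
    if (It != Cache.end())
      return It->second;            // e.g. v1 is only computed once per query
    llvm::KnownBits Known(Width);
    // ... recurse into the instruction defining R and its operands ...
    Cache[R] = Known;
    return Known;
  }
};
```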

NFC
2020-02-21 14:31:42 -08:00
Francesco Petrogalli 31ec721516 [llvm][CodeGen] DAG Combiner folds for vscale.
Summary:
This patch simplifies the DAGs generated when using the intrinsic `@llvm.vscale.*` as follows:

* Fold (add (vscale * C0), (vscale * C1)) to (vscale * (C0 + C1)).
* Canonicalize (sub X, (vscale * C)) to (add X,  (vscale * -C)).
* Fold (mul (vscale * C0), C1) to (vscale * (C0 * C1)).
* Fold (shl (vscale * C0), C1) to (vscale * (C0 << C1)).

The test `sve-gep.ll` has been updated to reflect the folding introduced by this patch.
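
A hedged, DAGCombiner-style sketch of the first fold above (illustrative only;
the helper name and surrounding plumbing are assumptions):

```
#include "llvm/CodeGen/SelectionDAG.h"
using namespace llvm;

// (add (vscale * C0), (vscale * C1)) --> (vscale * (C0 + C1))
static SDValue foldAddOfVScale(SDValue N0, SDValue N1, SelectionDAG &DAG,
                               const SDLoc &DL, EVT VT) {
  if (N0.getOpcode() == ISD::VSCALE && N1.getOpcode() == ISD::VSCALE) {
    const APInt &C0 = N0->getConstantOperandAPInt(0);
    const APInt &C1 = N1->getConstantOperandAPInt(0);
    return DAG.getVScale(DL, VT, C0 + C1);
  }
  return SDValue(); // no fold
}
```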

Reviewers: efriedma, sdesmalen, andwar, rengolin

Reviewed By: sdesmalen

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D74782
2020-02-21 18:03:12 +00:00
Hiroshi Yamauchi 0e3e242209 [BFI] Fix missed BFI updates in MachineSink.
Summary:
This prevents BFI queries on new blocks (from
MachineSinking::GetAllSortedSuccessors) and fixes a bunch of assert failures
under -check-bfi-unknown-block-queries=true.

Reviewers: davidxl

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D74511
2020-02-21 09:50:54 -08:00
Nikita Popov a8db806d52 [SimplifyLibCalls][IRBuilder] Accept any IRBuilder in SimplifyLibCalls
This changes the SimplifyLibCalls utility to accept an IRBuilderBase,
which allows us to pass through the IRBuilder used by InstCombine.
This will ensure that new instructions get added to the worklist.
The annotated test-case drops from 4 to 2 InstCombine iterations thanks
to this.

To achieve this, I'm adding an IRBuilderBase::OperandBundlesGuard,
which is basically the same as the existing InsertPointGuard and
FastMathFlagsGuard, but for operand bundles. Also add a
setDefaultOperandBundles() method so these can be set outside the
constructor.

Differential Revision: https://reviews.llvm.org/D74792
2020-02-21 18:26:05 +01:00
Jay Foad cab39e4b8c GlobalISel: Fix narrowing of (G_ASHR i64:x, 32)
Reviewers: arsenm

Subscribers: jvesely, wdng, nhaehnle, rovka, hiraditya, volkan, kerbowa, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D74950
2020-02-21 16:51:03 +00:00
Simon Pilgrim 42ec6fdce9 [TargetLowering] Apply basic shift combines before recursive SimplifyDemandedBits calls.
Minor refactor/cleanup before we begin adding non-uniform support.
2020-02-21 16:31:20 +00:00
Simon Pilgrim 86c52af05a [TargetLowering] SimplifyDemandedBits - use getValidShiftAmountConstant helper.
Use the SelectionDAG::getValidShiftAmountConstant helper to get const/constsplat shift amounts, which allows us to drop the out of range shift amount early-out.

First step towards better non-uniform shift amount support in SimplifyDemandedBits.
2020-02-21 14:23:53 +00:00
Sam Clegg df74033ec9 [WebAssembly] Remove unneeded getWasmKindForNamedSection function
I believe this was carried over from getELFKindForNamedSection since
the wasm backend originally used ELF object writing as a template.

Differential Revision: https://reviews.llvm.org/D74565
2020-02-20 22:49:08 -08:00
Eli Friedman c767cf24e4 [SVE] Add support for lowering GEPs involving scalable vectors.
This includes both GEPs where the indexed type is a scalable vector, and
GEPs where the result type is a scalable vector.

Differential Revision: https://reviews.llvm.org/D73602
2020-02-20 13:45:41 -08:00
Quentin Colombet e4a9225f5d [GISel][KnownBits] Give up on PHI analysis as soon as we don't know anything
When analyzing PHIs, we gather the known bits for every operand and
merge them together to get the known bits of the result of the PHI.
It is not unusual that merging the information leads to knowing nothing
about the result (e.g., for phi a: i8 3, b: i8 unknown, ..., after looking at
the second argument we already know we will know nothing about the result),
so as soon as we reach that state we stop analyzing the remaining operands
(i.e., in the previous example, we won't process anything after looking at `b`).

This improves compile time in particular with PHIs with a large number
of operands.
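
A hedged sketch of the early exit (not the patch itself; the helper is
hypothetical): intersect operand info and stop once nothing is known.

```
#include "llvm/ADT/ArrayRef.h"
#include "llvm/Support/KnownBits.h"

llvm::KnownBits mergePhiOperands(llvm::ArrayRef<llvm::KnownBits> Ops) {
  llvm::KnownBits Known = Ops.front();
  for (const llvm::KnownBits &Op : Ops.drop_front()) {
    Known.Zero &= Op.Zero;
    Known.One &= Op.One;
    if (Known.isUnknown())
      break; // after "b: i8 unknown" there is nothing left to learn
  }
  return Known;
}
```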

NFC.
2020-02-20 11:34:01 -08:00
Simon Pilgrim f9c326364e [DAGCombiner] Use SDValue::getConstantOperandAPInt helper where possible. NFC. 2020-02-20 18:23:05 +00:00
Simon Pilgrim fc2b4a02b1 [DAGCombine] visitEXTRACT_VECTOR_ELT - add SimplifyDemandedBits multi use support
Similar to what we already do with SimplifyDemandedVectorElts, call SimplifyDemandedBits across all the extracted elements of the source vector, treating it as single use.

There's a minor regression in store-weird-sizes.ll which will be addressed in an upcoming SimplifyDemandedBits patch.
2020-02-20 15:49:38 +00:00
Sam Parker 659500c0c9 [NFC][RDA] Break-up initialization code
Separate out the initialization code from the loop traversal so
that the analysis can be reset and re-run by a user.
2020-02-20 14:59:42 +00:00
Djordje Todorovic 2f215cf36a Revert "Reland "[DebugInfo] Enable the debug entry values feature by default""
This reverts commit rGfaff707db82d.
A failure was found on an ARM 2-stage buildbot.
Investigation is needed.
2020-02-20 14:41:39 +01:00
Bill Wendling 129c911efa Include static prof data when collecting loop BBs
Summary:
If the programmer adds static profile data to a branch---i.e. uses
"__builtin_expect()" or similar---then we should honor it. Otherwise,
"__builtin_expect()" is ignored in crucial situations. So we trust that
the programmer knows what they're doing until proven wrong.
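
For illustration (a made-up function, not from this patch), this is the kind
of hint that should now be honored when collecting loop blocks:

```
int process(int err) {
  if (__builtin_expect(err != 0, 0)) // programmer says: err is almost never set
    return -1;                       // cold path
  return 0;                          // hot path
}
```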

Subscribers: hiraditya, JDevlieghere, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D74809
2020-02-19 11:33:48 -08:00
Florian Hahn c7fc0e5da6 Revert "[PatternMatch] Match XOR variant of unsigned-add overflow check."
This reverts commit e01a3d49c2.
and commit a6a585b803.

This causes a failure on GreenDragon:
http://lab.llvm.org:8080/green/view/LLDB/job/lldb-cmake/9597
2020-02-19 19:37:08 +01:00
Florian Hahn e01a3d49c2 [PatternMatch] Match XOR variant of unsigned-add overflow check.
Instcombine folds (a + b <u a) to (a ^ -1 <u b) and that does not match
the expected pattern in CodeGenPrepare via UAddWithOverflow.

This causes a regression over Clang 7 on both X86 and AArch64:
https://gcc.godbolt.org/z/juhXYV

This patch extends UAddWithOverflow to also catch the XOR case, if the
XOR is only used in the ICMP. This covers just a single case, but I'd
like to make sure I am not missing anything before tackling the other
cases.
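
A hedged PatternMatch-style sketch of recognizing the XOR form (the function
is hypothetical, not the exact code of this patch):

```
#include "llvm/IR/Instructions.h"
#include "llvm/IR/PatternMatch.h"

// Matches (a ^ -1) <u b, with the xor used only by the compare.
static bool isXorFormOfUAddOverflow(llvm::ICmpInst *Cmp, llvm::Value *&A,
                                    llvm::Value *&B) {
  using namespace llvm::PatternMatch;
  llvm::ICmpInst::Predicate Pred;
  return match(Cmp, m_ICmp(Pred, m_OneUse(m_Xor(m_Value(A), m_AllOnes())),
                           m_Value(B))) &&
         Pred == llvm::ICmpInst::ICMP_ULT;
}
```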

Reviewers: nikic, RKSimon, lebedev.ri, spatel

Reviewed By: nikic, lebedev.ri

Differential Revision: https://reviews.llvm.org/D74228
2020-02-19 15:25:18 +01:00
Florian Hahn 216afd3301 [TargetLower] Update shouldFormOverflowOp check if math is used.
On some targets, like SPARC, forming overflow ops is only profitable if
the math result is used: https://godbolt.org/z/DxSmdB
This patch adds a new MathUsed parameter to allow the targets
to make the decision and defaults to only allowing it
if the math result is used. That is the conservative choice.

This patch also updates AArch64ISelLowering, X86ISelLowering,
ARMISelLowering.h, SystemZISelLowering.h to allow forming overflow
ops if the math result is not used. On those targets using the
overflow intrinsic for the overflow check only generates better code.
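
A hedged sketch of the shape of the updated hook (names approximate, not the
exact LLVM signature):

```
#include "llvm/CodeGen/ValueTypes.h"

struct OverflowOpPolicySketch {
  // MathUsed is the new parameter: is the math result of a+b consumed,
  // or only the overflow bit?
  virtual bool shouldFormOverflowOp(unsigned Opcode, llvm::EVT VT,
                                    bool MathUsed) const {
    return MathUsed; // conservative default: only if the math result is used
  }
  virtual ~OverflowOpPolicySketch() = default;
};
```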

Reviewers: nikic, RKSimon, lebedev.ri, spatel

Reviewed By: lebedev.ri

Differential Revision: https://reviews.llvm.org/D74722
2020-02-19 11:28:33 +01:00
Djordje Todorovic faff707db8 Reland "[DebugInfo] Enable the debug entry values feature by default"
Differential Revision: https://reviews.llvm.org/D73534
2020-02-19 11:12:26 +01:00
Aditya Nandakumar b91d9ec0bb [GlobalISel]: Fix some non determinism exposed in CSE due to not notifying observers about mutations + add verification for CSE
https://reviews.llvm.org/D67133

While investigating some non-determinism (CSE doesn't produce wrong
code, it just doesn't CSE sometimes) in GISel CSE on an out-of-tree
target, I realized that the core issue was that there was a lot of code
that mutates instructions (setReg, setRegClass, etc.) but doesn't notify
observers (CSE in this case, but this could be any other observer). To
make the observer available in various parts of the code, and to avoid
having to thread it through various APIs, the MachineFunction now holds
the observer as a field. This allows it to be easily used in helper
functions such as constrainOperandRegClass.
Also added an invariant verification method in CSEInfo which can
catch these issues (when CSE is enabled).
2020-02-18 14:54:57 -08:00
Thomas Lively 9d37f5afac [WebAssembly] Implement multivalue call_indirects
Summary:
Unlike normal calls, call_indirects have immediate arguments that would
cause a MachineVerifier failure without a small tweak to loosen the
verifier's requirements for variadicOpsAreDefs instructions.

One nice thing about the new call_indirects is that they do not need
to participate in the PCALL_INDIRECT mechanism because their post-isel
hook handles moving the function pointer argument and adding the flags
and typeindex arguments itself.

Reviewers: aheejin

Subscribers: dschuff, sbc100, jgravelle-google, hiraditya, sunfish, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D74191
2020-02-18 13:49:46 -08:00
Thomas Lively 7b64a59060 Reland "[WebAssembly][InstrEmitter] Foundation for multivalue call lowering"
This reverts commit 649aba93a2, now that
the approach started there has been shown to be workable in the patch
series culminating in https://reviews.llvm.org/D74192.
2020-02-18 13:49:46 -08:00
Simon Pilgrim d6eef0614f [TargetLowering] Add SimplifyMultipleUseDemandedBits 'all elements' helper wrapper. NFC. 2020-02-18 19:53:50 +00:00
Huihui Zhang 8ee0e1dc02 [NFC] Silence compiler warning [-Wmissing-braces]. 2020-02-18 10:37:12 -08:00
Sander de Smalen 8fbc925807 Add OffsetIsScalable to getMemOperandWithOffset
Summary:
Making `Scale` a `TypeSize` in AArch64InstrInfo::getMemOpInfo
has the effect that all places where this information is used
(notably, TargetInstrInfo::getMemOperandWithOffset) will need
to consider that Scale - and, derived from it, Offset - may be scalable.

This patch adds a new operand `bool &OffsetIsScalable` to
TargetInstrInfo::getMemOperandWithOffset and fixes up all
the places where this function is used, to consider the
offset possibly being scalable.

In most cases, this means bailing out because the algorithm does not
(or cannot) support scalable offsets in places where it does some
form of alias checking for example.
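
A hedged sketch of a typical caller after this change (the wrapper function
is hypothetical):

```
#include "llvm/CodeGen/TargetInstrInfo.h"

// Callers that do offset arithmetic bail out when the offset is scaled
// by vscale, since a compile-time comparison is no longer meaningful.
bool canUseOffset(const llvm::TargetInstrInfo &TII,
                  const llvm::MachineInstr &MI,
                  const llvm::TargetRegisterInfo *TRI) {
  const llvm::MachineOperand *BaseOp = nullptr;
  int64_t Offset = 0;
  bool OffsetIsScalable = false;
  if (!TII.getMemOperandWithOffset(MI, BaseOp, Offset, OffsetIsScalable, TRI))
    return false;
  return !OffsetIsScalable;
}
```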

Reviewers: rovka, efriedma, kristof.beyls

Reviewed By: efriedma

Subscribers: wuzish, kerbowa, MatzeB, arsenm, nemanjai, jvesely, nhaehnle, hiraditya, kbarton, javed.absar, asb, rbar, johnrusso, simoncook, sabuasal, niosHD, jrtc27, MaskRay, zzheng, edward-jones, rogfer01, MartinMosbeck, brucehoult, the_o, PkmX, jocewei, jsji, Jim, lenary, s.egerton, pzheng, sameer.abuasal, apazos, luismarques, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D72758
2020-02-18 15:53:29 +00:00
Djordje Todorovic 2bf44d11cb Revert "Reland "[DebugInfo] Enable the debug entry values feature by default""
This reverts commit rGa82d3e8a6e67.
2020-02-18 16:38:11 +01:00
Djordje Todorovic a82d3e8a6e Reland "[DebugInfo] Enable the debug entry values feature by default"
This patch enables the debug entry values feature.

  - Remove the (CC1) experimental -femit-debug-entry-values option
  - Enable it for x86, arm and aarch64 targets
  - Resolve the test failures
  - Leave the llc experimental option for targets that do not
    support the CallSiteInfo yet

Differential Revision: https://reviews.llvm.org/D73534
2020-02-18 14:41:08 +01:00
James Clarke b3cd44f80b Use SETNE directly rather than SUB/SETNE 0 for stack guard check
Summary:
Backends should fold the subtraction into the comparison, but not all
seem to. Moreover, on targets where pointers are not integers, such as
CHERI, an integer subtraction is not appropriate. Instead we should just
compare the two pointers directly, as this should work everywhere and
potentially generate more efficient code.

Reviewers: bogner, lebedev.ri, efriedma, t.p.northover, uweigand, sunfish

Reviewed By: lebedev.ri

Subscribers: dschuff, sbc100, arichardson, jgravelle-google, hiraditya, aheejin, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D74454
2020-02-18 13:21:26 +00:00
Djordje Todorovic a5ac8ca3e0 [CSInfo][TailDuplicator] Delete the call site info when removing dead MBBs
This is needed for the debug entry values feature.

Differential Revision: https://reviews.llvm.org/D74702
2020-02-18 12:29:51 +01:00
Jim Lin 466f8843f5 [NFC] Remove trailing space
sed -Ei 's/[[:space:]]+$//' include/**/*.{def,h,td} lib/**/*.{cpp,h,td}
2020-02-18 10:49:13 +08:00
Vedant Kumar 3f148eabe0 [LiveDebugValues] Visit open var locs just once in transferRegisterDef, NFC
For a file in WebKit, this brings the time spent in LiveDebugValues down
from 16 minutes to 2 minutes. The reduction comes from iterating the set
of open variable locations just once in transferRegisterDef. Post-patch,
the most expensive item inside of transferRegisterDef is a call to
VarLoc::isDescribedByReg, which we have to do.

Testing: I built LNT using the Os-g cmake cache with & without this
patch, then diffed the object files to verify there was no binary diff.

rdar://59446577

Differential Revision: https://reviews.llvm.org/D74633
2020-02-17 14:04:22 -08:00
Matt Arsenault 0e2eb357e0 GlobalISel: Extend narrowing to G_ASHR 2020-02-17 10:42:59 -08:00
Matt Arsenault 8550859535 GlobalISel: Extend shift narrowing to G_SHL 2020-02-17 09:13:37 -08:00
Benjamin Kramer 564a9de28e Hide implementation details. NFC. 2020-02-17 17:55:23 +01:00
Simon Pilgrim a1585aec6f [SelectionDAG] Make the "getValidShiftAmount" helpers available. NFCI.
These are going to be useful in TargetLowering::SimplifyDemandedBits, so expose these helpers outside of SelectionDAG.cpp

Also add an getValidShiftAmountConstant early-out to getValidMinimumShiftAmountConstant/getValidMaximumShiftAmountConstant so we can use them for scalar cases as well.
2020-02-17 16:28:46 +00:00
Matt Arsenault 78d455adf0 GlobalISel: Add combine to narrow G_LSHR
Produce an unmerge to a narrower type and introduce a narrower shift
if needed. I wasn't sure if there was a better way to parameterize the
target's preferred shift type for the GICombineRule, so manually call
the combine helper.
2020-02-17 08:04:52 -08:00
Sander de Smalen a7a96c726e [AArch64] Implement passing SVE vectors by ref for AAPCS.
Summary:
This patch implements the part of the calling convention
where SVE Vectors are passed by reference. This means the
caller must allocate stack space for these objects and
pass the address to the callee.

Reviewers: efriedma, rovka, cameron.mcinally, c-rhodes, rengolin

Reviewed By: efriedma

Subscribers: tschuett, kristof.beyls, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D71216
2020-02-17 15:20:28 +00:00
Sjoerd Meijer dad5f00e3b [DAGCombine] Combine pattern for REV16
This adds another pattern to the combiner for a case that we were not
handling, to generate the REV16 instruction for ARM/Thumb2 and a bswap+ror
on X86.
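
For reference, REV16 byte-swaps each 16-bit halfword; a source-level pattern
of this general shape (illustrative only, not necessarily the exact case
handled here) should now lower to it:

```
#include <cstdint>

uint32_t rev16(uint32_t x) {
  // Swap the two bytes inside each 16-bit halfword.
  return ((x & 0x00FF00FFu) << 8) | ((x & 0xFF00FF00u) >> 8);
}
```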

Differential Revision: https://reviews.llvm.org/D74032
2020-02-17 14:54:17 +00:00
Benjamin Kramer 5fc5c7db38 Strength reduce vectors into arrays. NFCI. 2020-02-17 15:37:35 +01:00
Fangrui Song 549b436beb [MC] De-capitalize MCStreamer::Emit{Bundle,Addrsig}* etc
So far, all non-COFF-related Emit* functions have been de-capitalized.
2020-02-15 09:11:48 -08:00
Simon Pilgrim ce2b5f1569 Fix gcc9.2 -Winit-list-lifetime warning. NFCI.
Reported by @lbenes (Luke Benes)
2020-02-15 16:48:51 +00:00
Fangrui Song 774971030d [MCStreamer] De-capitalize EmitValue EmitIntValue{,InHex} 2020-02-14 23:08:40 -08:00
Fangrui Song 895cad1a13 [AsmPrinter][XRay] Omit unique ID for xray_instr_map and xray_fn_idx
Follow-up for D74006.
2020-02-14 21:10:46 -08:00
Diogo Sampaio 8bc790f9e6 [AArch64][FPenv] Update chain of int to fp conversion
Summary:
When using strict fp, it is required to update the
chain when performing integer type promotion of an
operand to an integer-to-floating-point conversion.

Reviewers: craig.topper, john.brawn

Reviewed By: craig.topper

Subscribers: kristof.beyls, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D74597
2020-02-15 05:07:34 +00:00
Fangrui Song f554e27224 [AsmPrinter] Omit unique ID for __patchable_function_entries sections
Follow-up for D74006.

When the integrated assembler is used, we use SHF_LINK_ORDER.  The
linked-to symbol is part of ELFSectionKey, thus we can omit the unique
ID.
2020-02-14 20:54:54 -08:00
Fangrui Song 1dc16c752d [MC] Add MCSection::NonUniqueID and delete one MCContext::getELFSection overload 2020-02-14 20:25:52 -08:00
Fangrui Song 6d2d589b06 [MC] De-capitalize another set of MCStreamer::Emit* functions
Emit{ValueTo,Code}Alignment Emit{DTP,TP,GP}* EmitSymbolValue etc
2020-02-14 19:26:52 -08:00
Fangrui Song a55daa1461 [MC] De-capitalize some MCStreamer::Emit* functions 2020-02-14 19:11:53 -08:00
Matt Arsenault 3bb0ff8341 GlobalISel: Remove unused function argument 2020-02-14 15:57:39 -08:00
Sean Fertile b75692c30e [AsmPrinter] Use the MCAsmInfo to determine if we need descriptors.
In https://reviews.llvm.org/rG8b737688c21a9755cae14cb9343930e0882164ab I
switched the condition gating the creation of the descriptor symbol from
checking the MCAsmInfo for whether we need to support descriptors to
checking whether the OS is AIX. Technically the two should be
interchangeable: if we are targeting AIX then we need to emit XCOFF
object files, and the MCAsmInfo must return true for needing function
descriptors.

This doesn't account for lit tests with run steps that only set the arch,
e.g. test/CodeGen/XCore/section-name.ll, which when run natively on AIX
ends up with the target xcore-ibm-aix and needFunctionDescriptors false.

This patch reverts to using the MCAsmInfo and adds an assert that the
target OS must be AIX since that is the only target using the descriptor
hook.

Differential Revision: https://reviews.llvm.org/D74622
2020-02-14 15:20:39 -05:00
Matt Arsenault bfbfa18591 GlobalISel: Lower s64->s16 G_FPTRUNC
This is more or less directly ported from the AMDGPU custom lowering
for FP_TO_FP16. I made a few minor fixups (using G_UNMERGE_VALUES
instead of creating shift/trunc to extract the two halves, and zexting
an inverted compare instead of select_cc).

This also does not include the fast-math expansion the DAG uses, which
converts to f32 and then to f16. I think that belongs in a
pre-legalize combine instead.
2020-02-14 10:46:58 -08:00
Volkan Keles 187686a22f [GlobalISel] LegalizationArtifactCombiner: Fix a bug in tryCombineMerges
Like COPY instructions explained in D70616, we don't check the constraints
when combining G_UNMERGE_VALUES. Use the same logic used in D70616 to check
if registers can be replaced, or a COPY instruction needs to be built.

https://reviews.llvm.org/D70564
2020-02-14 10:45:58 -08:00
Alexandre Ganea 8404aeb56a [Support] On Windows, ensure hardware_concurrency() extends to all CPU sockets and all NUMA groups
The goal of this patch is to maximize CPU utilization on multi-socket or high core count systems, so that parallel computations such as LLD/ThinLTO can use all hardware threads in the system. Before this patch, on Windows, a maximum of 64 hardware threads could be used at most, in some cases dispatched only on one CPU socket.

== Background ==
Windows doesn't have a flat cpu_set_t like Linux. Instead, it projects hardware CPUs (or NUMA nodes) to applications through a concept of "processor groups". A "processor" is the smallest unit of execution on a CPU, that is, a hyper-thread if SMT is active; a core otherwise. There's a limit of 32 processors on older 32-bit versions of Windows, which was later raised to 64 processors with 64-bit versions of Windows. This limit comes from the affinity mask, which historically is represented by sizeof(void*). Consequently, the concept of "processor groups" was introduced for dealing with systems with more than 64 hyper-threads.

By default, the Windows OS assigns only one "processor group" to each starting application, in a round-robin manner. If the application wants to use more processors, it needs to programmatically enable it, by assigning threads to other "processor groups". This also means that affinity cannot cross "processor group" boundaries; one can only specify a "preferred" group on start-up, but the application is free to allocate more groups if it wants to.

This creates a peculiar situation, where newer CPUs like the AMD EPYC 7702P (64-cores, 128-hyperthreads) are projected by the OS as two (2) "processor groups". This means that by default, an application can only use half of the cores. This situation could only get worse in the years to come, as dies with more cores will appear on the market.

== The problem ==
The heavyweight_hardware_concurrency() API was introduced so that only *one hardware thread per core* was used. Once that API returns, that original intention is lost; only the number of threads is retained. Consider a situation, on Windows, where the system has 2 CPU sockets, 18 cores each, each core having 2 hyper-threads, for a total of 72 hyper-threads. Both heavyweight_hardware_concurrency() and hardware_concurrency() currently return 36, because on Windows they are simply wrappers over std::thread::hardware_concurrency() -- which can only return processors from the current "processor group".

== The changes in this patch ==
To solve this situation, we capture (and retain) the initial intention until the point of usage, through a new ThreadPoolStrategy class. The number of threads to use is deferred as late as possible, until the moment where the std::threads are created (ThreadPool in the case of ThinLTO).

When using hardware_concurrency(), setting ThreadCount to 0 now means to use all the possible hardware CPU (SMT) threads. Providing a ThreadCount above the maximum number of threads will have no effect; the maximum will be used instead.
The heavyweight_hardware_concurrency() is similar to hardware_concurrency(), except that only one thread per hardware *core* will be used.

When LLVM_ENABLE_THREADS is OFF, the threading APIs will always return 1, to ensure any caller loops will be exercised at least once.
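
A hedged usage sketch of the new strategy-based API (the work items are made up; the calls reflect my reading of the patch and should be treated as an assumption):

```
#include "llvm/Support/ThreadPool.h"
#include "llvm/Support/Threading.h"

void runParallelWork() {
  // ThreadCount 0: use every hardware (SMT) thread across all processor groups.
  llvm::ThreadPool Pool(llvm::hardware_concurrency());
  for (int I = 0; I < 128; ++I)
    Pool.async([I] { /* work item I */ });
  Pool.wait();
}
```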

Differential Revision: https://reviews.llvm.org/D71775
2020-02-14 10:24:22 -05:00
Fangrui Song bcd24b2d43 [AsmPrinter][MCStreamer] De-capitalize EmitInstruction and EmitCFI* 2020-02-13 22:08:55 -08:00
Fangrui Song 1d49eb00d9 [AsmPrinter] De-capitalize all AsmPrinter::Emit* but EmitInstruction
Similar to rL328848.
2020-02-13 17:06:24 -08:00
Vedant Kumar 3091049446 Add dbgs() output to help track down missing DW_AT_location bugs, NFC 2020-02-13 14:38:44 -08:00
Vedant Kumar 8e77b33b3c [Local] Do not move around dbg.declares during replaceDbgDeclare
replaceDbgDeclare is used to update the descriptions of stack variables
when they are moved (e.g. by ASan or SafeStack). A side effect of
replaceDbgDeclare is that it moves dbg.declares around in the
instruction stream (typically by hoisting them into the entry block).
This behavior was introduced in llvm/r227544 to fix an assertion failure
(llvm.org/PR22386), but no longer appears to be necessary.

Hoisting a dbg.declare generally does not create problems. Usually,
dbg.declare either describes an argument or an alloca in the entry
block, and backends have special handling to emit locations for these.
In optimized builds, LowerDbgDeclare places dbg.values in the right
spots regardless of where the dbg.declare is. And no one uses
replaceDbgDeclare to handle things like VLAs.

However, there doesn't seem to be a positive case for moving
dbg.declares around anymore, and this reordering can get in the way of
understanding other bugs. I propose getting rid of it.

Testing: stage2 RelWithDebInfo sanitized build, check-llvm

rdar://59397340

Differential Revision: https://reviews.llvm.org/D74517
2020-02-13 14:35:02 -08:00
Fangrui Song 0bc77a0f0d [AsmPrinter] De-capitalize some AsmPrinter::Emit* functions
Similar to rL328848.
2020-02-13 13:38:33 -08:00
Fangrui Song 0dce409cee [AsmPrinter] De-capitalize Emit{Function,BasicBlock]* and Emit{Start,End}OfAsmFile 2020-02-13 13:22:49 -08:00
Matt Arsenault de256478e6 GlobalISel: Don't use LLT references
These should always be passed by value
2020-02-13 15:25:30 -05:00
Simon Pilgrim 32176133fa Move FIXME to start of comment so visual studio actually tags it. NFC. 2020-02-13 14:28:50 +00:00
Serguei Katkov a6f38b4697 [Statepoint] Remove redundant clear of call target on register
A patchable statepoint is lowered into a sequence of nops, so the zeroed call
target should not be in a register. It is better to use getTargetConstant
instead of getConstant to select the zero constant for the call target.

Reviewers: reames
Reviewed By: reames
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D74465
2020-02-13 10:25:50 +07:00
Fangrui Song c662795b07 [AsmPrinter][ELF] Emit local alias for ExternalLinkage dso_local GlobalAlias 2020-02-12 17:08:22 -08:00
Guozhi Wei 369d086d78 [MBP] Partial tail duplication into hot predecessors
The current tail duplication embedded in MBP duplicates a BB into all or none of its predecessors, without much cost analysis. So sometimes it is duplicated into cold predecessors, and in other cases it may miss the duplication into hot predecessors.

This patch improves tail duplication in 3 aspects:

  1. A successor can be duplicated into a subset of its predecessors.
  2. A more fine-grained benefit analysis, combined with (1), means a successor is now duplicated into hot predecessors only.
  3. If a successor can't be duplicated into one predecessor, that doesn't prevent duplication into other predecessors.

Differential Revision: https://reviews.llvm.org/D73387
2020-02-12 15:22:33 -08:00
Jay Foad 32aac25637 [KnownBits] Introduce anyext instead of passing a flag into zext
Summary:
This was a very odd API, where you had to pass a flag into a zext
function to say whether the extended bits really were zero or not. All
callers passed in a literal true or false.

I think it's much clearer to make the function name reflect the
operation being performed on the value we're tracking (rather than on
the KnownBits Zero and One fields), so zext means the value is being
zero extended and new function anyext means the value is being extended
with unknown bits.
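
A hedged sketch of the difference in caller code (the wrapper is hypothetical):

```
#include "llvm/Support/KnownBits.h"

llvm::KnownBits widen(const llvm::KnownBits &Narrow, unsigned Width,
                      bool HighBitsKnownZero) {
  // Before: Narrow.zext(Width, /*ExtendedBitsAreKnownZero=*/HighBitsKnownZero)
  return HighBitsKnownZero ? Narrow.zext(Width)     // high bits known zero
                           : Narrow.anyext(Width);  // high bits unknown
}
```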

NFC.

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D74482
2020-02-12 19:06:53 +00:00
Simon Pilgrim 9eb426c88c [TargetLowering] Add NegatibleCost enum for isNegatibleForFree return codes
The isNegatibleForFree/getNegatedExpression methods currently rely on a raw char value to indicate whether a negation is beneficial or not.

This patch replaces the char return value with a NegatibleCost enum to more clearly demonstrate what is implied.

It also renames isNegatibleForFree to getNegatibleCost to more accurately reflect what's going on.
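
A hedged sketch of what such an enum might look like (the exact enumerators are an assumption):

```
enum class NegatibleCost {
  Cheaper,   // negated form is strictly cheaper than the original
  Neutral,   // negated form costs about the same
  Expensive, // negated form is more expensive (or not free to build)
};
```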

Differential Revision: https://reviews.llvm.org/D74221
2020-02-12 11:51:42 +00:00
Djordje Todorovic 97ed706a96 Revert "[DebugInfo] Enable the debug entry values feature by default"
This reverts commit rG9f6ff07f8a39.

Found a test failure on clang-with-thin-lto-ubuntu buildbot.
2020-02-12 11:59:04 +01:00
Clement Courbet 15488ff24b [CodeGen] Fix the computation of the alignment of split stores.
Summary:
Right now the alignment of the lower half of a store is computed as
align/2, which fails for unaligned stores (align = 1), and is overly
pessimistic for, e.g., an 8-byte store aligned to 4 bytes.
Fixes PR44851
Fixes PR44877

Reviewers: gchatelet, spatel, lebedev.ri

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D74311
2020-02-12 10:37:30 +01:00
Djordje Todorovic 9f6ff07f8a [DebugInfo] Enable the debug entry values feature by default
This patch enables the debug entry values feature.

  - Remove the (CC1) experimental -femit-debug-entry-values option
  - Enable it for x86, arm and aarch64 targets
  - Resolve the test failures
  - Leave the llc experimental option for targets that do not
    support the CallSiteInfo yet

Differential Revision: https://reviews.llvm.org/D73534
2020-02-12 10:25:14 +01:00
Nicolai Hähnle 07a5b849f7 SelectionDAG: Fix bug in ClusterNeighboringLoads
Summary:
The method attempts to find loads that can be legally clustered by
looking for loads consuming the same chain glue token.

However, the old code looks at _all_ users of values produced by the
chain node -- including uses of the loaded/returned value of volatile
loads or atomics. This could lead to circular dependencies which then
failed during scheduling.

With this change, we filter out users by getResNo, i.e. by which
SDValue value they use, to ensure that we only look at users of the
chain glue token.

This appears to be a rather old bug, which is perhaps surprising.
However, the test case is actually quite fragile (i.e., it is hidden
by fairly small changes), and the test _must_ use volatile loads for
the bug to manifest.

Reviewers: arsenm, bogner, craig.topper, foad

Subscribers: MatzeB, jvesely, wdng, hiraditya, javed.absar, jfb, kerbowa, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D74253
2020-02-12 09:12:55 +01:00
Craig Topper 0daf9b8e41 [X86][LegalizeTypes] Add SoftPromoteHalf support STRICT_FP_EXTEND and STRICT_FP_ROUND
This adds a strict version of FP16_TO_FP and FP_TO_FP16 and uses
them to implement soft promotion for the half type. This is
enough to provide basic support for __fp16 with strictfp.

Add the necessary X86 support to use VCVTPS2PH/VCVTPH2PS when F16C
is enabled.
2020-02-11 22:30:04 -08:00
lewis-revill a6bd1256ce [DebugInfo] Call site entries cannot be generated for FrameSetup calls
Instructions marked as FrameSetup do not cause requestLabelAfterInsn to
be called and so no such label is generated. Call instructions which
require call site entries to be generated require this label to be
present in order to calculate the return PC offset/address, but the
check for whether the call instruction is marked as FrameSetup was not
present.

Therefore in the case where a call instruction is marked as FrameSetup,
an assertion failure occurs if a call site entry is to be generated.
This is the case with RISC-V's implementation of save/restore via
library calls.

Differential Revision: https://reviews.llvm.org/D71593
2020-02-11 21:23:18 +00:00
OCHyams 35e0ab647b [DebugInfo][NFC] Fixup the UserValue methods to use FragmentInfo
Fixup the UserValue methods to use FragmentInfo instead of DIExpression because
the DIExpression is only ever used to get the FragmentInfo. The
DIExpression is meaningless in the UserValue class because each definition point
added to a UserValue may have a unique DIExpression.

Reviewed By: aprantl
Differential Revision: https://reviews.llvm.org/D74057
2020-02-11 10:20:24 +00:00
OCHyams 3aa33fde03 [DebugInfo][NFC] Rename the class DbgValueLocation to DbgVariableValue
Rename the class DbgValueLocation to DbgVariableValue and instances from Loc to
DbgValue. These names better express the new semantics introduced in D74053.

The class previously represented a { Location } only. It now represents a
{ Location, DIExpression } pair which together describe a value.

Reviewed By: aprantl
Differential Revision: https://reviews.llvm.org/D74055
2020-02-11 10:20:24 +00:00
OCHyams 1e40799324 [DebugInfo] Teach LDV how to handle identical variable fragments
LiveDebugVariables uses interval maps to explicitly represent DBG_VALUE
intervals. DBG_VALUEs are filtered into an interval map based on their {
Variable, DIExpression }. The interval map will coalesce adjacent entries that
use the same { Location }.  Under this model, DBG_VALUEs which refer to the same
bits of the same variable will be filtered into different interval maps if they
have different DIExpressions which means the original intervals will not be
properly preserved.

This patch fixes the problem by using { Variable, Fragment } to filter the
DBG_VALUEs into maps, and coalesces adjacent entries iff they have the same
{ Location, DIExpression } pair.

The solution is not perfect because we see the similar issues appear when
partially overlapping fragments are encountered, but is far simpler than a
complete solution (i.e. D70121).

Fixes: pr41992, pr43957
Reviewed By: aprantl
Differential Revision: https://reviews.llvm.org/D74053
2020-02-11 10:20:24 +00:00
Amara Emerson 067dd9c6b1 [GlobalISel][CallLowering] Use stripPointerCasts().
A downstream test exposed a simple logic bug with the manual pointer
stripping code; fix that by just using stripPointerCasts() on the value.

I don't think there's a way to expose this issue upstream.
2020-02-10 15:43:57 -08:00
Matt Arsenault f270da6bfc RegisterCoalescer: Add LaneMask to debug printing 2020-02-10 12:34:33 -08:00
diggerlin aa86311e62 [AIX][XCOFF] Support Mergeable2ByteCString and Mergeable4ByteCString
SUMMARY:
The patch enables support for Mergeable2ByteCString and Mergeable4ByteCString.

Reviewers: daltenty
Subscribers: wuzish, nemanjai, hiraditya

Differential Revision: https://reviews.llvm.org/D74164
2020-02-10 14:45:54 -05:00
Sebastian Neubauer 7cddd15e56 [SelectionDAG] Optimize build_vector of truncates and shifts
Add a simplification to fuse a manual vector extract with shifts and
truncate into a bitcast.

Unpacking and packing values into vectors is only optimized with
extractelement instructions, not when manually unpacked using shifts
and truncates.
This patch simplifies shifts and truncates into a bitcast if possible.

Simplify (build_vec (trunc $1)
                    (trunc (srl $1 width))
                    (trunc (srl $1 (2 * width))) ...)
to (bitcast $1)
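
For illustration (a made-up function, not from this patch), manual unpacking
via shifts and truncates looks like this at the source level:

```
#include <cstdint>

void unpack(uint32_t X, uint8_t Out[4]) {
  Out[0] = (uint8_t)X;          // trunc X
  Out[1] = (uint8_t)(X >> 8);   // trunc (srl X, 8)
  Out[2] = (uint8_t)(X >> 16);  // trunc (srl X, 16)
  Out[3] = (uint8_t)(X >> 24);  // trunc (srl X, 24)
}
```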

Differential Revision: https://reviews.llvm.org/D73892
2020-02-10 15:04:07 +01:00
Djordje Todorovic 3a4dc577c9 [CSInfo] Fix the assertions regarding updating the CSInfo
The call site info was not updated correctly when deleting
corresponding call instructions.

Differential Revision: https://reviews.llvm.org/D73700
2020-02-10 10:55:06 +01:00
Djordje Todorovic 68908993eb [CSInfo] Use isCandidateForCallSiteEntry() when updating the CSInfo
Use isCandidateForCallSiteEntry().
This should mostly be an NFC, but there are some parts ensuring
the moveCallSiteInfo() and copyCallSiteInfo() operate with call site
entry candidates (both Src and Dest should be the call site entry
candidates).

Differential Revision: https://reviews.llvm.org/D74122
2020-02-10 10:03:14 +01:00
Amara Emerson 21c9d9ad43 [GlobalISel][CallLowering] Tighten constantexpr check for callee.
I'm not sure there's a test case for this, but it's better to be safe.
2020-02-09 22:59:48 -08:00
Matt Arsenault 312a9d1b83 GlobalISel: Fix narrowScalar for G_{CTLZ|CTTZ}_ZERO_UNDEF
Narrow these for 64-bit VALU for AMDGPU.
2020-02-09 19:02:38 -05:00
Matt Arsenault 6135f5eda4 GlobalISel: Fix narrowing of G_CTLZ/G_CTTZ
The result type is separate from the source type.
2020-02-09 18:11:43 -05:00
Craig Topper eeb63944e4 [LegalizeTypes][ARM][AArch64][PowerPC][RISCV][X86] Use BUILD_PAIR to return expanded integer results from ReplaceNodeResults instead of just returning two results.
Remove code from LegalizeTypes that allowed this to work.

We were already using BUILD_PAIR for this in some places so this
standardizes on a single way to do this.
2020-02-08 09:52:31 -08:00