Commit Graph

3324 Commits

Author SHA1 Message Date
Kerry McLaughlin aa0f37e14a [AArch64][SVE] Add first-faulting load intrinsic
Summary:
Implements the llvm.aarch64.sve.ldff1 intrinsic and DAG
combine rules for first-faulting loads with sign & zero extends
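
As a rough sketch of the shape such an intrinsic takes (the type suffixes here are illustrative, not copied from the patch), a predicated first-faulting load of 32-bit elements looks like:

  %res = call <vscale x 4 x i32> @llvm.aarch64.sve.ldff1.nxv4i32(<vscale x 4 x i1> %pg, i32* %addr)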

Reviewers: sdesmalen, efriedma, andwar, dancgr, rengolin

Reviewed By: sdesmalen

Subscribers: tschuett, kristof.beyls, hiraditya, rkruppe, psnobl, cameron.mcinally, cfe-commits, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D73025
2020-01-23 11:57:16 +00:00
Florian Hahn 300997c41a [AArch64] Don't rename registers with pseudo defs in Ld/St opt.
If the root def for renaming is a no-op pseudo instruction like KILL,
we would end up without a correct def for the renamed register, causing
miscompiles.

This patch conservatively bails out on any pseudo instruction.

This fixes https://bugs.chromium.org/p/chromium/issues/detail?id=1037912#c70
2020-01-22 09:26:25 -08:00
Pablo Barrio a8ff6c0b09 [AArch64] Add test for DWARF return address signing
Summary: Patch by LukeCheeseman and pbarrio

Reviewers: samparker, chill

Subscribers: kristof.beyls, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D72835
2020-01-22 16:36:21 +00:00
Sander de Smalen 4cf16efe49 [AArch64][SVE] Add patterns for unpredicated load/store to frame-indices.
This patch also fixes up a number of cases in DAGCombine and
SelectionDAGBuilder where the size of a scalable vector is used in a
fixed-width context (thus triggering an assertion failure).

Reviewers: efriedma, c-rhodes, rovka, cameron.mcinally

Reviewed By: efriedma

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D71215
2020-01-22 14:32:27 +00:00
Kerry McLaughlin cdcc4f2a44 [AArch64][SVE] Add intrinsic for non-faulting loads
Summary:
This patch adds the llvm.aarch64.sve.ldnf1 intrinsic, plus
DAG combine rules for non-faulting loads and sign/zero extends

Reviewers: sdesmalen, efriedma, andwar, dancgr, mgudim, rengolin

Reviewed By: sdesmalen

Subscribers: tschuett, kristof.beyls, hiraditya, rkruppe, psnobl, cameron.mcinally, cfe-commits, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D71698
2020-01-22 11:15:20 +00:00
Sander de Smalen 67d4c9924c Add support for (expressing) vscale.
In LLVM IR, vscale can be represented with an intrinsic. For some targets,
this is equivalent to the constexpr:

  getelementptr <vscale x 1 x i8>, <vscale x 1 x i8>* null, i32 1

This can be used to propagate the value in CodeGenPrepare.

In ISel we add a node that can be legalized to one or more
instructions to materialize the runtime vector length.

This patch also adds SVE CodeGen support for VSCALE, which maps this
node to RDVL instructions (for scaled multiples of 16 bytes) or CNT[HSD]
instructions (for scaled multiples of 2, 4, or 8 bytes, respectively).
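
As a hedged illustration (not taken verbatim from the patch), the runtime multiple of the vector length can be queried in IR like this, with the multiply-by-16 case expected to lower to a single RDVL on SVE:

  %vs = call i64 @llvm.vscale.i64()
  %bytes = mul i64 %vs, 16     ; number of bytes in one <vscale x 16 x i8>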

Reviewers: rengolin, cameron.mcinally, hfinkel, sebpop, SjoerdMeijer, efriedma, lattner

Reviewed by: efriedma

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D68203
2020-01-22 10:09:27 +00:00
Amara Emerson 2e25d75aaa [AArch64][GlobalISel] Fix llvm.returnaddress(0) selection when LR is clobbered.
The code was originally ported from SelectionDAG, which does CSE behind the scenes
automatically. When copying the return address from LR live into the function, we
need to make sure to use the single copy on function entry. Any later copy from LR
could be using clobbered junk.

Implement this by caching the copy in the per-MF state in the selector.

Should hopefully fix the AArch64 sanitiser buildbot failure.
2020-01-21 22:53:32 -08:00
Amara Emerson 67a8775322 [AArch64] Don't generate gpr CSEL instructions in early-ifcvt if regclasses aren't compatible.
In GlobalISel we may in some unfortunate circumstances generate PHIs with
operands that are on separate banks. If-conversion doesn't currently check for
that case and ends up generating a CSEL on AArch64 with incorrect register
operands.

Differential Revision: https://reviews.llvm.org/D72961
2020-01-21 16:51:31 -08:00
Florian Hahn 535ed62c5f [AArch64] Add custom store lowering for 256 bit non-temporal stores.
Currently we fail to lower non-temporal stores of 256+ bit vectors
to STNPQ, because type legalization splits them into 128-bit stores
first; and since there is no single non-temporal store instruction,
recreating STNPQ in the Load/Store optimizer would be quite tricky.

This patch adds custom lowering for 256 bit non-temporal vector stores
to improve the generated code.
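
For reference, a 256-bit non-temporal store is expressed in IR with the !nontemporal metadata; a minimal sketch (function and value names invented):

  define void @stnp256(<4 x i64> %v, <4 x i64>* %p) {
    store <4 x i64> %v, <4 x i64>* %p, align 16, !nontemporal !0   ; expected to lower to stnp q0, q1, [x0]
    ret void
  }
  !0 = !{i32 1}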

Reviewers: dmgreen, samparker, t.p.northover, ab

Reviewed By: dmgreen

Differential Revision: https://reviews.llvm.org/D72919
2020-01-21 14:53:40 -08:00
Fangrui Song d232c21566 [AsmPrinter] Don't emit __patchable_function_entries entry if "patchable-function-entry"="0"
Also improve tests.
2020-01-20 16:13:48 -08:00
Andrzej Warzynski 7e717b3990 [AArch64][SVE] Extend int_aarch64_sve_ld1_gather_imm
The ACLE distinguishes between the following addressing modes for gather
loads:
  * "scalar base, vector offset", and
  * "vector base, scalar offset".
For the "vector base, scalar offset" case, the
`int_aarch64_sve_ld1_gather_imm` intrinsic was added in 79f2422d.
Currently, that intrinsic assumes that the scalar offset is passed as an
immediate.  As a result, it does not cater for cases where the scalar
offset is stored in a register.

In this patch `int_aarch64_sve_ld1_gather_imm` is extended so that all
cases are covered:
* `int_aarch64_sve_ld1_gather_imm` is renamed as
  `int_aarch64_sve_ld1_gather_scalar_offset`
* new DAG combine rules are added for GLD1_IMM for scenarios where the
  offset is a non-immediate scalar or an out-of-range immediate
* sve-intrinsics-gather-loads-vector-base.ll is renamed as
  sve-intrinsics-gather-loads-vector-base-imm-offset.ll
* sve-intrinsics-gather-loads-vector-base-scalar-offset.ll is added as a
  test file for non-immediate offsets

Similar changes are made for scatter store intrinsics.
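
As an illustrative sketch of the renamed intrinsic (the exact overload suffixes may differ from the patch), a "vector base, scalar offset" gather now takes the offset in a register-sized operand:

  %res = call <vscale x 2 x i64> @llvm.aarch64.sve.ld1.gather.scalar.offset.nxv2i64.nxv2i64(<vscale x 2 x i1> %pg, <vscale x 2 x i64> %base, i64 %offset)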

Reviewed By: sdesmalen, efriedma

Differential Revision: https://reviews.llvm.org/D71773
2020-01-20 12:19:18 +00:00
Fangrui Song 886d2c2ca7 [BranchRelaxation] Simplify offset computation and fix a bug in adjustBlockOffsets()
If Start!=0, adjustBlockOffsets() may unnecessarily adjust the offset of
Start. There is no correctness issue, but it can create more block
splits.
2020-01-19 16:02:16 -08:00
Fangrui Song 9a24488cb6 [CodeGen] Move fentry-insert, xray-instrumentation and patchable-function before addPreEmitPass()
The intention is to move patchable-function before aarch64-branch-targets
(configured in AArch64PassConfig::addPreEmitPass) so that we emit BTI before NOPs
(see https://gcc.gnu.org/bugzilla/show_bug.cgi?id=92424).

This also allows addPreEmitPass() passes to know the precise instruction sizes if they want.

Tried x86-64 Debug/Release builds of ccls with -fxray-instrument -fxray-instruction-threshold=1.
No output difference between this commit and the previous one.
2020-01-19 00:09:46 -08:00
Evgenii Stepanov d081962dea Merge memtag instructions with adjacent stack slots.
Summary:
Detect a run of memory tagging instructions for adjacent stack frame slots,
and replace them with a shorter instruction sequence
* replace STG + STG with ST2G
* replace STGloop + STGloop with STGloop

This code needs to run when stack slot offsets are already known, but before
FrameIndex operands in STG instructions are eliminated; that's the
reason for the new hook in PrologueEpilogue.

This change modifies STGloop and STZGloop pseudos to take the size as an
immediate integer operand, and adds _untied variants of those pseudos
that are allowed to take the base address as a FI operand. This is needed to
simplify recognizing an STGloop instruction as operating on a stack slot
post-regalloc.

This improves memtag code size by ~0.25%, and it looks like an additional ~0.1%
is possible by rearranging the stack frame such that consecutive STG
instructions reference adjacent slots (patch pending).

Reviewers: pcc, ostannard

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D70286
2020-01-17 15:19:29 -08:00
Ian Levesque 97ba483026 [xray] Allow instrumenting only function entry and/or only function exit
Extend -fxray-instrumentation-bundle to split function-entry and
function-exit into two separate options, so that it is possible to
instrument only function entry or only function exit.  For use cases
that only care about one or the other this will save significant overhead
and code size.

Differential Revision: https://reviews.llvm.org/D72890
2020-01-17 13:32:34 -08:00
Cullen Rhodes 49edf9a509 [AArch64][SVE] Add break intrinsics
Summary:
Implements the following intrinsics:

    * @llvm.aarch64.sve.brka
    * @llvm.aarch64.sve.brka.z
    * @llvm.aarch64.sve.brkb
    * @llvm.aarch64.sve.brkb.z
    * @llvm.aarch64.sve.brkn.z
    * @llvm.aarch64.sve.brkpa.z
    * @llvm.aarch64.sve.brkpb.z

Reviewers: sdesmalen, efriedma, dancgr, mgudim, cameron.mcinally, rengolin

Reviewed By: sdesmalen

Subscribers: tschuett, kristof.beyls, hiraditya, rkruppe, psnobl, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D72393
2020-01-17 11:47:08 +00:00
Davide Italiano 30a8865142 [FastISel] Lower `llvm.dbg.value(undef, ...` correctly.
Summary:
Instead of just dropping them.

<rdar://problem/58657146>

Reviewers: aprantl, vsk, ab, paquette, echristo

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D72877
2020-01-16 16:22:20 -08:00
Jessica Paquette b82d18e1e8 [AArch64][GlobalISel] Change G_FCONSTANTs feeding into stores into G_CONSTANTS
Given the following situation:

x = G_FCONSTANT (something that can't be materialized)
G_STORE x, some_addr

We know that x must be materialized as at least a single mov. However, at the
time of selection, the G_STORE will have been regbankselected to a FPR store.

So, as a result, you'll get an unnecessary fmov into the G_STORE.

Storing a constant value from a GPR and storing one from an FPR are
equivalent. So, whenever you see a G_FCONSTANT that only feeds into
G_STOREs, you might as well make it a G_CONSTANT.

This adds a target-specific combine which changes G_FCONSTANTs feeding into
G_STOREs into G_CONSTANTs.

Differential Revision: https://reviews.llvm.org/D72814
2020-01-16 15:18:44 -08:00
Matt Arsenault a66d2817ca GlobalISel: Don't ignore requested ext narrowing type
This was assuming the narrow target was the source type. Respect the
requested type when these don't match by using intermediate
merges. This avoids producing very wide, illegal shift expansions.
2020-01-16 14:29:37 -05:00
Matt Arsenault d0943537e1 GlobalISel: Apply target MMO flags to atomics
Unify MMO flag handling with SelectionDAG like with loads and stores.
2020-01-16 13:49:43 -05:00
Matt Arsenault 0d0fce42b0 GlobalISel: Preserve load/store metadata in IRTranslator
This was dropping the invariant metadata on dead argument loads, so
they weren't deleted.

Atomics still need to be fixed the same way. Also, apparently store
was never preserving dereferenceable, which should also be fixed.
2020-01-16 13:49:43 -05:00
Matt Arsenault 77eb1b8f63 llc: Don't overwrite frame-pointer attribute
Continue making command line flags with matching attribute behavior
consistent.
2020-01-15 20:56:46 -05:00
Yuanfang Chen 6e24c6037f Revert "[Support] make report_fatal_error `abort` instead of `exit`"
This reverts commit 647c3f4e47.

Got bots failure from sanitizer-windows and maybe others.
2020-01-15 17:52:25 -08:00
Yuanfang Chen 647c3f4e47 [Support] make report_fatal_error `abort` instead of `exit`
Summary:
This patch could be treated as a rebase of D33960. It also fixes PR35547.
A fix for `llvm/test/Other/close-stderr.ll` is proposed in D68164. Seems
the consensus is that the test is passing by chance and I'm not
sure how important it is for us. So it is removed like in D33960 for now.
The rest of the test fixes are just adding `--crash` flag to `not` tool.

** The reason it fixes PR35547 is

`exit` does cleanup, including calling class destructors, whereas `abort`
does not do any cleanup. In a multithreading environment such as ThinLTO or
JIT, threads may share state, most of which is ManagedStatic<>. If a
faulting thread tears down a class while another thread is using it, there
is a chance of memory corruption. This is bad: 1. it will stop error
reporting such as the pretty stack printer; 2. the memory corruption is
distracting and nondeterministic in terms of error message and corruption
type (depending on the timing, it could be a double free, a heap
use-after-free, etc.).

Reviewers: rnk, chandlerc, zturner, sepavloff, MaskRay, espindola

Reviewed By: rnk, MaskRay

Subscribers: wuzish, jholewinski, qcolombet, dschuff, jyknight, emaste, sdardis, nemanjai, jvesely, nhaehnle, sbc100, arichardson, jgravelle-google, aheejin, kbarton, fedor.sergeev, asb, rbar, johnrusso, simoncook, apazos, sabuasal, niosHD, jrtc27, zzheng, edward-jones, atanasyan, rogfer01, MartinMosbeck, brucehoult, the_o, PkmX, jocewei, jsji, lenary, s.egerton, pzheng, cfe-commits, MaskRay, filcab, davide, MatzeB, mehdi_amini, hiraditya, steven_wu, dexonsmith, rupprecht, seiya, llvm-commits

Tags: #llvm, #clang

Differential Revision: https://reviews.llvm.org/D67847
2020-01-15 17:05:13 -08:00
Amara Emerson 2e39ea726e Revert "Revert rG6078f2fedcac5797ac39ee5ef3fd7a35ef1202d5 - "[AArch64][GlobalISel]: Support @llvm.{return,frame}address selection.""
The original change wasn't constraining the operand regclasses which broke EXPENSIVE_CHECKS.
2020-01-15 10:13:11 -08:00
Simon Pilgrim e26a78e708 Revert rG6078f2fedcac5797ac39ee5ef3fd7a35ef1202d5 - "[AArch64][GlobalISel]: Support @llvm.{return,frame}address selection."
These intrinsics expand to a variable number of instructions so just like in
ISelLowering.cpp we use custom code to deal with them.

Committing Tim's original patch.

Differential Revision: https://reviews.llvm.org/D65656
----
Breaks EXPENSIVE_CHECKS builds.
2020-01-15 12:37:37 +00:00
Cullen Rhodes 93a4dede3a [AArch64][SVE] Add ptest intrinsics
Summary:
Implements the following intrinsics:

    * @llvm.aarch64.sve.ptest.any
    * @llvm.aarch64.sve.ptest.first
    * @llvm.aarch64.sve.ptest.last

Reviewers: sdesmalen, efriedma, dancgr, mgudim, cameron.mcinally, rengolin

Reviewed By: efriedma

Subscribers: tschuett, kristof.beyls, hiraditya, rkruppe, psnobl, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D72398
2020-01-15 11:15:01 +00:00
Amara Emerson 6078f2fedc [AArch64][GlobalISel]: Support @llvm.{return,frame}address selection.
These intrinsics expand to a variable number of instructions so just like in
ISelLowering.cpp we use custom code to deal with them.

Committing Tim's original patch.

Differential Revision: https://reviews.llvm.org/D65656
2020-01-14 13:41:21 -08:00
Danilo Carvalho Grael 26d96126a0 [SVE] Add patterns for MUL immediate instruction.
Summary: Add the missing MUL pattern for integer immediate instructions.
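
A hedged sketch of the kind of input this matches (names invented): a multiply by a splatted immediate, which should now select the immediate form mul z0.s, z0.s, #3 instead of materializing the constant in a register:

  %ins = insertelement <vscale x 4 x i32> undef, i32 3, i32 0
  %splat = shufflevector <vscale x 4 x i32> %ins, <vscale x 4 x i32> undef, <vscale x 4 x i32> zeroinitializer
  %res = mul <vscale x 4 x i32> %a, %splat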

Reviewers: sdesmalen, huntergr, efriedma, c-rhodes, kmclaughlin

Subscribers: tschuett, hiraditya, rkruppe, psnobl, llvm-commits, amehsan

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D72654
2020-01-14 15:26:19 -05:00
Jay Foad b777e551f0 [MachineScheduler] Reduce reordering due to mem op clustering
Summary:
Mem op clustering adds a weak edge in the DAG between two loads or
stores that should be clustered, but the direction of this edge is
pretty arbitrary (it depends on the sort order of MemOpInfo, which
represents the operands of a load or store). This often means that two
loads or stores will get reordered even if they would naturally have
been scheduled together anyway, which leads to test case churn and goes
against the scheduler's "do no harm" philosophy.

The fix makes sure that the direction of the edge always matches the
original code order of the instructions.

Reviewers: atrick, MatzeB, arsenm, rampitec, t.p.northover

Subscribers: jvesely, wdng, nhaehnle, kristof.beyls, hiraditya, javed.absar, arphaman, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D72706
2020-01-14 19:19:02 +00:00
Sanne Wouda 1cc8fff420 [AArch64] Fix save register pairing for Windows AAPCS
Summary:
On Windows, when a function does not have an unwind table (for example, EH
filtering funclets), we don't correctly pair FP and LR to form the frame record
in all circumstances.

Fix this by invalidating a pair when the second register is FP when compiling
for Windows, even when CFI is not needed.

Fixes PR44271 introduced by D65653.

Reviewers: efriedma, sdesmalen, rovka, rengolin, t.p.northover, thegameg, greened

Reviewed By: rengolin

Subscribers: kristof.beyls, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D71754
2020-01-14 15:08:27 +00:00
Eli Friedman e68e4cbcc5 [GlobalISel] Change representation of shuffle masks in MachineOperand.
We're planning to remove the shufflemask operand from ShuffleVectorInst
(D72467); fix GlobalISel so it doesn't depend on that Constant.

The change to prelegalizercombiner-shuffle-vector.mir happens because
the input contains a literal "-1" in the mask (so the parser/verifier
weren't really handling it properly). We now treat it as equivalent to
"undef" in all contexts.

Differential Revision: https://reviews.llvm.org/D72663
2020-01-13 16:55:41 -08:00
Danilo Carvalho Grael 2d7e757a83 [AArch64][SVE] Add patterns for some arith SVE instructions.
Summary: Add patterns for the following instructions:
- smax, smin, umax, umin

Reviewers: sdesmalen, huntergr, rengolin, efriedma, c-rhodes, mgudim, kmclaughlin

Subscribers: amehsan

Differential Revision: https://reviews.llvm.org/D71779
2020-01-13 11:39:42 -05:00
Pablo Barrio da33762de8 [AArch64] Emit HINT instead of PAC insns in Armv8.2-A or below
Summary:
The Pointer Authentication Extension (PAC) was added in Armv8.3-A. Some
instructions are implemented in the HINT space to allow compiling code
common to CPUs regardless of whether they feature PAC or not, and still
benefit from PAC protection in the PAC-enabled CPUs.

The 8.3-specific mnemonics were previously enabled for any architecture, and
LLVM was emitting them in assembly files whenever PAC code generation was
enabled. This was fine for compilations where both LLVM codegen and the
integrated assembler were used. However, the LLVM output was not
compatible with other assemblers (e.g. GAS). Given that the approach of
these assemblers (i.e. disallowing Armv8.3-A mnemonics when compiling for
Armv8.2-A or lower) is entirely reasonable, this patch makes LLVM emit
HINT when building for Armv8.2-A and below, instead of PACIASP, AUTIASP
and friends. LLVM assembly should then be compatible with other
assemblers.

Reviewers: samparker, chill, LukeCheeseman

Subscribers: kristof.beyls, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D71658
2020-01-13 14:14:48 +00:00
Simon Pilgrim 8f49204f26 [SelectionDAG] ComputeKnownBits - minimum leading/trailing zero bits in LSHR/SHL (PR44526)
As detailed in https://blog.regehr.org/archives/1709 we don't make use of the known leading/trailing zeros for shifted values in cases where we don't know the shift amount value.

This patch adds support to SelectionDAG::ComputeKnownBits to use KnownBits::countMinTrailingZeros and countMinLeadingZeros to set the minimum guaranteed leading/trailing known zero bits.
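
A small worked example of the trailing-zero half (illustrative only): shifting left can only add zeros at the bottom, so if %x has its low 4 bits known zero, so does %x << %amt for any %amt, and the final mask below folds to 0:

  %a = shl i32 %x, 4        ; low 4 bits of %a are known zero
  %b = shl i32 %a, %amt     ; still at least 4 trailing zeros, whatever %amt is
  %c = and i32 %b, 15       ; known to be 0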

Differential Revision: https://reviews.llvm.org/D72573
2020-01-13 11:08:12 +00:00
KAWASHIMA Takahiro 10c11e4e2d This option allows selecting the TLS size in the local exec TLS model,
which is the default TLS model for non-PIC objects. This allows large or
many thread-local variables, or compact/fast code, in an executable.

The specification is the same as GCC's. For example, the code model
option precedes the TLS size option.

TLS access models other than local-exec are not changed. This means
support for the large code model exists only in the local exec TLS model.

Patch By KAWASHIMA Takahiro (kawashima-fj <t-kawashima@fujitsu.com>)
Reviewers: dmgreen, mstorsjo, t.p.northover, peter.smith, ostannard
Reviewed By: peter.smith
Committed by: peter.smith

Differential Revision: https://reviews.llvm.org/D71688
2020-01-13 10:16:53 +00:00
Fangrui Song 7fa5290d5b __patchable_function_entries: don't use linkage field 'unique' with -no-integrated-as
.section name, "flags"G, @type, GroupName[, linkage]

As of binutils 2.33, linkage cannot be 'unique'.  For the integrated
assembler, we use both the 'o' flag and 'unique' linkage to support
--gc-sections and COMDAT with lld.

https://sourceware.org/ml/binutils/2019-11/msg00266.html
2020-01-12 12:53:44 -08:00
Jessica Paquette ceb801612a [AArch64] Don't generate libcalls for wide shifts on Darwin
Similar to cff90f07cb.

Darwin doesn't always use compiler-rt, and so we can't assume that these
functions are available (at least on arm64).
2020-01-10 15:58:51 -08:00
Fangrui Song 4d1e23e3b3 [AArch64] Add function attribute "patchable-function-entry" to add NOPs at function entry
The Linux kernel uses -fpatchable-function-entry to implement DYNAMIC_FTRACE_WITH_REGS
for arm64 and parisc. GCC 8 implemented
-fpatchable-function-entry, which can be seen as a generalized form of
-mnop-mcount. The N,M form (function entry points before the Mth NOP) is
currently only used by parisc.

This patch adds N,0 support to AArch64 codegen. N is represented as the
function attribute "patchable-function-entry". We will use a different
function attribute for M, if we decide to implement it.
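
Concretely (a minimal sketch), a function carrying the attribute gets N NOPs at its entry:

  define void @f() "patchable-function-entry"="2" {   ; two NOPs emitted before the body
    ret void
  }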

The patch reuses the existing patchable-function pass, and
TargetOpcode::PATCHABLE_FUNCTION_ENTER which is currently used by XRay.

When the integrated assembler is used, __patchable_function_entries will
be created for each text section with the SHF_LINK_ORDER flag to prevent
--gc-sections (https://gcc.gnu.org/bugzilla/show_bug.cgi?id=93197) and
COMDAT (https://gcc.gnu.org/bugzilla/show_bug.cgi?id=93195) issues.

In retrospect, __patchable_function_entries should use a PC-relative
relocation type to avoid the SHF_WRITE flag and dynamic relocations.

"patchable-function-entry"'s interaction with Branch Target
Identification is still unclear (see
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=92424 for GCC discussions).

Reviewed By: peter.smith

Differential Revision: https://reviews.llvm.org/D72215
2020-01-10 09:55:51 -08:00
Andrew Paverd bdd88b7ed3 Add support for __declspec(guard(nocf))
Summary:
Avoid using the `nocf_check` attribute with Control Flow Guard. Instead, use a
new `"guard_nocf"` function attribute to indicate that checks should not be
added on indirect calls within that function. Add support for
`__declspec(guard(nocf))` following the same syntax as MSVC.
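
At the IR level this is a plain string attribute; a hedged sketch (names invented) of a function whose indirect calls skip the check:

  define void @caller(void ()* %fp) "guard_nocf" {
    call void %fp()     ; no Control Flow Guard check inserted here
    ret void
  }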

Reviewers: rnk, dmajor, pcc, hans, aaron.ballman

Reviewed By: aaron.ballman

Subscribers: aaron.ballman, tomrittervg, hiraditya, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D72167
2020-01-10 16:04:12 +00:00
Douglas Yung 3727ca3137 Relax opcode checks in test for G_READCYCLECOUNTER to check for only a number instead of a specific number. 2020-01-09 17:16:52 -08:00
Amara Emerson cc95bb1f57 [AArch64][GlobalISel] Implement selection of <2 x float> vector splat.
Also requires making G_IMPLICIT_DEF of v2s32 legal.

Differential Revision: https://reviews.llvm.org/D72422
2020-01-09 14:05:35 -08:00
Jessica Paquette 9949b1a175 [GlobalISel][AArch64] Import + select LDR*roW and STR*roW patterns
This adds support for selecting a large chunk of the load/store *roW patterns.

This is pretty much a straight port of AArch64DAGToDAGISel::SelectAddrModeWRO
into GISel. The code is very similar to the XRO code. The main difference is
that in the *roW patterns, we want to try and fold in an extend, and *possibly*
a shift along with it. A good portion of this patch is refactoring the existing
XRO code.

- Add selectAddrModeWRO

- Factor out the code from selectAddrModeShiftedExtendXReg which is used by both
  selectAddrModeXRO and selectAddrModeWRO into selectExtendedSHL.
  This is similar to the function of the same name in AArch64DAGToDAGISel.

- Add support for extends to the factored out code in selectExtendedSHL.

- Teach getExtendTypeForInst how to handle AND masks that are intended to be
  used in loads/stores (necessary for this addressing mode.)

- Make getExtendTypeForInst not static because moving it made an annoying diff
  and I wanted to have the WRO/XRO functions close to each other while I was
  writing the code.

Differential Revision: https://reviews.llvm.org/D72426
2020-01-09 12:15:56 -08:00
Evgenii Stepanov 58deb20dd2 Revert "Merge memtag instructions with adjacent stack slots."
*** Bad machine code: Tied use must be a register ***
- function:    stg_alloca17
- basic block: %bb.0 entry (0x20076710580)
- instruction: early-clobber %0:gpr64common, early-clobber %1:gpr64sp = STGloop 272, %stack.0.a :: (store 272 into %ir.a, align 16)
- operand 3:   %stack.0.a

http://lab.llvm.org:8011/builders/llvm-clang-x86_64-expensive-checks-win/builds/21481/steps/test-check-all/logs/stdio

This reverts commit b675a7628c.
2020-01-08 14:36:12 -08:00
Evgenii Stepanov b675a7628c Merge memtag instructions with adjacent stack slots.
Summary:
Detect a run of memory tagging instructions for adjacent stack frame slots,
and replace them with a shorter instruction sequence
* replace STG + STG with ST2G
* replace STGloop + STGloop with STGloop

This code needs to run when stack slot offsets are already known, but before
FrameIndex operands in STG instructions are eliminated; that's the
reason for the new hook in PrologueEpilogue.

This change modifies STGloop and STZGloop pseudos to take the size as an
immediate integer operand, and base address as a FI operand when
possible. This is needed to simplify recognizing an STGloop instruction
as operating on a stack slot post-regalloc.

This improves memtag code size by ~0.25%, and it looks like an additional ~0.1%
is possible by rearranging the stack frame such that consecutive STG
instructions reference adjacent slots (patch pending).

Reviewers: pcc, ostannard

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D70286
2020-01-08 11:02:03 -08:00
Tim Northover 903e5c3028 AArch64: add missing Apple CPU names and use them by default.
Apple's CPUs are called A7-A13 in official communication, occasionally with
weird suffixes which we probably don't need to care about. This adds each one
and describes its features. It also switches the default CPU to the canonical
name for Cyclone, but leaves legacy support in so that existing bitcode still
compiles.
2020-01-08 09:24:06 +00:00
Amara Emerson b6598bcf4b [AArch64][GlobalISel] Fold a chain of two G_PTR_ADDs of constant offsets.
E.g.
%addr1 = G_PTR_ADD %base, G_CONSTANT 20
%addr2 = G_PTR_ADD %addr1, G_CONSTANT 8
  -->
%addr2 = G_PTR_ADD %base, G_CONSTANT 28

Differential Revision: https://reviews.llvm.org/D72351
2020-01-07 14:12:42 -08:00
Jessica Paquette acd2580824 [MachineOutliner][AArch64] Save + restore LR in noreturn functions
Conservatively always save + restore LR in noreturn functions.

These functions do not end in a RET, and so they aren't guaranteed to have an
instruction which uses LR in any way. So, as a result, you can end up in
unfortunate situations where you can't backtrace out of these functions in a
debugger.

Remove the old noreturn test, and add a new one which is more descriptive.

Remove the restriction that we can't outline from noreturn functions as well
since we now do the right thing.
2020-01-07 11:27:25 -08:00
Evgenii Stepanov 40a80a0a19 Lower TAGPstack with negative offset to SUBG.
Summary:
This never really occurs in the current codegen, so only a MIR test is
possible.

Reviewers: ostannard, pcc

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D72123
2020-01-06 11:48:35 -08:00
Matt Arsenault 1f950ced50 GlobalISel: Define G_READCYCLECOUNTER 2020-01-04 13:10:19 -05:00
Simon Pilgrim eb0e1978df [TargetLowering] SimplifyDemandedBits - call SimplifyMultipleUseDemandedBits for ISD::EXTRACT_VECTOR_ELT (REAPPLIED)
This patch attempts to peek through vectors based on the demanded bits/elt of a particular ISD::EXTRACT_VECTOR_ELT node, allowing us to avoid dependencies on ops that have no impact on the extract.

In particular this helps remove some unnecessary scalar->vector->scalar patterns.

The wasm shift patterns are annoying - @tlively has indicated that the wasm vector shift codegen is to be refactored in the near term and isn't considered a major issue.

Reapplied after reversion at rL368660 due to PR42982 which was fixed at rGca7fdd41bda0.

Differential Revision: https://reviews.llvm.org/D65887
2020-01-04 13:15:50 +00:00
Reid Kleckner 9c2b72821b Move tail call disabling code to target independent code
When the "disable-tail-calls" attribute was added, checks were added for
it in various backends. Now this code has proliferated, and it is
something the target is responsible for checking. Move that
responsibility back to the ISels (fast, global, and SD).
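
For reference, the attribute in question is a string function attribute; a minimal sketch:

  define void @f() "disable-tail-calls"="true" {
    tail call void @g()    ; the backend will not lower this as an actual tail call
    ret void
  }
  declare void @g()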

There's no major functionality change, except for targets that never
implemented this check.

This LLVM attribute was originally added in
d9699bc7bd (2015).

Reviewers: echristo, MaskRay

Differential Revision: https://reviews.llvm.org/D72118
2020-01-03 11:27:41 -08:00
Fangrui Song 04dbd449c2 [AArch64][test] Merge arm64-$i.ll Linux tests into $i.ll
Reviewed By: dmgreen

Differential Revision: https://reviews.llvm.org/D72061
2020-01-03 09:18:55 -08:00
Roman Lebedev 0727e2b90c [DAGCombiner][X86][AArch64] Generalize `A-(A&B)`->`A&(~B)` fold (PR44448)
The fold 'A - (A & (B - 1))' -> 'A & (0 - B)'
added in 8dab0a4a7d
is too specific. It should/can just be 'A - (A & B)' -> 'A & (~B)'

Even if we don't manage to fold `~` into B,
we have likely formed an `ANDN` node.
Also, this way there are fewer similar-but-duplicate folds.

Name: X - (X & Y)  ->  X & (~Y)
%o = and i32 %X, %Y
%r = sub i32 %X, %o
  =>
%n = xor i32 %Y, -1
%r = and i32 %X, %n

https://rise4fun.com/Alive/kOUl

See
  https://bugs.llvm.org/show_bug.cgi?id=44448
  https://reviews.llvm.org/D71499
2020-01-03 17:55:47 +03:00
Roman Lebedev 473deaf34b [NFC][X86][AArch64] Add 'A - (A & B)' pattern tests (PR44448)
The fold 'A - (A & (B - 1))' -> 'A & (0 - B)'
added in 8dab0a4a7d
is too specific. It should just be 'A - (A & B)' -> 'A & (~B)'

Name: X - (X & Y)  ->  X & (~Y)
%o = and i32 %X, %Y
%r = sub i32 %X, %o
  =>
%n = xor i32 %Y, -1
%r = and i32 %X, %n

https://rise4fun.com/Alive/kOUl

See
  https://bugs.llvm.org/show_bug.cgi?id=44448
  https://reviews.llvm.org/D71499
2020-01-03 17:55:46 +03:00
Roman Lebedev 8dab0a4a7d [DAGCombine][X86][AArch64] 'A - (A & (B - 1))' -> 'A & (0 - B)' fold (PR44448)
While we do manage to fold integer-typed IR in the middle-end,
we can't do that for the main motivational case of pointers.

There is the @llvm.ptrmask() intrinsic, which may or may not be helpful,
but I'm not sure it is considered fully canonical yet,
and likely not everything is aware of it.

https://rise4fun.com/Alive/ZVdp

Name: ptr - (ptr & (alignment-1))  ->  ptr & (0 - alignment)
  %mask = add i64 %alignment, -1
  %bias = and i64 %ptr, %mask
  %r = sub i64 %ptr, %bias
=>
  %highbitmask = sub i64 0, %alignment
  %r = and i64 %ptr, %highbitmask

See
  https://bugs.llvm.org/show_bug.cgi?id=44448
  https://reviews.llvm.org/D71499
2020-01-03 13:58:36 +03:00
Roman Lebedev c0cbe3fbb7 [NFC][DAGCombine][X86][AArch64] Tests for 'A - (A & (B - 1))' pattern (PR44448)
https://rise4fun.com/Alive/ZVdp

Name: ptr - (ptr & (alignment-1))  ->  ptr & (0 - alignment)
  %mask = add i64 %alignment, -1
  %bias = and i64 %ptr, %mask
  %r = sub i64 %ptr, %bias
=>
  %highbitmask = sub i64 0, %alignment
  %r = and i64 %ptr, %highbitmask

The main motivational pattern involves pointer-typed values,
so this transform can't really be done in the middle-end.

See
  https://bugs.llvm.org/show_bug.cgi?id=44448
  https://reviews.llvm.org/D71499
2020-01-03 13:58:36 +03:00
Evgenii Stepanov b153fbefa3 Change dbg-*-tag-offset tests to use llvm-dwarfdump.
Reviewers: dblaikie

Subscribers: llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D72023
2020-01-02 14:35:54 -08:00
Andrzej Warzynski 404da13e1e [AArch64][SVE] Gather loads: pass 32 bit unpacked offsets as nxv2i32
Summary:
Currently 32 bit unpacked offsets are passed as nxv2i64. However, as
pointed out in https://reviews.llvm.org/D71074, using nxv2i32 instead
would improve consistency with:
  * how other arguments are treated
  * how scatter stores are implemented
This patch makes sure that 32 bit unpacked offsets are passed as nxv2i32
instead of nxv2i64.

Reviewers: sdesmalen, efriedma

Subscribers: tschuett, kristof.beyls, hiraditya, rkruppe, psnobl, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D71724
2020-01-02 13:01:28 +00:00
Matt Arsenault 4d7201e7b9 DAG: Stop trying to fold FP -(x-y) -> y-x in getNode with nsz
This was increasing the number of instructions when fsub was legalized
on AMDGPU with no signed zeros enabled. This fold should be guarded by
hasOneUse, and I don't think getNode should be doing that. The same
fold is already done as a regular combine through isNegatibleForFree.

This does require duplicating, even though isNegatibleForFree does
this combine already (and properly checks hasOneUse) to avoid one PPC
regression. In the regression, the outer fneg has nsz but the fsub
operand does not. isNegatibleForFree only sees the operand, and
doesn't see it's used from a nsz context. A nsz parameter needs to be
added and threaded through isNegatibleForFree to avoid this.
2019-12-31 22:49:51 -05:00
Sanjay Patel e6bdecf1cd [AArch64] add test for fsub+fneg; NFC
D72015 proposes to restrict the current behavior.
2019-12-31 10:25:41 -05:00
Fangrui Song a36ddf0aa9 Migrate function attribute "no-frame-pointer-elim"="false" to "frame-pointer"="none" as cleanups after D56351 2019-12-24 16:27:51 -08:00
Fangrui Song 502a77f125 Migrate function attribute "no-frame-pointer-elim" to "frame-pointer"="all" as cleanups after D56351 2019-12-24 15:57:33 -08:00
Sanjay Patel 8cefc37be5 [DAGCombine] visitEXTRACT_SUBVECTOR - 'little to big' extract_subvector(bitcast()) support
This moves the X86 specific transform from rL364407
into DAGCombiner to generically handle 'little to big' cases
(for example: extract_subvector(v2i64 bitcast(v16i8))). This
allows us to remove both the x86 implementation and the aarch64
bitcast(extract_subvector(bitcast())) combine.

Earlier patches that dealt with regressions initially exposed
by this patch:
rG5e5e99c041e4
rG0b38af89e2c0

Patch by: @RKSimon (Simon Pilgrim)

Differential Revision: https://reviews.llvm.org/D63815
2019-12-23 10:11:45 -05:00
Martin Storsjö 5a751e747d [AArch64] [Windows] Use COFF stubs for calls to extern_weak functions
As the extern_weak target might be missing, resolving to the absolute
address zero, we can't use the normal direct PC-relative branch
instructions (as that would result in relocations out of range).
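
For context (a hedged sketch, not from the patch), the situation arises for declarations like:

  declare extern_weak void @maybe_missing()

  define void @f() {
    call void @maybe_missing()    ; may resolve to address zero, out of range for a direct bl
    ret void
  }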

Improve the classifyGlobalFunctionReference method to set
MO_DLLIMPORT/MO_COFFSTUB, and simplify the existing code in
AArch64TargetLowering::LowerCall to use the return value from
classifyGlobalFunctionReference for these cases.

Add code in both AArch64FastISel and GlobalISel/IRTranslator to
bail out for function calls to extern weak functions on windows,
to let SelectionDAG handle them.

This matches what was done for X86 in 6bf108d77a.

Differential Revision: https://reviews.llvm.org/D71721
2019-12-23 12:13:49 +02:00
Sanjay Patel 0b38af89e2 [AArch64] match splat of bitcasted extract subvector to DUPLANE
This is another potential regression exposed by D63815.

Here we peek through a bitcast to find an extract subvector and
scale the splat offset based on that:
splat (bitcast (extract X, C)), LaneC --> duplane (bitcast X), LaneC'

Differential Revision: https://reviews.llvm.org/D71672
2019-12-22 08:37:03 -05:00
Florian Hahn d269255b95 [AArch64] Respect reserved registers while renaming in LdSt opt.
We cannot pick reserved registers as rename registers.

Fixes https://bugs.llvm.org/show_bug.cgi?id=44358
2019-12-21 15:10:07 +01:00
Danilo Carvalho Grael 15bfd2cd54 [AArch64][SVE] Replace integer immediate intrinsics with splat vector variant
Summary: Replace the integer immediate intrinsics with splat vector variants so they can be applied as optimizations for the C/C++ intrinsics.

Reviewers: sdesmalen, huntergr, rengolin, efriedma, c-rhodes, mgudim, kmclaughlin

Subscribers: tschuett, kristof.beyls, hiraditya, rkruppe, psnobl, llvm-commits, amehsan

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D71614
2019-12-20 13:52:19 -05:00
Paul Walker 6cba90dc4d [AArch64][SVE] Correct intrinsics and patterns for logical predicate instructions
In general, SVE intrinsics are considered predicated and merging,
with everything else having suitable decoration.  For predicated
zeroing operations (like the predicate logical instructions) we
use the "_z" suffix.  After this change all intrinsics use their
expected names (i.e. orr instead of or and eor instead of xor).

I've removed intrinsics and patterns for condition code setting
instructions as that data is not returned as part of the intrinsic.
The expectation is to ask for a cc flag explicitly.

For example:
  a = and_z(pg, p1, p2)
  cc = ptest_<flag>(pg, a)

With the code generator expected to use "s" variants of instructions
when available.

Differential Revision: https://reviews.llvm.org/D71715
2019-12-20 14:22:27 +00:00
Sanjay Patel 59811f454d [AArch64] add more tests for extract-bitcast-splat; NFC
Goes with D71672 - we should be able to handle casting to
a wider type as well as casting to a narrower type.
2019-12-20 08:57:23 -05:00
Cullen Rhodes 974f00a436 [AArch64][SVE] Fold constant multiply of element count
Summary:
E.g.

  %0 = tail call i64 @llvm.aarch64.sve.cntw(i32 31)
  %mul = mul i64 %0, <const>

Should emit:

  cntw    x0, all, mul #<const>

For <const> in the range 1-16.

Patch by Kerry McLaughlin

Reviewers: sdesmalen, huntergr, dancgr, rengolin, efriedma

Reviewed By: sdesmalen

Subscribers: tschuett, kristof.beyls, hiraditya, rkruppe, psnobl, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D71014
2019-12-20 11:58:00 +00:00
Andrzej Warzynski be2b7ea89a [AArch64][SVE] Add intrinsics for saturating scalar arithmetic
Summary:
The following intrinsics are added:
  * @llvm.aarch64.sve.sqdec{b|h|w|d|p}
  * @llvm.aarch64.sve.sqinc{b|h|w|d|p}
  * @llvm.aarch64.sve.uqdec{b|h|w|d|p}
  * @llvm.aarch64.sve.uqinc{b|h|w|d|p}

For every intrinsic there are scalar variants (with an n32 or n64 suffix)
and vector variants (no suffix).
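
As an illustrative example (operands sketched from the general pattern, not copied from the patch), a scalar saturating decrement by the number of halfword elements might look like:

  %r = call i32 @llvm.aarch64.sve.sqdech.n32(i32 %x, i32 31, i32 1)    ; pattern "all" (31), multiplier 1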

Reviewers: sdesmalen, rengolin, efriedma

Reviewed By: sdesmalen, efriedma

Subscribers: eli.friedman, tschuett, kristof.beyls, hiraditya, rkruppe, psnobl, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D71252
2019-12-20 11:09:40 +00:00
Cullen Rhodes 3f9005eb89 Recommit "[AArch64][SVE] Add permutation and selection intrinsics"
Recommit 23c28c4043 (reverted in
dcb48f50bd) with a fix for an assert
"Request for a fixed size on a scalable object" being triggered in
`LowerSVEIntrinsicEXT`. The fix is to call `getKnownMinSize` on the
TypeSize object.
2019-12-20 10:45:17 +00:00
Andrzej Warzynski 88a973cf68 [AArch64][SVE] Add intrinsics for binary narrowing operations
Summary:
The following intrinsics for binary narrowing shift right operations are
added:
  * @llvm.aarch64.sve.shrnb
  * @llvm.aarch64.sve.uqshrnb
  * @llvm.aarch64.sve.sqshrnb
  * @llvm.aarch64.sve.sqshrunb
  * @llvm.aarch64.sve.uqrshrnb
  * @llvm.aarch64.sve.sqrshrnb
  * @llvm.aarch64.sve.sqrshrunb
  * @llvm.aarch64.sve.shrnt
  * @llvm.aarch64.sve.uqshrnt
  * @llvm.aarch64.sve.sqshrnt
  * @llvm.aarch64.sve.sqshrunt
  * @llvm.aarch64.sve.uqrshrnt
  * @llvm.aarch64.sve.sqrshrnt
  * @llvm.aarch64.sve.sqrshrunt

Reviewers: sdesmalen, rengolin, efriedma

Reviewed By: efriedma

Subscribers: tschuett, kristof.beyls, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D71552
2019-12-20 10:20:30 +00:00
Cullen Rhodes dcb48f50bd Revert "[AArch64][SVE] Add permutation and selection intrinsics"
This reverts commit 23c28c4043.

It caused build failures in the following expensive checks builders:

    http://lab.llvm.org:8011/builders/llvm-clang-x86_64-expensive-checks-ubuntu/builds/1295
    http://lab.llvm.org:8011/builders/llvm-clang-x86_64-expensive-checks-debian/builds/700

Reverting for now whilst I figure out what the issue is.
2019-12-19 14:26:14 +00:00
Cullen Rhodes 23c28c4043 [AArch64][SVE] Add permutation and selection intrinsics
Summary:
Adds the following intrinsics:

    * @llvm.aarch64.sve.clasta
    * @llvm.aarch64.sve.clasta_n
    * @llvm.aarch64.sve.clastb
    * @llvm.aarch64.sve.clastb_n
    * @llvm.aarch64.sve.compact
    * @llvm.aarch64.sve.ext
    * @llvm.aarch64.sve.lasta
    * @llvm.aarch64.sve.lastb
    * @llvm.aarch64.sve.rev
    * @llvm.aarch64.sve.splice
    * @llvm.aarch64.sve.tbl
    * @llvm.aarch64.sve.trn1
    * @llvm.aarch64.sve.trn2
    * @llvm.aarch64.sve.uzp1
    * @llvm.aarch64.sve.uzp2
    * @llvm.aarch64.sve.zip1
    * @llvm.aarch64.sve.zip2

Reviewers: sdesmalen, efriedma, dancgr, mgudim, huntergr, rengolin

Reviewed By: sdesmalen, efriedma

Subscribers: kmclaughlin, tschuett, kristof.beyls, hiraditya, rkruppe, psnobl, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D71401
2019-12-19 13:18:40 +00:00
Cullen Rhodes eca0c97a6b [AArch64][SVE] Implement pfirst and pnext intrinsics
Reviewers: sdesmalen, efriedma, dancgr, mgudim, cameron.mcinally

Reviewed By: cameron.mcinally

Subscribers: tschuett, kristof.beyls, hiraditya, rkruppe, psnobl,
llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D71472
2019-12-19 11:03:32 +00:00
Cullen Rhodes 49199465a3 [AArch64][SVE] Implement ptrue intrinsic
Reviewers: sdesmalen, eli.friedman, dancgr, mgudim, cameron.mcinally,
huntergr, efriedma

Reviewed By: sdesmalen

Subscribers: tschuett, kristof.beyls, hiraditya, rkruppe, psnobl,
llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D71457
2019-12-19 11:02:05 +00:00
Danilo Carvalho Grael c7abf88411 Revert "[AArch64][SVE] Replace integer immediate intrinsics with splat vector variant"
This reverts commit 830e08b98b and eb1857ce0d.

This commit leads to an unexpected failure on test/CodeGen/AArch64/sve-gather-scatter-dag-combine.ll.

The review will need more changes before it is re-committed.
2019-12-18 14:14:10 -05:00
Danilo Carvalho Grael eb1857ce0d [AArch64][SVE] Fix gather scatter dag combine test. 2019-12-18 13:44:25 -05:00
Danilo Carvalho Grael 830e08b98b [AArch64][SVE] Replace integer immediate intrinsics with splat vector variant
Summary: Replace the integer immediate intrinsics with splat vector variants so they can be applied as optimizations for the C/C++ intrinsics.

Reviewers: sdesmalen, huntergr, rengolin, efriedma, c-rhodes, mgudim, kmclaughlin

Subscribers: tschuett, kristof.beyls, hiraditya, rkruppe, psnobl, llvm-commits, amehsan

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D71614
2019-12-18 13:11:21 -05:00
Sanjay Patel b99111b3e4 [AArch64] add tests for bitcasted DUPLANE; NFC
See D63815 for context/motivation.
2019-12-18 12:14:20 -05:00
Sanjay Patel e67462a719 [AArch64] update test checks; NFC
The common prefix reduces a bunch of replication; not sure why it
didn't happen before.
2019-12-18 11:05:06 -05:00
Sanjay Patel 5e5e99c041 [AArch64] match fcvtl2 with bitcasted extract
This should eliminate a regression seen in D63815.

If we are FP extending the high half extract of a vector,
we should be able to peek through a bitcast sitting
between the extract and extend.
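
Roughly (a sketch without the intervening bitcast), the fcvtl2 pattern being matched is an fpext of the high half of a vector:

  %hi = shufflevector <4 x float> %v, <4 x float> undef, <2 x i32> <i32 2, i32 3>
  %d  = fpext <2 x float> %hi to <2 x double>    ; fcvtl2 v0.2d, v0.4s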

This replaces tablegen patterns with a more general
DAG to DAG override, so we can handle any casted type.

Differential Revision: https://reviews.llvm.org/D71515
2019-12-18 08:47:07 -05:00
Victor Campos 364b8f5fbe [AArch64] Improve codegen of volatile load/store of i128
Summary:
Instead of generating two i64 instructions for each load or store of a
volatile i128 value (two LDRs or STRs), now emit a single LDP or STP.
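
A minimal sketch of the affected pattern (function name invented):

  define i128 @load_v(i128* %p) {
    %v = load volatile i128, i128* %p    ; now a single ldp x0, x1, [x0] rather than two ldr
    ret i128 %v
  }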

Reviewers: labrinea, t.p.northover, efriedma

Reviewed By: efriedma

Subscribers: kristof.beyls, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D69559
2019-12-18 10:03:12 +00:00
Jay Foad 97ca7c2cc9 [AArch64] Enable clustering memory accesses to fixed stack objects
Summary:
r347747 added support for clustering mem ops with FI base operands
including support for fixed stack objects in shouldClusterFI, but
apparently this was never tested.

This patch fixes shouldClusterFI to work with scaled as well as
unscaled load/store instructions, and fixes the ordering of memory ops
in MemOpInfo::operator< to ensure that memory addresses always
increase, regardless of which direction the stack grows.

Subscribers: MatzeB, kristof.beyls, hiraditya, javed.absar, arphaman, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D71334
2019-12-18 09:46:11 +00:00
Xiaoqing Wu a17619e0b0 [AArch64][GlobalISel]: Fix a crash in GlobalIsel in dealing with 16bit uadd.with.overflow.
Summary: AArch64 doesn't support uadd.with.overflow.i16 natively. This change adds a legalization rule to convert the 32-bit add result back to 16 bits. This should fix PR43981.
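
The crashing input boils down to something like this (a sketch):

  declare { i16, i1 } @llvm.uadd.with.overflow.i16(i16, i16)

  define i1 @overflows(i16 %a, i16 %b) {
    %s = call { i16, i1 } @llvm.uadd.with.overflow.i16(i16 %a, i16 %b)
    %o = extractvalue { i16, i1 } %s, 1
    ret i1 %o
  }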

Reviewers: arsenm, qcolombet, paquette, aemerson

Reviewed By: paquette

Subscribers: wdng, rovka, kristof.beyls, hiraditya, Petar.Avramovic, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D71587
2019-12-17 16:05:00 -08:00
Craig Topper 84d8fa30f9 [FPEnv][LegalizeTypes][LegalizeDAG][AArch64] Few fixes/improvements for legalizing fp<->int conversion nodes.
This started with adding a test to support get code coverage on
ScalarizeVecOp_UnaryOp_StrictFP by copying an existing AArch64 test
and using constrained sitofp/uitofp intrinsics.

This found 3 separate issues:
-ScalarizeVecOp_UnaryOp_StrictFP needs to do its own replacement
 because the caller can't handle replacing multiple results.
-Missing integer promotion support for sitofp/uitofp
-Chain result not always assigned in ExpandLegalINT_TO_FP.

Committing them together so I can add the test case.
2019-12-17 14:37:00 -08:00
Sanjay Patel 6a77e36975 [SDAG] adjust isNegatibleForFree calculation to avoid crashing
This is an alternate fix for the bug discussed in D70595.
This also includes minimal tests for other in-tree targets to show the problem more
generally.

We check the number of uses as a predicate for whether some value is free to negate,
but that use count can change as we rewrite the expression in getNegatedExpression().
So something that was marked free to negate during the cost evaluation phase becomes
not free to negate during the rewrite phase (or the inverse - something that was not
free becomes free). This can lead to a crash/assert because we expect that everything
in an expression that is negatible to be handled in the corresponding code within
getNegatedExpression().

This patch adds a hack to work-around the case where we probably no longer detect
that either multiply operand of an FMA isNegatibleForFree which is assumed to be
true when we started rewriting the expression.

Differential Revision: https://reviews.llvm.org/D70975
2019-12-17 13:49:15 -05:00
Sanjay Patel 5b0251da1c Revert "[SDAG] remove use restriction in isNegatibleForFree() when called from getNegatedExpression()"
This reverts commit 36b1232ec5.
Need to adjust commit message - that was a leftover from the earlier version.
2019-12-17 13:47:59 -05:00
Sanjay Patel 36b1232ec5 [SDAG] remove use restriction in isNegatibleForFree() when called from getNegatedExpression()
This is an alternate fix for the bug discussed in D70595.
This also includes minimal tests for other in-tree targets to show the problem more
generally.

We check the number of uses as a predicate for whether some value is free to negate,
but that use count can change as we rewrite the expression in getNegatedExpression().
So something that was marked free to negate during the cost evaluation phase becomes
not free to negate during the rewrite phase (or the inverse - something that was not
free becomes free). This can lead to a crash/assert because we expect that everything
in an expression that is negatible to be handled in the corresponding code within
getNegatedExpression().

This patch adds a hack to work-around the case where we probably no longer detect
that either multiply operand of an FMA isNegatibleForFree which is assumed to be
true when we started rewriting the expression.

Differential Revision: https://reviews.llvm.org/D70975
2019-12-17 13:46:06 -05:00
Sanjay Patel fbaf835c5c [AArch64] add tests for fcvtl2; NFC 2019-12-17 10:39:37 -05:00
alex-t e7f585ed61 PostRA Machine Sink should handle a COPY defining a register that is a sub-register of another COPY's source operand
Differential Revision: https://reviews.llvm.org/D71132
2019-12-17 15:20:43 +03:00
Kristof Beyls 870f39d310 Fix assertion failure in getMemOperandWithOffsetWidth
This fixes an assertion failure that triggers inside
getMemOperandWithOffset when Machine Sinking calls it on a MachineInstr
that is not a memory operation.

Different backends implement getMemOperandWithOffset differently: some
return false on non-memory MachineInstrs, others assert.

The Machine Sinking pass in at least SinkingPreventsImplicitNullCheck
relies on getMemOperandWithOffset to return false on non-memory
MachineInstrs, instead of asserting.

This patch updates the documentation on getMemOperandWithOffset that it
should return false on any MachineInstr it cannot handle, instead of
asserting. It also adapts the in-tree backends accordingly where
necessary.

Differential Revision: https://reviews.llvm.org/D71359
2019-12-17 10:56:09 +00:00
Danilo Carvalho Grael f933878991 [AArch64][SVE] Add patterns for logical immediate operations.
Summary:
Add pattern matching for the following SVE logical vector and immediate instructions:
- and/bic, orr/orn, eor/eon.

Reviewers: sdesmalen, huntergr, rengolin, efriedma, c-rhodes, mgudim, kmclaughlin

Subscribers: tschuett, kristof.beyls, hiraditya, rkruppe, psnobl, llvm-commits, amehsan

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D71483
2019-12-16 16:18:34 -05:00
David Tellenbach df0cc105fa Reland [AArch64][MachineOutliner] Return address signing for outlined functions
Summary:
Reland after fixing a bug that allowed outlining of SP modifying instructions
that invalidated return address signing.

During AArch64 frame lowering, instructions to enable return address
signing are inserted into functions if needed. Functions generated during
machine outlining don't run through target frame lowering and hence are
missing such instructions.

This patch introduces the following changes:

1. If not all functions that potentially participate in function outlining agree
   on their return address signing scope and their return address signing key,
   outlining is disabled for these functions.
2. If not all functions that potentially participate in function outlining agree
   on their support for v8.3A features, outlining is disabled for these
   functions.
3. If an outlining candidate would outline instructions that modify sp in a way
   that invalidates return address signing, outlining is disabled for that
   particular candidate.
4. If all candidate functions agree on the signing scope, signing key and their
   support for v8.3 features, the outlined function behaves as if it had the
   same scope and key attributes and as if it would provide the same v8.3A
   support as the original functions.

Reviewers: ostannard, paquette

Reviewed By: ostannard

Subscribers: kristof.beyls, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D70635
2019-12-16 14:40:45 +01:00
Andrzej Warzynski c41d2b5ab2 [AArch64][SVE2] Add intrinsics for binary narrowing operations
Summary:
The following intrinsics for binary narrowing add and sub operations are
added:
  * @llvm.aarch64.sve.addhnb
  * @llvm.aarch64.sve.addhnt
  * @llvm.aarch64.sve.raddhnb
  * @llvm.aarch64.sve.raddhnt
  * @llvm.aarch64.sve.subhnb
  * @llvm.aarch64.sve.subhnt
  * @llvm.aarch64.sve.rsubhnb
  * @llvm.aarch64.sve.rsubhnt

Reviewers: sdesmalen, rengolin, efriedma

Reviewed By: sdesmalen, efriedma

Subscribers: tschuett, kristof.beyls, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D71424
2019-12-16 12:22:56 +00:00
Kristof Beyls 7f4f07ddf3 [AArch64] Enable emission of stack maps for non-Mach-O binaries on AArch64.
The emission of stack maps in AArch64 binaries has been disabled for all
binary formats except Mach-O since rL206610, probably mistakenly, as far
as I can tell. This patch reverts this to its intended state.

Differential Revision: https://reviews.llvm.org/D70069

Patch by Loic Ottet.
2019-12-16 12:02:47 +00:00
Andrzej Warzynski 7e20c3a71d [Aarch64][SVE] Add intrinsics for scatter stores
Summary:
This patch adds the following SVE intrinsics for scatter stores:
* 64-bit offsets:
  * @llvm.aarch64.sve.st1.scatter (unscaled)
  * @llvm.aarch64.sve.st1.scatter.index (scaled)
* 32-bit unscaled offsets:
  * @llvm.aarch64.sve.st1.scatter.uxtw (zero-extended offset)
  * @llvm.aarch64.sve.st1.scatter.sxtw (sign-extended offset)
* 32-bit scaled offsets:
  * @llvm.aarch64.sve.st1.scatter.uxtw.index (zero-extended offset)
  * @llvm.aarch64.sve.st1.scatter.sxtw.index (sign-extended offset)
* vector base + immediate:
  * @llvm.aarch64.sve.st1.scatter.imm

Reviewers: rengolin, efriedma, sdesmalen

Reviewed By: efriedma, sdesmalen

Subscribers: kmclaughlin, eli.friedman, tschuett, kristof.beyls, hiraditya, rkruppe, psnobl, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D71074
2019-12-16 11:52:53 +00:00
Sanjay Patel 2afe864118 [DAG] Add SimplifyDemandedBits support for BSWAP
This exposes a shortcoming for AArch64, and that is tracked by PR40881:
https://bugs.llvm.org/show_bug.cgi?id=40881
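
A small worked example of the idea (illustrative only): if only the low byte of a bswap result is demanded, only the high byte of its input matters, so the other input bits can be simplified away:

  %b = call i32 @llvm.bswap.i32(i32 %x)
  %r = and i32 %b, 255     ; demands only bits 24-31 of %x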

Patch by: @RKSimon (Simon Pilgrim)

Differential Revision: https://reviews.llvm.org/D58017
2019-12-15 08:52:34 -05:00