Commit Graph

3414 Commits

Author SHA1 Message Date
Kerry McLaughlin 0bba37a320 [AArch64][SVE] Add SVE intrinsics for address calculations
Summary: Adds the @llvm.aarch64.sve.adr[b|h|w|d] intrinsics

Reviewers: sdesmalen, andwar, efriedma, dancgr, cameron.mcinally, rengolin

Reviewed By: sdesmalen

Subscribers: tschuett, kristof.beyls, hiraditya, rkruppe, psnobl, danielkiss, cfe-commits, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D75858
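
For illustration, a minimal IR sketch of how one of these intrinsics could be exercised; the overload suffix and operand order are assumed here rather than taken from the patch:

```
define <vscale x 2 x i64> @adrb(<vscale x 2 x i64> %base, <vscale x 2 x i64> %offsets) {
  ; Hypothetical use: compute a vector of byte-granule addresses (adr z0.d, [z0.d, z1.d]).
  %out = call <vscale x 2 x i64> @llvm.aarch64.sve.adrb.nxv2i64(<vscale x 2 x i64> %base, <vscale x 2 x i64> %offsets)
  ret <vscale x 2 x i64> %out
}
declare <vscale x 2 x i64> @llvm.aarch64.sve.adrb.nxv2i64(<vscale x 2 x i64>, <vscale x 2 x i64>)
```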
2020-03-10 10:53:37 +00:00
Cameron McInally 2ab8065df6 [AArch64][SVE] Add missing fp16 DestructiveInstType tests
These tests should have been added with a5b22b768f in D73711.

Differential Revision: https://reviews.llvm.org/D75767
2020-03-09 14:09:23 -05:00
KAWASHIMA Takahiro c8cd1a994d [AArch64] Add support for Fujitsu A64FX
A64FX is an Armv8.2-A CPU used in FUJITSU Supercomputer
PRIMEHPC FX1000, PRIMEHPC FX700, and supercomputer Fugaku.

https://www.fujitsu.com/global/products/computing/servers/supercomputer/specifications/

Differential Revision: https://reviews.llvm.org/D75594
2020-03-09 19:15:09 +09:00
Amara Emerson c1a97e992d Revert "Revert "[GlobalISel][Localizer] Enable intra-block localization of already-local uses.""
This reverts commit 5583c2f2fb.

The lldb bot failure was caused by a test that was fragile and sensitive to
irrelevant changes in instruction ordering. Re-committing this, as the test
should now be skipped for AArch64.

Differential Revision: https://reviews.llvm.org/D75555
2020-03-06 21:35:08 -08:00
Jin Lin fc6fda90f7 Fix incorrect logic in maintaining the side effects of compiler-generated outlined functions
Summary: Fix incorrect logic in maintaining the side effects of compiler-generated outlined functions by adding the upward-exposed uses.

Reviewers: paquette, tellenbach

Reviewed By: paquette

Subscribers: aemerson, lebedev.ri, hiraditya, llvm-commits, jinlin

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D71217
2020-03-06 09:13:20 -08:00
Fangrui Song 71e2ca6e32 [llvm-objdump] -d: print `00000000 <foo>:` instead of `00000000 foo:`
The new behavior matches GNU objdump. A pair of angle brackets makes tests slightly easier.

`.foo:` is not unique and thus cannot be used in a `CHECK-LABEL:` directive.
Without `-LABEL`, the CHECK line can match the `Disassembly of section`
line and cause the next `CHECK-NEXT:` to fail.

```
Disassembly of section .foo:

0000000000001634 .foo:
```

Bdragon: <> has a metalinguistic connotation. It just "feels right".

Reviewed By: rupprecht

Differential Revision: https://reviews.llvm.org/D75713
2020-03-05 18:05:28 -08:00
Jessica Paquette ef4282e0ee [AArch64][GlobalISel] Avoid copies to target register bank for subregister copies
Previously, any copy from a register bigger than the destination was handled in
two steps:

1. Copy to a same-sized register in the destination register bank.
2. Subregister copy of that register to the destination.

This fails for copies from 128-bit FPRs to GPRs because the GPR register bank
can't accommodate 128-bit values.

Instead of special-casing such copies to perform the truncation beforehand in
the source register bank, generalize this:
a) Perform a subregister copy straight from source register whenever possible.
This results in shorter MIR and fixes the above problem.

b) Perform a full copy to target bank and then do a subregister copy only if
source bank can't support target's size. E.g. GPR to 8-bit FPR copy.

Patch by Raul Tambre (tambre)!

Differential Revision: https://reviews.llvm.org/D75421
2020-03-05 11:13:02 -08:00
Muhammad Omair Javaid 5583c2f2fb Revert "[GlobalISel][Localizer] Enable intra-block localization of already-local uses."
This reverts commit e91e1df6ab.
2020-03-05 03:12:28 +05:00
Matt Arsenault fb0c35fa34 GlobalISel: Set alignment on function argument stack load/store 2020-03-04 16:38:46 -05:00
Sanjay Patel 29a2b20ab3 [SDAG] simplify FP binops to undef
As discussed in the commit thread for rGa253a2a and D73978, we can do more undef folding for FP ops.
The nnan and ninf fast-math-flags specify that if an operand is the disallowed value, the result is
poison, so we can produce an undef result.

But this doesn't work as expected (the undef operand cases remain) because of a Flags propagation
problem in SelectionDAGBuilder.

I've added DAGCombiner calls to enable these for the other cases because we've shown in other
patches that (because of the limited way that SDAG iterates), it is possible to miss simplifications
like this if they are done only at node creation time.

Several potential follow-ups to expand on this patch are possible.

Differential Revision: https://reviews.llvm.org/D75576
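
As a small illustrative IR sketch (not from the patch): with nnan, an undef operand may be NaN, which would make the result poison, so the whole expression can be folded to undef.

```
define float @fadd_undef_nnan(float %x) {
  ; nnan promises no NaN operands/results; undef may be NaN, so the fadd can fold to undef.
  %r = fadd nnan float %x, undef
  ret float %r
}
```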
2020-03-04 10:42:16 -05:00
Kerry McLaughlin f5502c7035 [AArch64][SVE] Add SVE2 intrinsic for xar
Summary: Implements the @llvm.aarch64.sve.xar intrinsic

Reviewers: andwar, c-rhodes, dancgr, efriedma, rengolin

Reviewed By: andwar

Subscribers: tschuett, kristof.beyls, hiraditya, rkruppe, psnobl, cfe-commits, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D75160
2020-03-04 11:44:32 +00:00
Amara Emerson e91e1df6ab [GlobalISel][Localizer] Enable intra-block localization of already-local uses.
This changes the localizer to attempt intra-block localization of instructions
that have local uses. This is useful because sometimes the entry block itself
has many uses of constant-like instructions, which would benefit from shortening
live ranges. Previously if an inst had no non-local uses, we wouldn't add it to
the list of instructions to attempt further intra-block localization.

This gives a 0.7% geomean code size improvement on CTMark.

Differential Revision: https://reviews.llvm.org/D75555
2020-03-03 18:14:57 -08:00
Sanjay Patel f95095e9f6 [AArch64] add tests for nnan/ninf/undef FP simplifications; NFC 2020-03-03 16:38:58 -05:00
Jessica Paquette 02c154a9cb [AArch64][MachineOutliner] Don't outline CFI instructions
CFI instructions can only safely be outlined when the outlined call is a tail
call, or when the outlined frame is fixed up.

For the sake of correctness, disable outlining from CFI instructions.

Add machine-outliner-cfi.mir to test this.
2020-03-02 10:56:35 -08:00
Andrzej Warzynski 9249f60602 [AArch64][SVE] Add intrinsics for non-temporal gather-loads/scatter-stores
Summary:
This patch adds the following LLVM IR intrinsics for SVE:
1. non-temporal gather loads
  * @llvm.aarch64.sve.ldnt1.gather
  * @llvm.aarch64.sve.ldnt1.gather.uxtw
  * @llvm.aarch64.sve.ldnt1.gather.scalar.offset
2. non-temporal scatter stores
  * @llvm.aarch64.sve.stnt1.scatter
  * @llvm.aarch64.sve.stnt1.scatter.uxtw
  * @llvm.aarch64.sve.stnt1.scatter.scalar.offset
These intrinsic are mapped to the corresponding SVE instructions
(example for half-words, zero-extending):
  * ldnt1h { z0.s }, p0/z, [z0.s, x0]
  * stnt1h { z0.s }, p0/z, [z0.s, x0]

Note that for non-temporal gathers/scatters, the SVE spec defines only
one instruction type: "vector + scalar". For this reason, we swap the
arguments when processing intrinsics that implement the "scalar +
vector" addressing mode:
  * @llvm.aarch64.sve.ldnt1.gather
  * @llvm.aarch64.sve.ldnt1.gather.uxtw
  * @llvm.aarch64.sve.stnt1.scatter
  * @llvm.aarch64.sve.stnt1.scatter.uxtw
In other words, all intrinsics for gather-loads and scatter-stores
implemented in this patch are mapped to the same load and store
instruction, respectively.

The sve2_mem_gldnt_vs multiclass (and its counterpart for scatter
stores) from SVEInstrFormats.td was split into:
  * sve2_mem_gldnt_vec_vs_32_ptrs (32-bit wide base addresses)
  * sve2_mem_gldnt_vec_vs_64_ptrs (64-bit wide base addresses)
This is consistent with what we did for
@llvm.aarch64.sve.ld1.scalar.offset and highlights the actual split in
the spec and the implementation.

Reviewed by: sdesmalen

Differential Revision: https://reviews.llvm.org/D74858
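
A sketch of the "vector base + scalar offset" form in IR; the overload mangling and operand order below are assumptions, not copied from the patch:

```
define <vscale x 4 x i32> @gldnt1w(<vscale x 4 x i1> %pg, <vscale x 4 x i32> %bases, i64 %offset) {
  ; Assumed form: predicate, vector of base addresses, scalar byte offset.
  %load = call <vscale x 4 x i32> @llvm.aarch64.sve.ldnt1.gather.scalar.offset.nxv4i32.nxv4i32(<vscale x 4 x i1> %pg, <vscale x 4 x i32> %bases, i64 %offset)
  ret <vscale x 4 x i32> %load
}
declare <vscale x 4 x i32> @llvm.aarch64.sve.ldnt1.gather.scalar.offset.nxv4i32.nxv4i32(<vscale x 4 x i1>, <vscale x 4 x i32>, i64)
```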
2020-03-02 10:38:28 +00:00
Sanjay Patel 619d7dc39a [DAGCombiner] recognize shuffle (shuffle X, Mask0), Mask --> splat X
We get the simple cases of this via demanded elements and other folds,
but that doesn't work if the values have >1 use, so add a dedicated
match for the pattern.

We already have this transform in IR, but it doesn't help the
motivating x86 tests (based on PR42024) because the shuffles don't
exist until after legalization and other combines have happened.
The AArch64 test shows a minimal IR example of the problem.

Differential Revision: https://reviews.llvm.org/D75348
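
An IR-level analogue of the pattern (the combine itself runs on SelectionDAG nodes after legalization): the outer shuffle reads only lanes of an inner splat, so the whole expression is a splat of element 0 of %x.

```
define <4 x i32> @splat_of_splat(<4 x i32> %x) {
  ; %s1 is a splat of %x[0]; %s2 only permutes lanes of that splat,
  ; so the result is still a splat of %x[0].
  %s1 = shufflevector <4 x i32> %x, <4 x i32> undef, <4 x i32> zeroinitializer
  %s2 = shufflevector <4 x i32> %s1, <4 x i32> undef, <4 x i32> <i32 1, i32 3, i32 2, i32 0>
  ret <4 x i32> %s2
}
```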
2020-03-01 09:10:25 -05:00
serge-sans-paille 6d15c4deab No longer generate calls to *_finite
According to Joseph Myers, a libm maintainer:

> They were only ever an ABI (selected by use of -ffinite-math-only or
> options implying it, which resulted in the headers using "asm" to redirect
> calls to some libm functions), not an API. The change means that ABI has
> turned into compat symbols (only available for existing binaries, not for
> anything newly linked, not included in static libm at all, not included in
> shared libm for future glibc ports such as RV32), so, yes, in any case
> where tools generate direct calls to those functions (rather than just
> following the "asm" annotations on function declarations in the headers),
> they need to stop doing so.

As a consequence, we should no longer assume these symbols are available on the
target system.

Still keep the TargetLibraryInfo for constant folding.

Differential Revision: https://reviews.llvm.org/D74712
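
A hedged IR sketch of the kind of call affected: depending on target and fast-math settings, a call like this might previously have been lowered to the __exp_finite ABI symbol, and now lowers to plain exp.

```
define double @call_exp(double %x) {
  ; Finite-math call to the exp intrinsic; no longer assumed to have a *_finite libm entry point.
  %r = call nnan ninf double @llvm.exp.f64(double %x)
  ret double %r
}
declare double @llvm.exp.f64(double)
```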
2020-02-28 10:07:37 +01:00
Sanjay Patel 2f090ce890 [AArch64] add splat shuffle combine test; NFC 2020-02-27 14:38:56 -05:00
Sanjay Patel 84e6fd815a [AArch64] regenerate complete test checks; NFC 2020-02-27 14:38:55 -05:00
Andrzej Warzynski fa9439fac8 [AArch64][SVE] Add intrinsics for first-faulting gather loads
Summary:
The following intrinsics are added:
  * @llvm.aarch64.sve.ldff1.gather
  * @llvm.aarch64.sve.ldff1.gather.index
  * @llvm.aarch64.sve.ldff1.gather.sxtw
  * @llvm.aarch64.sve.ldff1.gather.uxtw
  * @llvm.aarch64.sve.ldff1.gather.sxtw.index
  * @llvm.aarch64.sve.ldff1.gather.uxtw.index
  * @llvm.aarch64.sve.ldff1.gather.scalar.offset

Although this patch is quite substantial, the vast majority of the
implementation is just a 'copy & paste' of the implementation of regular
gather loads, including tests. There's only a handful of new
definitions:
  * AArch64ISD nodes defined in AArch64ISelLowering.h (e.g. GLDFF1)
  * Selection DAG types in AArch64SVEInstrInfo.td (e.g.
    AArch64ldff1_gather)
  * intrinsics in IntrinsicsAArch64.td (e.g. aarch64_sve_ldff1_gather)
  * Pseudo instructions in SVEInstrFormats.td to work around the issue of
    use-before-def for the FFR register.

Reviewed By: sdesmalen

Differential Revision: https://reviews.llvm.org/D75128
2020-02-27 12:56:33 +00:00
Sjoerd Meijer 13db7490fa [AArch64] Peephole optimization: merge AND and TST instructions
In some cases Clang does not merge AND and TST instructions (TST being an alias
of ANDS with XZR as the destination).

Example:

  tst x2, x1
  and x3, x2, x1

to:

  ands x3, x2, x1

This patch adds such merging during instruction selection: when an AND is
replaced with an ANDS instruction in LowerSELECT_CC, all users of the AND are
also updated to use the new ANDS instruction.

Short discussion on mailing list:
http://llvm.1065342.n5.nabble.com/llvm-dev-ARM-Peephole-optimization-instructions-tst-add-tp133109.html

Patch by Pavel Kosov.

Differential Revision: https://reviews.llvm.org/D71701
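
A hypothetical IR pattern that may expose the issue (not taken from the patch): the compare feeding the select becomes an ANDS in LowerSELECT_CC, and the separate and of the same operands can then reuse its result.

```
define i64 @and_tst(i64 %a, i64 %b, i64 %c, i64 %d) {
  ; The icmp+select pair lowers via LowerSELECT_CC; %and uses the same operands.
  %and = and i64 %a, %b
  %cmp = icmp ne i64 %and, 0
  %sel = select i1 %cmp, i64 %c, i64 %d
  %res = add i64 %sel, %and
  ret i64 %res
}
```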
2020-02-27 09:23:47 +00:00
Amara Emerson 65f99b5383 [AArch64][GlobalISel] Fixup <32b heterogeneous regbanks of G_PHIs just before selection.
Since all types <32b on gpr end up being assigned gpr32 regclasses, we can end
up with PHIs here which try to select between a gpr32 and an fpr16. Ideally RBS
shouldn't be selecting heterogeneous regbanks for operands if possible, but we
still need to be able to deal with it here.

To fix this, if we have a gpr-bank operand < 32b in size and at least one other
operand is on the fpr bank, then we add cross-bank copies to homogenize the
operand banks. For simplicity the bank that we choose to settle on is whatever
bank the def operand has. For example:

%endbb:
  %dst:gpr(s16) = G_PHI %in1:gpr(s16), %bb1, %in2:fpr(s16), %bb2
 =>
%bb2:
  ...
  %in2_copy:gpr(s16) = COPY %in2:fpr(s16)
  ...
%endbb:
  %dst:gpr(s16) = G_PHI %in1:gpr(s16), %bb1, %in2_copy:gpr(s16), %bb2

Differential Revision: https://reviews.llvm.org/D75086
2020-02-26 14:10:32 -08:00
Sanjay Patel b3d0c79836 [DAGCombiner] avoid narrowing fake fneg vector op
This may inhibit vector narrowing in general, but there's
already an inconsistency in the way that we deal with this
pattern as shown by the test diff.

We may want to add a dedicated function for narrowing fneg.
It's often folded into some other op, so moving it away from
other math ops may cause regressions that we would not see
for normal binops.

See D73978 for more details.
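
For reference, a sketch of what such a "fake" fneg looks like in IR (not taken from the tests): the sign bit is flipped with an integer xor instead of a real fneg.

```
define <4 x float> @fake_fneg(<4 x float> %x) {
  ; Flip the FP sign bits via integer xor with 0x80000000 per lane.
  %i = bitcast <4 x float> %x to <4 x i32>
  %flip = xor <4 x i32> %i, <i32 -2147483648, i32 -2147483648, i32 -2147483648, i32 -2147483648>
  %f = bitcast <4 x i32> %flip to <4 x float>
  ret <4 x float> %f
}
```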
2020-02-26 11:25:56 -05:00
Sanjay Patel 894ce940db [AArch64] add tests for fake fneg; NFC
See comments in D73978 for background.
2020-02-26 10:56:26 -05:00
Kerry McLaughlin 9c859fc54d [AArch64][SVE] Add SVE2 intrinsics for bit permutation & table lookup
Summary:
Implements the following intrinsics:
 - @llvm.aarch64.sve.bdep.x
 - @llvm.aarch64.sve.bext.x
 - @llvm.aarch64.sve.bgrp.x
 - @llvm.aarch64.sve.tbl2
 - @llvm.aarch64.sve.tbx

The SelectTableSVE2 function in this patch is used to select the TBL2
intrinsic & ensures that the vector registers allocated are consecutive.

Reviewers: sdesmalen, andwar, dancgr, cameron.mcinally, efriedma, rengolin

Reviewed By: efriedma

Subscribers: tschuett, kristof.beyls, hiraditya, rkruppe, psnobl, cfe-commits, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D74912
2020-02-26 11:22:23 +00:00
Florian Hahn a059be72c4 [AArch64] Flip default for register renaming in the ld/st optimizer.
Turn on register renaming again after disabling it for the 10.0 release,
to help flush out any issues.
2020-02-26 11:08:17 +00:00
Roman Lebedev d20907d1de [Codegen] Revert rL354676/rL354677 and followups - introduced PR43446 miscompile
This reverts https://reviews.llvm.org/D58468
(rL354676, 44037d7a63),
and any and all follow-ups to that code block.

https://bugs.llvm.org/show_bug.cgi?id=43446
2020-02-25 20:30:12 +03:00
Cullen Rhodes 72848f26b4 [AArch64][SVE] Add predicate reinterpret intrinsics
Summary:
Implements the following intrinsics:

    * llvm.aarch64.sve.convert.to.svbool
    * llvm.aarch64.sve.convert.from.svbool

For converting the ACLE svbool_t type (<n x 16 x i1>) to and from the
other predicate types: <n x 8 x i1>, <n x 4 x i1> and <n x 2 x i1>.

Reviewers: sdesmalen, kmclaughlin, efriedma, dancgr, rengolin

Reviewed By: sdesmalen, efriedma

Subscribers: tschuett, kristof.beyls, hiraditya, rkruppe, psnobl, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D74471
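
A minimal IR sketch of one direction of the conversion; the overload suffix is assumed rather than taken from the patch:

```
define <vscale x 16 x i1> @to_svbool(<vscale x 4 x i1> %p) {
  ; Reinterpret a <n x 4 x i1> predicate as the svbool_t type (<n x 16 x i1>).
  %out = call <vscale x 16 x i1> @llvm.aarch64.sve.convert.to.svbool.nxv4i1(<vscale x 4 x i1> %p)
  ret <vscale x 16 x i1> %out
}
declare <vscale x 16 x i1> @llvm.aarch64.sve.convert.to.svbool.nxv4i1(<vscale x 4 x i1>)
```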
2020-02-25 10:24:06 +00:00
Eli Friedman 248eaff823 [AArch64] SVE implies fullfp16
This is explicitly guaranteed in the Arm ARM (Architecture Reference Manual). And it makes reasoning about
vectors easier: we can assume that if a vector operation is legal, the
corresponding scalar operation is also legal.

Differential Revision: https://reviews.llvm.org/D74993
2020-02-24 17:19:35 -08:00
Kerry McLaughlin f87f23c81c [AArch64][SVE] Add the SVE dupq_lane intrinsic
Summary:
Implements the @llvm.aarch64.sve.dupq.lane intrinsic.

As specified in the ACLE, the behaviour of:
  svdupq_lane_u64(data, index)

...is identical to:
  svtbl(data, svadd_x(svptrue_b64(),
                      svand_x(svptrue_b64(), svindex_u64(0, 1), 1),
                      index * 2))

If the index is in the range [0,3], the operation is equivalent
to a single DUP (.q) instruction.

Reviewers: sdesmalen, c-rhodes, cameron.mcinally, efriedma, dancgr, rengolin

Reviewed By: sdesmalen

Subscribers: tschuett, kristof.beyls, hiraditya, rkruppe, psnobl, cfe-commits, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D74734
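
A sketch of the intrinsic in IR (operand types and overload suffix assumed): duplicate the 128-bit quadword selected by the index across the whole vector.

```
define <vscale x 2 x i64> @dupq(<vscale x 2 x i64> %data, i64 %index) {
  ; With an index in [0,3] this should map to a single DUP (.q).
  %out = call <vscale x 2 x i64> @llvm.aarch64.sve.dupq.lane.nxv2i64(<vscale x 2 x i64> %data, i64 %index)
  ret <vscale x 2 x i64> %out
}
declare <vscale x 2 x i64> @llvm.aarch64.sve.dupq.lane.nxv2i64(<vscale x 2 x i64>, i64)
```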
2020-02-24 13:59:47 +00:00
Kerry McLaughlin f2ff153401 [AArch64][SVE] Add intrinsics for SVE2 cryptographic instructions
Summary:
Implements the following SVE2 intrinsics:
 - @llvm.aarch64.sve.aesd
 - @llvm.aarch64.sve.aesimc
 - @llvm.aarch64.sve.aese
 - @llvm.aarch64.sve.aesmc
 - @llvm.aarch64.sve.rax1
 - @llvm.aarch64.sve.sm4e
 - @llvm.aarch64.sve.sm4ekey

Reviewers: sdesmalen, c-rhodes, dancgr, cameron.mcinally, efriedma, rengolin

Reviewed By: sdesmalen

Subscribers: tschuett, kristof.beyls, hiraditya, rkruppe, psnobl, cfe-commits, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D74833
2020-02-24 10:49:31 +00:00
Florian Hahn 335e21f900 [AArch64] Update new test.
Changed after 7769030b93.
2020-02-23 19:13:13 +00:00
Florian Hahn 7769030b93 Recommit "[PatternMatch] Match XOR variant of unsigned-add overflow check."
This version fixes a buildbot failure caused by picking the wrong insert
point for XORs. We cannot pick the XOR binary operator as insert point,
as it is not guaranteed that both input operands for the overflow
intrinsic are defined before it.

This reverts the revert commit
c7fc0e5da6.
2020-02-23 18:33:18 +00:00
Cameron McInally a5b22b768f [AArch64][SVE] Add support for DestructiveBinary and DestructiveBinaryComm DestructiveInstTypes
Add support for DestructiveBinaryComm DestructiveInstType, as well as the lowering code to expand the new Pseudos into the final movprfx+instruction pairs.

Differential Revision: https://reviews.llvm.org/D73711
2020-02-21 15:19:54 -06:00
Francesco Petrogalli 33bf119647 [llvm][CodeGen][aarch64] Add contiguous prefetch intrinsics for SVE.
Summary: The patch covers both register/register and register/immediate addressing modes.

Reviewers: efriedma, andwar, sdesmalen

Reviewed By: sdesmalen

Subscribers: sdesmalen, tschuett, kristof.beyls, hiraditya, rkruppe, psnobl, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D74581
2020-02-21 20:22:25 +00:00
Francesco Petrogalli e2ed1d14d6 [llvm][aarch64] SVE addressing modes.
Summary:
Added register + immediate and register + register addressing modes for the following intrinsics:

1. Masked load and stores:
     * Sign and zero extended load and truncated stores.
     * No extension or truncation.
2. Masked non-temporal load and store.

Reviewers: andwar, efriedma

Subscribers: cameron.mcinally, sdesmalen, tschuett, kristof.beyls, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D74254
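
A sketch using the generic masked-load intrinsic on a scalable type (overload mangling assumed); the reg+imm or reg+reg addressing mode is then picked during instruction selection.

```
define <vscale x 4 x i32> @masked_ld(<vscale x 4 x i32>* %addr, <vscale x 4 x i1> %pg) {
  ; Predicated load of a scalable vector; alignment 4, inactive lanes undef.
  %load = call <vscale x 4 x i32> @llvm.masked.load.nxv4i32.p0nxv4i32(<vscale x 4 x i32>* %addr, i32 4, <vscale x 4 x i1> %pg, <vscale x 4 x i32> undef)
  ret <vscale x 4 x i32> %load
}
declare <vscale x 4 x i32> @llvm.masked.load.nxv4i32.p0nxv4i32(<vscale x 4 x i32>*, i32, <vscale x 4 x i1>, <vscale x 4 x i32>)
```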
2020-02-21 20:02:34 +00:00
Cameron McInally 266959c0f7 [AArch64][SVE] Add backend support for splats of immediates
This patch adds backend support for splats of both Int and FP immediates.

Differential Revision: https://reviews.llvm.org/D74856
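
A sketch of the IR splat idiom such a case typically comes from (not taken from the patch); the backend can now materialize the FP immediate splat directly.

```
define <vscale x 4 x float> @splat_one() {
  ; Canonical IR splat: insert the immediate into lane 0, then broadcast with a zero mask.
  %ins = insertelement <vscale x 4 x float> undef, float 1.000000e+00, i32 0
  %splat = shufflevector <vscale x 4 x float> %ins, <vscale x 4 x float> undef, <vscale x 4 x i32> zeroinitializer
  ret <vscale x 4 x float> %splat
}
```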
2020-02-21 13:21:47 -06:00
Francesco Petrogalli 31ec721516 [llvm][CodeGen] DAG Combiner folds for vscale.
Summary:
This patch simplifies the DAGs generated when using the intrinsic `@llvm.vscale.*` as follows:

* Fold (add (vscale * C0), (vscale * C1)) to (vscale * (C0 + C1)).
* Canonicalize (sub X, (vscale * C)) to (add X,  (vscale * -C)).
* Fold (mul (vscale * C0), C1) to (vscale * (C0 * C1)).
* Fold (shl (vscale * C0), C1) to (vscale * (C0 << C1)).

The test `sve-gep.ll` has been updated to reflect the folding introduced by this patch.

Reviewers: efriedma, sdesmalen, andwar, rengolin

Reviewed By: sdesmalen

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D74782
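
An IR sketch illustrating the first fold (not from the patch): both multiplies of vscale feed an add, which can now be combined into a single multiply by 12.

```
define i64 @vscale_add() {
  ; (vscale * 4) + (vscale * 8) --> vscale * 12
  %vs1 = call i64 @llvm.vscale.i64()
  %m1 = mul i64 %vs1, 4
  %vs2 = call i64 @llvm.vscale.i64()
  %m2 = mul i64 %vs2, 8
  %r = add i64 %m1, %m2
  ret i64 %r
}
declare i64 @llvm.vscale.i64()
```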
2020-02-21 18:03:12 +00:00
Danilo Carvalho Grael db9c40f562 [AArch64][SVE] Add intrinsics for SVE2 bitwise ternary operations
Summary:
Add intrinsics for the following operations:
- eor3, bcax
- bsl, bsl1n, bsl2n, nbsl

Fix MC tests for bsl instructions.

Reviewers: kmclaughlin, c-rhodes, sdesmalen, efriedma, rengolin

Reviewed By: efriedma

Subscribers: tschuett, kristof.beyls, hiraditya, rkruppe, psnobl, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D74785
2020-02-21 12:15:51 -05:00
Cameron McInally 9fff6e823c [AArch64][SVE] Add +fullfp16 to sve-vector-splat.ll
Add +fullfp16 to sve-vector-splat.ll so we can test folding of immediates into moves.

This attribute can go away later when SVE has a full set of fp16 patterns in place.

Differential Revision: https://reviews.llvm.org/D74965
2020-02-21 10:56:39 -06:00
Eli Friedman c767cf24e4 [SVE] Add support for lowering GEPs involving scalable vectors.
This includes both GEPs where the indexed type is a scalable vector, and
GEPs where the result type is a scalable vector.

Differential Revision: https://reviews.llvm.org/D73602
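
A minimal sketch of a GEP whose indexed type is a scalable vector (not from the patch); the per-element offset is a multiple of vscale, so the address is computed with a vscale-based sequence.

```
define <vscale x 4 x i32>* @gep_scalable(<vscale x 4 x i32>* %base, i64 %idx) {
  ; Each element is (vscale x 16) bytes, so the byte offset is idx * vscale * 16.
  %ptr = getelementptr <vscale x 4 x i32>, <vscale x 4 x i32>* %base, i64 %idx
  ret <vscale x 4 x i32>* %ptr
}
```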
2020-02-20 13:45:41 -08:00
Nico Weber 6f4d9d1029 Revert "[AArch64][SVE] Add intrinsics for SVE2 bitwise ternary operations"
This reverts commit ce70e28998.
It broke MC/AArch64/SVE2/bsl-diagnostics.s everywhere.
2020-02-20 15:11:13 -05:00
Danilo Carvalho Grael ce70e28998 [AArch64][SVE] Add intrinsics for SVE2 bitwise ternary operations
Summary:
Add intrinsics for the following operations:
- eor3, bcax
- bsl, bsl1n, bsl2n, nbsl

Reviewers: kmclaughlin, c-rhodes, sdesmalen, efriedma, rengolin

Reviewed By: efriedma

Subscribers: tschuett, kristof.beyls, hiraditya, rkruppe, psnobl, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D74785
2020-02-20 11:36:48 -05:00
Simon Pilgrim fc2b4a02b1 [DAGCombine] visitEXTRACT_VECTOR_ELT - add SimplifyDemandedBits multi use support
Similar to what we already do with SimplifyDemandedVectorElts, call SimplifyDemandedBits across all the extracted elements of the source vector, treating it as a single use.

There's a minor regression in store-weird-sizes.ll which will be addressed in an upcoming SimplifyDemandedBits patch.
2020-02-20 15:49:38 +00:00
Djordje Todorovic 2f215cf36a Revert "Reland "[DebugInfo] Enable the debug entry values feature by default""
This reverts commit rGfaff707db82d.
A failure was found on an ARM 2-stage buildbot.
Investigation is needed.
2020-02-20 14:41:39 +01:00
Florian Hahn c7fc0e5da6 Revert "[PatternMatch] Match XOR variant of unsigned-add overflow check."
This reverts commit e01a3d49c2.
and commit a6a585b803.

This causes a failure on GreenDragon:
http://lab.llvm.org:8080/green/view/LLDB/job/lldb-cmake/9597
2020-02-19 19:37:08 +01:00
Cameron McInally 3931734990 [AArch64][SVE] Add initial backend support for FP splat_vector
Differential Revision: https://reviews.llvm.org/D74632
2020-02-19 10:19:11 -06:00
Florian Hahn a6a585b803 [CGP] Adjust CodeGen tests after e01a3d49c2 2020-02-19 16:05:00 +01:00
Kerry McLaughlin 63236078d2 [AArch64][SVE] Add SVE2 intrinsics for polynomial arithmetic
Summary:
Implements the following intrinsics:
 - @llvm.aarch64.sve.eorbt
 - @llvm.aarch64.sve.eortb
 - @llvm.aarch64.sve.pmullb.pair
 - @llvm.aarch64.sve.pmullt.pair

Reviewers: sdesmalen, c-rhodes, dancgr, cameron.mcinally, efriedma, rengolin

Reviewed By: efriedma

Subscribers: tschuett, kristof.beyls, hiraditya, rkruppe, psnobl, cfe-commits, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D74769
2020-02-19 10:12:50 +00:00
Djordje Todorovic faff707db8 Reland "[DebugInfo] Enable the debug entry values feature by default"
Differential Revision: https://reviews.llvm.org/D73534
2020-02-19 11:12:26 +01:00