Commit Graph

3324 Commits

Author SHA1 Message Date
Matt Arsenault 9087ef0765 GlobalISel: Allow CSE of G_IMPLICIT_DEF
The legalizer produces a lot of these, and they make reading legalized
MIR annoying. For some reason, this does seem to sometimes introduce
copies of implicit def, which is dumb.
2020-02-05 17:47:21 -05:00
Shu-Chun Weng ce9633633c [GlobalISel][AArch64] Fix contract cross-bank copies with SIMD instructions
contractCrossBankCopyIntoStore() finds the instruction that defines the
source register and uses its output to replace the register. There are,
however, instructions that have multiple outputs, e.g. G_UNMERGE_VALUES.
The current implementation hardcodes operand 0 and has no way of knowing
which output should be used.

This change adds another function that directly returns the register that
is the source of the copy, and uses that register for the folding.

This fixes https://bugs.llvm.org/show_bug.cgi?id=44783

Differential Revision: https://reviews.llvm.org/D74005
2020-02-05 10:38:35 -08:00
Jessica Paquette 292f725711 [AArch64][GlobalISel] Fold G_ASHR into TB(N)Z bit calculation
This implements walking over G_ASHR in the same way as `getTestBitOperand` in
AArch64ISelLowering.

```
(tbz (ashr x, c), b) -> (tbz x, b+c), or (tbz x, msb) if b+c >= # bits in x
```
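
The identity is easy to check exhaustively in plain C++ (a standalone demo of the bit arithmetic, not code from this patch; it assumes arithmetic right shift for signed values, which C++20 guarantees):

```
#include <algorithm>
#include <cassert>
#include <cstdint>

// Bit `b` of `x`, as TB(N)Z would test it.
static int bit(int64_t x, unsigned b) { return (x >> b) & 1; }

int main() {
  for (int64_t x : {int64_t{0x123456789abcdef0}, int64_t{-42}, int64_t{7}})
    for (unsigned c = 0; c < 64; ++c)
      for (unsigned b = 0; b < 64; ++b)
        // tbz (ashr x, c), b == tbz x, b+c, saturating at the msb.
        assert(bit(x >> c, b) == bit(x, std::min(b + c, 63u)));
}
```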

Differential Revision: https://reviews.llvm.org/D73933
2020-02-05 10:04:48 -08:00
Jessica Paquette a82a28ae12 [AArch64][GlobalISel] Fix one use check in getTestBitReg
(1) The check needs to be on the 0th operand of whatever we're folding
(2) Checks for validity should happen before we change the bit

Fixes a bug which caused MultiSource/Applications/JM/lencod to fail at -O3.

Differential Revision: https://reviews.llvm.org/D74002
2020-02-05 09:54:52 -08:00
Fangrui Song 8ff86fcf4c [X86] -fpatchable-function-entry=N,0: place patch label after ENDBR{32,64}
Similar to D73680 (AArch64 BTI).

A local linkage function whose address is not taken does not need ENDBR32/ENDBR64. Placing the patch label after ENDBR32/ENDBR64 has the advantage that code does not need to differentiate whether the function has an initial ENDBR.

Also, add 32-bit tests and test that .cfi_startproc is at the function
entry. The line information has a general implementation and is tested
by AArch64/patchable-function-entry-empty.mir

Reviewed By: nickdesaulniers

Differential Revision: https://reviews.llvm.org/D73760
2020-02-04 09:42:36 -08:00
David Green 362d00e051 [ARM][VecReduce] Force expand vector_reduce_fmin
Under MVE, we do not have any lowering for fminimum, which a
vector_reduce_fmin without NoNan will be expanded into. As with the
other recent patches, force this to expand in the pre-isel pass. Note
that Neon lowering would be OK because the scalar fminimum uses the
vector VMIN instruction, but it is probably better to just rely on the
scalar operations, which is what is done here.
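
For context, here is a minimal scalar model of the fminimum semantics involved (a hypothetical illustration, not code from this patch): unlike fminnum (std::fmin), fminimum propagates NaN and treats -0.0 as less than +0.0, and MVE has no vector lowering for that behavior.

```
#include <cmath>
#include <limits>

// Scalar model of IEEE-754 minimum, the semantics behind fminimum.
double fminimum_model(double a, double b) {
  if (std::isnan(a) || std::isnan(b))  // NaN propagates, unlike fminnum
    return std::numeric_limits<double>::quiet_NaN();
  if (a == 0.0 && b == 0.0)            // -0.0 is treated as less than +0.0
    return std::signbit(a) ? a : b;
  return a < b ? a : b;
}
```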

Also fixes what appears to be the reversal of INF vs -INF in the
vector_reduce_fmin widening code.
2020-02-04 09:36:59 +00:00
Jessica Paquette 9effe38b22 [AArch64][GlobalISel] Fold G_XOR into TB(N)Z bit calculation
This ports the existing case for G_XOR from `getTestBitOperand` in
AArch64ISelLowering into GlobalISel.

The idea is to flip between TBZ and TBNZ while walking through G_XORs.

Let's say we have

```
tbz (xor x, c), b
```

Let's say the `b`-th bit in `c` is 1. Then

- If the `b`-th bit in `x` is 1, the `b`-th bit in `(xor x, c)` is 0.
- If the `b`-th bit in `x` is 0, then the `b`-th bit in `(xor x, c)` is 1.

So, then

```
tbz (xor x, c), b == tbnz x, b
```

Let's say the `b`-th bit in `c` is 0. Then

- If the `b`-th bit in `x` is 1, the `b`-th bit in `(xor x, c)` is 1.
- If the `b`-th bit in `x` is 0, then the `b`-th bit in `(xor x, c)` is 0.

So, then

```
tbz (xor x, c), b == tbz x, b
```
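
The flip can be checked directly in C++ (a standalone demo, not code from this patch):

```
#include <cassert>
#include <cstdint>

static int bit(uint64_t x, unsigned b) { return (x >> b) & 1; }

int main() {
  uint64_t x = 0xdeadbeefcafef00dull, c = 0x00ff00ff00ff00ffull;
  for (unsigned b = 0; b < 64; ++b) {
    if (bit(c, b))  // bit is flipped: tbz (xor x, c), b == tbnz x, b
      assert(bit(x ^ c, b) == !bit(x, b));
    else            // bit is unchanged: tbz (xor x, c), b == tbz x, b
      assert(bit(x ^ c, b) == bit(x, b));
  }
}
```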

Differential Revision: https://reviews.llvm.org/D73929
2020-02-03 15:22:24 -08:00
Jessica Paquette 37910fd0e1 [AArch64][GlobalISel] Fold G_SHL into TB(N)Z bit calculation
This implements the following optimization:

```
(tbz (shl x, c), b) -> (tbz x, b-c)
```

Which appears in `getTestBitOperand` in AArch64ISelLowering.cpp.

If we test bit `b` of `shl x, c`, we can fold away the `shl` by looking `c` bits
to the right of `b` in `x` when this fits in the type. So, we can just test the
`b-c`th bit.
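
Again, the bit arithmetic can be verified in a few lines of C++ (a standalone demo, not code from this patch):

```
#include <cassert>
#include <cstdint>

static int bit(uint64_t x, unsigned b) { return (x >> b) & 1; }

int main() {
  uint64_t x = 0x0123456789abcdefull;
  for (unsigned c = 0; c < 64; ++c)
    for (unsigned b = c; b < 64; ++b)  // the fold requires b >= c
      // tbz (shl x, c), b == tbz x, b-c
      assert(bit(x << c, b) == bit(x, b - c));
}
```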

Differential Revision: https://reviews.llvm.org/D73924
2020-02-03 14:27:08 -08:00
Jessica Paquette 2bd46444d7 [AArch64][GlobalISel] Walk through G_AND in TB(N)Z bit calculation
Given

```
tb(n)z (and x, m), b
```

Where the `b`-th bit of `m` is 1,

```
tb(n)z (and x, m), b == tb(n)z x, b
```

So, we can walk past a `G_AND` in this case.
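
A quick C++ check of the identity (a standalone demo, not code from this patch):

```
#include <cassert>
#include <cstdint>

static int bit(uint64_t x, unsigned b) { return (x >> b) & 1; }

int main() {
  uint64_t x = 0xfeedfacefeedfaceull, m = 0x8421000012345678ull;
  for (unsigned b = 0; b < 64; ++b)
    if (bit(m, b))  // only when the b-th bit of m is 1
      assert(bit(x & m, b) == bit(x, b));
}
```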

Also add test/CodeGen/AArch64/GlobalISel/opt-fold-and-tbz-tbnz.mir to test this.

Differential Revision: https://reviews.llvm.org/D73790
2020-02-03 11:53:47 -08:00
Amara Emerson b911b99052 [AArch64][GlobalISel] Don't reconvert to p0 in convertPtrAddToAdd().
convertPtrAddToAdd improved overall code size and quality by a significant amount,
but at -O0 we generate some cross-class copies because we emitted
G_PTRTOINT and G_INTTOPTR around the G_ADD. Unfortunately at -O0 we don't run any
register coalescing, so these cross class copies end up escaping as moves, and
we ended up regressing 3 benchmarks on CTMark (though still a winner overall).

This patch changes the lowering to instead directly emit the G_ADD into the
destination register, and then force-changes the dest LLT from p0 to s64. This
should be ok, as all uses of the register should now be selected and therefore
the LLT doesn't matter for the users. It does however matter for the importer
patterns, which will fail to select a G_ADD if there's a p0 LLT.

I'm not able to get rid of the G_PTRTOINT on the source yet however. We can't
use the same trick of breaking the type system since that could break the
selection of the defining instruction. Thus with -O0 we still end up with a
cross class copy on source.

Code size improvements on -O0:
Program                                         baseline      new         diff
 test-suite :: CTMark/Bullet/bullet.test        965520       949164      -1.7%
 test-suite...TMark/7zip/7zip-benchmark.test    1069456      1052600     -1.6%
 test-suite...ark/tramp3d-v4/tramp3d-v4.test    1213692      1199804     -1.1%
 test-suite...:: CTMark/sqlite3/sqlite3.test    421680       419736      -0.5%
 test-suite...-typeset/consumer-typeset.test    837076       833380      -0.4%
 test-suite :: CTMark/lencod/lencod.test        799712       796976      -0.3%
 test-suite...:: CTMark/ClamAV/clamscan.test    688264       686132      -0.3%
 test-suite :: CTMark/kimwitu++/kc.test         1002344      999648      -0.3%
 test-suite...Mark/mafft/pairlocalalign.test    422296       421768      -0.1%
 test-suite :: CTMark/SPASS/SPASS.test          656792       656532      -0.0%
 Geomean difference                                                      -0.6%

Differential Revision: https://reviews.llvm.org/D73910
2020-02-03 11:50:22 -08:00
John Brawn 68cf574857 [FPEnv][AArch64] Add lowering of f128 STRICT_FSETCC
These get lowered to function calls, like the non-strict versions.

Differential Revision: https://reviews.llvm.org/D73784
2020-02-03 14:39:16 +00:00
Guillaume Chatelet 75d9994a51 Fix broken invariant
Summary:
A Copy with a source that is zeros is the same as a Set of zeros.
This fixes the invariant that SrcAlign should always be non-null.

Reviewers: courbet

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D73791
2020-02-03 11:01:05 +01:00
Fangrui Song 5a56a25b0b [CodeGenPrepare] Make TargetPassConfig required
The code paths in the absence of TargetMachine, TargetLowering or
TargetRegisterInfo are poorly tested. As rL285987 said, requiring
TargetPassConfig allows us to delete many (untested) checks littered
everywhere.

Reviewed By: arsenm

Differential Revision: https://reviews.llvm.org/D73754
2020-02-02 09:28:45 -08:00
Fangrui Song 5932f7b8f2 [PatchableFunction] Use an empty DebugLoc
The current FirstMI.getDebugLoc() is actually null in almost all cases.
If it isn't, the generated .loc will be considered initial. The .loc
will have the prologue_end flag and terminate the prologue prematurely.

Also use an overload of BuildMI that will not prepend
PATCHABLE_FUNCTION_ENTRY to a MachineInstr bundle.
2020-02-01 14:12:06 -08:00
Jessica Paquette b9bf9305d1 [AArch64][GlobalISel] Walk through G_TRUNC in getTestBitReg
When you encounter a G_TRUNC, you are moving from a larger type to a smaller
type.

Asking for the i-th bit of the narrower value is the same as asking for the
i-th bit of the wider source value, since truncation only drops the high bits.

So, we should always be able to walk through G_TRUNC when computing the bit
for a TB(N)Z.
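
The underlying fact is easy to demonstrate in C++ (a standalone demo, not code from this patch): any bit index that is valid on the truncated value reads the same bit on the wider source.

```
#include <cassert>
#include <cstdint>

int main() {
  uint64_t wide = 0xabcdef0123456789ull;
  uint32_t narrow = static_cast<uint32_t>(wide);  // models G_TRUNC s64 -> s32
  for (unsigned b = 0; b < 32; ++b)  // bit indices valid on the narrow type
    assert(((narrow >> b) & 1) == ((wide >> b) & 1));
}
```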

Differential Revision: https://reviews.llvm.org/D73748
2020-01-31 11:09:55 -08:00
Jay Foad 2a1b5af299 [GlobalISel] Tidy up unnecessary calls to createGenericVirtualRegister
Summary:
As a side effect some redundant copies of constant values are removed by
CSEMIRBuilder.

Reviewers: aemerson, arsenm, dsanders, aditya_nandakumar

Subscribers: sdardis, jvesely, wdng, nhaehnle, rovka, hiraditya, jrtc27, atanasyan, volkan, Petar.Avramovic, kerbowa, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D73789
2020-01-31 17:07:16 +00:00
Danilo Carvalho Grael 44a4f5fc6a [AArch64][SVE] Add SVE2 mla unpredicated intrinsics.
Summary:
Add intrinsics for the MLA unpredicated sve2 instructions:
- smlalb, smlalt, umlalb, umlalt, smlslb, smlslt, umlslb, umlslt
- sqdmlalb, sqdmlalt, sqdmlslb, sqdmlslt
- sqdmlalbt, sqdmlslbt

Reviewers: efriedma, sdesmalen, cameron.mcinally, c-rhodes, rengolin, kmclaughlin

Subscribers: tschuett, kristof.beyls, hiraditya, rkruppe, psnobl, llvm-commits, amehsan

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D73746
2020-01-31 11:39:12 -05:00
Kerry McLaughlin 69558c8487 [AArch64][SVE] Add remaining SVE2 intrinsics for uniform DSP operations
Summary:
Implements the following intrinsics:

 - @llvm.aarch64.sve.[s|u]qadd
 - @llvm.aarch64.sve.[s|u]qsub
 - @llvm.aarch64.sve.suqadd
 - @llvm.aarch64.sve.usqadd
 - @llvm.aarch64.sve.[s|u]qsubr
 - @llvm.aarch64.sve.[s|u]rshl
 - @llvm.aarch64.sve.[s|u]qshl
 - @llvm.aarch64.sve.[s|u]qrshl
 - @llvm.aarch64.sve.[s|u]rshr
 - @llvm.aarch64.sve.sqshlu
 - @llvm.aarch64.sve.sri
 - @llvm.aarch64.sve.sli
 - @llvm.aarch64.sve.[s|u]sra
 - @llvm.aarch64.sve.[s|u]rsra
 - @llvm.aarch64.sve.[s|u]aba

Reviewers: efriedma, sdesmalen, dancgr, cameron.mcinally, c-rhodes, rengolin

Reviewed By: sdesmalen

Subscribers: tschuett, kristof.beyls, hiraditya, rkruppe, psnobl, cfe-commits, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D73551
2020-01-31 10:51:57 +00:00
Amara Emerson 84bd851108 [GlobalISel][IRTranslator] When translating vector geps, splat the base pointer if required.
We can have geps that have a scalar base pointer, and a vector index value, which
means that the base pointer must be splatted into a vector of pointers.

This fixes crashes on arm64 GlobalISel with optimizations enabled.
2020-01-30 16:27:27 -08:00
Jessica Paquette c8c987d310 [AArch64][GlobalISel] Fold in G_ANYEXT/G_ZEXT into TB(N)Z
This is similar to the code in getTestBitOperand in AArch64ISelLowering. Instead
of implementing all of the TB(N)Z optimizations at once, this patch implements
the simplest case first. The way that this is set up should make it fairly easy
to add the rest as we go along.

The idea here is that after determining that we can use a TB(N)Z, we can
continue looking through instructions and perform further folding.

In this case, when we have a G_ZEXT or G_ANYEXT where the extended bits are not
used, we can fold it into the TB(N)Z.
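
The reasoning mirrors the G_TRUNC case above and can be demonstrated the same way (a standalone demo, not code from this patch): a bit index below the source width reads the same bit before and after the extension, so the extended bits never matter.

```
#include <cassert>
#include <cstdint>

int main() {
  uint32_t narrow = 0x89abcdefu;
  uint64_t wide = narrow;  // models G_ZEXT s32 -> s64
  for (unsigned b = 0; b < 32; ++b)  // only bits below the source width
    assert(((wide >> b) & 1) == ((narrow >> b) & 1));
}
```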

Differential Revision: https://reviews.llvm.org/D73673
2020-01-30 14:51:26 -08:00
Fangrui Song 06b8e32d4f [AArch64] -fpatchable-function-entry=N,0: place patch label after BTI
Summary:
For -fpatchable-function-entry=N,0 -mbranch-protection=bti, after
9a24488cb6, we place the NOP sled after
the initial BTI.

```
.Lfunc_begin0:
bti c
nop
nop

.section __patchable_function_entries,"awo",@progbits,f,unique,0
.p2align 3
.xword .Lfunc_begin0
```

This patch adds a label after the initial BTI and changes the __patchable_function_entries entry to reference the label:

```
.Lfunc_begin0:
bti c
.Lpatch0:
nop
nop

.section __patchable_function_entries,"awo",@progbits,f,unique,0
.p2align 3
.xword .Lpatch0
```

This placement is compatible with the resolution in
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=92424 .

A local linkage function whose address is not taken does not need a BTI.
Placing the patch label after BTI has the advantage that code does not
need to differentiate whether the function has an initial BTI.

Reviewers: mrutland, nickdesaulniers, nsz, ostannard

Subscribers: kristof.beyls, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D73680
2020-01-30 11:11:52 -08:00
Danilo Carvalho Grael 0610637aac [AArch64][SVE] Add remaining SVE2 mla indexed intrinsics.
Summary:
Add remaining SVE2 mla indexed intrinsics:
- sqdmlalb, sqdmlalt, sqdmlslb, sqdmlslt

Add suffix _lanes and switch immediate types to i32 for all mla indexed intrinsics to align with ACLE builtin definitions.

Reviewers: efriedma, sdesmalen, cameron.mcinally, c-rhodes, rengolin, kmclaughlin

Subscribers: tschuett, kristof.beyls, hiraditya, rkruppe, arphaman, psnobl, llvm-commits, amehsan

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D73633
2020-01-30 13:32:11 -05:00
Nikita Popov 70d345e687 [AArch64][ARM] Always expand ordered vector reductions (PR44600)
fadd/fmul reductions without reassoc are lowered to
VECREDUCE_STRICT_FADD/FMUL nodes, which don't have legalization
support. Until that is in place, expand these intrinsics on
ARM and AArch64. Other targets always expand the vector reduction
intrinsics.

Additionally expand fmax/fmin reductions without nonan flag on
AArch64, as the backend asserts that the flag is present when
lowering VECREDUCE_FMIN/FMAX.

This fixes https://bugs.llvm.org/show_bug.cgi?id=44600.

Differential Revision: https://reviews.llvm.org/D73135
2020-01-30 18:40:24 +01:00
Hans Wennborg 6be9acdfa8 Drop arm triple from test/CodeGen/AArch64/global-merge-hidden-minsize.ll
Because it's in the AArch64/ directory, it runs in cases where the arm
target may not be available, see comment on D73235.
2020-01-30 15:02:38 +01:00
John Brawn 0bb9a27c98 [FPEnv][AArch64] Add lowering and instruction selection for strict conversions
Strict fp-to-int and int-to-fp conversions can be handled in the same way that
the non-strict versions are (by using the appropriate instruction or converting
to a function call when we have no instruction).

Differential Revision: https://reviews.llvm.org/D73625
2020-01-30 13:50:06 +00:00
John Brawn 258d8dd76a [FPEnv][AArch64] Add lowering and instruction selection for STRICT_FP_ROUND
This gets selected to the appropriate fcvt instruction. Handling from there on
isn't fully correct yet, as we need to model fcvt reading and writing to fpsr
and fpcr.

Differential Revision: https://reviews.llvm.org/D73201
2020-01-30 12:51:25 +00:00
John Brawn 2224407ef5 Add lowering of STRICT_FSETCC and STRICT_FSETCCS
These become STRICT_FCMP and STRICT_FCMPE, which then get selected to the
corresponding FCMP and FCMPE instructions, though the handling from there on
isn't fully correct as we don't model reads and writes to FPCR and FPSR.

Differential Revision: https://reviews.llvm.org/D73368
2020-01-30 10:40:55 +00:00
Amara Emerson 610f1d22f1 [AArch64][GlobalISel] During ISel try to convert G_PTR_ADD to G_ADD.
This lowering tries to look for G_PTR_ADD instructions and then converts
them to a standard G_ADD with a COPY on the source, and G_INTTOPTR on the
result. This is ok for address space 0 on AArch64 as p0 can be treated as
s64.

The motivation behind this is to expose the add semantics to the imported
tablegen patterns. We shouldn't need to check for uses being loads/stores,
because the selector works bottom up, uses before defs. By the time we
end up trying to select a G_PTR_ADD, we should have already attempted to
fold it into addressing modes, and those attempts were unsuccessful.

This gives some performance and code size improvements across the board.

Differential Revision: https://reviews.llvm.org/D73673
2020-01-29 23:04:52 -08:00
Amara Emerson c12f046eb9 [GlobalISel] Add new combine to convert scalar G_MUL to G_SHL.
For pow2 constants we should use G_SHL for pattern matching (and perf)
purposes later.

Vector support not yet implemented.
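
The equivalence being exploited (a standalone demo, not code from this patch; requires C++20 for std::countr_zero):

```
#include <bit>
#include <cassert>
#include <cstdint>

int main() {
  for (uint64_t x : {uint64_t{0}, uint64_t{1}, uint64_t{0xdeadbeef}})
    for (unsigned k = 0; k < 64; ++k) {
      uint64_t pow2 = uint64_t{1} << k;
      // G_MUL by a power of two is the same as G_SHL by its log2.
      assert(x * pow2 == (x << std::countr_zero(pow2)));
    }
}
```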

Differential Revision: https://reviews.llvm.org/D73659
2020-01-29 13:39:00 -08:00
Jessica Paquette 050cd443ca [AArch64][GlobalISel] Fix TBNZ/TBZ opcode selection
When the bit index is less than 32, we have to use the W register variant for TB(N)Z.

This is because of the way the instruction is encoded.

Differential Revision: https://reviews.llvm.org/D73660
2020-01-29 13:11:18 -08:00
Amara Emerson 0da937bb5c [GlobalISel][IRTranslator] Follow convention and put constant offset of getelementptr arithmetic on RHS.
We were needlessly putting known constant values on the LHS of a G_MUL, which
is suboptimal.

Differential Revision: https://reviews.llvm.org/D73650
2020-01-29 11:37:19 -08:00
Fangrui Song 8903e61b66 [AsmPrinter][ELF] Define local aliases (.Lfoo$local) for GlobalObjects
For `MC_GlobalAddress` operands referencing **certain** GlobalObjects,
we can lower them to STB_LOCAL aliases to avoid costs brought by
assembler/linker's conservative decisions about symbol interposition:

* An assembler conservatively assumes that a global default visibility symbol is interposable
  (ELF semantics), so relocations in object files are needed even if the code generator
  assumed the definition to be exact and non-interposable.
* The relocations can cause the creation of PLT entries on some targets for -shared links.
  A linker conservatively assumes that a global default visibility symbol is interposable (if not
  otherwise constrained by -Bsymbolic/--dynamic-list/VER_NDX_LOCAL/etc).

"certain" refers to GlobalObjects in the intersection of
`hasExactDefinition() and !isInterposable()`: `external`, `appending`, `internal`, `private`.
Local linkages (`internal` and `private`) cannot be interposed. `appending` is for the very
few objects LLVM interprets specially. So, in practice, the set just includes `external`.

This patch emits STB_LOCAL aliases (.Lfoo$local) for such GlobalObjects, so that targets can lower
MC_GlobalAddress operands to STB_LOCAL aliases if applicable.
We may extend the scope and include GlobalAlias in the future.

LLVM's existing -fno-semantic-interposition behaviors give us license to do such optimizations:

* Various optimizations (ipconstprop, inliner, sccp, sroa, etc) treat normal ExternalLinkage
  GlobalObjects as non-interposable.
* Before D72197, MC resolved a PC-relative VK_None fixup to a non-local symbol at assembly time (no
  outstanding relocation), if the target is defined in the same section. Put simply, even if IR
  optimizations left the function call in `void foo() {} void bar() { foo(); }` interposable,
  the assembler would disallow the interposition.

This patch sets up AsmPrinter infrastructure to make -fno-semantic-interposition more effective.
With and without the patch, the object file output should be identical:
`.Lfoo$local` does not take a symbol table entry.

Reviewed By: sfertile

Differential Revision: https://reviews.llvm.org/D73228
2020-01-29 10:58:43 -08:00
Matt Arsenault c5c1bb3374 GlobalISel: Lower G_WRITE_REGISTER 2020-01-29 06:48:24 -08:00
Sanne Wouda 2939fc13c8 [AArch64] Add IR intrinsics for sq(r)dmulh_lane(q)
Summary:
Currently, sqdmulh_lane and friends from the ACLE (implemented in arm_neon.h),
are represented in LLVM IR as a (by vector) sqdmulh and a vector of (repeated)
indices, like so:

   %shuffle = shufflevector <4 x i16> %v, <4 x i16> undef, <4 x i32> <i32 3, i32 3, i32 3, i32 3>
   %vqdmulh2.i = tail call <4 x i16> @llvm.aarch64.neon.sqdmulh.v4i16(<4 x i16> %a, <4 x i16> %shuffle)

When %v's values are known, the shufflevector is optimized away and we are no
longer able to select the lane variant of sqdmulh in the backend.

This defeats a (hand-coded) optimization that packs several constants into a
single vector and uses the lane intrinsics to reduce register pressure, trading
off materialising several constants against a single vector load from the
constant pool, like so:

   int16x8_t v = {2,3,4,5,6,7,8,9};
   a = vqdmulh_laneq_s16(a, v, 0);
   b = vqdmulh_laneq_s16(b, v, 1);
   c = vqdmulh_laneq_s16(c, v, 2);
   d = vqdmulh_laneq_s16(d, v, 3);
   [...]

In one microbenchmark from libjpeg-turbo this accounts for a 2.5% to 4%
performance difference.

We could teach the compiler to recover the lane variants, but this would likely
require its own pass.  (Alternatively, "volatile" could be used on the constants
vector, but this is a bit ugly.)

This patch instead implements the following LLVM IR intrinsics for AArch64 to
maintain the original structure through IR optimization and into instruction
selection:
- sqdmulh_lane
- sqdmulh_laneq
- sqrdmulh_lane
- sqrdmulh_laneq.

These 'lane' variants need an additional register class.  The second argument
must be in the lower half of the 64-bit NEON register file, but only when
operating on i16 elements.

Note that the existing patterns for shufflevector and sqdmulh into sqdmulh_lane
(etc.) remain, so code that does not rely on NEON intrinsics to generate these
instructions is not affected.

This patch also changes clang to emit these IR intrinsics for the corresponding
NEON intrinsics (AArch64 only).

Reviewers: SjoerdMeijer, dmgreen, t.p.northover, rovka, rengolin, efriedma

Reviewed By: efriedma

Subscribers: kristof.beyls, hiraditya, jdoerfert, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D71469
2020-01-29 13:25:23 +00:00
Kerry McLaughlin 3cf80822a9 [AArch64][SVE] Add SVE2 intrinsics for uniform DSP operations
Summary:
Implements the following intrinsics:
 - sqrdmlah, sqrdmlsh, sqrdmulh & sqdmulh
 - [s|u]hadd, [s|u]hsub, [s|u]rhadd & [s|u]hsubr
 - urecpe, ursqrte, sqabs & sqneg

Reviewers: sdesmalen, efriedma, dancgr, cameron.mcinally

Reviewed By: efriedma

Subscribers: tschuett, kristof.beyls, hiraditya, rkruppe, psnobl, cfe-commits, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D73493
2020-01-29 12:03:15 +00:00
Kerry McLaughlin bd33a46213 [AArch64][SVE] Add SVE2 intrinsics for pairwise arithmetic
Summary:
Implements the following intrinsics:
 - addp
 - smaxp, sminp, umaxp & uminp
 - sadalp & uadalp

Reviewers: dancgr, efriedma, sdesmalen, c-rhodes

Reviewed By: c-rhodes

Subscribers: tschuett, kristof.beyls, hiraditya, rkruppe, psnobl, cameron.mcinally, cfe-commits, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D73347
2020-01-29 10:31:31 +00:00
Danilo Carvalho Grael 1f85dfb2af [AArch64][SVE] Add SVE2 mla indexed intrinsics.
Summary:
Add SVE2 mla indexed intrinsics:
- smlalb, smlalt, umlalb, umlalt, smlslb, smlslt, umlslb, umlslt.

Reviewers: efriedma, sdesmalen, dancgr, cameron.mcinally, c-rhodes, rengolin

Subscribers: tschuett, kristof.beyls, hiraditya, rkruppe, arphaman, psnobl, llvm-commits, amehsan

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D73576
2020-01-28 17:15:33 -05:00
Jessica Paquette dba29f7c3b [AArch64][GlobalISel] Fold G_AND into G_BRCOND
When the G_BRCOND is fed by an eq or ne G_ICMP, it may be possible to fold a
G_AND into the branch by producing a tbnz/tbz instead.

This happens when

  1. We have a ne/eq G_ICMP feeding into the G_BRCOND
  2. The G_ICMP is a comparison against 0
  3. One of the operands of the G_AND is a power of 2 constant

This is very similar to the code in AArch64TargetLowering::LowerBR_CC.
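
The equivalence behind the fold can be spelled out in C++ (a standalone demo, not code from this patch):

```
#include <cassert>
#include <cstdint>

int main() {
  uint64_t x = 0x1234abcd5678ef90ull;
  for (unsigned k = 0; k < 64; ++k) {
    uint64_t mask = uint64_t{1} << k;  // the power-of-2 G_AND operand
    // brcond (icmp ne (and x, 1<<k), 0) is exactly tbnz x, k
    // (and the eq form is tbz x, k).
    assert(((x & mask) != 0) == (((x >> k) & 1) != 0));
  }
}
```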

Add opt-and-tbnz-tbz to test this.

Differential Revision: https://reviews.llvm.org/D73573
2020-01-28 14:00:31 -08:00
Michael Spang a2fb2c0ddc [GlobalMerge] Preserve symbol visibility when merging globals
Symbols created for merged external global variables have default
visibility. This can break programs when compiling with -Oz
-fvisibility=hidden as symbols that should be hidden will be exported at
link time.

Differential Revision: https://reviews.llvm.org/D73235
2020-01-28 13:26:18 -08:00
Roland McGrath 2b0e6fe2e2 [Fuchsia] Remove aarch64-fuchsia target-specific -mcmodel=kernel
Under --target=aarch64-fuchsia, -mcmodel=kernel has the effect of
(the default) -mcmodel=small plus -mtp=el1 (which did not exist when
this behavior was added). Fuchsia's kernel now uses -mtp=el1
directly instead of -mcmodel=kernel, so remove this special support.

Patch By: mcgrathr

Differential Revision: https://reviews.llvm.org/D73409
2020-01-28 11:32:08 -08:00
Amara Emerson 14c2cf8e18 [AArch64][GlobalISel] Don't bail out of the select(cmp(a, b)) -> csel optimization with multiple users.
It can still be beneficial to do the optimization if the result of the compare
is used by *another* select.

Differential Revision: https://reviews.llvm.org/D73511
2020-01-28 10:09:03 -08:00
Jonathan Roelofs 7f93ff58e1 [llvm] Fix broken cases of 'CHECK[^:]*$' in tests 2020-01-28 09:52:59 -07:00
Florian Hahn 8e3f59b45a [AArch64] Add option to enable/disable load-store renaming.
This patch adds a new option to enable/disable register renaming in the
load-store optimizer. Defaults to disabled, as there is a potential
mis-compile caused by this.
2020-01-27 15:15:50 -08:00
Simon Pilgrim e7e043724e [DAG] Enable ISD::EXTRACT_SUBVECTOR SimplifyMultipleUseDemandedBits handling
This allows SimplifyDemandedBits to call SimplifyMultipleUseDemandedBits to create a simpler ISD::EXTRACT_SUBVECTOR, which is particularly useful for cases where we're splitting into subvectors anyhow.
2020-01-27 21:17:47 +00:00
Matt Arsenault 2a160ba5b0 GlobalISel: Reimplement widenScalar for G_UNMERGE_VALUES results
Only use shifts if the requested type exactly matches the source type,
and create sub-unmerges otherwise.
2020-01-27 06:18:26 -08:00
Fangrui Song 50a3ff30e1 [PatchableFunction] Allow empty entry MachineBasicBlock
Reviewed By: nickdesaulniers

Differential Revision: https://reviews.llvm.org/D73301
2020-01-24 09:42:48 -08:00
Kerry McLaughlin 4c4861b577 [AArch64][SVE] Add intrinsics for FFR manipulation
Summary:
Implements the following intrinsics:
  - llvm.aarch64.sve.setffr
  - llvm.aarch64.sve.rdffr
  - llvm.aarch64.sve.rdffr.z
  - llvm.aarch64.sve.wrffr

Reviewers: sdesmalen, efriedma, dancgr, rengolin

Reviewed By: efriedma

Subscribers: tschuett, kristof.beyls, hiraditya, rkruppe, psnobl, cameron.mcinally, cfe-commits, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D73097
2020-01-24 10:58:12 +00:00
Fangrui Song 22467e2595 Add function attribute "patchable-function-prefix" to support -fpatchable-function-entry=N,M where M>0
Similar to the function attribute `prefix` (prefix data),
"patchable-function-prefix" inserts data (M NOPs) before the function
entry label.

-fpatchable-function-entry=2,1 (1 NOP before entry, 1 NOP after entry)
will look like:

```
  .type	foo,@function
.Ltmp0:               # @foo
  nop
foo:
.Lfunc_begin0:
  # optional `bti c` (AArch64 Branch Target Identification) or
  # `endbr64` (Intel Indirect Branch Tracking)
  nop

  .section  __patchable_function_entries,"awo",@progbits,get,unique,0
  .p2align  3
  .quad .Ltmp0
```

-fpatchable-function-entry=N,0 + -mbranch-protection=bti/-fcf-protection=branch has two reasonable
placements (https://gcc.gnu.org/ml/gcc-patches/2020-01/msg01185.html):

```
(a)         (b)

func:       func:
.Ltmp0:     bti c
  bti c     .Ltmp0:
  nop       nop
```

(a) needs no additional code. If the consensus is to go for (b), we will
need more code in AArch64BranchTargets.cpp / X86IndirectBranchTracking.cpp .

Differential Revision: https://reviews.llvm.org/D73070
2020-01-23 17:02:27 -08:00
Amara Emerson 765b37abdf [AArch64][GlobalISel] Fallback if the +strict-align target feature is given.
Works around PR44246.
2020-01-23 14:45:29 -08:00
Danilo Carvalho Grael 58ceb81d31 [SVE] Add SVE2 patterns for unpredicated multiply instructions
Summary:
Add patterns for SVE2 unpredicated multiply instructions:
- mul, smulh, umulh, pmul, sqdmulh, sqrdmulh

Reviewers: sdesmalen, huntergr, efriedma, c-rhodes, kmclaughlin, rengolin

Subscribers: tschuett, hiraditya, rkruppe, psnobl, llvm-commits, amehsan

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D72799
2020-01-23 13:20:53 -05:00