Commit Graph

64949 Commits

Matt Morehouse 671f0930fe [X86] Selective relocation relaxation for +tagged-globals
For tagged-globals, we only need to disable relaxation for globals that
we actually tag.  With this patch, function pointer relocations, which
we do not instrument, can be relaxed.

This patch also makes tagged-globals work properly with LTO, as
-Wa,-mrelax-relocations=no doesn't work with LTO.

Reviewed By: pcc

Differential Revision: https://reviews.llvm.org/D113220
2021-11-19 07:18:27 -08:00
Nico Weber 4f9a5c2a14 [asm] Remove explicit branch for modifier 'l'
No intended behavior change.

EmitGCCInlineAsmStr() used to explicitly check for modifier 'l'
after handling block address and machine basic block operands.
This prevented passing a MachineOperand with 'l' modifier to
PrintAsmMemoryOperand(). Conceptually that seems kind of nice,
but in practice the overrides of PrintAsmMemoryOperand() in all (*)
AsmPrinter subclasses already reject modifiers they don't know about,
and none of them knows about 'l'. So removing this doesn't have
a behavior difference, is less code, and it makes EmitGCCInlineAsmStr()
and EmitMSInlineAsmStr() more similar, to prepare for merging them later.

(Why not _add_ the branch to EmitMSInlineAsmStr() instead? Because that
always works with X86AsmPrinter I think, and
X86AsmPrinter::PrintAsmMemoryOperand() very decisively rejects the 'l'
modifier, so it's hard to motivate adding that branch.)

*: The one exception was AVRAsmPrinter, which had an llvm_unreachable instead
of returning true. So this commit changes that, so that the AVR target keeps
emitting an error instead of crashing when passing a mem operand with a :l
modifier to it. All the other targets already don't crash on this.
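
For reference, a minimal (hypothetical) sketch of where an 'l' (label) operand
comes from in source code, via GNU asm goto; the template, constraint, and
x86 mnemonics here are illustrative, not from this patch:

  // With no outputs and one input, the label is operand 1, so the 'l'
  // modifier prints it as a bare label/block address: %l1.
  int is_nonzero(int x) {
    asm goto("test %0, %0; jnz %l1"
             : /* no outputs */ : "r"(x) : "cc" : nonzero);
    return 0;
  nonzero:
    return 1;
  }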

Differential Revision: https://reviews.llvm.org/D114216
2021-11-19 09:19:53 -05:00
Jay Foad 30b27ecfc2 [AMDGPU] Use new opcode for indexed vgpr reads
Introduce V_MOV_B32_indirect_read for indexed vgpr reads
(and rename the old V_MOV_B32_indirect to
V_MOV_B32_indirect_write) so they can be unambiguously
distinguished from regular V_MOV_B32_e32. Previously they
were distinguished by looking for extra implicit operands,
but this is fragile because regular moves sometimes have
extra implicit operands too:
- either by accident, when instructions end up with
  duplicate implicit operands (see e.g. D100939)
- or by design, when SIInstrInfo::copyPhysReg breaks a
  multi-dword copy into individual subreg mov instructions
  and adds implicit operands for the super-register.

The effect of this is that SIInstrInfo::isFoldableCopy can
be simplified and identifies more foldable copies. The test
diffs show that more immediate 0 values have been folded as
inline operands.

SIInstrInfo::isReallyTriviallyReMaterializable could
probably be simplified too but that is not part of this
patch.

Differential Revision: https://reviews.llvm.org/D114230
2021-11-19 13:08:11 +00:00
Roman Lebedev 049799c311
[X86][Costmodel] `getReplicationShuffleCost()`: promote 1 bit-wide elements to 8 bit when have AVX512BW+AVX512VBMI
If, in addition to AVX512BW (which provides `{k}<->{i8,i16}` casts and i16 shuffles),
we have AVX512VBMI, which provides i8 shuffles, we are in an optimal situation.

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D114071
2021-11-19 15:58:10 +03:00
Roman Lebedev a751084bb4
[X86][Costmodel] `trunc v16i8 to v8i1` can appear after legalization, cost is same as for `trunc v8i8 to v8i1`
Note that there are many other missing costs; I'm *only* adding the ones that are queried
from `getReplicationShuffleCost()` for the existing (quite exhaustive) test coverage.

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D114070
2021-11-19 15:57:32 +03:00
Roman Lebedev a50fdd3fc9
[X86][Costmodel] `getReplicationShuffleCost()`: promote 1 bit-wide elements to 16 bit when have AVX512BW
Here we get pretty lucky. AVX512F does not provide any instructions
to convert between a `k` vector mask and a vector,
but AVX512BW adds `{k}<->nX{i8,i16}` conversions,
and, as it happens, with AVX512BW we have an i16 shuffle.

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D113915
2021-11-19 15:55:41 +03:00
Simon Pilgrim 0f652d8f52 [X86] LowerRotate - recognise hidden ROTR patterns for better vXi8 codegen
Check for a hidden ISD::ROTR (rotl(sub(0,x))) - vXi8 lowering can handle both (it's always beneficial for splats, but otherwise only if we have VPTERNLOG).

We currently hit infinite loops in TargetLowering::expandROT if we set ISD::ROTR to custom, which needs addressing before we extend this much further.
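
As a scalar sanity check of the identity being matched (a hedged sketch, not
the lowering code itself):

  #include <cassert>
  #include <cstdint>
  static uint8_t rotl8(uint8_t x, unsigned r) {
    r &= 7;
    return (uint8_t)((x << r) | (x >> ((8 - r) & 7)));
  }
  static uint8_t rotr8(uint8_t x, unsigned r) {
    r &= 7;
    return (uint8_t)((x >> r) | (x << ((8 - r) & 7)));
  }
  int main() {
    for (unsigned x = 0; x < 256; ++x)
      for (unsigned y = 0; y < 8; ++y)
        // rotl(x, sub(0, y)) is a hidden rotr(x, y)
        assert(rotl8((uint8_t)x, 0u - y) == rotr8((uint8_t)x, y));
  }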
2021-11-19 11:49:15 +00:00
Yonghong Song 8fb3f84484 BPF: Workaround InstCombine trunc+icmp => mask+icmp Optimization
Patch [1] added further InstCombine trunc+icmp => mask+icmp
optimization and this caused a couple of bpf selftest failure.
Previous llvm BPF backend patch [2] introduced llvm.bpf.compare
builtin to handle such situations.

This patch further added support for the ">" and ">=" icmp opcodes.
Tested with bpf selftests and all tests are passed including two
previously failed ones.

Note that Patch [1] also added an optimization for when the to-be-compared
constant is a negative power of 2 (-C) or not a power of 2 (~C).
This patch didn't implement these two cases, as a typical bpf
program compares a scalar to a positive length or boundary value,
and this scalar is later used as an index into an array buffer
or packet buffer.
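
To illustrate the shape of the InstCombine rewrite being worked around (a
hedged scalar sketch, not the actual IR):

  #include <cassert>
  #include <cstdint>
  int main() {
    for (uint32_t x = 0; x < 1024; ++x)
      for (uint32_t c = 0; c < 256; ++c)
        // trunc + icmp:  (uint8_t)x > c
        // mask + icmp:   (x & 0xff) > c   (what InstCombine produces)
        assert(((uint8_t)x > c) == ((x & 0xffu) > c));
  }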

  [1] https://reviews.llvm.org/D112634
  [2] https://reviews.llvm.org/D112938

Differential Revision: https://reviews.llvm.org/D114215
2021-11-18 20:25:28 -08:00
Serguei Katkov 3557f49353 [AARCH64] Teach AArch64FrameLowering::getFrameIndexReferencePreferSP really prefer SP.
Try harder to use sp when lowering a frame index, whenever possible.

Reviewers: reames, loicottet, ostannard, t.p.northover
Reviewed By: reames
Subscribers: arphaman, danilaml, hiraditya, kristof.beyls, llvm-commits, Matt, yrouban
Differential Revision: https://reviews.llvm.org/D111133
2021-11-19 11:14:02 +07:00
Stanislav Mekhanoshin f8e615462b [AMDGPU] Fix SIPostRABundler crash on null register used by dbg value
Recently we started generating DBG_VALUEs with $noreg operands.
These crash SIPostRABundler, which should not iterate over such
registers anyway.

Fixes: SWDEV-311733

Differential Revision: https://reviews.llvm.org/D114202
2021-11-18 17:01:19 -08:00
Victor Huang 40c65655af [PowerPC] Remove the redundant terminator instruction when optimizing conditional trap
This patch is a follow-up to ae27ca9a67 to remove the redundant
terminator when optimizing conditional traps.

Peer reviewed by: nemanjai
2021-11-18 17:52:26 -06:00
Ahmed Bougacha e3a7f0e2f9 [AArch64][PAC] Select llvm.ptrauth.sign/sign.generic to PAC*.
The @llvm.ptrauth.sign/sign.generic intrinsics map cleanly to
the various AArch64 PAC[IDG][Z][AB] instructions.  Select them.

Differential Revision: https://reviews.llvm.org/D91087
2021-11-18 15:21:30 -08:00
Kazu Hirata 7ca14f6044 [llvm] Use range-based for loops (NFC) 2021-11-18 09:09:52 -08:00
Simon Pilgrim 3821d2ab3b [X86] LowerRotate - pull out repeated is ISD::ROTL check. NFC. 2021-11-18 17:03:19 +00:00
Mubashar Ahmad 8e47b83ec9 [AArch64][ARM] Enablement of Cortex-A710 Support
Phabricator review: https://reviews.llvm.org/D113256
2021-11-18 10:58:05 +00:00
David Sherwood 9cef7c1ca9 [CodeGen][SVE] Add missing isel patterns for vector_reverse
We were missing patterns for vector_reverse of unpacked FP vector
types, as well as all the supported bfloat vectors.

Tests added here:

  CodeGen/AArch64/named-vector-shuffle-reverse-sve.ll

Differential Revision: https://reviews.llvm.org/D114089
2021-11-18 09:59:26 +00:00
Chen Zheng 9bda9a3980 [PowerPC] fix typos in comments, NFC 2021-11-18 08:55:23 +00:00
Phoebe Wang a9fba2be35 [X86][ABI] Do not return float/double from x87 registers when x87 is disabled
This is aligned with GCC's behavior.
Also, alias `-mno-fp-ret-in-387` to `-mno-x87`, which lets us fix pr51498.

Reviewed By: nickdesaulniers

Differential Revision: https://reviews.llvm.org/D112143
2021-11-18 12:31:08 +08:00
Phoebe Wang de34a940ae [X86] Add -mskip-rax-setup support to align with GCC
The AMD64 ABI mandates that the caller specify the number of used SSE registers
when passing variable arguments.
GCC also provides the option -mskip-rax-setup to skip the setup of rax when
SSE is disabled. This helps reduce code size; see pr23258.
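
For illustration (a hedged sketch; exact codegen varies by target and flags):

  #include <stdio.h>
  void log_value(int x) {
    /* varargs callee: the ABI says %al carries the number of vector
       registers used, so a "xorl %eax, %eax" normally precedes the call;
       with SSE disabled and -mskip-rax-setup, that setup is omitted. */
    printf("%d\n", x);
  }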

Reviewed By: nickdesaulniers

Differential Revision: https://reviews.llvm.org/D112413
2021-11-18 11:20:32 +08:00
Zi Xuan Wu 24d1673c8b [llvm-tblgen][RISCV] Make llvm-tblgen RISCVCompressInstEmitter to be common infra across different targets
Not only RISCV but also other targets, such as CSKY, have compressed instructions mixed with normal instructions.
To reuse the basic infra to compress/uncompress and predict instructions, we need to restructure the RISCVCompressInstEmitter
and make it more general and suitable for other targets.

Differential Revision: https://reviews.llvm.org/D113475
2021-11-18 11:14:27 +08:00
Zarko Todorovski 5b8bbbecfa [NFC][llvm] Inclusive language: reword and remove uses of sanity in llvm/lib/Target
Reworded or removed code comments that contain `sanity check` and `sanity
test`.
2021-11-17 21:59:00 -05:00
Luo, Yuanke c4dba47196 [X86][AMX] Don't emit tilerelease for old AMX intrinsics.
We should avoid mixing the old AMX intrinsics with the new AMX intrinsics. For
the old intrinsics, the user is responsible for invoking the tile release. This
patch checks whether the compiler generated any tile config; if so, it emits a
tilerelease instruction, and otherwise it doesn't.

Differential Revision: https://reviews.llvm.org/D114066
2021-11-18 09:28:32 +08:00
Simon Pilgrim 5f99f771ec [X86] splitVector - only extract lower half subvector from splats
If we're splitting a source vector that is a splat (with no undefs), just extract (for free) the lower half subvector and use it for both halves.
2021-11-17 19:38:43 +00:00
Simon Pilgrim 3020608b61 Fix MSVC signed/unsigned mismatch warning. NFC. 2021-11-17 18:59:23 +00:00
Simon Pilgrim e76032c173 [X86] LowerRotate - improve vXi8 rotate-by-scalar lowering with direct use of (extended) shift-by-scalar helpers.
If we're rotating vXi8 by a splatted amount, then unpack to vXi16, perform a SHL by the (extended) scalar, and then pack the results.

This is a vector equivalent to the "rotl(x,y) -> (((aext(x) << bw) | zext(x)) << (y & (bw-1))) >> bw" style expansion we do for scalars in LowerFunnelShift.
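
A scalar sketch of that expansion for bw = 8 (illustrative only, not the
vector lowering itself):

  #include <cassert>
  #include <cstdint>
  static uint8_t rotl8(uint8_t x, unsigned y) {
    y &= 7;
    return (uint8_t)((x << y) | (x >> ((8 - y) & 7)));
  }
  int main() {
    for (unsigned x = 0; x < 256; ++x)
      for (unsigned y = 0; y < 8; ++y) {
        uint16_t w = (uint16_t)((x << 8) | x);  // (aext(x) << bw) | zext(x)
        uint8_t r = (uint8_t)((uint16_t)(w << (y & 7)) >> 8);
        assert(r == rotl8((uint8_t)x, y));
      }
  }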

I think we can usefully use this for other vector types and vector funnel-shifts in the future, depending on how we expand beyond D113192 for matching rotations/funnel-shifts for more types/ops.
2021-11-17 18:59:23 +00:00
Craig Topper 0274be28d7 [RISCV] Lower vector CTLZ_ZERO_UNDEF/CTTZ_ZERO_UNDEF by converting to FP and extracting the exponent.
If we have a large enough floating point type that can exactly
represent the integer value, we can convert the value to FP and
use the exponent to calculate the leading/trailing zeros.

The exponent will contain log2 of the value plus the exponent bias.
We can then remove the bias and convert from log2 to leading/trailing
zeros.

This doesn't work for zero since the exponent of zero is zero so we
can only do this for CTLZ_ZERO_UNDEF/CTTZ_ZERO_UNDEF. If we need
a value for zero we can use a vmseq and a vmerge to handle it.

We need to be careful to make sure the floating point type is legal.
If it isn't, we'll continue using the integer expansion. We could split the vector
and concatenate the results but that needs some additional work and evaluation.
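
A scalar sketch of the idea for 32-bit elements (hedged; the real lowering
operates on whole vectors and uses the vector FP conversions):

  #include <cassert>
  #include <cstdint>
  #include <cstring>
  // cttz_zero_undef: isolate the lowest set bit (an exact power of two),
  // convert to float, and read the unbiased exponent.
  static unsigned cttz32(uint32_t v) {  // precondition: v != 0
    float f = (float)(v & -v);
    uint32_t bits;
    std::memcpy(&bits, &f, sizeof bits);
    return (bits >> 23) - 127;          // remove the float exponent bias
  }
  // ctlz_zero_undef: double's 52-bit mantissa holds any uint32_t exactly,
  // so the unbiased exponent is floor(log2(v)).
  static unsigned ctlz32(uint32_t v) {  // precondition: v != 0
    double d = (double)v;
    uint64_t bits;
    std::memcpy(&bits, &d, sizeof bits);
    return 31 - ((unsigned)((bits >> 52) & 0x7ff) - 1023);
  }
  int main() {
    assert(cttz32(0x100u) == 8 && ctlz32(0x100u) == 23);
    assert(cttz32(1u) == 0 && ctlz32(1u) == 31);
  }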

Differential Revision: https://reviews.llvm.org/D111904
2021-11-17 10:29:41 -08:00
Nico Weber 103cc914d6 [x86/asm] Make variants work when converting at&t inline asm input to intel asm output
`asm` always has AT&T-style input (`asm inteldialect` has Intel-style asm
input), so EmitGCCInlineAsmStr() always has to pick the same variant since it
cares about the input asm string, not the output asm string.

For PowerPC, that default variant is 1. For other targets, it's 0.

Without this, the included test case errors out with

    error: unknown use of instruction mnemonic without a size suffix
             mov rax, rbx

since it picks the intel branch and then tries to interpret it as AT&T
when selecting intel-style output with `-x86-asm-syntax=intel`.
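
For reference, "variants" here refers to the `{att|intel}` dialect-alternative
syntax in the template string; a hedged example (function and operands are
illustrative):

  /* One template serving both dialects: AT&T text before the '|',
     Intel text after it. */
  unsigned long read_rbx(void) {
    unsigned long v;
    asm("mov {%%rbx, %0|%0, rbx}" : "=r"(v));
    return v;
  }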

Differential Revision: https://reviews.llvm.org/D113894
2021-11-17 13:23:18 -05:00
Mirko Brkusanin db6bc2ab51 [AMDGPU][GlobalISel] Fold G_FNEG above when users cannot fold mods
If possible, fold the fneg into the instruction above when its users cannot
fold the mods and we know it will decrease the instruction count.
This follows the same logic as the SDAG combiner in choosing opportunities to combine.

Differential Revision: https://reviews.llvm.org/D112827
2021-11-17 14:25:13 +01:00
Jay Foad 3264e95938 [CodeGen] Update LiveIntervals in TargetInstrInfo::convertToThreeAddress
Delegate updating of LiveIntervals to each target's
convertToThreeAddress implementation, instead of repairing LiveIntervals
after the fact in TwoAddressInstruction::convertInstTo3Addr.

Differential Revision: https://reviews.llvm.org/D113493
2021-11-17 10:16:47 +00:00
Roman Lebedev 2037ec725f
[X86][Costmodel] `*ext v64i1 to v32i16` can appear after legalization, cost is same as for `*ext v32i1 to v32i16`
Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D113914
2021-11-17 12:02:50 +03:00
Roman Lebedev 23b194bf18
[X86][Costmodel] `trunc v32i16 to v64i1` can appear after legalization, cost is same as for `trunc v32i16 to v32i1`
Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D113913
2021-11-17 12:02:50 +03:00
Benjamin Kramer 8b8e8704ce [PowerPC] Fix a nullptr dereference
LiMI1/LiMI2 can be null, so don't call a method on them before checking.
Found by ubsan.
2021-11-16 23:52:42 +01:00
Victor Huang ae27ca9a67 [PowerPC] PPC backend optimization on conditional trap instructions
This patch adds a PPC back-end optimization that analyzes the arguments of a
conditional trap instruction and does one of the following:
1. Deletes it if it never traps
2. Replaces it with an unconditional trap if it always traps
3. Otherwise keeps it
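
For illustration, a hedged sketch using the XL-compatibility trap builtin
(assuming __builtin_ppc_tw, where the constant TO=4 encodes trap-if-equal):

  void constant_traps(void) {
    __builtin_ppc_tw(1, 2, 4);  /* never traps  -> case 1: deleted       */
    __builtin_ppc_tw(1, 1, 4);  /* always traps -> case 2: unconditional */
  }
  void data_dependent(int a, int b) {
    __builtin_ppc_tw(a, b, 4);  /* unknown at compile time -> case 3: kept */
  }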

Reviewed By: nemanjai, amyk, PowerPC

Differential Revision: https://reviews.llvm.org/D111434
2021-11-16 13:11:57 -06:00
Kazu Hirata ee0133dc6d [llvm] Use range-for loops (NFC) 2021-11-16 09:01:56 -08:00
David Sherwood 4607459022 [AArch64] Fix TypeSize->uint64_t implicit conversion in AArch64ISelLowering::hasAndNot
For now I've just changed the code to only return true from
AArch64ISelLowering::hasAndNot if the vector is fixed-length.
Once we have the right patterns or DAG combines to use bic/bif
we can also enable this for SVE.

Test added here:

  CodeGen/AArch64/vselect-constants.ll

Differential Revision: https://reviews.llvm.org/D113994
2021-11-16 16:25:16 +00:00
Jon Chesterfield 30b29db7c7 [amdgpu] Don't crash on empty global ctor/dtor
Global ctor/dtor can be an empty array, which is a Constant, not a
ConstantArray. The cast<ConstantArray> therefore asserts / crashes.
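
A hedged sketch of the defensive pattern (simplified and hypothetical, not
the actual AMDGPU code):

  #include "llvm/IR/Constants.h"
  #include "llvm/IR/Module.h"
  using namespace llvm;
  static void visitCtors(Module &M) {
    GlobalVariable *GV = M.getGlobalVariable("llvm.global_ctors");
    if (!GV || !GV->hasInitializer())
      return;
    // An empty list is a zeroinitializer (ConstantAggregateZero), not a
    // ConstantArray, so dyn_cast and bail out rather than cast<>.
    auto *Init = dyn_cast<ConstantArray>(GV->getInitializer());
    if (!Init)
      return;
    for (Value *Op : Init->operands())
      (void)Op; // ... process each ctor entry ...
  }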

Reviewed By: arsenm

Differential Revision: https://reviews.llvm.org/D113800
2021-11-16 14:36:08 +00:00
Matt Devereau f526c600c0 [AArch64][SVE] Instcombine SVE LD1/ST1 to stock LLVM IR
InstCombine AArch64 LD1/ST1 intrinsics to llvm.masked.load/llvm.masked.store,
and LD1/ST1 to plain load/store, when a ptrue-all predicate pattern operand
is present.

This allows existing IR optimizations such as dead-load removal to
occur.
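
For example (an ACLE sketch, assuming <arm_sve.h>):

  #include <arm_sve.h>
  // With a ptrue-all governing predicate, the ld1 intrinsic is equivalent
  // to an ordinary vector load, so it can be rewritten as stock IR and
  // exposed to generic optimizations.
  svfloat32_t load_all(const float *p) {
    return svld1_f32(svptrue_b32(), p);
  }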

Differential Revision: https://reviews.llvm.org/D113489
2021-11-16 11:10:23 +00:00
jacquesguan 6405e8b584 [RISCV] Refactor some rvv instructions' definition with foreach.
Use foreach to simplify the definitions of rvv instructions that encode eew in their mnemonic and encoding. Also fix a scheduling bug.

Differential Revision: https://reviews.llvm.org/D113453
2021-11-16 15:20:45 +08:00
Serguei Katkov 0ecb12a27f [STATEPOINT] Force implicit-def for lr register.
STATEPOINT instructions behave similarly to call instructions.
On aarch64, the BL instruction implicitly defines the lr register, so
STATEPOINT instructions should do the same.
However, STATEPOINT is a general pseudo instruction, and I could not find
a way to override the list of implicit defs for a specific target.

So this patch post-processes STATEPOINT insertion by
adding an implicit dead def for lr.

Reviewers: reames, loicottet, ostannard
Reviewed By: reames
Subscribers: danilaml, hiraditya, kristof.beyls, llvm-commits, yrouban
Differential Revision: https://reviews.llvm.org/D111114
2021-11-16 12:52:00 +07:00
Kazu Hirata 7f00806a6a [llvm] Use make_early_inc_range (NFC) 2021-11-15 21:28:46 -08:00
Amara Emerson dc84770d55 [GlobalISel] Add a store-merging optimization pass and enable for AArch64.
This is a first attempt at a constant value consecutive store merging pass,
a counterpart to the DAGCombiner's store merging optimization.

The high level goals of this pass:

* Have a simple and efficient algorithm. As close to linear time as we can get.
  Thus, prioritizing scalability of the algorithm over merging every corner case
  we can find. The DAGCombiner's store merging code has been the source of
  compile time and complexity issues in the past and I wanted to avoid that.
* Don't introduce any new data structures for ordering memory operations. In MIR,
  we don't have the concept of chains like we do in the DAG, and the instruction
  order is stricter than enforcing ordering with graph edges. Although I
  considered adding something similar, I couldn't justify the overhead.

The pass is currently split into 3 main parts. The main store merging code focuses
on identifying candidate stores and managing the candidate group that's under
consideration for merging. Analyzing addressing of stores is a potentially
complex part and for now there's just a basic implementation to identify easy
cases. Finally, the other main bit of complexity is the alias analysis, which
tries to follow the same logic as the DAG's AA.

Currently this implementation only supports merging of constant stores. Stores
of arbitrary variables are technically possible with a very small change, but
the DAG chooses not to do this. Doing so here makes most code worse since
there's extra overhead in merging values into wider registers.
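
For instance (a hedged example of the kind of pattern that merges):

  // Four adjacent constant byte stores can become a single 32-bit store,
  // e.g. one "str wzr, [x0]" on AArch64.
  void zero4(char *p) {
    p[0] = 0;
    p[1] = 0;
    p[2] = 0;
    p[3] = 0;
  }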

On AArch64 -Os, this optimization results in very minor savings on CTMark.

Differential Revision: https://reviews.llvm.org/D109131
2021-11-15 21:10:39 -08:00
Craig Topper 391b0ba603 [RISCV] Override TargetLowering::hasAndNot for Zbb.
Differential Revision: https://reviews.llvm.org/D113937
2021-11-15 18:44:07 -08:00
Matt Arsenault 659887b405 AMDGPU: Mark prolog/epilog SCC defs as dead
A future change will add SCC liveness checks. Since we are still
relying on forward register scavenging, add dead flags to avoid
spuriously detecting SCC as live.
2021-11-15 21:35:06 -05:00
Ben Shi 4c3d916c4b [RISCV] Optimize immediate materialisation with SH*ADD
Use LUI+SH*ADD+ADDI to compose specific immediates.
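
A hedged sketch of the matching idea (names and exact legality checks are
illustrative, not the actual RISCVMatInt logic):

  #include <cstdint>
  // Split imm into hi + lo12. If hi is divisible by 3, 5, or 9 and the
  // quotient q is LUI-materializable (low 12 bits zero, signed 32-bit),
  // then LUI t, q>>12 ; SHxADD t, t, t ; ADDI t, t, lo12 produces imm.
  bool mightUseLuiShxAddAddi(int64_t imm) {
    int64_t lo12 = ((imm & 0xfff) ^ 0x800) - 0x800; // sign-extend low 12 bits
    int64_t hi = imm - lo12;
    for (int shamt = 1; shamt <= 3; ++shamt) {
      int64_t factor = (1 << shamt) + 1;  // SH1ADD: 3, SH2ADD: 5, SH3ADD: 9
      if (hi != 0 && hi % factor == 0) {
        int64_t q = hi / factor;          // candidate LUI value
        if ((q & 0xfff) == 0 && q == (int64_t)(int32_t)q)
          return true;
      }
    }
    return false;
  }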

Reviewed By: craig.topper, luismarques

Differential Revision: https://reviews.llvm.org/D113568
2021-11-15 23:34:28 +00:00
Jonas Paulsson 1c3ef9ef4a [SystemZ] Support symbolic displacements.
This patch adds support for symbolic displacements, e.g. 'lg %r0,
sym(%r1)', which is done using relocations. This is needed to compile the
kernel without disabling the integrated assembler.

Review: Ulrich Weigand

Differential Revision: https://reviews.llvm.org/D113341
2021-11-15 16:46:31 -05:00
Fangrui Song fee52fe0ad [X86] Fix -Wunused-variable in -DLLVM_ENABLE_ASSERTIONS=off build. NFC 2021-11-15 13:10:47 -08:00
Lei Huang f50c6c1718 [PowerPC] Fix 32bit vector insert instructions for ISA3.1
The platform-independent ISD::INSERT_VECTOR_ELT takes an element index,
but vins* instructions take a byte index. Update the 32-bit td patterns for
vector insert to handle the element index accordingly.

Since vector inserts with a non-constant index are supported in
ISA 3.1, there is no need to use the platform-specific ISD node,
PPCISD::VECINSERT. Update the td pattern to directly use
ISD::INSERT_VECTOR_ELT instead.

Reviewed By: nemanjai, #powerpc

Differential Revision: https://reviews.llvm.org/D113802
2021-11-15 14:36:39 -06:00
Simon Pilgrim 441de2536b [X86] Add generic splitVectorOp helper. NFC
Update splitVectorIntUnary/splitVectorIntBinary to use this internally, after some operand type sanity checks.

Avoid code duplication and makes it easier to split other vector instruction forms in the future.
2021-11-15 17:59:23 +00:00
Craig Topper f59307bfdc [RISCV] Teach needVSETVLIPHI to handle mask register instructions.
This handles the case where the mask register instruction input
comes from a Phi of vsetvlis. If the VLMAX is the same as the VLMAX
required by the mask register instruction, we can avoid a vsetvli.

Reviewed By: frasercrmck

Differential Revision: https://reviews.llvm.org/D113204
2021-11-15 09:57:28 -08:00
Simon Pilgrim fc7c1cebbc [X86] LowerFunnelShift - pull out repeated EltSizeInBits variable. NFC. 2021-11-15 17:11:44 +00:00