Commit Graph

Kang Zhang d1b51c5de7 [PowerPC] Modify the hasSideEffects of some VSX instructions from 1 to 0
Summary:
If we don't set the hasSideEffects bit in our td file, `llvm-tblgen`
will set it to true for any instruction that has no match pattern.
The six instructions below don't set the hasSideEffects flag and don't have
a match pattern, so `llvm-tblgen` sets their hasSideEffects flag to true.

But in fact these instructions don't modify any special register and have no
other side effects, so they should not be marked as having side effects.
This patch modifies the hasSideEffects flag of the instructions below from 1 to 0.

```
VEXTUHLX
VEXTUHRX
VEXTUWLX
VEXTUWRX
VSPLTBs
VSPLTHs
```

Reviewed By: jsji

Differential Revision: https://reviews.llvm.org/D71391
2019-12-28 09:04:54 +00:00
Matt Arsenault a33cab0f06 AMDGPU: Adjust test so it will work with GlobalISel
This is mostly a workaround for not handling the mubuf store path yet.
2019-12-27 19:37:39 -05:00
Matt Arsenault 5ce2ca524e AMDGPU/GlobalISel: Use SReg_32 for readfirstlane constraining
This matches the DAG behavior, where we don't use SReg_32_XM0
everywhere anymore, and fixes the failure to coalesce the copies into m0.
2019-12-27 17:52:12 -05:00
Matt Arsenault 3213ce966b TailDuplication: Clear NoPHIs property
The early tail duplicator pass introduces new PHIs, so a MIR test that
infers the NoPHIs property because there were none in the input would fail
the verifier after running.
2019-12-27 14:06:31 -05:00
Matt Arsenault e088846712 AMDGPU/GlobalISel: Fix extra result register in fdiv64 lowering
There ended up being two result registers, which would fail on
select. It was really defining a new temp register in the correct def
position, instead of using the correct result register.
2019-12-27 08:49:43 -05:00
Matt Arsenault ed9a56b0f2 AMDGPU/GlobalISel: Select some 128-bit load/stores 2019-12-27 08:49:43 -05:00
Craig Topper fca4736874 [X86] Allow v2i32->v2f32 strict and non-strict uint_to_fp to be widened to v4i32->v4f32 under avx512.
With avx512vl we get v4i32->v4f32 uint_to_fp instructions. With
avx512f we get v16i32->v16f32 instructions which we can use to
emulate v4i32->v4f32.
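
A hedged intrinsics sketch (mine, not the backend's actual code) of the
emulation: widen the 128-bit vector to 512 bits, use the AVX512F 16-wide
convert, and take the low 128 bits back out. For the strict-FP variants in
nearby commits, the upper lanes would need to be zeroed rather than left
undefined.

```
#include <immintrin.h>

// Emulate a v4i32 -> v4f32 unsigned conversion with the AVX512F
// 16-wide VCVTUDQ2PS when AVX512VL is unavailable.
__m128 cvtepu32_ps_via_avx512f(__m128i x) {
  __m512i wide = _mm512_castsi128_si512(x); // upper lanes undefined
  __m512 f = _mm512_cvtepu32_ps(wide);      // 16-wide convert
  return _mm512_castps512_ps128(f);         // keep the low 4 floats
}
```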
2019-12-27 00:28:44 -08:00
Craig Topper 931946bb1d [X86] Add v2i32->v2f32 non-strict sint_to_fp/uint_to_fp tests. NFC 2019-12-27 00:28:44 -08:00
Craig Topper 20aab49492 [X86] Custom widen v2i32->v2f32 strict_sint_to_fp to avoid scalarization. 2019-12-27 00:28:44 -08:00
Craig Topper ecbaf152f8 [X86] Custom widen 128/256-bit vXi32 fp_to_uint on avx512f targets without avx512vl. Similar for vXi64 on avx512dq without avx512vl.
Summary:
Previously we did this with isel patterns that used garbage in
the widened part of the source. But that's not valid for strictfp.
So now we custom widen and use zeroes for the widened elements for
strictfp.

This replaces D71864.

Reviewers: RKSimon, spatel, andrew.w.kaylor, pengfei, LiuChen3

Reviewed By: pengfei

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D71879
2019-12-26 22:04:40 -08:00
Craig Topper 50fb3957c1 [X86] Custom widen strict v2f32->v2i32 by padding with zeroes.
For non-strict, generic type legalization will take care of this,
but that doesn't happen currently for strict nodes.
2019-12-26 21:45:18 -08:00
Craig Topper 53ee806d93 [X86][FPEnv] Promote some float strictfp operations to double on i686-pc-windows-msvc to match what we do for non-strict.
The float libcalls are inlined in MSVC's math header where they
just cast to double and use the double libcall. Do the same when
we emit libcalls.
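
A hedged illustration (my sketch; the function name is hypothetical) of the
header pattern this promotion mimics: the float function is an inline
wrapper that widens to double, calls the double libcall, and narrows the
result.

```
#include <math.h>

// Hypothetical stand-in for the inline wrappers in MSVC's math headers:
// the float variant is a cast to double around the double libcall.
static inline float my_sinf(float x) {
  return (float)sin((double)x);
}
```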
2019-12-26 20:22:24 -08:00
Craig Topper e647ff0d7d [X86] Add tests for constrained float intrinsics on i686-pc-windows-msvc. NFC
We need to promote these to double due to missing libcalls on
Windows.
2019-12-26 20:09:34 -08:00
Craig Topper a5d266b9cf [X86] Add custom legalization for strict_uint_to_fp v2i32->v2f32.
I believe the algorithm we use for non-strict is exception safe
for strict. The fsub won't generate any exceptions. After it we
will have an exact version of the i32 integer in a double. Then
we just round it to f32. That rounding will generate a precision
exception if it can't be represented exactly.
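
A scalar C++ model of that algorithm (my sketch, not the backend code):
build the exact value of the i32 in a double with the 2^52 significand
trick, then let the final f32 rounding raise any precision exception.

```
#include <cstdint>
#include <cstring>

// Scalar model of the exception-safe u32 -> f32 conversion.
float u32_to_f32(uint32_t x) {
  // OR x into the significand of 2^52; the resulting bit pattern is the
  // double 2^52 + x, which is exact for any 32-bit x.
  uint64_t bits = 0x4330000000000000ULL | x;
  double d;
  std::memcpy(&d, &bits, sizeof(d));
  // The fsub is exact (both operands share the 2^52 scale), so it raises
  // no exceptions and leaves the exact value of x in a double.
  d -= 0x1p52;
  // The only rounding happens here, raising a precision exception if x
  // is not exactly representable in f32.
  return static_cast<float>(d);
}
```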
2019-12-26 19:10:26 -08:00
Craig Topper bc202547d5 [X86] Add test cases for v2i32->v2f32 strict_sint_to_fp/strict_uint_to_fp. NFC 2019-12-26 19:10:26 -08:00
Liu, Chen3 1a7b69f5dd add custom operation for strict fpextend/fpround
Differential Revision: https://reviews.llvm.org/D71892
2019-12-27 08:28:33 +08:00
Craig Topper f953882113 [X86] Custom widen 128/256-bit vXi32 uint_to_fp on avx512f targets without avx512vl. Similar for vXi64 sint_to_fp/uint_to_fp on avx512dq without avx512vl.
Previously we widened these through isel patterns, but that
didn't work for STRICT_ nodes. Those need to be padded with
zeroes in the upper bits which is harder to do in isel patterns.
2019-12-26 14:46:56 -08:00
Craig Topper 90ff34e6ab [X86] Add custom widening for v2i32->v2f64 strict_uint_to_fp with AVX512F, but not AVX512VL.
Previously we were widening with isel patterns, but that wasn't
exception safe for strict FP. So now we widen to v4i32->v4f64
during type legalization. And then let op legalization further
widen to v8i32->v8f64.

The vec_int_to_fp.ll changes are caused by us no longer narrowing
extracts of strict_uint_to_fp to the v4i32->v2f64 instruction
without AVX512VL only to have isel rewiden it. Now we just keep
it wide throughout. So we don't have an opportunity to narrow
the load.
2019-12-26 13:40:56 -08:00
Craig Topper bb0138729b [X86] Add custom widening for v2f64->v2i32 strict_fp_to_uint with avx512f, but not avx512vl.
AVX512F added instructions for vector fp_to_uint conversions. With
AVX512VL we can use a specific instruction that does v2f64->v4i32 with
zeroes in the 2 extra elements. For non-strict nodes without AVX512VL
we relied on type legalization to turn it to v4f64->v4i32 which would
later be widened by op legalization to v8f64->v8i32. But type legalization
doesn't currently widen strict nodes since it doesn't know how to
safely and efficiently pad the extra elements. But for X86 we know
padding with zeroes is safe and efficient so do that ourselves.
2019-12-26 12:42:27 -08:00
Yonghong Song ffd57408ef [BPF] Enable relocation location for load/store/shifts
Previously, a BTF field relocation was always attached to an assignment like
   r1 = 4
which is converted from an ld_imm64 instruction.

This patch adds an optimization so that the relocated instruction
may also be a load, store, or shift. Specifically, the following
insns may also carry a relocation, except BPF_MOV:
  LDB, LDH, LDW, LDD, STB, STH, STW, STD,
  LDB32, LDH32, LDW32, STB32, STH32, STW32,
  SLL, SRL, SRA

To accomplish this, a few BPF target-specific, codegen-only
instructions are introduced. They are generated in the backend's BPF
SimplifyPatchable phase, which runs early in llc while SSA form is
still available. The new codegen-only instructions are converted to
real instructions at the codegen and BTF emission stage.

Note that, as revealed by a few tests, this optimization might
actually generate more relocations:
Scenario 1:
  if (...) {
    ... __builtin_preserve_field_info(arg->b2, 0) ...
  } else {
    ... __builtin_preserve_field_info(arg->b2, 0) ...
  }
  The compiler could do CSE to keep only one relocation. But if both
  of the above are translated into codegen-internal instructions,
  the compiler will not be able to do that.
Scenario 2:
  offset = ... __builtin_preserve_field_info(arg->b2, 0) ...
  ...
  ...  offset ...
  ...  offset ...
  ...  offset ...
  For whatever reason, the compiler might temporarily do copy
  propagation of the right-hand side of the "offset" assignment, like
  ...  __builtin_preserve_field_info(arg->b2, 0) ...
  ...  __builtin_preserve_field_info(arg->b2, 0) ...
  and CSE will be able to deduplicate later.
  But if these intrinsics are converted to BPF pseudo instructions,
  they can no longer be deduplicated.

I do not expect a big instruction count difference.
It may actually reduce the instruction count, since the relocation
now sits deeper in the instruction dependency chain.
For example, for test offset-reloc-fieldinfo-2.ll, this patch
generates 7 instead of 6 relocations for non-alu32 mode, but it
actually reduced instruction count from 29 to 26.

Differential Revision: https://reviews.llvm.org/D71790
2019-12-26 09:07:39 -08:00
Craig Topper 4e6b0dd681 [X86] Add custom lowering for v2i64->v2f32 strict_sint_to_fp/strict_uint_to_fp for avx512dq+avx512vl targets.
With avx512dq+avx512vl we have an instruction that implements this and
places zeroes in the upper 64 bits of the destination xmm register.
2019-12-26 08:58:34 -08:00
Craig Topper 7f071958cd [X86] Add test cases for v2i64->v2f32 strict_sint_to_fp/strict_uint_to_fp. 2019-12-26 08:58:34 -08:00
Craig Topper de60c2633b [X86] Add avx512f and avx512dq+vl command lines to the vector strictfp int<->fp tests. 2019-12-26 08:58:34 -08:00
czhengsz 1b57749a53 [PowerPC] stop folding if the result rlwinm mask wraps while the original rlwinm mask does not.
```
%1:g8rc = RLWINM8 %0:g8rc, 0, 16, 9
%2:g8rc = RLWINM8 killed %1:g8rc, 0, 0, 31
->
%2:g8rc = RLWINM8 %0:g8rc, 0, 16, 9
```

The above folding is wrong. Before the transformation, %2:g8rc is a 32-bit
value: the final mask (0, 31) covers only the low word, clearing the high
32 bits. After the transformation, %2:g8rc becomes a 64-bit value, because
the wrapped mask (16, 9) also covers bits in the high word.
This patch fixes the issue.
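
A hedged C++ model of the 64-bit rlwinm mask (my sketch, using IBM bit
numbering with bit 0 as the MSB): the mask runs from bit MB+32 through bit
ME+32 and wraps around when MB > ME, which is why a wrapped mask reaches
into the upper 32 bits.

```
#include <cstdint>

// Mask applied by RLWINM8 in 64-bit mode: bits MB+32 .. ME+32,
// wrapping through bit 63 -> bit 0 when MB > ME.
uint64_t rlwinm8Mask(unsigned mb, unsigned me) {
  uint64_t mask = 0;
  for (unsigned i = mb + 32;; i = (i + 1) & 63) {
    mask |= 0x8000000000000000ULL >> i;  // bit 0 is the MSB
    if (i == me + 32) break;
  }
  return mask;
}
// rlwinm8Mask(0, 31) == 0x00000000FFFFFFFF  (clears the high word)
// rlwinm8Mask(16, 9) == 0xFFFFFFFFFFC0FFFF  (wraps into the high word)
```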

Reviewed by: steven.zhang

Differential Revision: https://reviews.llvm.org/D71833
2019-12-25 21:56:18 -05:00
Kang Zhang 6d88b7d6e7 [PowerPC] Modify the hasSideEffects of MTLR and MFLR from 1 to 0
Summary:
If we don't set the hasSideEffects bit in our td file, `llvm-tblgen`
will set it to true for any instruction that has no match pattern.
The instructions `MTLR` and `MFLR` don't set the hasSideEffects flag and
don't have a match pattern, so `llvm-tblgen` sets their hasSideEffects flag
to true.
But in fact we can use `[LR]` to model the two instructions, so they should
not have side effects.

This patch modifies the hasSideEffects flag of MTLR and MFLR from 1 to 0.

Reviewed By: jsji

Differential Revision: https://reviews.llvm.org/D71390
2019-12-26 02:12:32 +00:00
Wang, Pengfei 472bded3ed [X86] Enable STRICT_SINT_TO_FP/STRICT_UINT_TO_FP on X86 backend
Summary: Enable STRICT_SINT_TO_FP/STRICT_UINT_TO_FP on X86 backend

Reviewers: craig.topper, RKSimon, LiuChen3, uweigand, andrew.w.kaylor

Subscribers: hiraditya, llvm-commits, LuoYuanke

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D71871
2019-12-26 08:15:13 +08:00
Craig Topper c5b4a2386b [X86] Use zero vector to extend to 512-bits for strict_fp_to_uint v2i1->v2f64 on targets with AVX512F, but not AVX512VL.
In the worst case, this requires a 128-bit move instruction to
implicitly zero the upper bits. In the common case, we should
recognize the producing instruction already zeroed the upper bits.
2019-12-25 10:46:00 -08:00
Craig Topper 4af5b23db3 [X86FixupSetCC] Remember the preceding eflags defining instruction while we're scanning the basic block instead of looking back for it.
Summary:
We're already scanning forward through the basic block. Might as
well just remember eflags defs instead of doing a bounded search
backwards later.

Based on a comment in D71841.

Reviewers: RKSimon, spatel, uweigand

Reviewed By: uweigand

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D71865
2019-12-25 10:26:13 -08:00
Liu, Chen3 8304781cae Add missing strict_fp_to_int
Differential Revision: https://reviews.llvm.org/D71867
2019-12-25 16:10:10 +08:00
Fangrui Song c853c73d3d [Thumb][test] Fix CodeGen/Thumb/PR17309.ll after llvmorg-10-init-16046-ga36ddf0aa9d
All of

"no-frame-pointer-elim-non-leaf"
"no-frame-pointer-elim-non-leaf"="true"
"no-frame-pointer-elim-non-leaf"="false"

mean "frame-pointer"="non-leaf", which is quite counter-intuitive.
llvmorg-10-init-16046-ga36ddf0aa9d accidentally broke it.

This fixes the -DLLVM_ENABLE_EXPENSIVE_CHECKS=On test:

```
*** Bad machine code: Non-flag-setting Thumb1 mov is v6-only ***
- function:    pass_C
- basic block: %bb.0 entry (0x1fc9bf0)
- instruction: $r0 = tMOVr killed $r6, 14, $noreg
```
2019-12-24 16:58:12 -08:00
Fangrui Song a36ddf0aa9 Migrate function attribute "no-frame-pointer-elim"="false" to "frame-pointer"="none" as cleanups after D56351 2019-12-24 16:27:51 -08:00
Fangrui Song eb16435b5e Migrate function attribute "no-frame-pointer-elim-non-leaf" to "frame-pointer"="non-leaf" as cleanups after D56351 2019-12-24 16:05:15 -08:00
Fangrui Song 502a77f125 Migrate function attribute "no-frame-pointer-elim" to "frame-pointer"="all" as cleanups after D56351 2019-12-24 15:57:33 -08:00
Craig Topper c06e53119b [X86] Use 128-bit vector instructions for f32/f64->i64 conversions on 32-bit targets with avx512dq and avx512vl instructions.
On 32-bit targets we can't use the scalar instruction so we
insert the scalar into a vector and use packed conversions.
Previously we used either v4f32->v4i64 or v4f64->v4i64 to avoid
some complexity creating target specific ISD opcodes for
v4f32->v2i64. But this causes extra vzeroupper instructions and
possibly frequency throttling on Intel CPUs.

This patch changes this to create a 128-bit vector and uses a
target specific ISD opcode if needed.
2019-12-24 11:20:10 -08:00
Craig Topper a21beccea2 [X86] Add STRICT versions of CVTTP2SI, CVTTP2UI, CMPM, and CMPP.
Differential Revision: https://reviews.llvm.org/D71850
2019-12-24 10:07:04 -08:00
Matt Arsenault df5c2159d0 AMDGPU/GlobalISel: Legalize some 16-bit round instructions 2019-12-24 09:53:01 -05:00
Matt Arsenault e351256c0d GlobalISel: Define equivalent node for G_INTRINSIC_TRUNC 2019-12-24 09:53:01 -05:00
Matt Arsenault 9035fa6b54 AMDGPU/GlobalISel: Lower llvm.amdgcn.else 2019-12-24 09:53:01 -05:00
Ulrich Weigand 0d3f782e41 [FPEnv][X86] More strict int <-> FP conversion fixes
Fix several additional problems with the int <-> FP conversion
logic both in common code and in the X86 target. In particular:

- The STRICT_FP_TO_UINT expansion emits a floating-point compare. This
  compare can raise exceptions and therefore needs to be a strict compare.
  I've made it signaling (even though quiet would also be correct) as
  signaling is the more usual default for an LT. This code exists both
  in common code and in the X86 target.

- The STRICT_UINT_TO_FP expansion algorithm was incorrect for strict mode:
  it emitted two STRICT_SINT_TO_FP nodes and then used a select to choose one
  of the results. This can cause spurious exceptions from the STRICT_SINT_TO_FP
  node that ends up not being chosen. I've fixed the algorithm to use only a
  single STRICT_SINT_TO_FP instead (see the sketch after this list).

- The !isStrictFPEnabled logic in DoInstructionSelection would sometimes do
  the wrong thing because it calls getOperationAction using the result VT.
  But for some opcodes, including [SU]INT_TO_FP, getOperationAction needs to
  be called using the operand VT.

- Remove some (obsolete) code in X86DAGToDAGISel::Select that would mutate
  STRICT_FP_TO_[SU]INT to non-strict versions unnecessarily.
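
A scalar C++ model of the fixed expansion (my sketch, not the actual DAG
code): select the input first, run a single signed conversion, then double
the result for large inputs; the low bit is OR'ed in before halving so the
final doubling still rounds correctly.

```
#include <cstdint>

// Scalar model of a u64 -> f64 expansion that performs only one signed
// conversion, so no unselected conversion can raise spurious exceptions.
double u64_to_f64(uint64_t x) {
  bool big = (x >> 63) != 0;                // x >= 2^63?
  // For big inputs, halve first; keep the low bit so the final doubling
  // rounds the same way the direct conversion would.
  uint64_t in = big ? ((x >> 1) | (x & 1)) : x;
  double d = static_cast<double>(static_cast<int64_t>(in));
  return big ? d + d : d;
}
```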

Reviewed by: craig.topper

Differential Revision: https://reviews.llvm.org/D71840
2019-12-23 21:11:45 +01:00
Jay Foad c7c05b0c8a [AMDGPU] Don't create MachinePointerInfos with an UndefValue pointer
Summary:
The only useful information the UndefValue conveys is the address space,
which MachinePointerInfo can represent directly without referring to an
IR value.

Reviewers: arsenm, rampitec

Subscribers: kzhuravl, jvesely, wdng, nhaehnle, yaxunl, dstuttard, tpr, t-tye, hiraditya, Petar.Avramovic, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D71838
2019-12-23 15:58:19 +00:00
Luís Marques 5b1d0dc6bf [RISCV][NFC] Fix use of missing attribute groups in tests 2019-12-23 15:39:04 +00:00
czhengsz 79b3325be0 [PowerPC] NFC - fix the testcase bug of folding rlwinm 2019-12-23 10:28:22 -05:00
Sanjay Patel 8cefc37be5 [DAGCombine] visitEXTRACT_SUBVECTOR - 'little to big' extract_subvector(bitcast()) support
This moves the X86 specific transform from rL364407
into DAGCombiner to generically handle 'little to big' cases
(for example: extract_subvector(v2i64 bitcast(v16i8))). This
allows us to remove both the x86 implementation and the aarch64
bitcast(extract_subvector(bitcast())) combine.

Earlier patches that dealt with regressions initially exposed
by this patch:
rG5e5e99c041e4
rG0b38af89e2c0

Patch by: @RKSimon (Simon Pilgrim)

Differential Revision: https://reviews.llvm.org/D63815
2019-12-23 10:11:45 -05:00
Martin Storsjö 5a751e747d [AArch64] [Windows] Use COFF stubs for calls to extern_weak functions
As the extern_weak target might be missing, resolving to the absolute
address zero, we can't use the normal direct PC-relative branch
instructions (as that would result in relocations out of range).

Improve the classifyGlobalFunctionReference method to set
MO_DLLIMPORT/MO_COFFSTUB, and simplify the existing code in
AArch64TargetLowering::LowerCall to use the return value from
classifyGlobalFunctionReference for these cases.

Add code in both AArch64FastISel and GlobalISel/IRTranslator to
bail out for function calls to extern weak functions on windows,
to let SelectionDAG handle them.

This matches what was done for X86 in 6bf108d77a.

Differential Revision: https://reviews.llvm.org/D71721
2019-12-23 12:13:49 +02:00
Martin Storsjö b774aa1011 [ARM] [Windows] Use COFF stubs for calls to extern_weak functions
As the extern_weak target might be missing, resolving to the absolute
address zero, we can't use the normal direct PC-relative branch
instructions (as that would result in relocations out of range).

Instead check the shouldAssumeDSOLocal method and load the address
from a COFF stub.

This matches what was done for X86 in 6bf108d77a.

Differential Revision: https://reviews.llvm.org/D71720
2019-12-23 12:13:49 +02:00
QingShan Zhang 6d5e35e89d [Power9] Remove PPCISD::XXREVERSE as it has exactly the same semantics as ISD::BSWAP
The custom node PPCISD::XXREVERSE has exactly the same semantics as the generic node ISD::BSWAP.
We need to clean it up, as we have combine rules for bswap in the base class but none for xxreverse.

Differential Revision: https://reviews.llvm.org/D70657
2019-12-23 07:44:33 +00:00
QingShan Zhang 9d1071eac4 [NFC][Test][PowerPC] Add more tests for 'and mask' 2019-12-23 06:59:14 +00:00
Jim Lin da0fe5db99 [AVR] Fix codegen for rotate instructions
Summary:
    This patch introduces the ROLBRd and RORBRd pseudo-instructions,
    which implement the "traditional" rotate operations, instead of
    the AVR rotate instructions that use the carry bit.

    The code is not optimized at all. Especially when dealing with
    loops of rotate instructions, this codegen should be improved some
    day.
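
A hedged C++ sketch of the distinction (the function names are mine): the
new pseudos implement the traditional 8-bit rotate, while AVR's native
ROL/ROR instructions rotate through the carry flag, i.e. a 9-bit rotate.

```
#include <cstdint>

// Traditional 8-bit rotate left: what ROLBRd is meant to compute.
uint8_t rotl8(uint8_t x, unsigned n) {
  n &= 7;
  return n ? (uint8_t)((x << n) | (x >> (8 - n))) : x;
}

// One step of AVR's native ROL: a rotate through the carry flag,
// so the full cycle length is 9 bits, not 8.
uint8_t rol_through_carry(uint8_t x, bool &carry) {
  bool newCarry = (x & 0x80) != 0;
  x = (uint8_t)((x << 1) | (carry ? 1 : 0));
  carry = newCarry;
  return x;
}
```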

Related bug: 41358 <https://bugs.llvm.org/show_bug.cgi?id=41358>

//Note//: This is my first submitted patch.

Reviewers: dylanmckay, Jim

Reviewed By: dylanmckay

Subscribers: hiraditya, llvm-commits, dylanmckay, dsprenkels

Tags: #llvm

Patch by dsprenkels (Daan Sprenkels)

Differential Revision: https://reviews.llvm.org/D60365
2019-12-23 11:41:28 +08:00
Kai Luo 9681dc9627 [PowerPC] Exploit `vrl(b|h|w|d)` to perform vector rotation
Summary:
Currently, we set the legalization action of vector `ISD::ROTL` to
`Expand` in `PPCISelLowering`. However, we can exploit `vrl(b|h|w|d)`
to lower `ISD::ROTL` directly.
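
A hedged per-lane model of `vrlw` (my sketch): each element is rotated left
by the low five bits of the corresponding element of the second operand,
which is exactly the per-element semantics `ISD::ROTL` needs on v4i32.

```
#include <array>
#include <cstdint>

// Per-lane model of vrlw: rotate each 32-bit element of va left by the
// low 5 bits of the matching element of vb.
std::array<uint32_t, 4> vrlw(const std::array<uint32_t, 4> &va,
                             const std::array<uint32_t, 4> &vb) {
  std::array<uint32_t, 4> vd;
  for (int i = 0; i < 4; ++i) {
    unsigned n = vb[i] & 31;
    vd[i] = n ? (va[i] << n) | (va[i] >> (32 - n)) : va[i];
  }
  return vd;
}
```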

Differential Revision: https://reviews.llvm.org/D71324
2019-12-23 03:04:43 +00:00
Carl Ritson 2791667d2e [DAGCombiner] Check term use before applying aggressive FSUB optimisations
Summary:
Without this check, unnecessary FMA instructions are generated when the FSUB terms are reused.
This also has the side effect that the same value is computed to different levels of precision, which can create undesirable effects if the results are used together in subsequent computation.
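A small C++ example of the precision effect described above (my
construction, not from the patch): a fused multiply-add keeps the product
exact internally, so it can differ from the separately rounded
multiply-then-subtract.

```
#include <cmath>
#include <cstdio>

int main() {
  float a = 1.0f + 0x1.0p-12f, c = 1.0f;
  float prod = a * a;               // product rounded to float here
  float separate = prod - c;        // == 0x1p-11
  float fused = std::fma(a, a, -c); // product kept exact: 0x1.0008p-11
  std::printf("%a vs %a\n", separate, fused);
}
```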

Reviewers: arsenm, nhaehnle, foad, tpr, dstuttard, spatel

Reviewed By: arsenm

Subscribers: jvesely, wdng, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D71656
2019-12-23 09:37:58 +09:00