Commit Graph

31856 Commits

Author SHA1 Message Date
Fangrui Song d2bb8c16e7 [MC][TargetMachine] Delete MCTargetOptions::MCPIECopyRelocations
clang/lib/CodeGen/CodeGenModule performs the -mpie-copy-relocations
check and sets dso_local on applicable global variables. We don't need
to duplicate the work in TargetMachine shouldAssumeDSOLocal.

Verified that -mpie-copy-relocations can still emit PC relative
relocations for external variable accesses.

clang -target x86_64 -fpie -mpie-copy-relocations -c => R_X86_64_PC32
clang -target aarch64 -fpie -mpie-copy-relocations -c => R_AARCH64_ADR_PREL_PG_HI21+R_AARCH64_LDST64_ABS_LO12_NC
2020-01-01 00:50:18 -08:00
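For illustration only (a hypothetical sketch, not part of the commit above), the kind of IR pattern being verified; names are made up:
```
; With -mpie-copy-relocations the frontend marks the external global
; dso_local, so the load below can use a direct PC-relative access
; (e.g. R_X86_64_PC32) instead of going through the GOT.
@ext_var = external dso_local global i32

define i32 @read_ext_var() {
  %v = load i32, i32* @ext_var
  ret i32 %v
}
```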
Craig Topper 468a0cb5f3 [X86] Add X87 FCMOV support to X86FlagsCopyLowering.
Fixes PR44396
2019-12-31 20:35:21 -08:00
Matt Arsenault 4d7201e7b9 DAG: Stop trying to fold FP -(x-y) -> y-x in getNode with nsz
This was increasing the number of instructions when fsub was legalized
on AMDGPU with no signed zeros enabled. This fold should be guarded by
hasOneUse, and I don't think getNode should be doing that. The same
fold is already done as a regular combine through isNegatibleForFree.

This does require duplicating the fold to avoid one PPC regression, even
though isNegatibleForFree already does this combine (and properly checks
hasOneUse). In the regression, the outer fneg has nsz but the fsub
operand does not. isNegatibleForFree only sees the operand, and doesn't
see that it is used from an nsz context. An nsz parameter needs to be
added and threaded through isNegatibleForFree to avoid this.
2019-12-31 22:49:51 -05:00
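For illustration only (a hypothetical input, not a test from the commit above), the fold in question in LLVM IR:
```
; -(x - y) can be rewritten as (y - x) only when signed zeros are ignored
; (nsz); the combine should also require that the fsub has a single use.
define float @neg_of_sub(float %x, float %y) {
  %s = fsub nsz float %x, %y
  %n = fneg nsz float %s
  ret float %n
}
```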
Craig Topper 26bdc603f7 [X86] Constant fold KSHIFT of an all zeros vector to just an all zeros vector. 2019-12-31 15:57:39 -08:00
Craig Topper 1cc8a74de3 [X86] Use carry flag from add for (seteq (add X, -1), -1).
If we just subtracted 1 and are checking whether the result is -1, we can use the carry flag from the ADD instead of an explicit CMP. I'm using the same checks for the add users as EmitTest.

Fixes one case from PR44412

Differential Revision: https://reviews.llvm.org/D72019
2019-12-31 15:05:23 -08:00
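For illustration only (a hypothetical input, not from the commit above), the pattern in LLVM IR:
```
; x - 1 == -1 holds exactly when x == 0, which is also the only case where
; adding -1 produces no unsigned carry, so the compare can reuse the carry
; flag already set by the add instead of a separate CMP.
define i1 @dec_is_all_ones(i32 %x) {
  %dec = add i32 %x, -1
  %cmp = icmp eq i32 %dec, -1
  ret i1 %cmp
}
```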
Matt Arsenault 64cf26548a AMDGPU: Precommit test showing extra instructions are introduced 2019-12-31 14:54:57 -05:00
Michael Liao 79d401905f [amdgpu] Fix scoreboard updating on `s_waitcnt_vscnt`.
Summary: - Other counters are accidentally cleared.

Subscribers: arsenm, kzhuravl, jvesely, wdng, nhaehnle, yaxunl, dstuttard, tpr, t-tye, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D71866
2019-12-31 14:20:30 -05:00
Craig Topper 73855e4300 [X86] Add test case for opposite branch condition for PR44412. NFC 2019-12-31 10:58:04 -08:00
Sanjay Patel e6bdecf1cd [AArch64] add test for fsub+fneg; NFC
D72015 proposes to restrict the current behavior.
2019-12-31 10:25:41 -05:00
jasonliu 991f7abdfc [NFC] Add comments in unit test aix-xcoff-toc.ll to clarify the intent
Address David's post-review comment in https://reviews.llvm.org/D71667.
Add comments to clarify what we are testing in that file.
2019-12-31 03:29:50 +00:00
Craig Topper 6185dc0eb3 [X86] Add test case for PR44412. NFC 2019-12-30 14:41:20 -08:00
Eric Astor 4a7aa252a3 [X86][AsmParser] re-introduce 'offset' operator
Summary:
Amend the MS offset operator implementation to fit more closely with its MS counterpart:

    1. InlineAsm: evaluate non-local source entities to their (address) location
    2. Provide a means by which one may acquire the address of an assembly label via MS syntax, rather than yielding a memory reference (i.e. "offset asm_label" and "$asm_label" should be synonymous)
    3. Address PR32530

Based on http://llvm.org/D37461

Fix a broken test where the breakage appears unrelated.

- Set up appropriate memory-input rewrites for variable references.

- Intel-dialect assembly printing now correctly handles addresses by adding "offset".

- Pass offsets as immediate operands (using "r" constraint for offsets of locals).

Reviewed By: rnk

Differential Revision: https://reviews.llvm.org/D71436
2019-12-30 14:35:26 -05:00
Matt Arsenault 7fa0bfe7d5 AMDGPU/GlobalISel: Select mul24 intrinsics 2019-12-30 14:24:25 -05:00
Craig Topper 47a2fd2df4 [X86] Add X86ISD::PCMPGT to SimplifyMultipleUseDemandedBitsForTargetNode.
If only the sign bit is demanded, and the LHS is all zeroes, then
we can bypass the PCMPGT: the sign bit of pcmpgt(0, X), i.e. 0 > X,
is just the sign bit of X.
2019-12-30 10:50:25 -08:00
Petar Avramovic 98f72a5107 [MIPS GlobalISel] Select bitreverse. Recommit
G_BITREVERSE is generated from llvm.bitreverse.<type> intrinsics;
clang generates these intrinsics from __builtin_bitreverse32 and
__builtin_bitreverse64.
Add lower and narrowScalar for G_BITREVERSE.
Lower G_BITREVERSE on MIPS32.

Recommit notes:
Introduce temporary variables to make sure instructions get inserted
into the MachineFunction in the same order regardless of the compiler
used to build LLVM.

Differential Revision: https://reviews.llvm.org/D71363
2019-12-30 18:06:29 +01:00
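For illustration only (a hypothetical input, not from the commit above), the intrinsic that becomes G_BITREVERSE:
```
; clang lowers __builtin_bitreverse32 to this intrinsic, which GlobalISel
; translates to G_BITREVERSE.
declare i32 @llvm.bitreverse.i32(i32)

define i32 @rev32(i32 %x) {
  %r = call i32 @llvm.bitreverse.i32(i32 %x)
  ret i32 %r
}
```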
Matt Arsenault 1247865fe0 AMDGPU/GlobalISel: Select llvm.amdgcn.fmad.ftz 2019-12-30 11:12:35 -05:00
Diogo Sampaio f33fd9648c [ARM][Thumb][FIX] Add unwinding information to t4
Summary:
Add the missing part of patch D71361. Now that the stack frame
can be adjusted using an addw/subw instruction, these instructions
should appear in the unwinding list.

Reviewers: dmgreen, efriedma

Reviewed By: dmgreen

Subscribers: kristof.beyls, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D72000
2019-12-30 15:59:48 +00:00
Matt Arsenault 18240c3cd6 AMDGPU/GlobalISel: Add select test for fexp2 2019-12-30 10:56:37 -05:00
Matt Arsenault 9fd31fdbd3 GlobalISel: moreElementsVector for FP min/max 2019-12-30 10:39:53 -05:00
Matt Arsenault 9e1a2a668b AMDGPU: Improve llvm.round.f64 lowering for CI+
The path already used for f16/f32 works a lot better when v_trunc_f64
is available.
2019-12-30 09:55:46 -05:00
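For illustration only (a hypothetical input, not from the commit above), the node being lowered:
```
; llvm.round.f64 rounds to the nearest integer, with ties away from zero.
declare double @llvm.round.f64(double)

define double @round_f64(double %x) {
  %r = call double @llvm.round.f64(double %x)
  ret double %r
}
```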
Matt Arsenault 58bcf51107 AMDGPU: Generate check lines 2019-12-30 09:55:40 -05:00
Matt Arsenault 491cfa4250 AMDGPU/GlobalISel: Account for G_PHI result bank
Sometimes the result bank of the phi is already assigned to something,
and should not be ignored. This is in preparation for additional
boolean phi handling changes.

Also refine the logic to fix some cases that were incorrectly deciding
to use SGPRs.
2019-12-30 09:55:21 -05:00
Nemanja Ivanovic 0f0330a787 [PowerPC] Legalize rounding nodes
VSX provides a full complement of rounding instructions, yet we somehow ended up
with some of them legal and others not. This just legalizes all of the FP
rounding nodes and the FP -> int rounding nodes with unsafe math.

Differential revision: https://reviews.llvm.org/D69949
2019-12-30 08:03:53 -06:00
Dmitri Gribenko 32cc14100e Revert "[MIPS GlobalISel] Select bitreverse"
This reverts commit dbc136e0fe.
It broke buildbots:
http://lab.llvm.org:8011/builders/clang-x86_64-debian-fast/builds/21066
2019-12-30 14:29:47 +01:00
David Green b4abe7afbf [ARM] Sink splat to ICmp
This adds ICmp to the list of instructions that we sink a splat to in a
loop, allowing the register forms of instructions to be selected more
often. It does not add FCmp yet, as the results look a little odd: it tries
to keep the value in a float register and then has to move it back to a GPR.

Differential Revision: https://reviews.llvm.org/D70997
2019-12-30 12:58:14 +00:00
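For illustration only (a hypothetical, simplified input; the real transform applies inside a loop), the kind of splat-fed compare involved:
```
; %splat is built from the scalar %s; sinking the splat next to the compare
; in the loop body lets the backend pick the register (vector-vs-scalar)
; form of the MVE compare.
define <4 x i1> @cmp_against_splat(<4 x i32> %v, i32 %s) {
  %ins = insertelement <4 x i32> undef, i32 %s, i32 0
  %splat = shufflevector <4 x i32> %ins, <4 x i32> undef, <4 x i32> zeroinitializer
  %cmp = icmp sgt <4 x i32> %v, %splat
  ret <4 x i1> %cmp
}
```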
David Green a5a141544d [ARM] MVE sink ICmp test. NFC 2019-12-30 12:58:13 +00:00
Diogo Sampaio 8232497c31 [ARM][THUMB2] Allow emitting T3 types of add and sub
Summary:
This patch allows the emitT2RegPlusImmediate function to emit Thumb2
add and sub instructions with 12-bit immediates.
- Splits out part of D70680

Reviewers: eli.friedman, olista01, efriedma

Reviewed By: efriedma

Subscribers: efriedma, kristof.beyls, hiraditya, dmgreen, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D71361
2019-12-30 11:03:58 +00:00
Petar Avramovic dbc136e0fe [MIPS GlobalISel] Select bitreverse
G_BITREVERSE is generated from llvm.bitreverse.<type> intrinsics;
clang generates these intrinsics from __builtin_bitreverse32 and
__builtin_bitreverse64.
Add lower and narrowScalar for G_BITREVERSE.
Lower G_BITREVERSE on MIPS32.

Differential Revision: https://reviews.llvm.org/D71363
2019-12-30 11:26:45 +01:00
Petar Avramovic 94a24e7a40 [MIPS GlobalISel] Select bswap
G_BSWAP is generated from llvm.bswap.<type> intrinsics; clang generates
these intrinsics from __builtin_bswap32 and __builtin_bswap64.
Add lower and narrowScalar for G_BSWAP.
Lower G_BSWAP on MIPS32; select G_BSWAP on MIPS32 revision 2 and later.

Differential Revision: https://reviews.llvm.org/D71362
2019-12-30 11:13:22 +01:00
QingShan Zhang 874a8004f9 [PowerPC] Exploit the rlwinm instructions for "and" with constant
For now, PowerPC uses several instructions to materialize the constant and "and" it, as in the following case:

define i32 @test1(i32 %a) {
  %and = and i32 %a, -2
  ret i32 %and
}

However, we could exploit the rotate-and-mask instructions for this.
               MB  ME
+----------------------+
|xxxxxxxxxxx00011111000|
+----------------------+
 0         32         64
Notice that we can only do this if MB is larger than 32 and MB <= ME, as
RLWINM will replace the contents of [0, 32) with [32, 64) even if we do not rotate.

Differential Revision: https://reviews.llvm.org/D71829
2019-12-30 03:18:31 +00:00
Craig Topper c926d96fca [X86] Make the AVX1 check lines in vec-strict-inttofp-256.ll test 'avx' instead of 'avx2'. Add AVX2 checks. NFC 2019-12-29 11:11:26 -08:00
Craig Topper 3b6aec79b2 [X86] Add test cases for v4i64->v4f32 and v8i64->v8f32 strict_sint_to_fp/strict_uint_to_fp to vec-strict-inttofp-256.ll and vec-strict-inttofp-512.ll. NFC 2019-12-28 11:20:45 -08:00
Nemanja Ivanovic a9ad65a2b3 [PowerPC] Change default for unaligned FP access for older subtargets
This is a fix for https://bugs.llvm.org/show_bug.cgi?id=40554

Some CPUs trap to the kernel on unaligned floating-point access, and there are
kernels that do not handle the interrupt. The program then fails with a SIGBUS
according to the PR. This just switches the default for unaligned access to only
allow it on recent server CPUs that are known to support it.

Differential revision: https://reviews.llvm.org/D71954
2019-12-28 11:20:52 -06:00
Kang Zhang d1b51c5de7 [PowerPC] Modify the hasSideEffects of some VSX instructions from 1 to 0
Summary:
If we don't set the hasSideEffects bit in our td file, `llvm-tblgen`
will set it to true for those instructions that have no match pattern.
The six instructions below don't set the hasSideEffects flag and don't have a
match pattern, so their hasSideEffects flag will be set to true by llvm-tblgen.

But in fact the instructions below don't modify any special register and don't
have other side effects, so they shouldn't be marked as having side effects.
This patch changes the hasSideEffects of the instructions below from 1 to 0.

```
VEXTUHLX
VEXTUHRX
VEXTUWLX
VEXTUWRX
VSPLTBs
VSPLTHs
```

Reviewed By: jsji

Differential Revision: https://reviews.llvm.org/D71391
2019-12-28 09:04:54 +00:00
Matt Arsenault a33cab0f06 AMDGPU: Adjust test so it will work with GlobalISel
This is mostly a workaround for not handling the mubuf store path yet.
2019-12-27 19:37:39 -05:00
Matt Arsenault 5ce2ca524e AMDGPU/GlobalISel: Use SReg_32 for readfirstlane constraining
This matches the DAG behavior where we don't use SReg_32_XM0
everywhere anymore, and fixes not coalescing the copies into m0.
2019-12-27 17:52:12 -05:00
Matt Arsenault 3213ce966b TailDuplication: Clear NoPHIs property
The early tail duplicator pass introduces new PHIs, so a MIR test that
infers the NoPHIs property because there were none in the input would fail
the verifier after running.
2019-12-27 14:06:31 -05:00
Matt Arsenault e088846712 AMDGPU/GlobalISel: Fix extra result register in fdiv64 lowering
There ended up being two result registers, which would fail on
select. It was really defining a new temporary register in the correct def
position, instead of using the correct result register.
2019-12-27 08:49:43 -05:00
Matt Arsenault ed9a56b0f2 AMDGPU/GlobalISel: Select some 128-bit load/stores 2019-12-27 08:49:43 -05:00
Craig Topper fca4736874 [X86] Allow v2i32->v2f32 strict and non-strict uint_to_fp to be widened to v4i32->v4f32 under avx512.
With avx512vl we get v4i32->v4f32 uint_to_fp instructions. With
avx512f we get v16i32->v16f32 instructions which we can use to
emulate v4i32->v4f32.
2019-12-27 00:28:44 -08:00
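For illustration only (a hypothetical input, not from the commit above), the conversion being widened:
```
; The <2 x i32> source is widened to <4 x i32> so the AVX-512 conversion
; instructions (v4i32->v4f32 with avx512vl, or v16i32->v16f32 with avx512f)
; can be used.
define <2 x float> @cvt_u32_to_f32(<2 x i32> %x) {
  %r = uitofp <2 x i32> %x to <2 x float>
  ret <2 x float> %r
}
```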
Craig Topper 931946bb1d [X86] Add v2i32->v2f32 non-strict sint_to_fp/uint_to_fp tests. NFC 2019-12-27 00:28:44 -08:00
Craig Topper 20aab49492 [X86] Custom widen v2i32->v2f32 strict_sint_to_fp to avoid scalarization. 2019-12-27 00:28:44 -08:00
Craig Topper ecbaf152f8 [X86] Custom widen 128/256-bit vXi32 fp_to_uint on avx512f targets without avx512vl. Similar for vXi64 on avx512dq without avx512vl.
Summary:
Previously we did this with isel patterns that used garbage in
the widened part of the source. But that's not valid for strictfp.
So now we custom widen and use zeroes for the widened elements under
strictfp.

This replaces D71864.

Reviewers: RKSimon, spatel, andrew.w.kaylor, pengfei, LiuChen3

Reviewed By: pengfei

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D71879
2019-12-26 22:04:40 -08:00
Craig Topper 50fb3957c1 [X86] Custom widen strict v2f32->v2i32 by padding with zeroes.
For non-strict, generic type legalization will take care of this,
but that doesn't happen currently for strict nodes.
2019-12-26 21:45:18 -08:00
Craig Topper 53ee806d93 [X86][FPEnv] Promote some float strictfp operations to double on i686-pc-windows-msvc to match what we do for non-strict.
The float libcalls are inlined in MSVC's math header where they
just cast to double and use the double libcall. Do the same when
we emit libcalls.
2019-12-26 20:22:24 -08:00
Craig Topper e647ff0d7d [X86] Add tests for constrained float intrinsics on i686-pc-windows-msvc. NFC
We need to promote these to double due to missing libcalls on
Windows.
2019-12-26 20:09:34 -08:00
Craig Topper a5d266b9cf [X86] Add custom legalization for strict_uint_to_fp v2i32->v2f32.
I believe the algorithm we use for non-strict is exception safe
for strict. The fsub won't generate any exceptions. After it we
will have an exact version of the i32 integer in a double. Then
we just round it to f32. That rounding will generate a precision
exception if it can't be represented exactly.
2019-12-26 19:10:26 -08:00
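For illustration only (a hypothetical sketch, not a test from the commit above), the strict form being legalized:
```
; The constrained intrinsic carries the rounding mode and exception behavior,
; so the lowering must not introduce spurious FP exceptions.
declare <2 x float> @llvm.experimental.constrained.uitofp.v2f32.v2i32(<2 x i32>, metadata, metadata)

define <2 x float> @strict_cvt(<2 x i32> %x) strictfp {
  %r = call <2 x float> @llvm.experimental.constrained.uitofp.v2f32.v2i32(<2 x i32> %x, metadata !"round.dynamic", metadata !"fpexcept.strict")
  ret <2 x float> %r
}
```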
Craig Topper bc202547d5 [X86] Add test cases for v2i32->v2f32 strict_sint_to_fp/strict_uint_to_fp. NFC 2019-12-26 19:10:26 -08:00
Liu, Chen3 1a7b69f5dd Add custom operation for strict fpextend/fpround
Differential Revision: https://reviews.llvm.org/D71892
2019-12-27 08:28:33 +08:00
Craig Topper f953882113 [X86] Custom widen 128/256-bit vXi32 uint_to_fp on avx512f targets without avx512vl. Similar for vXi64 sint_to_fp/uint_to_fp on avx512dq without avx512vl.
Previously we widened these through isel patterns, but that
didn't work for STRICT_ nodes. Those need to be padded with
zeroes in the upper bits, which is harder to do in isel patterns.
2019-12-26 14:46:56 -08:00