Commit Graph

13230 Commits

Author SHA1 Message Date
Dimitry Andric 6a482a73d6 Only attempt to detect AVG if SSE2 is available
Summary:
In PR29973 Sanjay Patel reported an assertion failure when a certain
loop was optimized, for a target without SSE2 support.  It turned out
this was because of the AVG pattern detection introduced in rL253952.

Prevent the assertion failure by bailing out early in
`detectAVGPattern()`, if the target does not support SSE2.

Also add a minimized test case.

Reviewers: congh, eli.friedman, spatel

Subscribers: emaste, llvm-commits

Differential Revision: http://reviews.llvm.org/D20905

llvm-svn: 271548
2016-06-02 17:30:49 +00:00
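As a rough, illustrative sketch of the fix described above: the function name detectAVGPattern() comes from the commit, but the signature and body below are assumed, not copied from the patch, and the fragment is meant to sit inside X86ISelLowering.cpp rather than compile on its own.

    // Illustrative only: guard AVG (PAVGB/PAVGW) pattern detection behind SSE2,
    // so targets without SSE2 never reach the code path that asserted.
    static SDValue detectAVGPattern(SDValue In, EVT VT, SelectionDAG &DAG,
                                    const X86Subtarget &Subtarget, SDLoc DL) {
      if (!Subtarget.hasSSE2())   // assumed early bail-out: no SSE2, no PAVG nodes
        return SDValue();
      // ... the existing truncate/average pattern matching would continue here ...
      return SDValue();
    }
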
Simon Pilgrim 0afd5a4d80 [X86][SSE] Replace (V)CVTTPS2DQ and VCVTTPD2DQ truncating (round to zero) f32/f64 to i32 with generic IR (llvm)
This patch removes the llvm (V)CVTTPS2DQ and VCVTTPD2DQ truncating (round to zero) conversion intrinsics and auto-upgrades them to FP_TO_SINT calls instead.

Note: I looked at updating CVTTPD2DQ as well but this still requires a lot more work to correctly lower.

Differential Revision: http://reviews.llvm.org/D20860

llvm-svn: 271510
2016-06-02 10:55:21 +00:00
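As a rough sketch of the IR-level effect: an old call such as %r = call <4 x i32> @llvm.x86.sse2.cvttps2dq(<4 x float> %a) is rebuilt as a plain fptosi. The helper name below is hypothetical (the real rewrite lives in AutoUpgrade.cpp); the code only illustrates the idea.

    #include "llvm/IR/IRBuilder.h"
    #include "llvm/IR/Instructions.h"

    using namespace llvm;

    // Hypothetical helper: replace a truncating-convert intrinsic call with
    // generic IR, e.g. cvttps2dq(<4 x float>) -> fptosi ... to <4 x i32>.
    static Value *upgradeTruncatingConvert(CallInst *CI) {
      IRBuilder<> Builder(CI);  // build the replacement right before the call
      return Builder.CreateFPToSI(CI->getArgOperand(0), CI->getType());
    }
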
Craig Topper 048a08af66 [AVX512] Add 512-bit load/stores to fast isel.
llvm-svn: 271486
2016-06-02 04:51:37 +00:00
Craig Topper 292a86db5b [X86] No need to use 256-bit VMOVNTPS for integer types when only AVX1 is supported. VMOVNTDQ is available with AVX1.
We were getting this right for v4i64 but not the other integer types.

llvm-svn: 271482
2016-06-02 04:19:48 +00:00
Craig Topper ca9c0801e1 [X86] Add AVX 256-bit load and stores to fast isel.
I'm not sure why this was missing for so long.

This also exposed that we were picking the floating-point 256-bit VMOVNTPS for some integer types in normal isel for AVX1 even though VMOVNTDQ is available. In practice it doesn't matter due to the execution dependency fix pass, but it required extra isel patterns. That will be fixed in a follow-up commit.

llvm-svn: 271481
2016-06-02 04:19:45 +00:00
Craig Topper 6611188633 [X86] Use uint16_t for a couple arrays of instruction opcodes. NFC
llvm-svn: 271480
2016-06-02 04:19:42 +00:00
Craig Topper 5bb9cda620 [AVX512] Remove LOADA/LOADU/STOREA/STOREU intrinsic types now that they are unused.
llvm-svn: 271479
2016-06-02 04:19:40 +00:00
Craig Topper f10fbfa738 [AVX512] Remove masked load intrinsics. Clang now emits generic masked load intrinsics instead.
The intrinsics will be autoupgraded to the same generic masked loads.

llvm-svn: 271478
2016-06-02 04:19:36 +00:00
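A minimal sketch of the generic form the old intrinsics are upgraded to, built through IRBuilder; the helper name is hypothetical and the masked-load call uses the signature from recent LLVM, which differs from the 2016-era one.

    #include "llvm/IR/IRBuilder.h"
    #include "llvm/IR/Instructions.h"
    #include "llvm/Support/Alignment.h"

    using namespace llvm;

    // Hypothetical sketch: emit the generic @llvm.masked.load that replaces the
    // old AVX-512 masked-load intrinsics.  Mask is an <N x i1> lane mask and
    // PassThru supplies the values for masked-off lanes.
    static Value *emitGenericMaskedLoad(IRBuilder<> &Builder, Type *VecTy,
                                        Value *Ptr, Value *Mask, Value *PassThru) {
      return Builder.CreateMaskedLoad(VecTy, Ptr, Align(64), Mask, PassThru);
    }
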
Michael Zuckerman 6a894956fc Adding back-end support for two bit-scanning intrinsics
Adding LLVM back-end support for two intrinsics dealing with bit scanning: _bit_scan_forward and _bit_scan_reverse.
Their functionality is as described in the Intel Intrinsics Guide:
https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_bit_scan_forward&expand=371,370
https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_bit_scan_reverse&expand=371,370

Commit on behalf of Omer Paparo Bivas

Differential Revision: http://reviews.llvm.org/D19915

llvm-svn: 271386
2016-06-01 12:02:37 +00:00
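A small usage sketch of the two intrinsics, assuming a GCC/Clang-style <x86intrin.h>; per the Intel Intrinsics Guide the result is the index of the lowest/highest set bit, and it is undefined for an input of 0.

    #include <cstdio>
    #include <x86intrin.h>   // provides _bit_scan_forward / _bit_scan_reverse

    int main() {
      int X = 0x00012300;                             // bits 8, 9, 13 and 16 set
      printf("forward: %d\n", _bit_scan_forward(X));  // 8  (lowest set bit)
      printf("reverse: %d\n", _bit_scan_reverse(X));  // 16 (highest set bit)
      return 0;
    }
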
Craig Topper 4f2d5a68d3 Revert r271362 "[AVX512] Remove masked load intrinsics. Clang now emits generic masked load intrinsics instead."
Looks like something isn't quite right still. Also forgot to move the test cases to an autoupgrade test.

llvm-svn: 271363
2016-06-01 05:57:55 +00:00
Craig Topper dacd9d2bac [AVX512] Remove masked load intrinsics. Clang now emits generic masked load intrinsics instead.
The intrinsics will be autoupgraded to the same generic masked loads.

llvm-svn: 271362
2016-06-01 05:35:16 +00:00
Kevin B. Smith ed0b620a65 [X86]: Add a pattern that uses GR16_ABCD rather than GR32_ABCD to avoid falsely marking the whole 32-bit register as live.
Differential Revision: http://reviews.llvm.org/D20649

llvm-svn: 271341
2016-05-31 22:00:12 +00:00
Yaron Keren a34bfa000f Do not modify a std::vector while looping over it.
Introduced in r271244, this is probably undefined behaviour and asserts when
compiled with Visual C++ in debug mode.

On a further note, the loop is quadratic in the number of successors, since
removeSuccessor is linear; it could probably be reworked to run in linear time.

llvm-svn: 271278
2016-05-31 13:45:05 +00:00
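The hazard in miniature, with illustrative names (the real code walks machine basic block successors and calls removeSuccessor): erasing from a std::vector while range-looping over it invalidates the loop's iterators, so iterate a snapshot instead.

    #include <algorithm>
    #include <vector>

    // Safe pattern: copy the container, loop over the copy, mutate the original.
    static void removeAllSuccessors(std::vector<int> &Successors) {
      std::vector<int> Snapshot(Successors);   // snapshot taken up front
      for (int S : Snapshot) {
        auto It = std::find(Successors.begin(), Successors.end(), S);
        if (It != Successors.end())
          Successors.erase(It);                // never erases the vector being iterated
      }
    }

As the note above points out, this kind of loop is still quadratic; clearing or rebuilding the container in a single pass would be linear.
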
Simon Pilgrim e05dc45897 [X86][SSE] Add load-folding patterns for (V)CVTDQ2PD (PR27291)
Added patterns for (V)CVTDQ2PD -> v2f64 loading from a 64-bit source.

llvm-svn: 271269
2016-05-31 12:04:35 +00:00
Igor Breger 73ee8ba9b0 [AVX512] Fix intrinsic vcvtps2ph lowering.
Differential Revision: http://reviews.llvm.org/D20788

llvm-svn: 271255
2016-05-31 08:04:21 +00:00
Igor Breger 52bd1d5fcc Fix intrinsic vbroadcast{i32|f32}x2 lowering.
Differential Revision: http://reviews.llvm.org/D20780

llvm-svn: 271254
2016-05-31 07:43:39 +00:00
Craig Topper 50f85c22c5 [AVX512] Remove masked store intrinsics. Clang now emits generic masked store intrinsics instead.
The intrinsics will be autoupgraded to the same generic masked stores.

llvm-svn: 271245
2016-05-31 01:50:02 +00:00
Saleem Abdulrasool d2f705ddf9 X86: permit using SjLj EH on x86 targets as an option
This adds support to the backend to actually support SjLj EH as an exception
model.  This is *NOT* the default model, and requires explicitly opting into it
from the frontend.  GCC supports this model and for MinGW can still be enabled
via the `--using-sjlj-exceptions` option.

Addresses PR27749!

llvm-svn: 271244
2016-05-31 01:48:07 +00:00
Craig Topper 8287fd8abd [X86] Remove SSE/AVX unaligned store intrinsics as clang no longer uses them. Auto upgrade to native unaligned store instructions.
llvm-svn: 271236
2016-05-30 23:15:56 +00:00
Rafael Espindola 4f1062adb8 Fix a crash when producing COFF.
llvm-svn: 271229
2016-05-30 20:18:53 +00:00
Rafael Espindola 9768d73c74 Move RelaxELFRel out to llvm-mc.
llvm-svn: 271160
2016-05-29 01:11:00 +00:00
Simon Pilgrim 9602d678cb [X86][SSE] (Reapplied) Replace (V)PMOVSX and (V)PMOVZX integer extension intrinsics with generic IR (llvm)
This patch removes the llvm (V)PMOVSX and (V)PMOVZX sign/zero extension intrinsics and auto-upgrades them to SEXT/ZEXT calls instead. We already did this for SSE41 PMOVSX some time ago, so much of that implementation can be reused.

Reapplied now that the companion patch (D20684), which removes/auto-upgrades the clang intrinsics, has been committed.

Differential Revision: http://reviews.llvm.org/D20686

llvm-svn: 271131
2016-05-28 18:03:41 +00:00
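A rough sketch of the rewrite, using @llvm.x86.sse41.pmovsxbw as the example; the helper name is hypothetical and the shuffle uses current IRBuilder overloads, so this only illustrates the shape of the upgrade.

    #include "llvm/IR/Constants.h"
    #include "llvm/IR/IRBuilder.h"
    #include "llvm/IR/Instructions.h"

    using namespace llvm;

    // Hypothetical sketch: %r = call <8 x i16> @llvm.x86.sse41.pmovsxbw(<16 x i8> %a)
    // becomes a shuffle that keeps the low eight bytes followed by a plain sext.
    static Value *upgradePMOVSXBW(CallInst *CI) {
      IRBuilder<> Builder(CI);
      Value *Src = CI->getArgOperand(0);                      // <16 x i8>
      int LowLanes[] = {0, 1, 2, 3, 4, 5, 6, 7};
      Value *Lo = Builder.CreateShuffleVector(
          Src, UndefValue::get(Src->getType()), LowLanes);    // <8 x i8>
      return Builder.CreateSExt(Lo, CI->getType());           // sext to <8 x i16>
    }
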
Rafael Espindola 52bd330500 Fix production of R_X86_64_GOTPCRELX/R_X86_64_REX_GOTPCRELX.
We were producing R_X86_64_GOTPCRELX for invalid instructions and
sometimes producing R_X86_64_GOTPCRELX instead of
R_X86_64_REX_GOTPCRELX.

llvm-svn: 271118
2016-05-28 15:51:38 +00:00
Sanjay Patel 97c2c108fd [x86] avoid printing unnecessary sign bits of hex immediates in asm comments (PR20347)
It would be better to check the valid/expected size of the immediate operand, but this is
generally better than what we print right now.

Differential Revision: http://reviews.llvm.org/D20385

llvm-svn: 271114
2016-05-28 14:58:37 +00:00
Ahmed Bougacha a3dc1ba142 [X86] Try to zero elts when lowering 256-bit shuffle with PSHUFB.
Otherwise we fall back to a blend of PSHUFBs later on.

Differential Revision: http://reviews.llvm.org/D19661

llvm-svn: 271113
2016-05-28 14:38:04 +00:00
Rafael Espindola 2d39bb3c6a Simplify and clang-format a table.
llvm-svn: 271112
2016-05-28 11:13:34 +00:00
Michael Kuperstein a75c77b127 [X86] Detect SAD patterns and emit psadbw instructions.
This recommits r267649 with a fix for PR27539.

Differential Revision: http://reviews.llvm.org/D20598

llvm-svn: 271033
2016-05-27 18:53:22 +00:00
Ahmed Bougacha 346b98011c [X86] Clarify PSHUFB+blend lowering function name. NFC.
Also guard against v32i8 users.

llvm-svn: 271024
2016-05-27 17:58:17 +00:00
Simon Pilgrim 4642a57fbf Revert: r270973 - [X86][SSE] Replace (V)PMOVSX and (V)PMOVZX integer extension intrinsics with generic IR (llvm)
llvm-svn: 270976
2016-05-27 09:02:25 +00:00
Simon Pilgrim c013e5737b [X86][SSE] Replace (V)PMOVSX and (V)PMOVZX integer extension intrinsics with generic IR (llvm)
This patch removes the llvm (V)PMOVSX and (V)PMOVZX sign/zero extension intrinsics and auto-upgrades them to SEXT/ZEXT calls instead. We already did this for SSE41 PMOVSX some time ago, so much of that implementation can be reused.

A companion patch (D20684) removes/auto-upgrades the clang intrinsics.

Differential Revision: http://reviews.llvm.org/D20686

llvm-svn: 270973
2016-05-27 08:49:15 +00:00
Simon Pilgrim cf340bd9c1 [X86][SSE] When lowering a 256-bit shuffle as PMOVZX, reduce the input vector to the lower 128-bit subvector.
More often than not this is what it started out as; the extraction is zero-cost on AVX and the PMOVZX/PMOVSX folding logic is based around 128-bit loads.

llvm-svn: 270858
2016-05-26 15:40:36 +00:00
Rafael Espindola a224de06bc Use shouldAssumeDSOLocal on AArch64.
This reduces code duplication and now AArch64 also handles PIE.

llvm-svn: 270844
2016-05-26 12:42:55 +00:00
Igor Breger 8437bb70fd [AVX512] Fix intrinsic cmp{sd|ss} lowering.
Differential Revision: http://reviews.llvm.org/D20615

llvm-svn: 270843
2016-05-26 12:42:25 +00:00
Simon Pilgrim 4810683ddf Simplify std::all_of/any_of predicates by using llvm::all_of/any_of. NFCI.
llvm-svn: 270753
2016-05-25 20:41:11 +00:00
Rafael Espindola 84f0562064 Fix shouldAssumeDSOLocal for private linkage.
llvm-svn: 270746
2016-05-25 19:55:16 +00:00
Sanjay Patel aedc347b29 [x86] avoid code explosion from LoopVectorizer for gather loop (PR27826)
By making pointer extraction from a vector more expensive in the cost model,
we avoid the vectorization of a loop that is very likely to be memory-bound:
https://llvm.org/bugs/show_bug.cgi?id=27826

There are still bugs related to this, so we may need a more general solution
to avoid vectorizing obviously memory-bound loops when we don't have HW gather
support.

Differential Revision: http://reviews.llvm.org/D20601

llvm-svn: 270729
2016-05-25 17:27:54 +00:00
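An illustrative shape of the loop this change targets (not the PR27826 reproducer): every iteration loads through a different pointer, so a vectorized version must extract each pointer lane from a vector on targets without hardware gather, and the higher extraction cost now keeps the vectorizer away.

    // Memory-bound pointer-gather loop: the indirect loads dominate, so
    // vectorization buys little and costs a lot of pointer extraction.
    static float sumThroughPointers(float *const *Ptrs, int N) {
      float Sum = 0.0f;
      for (int I = 0; I < N; ++I)
        Sum += *Ptrs[I];
      return Sum;
    }
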
Sanjay Patel 3955360b24 [x86, AVX] allow explicit calls to VZERO* to modify state in VZeroUpperInserter pass (PR27823)
As noted in the review, there are still problems, so this doesn't fix the bug completely.

Differential Revision: http://reviews.llvm.org/D20529

llvm-svn: 270718
2016-05-25 16:39:47 +00:00
Simon Pilgrim 4298d06d0f [X86][SSE] Replace (V)CVTDQ2PD(Y) and (V)CVTPS2PD(Y) lossless conversion intrinsics with generic IR
Followup to the D20528 clang patch, this removes the (V)CVTDQ2PD(Y) and (V)CVTPS2PD(Y) llvm intrinsics and auto-upgrades to sitofp/fpext instead.

Differential Revision: http://reviews.llvm.org/D20568

llvm-svn: 270678
2016-05-25 08:59:18 +00:00
Craig Topper 12e322a8cf [X86] Remove the llvm.x86.sse2.storel.dq intrinsic. It hasn't been used in a long time.
llvm-svn: 270677
2016-05-25 06:56:32 +00:00
Igor Breger 23c2090606 [llvm][AVX512][intrinsics] Fix vperm{b|w|d|q|ps|pd} intrinsics. The index is the second argument to the builtin function but the first instruction operand.
Differential Revision: http://reviews.llvm.org/D20515

llvm-svn: 270548
2016-05-24 11:06:22 +00:00
Simon Pilgrim 14000b3cea [CostModel][X86][XOP] Added XOP costmodel for BITREVERSE
Now that we have a nice fast VPPERM solution. Added a framework for future intrinsic costs as well.

llvm-svn: 270537
2016-05-24 08:17:50 +00:00
Sanjay Patel 8099fb7e0a fix typo; NFC
llvm-svn: 270469
2016-05-23 18:01:20 +00:00
Sanjay Patel 13a0d49813 use range-loop; NFCI
llvm-svn: 270467
2016-05-23 18:00:50 +00:00
Aaron Ballman a81264ba09 Removing a switch statement that contains only a default label; NFC.
llvm-svn: 270444
2016-05-23 15:52:59 +00:00
Craig Topper a6d0104823 [X86] Use instruction aliases to replace custom asm parser code for optimizing moves to use 2 byte VEX prefix.
llvm-svn: 270394
2016-05-23 04:02:27 +00:00
Craig Topper 95bdabd338 [AVX512] Add patterns to implement stores of extracts of least significant subvectors using XMM or YMM stores instead of the vector extract instructions.
Something similar is already done for AVX, and we had lost it going to AVX512VL.

llvm-svn: 270383
2016-05-22 23:44:33 +00:00
Sanjay Patel 2959ff4a88 [x86, AVX] don't add a vzeroupper if that's what the code is already doing (PR27823)
This isn't the complete fix, but it handles the trivial examples of duplicate vzero* ops in PR27823:
https://llvm.org/bugs/show_bug.cgi?id=27823
...and amusingly, the bogus cases already exist as regression tests, so let's take this baby step.

We'll need to do more in the general case where there's legitimate AVX usage in the function + there's
already a vzero in the code.

Differential Revision: http://reviews.llvm.org/D20477

llvm-svn: 270378
2016-05-22 20:22:47 +00:00
Igor Breger 2ba64ab9ae [AVX512] Implement missing patterns for any_extend load lowering.
Differential Revision: http://reviews.llvm.org/D20513

llvm-svn: 270357
2016-05-22 10:21:04 +00:00
Craig Topper 5f3fef884f [AVX512] The AVX512 file only needs extract_subvector index 0 patterns where the source is 512 bits. The 256-bit source patterns were redundant with AVX.
llvm-svn: 270356
2016-05-22 07:40:58 +00:00
Craig Topper a1041ff001 [AVX512] Add an AddedComplexity line to the 512-bit insert_subvector undef index 0 patterns. This gives them higher priority than the memory patterns. This matches AVX1/2.
llvm-svn: 270355
2016-05-22 07:40:40 +00:00