Commit Graph

9218 Commits

Author SHA1 Message Date
Craig Topper 25191b4ac3 [X86] Remove sse4.1 intrinsic tests from the avx intrinsics test file.
They are all covered by the SSE4.1 intrinsics test with SSE4.1, AVX, and AVX512 command lines.

llvm-svn: 295706
2017-02-21 08:06:02 +00:00
Craig Topper da8e6f1337 [X86] Remove sse3 intrinsic tests from the avx intrinsics test file.
They are all covered by the SSE3 intrinsics test with SSE2, AVX, and AVX512 command lines.

llvm-svn: 295705
2017-02-21 08:05:59 +00:00
Craig Topper 002549b8be [X86] Remove aes intrinsic tests from the avx intrinsics test file.
They are all covered by the AES intrinsics test with a legacy command line and an AVX command line.

llvm-svn: 295702
2017-02-21 07:32:18 +00:00
Craig Topper 2a71fd95e8 [X86] Add an AVX command line and regenerate the AES intrinsics test using update_llc_test_checks.py
llvm-svn: 295701
2017-02-21 07:32:14 +00:00
Craig Topper dbf6f367e9 [X86] Remove sse2 intrinsic tests from the avx intrinsics test file.
They are all covered by the SSE2 intrinsics test with SSE2, AVX, and AVX512 command lines.

Also remove an unneeded lfence intrinsic test since it was already covered.

llvm-svn: 295700
2017-02-21 07:32:11 +00:00
Craig Topper 0d47fdcf3f [X86] Remove sse1 intrinsic tests from the avx intrinsics test file.
They are all covered by the SSE intrinsics test with SSE, AVX, and AVX512 command lines.

Also remove an unneeded sfence intrinsic test since it was already covered.

llvm-svn: 295699
2017-02-21 07:32:03 +00:00
Craig Topper d88389aa7e [X86] Use SHLD with both inputs from the same register to implement rotate on Sandy Bridge and later Intel CPUs
Summary:
Sandy Bridge and later CPUs have better throughput using a SHLD to implement rotate versus the normal rotate instructions. Additionally it saves one uop and avoids a partial flag update dependency.

This patch implements this change on any Sandy Bridge or later processor without BMI2 instructions. With BMI2 we will use RORX as we currently do.
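
A minimal sketch of the kind of IR this affects (function name hypothetical, not from the patch):

```llvm
; Rotate left by 5, written as the usual shift+or pattern.
define i32 @rol5(i32 %x) {
  %hi = shl i32 %x, 5
  %lo = lshr i32 %x, 27
  %r = or i32 %hi, %lo
  ret i32 %r
}
; On Sandy Bridge and later without BMI2 this should now prefer
;   shldl $5, %eax, %eax
; over
;   roll $5, %eax
```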

Reviewers: zvi

Reviewed By: zvi

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D30181

llvm-svn: 295697
2017-02-21 06:39:13 +00:00
Craig Topper d9fe664868 [AVX-512] Use sse_load_f32/f64 in place of scalar_to_vector and scalar load in some patterns.
llvm-svn: 295693
2017-02-21 04:26:10 +00:00
Craig Topper 63b7d71844 [AVX-512] Add test cases showing failure to fold zero extending scalar loads in scalar intrinsics without the peephole pass.
llvm-svn: 295692
2017-02-21 04:26:07 +00:00
Taewook Oh 4cf5c1087c [BranchFolding] Update debug location along with the update of branch instruction.
Summary:
Currently, BranchFolder drops DebugLoc for branch instructions in some places. For example, for the test code attached, the branch instruction of the 'entry' block has a DILocation of

```
!12 = !DILocation(line: 6, column: 3, scope: !11)
```

, but this information is gone when the block is lowered because BranchFolder misses it. This patch is a fix for this issue.

Reviewers: qcolombet, aprantl, craig.topper, MatzeB

Reviewed By: aprantl

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D29902

llvm-svn: 295684
2017-02-21 00:12:38 +00:00
Craig Topper e8beaff021 [X86] Add additional check lines to one of the rotate tests.
llvm-svn: 295682
2017-02-20 23:38:51 +00:00
Craig Topper a80f90e66b [X86] FileCheckize one of the rotate tests.
llvm-svn: 295681
2017-02-20 23:38:48 +00:00
Craig Topper bb10c0f1ec [X86] FileCheckize one of the rotate tests.
llvm-svn: 295676
2017-02-20 19:44:10 +00:00
Craig Topper 2012dda9a0 [AVX-512] Add a few more patterns for selecting masked vpternlog with broadcast loads where the passthru operand is not operand 0.
llvm-svn: 295673
2017-02-20 17:44:09 +00:00
Simon Pilgrim e9a8145adb [X86][SSE] Regenerate extracted bitcasted constant tests and add 32-bit test target
llvm-svn: 295669
2017-02-20 15:57:14 +00:00
Simon Pilgrim 72d666e443 [X86][SSE] Regenerate re-materialized store tests and add 64-bit test target
llvm-svn: 295666
2017-02-20 15:20:37 +00:00
Simon Pilgrim 5a33d1c266 [X86][SSE] Regenerate vselect widening tests and add 32-bit test target
llvm-svn: 295665
2017-02-20 15:16:43 +00:00
Igor Breger fda32d266a [X86] Fix EXTRACT_VECTOR_ELT with variable index from v32i16 and v64i8 vector.
It's more profitable to go through memory (1 cycle throughput)
than to use a VMOVD + VPERMV/PSHUFB sequence (2/3 cycles throughput) to implement EXTRACT_VECTOR_ELT with a variable index.
The IACA tool was used to get the performance estimates (https://software.intel.com/en-us/articles/intel-architecture-code-analyzer).
For example, for the var_shuffle_v16i8_v16i8_xxxxxxxxxxxxxxxx_i8 test from vector-shuffle-variable-128.ll I get 26 cycles vs 79 cycles.
Also remove the VINSERT node, as we don't need it any more.
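
A minimal sketch of the affected pattern (function name hypothetical):

```llvm
; Variable-index extract from a 512-bit vector; now lowered by spilling
; the vector to the stack and loading the addressed byte.
define i8 @extract_var(<64 x i8> %v, i32 %idx) {
  %e = extractelement <64 x i8> %v, i32 %idx
  ret i8 %e
}
```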

Differential Revision: https://reviews.llvm.org/D29690

llvm-svn: 295660
2017-02-20 14:16:29 +00:00
Simon Pilgrim 5910ebe720 [X86][AVX512] Add support for ASHR v2i64/v4i64 support without VLX
Use v8i64 ASHR instructions if we don't have VLX.
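
A minimal sketch of the case this enables (function name hypothetical):

```llvm
; Without VLX there is no v4i64 arithmetic shift; the operands can be
; widened to v8i64, shifted with the 512-bit instruction, and the
; result extracted back.
define <4 x i64> @ashr_v4i64(<4 x i64> %a, <4 x i64> %b) {
  %r = ashr <4 x i64> %a, %b
  ret <4 x i64> %r
}
```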

Differential Revision: https://reviews.llvm.org/D28537

llvm-svn: 295656
2017-02-20 12:16:38 +00:00
Simon Pilgrim 50b958c07a [SelectionDAG] Add scalarization support for ISD::*_EXTEND_VECTOR_INREG opcodes.
Thanks to Mikael Holmén for the initial test case

llvm-svn: 295652
2017-02-20 11:55:58 +00:00
Craig Topper c6c68f5958 [AVX-512] Add more patterns to fold masked VPTERNLOG with load when the passthru isn't operand 0.
llvm-svn: 295640
2017-02-20 07:00:40 +00:00
Craig Topper 5aef828ba7 [AVX-512] Add tests for missed opportunities to fold masked VPTERNLOG with load when the passthru op isn't operand 0.
llvm-svn: 295639
2017-02-20 07:00:37 +00:00
Craig Topper a5fa2e40f9 [AVX-512] Fix mistake in the immediate swizzle for some of the VPTERNLOG patterns.
llvm-svn: 295638
2017-02-20 07:00:34 +00:00
Craig Topper cb5b45cc36 [AVX-512] Use a better immediate in the VPTERNLOG commuting tests so it's easier to spot bad swizzling.
llvm-svn: 295637
2017-02-20 07:00:31 +00:00
Craig Topper 5b4e36aafa [AVX-512] Add more VPTERNLOG patterns to enable folding of broadcast loads that aren't in operand 2.
llvm-svn: 295634
2017-02-20 02:47:42 +00:00
Craig Topper c184b671d9 [X86] Use memory form of shift right by 1 when the rotl immediate is one less than the operation size.
An earlier commit already did this for the register form.
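
A sketch of the pattern (function name hypothetical), assuming the usual rotl-to-rotr equivalence:

```llvm
; rotl by 31 on i32 is rotr by 1, which has a compact by-one encoding
; that is also available in a memory form.
define void @rot_mem(i32* %p) {
  %x = load i32, i32* %p
  %hi = shl i32 %x, 31
  %lo = lshr i32 %x, 1
  %r = or i32 %hi, %lo
  store i32 %r, i32* %p
  ret void
}
; can now select something like: rorl (%rdi)
```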

llvm-svn: 295626
2017-02-20 00:37:23 +00:00
Craig Topper 0f14411b57 [X86] Add test cases showing missed opportunities to use rotate right by 1 instructions when operation reads/writes memory.
llvm-svn: 295625
2017-02-20 00:37:20 +00:00
Craig Topper 489057715e [AVX-512] Disable peephole optimizations on the VPTERNLOG commute test. Add new patterns to enable isel to fold the loads on it own.
llvm-svn: 295616
2017-02-19 21:32:15 +00:00
Simon Pilgrim d590de2998 [X86][SSE] Use getTargetConstantBitsFromNode to find zeroable shuffle elements.
Replaces existing approach that could only search BUILD_VECTOR nodes.

Requires getTargetConstantBitsFromNode to discriminate cases with all/partial UNDEF bits in each element - this should also be useful when we get around to supporting getTargetShuffleMaskIndices with UNDEF elements. 

llvm-svn: 295613
2017-02-19 19:40:31 +00:00
Craig Topper 4e794c71a6 [AVX-512] Add patterns to recognize masked vpternlog when the passthrough operand is not operand 0.
This uses a SDNodeXForm to swizzle the appropriate immediate bits to allow this to be matched.

llvm-svn: 295612
2017-02-19 19:36:58 +00:00
Craig Topper ab1afa85ba [AVX-512] Add test cases that show failure to select masked VPTERNLOG when a select is used to force the passthru operand to be not operand 0.
llvm-svn: 295611
2017-02-19 19:36:54 +00:00
Simon Pilgrim 4271186f9c [X86][SSE] Enable initial support for domain crossing at high shuffle combine depths.
As discussed on D27692, this permits another domain to be used to combine a shuffle at high depths.

We currently set the required depth at 4 or more combined shuffles; this is probably too high for most targets but is a good starting point and already helps avoid a number of costly variable shuffles.

llvm-svn: 295608
2017-02-19 17:19:38 +00:00
Craig Topper 218d1a020e [AVX-512] Add broadcast VPTERNLOG instructions to special case commuting switch.
The instructions are marked commutable, but without special handling we don't get the immediate correct.

While here also remove the masked memory forms that aren't commutable.

llvm-svn: 295602
2017-02-19 08:03:26 +00:00
Craig Topper 94de4b9330 [AVX-512] Add patterns to show missed opportunities for folding vpternlog with broadcast loads. Also demonstrates a bug in the commuting of broadcast vpternlog instructions when we are able to select them.
llvm-svn: 295601
2017-02-19 08:03:23 +00:00
Craig Topper de10312bea Recommit "[X86] Remove XOP VPCMOV intrinsics and autoupgrade them to native IR."
Clang has now been fixed to not use these intrinsics.

llvm-svn: 295571
2017-02-18 21:50:58 +00:00
Sanjay Patel dc8a24ea4c [x86] remove stale comments from tests; NFC
llvm-svn: 295569
2017-02-18 21:07:37 +00:00
Sanjay Patel 12c2093e1e [x86] fold sext (xor Bool, -1) --> sub (zext Bool), 1
This is the same transform that is currently used for:
select Bool, 0, -1
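
A minimal sketch of the fold (function name hypothetical):

```llvm
define i32 @sext_of_not(i1 %b) {
  %not = xor i1 %b, true
  %r = sext i1 %not to i32
  ret i32 %r
}
; becomes the equivalent of:
;   %z = zext i1 %b to i32
;   %r = sub i32 %z, 1
; i.e. false -> -1 and true -> 0, without materializing the not.
```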

llvm-svn: 295568
2017-02-18 21:03:28 +00:00
Craig Topper ba2a726cc6 Revert "[X86] Remove XOP VPCMOV intrinsics and autoupgrade them to native IR."
This reverts r295564. I missed that clang was still using the intrinsics despite our half-implemented autoupgrade support.

llvm-svn: 295565
2017-02-18 20:14:20 +00:00
Craig Topper 884db3f85d [X86] Remove XOP VPCMOV intrinsics and autoupgrade them to native IR.
It seems we were already upgrading 128-bit VPCMOV, but the intrinsic was still defined and being used in isel patterns. While I was here I also simplified the tablegen multiclasses.

llvm-svn: 295564
2017-02-18 19:51:25 +00:00
Craig Topper a505169ca5 [AVX-512] Remove 128/256-bit masked fp max/min intrinsics. Upgrade them to legacy unmasked intrinsics and select instructions.
llvm-svn: 295543
2017-02-18 07:07:50 +00:00
Matthias Braun 2a707a3d3d machine-region-info.mir: Slightly simplify test, -mtriple
llvm-svn: 295520
2017-02-18 00:48:43 +00:00
Matthias Braun 431305927f MachineRegionInfo: Fix pass initialization
- Adapt MachineBasicBlock::getName() to have the same behavior as the IR
  BasicBlock (Value::getName()).
- Add it to lib/CodeGen/CodeGen.cpp::initializeCodeGen so that it is linked in
  the CodeGen library.
- MachineRegionInfoPass's name conflicts with RegionInfoPass's name ("region").
- MachineRegionInfo should depend on MachineDominatorTree,
  MachinePostDominatorTree and MachineDominanceFrontier instead of their
  respective IR versions.
- Since there were no tests for this, add a X86 MIR test.

Patch by Francis Visoiu Mistrih<fvisoiumistrih@apple.com>

llvm-svn: 295518
2017-02-18 00:41:16 +00:00
Simon Pilgrim 8670993dc1 [X86] Add MOVBE targets to load combine tests
Test folding of endian swaps with MOVBE instructions.
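
A minimal sketch of the kind of fold being tested (function name hypothetical):

```llvm
declare i32 @llvm.bswap.i32(i32)

; A byte-swapped load; with +movbe this can fold to a single
; movbe load instead of a plain load followed by a bswap.
define i32 @load_bswap(i32* %p) {
  %v = load i32, i32* %p
  %r = call i32 @llvm.bswap.i32(i32 %v)
  ret i32 %r
}
```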

llvm-svn: 295508
2017-02-17 23:00:21 +00:00
Simon Pilgrim 996f9b4cad [X86] Add subborrow stack folding tests
llvm-svn: 295496
2017-02-17 21:16:24 +00:00
Sanjay Patel 00872c3dfe [x86] add tests for sext (not bool); NFC
llvm-svn: 295495
2017-02-17 21:10:40 +00:00
Simon Pilgrim a4c350ff17 [X86][SSE] Add (V)MOVD folding pattern with zextloadi64i32 load node.
Fixes PR31309

llvm-svn: 295492
2017-02-17 20:43:32 +00:00
Simon Pilgrim a3f2803905 [X86][SHA] Add SHA stack folding tests
llvm-svn: 295479
2017-02-17 19:24:55 +00:00
Simon Pilgrim f4f5cd5d19 [X86][TBM] Add TBM stack folding tests
llvm-svn: 295477
2017-02-17 18:51:53 +00:00
Simon Pilgrim 99193de8ab [X86][BMI] Add BMI2 stack folding tests
llvm-svn: 295470
2017-02-17 18:00:43 +00:00
Simon Pilgrim 09dde435ab [X86][BMI] Add BMI stack folding tests
llvm-svn: 295462
2017-02-17 17:11:00 +00:00
Simon Pilgrim 511d788a95 [DAGCombine] Recognise any_extend_vector_inreg and truncation style shuffle masks
During legalization we are often creating shuffles (via a build_vector scalarization stage) that are "any_extend_vector_inreg" style masks, and also other masks that are the equivalent of "truncate_vector_inreg" (if we had such a thing).

This patch is an attempt to match these cases to help undo the effects of just leaving shuffle lowering to handle it - which typically means we lose track of the undefined elements of the shuffles resulting in an unnecessary extension+truncation stage for widened illegal types.

The 2011-10-21-widen-cmp.ll regression will be fixed by making SIGN_EXTEND_VECTOR_IN_REG legal in SSE instead of lowering it to X86ISD::VSEXT (PR31712).
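
A minimal sketch of an "any_extend_vector_inreg"-style mask (function name hypothetical):

```llvm
; The low two lanes are spread into the even lanes and the odd lanes
; are undef - exactly the bits an in-register any-extend to <2 x i64>
; needs after a bitcast.
define <4 x i32> @aext_inreg(<4 x i32> %a) {
  %s = shufflevector <4 x i32> %a, <4 x i32> undef, <4 x i32> <i32 0, i32 undef, i32 1, i32 undef>
  ret <4 x i32> %s
}
```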

Differential Revision: https://reviews.llvm.org/D29454

llvm-svn: 295451
2017-02-17 15:14:48 +00:00
Chandler Carruth 96d86a7f9c [x86] Give this test a triple so that we don't have to cope with two
different asm comment syntaxes.

llvm-svn: 295394
2017-02-17 01:18:38 +00:00
Chandler Carruth 8960686927 FileCheck-ize some tests in test/CodeGen/X86/
Patch by Jorge Gorbe!

Differential Revision: https://reviews.llvm.org/D29807

llvm-svn: 295386
2017-02-17 00:29:59 +00:00
Simon Pilgrim e5215751ff [X86][SSE] Add PR31309 test case (load-extend i32 to i128).
llvm-svn: 295363
2017-02-16 19:17:36 +00:00
Hans Wennborg 35905d6a67 Re-apply r282920 "X86: Allow conditional tail calls in Win64 "leaf" functions (PR26302)"
The original commit was reverted in r283329 due to a miscompile in
Chromium. That turned out to be the same issue as PR31257, which was
fixed in r295262.

llvm-svn: 295357
2017-02-16 19:04:42 +00:00
Andrea Di Biagio 42f7712e23 x86 interrupt calling convention: only save xmm registers if the target supports SSE
The existing code always saves the xmm registers for 64-bit targets even if the
target doesn't support SSE (which is common for kernels). Thus, the compiler
inserts movaps instructions which lead to CPU exceptions when an interrupt
handler is invoked.

This commit fixes this bug by returning a register set without xmm registers
from getCalleeSavedRegs and getCallPreservedMask for such targets.

Patch by Philipp Oppermann.

Differential Revision: https://reviews.llvm.org/D29959

llvm-svn: 295347
2017-02-16 18:25:37 +00:00
Sanjay Patel 8e55b685c2 [x86] add more tests of select of constants; NFC
llvm-svn: 295346
2017-02-16 18:15:16 +00:00
Artur Pilipenko 85d758299e [DAGCombiner] Support {a|s}ext, {a|z|s}ext load nodes in load combine
Resubmit r295314 with the PowerPC and AMDGPU tests updated.

Support {a|s}ext, {a|z|s}ext load nodes as a part of load combine patterns.
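
A minimal sketch of one pattern shape this enables (function name hypothetical):

```llvm
; The individual bytes are zero-extending loads; the combine can now
; look through the extensions and form a single wide load.
define i32 @load_combine(i8* %p) {
  %p1 = getelementptr i8, i8* %p, i64 1
  %b0 = load i8, i8* %p
  %b1 = load i8, i8* %p1
  %z0 = zext i8 %b0 to i32
  %z1 = zext i8 %b1 to i32
  %s1 = shl i32 %z1, 8
  %r = or i32 %s1, %z0
  ret i32 %r
}
; (little endian) combines to a single zero-extending i16 load.
```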

Reviewed By: filcab

Differential Revision: https://reviews.llvm.org/D29591

llvm-svn: 295336
2017-02-16 17:07:27 +00:00
Artur Pilipenko a1b384c4ce Revert r295314 "[DAGCombiner] Support {a|s}ext, {a|z|s}ext load nodes in load combine"
This change causes some of the AMDGPU and PowerPC tests to fail.

llvm-svn: 295316
2017-02-16 13:04:46 +00:00
Artur Pilipenko daaa0c0f7d [DAGCombiner] Support {a|s}ext, {a|z|s}ext load nodes in load combine
Support {a|s}ext, {a|z|s}ext load nodes as a part of load combine patterns.

Reviewed By: filcab

Differential Revision: https://reviews.llvm.org/D29591

llvm-svn: 295314
2017-02-16 12:53:26 +00:00
Craig Topper 715873ead3 [AVX-512] Remove masked packss/packus intrinsics and autoupgrade to unmasked intrinsics with select instructions. For 512-bit add new unmasked intrinsics.
The new 512-bit unmasked intrinsics will make it easy to handle these with the SSE/AVX intrinsics in InstCombine where we currently have a TODO.

llvm-svn: 295290
2017-02-16 06:31:54 +00:00
Hans Wennborg a468601e0e [X86] Re-enable conditional tail calls and fix PR31257.
This reverts r294348, which removed support for conditional tail calls
due to the PR above. It fixes the PR by marking live registers as
implicitly used and defined by the now predicated tailcall. This is
similar to how IfConversion predicates instructions.

Differential Revision: https://reviews.llvm.org/D29856

llvm-svn: 295262
2017-02-16 00:04:05 +00:00
Kyle Butt 7fbec9bdf1 Codegen: Make chains from trellis-shaped CFGs
Lay out trellis-shaped CFGs optimally.
A trellis of the shape below:

  A     B
  |\   /|
  | \ / |
  |  X  |
  | / \ |
  |/   \|
  C     D

would be laid out A; B->C; D by the current layout algorithm. Now we identify
trellises and lay them out either A->C; B->D or A->D; B->C. This scales with an
increasing number of predecessors. A trellis is a group of 2 or more
predecessor blocks that all have the same successors.

Because of this we can tail-duplicate to extend existing trellises.

As an example consider the following CFG:

    B   D   F   H
   / \ / \ / \ / \
  A---C---E---G---Ret

Where A,C,E,G are all small (currently 2 instructions).

The CFG preserving layout is then A,B,C,D,E,F,G,H,Ret.

The current code will copy C into B, E into D and G into F and yield the layout
A,C,B(C),E,D(E),F(G),G,H,ret

define void @straight_test(i32 %tag) {
entry:
  br label %test1
test1: ; A
  %tagbit1 = and i32 %tag, 1
  %tagbit1eq0 = icmp eq i32 %tagbit1, 0
  br i1 %tagbit1eq0, label %test2, label %optional1
optional1: ; B
  call void @a()
  br label %test2
test2: ; C
  %tagbit2 = and i32 %tag, 2
  %tagbit2eq0 = icmp eq i32 %tagbit2, 0
  br i1 %tagbit2eq0, label %test3, label %optional2
optional2: ; D
  call void @b()
  br label %test3
test3: ; E
  %tagbit3 = and i32 %tag, 4
  %tagbit3eq0 = icmp eq i32 %tagbit3, 0
  br i1 %tagbit3eq0, label %test4, label %optional3
optional3: ; F
  call void @c()
  br label %test4
test4: ; G
  %tagbit4 = and i32 %tag, 8
  %tagbit4eq0 = icmp eq i32 %tagbit4, 0
  br i1 %tagbit4eq0, label %exit, label %optional4
optional4: ; H
  call void @d()
  br label %exit
exit:
  ret void
}

Here is the layout after D27742:
straight_test:                          # @straight_test
; ... Prologue elided
; BB#0:                                 # %entry ; A (merged with test1)
; ... More prologue elided
	mr 30, 3
	andi. 3, 30, 1
	bc 12, 1, .LBB0_2
; BB#1:                                 # %test2 ; C
	rlwinm. 3, 30, 0, 30, 30
	beq	 0, .LBB0_3
	b .LBB0_4
.LBB0_2:                                # %optional1 ; B (copy of C)
	bl a
	nop
	rlwinm. 3, 30, 0, 30, 30
	bne	 0, .LBB0_4
.LBB0_3:                                # %test3 ; E
	rlwinm. 3, 30, 0, 29, 29
	beq	 0, .LBB0_5
	b .LBB0_6
.LBB0_4:                                # %optional2 ; D (copy of E)
	bl b
	nop
	rlwinm. 3, 30, 0, 29, 29
	bne	 0, .LBB0_6
.LBB0_5:                                # %test4 ; G
	rlwinm. 3, 30, 0, 28, 28
	beq	 0, .LBB0_8
	b .LBB0_7
.LBB0_6:                                # %optional3 ; F (copy of G)
	bl c
	nop
	rlwinm. 3, 30, 0, 28, 28
	beq	 0, .LBB0_8
.LBB0_7:                                # %optional4 ; H
	bl d
	nop
.LBB0_8:                                # %exit ; Ret
	ld 30, 96(1)                    # 8-byte Folded Reload
	addi 1, 1, 112
	ld 0, 16(1)
	mtlr 0
	blr

The tail-duplication has produced some benefit, but it has also produced a
trellis which is not laid out optimally. With this patch, we improve the layouts
of such trellises, and decrease the cost calculation for tail-duplication
accordingly.

This patch produces the layout A,C,E,G,B,D,F,H,Ret. This layout does have
back edges, which is a negative, but it has a bigger compensating
positive, which is that it handles the case where there are long strings
of skipped blocks much better than the original layout. Both layouts
handle runs of executed blocks equally well. Branch prediction also
improves if there is any correlation between subsequent optional blocks.

Here is the resulting concrete layout:

straight_test:                          # @straight_test
; BB#0:                                 # %entry ; A (merged with test1)
	mr 30, 3
	andi. 3, 30, 1
	bc 12, 1, .LBB0_4
; BB#1:                                 # %test2 ; C
	rlwinm. 3, 30, 0, 30, 30
	bne	 0, .LBB0_5
.LBB0_2:                                # %test3 ; E
	rlwinm. 3, 30, 0, 29, 29
	bne	 0, .LBB0_6
.LBB0_3:                                # %test4 ; G
	rlwinm. 3, 30, 0, 28, 28
	bne	 0, .LBB0_7
	b .LBB0_8
.LBB0_4:                                # %optional1 ; B (Copy of C)
	bl a
	nop
	rlwinm. 3, 30, 0, 30, 30
	beq	 0, .LBB0_2
.LBB0_5:                                # %optional2 ; D (Copy of E)
	bl b
	nop
	rlwinm. 3, 30, 0, 29, 29
	beq	 0, .LBB0_3
.LBB0_6:                                # %optional3 ; F (Copy of G)
	bl c
	nop
	rlwinm. 3, 30, 0, 28, 28
	beq	 0, .LBB0_8
.LBB0_7:                                # %optional4 ; H
	bl d
	nop
.LBB0_8:                                # %exit

Differential Revision: https://reviews.llvm.org/D28522

llvm-svn: 295223
2017-02-15 19:49:14 +00:00
Michael Kuperstein ba80db39d7 [DAG] Don't try to create an INSERT_SUBVECTOR with an illegal source
We currently can't legalize those, but we should really not be creating
them in the first place, since legalization would probably look similar to the
way we legalize CONCAT_VECTORS - basically replace the INSERT with a BUILD.

This fixes PR31956.

Differential Revision: https://reviews.llvm.org/D29961

llvm-svn: 295213
2017-02-15 18:37:26 +00:00
Simon Pilgrim da25d5c7b6 [X86][SSE] Propagate undef upper elements from scalar_to_vector during shuffle combining
Only do this for integer types currently - with float types (in particular insertps) load folding often fails.

llvm-svn: 295208
2017-02-15 17:41:33 +00:00
Simon Pilgrim d811bdd61a [X86] Regenerate scalar stack reload test
llvm-svn: 295195
2017-02-15 16:48:45 +00:00
Simon Pilgrim a0e56d2d68 [X86] Regenerate i64 ext-load on 32-bit target tests
llvm-svn: 295177
2017-02-15 14:06:17 +00:00
Simon Pilgrim 0f0e5bd3c6 [X86][SSE] Allow matchVectorShuffleWithUNPCK to recognise ZERO inputs
Add support for specifying an UNPCK input as ZERO, particularly improves ZEXT cases with non-zero offsets

llvm-svn: 295169
2017-02-15 11:46:15 +00:00
Craig Topper fbc7805e25 [X86] Don't create VBROADCAST nodes with 256-bit or 512-bit input types
Summary:
We don't seem to have great rules on what a valid VBROADCAST node looks like. And as a consequence we end up with a lot of patterns to try to catch everything. We have patterns with scalar inputs, 128-bit vector inputs, 256-bit vector inputs, and 512-bit vector inputs.

As you can see from the things improved here we are currently missing patterns for 128-bit loads being extended to 256-bit before the vbroadcast.

I'd like to propose that VBROADCAST should always take a 128-bit vector type as input. As a first step towards that this patch adds an EXTRACT_SUBVECTOR in front of VBROADCAST when the input is 256 or 512-bits. In the future I would like to add scalar_to_vector around all the scalar operations. And maybe we should consider adding a VBROADCAST+load node to avoid separating loads from the broadcasting operation when the load itself isn't foldable.

This requires an additional change in target shuffle combining to look for the extract subvector and look through it to find the original operand. I'm sure this change isn't perfect but was enough to fix a few test failures that were being caused.

Another interesting thing I noticed is that the changes in masked_gather_scatter.ll show cases where we don't remove a useless insert into element 1 before broadcasting element 0.

Reviewers: delena, RKSimon, zvi

Reviewed By: zvi

Subscribers: igorb, llvm-commits

Differential Revision: https://reviews.llvm.org/D28747

llvm-svn: 295155
2017-02-15 06:58:47 +00:00
Craig Topper ec5df5f4aa [AVX-512] Add PACKSS/PACKUS instructions to load folding tables.
llvm-svn: 295154
2017-02-15 06:51:39 +00:00
Reid Kleckner a622fc9bdf [BranchFolding] Tail common all identical unreachable blocks
Summary:
Blocks ending in unreachable are typically cold because they end the
program or throw an exception, so merging them with other identical
blocks is usually profitable because it reduces the size of cold code.
MachineBlockPlacement generally does not arrange to fall through to such
blocks, so commoning these blocks will not introduce additional
unconditional branches.
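
A minimal IR-level sketch of the situation (names hypothetical; the commoning itself happens on MachineIR):

```llvm
declare void @abort() noreturn

; Both failure paths end in identical unreachable blocks, so they can
; be merged into one cold block without adding unconditional branches.
define void @check(i1 %c1, i1 %c2) {
entry:
  br i1 %c1, label %fail1, label %next
next:
  br i1 %c2, label %fail2, label %ok
ok:
  ret void
fail1:
  call void @abort()
  unreachable
fail2:
  call void @abort()
  unreachable
}
```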

Reviewers: hans, iteratee, haicheng

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D29153

llvm-svn: 295105
2017-02-14 21:02:24 +00:00
Simon Pilgrim 6f732e026d [X86][SSE] Allow matchVectorShuffleWithUNPCK to recognise UNDEF inputs
Add support for specifying an UNPCK input as UNDEF

llvm-svn: 295061
2017-02-14 16:22:04 +00:00
Simon Pilgrim 5b281d9a5c [X86][SSE] Add shuffle combine tests showing missed opportunities to use UNPCK
Not correctly using UNDEF or ZERO inputs to combine to UNPCK shuffles

llvm-svn: 295059
2017-02-14 15:49:37 +00:00
Simon Pilgrim 8351cf1b6e [X86][SSE] Regenerate intrinsic upgrade tests
Remove excess semicolons

llvm-svn: 295058
2017-02-14 15:29:50 +00:00
Simon Pilgrim 75dda50ebe [X86][SSE] Test case showing missed PSHUFB target shuffle constant fold opportunity.
It also shows an unnecessary pshufb/broadcast being used - the original pshufb mask only requested the lowest byte.

llvm-svn: 295046
2017-02-14 11:20:11 +00:00
Craig Topper d2d50cba2a [AVX-512] Add PAVGB/PAVGW to load folding tables.
llvm-svn: 295035
2017-02-14 06:54:57 +00:00
Andrew Kaylor 709f1c2a9b [X86] Add MXCSR register
This adds MXCSR to the set of recognized registers for X86 targets and updates the instructions that read or write it. I do not intend for all of the various floating point instructions that implicitly use the control bits or update the status bits of this register to ever have that usage modeled by default. However, when constrained floating point modes (such as strict FP exception status modeling or dynamic rounding modes) are enabled, implicit use/def information for MXCSR will be added to those instructions.

Until those additional updates are made this should cause (almost?) no functional changes. Theoretically, this will prevent instructions like LDMXCSR and STMXCSR from being moved past one another, but that should be prevented anyway and I haven't found a case where it is happening now.

Differential Revision: https://reviews.llvm.org/D29903

llvm-svn: 295004
2017-02-13 23:38:52 +00:00
Amaury Sechet 3422b539c8 Revert autogenerated check result for test/CodeGen/X86/atomic-minmax-i6432.ll as they don't regenerate cleanly.
llvm-svn: 294996
2017-02-13 23:00:23 +00:00
Arnold Schwaighofer 8f3df731dc swiftcc: Don't emit tail calls from callers with swifterror parameters
Backends don't support this yet. They would have to move to the swifterror
register before the tail call to make sure it is live-in to the call.

rdar://30495920

llvm-svn: 294982
2017-02-13 19:58:28 +00:00
Taewook Oh 06a2128cfa Make MachineBasicBlock::updateTerminator to update DebugLoc as well
Summary:
Currently MachineBasicBlock::updateTerminator simply drops DebugLoc for newly created branch instructions, which may cause incorrect stepping and/or imprecise sample profile data. Below is an example:

```
  1 extern int bar(int x);
  2
  3 int foo(int *begin, int *end) {
  4   int *i;
  5   int ret = 0;
  6   for (
  7       i = begin ;
  8       i != end ;
  9       i++)
 10   {
 11       ret += bar(*i);
 12   }
 13   return ret;
 14 }
```

Below is a bitcode of 'foo' at the end of LLVM-IR level optimizations with -O3:

```
define i32 @foo(i32* readonly %begin, i32* readnone %end) !dbg !4 {
entry:
  %cmp6 = icmp eq i32* %begin, %end, !dbg !9
  br i1 %cmp6, label %for.end, label %for.body.preheader, !dbg !12

for.body.preheader:                               ; preds = %entry
  br label %for.body, !dbg !13

for.body:                                         ; preds = %for.body.preheader, %for.body
  %ret.08 = phi i32 [ %add, %for.body ], [ 0, %for.body.preheader ]
  %i.07 = phi i32* [ %incdec.ptr, %for.body ], [ %begin, %for.body.preheader ]
  %0 = load i32, i32* %i.07, align 4, !dbg !13, !tbaa !15
  %call = tail call i32 @bar(i32 %0), !dbg !19
  %add = add nsw i32 %call, %ret.08, !dbg !20
  %incdec.ptr = getelementptr inbounds i32, i32* %i.07, i64 1, !dbg !21
  %cmp = icmp eq i32* %incdec.ptr, %end, !dbg !9
  br i1 %cmp, label %for.end.loopexit, label %for.body, !dbg !12, !llvm.loop !22

for.end.loopexit:                                 ; preds = %for.body
  br label %for.end, !dbg !24

for.end:                                          ; preds = %for.end.loopexit, %entry
  %ret.0.lcssa = phi i32 [ 0, %entry ], [ %add, %for.end.loopexit ]
  ret i32 %ret.0.lcssa, !dbg !24
}
```

where

```
!12 = !DILocation(line: 6, column: 3, scope: !11)
```

. As you can see, the terminator of the 'entry' block, which is a loop control branch, has a DebugLoc of line 6, column 3. However, after the execution of the 'MachineBasicBlock::updateTerminator' function, which is triggered by the MachineSinking pass, the DebugLoc info is dropped as below (see that there's no debug-location for JNE_1):

```
  bb.0.entry:
    successors: %bb.4(0x30000000), %bb.1.for.body.preheader(0x50000000)
    liveins: %rdi, %rsi

    %6 = COPY %rsi
    %5 = COPY %rdi
    %8 = SUB64rr %5, %6, implicit-def %eflags, debug-location !9
    JNE_1 %bb.1.for.body.preheader, implicit %eflags
```

This patch addresses this issue and makes newly created branch instructions keep their debug-location info.

Reviewers: aprantl, MatzeB, craig.topper, qcolombet

Reviewed By: qcolombet

Subscribers: qcolombet, llvm-commits

Differential Revision: https://reviews.llvm.org/D29596

llvm-svn: 294976
2017-02-13 18:15:31 +00:00
Quentin Colombet fbae5fcb96 [FastISel] Add a diagnostic to warn on fallback.
This is consistent with what we do for GlobalISel. That way, it is easy
to see whether or not FastISel is able to fully select a function.
At some point we may want to switch that to an optimization remark.

llvm-svn: 294970
2017-02-13 17:38:59 +00:00
Simon Pilgrim ce2cb2d968 [X86][SSE] Add v4f32 and v2f64 extract to store tests
llvm-svn: 294952
2017-02-13 14:20:13 +00:00
Simon Pilgrim 0de807f878 [X86][SSE] Add more thorough extract to store tests
Added v4i32 and v2i64 tests and test on i686 as well as x86_64.

llvm-svn: 294946
2017-02-13 13:40:12 +00:00
Pierre Gousseau 796e0d6df1 [X86] Improve readability of test/CodeGen/X86/lzcnt-zext-cmp.ll by adding a common check prefix ALL. NFC.
llvm-svn: 294938
2017-02-13 09:57:17 +00:00
Craig Topper 3668bde371 [DAGCombiner] Teach DAG combine that inserting an extract_subvector result into the same location of a an undef vector can just use the original input to the extract.
llvm-svn: 294932
2017-02-13 04:53:33 +00:00
Craig Topper 680c73e7ab [X86] Genericize the handling of INSERT_SUBVECTOR from an EXTRACT_SUBVECTOR to support 512-bit vectors with 128-bit or 256-bit subvectors.
We now detect that both the extract and insert indices are non-zero and convert to a shuffle. This will be lowered as a blend for 256-bit vectors or as a vshuf operations for 512-bit vectors.

llvm-svn: 294931
2017-02-13 04:53:29 +00:00
Craig Topper aa46204ed9 [DAGCombiner] Remove the half vector width check for the combine of EXTRACT_SUBVECTOR from an INSERT_SUBVECTOR.
This gives more parallelism opportunities for AVX-512 when dealing with 128-bit extracts from 512-bit vectors.

llvm-svn: 294930
2017-02-12 23:49:49 +00:00
Sanjay Patel 0557a44287 [TargetLowering] fix SETCC SETLT folding with FP types
The bug was introduced with:
https://reviews.llvm.org/rL294863

...and manifests as a selection failure in x86, but that's actually
another bug. This fix prevents wrong codegen with -0.0, but in the
more common case when we have NSZ and NNAN (-ffast-math), we should 
still be able to fold this setcc/compare.

llvm-svn: 294924
2017-02-12 23:07:52 +00:00
Craig Topper 6eca3170a8 [AVX-512] Add VPEXTRD/Q to load folding tables.
llvm-svn: 294905
2017-02-12 18:47:37 +00:00
Simon Pilgrim 4cd841757a [X86][AVX2] Add support for combining target shuffles to VPMOVZX
Initial 256-bit vector support - 512-bit support requires extra checks for AVX512BW support (PMOVZXBW) that will be handled in a future patch.

llvm-svn: 294896
2017-02-12 14:31:23 +00:00
Craig Topper 04840ab752 [X86] Update test case I missed in r294876.
llvm-svn: 294878
2017-02-11 23:23:11 +00:00
Craig Topper 1c37e991e6 [X86] Move code for using blendi for insert_subvector out to an isel pattern. This gives the DAG combiner more opportunity to optimize without needing to dig through the blend.
llvm-svn: 294876
2017-02-11 22:57:12 +00:00
Simon Pilgrim 755d9127f5 [X86][SSE] Use VSEXT/VZEXT constant folding for SIGN_EXTEND_VECTOR_INREG/ZERO_EXTEND_VECTOR_INREG
Preparatory step for PR31712

llvm-svn: 294874
2017-02-11 22:47:06 +00:00
Simon Pilgrim 437d64c49e [X86][SSE] Improve VSEXT/VZEXT constant folding.
Generalize VSEXT/VZEXT constant folding to work with any target constant bits source, not just BUILD_VECTOR.

llvm-svn: 294873
2017-02-11 21:55:24 +00:00
Amaury Sechet cafc256fd4 Fix atomic-minmax-i6432.ll .
llvm-svn: 294867
2017-02-11 19:34:11 +00:00
Amaury Sechet 42fb927438 Regen expected tests result. NFC
llvm-svn: 294866
2017-02-11 19:27:15 +00:00
Sanjay Patel 63499b61c9 [TargetLowering] check for sign-bit comparisons in SimplifyDemandedBits
I don't know if anything other than x86 vectors is affected by this change, but this may allow 
us to remove target-specific intrinsics for blendv* (vector selects). The simplification arises
from the fact that blendv* instructions only use the sign-bit when deciding which vector element
to choose for the destination vector. The mechanism to fold VSELECT into SHRUNKBLEND nodes already
exists in x86 lowering; this demanded bits change just enables the transform to fire more often.
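
A minimal sketch of the kind of pattern this helps (function name hypothetical):

```llvm
; The select condition is a sign-bit test, so the compare can be
; dropped and %mask used directly as the blend condition.
define <4 x i32> @blend(<4 x i32> %mask, <4 x i32> %a, <4 x i32> %b) {
  %c = icmp slt <4 x i32> %mask, zeroinitializer
  %r = select <4 x i1> %c, <4 x i32> %a, <4 x i32> %b
  ret <4 x i32> %r
}
; with SSE4.1 this can become a blendvps on %mask with no pcmpgtd.
```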

The original motivation starts with a bug for DSE of masked stores that seems completely unrelated, 
but I've explained the likely steps in this series here:
https://llvm.org/bugs/show_bug.cgi?id=11210

Differential Revision: https://reviews.llvm.org/D29687

llvm-svn: 294863
2017-02-11 18:01:55 +00:00
Craig Topper 255343483d [AVX-512] Add VPMINS/MINU/MAXS/MAXU instructions to load folding tables.
llvm-svn: 294858
2017-02-11 17:35:28 +00:00
Simon Pilgrim 86a95c1ff7 [X86][3DNow!] Add tests to ensure PFMAX/PFMIN are not commuted.
llvm-svn: 294848
2017-02-11 14:01:37 +00:00
Simon Pilgrim 6411a0ebed [X86][3DNow!] Enable PFSUB<->PFSUBR commutation
llvm-svn: 294847
2017-02-11 13:51:14 +00:00
Simon Pilgrim 4ead1d4aa9 [X86][3DNow!] Enable commutation for PFADD/PFMUL/PFCMPEQ/PAVGUSB/PMULHRW
All commutations confirmed to give identical results - note PFMAX/PFMIN do not

PFSUB<->PFSUBR should be commutable as well

llvm-svn: 294846
2017-02-11 13:32:55 +00:00
Simon Pilgrim 6b4a5134af [X86][3DNow!] Add tests showing missed commutation opportunities.
llvm-svn: 294845
2017-02-11 13:00:32 +00:00
Simon Pilgrim 8158816efe [X86][XOP] Regenerate XOP commutation tests.
Added 32-bit tests as well.

llvm-svn: 294841
2017-02-11 12:30:59 +00:00
Simon Pilgrim 008ba63e04 [X86][SSE] Regenerate float comparison commutation tests.
llvm-svn: 294840
2017-02-11 12:29:56 +00:00
Simon Pilgrim 0d8632f089 [X86] Regenerate CLMUL commutation tests.
llvm-svn: 294839
2017-02-11 12:23:22 +00:00
Craig Topper 1f6153bab4 [AVX-512] Add VPINSRB/W/D/Q instructions to load folding tables.
llvm-svn: 294830
2017-02-11 07:01:40 +00:00
Craig Topper 3afa777f10 [AVX-512] Add VPSADBW instructions to load folding tables.
llvm-svn: 294827
2017-02-11 06:24:03 +00:00
Craig Topper 464b8cb244 [X86] Don't base domain decisions on VEXTRACTF128/VINSERTF128 if only AVX1 is available.
It seems the execution dependency pass likes to use FP instructions when most of the consuming code is integer, if a vextractf128 instruction produced the register. Without AVX2 we don't have the corresponding integer instruction available.

This patch suppresses the domain on these instructions to GenericDomain if AVX2 is not supported so that they are ignored by domain fixing. If AVX2 is supported we'll report the correct domain and allow them to switch between integer and fp.

Overall I think this produces better results in the modified test cases.

llvm-svn: 294824
2017-02-11 05:32:57 +00:00
Wei Mi 8f20e63a20 [LSR] Recommit: Allow formula containing Reg for SCEVAddRecExpr related with outerloop.
The recommit includes some changes of testcases. No functional change to the patch.

In RateRegister of existing LSR, if a formula contains a Reg which is a SCEVAddRecExpr,
and this SCEVAddRecExpr's loop is an outerloop, the formula will be marked as Loser
and dropped.

Suppose we have an IR that %for.body is outerloop and %for.body2 is innerloop. LSR only
handle inner loop now so only %for.body2 will be handled.

Using the logic above, formula like
reg(%array) + reg({1,+, %size}<%for.body>) + 1*reg({0,+,1}<%for.body2>) will be dropped
no matter what because reg({1,+, %size}<%for.body>) is a SCEVAddRecExpr type reg related
with outerloop. Only formula like
reg(%array) + 1*reg({{1,+, %size}<%for.body>,+,1}<nuw><nsw><%for.body2>) will be kept
because the SCEVAddRecExpr related with outerloop is folded into the initial value of the
SCEVAddRecExpr related with current loop.

But in some cases, we do need to share the basic induction variable
reg{0 ,+, 1}<%for.body2> among LSR Uses to reduce the final total number of induction
variables used by LSR, so we don't want to drop the formula like
reg(%array) + reg({1,+, %size}<%for.body>) + 1*reg({0,+,1}<%for.body2>) unconditionally.

From the existing comment, it tries to avoid considering multiple level loops at the same time.
However, existing LSR only handles innermost loop, so for any SCEVAddRecExpr with a loop other
than current loop, it is an invariant and will be simple to handle, and the formula doesn't have
to be dropped.

Differential Revision: https://reviews.llvm.org/D26429

llvm-svn: 294814
2017-02-11 00:50:23 +00:00
Ahmed Bougacha 2e275e272f [X86] Bitcast subvector before broadcasting it.
Since r274013, we've been looking through bitcasts on broadcast inputs.
In the scalar-folding case (from a load, build_vector, or sc2vec),
the input type didn't matter, as we'd simply bitcast the resulting
scalar back.

However, when broadcasting a 128-bit-lane-aligned element, we create an
EXTRACT_SUBVECTOR.  Use proper types, by creating an extract_subvector
of the original input type.

llvm-svn: 294774
2017-02-10 19:51:47 +00:00
Simon Pilgrim 39f8da3823 [X86][AVX512] Add vector rotate tests for AVX512 targets
AVX512 does have vector rotate instructions, but we don't lower to them yet

llvm-svn: 294766
2017-02-10 18:06:11 +00:00
Amaury Sechet 280ad2cebb Autogenerate results for test/CodeGen/X86/peep-test-4.ll . NFC
llvm-svn: 294765
2017-02-10 17:57:48 +00:00
Amaury Sechet f6308cfe87 Autogenerate results for test/CodeGen/X86/pr14314.ll . NFC
llvm-svn: 294764
2017-02-10 17:57:46 +00:00
Amaury Sechet c8587e4257 Use autogenerate check in CodeGen/X86/pr16031.ll . NFC
llvm-svn: 294761
2017-02-10 17:26:21 +00:00
Amaury Sechet 3b87944433 Check full codegen in CodeGen/X86/i256-add.ll NFC
llvm-svn: 294756
2017-02-10 16:34:17 +00:00
Simon Pilgrim a3362a1c9e [X86][SSE] Added chained FDIV test cases for D26855
Tests to demonstrate throughput-latency decision between div and rcp on faster hardware such as Haswell

llvm-svn: 294750
2017-02-10 14:56:12 +00:00
Simon Pilgrim bfb1747806 [DAGCombine] Allow vector constant folding of any value type before type legalization
The patch comes in 2 parts:

1 - it makes use of the SelectionDAG::NewNodesMustHaveLegalTypes flag to tell when it can safely constant fold illegal types.

2 - it correctly resets SelectionDAG::NewNodesMustHaveLegalTypes at the start of each call to SelectionDAGISel::CodeGenAndEmitDAG so all the pre-legalization stages can make use of it - not just the first basic block that gets handled.

Fix for PR30760

Differential Revision: https://reviews.llvm.org/D29568

llvm-svn: 294749
2017-02-10 14:37:25 +00:00
Simon Pilgrim c371159aac [X86][SSE] Add support for extracting target constants from BUILD_VECTOR
In some cases we call getTargetConstantBitsFromNode for nodes that haven't been lowered from BUILD_VECTOR yet

Note: We're getting very close to being able to move most of the constant extraction code from getTargetShuffleMaskIndices into getTargetConstantBitsFromNode.

llvm-svn: 294746
2017-02-10 14:04:11 +00:00
Igor Breger b4442f34cd [X86][GlobalISel] Add general-purpose Register Bank
Summary:
[X86][GlobalISel] Add general-purpose Register Bank.
Add trivial handling of G_ADD legalization.
Add register bank selection for COPY and G_ADD instructions.

Reviewers: rovka, zvi, ab, t.p.northover, qcolombet

Reviewed By: qcolombet

Subscribers: qcolombet, mgorny, dberris, kristof.beyls, llvm-commits

Differential Revision: https://reviews.llvm.org/D29771

llvm-svn: 294723
2017-02-10 07:05:56 +00:00
David L. Jones e072cf51da Update test/CodeGen/X86/sse-align-10.ll to use FileCheck instead of grep
Patch by Jorge Gorbe (lethalantidote).

Differential Revision: https://reviews.llvm.org/D29797

llvm-svn: 294686
2017-02-10 01:35:31 +00:00
Peter Collingbourne ef089bdb4b X86: Introduce relocImm-based patterns for cmp.
Differential Revision: https://reviews.llvm.org/D28690

llvm-svn: 294636
2017-02-09 22:02:28 +00:00
Peter Collingbourne d7dd65ad7c X86: Teach X86InstrInfo::analyzeCompare to recognize compares of symbols.
This requires that we communicate to X86InstrInfo::optimizeCompareInstr
that the second operand is neither a register nor an immediate. The way we
do that is by setting CmpMask to zero.

Note that there were already instructions where the second operand was not a
register nor an immediate, namely X86::SUB*rm, so also set CmpMask to zero
for those instructions. This seems like a latent bug, but I was unable to
trigger it.

Differential Revision: https://reviews.llvm.org/D28621

llvm-svn: 294634
2017-02-09 21:58:24 +00:00
Simon Pilgrim b25f60210f [X86][BMI2] Regenerate mulx tests
llvm-svn: 294598
2017-02-09 17:54:51 +00:00
David Bozier 93e773e9be Revert: "[Stack Protection] Add diagnostic information for why stack protection was applied to a function"
This reverts revision r294590 as it broke some buildbots.

llvm-svn: 294593
2017-02-09 15:40:14 +00:00
Artur Pilipenko 0e4583b56c Add DAGCombiner load combine tests for partially available values
If some of the trailing or leading bytes of a load combine pattern are zeroes, we can combine the pattern to a load + zext and shift. Currently we don't support this, so the tests check the current codegen without load combine. This change will make the patch that adds support for this kind of combine clearer.

llvm-svn: 294591
2017-02-09 15:13:40 +00:00
David Bozier 6a44b7c2eb [Stack Protection] Add diagnostic information for why stack protection was applied to a function
Stack Smash Protection is not completely free, so in hot code the overhead it introduces can cause performance issues. By adding diagnostic information for which functions have SSP and why, a user can quickly determine what they can do to stop SSP being applied to a specific hot function.

This change adds an SSP-specific DiagnosticInfo class and uses of it to the Stack Protection code. A subsequent change to clang will cause the remarks to be emitted when enabled.

Patch by: James Henderson

Differential Revision: https://reviews.llvm.org/D29023

llvm-svn: 294590
2017-02-09 15:08:40 +00:00
Pierre Gousseau 6953b32475 [X86][btver2] PR31902: Fix a crash in combineOrCmpEqZeroToCtlzSrl under fast math.
In combineOrCmpEqZeroToCtlzSrl, replace "getConstantOperand == 0" by "isNullConstant" to account for floating point constants.

Differential Revision: https://reviews.llvm.org/D29756

llvm-svn: 294588
2017-02-09 14:43:58 +00:00
Simon Pilgrim 05ac1f70be [X86][SSE] Added extra FMA/NO-FMA reciprocal test cases for D26855
Test for expected codegen for nr reciprocal cases with/without FMA

llvm-svn: 294587
2017-02-09 14:14:06 +00:00
Artur Pilipenko 4a64031954 [DAGCombiner] Support non-zero offset in load combine
Enable folding patterns which load the value from non-zero offset:

  i8 *a = ...
  i32 val = a[4] | (a[5] << 8) | (a[6] << 16) | (a[7] << 24)
=>
  i32 val = *((i32*)(a+4))

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D29394

llvm-svn: 294582
2017-02-09 12:06:01 +00:00
Simon Pilgrim 563e23e66e [X86][SSE] Attempt to break register dependencies during lowerBuildVector
LowerBuildVectorv16i8/LowerBuildVectorv8i16 insert values into an UNDEF vector if the build vector doesn't contain any zero elements, resulting in register dependencies with a previous use of the register.

This patch attempts to break the register dependency by either always zeroing the vector beforehand or (if we're inserting to the 0th element) by using VZEXT_MOVL(SCALAR_TO_VECTOR(i32 AEXT(Elt))), which lowers to (V)MOVD and performs a similar function. Additionally, (V)MOVD is a shorter instruction than PINSRB/PINSRW. We already do something similar for SSE41 PINSRD.

On pre-SSE41 LowerBuildVectorv16i8 we go a little further and use VZEXT_MOVL(SCALAR_TO_VECTOR(i32 ZEXT(Elt))) if the build vector contains zeros, avoiding the vector zeroing at the cost of a scalar zero extension. This can probably be brought over to the other cases in a future patch (load folding etc.)

Differential Revision: https://reviews.llvm.org/D29720

llvm-svn: 294581
2017-02-09 11:50:19 +00:00
Igor Breger ed43f15637 Add new tests for EXTRACT_VECTOR_ELT (vector of packed i8/16/i32/i64/ps/pd data)
llvm-svn: 294565
2017-02-09 07:39:19 +00:00
Craig Topper 50f3d1452c [X86] Clzero intrinsic and its addition under znver1
This patch does the following.

1. Adds an Intrinsic int_x86_clzero which works with __builtin_ia32_clzero
2. Identifies clzero feature using cpuid info. (Function:8000_0008, Checks if EBX[0]=1)
3. Adds the clzero feature under znver1 architecture.
4. The custom inserter is added in Lowering.
5. A testcase is added to check the intrinsic.
6. The clzero instruction is added to assembler test.
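
A minimal usage sketch of the new intrinsic (function name hypothetical):

```llvm
declare void @llvm.x86.clzero(i8*)

; Zeroes the cache line containing %line; requires +clzero (znver1).
define void @zero_line(i8* %line) {
  call void @llvm.x86.clzero(i8* %line)
  ret void
}
```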

Patch by Ganesh Gopalasubramanian, with a couple of formatting tweaks, a disassembler test, and use of update_llc_test_checks.py from me.

Differential revision: https://reviews.llvm.org/D29385

llvm-svn: 294558
2017-02-09 04:27:34 +00:00
Simon Pilgrim 696e27e1ec [X86][SSE] Regenerate scalar integer conversions to float tests
llvm-svn: 294499
2017-02-08 19:01:27 +00:00
Sanjay Patel 28ef27e3dc [x86] add AVX512vl target for more coverage; NFC
llvm-svn: 294462
2017-02-08 15:22:52 +00:00
Craig Topper 3fd463a15a [X86] Add test for clflushopt intrinsic and only enable it to be selected if the feature flag is set.
llvm-svn: 294407
2017-02-08 05:45:46 +00:00
Amaury Sechet 4b946916ac [DAGCombiner] Push truncate through adde when the carry isn't used.
Summary: As per title.

Reviewers: mkuper, spatel, bkramer, RKSimon, zvi

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D29528

llvm-svn: 294394
2017-02-08 00:32:36 +00:00
Simon Pilgrim 39c138cc76 [X86][SSE] Add SSE2 build vector insertion tests
llvm-svn: 294365
2017-02-07 22:23:12 +00:00
Simon Pilgrim 90ee0b2786 [X86][SSE] Add additional v4i32/v8i16/v16i8 build vector insertion tests
With particular interest in cases where we don't make use of implicit zeroing or where we fail to break register dependencies.

llvm-svn: 294363
2017-02-07 22:03:37 +00:00
Hans Wennborg 819e3e02a9 [X86] Disable conditional tail calls (PR31257)
They are currently modelled incorrectly (as calls, which clobber
registers, confusing e.g. Machine Copy Propagation).

Reverting until we figure out the proper solution.

llvm-svn: 294348
2017-02-07 20:37:45 +00:00
Sanjoy Das 2f63cbcc0c [ImplicitNullCheck] Extend Implicit Null Check scope by using stores
Summary:
This change allows usage of store instruction for implicit null check.

Memory aliasing analysis is not used, and the change conservatively assumes
that any store and load may access the same memory. As a result,
re-ordering of store-store, store-load and load-store is prohibited.
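
A minimal sketch of the store case (names hypothetical):

```llvm
; With implicit null checks enabled, the faulting store can replace the
; explicit null test; the branch carries !make.implicit metadata.
define void @write(i32* %p) {
entry:
  %c = icmp eq i32* %p, null
  br i1 %c, label %is_null, label %not_null, !make.implicit !0
not_null:
  store i32 1, i32* %p
  ret void
is_null:
  ret void
}

!0 = !{}
```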

Patch by Serguei Katkov!

Reviewers: reames, sanjoy

Reviewed By: sanjoy

Subscribers: atrick, llvm-commits

Differential Revision: https://reviews.llvm.org/D29400

llvm-svn: 294338
2017-02-07 19:19:49 +00:00
Artur Pilipenko 469596ef87 Add DAGCombiner load combine tests for {a|s}ext, {a|z|s}ext load nodes
Currently we don't support these nodes, so the tests check the current codegen without load combine. This change makes the review of the change to support these nodes clearer.

Separated from https://reviews.llvm.org/D29591 review.

llvm-svn: 294305
2017-02-07 14:09:37 +00:00
Simon Pilgrim b4a9eeafcc [X86][SSE] Generalized integer absolute tests to test canonical pattern as well as intrinsics
llvm-svn: 294300
2017-02-07 13:15:09 +00:00
Simon Pilgrim d48b23c2f2 [X86][SSE] Added 256-bit vector tests cases
Exposes some poor codegen with identity shuffle due to bad interaction with insert_subvector(extract_subvector) / concat_subvectors

llvm-svn: 294296
2017-02-07 12:01:36 +00:00
Daniel Jasper 84b3cc394d Revert "[DAGCombiner] (add X, (adde Y, 0, Carry)) -> (adde X, Y, Carry)"
This reverts commit r294186.

On an internal test, this triggers an out-of-memory error on PPC,
presumably because there is another dagcombine that does the exact
opposite, triggering an endless loop that consumes more and more memory.

Chandler has started creating a reduced test case and we'll attach it
as soon as possible.

llvm-svn: 294288
2017-02-07 08:57:50 +00:00
Craig Topper 9191c3324a [AVX-512] Add masked and unmasked shift by immediate instructions to load folding tables.
llvm-svn: 294287
2017-02-07 07:31:00 +00:00
Craig Topper 62304d80e3 [AVX-512] Add masked shift instructions to load folding tables.
This adds the masked versions of everything, but the shift by immediate instructions.

llvm-svn: 294286
2017-02-07 07:30:57 +00:00
Craig Topper 45d9ddc687 [AVX-512] Add some of the shift instructions to the load folding tables.
This includes unmasked forms of variable shift and shifting by the lower element of a register.

Still need to do shift by immediate which was not foldable prior to avx512 and all the masked forms.

llvm-svn: 294285
2017-02-07 07:30:54 +00:00
Craig Topper 39d86bb688 [X86] Change the Defs list for VZEROALL/VZEROUPPER back to not including YMM16-31.
llvm-svn: 294277
2017-02-07 04:10:57 +00:00
Craig Topper 190314ce4a [AVX-512] Put the integer stack folding tests in alphabetical order.
llvm-svn: 294276
2017-02-07 04:10:54 +00:00
Sanjay Patel cbcef5709f [x86] add tests to show current codegen for vblendv*; NFC
As noted in the comments, we should be able to eliminate cmp ops
in several cases.

llvm-svn: 294263
2017-02-07 00:10:50 +00:00