Commit Graph

158121 Commits

Sean Fertile 5fb624a3b8 [Memcpy Loop Lowering] Remove the fixed int8 lowering.
Switch over to the lowering that uses target-supplied operand types.

Differential Revision: https://reviews.llvm.org/D41201

llvm-svn: 320989
2017-12-18 15:31:14 +00:00
Sander de Smalen 190979189a [TableGen][AsmMatcherEmitter] Only choose specific diagnostic for enabled instruction
Summary:
When emitting a diagnostic for an invalid operand, a specific diagnostic
should only be reported when the instruction being matched is actually
enabled by the feature flags.

Patch [3/4] in a series to add parsing of predicates and properly parse SVE
ZIP1/ZIP2 instructions. This patch fixes bogus diagnostic messages when
the SVE feature is not specified.

Reviewers: rengolin, craig.topper, olista01, sdardis, stoklund

Reviewed By: olista01, sdardis

Subscribers: fhahn, javed.absar, llvm-commits

Differential Revision: https://reviews.llvm.org/D40362

llvm-svn: 320986
2017-12-18 14:34:24 +00:00
Max Kazantsev 1acab00229 [LVI] Support for ashr in LVI
Enhance LVI to analyze the 'ashr' binary operation. This leverages the infrastructure in ConstantRange for the ashr operation.

Patch by Surya Kumari Jangala!

Differential Revision: https://reviews.llvm.org/D40886

llvm-svn: 320983
2017-12-18 14:23:30 +00:00
Diana Picus 8ee540c01a [ARM GlobalISel] Fix G_(UN)MERGE_VALUES handling after r319524
r319524 has made more G_MERGE_VALUES/G_UNMERGE_VALUES pairs legal than
are supported by the rest of the pipeline. Restrict that to only the
cases that we can currently handle: packing 32-bit values into 64-bit
ones, when we have hardware FP.

llvm-svn: 320980
2017-12-18 13:22:28 +00:00
Benjamin Kramer bc8fdaaf60 Constexprify LaneBitmask factory methods.
This avoids global constructors when they're used in a global constant.
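
For example (a short sketch, not part of this commit, assuming the existing
LaneBitmask::getLane factory):

#include "llvm/MC/LaneBitmask.h"

// With the factories constexpr, this global is constant-initialized at
// compile time; no global constructor is emitted for it.
constexpr llvm::LaneBitmask FirstLane = llvm::LaneBitmask::getLane(0);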

llvm-svn: 320979
2017-12-18 13:20:26 +00:00
Max Kazantsev d792171efb [ConstantRange] Support for ashr in ConstantRange computation
Extend the ConstantRange implementation to compute the range of possible values resulting from an arithmetic right shift operation.
There will be a follow-up patch to leverage this constant range infrastructure in LazyValueInfo.
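
A minimal sketch of the new operation (illustrative values, not part of the
original message; assumes the interface mirrors the existing lshr/shl range
operations):

#include "llvm/IR/ConstantRange.h"
using namespace llvm;

// Signed values in [-8, 8) shifted right by exactly 1 land in [-4, 4).
ConstantRange ashrExample() {
  ConstantRange LHS(APInt(8, -8, /*isSigned=*/true), APInt(8, 8));
  ConstantRange ShAmt(APInt(8, 1), APInt(8, 2)); // shift amount: exactly 1
  return LHS.ashr(ShAmt); // conservatively covers [-4, 4)
}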

Patch by Surya Kumari Jangala!

Differential Revision: https://reviews.llvm.org/D40881

llvm-svn: 320976
2017-12-18 13:01:32 +00:00
Simon Dardis f70af977af Revert "[mips] Fix the target specific instruction verifier"
This reverts commit r320974. The commit message lacked the Differential Revision: line.

llvm-svn: 320975
2017-12-18 12:30:34 +00:00
Simon Dardis c3c0d4590b [mips] Fix the target specific instruction verifier
Fix an off-by-one error in the bounds checking for 'dinsu' and update
the ranges in the test comments so that they are accurate.

Reviewers: atanasyan

https://reviews.llvm.org/D41183

llvm-svn: 320974
2017-12-18 12:24:17 +00:00
Sander de Smalen fce0c1c45b [AArch64][SVE] Asm: Add ZIP1/ZIP2 instructions (predicate/data vectors)
Summary: Patch [2/4] in a series to add parsing of predicates and properly parse SVE ZIP1/ZIP2 instructions.

Reviewers: rengolin, kristof.beyls, fhahn, mcrosier, evandro

Reviewed By: fhahn

Subscribers: aemerson, javed.absar, llvm-commits, tschuett

Differential Revision: https://reviews.llvm.org/D40361

llvm-svn: 320973
2017-12-18 11:29:59 +00:00
Sander de Smalen ce1e0975f4 [AArch64][SVE] Asm: Add SVE predicate register definitions and parsing support
Summary: Patch [1/4] in a series to add parsing of predicates and properly parse SVE ZIP1/ZIP2 instructions.

Reviewers: rengolin, kristof.beyls, fhahn, mcrosier, evandro, echristo, efriedma

Reviewed By: fhahn

Subscribers: aemerson, javed.absar, llvm-commits, tschuett

Differential Revision: https://reviews.llvm.org/D40360

llvm-svn: 320970
2017-12-18 11:26:34 +00:00
Eugene Leviant c95b49603e [ThinLTO] Remove unused code
This is a re-commit of r320464, after the patch for the gold plugin
had landed.

llvm-svn: 320968
2017-12-18 10:53:45 +00:00
Tim Northover 9097a07e4e AArch64: work around how Cyclone handles "movi.2d vD, #0".
For Cyclone, the instruction "movi.2d vD, #0" is executed incorrectly in some rare
circumstances. Work around the issue conservatively by avoiding the instruction entirely.

This patch changes CodeGen so that problematic instructions are never
generated, and the AsmParser so that an equivalent instruction is used (with a
warning).

llvm-svn: 320965
2017-12-18 10:36:00 +00:00
Igor Laevsky 7bd3fb15e1 [TargetLibraryInfo] Discard library functions with incorrectly sized integers
Differential Revision: https://reviews.llvm.org/D41184

llvm-svn: 320964
2017-12-18 10:31:58 +00:00
Sam Parker fd967f2f7a [ARM] Adjust test checks
Correct the CHECK-LABEL lines of a couple of DAG combine tests.

llvm-svn: 320963
2017-12-18 10:08:03 +00:00
Sam Parker 00804efd72 [DAGCombine] Move AND nodes to multiple load leaves
Search from AND nodes to find whether they can be propagated back to
loads, so that the AND and load can be combined into a narrow load.
We search through OR, XOR and other AND nodes, and all but one of the
leaves are required to be loads or constants. The exception node then
needs to be masked off, meaning that the 'and' isn't removed, but the
load(s) are still narrowed.
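
An illustrative source-level pattern that benefits (hypothetical function,
not from the patch):

#include <cstdint>

// All leaves under the AND except 'x' are loads, so the 0xFF mask can be
// propagated back to the load, narrowing it to a byte load; 'x' is the one
// exception and must still be masked, so the 'and' itself remains.
uint32_t combineExample(const uint32_t *p, uint32_t x) {
  return (*p | x) & 0xFF;
}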

Differential Revision: https://reviews.llvm.org/D41177

llvm-svn: 320962
2017-12-18 10:04:27 +00:00
Clement Courbet 6f42de3062 [NFC][CodeGen][ExpandMemCmp] Fix documentation.
llvm-svn: 320960
2017-12-18 07:32:48 +00:00
Craig Topper 7034d401f8 [X86] Use mattr instead of mcpu in some of the cost model tests.
Based on the names of the check lines, features seem more appropriate than cpu.

Spotted while prototyping my patch to make 512-bit vectors illegal on SKX sometimes.

llvm-svn: 320959
2017-12-18 07:21:58 +00:00
Hiroshi Inoue c6faf15459 [SROA] Disable non-whole-alloca splits by default
This patch introduces a switch, default off, to control splitting of non-whole-alloca slices.
The switch will default to on again once the issue reported in PR35657 is fixed.

llvm-svn: 320958
2017-12-18 06:47:37 +00:00
Craig Topper 8e2837cc6e [X86] Fix mistake that I made when splitting up the setOperationAction calls recently.
The block into which I moved things that need BWI and either 512-bit vectors or VLX was incorrectly qualified with just hasBWI || hasVLX. Here I've qualified it with hasBWI && (hasAVX512 || hasVLX), where hasAVX512 will be replaced with a check for whether 512-bit vectors are allowed in an upcoming patch.

llvm-svn: 320957
2017-12-18 04:50:05 +00:00
Serguei Katkov b0b67a8d38 [CGP] Fix the handling of select instructions in complex addressing mode
When we put the value in the select placeholder we must pass
it through the simplification tracker, because the value might
already have been simplified and erased.

This is a fix for PR35658.

Reviewers: john.brawn, uabelho
Reviewed By: john.brawn
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D41251

llvm-svn: 320956
2017-12-18 04:25:07 +00:00
Sanjay Patel 9da049fa8a [x86] add tests for finite libcall lowering (PR35672); NFC
llvm-svn: 320955
2017-12-18 00:38:45 +00:00
Bjorn Steinbrink 3603de2fa2 Re-commit "Properly handle multi-element and dynamically sized allocas in getPointerDereferenceableBytes()"
llvm-clang-x86_64-expensive-checks-win is still broken, so the failure
seems unrelated.

llvm-svn: 320953
2017-12-17 21:20:16 +00:00
Craig Topper 255a76d6d1 [X86] Add test cases that show cases where buildvector of extract and inserts should be turned into fmsubadd.
This is a follow-up to the fmaddsub support added in r320950. Hopefully in the future we can fix the lowering to handle fmsubadd too.

llvm-svn: 320951
2017-12-17 18:31:36 +00:00
Craig Topper fd8d040820 [X86] Make the code that creates fmaddsub from build_vector of extracts and inserts functional and add tests.
Summary:
We had no tests for this, and we couldn't do the optimization because of a bad use count check. We need to know how many non-undef pieces of the build vector were filled in and ensure our use count is equal to that. But in the shuffle combine version we need the use count to be 2.

The missing coverage was noticed during the review of D40335.

Reviewers: RKSimon, zvi, spatel

Reviewed By: RKSimon

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D41133

llvm-svn: 320950
2017-12-17 18:23:45 +00:00
Simon Pilgrim 406d04a916 [X86] Regenerate truncated rotation tests + add missing 32-bit checks
llvm-svn: 320949
2017-12-17 18:20:42 +00:00
Sam Clegg b07a016ed1 use uint32_t
llvm-svn: 320947
2017-12-17 17:50:07 +00:00
Sam Clegg c551522d25 [WebAssembly] Export some more info on wasm functions
Summary:
These fields are useful for lld's gc-sections support

Also remove an unused field.

Subscribers: jfb, dschuff, jgravelle-google, aheejin, sunfish

Differential Revision: https://reviews.llvm.org/D41320

llvm-svn: 320946
2017-12-17 17:50:07 +00:00
Bjorn Steinbrink 6f7bbf349f Revert "Properly handle multi-element and dynamically sized allocas in getPointerDereferenceableBytes()"
This reverts commit 217067d5179882de9deb60d2e866befea4c126e7.

Fails on llvm-clang-x86_64-expensive-checks-win

llvm-svn: 320945
2017-12-17 15:16:58 +00:00
Bjorn Steinbrink e880f262e5 Revert "Treat sret arguments as being dereferenceable in getPointerDereferenceableBytes()"
This reverts commit 8b7a7660a3904b2088bc594311bcea2c651def08.

I didn't mean to commit this.

llvm-svn: 320944
2017-12-17 15:16:51 +00:00
Bjorn Steinbrink 7afcb71a42 Treat sret arguments as being dereferenceable in getPointerDereferenceableBytes()
llvm-svn: 320943
2017-12-17 15:11:52 +00:00
Simon Pilgrim b1b30286bf Remove superfluous break after a return. NFCI.
llvm-svn: 320941
2017-12-17 11:01:33 +00:00
Craig Topper 5992535e1a [X86DomainReassignment] Store legal domains in a std::bitset instead of using, as a set, a SmallVector that really only ever has one element.
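
A sketch of the representation change (illustrative name and width, not the
actual members):

#include <bitset>

// One bit per execution domain; set() and test() replace the old
// SmallVector push_back and linear membership search.
std::bitset<16> LegalDstDomains;
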
llvm-svn: 320940
2017-12-17 03:16:23 +00:00
Bjorn Steinbrink c27f81b92b Properly handle byval arguments in getPointerDereferenceableBytes()
Summary:
For byval arguments, the number of dereferenceable bytes is equal to
the size of the pointee, not the pointer.
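
For example (an illustrative C++-level view, not from the patch):

struct S { int A[4]; };  // 16-byte pointee
// Passing S by value lowers to a 'byval' pointer argument in IR; that
// argument is dereferenceable for sizeof(S) == 16 bytes, not for the 8
// bytes the pointer itself occupies on a 64-bit target.
void callee(S V);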

Reviewers: hfinkel, rnk

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D41305

llvm-svn: 320939
2017-12-17 02:37:42 +00:00
Bjorn Steinbrink 5d86532467 Properly handle multi-element and dynamically sized allocas in getPointerDereferenceableBytes()
Reviewers: hfinkel, rnk

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D41288

llvm-svn: 320938
2017-12-17 01:54:25 +00:00
Craig Topper ee1e71e576 [X86] Use extract_vector_elt instead of X86ISD::VEXTRACT for isel of vXi1 extractions.
llvm-svn: 320937
2017-12-17 01:35:48 +00:00
Craig Topper c0c2d19e08 [X86] Canonicalize extract_vector_elt from vXi1 to always return MVT::i32.
This allows us to remove some isel patterns that allowed MVT::i8 result type.

llvm-svn: 320936
2017-12-17 01:35:47 +00:00
Craig Topper c609dc8f55 [X86] Don't create X86ISD::VEXTRACT nodes directly. Use EXTRACT_VECTOR_ELT and allow that to be legalized to VEXTRACT.
I think we can remove the VEXTRACT node completely and use a canonicalized EXTRACT_VECTOR_ELT instead. This is a first step.

llvm-svn: 320935
2017-12-17 01:35:44 +00:00
Simon Pilgrim 5c0c93ed4c Fix unused variable warning.
llvm-svn: 320934
2017-12-16 23:37:51 +00:00
Simon Pilgrim 4c9e8215e9 [X86][AVX] lowerVectorShuffleAsBroadcast - aggressively peek through BITCASTs
Assuming we can safely adjust the broadcast index for the new type to keep it suitably aligned, peek through BITCASTs when looking for the broadcast source.

Fixes PR32007

llvm-svn: 320933
2017-12-16 23:32:18 +00:00
Simon Pilgrim 88c10bc969 [X86][AVX] Use extract128BitVector helper. NFCI.
llvm-svn: 320932
2017-12-16 23:09:57 +00:00
Simon Pilgrim f3b6da00f5 [X86][AVX] Fix failed broadcast fold
Strip excess BITCASTs from EXTRACT_SUBVECTOR input

llvm-svn: 320930
2017-12-16 22:57:17 +00:00
Sean Fertile 68d7f9da76 [Memcpy Loop Lowering] Only calculate residual size/bytes copied when needed.
If the loop operand type is int8 then there will be no residual loop for the
unknown-size expansion. Don't create the residual-size and bytes-copied values
when they are not needed.
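
A sketch of the two values in question (illustrative names, not from the
patch):

#include <cstdint>

// Runtime split for the unknown-size expansion. With an int8 loop operand
// OpSize == 1, so ResidualSize is always 0 and neither value is needed.
uint64_t residualSize(uint64_t Size, uint64_t OpSize) {
  return Size % OpSize;         // bytes left for the residual loop
}
uint64_t bytesCopied(uint64_t Size, uint64_t OpSize) {
  return Size - Size % OpSize;  // bytes handled by the main loop
}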

llvm-svn: 320929
2017-12-16 22:41:39 +00:00
Craig Topper 849b717c86 [X86] Don't pass a zero input to the passthru operand of getVectorMaskingNode/getScalarMaskingNode when it's going to emit an ISD::OR/ISD::AND. NFCI
In those cases, the passthru operand of the methods isn't used. The calls to the scalar version were passing an MVT::i1 zero, which is an illegal type at the stage this code runs.

llvm-svn: 320928
2017-12-16 21:12:24 +00:00
Craig Topper 93253e189c [X86] Have getVectorMaskingNode return an ISD::AND for X86ISD::VPSHUFBITQMB instead of creating a select with one input being 0.
llvm-svn: 320927
2017-12-16 21:12:23 +00:00
Craig Topper 1260a4e826 [X86] When using vpopcntdq for ctpop of v8i16 vectors, only promote to v8i32.
Previously we promoted to v8i64, but we don't need to go all the way to 512 bits. If we have VLX we can use the 256-bit instruction. And even if we don't have VLX we can widen v8i32 to v16i32 and drop the upper half.

llvm-svn: 320926
2017-12-16 19:31:36 +00:00
Craig Topper a42a2ba221 [X86] Combine some more scheduler model entries using regular expressions.
We had a lot of separate 32-bit and 64-bit instructions that had the same scheduling data. This merges them into the same regular expression. This is pretty consistent with a lot of other instructions.

llvm-svn: 320924
2017-12-16 18:35:31 +00:00
Craig Topper 17a311831c [X86] Use instrs instead of instregex for gather/scatter instructions in the scheduler models. Combine into single InstrRW entries.
This reduces the number of scheduler groups in the subtarget info.

llvm-svn: 320923
2017-12-16 18:35:29 +00:00
Simon Pilgrim 5f022d278b [InstCombine] Regenerate FMUL/FMA combine tests with update_test_checks.py
llvm-svn: 320922
2017-12-16 17:18:15 +00:00
Sanjay Patel 5a0cdac174 [InstCombine] canonicalize shifty abs(): ashr+add+xor --> cmp+neg+sel
We want to do this for 2 reasons:
1. Value tracking does not recognize the ashr variant, so it would fail to match for cases like D39766.
2. DAGCombiner does better at producing optimal codegen when we have the cmp+sel pattern.

More detail about what happens in the backend:
1. DAGCombiner has a generic transform for all targets to convert the scalar cmp+sel variant of abs 
   into the shift variant. That is the opposite of this IR canonicalization.
2. DAGCombiner has a generic transform for all targets to convert the vector cmp+sel variant of abs 
   into either an ABS node or the shift variant. That is again the opposite of this IR canonicalization.
3. DAGCombiner has a generic transform for all targets to convert the exact shift variants produced by #1 or #2
   into an ISD::ABS node. Note: It would be an efficiency improvement if we had #1 go directly to an ABS node 
   when that's legal/custom.
4. The pattern matching above is incomplete, so it is possible to escape the intended/optimal codegen in a 
   variety of ways.
   a. For #2, the vector path is missing the case for setlt with a '1' constant.
   b. For #3, we are missing a match for commuted versions of the shift variants.
5. Therefore, this IR canonicalization can only help get us to the optimal codegen. The version of cmp+sel 
   produced by this patch will be recognized in the DAG and converted to an ABS node when possible or the 
   shift sequence when not.
6. In the following examples with this patch applied, we may get conditional moves rather than the shift 
   produced by the generic DAGCombiner transforms. The conditional move is created using a target-specific 
   decision for any given target. Whether it is optimal or not for a particular subtarget may be up for debate.

define i32 @abs_shifty(i32 %x) {
  %signbit = ashr i32 %x, 31 
  %add = add i32 %signbit, %x  
  %abs = xor i32 %signbit, %add 
  ret i32 %abs
}

define i32 @abs_cmpsubsel(i32 %x) {
  %cmp = icmp slt i32 %x, zeroinitializer
  %sub = sub i32 zeroinitializer, %x
  %abs = select i1 %cmp, i32 %sub, i32 %x
  ret i32 %abs
}

define <4 x i32> @abs_shifty_vec(<4 x i32> %x) {
  %signbit = ashr <4 x i32> %x, <i32 31, i32 31, i32 31, i32 31> 
  %add = add <4 x i32> %signbit, %x  
  %abs = xor <4 x i32> %signbit, %add 
  ret <4 x i32> %abs
}

define <4 x i32> @abs_cmpsubsel_vec(<4 x i32> %x) {
  %cmp = icmp slt <4 x i32> %x, zeroinitializer
  %sub = sub <4 x i32> zeroinitializer, %x
  %abs = select <4 x i1> %cmp, <4 x i32> %sub, <4 x i32> %x
  ret <4 x i32> %abs
}

> $ ./opt -instcombine shiftyabs.ll -S | ./llc -o - -mtriple=x86_64 -mattr=avx 
> abs_shifty:
> 	movl	%edi, %eax
> 	negl	%eax
> 	cmovll	%edi, %eax
> 	retq
> 
> abs_cmpsubsel:
> 	movl	%edi, %eax
> 	negl	%eax
> 	cmovll	%edi, %eax
> 	retq
> 
> abs_shifty_vec:
> 	vpabsd	%xmm0, %xmm0
> 	retq
> 
> abs_cmpsubsel_vec:
> 	vpabsd	%xmm0, %xmm0
> 	retq
> 
> $ ./opt -instcombine shiftyabs.ll -S | ./llc -o - -mtriple=aarch64
> abs_shifty:
> 	cmp	w0, #0                  // =0
> 	cneg	w0, w0, mi
> 	ret
> 
> abs_cmpsubsel: 
> 	cmp	w0, #0                  // =0
> 	cneg	w0, w0, mi
> 	ret
>                                        
> abs_shifty_vec: 
> 	abs	v0.4s, v0.4s
> 	ret
> 
> abs_cmpsubsel_vec: 
> 	abs	v0.4s, v0.4s
> 	ret
> 
> $ ./opt -instcombine shiftyabs.ll -S | ./llc -o - -mtriple=powerpc64le 
> abs_shifty:  
> 	srawi 4, 3, 31
> 	add 3, 3, 4
> 	xor 3, 3, 4
> 	blr
> 
> abs_cmpsubsel:
> 	srawi 4, 3, 31
> 	add 3, 3, 4
> 	xor 3, 3, 4
> 	blr
> 
> abs_shifty_vec:   
> 	vspltisw 3, -16
> 	vspltisw 4, 15
> 	vsubuwm 3, 4, 3
> 	vsraw 3, 2, 3
> 	vadduwm 2, 2, 3
> 	xxlxor 34, 34, 35
> 	blr
> 
> abs_cmpsubsel_vec: 
> 	vspltisw 3, -16
> 	vspltisw 4, 15
> 	vsubuwm 3, 4, 3
> 	vsraw 3, 2, 3
> 	vadduwm 2, 2, 3
> 	xxlxor 34, 34, 35
> 	blr
>

Differential Revision: https://reviews.llvm.org/D40984

llvm-svn: 320921
2017-12-16 16:41:17 +00:00
Craig Topper d2a2a39c93 [X86] Remove GCCBuiltin from kand/kandn/kor/kxor/kxnor/knot intrinsics so clang can implement with native IR.
llvm-svn: 320918
2017-12-16 08:25:30 +00:00