Commit Graph

19076 Commits

Author SHA1 Message Date
Simon Pilgrim 0a9479ef39 [X86] EltsFromConsecutiveLoads - cleanup Zero/Undef/Load element collection. NFCI.
llvm-svn: 365628
2019-07-10 13:28:13 +00:00
Simon Pilgrim ef1aac3191 [X86] EltsFromConsecutiveLoads - LDBase is non-null. NFCI.
Don't bother checking for LDBase != null - it should always be non-null (and we assert that it is).

llvm-svn: 365622
2019-07-10 12:22:59 +00:00
Simon Pilgrim c972193583 [X86] EltsFromConsecutiveLoads - store Loads on a per-element basis. NFCI.
Cache the LoadSDNode nodes so we can easily map to/from the element index instead of packing them together - this will be useful for future patches for PR16739 etc.
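
A minimal sketch of the bookkeeping described, with hypothetical local names (NumElems, Elts):

```
// Keep one LoadSDNode per element so element index <-> load mapping is
// direct, instead of packing the loads together.
SmallVector<LoadSDNode *, 8> Loads(NumElems, nullptr);
for (unsigned i = 0; i != NumElems; ++i)
  if (ISD::isNON_EXTLoad(Elts[i].getNode()))
    Loads[i] = cast<LoadSDNode>(Elts[i]);
```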

llvm-svn: 365620
2019-07-10 11:26:57 +00:00
Simon Pilgrim 6a58583951 [X86][SSE] EltsFromConsecutiveLoads - add basic dereferenceable support
This patch checks to see if the vector element loads are based off a dereferenceable pointer that covers the entire vector width, in which case we don't need to have element loads at both extremes of the vector width - just the start (base pointer) of it.

Another step towards partial vector loads...
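
A rough sketch of the idea, with assumed variable names (LDBase, VT) rather than the literal patch:

```
// If the base pointer is known dereferenceable across the full vector
// width, we no longer need actual element loads at both extremes of the
// vector - the load at the base element is enough.
bool FullWidthDeref = LDBase->getPointerInfo().isDereferenceable(
    VT.getStoreSize(), *DAG.getContext(), DAG.getDataLayout());
```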

Differential Revision: https://reviews.llvm.org/D64205

llvm-svn: 365614
2019-07-10 10:46:36 +00:00
Craig Topper 50f70de557 [X86] Limit getTargetConstantFromNode to only work on NormalLoads not extending loads.
This seems to fix a failure reported by Jordan Rupprecht, but we
don't have a reduced test case yet.
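
A minimal sketch of the guard, assuming the surrounding code shape rather than the exact diff:

```
// ISD::isNormalLoad is false for EXTLOAD/SEXTLOAD/ZEXTLOAD (and indexed
// loads), whose memory type is narrower than the value type - reading
// their constant pool entry as the value type would misinterpret it.
if (!ISD::isNormalLoad(Op.getNode()))
  return nullptr;
```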

llvm-svn: 365589
2019-07-10 00:40:01 +00:00
Craig Topper 1ae60797cd [X86] Don't form extloads in combineExtInVec unless the load extension is legal.
This should prevent doing this on pre-sse4.1 targets or for 256-bit
vectors without avx2.
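
A sketch of the kind of legality check implied, with assumed locals (VT, MemVT):

```
// Only form the extending load if the target can actually select it,
// e.g. pre-SSE4.1 has no PMOVZX/PMOVSX.
if (!TLI.isLoadExtLegal(ISD::ZEXTLOAD, VT, MemVT))
  return SDValue();
```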

I don't know of a failure from this. Op legalization would probably
take care of it, but it seemed better to be safe.

llvm-svn: 365577
2019-07-09 23:05:54 +00:00
Craig Topper 84a1f07363 [X86][AMDGPU][DAGCombiner] Move call to allowsMemoryAccess into isLoadBitCastBeneficial/isStoreBitCastBeneficial to allow X86 to bypass it
Basically the problem is that X86 doesn't set the Fast flag from
allowsMemoryAccess on certain CPUs due to slow unaligned memory
subtarget features. This prevents bitcasts from being folded into
loads and stores. But all vector loads and stores of the same width
are the same cost on X86.

This patch merges the allowsMemoryAccess call into isLoadBitCastBeneficial to allow X86 to skip it.
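
A sketch of the reshaped hook; the signature is approximated from the description, not copied from the patch:

```
// The target hook now receives enough context to decide for itself
// whether to consult allowsMemoryAccess; X86 can skip the Fast-flag
// check since same-width vector loads/stores cost the same.
bool X86TargetLowering::isLoadBitCastBeneficial(
    EVT LoadVT, EVT BitcastVT, const SelectionDAG &DAG,
    const MachineMemOperand &MMO) const {
  // ... X86-specific profitability checks, without the alignment gate.
  return true;
}
```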

Differential Revision: https://reviews.llvm.org/D64295

llvm-svn: 365549
2019-07-09 19:55:28 +00:00
Simon Pilgrim 294f37561a [X86] LowerToHorizontalOp - use count_if to count non-UNDEF ops. NFCI.
llvm-svn: 365540
2019-07-09 19:19:17 +00:00
Djordje Todorovic 01eaae6dd1 [DwarfDebug] Dump call site debug info
Dump the DWARF information about call sites and call site parameters into
debug info sections.

The patch also provides an interface for the interpretation of instructions
that could load values of call site parameters, in order to generate DWARF
about the call site parameters.

([13/13] Introduce the debug entry values.)

Co-authored-by: Ananth Sowda <asowda@cisco.com>
Co-authored-by: Nikola Prica <nikola.prica@rt-rk.com>
Co-authored-by: Ivan Baev <ibaev@cisco.com>

Differential Revision: https://reviews.llvm.org/D60716

llvm-svn: 365467
2019-07-09 11:33:56 +00:00
Reid Kleckner 2f07c2e9d9 Standardize on MSVC behavior for triples with no environment
Summary:
This makes it so that IR files using triples without an environment work
out of the box, without normalizing them.

Typically, the MSVC behavior is more desirable. For example, it tends to
enable things like constant merging, use of associative comdats, etc.

Addresses PR42491

Reviewers: compnerd

Subscribers: hiraditya, dexonsmith, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D64109

llvm-svn: 365387
2019-07-08 21:05:20 +00:00
Simon Pilgrim e1a9b49d6b [X86] ISD::INSERT_SUBVECTOR - use uint64_t index. NFCI.
Keep the uint64_t type from getConstantOperandVal to stop truncation/extension overflow warnings in MSVC in subvector index math.

llvm-svn: 365328
2019-07-08 14:52:56 +00:00
Craig Topper 1deca50ab1 [X86] Allow execution domain fixing to turn SHUFPD into SHUFPS.
This can help with code size on SSE targets where SHUFPD requires
a 0x66 prefix and SHUFPS doesn't.

llvm-svn: 365293
2019-07-08 06:52:49 +00:00
Craig Topper d8261f0288 [X86] Make movsd commutable to shufpd with a 0x02 immediate on pre-SSE4.1 targets.
This can help avoid a copy or enable load folding.

On SSE4.1 targets we can commute it to blendi instead.

I had to make shufpd with a 0x02 immediate commutable as well
since we expect commuting to be reversible.

llvm-svn: 365292
2019-07-08 06:52:43 +00:00
Craig Topper 46f2b583a2 [X86] Add MOVSDrr->MOVLPDrm entry to load folding table. Add custom handling to turn UNPCKLPDrr->MOVHPDrm when load is under aligned.
If the load is aligned, we can turn UNPCKLPDrr into UNPCKLPDrm.

llvm-svn: 365287
2019-07-08 02:10:20 +00:00
Craig Topper ac744d5a86 [X86] Make sure load isn't volatile before shrinking it in MOVDDUP isel patterns.
llvm-svn: 365275
2019-07-07 05:33:20 +00:00
Simon Pilgrim a7145c45a7 [X86] SimplifyDemandedVectorEltsForTargetNode - fix shadow variable warning. NFCI.
Fixes cppcheck warning.

llvm-svn: 365271
2019-07-06 18:46:09 +00:00
Simon Pilgrim 01f1bad618 [X86] LowerBuildVectorv16i8 - pull out repeated getOperand() call. NFCI.
llvm-svn: 365270
2019-07-06 18:33:29 +00:00
Craig Topper 317d6093df [X86] Remove patterns from MOVLPSmr and MOVHPSmr instructions.
These patterns are the same as the MOVLPDmr and MOVHPDmr patterns,
but with a bitcast at the end. We can just select the PD instruction
and let execution domain fixing switch to PS.

llvm-svn: 365267
2019-07-06 17:59:51 +00:00
Craig Topper 913105ca42 [X86] Add patterns to select MOVLPDrm from MOVSD+load and MOVHPD from UNPCKL+load.
These patterns narrow the load, so we can only use them if the load
isn't volatile.

There are also tests in vector-shuffle-128-v4.ll that this should
support, but we don't seem to fold bitcast+load on pre-sse4.2
targets due to the slow unaligned mem 16 flag.

llvm-svn: 365266
2019-07-06 17:59:45 +00:00
Craig Topper d22b2d01ca [X86] Correct the size check in foldMemoryOperandCustom.
The Size either needs to be 0, meaning we aren't folding a stack
reload, or the stack slot needs to be at least 16 bytes. I've also
added a paranoia check to ensure the RCSize is at least 16 bytes as
well. This avoids any FR32/FR64 surprises, but I think we already
filtered those earlier.

All of our test cases have Size as either 0 or 16 and RCSize == 16,
so the Size <= 16 check worked for those cases.
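
A sketch of the corrected condition, using the variable names described above:

```
// Fold only when this is not a stack reload (Size == 0) or the slot
// covers a full 16 bytes, and the register class is 16 bytes wide too -
// this keeps the 4/8-byte FR32/FR64 classes out.
if ((Size == 0 || Size >= 16) && RCSize >= 16) {
  // ... perform the fold.
}
```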

llvm-svn: 365234
2019-07-05 18:54:00 +00:00
Craig Topper 6e6d229e5e [X86] Update SSE1 MOVLPSrm and MOVHPSrm isel patterns to ensure loads are non-volatile before folding.
These patterns use 128-bit loads, but the instructions only load
64 bits. We shouldn't narrow the load if it's volatile.

Fixes another variant of PR42079

llvm-svn: 365225
2019-07-05 17:31:29 +00:00
Craig Topper 8a93952a5c [X86] Remove unnecessary isel pattern for MOVLPSmr.
This was identical to a pattern for MOVPQI2QImr with a bitcast
as an input. But we should be able to turn MOVPQI2QImr into
MOVLPSmr in the execution domain fixup pass so we shouldn't
need this.

llvm-svn: 365224
2019-07-05 17:31:25 +00:00
Robert Lougher 9dcfbbae76 This reverts r365061 and r365062 (test update)
Revision r365061 changed a skip of debug instructions into a skip
of all meta instructions. This is not safe, as IMPLICIT_DEF is classed
as a meta instruction.

llvm-svn: 365202
2019-07-05 12:42:06 +00:00
Robert Lougher 2478b62098 Revert r365198 as it accidentally committed something that
should not have been added.

llvm-svn: 365199
2019-07-05 12:30:45 +00:00
Robert Lougher 3bea2b15f5 This reverts r365061 and r365062 (test update)
Revision r365061 changed a skip of debug instructions into a skip
of all meta instructions. This is not safe, as IMPLICIT_DEF is classed
as a meta instruction.

llvm-svn: 365198
2019-07-05 12:20:21 +00:00
Simon Pilgrim 8b25d9bf01 [X86][SSE] LowerINSERT_VECTOR_ELT - early out for out of range indices
Fixes OSS-Fuzz #15662

llvm-svn: 365180
2019-07-05 10:34:53 +00:00
Craig Topper 171732aeb3 [X86] Add custom isel to select ADD/SUB/OR/XOR/AND to their non-immediate forms under optsize when the immediate has additional users.
Summary:
We attempt to prevent folding immediates with multiple users under optsize. But we only do this from store nodes and X86ISD::ADD/SUB/XOR/OR/AND patterns. We don't do it for ISD::ADD/SUB/XOR/OR/AND even though we count them as users when deciding whether to fold into other nodes. This leads to situations where we block folding to a compare for example, but still fold into an AND or OR as seen in PR27202.

Unfortunately touching the isel patterns in tablegen for the ISD::ADD/SUB/XOR/OR/AND opcodes will cause the patterns to be unusable for fast isel. And we don't have a way to make a fast isel only pattern.

To workaround this, this patch adds custom isel in front of the isel table that will select the non-immediate forms if the immediate has additional users. This may create some issues for ANDN and NOT matching. And there's room for improvement with unsigned 32 immediates on 64-bit AND.

This patch needs more thorough test cases, but I wanted to get feedback on the direction. Please send me any other test cases you've seen in the wild.

I think we probably have the same issue with the immediate matching when we fold RMW from X86ISD::ADD/SUB/XOR/OR/AND, and in our TEST immediate shrinking logic. Our cost modeling for immediates that can fit in a sign-extended 8-bit immediate on a 16/32/64-bit operation is completely wrong.

I also wonder if we should update the ConstantHoisting cost model and block folding for "opaque" constants. But of course constants can still be created by DAG combine and lowering optimizations.

Fixes PR27202
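
A rough sketch of the shape of such a custom-isel check; the names and structure are illustrative only, and selectRegisterForm is a hypothetical helper:

```
// In the X86 selector, before the generated table runs: under optsize,
// if the immediate operand has other users, pick the register-register
// form so the materialized constant can be shared.
if (OptForSize && N->getOpcode() == ISD::AND) {  // likewise ADD/SUB/OR/XOR
  if (auto *C = dyn_cast<ConstantSDNode>(N->getOperand(1)))
    if (!C->hasOneUse())
      return selectRegisterForm(N);  // hypothetical helper
}
```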

Reviewers: spatel, RKSimon, andreadb

Reviewed By: RKSimon

Subscribers: jsji, hiraditya, jdoerfert, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D59909

llvm-svn: 365163
2019-07-04 22:53:57 +00:00
Simon Pilgrim fde766de4b [X86][AVX1] Combine concat_vectors(pshufd(x,c),pshufd(y,c)) -> vpermilps(concat_vectors(x,y),c)
This bitcasts v4i32 to v8f32 and back again - it might be worth adding isel patterns for X86PShufd v8i32 on AVX1 targets, like we did for X86Blendi, to avoid the bitcasts.

llvm-svn: 365125
2019-07-04 10:17:10 +00:00
Craig Topper 163b8bb3f5 [X86] Use pointer sized indices instead of i32 for EXTRACT_VECTOR_ELT and INSERT_VECTOR_ELT in a couple places.
Most places already did this.

llvm-svn: 365109
2019-07-04 06:21:54 +00:00
Robert Lougher 720baf0416 [X86] Avoid SFB - Skip meta instructions
This patch generalizes the fix in D61680 to ignore all meta instructions,
not just debug info.
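
The change amounts to widening a predicate, roughly as below (see the reverts above - IMPLICIT_DEF is also a meta instruction, which is why this was backed out in r365198/r365202):

```
// Before (D61680): skip only debug instructions.
if (MI.isDebugInstr())
  continue;
// After (this patch): skip all meta instructions.
if (MI.isMetaInstruction())
  continue;
```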

Patch by Chris Dawson.

Differential Revision: https://reviews.llvm.org/D62605

llvm-svn: 365061
2019-07-03 17:43:55 +00:00
Francis Visoiu Mistrih 83bbe2f418 [CodeGen] Make branch funnels pass the machine verifier
We previously marked all the tests with branch funnels as
`-verify-machineinstrs=0`.

This is an attempt to fix it.

1) `ICALL_BRANCH_FUNNEL` has no defs. Mark it as `let OutOperandList =
(outs)`.

2) After that we hit an assert:

```
Assertion failed: (Op.getValueType() != MVT::Other &&
Op.getValueType() != MVT::Glue &&
"Chain and glue operands should occur at end of operand list!"),
function AddOperand, file
/Users/francisvm/llvm/llvm/lib/CodeGen/SelectionDAG/InstrEmitter.cpp, line 461.
```

The chain operand was added at the beginning of the operand list. Move
that to the end.

3) After that we hit another verifier issue in the pseudo expansion
where the registers used in the cmps and jmps are not added to the
livein lists. Add the `EFLAGS` to all the new MBBs that we create.

PR39436

Differential Revision: https://reviews.llvm.org/D54155

llvm-svn: 365058
2019-07-03 17:16:45 +00:00
Simon Pilgrim 26812c7675 [X86] ComputeNumSignBitsForTargetNode - add target shuffle support.
llvm-svn: 365057
2019-07-03 17:06:59 +00:00
Simon Pilgrim 783dbe402f [X86][AVX] combineX86ShufflesRecursively - peek through extract_subvector
If we have more than 2 shuffle ops to combine, try to use combineX86ShuffleChainWithExtract to see if some are from the same super vector.

llvm-svn: 365050
2019-07-03 15:46:08 +00:00
Simon Pilgrim 868d0b7fd9 [X86][AVX] Combine vpermi(bitcast(x)) -> bitcast(vpermi(x))
iff the number of elements doesn't change.

This gets around an issue with combineX86ShuffleChain not being able to hint which domain is preferred for shuffles that can be done with either.

Fixes regression introduced in rL365041

llvm-svn: 365044
2019-07-03 14:34:16 +00:00
Simon Pilgrim 0c230209fe [X86][AVX] combineX86ShuffleChainWithExtract - add number of non-zero extract_subvectors to the combine depth
This better accounts for the cost/benefit of removing extract_subvectors from the shuffle and will be more useful in future patches.

The vpermq predicate regression will be fixed shortly.

llvm-svn: 365041
2019-07-03 14:17:21 +00:00
Simon Pilgrim 8c099cbe7c [X86][SSE] lowerUINT_TO_FP_v2i32 - explicitly cast half word to double
Fixes MSVC analyzer extension->double warning.

llvm-svn: 365027
2019-07-03 11:23:27 +00:00
Simon Pilgrim 8df90b843d [X86][SSE] LowerINSERT_VECTOR_ELT - ensure insertion index correctness. NFCI.
Assert that the insertion index is in range and use uint64_t for the index to fix MSVC/cppcheck truncation warning.

llvm-svn: 365025
2019-07-03 10:59:52 +00:00
Simon Pilgrim 8853bd9592 [X86][SSE] LowerScalarImmediateShift - ensure shift amount correctness. NFCI.
Assert that the shift amount is in range and create vXi8 shift masks in a way that doesn't cause MSVC/cppcheck "shift result is truncated then extended" warnings.

llvm-svn: 365024
2019-07-03 10:47:33 +00:00
Simon Pilgrim 64e3a51534 Fix uninitialized variable warnings. NFCI.
Neither MSVC nor cppcheck likes the fact that the variables are initialized via references.

llvm-svn: 365018
2019-07-03 10:22:08 +00:00
Simon Pilgrim 7b7b9b78a2 [X86] LowerFunnelShift - use modulo constant shift amount.
This avoids the use of getZExtValue and uses the modulo shift amount, which is what's expected for funnel shifts anyhow.
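
A sketch of the difference, with assumed locals (C, EltSizeInBits):

```
// Funnel shift amounts are defined modulo the bit width, so reduce the
// APInt directly rather than calling getZExtValue() on it (which
// asserts if the value doesn't fit in 64 bits).
uint64_t ShAmt = C->getAPIntValue().urem(EltSizeInBits);
```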

llvm-svn: 365016
2019-07-03 10:04:16 +00:00
Craig Topper b770d2c9d4 [X86] Add a DAG combine for turning *_extend_vector_inreg+load into an appropriate extload if the load isn't volatile.
Remove the corresponding isel patterns that did the same thing without checking for volatile.
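
A minimal sketch of such a combine, under assumed names (In, Ext, VT, MemVT, DL):

```
// (*ext_vector_inreg (load p)) -> (extload p), but only when legal and
// the load isn't volatile - the isel patterns this replaces never
// checked volatility before narrowing the memory access.
auto *Ld = cast<LoadSDNode>(In);
if (!Ld->isVolatile() && TLI.isLoadExtLegal(Ext, VT, MemVT))
  return DAG.getExtLoad(Ext, DL, VT, Ld->getChain(), Ld->getBasePtr(),
                        MemVT, Ld->getMemOperand());
```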

This fixes another variation of PR42079

llvm-svn: 364977
2019-07-02 23:20:03 +00:00
Simon Pilgrim 5613874947 [X86] getTargetConstantBitsFromNode - remove unnecessary getZExtValue() (PR42486)
Don't use APInt::getZExtValue() if you can avoid it - eventually someone will call it with i128 or something that doesn't fit into 64-bits.
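
For example (illustrative, not taken from the patch):

```
// Asserts inside APInt if the constant needs more than 64 bits:
uint64_t Raw = CN->getAPIntValue().getZExtValue();
// Safe: stay in APInt and resize explicitly.
APInt Bits = CN->getAPIntValue().zextOrTrunc(SizeInBits);
```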

In this case it was completely superfluous as we'd moved the rest of the code to always use APInt.

Fixes the <1 x i128> addition bug in PR42486

llvm-svn: 364953
2019-07-02 18:20:38 +00:00
Craig Topper cffbaa93b7 [X86] Add patterns to select (scalar_to_vector (loadf32)) as (V)MOVSSrm instead of COPY_TO_REGCLASS + (V)MOVSSrm_alt.
Similar for (V)MOVSD. Ultimately, I'd like to see about folding
scalar_to_vector+load to vzload, which would select as (V)MOVSSrm,
so this is closer to that.

llvm-svn: 364948
2019-07-02 17:51:02 +00:00
Simon Pilgrim 9304168103 [X86][AVX] combineX86ShuffleChain - pull out CombineShuffleWithExtract lambda. NFCI.
Pull out CombineShuffleWithExtract lambda to new combineX86ShuffleChainWithExtract wrapper and refactored it to handle more than 2 shuffle inputs - this will allow combineX86ShufflesRecursively to call this in a future patch.

llvm-svn: 364924
2019-07-02 13:30:04 +00:00
Simon Pilgrim d609ebb779 [X86] resolveTargetShuffleInputsAndMask - add repeated input handling.
We were relying on combineX86ShufflesRecursively to handle this - this patch gets it done earlier which should make it easier for other code to use resolveTargetShuffleInputsAndMask.

llvm-svn: 364906
2019-07-02 10:53:17 +00:00
Craig Topper 2d306b2d57 [X86] Add PreprocessISelDAG support for turning ISD::FP_TO_SINT/UINT into X86ISD::CVTTP2SI/CVTTP2UI and to reduce the number of isel patterns.
llvm-svn: 364887
2019-07-02 05:53:37 +00:00
Craig Topper 3f722d40c5 [X86] Use v4i32 vzloads instead of v2i64 for vpmovzx/vpmovsx patterns where only 32-bits are loaded.
v2i64 vzload defines a 64-bit memory access. It doesn't look like
we have any coverage for this either way.

Also remove some vzload usages where the instruction loads only
16 bits.

llvm-svn: 364851
2019-07-01 21:25:11 +00:00
Craig Topper 328b24150e [X86] Remove several bad load folding isel patterns for VPMOVZX/VPMOVSX.
These patterns all matched a v2i64 vzload, which only loads 64 bits,
to instructions that load a full 128 bits.

llvm-svn: 364847
2019-07-01 21:23:38 +00:00
Craig Topper 5e7815b695 [X86] Correct v4f32->v2i64 cvt(t)ps2(u)qq memory isel patterns
These instructions only read 64 bits of memory, so we shouldn't
allow a full vector width load to be pattern matched in case it
is marked volatile.

Instead allow vzload or scalar_to_vector+load.

Also add a DAG combine to turn full vector loads into vzload when
used by one of these instructions if the load isn't volatile.
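
A sketch of the described combine, with approximated details (the real code lives in the X86 DAG combiner):

```
// Shrink a full 128-bit vector load feeding one of these conversion
// nodes to a 64-bit vzload, so the DAG only claims the memory that is
// actually read.
if (ISD::isNormalLoad(Ld) && !Ld->isVolatile() && Ld->hasOneUse()) {
  SDVTList Tys = DAG.getVTList(MVT::v2i64, MVT::Other);
  SDValue Ops[] = {Ld->getChain(), Ld->getBasePtr()};
  SDValue VZLoad = DAG.getMemIntrinsicNode(
      X86ISD::VZEXT_LOAD, DL, Tys, Ops, MVT::i64, Ld->getMemOperand());
  DAG.ReplaceAllUsesOfValueWith(SDValue(Ld, 1), VZLoad.getValue(1));
}
```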

This fixes another case for PR42079

llvm-svn: 364838
2019-07-01 19:01:37 +00:00
Robert Lougher e20030f612 [X86] Avoid SFB - Fix inconsistent codegen with/without debug info(2)
The function findPotentialBlockers may consider debug info instructions as
potential blockers and may stop searching for a store-load pair prematurely.

This patch corrects this and tests the cases where the store is separated
from the load by more than InspectionLimit debug instructions.

Patch by Chris Dawson.

Differential Revision: https://reviews.llvm.org/D62408

llvm-svn: 364829
2019-07-01 18:28:21 +00:00