Commit Graph

46 Commits

Author SHA1 Message Date
Simon Pilgrim b71b214287 [X86][SSE] Add tests for sign extended vXi64 multiplication
llvm-svn: 289342
2016-12-10 22:02:36 +00:00
Simon Pilgrim e4050a2961 [SelectionDAG] Add partial BITCAST support to computeKnownBits
Adds support for bitcasting a little-endian 'small element' vector to a 'large element' scalar/vector (e.g. v16i8 to v4i32 or v2i32 to i64), which is required for PR30845. We extract the known bits for each 'small element' part and concatenate the results.

We can add support for big endian and 'large element' scalar/vector to 'small element' vector bitcasting once we have test cases for them.

Differential Revision: https://reviews.llvm.org/D27129

llvm-svn: 289200
2016-12-09 10:13:45 +00:00
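To illustrate the concatenation described above, here is a minimal C sketch of the little-endian case (the helper name is made up; this is not LLVM's KnownBits API): the known-zero mask of each 'small element' simply becomes one slice of the 'large element' mask.

    #include <stdint.h>

    /* Hypothetical sketch: build the known-zero mask of an i32 from the
     * known-zero masks of the four i8 elements it is bitcast from on a
     * little-endian target.  Element 0 supplies bits [7:0], element 1
     * bits [15:8], and so on - the per-element results are concatenated. */
    static uint32_t concat_known_zero(const uint8_t known_zero_bytes[4]) {
        uint32_t known_zero = 0;
        for (int i = 0; i < 4; ++i)
            known_zero |= (uint32_t)known_zero_bytes[i] << (8 * i);
        return known_zero;
    }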
Simon Pilgrim 3e5045e8f1 [X86][AVX2] Add v8i32->v8i64 mul test (PR30845)
llvm-svn: 287332
2016-11-18 11:00:36 +00:00
Simon Pilgrim 8eca5520dc [X86][SSE] Improve lowering of vXi64 multiply with known zero 32-bit halves
vXi64 multiplication is lowered into 3 vpmuludq calls on the upper/lower 32-bit halves.

If any of these halves are zero then we can remove the individual calls. Although there was isBuildVectorAllZeros code to do this, I don't think it ever worked (maybe only for constant-folded cases that no longer seem to be tested).

This requires additional X86ISD support for computeKnownBitsForTargetNode, so far I've just added support for X86ISD::VZEXT (VPMOVZX* - helping the AVX2+ cases).

Partial fix for PR30845

Differential Revision: https://reviews.llvm.org/D26590

llvm-svn: 287223
2016-11-17 12:14:49 +00:00
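The lowering described above is the schoolbook split of a 64x64-bit multiply into 32-bit halves. A scalar C sketch of the idea (the function name is made up; the backend builds the equivalent DAG with vpmuludq, shifts and adds): each 32x32->64 product corresponds to one vpmuludq, and a product can be dropped whenever computeKnownBits proves one of its halves is zero.

    #include <stdint.h>

    /* Scalar model of the vXi64 lowering: three 32x32->64 unsigned
     * multiplies (each maps to a vpmuludq) plus shifts and adds.  If
     * a_hi or b_hi is known to be zero, the corresponding cross product
     * can simply be omitted. */
    static uint64_t mul64_via_pmuludq(uint64_t a, uint64_t b) {
        uint64_t a_lo = (uint32_t)a, a_hi = a >> 32;
        uint64_t b_lo = (uint32_t)b, b_hi = b >> 32;

        uint64_t lo_lo = a_lo * b_lo;   /* vpmuludq #1 */
        uint64_t lo_hi = a_lo * b_hi;   /* vpmuludq #2 */
        uint64_t hi_lo = a_hi * b_lo;   /* vpmuludq #3 */
        /* a_hi * b_hi only contributes above bit 63 and is discarded. */

        return lo_lo + ((lo_hi + hi_lo) << 32);
    }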
Simon Pilgrim 055c09c1c0 [X86][SSE] Add zero lower 32-bits test case for PR30845
llvm-svn: 286764
2016-11-13 15:32:11 +00:00
Simon Pilgrim ce59a536f7 [X86][SSE] Add additional test case for PR30845
llvm-svn: 286762
2016-11-13 14:57:52 +00:00
Simon Pilgrim 6e09afa9d0 [X86][SSE] Add test case for PR30845
llvm-svn: 286734
2016-11-12 23:44:58 +00:00
Simon Pilgrim 5c3c9707c3 [X86][SSE] Add support for (V)PMOVSX* constant folding
We already have (V)PMOVZX* combining support; this is the beginning of handling (V)PMOVSX* similarly - other combines in combineVSZext can be generalized in future patches.

This unearthed an interesting bug in that we were generating illegal build vectors on 32-bit targets - it was proving difficult to create a test for it from PMOVZX, but it fired immediately with PMOVSX. I've created a more general form of the existing getConstVector to handle these cases - ideally this should be handled in non-target-specific code but I couldn't find an equivalent.

Differential Revision: https://reviews.llvm.org/D25874

llvm-svn: 285072
2016-10-25 14:29:25 +00:00
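For reference, 'constant folding' a (V)PMOVSX* node just means evaluating the sign extension at compile time when the source is a vector of constants. A trivial C illustration (the name is hypothetical; the real folded constants are built with getConstVector):

    #include <stdint.h>

    /* Model of folding a PMOVSXBD-style extend of constant operands:
     * sign-extend four i8 constants to i32 at compile time instead of
     * emitting the extend at run time. */
    static void fold_pmovsxbd(const int8_t src[4], int32_t dst[4]) {
        for (int i = 0; i < 4; ++i)
            dst[i] = (int32_t)src[i];   /* implicit sign extension */
    }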
Craig Topper 64378f4378 [AVX-512] Port 128 and 256-bit memory->register sign/zero extend patterns from SSE file. Also add a minimal set for 512-bit.
llvm-svn: 283704
2016-10-09 23:08:39 +00:00
Craig Topper d9ca3d97ef [AVX-512] Simplify X86InstrInfo::copyPhysReg for 128/256-bit vectors with AVX512, but not VLX. We should use the VEX opcodes and trust the register allocator to not use the extended XMM/YMM register space.
Previously we were extending these copies to the whole ZMM register. The register allocator shouldn't use XMM16-31 or YMM16-31 in this configuration as the instructions to spill them aren't available.

llvm-svn: 280648
2016-09-05 06:43:06 +00:00
Craig Topper 52e2e8381b [AVX512] Add ExeDomain to vector extend and truncate instructions.
llvm-svn: 276394
2016-07-22 05:46:44 +00:00
Craig Topper f4151bea72 [AVX512] Add initial support for the Execution Domain fixing pass to change some EVEX instructions.
llvm-svn: 276393
2016-07-22 05:00:52 +00:00
Craig Topper 1af6cc00dc [X86] Add VPADD instructions to X86InstrInfo::isAssociativeAndCommutative.
llvm-svn: 275769
2016-07-18 06:14:54 +00:00
Matthias Braun 152e7c8b12 VirtRegMap: Replace some identity copies with KILL instructions.
An identity COPY like this:
   %AL = COPY %AL, %EAX<imp-def>
has no semantic effect, but encodes liveness information: Further users
of %EAX only depend on this instruction even though it does not define
the full register.

Replace the COPY with a KILL instruction in those cases to maintain this
liveness information. (This reverts a small part of r238588 but this
time adds a comment explaining why a KILL instruction is useful).

llvm-svn: 274952
2016-07-09 00:19:07 +00:00
Simon Pilgrim d263fdc512 [X86][AVX512BW] Add support for v64i8 multiplies
Extend the existing lowering of vXi8 multiplies to support v64i8 on avx512bw targets.

I added the Lower512IntArith helper function to help with this - not sure how often this could be used in the future, but it seemed better than putting all that logic inside LowerMUL.

Differential Revision: http://reviews.llvm.org/D18937

llvm-svn: 265902
2016-04-10 17:02:48 +00:00
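A rough scalar sketch of the splitting idea behind a Lower512IntArith-style helper (illustrative only; the real helper extracts and re-concatenates 256-bit subvectors in the DAG): the 512-bit operation is performed as two independent 256-bit halves.

    #include <stdint.h>

    /* Stand-in for the per-half lowering (the real v32i8 path extends to
     * i16, multiplies and packs, as described in the commits below). */
    static void mul_v32i8(const int8_t *a, const int8_t *b, int8_t *r) {
        for (int i = 0; i < 32; ++i)
            r[i] = (int8_t)(a[i] * b[i]);
    }

    /* Split-in-two model of the helper's job: operate on the low and
     * high 256-bit halves independently and concatenate the results. */
    static void mul_v64i8(const int8_t a[64], const int8_t b[64], int8_t r[64]) {
        mul_v32i8(a,      b,      r);        /* low 256-bit half  */
        mul_v32i8(a + 32, b + 32, r + 32);   /* high 256-bit half */
    }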
Simon Pilgrim 7b36cdaecf [X86][SSE] Added v64i8 vector integer multiply tests
llvm-svn: 264510
2016-03-26 09:50:06 +00:00
Simon Pilgrim 7379a70677 [X86][AVX512BW] AVX512BW can sign-extend v32i8 to v32i16 for simpler v32i8 multiplies.
Only pre-AVX512BW targets need to split v32i8 vectors.

llvm-svn: 264509
2016-03-26 09:44:27 +00:00
Simon Pilgrim 9a5f19f509 [X86][SSE] Refreshed vector integer multiply tests
Add all 256-bit vector tests.
Added AVX512F/AVX512BW test targets.
Renamed tests to something more meaningful.

llvm-svn: 264507
2016-03-26 09:35:48 +00:00
Simon Pilgrim c1225c28e1 [X86][SSE] Regenerate PMUL tests
Tidied up the triple and regenerated tests using update_llc_test_checks.py

llvm-svn: 254029
2015-11-24 22:09:31 +00:00
Elena Demikhovsky 77f0e9f662 X86: optimized i64 vector multiply with constant
When we multiply two 64-bit vectors, we extract the lower and upper parts and use the PMULUDQ instruction.
When one of the operands is a constant, the upper part may be zero, and we know this at compile time.
Example: %a = mul <4 x i64> %b, <4 x i64> < i64 5, i64 5, i64 5, i64 5>.
I check the value of the upper part and prevent redundant "multiply", "shift" and "add" operations.

llvm-svn: 239802
2015-06-16 06:07:24 +00:00
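Concretely, for a splat constant such as 5 the upper 32-bit half of that operand is known zero, so one of the three PMULUDQ products and its shift/add drop out of the general decomposition. A scalar C sketch of the specialised form (hypothetical name):

    #include <stdint.h>

    /* a * 5 with the constant's upper half known zero: only two of the
     * three 32x32->64 products survive. */
    static uint64_t mul_by_5(uint64_t a) {
        uint64_t a_lo = (uint32_t)a, a_hi = a >> 32;
        return a_lo * 5u + ((a_hi * 5u) << 32);
    }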
Simon Pilgrim ecf5875bd5 [SSE] Fix for MUL v16i8 on pre-SSE41 targets (PR23369).
Sign extension of i8 to i16 was placing the unpacked bytes in the lower byte instead of the upper byte.

llvm-svn: 236209
2015-04-30 08:23:16 +00:00
Simon Pilgrim 4f683c264a [X86][SSE] Add v16i8/v32i8 multiplication support
Patch to allow int8 vectors to be multiplied on the SSE unit instead of being scalarized.

The patch sign extends the i8 lanes to i16, uses the SSE2 pmullw multiplication instruction, then packs the lower byte from each result.

Differential Revision: http://reviews.llvm.org/D9115

llvm-svn: 235837
2015-04-27 07:55:46 +00:00
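A hedged SSE2 intrinsics sketch of this lowering (not the exact DAG the backend emits): each byte is sign-extended to i16 by unpacking it into the upper byte of a word and arithmetic-shifting right (the detail fixed for PR23369 in the commit above), the i16 lanes are multiplied with pmullw, and the low byte of each product is packed back together.

    #include <emmintrin.h>

    /* Sketch of the v16i8 multiply lowering using SSE2 intrinsics. */
    static __m128i mul_v16i8(__m128i a, __m128i b) {
        /* Sign-extend the low/high 8 bytes of each operand to 8 x i16:
         * unpack each byte into the upper half of a word, then shift
         * right arithmetically (the PR23369 detail). */
        __m128i a_lo = _mm_srai_epi16(_mm_unpacklo_epi8(a, a), 8);
        __m128i a_hi = _mm_srai_epi16(_mm_unpackhi_epi8(a, a), 8);
        __m128i b_lo = _mm_srai_epi16(_mm_unpacklo_epi8(b, b), 8);
        __m128i b_hi = _mm_srai_epi16(_mm_unpackhi_epi8(b, b), 8);

        /* 16-bit multiplies (pmullw). */
        __m128i p_lo = _mm_mullo_epi16(a_lo, b_lo);
        __m128i p_hi = _mm_mullo_epi16(a_hi, b_hi);

        /* Keep the low byte of each product and pack back to 16 x i8. */
        __m128i mask = _mm_set1_epi16(0x00ff);
        return _mm_packus_epi16(_mm_and_si128(p_lo, mask),
                                _mm_and_si128(p_hi, mask));
    }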
Chandler Carruth 1e57e2deb8 [x86] Add a generic unpack-targeted lowering technique. This can be used
to generically lower blends and is particularly nice because it is
available from SSE2 onward. This removes a lot of the remaining domain
crossing blends in SSE2 code.

I'm hoping to replace some of the "interleaved" lowering hacks with
something closer to this which should be more principled. First, this
needs to learn how to detect and use other interleavings besides that of
the natural type provided. That will be a follow-up patch though.

llvm-svn: 229378
2015-02-16 12:28:18 +00:00
Chandler Carruth 50dc783d75 [x86] Switch this test to use checks generated by my update script. NFC
llvm-svn: 229377
2015-02-16 12:23:22 +00:00
Chandler Carruth 99627bfbff [x86] Enable the new vector shuffle lowering by default.
Update the entire regression test suite for the new shuffles. Remove
most of the old testing which was devoted to the old shuffle lowering
path and is really no longer relevant. Also remove a few other random
tests that only exercised shuffles incidentally, or did so without
any interesting aspects to them.

Benchmarking that I have done shows a few small regressions with this on
LNT, zero measurable regressions on real, large applications, and for
several benchmarks where the loop vectorizer fires in the hot path it
shows 5% to 40% improvements for SSE2 and SSE3 code running on Sandy
Bridge machines. Running on AMD machines shows even more dramatic
improvements.

When using newer ISA vector extensions the gains are much more modest,
but the code is still better on the whole. There are a few regressions
being tracked (PR21137, PR21138, PR21139) but by and large this is
expected to be a win for x86 generated code performance.

It is also more correct than the code it replaces. I have fuzz tested
this extensively with ISA extensions up through AVX2 and found no
crashes or miscompiles (yet...). The old lowering had a few miscompiles
and crashers after a somewhat smaller amount of fuzz testing.

There is one significant area where the new code path lags behind and
that is in AVX-512 support. However, there was *extremely little*
support for that already and so this isn't a significant step backwards
and the new framework will probably make it easier to implement lowering
that uses the full power of AVX-512's table-based shuffle+blend (IMO).

Many thanks to Quentin, Andrea, Robert, and others for benchmarking
assistance. Thanks to Adam and others for help with AVX-512. Thanks to
Hal, Eric, and *many* others for answering my incessant questions about
how the backend actually works. =]

I will leave the old code path in the tree until the 3 PRs above are at
least resolved to folks' satisfaction. Then I will rip it (and 1000s of
lines of code) out. =] I don't expect this flag to stay around for very
long. It may not survive next week.

llvm-svn: 219046
2014-10-04 03:52:55 +00:00
Chandler Carruth 591c16a967 [x86] Finish switching from CHECK to ALL. This was mistakenly included
in r214007 and then reverted when I backed that (very misguided) patch
out. This recovers the test case cleanup which was good.

llvm-svn: 214010
2014-07-26 03:46:54 +00:00
Chandler Carruth f6406ac5d6 [x86] Revert r214007: Fix PR20355 ...
The clever way to implement signed multiplication with unsigned *is
already implemented* and tested and working correctly. The bug is
somewhere else. Re-investigating.

This will teach me to not scroll far enough to read the code that did
what I thought needed to be done.

llvm-svn: 214009
2014-07-26 02:14:54 +00:00
Chandler Carruth 1bf4d19172 [x86] Fix PR20355 (and dups) by not using unsigned multiplication when
signed multiplication is requested. While the *low* half of the result
is the same either way, the *high* half (used specifically to
implement the signed division by these constants) is not. The
test case I've nuked was actively asserting wrong code.

There is a delightful solution to doing signed multiplication even when
we don't have it that Richard Smith has crafted, but I'll add the
machinery back and implement that in a follow-up patch. This at least
restores correctness.

llvm-svn: 214007
2014-07-26 01:52:13 +00:00
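The distinction behind this fix, its revert, and the follow-up: the low half of a widening multiply is identical for signed and unsigned operands, but the high half (what the divide-by-constant expansion consumes) is not. A small C check with arbitrary illustrative values:

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        int16_t  sa = -7, sb = 3;
        uint16_t ua = (uint16_t)sa, ub = (uint16_t)sb;

        int32_t  sprod = (int32_t)sa * sb;      /* signed widening multiply   */
        uint32_t uprod = (uint32_t)ua * ub;     /* unsigned widening multiply */

        /* Low 16 bits match, high 16 bits do not. */
        printf("low : %04x vs %04x\n",
               (unsigned)(sprod & 0xffff), uprod & 0xffffu);
        printf("high: %04x vs %04x\n",
               (unsigned)((sprod >> 16) & 0xffff), uprod >> 16);
        return 0;
    }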
Chandler Carruth 80adc64066 [x86] Add coverage for PMUL* instruction testing on SSE2 as well as
SSE4.1.

llvm-svn: 214001
2014-07-26 01:11:10 +00:00
Chandler Carruth 8709cb4a6b [x86] More cleanup for this test -- simplify the command line.
llvm-svn: 213991
2014-07-26 00:21:52 +00:00
Chandler Carruth 6da2d97a32 [x86] FileCheck-ize this test.
llvm-svn: 213988
2014-07-25 23:59:20 +00:00
Andrew Trick e97d8d6dde Enable MI Sched for x86.
This changes the SelectionDAG scheduling preference to source
order. Soon, the SelectionDAG scheduler can be bypassed, saving
a nice chunk of compile time.

Performance differences that result from this change are often a
consequence of register coalescing. The register coalescer is far from
perfect. Bugs can be filed for deficiencies.

On x86 SandyBridge/Haswell, the source order schedule is often
preserved, particularly for small blocks.

Register pressure is generally improved over the SD scheduler's ILP
mode. However, we are still able to handle large blocks that require
latency hiding, unlike the SD scheduler's BURR mode. MI scheduler also
attempts to discover the critical path in single-block loops and
adjust heuristics accordingly.

The MI scheduler relies on the new machine model. This is currently
unimplemented for AVX, so we may not be generating the best code yet.

Unit tests are updated so they don't depend on SD scheduling heuristics.

llvm-svn: 192750
2013-10-15 23:33:07 +00:00
Rafael Espindola ae6000e86d Replace more uses of sse41 with sse4.1.
llc using the host cpu features and *warning* on unknown features is probably
not a good thing :-(

llvm-svn: 189144
2013-08-23 20:39:19 +00:00
Andrew Trick 121124acf8 Revert "Temporarily enable MI-Sched on X86."
This reverts commit 98a9b72e8c56dc13a2617de84503a3d78352789c.

llvm-svn: 184823
2013-06-25 02:48:58 +00:00
Andrew Trick 5a1e0af838 Temporarily enable MI-Sched on X86.
Sorry for the unit test churn. I'll try to make the change permanent
next time.

llvm-svn: 184705
2013-06-24 09:13:20 +00:00
Jakob Stoklund Olesen 589c6eb95c Remove -join-physregs from the test suite.
This option has been disabled for a while, and it is going away so I can
clean up the coalescer code.

The tests that required physreg joining to be enabled were almost all of
the form "tiny function with interference between arguments and return
value". Such functions are usually inlined in the real world.

The problem exposed by phys_subreg_coalesce-3.ll is real, but fairly
rare.

llvm-svn: 157027
2012-05-17 23:44:19 +00:00
Craig Topper e57b49ee16 Add mcpu to tests to prevent them from using AVX instructions on Sandy Bridge after r155618.
llvm-svn: 155696
2012-04-27 07:11:58 +00:00
Jakob Stoklund Olesen 17d4f9bbcc Prepare remaining tests for -join-physreg going away.
llvm-svn: 130893
2011-05-04 23:54:59 +00:00
Eric Christopher 6ad8167714 Remove the pmulld intrinsic and autoupdate it as a vector multiply.
Rewrite the pmulld patterns, and make sure that they fold loads of
arguments into the instruction.

llvm-svn: 99910
2010-03-30 18:49:01 +00:00
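After this change, in source terms the dedicated intrinsic and a plain lane-wise multiply should generate the same pmulld when SSE4.1 is enabled, e.g. (GCC/Clang vector-extension syntax assumed for the generic form):

    #include <smmintrin.h>

    typedef int v4si __attribute__((vector_size(16)));

    /* Both forms multiply four i32 lanes; built with -msse4.1 each
     * should select pmulld, and a load of an operand can be folded. */
    __m128i mul_intrinsic(__m128i a, __m128i b) { return _mm_mullo_epi32(a, b); }
    v4si    mul_generic(v4si a, v4si b)         { return a * b; }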
Dan Gohman 40503396da Eliminate more uses of llvm-as and llvm-dis.
llvm-svn: 81290
2009-09-08 23:54:48 +00:00
Dan Gohman 6571ef3577 Don't use special heuristics for nodes with no data predecessors
unless they actually have data successors, and likewise for nodes
with no data successors unless they actually have data predecessors.

llvm-svn: 64327
2009-02-11 21:29:39 +00:00
Evan Cheng abda665f5f Teach 2addr pass to do more commuting. If both uses of a two-address instruction are killed, but the first operand has a use before and after the def, commute if the second operand does not suffer from the same issue.
%reg1028<def> = EXTRACT_SUBREG %reg1027<kill>, 1
%reg1029<def> = MOV8rr %reg1028
%reg1029<def> = SHR8ri %reg1029, 7, %EFLAGS<imp-def,dead>
insert => %reg1030<def> = MOV8rr %reg1028
%reg1030<def> = ADD8rr %reg1028<kill>, %reg1029<kill>, %EFLAGS<imp-def,dead>

In this case, it might not be possible to coalesce the second MOV8rr
instruction if the first one is coalesced. So it would be profitable to
commute it:
%reg1028<def> = EXTRACT_SUBREG %reg1027<kill>, 1
%reg1029<def> = MOV8rr %reg1028
%reg1029<def> = SHR8ri %reg1029, 7, %EFLAGS<imp-def,dead>
insert => %reg1030<def> = MOV8rr %reg1029
%reg1030<def> = ADD8rr %reg1029<kill>, %reg1028<kill>, %EFLAGS<imp-def,dead>

llvm-svn: 62954
2009-01-25 03:53:59 +00:00
Mon P Wang 308a1acaaf Fix test to account for generating some vector code for mul v2i64 instead
of incorrectly generating pmuldq

llvm-svn: 61228
2008-12-18 23:42:37 +00:00
Dan Gohman 923a375053 Make this test independent of the target-triple; the stack alignment
is specifically what this test depends on.

llvm-svn: 51599
2008-05-27 17:44:23 +00:00
Nick Lewycky 213e114a2c The Linux ABI emits an extra "movl %esp, %ebp" in the function prologue and
sometimes a "mov %ebp, %esp" in the epilogue.

Force these tests that rely on counting 'mov' to use i686-apple-darwin8.8.0
where they were written.

llvm-svn: 51568
2008-05-26 20:18:56 +00:00
Dan Gohman 3388d022ac Use PMULDQ for v2i64 multiplies when SSE4.1 is available. And add
load-folding table entries for PMULDQ and PMULLD.

llvm-svn: 51489
2008-05-23 17:49:40 +00:00