Commit Graph

12385 Commits

Craig Topper 9f42726cc7 [X86] Support v2i32 gather/scatter indices with -x86-experimental-vector-widening-legalization
Summary: This is split out from D41062 to cover the code in LegalVectorTypes.cpp
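
As a rough illustration (a hypothetical test, not taken from D51337), a gather whose v2i32 index vector has to be widened might look like:

    ; The <2 x i32> index vector is handled by the widening legalization.
    define <2 x float> @gather_v2i32_idx(float* %base, <2 x i32> %idx, <2 x i1> %mask, <2 x float> %passthru) {
      %ptrs = getelementptr float, float* %base, <2 x i32> %idx
      %res = call <2 x float> @llvm.masked.gather.v2f32.v2p0f32(<2 x float*> %ptrs, i32 4, <2 x i1> %mask, <2 x float> %passthru)
      ret <2 x float> %res
    }
    declare <2 x float> @llvm.masked.gather.v2f32.v2p0f32(<2 x float*>, i32, <2 x i1>, <2 x float>)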

Reviewers: RKSimon, spatel, efriedma

Reviewed By: efriedma

Subscribers: sdardis, jvesely, nhaehnle, jrtc27, atanasyan, llvm-commits

Differential Revision: https://reviews.llvm.org/D51337

llvm-svn: 340891
2018-08-29 02:12:49 +00:00
Craig Topper 9401fd0ed2 [X86] Add intrinsics for KADD instructions
These are intrinsics for supporting kadd builtins in clang. These builtins are already in gcc to implement intrinsics from icc, though they are missing from the Intel Intrinsics Guide.

This instruction adds two mask registers together as if they were scalars rather than vXi1 vectors. We might be able to get away with a bitcast to scalar and a normal add instruction, but that would require DAG combine smarts in the backend to recognize add+bitcast. For now I'd prefer to go with the easiest implementation so we can get these builtins into clang with good codegen.
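
A sketch of the bitcast-to-scalar alternative mentioned above (illustrative IR, not what this patch emits):

    ; Add two v16i1 masks as scalars by round-tripping through i16.
    define <16 x i1> @kadd_via_scalar(<16 x i1> %a, <16 x i1> %b) {
      %sa = bitcast <16 x i1> %a to i16
      %sb = bitcast <16 x i1> %b to i16
      %sum = add i16 %sa, %sb
      %res = bitcast i16 %sum to <16 x i1>
      ret <16 x i1> %res
    }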

Differential Revision: https://reviews.llvm.org/D51370

llvm-svn: 340869
2018-08-28 19:22:55 +00:00
Craig Topper f1c111431b [X86] Fix copy paste mistake in vector-idiv-v2i32.ll. Add missing test case.
Some of the test cases contained the same load twice instead of a different load.

llvm-svn: 340833
2018-08-28 15:24:12 +00:00
Simon Pilgrim af98587095 [X86][SSE] Improve variable scalar shift of vXi8 vectors (PR34694)
This patch creates the shift mask and actual shift using the vXi16 vector shift ops.
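
A minimal sketch (hypothetical function) of the kind of variable uniform vXi8 shift this improves:

    ; Shift every byte left by the same non-constant amount.
    define <16 x i8> @shl_v16i8_uniform(<16 x i8> %x, i8 %amt) {
      %ins = insertelement <16 x i8> undef, i8 %amt, i32 0
      %splat = shufflevector <16 x i8> %ins, <16 x i8> undef, <16 x i32> zeroinitializer
      %sh = shl <16 x i8> %x, %splat
      ret <16 x i8> %sh
    }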

Differential Revision: https://reviews.llvm.org/D51263

llvm-svn: 340813
2018-08-28 10:37:29 +00:00
Simon Pilgrim f119e27d80 [X86][SSE] Avoid vector extraction/insertion for non-constant uniform shifts
As discussed on D51263, we're better off using byte shifts to clear the upper bits on pre-SSE41 hardware.

llvm-svn: 340810
2018-08-28 10:14:09 +00:00
Sanjay Patel fe0b5d215b [x86] add AVX runs to show more potential scalar->vector mov opportunities; NFC
llvm-svn: 340785
2018-08-27 22:29:06 +00:00
Craig Topper 171c6fe6cb [X86] Reverse the check prefixes in the test added in r340774.
The 32-bit and 64-bit checks were reversed.

llvm-svn: 340775
2018-08-27 21:34:37 +00:00
Craig Topper 76b18beef1 [X86] Add test cases to show current codegen of v2i32 div/rem in 32-bit and 64-bit modes
In particular this shows that we end up using libcalls in 32-bit mode even for division by constant.
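
A hypothetical example of the shape these tests cover:

    ; v2i32 division by a constant; in 32-bit mode this currently becomes a libcall.
    define <2 x i32> @udiv_v2i32(<2 x i32> %x) {
      %d = udiv <2 x i32> %x, <i32 7, i32 7>
      ret <2 x i32> %d
    }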

llvm-svn: 340774
2018-08-27 21:13:07 +00:00
Sanjay Patel 7b6df50669 [x86] add tests for possibly avoiding scalar->vector move; NFC
llvm-svn: 340773
2018-08-27 20:21:33 +00:00
Craig Topper 4be11c0585 [X86] When lowering v32i8 MULHS/MULHU, shuffle after the PACKUS rather than before.
We're using a 256-bit PACKUS to do the truncation, but that instruction operates on 128-bit lanes. So previously we shuffled first to rearrange the lanes. But that requires 2 shuffles. Instead we can shuffle after the PACKUS using a single VPERMQ. This matches what our normal LowerTRUNCATE code does when it uses PACKUS.

Differential Revision: https://reviews.llvm.org/D51284

llvm-svn: 340757
2018-08-27 17:20:41 +00:00
Craig Topper fff90377fd [X86] Add support for matching paddus patterns where one of the vectors is a constant.
InstCombine mucks these up a bit, so we need to do some additional pattern matching to fix it. There are still a few special cases not handled, but this covers the general case.
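
A hypothetical instance of the pattern after InstCombine, with C = 42 (so the compare constant is ~C, i.e. -43, or 213 unsigned):

    ; Saturated add: if x + 42 would overflow, produce 255, else x + 42.
    define <16 x i8> @paddus_const(<16 x i8> %x) {
      %a = add <16 x i8> %x, <i8 42, i8 42, i8 42, i8 42, i8 42, i8 42, i8 42, i8 42, i8 42, i8 42, i8 42, i8 42, i8 42, i8 42, i8 42, i8 42>
      %c = icmp ugt <16 x i8> %x, <i8 -43, i8 -43, i8 -43, i8 -43, i8 -43, i8 -43, i8 -43, i8 -43, i8 -43, i8 -43, i8 -43, i8 -43, i8 -43, i8 -43, i8 -43, i8 -43>
      %r = select <16 x i1> %c, <16 x i8> <i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1>, <16 x i8> %a
      ret <16 x i8> %r
    }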

Differential Revision: https://reviews.llvm.org/D50952

llvm-svn: 340756
2018-08-27 17:20:38 +00:00
Aleksandr Urakov ff88f1763b [X86] Adding the test pointing to the fail case of D45653
Summary:
This commit adds the case of tail calling a sret function from a non-sret
function when both functions have the C calling convention.
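
A minimal sketch of that shape (hypothetical names, untyped 2018-era sret):

    %S = type { i32, i32, i32 }
    declare void @sret_callee(%S* sret)
    ; The caller itself has no sret argument.
    define void @non_sret_caller(%S* %p) {
      tail call void @sret_callee(%S* sret %p)
      ret void
    }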

llvm-svn: 340737
2018-08-27 11:56:32 +00:00
Aleksandr Urakov 6f7fef7865 [NFC][X86] Fix `sibcall.ll` formatting
Summary:
Remove unnecessary lines from `sibcall.ll` and rename labels according
to @RKSimon's recommendations in the D45653 conversation.

llvm-svn: 340735
2018-08-27 11:25:38 +00:00
Craig Topper 128915f4ae [X86] Add FeatureCMOV explicitly to all CPUs that support it. Remove FeatureCMOV implication from Feature64Bit and FeatureSSE1
Summary:
Previously most CPUs inherited cmov support through Feature64Bit (or FeatureCMPXCHG16B implying Feature64Bit) or FeatureSSE1.

This has the surprising side effect that -mattr=-cmov causes an assert to fire in 64-bit mode because it clears Feature64Bit. And in 32-bit mode, -mattr=-cmov disables any sse/avx features, which seems surprising.

This patch removes the implication and instead updates hasCMOV in X86Subtarget to check SSE1 or is64Bit in addition to the regular cmov flag. This should keep most things working the way they did before. I don't believe there is a way to specify "-cmov" directly from clang, so this should only affect our lower-level tools.

This does stop -mattr=cx16 (cmpxchg16b) from implying cmov is enabled via the 64-bit flag, as you can see from one of the changed tests. But that was a 32-bit test, so I don't know why it enabled cx16 anyway.

For the other test I had to add -sse to override the new sse check in hasCMOV.

Reviewers: RKSimon, DavidKreitzer, spatel

Reviewed By: RKSimon

Subscribers: llvm-commits, jfb

Differential Revision: https://reviews.llvm.org/D51228

llvm-svn: 340707
2018-08-26 18:29:33 +00:00
Craig Topper b68a78b9ac [X86] Add FeatureCMOV to athlon and athlon-tbird cpus.
Summary: This matches gcc and one cpuid dump I found online. Given that these are considered 7th-generation x86 CPUs, it seems likely they support cmov, since cmov was added by Intel in their 6th generation.

Reviewers: RKSimon, spatel

Reviewed By: RKSimon

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D51264

llvm-svn: 340706
2018-08-26 18:29:27 +00:00
Sanjay Patel 113cac3b15 [SelectionDAG][x86] turn insertelement into undef with variable index into splat
I noticed this along with the patterns in D51125, but when the index is variable, 
we don't convert insertelement into a build_vector.
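
A minimal sketch of the pattern (hypothetical function); since every other lane is undef, lowering to a splat of %s is safe:

    define <4 x float> @insert_undef_var_idx(float %s, i32 %i) {
      %v = insertelement <4 x float> undef, float %s, i32 %i
      ret <4 x float> %v
    }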

For x86, that means these get expanded at legalization time into the loading/spilling 
code that we see in the tests. I think it's always better to avoid going to memory on 
these, and we get the optimal 'broadcast' if it's available.

I suspect other targets may want to look at enabling the hook. AArch64 and AMDGPU have 
regression tests that would be affected (although I did not check what would happen in 
those cases). In the most basic cases shown here, AArch64 would probably do much 
better with a splat.

Differential Revision: https://reviews.llvm.org/D51186

llvm-svn: 340705
2018-08-26 18:20:41 +00:00
Craig Topper 7ef643ef17 [X86] Add test cases for D50952, paddus patterns involving constants. NFC
llvm-svn: 340694
2018-08-26 00:22:07 +00:00
Craig Topper ebec2793d1 [X86] Replace support for vXi32 SMUL_LOHI/UMUL_LOHI with MULHS/MULHU support instead.
Summary:
The only time vector SMUL_LOHI/UMUL_LOHI nodes are created is during division/remainder lowering. If they're created before op legalization, generic DAGCombine immediately turns the SMUL_LOHI/UMUL_LOHI into MULHS/MULHU since only the upper half is used. That node will stick around through vector op legalization and will be turned back into UMUL_LOHI/SMUL_LOHI during op legalization. It will then be custom lowered by the X86 backend. Due to this two-step lowering, the vector shuffles created by the custom lowering get legalized after their inputs rather than before. This prevents the shuffles from being combined with any build_vector of constants.

This patch changes vXi32 to use MULHS/MULHU instead. This is what the later DAG combine did anyway. But by skipping the change back to UMUL_LOHI/SMUL_LOHI we lower it before any constant BUILD_VECTORS. This allows the vector_shuffle creation to constant fold with the build_vectors. This accounts for the test changes here.

Reviewers: RKSimon, spatel

Reviewed By: RKSimon

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D51254

llvm-svn: 340690
2018-08-25 18:01:24 +00:00
Sanjay Patel 8a84c747d2 [x86] try harder to use broadcast to load a scalar into vector reg
This is a preliminary step for a preliminary step for D50992. 
I noticed that x86 often misses chances to load a scalar directly 
into a vector register.

So this patch is just allowing more of those cases to match a 
broadcast op in lowerBuildVectorAsBroadcast(). The old code comment 
said it doesn't make sense to use a broadcast when we're loading a 
single element and everything else is undef, but I think that's the 
best case in the improved tests in insert-loaded-scalar.ll. We avoid 
a scalar-to-vector-register move and/or less efficient shuffling.
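
A minimal sketch of the single-element-load case (hypothetical function); with AVX this can now match a broadcast:

    define <4 x float> @load_insert(float* %p) {
      %s = load float, float* %p
      %v = insertelement <4 x float> undef, float %s, i32 0
      ret <4 x float> %v
    }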

Note that there are some existing types that were already producing 
a broadcast, but that happens semi-accidentally. Ie, it's not 
happening as part of lowerBuildVectorAsBroadcast(). The build vector 
gets expanded into load + shuffle, and then shuffle lowering produces 
the broadcast.

Description of the other test diffs:
1. avx-basic.ll - replacing load+shuffle is a win.
2. sse3-avx-addsub-2.ll - vmovddup vs. vbroadcastss is neutral
3. sse41.ll - don't care - we convert that intrinsic to generic IR now, so this test is deprecated
4. vector-shuffle-128-v8.ll / vector-shuffle-256-v16.ll - pshufb alternatives with an extra instruction are not obviously bad

Differential Revision: https://reviews.llvm.org/D51125

llvm-svn: 340685
2018-08-25 14:56:05 +00:00
Simon Pilgrim eb6a3cbb28 [X86] Make requested test changes from D50636
The tests were relying on X / X -> 1 and X % X -> 0 combines not happening in the DAG.

llvm-svn: 340682
2018-08-25 14:16:03 +00:00
Bjorn Pettersson 8483004723 [LiveDebugVariables] Avoid faulty addDefsFromCopies in computeIntervals
Summary:
When computeIntervals is looking through COPY instruction to
extend the location mapping for a debug variable it did not
handle subregisters correctly.

For example
    DBG_VALUE debug-use %0.sub_8bit_hi, ...
    %1:gr16 = COPY %0
was transformed into
    DBG_VALUE debug-use %0.sub_8bit_hi, ...
    %1:gr16 = COPY %0
    DBG_VALUE debug-use %1, ...
So the subregister index was missing in the added DBG_VALUE.

As long as the subreg referred to the least significant bits
of the superreg, I guess we could get the correct
result in a debugger even when referring to the superreg.
But as in the example above, when the subreg refers to other
parts of the superreg, the debuginfo would be incorrect.

I'm not sure exactly how to fix this properly, so this patch
just avoids looking through the COPY when there is a subreg
involved (for more info, see the FIXME added in the code).

Reviewers: rnk, aprantl

Reviewed By: aprantl

Subscribers: JDevlieghere, llvm-commits

Tags: #debug-info

Differential Revision: https://reviews.llvm.org/D50788

llvm-svn: 340679
2018-08-25 10:02:03 +00:00
Peter Collingbourne 3f792230cb CodeGen: Add two more conditions for adding symbols to the address-significance table.
Firstly, require the symbol to be used within the module. If a
symbol is unused within a module, then by definition it cannot be
address-significant within that module. This condition is useful on all
platforms because it could make symbol tables smaller -- without this
change, emitting an address-significance table could cause otherwise
unused undefined symbols to be added to the object file.

But this change is necessary with COFF specifically in order to
preserve the property that an unreferenced undefined symbol in an IR
module does not result in a link failure. This is already the case for
ELF because ELF linkers only reject links with unresolved symbols if
there is a relocation to that symbol, but COFF linkers require all
undefined symbols to be resolved regardless of relocations. So if
a module contains an unreferenced undefined symbol, we need to make
sure not to add it to the address-significance table (and thus the
symbol table) in case it doesn't end up resolved at link time.

Secondly, do not add dllimport symbols to the table. These symbols
won't be able to be resolved because their definitions live in another
module and are accessed via the IAT, and the address-significance
table has no effect on other modules anyway. It wouldn't make sense
to add the IAT entry symbol to the address-significance table either
because the IAT entry isn't address-significant -- the generated code
never takes its address.

Differential Revision: https://reviews.llvm.org/D51199

llvm-svn: 340648
2018-08-24 20:37:09 +00:00
Stefan Pintilie 892fc6b7f2 [Exception Handling] Unwind tables are required for all functions that have an EH personality.
This patch is for defect:
https://bugs.llvm.org/show_bug.cgi?id=32611

Functions may require unwind tables even if they are marked with the attribute
nounwind. Any function with an EH personality may require an unwind table.
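
An illustrative example (hypothetical function) of a nounwind function that still carries an EH personality and so may still need an unwind table:

    declare i32 @__gxx_personality_v0(...)
    define void @f() nounwind personality i8* bitcast (i32 (...)* @__gxx_personality_v0 to i8*) {
      ret void
    }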

Differential Revision: https://reviews.llvm.org/D50987

llvm-svn: 340641
2018-08-24 19:38:29 +00:00
Craig Topper 4058e29e7d [X86] Teach combineLoopMAddPattern to handle cases where there is no loop and the add has two multiply inputs
Differential Revision: https://reviews.llvm.org/D50868

llvm-svn: 340631
2018-08-24 18:05:04 +00:00
Craig Topper 3c78622d64 [X86] Add test case for D50868. NFC
llvm-svn: 340630
2018-08-24 18:05:02 +00:00
Stefan Pintilie 7cb44f2470 Revert "[Exception Handling] Unwind tables are required for all functions that have an EH personality."
This reverts commit rL340614.
Previous commit broke some llvm-cfi-verify tests.

llvm-svn: 340625
2018-08-24 17:27:35 +00:00
Stefan Pintilie 36f31617d3 [Exception Handling] Unwind tables are required for all functions that have an EH personality.
This patch is for defect:
https://bugs.llvm.org/show_bug.cgi?id=32611

Functions may require unwind tables even if they are marked with the attribute
nounwind. Any function with an EH personality may require an unwind table.

Differential Revision: https://reviews.llvm.org/D50987

llvm-svn: 340614
2018-08-24 15:51:47 +00:00
Sanjay Patel 851e02e52e [x86] move/add tests for insertelement with variable index; NFC
The variable index pattern is different than the constant index
cases as shown in D51125. We might want to splat regardless of
whether the scalar is loaded from memory or transferred from GPR.

llvm-svn: 340565
2018-08-23 18:38:40 +00:00
Chandler Carruth ae0cafece8 [x86/retpoline] Split the LLVM concept of retpolines into separate
subtarget features for indirect calls and indirect branches.

This is in preparation for enabling *only* the call retpolines when
using speculative load hardening.

I've continued to use subtarget features for now as they continue to
seem the best fit given the lack of other retpoline like constructs so
far.

The LLVM side is pretty simple. I'd like to eventually get rid of the
old feature, but I'm not sure what backwards compatibility issues that
will cause.

This does remove the "implies" from requesting an external thunk. This
always seemed somewhat questionable and is now clearly not desirable --
you specify a thunk the same way no matter which set of things are
getting retpolines.

I really want to keep this nicely isolated from end users and just an
LLVM implementation detail, so I've moved the `-mretpoline` flag in
Clang to no longer rely on a specific subtarget feature by that name and
instead to be directly handled. In some ways this is simpler, but in
order to preserve existing behavior I've had to add some fallback code
so that users who relied on merely passing -mretpoline-external-thunk
continue to get the same behavior. We should eventually remove this
I suspect (we have never tested that it works!) but I've not done that
in this patch.

Differential Revision: https://reviews.llvm.org/D51150

llvm-svn: 340515
2018-08-23 06:06:38 +00:00
Craig Topper cf9df99d79 [X86] Teach combineLoopSADPattern to handle cases where there is no loop and the add has two absolute difference inputs
Previously we assumed a vector reduction add is part of a loop and that one of the inputs is a phi. But the code in SelectionDAGBuilder that sets the vector reduction flag handles more cases than that. It just requires that the use chain ends in a horizontal reduction and that there are no other uses. This means it can handle unrolled reduction loops.

If the initial value of the reduction was 0, an unrolled loop would begin with a vector reduction add that has two sad inputs. Previously we would only transform one side of the add, but for this case we need to transform both sides.

I've created a lambda to reuse some of the code for both sides, and fixed the variable names to remove the reference to "phi".

Differential Revision: https://reviews.llvm.org/D50817

llvm-svn: 340478
2018-08-22 23:19:01 +00:00
Craig Topper 903ef6a03f [X86] Add test cases for D50817. NFC
llvm-svn: 340477
2018-08-22 23:18:58 +00:00
Sanjay Patel ed1b9695ee [SelectionDAG] unroll unsupported vector FP ops earlier to avoid libcalls on undef elements (PR38527)
This solves the motivating case from:
https://bugs.llvm.org/show_bug.cgi?id=38527

If we are legalizing an FP vector op that maps to 1 of the LLVM intrinsics that mimic libm calls, 
but we're going to end up with scalar libcalls for that vector type anyway, then we should unroll 
the vector op into scalars before widening. This avoids libcalls because we've lost the knowledge 
that some of the scalar elements are undef.
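
A minimal sketch of the kind of case this helps (illustrative, not the exact PR38527 reproducer); widening v3f32 to v4f32 introduces an undef lane that would otherwise also reach a sinf libcall:

    define <3 x float> @sin_v3f32(<3 x float> %x) {
      %s = call <3 x float> @llvm.sin.v3f32(<3 x float> %x)
      ret <3 x float> %s
    }
    declare <3 x float> @llvm.sin.v3f32(<3 x float>)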

Differential Revision: https://reviews.llvm.org/D50791

llvm-svn: 340469
2018-08-22 22:52:05 +00:00
Craig Topper 538f8ab438 [X86] Replace (32/64 - n) shift amounts with (neg n) since the shift amount is masked in hardware
Inspired by what AArch64 does for shifts, this patch attempts to replace shift amounts with neg if we can.
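
A minimal sketch of the shape being matched (hypothetical function); since the hardware masks the amount mod 32, (32 - n) and (0 - n) give the same shift:

    define i32 @shl_32_minus_n(i32 %x, i32 %n) {
      %amt = sub i32 32, %n
      %r = shl i32 %x, %amt
      ret i32 %r
    }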

This is done directly as part of isel so it's as late as possible, to avoid breaking some BZHI patterns, since those patterns need an unmasked (32-n) to be correct.

To avoid manual load folding and custom instruction selection for the negate, I've inserted the new nodes into the DAG above the shift node in topological order.

Differential Revision: https://reviews.llvm.org/D48789

llvm-svn: 340441
2018-08-22 19:39:09 +00:00
Sanjay Patel b5686c4e4e [x86] add tests for load scalar + insertelement; NFC
llvm-svn: 340425
2018-08-22 17:46:28 +00:00
Simon Pilgrim ffdfe45645 [X86][SSE] LowerMULH vXi8 - use SSE shifts directly.
We know these vXi16 extended cases are legal constant splat shifts.

llvm-svn: 340414
2018-08-22 15:37:11 +00:00
Simon Pilgrim b89a4f85bf [X86][SSE] Add sdiv test case from PR38658
llvm-svn: 340393
2018-08-22 09:47:12 +00:00
Bjorn Pettersson e06321382b [RegisterCoalescer] Use substPhysReg in reMaterializeTrivialDef
Summary:
When RegisterCoalescer::reMaterializeTrivialDef is substituting
a register use in a DBG_VALUE instruction, and the old register
is a subreg, and the new register is a physical register,
then we need to use substPhysReg in order to extract the correct
subreg.

Reviewers: wmi, aprantl

Reviewed By: wmi

Subscribers: hiraditya, MatzeB, qcolombet, tpr, llvm-commits

Differential Revision: https://reviews.llvm.org/D50844

llvm-svn: 340326
2018-08-21 19:47:32 +00:00
Simon Pilgrim 9848e0c9ac [X86][SSE] Add non-uniform udiv test that is mostly divide by 1.
The test demonstrates over-complicated codegen for a udiv that has only one divisor that doesn't equal 1. This should have allowed the codegen to be a lot simpler (uniform shifts etc.), but only the SSE2 version manages to make use of this.

llvm-svn: 340313
2018-08-21 18:02:28 +00:00
Craig Topper b172b8884a [BypassSlowDivision] Teach bypass slow division not to interfere with div by constant where constants have been constant hoisted, but not moved from their basic block
DAGCombiner doesn't pay attention to whether constants are opaque before doing the div by constant optimization. So BypassSlowDivision shouldn't introduce control flow that would make DAGCombiner unable to see an opaque constant. This can occur when a div and rem of the same constant are used in the same basic block: the constant will be hoisted, but not moved out of the block.
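
A hypothetical example of the problematic shape, a div and rem of the same large (hoistable) constant in one block:

    define i64 @div_rem_same_const(i64 %x) {
      %d = udiv i64 %x, 1234567890123456789
      %r = urem i64 %x, 1234567890123456789
      %s = add i64 %d, %r
      ret i64 %s
    }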

Longer term we probably need to look into the X86 immediate cost model used by constant hoisting and maybe not mark div/rem immediates for hoisting at all.

This fixes the case from PR38649.

Differential Revision: https://reviews.llvm.org/D51000

llvm-svn: 340303
2018-08-21 17:15:33 +00:00
Simon Pilgrim 43cf2c20ab [X86] Add SSE2 and XOP udiv combine tests
llvm-svn: 340282
2018-08-21 15:21:45 +00:00
Simon Pilgrim 8e15b43092 [X86] Add SSE2 sdiv combine tests
llvm-svn: 340264
2018-08-21 10:44:06 +00:00
Sam Parker 597811e7a7 [DAGCombiner] Reduce load widths of shifted masks
During combining, ReduceLoadWidth is used to combine AND nodes that
mask loads into narrow loads. This patch allows the mask to be a
shifted constant. This results in a narrow load which is then left
shifted to compensate for the new offset.
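
A minimal sketch (hypothetical function) of an AND with a shifted mask that can now become a narrow load plus a shift:

    define i32 @and_shifted_mask(i32* %p) {
      %v = load i32, i32* %p
      ; 65280 = 0xFF00, an 8-bit mask shifted left by 8.
      %m = and i32 %v, 65280
      ret i32 %m
    }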

Differential Revision: https://reviews.llvm.org/D50432

llvm-svn: 340261
2018-08-21 10:26:59 +00:00
Simon Pilgrim 72b324de4d [TargetLowering] Add BuildSDiv support for division by one or negone.
This reduces most of the sdiv stages (the MULHS, shifts etc.) to just zero/identity values and uses the numerator scale factor to multiply by +1/-1.
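
A hypothetical example with a non-uniform divisor containing +1/-1 lanes:

    define <4 x i32> @sdiv_by_pm1(<4 x i32> %x) {
      %d = sdiv <4 x i32> %x, <i32 1, i32 -1, i32 7, i32 -7>
      ret <4 x i32> %d
    }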

llvm-svn: 340260
2018-08-21 10:20:36 +00:00
Bjorn Pettersson 880f291577 [RegisterCoalescer] Do not assert when trying to remat dead values
Summary:
RegisterCoalescer::reMaterializeTrivialDef used to assert that
the input register was live in. But as shown by the new
coalesce-dead-lanes.mir test case that seems to be a valid
scenario. We now return false instead of asserting, simply
avoiding rematerializing the dead def.

Normally a COPY of an undef value is eliminated by
eliminateUndefCopy(). Although we only do that when the
destination isn't a physical register. So the situation
above should be limited to the case when we copy an undef
value to a physical register.

Reviewers: kparzysz, wmi, tpr

Reviewed By: kparzysz

Subscribers: MatzeB, qcolombet, tpr, llvm-commits

Differential Revision: https://reviews.llvm.org/D50842

llvm-svn: 340255
2018-08-21 07:49:05 +00:00
Craig Topper 9c57ba0dc3 [X86] Add test command line to expose PR38649.
Bypass slow division and constant hoisting are conspiring to break div+rem of large constants.

llvm-svn: 340217
2018-08-20 21:51:35 +00:00
Craig Topper 210ccfe3db [X86] Prevent lowerVectorShuffleByMerging128BitLanes from creating cycles
Due to some splat handling code in getVectorShuffle, it's possible for NewV1/NewV2 to have their masks modified from what was requested. This can lead to cycles being created in the DAG.

This patch examines the returned mask and makes sure it's different. Long term we may need to look closer at that splat code in getVectorShuffle, or add more splat awareness to getVectorShuffle.

Fixes PR38639

Differential Revision: https://reviews.llvm.org/D50981

llvm-svn: 340214
2018-08-20 21:08:35 +00:00
Craig Topper 7dcb2c4b0a [X86] Teach combineTruncatedArithmetic to handle some cases of ISD::SUB
We can safely avoid interfering with the subus combine if both inputs are freely truncatable: either both extends, or an extend and a constant vector.
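
A minimal sketch of the freely-truncatable case (hypothetical function, both inputs extends):

    define <8 x i16> @trunc_sub(<8 x i16> %a, <8 x i16> %b) {
      %za = zext <8 x i16> %a to <8 x i32>
      %zb = zext <8 x i16> %b to <8 x i32>
      %s = sub <8 x i32> %za, %zb
      %t = trunc <8 x i32> %s to <8 x i16>
      ret <8 x i16> %t
    }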

Differential Revision: https://reviews.llvm.org/D50878

llvm-svn: 340212
2018-08-20 20:57:35 +00:00
Craig Topper 08e7e04998 [X86] Pre-commit test cases for D50878.
llvm-svn: 340211
2018-08-20 20:57:32 +00:00
Cameron McInally 94b9029be9 [FPEnv] Support constrained FREM intrinsic
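A minimal sketch of what a constrained frem call looks like in IR (illustrative, not taken from the patch):

    define double @frem_strict(double %a, double %b) {
      %r = call double @llvm.experimental.constrained.frem.f64(double %a, double %b, metadata !"round.dynamic", metadata !"fpexcept.strict")
      ret double %r
    }
    declare double @llvm.experimental.constrained.frem.f64(double, double, metadata, metadata)
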
Differential Revision: https://reviews.llvm.org/D50975

llvm-svn: 340201
2018-08-20 19:28:56 +00:00
Simon Pilgrim 6ac905926f [TargetLowering] Disable BuildSDiv division by one or negone.
Fuzz tests have detected an issue, currently working on a fix.

llvm-svn: 340195
2018-08-20 18:23:54 +00:00