Commit Graph

17182 Commits

Igor Breger 8672408db0 [AVX512] Fix insertelement i1 lowering.
1. Use a shuffle to insert element i1 into the vector. The previous implementation was incorrect (it computed dest_bit OR src_bit, which doesn't clear the bit when src_bit = 0).
2. Improve shuffling of i1 vectors: use CVT2MASK if supported instead of TRUNCATE.

Differential Revision: http://reviews.llvm.org/D23347

llvm-svn: 278623
2016-08-14 05:25:07 +00:00
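For illustration of the change above (my sketch, not part of the commit message; the value names are made up): the affected pattern is an i1 element insert such as

    %m2 = insertelement <16 x i1> %m, i1 %b, i32 5

With the old dest_bit OR src_bit lowering, a %b of 0 could never clear a previously set bit 5; going through a shuffle (or CVT2MASK where available) writes the lane unconditionally.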
Diana Picus 68be1eb885 Revert "CodeGen: If Convert blocks that would form a diamond when tail-merged."
This reverts commit r278287.

This commit broke the clang-cmake-thumbv7-a15-full-sh bot.
See https://llvm.org/bugs/show_bug.cgi?id=28949

llvm-svn: 278621
2016-08-14 02:10:18 +00:00
Diana Picus 35ccf53e75 Revert "Codegen: Don't tail-duplicate blocks with un-analyzable fallthrough."
This reverts commit r278288.

r278287 broke the clang-cmake-thumbv7-a15-full-sh bot.
Revert this so we can get to r278287.

llvm-svn: 278620
2016-08-14 02:10:12 +00:00
Ron Lieberman 822ee88ab8 Fix unsupported relocation type 'R_HEX_6_X' for symbol .rodata
LowerTargetConstantPool is not properly setting the TargetFlag to indicate the
desired relocation. Coding error: the offset parameter was omitted, so the
TargetFlag was used as the offset, and the TargetFlag defaulted to zero.

This only affects -fpic compilation, and only those items created in a
Constant Pool, for example a vector of constants. Halide ran into this issue.

llvm-svn: 278614
2016-08-13 23:41:11 +00:00
Mehdi Amini 8c629ecf3a Revert "Revert "Invariant start/end intrinsics overloaded for address space""
This reverts commit 32fc6488e48eafc0ca1bac1bd9cbf0008224d530.

llvm-svn: 278609
2016-08-13 23:31:24 +00:00
Mehdi Amini 164ac651da Revert "Invariant start/end intrinsics overloaded for address space"
This reverts commit r276447.

llvm-svn: 278608
2016-08-13 23:27:32 +00:00
Sanjay Patel 08c876673e [x86] add tests to show missed 64-bit immediate merging
Tests are slightly modified versions of those written by
Sunita Marathe in D23391.

llvm-svn: 278599
2016-08-13 18:42:14 +00:00
Craig Topper 3f8126e6fa [AVX-512] Remove an AddedComplexity that was prioritizing basic vzmovl patterns over more complex ones that produce better code.
llvm-svn: 278593
2016-08-13 05:43:20 +00:00
Craig Topper 600685d510 [AVX-512] Add patterns to support VZEXT_MOVL from 512-bit vectors with 64-bit and 32-bit elements.
Fixes PR28961.

llvm-svn: 278592
2016-08-13 05:33:12 +00:00
Matt Arsenault 3cc1e0066d AMDGPU: Fix missing test for addressing mode with odd offsets
Add a test for the case where the constant offset looks unaligned.

llvm-svn: 278589
2016-08-13 01:43:51 +00:00
Dominic Chen 4a9b99ee92 [WebAssembly] Re-enable disabled debug value test
Summary:
This test was resulting in asan/valgrind failures due to undefined
DWARF register mappings for WebAssembly, and was disabled in r278495.
These have been resolved.

Reviewers: sunfish, dschuff

Subscribers: bkramer, llvm-commits, jfb

Differential Revision: https://reviews.llvm.org/D23459

llvm-svn: 278576
2016-08-12 23:14:18 +00:00
Haicheng Wu 7c4535d1e7 Reapply [BranchFolding] Restrict tail merging loop blocks after MBP
Fixed a bug in the test case.

To fix PR28104, this patch restricts tail merging to blocks that belong to the
same loop after MBP.

llvm-svn: 278575
2016-08-12 23:13:38 +00:00
Artem Belevich 2f0a3dfe64 [NVPTX] Use untyped (.b) integer registers in PTX.
This brings LLVM-generated PTX closer to what nvcc generates and avoids
triggering issues in ptxas.

For instance, ptxas does not accept .s16 (or .u16) registers as operands
for .fp16 instructions.

Differential Revision: https://reviews.llvm.org/D23460

llvm-svn: 278568
2016-08-12 22:02:19 +00:00
Eli Friedman f184e4befc [AArch64LoadStoreOptimizer] Check aliasing correctly when creating paired loads/stores.
The existing code accidentally skipped the aliasing check in edge cases.

Differential revision: https://reviews.llvm.org/D23372

llvm-svn: 278562
2016-08-12 20:39:51 +00:00
Eli Friedman 8585e9d33d [AArch64LoadStoreOpt] Handle offsets correctly for post-indexed paired loads.
Trunk would try to create something like "stp x9, x8, [x0], #512", which isn't actually a valid instruction.

Differential revision: https://reviews.llvm.org/D23368

llvm-svn: 278559
2016-08-12 20:28:02 +00:00
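For reference (my note, not part of the commit): the post-indexed immediate of LDP/STP is a signed 7-bit field scaled by the register size, so for 64-bit registers the encodable offsets are the multiples of 8 in the range -512 to +504, which is why #512 is rejected.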
Artur Pilipenko 87e4038a91 [x86] X86ISelLowering zext(add_nuw(x, C)) --> add(zext(x), C_zext)
Currently X86ISelLowering has a similar transformation for sexts:
sext(add_nsw(x, C)) --> add(sext(x), C_sext)

In this change I extend this code to handle zexts as well.

Reviewed By: spatel

Differential Revision: https://reviews.llvm.org/D23359

llvm-svn: 278520
2016-08-12 16:08:30 +00:00
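A minimal IR illustration of the new fold (my example, not from the patch; names are made up): given

    %a = add nuw i32 %x, 42
    %e = zext i32 %a to i64

the nuw flag guarantees the add cannot wrap, so the zero-extension can be pushed through it and the code selected as if it were

    %xe = zext i32 %x to i64
    %e  = add i64 %xe, 42

mirroring the existing sext/add_nsw fold.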
James Y Knight 2cc9da9a65 Revert "[Sparc] Leon errata fix passes."
...and the two followup commits:
Revert "[Sparc][Leon] Missed resetting option flags from check-in 278489."
Revert "[Sparc][Leon] Errata fixes for various errata in different
versions of the Leon variants of the Sparc 32 bit processor."

This reverts commit r274856, r278489, and r278492.

llvm-svn: 278511
2016-08-12 14:48:09 +00:00
Simon Pilgrim 687d71e877 [X86][SSE] Add support for combining target shuffles to PSLLDQ/PSRLDQ byte shifts
llvm-svn: 278502
2016-08-12 11:24:34 +00:00
Benjamin Kramer 05e760ec4b [Webassembly] disable unstable test.
It reads uninitialized memory and crashes randomly.

llvm-svn: 278495
2016-08-12 10:13:45 +00:00
Simon Pilgrim ed96b9adfb [X86][SSE] Fixed PALIGNR target shuffle decode
The PALIGNR target shuffle decode was not taking into account that DecodePALIGNRMask (rather oddly) expects the operands to be in reverse order, nor was it detecting unary patterns, causing combines to use the incorrect input.

The cgbuiltin, auto-upgrade and instruction comments code correctly swap the operands, so they are not affected.

llvm-svn: 278494
2016-08-12 10:10:51 +00:00
Chris Dewhurst 829f8efe55 [Sparc][Leon] Errata fixes for various errata in different versions of the Leon variants of the Sparc 32 bit processor.
The nature of the errata are listed in the comments preceding the errata fix passes. Relevant unit tests are implemented for each of these.

These changes update older versions of these errata fixes with improvements to code and unit tests.

Differential Revision: https://reviews.llvm.org/D21960

llvm-svn: 278489
2016-08-12 09:34:26 +00:00
Haicheng Wu d9cbb1608f Revert "[BranchFolding] Restrict tail merging loop blocks after MBP"
This reverts commit r278463 because it broke a buildbot.

llvm-svn: 278484
2016-08-12 08:40:24 +00:00
Wei Mi 7e103d92cc Recommit 'Remove the restriction that MachineSinking is stopped by
"insert_subreg, subreg_to_reg, and reg_sequence" instructions' after
adjusting some unittest checks.

This is to solve PR28852. The restriction was added in 2010 to enable better register
coalescing. We assumed that it is no longer necessary. Testing results on x86
supported the assumption.

We will look closely at any performance impact this brings and are prepared
to help analyze any performance problems found on other architectures.

Differential Revision: https://reviews.llvm.org/D23210

llvm-svn: 278466
2016-08-12 03:33:22 +00:00
Haicheng Wu ea02372059 [BranchFolding] Restrict tail merging loop blocks after MBP
To fix PR28014, this patch restricts tail merging to blocks that belong to the
same loop after MBP.

Differential Revision: https://reviews.llvm.org/D23191

llvm-svn: 278463
2016-08-12 03:30:23 +00:00
Vyacheslav Klochkov 6daefcf626 X86-FMA3: Implemented commute transformation for EVEX/AVX512 FMA3 opcodes.
This helps to improve memory-folding and register coalescing optimizations.

Also, this patch fixes tracker issue #17229.

Reviewer: Craig Topper.
Differential Revision: https://reviews.llvm.org/D23108

llvm-svn: 278431
2016-08-11 22:07:33 +00:00
Tim Northover 8e0c53a018 GlobalISel: support 'null' constant in translation.
It's sharing the integer G_CONSTANT for now since I don't *think* it creates
any ambiguity (even on weird archs). If that turns out wrong we can create a
G_PTRCONSTANT or something.

llvm-svn: 278423
2016-08-11 21:40:55 +00:00
Krzysztof Parzyszek 1b689da04e [Hexagon] Allow non-returning calls in hardware loops
llvm-svn: 278416
2016-08-11 21:14:25 +00:00
Tim Northover da6f5f2d0a Remove empty file left by partial reversion.
llvm-svn: 278411
2016-08-11 21:01:15 +00:00
Tim Northover 30e67ce793 GlobalISel: add translation support for shift operations.
llvm-svn: 278410
2016-08-11 21:01:13 +00:00
Tim Northover f1f7bf1279 GlobalISel: support zext & sext during translation phase.
llvm-svn: 278409
2016-08-11 21:01:10 +00:00
Wei Ding 70cda07526 AMDGPU : Add intrinsic for instruction v_cvt_pk_u8_f32
Differential Revision: http://reviews.llvm.org/D23336

llvm-svn: 278403
2016-08-11 20:34:48 +00:00
Wei Mi 3ab5816000 Revert rL278384 which caused several buildbot failures (like check failures in CodeGen/X86/clz.ll).
llvm-svn: 278402
2016-08-11 20:33:37 +00:00
Wei Mi ec19b35179 Remove the restriction that MachineSinking is stopped by "insert_subreg,
subreg_to_reg, and reg_sequence" instructions.

This is to solve PR28852. The restriction was added in 2010 to enable better register
coalescing. We assumed that it is no longer necessary. Testing results on x86
supported the assumption.

We will look closely at any performance impact this brings and are prepared
to help analyze any performance problems found on other architectures.

Differential Revision: https://reviews.llvm.org/D23210

llvm-svn: 278384
2016-08-11 18:42:56 +00:00
Krzysztof Parzyszek a003b76391 If-conversion incorrectly calculates liveness of redefined registers
Differential Revision: https://reviews.llvm.org/D23207

llvm-svn: 278383
2016-08-11 18:42:06 +00:00
Krzysztof Parzyszek 60f0b51485 [Hexagon] Skip byval arguments when checking parameter attributes
From the point of view of register assignment, byval parameters are
ignored: a byval parameter is not going to be assigned to a register,
and it will not affect the assignments of subsequent parameters.
When matching registers with parameters in the bit tracker, make sure
to skip byval parameters before advancing the registers.

llvm-svn: 278375
2016-08-11 18:15:16 +00:00
Dominic Chen 6ba19659cb Improve virtual register handling when computing debug information
Summary: Some backends, like WebAssembly, use virtual registers instead of physical registers. This crashes the DbgValueHistoryCalculator pass, which assumes that all registers are physical. Instead, skip virtual registers when iterating aliases, and assume that they are clobbered.

Reviewers: dexonsmith, dschuff, aprantl

Subscribers: yurydelendik, llvm-commits, jfb, sunfish

Differential Revision: https://reviews.llvm.org/D22590

llvm-svn: 278371
2016-08-11 17:52:40 +00:00
Michael Kuperstein e36d7716c3 Make TwoAddressInstructionPass::rescheduleMIBelowKill subreg-aware
This fixes PR28824.

Differential Revision: https://reviews.llvm.org/D23220

llvm-svn: 278370
2016-08-11 17:38:33 +00:00
Matt Arsenault 56684d4538 AMDGPU: Fix crashes on memory functions
llvm-svn: 278369
2016-08-11 17:31:42 +00:00
Wei Ding d3344378c6 AMDGPU : Fix SAD related instruction LIT tests' function attribute issues.
Differential Revision: http://reviews.llvm.org/D23133

llvm-svn: 278360
2016-08-11 17:14:17 +00:00
Wei Ding 34e1753585 AMDGPU : Add LLVM intrinsics for SAD related instructions.
Differential Revision: http://reviews.llvm.org/D23133

llvm-svn: 278354
2016-08-11 16:33:53 +00:00
Tim Northover 0d51044b69 GlobalISel: clear vreg mapping after translating each function
Otherwise we only materialize (shared) constants in the first function they
appear in. This doesn't go well.

llvm-svn: 278351
2016-08-11 16:21:29 +00:00
Igor Breger a77b14d02c [AVX512] Fix extractelement i1 lowering.
The previous implementation (not custom) doesn't enforce zeroing of the upper bits. The assumption is that an i1 PRODUCER (truncate and extractelement) must zero all upper bits, so i1 CONSUMER instructions (test, zext, save, etc.) can be done without additional zeroing.
Make extractelement i1 lowering custom for all i1 vectors.

Differential Revision: http://reviews.llvm.org/D23246

llvm-svn: 278328
2016-08-11 12:13:46 +00:00
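A small IR example of the contract being enforced (mine, not from the commit; names are made up):

    %b = extractelement <16 x i1> %mask, i32 3
    %z = zext i1 %b to i32

Here the extractelement is the i1 producer and the zext is the consumer; the consumer assumes the upper bits of %b's register are already zero, which is what the custom lowering now guarantees.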
Marina Yatsina 88f0c31f13 Avoid false dependencies of undef machine operands
This patch helps avoid false dependencies on undef registers by updating the machine instruction's undef operand to use a register that the instruction is truly dependent on, or a register with clearance higher than Pref.

Pseudo example:

loop:
xmm0 = ...
xmm1 = vcvtsi2sdl eax, xmm0<undef>
... = inst xmm0
jmp loop

In this example, selecting xmm0 as the undef register creates a false dependency between loop iterations.
This false dependency cannot be solved by inserting an xor before vcvtsi2sdl because xmm0 is alive at the point of the vcvtsi2sdl instruction.
Selecting a different register instead of xmm0, especially a register that is not used in the loop, will eliminate this problem.

Differential Revision: https://reviews.llvm.org/D22466

llvm-svn: 278321
2016-08-11 07:32:08 +00:00
Craig Topper a78b768ed4 [AVX-512] Promote 512-bit integer loads to v8i64 similar to what is done for 128/256-bit vectors for overall consistency.
llvm-svn: 278318
2016-08-11 06:04:07 +00:00
Craig Topper 14aa2665d3 [AVX-512] Add patterns to allow EVEX encoded stores of v16i16/v8i16/v16i8/v32i8 even when BWI is not supported.
llvm-svn: 278317
2016-08-11 06:04:04 +00:00
Craig Topper 3563d0f622 [AVX-512] Fix the 128-bit and 256-bit nontemporal load patterns with elements type other than i64. These loads have all been promoted to v2i64/v4i64 loads so we need bitcasts or we end up selecting VMOVDQA32/VMOVDQU32 instead.
llvm-svn: 278316
2016-08-11 06:04:00 +00:00
Tim Northover 357f1be2ca GlobalISel: support same ConstantExprs as Instructions.
It's more than just inttoptr, but the others can't be tested until we have
support for non-trivial constants (they currently get unavoidably folded to a
ConstantInt).

llvm-svn: 278303
2016-08-10 23:02:41 +00:00
Tim Northover 2ff5935a95 GlobalISel: add tests forgotten in r278293.
llvm-svn: 278296
2016-08-10 22:13:48 +00:00
Changpeng Fang fb9c3818dd AMDGPU/SI: Implement amdgcn image intrinsics with sampler
Summary:
  This patch defines and implements the amdgcn image intrinsics with a sampler.

    1. Define the vdata type to be llvm_anyfloat_ty, the address type to be llvm_anyfloat_ty,
       and the rsrc type to be llvm_anyint_ty. As a result, we expect the intrinsic name
       to have three suffixes to overload each of these three types;

    2. R128 as well as two other flags are implied by the three types; for example,
       if you use v8i32 as the resource type, then r128 is 0!

    3. Don't expose the TFE flag; other flags are exposed in the instruction order:
       unrm, glc, slc, lwe and da.

Differential Revision: http://reviews.llvm.org/D22838

Reviewed by:
  arsenm and tstellarAMD

llvm-svn: 278291
2016-08-10 21:15:30 +00:00
Kyle Butt 81d32846b0 Codegen: Don't tail-duplicate blocks with un-analyzable fallthrough.
If AnalyzeBranch can't analyze a block and it is possible to
fallthrough, then duplicating the block doesn't make sense, as only one
block can be the layout predecessor for the un-analyzable fallthrough.

Submitted with a test case, but NOTE: the test case doesn't currently
fail. However, the test case fails with D20505 and would have saved me
some time debugging.

llvm-svn: 278288
2016-08-10 21:03:27 +00:00
Kyle Butt e1c931b171 CodeGen: If Convert blocks that would form a diamond when tail-merged.
The following function currently relies on tail-merging for if
conversion to succeed. The common tail of cond_true and cond_false is
extracted, and this then forms a diamond pattern that can be
successfully if converted.

If this block does not get extracted, either because tail-merging is
disabled or the threshold is higher, we should still recognize this
pattern and if-convert it.

Fixed a regression in the original commit. Need to un-reverse branches after
reversing them, or other conversions go awry.

define i32 @t2(i32 %a, i32 %b) nounwind {
entry:
        %tmp1434 = icmp eq i32 %a, %b           ; <i1> [#uses=1]
        br i1 %tmp1434, label %bb17, label %bb.outer

bb.outer:               ; preds = %cond_false, %entry
        %b_addr.021.0.ph = phi i32 [ %b, %entry ], [ %tmp10, %cond_false ]
        %a_addr.026.0.ph = phi i32 [ %a, %entry ], [ %a_addr.026.0, %cond_false ]
        br label %bb

bb:             ; preds = %cond_true, %bb.outer
        %indvar = phi i32 [ 0, %bb.outer ], [ %indvar.next, %cond_true ]
        %tmp. = sub i32 0, %b_addr.021.0.ph
        %tmp.40 = mul i32 %indvar, %tmp.
        %a_addr.026.0 = add i32 %tmp.40, %a_addr.026.0.ph
        %tmp3 = icmp sgt i32 %a_addr.026.0, %b_addr.021.0.ph
        br i1 %tmp3, label %cond_true, label %cond_false

cond_true:              ; preds = %bb
        %tmp7 = sub i32 %a_addr.026.0, %b_addr.021.0.ph
        %tmp1437 = icmp eq i32 %tmp7, %b_addr.021.0.ph
        %indvar.next = add i32 %indvar, 1
        br i1 %tmp1437, label %bb17, label %bb

cond_false:             ; preds = %bb
        %tmp10 = sub i32 %b_addr.021.0.ph, %a_addr.026.0
        %tmp14 = icmp eq i32 %a_addr.026.0, %tmp10
        br i1 %tmp14, label %bb17, label %bb.outer

bb17:           ; preds = %cond_false, %cond_true, %entry
        %a_addr.026.1 = phi i32 [ %a, %entry ], [ %tmp7, %cond_true ], [ %a_addr.026.0, %cond_false ]
        ret i32 %a_addr.026.1
}

Without tail-merging or diamond-tail if conversion:
LBB1_1:                                 @ %bb
                                        @ =>This Inner Loop Header: Depth=1
        cmp     r0, r1
        ble     LBB1_3
@ BB#2:                                 @ %cond_true
                                        @   in Loop: Header=BB1_1 Depth=1
        subs    r0, r0, r1
        cmp     r1, r0
        it      ne
        cmpne   r0, r1
        bgt     LBB1_4
LBB1_3:                                 @ %cond_false
                                        @   in Loop: Header=BB1_1 Depth=1
        subs    r1, r1, r0
        cmp     r1, r0
        bne     LBB1_1
LBB1_4:                                 @ %bb17
        bx      lr

With diamond-tail if conversion, but without tail-merging:
@ BB#0:                                 @ %entry
        cmp     r0, r1
        it      eq
        bxeq    lr
LBB1_1:                                 @ %bb
                                        @ =>This Inner Loop Header: Depth=1
        cmp     r0, r1
        ite     le
        suble   r1, r1, r0
        subgt   r0, r0, r1
        cmp     r1, r0
        bne     LBB1_1
@ BB#2:                                 @ %bb17
        bx      lr

llvm-svn: 278287
2016-08-10 20:45:56 +00:00
Matt Arsenault 57431c9680 AMDGPU: Change insertion point of si_mask_branch
Insert before the skip branch if one is created.
This is a somewhat more natural placement relative
to the skip branches, and makes it possible to implement
analyzeBranch for skip blocks.

The test changes are mostly due to a quirk where
the block label is not emitted if there is a terminator
that is not also a branch.

llvm-svn: 278273
2016-08-10 19:11:42 +00:00
Sanjay Patel 5ccc85fe83 [x86, AVX] allow FP vector select folding to bitwise logic ops (PR28895)
This handles the case in:
https://llvm.org/bugs/show_bug.cgi?id=28895

...but we are not getting all of the possibilities yet. 
Eg, we use 'X86::FANDN' for scalar FP select combines.

That enhancement is filed as:
https://llvm.org/bugs/show_bug.cgi?id=28925

Differential Revision: https://reviews.llvm.org/D23337

llvm-svn: 278270
2016-08-10 19:00:11 +00:00
Nicolai Haehnle 02d784172c LiveIntervalAnalysis: fix a crash in repairOldRegInRange
Summary:
See the new test case for one that was (non-deterministically) crashing
on trunk and deterministically hit the assertion that I added in D23302.
Basically, the machine function contains a sequence

     DS_WRITE_B32 %vreg4, %vreg14:sub0, ...
     DS_WRITE_B32 %vreg4, %vreg14:sub0, ...
     %vreg14:sub1<def> = COPY %vreg14:sub0

and SILoadStoreOptimizer::mergeWrite2Pair merges the two DS_WRITE_B32
instructions into one before calling repairIntervalsInRange.

Now repairIntervalsInRange wants to repair %vreg14, in particular, and
ends up trying to repair %vreg14:sub1 as well, but that only becomes
active _after_ the range that is to be repaired, hence the crash due
to LR.find(...) == LR.begin() at the start of repairOldRegInRange.

I believe that just skipping those subranges is fine, but again, I'm not too
familiar with that code.

Reviewers: MatzeB, kparzysz, tstellarAMD

Subscribers: llvm-commits, MatzeB

Differential Revision: https://reviews.llvm.org/D23303

llvm-svn: 278268
2016-08-10 18:51:14 +00:00
Kyle Butt 71b1ca1be4 Codegen: Tail Merge: Be less aggressive with special cases.
This change makes it possible for tail-duplication and tail-merging to
be disjoint. By being less aggressive when merging during layout, there are no
overlapping cases between tail-duplication and tail-merging, provided the
thresholds are disjoint.

There is a remaining TODO to benchmark the succ_size() test for non-layout tail
merging.

llvm-svn: 278265
2016-08-10 18:36:18 +00:00
Krzysztof Parzyszek 0bbad0fc86 [Hexagon] Simplify the SplitConst32/64 pass
llvm-svn: 278256
2016-08-10 18:05:47 +00:00
Krzysztof Parzyszek 3b946c90ef [Hexagon] Add extra patterns for single-precision min/max instructions
llvm-svn: 278252
2016-08-10 17:56:24 +00:00
Tim Northover 7552ef5a00 GlobalISel: avoid inserting redundant COPYs for bitcasts.
If the value produced by the bitcast hasn't been referenced yet, we can simply
reuse the input register avoiding an unnecessary COPY instruction.

llvm-svn: 278245
2016-08-10 16:51:14 +00:00
Krzysztof Parzyszek a3386501af [Hexagon] Use integer instructions for floating point immediates
Floating point instructions use general purpose registers, so the few
instructions that can put floating point immediates into registers are,
in fact, integer instructions. Use them explicitly instead of having
pseudo-instructions specifically for dealing with floating point values.

Simplify the constant loading instructions (from sdata) to have only two:
one for 32-bit values and one for 64-bit values: CONST32 and CONST64.

llvm-svn: 278244
2016-08-10 16:46:36 +00:00
Simon Pilgrim b204f03004 [X86][XOP] Tweak vpermil2pd test to stop it being combined away
The target shuffle combined to a BLENDPD pattern which we will shortly add support for.

llvm-svn: 278233
2016-08-10 15:15:56 +00:00
Simon Pilgrim f1f55198c1 [X86][SSE] Regenerate vector shift lowering tests
llvm-svn: 278232
2016-08-10 15:13:49 +00:00
Sanjay Patel 2c677a9306 use different comparison predicates for better test coverage
llvm-svn: 278229
2016-08-10 15:06:11 +00:00
Simon Pilgrim ac8fa6c2c6 [X86][SSE] Add support for combining target shuffles to MOVSS/MOVSD
Only do this on pre-SSE41 targets; from SSE41 onwards we should be lowering to BLENDPS/BLENDPD instead.

llvm-svn: 278228
2016-08-10 14:15:41 +00:00
Simon Pilgrim d99242c44d [X86][SSE] Regenerate SSE1 tests
Properly demonstrate the nasty codegen we get for vselect without integer vectors

llvm-svn: 278215
2016-08-10 12:26:40 +00:00
Simon Pilgrim cb5a189b90 Regenerate test
llvm-svn: 278214
2016-08-10 12:24:19 +00:00
Simon Pilgrim 85c7ea86ae [DAGCombine] Avoid INSERT_SUBVECTOR reinsertions (PR28678)
If the input vector to INSERT_SUBVECTOR is another INSERT_SUBVECTOR, and this inserted subvector replaces the last insertion, then insert into the common source vector.

i.e. 
INSERT_SUBVECTOR( INSERT_SUBVECTOR( Vec, SubOld, Idx ), SubNew, Idx ) --> INSERT_SUBVECTOR( Vec, SubNew, Idx )

Differential Revision: https://reviews.llvm.org/D23330

llvm-svn: 278211
2016-08-10 10:50:53 +00:00
Sam Parker 62965c96df [ARM] Improve sxta{b|h} and uxta{b|h} tests
Created a Thumb2 predicated pattern matcher that uses Thumb2 and
HasT2ExtractPack and used it to redefine the patterns for sxta{b|h}
and uxta{b|h}. Also used the similar patterns to fill in isel pattern
gaps for the corresponding instructions in the ARM backend.
The patch is mainly changes to tests since most of this functionality
appears not to have been tested.

Differential Revision: https://reviews.llvm.org/D23273

llvm-svn: 278207
2016-08-10 09:34:34 +00:00
Tim Northover d403a3d8ee GlobalISel: support 'undef' constant.
llvm-svn: 278174
2016-08-09 23:01:30 +00:00
Derek Schuff 66641322ce [WebAssembly] Add -emscripten-cxx-exceptions-whitelist option
This patch adds -emscripten-cxx-exceptions-whitelist option to
WebAssemblyLowerEmscriptenExceptions pass. This option is the list of
function names in which Emscripten-style exception handling is enabled.
This is to support emscripten's EXCEPTION_CATCHING_WHITELIST which
exists because of the performance impact of emscripten's non-zero-cost
EH method.

Patch by Heejin Ahn

Differential Revision: https://reviews.llvm.org/D23292

llvm-svn: 278171
2016-08-09 22:37:00 +00:00
Tim Northover 5ed648e509 GlobalISel: first translation support for Constants.
For now put them all in the entry block. This should be correct but may give
poor runtime performance. Hopefully MachineSinking combined with
isReMaterializable can solve those issues, but if not the interface is sound
enough to support alternatives.

llvm-svn: 278168
2016-08-09 21:28:04 +00:00
Sanjay Patel d34b128fbc add test cases for missed vselect optimizations (PR28895)
llvm-svn: 278165
2016-08-09 21:07:17 +00:00
Sanjay Patel b61346b8b0 regenerate checks and remove 'opt' run dependency
llvm-svn: 278154
2016-08-09 20:09:16 +00:00
David Majnemer adc688ce9c [X86] Don't model UD2/UD2B as a terminator
A UD2 might make its way into the program via a call to @llvm.trap.
Obviously, calls are not terminators.  However, we modeled the X86
instruction, UD2, as a terminator.  Later on, this confuses the epilogue
insertion machinery which results in the epilogue getting inserted
before the UD2.  For some platforms, like x64, the result is a
violation of the ABI.

Instead, model UD2/UD2B as a side effecting instruction which may
observe memory.

llvm-svn: 278144
2016-08-09 17:55:12 +00:00
Simon Pilgrim 76964e3140 [DAGCombiner] Better support for shifting large value type by constants
As detailed on D22726, much of the shift combining code assumes constant values will fit into a uint64_t and calls ConstantSDNode::getZExtValue where it probably shouldn't (leading to asserts). Using APInt directly avoids this problem, but we encounter other assertions if we attempt to compare/operate on 2 APInts of different bitwidths.

This patch adds a helper function to ensure that 2 APInt values are zero extended as required so that they can be safely used together. I've only added an initial example use for this to the '(SHIFT (SHIFT x, c1), c2) --> (SHIFT x, (ADD c1, c2))' combines. Further cases can easily be added as required.

Differential Revision: https://reviews.llvm.org/D23007

llvm-svn: 278141
2016-08-09 17:39:11 +00:00
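A minimal IR shape for the fold mentioned above (my example, not from the patch; names are made up):

    %s1 = shl i128 %x, 3
    %s2 = shl i128 %s1, 5
    ; can be combined to
    %s2 = shl i128 %x, 8

On a wide type such as i128 the shift-amount constants may be held as APInts of different bitwidths, so the new helper zero-extends them to a common width before they are added or compared.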
Simon Pilgrim 27740d038c [X86][XOP] Add support for combining target shuffles to VPERMIL2PD/VPERMIL2PS
llvm-svn: 278120
2016-08-09 12:56:15 +00:00
Elena Demikhovsky 0e0e07f436 AVX-512: A new test for FMA intrinsic
A new test that exercises a sub-optimal sequence involving an FMA intrinsic and an FNEG operation.
An upcoming patch will fix it.

llvm-svn: 278117
2016-08-09 11:54:14 +00:00
Simon Pilgrim aae7d4a1b6 [X86][XOP] Add support for combining target shuffles to VPPERM
llvm-svn: 278114
2016-08-09 10:56:29 +00:00
Dean Michael Berris 3a25d84a51 [XRay] Test for xray_instr_map in object file. (NFC)
This makes a trivial change in the emission of the per-function XRay
tables, and makes sure that the xray_instr_map section does show up in
the object file.

llvm-svn: 278113
2016-08-09 10:42:11 +00:00
Simon Pilgrim 54c32ddf55 [X86][SSE] Fix memory folding of (v)roundsd / (v)roundss
We only had partial memory folding support for the intrinsic definitions, which (as noted on PR27481) was causing FR32/FR64/VR128 mismatch errors with the machine verifier.

This patch adds missing memory folding support for both intrinsics and the ffloor/fnearbyint/fceil/frint/ftrunc patterns and in doing so fixes the failing machine verifier stack folding tests from PR27481.

Differential Revision: https://reviews.llvm.org/D23276

llvm-svn: 278106
2016-08-09 09:32:34 +00:00
Craig Topper 92a4ff1294 [AVX-512] Add support for execution domain switching masked logical ops between floating point and integer domain.
This switches PS<->D and PD<->Q.

llvm-svn: 278097
2016-08-09 05:26:07 +00:00
Craig Topper 9bd6241106 [X86] Remove the Fv packed logical operation alias instructions. Replace them with patterns to the regular instructions.
This enables execution domain fixing which is why the tests changed.

llvm-svn: 278090
2016-08-09 03:06:33 +00:00
Craig Topper de06b51d3d [X86] Remove unnecessary bitcast from the front of AVX1Only 256-bit logical operation patterns.
llvm-svn: 278088
2016-08-09 03:06:26 +00:00
Matthias Braun 7313ca6dbf X86InstrInfo: Update liveness in classifyLea()
We need to update liveness information when we create COPYs in
classifyLea().

This fixes http://llvm.org/28301

llvm-svn: 278086
2016-08-09 01:47:26 +00:00
Derek Schuff 53b9af02c8 [WebAssembly] Fix bugs in WebAssemblyLowerEmscriptenExceptions pass
* Delete extra '_' prefixes from JS library function names. fixImports()
  function in JS glue code deals with this for wasm.
* Change command-line option names in order to be consistent with
  asm.js.
* Add missing lowering code for llvm.eh.typeid.for intrinsics
* Delete commas in mangled function names
* Fix a function argument attributes bug. Because we add the pointer to
  the original callee as the first argument of the invoke wrapper, all
  argument attribute indices have to be incremented by one.

Patch by Heejin Ahn

Differential Revision: https://reviews.llvm.org/D23258

llvm-svn: 278081
2016-08-09 00:29:55 +00:00
Derek Schuff b7d6d9e3cd [WebAssembly] Fix CFI index to account for padding nullptr function
The WebAssembly linker now creates a dummy function at index 0 to
prevent miscomparisons with the NULL pointer, see
https://github.com/WebAssembly/binaryen/pull/658. Thanks to pcc for
pointing out this problem!

Patch by Dominic Chen

Differential Revision: https://reviews.llvm.org/D23137

llvm-svn: 278073
2016-08-08 23:56:01 +00:00
Charles Davis e9c32c7ed3 Revert "[X86] Support the "ms-hotpatch" attribute."
This reverts commit r278048. Something changed between the last time I
built this--it takes a while on my ridiculously slow and ancient
computer--and now, and that broke this.

llvm-svn: 278053
2016-08-08 21:20:15 +00:00
Charles Davis 0822aa118e [X86] Support the "ms-hotpatch" attribute.
Summary:
Based on two patches by Michael Mueller.

This is a target attribute that causes a function marked with it to be
emitted as "hotpatchable". This particular mechanism was originally
devised by Microsoft for patching their binaries (which they are
constantly updating to stay ahead of crackers, script kiddies, and other
ne'er-do-wells on the Internet), but is now commonly abused by Windows
programs to hook API functions.

This mechanism is target-specific. For x86, a two-byte no-op instruction
is emitted at the function's entry point; the entry point must be
immediately preceded by 64 (32-bit) or 128 (64-bit) bytes of padding.
This padding is where the patch code is written. The two-byte no-op is
then overwritten with a short jump into this code. The no-op is usually
a `movl %edi, %edi` instruction; this is used as a magic value
indicating that this is a hotpatchable function.

Reviewers: majnemer, sanjoy, rnk

Subscribers: dberris, llvm-commits

Differential Revision: https://reviews.llvm.org/D19908

llvm-svn: 278048
2016-08-08 21:01:39 +00:00
Krzysztof Parzyszek 341cf3fbe5 [Hexagon] Add pattern for 64-bit mulhs
llvm-svn: 278040
2016-08-08 19:24:25 +00:00
Elliot Colp d9e6668928 Re-add SystemZ SNaN test
The floating-point bug affecting ninja-x64-msvc-RA-centos6 is fixed (r277813) so this test should
now pass

llvm-svn: 278034
2016-08-08 18:11:13 +00:00
Oliver Stannard 8331aaee8f [ARM] Add support for embedded position-independent code
This patch adds support for some new relocation models to the ARM
backend:

* Read-only position independence (ROPI): Code and read-only data is accessed
  PC-relative. The offsets between all code and RO data sections are known at
  static link time. This does not affect read-write data.
* Read-write position independence (RWPI): Read-write data is accessed relative
  to the static base register (r9). The offsets between all writeable data
  sections are known at static link time. This does not affect read-only data.

These two modes are independent (they specify how different objects
should be addressed), so they can be used individually or together. They
are otherwise the same as the "static" relocation model, and are not
compatible with SysV-style PIC using a global offset table.

These modes are normally used by bare-metal systems or systems with
small real-time operating systems. They are designed to avoid the need
for a dynamic linker; the only initialisation required is setting r9 to
an appropriate value for RWPI code.

I have only added support to SelectionDAG, not FastISel, because
FastISel is currently disabled for bare-metal targets where these modes
would be used.

Differential Revision: https://reviews.llvm.org/D23195

llvm-svn: 278015
2016-08-08 15:28:31 +00:00
Silviu Baranga fa00ba3c1a [AArch64] PR28877: Don't assume we're running after legalization when creating vcvtfp2fxs
Summary:
The DAG combine transformation that was generating the
aarch64_neon_vcvtfp2fxs node was assuming that all
inputs were legal and wasn't accounting for the fact that the input
could be a v4f64 if we're trying to do the transformation
before legalization. We now bail out in this case.

All illegal types besides v4f64 were already rejected.

Fixes https://llvm.org/bugs/show_bug.cgi?id=28877.

Reviewers: jmolloy

Subscribers: aemerson, rengolin, llvm-commits

Differential Revision: https://reviews.llvm.org/D23261

llvm-svn: 278002
2016-08-08 13:13:57 +00:00
Craig Topper f44423120f [AVX-512] Improve lowering of inserting a single element into lowest element of a 512-bit vector of zeroes by using vmovq/vmovd/vmovss/vmovsd.
llvm-svn: 277965
2016-08-07 21:52:59 +00:00
Nico Weber 99ceee8a85 Revert r277905, it caused PR28894
llvm-svn: 277962
2016-08-07 20:18:04 +00:00
Craig Topper 2c51c74d52 [AVX-512] Add 512-bit logical operations to load folding tables. Add avx512f stack folding test and move some tests from the avx512vl test.
llvm-svn: 277961
2016-08-07 17:14:09 +00:00
Craig Topper 938e7ab9e1 [AVX-512] Add EVEX encoded floating point MAX/MIN instructions to the load folding tables.
llvm-svn: 277960
2016-08-07 17:14:05 +00:00
Elena Demikhovsky dca03bebd3 AVX-512: Changed lowering of BITCAST between i1 vectors and i8/i16/i32 integer values
Optimized lowering of the BITCAST node. The BITCAST node can be replaced with COPY_TO_REG instead of KMOV.
This allows suppressing two opposite BITCAST operations and avoiding redundant "mov"s.

Differential Revision: https://reviews.llvm.org/D23247

llvm-svn: 277958
2016-08-07 13:05:58 +00:00
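A tiny IR example of the round trip this targets (mine, not from the commit; names are made up):

    %m = bitcast i16 %x to <16 x i1>
    %y = bitcast <16 x i1> %m to i16

Previously each direction was lowered through a KMOV; modelling the bitcast as COPY_TO_REG lets the two opposite bitcasts cancel and the redundant moves disappear.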
Simon Pilgrim 69f2299efc [X86][AVX512BW] Add sext/zext AVX512BW 512-bit vector tests
llvm-svn: 277957
2016-08-07 12:41:36 +00:00
Simon Pilgrim a23141eca7 [X86][AVX512] Add sext/zext to 512-bit vector tests
llvm-svn: 277956
2016-08-07 12:10:46 +00:00
Elena Demikhovsky 2fabdcc60a AVX-512: Added a test for cmp intrinsics
This is a new test that exposes a currently suboptimal sequence when passing values between cmp and kor intrinsics.
The code will be optimized in an upcoming patch.

Submitted bug here:
https://llvm.org/bugs/show_bug.cgi?id=28839

llvm-svn: 277954
2016-08-07 09:29:34 +00:00
Craig Topper 49841c3812 [X86] Add commutable floating point max/min instructions to the load folding tables.
llvm-svn: 277949
2016-08-07 05:39:51 +00:00
Craig Topper 2c1f6706de [AVX-512] Add andnps/andnpd to the avx512vl stack folding test.
llvm-svn: 277948
2016-08-07 05:39:48 +00:00
Simon Pilgrim bc573ca1b8 [X86][AVX2] Improve sign/zero extension on AVX2 targets
Split extensions of large vectors into 256-bit chunks - the equivalent of what we already do pre-AVX2 with 128-bit chunks.

llvm-svn: 277939
2016-08-06 21:21:12 +00:00
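For example (my sketch, not from the commit), an extension whose result no longer fits in one 256-bit register, such as

    %e = sext <16 x i16> %x to <16 x i32>

is now split into two <8 x i16> to <8 x i32> halves (256-bit chunks), analogous to the existing pre-AVX2 split into 128-bit chunks.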
Craig Topper 19505bc354 [AVX-512] Add AVX-512 scalar CVT instructions to hasUndefRegUpdate.
llvm-svn: 277933
2016-08-06 19:31:50 +00:00
Craig Topper b0476fcc1f [AVX-512] Add AVX512 run line to a test and re-generate the checks. Future commits will refine some of the sequences.
llvm-svn: 277932
2016-08-06 19:31:47 +00:00
Simon Pilgrim 7d168e19e8 [X86][SSE] Enable commutation between MOVHLPS and UNPCKHPD
Assuming SSE2 is available, we can safely commute between these, removing some unnecessary register moves and improving memory folding opportunities.

VEX encoded versions don't benefit so I haven't added support to them.

llvm-svn: 277930
2016-08-06 18:40:28 +00:00
Simon Pilgrim ef10e922d8 [X86][SSE] Regenerate SSE1 shuffle tests
llvm-svn: 277925
2016-08-06 13:46:09 +00:00
Kyle Butt 71cb44d969 CodeGen: If Convert blocks that would form a diamond when tail-merged.
The following function currently relies on tail-merging for if
conversion to succeed. The common tail of cond_true and cond_false is
extracted, and this then forms a diamond pattern that can be
successfully if converted.

If this block does not get extracted, either because tail-merging is
disabled or the threshold is higher, we should still recognize this
pattern and if-convert it.
define i32 @t2(i32 %a, i32 %b) nounwind {
entry:
	%tmp1434 = icmp eq i32 %a, %b		; <i1> [#uses=1]
	br i1 %tmp1434, label %bb17, label %bb.outer

bb.outer:		; preds = %cond_false, %entry
	%b_addr.021.0.ph = phi i32 [ %b, %entry ], [ %tmp10, %cond_false ]
	%a_addr.026.0.ph = phi i32 [ %a, %entry ], [ %a_addr.026.0, %cond_false ]
	br label %bb

bb:		; preds = %cond_true, %bb.outer
	%indvar = phi i32 [ 0, %bb.outer ], [ %indvar.next, %cond_true ]
	%tmp. = sub i32 0, %b_addr.021.0.ph
	%tmp.40 = mul i32 %indvar, %tmp.
	%a_addr.026.0 = add i32 %tmp.40, %a_addr.026.0.ph
	%tmp3 = icmp sgt i32 %a_addr.026.0, %b_addr.021.0.ph
	br i1 %tmp3, label %cond_true, label %cond_false

cond_true:		; preds = %bb
	%tmp7 = sub i32 %a_addr.026.0, %b_addr.021.0.ph
	%tmp1437 = icmp eq i32 %tmp7, %b_addr.021.0.ph
	%indvar.next = add i32 %indvar, 1
	br i1 %tmp1437, label %bb17, label %bb

cond_false:		; preds = %bb
	%tmp10 = sub i32 %b_addr.021.0.ph, %a_addr.026.0
	%tmp14 = icmp eq i32 %a_addr.026.0, %tmp10
	br i1 %tmp14, label %bb17, label %bb.outer

bb17:		; preds = %cond_false, %cond_true, %entry
	%a_addr.026.1 = phi i32 [ %a, %entry ], [ %tmp7, %cond_true ], [ %a_addr.026.0, %cond_false ]
	ret i32 %a_addr.026.1
}

Without tail-merging or diamond-tail if conversion:
LBB1_1:                                 @ %bb
                                        @ =>This Inner Loop Header: Depth=1
        cmp     r0, r1
        ble     LBB1_3
@ BB#2:                                 @ %cond_true
                                        @   in Loop: Header=BB1_1 Depth=1
        subs    r0, r0, r1
        cmp     r1, r0
        it      ne
        cmpne   r0, r1
        bgt     LBB1_4
LBB1_3:                                 @ %cond_false
                                        @   in Loop: Header=BB1_1 Depth=1
        subs    r1, r1, r0
        cmp     r1, r0
        bne     LBB1_1
LBB1_4:                                 @ %bb17
        bx      lr

With diamond-tail if conversion, but without tail-merging:
@ BB#0:                                 @ %entry
        cmp     r0, r1
        it      eq
        bxeq    lr
LBB1_1:                                 @ %bb
                                        @ =>This Inner Loop Header: Depth=1
        cmp     r0, r1
        ite     le
        suble   r1, r1, r0
        subgt   r0, r0, r1
        cmp     r1, r0
        bne     LBB1_1
@ BB#2:                                 @ %bb17
        bx      lr

llvm-svn: 277905
2016-08-06 01:52:37 +00:00
Justin Bogner 6863027f00 PowerPC: Add a triple to this test
This is running opt without specifying a triple, which isn't correct.

llvm-svn: 277875
2016-08-05 21:49:54 +00:00
Marek Olsak 355a8642b4 AMDGPU/SI: Increase SGPR limit to 96 on Tonga/Iceland
Summary:
This is the setting of the Vulkan closed source driver.

It decreases the max wave count from 10 to 8.

26010 shaders in 14650 tests
Totals:
VGPRS: 829593 -> 808440 (-2.55 %)
Spilled SGPRs: 81878 -> 42226 (-48.43 %)
Spilled VGPRs: 367 -> 358 (-2.45 %)
Scratch VGPRs: 1764 -> 1748 (-0.91 %) dwords per thread
Code Size: 36677864 -> 35923932 (-2.06 %) bytes

There is a massive decrease in SGPR spilling in general and -7.4% spilled
VGPRs for DiRT Showdown (= SGPRs spilled to scratch?)

Reviewers: arsenm, tstellarAMD, nhaehnle

Subscribers: arsenm, llvm-commits, kzhuravl

Differential Revision: https://reviews.llvm.org/D23034

llvm-svn: 277867
2016-08-05 21:23:29 +00:00
Weiming Zhao f68a6a720c [ARM] Constant Materialize: imms with specific value can be encoded into mov.w
Summary: Thumb2 supports encoding immediates with specific patterns into mov.w by splatting the low 8 bits into other bytes.

I'm resubmitting this patch. The test case in the original commit
r277610 does not specify a triple, so builds with different default triples
will have different output.

This patch fixes the triple as thumb-darwin-apple.

Reviewers: john.brawn, jmolloy, bruno

Subscribers: jmolloy, aemerson, rengolin, samparker, llvm-commits

Differential Revision: https://reviews.llvm.org/D23090

llvm-svn: 277865
2016-08-05 20:58:29 +00:00
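For context (my note, not part of the commit): besides rotated 8-bit values, the Thumb2 modified-immediate encoding can represent a byte XY replicated as 0x000000XY, 0x00XY00XY, 0xXY00XY00 or 0xXYXYXYXY; it is these splat patterns that the new materialization exploits.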
Tim Northover 14e7f73a0f GlobalISel: clear pending phis after MachineFunction translated
Test is just reordering the existing functions (it would trigger for any
function after one with a phi).

llvm-svn: 277841
2016-08-05 17:50:36 +00:00
Simon Pilgrim 69b6a70834 [X86][SSE] Add initial support for 2 input target shuffle combining.
At the moment only the INSERTPS matching can actually use 2 inputs but the plumbing is now in place.

llvm-svn: 277839
2016-08-05 17:36:14 +00:00
Tim Northover 97d0cb3165 GlobalISel: IRTranslate PHI instructions
llvm-svn: 277835
2016-08-05 17:16:40 +00:00
Ulrich Weigand c3b495a649 [PowerPC] Wrong fast-isel codegen for VSX floating-point loads
There were two locations where fast-isel would generate an LFD instruction
with a target register class of VSFRC instead of F8RC when VSX was enabled.
This can cause invalid registers to be used in certain cases, like:
   lfd 36, ...
instead of using a VSX load instruction.  The wrong register number gets
silently truncated, causing invalid code to be generated.


The first place is PPCFastISel::PPCEmitLoad, which had multiple problems:

1.) The IsVSSRC and IsVSFRC flags are not initialized correctly, since they
are computed from resultReg, which is still zero at this point in many cases.
Fixed by changing the helper routines to operate on a register class instead
of a register and passing in UseRC.
 
2.) Even with this fixed, Is64VSXLoad is still wrong due to a typo:

bool Is32VSXLoad = IsVSSRC && Opc == PPC::LFS;
bool Is64VSXLoad = IsVSSRC && Opc == PPC::LFD;

The second line needs to use isVSFRC (like PPCEmitStore does).

3.) Once both the above are fixed, we're now generating a VSX instruction --
but an incorrect one, since generation of an indexed instruction with null
index is wrong. Fixed by copying the code handling the same issue in
PPCEmitStore.


The second place is PPCFastISel::PPCMaterializeFP, where we would emit an
LFD to load a constant from the literal pool, and use the wrong result
register class. Fixed by hardcoding a F8RC class even on systems
supporting VSX.


Fixes: https://llvm.org/bugs/show_bug.cgi?id=28630

Differential Revision: https://reviews.llvm.org/D22632

llvm-svn: 277823
2016-08-05 15:22:05 +00:00
Strahinja Petrovic 30e0ce8e9f [PowerPC] fix passing long double arguments to function (soft-float)
This patch fixes passing long double type arguments to functions in
soft-float mode. If there are fewer than 4 argument registers free
(the long double type is mapped to 4 GPRs in soft-float mode), the
long double argument must be passed through the stack.
Differential Revision: https://reviews.llvm.org/D20114.

llvm-svn: 277804
2016-08-05 08:47:26 +00:00
Tim Northover 61c16142b4 GlobalISel: extend add widening to SUB, MUL, OR, AND and XOR.
These are the operations that are trivially identical. Division is omitted for
now because you need to use the correct sign/zero extension.

llvm-svn: 277775
2016-08-04 21:39:49 +00:00
Tim Northover 1cfa919b3d GlobalISel: add support for G_MUL
llvm-svn: 277774
2016-08-04 21:39:44 +00:00
Tim Northover 9656f1476c GlobalISel: implement narrowing for G_ADD.
llvm-svn: 277769
2016-08-04 20:54:13 +00:00
Yaxun Liu 86c052238a [OpenCL] Add missing tests for getOCLTypeName
Adding missing tests for OCL type names for half, float, double, char, short, long, and unknown.

Patch by Aaron En Ye Shi.

Differential Revision: https://reviews.llvm.org/D22964

llvm-svn: 277759
2016-08-04 19:45:00 +00:00
Tim Northover 2f32e7f0ac AArch64: don't assume all i128s are BUILD_PAIRs
It leads to a crash when they're not. I'm *sure* I've made this mistake before,
at least once.

llvm-svn: 277755
2016-08-04 19:32:28 +00:00
Tim Northover 06db18fbf8 GlobalISel: also add G_TRUNC to IRTranslator.
llvm-svn: 277749
2016-08-04 18:35:17 +00:00
Tim Northover 323358184e GlobalISel: add code to widen scalar G_ADD
llvm-svn: 277747
2016-08-04 18:35:11 +00:00
Derek Schuff 732636d901 [WebAssembly] Check return value of getRegForValue in FastISel
Previously, FastISel for WebAssembly wasn't checking the return value of
`getRegForValue` in certain cases, which would generate instructions
referencing NoReg. This patch fixes this behavior.

Patch by Dominic Chen

Differential Revision: https://reviews.llvm.org/D23100

llvm-svn: 277742
2016-08-04 18:01:52 +00:00
Krzysztof Parzyszek 04c0796e37 [Hexagon] Validate register class when doing bit simplification
llvm-svn: 277740
2016-08-04 17:56:19 +00:00
Daniel Sanders 5dcbac57c5 [mips] Set Personality and LSDA encoding for FreeBSD
Reviewers: seanbruno, sdardis

Subscribers: tberghammer, danalbert, srhines, dsanders, sdardis, llvm-commits, seanbruno

Differential Revision: https://reviews.llvm.org/D23113

llvm-svn: 277732
2016-08-04 15:36:03 +00:00
Krzysztof Parzyszek 7773c58458 [Hexagon] Clear kill flags from modified registers in peephole optimizer
llvm-svn: 277727
2016-08-04 14:17:16 +00:00
Nikolai Bozhenov f679530ba1 [X86] Heuristic to selectively build Newton-Raphson SQRT estimation
On modern Intel processors hardware SQRT in many cases is faster than RSQRT
followed by Newton-Raphson refinement. The patch introduces a simple heuristic
to choose between hardware SQRT instruction and Newton-Raphson software
estimation.

The patch treats scalars and vectors differently. The heuristic is that for
scalars the compiler should optimize for latency while for vectors it should
optimize for throughput. It is based on the assumption that throughput bound
code is likely to be vectorized.

Basically, the patch disables scalar NR for big cores and disables NR completely
for Skylake. Firstly, scalar SQRT has shorter latency than NR code in big cores.
Secondly, vector SQRT has been greatly improved in Skylake and has better
throughput compared to NR.

Differential Revision: https://reviews.llvm.org/D21379

llvm-svn: 277725
2016-08-04 12:47:28 +00:00
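As a reminder of what is being traded off (my note, not from the patch): the software route computes a reciprocal square root estimate x0 for a value a and refines it with one Newton-Raphson step,

    x1 = x0 * (1.5 - 0.5 * a * x0 * x0)
    sqrt(a) ~= a * x1

i.e. one RSQRT plus a handful of multiplies/FMAs, whose latency and throughput the heuristic weighs against a single hardware SQRT instruction.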
Simon Pilgrim 8ae6dad49b [X86][SSE] Don't decide when to scalarize CTTZ/CTLZ for performance at lowering - this is what cost models are for
Improved CTTZ/CTLZ costings will be added shortly

llvm-svn: 277713
2016-08-04 10:14:39 +00:00
Simon Dardis 57f4ae4625 [mips] Enable tail calls by default
Enable tail calls by default for (micro)MIPS(64).

microMIPS is slightly more tricky than doing it for MIPS(R6) or microMIPSR6.
microMIPS has two instruction encodings: 16bit and 32bit along with some
restrictions on the size of the instruction that can fill the delay slot.
For safe tail calls for microMIPS, the delay slot filler attempts to find
a correct size instruction for the delay slot of TAILCALL pseudos.

Reviewers: dsanders, vkalintris

Subscribers: jfb, dsanders, sdardis, llvm-commits

Differential Revision: https://reviews.llvm.org/D21138

llvm-svn: 277708
2016-08-04 09:17:07 +00:00
Dean Michael Berris 7e9abea2ae [XRay] Align entry and return sleds to 2 byte boundaries
This should ensure that we can atomically write two bytes (on top of the
retq and the one past it) and have those two bytes not straddle cache
lines.

We also move the label past the alignment instruction so that we can refer
to the actual first instruction, as opposed to potential padding before the
aligned instruction.

Update the tests to allow us to reflect the new order of assembly.

Reviewers: rSerge, echristo, majnemer

Subscribers: llvm-commits, mehdi_amini

Differential Revision: https://reviews.llvm.org/D23101

llvm-svn: 277701
2016-08-04 07:37:28 +00:00
Matt Arsenault b0e32f1ba1 AMDGPU: Fix a slow test by using basic regalloc
This just tests that the register limit isn't exceeded,
so the register allocation doesn't need to be great.

The critically slow part is all in greedy RA, so
switch to basic.

llvm-svn: 277700
2016-08-04 07:04:54 +00:00
Matthias Braun 1873998b16 RenameIndependentSubregs: Fix liveness query in rewriteOperands()
rewriteOperands() always performed liveness queries at the base index
rather than the RegSlot/Base as appropriate for the machine operand. This
could lead to illegal rewriting in some cases.

llvm-svn: 277661
2016-08-03 22:37:47 +00:00
Guozhi Wei 9584d18d48 [PPC] Handling CallInst in PPCBoolRetToInt
This patch fixes pr25548.

The current implementation of PPCBoolRetToInt doesn't handle CallInst correctly, so it fails to do the intended optimization when there is a CallInst with parameters. This patch fixes that.

llvm-svn: 277655
2016-08-03 21:43:51 +00:00
Bruno Cardoso Lopes 3fcf832cce Revert "[ARM] Constant Materialize: imms with specific value can be encoded into mov.w"
This reverts commit r277610 / d619aa8878c3dafcc0d29a46517f63ff3209fdd4.

This makes subtarget-no-movt.ll fail in
http://lab.llvm.org:8080/green/job/clang-stage1-cmake-RA-incremental_check/26892,

llvm-svn: 277654
2016-08-03 21:26:21 +00:00
Elliot Colp 6af6f64f87 I can't reproduce this buildbot failure locally, so temporarily remove this test while I investigate.
http://bb.pgr.jp/builders/ninja-x64-msvc-RA-centos6/builds/27427

llvm-svn: 277636
2016-08-03 19:39:20 +00:00
Simon Pilgrim 898f030f70 [X86][SSE] Enable target shuffle combining to combine multiple shuffle inputs.
We currently only support combining target shuffles that consist of a single source input (plus elements known to be undef/zero).

This patch generalizes the recursive combining of the target shuffle to collect all the inputs, merging any duplicates along the way, into a full set of src ops and its shuffle mask.

We uncover a number of cases where we have failed to combine a unary shuffle because the input has been duplicated and separated during lowering.

This will allow us to combine to 2-input shuffles in a future patch.

Differential Revision: https://reviews.llvm.org/D22859

llvm-svn: 277631
2016-08-03 19:08:24 +00:00
Krzysztof Parzyszek 23ee12e173 [Hexagon] Generate COPY/REG_SEQUENCE more aggressively for vectors
llvm-svn: 277626
2016-08-03 18:35:48 +00:00
Ehsan Amiri a538b0f023 Adding -verify-machineinstrs option to PowerPC tests
Currently we have a number of tests that fail with -verify-machineinstrs.
To detect these cases earlier, we add the option to the testcases, with the
exception of tests that will currently fail with this option. PR 27456 keeps
track of these failures.

No code review, as discussed with Hal Finkel.

llvm-svn: 277624
2016-08-03 18:17:35 +00:00
Weiming Zhao 57dc4cf0e1 [ARM] Constant Materialize: imms with specific value can be encoded into mov.w
Summary: Thumb2 supports encoding immediates with specific patterns into mov.w by splatting the low 8 bits into other bytes.

Reviewers: john.brawn, jmolloy

Subscribers: jmolloy, aemerson, rengolin, samparker, llvm-commits

Differential Revision: https://reviews.llvm.org/D23090

llvm-svn: 277610
2016-08-03 17:05:23 +00:00
Elliot Colp 82b1468a4d Disable shrinking of SNaN constants
When expanding FP constants, we attempt to shrink doubles to floats and perform an extending load.
However, on SystemZ, and possibly on other targets (I've only confirmed the problem on SystemZ), the FP extending load instruction may convert SNaN into QNaN, or may cause an exception. So in the general case, we would still like to shrink FP constants, but SNaNs should be left as doubles.

Differential Revision: https://reviews.llvm.org/D22685

llvm-svn: 277602
2016-08-03 15:09:21 +00:00
Krzysztof Parzyszek ed4e7827bb [Hexagon] Do not check alignment for unsized types in isLegalAddressingMode
When the same base address is used to load two different data types, LSR
would assume a memory type of "void". This type is not sized and has no
alignment information. Checking for it causes a crash.

llvm-svn: 277601
2016-08-03 15:06:18 +00:00
Dean Michael Berris 0b8f6c8777 [XRay] Make the xray_instr_map section specification more correct
Summary:
We also add a test to show what currently happens when we create a
section per function and emit an xray_instr_map. This illustrates the
relationship (or lack thereof) between the per-function section and the
xray_instr_map section.

We also change the code generation slightly so that we don't always
create group sections, but rather only do so if the function that the
table is associated with is in a group.

Also in this change:

  - Remove the "merge" flag on the xray_instr_map section.
  - Test that we're generating the right table for comdat and non-comdat functions.

Reviewers: echristo, majnemer

Subscribers: llvm-commits, mehdi_amini

Differential Revision: https://reviews.llvm.org/D23104

llvm-svn: 277580
2016-08-03 07:21:55 +00:00
Derek Schuff 39bf39f35c [WebAssembly] Initial SIMD128 support.
Kicks off the implementation of wasm SIMD128 support (spec:
https://github.com/stoklund/portable-simd/blob/master/portable-simd.md),
adding support for add, sub, mul for i8x16, i16x8, i32x4, and f32x4.

The spec is WIP, and might change in the near future.

Patch by João Porto

Differential Revision: https://reviews.llvm.org/D22686

llvm-svn: 277543
2016-08-02 23:16:09 +00:00
Tim Northover 765777ce67 ARM: only form SMMLS when SUBE flags unused.
In this particular example we wouldn't want the smmls anyway (the value is
actually unused), but in general smmls does not provide the required flags
register so if that SUBE result is used we can't replace it.

llvm-svn: 277541
2016-08-02 23:12:36 +00:00
Matt Arsenault 979902b3ff AMDGPU: fdiv -1, x -> rcp -x
llvm-svn: 277535
2016-08-02 22:25:04 +00:00
Krzysztof Parzyszek 824d347d2d [Hexagon] Recognize vcombine in copy propagation
llvm-svn: 277528
2016-08-02 21:49:20 +00:00
Artem Belevich db4bc667af [NVPTX] remove unnecessary named metadata update that happens to break debug info.
Also added test case to verify IR changes done by NVPTXGenericToNVVM pass.

Differential Revision: https://reviews.llvm.org/D22837

llvm-svn: 277520
2016-08-02 20:58:24 +00:00
Tim Northover 1021d89398 AArch64: properly calculate cmpxchg status in FastISel.
We were relying on the misleadingly-named $status result to actually be the
status. Actually it's just a scratch register that may or may not be valid (and
is the inverse of the real status anyway). Success can be determined by
comparing the value loaded against the one we wanted to see for "cmpxchg
strong" loops like this.

Should fix PR28819.

llvm-svn: 277513
2016-08-02 20:22:36 +00:00
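An IR-level illustration of the point (my example, not from the commit; names are made up):

    %res = cmpxchg i32* %p, i32 %expected, i32 %new seq_cst seq_cst
    %val = extractvalue { i32, i1 } %res, 0
    %ok  = extractvalue { i32, i1 } %res, 1

The i1 success value has to come from comparing the loaded %val against %expected, rather than from the misleadingly-named $status scratch output of the expanded exclusive-load/store loop.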
Nicolai Haehnle 8a482b33fe AMDGPU: Stay in WQM for non-intrinsic stores
Summary:
Two types of stores are possible in pixel shaders: stores to memory that are
explicitly requested at the API level, and stores that are an implementation
detail of register spilling or lowering of arrays.

For the first kind of store, we must ensure that helper pixels have no effect
and hence WQM must be disabled. The second kind of store must always be
executed, because the written value may be loaded again in a way that is
relevant for helper pixels as well -- and there are no externally visible
effects anyway.

This is a candidate for the 3.9 release branch.

Reviewers: arsenm, tstellarAMD, mareko

Subscribers: arsenm, kzhuravl, llvm-commits

Differential Revision: https://reviews.llvm.org/D22675

llvm-svn: 277504
2016-08-02 19:31:14 +00:00
Nicolai Haehnle bef0e90cf1 AMDGPU: Track physical registers in SIWholeQuadMode
Summary:
There are cases where uniform branch conditions are computed in VGPRs, and
we didn't correctly mark those as WQM.

The stray change in basic-branch.ll is because invoking the LiveIntervals
analysis leads to the detection of a dead register that would otherwise not
be seen at -O0.

This is a candidate for the 3.9 branch, as it fixes a possible hang.

Reviewers: arsenm, tstellarAMD, mareko

Subscribers: arsenm, llvm-commits, kzhuravl

Differential Revision: https://reviews.llvm.org/D22673

llvm-svn: 277500
2016-08-02 19:17:37 +00:00
Ahmed Bougacha 91bdeb1cc2 [AArch64][GlobalISel] Replace test REQUIRES with lit.local.cfg. NFC.
I forgot the REQUIRES once (see r277486).
Let's prevent it from happening again.

llvm-svn: 277499
2016-08-02 19:04:29 +00:00
Ahmed Bougacha 8a31ed2432 [AArch64] Remove useless 'import re' from CodeGen lit.local.cfg. NFC.
llvm-svn: 277498
2016-08-02 19:04:25 +00:00
Krzysztof Parzyszek 962932c2e2 [Hexagon] Prefer _io over _rr for 64-bit store with constant offset
Identify patterns where the address is aligned to an 8-byte boundary,
but the base address and the constant offset are both proper multiples
of 4. In such cases, extract Base+4 into a separate instruction, and
use S2_storerd_io instead of S4_storerd_rr.

llvm-svn: 277497
2016-08-02 18:50:05 +00:00
Ahmed Bougacha 0d020190dd [AArch64][GlobalISel] Add REQUIRES: global-isel to verifier tests.
I thought the directory had a lit.local.cfg, but it doesn't.
I'll add one, but for now, add the REQUIRES line. While there,
move the triple into the IR and add a datalayout.

llvm-svn: 277486
2016-08-02 17:19:35 +00:00
Ahmed Bougacha bfaddd999a [GlobalISel] Set the Selected MF property.
None of GlobalISel requires the property, but this lets us use the
verifier instead of rolling our own "all instructions selected" check.

llvm-svn: 277484
2016-08-02 16:49:25 +00:00
Ahmed Bougacha b14e944cdb [GlobalISel] Verify Selected MF property.
After instruction selection, there should be no pre-isel generic
instructions remaining, nor should generic virtual registers be
used. Verify that.

llvm-svn: 277483
2016-08-02 16:49:22 +00:00
Ahmed Bougacha b109d51865 [GlobalISel] Add Selected MachineFunction property.
Selected: the InstructionSelect pass ran and all pre-isel generic
instructions have been eliminated; i.e., all instructions are now
target-specific or non-pre-isel generic instructions (e.g., COPY).

Since only pre-isel generic instructions can have generic virtual register
operands, this also means that all generic virtual registers have been
constrained to virtual registers (assigned to register classes) and that
all sizes attached to them have been eliminated.

This lets us enforce certain invariants across passes.
This property is GlobalISel-specific, but is always available.

llvm-svn: 277482
2016-08-02 16:49:19 +00:00
Ahmed Bougacha 4628e37e7f [GlobalISel] Set and require RegBankSelected MF property.
The InstructionSelect pass assumes that RegBankSelect ran; set the
property on all tests (thereby verifying the test inputs) and require
it in the pass.

llvm-svn: 277477
2016-08-02 16:17:18 +00:00
Ahmed Bougacha 3681c772cf [GlobalISel] Verify RegBankSelected MF property.
RegBankSelected functions shouldn't have any generic virtual
register not assigned to a bank. Verify that.

llvm-svn: 277476
2016-08-02 16:17:15 +00:00
Ahmed Bougacha 2471265508 [GlobalISel] Add RegBankSelected MachineFunction property.
RegBankSelected: the RegBankSelect pass ran and all generic virtual
registers have been assigned to a register bank.

This lets us enforce certain invariants across passes.
This property is GlobalISel-specific, but is always available.

llvm-svn: 277475
2016-08-02 16:17:10 +00:00
Ahmed Bougacha 24d0d4d2ec [GlobalISel] Set, require, and verify Legalized MF property.
RegBankSelect and InstructionSelect run after the legalizer and
require a Legalized function: check that all instructions are legal.

Note that this should be in the MachineVerifier, but it can't use the
MachineLegalizer as it's currently in the separate GlobalISel library.
Note that the RegBankSelect verifier checks have the same layering
problem, but we only use inline methods so end up not needing to link
against the GlobalISel library.

llvm-svn: 277472
2016-08-02 15:10:32 +00:00
Ahmed Bougacha 0d7b0cb865 [GlobalISel] Add Legalized MachineFunction property.
Legalized: The MachineLegalizer ran; all pre-isel generic instructions
have been legalized, i.e., all instructions are now one of:
  - generic and always legal (e.g., COPY)
  - target-specific
  - legal pre-isel generic instructions.

This lets us enforce certain invariants across passes.
This property is GlobalISel-specific, but is always available.

llvm-svn: 277470
2016-08-02 15:10:25 +00:00
Sam Parker 18bc3a002e [ARM] Improve smul* and smla* isel for Thumb2
Added (sra (shl x, 16), 16) to the sext_16_node PatLeaf for ARM to
simplify some pattern matching. This has allowed several patterns
for smul* and smla* to be removed as well as making it easier to add
the matching for the corresponding instructions for Thumb2 targets.
Also added two Pat classes that are predicated on Thumb2 with the
hasDSP flag and UseMulOps flags. Updated the smul codegen test with
the wider range of patterns plus the ThumbV6 and ThumbV6T2 targets.
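
Roughly, the sign-extend-from-16-bits idiom that the extended PatLeaf now
recognizes looks like this (a sketch, not the actual codegen test):

  define i32 @smulbb(i32 %a, i32 %b) {
    ; (sra (shl x, 16), 16) is sign extension of the low halfword
    %a.lo = shl i32 %a, 16
    %a.sx = ashr i32 %a.lo, 16
    %b.lo = shl i32 %b, 16
    %b.sx = ashr i32 %b.lo, 16
    %mul  = mul i32 %a.sx, %b.sx
    ret i32 %mul
  }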

Differential Revision: https://reviews.llvm.org/D22908

llvm-svn: 277450
2016-08-02 12:44:27 +00:00
Ahmed Bougacha 45eb3b94d4 [GlobalISel] Don't RegBankSelect target-specific instructions.
They don't have types and should be using register classes.

llvm-svn: 277447
2016-08-02 11:41:16 +00:00
Ahmed Bougacha faf8e9f8c6 [GlobalISel] Don't legalize non-generic instructions.
They don't have types and should be legal.

llvm-svn: 277446
2016-08-02 11:41:09 +00:00
Matt Arsenault dfa7683d71 AArch64: Add missing branch relaxation tests
The branch relaxation pass has the worst test coverage
of any pass in AArch64. Add a few tests that hit some
large pieces of code in the pass.

llvm-svn: 277428
2016-08-02 07:41:05 +00:00
Craig Topper 05948fb36c [AVX-512] Correct ExeDomain for many AVX-512 instructions.
llvm-svn: 277416
2016-08-02 05:11:15 +00:00
Derek Schuff c64d7655b2 [WebAssembly] Support CFI for WebAssembly target
Summary: This patch implements CFI for WebAssembly. It modifies the
LowerTypeTest pass to pre-assign table indexes to functions that are
called indirectly, and lowers type checks to test against the
appropriate table indexes. It also modifies the WebAssembly backend to
support a special ".indidx" assembly directive that propagates the table
index assignments out to the linker.

Patch by Dominic Chen

Differential Revision: https://reviews.llvm.org/D21768

llvm-svn: 277398
2016-08-01 22:25:02 +00:00
Derek Schuff f41f67d3d9 [WebAssembly] Add asm.js-style exception handling support
Summary: This patch includes asm.js-style exception handling support for
WebAssembly. The WebAssembly MVP does not have any support for
unwinding or non-local control flow. In order to support C++ exceptions,
emscripten currently uses JavaScript exceptions along with some support
code (written in JavaScript) that is bundled by emscripten with the
generated code.
This scheme lowers exception-related instructions for wasm such that
wasm modules can be compatible with emscripten's existing scheme and
share the support code.

Patch by Heejin Ahn

Differential Revision: https://reviews.llvm.org/D22958

llvm-svn: 277391
2016-08-01 21:34:04 +00:00
Michael Kuperstein c97da7f3a4 [DAGCombine] Make sext(setcc) combine respect getBooleanContents
We used to combine "sext(setcc x, y, cc) -> (select (setcc x, y, cc), -1, 0)"
Instead, we should combine to (select (setcc x, y, cc), T, 0) where the value
of T is 1 or -1, depending on the type of the setcc, and getBooleanContents()
for the type if it is not i1.

This fixes PR28504.
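
As a rough example (which value of T is used depends on the target's
getBooleanContents for the setcc type), the combine fires on IR like:

  define <4 x i32> @sext_cmp(<4 x i32> %x, <4 x i32> %y) {
    ; sext(setcc) becomes select(setcc, T, 0) with T = 1 or -1
    %c = icmp slt <4 x i32> %x, %y
    %s = sext <4 x i1> %c to <4 x i32>
    ret <4 x i32> %s
  }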

llvm-svn: 277371
2016-08-01 19:39:49 +00:00
Ron Lieberman 8123b966cb [Hexagon] Generate vector printing instructions
llvm-svn: 277370
2016-08-01 19:36:39 +00:00
Evandro Menezes 82e245a202 [AArch64] Add support for Samsung Exynos M2 (NFC).
llvm-svn: 277364
2016-08-01 18:39:45 +00:00
Krzysztof Parzyszek ddafa2cd5f [Hexagon] Check for offset overflow when reserving scavenging slots
Scavenging slots were only reserved when pseudo-instruction expansion in
frame lowering created new virtual registers. It is possible to still
need a scavenging slot even if no virtual registers were created, in cases
where the stack is large enough to overflow instruction offsets.

llvm-svn: 277355
2016-08-01 17:15:30 +00:00
Daniel Sanders b3ae33c7a6 [mips][fastisel] Correct argument lowering for (f64, f64, i32) and similar.
Summary:
Allocating an AFGR64 shadows two GPR32's instead of just one.

This fixes an LNT regression detected by our internal buildbots.

Reviewers: sdardis

Subscribers: dsanders, sdardis, llvm-commits

Differential Revision: https://reviews.llvm.org/D23012

llvm-svn: 277348
2016-08-01 15:32:51 +00:00
Simon Pilgrim 46f119a59f [X86] Use implicit masking of SHLD/SHRD shift double instructions
Similar to the regular shift instructions, SHLD/SHRD only use the bottom bits of the shift value

llvm-svn: 277341
2016-08-01 12:11:43 +00:00
Diana Picus ab5a4c7dbb [AArch64] Return the correct size for TLSDESC_CALLSEQ
The branch relaxation pass is computing the wrong offsets because it assumes
TLSDESC_CALLSEQ eats up 4 bytes, when in fact it is lowered to an instruction
sequence taking up 16 bytes. This can become a problem in huge files with lots
of TLS accesses, as it may slowly move branch targets out of the range computed
by the branch relaxation pass.

Fixes PR24234 https://llvm.org/bugs/show_bug.cgi?id=24234

Differential Revision: https://reviews.llvm.org/D22870

llvm-svn: 277331
2016-08-01 08:38:49 +00:00
Craig Topper c48c029610 [AVX-512] Fix duplicate column in AVX512 execution dependency table that was preventing VMOVDQU32/VMOVDQA32 from being recognized. Fix a bug in the code that stops execution dependency fix from turning operations on 32-bit integer element types into operations on 64-bit integer element types.
llvm-svn: 277327
2016-08-01 07:55:33 +00:00
Craig Topper ddc96cd33d [X86] Regenerate a test to pick up shuffle comments that were added at some point.
llvm-svn: 277326
2016-08-01 07:55:24 +00:00
Hrvoje Varga 00d96ee7b9 [mips] Clang generates unaligned offset for MSA instruction st.d
Differential Revision: https://reviews.llvm.org/D19475

llvm-svn: 277323
2016-08-01 06:46:20 +00:00
Diana Picus 850043b25a [AArch64] Register passes so they can be run by llc
Initialize all AArch64-specific passes in the TargetMachine so they can be run
by llc. This can lead to conflicts in opt with some command line options that
share the same name as the pass, so I took this opportunity to do some cleanups:
* rename all relevant command line options from "aarch64-blah" to
  "aarch64-enable-blah" and update the tests accordingly
* run clang-format on their declarations
* move all these declarations to a common place (the TargetMachine) as opposed
  to having them scattered around (AArch64BranchRelaxation and
  AArch64AddressTypePromotion were the only offenders)

llvm-svn: 277322
2016-08-01 05:56:57 +00:00
Craig Topper 749a111f1e [AVX-512] Teach X86InstrInfo::getLargestLegalSuperClass to inflate to FR32X/FR64X if AVX512 is supported and VR128X/VR256X if VLX is supported.
Had to update a stack folding test to clobber the other 16 registers since this now made them get used instead of spilling.

llvm-svn: 277321
2016-08-01 05:31:50 +00:00
Craig Topper 9161e4ec22 [AVX512] Replace scalar fp arithmetic intrinsics with native IR in an AVX512 test. The intrinsics aren't lowered to AVX512 instructions.
The intrinsics really should be removed and autoupgraded.

llvm-svn: 277320
2016-08-01 04:29:16 +00:00
Simon Pilgrim 8ae4354df6 [X86][SSE] Regenerate frem tests
llvm-svn: 277311
2016-07-31 21:59:23 +00:00
Simon Pilgrim b089ba4c65 [X86][SSE] Regenerate fpext tests
llvm-svn: 277310
2016-07-31 21:55:33 +00:00
Craig Topper 7afdc0fb25 [AVX512] Always use EVEX encodings for 128/256-bit move instructions in getLoadStoreRegOpcode if VLX is supported.
llvm-svn: 277305
2016-07-31 20:20:05 +00:00
Craig Topper 4c53e60360 [AVX512] Add VLX packed move instructions to the execution dependency fix pass and update tests.
llvm-svn: 277304
2016-07-31 20:20:01 +00:00
Craig Topper 338ec9a0cb [AVX512] Stop treating VR512 specially in getLoadStoreRegOpcode and use the regular switch which already tried to handle it, but was unreachable. This has the added benefit of enabling aligned loads/stores if the stack is aligned.
llvm-svn: 277302
2016-07-31 20:19:53 +00:00
Craig Topper 2a6bbb8203 [AVX512] Add X86::VR512RegClassID to X86RegisterInfo::getLargestLegalSuperClass.
llvm-svn: 277301
2016-07-31 20:19:50 +00:00
Simon Pilgrim 6be48e4aa7 [X86] Improve 64-bit shifts on 32-bit targets (PR14593)
As discussed on PR14593, this patch adds support for lowering to SHLD/SHRD from the patterns generated by DAGTypeLegalizer::ExpandShiftWithKnownAmountBit.
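
A rough sketch of the kind of case involved, on a 32-bit target with the
shift amount provably below 32 so that ExpandShiftWithKnownAmountBit
applies:

  define i64 @shl64_small(i64 %x, i64 %n) {
    ; masking the amount to [0,31] lets the expansion take the "< 32" path,
    ; whose cross-half carry can now be lowered to SHLD on i686
    %amt = and i64 %n, 31
    %r   = shl i64 %x, %amt
    ret i64 %r
  }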

Differential Revision: https://reviews.llvm.org/D23000

llvm-svn: 277299
2016-07-31 19:50:45 +00:00
Simon Pilgrim 64845bb8b4 [X86] Add tests for the lowering of SHLD/SHRD from manual patterns similar to those generated by ExpandShiftWithKnownAmountBit
Test for add(v,v) as well as shl(v,1)

llvm-svn: 277293
2016-07-31 17:51:37 +00:00
Craig Topper 00d34ed64f [AVX-512] Don't let ExeDependencyFix pass convert VPANDD/Q to VPANDPS/PD unless DQI instructions are supported. Same for ANDN, OR, and XOR.
Thanks to Igor Breger for pointing out my mistake.

llvm-svn: 277292
2016-07-31 17:15:07 +00:00
Simon Pilgrim 1e096b3a7a [X86] Add tests for the lowering of SHLD/SHRD from manual patterns
As discussed on D23000

llvm-svn: 277291
2016-07-31 17:11:49 +00:00
Simon Pilgrim fbe0fcb009 [X86][SSE] Regenerate vshift tests
llvm-svn: 277278
2016-07-30 20:28:02 +00:00
Simon Pilgrim 2e9de1cebb [X86][AVX] Added signum example test functions from PR13248
These are good examples of missed combine opportunities with zero/all bit vector compare results
 

llvm-svn: 277274
2016-07-30 16:29:19 +00:00
Simon Pilgrim b9e1f9c5c5 [X86][X87] Add vector arithmetic tests for targets with sse disabled
To make sure the X86_64 target isn't doing anything stupid

llvm-svn: 277272
2016-07-30 16:01:30 +00:00
Simon Pilgrim cf49fa3251 [X86][SSE] Let 64-bit targets use the fast 2i32-2f32 UINT_TO_FP conversion as well as 32-bit
The 2i32-2i64 legalization means that we can use the slightly quicker double bits + fptrunc approach for the same results
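
Roughly (a sketch), the conversion in question is:

  define <2 x float> @u2f(<2 x i32> %v) {
    ; on x86-64, <2 x i32> legalizes via <2 x i64>, so the quicker
    ; uint -> double -> fptrunc sequence now applies here as well
    %f = uitofp <2 x i32> %v to <2 x float>
    ret <2 x float> %f
  }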

llvm-svn: 277271
2016-07-30 14:06:59 +00:00
Matt Arsenault 749035b7b1 AMDGPU: Fix shouldConvertConstantLoadToIntImm behavior
This should really be true for any immediate, not just
inline ones.

llvm-svn: 277260
2016-07-30 01:40:36 +00:00
Weiming Zhao 812fde3603 DAG: avoid duplicated truncating for sign extended operand
Summary:
When performing a cmp for EQ/NE and the operand is sign extended, we can
avoid the truncation if the number of bits to be tested is no less than
the number of original bits.
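
A hedged sketch of the shape of pattern involved (assuming the bits the
compare actually tests are covered by the original narrow value):

  define i1 @eq_sext(i16 %x) {
    ; the EQ test only depends on the original 16 bits, so the sign-extended
    ; value does not need to be truncated back before the compare
    %e = sext i16 %x to i32
    %c = icmp eq i32 %e, 42
    ret i1 %c
  }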

Reviewers: eli.friedman

Subscribers: eli.friedman, aemerson, nemanjai, t.p.northover, llvm-commits

Differential Revision: https://reviews.llvm.org/D22933

llvm-svn: 277252
2016-07-29 23:33:48 +00:00
Tim Northover 5fc93b75d9 GlobalISel: translate "unreachable" (into nothing)
Easiest instruction ever!

llvm-svn: 277225
2016-07-29 22:41:55 +00:00
Tim Northover 5fb414d870 GlobalISel: support translation of intrinsic calls.
These come in two variants for now: G_INTRINSIC and G_INTRINSIC_W_SIDE_EFFECTS.
We may decide to split the latter up with finer-grained restrictions later, if
necessary.

llvm-svn: 277224
2016-07-29 22:32:36 +00:00
Michael Kuperstein f396b4c40d [X86] Match PSADBW in straight-line code
Up until now, we only had code to match PSADBW patterns that look like what
comes out of the loop vectorizer - a partial reduction inside the loop body
that gets fed into a horizontal operation in a different basic block.

This adds support for straight-line patterns, like those generated by the
SLP vectorizer.
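
Very roughly, the straight-line shape (as the SLP vectorizer might emit it)
is an element-wise absolute difference followed by a shuffle-based
horizontal add; a sketch:

  define i32 @sad8(<8 x i8> %a, <8 x i8> %b) {
    %az  = zext <8 x i8> %a to <8 x i32>
    %bz  = zext <8 x i8> %b to <8 x i32>
    %d   = sub <8 x i32> %az, %bz
    %neg = sub <8 x i32> zeroinitializer, %d
    %pos = icmp sgt <8 x i32> %d, <i32 -1, i32 -1, i32 -1, i32 -1, i32 -1, i32 -1, i32 -1, i32 -1>
    %abs = select <8 x i1> %pos, <8 x i32> %d, <8 x i32> %neg
    ; horizontal add of the eight lanes
    %h1  = shufflevector <8 x i32> %abs, <8 x i32> undef, <8 x i32> <i32 4, i32 5, i32 6, i32 7, i32 undef, i32 undef, i32 undef, i32 undef>
    %s1  = add <8 x i32> %abs, %h1
    %h2  = shufflevector <8 x i32> %s1, <8 x i32> undef, <8 x i32> <i32 2, i32 3, i32 undef, i32 undef, i32 undef, i32 undef, i32 undef, i32 undef>
    %s2  = add <8 x i32> %s1, %h2
    %h3  = shufflevector <8 x i32> %s2, <8 x i32> undef, <8 x i32> <i32 1, i32 undef, i32 undef, i32 undef, i32 undef, i32 undef, i32 undef, i32 undef>
    %s3  = add <8 x i32> %s2, %h3
    %sum = extractelement <8 x i32> %s3, i32 0
    ret i32 %sum
  }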

Differential Revision: https://reviews.llvm.org/D22889

llvm-svn: 277219
2016-07-29 21:45:51 +00:00
Michael Kuperstein e9ac9b9aaf [Hexagon] Fix test that uses -debug-only to require asserts.
llvm-svn: 277218
2016-07-29 21:44:33 +00:00
Simon Pilgrim f107ffa8f0 [X86][AVX] Fix VBROADCASTF128 selection bug (PR28770)
Support for lowering to VBROADCASTF128 etc. in D22460 was not correctly ensuring that the only users of the 128-bit vector load were the insertions of the vector into the lower/upper subvectors.

llvm-svn: 277214
2016-07-29 21:05:10 +00:00
Tim Northover 6b3bd61283 CodeGen: add new "intrinsic" MachineOperand kind.
This will be used during GlobalISel, where we need a more robust and readable
way to write tests than a simple immediate ID.

llvm-svn: 277209
2016-07-29 20:32:59 +00:00
Eli Bendersky c08ff1267f Add a REQUIRES: assert on a Lanai test that uses a -debug-only flag
llvm-svn: 277204
2016-07-29 19:35:22 +00:00
Simon Pilgrim 455460f310 Fixed line endings
llvm-svn: 277199
2016-07-29 18:58:57 +00:00
Andrew Kaylor b99d1cc7ed Recommitting r275284: add support to inline __builtin_mempcpy
Patch by Sunita Marathe

Third try, now following fixes to MSan to handle mempcpy in such a way that this commit won't break the MSan buildbots. (Thanks, Evgenii!)

llvm-svn: 277189
2016-07-29 18:23:18 +00:00
Kyle Butt 02d8d054ab Codegen: MachineBlockPlacement Improve probability layout.
The following pattern was being laid out poorly:

              A
             / \
            B   C
           / \ / \
          D   E   ? (Doesn't matter)

Where A->B is far more likely than A->C, and prob(B->D) = prob(B->E)

The current algorithm gives:
A,B,C,E (D goes on worklist)

It does this even if C has a frequency count of 0. This patch
adjusts the layout calculation so that if freq(B->E) >> freq(C->E)
then we go ahead and lay out E rather than C. Fallthrough half the time
is better than fallthrough never, or fallthrough very rarely. The
resulting layout is:

A,B,E, (C and D are in a worklist)

llvm-svn: 277187
2016-07-29 18:09:28 +00:00
Kyle Butt af324f76ad Tests: Add branch weights to non-layout tests.
Add branch weights to a few tests that aren't testing layout to make them less
sensitive to changes in the layout algorithm.

llvm-svn: 277186
2016-07-29 18:09:25 +00:00
Tim Northover 69c2ba546f GlobalISel: add generic conditional branch.
Just the basic equivalent to DAG's condbr for now, we'll get to things like
br_cc when we start doing more legalization.

llvm-svn: 277184
2016-07-29 17:58:00 +00:00
Krzysztof Parzyszek b0c4376697 [Hexagon] Testcase for not merging stores into a misaligned store
The DAG combiner will try to merge consecutive stores into a bigger
store, unless the resulting store is not fast. Misaligned vector stores
are allowed on Hexagon, but are not fast. Add a testcase to make sure
this type of merging does not occur.

Patch by Pranav Bhandarkar.

llvm-svn: 277182
2016-07-29 17:55:37 +00:00
Krzysztof Parzyszek 3e137e3429 Revert r277178, the actual change had already been applied
Will submit another patch with the testcase only.

llvm-svn: 277180
2016-07-29 17:50:47 +00:00
Krzysztof Parzyszek 68fe439d06 [Hexagon] Misaligned loads and stores are not fast
The DAG combiner tries to merge stores to adjacent vector wide memory
locations by creating stores which are integral multiples of the vector
width. Discourage this by informing it that this is slow. This should
not affect legalization passes, because all of them ignore the "Fast"
argument.

Patch by Pranav Bhandarkar.

llvm-svn: 277178
2016-07-29 17:45:16 +00:00
Ahmed Bougacha 6db3cfe2da [AArch64][GlobalISel] Select G_XOR.
llvm-svn: 277173
2016-07-29 16:56:25 +00:00
Ahmed Bougacha 784e3423e6 [GlobalISel] Add G_XOR.
llvm-svn: 277172
2016-07-29 16:56:20 +00:00
Ahmed Bougacha 7adfac56b3 [AArch64][GlobalISel] Select G_LOAD/G_STORE.
Mostly straightforward as we ignore addressing modes and just
use the base + unsigned immediate offset (always 0) variants.

This currently fails to select extloads because we have yet to
agree on a representation.

llvm-svn: 277171
2016-07-29 16:56:16 +00:00
Brendon Cahoon 254f889dc5 MachinePipeliner pass that implements Swing Modulo Scheduling
Software pipelining is an optimization for improving ILP by
overlapping loop iterations. Swing Modulo Scheduling (SMS) is
an implementation of software pipelining that attempts to
reduce register pressure and generate efficient pipelines with
a low compile-time cost.

This implementation of SMS is a target-independent back-end pass.
When enabled, the pass should run just prior to the register
allocation pass, while the machine IR is in SSA form. If the pass
is successful, then the original loop is replaced by the optimized
loop. The optimized loop contains one or more prolog blocks, the
pipelined kernel, and one or more epilog blocks.

This pass is enabled for Hexagon only. To enable for other targets,
a couple of target specific hooks must be implemented, and the
pass needs to be called from the target's TargetMachine
implementation.

Differential Revision: http://reviews.llvm.org/D16829

llvm-svn: 277169
2016-07-29 16:44:44 +00:00
Krzysztof Parzyszek 0bd55a7608 [Hexagon] Custom lower VECTOR_SHUFFLE and EXTRACT_SUBVECTOR for HVX
If the mask of a vector shuffle has alternating odd or even numbers
starting with 1 or 0 respectively up to the largest possible index
for the given type in the given HVX mode (single or double), we can
generate vpacko or vpacke instruction respectively.

E.g.
  %42 = shufflevector <32 x i16> %37, <32 x i16> %41,
                      <32 x i32> <i32 1, i32 3, ..., i32 63>
  is %42.h = vpacko(%41.w, %37.w)

Patch by Pranav Bhandarkar.

llvm-svn: 277168
2016-07-29 16:44:27 +00:00
Krzysztof Parzyszek 0006e1afdd [Hexagon] Improve balancing of address calculation
Rebalances address calculation trees and applies Hexagon-specific
optimizations to the trees to improve instruction selection.

Patch by Tobias Edler von Koch.

llvm-svn: 277151
2016-07-29 15:15:35 +00:00
David L Kreitzer 8b959e5cfa Avoid unnecessary 32-bit to 64-bit zero extensions following
32-bit CMOV instructions on x86_64. The 32-bit CMOV implicitly
zero extends.
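
A small sketch of the affected pattern:

  define i64 @select_zext(i32 %a, i32 %b, i32 %x) {
    ; the 32-bit cmov already zeroes the upper 32 bits, so the
    ; separate zero extension to i64 is unnecessary
    %cmp = icmp sgt i32 %x, 0
    %sel = select i1 %cmp, i32 %a, i32 %b
    %ext = zext i32 %sel to i64
    ret i64 %ext
  }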

Differential Revision: https://reviews.llvm.org/D22941

llvm-svn: 277148
2016-07-29 15:09:54 +00:00
Daniel Sanders cbaca42a03 Re-commit: [mips][fastisel] Handle 0-4 arguments without SelectionDAG.
Summary:
Implements fastLowerArguments() to avoid the need to fall back on
SelectionDAG for 0-4 argument functions that don't do tricky things like
passing double in a pair of i32's.

This allows us to move all except one test to -fast-isel-abort=3. The
remaining one has function prototypes of the form 'i32 (i32, double, double)'
which requires floats to be passed in GPR's.

The previous commit had an uninitialized variable that caused the incoming
argument region to have undefined size. This has been fixed.

Reviewers: sdardis

Subscribers: dsanders, llvm-commits, sdardis

Differential Revision: https://reviews.llvm.org/D22680

llvm-svn: 277136
2016-07-29 12:27:28 +00:00
Simon Pilgrim cb780b32a3 [X86][SSE] Optimize the truncation of vector comparison results with PACKSS
We currently default to using either generic shuffles or MASK+PACKUS/PACKSS to truncate all integer vectors. For vector comparisons, we know that the result will be either all or zero bits in every element, which can be efficiently truncated by directly using PACKSS to repeatedly halve the size of each element.

Due to the limited input values (-1 or 0) we don't need to account for vector element size, so for simplicity we just use the PACKSS(vXi16,vXi16) implementation in all cases. Additionally for AVX2 PACKSS of 256bit data we must perform a PERMQ shuffle to reorder the data into the correct order. I did investigate performing a single shuffle after all the PACKSS calls but the need to cross 128bit lanes makes this difficult to achieve efficiently.

We avoid performing this on AVX512 as it should have better alternative truncation instructions.
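
For illustration (a sketch, not one of the added tests), the kind of pattern
this improves is a vector compare whose all-ones-or-zero result is narrowed
to a smaller element type:

  define <8 x i16> @cmp_trunc(<8 x i32> %a, <8 x i32> %b) {
    ; every lane is all-ones or zero, so repeated PACKSS can halve the
    ; element size directly (plus a PERMQ to reorder lanes on AVX2)
    %c = icmp sgt <8 x i32> %a, %b
    %r = sext <8 x i1> %c to <8 x i16>
    ret <8 x i16> %r
  }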

Differential Revision: https://reviews.llvm.org/D22814

llvm-svn: 277132
2016-07-29 10:23:10 +00:00
Prakhar Bahuguna d1233e857e [Thumb] Emit Thumb move in both Thumb modes for struct_byval predicates
Summary:
The MOV/MOVT instructions being chosen for struct_byval predicates were
conditional only on Thumb2, resulting in an ARM MOV/MOVT instruction
being incorrectly emitted in Thumb1 mode. This is especially apparent
with v8-m.base targets. This patch ensures that Thumb instructions are
emitted in both Thumb modes.

Reviewers: rengolin, t.p.northover

Subscribers: llvm-commits, aemerson, rengolin

Differential Revision: https://reviews.llvm.org/D22865

llvm-svn: 277128
2016-07-29 09:16:46 +00:00
Craig Topper e4f868ea16 [AVX512] Mark EVEX VMOVSSrm and VMOVSDrm as canFoldAsLoad and isReMaterializable.
llvm-svn: 277120
2016-07-29 06:06:04 +00:00
Craig Topper 07aa37039e [AVX512] Add AVX512 run lines to some tests for scalar fma/add/sub/mul/div and regenerate. Follow up commits will bring AVX512 code up to the same quality as AVX/SSE.
llvm-svn: 277118
2016-07-29 06:05:58 +00:00
Craig Topper c7de3a1018 [AVX512] Remove the intrinsic forms of VMOVSS/VMOVSD. We don't need two different forms of 'rr' and 'rm'. This matches SSE/AVX.
I'm not convinced the patterns for the rm_Int were correct anyway. They had a tied source that shouldn't exist for the unmasked version. The load form of MOVSS always zeros the most significant bits. I've left the patterns off the masked load instructions as I'm not sure what the correct pattern should be and we don't have any tests currently. Nor do we implement masked scalar load intrinsics in clang currently.

llvm-svn: 277098
2016-07-29 02:49:08 +00:00
Changpeng Fang 26fb9d268b AMDGPU/SI: Don't handle a loop if there is no loop at all for a terminator BB.
Differential Revision: http://reviews.llvm.org/D22021

Reviewed by: arsenm

llvm-svn: 277073
2016-07-28 23:01:45 +00:00
Krzysztof Parzyszek 6400dec5ab Fix build breaks after r277028
llvm-svn: 277031
2016-07-28 20:25:21 +00:00
Krzysztof Parzyszek 167d918225 [Hexagon] Implement MI-level constant propagation
llvm-svn: 277028
2016-07-28 20:01:59 +00:00
Krzysztof Parzyszek c43644d332 [Hexagon] Insert CFI instructions before throwing calls
Normally, CFI instructions should be inserted after allocframe, but
if allocframe is in the same packet with a call, the CFI instructions
should be inserted before that packet.

llvm-svn: 277020
2016-07-28 19:13:46 +00:00
Ahmed Bougacha 8550509b64 [AArch64][GlobalISel] Select G_BR.
This is the first unsized instruction we support; move down the
'sized' check to binops.

llvm-svn: 277007
2016-07-28 17:15:15 +00:00
Ahmed Bougacha d760de0b32 [MIRParser] Accept unsized generic instructions.
Since r276158, we require generic instructions to have a sized type.
G_BR doesn't; relax the restriction.

llvm-svn: 277006
2016-07-28 17:15:12 +00:00
Ahmed Bougacha d7748d6491 [AArch64][GlobalISel] Select GPR G_SUB.
llvm-svn: 277003
2016-07-28 16:58:35 +00:00
Ahmed Bougacha 61a7928dde [AArch64][GlobalISel] Select GPR G_AND.
llvm-svn: 277002
2016-07-28 16:58:31 +00:00
Ahmed Bougacha 46c05fc861 [GlobalISel] Remove types on selected insts instead of using LLT().
LLT() has a particular meaning: it's one invalid type. But we really
want selected instructions to have no type whatsoever.

Also verify that types don't linger after ISel, and enable the verifier
on the AArch64 select test.

llvm-svn: 277001
2016-07-28 16:58:27 +00:00
Ahmed Bougacha 07994ec39b [AArch64][GlobalISel] Remove 'alignment' from MIR tests. NFC.
llvm-svn: 277000
2016-07-28 16:58:21 +00:00
Wei Ding 07e03712d3 AMDGPU : Add intrinsics for compare with the full wavefront result
Differential Revision: http://reviews.llvm.org/D22482

llvm-svn: 276998
2016-07-28 16:42:13 +00:00
Daniel Sanders 6e74651658 Revert r276982 and r276984: [mips][fastisel] Handle 0-4 arguments without SelectionDAG
It seems that the stack offset in callabi.ll varies between machines. I'll look
into it.

llvm-svn: 276989
2016-07-28 15:37:42 +00:00
Craig Topper 7e27885f69 [X86] Remove CustomInserter for FMA3 instructions. Looks like since we got full commuting support for FMAs after this was added, the coalescer can now get this right on its own.
Differential Revision: https://reviews.llvm.org/D22799

llvm-svn: 276987
2016-07-28 15:28:56 +00:00
Daniel Sanders e0b529f619 [mips][fastisel] Handle 0-4 arguments without SelectionDAG.
Summary:
Implements fastLowerArguments() to avoid the need to fall back on
SelectionDAG for 0-4 argument functions that don't do tricky things like
passing double in a pair of i32's.

This allows us to move all except one test to -fast-isel-abort=3. The
remaining one has function prototypes of the form 'i32 (i32, double, double)'
which requires floats to be passed in GPR's.

Reviewers: sdardis

Subscribers: dsanders, llvm-commits, sdardis

Differential Revision: https://reviews.llvm.org/D22680

llvm-svn: 276982
2016-07-28 14:55:28 +00:00
Nicolai Haehnle 3b572002a2 AMDGPU: add execfix flag to SI_ELSE
Summary:
SI_ELSE is lowered into two parts:

s_or_saveexec_b64 dst, src (at the start of the basic block)

s_xor_b64 exec, exec, dst (at the end of the basic block)

The idea is that dst contains the exec mask of the preceding IF block. It can
happen that SIWholeQuadMode decides to switch from WQM to Exact mode inside
the basic block that contains SI_ELSE, in which case it introduces an instruction

s_and_b64 exec, exec, s[...]

which masks out bits that can correspond to both the IF and the ELSE paths.
So the resulting sequence must be:

s_or_saveexec_b64 dst, src

s_and_b64 exec, exec, s[...] <-- added by SIWholeQuadMode
s_and_b64 dst, dst, exec <-- added by SILowerControlFlow

s_xor_b64 exec, exec, dst

Whether to add the additional s_and_b64 dst, dst, exec is currently determined
via the ExecModified tracking. With this change, it is instead determined by
an additional flag on SI_ELSE which is set by SIWholeQuadMode.

Finally: It also occurred to me that an alternative approach for the long run
is for SILowerControlFlow to unconditionally emit

s_or_saveexec_b64 dst, src

...

s_and_b64 dst, dst, exec
s_xor_b64 exec, exec, dst

and have a pass that detects and cleans up the "redundant AND with exec"
pattern where possible. This could be useful anyway, because we also add
instructions

s_and_b64 vcc, exec, vcc

before s_cbranch_scc (in moveToALU), and those are often redundant. I have
some pending changes to how KILL is lowered that could also benefit from
such a cleanup pass.

In any case, this current patch could help in the short term with the whole
ExecModified business.

Reviewers: tstellarAMD, arsenm

Subscribers: arsenm, llvm-commits, kzhuravl

Differential Revision: https://reviews.llvm.org/D22846

llvm-svn: 276972
2016-07-28 11:39:24 +00:00
Krzysztof Parzyszek 06a2b6b1ee [Hexagon] Find speculative loop preheader in hardware loop generation
Before adding a new preheader block, check if there is a candidate block
where the loop setup could be placed speculatively. This will be off by
default.

llvm-svn: 276919
2016-07-27 21:20:54 +00:00
Krzysztof Parzyszek 5241b8efcf [Hexagon] Do not optimize volatile stack spill slots
llvm-svn: 276916
2016-07-27 20:50:42 +00:00
Andrew Kaylor 9155354ff2 Revert EH-specific checks in BranchFolding that were causing blow ups in compile time.
Differential Revision: https://reviews.llvm.org/D22839

llvm-svn: 276898
2016-07-27 17:55:33 +00:00
Tim Northover 8d2f52e035 GlobalISel: support zero-sized allocas
All allocas must be at least 1 byte at the MachineIR level so we allocate just
one byte.

llvm-svn: 276897
2016-07-27 17:47:54 +00:00
Simon Pilgrim 7efafbc583 [X86][SSE] Updated test so that both are applying the post-multiply
This is to ensure that there are no diffs other than due to buildvector/legalization

llvm-svn: 276882
2016-07-27 15:30:20 +00:00
Ahmed Bougacha 6756a2c953 [GlobalISel] Introduce an instruction selector.
And implement it for AArch64, supporting x/w ADD/OR.

Differential Revision: https://reviews.llvm.org/D22373

llvm-svn: 276875
2016-07-27 14:31:55 +00:00
Simon Pilgrim 10bf0ff879 [DAGCombiner] Use APInt directly to detect out of range shift constants
Using getZExtValue() will assert if the value doesn't fit into uint64_t. SHL was already doing this; I've just updated ASHR/LSHR to match.

As mentioned on D22726

llvm-svn: 276855
2016-07-27 10:30:55 +00:00
Matt Arsenault e3862cdc93 AMDGPU: Use rcp for fdiv 1, x with fpmath metadata
Using rcp should be OK for safe math usually, so this
should not be replacing the original fdiv.

llvm-svn: 276823
2016-07-26 23:25:44 +00:00
Matt Arsenault 918e81c3d4 AMDGPU: Add more tests for LDS size with occupancy
llvm-svn: 276821
2016-07-26 23:15:59 +00:00