Commit Graph

14986 Commits

Author SHA1 Message Date
Dan Gohman d85ab7fc10 [WebAssembly] Don't use setRequiresStructuredCFG(true).
While we still do want reducible control flow, the RequiresStructuredCFG
flag imposes stricter structural constraints than WebAssembly wants.
Unsetting this flag enables critical edge splitting and tail merging.

Also, disable TailDuplication explicitly, as it doesn't support virtual
registers, and was previously only disabled by the RequiresStructuredCFG
flag.

llvm-svn: 261190
2016-02-18 06:32:53 +00:00
Tim Northover 7687bcee4a AArch64: always clear kill flags up to last eliminated copy
After r261154, we were only clearing flags if the known-zero register was
originally live-in to the basic block. However, we have to do it even when it
is not live-in if more than one COPY has been eliminated; otherwise the user
of the first COPY may still have <kill> marked.

E.g.

BB#N:
    %X0 = COPY %XZR
    STRXui %X0<kill>, <fi#0>
    %X0 = COPY %XZR
    STRXui %X0<kill>, <fi#1>

We can eliminate both copies, X0 is not live-in, but we must clear the kill on
the first store.

Unfortunately, I've been unable to come up with a non-fragile test for this.
I've only seen it in the wild with regalloc-created spills, and attempts to
reproduce that in a reasonable way run afoul of COPY coalescing. Even volatile
asm clobbers were moved around. Should fix the aarch64 bot though.

llvm-svn: 261175
2016-02-17 23:07:04 +00:00
Tim Northover 3f2285615a AArch64: improve redundant copy elimination.
Mostly, this fixes the bug where, if the CBZ guaranteed Xn to be zero but Wn
was the register actually used, we didn't sort out the use-def chain properly.

I've also made it check more than just the last instruction for a compatible
CBZ (so it can cope without fallthroughs). I'd have liked to do that
separately, but it helps with writing the test.

Finally, I removed some custom loops in favour of MachineInstr helpers and
refactored the control flow to flatten it and avoid possibly quadratic
iterations in blocks with many copies. NFC for these, just a general tidy-up.

llvm-svn: 261154
2016-02-17 21:16:53 +00:00
Nico Weber e6154ffbe0 Revert r261070, it caused PR26652 / PR26653.
llvm-svn: 261127
2016-02-17 18:47:29 +00:00
David Majnemer 7e5937b775 [WinEH] Optimize WinEH state stores
32-bit x86 Windows targets use a linked list of nodes allocated on the
stack, referenced via thread-local storage.  The personality routine
interprets one of the fields in the node as a 'state number' which
indicates where the personality routine should transfer control.

State transitions are possible only before call-sites which may throw
exceptions.  Our previous scheme had us update the state number before
all call-sites which may throw.

Instead, we can try to minimize the number of times we need to store by
reasoning about the nearest store which dominates the current call-site.
If the last store agrees with the current call-site, then we know that
the state-update is redundant and can be elided.

This is largely straightforward: an RPO walk of the blocks allows us to
correctly forward propagate the information when the function is a DAG.
Currently, loops are not handled optimally and may trigger superfluous
state stores.
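
For illustration, here is a minimal, self-contained sketch of that forward
propagation over an RPO-numbered DAG of blocks (the Block structure, the
UnknownState sentinel and the printed messages are invented for this example;
this is not the in-tree WinEH pass code):

    #include <cstddef>
    #include <cstdio>
    #include <vector>

    // Toy CFG block: the EH state each call site needs, plus successor edges.
    struct Block {
      std::vector<int> CallStates; // state required before each may-throw call
      std::vector<int> Succs;      // successor block indices
    };

    constexpr int UnknownState = -1; // state not known on entry to a block

    // Blocks are assumed to be numbered in reverse post-order, so in a DAG
    // every predecessor is visited before its successors.
    void elideRedundantStateStores(const std::vector<Block> &Blocks) {
      std::vector<int> InState(Blocks.size(), UnknownState);
      std::vector<bool> Seen(Blocks.size(), false);
      for (std::size_t B = 0; B != Blocks.size(); ++B) {
        int Cur = InState[B];
        for (int Required : Blocks[B].CallStates) {
          if (Required == Cur) {
            std::printf("block %zu: state %d already set, store elided\n", B, Required);
          } else {
            std::printf("block %zu: store state %d\n", B, Required);
            Cur = Required; // this store now dominates later call sites
          }
        }
        // Propagate the outgoing state; successors reached with conflicting
        // states must treat their incoming state as unknown.
        for (int S : Blocks[B].Succs) {
          if (!Seen[S]) { InState[S] = Cur; Seen[S] = true; }
          else if (InState[S] != Cur) InState[S] = UnknownState;
        }
      }
    }

    int main() {
      // Diamond CFG 0 -> {1,2} -> 3: both paths leave state 2 behind, so the
      // store before the call in block 3 is redundant and gets elided.
      std::vector<Block> CFG = {{{1}, {1, 2}}, {{2}, {3}}, {{2}, {3}}, {{2}, {}}};
      elideRedundantStateStores(CFG);
    }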

Differential Revision: http://reviews.llvm.org/D16763

llvm-svn: 261122
2016-02-17 18:37:11 +00:00
Justin Lebar b5c7b1c00f [NVPTX] Test that MachineSink won't sink across llvm.cuda.syncthreads.
Summary:
The syncthreads MI is modeled as mayread/maywrite -- convergence doesn't
even come into play here.  Nonetheless this property is highly implicit
in the tablegen files, so a test seems appropriate.

Reviewers: jingyue

Subscribers: llvm-commits, jholewinski

Differential Revision: http://reviews.llvm.org/D17319

llvm-svn: 261114
2016-02-17 17:46:52 +00:00
Justin Lebar d596ec93ce [NVPTX] Annotate call machine instructions as calls.
Summary:
Otherwise we'll try to do unsafe optimizations on these MIs, such as
sinking loads below calls.

(I suspect that this is not the only bug in the NVPTX instruction
tablegen files; I need to comb through them.)

Reviewers: jholewinski, tra

Subscribers: jingyue, jhen, llvm-commits

Differential Revision: http://reviews.llvm.org/D17315

llvm-svn: 261113
2016-02-17 17:46:50 +00:00
Mitch Bodart 3f42095776 Fix some erroneous lit test failures due to an unlucky working directory name.
Differential Revision:  http://reviews.llvm.org/D17044

llvm-svn: 261104
2016-02-17 16:35:18 +00:00
Simon Pilgrim 07d72f4f49 [X86][SSE] Update pshufb mask tests.
We are getting better at combining constant pshufb masks - use a real input instead of undef.

Add test for decoding multi-use bitcasted masks as well (actual support will come soon).

llvm-svn: 261101
2016-02-17 15:52:39 +00:00
Simon Pilgrim 43bd887090 [X86][SSE] Update pshufb mask test to use a real input instead of undef
We are getting better at combining constant pshufb masks - this test would've failed once we decode bitcasted masks as well.

llvm-svn: 261095
2016-02-17 14:56:58 +00:00
Igor Breger ac02f1bb62 AVX512: Fix LowerMSCATTER() return value.
Bug description:
  The bug was discovered when a test was compiled with -O0.
  When the scatter result is the DAG root, VectorLegalizer failed (hit an
  assert) because LowerMSCATTER() returned the kmask as its result.
Change LowerMSCATTER() to return the chain, as the original node does.

Differential Revision: http://reviews.llvm.org/D17331

llvm-svn: 261090
2016-02-17 14:04:33 +00:00
Simon Pilgrim c5b5dcb985 [X86][AVX] Support bit-blend integer shuffles for 256-bit integer vectors
AVX1 doesn't support the shuffling of 256-bit integer vectors. For 32/64-bit elements we get around this by shuffling as float/double but for 8/16-bit elements (assuming they can't widen) we currently just split, shuffle as 128-bit vectors and concatenate the results back.

This patch adds the ability to lower using the bit-blend patterns before defaulting to the splitting behaviour.

Part 2 of 2
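
For reference, a minimal scalar model of the bit-blend pattern itself
(illustrating the semantics only, not the X86 lowering code): a constant
all-ones/all-zeros per-element mask plus AND/ANDN/OR reproduces the two-input
blend using only bitwise ops, which AVX1 does provide for 256-bit vectors via
its float-domain logic instructions.

    #include <array>
    #include <cstddef>
    #include <cstdint>
    #include <cstdio>

    // Bit-blend of two byte vectors: Mask[i] is 0xFF where the result takes
    // A[i] and 0x00 where it takes B[i], i.e. (A & M) | (B & ~M).
    template <std::size_t N>
    std::array<uint8_t, N> bitBlend(const std::array<uint8_t, N> &A,
                                    const std::array<uint8_t, N> &B,
                                    const std::array<uint8_t, N> &Mask) {
      std::array<uint8_t, N> R{};
      for (std::size_t I = 0; I != N; ++I)
        R[I] = static_cast<uint8_t>((A[I] & Mask[I]) | (B[I] & ~Mask[I]));
      return R;
    }

    int main() {
      std::array<uint8_t, 4> A{1, 2, 3, 4}, B{9, 8, 7, 6};
      std::array<uint8_t, 4> M{0xFF, 0x00, 0xFF, 0x00}; // take A, B, A, B
      auto R = bitBlend(A, B, M);
      std::printf("%d %d %d %d\n", R[0], R[1], R[2], R[3]); // prints: 1 8 3 6
    }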

Differential Revision: http://reviews.llvm.org/D17292

llvm-svn: 261082
2016-02-17 10:50:06 +00:00
Simon Pilgrim a50e8d3627 [X86][AVX] Support bit-mask integer shuffles for 256-bit integer vectors
AVX1 doesn't support the shuffling of 256-bit integer vectors. For 32/64-bit elements we get around this by shuffling as float/double but for 8/16-bit elements (assuming they can't widen) we currently just split, shuffle as 128-bit vectors and concatenate the results back.

This patch adds the ability to lower using the bit-mask patterns before defaulting to the splitting behaviour. In some cases this ends up matching what AVX2 would do anyhow or what AVX1 does on the split vectors.

Part 1 of 2

Differential Revision: http://reviews.llvm.org/D17292

llvm-svn: 261081
2016-02-17 10:37:49 +00:00
Cong Hou bbd4e3b400 Detect vector reduction operations just before instruction selection.
This patch detects vector reductions before instruction selection. Vector
reductions are vectorized reduction operations, and for such operations we have
freedom to reorganize the elements of the result as long as the reduction of them
stays unchanged. This will enable some reduction pattern recognition during
instruction combine, such as SAD/dot-product on X86. A flag is added to
SDNodeFlags to mark those vector reduction nodes to be checked during instruction
combine.

To detect those vector reductions, we search def-use chains starting from the
given instruction, and check if all uses fall into two categories:

1. Reduction with another vector.
2. Reduction on all elements.

Case 2 is detected by recognizing the pattern that the loop vectorizer
generates to reduce all elements in the vector outside of the loop, which
consists of several ShuffleVector instructions and one ExtractElement
instruction.
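
For context, a minimal scalar model of that pattern in case 2 (assuming a
power-of-two element count; this models the shuffles the vectorizer emits, it
is not the detection code itself):

    #include <cstddef>
    #include <cstdio>
    #include <vector>

    // Model of the vectorizer's reduction tail: repeatedly "shuffle" the high
    // half of the vector onto the low half and add, then extract lane 0. Each
    // outer iteration stands for one ShuffleVector + vector add pair; the
    // final read of V[0] stands for the ExtractElement.
    int reduceAllElements(std::vector<int> V) {
      for (std::size_t Half = V.size() / 2; Half >= 1; Half /= 2)
        for (std::size_t I = 0; I != Half; ++I)
          V[I] += V[I + Half];
      return V[0];
    }

    int main() {
      std::printf("%d\n", reduceAllElements({1, 2, 3, 4, 5, 6, 7, 8})); // 36
    }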


Differential revision: http://reviews.llvm.org/D15250

llvm-svn: 261070
2016-02-17 06:37:04 +00:00
Hans Wennborg 84047896b9 Revert r260979 "[X86] Enable the LEA optimization pass by default."
Asserts are still firing in Chromium builds. PR26575.

llvm-svn: 261058
2016-02-17 02:49:59 +00:00
Dan Gohman 476ffcec04 [WebAssembly] Call memcpy for large byval copies.
This fixes very slow compilation on
test/CodeGen/Generic/2010-11-04-BigByval.ll. Note that MaxStoresPerMemcpy
and friends are not yet carefully tuned so the cutoff point is currently
somewhat arbitrary. However, it's important that there be a cutoff point
so that we don't emit unbounded quantities of loads and stores.
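
The cutoff is conceptually just a size threshold; a hedged sketch of the idea
(the limit value and the names here are invented and do not reflect the
eventual MaxStoresPerMemcpy tuning):

    #include <cstddef>
    #include <cstdint>
    #include <cstring>

    // Invented threshold standing in for the not-yet-tuned cutoff point.
    constexpr std::size_t ByvalInlineCopyLimit = 64;

    // Copy a byval argument: expand small copies into plain loads/stores, but
    // call memcpy for large ones so the emitted code size stays bounded.
    void copyByval(void *Dst, const void *Src, std::size_t Size) {
      if (Size > ByvalInlineCopyLimit) {
        std::memcpy(Dst, Src, Size); // one call, constant code size
        return;
      }
      auto *D = static_cast<uint8_t *>(Dst);
      auto *S = static_cast<const uint8_t *>(Src);
      for (std::size_t I = 0; I != Size; ++I)
        D[I] = S[I]; // stands in for the expanded load/store sequence
    }

    int main() {
      char Small[16] = "hello", Out[16];
      copyByval(Out, Small, sizeof Small); // small: expanded inline
    }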

llvm-svn: 261050
2016-02-17 01:43:37 +00:00
Reid Kleckner 8de35fef3d [X86] Fix a shrink-wrapping miscompile around __chkstk
__chkstk clobbers EAX. If EAX is live across the prologue, then we have
to take extra steps to save it. We already had code to do this if EAX
was a register parameter. This change adapts it to work when shrink
wrapping is used.

llvm-svn: 261039
2016-02-17 00:17:33 +00:00
Dan Gohman 94c6566055 [WebAssembly] Implement __builtin_frame_address.
Differential Revision: http://reviews.llvm.org/D17307

llvm-svn: 261032
2016-02-16 23:48:04 +00:00
Simon Pilgrim cc8a282647 [X86][AVX] Regenerated vselect tests
llvm-svn: 261026
2016-02-16 22:33:27 +00:00
Ahmed Bougacha af60a429c9 [X86] Generalize logic blend of (x, -x) combine to match (-x, x).
I suspect this is what let PR26110 lie dormant for so long.

llvm-svn: 261024
2016-02-16 22:14:07 +00:00
Ahmed Bougacha 132fbf5476 [X86] Don't turn (c?-v:v) into (c?-v:0) by blindly using PSIGN.
Currently, we sometimes miscompile this vector pattern:
    (c ? -v : v)
We lower it to (because "c" is <4 x i1>, lowered as a vector mask):
    (~c & v) | (c & -v)

When we have SSSE3, we incorrectly lower that to PSIGN, which does:
    (c < 0 ? -v : c > 0 ? v : 0)
in other words, when c is either all-ones or all-zero:
    (c ? -v : 0)
While this is an old bug, it rarely triggers because the PSIGN combine
is too sensitive to operand order. This will be improved separately.

Note that the PSIGN tests are also incorrect. Consider:
    %b.lobit = ashr <4 x i32> %b, <i32 31, i32 31, i32 31, i32 31>
    %sub = sub nsw <4 x i32> zeroinitializer, %a
    %0 = xor <4 x i32> %b.lobit, <i32 -1, i32 -1, i32 -1, i32 -1>
    %1 = and <4 x i32> %a, %0
    %2 = and <4 x i32> %b.lobit, %sub
    %cond = or <4 x i32> %1, %2
    ret <4 x i32> %cond
if %b is zero:
    %b.lobit = <4 x i32> zeroinitializer
    %sub = sub nsw <4 x i32> zeroinitializer, %a
    %0 = <4 x i32> <i32 -1, i32 -1, i32 -1, i32 -1>
    %1 = <4 x i32> %a
    %2 = <4 x i32> zeroinitializer
    %cond = or <4 x i32> %a, zeroinitializer
    ret <4 x i32> %a
whereas we currently generate:
    psignd %xmm1, %xmm0
    retq
which returns 0, as %xmm1 is 0.

Instead, use a pure logic sequence, as described in:
https://graphics.stanford.edu/~seander/bithacks.html#ConditionalNegate
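
One pure-logic formulation of per-lane conditional negation, for reference
(scalar sketch of the idea only; not necessarily the exact AND/ANDN/OR
sequence the lowering emits):

    #include <cstdint>
    #include <cstdio>

    // Branchless conditional negate: Mask is all-ones (negate) or all-zero
    // (keep). With Mask == -1, V ^ Mask == ~V and ~V + 1 == -V, so
    // (V ^ Mask) - Mask == -V; with Mask == 0 it is just V.
    int32_t negateIfMask(int32_t V, int32_t Mask) {
      return (V ^ Mask) - Mask;
    }

    int main() {
      std::printf("%d %d\n", negateIfMask(7, -1), negateIfMask(7, 0)); // -7 7
    }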

Fixes PR26110.

Differential Revision: http://reviews.llvm.org/D17181

llvm-svn: 261023
2016-02-16 22:14:03 +00:00
Ahmed Bougacha a87c3480b5 [X86] Extract PSIGN/BLENDVP tests into vector-blend.ll. NFC.
We're going to stop generating PSIGN, so calling a test "psign"
isn't ideal. Instead, call these tests what they really are:
variable blends using logic.
Also add a test to exhibit a case we're currently missing in
the PSIGN combine.

llvm-svn: 261022
2016-02-16 22:13:59 +00:00
Derek Schuff f8f8f093aa [WebAssembly] Don't move calls or stores past intervening loads
The register stackifier currently checks for intervening stores (and
loads that may alias them) but doesn't account for the fact that the
instruction being moved may affect intervening loads.

Differential Revision: http://reviews.llvm.org/D17298

llvm-svn: 261014
2016-02-16 21:44:19 +00:00
Jun Bum Lim b389d9b9af [AArch64] Add pass to remove redundant copy after RA
Summary:
This change adds a pass to remove unnecessary zero copies in the target blocks
of cbz/cbnz instructions. E.g., the copy instruction in the code below can be
removed because the cbz jumps to BB1 only when x0 is zero:
  BB0:
    cbz x0, .BB1
  BB1:
    mov x0, xzr

Reviewers: gberry, jmolloy, HaoLiu, MatzeB, mcrosier

Subscribers: mcrosier, mssimpso, haicheng, bmakam, llvm-commits, aemerson, rengolin

Differential Revision: http://reviews.llvm.org/D16203

llvm-svn: 261004
2016-02-16 20:02:39 +00:00
Derek Schuff aadc89c25d [WebAssembly] Insert COPY_LOCAL between CopyToReg and FrameIndex DAG nodes
CopyToReg nodes don't support FrameIndex operands. Other targets select
the FI to some LEA-like instruction, but since we don't have that, we
need to insert some kind of instruction that can take an FI operand and
produce a value usable by CopyToReg (i.e. in a vreg). So insert a dummy
copy_local between Op and its FI operand. This results in a redundant
copy which we should optimize away later (maybe in the post-FI-lowering
peephole pass).

Differential Revision: http://reviews.llvm.org/D17213

llvm-svn: 260987
2016-02-16 18:18:36 +00:00
Andrey Turetskiy eab4e68650 [X86] Enable the LEA optimization pass by default.
Differential Revision: http://reviews.llvm.org/D16877

llvm-svn: 260979
2016-02-16 16:41:38 +00:00
Dan Gohman 442bfcec00 [WebAssembly] Switch from RPO sorting to topological sorting.
WebAssembly doesn't require full RPO; topological sorting is sufficient and
can preserve more of the MachineBlockPlacement ordering. Unfortunately, this
still depends a lot on heuristics, because while we use the
MachineBlockPlacement ordering as a guide, we can't use it in places where
it isn't topologically ordered. This area will require further attention.
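
A minimal generic sketch of "topological order, guided by a preferred layout
where possible" (Kahn's algorithm with a priority queue; this is illustrative
graph code, not the WebAssembly sorting pass, and it assumes an acyclic graph):

    #include <cstddef>
    #include <cstdio>
    #include <queue>
    #include <vector>

    // Kahn's algorithm, but among the currently ready nodes always pick the
    // one that comes first in a preferred layout (e.g. the ordering chosen by
    // MachineBlockPlacement). The result is always topological; the preference
    // is honoured only where that does not violate topological order.
    std::vector<int> preferredTopoSort(const std::vector<std::vector<int>> &Succs,
                                       const std::vector<int> &PreferredPos) {
      std::size_t N = Succs.size();
      std::vector<int> InDegree(N, 0);
      for (const auto &S : Succs)
        for (int T : S)
          ++InDegree[T];

      auto Later = [&](int A, int B) { return PreferredPos[A] > PreferredPos[B]; };
      std::priority_queue<int, std::vector<int>, decltype(Later)> Ready(Later);
      for (std::size_t I = 0; I != N; ++I)
        if (InDegree[I] == 0)
          Ready.push(static_cast<int>(I));

      std::vector<int> Order;
      while (!Ready.empty()) {
        int B = Ready.top();
        Ready.pop();
        Order.push_back(B);
        for (int T : Succs[B])
          if (--InDegree[T] == 0)
            Ready.push(T);
      }
      return Order; // assumes the graph is acyclic
    }

    int main() {
      // 0 -> {1,2}, 1 -> 3, 2 -> 3; the preferred layout puts 2 before 1.
      std::vector<std::vector<int>> Succs = {{1, 2}, {3}, {3}, {}};
      std::vector<int> Pref = {0, 2, 1, 3};
      for (int B : preferredTopoSort(Succs, Pref))
        std::printf("%d ", B); // prints: 0 2 1 3
      std::printf("\n");
    }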

llvm-svn: 260978
2016-02-16 16:22:41 +00:00
Dan Gohman 8aa237c3ca [WebAssembly] Create new registers instead of reusing old ones in RegStackify.
This avoids some complications updating LiveIntervals to be aware of the new
register lifetimes, because we can just compute new intervals from scratch
rather than describe how the old ones have been changed.

llvm-svn: 260971
2016-02-16 15:17:21 +00:00
Dan Gohman aa7429112e [WebAssembly] Implement support for custom NaN bit patterns.
llvm-svn: 260968
2016-02-16 15:14:23 +00:00
Andrey Turetskiy 1052ac2311 [X86] PR26575: Fix LEA optimization pass.
Add a missing check of the type of the address displacement operand of a load/store instruction that is a candidate for LEA substitution.

Ref: https://llvm.org/bugs/show_bug.cgi?id=26575

Differential Revision: http://reviews.llvm.org/D17261

llvm-svn: 260959
2016-02-16 12:47:45 +00:00
Zia Ansari 30a02384f7 Implemented stack symbol table ordering/packing optimization to improve data locality and code size from SP/FP offset encoding.
Differential Revision: http://reviews.llvm.org/D15393

llvm-svn: 260917
2016-02-15 23:44:13 +00:00
Simon Pilgrim 7c920e611c [X86][SSE2] Regenerated sse2 tests
llvm-svn: 260900
2016-02-15 17:57:40 +00:00
Krzysztof Parzyszek 04bf43bd83 [Hexagon] Missed testcase update in r260895
llvm-svn: 260897
2016-02-15 16:15:02 +00:00
Krzysztof Parzyszek 73f1a40626 [Hexagon] Use zero-extending loads for anyext
llvm-svn: 260895
2016-02-15 16:01:01 +00:00
Simon Pilgrim 766a659eb5 [X86] More thorough partial-register division checks
For when grep counts are just not enough...

llvm-svn: 260891
2016-02-15 14:09:35 +00:00
Simon Pilgrim a62170834d [X86] Regenerated 64/128 bit multiply tests
llvm-svn: 260890
2016-02-15 14:04:05 +00:00
Simon Pilgrim 9513b3c4c7 [X86][SSE] More thorough testing of all-ones vectors re-materialization
llvm-svn: 260889
2016-02-15 13:50:48 +00:00
Simon Pilgrim 02d3b6a82d [X86][SSE] Regenerated uint2fp special case tests
llvm-svn: 260888
2016-02-15 13:41:41 +00:00
Simon Pilgrim 4e4989a64a [X86][SSE] Regenerated fast isel intrinsics tests
llvm-svn: 260885
2016-02-15 12:32:16 +00:00
Igor Breger 4dc7d390db AVX512: Change the store size of kmask. The store sizes of v8i1, v4i1, v2i1 and i1 are changed to 16 bits.
If KMOVB is not supported (it requires AVX512DQ), only KMOVW can be used, so the store size should be 2 bytes.

Differential Revision: http://reviews.llvm.org/D17138

llvm-svn: 260878
2016-02-15 08:25:28 +00:00
Simon Pilgrim 834931554b [X86][AVX] Fixed copy+paste typo in shuffle test
llvm-svn: 260852
2016-02-14 18:11:52 +00:00
Simon Pilgrim 08ba012973 [X86][AVX] Lower shuffles as repeated lane shuffles then lane-crossing shuffles
This patch attempts to represent a shuffle as a repeating shuffle (recognisable by is128BitLaneRepeatedShuffleMask) with the source input(s) in their original lanes, followed by a single permutation of the 128-bit lanes to their final destinations.

On AVX2 we can additionally attempt to match using 64-bit sub-lane permutation. AVX2 can also now match a similar 'broadcasted' repeating shuffle.

This patch has several benefits:

 * Avoids prematurely matching with lowerVectorShuffleByMerging128BitLanes which can require both inputs to have their input lanes permuted before shuffling.
 * Can replace PERMPS/PERMD instructions - although these are useful for cross-lane unary shuffling, they require their shuffle mask to be pre-loaded (and increase register pressure).
 * Matching the repeating shuffle makes use of a lot of existing shuffle lowering.

There is an outstanding minor AVX1 regression (combine_unneeded_subvector1 in vector-shuffle-combining.ll) of a previously 128-bit shuffle + subvector splat being converted to a subvector splat + (2 instruction) 256-bit shuffle; I intend to fix this in a follow-up patch.
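
A minimal sketch of the mask decomposition this lowering relies on (plain mask
arithmetic over a single-input shuffle; illustrative only, not the in-tree
is128BitLaneRepeatedShuffleMask or lowering code):

    #include <cstdio>
    #include <vector>

    // Try to decompose a single-input shuffle mask into (a) one in-lane
    // pattern repeated in every 128-bit lane and (b) a permutation saying
    // which source lane each destination lane reads. Returns false if the
    // per-lane patterns disagree or a lane mixes elements from two lanes.
    bool decomposeRepeatedLaneShuffle(const std::vector<int> &Mask,
                                      int EltsPerLane,
                                      std::vector<int> &RepeatedMask,
                                      std::vector<int> &LaneMask) {
      int NumLanes = static_cast<int>(Mask.size()) / EltsPerLane;
      RepeatedMask.assign(EltsPerLane, -1);
      LaneMask.assign(NumLanes, -1);
      for (int I = 0, E = static_cast<int>(Mask.size()); I != E; ++I) {
        if (Mask[I] < 0)
          continue; // undef element, no constraint
        int DstLane = I / EltsPerLane;
        int SrcLane = Mask[I] / EltsPerLane;
        int InLane = Mask[I] % EltsPerLane;
        if (LaneMask[DstLane] < 0)
          LaneMask[DstLane] = SrcLane;
        else if (LaneMask[DstLane] != SrcLane)
          return false; // this lane pulls from two different lanes
        int Slot = I % EltsPerLane;
        if (RepeatedMask[Slot] < 0)
          RepeatedMask[Slot] = InLane;
        else if (RepeatedMask[Slot] != InLane)
          return false; // lanes use different in-lane patterns
      }
      return true;
    }

    int main() {
      // v8i32 mask <6,7,4,5,2,3,0,1>: every lane swaps its two i64 halves
      // (repeated pattern <2,3,0,1>), then the lanes themselves are swapped.
      std::vector<int> Mask = {6, 7, 4, 5, 2, 3, 0, 1}, Rep, Lanes;
      if (decomposeRepeatedLaneShuffle(Mask, 4, Rep, Lanes))
        std::printf("repeated <%d,%d,%d,%d>, lanes <%d,%d>\n", Rep[0], Rep[1],
                    Rep[2], Rep[3], Lanes[0], Lanes[1]);
    }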

Differential Revision: http://reviews.llvm.org/D16537

llvm-svn: 260834
2016-02-13 21:54:04 +00:00
Sanjay Patel e9bf993cee [x86-64] allow mfence even with -mno-sse (PR23203)
As shown in:
https://llvm.org/bugs/show_bug.cgi?id=23203
...we currently die because lowering believes that mfence is allowed without SSE2 on x86-64,
but the instruction def doesn't know that.

I don't know if allowing mfence without SSE is right, but if not, at least now it's consistently wrong. :)

Differential Revision: http://reviews.llvm.org/D17219

llvm-svn: 260828
2016-02-13 17:26:29 +00:00
Matt Arsenault f2ddbf00ed AMDGPU: Prepare for reducing private element size.
Tests for the new scalarize all private access options will be
included with a future commit.

The only functional change is to make the split/scalarize behavior
for private accesses of > 4 element vectors consistent
with the flat/global handling. This makes the spilling worse
in the two changed tests.

llvm-svn: 260804
2016-02-13 04:18:53 +00:00
Tom Stellard 4409051d00 AMDGPU/SI: Add llvm.amdgcn.mov.dpp intrinsic
This intrinsic will be used to expose dpp functionality to higher-level
languages. It will map to the dpp version of v_mov_b32.

llvm-svn: 260792
2016-02-13 02:09:49 +00:00
Matt Arsenault ce56a0ef54 AMDGPU: Add intrinsics for sin/cos
These provide direct access to the hardware instructions, without the unit
conversion that the llvm.sin/llvm.cos lowering requires.

llvm-svn: 260782
2016-02-13 01:19:56 +00:00
Matt Arsenault 79963e80b8 AMDGPU: Rename intrinsic to better match instruction name
Also fixes missing f32 test.

llvm-svn: 260780
2016-02-13 01:03:00 +00:00
Pirama Arumuga Nainar 7476bc89e9 Don't combine fp_round (fp_round x) if f80 to f16 is generated
Summary:
This patch skips DAG combine of fp_round (fp_round x) if it results in
an fp_round from f80 to f16.

fp_round from f80 to f16 always generates an expensive (and as yet,
unimplemented) libcall to __truncxfhf2.  This prevents selection of
native f16 conversion instructions from f32 or f64.  Moreover, the first
(value-preserving) fp_round from f80 to either f32 or f64 may become a
NOP on platforms like x86.
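
In effect the change is a guard on the fold; a hedged sketch with invented
enum and function names (this is not the DAGCombiner code, just the decision
it encodes):

    #include <cstdio>

    // Invented stand-in for the floating-point types involved.
    enum class FPType { F16, F32, F64, F80 };

    // Fold fp_round(fp_round(x: Src) -> Mid) -> Dst into fp_round(x) -> Dst,
    // except when the folded round would be f80 -> f16, which lowers to an
    // expensive __truncxfhf2 libcall instead of native conversions.
    bool shouldCombineDoubleRound(FPType Src, FPType /*Mid*/, FPType Dst) {
      if (Src == FPType::F80 && Dst == FPType::F16)
        return false;
      return true;
    }

    int main() {
      std::printf("%d %d\n",
                  shouldCombineDoubleRound(FPType::F80, FPType::F32, FPType::F16),
                  shouldCombineDoubleRound(FPType::F64, FPType::F32, FPType::F16));
      // prints: 0 1
    }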

Reviewers: ab

Subscribers: srhines, llvm-commits

Differential Revision: http://reviews.llvm.org/D17221

llvm-svn: 260769
2016-02-13 00:08:05 +00:00
Tom Stellard bc4497b13c AMDGPU/SI: Detect uniform branches and emit s_cbranch instructions
Reviewers: arsenm

Subscribers: mareko, MatzeB, qcolombet, arsenm, llvm-commits

Differential Revision: http://reviews.llvm.org/D16603

llvm-svn: 260765
2016-02-12 23:45:29 +00:00
Yunzhong Gao 0de36ec169 Disable the vzeroupper insertion pass on PS4.
Differential Revision: http://reviews.llvm.org/D16837

llvm-svn: 260764
2016-02-12 23:37:57 +00:00