Commit Graph

18041 Commits

Author SHA1 Message Date
Tom Stellard 9daed22b04 AMDGPU/SI: Remove unnecessary run lines from test
Summary:
This test had run lines disabling/enabling the promote alloca pass, but
enabling/disabling promote alloca had no impact on the output.

Reviewers: arsenm

Subscribers: mgrang, kzhuravl, wdng, nhaehnle, yaxunl, llvm-commits, tony-tye

Differential Revision: https://reviews.llvm.org/D25787

llvm-svn: 285197
2016-10-26 14:21:09 +00:00
Zvi Rackover aa3402b41e [X86] AVX512 fallback for floating-point scalar selects
Summary:
In the case of 'select i1, f32, f32' or 'select i1, f64, f64', prefer lowering to masked moves over branches.

Fixes pr30561
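
For context, the kind of C++ source that typically produces such a scalar select (a minimal illustration, not taken from the patch):

    // Typically compiles to 'select i1 %c, double %a, double %b'; on AVX512
    // targets this can now lower to a masked move instead of a branch.
    double sel(bool c, double a, double b) { return c ? a : b; }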

Reviewers: igorb, aymanmus, delena

Differential Revision: https://reviews.llvm.org/D25310

llvm-svn: 285196
2016-10-26 14:12:46 +00:00
Craig Topper 812d3d30ae [AVX-512] Add scalar vfmsub/vfnmsub mask3 intrinsics
Summary: Clang's intrinsic header currently tries to negate the third operand of a vfmadd mask3 in order to create vfmsub, but this fails isel. This patch adds scalar vfmsub and vfnmsub mask3 that we can use instead to avoid the negate. This is consistent with the packed instructions.

Reviewers: igorb, delena

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D25933

llvm-svn: 285173
2016-10-26 04:59:58 +00:00
James Y Knight 2e64b8b79e [Sparc] Don't overlap variable-sized allocas with other stack variables.
On SparcV8, a variable-sized alloca might previously overlap the last fixed
stack variable by 4 bytes, effectively because 92 (the number of bytes
reserved for the register spill area) != 96 (the offset added to SP for where
to start a DYNAMIC_STACKALLOC).

It's not as simple as changing 96 to 92, because variables that should
be 8-byte aligned would then be misaligned.

For now, simply increase the allocation size by 8 bytes for each dynamic
allocation -- wastes space, but at least doesn't overlap. As the large
comment says, doing this more efficiently will require larger changes in
llvm.

Also adds some test cases showing that we continue to not support
dynamic stack allocation and over-alignment in the same function.

llvm-svn: 285131
2016-10-25 22:13:28 +00:00
Simon Pilgrim de86241a09 [DAGCombiner] Enable (urem x, (shl pow2, y)) -> (and x, (add (shl pow2, y), -1)) combine for splatted vectors
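
A quick C++ check of the underlying scalar identity (a sanity sketch of the arithmetic, not the DAGCombiner code):

    #include <cassert>
    #include <cstdint>

    int main() {
      // For unsigned x and a power of two p, x % (p << y) == x & ((p << y) - 1),
      // which is the scalar form of the combine now enabled for splatted vectors.
      for (uint32_t x = 0; x < 4096; ++x)
        for (uint32_t y = 0; y < 8; ++y) {
          uint32_t p = 8; // any power of two
          assert(x % (p << y) == (x & ((p << y) - 1)));
        }
      return 0;
    }
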
llvm-svn: 285129
2016-10-25 22:01:09 +00:00
Simon Pilgrim daf82f5f33 [X86][SSE] Regenerated known-bits test with srem->urem fix
llvm-svn: 285124
2016-10-25 21:24:33 +00:00
Simon Pilgrim f534573e8c [DAGCombiner] Enable srem(x.y) -> urem(x,y) combine for vectors
SelectionDAG::SignBitIsZero (via SelectionDAG::computeKnownBits) has supported vectors since rL280927

llvm-svn: 285123
2016-10-25 21:20:18 +00:00
Simon Pilgrim efd5643419 [X86][SSE] Added vector srem combine tests
llvm-svn: 285121
2016-10-25 21:14:11 +00:00
Simon Pilgrim 83fb4019f7 [X86][SSE] Added vector urem combine tests
llvm-svn: 285119
2016-10-25 21:10:12 +00:00
Simon Pilgrim 4ebb04510a [DAGCombiner] Enable sdiv(x.y) -> udiv(x,y) combine for vectors
SelectionDAG::SignBitIsZero (via SelectionDAG::computeKnownBits) has supported vectors since rL280927
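
The combine relies on the fact that when both operands' sign bits are known zero, signed and unsigned division agree; a small C++ sketch of that property (not the combiner code itself):

    #include <cassert>
    #include <cstdint>

    int main() {
      // With both sign bits zero (i.e. both values non-negative),
      // sdiv and udiv produce identical results.
      for (int32_t x = 0; x < 200; ++x)
        for (int32_t y = 1; y < 200; ++y)
          assert(x / y == int32_t(uint32_t(x) / uint32_t(y)));
      return 0;
    }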

llvm-svn: 285118
2016-10-25 20:56:42 +00:00
Simon Pilgrim 61cdeb46c9 [X86][SSE] Added vector sdiv combine tests
llvm-svn: 285112
2016-10-25 20:25:47 +00:00
Evandro Menezes eb97e3554c Add option to specify minimum number of entries for jump tables
Add an option to allow easier experimentation by target maintainers with the
minimum number of entries required to create jump tables.  Also clarify the
name of the other existing option governing the creation of jump tables.

Differential revision: https://reviews.llvm.org/D25883

llvm-svn: 285104
2016-10-25 19:53:51 +00:00
Evandro Menezes 601f4cb9f7 Switch lowering: improve partitioning of jump tables
When there's a tie between partitionings of jump tables, also consider cases
that result in no jump tables, but in one or a few individual cases.  The motivation is
that many contemporary processors typically perform case switches fairly
quickly.

Differential revision: https://reviews.llvm.org/D25212

llvm-svn: 285099
2016-10-25 19:11:43 +00:00
Dan Gohman f50d964bdb [WebAssembly] Add immediate fields to call_indirect and memory operators.
call_indirect, grow_memory, and current_memory now have immediate
operands in the 0xd binary encoding.

llvm-svn: 285085
2016-10-25 16:55:52 +00:00
Ulrich Weigand 7bdb485e18 [SystemZ] Do not use LOC(G) for volatile loads
It is not safe to use LOAD ON CONDITION to implement access to a memory
location marked "volatile", since the architecture leaves it unspecified
whether or not an access happens if the condition is false.

The current code already appears to care about that:
  def LOC  : CondUnaryRSY<"loc",  0xEBF2, nonvolatile_load, GR32, 4>;

Unfortunately, that "nonvolatile_load" operator is simply ignored
by the CondUnaryRSY class, and there was no test to catch it.

llvm-svn: 285077
2016-10-25 15:39:15 +00:00
Simon Pilgrim 5c3c9707c3 [X86][SSE] Add support for (V)PMOVSX* constant folding
We already have (V)PMOVZX* combining support, this is the beginning of handling (V)PMOVSX* similarly - other combines in combineVSZext can be generalized in future patches.

This unearthed an interesting bug in that we were generating illegal build vectors on 32-bit targets - it was proving difficult to create a test for it from PMOVZX, but it fired immediately with PMOVSX. I've created a more general form of the existing getConstVector to handle these cases - ideally this should be handled in non-target-specific code but I couldn't find an equivalent.

Differential Revision: https://reviews.llvm.org/D25874

llvm-svn: 285072
2016-10-25 14:29:25 +00:00
Zvi Rackover 124470a202 [DAGCombine] Preserve shuffles when one of the vector operands is constant
Summary:
Do *not* perform combines such as:

    vector_shuffle<4,1,2,3>(build_vector(Ud, C0, C1, C2), scalar_to_vector(X))
    ->
    build_vector(X, C0, C1, C2)

Keeping the shuffle allows lowering the constant build_vector to a materialized
constant vector (such as a vector-load from the constant-pool or some other idiom).

Reviewers: delena, igorb, spatel, mkuper, andreadb, RKSimon

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D25524

llvm-svn: 285063
2016-10-25 12:14:19 +00:00
Craig Topper 01e4667e02 [AVX-512] Add support for creating SIGN_EXTEND_VECTOR_INREG and ZERO_EXTEND_VECTOR_INREG for 512-bit vectors to support vpmovzxbq and vpmovsxbq.
Summary: The one tricky thing about this is that the sign/zero_extend_inreg uses v64i8 as an input type which isn't legal without BWI support. Though the vpmovsxbq and vpmovzxbq instructions themselves don't require BWI. To support this we need to add custom lowering for ZERO_EXTEND_VECTOR_INREG with v64i8 input. This can mostly reuse the existing sign extend code with a couple checks for sign extend vs zero extend added.

Reviewers: delena, RKSimon

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D25594

llvm-svn: 285053
2016-10-25 04:00:29 +00:00
Simon Pilgrim e3e6585c2d [SelectionDAG] Update ComputeNumSignBits SRA/SHL handlers to accept scalar or vector splats
Use isConstOrConstSplat helper.

Also use APInt instead of getZExtValue directly to avoid out of range issues.

llvm-svn: 285033
2016-10-24 21:47:19 +00:00
Sanjay Patel 87eea10711 [x86] add tests for {-1,0,1} select of constants
llvm-svn: 285005
2016-10-24 19:13:29 +00:00
Mandeep Singh Grang da99e33ae3 [llvm] Remove redundant --check-prefix=CHECK from tests
Reviewers: MatzeB, mcrosier, rengolin

Differential Revision: https://reviews.llvm.org/D25894

llvm-svn: 285003
2016-10-24 18:57:55 +00:00
Ehsan Amiri c90b02cf50 [PPC] Generate positive FP zero using xor insn instead of loading from constant area
https://reviews.llvm.org/D23614

Currently we load +0.0 from the constant area. That can instead be generated
using an XOR instruction.

llvm-svn: 284995
2016-10-24 17:31:09 +00:00
Eli Friedman b37864b58d Revert r284580+r284917. ("Synthesize TBB/TBH instructions")
The optimization has correctness issues, so reverting for now to fix tests
on thumb1 targets.

llvm-svn: 284993
2016-10-24 17:20:50 +00:00
Evandro Menezes eff2bd9d4f [AArch64] Optionally use the Newton series for reciprocal estimation
Add support for estimating the square root or its reciprocal, and division or
its reciprocal, using the combiner's generic Newton series.

Differential revision: https://reviews.llvm.org/D25291

llvm-svn: 284986
2016-10-24 16:14:58 +00:00
Ehsan Amiri 1f31e9157d [PPC] Better codegen for AND, ANY_EXT, SRL sequence
https://reviews.llvm.org/D24924

This improves the code generated for a sequence of AND, ANY_EXT, SRL instructions. This is a targeted fix for this special pattern. The pattern is generated by the target-independent DAG combiner, so a more general fix may not be necessary. If we come across other similar cases, some ideas for handling them are discussed on the code review.

llvm-svn: 284983
2016-10-24 15:46:58 +00:00
Sanjay Patel 7b12e36740 [x86] regenerate checks
llvm-svn: 284982
2016-10-24 15:43:40 +00:00
Nicolai Haehnle a785209bc2 AMDGPU: Fix Two Address problems with v_movreld
Summary:
The v_movreld machine instruction is used with three operands that are
in a sense tied to each other (the explicit VGPR_32 def and the implicit
VGPR_NN def and use). There is no way to express that using the currently
available operand bits, and indeed there are cases where the Two Address
instructions pass does the wrong thing.

This patch introduces a new set of pseudo instructions that are identical
in intended semantics to v_movreld, but they only have two tied operands.

Having to add a new set of pseudo instructions is admittedly annoying, but
it's a fairly straightforward and solid approach. The only alternative I
see is to try to teach the Two Address instructions pass about Three Address
instructions, and I'm afraid that's trickier and is going to end up more
fragile.

Note that v_movrels does not suffer from this problem, and so this patch
does not touch it.

This fixes several GL45-CTS.shaders.indexing.* tests.

Reviewers: tstellarAMD, arsenm

Subscribers: kzhuravl, wdng, yaxunl, llvm-commits, tony-tye

Differential Revision: https://reviews.llvm.org/D25633

llvm-svn: 284980
2016-10-24 14:56:02 +00:00
Craig Topper 8ec5c7326d [AVX-512] Remove masked pmin/pmax intrinsics and autoupgrade to native IR.
Clang patch to replace 512-bit vector and 64-bit element versions with native IR will follow.

llvm-svn: 284955
2016-10-24 04:04:16 +00:00
Sanjay Patel 9ca028c2d6 [DAG] enhance computeKnownBits to handle SRL/SRA with vector splat constant
llvm-svn: 284953
2016-10-23 23:13:31 +00:00
Simon Pilgrim d3829c89bc [X86][AVX512VL] Added support for combining target 256-bit shuffles to AVX512VL VPERMV3
llvm-svn: 284922
2016-10-22 20:15:39 +00:00
Simon Pilgrim 56c0524f0f [X86][AVX512] Added support for combining target shuffles to AVX512 VPERMV3
llvm-svn: 284921
2016-10-22 19:53:59 +00:00
James Molloy 2bae8640d7 [ARM] Fix crash in ConstantIslands
tPCRelJT may not be the first instruction in a block. Check that instead of dereferencing a broken iterator.

llvm-svn: 284917
2016-10-22 09:58:37 +00:00
Zvi Rackover 8eb99d88d4 [X86] Apply the Update LLC Test Checks tool on the mmx-bitcast test
llvm-svn: 284916
2016-10-22 07:15:00 +00:00
Craig Topper b084c90a18 [X86] Add support for printing shuffle comments for VALIGN instructions.
llvm-svn: 284915
2016-10-22 06:51:56 +00:00
Sanjay Patel 5b7767f646 [x86] add test for missing vector SRA combine via computeKnownBits
llvm-svn: 284896
2016-10-21 23:02:31 +00:00
Tom Stellard 6c7dd980e4 AMDGPU/SI: Fix crash caused by r284267
Reviewers: arsenm, nhaehnle

Subscribers: kzhuravl, wdng, nhaehnle, yaxunl, tony-tye, llvm-commits

Differential Revision: https://reviews.llvm.org/D25782

llvm-svn: 284875
2016-10-21 20:25:11 +00:00
Sanjay Patel ca92c36e01 [DAG] enhance computeKnownBits to handle SHL with vector splat constant
Also, use APInt to avoid crashing on types larger than vNi64.

llvm-svn: 284874
2016-10-21 20:16:27 +00:00
Peter Collingbourne e9bd49824d X86: Improve BT instruction selection for 64-bit values.
If a 64-bit value is tested against a bit which is known to be in the range
[0..31) (modulo 64), we can use the 32-bit BT instruction, which has a slightly
shorter encoding.
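
A small C++ illustration of why the narrower instruction is safe (a sketch of the property being exploited, not the ISel code): when the bit index is known to be below 32, bit n of a 64-bit value equals bit n of its low 32 bits.

    #include <cassert>
    #include <cstdint>

    int main() {
      uint64_t x = 0xDEADBEEFCAFEF00DULL;
      for (unsigned n = 0; n < 64; ++n) {
        unsigned idx = n % 31; // index known to lie in [0..31)
        assert(((x >> idx) & 1) == ((uint32_t(x) >> idx) & 1));
      }
      return 0;
    }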

Differential Revision: https://reviews.llvm.org/D25862

llvm-svn: 284864
2016-10-21 19:57:55 +00:00
Simon Pilgrim ab48872313 [X86][AVX512BWVL] Added support for lowering v16i16 shuffles to AVX512BWVL vpermw
llvm-svn: 284863
2016-10-21 19:54:38 +00:00
Simon Pilgrim da814cba0d [X86][AVX512BWVL] Added support for combining target v16i16 shuffles to AVX512BWVL vpermw
llvm-svn: 284860
2016-10-21 19:40:29 +00:00
Simon Pilgrim 0109bf116f [X86][AVX512] Added support for combining target shuffles to AVX512 vpermpd/vpermq/vpermps/vpermd/vpermw
llvm-svn: 284858
2016-10-21 19:18:09 +00:00
Sanjay Patel 81029f6a76 [DAG] fold negation of sign-bit
0 - X --> 0, if the sub is NUW
0 - X --> 0, if X is 0 or the minimum signed value and the sub is NSW
0 - X --> X, if X is 0 or the minimum signed value
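
The third fold holds because in two's complement 0 and the minimum signed value are the only values that are their own negation; a tiny C++ check (a sketch of the arithmetic only):

    #include <cassert>
    #include <cstdint>

    int main() {
      // 0 - X == X exactly when X is 0 or the minimum signed value.
      // Negate via unsigned arithmetic to avoid signed-overflow UB on INT64_MIN.
      for (int64_t x : {int64_t(0), INT64_MIN}) {
        int64_t neg = int64_t(0 - uint64_t(x));
        assert(neg == x);
      }
      return 0;
    }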

This is the DAG equivalent of:
https://reviews.llvm.org/rL284649

plus the fold for the NUW case which already existed in InstSimplify.

Note that we miss a vector fold because of a deficiency in the DAG version of
computeKnownBits().

llvm-svn: 284844
2016-10-21 17:24:26 +00:00
Sanjay Patel 97de8a4c3a [x86] add tests for potential negation folds
These are the backend equivalents for the tests added in r284627.
The patterns may emerge late, so we should have folds for these in the DAG too.

llvm-svn: 284842
2016-10-21 16:56:29 +00:00
Krzysztof Parzyszek b71085b547 [Hexagon] Handle spills of partially defined double vector registers
After register allocation it is possible to have a spill of a register
that is only partially defined. That in itself is fine, but it creates a
problem for double vector registers. Stores of such registers are pseudo
instructions that are expanded into pairs of individual vector stores,
and in case of a partially defined source, one of the stores may use
an entirely undefined register. To avoid this, track the defined parts
and only generate actual stores for those.

llvm-svn: 284841
2016-10-21 16:38:29 +00:00
Derek Schuff 6f69783f1f [WebAssembly] Fix for 0xc call_indirect changes
Summary:
Need to reorder the operands to have the callee as the last argument.
Adds a pseudo-instruction, and a pass to lower it into a real
call_indirect.

This is the first of two options for how to fix the problem.

Reviewers: dschuff, sunfish

Subscribers: jfb, beanz, mgorny, llvm-commits

Differential Revision: https://reviews.llvm.org/D25708

llvm-svn: 284840
2016-10-21 16:38:07 +00:00
Simon Pilgrim 372e916a3f [X86][SSE] Regenerated sext/zext constant folding tests and added i686 tests
llvm-svn: 284837
2016-10-21 16:22:16 +00:00
Simon Pilgrim 098bd72804 [X86][SSE] Regenerated chained pmovsx store tests and added i686 tests
llvm-svn: 284833
2016-10-21 15:51:24 +00:00
Sanjay Patel cbaba93ce8 [DAG] use SDNode flags 'nsz' to enable fadd/fsub with zero folds
As discussed in D24815, let's start the process of killing off the broken fast-math global
state housed in TargetOptions and eliminate the need for function-level fast-math attributes.

Here we enable two similar folds that are possible when we don't care about signed-zero:
fadd nsz x, 0 --> x
fsub nsz 0, x --> -x
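
The 'nsz' requirement exists because x + 0.0 is not an identity for x == -0.0; a minimal C++ demonstration (assuming the default round-to-nearest mode):

    #include <cassert>
    #include <cmath>

    int main() {
      double x = -0.0;
      // Under IEEE-754 round-to-nearest, (-0.0) + (+0.0) yields +0.0,
      // so 'fadd x, 0' may change the sign of zero unless 'nsz' is set.
      assert(std::signbit(x));
      assert(!std::signbit(x + 0.0));
      return 0;
    }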

Note that although the test cases include a 'sin' function call, I'm side-stepping the 
FMF-on-calls question (and lack of support in the DAG) for now. It's not needed for these
tests - isNegatibleForFree/GetNegatedExpression just look through a ISD::FSIN node.

Also, when we create an FNEG node and propagate the Flags of the FSUB to it, this doesn't
actually do anything today because Flags are silently dropped for any node that is not a
binary operator.

Differential Revision: https://reviews.llvm.org/D25297

llvm-svn: 284824
2016-10-21 14:36:58 +00:00
Simon Pilgrim 32b06235da [X86][AVX512] Add mask/maskz writemask support to subvector broadcast shuffle decode comments
llvm-svn: 284821
2016-10-21 12:14:24 +00:00
Simon Pilgrim f447fbf913 [X86][AVX] Add 32-bit target tests for vector lzcnt/tzcnt to demonstrate missed folding opportunities
llvm-svn: 284816
2016-10-21 10:50:52 +00:00
Benjamin Kramer 0d41eb8cd1 Fix WebAssembly test after r284757.
The change to MachineSink shuffles code around, so disable it for this test.

llvm-svn: 284813
2016-10-21 09:51:41 +00:00
Craig Topper c1db0db864 [AVX-512] Add tests to show opportunities for commuting vpermi2/vpermt2 instructions.
Commuting will be added in a future commit.

llvm-svn: 284808
2016-10-21 05:55:40 +00:00
Konstantin Zhuravlyov 08326b6256 [AMDGPU] Emit constant address space data in .rodata section and use relocations instead of fixups (amdhsa only)
Differential Revision: https://reviews.llvm.org/D25693

llvm-svn: 284759
2016-10-20 18:12:38 +00:00
Dehao Chen f03f51555a Using branch probability to guide critical edge splitting.
Summary:
The original heuristic to break critical edges during machine sinking is relatively conservative: when there is only one instruction sinkable to the critical edge, it is likely that the machine sink pass will not break the critical edge. This leads to many speculative instructions executed at runtime. However, with profile info, we can model the splitting benefits: if the critical edge has a 50% taken rate, it is always beneficial to split the critical edge to avoid the speculated runtime instructions. This patch uses profile data to guide critical edge splitting in the machine sink pass.
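
As a toy expected-cost model of the trade-off described above (purely illustrative; not the pass's actual heuristic):

    #include <cstdio>

    int main() {
      // Without splitting, the sinkable instruction is speculated on every
      // execution; with splitting, it runs only when the edge is taken.
      double takenProb = 0.5; // edge taken rate from profile data
      double costNoSplit = 1.0;           // always executed
      double costSplit = takenProb * 1.0; // executed only on the taken path
      std::printf("no-split=%.2f split=%.2f\n", costNoSplit, costSplit);
      return 0;
    }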

The performance impact on speccpu2006 on Intel sandybridge machines:

spec/2006/fp/C++/444.namd                  25.3  +0.26%
spec/2006/fp/C++/447.dealII               45.96  -0.10%
spec/2006/fp/C++/450.soplex               41.97  +1.49%
spec/2006/fp/C++/453.povray               36.83  -0.96%
spec/2006/fp/C/433.milc                   23.81  +0.32%
spec/2006/fp/C/470.lbm                    41.17  +0.34%
spec/2006/fp/C/482.sphinx3                48.13  +0.69%
spec/2006/int/C++/471.omnetpp             22.45  +3.25%
spec/2006/int/C++/473.astar               21.35  -2.06%
spec/2006/int/C++/483.xalancbmk           36.02  -2.39%
spec/2006/int/C/400.perlbench              33.7  -0.17%
spec/2006/int/C/401.bzip2                  22.9  +0.52%
spec/2006/int/C/403.gcc                   32.42  -0.54%
spec/2006/int/C/429.mcf                   39.59  +0.19%
spec/2006/int/C/445.gobmk                 26.98  -0.00%
spec/2006/int/C/456.hmmer                 24.52  -0.18%
spec/2006/int/C/458.sjeng                 28.26  +0.02%
spec/2006/int/C/462.libquantum            55.44  +3.74%
spec/2006/int/C/464.h264ref               46.67  -0.39%

geometric mean                                   +0.20%

Manually checked 473 and 471 to verify the diff is in the noise range.

Reviewers: rengolin, davidxl

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D24818

llvm-svn: 284757
2016-10-20 18:06:52 +00:00
Pirama Arumuga Nainar 05b0f93ad3 Fix *_EXTEND_VECTOR_INREG legalization
Summary:
While promoting *_EXTEND_VECTOR_INREG nodes whose inputs are already
promoted, perform the appropriate sign extension for the promoted node
before doing the *_EXTEND_VECTOR_INREG operation.  If not, the undefined
high-order bits of the promoted operand may (a) be garbage (in case of
zext) or (b) contribute the wrong sign bit (in case of sext).

Updated the promote-vec3.ll test after this change.  The diff shows
explicit zeroing in case of zext and intermediate sign extension in case
of sext.

Reviewers: RKSimon

Subscribers: llvm-commits, srhines

Differential Revision: https://reviews.llvm.org/D25790

llvm-svn: 284752
2016-10-20 17:56:36 +00:00
Valery Pykhtin e55fd41f73 [AMDGPU] add fcopysign(f64, f32) pattern
Differential revision: https://reviews.llvm.org/D25827

llvm-svn: 284743
2016-10-20 16:17:54 +00:00
Simon Pilgrim 618d3aedaf [DAGCombiner] Add general constant vector support to (srl (shl x, c), c) -> (and x, cst2)
We already supported scalar constants / splatted constant vectors - this now accepts any (non-opaque) constant scalar / vector.
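
A quick C++ check of the scalar identity being generalized (a sketch, not the combiner code):

    #include <cassert>
    #include <cstdint>

    int main() {
      // (x << c) >>u c clears the top c bits, i.e. equals x & (~0u >> c).
      for (uint32_t x : {0u, 1u, 0xDEADBEEFu, 0xFFFFFFFFu})
        for (uint32_t c = 0; c < 32; ++c)
          assert(((x << c) >> c) == (x & (0xFFFFFFFFu >> c)));
      return 0;
    }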

llvm-svn: 284717
2016-10-20 11:10:21 +00:00
Jonas Paulsson 8010b631d5 [SystemZ] Post-RA scheduler implementation
Post-RA sched strategy and scheduling instruction annotations for z196, zEC12
and z13.

This scheduler optimizes decoder grouping and balances processor resources
(including side steering the FPd unit instructions).

The SystemZHazardRecognizer keeps track of the scheduling state, which can
be dumped with -debug-only=misched.

Reviewers: Ulrich Weigand, Andrew Trick.
https://reviews.llvm.org/D17260

llvm-svn: 284704
2016-10-20 08:27:16 +00:00
Derek Schuff 7edf93bc5b [WebAssembly] Update extending load test for new i1 behavior
r284611 changed the behavior of the DAG legalizer for sign-extending i1
values. Update the wasm extending load test to match.

llvm-svn: 284677
2016-10-20 00:10:34 +00:00
Wei Ding 3cb2a1e8d1 AMDGPU: Add a function to enable and disable IEEEBit for SC and shader,
respectively.

Differential Revision: http://reviews.llvm.org/D25789

llvm-svn: 284655
2016-10-19 22:34:49 +00:00
Reid Kleckner f8d1d12fef [GlobalMerge] Handle non-landingpad EH pads
This code crashed on funclet-style EH instructions such as catchpad,
catchswitch, and cleanuppad. Just treat all EH pad instructions
equivalently and avoid merging the globals they reference through any
use.

llvm-svn: 284633
2016-10-19 19:56:22 +00:00
Simon Pilgrim a20aeea998 [DAGCombiner] Add general constant vector support to (shl (add x, c1), c2) -> (add (shl x, c2), c1 << c2)
We already supported scalar constants / splatted constant vectors - this now accepts any (non-opaque) constant scalar / vector.

llvm-svn: 284613
2016-10-19 17:12:22 +00:00
Reid Kleckner f7ad5341d0 [WinEH] Allow catchpads to reuse the same catch object
This code used a regular map when it should have used a multimap.

llvm-svn: 284612
2016-10-19 17:08:23 +00:00
Sanjay Patel 3a3aaf67e0 [DAG] optimize negation of bool
Use mask and negate for legalization of i1 source type with SIGN_EXTEND_INREG.
With the mask, this should be no worse than 2 shifts. The mask can be eliminated
in some cases, so that should be better than 2 shifts.
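
The mask-and-negate legalization can be spelled out in C++ (a sketch of the bit manipulation, assuming a 32-bit register holding the i1):

    #include <cassert>
    #include <cstdint>

    int main() {
      // sign_extend_inreg from i1: keep only the low bit, then negate it,
      // mapping 0 -> 0 and 1 -> -1 (all bits set).
      for (uint32_t x : {0u, 1u, 2u, 0xFFFFFFFFu}) {
        int32_t expected = (x & 1) ? -1 : 0;
        int32_t extended = -int32_t(x & 1);
        assert(extended == expected);
      }
      return 0;
    }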

This change exposed some missing folds related to negation:
https://reviews.llvm.org/rL284239
https://reviews.llvm.org/rL284395

There may be others, so please let me know if you see any regressions.

Differential Revision: https://reviews.llvm.org/D25485

llvm-svn: 284611
2016-10-19 16:58:59 +00:00
Krzysztof Parzyszek 7bb63ac029 [RDF] Switch RefMap in liveness calculation to use lane masks
This required reengineering of some of the part of liveness calculation,
including fixing some issues caused by the limitations of the previous
approach. The current code is not necessarily the fastest, but it should
be functionally correct (at least more so than before). The compile-time
performance will be addressed in the future.

llvm-svn: 284609
2016-10-19 16:30:56 +00:00
Simon Pilgrim 4554e161be [DAGCombiner] Add general constant vector support to (shl (sra x, c1), c1) -> (and x, (shl -1, c1))
We already supported scalar constants / splatted constant vectors - this now accepts any (non-opaque) constant scalar / vector.

llvm-svn: 284608
2016-10-19 16:15:30 +00:00
Simon Pilgrim c2e9724909 [DAGCombiner] Add general constant vector support to (shl (mul x, c1), c2) -> (mul x, c1 << c2)
We already supported scalar constants / splatted constant vectors - this now accepts any (non-opaque) constant scalar / vector.

llvm-svn: 284607
2016-10-19 15:59:28 +00:00
Tim Northover 34c83da4c3 Sparc: fix test.
llvm-svn: 284605
2016-10-19 15:55:11 +00:00
Tim Northover 7152dcaf77 GlobalISel: support translating volatile loads and stores.
llvm-svn: 284603
2016-10-19 15:55:06 +00:00
Chris Dewhurst 2c3cdd66d2 [Sparc][LEON] Detects an erratum on UT699 LEON 3 processors involving rounding mode changes and issues an appropriate user error message.
Differential Revision: https://reviews.llvm.org/D24665

llvm-svn: 284591
2016-10-19 14:01:06 +00:00
Sjoerd Meijer 2fc4cb6f72 Reapply r284571 (with the new tests fixed).
llvm-svn: 284588
2016-10-19 13:43:02 +00:00
James Molloy fbfd173447 [Thumb-1] Synthesize TBB/TBH instructions to make use of compressed jump tables
The TBB and TBH instructions in Thumb-2 allow jump tables to be compressed into sequences of bytes or shorts respectively. These instructions do not exist in Thumb-1, however it is possible to synthesize them out of a sequence of other instructions.

It turns out this sequence is so short that it's almost never a loss for performance and is ALWAYS a significant win for code size.

TBB example:
Before: lsls r0, r0, #2    After: add  r0, pc
        adr  r1, .LJTI0_0         ldrb r0, [r0, #6]
        ldr  r0, [r0, r1]         lsls r0, r0, #1
        mov  pc, r0               add  pc, r0
  => No change in prologue code size or dynamic instruction count. Jump table shrunk by a factor of 4.

The only case that can increase dynamic instruction count is the TBH case:

Before: lsls r0, r4, #2    After: lsls r4, r4, #1
        adr  r1, .LJTI0_0         add  r4, pc
        ldr  r0, [r0, r1]         ldrh r4, [r4, #6]
        mov  pc, r0               lsls r4, r4, #1
                                  add  pc, r4
  => 1 more instruction in prologue. Jump table shrunk by a factor of 2.

So there is an argument that this should be disabled when optimizing for performance (and a TBH needs to be generated). I'm not so sure about that in practice, because on small cores with Thumb-1 performance is often tied to code size. But I'm willing to turn it off when optimizing for performance if people want (also note that TBHs are fairly rare in practice!)

llvm-svn: 284580
2016-10-19 12:06:49 +00:00
Simon Pilgrim 9122ac994b Fix line endings
llvm-svn: 284576
2016-10-19 11:16:58 +00:00
Simon Pilgrim b2ca2505cc [DAGCombine] Generalize distributeTruncateThroughAnd to work with any non-opaque constant or constant vector
llvm-svn: 284574
2016-10-19 08:57:37 +00:00
Sjoerd Meijer 3f5111d363 Revert of r284571 because of failing tests.
llvm-svn: 284572
2016-10-19 07:45:48 +00:00
Sjoerd Meijer a318779263 Checking FP function attribute values and adding more build attribute tests.
This renames the function for checking FP function attribute values and also
adds more build attribute tests (which are in separate files because build
attributes are set per file).

Differential Revision: https://reviews.llvm.org/D25625

llvm-svn: 284571
2016-10-19 07:25:06 +00:00
Craig Topper a4dc340cf2 [AVX-512] Teach isel lowering that a subvector broadcast being inserted into both halves of a 512-bit vector can be combined into a larger subvector broadcast.
Summary:
This allows us to create broadcasts of 128-bit vector loads into 512-bit vectors.

New patterns added to support 8-bit and 16-bit vector types and v2f64/v2i64->v8f64/v8i64 without DQI instructions.

There are also fallback patterns for when the load can't be folded. These patterns are a little complex, as we first need to insert the lower 128 bits into the second 128 bits using a zmm subvector insert instruction. We need to use a zmm insert in case VLX isn't available. Then we use another zmm subvector insert to take those 256 bits and insert them into the upper bits. Since we used a zmm insert to create the 256 bits, we also need to do an extract_subreg to get just the lower 256 bits to pass to the second insert.

The outer insert for the fallback patterns should have its type correct because eventually we should also support masked operations here too. So we need a DQI and a NoDQI version of the v16f32/v16i32 patterns.

Reviewers: RKSimon, delena, igorb

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D25651

llvm-svn: 284567
2016-10-19 04:44:17 +00:00
Dehao Chen 95fc43143d Revert r284545 again as the regression in PPC still exists. There is a bug in MBPI exposed by the patch.
Also update section.ll to fix a non-x86 failure.

llvm-svn: 284563
2016-10-19 01:18:25 +00:00
Dehao Chen f8ac3d26d5 Using branch probability to guide critical edge splitting.
Summary:
The original heuristic to break critical edges during machine sinking is relatively conservative: when there is only one instruction sinkable to the critical edge, it is likely that the machine sink pass will not break the critical edge. This leads to many speculative instructions executed at runtime. However, with profile info, we can model the splitting benefits: if the critical edge has a 50% taken rate, it is always beneficial to split the critical edge to avoid the speculated runtime instructions. This patch uses profile data to guide critical edge splitting in the machine sink pass.

The performance impact on speccpu2006 on Intel sandybridge machines:

spec/2006/fp/C++/444.namd                  25.3  +0.26%
spec/2006/fp/C++/447.dealII               45.96  -0.10%
spec/2006/fp/C++/450.soplex               41.97  +1.49%
spec/2006/fp/C++/453.povray               36.83  -0.96%
spec/2006/fp/C/433.milc                   23.81  +0.32%
spec/2006/fp/C/470.lbm                    41.17  +0.34%
spec/2006/fp/C/482.sphinx3                48.13  +0.69%
spec/2006/int/C++/471.omnetpp             22.45  +3.25%
spec/2006/int/C++/473.astar               21.35  -2.06%
spec/2006/int/C++/483.xalancbmk           36.02  -2.39%
spec/2006/int/C/400.perlbench              33.7  -0.17%
spec/2006/int/C/401.bzip2                  22.9  +0.52%
spec/2006/int/C/403.gcc                   32.42  -0.54%
spec/2006/int/C/429.mcf                   39.59  +0.19%
spec/2006/int/C/445.gobmk                 26.98  -0.00%
spec/2006/int/C/456.hmmer                 24.52  -0.18%
spec/2006/int/C/458.sjeng                 28.26  +0.02%
spec/2006/int/C/462.libquantum            55.44  +3.74%
spec/2006/int/C/464.h264ref               46.67  -0.39%

geometric mean                                   +0.20%

Manually checked 473 and 471 to verify the diff is in the noise range.

Reviewers: rengolin, davidxl

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D24818

llvm-svn: 284545
2016-10-18 23:24:02 +00:00
Dehao Chen 62d0e64e9e revert r284541.
llvm-svn: 284544
2016-10-18 23:11:20 +00:00
Dehao Chen ea62ae9844 Using branch probability to guide critical edge splitting.
Summary:
The original heuristic to break critical edges during machine sinking is relatively conservative: when there is only one instruction sinkable to the critical edge, it is likely that the machine sink pass will not break the critical edge. This leads to many speculative instructions executed at runtime. However, with profile info, we can model the splitting benefits: if the critical edge has a 50% taken rate, it is always beneficial to split the critical edge to avoid the speculated runtime instructions. This patch uses profile data to guide critical edge splitting in the machine sink pass.

The performance impact on speccpu2006 on Intel sandybridge machines:

spec/2006/fp/C++/444.namd                  25.3  +0.26%
spec/2006/fp/C++/447.dealII               45.96  -0.10%
spec/2006/fp/C++/450.soplex               41.97  +1.49%
spec/2006/fp/C++/453.povray               36.83  -0.96%
spec/2006/fp/C/433.milc                   23.81  +0.32%
spec/2006/fp/C/470.lbm                    41.17  +0.34%
spec/2006/fp/C/482.sphinx3                48.13  +0.69%
spec/2006/int/C++/471.omnetpp             22.45  +3.25%
spec/2006/int/C++/473.astar               21.35  -2.06%
spec/2006/int/C++/483.xalancbmk           36.02  -2.39%
spec/2006/int/C/400.perlbench              33.7  -0.17%
spec/2006/int/C/401.bzip2                  22.9  +0.52%
spec/2006/int/C/403.gcc                   32.42  -0.54%
spec/2006/int/C/429.mcf                   39.59  +0.19%
spec/2006/int/C/445.gobmk                 26.98  -0.00%
spec/2006/int/C/456.hmmer                 24.52  -0.18%
spec/2006/int/C/458.sjeng                 28.26  +0.02%
spec/2006/int/C/462.libquantum            55.44  +3.74%
spec/2006/int/C/464.h264ref               46.67  -0.39%

geometric mean                                   +0.20%

Manually checked 473 and 471 to verify the diff is in the noise range.

Reviewers: rengolin, davidxl

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D24818

llvm-svn: 284541
2016-10-18 21:36:11 +00:00
Eli Friedman c0a717ba5b Improve ARM lowering for "icmp <2 x i64> eq".
The custom lowering is pretty straightforward: basically, just AND
together the two halves of a <4 x i32> compare.
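
Per lane, the lowering computes the scalar equivalence below (a minimal C++ check, not the backend code):

    #include <cassert>
    #include <cstdint>

    int main() {
      // A 64-bit equality is the AND of the equalities of its 32-bit halves,
      // which is what ANDing the two halves of a <4 x i32> compare computes.
      uint64_t a = 0x0123456789ABCDEFULL;
      for (uint64_t b : {a, a ^ 1, a ^ (1ULL << 40)}) {
        bool lo = uint32_t(a) == uint32_t(b);
        bool hi = uint32_t(a >> 32) == uint32_t(b >> 32);
        assert((a == b) == (lo && hi));
      }
      return 0;
    }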

Differential Revision: https://reviews.llvm.org/D25713

llvm-svn: 284536
2016-10-18 21:03:40 +00:00
Evandro Menezes 4dd6c68d67 [AArch64] Fix test triplet
llvm-svn: 284532
2016-10-18 20:41:30 +00:00
Evandro Menezes ce8d60156c [AArch64] Avoid materializing 0.0 when generating FP SELECT
Transform `a == 0.0 ? 0.0 : x` to `a == 0.0 ? a : x` and `a != 0.0 ? x : 0.0`
to `a != 0.0 ? x : a` to avoid materializing 0.0 for FCSEL, since it does not
have to be materialized beforehand for FCMP, as it has a form that has 0.0
as an implicit operand.

Differential Revision: https://reviews.llvm.org/D24808

llvm-svn: 284531
2016-10-18 20:37:35 +00:00
Tim Northover 6e9043009e GlobalISel: translate the @llvm.objectsize intrinsic.
llvm-svn: 284527
2016-10-18 20:03:51 +00:00
Tim Northover 55782222c0 GlobalISel: select small binary operations on AArch64.
AArch64 actually supports many 8-bit operations under the definition used by
GlobalISel: the designated information-carrying bits of a GPR32 get the right
value if you just use the normal 32-bit instruction.

llvm-svn: 284526
2016-10-18 20:03:48 +00:00
Tim Northover 3f18603c52 GlobalISel: translate memcpy intrinsics.
llvm-svn: 284525
2016-10-18 20:03:45 +00:00
Tim Northover 4494d69862 GlobalISel: support floating-point constants on AArch64.
Patch from Ahmed Bougacha.

llvm-svn: 284523
2016-10-18 19:47:57 +00:00
Simon Pilgrim 476560aac4 [X86][SSE] Added vector lshr/shl combine tests
This doesn't cover all combines in DAGCombiner::visitSRL/visitSHL yet, but it identifies several cases where we fail to combine vectors (or non-splatted vectors)

llvm-svn: 284518
2016-10-18 19:28:12 +00:00
Simon Pilgrim a2778b2591 [X86][SSE] Added vector ashr combine tests
This doesn't cover all combines in DAGCombiner::visitSRA yet, but it identifies several cases where we fail to combine vectors (or non-splatted vectors)

llvm-svn: 284498
2016-10-18 17:31:07 +00:00
Simon Pilgrim 25e9628978 [DAGCombiner] Add splatted vector support to (udiv x, (shl pow2, y)) -> x >>u (log2(pow2)+y)
llvm-svn: 284491
2016-10-18 16:36:00 +00:00
Simon Pilgrim ca3072ac58 [X86][AVX512] Add mask/maskz writemask support to constant pool shuffle decode comments
llvm-svn: 284488
2016-10-18 15:45:37 +00:00
Vasileios Kalintiris 3955b75ba9 [mips][FastISel] Instantiate the MipsFastISel class only for targets that support FastISel.
Summary:
Instead of instantiating the MipsFastISel class and checking if the
target is supported in the overridden methods, we should perform that
check before creating the class. This allows us to enable FastISel *only*
for targets that truly support it, i.e. MIPS32 to MIPS32R5.

Reviewers: sdardis

Subscribers: ehostunreach, llvm-commits

Differential Revision: https://reviews.llvm.org/D24824

llvm-svn: 284475
2016-10-18 13:05:42 +00:00
Simon Pilgrim 33f138b566 [X86][SSE] Added extra (mul x, (1 << c)) -> x << c style vector tests
vXi64 will benefit more from lowering to shifts than multiplies

llvm-svn: 284461
2016-10-18 09:29:13 +00:00
Javed Absar e7c338081a [ARM] Assign cost of scaling for Cortex-R52
This patch assigns the cost of the scaling used in addressing for Cortex-R52.

On Cortex-R52 a negated register offset takes longer than a non-negated
register offset, in a register-offset addressing mode.

Differential Revision: http://reviews.llvm.org/D25670

Reviewer: jmolloy
llvm-svn: 284460
2016-10-18 09:08:54 +00:00
Simon Pilgrim 4ddc92b6cd [X86][SSE] Add lowering to cvttpd2dq/cvttps2dq for fptosi v2f64/2f32 to 2i32
As discussed on PR28461 we currently miss the chance to lower "fptosi <2 x double> %arg to <2 x i32>" to cvttpd2dq due to its use of illegal types.

This patch adds support for fptosi to 2i32 from both 2f64 and 2f32.

It also recognises that cvttpd2dq zeroes the upper 64-bits of the xmm result (similar to D23797) - we still don't do this for the cvttpd2dq/cvttps2dq intrinsics - this can be done in a future patch.

Differential Revision: https://reviews.llvm.org/D23808

llvm-svn: 284459
2016-10-18 07:42:15 +00:00
Dean Michael Berris 156f6cafc2 [XRay] Support for tail calls for ARM no-Thumb
This patch adds simplified support for tail calls on ARM with XRay instrumentation.

Known issue: compiled with generic flags: `-O3 -g -fxray-instrument -Wall
-std=c++14  -ffunction-sections -fdata-sections` (this list doesn't include my
specific flags like --target=armv7-linux-gnueabihf etc.), the following program

    #include <cstdio>
    #include <cassert>
    #include <xray/xray_interface.h>

    [[clang::xray_always_instrument]] void __attribute__ ((noinline)) fC() {
      std::printf("In fC()\n");
    }

    [[clang::xray_always_instrument]] void __attribute__ ((noinline)) fB() {
      std::printf("In fB()\n");
      fC();
    }

    [[clang::xray_always_instrument]] void __attribute__ ((noinline)) fA() {
      std::printf("In fA()\n");
      fB();
    }

    // Avoid infinite recursion in case the logging function is instrumented (so calls logging
    //   function again).
    [[clang::xray_never_instrument]] void simplyPrint(int32_t functionId, XRayEntryType xret)
    {
      printf("XRay: functionId=%d type=%d.\n", int(functionId), int(xret));
    }

    int main(int argc, char* argv[]) {
      __xray_set_handler(simplyPrint);

      printf("Patching...\n");
      __xray_patch();
      fA();

      printf("Unpatching...\n");
      __xray_unpatch();
      fA();

      return 0;
    }

gives the following output:

    Patching...
    XRay: functionId=3 type=0.
    In fA()
    XRay: functionId=3 type=1.
    XRay: functionId=2 type=0.
    In fB()
    XRay: functionId=2 type=1.
    XRay: functionId=1 type=0.
    XRay: functionId=1 type=1.
    In fC()
    Unpatching...
    In fA()
    In fB()
    In fC()

So for function fC() the exit sled seems to be called too early, before
printing In fC().

Debugging shows that the above happens because printf from fC is also called as
a tail call. So first the exit sled of fC is executed, and only then printf is
jumped into. So it seems we can't do anything about this with the current
approach (i.e. within the simplification described in
https://reviews.llvm.org/D23988 ).

Differential Revision: https://reviews.llvm.org/D25030

llvm-svn: 284456
2016-10-18 05:54:15 +00:00
Craig Topper 72b9f9864f [AVX-512] Add test case to check shuffle decoding for masked vpermilps for r284450.
This is harder to do for vpermilpd, as shuffle combining turns the constant vector into an immediate, since any vpermilpd whose input is a constant vector can also be encoded with the immediate form.

llvm-svn: 284455
2016-10-18 05:44:04 +00:00
Craig Topper 448358b5f1 [X86] Fix DecodeVPERMVMask to handle cases where the constant pool entry has a different type than the shuffle itself.
This is especially important for 32-bit targets with 64-bit shuffle elements.

llvm-svn: 284453
2016-10-18 04:48:33 +00:00
Craig Topper 7268bf99ab [AVX-512] Fix DecodeVPERMV3Mask to handle cases where the constant pool entry has a different type than the shuffle itself.
Summary: This is especially important for 32-bit targets with 64-bit shuffle elements. This is similar to how PSHUFB and VPERMIL handle the same problem.

Reviewers: RKSimon

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D25666

llvm-svn: 284451
2016-10-18 04:00:32 +00:00
Konstantin Zhuravlyov 98a3ac7106 [AMDGPU] Mark .note section SHF_ALLOC so lld creates a segment for it
Differential Revision: https://reviews.llvm.org/D25694

llvm-svn: 284435
2016-10-17 22:40:15 +00:00
Sanjay Patel 8716b3cbe0 remove FIXME comment (fixed with r284424); NFC
llvm-svn: 284427
2016-10-17 21:08:39 +00:00
Sanjay Patel 523cd8290a [DAG] use isConstOrConstSplat in ComputeNumSignBits to optimize SRA
The scalar version of this pattern was noted in:
https://reviews.llvm.org/D25485

and fixed with:
https://reviews.llvm.org/rL284395

More refactoring of the constant/splat helpers is needed and will happen in follow-up patches.

Differential Revision: https://reviews.llvm.org/D25685

llvm-svn: 284424
2016-10-17 20:41:39 +00:00
Tim Northover 020d104496 GlobalISel: support wider range of load/store sizes in AArch64.
llvm-svn: 284406
2016-10-17 18:36:53 +00:00
Tom Stellard bc6c523cce AMDGPU/SI: Fix LowerParameter() for i16 arguments
Summary:
If we are loading an i16 value from a 32-bit memory location, then
we need to be able to truncate the loaded value to i16.

Reviewers: arsenm

Subscribers: kzhuravl, wdng, nhaehnle, yaxunl, tony-tye, llvm-commits

Differential Revision: https://reviews.llvm.org/D25198

llvm-svn: 284397
2016-10-17 16:21:45 +00:00
Sanjay Patel 2cf6bfaf73 [DAG] optimize away an arithmetic-right-shift of a 0 or -1 value
This came up as part of:
https://reviews.llvm.org/D25485

Note that the vector case is missed because ComputeNumSignBits() is deficient for vectors.

llvm-svn: 284395
2016-10-17 15:58:28 +00:00
Sanjay Patel 95db75791e [x86] add tests to show missing DAG folds for arithmetic-shift-right
llvm-svn: 284394
2016-10-17 15:44:59 +00:00
Sanjay Patel 832962110a [x86] auto-generate checks
llvm-svn: 284393
2016-10-17 15:38:41 +00:00
James Molloy aa79b19a3e [SDAG] Use ABI type alignment for constant pools when optimizing for size
SelectionDAG::getConstantPool will automatically determine an appropriate alignment if one is not specified. It does this by querying the type's preferred alignment. This can end up creating quite a lot of padding when the preferred alignment for vectors is 128.

In optimize-for-size mode, it makes sense to instead query the ABI type alignment which is often smaller and causes less padding.

llvm-svn: 284381
2016-10-17 12:54:07 +00:00
Craig Topper 5b24cd31f5 [AVX-512] Add shuffle combining support for vpermi2var shuffles derived from existing support for vpermt2var.
llvm-svn: 284357
2016-10-17 04:26:47 +00:00
Craig Topper 2052318700 [AVX-512] Add vpermi2var test cases to shuffle combining test case. Combining will be added in a future commit.
llvm-svn: 284356
2016-10-17 04:26:44 +00:00
Craig Topper 715ad7fef5 [AVX-512] Add support for turning a 256-bit load that goes to both halves of an insert_subvector into a subvector broadcast.
Differential Revision: https://reviews.llvm.org/D25650

llvm-svn: 284353
2016-10-16 23:29:51 +00:00
Craig Topper aa1370ac57 [AVX-512] Fix the operand order for vpermi2var_qi intrinsics to match the other vpermi2var intrinsics.
llvm-svn: 284329
2016-10-16 04:54:35 +00:00
Craig Topper 4729fe8bb6 [AVX-512] Correct execution domain for VPERMT2PS and VPERMI2PS.
llvm-svn: 284328
2016-10-16 04:54:31 +00:00
Simon Pilgrim 730f83a750 [X86][SSE] Added some basic examples of knownbits failing for vector types
computeKnownBits only returns the bits common to all vector elements instead of considering only the elements that are actually used

llvm-svn: 284308
2016-10-15 19:29:26 +00:00
Simon Pilgrim d654530e55 [X86] Regenerate known bits test
llvm-svn: 284306
2016-10-15 18:56:38 +00:00
Craig Topper dde865afb5 [AVX-512] Add shuffle comments for vbroadcast instructions.
llvm-svn: 284305
2016-10-15 16:26:07 +00:00
Tom Stellard 961811c906 AMDGPU/SI: Handle s_getreg hazard in GCNHazardRecognizer
Reviewers: arsenm

Subscribers: kzhuravl, wdng, nhaehnle, yaxunl, llvm-commits, tony-tye

Differential Revision: https://reviews.llvm.org/D25526

llvm-svn: 284298
2016-10-15 00:58:14 +00:00
Tim Northover 69fa84a6e9 GlobalISel: rename legalizer components to match others.
The previous names were both misleading (the MachineLegalizer actually
contained the info tables) and inconsistent with the selector & translator
(in having a "Machine" prefix). This should make everything sensible again.

The only functional change is the name of a couple of command-line options.

llvm-svn: 284287
2016-10-14 22:18:18 +00:00
Tim Northover 22bff66a9a PowerPC: specify full triple to avoid different Darwin asm syntax.
llvm-svn: 284281
2016-10-14 21:25:29 +00:00
Sanjay Patel 8f5bdb9d28 [ARM] add tests for PR30660
llvm-svn: 284280
2016-10-14 20:52:43 +00:00
Sanjay Patel 928f3d73f6 [PowerPC] add tests for PR30661
llvm-svn: 284279
2016-10-14 20:51:41 +00:00
Guozhi Wei 0cd65429be [PPC] Shorter sequence to load 64bit constant with same hi/lo words
This is a patch to implement pr30640.

When a 64-bit constant has the same hi/lo words, we can use rldimi to copy the low word into the high word of the same register.
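
A small C++ model of the word replication that rldimi performs here (an arithmetic sketch, not the PPC ISel code):

    #include <cassert>
    #include <cstdint>

    int main() {
      // Materialize the low 32-bit word, then copy it into the high word;
      // for constants with matching hi/lo words this rebuilds the full value.
      uint32_t lo = 0x12345678u;
      uint64_t replicated = (uint64_t(lo) << 32) | lo;
      assert(replicated == 0x1234567812345678ULL);
      return 0;
    }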

This optimization caused a failure in test case bperm.ll because of a suboptimal heuristic in function SelectAndParts64, which chooses AND or ROTATE to extract bit groups from a register and ORs them together. The optimization lowers the cost of loading the 64-bit constant mask used in the AND method, producing a different code sequence, but the ROTATE method is actually better in this test case: with ROTATE the final OR operation can be avoided, since rldimi can insert the rotated bits into the target register directly. So this patch also enhances SelectAndParts64 to prefer the ROTATE method when the two methods have the same cost and there are multiple bit groups that need to be ORed together.

Differential Revision: https://reviews.llvm.org/D25521

llvm-svn: 284276
2016-10-14 20:41:50 +00:00
Tom Stellard 09c2bd6bd4 AMDGPU/SI: Use new SimplifyDemandedBits helper for multi-use operations
Summary:
We are using this helper for our 24-bit arithmetic combines, so we are now able to eliminate multi-use operations that mask the high-bits of 24-bit inputs (e.g. and x, 0xffffff)

Reviewers: arsenm, nhaehnle

Subscribers: tony-tye, arsenm, kzhuravl, wdng, nhaehnle, llvm-commits, yaxunl

Differential Revision: https://reviews.llvm.org/D24672

llvm-svn: 284267
2016-10-14 19:14:29 +00:00
David L Kreitzer 01a057a0c4 Add a pass to optimize patterns of vectorized interleaved memory accesses for
X86. The pass optimizes as a unit the entire wide load + shuffles pattern
produced by interleaved vectorization. This initial patch optimizes one pattern
(64-bit elements interleaved by a factor of 4). Future patches will generalize
to additional patterns.

Patch by Farhana Aleen

Differential revision: http://reviews.llvm.org/D24681

llvm-svn: 284260
2016-10-14 18:20:41 +00:00
Tom Stellard 64a9d0876c AMDGPU/SI: Don't allow unaligned scratch access
Summary: The hardware doesn't support this.

Reviewers: arsenm

Subscribers: kzhuravl, wdng, nhaehnle, yaxunl, llvm-commits, tony-tye

Differential Revision: https://reviews.llvm.org/D25523

llvm-svn: 284257
2016-10-14 18:10:39 +00:00
Pierre Gousseau b6d652adb5 [X86] Take advantage of the lzcnt instruction on btver2 architectures when ORing comparisons to zero.
This change adds transformations such as:
  zext(or(setcc(eq, (cmp x, 0)), setcc(eq, (cmp y, 0))))
  To:
  srl(or(ctlz(x), ctlz(y)), log2(bitsize(x)))
This optimisation is beneficial on Jaguar architecture only, where lzcnt has a good reciprocal throughput.
Other architectures such as Intel's Haswell/Broadwell or AMD's Bulldozer/PileDriver do not benefit from it.
For this reason the change also adds a "HasFastLZCNT" feature which gets enabled for Jaguar.
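
A quick C++ check of the identity behind the transformation (using the GCC/Clang __builtin_clz builtin as a stand-in for lzcnt; a sketch, not the backend code):

    #include <cassert>
    #include <cstdint>

    // For 32-bit values, ctlz(v) == 32 exactly when v == 0, so bit 5 of the
    // count is set only for zero inputs. ORing two counts and shifting right
    // by log2(32) = 5 therefore computes (x == 0) | (y == 0).
    static uint32_t ctlz32(uint32_t v) { return v ? __builtin_clz(v) : 32; }

    int main() {
      for (uint32_t x : {0u, 1u, 123u})
        for (uint32_t y : {0u, 7u, 0xFFFFFFFFu})
          assert(((ctlz32(x) | ctlz32(y)) >> 5) ==
                 uint32_t((x == 0) | (y == 0)));
      return 0;
    }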

Differential Revision: https://reviews.llvm.org/D23446

llvm-svn: 284248
2016-10-14 16:41:38 +00:00
Sanjay Patel 00fc7a6159 [DAG] add folds for negated shifted sign bit
The same folds exist in InstCombine already.

This came up as part of:
https://reviews.llvm.org/D25485

llvm-svn: 284239
2016-10-14 14:26:47 +00:00
Sanjay Patel 7b4e4afb61 [x86] add tests to show missing folds for negated shifted sign bit
llvm-svn: 284238
2016-10-14 14:14:40 +00:00
Nicolai Haehnle 67624af0cc AMDGPU: Select 64-bit {ADD,SUB}{C,E} nodes
Summary:
This will be used for 64-bit MULHU, which is in turn used for the 64-bit
divide-by-constant optimization (see D24822).

Reviewers: arsenm, tstellarAMD

Subscribers: kzhuravl, wdng, yaxunl, llvm-commits, tony-tye

Differential Revision: https://reviews.llvm.org/D25289

llvm-svn: 284224
2016-10-14 10:30:00 +00:00
Diana Picus 68c7b04e8d [GlobalISel] Get the AArch64 tests to work on Linux
Mostly this just means changing the triple from aarch64-apple-ios to the generic
aarch64--. Only one test needs more significant changes, but GlobalISel already
does the right thing so it's ok to just change the checks.

Differential Revision: https://reviews.llvm.org/D25532

llvm-svn: 284223
2016-10-14 10:19:40 +00:00
Craig Topper 40feb7f157 [DAGCombiner] Teach createBuildVecShuffle to handle cases where input vectors are less than half of the output vector size.
This will be needed by a future commit to support sign/zero extending from v8i8 to v8i64, which requires a sign/zero_extend_vector_inreg to be created, which requires v8i8 to be concatenated up to v64i8 and goes through this code.

llvm-svn: 284204
2016-10-14 06:00:42 +00:00
Konstantin Zhuravlyov c96b5d7073 [AMDGPU] Emit 32-bit lo/hi got and pc relative variant kinds for external and global address space variables
Differential Revision: https://reviews.llvm.org/D25562

llvm-svn: 284196
2016-10-14 04:37:34 +00:00
Saleem Abdulrasool 7705c4f1be CodeGen: use MSVC division on windows itanium
Windows itanium is identical to MSVC when dealing with everything but C++.
Lower the math routines into msvcrt rather than compiler-rt.

llvm-svn: 284175
2016-10-13 23:00:11 +00:00
Saleem Abdulrasool 06383dd272 CodeGen: adjust floating point operations in Windows itanium
Windows itanium is equivalent to MSVC except in C++ mode.  Ensure that we
promote the 32-bit floating point operations to their 64-bit equivalents.

llvm-svn: 284173
2016-10-13 22:38:15 +00:00
Sriraman Tallam f29fa586e1 New llc option pie-copy-relocations to optimize access to extern globals.
This option indicates copy relocations support is available from the linker
when building as PIE and allows accesses to extern globals to avoid the GOT.

Differential Revision: https://reviews.llvm.org/D24849

llvm-svn: 284160
2016-10-13 20:54:39 +00:00
Nirav Dave a81682aad4 Revert "In visitSTORE, always use FindBetterChain, rather than only when UseAA is enabled."
This reverts commit r284151, which appears to be triggering LTO
failures on Hexagon.

llvm-svn: 284157
2016-10-13 20:23:25 +00:00
Nirav Dave 4b36957243 In visitSTORE, always use FindBetterChain, rather than only when UseAA is enabled.
Retrying after upstream changes.

   Simplify Consecutive Merge Store Candidate Search

   Now that address aliasing is much less conservative, push through a
   simplified store-merging search which only checks for parallel stores
   through the chain subgraph. This is cleaner, as it separates the
   non-interfering loads/stores from the store-merging logic.

   When merging stores, search up the chain through a single load, and
   find all possible stores by looking down through a load and a
   TokenFactor to all stores visited. This improves the quality of the
   output SelectionDAG and generally the output CodeGen (with some
   exceptions).

   Additional Minor Changes:

       1. Finishes removing unused AliasLoad code
       2. Unifies the chain aggregation in the merged stores across
       code paths
       3. Re-add the Store node to the worklist after calling
       SimplifyDemandedBits.
       4. Increase GatherAllAliasesMaxDepth from 6 to 18. That number is
       arbitrary, but seemed sufficient to not cause regressions in
       tests.

   This finishes the change Matt Arsenault started in r246307 and
   jyknight's original patch.

   Many tests required some changes as memory operations are now
   reorderable. Some tests relying on the order were changed to use
   volatile memory operations.

   Noteworthy tests:

    CodeGen/AArch64/argument-blocks.ll -
      It's not entirely clear what the test_varargs_stackalign test is
      supposed to be asserting, but the new code looks right.

    CodeGen/AArch64/arm64-memset-inline.ll -
    CodeGen/AArch64/arm64-stur.ll -
    CodeGen/ARM/memset-inline.ll -

      The backend now generates *worse* code due to store merging
      succeeding, as we do not do a 16-byte constant-zero store efficiently.

    CodeGen/AArch64/merge-store.ll -
      Improved, but there still seems to be an extraneous vector insert
      from an element to itself?

    CodeGen/PowerPC/ppc64-align-long-double.ll -
      Worse code emitted in this case, due to the improved store->load
      forwarding.

    CodeGen/X86/dag-merge-fast-accesses.ll -
    CodeGen/X86/MergeConsecutiveStores.ll -
    CodeGen/X86/stores-merging.ll -
    CodeGen/Mips/load-store-left-right.ll -
      Restored correct merging of non-aligned stores

    CodeGen/AMDGPU/promote-alloca-stored-pointer-value.ll -
      Improved. Correctly merges buffer_store_dword calls

    CodeGen/AMDGPU/si-triv-disjoint-mem-access.ll -
      Improved. Sidesteps loading a stored value and
      merges two stores

    CodeGen/X86/pr18023.ll -
      This test has been removed, as it was asserting incorrect
      behavior. Non-volatile stores *CAN* be moved past volatile loads,
      and now are.

    CodeGen/X86/vector-idiv.ll -
    CodeGen/X86/vector-lzcnt-128.ll -
      It's basically impossible to tell what these tests are actually
      testing. But, looks like the code got better due to the memory
      operations being recognized as non-aliasing.

    CodeGen/X86/win32-eh.ll -
      Both loads of the securitycookie are now merged.

    CodeGen/AMDGPU/vgpr-spill-emergency-stack-slot-compute.ll -
      This test appears to work but no longer exhibits the spill behavior.

Reviewers: arsenm, hfinkel, tstellarAMD, jyknight, nhaehnle

Subscribers: wdng, nhaehnle, nemanjai, arsenm, weimingz, niravd, RKSimon, aemerson, qcolombet, dsanders, resistor, tstellarAMD, t.p.northover, spatel

Differential Revision: https://reviews.llvm.org/D14834

llvm-svn: 284151
2016-10-13 19:20:16 +00:00
Igor Breger 8409c356ad [X86][AVX512] Fix sext v32i1 -> v32i8 lowering.
Fix PR30600.

Differential Revision: https://reviews.llvm.org/D25554

llvm-svn: 284134
2016-10-13 17:20:38 +00:00
Reid Kleckner 468e793fea Fix for PR30687. Avoid dereferencing MBB.end().
We don't need to return a MachineInstr* from these stack probe insertion
calls anyway. If we ever need to add it back, we can return an iterator
instead.

Based on a patch by David Kreitzer

This bug is a consequence of

r279314 | dexonsmith | 2016-08-19 13:40:12 -0700 (Fri, 19 Aug 2016) | 110 lines

We hit the "Assertion `!NodePtr->isKnownSentinel()' failed" assertion,
but only when inserting a stack probe call at the end of an MBB, which
isn't necessarily a common situation.

Differential Revision: https://reviews.llvm.org/D25566

llvm-svn: 284130
2016-10-13 15:48:48 +00:00
Javed Absar 85874a9360 [ARM]: Assign cost of scaling used in addressing mode for ARM cores
This patch assigns the cost of the scaling used in addressing.
On many ARM cores, a negated register offset takes longer than a
non-negated register offset, in a register-offset addressing mode.

For instance:

(1) LDR R0, [R1, R2 LSL #2]
(2) LDR R0, [R1, -R2 LSL #2]

Above, (1) takes fewer cycles than (2).

By assigning appropriate scaling factor cost, we enable the LLVM
to make the right trade-offs in the optimization and code-selection phase.

Differential Revision: http://reviews.llvm.org/D24857

Reviewers: jmolloy, rengolin
llvm-svn: 284127
2016-10-13 14:57:43 +00:00
Sanjay Patel 24b6ef7792 [x86] add negate-i1 run for 32-bit target
llvm-svn: 284124
2016-10-13 14:27:08 +00:00
Simon Pilgrim cb59b5257c [DAGCombiner] Add vector support to (mul (shl X, Y), Z) -> (shl (mul X, Z), Y) style combines
llvm-svn: 284122
2016-10-13 14:04:35 +00:00
Matt Arsenault 253640e18d AMDGPU: Assume spilling will occur at -O0
Because everything live is spilled at the end of a
block by fast regalloc, assume this will happen and
avoid the copies of the resource descriptor.

llvm-svn: 284119
2016-10-13 13:10:00 +00:00
Simon Pilgrim 26b6dbc369 Copy+paste typo in comment describing combine test
The comment repeated "fold (mul x, 0) -> 0" instead of "fold (mul x, 1) -> x".

llvm-svn: 284118
2016-10-13 12:54:32 +00:00
Simon Pilgrim fa8fadc0e5 [DAGCombiner] Add vector support to C2-(A+C1) -> (C2-C1)-A folding
llvm-svn: 284117
2016-10-13 12:49:31 +00:00
Simon Pilgrim 833b8a2071 [DAGCombiner] Add vector support to (sub -1, x) -> (xor x, -1) canonicalization
Improves commutation potential

llvm-svn: 284113
2016-10-13 12:05:20 +00:00
Oren Ben Simhon 92ccbf20ff [X86] Basic additions to support RegCall Calling Convention.
The Register Calling Convention (RegCall) was introduced by Intel to optimize parameter transfer on function calls.
This calling convention ensures that as many values as possible are passed or returned in registers.
This commit presents the basic additions to LLVM CodeGen in order to support RegCall in X86.
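As a hedged sketch of what the convention looks like from source code, assuming the Clang-side attribute spelling __attribute__((regcall)), which is not part of this commit:

  // Hypothetical usage once the Clang side lands; the attribute spelling
  // is an assumption, not something added by this commit.
  int __attribute__((regcall)) sum3(int a, int b, int c) {
    return a + b + c; // arguments expected to arrive in registers
  }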

Differential Revision: http://reviews.llvm.org/D25022

llvm-svn: 284108
2016-10-13 07:53:43 +00:00
Craig Topper 3d41f91f61 [AVX-512] Fix v16i32 zero extending shuffle test case so it's really zero extend.
llvm-svn: 284106
2016-10-13 05:41:01 +00:00
Craig Topper ff23af4299 [AVX-512] Teach shuffle lowering to recognize 512-bit zero extends.
llvm-svn: 284105
2016-10-13 05:29:41 +00:00
Craig Topper 05242739c2 [AVX-512] Add tests for basic 512-bit zero extending shuffle patterns. Code will be improved in a future commit.
llvm-svn: 284104
2016-10-13 05:29:37 +00:00
Quentin Colombet 6b87a3109c [AArch64][RegisterBankInfo] Provide alternative mappings for 64-bit load
This allows RegBankSelect in greedy mode to get rid some of the cross
register bank copies when loads are involved in the chain of
computation.

llvm-svn: 284097
2016-10-13 01:01:23 +00:00
Reid Kleckner 741d8a21d3 Correct PrivateLinkage for COFF
- Use storage class C_STAT for 'PrivateLinkage'. The storage class for
  PrivateLinkage should be equal to that of InternalLinkage.

- Set 'PrivateGlobalPrefix' from "L" to ".L" for MM_WinCOFF (includes
  x86_64). MM_WinCOFF has an empty GlobalPrefix '\0', so a
  PrivateGlobalPrefix of "L" may conflict with normal symbol names
  starting with 'L'.

Based on a patch by Han Sangjin! Manually updated test cases.

llvm-svn: 284096
2016-10-13 00:55:24 +00:00
Quentin Colombet cd80e97e88 [AArch64][RegisterBankInfo] Provide alternative mappings for G_BITCASTs.
Thanks to this patch, RegBankSelect is able to get rid of some register
bank copies as demonstrated in the test case.

llvm-svn: 284094
2016-10-13 00:34:48 +00:00
Quentin Colombet 9e64919b7c [AArch64][RegisterBankInfo] Use static mapping for same bank G_BITCAST.
NFC.

llvm-svn: 284090
2016-10-13 00:12:04 +00:00
Quentin Colombet db643d9091 [AArch64][MachineLegalizer] Mark more G_BITCAST as legal.
Basically, any vector type that fits in a 32-bit register is also valid
as far as copies are concerned.

llvm-svn: 284089
2016-10-13 00:12:01 +00:00
Albert Gutowski 3245ee7e57 fix function label name in addressofreturnaddress test
llvm-svn: 284085
2016-10-12 23:58:45 +00:00
Krzysztof Parzyszek abc0662f04 Handle lane masks in LivePhysRegs when adding live-ins
Differential Revision: https://reviews.llvm.org/D25533

llvm-svn: 284076
2016-10-12 22:53:41 +00:00
Tim Northover fb8d989818 GlobalISel: support G_TRUNC selection on AArch64.
Ahmed's patch again.

llvm-svn: 284075
2016-10-12 22:49:15 +00:00
Tim Northover 69271c64d5 GlobalISel: support int <-> float conversions on AArch64.
More of Ahmed's work.

llvm-svn: 284074
2016-10-12 22:49:11 +00:00
Tim Northover 7dd378dd08 GlobalISel: select G_FCMP instructions on AArch64.
Another of Ahmed's patches.

llvm-svn: 284073
2016-10-12 22:49:07 +00:00
Tim Northover 6c02ad5e4f GlobalISel: support selection of G_ICMP on AArch64.
Patch from Ahmed Bougacha again.

llvm-svn: 284072
2016-10-12 22:49:04 +00:00
Tim Northover 5e3dbf326c GlobalISel: select G_BRCOND instructions on AArch64.
llvm-svn: 284071
2016-10-12 22:49:01 +00:00
Tim Northover 6aacd27cd7 GlobalISel: mark G_BRCOND on s1 as legal.
It's going to be a TBNZ (at -O0) anyway, so the high bits don't matter.

llvm-svn: 284070
2016-10-12 22:48:36 +00:00
Albert Gutowski 795d7d6381 Create llvm.addressofreturnaddress intrinsic
Summary: We need a new LLVM intrinsic to implement MS _AddressOfReturnAddress builtin on 64-bit Windows.
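For illustration, a hedged usage sketch of the MSVC builtin this intrinsic backs (compiles with MSVC or clang-cl only):

  #include <intrin.h>

  void **return_address_slot() {
    // _AddressOfReturnAddress yields the address of the return-address
    // slot on the stack, not the return address itself (_ReturnAddress).
    return static_cast<void **>(_AddressOfReturnAddress());
  }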

Reviewers: majnemer, rnk

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D25293

llvm-svn: 284061
2016-10-12 22:13:19 +00:00
Krzysztof Parzyszek d62669d791 [MIRParser] Parse lane masks for register live-ins
Differential Revision: https://reviews.llvm.org/D25530

llvm-svn: 284052
2016-10-12 21:06:45 +00:00
Krzysztof Parzyszek 3cb5ffeb35 Fix testcases failing after r284036
The codegen has changed slightly between my tests and the commit.

llvm-svn: 284049
2016-10-12 20:39:33 +00:00
Krzysztof Parzyszek 8271be9a1d Do not remove implicit defs in BranchFolder
Branch folder removes implicit defs if they are the only non-branching
instructions in a block, and the branches do not use the defined registers.
The problem is that in some cases these implicit defs are required for
the liveness information to be correct.

Differential Revision: https://reviews.llvm.org/D25478

llvm-svn: 284036
2016-10-12 19:50:57 +00:00
Matt Arsenault d486d3f8d1 AMDGPU: Initial implementation of VGPR indexing mode
This is the most basic handling of the indirect access
pseudos using GPR indexing mode. This currently only enables
the mode for a single v_mov_b32 and then disables it.
This is much more complicated to use than the movrel instructions,
so a new optimization pass is probably needed to fold the access
into the uses and keep the mode enabled for them.

llvm-svn: 284031
2016-10-12 18:49:05 +00:00
Zvi Rackover 025e8614ab [X86] Add the v4i32 flavor test-case for pr30371
llvm-svn: 284025
2016-10-12 17:06:30 +00:00
Tom Stellard fac248cb5f AMDGPU/SI: Change mimg intrinsic signatures
This makes more fields overridable and removes redundant bits.

Patch by: Changpeng Fang

llvm-svn: 284024
2016-10-12 16:35:29 +00:00
Simon Pilgrim 08190943cb [DAGCombiner] Update most ADD combines to support general vector combines
Add a number of helper functions to match scalar or vector-equivalent constant/splat values, allowing most of the combine patterns to be used on vectors.
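A minimal self-contained sketch of the idea, with illustrative names rather than LLVM's actual API: a fold that matches a scalar constant can also fire on a vector whose lanes all hold that constant:

  #include <optional>
  #include <vector>

  std::optional<int> matchConstOrSplat(const std::vector<int> &Lanes) {
    if (Lanes.empty())
      return std::nullopt;
    for (int L : Lanes)
      if (L != Lanes.front())
        return std::nullopt; // not a splat
    return Lanes.front();    // a scalar is just a one-lane "vector"
  }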

Differential Revision: https://reviews.llvm.org/D25374

llvm-svn: 284015
2016-10-12 13:48:10 +00:00
Konstantin Zhuravlyov 081385a74e [DAGCombiner] Do not remove the load of stored values when optimizations are disabled
This combiner breaks debug experience and should not be run when optimizations are disabled.

For example:
  int main() {
    int j = 0;
    j += 2;
    if (j == 2)
      return 0;
    return 5;
  }
When debugging this code compiled at /O0, it should be valid to break at the line "j += 2;" and edit the value of j; doing so should change the return value of the function.

Differential Revision: https://reviews.llvm.org/D19268

llvm-svn: 284014
2016-10-12 13:44:24 +00:00
Diana Picus 40f9341154 Add AArch64 unit tests
Add unit tests for checking a few tricky instruction sizes. Also remove the old
tests for the instruction sizes, which were clunky and brittle.

Since this is the first set of target-specific unit tests, we need to add some
CMake plumbing. In the future, adding unit tests for a given target will be as
simple as creating a directory with the same name as the target under
unittests/Target. The tests are only run if the target is enabled in
LLVM_TARGETS_TO_BUILD.

Differential Revision: https://reviews.llvm.org/D24548

llvm-svn: 283990
2016-10-12 09:00:44 +00:00
Quentin Colombet a907b5ca7c [AArch64][InstructionSelector] Fix unintended test changes in r283973.
I screwed up my merge conflict and lost some of the CHECK lines.

llvm-svn: 283974
2016-10-12 04:12:44 +00:00
Quentin Colombet 9de30faeac [AArch64][InstructionSelector] Teach the selector about G_BITCAST.
llvm-svn: 283973
2016-10-12 03:57:52 +00:00
Quentin Colombet cb629a897c [AArch64][InstructionSelector] Refactor the handling of copies.
Although Copies are not specific to preISel, we still have to assign them
a proper register class. However, given they are not constrained to
anything we do not have to handle the source register at the copy. It
will be properly mapped when reaching the related definition.

In the process, the handling of G_ANYEXT is slightly modified, as those
end up being selected as copies. The difference is that when the register
sizes do not match on both sides, we need to insert a SUBREG_TO_REG
operation, otherwise the post-RA copy expansion will not be happy!

llvm-svn: 283972
2016-10-12 03:57:49 +00:00
Quentin Colombet 5a0f5d4831 [AArch64][InstructionSelector] Fix typos in the related mir file. NFC.
llvm-svn: 283971
2016-10-12 03:57:46 +00:00
Quentin Colombet 404e4350dc [AArch64][MachineLegalizer] Mark more bitcasts as legal.
Those are copies, we do not have to do any legalization action for them.

llvm-svn: 283970
2016-10-12 03:57:43 +00:00
Tim Shen 4ff62b187e [PPCMIPeephole] Fix splat elimination
Summary:
In PPCMIPeephole, when we see two splat instructions, we can't simply do the following transformation:
  B = Splat A
  C = Splat B
=>
  C = Splat A
because B may still be used between these two instructions. Instead, we should make the second Splat a PPC::COPY and let later passes decide whether to remove it or not:
  B = Splat A
  C = Splat B
=>
  B = Splat A
  C = COPY B

Fixes PR30663.

Reviewers: echristo, iteratee, kbarton, nemanjai

Subscribers: mehdi_amini, llvm-commits

Differential Revision: https://reviews.llvm.org/D25493

llvm-svn: 283961
2016-10-12 00:48:25 +00:00
Michael Kuperstein 7adbf6b042 [DAG] Fix crash in build_vector -> vector_shuffle combine
Fixes a crash in the build_vector -> vector_shuffle combine
when the first vector input is twice as wide as the output,
and the second input vector is even wider.

llvm-svn: 283953
2016-10-11 22:44:31 +00:00
Tim Northover c1d8c2bf8c GlobalISel: support same-size casts on AArch64.
Mostly Ahmed's work again, I'm just sprucing things up slightly before
committing.

llvm-svn: 283952
2016-10-11 22:29:23 +00:00
Reid Kleckner bdfc05ff93 Re-land "[Thumb] Save/restore high registers in Thumb1 pro/epilogues"
Reverts r283938 to reinstate r283867 with a fix.

The original change had an ArrayRef referring to a destroyed temporary
initializer list. Use plain C arrays instead.

llvm-svn: 283942
2016-10-11 21:14:03 +00:00
Reid Kleckner f4876beb2b Revert "[Thumb] Save/restore high registers in Thumb1 pro/epilogues"
This reverts r283867.

This appears to be an infinite loop (when the count() check fails, neither iterator is incremented, so the loop never terminates):

    while (HiRegToSave != AllHighRegs.end() && CopyReg != AllCopyRegs.end()) {
      if (HiRegsToSave.count(*HiRegToSave)) {
        ...

        CopyReg = findNextOrderedReg(++CopyReg, CopyRegs, AllCopyRegs.end());
        HiRegToSave =
            findNextOrderedReg(++HiRegToSave, HiRegsToSave, AllHighRegs.end());
      }
    }

llvm-svn: 283938
2016-10-11 20:54:41 +00:00
Tim Northover 3d38b3a4d1 GlobalISel: support selection of extend operations.
Patch mostly by Ahmed Bougacha.

llvm-svn: 283937
2016-10-11 20:50:21 +00:00
Kyle Butt 0846e56e63 Codegen: Tail-duplicate during placement.
The tail duplication pass uses an assumed layout when making duplication
decisions. This is fine, but passes up duplication opportunities that
may arise when blocks are outlined. Because we want the updated CFG to
affect subsequent placement decisions, this change must occur during
placement.

In order to achieve this goal, TailDuplicationPass is split into a
utility class, TailDuplicator, and the pass itself. The pass delegates
nearly everything to the TailDuplicator object, except for looping over
the blocks in a function. This allows the same code to be used for tail
duplication in both places.

This change, in concert with outlining optional branches, allows
triangle-shaped code to perform much better, especially when the
taken/untaken branches are correlated, as it creates a second spine when
the tests are small enough.

Issue from previous rollback fixed, and a new test was added for that
case as well. Issue was worklist/scheduling/taildup issue in layout.

Issue from 2nd rollback fixed, with 2 additional tests. Issue was
tail merging/loop info/tail-duplication causing issue with loops that share
a header block.

Issue with early tail-duplication of blocks that branch to a fallthrough
predecessor fixed with test case: tail-dup-branch-to-fallthrough.ll

Differential revision: https://reviews.llvm.org/D18226

llvm-svn: 283934
2016-10-11 20:36:43 +00:00
Sanjay Patel c2bd185dc4 [x86] add tests for negate bool
llvm-svn: 283930
2016-10-11 20:15:20 +00:00
Sanjay Patel 8253e15ef3 [DAG] add fold for masked negated sign-extended bool
This enhances the fold added with:
https://reviews.llvm.org/rL283900

llvm-svn: 283905
2016-10-11 17:05:52 +00:00
Sanjay Patel 58b4987284 [x86] add sext variants of tests added with r283894
llvm-svn: 283903
2016-10-11 16:49:52 +00:00
Sanjay Patel 8384703d9b [DAG] add fold for masked negated extended bool
The non-obvious motivation for adding this fold (which already happens in InstCombine)
is that we want to canonicalize IR towards select instructions and canonicalize DAG 
nodes towards boolean math. So we need to recreate some folds in the DAG to handle that
change in direction. 

An interesting implementation difference for cases like this is that InstCombine
generally works top-down while the DAG goes bottom-up. That means we need to detect 
different patterns. In this case, the SimplifyDemandedBits fold prevents us from 
performing a zext to sext fold that would then be recognized as a negation of a sext. 
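As a plain-C++ reading of the underlying identity (an illustration, not code from the patch): negating a zero-extended bool yields an all-ones/all-zero mask, so masking another value with it behaves like a select:

  #include <cassert>
  #include <cstdint>

  int main() {
    for (bool b : {false, true})
      for (uint32_t y : {0u, 42u, 0xFFFFFFFFu}) {
        uint32_t mask = 0u - uint32_t(b);   // (sub 0, (zext b)): all-ones or 0
        assert((mask & y) == (b ? y : 0u)); // behaves as (select b, y, 0)
      }
    return 0;
  }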

llvm-svn: 283900
2016-10-11 16:26:36 +00:00
Sanjay Patel 9b3c8a7321 [x86] add tests to show missed folds for masked bools
llvm-svn: 283894
2016-10-11 16:04:37 +00:00
Changpeng Fang 98317d20f4 AMDGPU/SI: Update ISA version numbers for Tonga and Polaris10/11.
Differential Revision: http://reviews.llvm.org/D25454

Reviewers: tstellarAMD

llvm-svn: 283893
2016-10-11 16:00:47 +00:00
Simon Pilgrim 5b8627aada [X86][SSE] Regenerate scalar i64 uitofp test
Added 32-bit target test

llvm-svn: 283883
2016-10-11 14:01:38 +00:00
Simon Pilgrim 092cfc597f [X86][SSE] Regenerate vector load-trunc test
llvm-svn: 283881
2016-10-11 13:55:49 +00:00
Simon Pilgrim fe9fa7314c [X86][SSE] Regenerate vsplit and tests
To make it more obvious how bad some of that truncation code is....

llvm-svn: 283880
2016-10-11 13:51:44 +00:00
Sanjay Patel 6d71f7b348 [x86] update test to use FileCheck and auto-generate checks
llvm-svn: 283876
2016-10-11 13:36:07 +00:00
Oliver Stannard d2083fb356 [Thumb] Save/restore high registers in Thumb1 pro/epilogues
The high registers are not allocatable in Thumb1 functions, but they
could still be used by inline assembly, so we need to save and restore
the callee-saved high registers (r8-r11) in the prologue and epilogue.

This is complicated by the fact that the Thumb1 push and pop
instructions cannot access these registers. Therefore, we have to move
them down into low registers before pushing, and move them back after
popping into low registers.

In most functions, we will have low registers that are also being
pushed/popped, which we can use as the temporary registers for
saving/restoring the high registers. However, this is not guaranteed, so
we may need to push some extra low registers to ensure that the high
registers can be saved/restored. For correctness, it would be sufficient
to use just one low register, but if we have enough low registers
available then we only need one push/pop instruction, rather than one
per high register.

We can also use the argument/return registers when they are not live,
and the link register when saving (but not restoring), reducing the
number of extra registers we need to push.

There are still a few extreme edge cases where we need two push/pop
instructions, because not enough low registers can be made live in the
prologue or epilogue.

In addition to the regression tests included here, I've also tested this
using a script to generate functions which clobber different
combinations of registers, have different numbers of argument and return
registers (including variadic arguments), allocate different fixed sized
objects on the stack, and do or don't use variable sized allocas and the
__builtin_return_address intrinsic (all of which affect the available
registers in the prologue and epilogue). I ran these functions in a test
harness which verifies that all of the callee-saved registers are
correctly preserved.

Differential Revision: https://reviews.llvm.org/D24228

llvm-svn: 283867
2016-10-11 10:12:25 +00:00
Oliver Stannard 50a74393c2 [ARM] Fix registers clobbered by SjLj EH on soft-float targets
Currently, the Int_eh_sjlj_dispatchsetup intrinsic is marked as
clobbering all registers, including floating-point registers that may
not be present on the target. This is technically true, as we could get
linked against code that does use the FP registers, but that will not
actually work, as the soft-float code cannot save and restore the FP
registers. SjLj exception handling can only work correctly if either all
or none of the code is built for a target with FP registers. Therefore,
we can assume that, when Int_eh_sjlj_dispatchsetup is compiled for a
soft-float target, it is only going to be linked against other
soft-float code, and so only clobbers the general-purpose registers.
This allows us to check that no non-savable registers are clobbered when
generating the prologue/epilogue.

Differential Revision: https://reviews.llvm.org/D25180

llvm-svn: 283866
2016-10-11 10:06:59 +00:00
Daniel Jasper 0c42dc4784 Revert "Codegen: Tail-duplicate during placement."
This reverts commit r283842.

test/CodeGen/X86/tail-dup-repeat.ll causes an llc crash in our
internal testing. I'll share a link with you.

llvm-svn: 283857
2016-10-11 07:36:11 +00:00
Matthias Braun 74ad41c7cd MIRParser: Rewrite register info initialization; mostly NFC
This changes MachineRegisterInfo to be initialized after parsing all
instructions. This is in preparation for upcoming commits that allow
specifying the register class on the operand or deducing it from the
MCInstrDesc.

This commit removes the unused feature of having nonsequential register
numbers. This was confusing anyway as the vreg numbers would be
different after parsing when you had "holes" in your numbering.

This patch also introduces the concept of an incomplete virtual
register. An incomplete virtual register may be used during .mir parsing
to construct MachineOperands without knowing the exact register class
(or register bank) yet.

NFC except for some error messages.

Differential Revision: https://reviews.llvm.org/D22397

llvm-svn: 283848
2016-10-11 03:13:01 +00:00
Kyle Butt ae068a320c Codegen: Tail-duplicate during placement.
The tail duplication pass uses an assumed layout when making duplication
decisions. This is fine, but passes up duplication opportunities that
may arise when blocks are outlined. Because we want the updated CFG to
affect subsequent placement decisions, this change must occur during
placement.

In order to achieve this goal, TailDuplicationPass is split into a
utility class, TailDuplicator, and the pass itself. The pass delegates
nearly everything to the TailDuplicator object, except for looping over
the blocks in a function. This allows the same code to be used for tail
duplication in both places.

This change, in concert with outlining optional branches, allows
triangle shaped code to perform much better, esepecially when the
taken/untaken branches are correlated, as it creates a second spine when
the tests are small enough.

Issue from previous rollback fixed, and a new test was added for that
case as well. Issue was worklist/scheduling/taildup issue in layout.

Issue from 2nd rollback fixed, with 2 additional tests. Issue was
tail merging/loop info/tail-duplication causing issue with loops that share
a header block.

Issue with early tail-duplication of blocks that branch to a fallthrough
predecessor fixed with test case: tail-dup-branch-to-fallthrough.ll

Differential revision: https://reviews.llvm.org/D18226

llvm-svn: 283842
2016-10-11 01:20:33 +00:00
Dylan McKay c328fe5af4 [RegAllocGreedy] Attempt to split unspillable live intervals
Summary:
Previously, when allocating unspillable live ranges, we would never
attempt to split. We would always bail out and try last-ditch graph
recoloring.

This patch changes this by attempting to split all live intervals before
performing recoloring.

This fixes LLVM bug PR14879.

I can't add test cases for any backends other than AVR because none of
them have small enough register classes to trigger the bug.

Reviewers: qcolombet

Subscribers: MatzeB

Differential Revision: https://reviews.llvm.org/D25070

llvm-svn: 283838
2016-10-11 01:04:36 +00:00
Quentin Colombet d2623f8e38 [AArch64][InstructionSelector] Teach how to select FP load/store.
This patch allows to select 32 and 64-bit FP load and store.

llvm-svn: 283832
2016-10-11 00:21:14 +00:00
Quentin Colombet 0e5312787e [AArch64][InstructionSelector] Teach the selector how to handle vector OR.
This only adds support for 64-bit vector OR. Adding more sizes is
not difficult, but it requires a bigger refactoring, because ORs work on
any size, not necessarily only the sizes that match the register width.
Right now, this is not expressed in the legalization, so don't
bother pushing the refactoring yet.

llvm-svn: 283831
2016-10-11 00:21:11 +00:00
Quentin Colombet d3126d5fb4 [AArch64][MachineLegalizer] Mark v2s32 G_LOAD as legal.
Actually, every 64-bit load is legal, but right now the API does not
offer a simple way to express that.

llvm-svn: 283829
2016-10-11 00:21:08 +00:00
Sanjay Patel 3013a62dd8 [x86] auto-generate checks
llvm-svn: 283812
2016-10-10 22:04:12 +00:00
Sanjay Patel b493cdaabf [x86] auto-generate checks
llvm-svn: 283811
2016-10-10 22:01:42 +00:00
Tim Northover bdf1624367 GlobalISel: select G_GLOBAL_VALUE uses on AArch64.
llvm-svn: 283809
2016-10-10 21:50:00 +00:00
Tim Northover ad0acca544 GlobalISel: allow G_GLOBAL_VALUEs in AArch64 legalization.
llvm-svn: 283808
2016-10-10 21:49:53 +00:00
Tim Northover 2fda4b08ae GlobalISel: support selecting G_GEP instructions.
They're basically just an alias for G_ADD on AArch64.

llvm-svn: 283807
2016-10-10 21:49:49 +00:00
Tim Northover 4edc60d785 GlobalISel: support selecting constants on AArch64.
llvm-svn: 283806
2016-10-10 21:49:42 +00:00
Hal Finkel fcd2421667 [SelectionDAGBuilder] Support llvm.flt.rounds on targets where i32 is not legal
Add integer expansion for FLT_ROUNDS_ for targets where i32 is not a legal
type.

Patch by Edward Jones, thanks!

Differential Revision: https://reviews.llvm.org/D24459

llvm-svn: 283797
2016-10-10 20:45:15 +00:00
Alexandros Lamprineas 20e9ddba73 [ARM] Fix invalid VLDM/VSTM access when targeting Big Endian with NEON
The instructions VLDM/VSTM can only access word-aligned memory
locations and produce alignment fault if the condition is not met.

The compiler currently generates VLDM/VSTM for v2f64 load/store
regardless the alignment of the memory access. Instead, if a v2f64
load/store is not word-aligned, the compiler should generate
VLD1/VST1. For each non double-word-aligned VLD1/VST1, a VREV
instruction should be generated when targeting Big Endian.

Differential Revision: https://reviews.llvm.org/D25281

llvm-svn: 283763
2016-10-10 16:01:54 +00:00
Zvi Rackover 2a21f125bd [X86] Prefer rotate by 1 over rotate by imm
Summary:
Rotate by 1 is translated to 1 micro-op, while rotate with imm8 is translated to 2 micro-ops.
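As a hedged illustration (not from the patch), a source-level rotate-by-one pattern that should now prefer the single-micro-op encoding:

  // Assumes a 32-bit unsigned type; the function name is illustrative.
  unsigned rol1(unsigned x) {
    return (x << 1) | (x >> 31); // rotate left by one
  }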

Fixes pr30644.

Reviewers: delena, igorb, craig.topper, spatel, RKSimon

Differential Revision: https://reviews.llvm.org/D25399

llvm-svn: 283758
2016-10-10 14:43:55 +00:00
Simon Pilgrim 4aea8e8a39 Fixed windows stdout/stderr redirection in inline asm constraint tests
llvm-svn: 283741
2016-10-10 11:11:27 +00:00
Chris Dewhurst 850131213f This pass, fixing an erratum in some LEON 2 processors, ensures that the SDIV instruction is not issued but is replaced by SDIVcc, which does not exhibit the error. Unit test included.
Differential Revision: https://reviews.llvm.org/D24660

llvm-svn: 283727
2016-10-10 08:53:06 +00:00
Craig Topper 9ece2f7529 [AVX-512] Add missing pattern sext or zext from bytes to quad words with a 128-bit load as input.
llvm-svn: 283720
2016-10-10 06:25:48 +00:00
Craig Topper 0f905027b3 [AVX-512] Add test cases for AVX512 sign/zero extend instructions derived from the sse41 and avx2 test cases. Code will be improved in future commits.
llvm-svn: 283719
2016-10-10 06:25:45 +00:00
Craig Topper aba15075da [AVX-512] Add an AVX512VL/BW command line to sse41-pmovxrm.ll and avx2-pmovxrm.ll. Also disable peephole so we really test pattern matching.
llvm-svn: 283718
2016-10-10 06:25:42 +00:00
Michael Zuckerman 3eeac2d56b [x86][inline-asm][llvm] accept 'v' constraint
Commit in the name of: Coby Tayree
1. The 'v' constraint for (x86) non-AVX arches imitates the already implemented 'x' constraint, i.e. it allows XMM{0-15} & YMM{0-15}, depending on the apparent arch & mode (32/64).
2. For the AVX512 arch it allows [X,Y,Z]MM{0-31} (mode dependent).
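A hedged usage sketch of the constraint from C++ inline asm; the instruction choice is illustrative and assumes an AVX-capable target:

  float sqrt_v(float x) {
    float r;
    // 'v' lets the compiler pick any XMM/YMM/ZMM register the target allows.
    asm("vsqrtss %1, %1, %0" : "=v"(r) : "v"(x));
    return r;
  }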

This patch applies the needed changes to LLVM.
Clang patch: https://reviews.llvm.org/D25004

Differential Revision: https://reviews.llvm.org/D25005

llvm-svn: 283717
2016-10-10 05:48:56 +00:00
Craig Topper 64378f4378 [AVX-512] Port 128 and 256-bit memory->register sign/zero extend patterns from SSE file. Also add a minimal set for 512-bit.
llvm-svn: 283704
2016-10-09 23:08:39 +00:00
Zvi Rackover b764bf2987 [X86] Adding the 'nounwind' attribute to test functions for cleaner generated code
Thanks to RKSimon for the suggestion.

llvm-svn: 283696
2016-10-09 13:33:51 +00:00
Zvi Rackover f841080caf [X86] Improve the rotate ISel test
Summary:
- Added 64-bit target testing.
- Added 64-bit operand test cases.
- Added cases that demonstrate pr30644

Reviewers: RKSimon, craig.topper, igorb

Differential Revision: https://reviews.llvm.org/D25401

llvm-svn: 283695
2016-10-09 13:07:25 +00:00
Elena Demikhovsky 5b10aa1f1e DAG: Setting Masked-Expand-Load as a variant of Masked-Load node
The masked-expand-load node represents a load operation that loads a variable number of elements from memory, according to the number of "true" bits in the mask, and expands the loaded elements according to their positions in the mask vector.
Right now, the node is used in intrinsics for VEXPAND* instructions.
This work is done towards the implementation of the masked.expandload and masked.compressstore intrinsics.
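A scalar C++ model of the semantics described above; the function name and the pass-through operand are assumptions for illustration, not the intrinsic's actual signature:

  #include <cstddef>

  void expandload(const int *mem, const bool *mask, const int *passthru,
                  int *result, std::size_t n) {
    std::size_t j = 0; // index into consecutive memory elements
    for (std::size_t i = 0; i < n; ++i)
      result[i] = mask[i] ? mem[j++]  // loaded element, expanded to lane i
                          : passthru[i];
  }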

Differential Revision: https://reviews.llvm.org/D25322

llvm-svn: 283694
2016-10-09 10:48:52 +00:00
Craig Topper 43973154dd [AVX-512] Fix execution domain for EVEX encoded VINSERTPS.
llvm-svn: 283692
2016-10-09 06:41:47 +00:00
Craig Topper e30cb00dc0 [AVX-512] Add subvector insert and extract to load/store folding tables.
llvm-svn: 283689
2016-10-09 03:54:13 +00:00
Craig Topper 50a468e03f [AVX-512] Add avx512dq to the fp stack folding test.
llvm-svn: 283688
2016-10-09 03:54:09 +00:00
Craig Topper 4262d53024 [AVX-512] Add the vector down convert instructions to the store folding tables.
llvm-svn: 283687
2016-10-09 03:54:05 +00:00
Simon Pilgrim 319c094771 [X86][SSE] Regenerate select tests
llvm-svn: 283674
2016-10-08 21:17:44 +00:00
Zvi Rackover ce4900aaa6 Revert "[X86] Apply the Update LLC Test Checks tool on the rotate tests."
This reverts commit 283667.

llvm-svn: 283673
2016-10-08 20:54:20 +00:00
Simon Pilgrim 9e7a22fc13 [X86][SSE] Regenerate and add 32-bit tests to widening tests
llvm-svn: 283672
2016-10-08 19:54:28 +00:00
Simon Pilgrim 30cbd1ab84 Fix comment typos - full update script path in assertions note
llvm-svn: 283670
2016-10-08 18:51:55 +00:00
Craig Topper 2067142d7d [AVX-512] Add test case for PR30430 that I should have added in r281959.
llvm-svn: 283669
2016-10-08 18:50:00 +00:00
Craig Topper 086f0c1401 [AVX-512] Fix a bug in getLargestLegalSuperClass where we inflated to VR128X/VR256X even when VLX isn't supported.
This seems to have been responsible for the XMM16-31 spills observed in PR29112. With this fixed, the test case has been modified to no longer have a spill of XMM16.

llvm-svn: 283668
2016-10-08 18:49:57 +00:00
Zvi Rackover 2413d475fc [X86] Apply the Update LLC Test Checks tool on the rotate tests.
Also added cases demonstrating pr30644.

llvm-svn: 283667
2016-10-08 18:44:47 +00:00
Simon Pilgrim d0d90fb9b2 [X86][AVX2] Regenerate and add 32-bit tests to core tests
llvm-svn: 283666
2016-10-08 18:36:57 +00:00
Sebastian Pop eb65d72d9c [AArch64] Avoid generating indexed vector instructions for Exynos
Avoid generating indexed vector instructions for Exynos. This is needed for
fmla/fmls/fmul/fmulx. For example, the instruction

  fmla v0.4s, v1.4s, v2.s[1]

is less efficient than the instructions

  dup v2.4s, v2.s[1]
  fmla v0.4s, v1.4s, v2.4s

Patch written by Abderrazek Zaafrani.

Differential Revision: https://reviews.llvm.org/D21571

llvm-svn: 283663
2016-10-08 12:30:07 +00:00
Mehdi Amini 01e0e136bd Requires the AVR backend for running test/CodeGen/AVR
llvm-svn: 283653
2016-10-08 04:39:34 +00:00
Kyle Butt 2facd194a2 Revert "Codegen: Tail-duplicate during placement."
This reverts commit 71c312652c10f1855b28d06697c08d47e7a243e4.

llvm-svn: 283647
2016-10-08 01:47:05 +00:00
Dylan McKay 12109e7314 Allow a maximum of 64 bits to be returned in registers
The rest spills to the stack

Authored by Jake Goulding

llvm-svn: 283635
2016-10-08 01:05:09 +00:00
Dylan McKay c1ff65cf62 [AVR] Expand MULHS for all types
Once MULHS was expanded, this exposed an issue where the condition
register was thought to be 16-bit. This caused an attempt to copy a
16-bit register to an 8-bit register.

Authored by Jake Goulding

llvm-svn: 283634
2016-10-08 01:01:49 +00:00
Tom Stellard 5ab6154dc3 AMDGPU/SI: Handle div_fmas hazard in GCNHazardRecognizer
Reviewers: arsenm

Subscribers: kzhuravl, wdng, nhaehnle, yaxunl, tony-tye, llvm-commits

Differential Revision: https://reviews.llvm.org/D25250

llvm-svn: 283622
2016-10-07 23:42:48 +00:00
Kyle Butt 37e676d857 Codegen: Tail-duplicate during placement.
The tail duplication pass uses an assumed layout when making duplication
decisions. This is fine, but passes up duplication opportunities that
may arise when blocks are outlined. Because we want the updated CFG to
affect subsequent placement decisions, this change must occur during
placement.

In order to achieve this goal, TailDuplicationPass is split into a
utility class, TailDuplicator, and the pass itself. The pass delegates
nearly everything to the TailDuplicator object, except for looping over
the blocks in a function. This allows the same code to be used for tail
duplication in both places.

This change, in concert with outlining optional branches, allows
triangle-shaped code to perform much better, especially when the
taken/untaken branches are correlated, as it creates a second spine when
the tests are small enough.

Issue from previous rollback fixed, and a new test was added for that
case as well. Issue was worklist/scheduling/taildup issue in layout.

Issue from 2nd rollback fixed, with 2 additional tests. Issue was
tail merging/loop info/tail-duplication causing issue with loops that share
a header block.

Differential revision: https://reviews.llvm.org/D18226

llvm-svn: 283619
2016-10-07 22:33:20 +00:00
Arnold Schwaighofer 3f25658143 swifterror: Don't compute swifterror vregs during instruction selection
The code used llvm basic block predecessors to decide where to insert phi
nodes. Instruction selection can and will liberally insert new machine basic
block predecessors. There is not a guaranteed one-to-one mapping between
predecessor llvm basic blocks and machine basic blocks.

Therefore the current approach does not work, as it assumes we can mark a
predecessor machine basic block as needing a copy, and needs to know the set of
all predecessor machine basic blocks to decide when to insert phis.

Instead of computing the swifterror vregs as we select instructions, propagate
them at the end of instruction selection when the MBB CFG is complete.

When an instruction needs a swifterror vreg and we don't know the value yet,
generate a new vreg and remember this "upward exposed" use, and reconcile this
at the end of instruction selection.

This will only happen if the target supports promoting swifterror parameters to
registers and the swifterror attribute is used.

rdar://28300923

llvm-svn: 283617
2016-10-07 22:06:55 +00:00
Simon Pilgrim f9648b72df [X86][SSE] Reapplied: Add vector fcopysign combine tests
Now with better lowering and fix for PR30443

llvm-svn: 283569
2016-10-07 16:00:59 +00:00
Simon Pilgrim 02f623e74c [X86][SSE] Tidied up tests - use standard check prefixes
llvm-svn: 283559
2016-10-07 14:42:22 +00:00
Konstantin Zhuravlyov f74fc60a7d [AMDGPU] Promote uniform (i1, i16] operations to i32
Differential Revision: https://reviews.llvm.org/D25302

llvm-svn: 283555
2016-10-07 14:22:58 +00:00
Martin Storsjo 04864f45b2 [ARM] Reapply: Use __rt_div functions for divrem on Windows
Reapplying r283383 after the revert in r283442. The additional fix
is getting rid of a stray space in a function name, in the
refactoring part of the commit.

This avoids falling back to calling out to the GCC rem functions
(__moddi3, __umoddi3) when targeting Windows.

The __rt_div functions have flipped the two arguments compared
to the __aeabi_divmod functions. To match MSVC, we emit a
check for division by zero before actually calling the library
function (even if the library function itself also might do
the same check).

Not all calls to __rt_div functions for division are currently
merged with calls to the same function with the same parameters
for the remainder. This is more wasteful than a div + mls as before,
but avoids calls to __moddi3.

Differential Revision: https://reviews.llvm.org/D25332

llvm-svn: 283550
2016-10-07 13:28:53 +00:00
Javed Absar fb4b6e8db9 [ARM]: Add Cortex-R52 target to LLVM
This patch adds Cortex-R52, the new ARM real-time processor, to LLVM. 
Cortex-R52 implements the ARMv8-R architecture.

llvm-svn: 283542
2016-10-07 12:06:40 +00:00
Simon Pilgrim a5d019ee95 [X86][SSE] Update register class during MOVSD/MOVSS - BLENDPD/BLENDPS commutation
MOVSD/MOVSS take a 128-bit register and an FR32/FR64 register input; the commutation code wasn't taking this into account, leading to verification errors.

This patch inserts a vreg copy mi to ensure that the registers are correct.

Fix for PR30607

Differential Revision: https://reviews.llvm.org/D25280

llvm-svn: 283539
2016-10-07 11:18:38 +00:00