Commit Graph

38059 Commits

Craig Topper ddab395397 [AVX512] Add patterns for zero-extending a mask that use the def of KMOVW/KMOVB without going through an EXTRACT_SUBREG and a MOVZX.
llvm-svn: 272625
2016-06-14 03:12:54 +00:00
Kevin Enderby d2d2ce9b9f Update the AArch64ExternalSymbolizer to print literal strings as escaped strings
so it is the same as the MCExternalSymbolizer.

rdar://17349181

llvm-svn: 272588
2016-06-13 21:08:57 +00:00
David Majnemer 248190ba69 [X86] Remove llvm.x86.bit.scan.{forward,reverse}.32
The need for these intrinsics has been obviated by r272564 which
reimplements their functionality using generic IR.
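
For a rough sense of what the generic replacement computes (a hedged sketch, not the autoupgrade code itself): for a non-zero input, bit-scan-forward is the index of the lowest set bit and bit-scan-reverse is the index of the highest set bit, which generic cttz/ctlz-style operations express directly.
```
#include <cstdint>

// Hedged scalar sketch; like the hardware scans, both are undefined for X == 0.
int bitScanForward(uint32_t X) { return __builtin_ctz(X); }      // index of lowest set bit
int bitScanReverse(uint32_t X) { return 31 - __builtin_clz(X); } // index of highest set bit
```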

llvm-svn: 272566
2016-06-13 17:33:13 +00:00
Marek Olsak e93f6d6923 AMDGPU/SI: Set INDEX_STRIDE for scratch coalescing
Summary:
Mesa and other users must set this to enable coalescing:
- STRIDE = 0
- SWIZZLE_ENABLE = 1

This makes one particular compute shader 8x faster.

Reviewers: tstellarAMD, arsenm

Subscribers: arsenm, kzhuravl

Differential Revision: http://reviews.llvm.org/D21136

llvm-svn: 272556
2016-06-13 16:05:57 +00:00
Matt Arsenault 80bc355048 AMDGPU: Fix post-RA verifier errors with trackLivenessAfterRegAlloc
The condition register of the cndmask_b64 expansion can't be killed by
the first instruction of the pair, and the implicit def of the super register is needed.

llvm-svn: 272554
2016-06-13 15:53:52 +00:00
Ulrich Weigand daae87aa21 [SystemZ] Enable index register memory constraints for inline ASM
This enables use of the 'R' and 'T' memory constraints for inline ASM
operands on SystemZ, which allow an index register as well as an
immediate displacement. This patch includes corresponding documentation
and test case updates.

As with the last patch of this kind, I moved the 'm' constraint to the
most general case, which is now 'T' (base + 20-bit signed displacement +
index register).
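
A minimal, SystemZ-only sketch of how such an operand might look in GCC-style extended asm (an assumed example, not taken from the patch's tests):
```
// With the 'T' constraint the compiler may address the operand using a base
// register, an index register, and a 20-bit signed displacement.
long loadElement(long *Base, long Index) {
  long Result;
  asm("lg %0, %1" : "=r"(Result) : "T"(Base[Index])); // lg: 64-bit load, long displacement
  return Result;
}
```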

Author: colpell
Differential Revision: http://reviews.llvm.org/D21239

llvm-svn: 272547
2016-06-13 14:24:05 +00:00
Ranjeet Singh 933e1aa39f [ARM] Reverting r272544 because the clang patch needs
to go in as soon as the llvm patch has gone in, otherwise
tests will start breaking in Clang.

llvm-svn: 272546
2016-06-13 10:58:24 +00:00
Ranjeet Singh 8feacb330d [ARM] Add mrrc/mrrc2 co-processor intrinsics
The MRRC/MRRC2 instructions write to two registers. The
intrinsic definition returns a single uint64_t to
represent the write; this is a compact way of
representing a write to two 32-bit registers. The
alternative would have been to return a
struct of two uint32_t values, but this isn't as nice.

Differential Revision: 

llvm-svn: 272544
2016-06-13 10:43:50 +00:00
Haojian Wu 7900ca1e7e Fix an enumeral mismatch warning.
Summary:
The "-Werror=enum-compare" shows that the statement is using two different enums:

enumeral mismatch in conditional expression: 'llvm::X86ISD::NodeType' vs 'llvm::ISD::NodeType'

A follow-up fix on D21235.

Reviewers: klimek

Subscribers: spatel, cfe-commits

Differential Revision: http://reviews.llvm.org/D21278

llvm-svn: 272539
2016-06-13 09:03:45 +00:00
Craig Topper 13cf7cac07 [AVX512] Remove masked pshufd, pshuflw, and pshufhw intrinsics and autoupgrade them to selects and shufflevector.
llvm-svn: 272527
2016-06-13 02:36:48 +00:00
Benjamin Kramer 4ca41fd09e Run clang-tidy's performance-unnecessary-copy-initialization over LLVM.
No functionality change intended.
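
For reference, a hedged sketch of the pattern this check flags (an illustrative case, not one of the call sites touched here):
```
#include <string>
#include <vector>

// The check suggests binding a const reference instead of copy-initializing a
// local that is only read.
std::size_t totalLength(const std::vector<std::string> &Names) {
  std::size_t Total = 0;
  for (const std::string &Name : Names) // was: const std::string Name (one copy per element)
    Total += Name.size();
  return Total;
}
```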

llvm-svn: 272516
2016-06-12 17:30:47 +00:00
Benjamin Kramer d3f4c05aea Move instances of std::function.
Or replace with llvm::function_ref if it's never stored. NFC intended.
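
A hedged sketch of the two idioms (illustrative names, not the actual call sites touched by this change):
```
#include <functional>
#include <utility>

// Stored callables are moved into place instead of copied.
struct Callback {
  std::function<void(int)> Fn;
  explicit Callback(std::function<void(int)> F) : Fn(std::move(F)) {}
};

// A callable that is only invoked, never stored, is a candidate for
// llvm::function_ref<void(int)>, a non-owning view that avoids std::function's
// possible allocation.
void visitValues(const int *Begin, const int *End,
                 const std::function<void(int)> &Visit) {
  for (const int *I = Begin; I != End; ++I)
    Visit(*I);
}
```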

llvm-svn: 272513
2016-06-12 16:13:55 +00:00
Benjamin Kramer bdc4956bac Pass DebugLoc and SDLoc by const ref.
This used to be free; copying and moving DebugLocs became expensive
after the metadata rewrite. Passing by reference eliminates a ton of
track/untrack operations. No functionality change intended.

llvm-svn: 272512
2016-06-12 15:39:02 +00:00
Sanjay Patel 977530a8c9 [x86, SSE] change patterns for CMPP to float types to allow matching with SSE1 (PR28044)
This patch is intended to solve:
https://llvm.org/bugs/show_bug.cgi?id=28044

By changing the definition of X86ISD::CMPP to use float types, we allow it to be created 
and pass legalization for an SSE1-only target where v4i32 is not legal.

The motivational trail for this change includes:
https://llvm.org/bugs/show_bug.cgi?id=28001

and eventually makes this trigger:
http://reviews.llvm.org/D21190

I.e., after this step, we should be free to have Clang generate FP compare IR instead of x86
intrinsics for SSE C packed compare intrinsics. (We can auto-upgrade and remove the LLVM 
sse.cmp intrinsics as a follow-up step.) Once we're generating vector IR instead of x86
intrinsics, a big pile of generic optimizations can trigger.

Differential Revision: http://reviews.llvm.org/D21235

llvm-svn: 272511
2016-06-12 15:03:25 +00:00
Craig Topper 1067986c5b [X86] Remove sse2 pshufd/pshuflw/pshufhw intrinsics and upgrade them to shufflevector.
llvm-svn: 272510
2016-06-12 14:11:32 +00:00
Craig Topper 251030babe [AVX512] Remove the masked palignr intrinsics that I forgot to remove when I added auto-upgrade code to turn them into shufflevectors and selects.
llvm-svn: 272497
2016-06-12 04:14:13 +00:00
Simon Pilgrim 3fc09f7be6 [CostModel][X86][SSE] Updated costs for vector BITREVERSE ops on SSSE3+ targets
To account for the fast PSHUFB implementation now available

llvm-svn: 272484
2016-06-11 19:23:02 +00:00
Simon Pilgrim 5b9bade8dd [X86][SSSE3] Added PSHUFB LUT implementation of BITREVERSE
PSHUFB can speed up BITREVERSE of byte vectors by performing a LUT lookup on the low/high nibbles separately and ORing the results. Wider integer vector types are already BSWAP'd beforehand, so they also make use of this approach.
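
A scalar model of the nibble-LUT idea, for illustration only (PSHUFB performs the same table lookup on all byte lanes in parallel):
```
#include <cstdint>

// Hedged sketch: reverse each nibble through a 16-entry table, swap the
// nibbles, and OR the two results together.
uint8_t reverseByte(uint8_t B) {
  static const uint8_t RevNibble[16] = {0x0, 0x8, 0x4, 0xC, 0x2, 0xA, 0x6, 0xE,
                                        0x1, 0x9, 0x5, 0xD, 0x3, 0xB, 0x7, 0xF};
  return uint8_t((RevNibble[B & 0xF] << 4) | RevNibble[B >> 4]);
}
```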

llvm-svn: 272477
2016-06-11 15:44:13 +00:00
Simon Pilgrim b13961d25b Strip trailing whitespace. NFCI.
llvm-svn: 272476
2016-06-11 14:34:10 +00:00
Craig Topper 504fba5c8a [AVX512] Lower v8i64 and v16i32 to pshufd when possible.
llvm-svn: 272473
2016-06-11 13:43:21 +00:00
Simon Pilgrim 6800a45790 [X86][SSE] Added PSLLDQ/PSRLDQ as a target shuffle type
Ensure that PALIGNR/PSLLDQ/PSRLDQ are byte vectors so that they can be correctly decoded for target shuffle combining

llvm-svn: 272471
2016-06-11 13:38:28 +00:00
Simon Pilgrim 255fdd0666 [X86][SSE] Use vXi8 return type for PSLLDQ/PSRLDQ instructions
These are byte shift instructions and it will make shuffle combining a lot more straightforward if we can assume a vXi8 vector of bytes so decoded shuffle masks match the return type's number of elements

llvm-svn: 272468
2016-06-11 12:54:37 +00:00
Simon Pilgrim d386941676 [X86][AVX512] Tidied up VSHUFF32x4/VSHUFF64x2/VSHUFI32x4/VSHUFI64x2 comment generation
Now matches other shuffles

llvm-svn: 272464
2016-06-11 11:18:38 +00:00
Chandler Carruth 4c0e94dce6 Try a bit harder to remove the signed and unsigned comparison warning.
Hopefully this time it actually works and stays away.

llvm-svn: 272463
2016-06-11 09:13:00 +00:00
Chandler Carruth 306e270b83 Compare to an unsigned literal to avoid a -Wsign-compare warning.
llvm-svn: 272459
2016-06-11 08:02:01 +00:00
Craig Topper 40abd1cc61 [AVX512] Add support for lowering v32i16 shuffles with repeated lanes. This allows us to create 512-bit PSHUFLW/PSHUFHW.
llvm-svn: 272450
2016-06-11 03:27:42 +00:00
Craig Topper b9b86fcfff [AVX512] No need to check for BWI being enabled before lowering v32i16 and v64i8 shuffles. If we get this far the types are already legal which means BWI must be enabled.
llvm-svn: 272449
2016-06-11 03:27:37 +00:00
Sanjoy Das 39c226fdba [STLExtras] Introduce and use llvm::count_if; NFC
(This is split out from what was D21115.)
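
A minimal sketch of the helper's shape and intended use (assumed signature, modelled on the STLExtras range style):
```
#include <algorithm>
#include <cstddef>
#include <iterator>
#include <vector>

namespace sketch {
// Hedged stand-in for the new helper: forwards to std::count_if over the whole
// range so callers do not spell out begin()/end().
template <typename Range, typename Pred>
auto count_if(Range &&R, Pred P)
    -> decltype(std::count_if(std::begin(R), std::end(R), P)) {
  return std::count_if(std::begin(R), std::end(R), P);
}
} // namespace sketch

std::ptrdiff_t countNegative(const std::vector<int> &Values) {
  return sketch::count_if(Values, [](int V) { return V < 0; });
}
```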

llvm-svn: 272435
2016-06-10 21:18:39 +00:00
Chad Rosier d1f6c840ee [AArch64] Move comments closer to relevant check. NFC.
llvm-svn: 272430
2016-06-10 20:49:18 +00:00
Chad Rosier c5083c2ccf [AArch64] Refactor a check earlier. NFC.
llvm-svn: 272429
2016-06-10 20:47:14 +00:00
Sanjay Patel b114fd65fc [x86] enable bitcasted fabs/fneg transforms
The vector cases don't change because we already have folds in X86ISelLowering
to look through and remove bitcasts.

llvm-svn: 272427
2016-06-10 20:33:50 +00:00
Nico Weber 2cf5e89e1d Remove a few gendered pronouns.
llvm-svn: 272422
2016-06-10 20:06:03 +00:00
Zhan Jun Liau ab42cbce98 [SystemZ] Support Compare and Traps
Support and generate Compare and Traps like CRT, CIT, etc.

Support Trap as legal DAG opcodes and generate "j .+2" for them by default.
Add support for Conditional Traps and use the If Converter to convert them into
the corresponding compare and trap opcodes.

Differential Revision: http://reviews.llvm.org/D21155

llvm-svn: 272419
2016-06-10 19:58:10 +00:00
Tom Stellard f3af841462 AMDGPU/SI: Don't use fixup_si_rodata for scratch rsrc relocations
Summary:
We need to set the fixup type to FK_Data_4 for the
SCRATCH_RSRC_DWORD[01] symbols, since these require absolute
relocations, and fixup_si_rodata is for relative relocations.

Reviewers: arsenm, kzhuravl

Subscribers: arsenm, kzhuravl, llvm-commits

Differential Revision: http://reviews.llvm.org/D21153

llvm-svn: 272417
2016-06-10 19:26:38 +00:00
Michael Kuperstein 9a0542a792 [X86] Add costs for SSE zext/sext to v4i64 to TTI
The costs are somewhat hand-wavy, but should be much closer to the truth
than what we get from BasicTTI.

Differential Revision: http://reviews.llvm.org/D21156

llvm-svn: 272406
2016-06-10 17:01:05 +00:00
Evandro Menezes a3a0a60cff [AArch64] Add preferred alignments for Exynos M1
Differential Revision: http://reviews.llvm.org/D21203

llvm-svn: 272400
2016-06-10 16:00:18 +00:00
Krzysztof Parzyszek 96cfc381e2 [Hexagon] Remove incorrect offset scaling
llvm-svn: 272399
2016-06-10 15:43:18 +00:00
Roman Shirokiy d93998f606 Test commit
llvm-svn: 272393
2016-06-10 13:12:48 +00:00
Sam Kolton 945231ada5 [AMDGPU] AsmParser: Support for sext() modifier in SDWA. Some code cleaning in AMDGPUOperand.
Summary:
The sext() modifier is supported in SDWA instructions only for integer operands. The spec is unclear as to whether integer operands should support abs and neg modifiers together with sext; for now they are not supported.
Renamed InputModsWithNoDefault to FloatInputMods. Added SextInputMods for operands that support sext() modifier.
Added AMDGPUOperand::Modifier struct to handle register and immediate modifiers.
Code cleanup in the AMDGPUOperand class: organize methods into groups (render-, predicate-methods, ...).

Reviewers: vpykhtin, artem.tamazov, tstellarAMD

Subscribers: arsenm, kzhuravl

Differential Revision: http://reviews.llvm.org/D20968

llvm-svn: 272384
2016-06-10 09:57:59 +00:00
Roger Ferrer Ibanez 1efc17e1ec test commit: remove trailing whitespaces in README.txt
llvm-svn: 272380
2016-06-10 08:19:58 +00:00
Craig Topper 200d237e57 [AVX512] Add shuffle comment printing for masked VPERMPD/VPERMQ.
llvm-svn: 272371
2016-06-10 05:12:40 +00:00
Craig Topper 89c1761474 [AVX512] Fix shuffle comment printing to handle the masked versions of some shuffles. Previously we were printing the mask operands as the register names.
llvm-svn: 272367
2016-06-10 04:48:05 +00:00
Matt Arsenault 37fefd68cc AMDGPU: Fix trailing whitespace
llvm-svn: 272364
2016-06-10 02:18:02 +00:00
Matt Arsenault 58ddad5bd6 AMDGPU: v_cndmask_b32 does not def vcc
Fixes verifier errors after SIShrinkInstructions.

llvm-svn: 272351
2016-06-10 00:18:41 +00:00
Tom Stellard 26a2ab7477 AMDGPU/SI: Make sure to emit TargetConstant nodes when matching ds_*permute
Summary:
This fixes a bug with ds_*permute instructions where if it was passed a
constant address, then the offset operand would get assigned a register
operand instead of an immediate.

Reviewers: scchan, arsenm

Subscribers: arsenm, llvm-commits

Differential Revision: http://reviews.llvm.org/D19994

llvm-svn: 272349
2016-06-10 00:01:04 +00:00
Tom Stellard 1d3940e8cc AMDGPU/SI: Use common topological sort algorithm in SIScheduleDAGMI
Reviewers: arsenm, axeldavy

Subscribers: MatzeB, arsenm, llvm-commits

Differential Revision: http://reviews.llvm.org/D19823

llvm-svn: 272346
2016-06-09 23:48:02 +00:00
Matt Arsenault 7757c59e48 AMDGPU: Fix flat atomics
The flat atomics could already be selected, but only
when using flat instructions for global memory. Add
patterns for flat addresses.

llvm-svn: 272345
2016-06-09 23:42:54 +00:00
Matt Arsenault 887018179a AMDGPU: Fix i64 global cmpxchg
This was using extract_subreg sub0 to extract the low register
of the result instead of sub0_sub1, producing an invalid copy.

There doesn't seem to be a way to use the compound subreg indices
in tablegen since those are generated, so manually select it.

llvm-svn: 272344
2016-06-09 23:42:48 +00:00
Eric Christopher 1dbb23e162 Add aliases for mfvrsave/mtvrsave.
Update a test as we're now going to emit it for easier reading of
generated assembly as well.

llvm-svn: 272339
2016-06-09 23:27:48 +00:00
Matt Arsenault e2bd9a32bc AMDGPU: Run verifier after insert waits pass
llvm-svn: 272338
2016-06-09 23:19:14 +00:00
Matt Arsenault cfb61e780a AMDGPU: Remove incorrect assertion
I'm still not sure under what circumstances the offset here is non-0,
but private memory is not limited to 27-bits.

llvm-svn: 272337
2016-06-09 23:19:08 +00:00
Matt Arsenault c3a01ec9db AMDGPU: Properly initialize SIShrinkInstructions
llvm-svn: 272336
2016-06-09 23:18:47 +00:00
Simon Pilgrim 643734c565 [X86][AVX512] Added avx512 VPSLLDQ/VPSRLDQ instruction comments
llvm-svn: 272319
2016-06-09 22:03:15 +00:00
Simon Pilgrim f718682eb9 [X86][AVX512] Dropped avx512 VPSLLDQ/VPSRLDQ intrinsics
Auto-upgrade to generic shuffles like sse/avx2 implementations now that we can lower to VPSLLDQ/VPSRLDQ 

llvm-svn: 272308
2016-06-09 21:09:03 +00:00
Simon Pilgrim 47c76e201a [X86][AVX512] Fixed issue with v16i32 shuffles lowering to VPALIGNR
llvm-svn: 272307
2016-06-09 20:53:12 +00:00
Simon Pilgrim 0ab9d3026a [X86][AVX512] Added support for lowering 512-bit vector shuffles to bit/byte shifts
512-bit VPSLLDQ/VPSRLDQ can only be used for avx512bw targets so lowerVectorShuffleAsShift had to be adjusted to include the subtarget

llvm-svn: 272300
2016-06-09 20:13:58 +00:00
Justin Lebar ed2c282d4b [NVPTX] Add intrinsics for shfl instructions.
Summary:
Currently clang emits these instructions via inline (volatile) asm in
the CUDA headers.  Switching to intrinsics will let the optimizer reason
across calls to these intrinsics.

Reviewers: tra

Subscribers: llvm-commits, jholewinski

Differential Revision: http://reviews.llvm.org/D21160

llvm-svn: 272298
2016-06-09 20:04:08 +00:00
Wei Ding ed0f97fad2 AMDGPU/SI: Fix 32-bit fdiv lowering
We were using the fast fdiv lowering for all division; an implementation of
IEEE754-compliant fdiv is added.

http://reviews.llvm.org/D20557

llvm-svn: 272292
2016-06-09 19:17:15 +00:00
Ulrich Weigand 79564611d9 [SystemZ] Enable long displacement constraints for inline ASM operands
This enables use of the 'S' constraint for inline ASM operands on
SystemZ, which allows for a memory reference with a signed 20-bit
immediate displacement. This patch includes corresponding documentation
and test case updates.

I've changed the 'T' constraint to match the new behavior for 'S', as
'T' also uses a long displacement (though index constraints are still
not implemented). I also changed 'm' to match the behavior for 'S' as
this will allow for a wider range of displacements for 'm', though
correct me if that's not the right decision.

Author: colpell
Differential Revision: http://reviews.llvm.org/D21097

llvm-svn: 272266
2016-06-09 15:19:16 +00:00
Hrvoje Varga c962c4936e [mips][microMIPS] Implement BOVC, BNVC, EXT, INS and JALRC instructions
Differential Revision: http://reviews.llvm.org/D11798

llvm-svn: 272259
2016-06-09 12:57:23 +00:00
James Molloy a7dbf987b5 [Thumb] A branch is not part of an IT block
ReplaceTailWithBranchTo assumed that if an instruction is predicated, it must be part of an IT block. This is not correct for conditional branches.

No testcase, as this was triggered by the reverted patch r272017; test coverage will occur when that patch is reapplied, and there is no known way to trigger this in the meantime.

llvm-svn: 272258
2016-06-09 11:51:29 +00:00
Igor Breger f635367e2b [AVX512] Remove masked_move/blendm intrinsic from back-end.
This is a complementary patch to D21060.

Differential Revision: http://reviews.llvm.org/D21174

llvm-svn: 272257
2016-06-09 11:46:55 +00:00
Zlatko Buljan cd242c1655 [mips][microMIPS] Add CodeGen support for SEL.*, SELEQZ, SELNEZ, SELEQZ.*, SELNEZ.* and CMP.condn.fmt instructions
Differential Revision: http://reviews.llvm.org/D20862

llvm-svn: 272256
2016-06-09 11:15:53 +00:00
Sam Kolton c9bdcb75c4 [AMDGPU] Disassembler: Support for sdwa instructions
Reviewers: vpykhtin, tstellarAMD

Subscribers: arsenm, kzhuravl

Differential Revision: http://reviews.llvm.org/D21129

llvm-svn: 272255
2016-06-09 11:04:45 +00:00
Craig Topper 6f7288dc44 [AVX512] Fix shuffle decode printing for several instructions with write masks. There are still more bugs here with UNPCK and PALIGN for sure. But these were the easiest ones to fix.
llvm-svn: 272252
2016-06-09 07:49:08 +00:00
James Molloy feb9f4243b [Thumb] Select a BIC instead of AND if the immediate can be encoded more optimally negated
If an immediate is only used in an AND node, it is possible that the immediate can be more optimally materialized when negated. If this is the case, we can negate the immediate and use a BIC instead:

  int i(int a) {
    return a & 0xfffffeec;
  }

Used to produce:
    ldr r1, [CONSTPOOL]
    ands r0, r1
  CONSTPOOL: 0xfffffeec

And now produces:
    movs    r1, #255
    adds    r1, #20  ; Less costly immediate generation
    bics    r0, r1

llvm-svn: 272251
2016-06-09 07:39:08 +00:00
Craig Topper 7a2993093e [X86] Bring consistent naming to the SSE/AVX and AVX512 PALIGNR instructions. Then add shuffle decode printing for the EVEX forms which is made easier by having the naming structure more similar to other instructions.
llvm-svn: 272249
2016-06-09 07:06:38 +00:00
Craig Topper 565a5b5451 [X86] Fix bad comment in assert. NFC
llvm-svn: 272248
2016-06-09 07:06:33 +00:00
Saleem Abdulrasool 6c19ffc8bc AArch64: support the `.arch` directive in the IAS
Add support to the AArch64 IAS for the `.arch` directive.  This allows the
assembly input to use architectural functionality in part of a file.  This is
used in existing code like BoringSSL.

Resolves PR26016!

llvm-svn: 272241
2016-06-09 02:56:40 +00:00
Benjamin Kramer c321e53402 Apply most suggestions of clang-tidy's performance-unnecessary-value-param
Avoids unnecessary copies. All changes audited & pass tests with asan.
No functional change intended.

llvm-svn: 272190
2016-06-08 19:09:22 +00:00
Quentin Colombet d1cd30b218 [AArch64][RegisterBankInfo] G_OR are fine on either GPR or FPR.
Teach AArch64RegisterBankInfo that G_OR can be mapped on either GPR or
FPR for 64-bit or 32-bit values.

Add test cases demonstrating how this information is used to coalesce a
computation on a single register bank.

llvm-svn: 272170
2016-06-08 16:53:32 +00:00
Oliver Stannard b3378e2f3c [ARM] MSR instructions implicitly set CPSR
The MSR instructions can write to the CPSR, but we did not model this
fact, so we could emit them in the middle of IT blocks, changing the
condition flags for later instructions in the block.

The tests use two calls to llvm.write_register.i32 because it is valid
to use these instructions at the end of an IT block, which if-conversion does
in some cases. With two calls, the first clobbers the flags, so
a branch has to be used to make the second one conditional.

Differential Revision: http://reviews.llvm.org/D21139

llvm-svn: 272154
2016-06-08 15:26:34 +00:00
Vasileios Kalintiris a9e5154dc5 [mips] Add a proper file header in MipsFastISel.cpp
llvm-svn: 272138
2016-06-08 13:13:15 +00:00
Krzysztof Parzyszek b16882ddf1 [Hexagon] Modify HexagonExpandCondsets to handle subregisters
Also, switch to using functions from LiveIntervalAnalysis to update
live intervals, instead of performing the updates manually.

Re-committing r272045.

llvm-svn: 272135
2016-06-08 12:31:16 +00:00
Diana Picus 0781d10ac4 [ARM] Remove redundant check. NFC
isSwift is tested earlier and known to be false when we reach this code.

llvm-svn: 272127
2016-06-08 10:29:02 +00:00
Benjamin Kramer 46e38f3678 Avoid copies of std::strings and APInt/APFloats where we only read from them
As suggested by clang-tidy's performance-unnecessary-copy-initialization.
This can easily hit lifetime issues, so I audited every change and ran the
tests under asan, which came back clean.

llvm-svn: 272126
2016-06-08 10:01:20 +00:00
Igor Breger 982e4003a6 [AVX512] Fix cvtusi2sd instruction Opcode, it should be 0x7B instead of 0x2A.
llvm-svn: 272122
2016-06-08 07:48:23 +00:00
Quentin Colombet a4ac7cdac2 [AArch64][RegisterBankInfo] Use the generic implementation of copyCost.
Long term we may want to give a high cost to FPR to/from GPR copies.

llvm-svn: 272086
2016-06-08 01:24:00 +00:00
Quentin Colombet cfbdee2312 [RegisterBankInfo] Add a size argument for the cost of copy.
The cost of a copy may be different based on how many bits we have to
copy around. E.g., an 8-bit copy may be different from a 32-bit copy.

llvm-svn: 272084
2016-06-08 01:11:03 +00:00
Nicolai Haehnle c00e03b8f5 AMDGPU: Add amdgpu-ps-wqm-outputs function attributes
Summary:
The presence of this attribute indicates that VGPR outputs should be computed
in whole quad mode. This will be used by Mesa for prolog pixel shaders, so
that derivatives can be taken of shader inputs computed by the prolog, fixing
a bug.

The generated code could certainly be improved: if a prolog pixel shader is
used (which isn't common in modern OpenGL - they're used for gl_Color, polygon
stipples, and forcing per-sample interpolation), Mesa will use this attribute
unconditionally, because it has to be conservative. So WQM may be used in the
prolog when it isn't really needed, and furthermore a silly back-and-forth
switch is likely to happen at the boundary between prolog and main shader
parts.

Fixing this is a bit involved: we'd first have to add a mechanism by which
LLVM writes the WQM-related input requirements to the main shader part binary,
and then Mesa specializes the prolog part accordingly. At that point, we may
as well just compile a monolithic shader...

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=95130

Reviewers: arsenm, tstellarAMD, mareko

Subscribers: arsenm, llvm-commits, kzhuravl

Differential Revision: http://reviews.llvm.org/D20839

llvm-svn: 272063
2016-06-07 21:37:17 +00:00
Eric Christopher 538d09d0dd Revert "Differential Revision: http://reviews.llvm.org/D20557"
Author: Wei Ding <wei.ding2@amd.com>
Date:   Tue Jun 7 19:04:44 2016 +0000

    Differential Revision: http://reviews.llvm.org/D20557

    git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@272044
    91177308-0d34-0410-b5e6-96231b3b80d8

as it was breaking the bots.

This reverts commit r272044.

llvm-svn: 272056
2016-06-07 20:27:12 +00:00
Etienne Bergeron 22bfa83208 [stack-protection] Add support for MSVC buffer security check
Summary:
This patch is adding support for the MSVC buffer security check implementation

The buffer security check is turned on with the '/GS' compiler switch.
  * https://msdn.microsoft.com/en-us/library/8dbf701c.aspx
  * To be added to clang here: http://reviews.llvm.org/D20347

Some overview of buffer security check feature and implementation:
  * https://msdn.microsoft.com/en-us/library/aa290051(VS.71).aspx
  * http://www.ksyash.com/2011/01/buffer-overflow-protection-3/
  * http://blog.osom.info/2012/02/understanding-vs-c-compilers-buffer.html


For the following example:
```
int example(int offset, int index) {
  char buffer[10];
  memset(buffer, 0xCC, index);
  return buffer[index];
}
```

The MSVC compiler is adding these instructions to perform stack integrity check:
```
        push        ebp  
        mov         ebp,esp  
        sub         esp,50h  
  [1]   mov         eax,dword ptr [__security_cookie (01068024h)]  
  [2]   xor         eax,ebp  
  [3]   mov         dword ptr [ebp-4],eax  
        push        ebx  
        push        esi  
        push        edi  
        mov         eax,dword ptr [index]  
        push        eax  
        push        0CCh  
        lea         ecx,[buffer]  
        push        ecx  
        call        _memset (010610B9h)  
        add         esp,0Ch  
        mov         eax,dword ptr [index]  
        movsx       eax,byte ptr buffer[eax]  
        pop         edi  
        pop         esi  
        pop         ebx  
  [4]   mov         ecx,dword ptr [ebp-4]  
  [5]   xor         ecx,ebp  
  [6]   call        @__security_check_cookie@4 (01061276h)  
        mov         esp,ebp  
        pop         ebp  
        ret  
```

The instrumentation above is:
  * [1] is loading the global security canary,
  * [3] is storing the locally computed ([2]) canary to the guard slot,
  * [4] is loading the guard slot and ([5]) re-computing the global canary,
  * [6] is validating the resulting canary with '__security_check_cookie' and performing error handling.

Overview of the current stack-protection implementation:
  * lib/CodeGen/StackProtector.cpp
    * There is a default stack-protection implementation applied on intermediate representation.
    * The target can overload 'getIRStackGuard' method if it has a standard location for the stack protector cookie.
    * An intrinsic 'Intrinsic::stackprotector' is added to the prologue. It will be expanded by the instruction selection pass (DAG or Fast).
    * Basic Blocks are added to every instrumented function to receive the code for handling stack guard validation and error handling.
    * Guard manipulation and comparison are added directly to the intermediate representation.

  * lib/CodeGen/SelectionDAG/SelectionDAGISel.cpp
  * lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp
    * There is an implementation that adds instrumentation during instruction selection (for better handling of sibling calls).
      * see the long comment above the 'class StackProtectorDescriptor' declaration.
    * The target needs to override 'getSDagStackGuard' to activate SDAG stack protection generation. (note: getIRStackGuard MUST be nullptr).
      * 'getSDagStackGuard' returns the appropriate stack guard (security cookie)
    * The code is generated by 'SelectionDAGBuilder.cpp' and 'SelectionDAGISel.cpp'.

  * include/llvm/Target/TargetLowering.h
    * Contains a function to retrieve the default Guard 'Value'; should be overridden by each target to select which implementation is used and to provide the Guard 'Value'.

  * lib/Target/X86/X86ISelLowering.cpp
    * Contains the x86 specialisation; Guard 'Value' used by the SelectionDAG algorithm.

Function-based Instrumentation:
  * MSVC doesn't inline the stack guard comparison in every function. Instead, a call to '__security_check_cookie' is added to the epilogue before every return instruction.
  * To support function-based instrumentation, this patch is
    * adding a function to get the function-based check (llvm 'Value', see include/llvm/Target/TargetLowering.h),
      * If provided, the stack protection instrumentation won't be inlined and a call to that function will be added to the epilogue.
    * modifying SelectionDAGISel.cpp to avoid producing the basic blocks used for inline instrumentation,
    * generating the function-based instrumentation during the ISEL pass (SelectionDAGBuilder.cpp),
    * if FastISel is used (not SelectionDAG), using the fallback which relies on the same function-based instrumentation implemented over the intermediate representation (StackProtector.cpp).

Modifications
  * adding support for MSVC (lib/Target/X86/X86ISelLowering.cpp)
  * adding support function-based instrumentation (lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp, .h)

Results

  * IR generated instrumentation:
```
clang-cl /GS test.cc /Od /c -mllvm -print-isel-input
```

```
*** Final LLVM Code input to ISel ***

; Function Attrs: nounwind sspstrong
define i32 @"\01?example@@YAHHH@Z"(i32 %offset, i32 %index) #0 {
entry:
  %StackGuardSlot = alloca i8*                                                  <<<-- Allocated guard slot
  %0 = call i8* @llvm.stackguard()                                              <<<-- Loading Stack Guard value
  call void @llvm.stackprotector(i8* %0, i8** %StackGuardSlot)                  <<<-- Prologue intrinsic call (store to Guard slot)
  %index.addr = alloca i32, align 4
  %offset.addr = alloca i32, align 4
  %buffer = alloca [10 x i8], align 1
  store i32 %index, i32* %index.addr, align 4
  store i32 %offset, i32* %offset.addr, align 4
  %arraydecay = getelementptr inbounds [10 x i8], [10 x i8]* %buffer, i32 0, i32 0
  %1 = load i32, i32* %index.addr, align 4
  call void @llvm.memset.p0i8.i32(i8* %arraydecay, i8 -52, i32 %1, i32 1, i1 false)
  %2 = load i32, i32* %index.addr, align 4
  %arrayidx = getelementptr inbounds [10 x i8], [10 x i8]* %buffer, i32 0, i32 %2
  %3 = load i8, i8* %arrayidx, align 1
  %conv = sext i8 %3 to i32
  %4 = load volatile i8*, i8** %StackGuardSlot                                  <<<-- Loading Guard slot
  call void @__security_check_cookie(i8* %4)                                    <<<-- Epilogue function-based check
  ret i32 %conv
}
```

  * SelectionDAG generated instrumentation:

```
clang-cl /GS test.cc /O1 /c /FA
```

```
"?example@@YAHHH@Z":                    # @"\01?example@@YAHHH@Z"
# BB#0:                                 # %entry
        pushl   %esi
        subl    $16, %esp
        movl    ___security_cookie, %eax                                        <<<-- Loading Stack Guard value
        movl    28(%esp), %esi
        movl    %eax, 12(%esp)                                                  <<<-- Store to Guard slot
        leal    2(%esp), %eax
        pushl   %esi
        pushl   $204
        pushl   %eax
        calll   _memset
        addl    $12, %esp
        movsbl  2(%esp,%esi), %esi
        movl    12(%esp), %ecx                                                  <<<-- Loading Guard slot
        calll   @__security_check_cookie@4                                      <<<-- Epilogue function-based check
        movl    %esi, %eax
        addl    $16, %esp
        popl    %esi
        retl
```

Reviewers: kcc, pcc, eugenis, rnk

Subscribers: majnemer, llvm-commits, hans, thakis, rnk

Differential Revision: http://reviews.llvm.org/D20346

llvm-svn: 272053
2016-06-07 20:15:35 +00:00
Krzysztof Parzyszek 8b61759c05 Revert r272045 since GCC doesn't know how to compile it.
llvm-svn: 272048
2016-06-07 19:25:28 +00:00
Krzysztof Parzyszek 8424280eef [Hexagon] Modify HexagonExpandCondsets to handle subregisters
Also, switch to using functions from LiveIntervalAnalysis to update
live intervals, instead of performing the updates manually.

llvm-svn: 272045
2016-06-07 19:06:23 +00:00
Wei Ding a70216f1b3 Differential Revision: http://reviews.llvm.org/D20557
llvm-svn: 272044
2016-06-07 19:04:44 +00:00
Geoff Berry 486f49cc63 Reapply [AArch64] Fix isLegalAddImmediate() to return true for valid negative values.
Originally reviewed here: http://reviews.llvm.org/D17463

llvm-svn: 272023
2016-06-07 16:48:43 +00:00
Oliver Stannard 8de5f24d10 [ARM] Accept conditional versions of BXNS and BLXNS
These instructions end in "S" but are not flag-setting, so they need including
in the list of special cases in the assembly parser.

Differential Revision: http://reviews.llvm.org/D21077

llvm-svn: 272015
2016-06-07 14:58:48 +00:00
Simon Pilgrim 35c06a0282 [X86][SSE] Add general lowering of nontemporal vector loads (fixed bad merge)
Currently the only way to use the (V)MOVNTDQA nontemporal vector load instructions is through the int_x86_sse41_movntdqa style builtins.

This patch adds support for lowering nontemporal loads from general IR, allowing us to remove the movntdqa builtins in a future patch.

We currently still fold nontemporal loads into suitable instructions; we should probably look at removing this (and nontemporal stores as well), or at least make the target's folding implementation aware that it's dealing with a nontemporal memory transaction.

There is also an issue that VMOVNTDQA only acts on 128-bit vectors on pre-AVX2 hardware - so currently a normal ymm load is still used on AVX1 targets.

Differential Review: http://reviews.llvm.org/D20965

llvm-svn: 272011
2016-06-07 13:47:23 +00:00
Simon Pilgrim 9a89623b57 [X86][SSE] Add general lowering of nontemporal vector loads
Currently the only way to use the (V)MOVNTDQA nontemporal vector load instructions is through the int_x86_sse41_movntdqa style builtins.

This patch adds support for lowering nontemporal loads from general IR, allowing us to remove the movntdqa builtins in a future patch.

We currently still fold nontemporal loads into suitable instructions; we should probably look at removing this (and nontemporal stores as well), or at least make the target's folding implementation aware that it's dealing with a nontemporal memory transaction.

There is also an issue that VMOVNTDQA only acts on 128-bit vectors on pre-AVX2 hardware - so currently a normal ymm load is still used on AVX1 targets.

Differential Review: http://reviews.llvm.org/D20965

llvm-svn: 272010
2016-06-07 13:34:24 +00:00
James Molloy b101383fb5 [Thumb-1] Add optimized constant materialization for integers [256..512)
We can materialize these integers using a MOV; ADDi8 pair.

llvm-svn: 272007
2016-06-07 13:10:14 +00:00
Igor Breger 61e628591f [AVX512] Fix load opcode for fast isel.
Differential Revision: http://reviews.llvm.org/D21067

llvm-svn: 272006
2016-06-07 13:08:45 +00:00
Ulrich Weigand 6b0634b304 [PowerPC] Support multiple return values with fast isel
Using an LLVM IR aggregate return value type containing three
or more integer values causes an abort in the fast isel pass.

This patch adds two more registers to RetCC_PPC64_ELF_FIS to
allow returning up to four integers with fast isel, just the
same as is currently supported with regular isel (RetCC_PPC).
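
A hedged illustration of the case this covers (an assumed example, not from the patch's tests): a function whose return value is a small aggregate of four integers, which fast isel can now return in registers instead of aborting.
```
// Four integers returned as one aggregate; PPC64 fast isel previously aborted
// on aggregate returns with three or more integer members.
struct FourInts { int A, B, C, D; };

FourInts splitBytes(unsigned X) {
  return {int(X & 0xFF), int((X >> 8) & 0xFF), int((X >> 16) & 0xFF), int(X >> 24)};
}
```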

This is needed for Swift and (possibly) other non-clang frontends.

Fixes PR26190.

llvm-svn: 272005
2016-06-07 12:48:22 +00:00
Simon Pilgrim ca1da1bf07 [X86][SSE] Improved blend+zero target shuffle combining to use combined shuffle mask directly
We currently only combine to blend+zero if the target value type has 8 elements or less, but this was missing a lot of cases where the combined mask had been widened.

This change makes it so we use the combined mask to determine the blend value type, allowing us to catch more widened cases.

llvm-svn: 272003
2016-06-07 12:20:14 +00:00
James Molloy 53298a1808 [ARM] Shrink post-indexed LDR and STR to LDM/STM
A Thumb-2 post-indexed LDR instruction such as:

  ldr.w r0, [r1], #4

Can be rewritten as:

  ldm.n r1!, {r0}

LDMs can be more expensive than LDRs on some cores, so this has been enabled only in minsize mode.

llvm-svn: 272002
2016-06-07 12:13:34 +00:00
James Molloy 75afc95112 [ARM] Transform LDMs into writeback form to save code size
If we have an LDM that uses only low registers and doesn't write to its base register:

  ldm.w r0, {r1, r2, r3}

And that base register is dead after the LDM, then we can convert it to writeback form and use a narrow encoding:

  ldm.n r0!, {r1, r2, r3}

Obviously, this introduces a new register write and so can cause WAW hazards, so I've enabled it only in minsize mode. This is a code size trick that ARM Compiler 5 ("armcc") does that we don't.

llvm-svn: 272000
2016-06-07 11:47:24 +00:00
Peter Smith 353a2286e2 [ARM] Incorrect relocation type for Thumb2 B<cond>.w
The Thumb2 conditional branch B<cond>.W has a different encoding (T3) 
to the unconditional branch B.W (T4) as it needs to record <cond>. 
As the encoding is different the B<cond>.W is given a different 
relocation type. 

ELF for the ARM Architecture 4.6.1.6 (Table-13) states that 
R_ARM_THM_JUMP19 should be used for B<cond>.W. At present the 
MC layer is using the R_ARM_THM_JUMP24 from B.W.

This change makes B<cond>.W use R_ARM_THM_JUMP19 and alters the 
existing test that checks for R_ARM_THM_JUMP24 to expect 
R_ARM_THM_JUMP19.

llvm-svn: 271997
2016-06-07 10:34:33 +00:00
Craig Topper 2f90c1fedf [AVX512] Allow avx2 and sse41 nontemporal load intrinsics to select EVEX encoded instructions when VLX is enabled.
llvm-svn: 271988
2016-06-07 07:27:57 +00:00
Craig Topper e1cac15feb [AVX512] Remove unnecessary mayLoad, mayStore, hasSideEffects flags from instructions that have patterns that imply them. Add the same set of flags to instructions that don't have patterns to imply them.
llvm-svn: 271987
2016-06-07 07:27:54 +00:00
Craig Topper 0fcf925699 [AVX512] Add NoVLX to a couple patterns that have VLX equivalents. Ordering of the patterns in the .td file protects this, but it's better to be explicit.
llvm-svn: 271986
2016-06-07 07:27:51 +00:00
Saleem Abdulrasool 532dcbc2c5 ARM: correct TLS access on WoA
TLS access requires an offset from the TLS index.  The index itself is the
section-relative distance of the symbol.  For ARM, the relevant relocation
(IMAGE_REL_ARM_SECREL) is applied as a constant.  This means that the value may
not be an immediate and must be lowered into a constant pool.  This offset will
not be base relocated.  We were previously emitting the actual address of the
symbol, which would be base relocated and would therefore be the value offset by
the ImageBase + TLS Offset.

llvm-svn: 271974
2016-06-07 03:15:07 +00:00
Saleem Abdulrasool ce4eee4951 ARM: clang-format a couple of switches, add comments
clang-format a couple of switches in preparation for a future change.  Add some
enumeration comments.

llvm-svn: 271973
2016-06-07 03:15:01 +00:00
Saleem Abdulrasool 0cd0cb992a ARM: normalise space in the patterns
Just adjust the whitespace for the selection patterns.  NFC.

llvm-svn: 271972
2016-06-07 03:14:57 +00:00
Matt Arsenault 02458c2d27 AMDGPU: Add function for getting instruction size
llvm-svn: 271936
2016-06-06 20:10:33 +00:00
Matt Arsenault 3b2e2a59e8 AMDGPU: Fix constantexpr addrspacecasts
If we had a constant group address space cast, the queue pointer
wasn't enabled for the function, resulting in a crash on noreg
later.

llvm-svn: 271935
2016-06-06 20:03:31 +00:00
Artem Tamazov 135487767b [AMDGPU][llvm-mc] v_cndmask_b32: src2 is mandatory; do not enforce VOP2 when src2 == VCC.
Another step toward unifying the llvm assembler/disassembler with sp3.
As a side effect, CodeGen output is slightly improved, hence the changes in CodeGen tests.
Assembler/Disassembler tests updated/added.

Differential Revision: http://reviews.llvm.org/D20796

llvm-svn: 271900
2016-06-06 15:23:43 +00:00
Igor Breger edafb0595e [KNL] Fix UMULO lowering.
Differential Revision: http://reviews.llvm.org/D21013

llvm-svn: 271891
2016-06-06 12:24:52 +00:00
Benjamin Kramer f21beb2c74 Remove dead function with incredibly broken assert.
Found by clang-tidy's misc-assert-side-effect.

llvm-svn: 271887
2016-06-06 12:10:42 +00:00
Filipe Cabecinhas 6e7d5467c0 [NFC] Silence gcc warning (-Wsign-compare)
llvm-svn: 271882
2016-06-06 10:49:56 +00:00
Craig Topper 143446d5c1 [AVX512] Add PALIGNR shuffle lowering for v32i16 and v16i32.
llvm-svn: 271870
2016-06-06 05:39:10 +00:00
Simon Pilgrim 64c6de4525 [X86][XOP] Added VPERMIL2PD/VPERMIL2PS raw mask decoding for target shuffle combines
llvm-svn: 271834
2016-06-05 15:21:30 +00:00
Simon Pilgrim 478295dadd [X86][XOP] Added VPERMIL2PD/VPERMIL2PS as a target shuffle type
llvm-svn: 271831
2016-06-05 15:01:45 +00:00
Simon Pilgrim 163987a235 [X86][XOP] Tidied up DecodeVPERMIL2PMask to more closely match DecodeVPERMILPMask.
llvm-svn: 271830
2016-06-05 14:33:43 +00:00
Craig Topper 8eeda57a40 [AVX512] Add support for lowering PALIGNR for v64i8.
Could do this for other types too, but this is what's needed to replace the intrinsic with native IR in clang.

llvm-svn: 271828
2016-06-05 06:29:12 +00:00
Craig Topper 9f51c9ef15 [AVX512] Fix PANDN combining for v4i32/v8i32 when VLX is enabled.
v4i32/v8i32 ANDs aren't promoted to v2i64/v4i64 when VLX is enabled.

llvm-svn: 271826
2016-06-05 05:35:11 +00:00
Simon Pilgrim 2ead861d07 [X86][XOP] Added VPERMIL2PD/VPERMIL2PS shuffle mask comment decoding
llvm-svn: 271809
2016-06-04 21:44:28 +00:00
Craig Topper e609bd6600 [X86] Add the VR128L/H and VR256L/H to the list of vector register classes for inline asm constraints. Also fix the comment on the function.
llvm-svn: 271802
2016-06-04 20:15:08 +00:00
Saleem Abdulrasool 1fcdc23a6e X86: enable TLS on Windows itanium
Windows itanium is nearly identical to windows-msvc (MS ABI for C, itanium for
C++).  Enable the TLS support for the target similar to the MSVC model.

llvm-svn: 271797
2016-06-04 18:27:22 +00:00
Simon Pilgrim fd2eda4f64 [X86][AVX2] Fix v16i16 SHL lowering (PR27730)
The AVX2 v16i16 shift lowering works by unpacking to 2 x v8i32, performing the shift and then truncating the result.

The unpacking is used to place the values in the upper 16 bits so that we can correctly sign-extend for SRA shifts. Unfortunately we weren't ensuring that the lower 16 bits were zero, so SHL did not correctly shift in zero bits.
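
A scalar model of the failure mode, for illustration (hedged sketch; the real lowering works on whole v8i32 halves):
```
#include <cstdint>

// One 16-bit lane widened to 32 bits with the value placed in the upper half,
// as the lowering does; Junk stands for whatever sat in the lower 16 bits.
uint16_t shlViaWiden(uint16_t Lane, unsigned Amt, uint16_t Junk) {
  uint32_t Widened = (uint32_t(Lane) << 16) | Junk;
  uint32_t Shifted = Widened << Amt; // Junk's top bits are shifted up into the result
  return uint16_t(Shifted >> 16);    // matches uint16_t(Lane << Amt) only if Junk == 0
}
```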

llvm-svn: 271796
2016-06-04 16:45:33 +00:00
Craig Topper 6ae375c9ba [X86] Use smaller types to shrink the intrinsic lowering tables by about 12K.
llvm-svn: 271776
2016-06-04 04:32:17 +00:00
Craig Topper 5250634334 [X86] Use X86ISD::ABS for lowering pabs SSSE3/AVX intrinsics to match AVX512. Should allow those intrinsics to use the EVEX encoded instructions and get the extra registers when available.
llvm-svn: 271775
2016-06-04 04:32:15 +00:00
Chad Rosier be879ea751 [AArch64] Spot SBFX-compatible code expressed with sign_extend.
This is very similar to r271677, but for extracts from i32 with the SIGN_EXTEND
acting on an arithmetic shift.
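
A hedged C-level illustration of the kind of pattern this covers (assumed example; the exact instruction choice is up to the selector):
```
#include <cstdint>

// sign_extend(i32 (ashr X, 8)) to i64: with this change the extract can be
// selected as a single signed bit-field extract (SBFX) on AArch64.
int64_t signedField(int32_t X) {
  return int64_t(X >> 8); // arithmetic shift of a signed value, then sign-extended
}
```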

llvm-svn: 271717
2016-06-03 20:05:49 +00:00
Derek Schuff 5859a9ed80 [WebAssembly] Emit type signatures for declared functions
Under emscripten, C code can take the address of a function implemented
in Javascript (which is exposed via an import in wasm). Because imports
do not have a linear memory address in wasm, we need to generate a thunk
to be the target of the indirect call; it calls the import directly.

To make this possible, LLVM needs to emit the type signatures for these
functions, because they may not be called directly or referred to other
than where the address is taken.

This uses a new .s directive (.functype) which specifies the signature.

Differential Revision: http://reviews.llvm.org/D20891

Re-apply r271599 but instead of bailing with an error when a declared
function has multiple returns, replace it with a pointer argument. Also
add the test case I forgot to 'git add' last time around.

llvm-svn: 271703
2016-06-03 18:34:36 +00:00
Sjoerd Meijer 9bc93f6298 Code size optimisation: do not inline memcpy if this expansion results
in more instructions than the library call.

Differential Revision: http://reviews.llvm.org/D20958

llvm-svn: 271678
2016-06-03 15:38:55 +00:00
Chad Rosier 2d658703e1 [AArch64] Spot SBFX-compatible code expressed with sign_extend_inreg.
We were assuming all SBFX-like operations would have the shl/asr form, but often
when the field being extracted is an i8 or i16, we end up with a
SIGN_EXTEND_INREG acting on a shift instead.

This is a port of r213754 from ARM to AArch64.

llvm-svn: 271677
2016-06-03 15:00:09 +00:00
Artem Tamazov f88397c84c [test/AMDGPU] Square-braced-syntax for registers: add macro test/example.
Test added as per discussion in http://reviews.llvm.org/D20588.
The macro is just a demonstration, useless in practice.
Coding style fixes.

Differential Revision: http://reviews.llvm.org/D20797

llvm-svn: 271675
2016-06-03 14:41:17 +00:00
Sjoerd Meijer d906bf1369 RAS extensions are part of ARMv8.2-A. This change enables them by introducing a
new instruction and several system registers to the ARM and AArch64 targets.

Patch by: Roger Ferrer Ibanez and Oliver Stannard

Differential Revision: http://reviews.llvm.org/D20282

llvm-svn: 271670
2016-06-03 14:03:27 +00:00
Sjoerd Meijer 9da258d8e5 The ARM target does not use the printAliasInstr machinery, which
forces special checks in ARMInstPrinter::printInstruction. This
patch addresses that issue.

Not all special checks could be removed: either they involve elaborated
conditions under which the alias is emitted (e.g. ldm/stm on sp may be
pop/push but only if the number of registers is >= 2) or the number
of registers is multivalued (as happens again with ldm/stm) and they
do not match the InstAlias pattern which assumes single-valued operands
in the pattern.

Patch by: Roger Ferrer Ibanez

Differential Revision: http://reviews.llvm.org/D20237

llvm-svn: 271667
2016-06-03 13:19:43 +00:00
Sam Kolton a4a99ad1bc [AMDGPU] Assembler: More tests for SDWA instructions. Fix for SDWA float modifiers.
Summary: Depends on D20625

Reviewers: tstellarAMD, vpykhtin, artem.tamazov

Subscribers: arsenm, kzhuravl

Differential Revision: http://reviews.llvm.org/D20674

llvm-svn: 271662
2016-06-03 11:43:09 +00:00
Daniel Sanders 43750eab82 [mips] EABI CodeGen is completely untested and seems to have bitrotted. Remove it.
Summary:
There are no tests*, no EABI buildbots, and simple test cases do not work.

* There is a single MIPS16 test using a mips*-gnueabi triple but this test
  doesn't test EABI and the triple doesn't cause EABI to be used.

Reviewers: sdardis

Subscribers: tberghammer, danalbert, srhines, dsanders, sdardis, llvm-commits

Differential Revision: http://reviews.llvm.org/D20906

llvm-svn: 271658
2016-06-03 10:38:09 +00:00
Sam Kolton 05ef1c940f [AMDGPU] Assembler: Custom converters for SDWA instructions. Support for _dpp and _sdwa suffixes in mnemonics.
Summary:
Added custom converters for SDWA instructions to support optional operands and modifiers.
Added support for the _dpp and _sdwa suffixes, which allow forcing DPP or SDWA encoding for instructions.

Reviewers: tstellarAMD, vpykhtin, artem.tamazov

Subscribers: arsenm, kzhuravl

Differential Revision: http://reviews.llvm.org/D20625

llvm-svn: 271655
2016-06-03 10:27:37 +00:00
Chandler Carruth 9ac86efd4d Remove bogus initialization of the PPC and Hexagon SelectionDAGISel
subclasses. These are not passes proper. We don't support registering
them, they can't be constructed with default arguments, and the ID is
actually in a base class.

Only these two targets even had any boiler plate to try to do this, and
it had to be munged out of the INITIALIZE_PASS macros to work. What's
worse, the boiler plate has rotted and the "name" of the pass is
actually the description string now!!! =/ All of this is completely
unnecessary. No other target bothers, and nothing breaks if you don't
initialize them because CodeGen has an entirely separate initialization
path that is somewhat more durable than relying on the implicit
initialization the way the 'opt' tool does for registered passes.

llvm-svn: 271650
2016-06-03 10:13:31 +00:00
Chandler Carruth d474144a7a Use the standard INITIALIZE_PASS macro rather than hand rolling a (not
entirely correct) version of its contents.

llvm-svn: 271649
2016-06-03 10:13:29 +00:00
Daniel Sanders 6ba3dd6b71 [mips] Implement 'la' macro in PIC mode for O32.
Summary:
N32 support will follow in a later patch since the symbol version of 'la'
incorrectly believes N32 to have 64-bit pointers and rejects it early.

This fixes the three incorrectly expanded 'la' macros found in bionic.

Reviewers: sdardis

Subscribers: dsanders, llvm-commits, sdardis

Differential Revision: http://reviews.llvm.org/D20820

llvm-svn: 271644
2016-06-03 09:53:06 +00:00
Simon Pilgrim e85506b6e0 [X86][XOP] Support for VPERMIL2PD/VPERMIL2PS 2-input shuffle instructions
This patch begins adding support for lowering to the XOP VPERMIL2PD/VPERMIL2PS shuffle instructions - adding the X86ISD::VPERMIL2 opcode and cleaning up the usage.

The internal llvm intrinsics were assuming the shuffle mask operand was the same type as the float/double input operands (I guess to simplify the intrinsic definitions in X86InstrXOP.td to a single value type). These needed changing to integer types (matching the clang builtin and the AMD intrinsics definitions); an auto-upgrade path is added to convert old calls.

Mask decoding/target shuffle support will be added in future patches.

Differential Revision: http://reviews.llvm.org/D20049

llvm-svn: 271633
2016-06-03 08:06:03 +00:00
Craig Topper 5e3d314488 [X86] Fix some isel patterns to remove an operand from some multiclasses. NFC
llvm-svn: 271631
2016-06-03 05:58:52 +00:00
Craig Topper e7ae106147 [AVX512] Ensure EVEX vpshufd, vpshuflw, and vpshufhw have isel priority over the VEX encoded ones.
llvm-svn: 271629
2016-06-03 05:31:04 +00:00
Craig Topper 01f53b1773 [AVX512] Fix shuffle comment printing for EVEX encoded PSHUFD, PSHUFHW, and PSHUFLW.
llvm-svn: 271628
2016-06-03 05:31:00 +00:00
Craig Topper dc70d8a4b7 [X86] Simplify a multiclass to remove a parameter. NFC
llvm-svn: 271627
2016-06-03 05:30:56 +00:00
Craig Topper 2388b4610a [X86] Remove unnecessary pattern predicates from the vector bit cast patterns. The types have to be legal and there are no alternative patterns. Saves almost 200 bytes in isel table.
llvm-svn: 271625
2016-06-03 04:15:27 +00:00
Craig Topper 19462f02bb [X86] Cleanup formatting a bit to align similar parts of adjacent lines.
llvm-svn: 271624
2016-06-03 04:15:25 +00:00
Craig Topper 895897f85b [X86] Remove redundant bitcast patterns for 128/256-bit vectors. These only differ from the SSE/AVX versions by the register class, but register class has no bearing on isel.
llvm-svn: 271623
2016-06-03 04:15:22 +00:00
Derek Schuff f5bae9c1ce Revert "[WebAssembly] Emit type signatures for declared functions"
This reverts r271599, it broke the integration tests.
More places than I expected had nontrivial return types in imports, or
else the check was wrong.

llvm-svn: 271606
2016-06-02 23:02:44 +00:00
Derek Schuff 23b7d65fe5 [WebAssembly] Emit type signatures for declared functions
Under emscripten, C code can take the address of a function implemented
in Javascript (which is exposed via an import in wasm). Because imports
do not have a linear memory address in wasm, we need to generate a thunk
to be the target of the indirect call; it calls the import directly.

To make this possible, LLVM needs to emit the type signatures for these
functions, because they may not be called directly or referred to other
than where the address is taken.

This uses a new .s directive (.functype) which specifies the signature.

Differential Revision: http://reviews.llvm.org/D20891

llvm-svn: 271599
2016-06-02 21:34:18 +00:00
Matt Arsenault 43578ec2a8 AMDGPU: Handle flat in getMemOpBaseRegImmOfs
It can still report the base register, and the uses give up when it
fails.

llvm-svn: 271575
2016-06-02 20:05:20 +00:00
Sanjay Patel dba8b4c04d transform obscured FP sign bit ops into a fabs/fneg using TLI hook
This is effectively a revert of:
http://reviews.llvm.org/rL249702 - [InstCombine] transform masking off of an FP sign bit into a fabs() intrinsic call (PR24886)
and:
http://reviews.llvm.org/rL249701 - [ValueTracking] teach computeKnownBits that a fabs() clears sign bits
and a reimplementation as a DAG combine for targets that have IEEE754-compliant fabs/fneg instructions.

This is intended to resolve the objections raised on the dev list:
http://lists.llvm.org/pipermail/llvm-dev/2016-April/098154.html
and:
https://llvm.org/bugs/show_bug.cgi?id=24886#c4

In the interest of patch minimalism, I've only partly enabled AArch64. PowerPC, MIPS, x86 and others can enable later.
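
A hedged illustration of what an "obscured" fabs looks like at the source level (assumed example, mirroring the pattern the combine recognizes):
```
#include <cstdint>
#include <cstring>

// Clearing the IEEE-754 sign bit through an integer bitcast; on targets with
// IEEE-compliant fabs the new DAG combine can turn this back into a plain fabs.
float fabsViaBits(float X) {
  uint32_t Bits;
  std::memcpy(&Bits, &X, sizeof(Bits)); // bitcast float -> i32
  Bits &= 0x7fffffffU;                  // mask off the sign bit
  std::memcpy(&X, &Bits, sizeof(Bits)); // bitcast back to float
  return X;
}
```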

Differential Revision: http://reviews.llvm.org/D19391

llvm-svn: 271573
2016-06-02 20:01:37 +00:00
Matt Arsenault d1097a38e2 AMDGPU: Cleanup load tests
There are a lot of different kinds of loads to test for,
and these were scattered around inconsistently with
some redundancy. Try to comprehensively test all loads
in a consistent way.

llvm-svn: 271571
2016-06-02 19:54:26 +00:00
Matt Arsenault 52dec8d36a AMDGPU: Temporary fix for broken store combine
llvm-svn: 271567
2016-06-02 19:00:55 +00:00
Matt Arsenault 8e00194be8 AMDGPU: Fix crashes on unknown processor name
If the processor name failed to parse for amdgcn,
the resulting output would have R600 ISA in it.

If the processor name was missing or invalid for R600,
the wavefront size would not be set and there would be
crashes from missing itinerary data.

Fixes crashes in future commit caused by dividing by the unset/0
wavefront size.

llvm-svn: 271561
2016-06-02 18:37:16 +00:00
Ahmed Bougacha 63f78b0206 [X86] Define segment MI operands as regs instead of i8imm.
We've been pretending that segments are i8imm since the initial
support (r68645), predating the addition of the SEGMENT_REG class
(r81895).  That happens to work, but is wrong and inconsistent
with how we print (e.g., X86ATTInstPrinter::printMemReference)
and parse them (e.g., X86Operand::addMemOperands).

This change shouldn't affect any tool users, but is visible to
library users or out-of-tree tablegen backends: this causes
MCOperandInfo for the segment op to have an RC instead of "unknown",
and TII::getRegClass to actually return something.  As the registers
are reserved and no vregs of the class ever created, that shouldn't
change anything.

No test change; no suspicious getRegClass() in X86 and CodeGen.

llvm-svn: 271559
2016-06-02 18:29:15 +00:00
Matthias Braun 651cff42c4 AArch64: Do not test for CPUs, use SubtargetFeatures
Testing for specific CPUs has a number of problems; it is better to use subtarget
features:
- When some tweak is added for a specific CPU it is often desirable for
  the next version of that CPU as well, yet we often forget to add it.
- It is hard to keep track of checks scattered around the target code;
  Declaring all target specifics together with the CPU in the tablegen
  file is a clear representation.
- Subtarget features can be tweaked from the command line.

To discourage people from using CPU checks in the future I removed the
isCortexXX(), isCyclone(), ... functions. I added an getProcFamily()
function for exceptional circumstances but made it clear in the comment
that usage is discouraged.
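
A hedged sketch of the preferred style (illustrative names, not the real AArch64 API):
```
// before: if (Subtarget.isCortexA57() || Subtarget.isCyclone()) enableTweak();
// after:  key the tweak off a named subtarget feature declared in the .td file,
//         so new CPUs opt in simply by listing the feature.
struct SubtargetSketch {
  bool HasFastTweak = false; // set from the CPU's feature list
  bool hasFastTweak() const { return HasFastTweak; }
};

bool shouldEnableTweak(const SubtargetSketch &ST) {
  return ST.hasFastTweak();
}
```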

Reformat feature list in AArch64.td to have 1 feature per line in
alphabetical order to simplify merging and sorting for out of tree
tweaks.

No functional change intended.

Differential Revision: http://reviews.llvm.org/D20762

llvm-svn: 271555
2016-06-02 18:03:53 +00:00
Dimitry Andric 6a482a73d6 Only attempt to detect AVG if SSE2 is available
Summary:
In PR29973 Sanjay Patel reported an assertion failure when a certain
loop was optimized, for a target without SSE2 support.  It turned out
this was because of the AVG pattern detection introduced in rL253952.

Prevent the assertion failure by bailing out early in
`detectAVGPattern()`, if the target does not support SSE2.

Also add a minimized test case.

Reviewers: congh, eli.friedman, spatel

Subscribers: emaste, llvm-commits

Differential Revision: http://reviews.llvm.org/D20905

llvm-svn: 271548
2016-06-02 17:30:49 +00:00
Geoff Berry 66f6b65fed [PEI, AArch64] Use empty spaces in stack area for local stack slot allocation.
Summary:
If the target requests it, use empty spaces in the fixed and
callee-save stack area to allocate local stack objects.

AArch64: Change last callee-save reg stack object alignment instead of
size to leave a gap to take advantage of above change.

Reviewers: t.p.northover, qcolombet, MatzeB

Subscribers: rengolin, mcrosier, llvm-commits, aemerson

Differential Revision: http://reviews.llvm.org/D20220

llvm-svn: 271527
2016-06-02 16:22:07 +00:00
Krzysztof Parzyszek 3d6fc83a58 [Hexagon] Expand COPY pseudo-instruction
Handle it locally instead of having the target-independent pass deal
with it. The generic pass does not preserve implicit uses, which may
be necessary.

llvm-svn: 271520
2016-06-02 14:33:08 +00:00
Krzysztof Parzyszek f69ff7120b [RDF] Ignore implicit defs when resetting <kill> flags
llvm-svn: 271519
2016-06-02 14:30:09 +00:00
Simon Pilgrim 0afd5a4d80 [X86][SSE] Replace (V)CVTTPS2DQ and VCVTTPD2DQ truncating (round to zero) f32/f64 to i32 with generic IR (llvm)
This patch removes the llvm intrinsics (V)CVTTPS2DQ and VCVTTPD2DQ truncation (round to zero) conversions and auto-upgrades to FP_TO_SINT calls instead.

Note: I looked at updating CVTTPD2DQ as well but this still requires a lot more work to correctly lower.

Differential Revision: http://reviews.llvm.org/D20860

llvm-svn: 271510
2016-06-02 10:55:21 +00:00
Sjoerd Meijer 0b7bb16e5b This adds support for Cortex-A73 as an available target.
Differential Revision: http://reviews.llvm.org/D20865

llvm-svn: 271508
2016-06-02 10:48:52 +00:00
Craig Topper 048a08af66 [AVX512] Add 512-bit load/stores to fast isel.
llvm-svn: 271486
2016-06-02 04:51:37 +00:00
Craig Topper 292a86db5b [X86] No need to use 256-bit VMOVNTPS for integer types when only AVX1 is supported. VMOVNTDQ is available with AVX1.
We were getting this right for v4i64 but not the other integer types.

llvm-svn: 271482
2016-06-02 04:19:48 +00:00
Craig Topper ca9c0801e1 [X86] Add AVX 256-bit load and stores to fast isel.
I'm not sure why this was missing for so long.

This also exposed that we were picking the floating point 256-bit VMOVNTPS for some integer types in normal isel for AVX1, even though VMOVNTDQ is available. In practice it doesn't matter due to the execution dependency fix pass, but it required extra isel patterns. Fixing that in a follow-up commit.

llvm-svn: 271481
2016-06-02 04:19:45 +00:00
Craig Topper 6611188633 [X86] Use uint16_t for a couple arrays of instruction opcodes. NFC
llvm-svn: 271480
2016-06-02 04:19:42 +00:00
Craig Topper 5bb9cda620 [AVX512] Remove LOADA/LOADU/STOREA/STOREU intrinsic types now that they are unused.
llvm-svn: 271479
2016-06-02 04:19:40 +00:00
Craig Topper f10fbfa738 [AVX512] Remove masked load intrinsics. Clang now emits generic masked load intrinsics instead.
The intrinsics will be autoupgraded to the same generic masked loads.
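
For illustration, the auto-upgrade amounts to something of this shape (argument
order and mask handling assumed; the real upgrade also widens the scalar mask
into a vector of i1 first):

  // replace the old x86 masked-load intrinsic call CI with the generic form;
  // Builder is an IRBuilder positioned at CI
  Value *Ptr = CI->getArgOperand(0);
  Value *PassThru = CI->getArgOperand(1);
  Value *Mask = CI->getArgOperand(2);          // assumed already an <N x i1>
  Value *Res = Builder.CreateMaskedLoad(Ptr, /*Align=*/1, Mask, PassThru);
  CI->replaceAllUsesWith(Res);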

llvm-svn: 271478
2016-06-02 04:19:36 +00:00
Matt Arsenault 598f55387a AMDGPU: Fix incorrectly setting kill flag when copying register tuples
This fixes some verifier errors when trackLivenessAfterRegAlloc is
enabled.

llvm-svn: 271446
2016-06-02 00:04:30 +00:00
Matt Arsenault d3e4c646ea AMDGPU: SIDebuggerInsertNops preserves CFG
This saves an additional run of the DominatorTree and
MachineLoopInfo

llvm-svn: 271444
2016-06-02 00:04:22 +00:00
Rafael Espindola 41410cc812 Avoid a load for local functions.
llvm-svn: 271437
2016-06-01 21:57:11 +00:00
Keno Fischer 5573483c5b [PPC64] Fix SUBFC8 Defs list
Fix PR27943 "Bad machine code: Using an undefined physical register".
SUBFC8 implicitly defines the CR0 register, but this was omitted in
the instruction definition.

Patch by Jameson Nash <jameson@juliacomputing.com>

Reviewers: hfinkel
Differential Revision: http://reviews.llvm.org/D20802

llvm-svn: 271425
2016-06-01 20:31:07 +00:00
Michael Zuckerman 6a894956fc Adding back-end support for two bit-scanning intrinsics
Adding LLVM back-end support for two intrinsics dealing with bit scanning: _bit_scan_forward and _bit_scan_reverse.
Their functionality is as described in Intel intrinsics guide:
https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_bit_scan_forward&expand=371,370
https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_bit_scan_reverse&expand=371,370

Commit on behalf of Omer Paparo Bivas


Differential Revision: http://reviews.llvm.org/D19915

llvm-svn: 271386
2016-06-01 12:02:37 +00:00
Oliver Stannard 92ca83cccd [ARM] Add additional matching for UBFX instructions
This adds an additional matcher to select UBFX(..) from SRL(AND(..)) in
ARMISelDAGToDAG to help with code size.

Patch by David Green.

Differential Revision: http://reviews.llvm.org/D20667

llvm-svn: 271384
2016-06-01 12:01:01 +00:00
Chris Dewhurst 53bde954db [Sparc] Allow passing of empty structs.
Passing an empty struct as a function call argument is now supported.

Unit tests for various scenarios added.

llvm-svn: 271374
2016-06-01 08:48:56 +00:00
Craig Topper 4f2d5a68d3 Revert r271362 "[AVX512] Remove masked load intrinsics. Clang now emits generic masked load intrinsics instead."
Looks like something isn't quite right still. Also forgot to move the test cases to an autoupgrade test.

llvm-svn: 271363
2016-06-01 05:57:55 +00:00
Craig Topper dacd9d2bac [AVX512] Remove masked load intrinsics. Clang now emits generic masked load intrinsics instead.
The intrinsics will be autoupgraded to the same generic masked loads.

llvm-svn: 271362
2016-06-01 05:35:16 +00:00
Kevin B. Smith ed0b620a65 [X86]: Add a pattern that uses GR16_ABCD rather than GR32_ABCD to avoid falsely marking the whole 32-bit register as live.
Differential Revision: http://reviews.llvm.org/D20649

llvm-svn: 271341
2016-05-31 22:00:12 +00:00
Matthias Braun fe725c9241 ARM: Do not attempt to modify register class of physregs.
Physregs have no associated register class, do not attempt to modify it
in Thumb2InstrInfo::storeRegToStackSlot()/loadFromStackSlot().

llvm-svn: 271339
2016-05-31 21:39:12 +00:00
Rafael Espindola 4d29099f7f Delete AArch64II::MO_CONSTPOOL.
A constant pool entry holding the address of a variable is equivalent to
a GOT entry. It produces exactly the same instruction sequence as a
GOT use, and unlike a GOT use it is not uniqued by the linker.

llvm-svn: 271311
2016-05-31 18:31:14 +00:00
Simon Dardis b60833c0ca [mips] Enforce compact branch register restrictions
Enforce compact branch register restrictions, such as the use of the zero
register or both operands being the same register. Emit a clear error in
such cases, as the issue is subtle.

For bovc and bnvc, silently fix up such cases when emitting objects directly,
like LLVM started doing in rL269899.

Reviewers: vkalintiris, dsanders

Differential Revision: http://reviews.llvm.org/D20475

llvm-svn: 271301
2016-05-31 17:34:42 +00:00
Matt Arsenault ec30eb507e AMDGPU: Remove unused address space
Also return a single StringRef instead of building a string.

llvm-svn: 271296
2016-05-31 16:57:45 +00:00
Rafael Espindola 7ad97b2fe4 Add a use of shouldAssumeDSOLocal to ARM.
Now this code path knows about position independent executables.

llvm-svn: 271290
2016-05-31 15:31:55 +00:00
Krzysztof Parzyszek a580273b3f [Hexagon] Disable expanding MUX instructions that define a subregister
The code in HexagonExpandCondsets.cpp does not handle those cases at the
moment.

llvm-svn: 271281
2016-05-31 14:27:10 +00:00
Yaron Keren a34bfa000f Do not modify a std::vector while looping it.
Introduced in r271244, this is probably undefined behaviour and asserts when
compiled with Visual C++ in debug mode.
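
A generic illustration of the hazard and a safe rewrite (not the exact code):

  #include <algorithm>
  #include <vector>

  void dropZeros(std::vector<int> &Vec) {
    // problematic: erasing inside a range-based for invalidates the iterator
    //   for (int V : Vec)
    //     if (V == 0)
    //       Vec.erase(std::find(Vec.begin(), Vec.end(), V));
    // safe, and linear rather than quadratic: erase-remove in one pass
    Vec.erase(std::remove(Vec.begin(), Vec.end(), 0), Vec.end());
  }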

On a further note, the loop is quadratic in the number of successors, since
removeSuccessor is linear; it could probably be reworked to run in linear time.

llvm-svn: 271278
2016-05-31 13:45:05 +00:00
Ranjeet Singh 16c24f4d6e [ARM] Add backend support for load/store intrinsics.
Added support to map intrinsics
__builtin_arm_{ldc,ldcl,ldc2,ldc2l,stc,stcl,stc2,stc2l}
to their ARM instructions.

Differential Revision: http://reviews.llvm.org/D20564

llvm-svn: 271271
2016-05-31 12:39:30 +00:00
Simon Pilgrim e05dc45897 [X86][SSE] Add load-folding patterns for (V)CVTDQ2PD (PR27291)
Added patterns for (V)CVTDQ2PD -> v2f64 loading from a 64-bit source.

llvm-svn: 271269
2016-05-31 12:04:35 +00:00
Simon Dardis 03676dc969 [mips] bnec/beqc register constraint fix
beqc and bnec cannot have $rs == $rt. Inhibit compact branch creation
if that would occur.

Reviewers: vkalintiris, dsanders

Differential Revision: http://reviews.llvm.org/D20624

llvm-svn: 271260
2016-05-31 09:54:55 +00:00
Igor Breger 73ee8ba9b0 [AVX512] Fix intrinsic vcvtps2ph lowering.
Differential Revision: http://reviews.llvm.org/D20788

llvm-svn: 271255
2016-05-31 08:04:21 +00:00
Igor Breger 52bd1d5fcc Fix intrinsic vbroadcast{i32|f32}x2 lowering.
Differential Revision: http://reviews.llvm.org/D20780

llvm-svn: 271254
2016-05-31 07:43:39 +00:00
Craig Topper 50f85c22c5 [AVX512] Remove masked store intrinsics. Clang now emits generic masked store intrinsics instead.
The intrinsics will be autoupgraded to the same generic masked stores.

llvm-svn: 271245
2016-05-31 01:50:02 +00:00
Saleem Abdulrasool d2f705ddf9 X86: permit using SjLj EH on x86 targets as an option
This adds support in the backend for SjLj EH as an exception
model.  This is *NOT* the default model, and requires explicitly opting into it
from the frontend.  GCC supports this model, and for MinGW it can still be
enabled via the `--using-sjlj-exceptions` option.

Addresses PR27749!

llvm-svn: 271244
2016-05-31 01:48:07 +00:00
Craig Topper 8287fd8abd [X86] Remove SSE/AVX unaligned store intrinsics as clang no longer uses them. Auto upgrade to native unaligned store instructions.
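For illustration, the upgrade is roughly of this shape (sketch only; the real
code also handles the per-intrinsic pointer and value types):

  // turn storeu(ptr, value) into a plain store with alignment 1 (i.e. unaligned);
  // CI is the old intrinsic call and Builder an IRBuilder positioned at it
  Value *Ptr = CI->getArgOperand(0), *Val = CI->getArgOperand(1);
  Ptr = Builder.CreateBitCast(Ptr, PointerType::getUnqual(Val->getType()));
  Builder.CreateAlignedStore(Val, Ptr, /*Align=*/1);
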
llvm-svn: 271236
2016-05-30 23:15:56 +00:00
Rafael Espindola 4f1062adb8 Fix a crash when producing COFF.
llvm-svn: 271229
2016-05-30 20:18:53 +00:00
Diana Picus f353a5e06d [BPF] Remove exit-on-error from tests (PR27768, PR27769)
The exit-on-error flag is necessary to avoid some assertions/unreachables. We
can get past them by creating a few dummy nodes.

Fixes PR27768, PR27769.

Differential Revision: http://reviews.llvm.org/D20726

llvm-svn: 271200
2016-05-30 08:28:34 +00:00
Rafael Espindola 9768d73c74 Move RelaxELFRel out to llvm-mc.
llvm-svn: 271160
2016-05-29 01:11:00 +00:00
Simon Pilgrim 9602d678cb [X86][SSE] (Reapplied) Replace (V)PMOVSX and (V)PMOVZX integer extension intrinsics with generic IR (llvm)
This patch removes the (V)PMOVSX and (V)PMOVZX sign/zero extension llvm intrinsics and auto-upgrades them to SEXT/ZEXT calls instead. We already did this for SSE41 PMOVSX some time ago, so much of that implementation can be reused.

Reapplied now that the companion patch (D20684), which removes/auto-upgrades the clang intrinsics, has been committed.

Differential Revision: http://reviews.llvm.org/D20686

llvm-svn: 271131
2016-05-28 18:03:41 +00:00
Rafael Espindola 52bd330500 Fix production of R_X86_64_GOTPCRELX/R_X86_64_REX_GOTPCRELX.
We were producing R_X86_64_GOTPCRELX for invalid instructions and
sometimes producing R_X86_64_GOTPCRELX instead of
R_X86_64_REX_GOTPCRELX.

llvm-svn: 271118
2016-05-28 15:51:38 +00:00
Sanjay Patel 97c2c108fd [x86] avoid printing unnecessary sign bits of hex immediates in asm comments (PR20347)
It would be better to check the valid/expected size of the immediate operand, but this is
generally better than what we print right now.

Differential Revision: http://reviews.llvm.org/D20385

llvm-svn: 271114
2016-05-28 14:58:37 +00:00
Ahmed Bougacha a3dc1ba142 [X86] Try to zero elts when lowering 256-bit shuffle with PSHUFB.
Otherwise we fall back to a blend of PSHUFBs later on.

Differential Revision: http://reviews.llvm.org/D19661

llvm-svn: 271113
2016-05-28 14:38:04 +00:00
Rafael Espindola 2d39bb3c6a Simplify and clang-format a table.
llvm-svn: 271112
2016-05-28 11:13:34 +00:00
Rafael Espindola fe796dca90 Fix default reloc model on ARM.
llvm-svn: 271111
2016-05-28 10:41:15 +00:00
Renato Golin 9be88629d5 Revert "Revert "Map DynamicNoPIC to Static on non-darwin.""
This reverts commit r271096, as reverting it broke even more buildbots!

But that also means I'll break on ARM again... :(

llvm-svn: 271099
2016-05-28 04:47:13 +00:00
Renato Golin 4f22c51b09 Revert "Map DynamicNoPIC to Static on non-darwin."
This reverts commit r271052, as it broke some ARM buildbots.

llvm-svn: 271096
2016-05-28 04:24:26 +00:00
Krzysztof Parzyszek 07d7518540 [Hexagon] Add option to enable subregister liveness tracking
llvm-svn: 271088
2016-05-28 02:02:51 +00:00
Krzysztof Parzyszek 96bb4fe539 [Hexagon] Separate C8 and USR to avoid unwanted subregister composition
Composing subreg_loreg with subreg_overflow leads to strange results with
lane masks for register classes with subreg_loreg. In particular, dead
lane detection generates incorrect code.

llvm-svn: 271087
2016-05-28 01:51:16 +00:00
Matthias Braun bcfd23673b AArch64: Fix indentation
llvm-svn: 271084
2016-05-28 01:06:51 +00:00
Matt Arsenault d8d304d1d6 AMDGPU: Fix trailing whitespace
llvm-svn: 271081
2016-05-28 00:50:51 +00:00
Matt Arsenault 7401516985 AMDGPU: Add fract intrinsic
Remove broken patterns matching it. This was matching the
unsafe math pattern and expanding the fix for the buggy instruction
from the pattern. The problems are also on CI. Remove the workarounds
and only use fract with unsafe math or from the intrinsic.

llvm-svn: 271078
2016-05-28 00:19:52 +00:00
Rafael Espindola eece113105 Start using shouldAssumeDSOLocal on ARM.
Given where this is used it should be a nop.

llvm-svn: 271066
2016-05-27 22:41:51 +00:00
Matthias Braun 27b6692fe2 AArch64Subtarget: Use default member initializers
llvm-svn: 271057
2016-05-27 22:14:09 +00:00
Rafael Espindola f9bda6805b Map DynamicNoPIC to Static on non-darwin.
DynamicNoPIC was only ever used on darwin. This maps it to static on
ELF. It matches what is done on X86.

llvm-svn: 271052
2016-05-27 21:44:18 +00:00
Krzysztof Parzyszek 764fed98e6 [Hexagon] Use standard macros to initialize HexagonExpandCondsets pass
llvm-svn: 271045
2016-05-27 21:15:34 +00:00
Krzysztof Parzyszek d0f8e1cf0d [Hexagon] Do not create passes in the constructor of HexagonPassConfig
When running mir tests, a pass created in that constructor would not be
freed, leading to memory leaks.

llvm-svn: 271043
2016-05-27 20:48:39 +00:00
Michael Kuperstein a75c77b127 [X86] Detect SAD patterns and emit psadbw instructions.
This recommits r267649 with a fix for PR27539.

Differential Revision: http://reviews.llvm.org/D20598

llvm-svn: 271033
2016-05-27 18:53:22 +00:00
Ahmed Bougacha 346b98011c [X86] Clarify PSHUFB+blend lowering function name. NFC.
Also guard against v32i8 users.

llvm-svn: 271024
2016-05-27 17:58:17 +00:00
Ahmed Bougacha 655c2deaf6 [ARM] Remove tBLXr Pat made redundant by r269101. NFCI.
llvm-svn: 271023
2016-05-27 17:58:03 +00:00
Benjamin Kramer f6f815bf39 Use StringRef::startswith instead of find(...) == 0.
It's faster and easier to read.
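
The shape of the change, with made-up identifiers:

  // before: position check after a potentially full scan
  if (Name.find("llvm.") == 0)
    handleIntrinsic(Name);
  // after: bounded prefix comparison
  if (Name.startswith("llvm."))
    handleIntrinsic(Name);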

llvm-svn: 271018
2016-05-27 16:54:57 +00:00
Benjamin Kramer 1cda54f369 [sparc] Simplify a slow and verbose way of checking if a string starts with "ld".
PR27904.

llvm-svn: 271016
2016-05-27 16:45:37 +00:00
Benjamin Kramer 82de7d323d Apply clang-tidy's misc-move-constructor-init throughout LLVM.
No functionality change intended, maybe a tiny performance improvement.
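
The pattern the check rewrites, as a self-contained example (types made up):

  #include <string>
  #include <utility>

  struct Symbol {
    std::string Name;
    // before: the by-value parameter was copied again into the member
    //   Symbol(std::string N) : Name(N) {}
    // after: move the parameter into the member instead
    Symbol(std::string N) : Name(std::move(N)) {}
  };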

llvm-svn: 270997
2016-05-27 14:27:24 +00:00
Simon Dardis 4ccda502d5 [mips] Weaken asm predicate for memory offsets
The isMemWithSimmOffset predicate rejects relocations, which is incorrect
behaviour. Linkers and other tools should handle, warn, or error when the
field overflows.

Reviewers: dsanders, vkalintiris

Differential Revision: http://reviews.llvm.org/D20727

llvm-svn: 270995
2016-05-27 13:56:36 +00:00
Artem Tamazov 7da9b82e02 [AMDGPU][llvm-mc] Square-braced-syntax for registers - make ":expr2" optional.
Register numbers may be specified as assembly-time expressions.
This feature can be useful in macros and the like. However, expressions
are supported within square braces only.

Square braces were initially intended to support specifying multiple
(pairs/quads...) registers. Syntax like v[8:8], which specifies a single
register, is also supported. That allows expressions but looks a bit unnatural.

This change supports syntax REG[EXPR].
Tests added.

Differential Revision: http://reviews.llvm.org/D20588

llvm-svn: 270990
2016-05-27 12:50:13 +00:00
Benjamin Kramer 4fed928f53 Avoid some copies by using const references.
Applied clang-tidy's performance-unnecessary-copy-initialization with some
manual fixes. No functional changes intended.
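
A representative before/after (made-up helper returning a container):

  #include <string>
  #include <vector>

  static std::vector<std::string> Names = {"add", "sub"};
  const std::vector<std::string> &getNames() { return Names; }

  void use() {
    // before: copies the whole vector just to read it
    //   std::vector<std::string> Local = getNames();
    // after: bind a const reference; no copy is made
    const std::vector<std::string> &Local = getNames();
    (void)Local;
  }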

llvm-svn: 270988
2016-05-27 12:30:51 +00:00
Benjamin Kramer 3e9a5d3468 Apply clang-tidy's misc-static-assert where it makes sense.
Also fold conditions into assert(0) where it makes sense. No functional
change intended.
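
Both rewrites, in a small self-contained example:

  #include <cassert>

  enum { NumSlots = 4 };

  void check(void *P) {
    // a compile-time fact belongs in static_assert, not a runtime assert
    static_assert(NumSlots == 4, "unexpected slot count");
    // and a guarded assert(0) folds into the assert condition:
    //   if (!P) assert(0 && "null pointer");
    assert(P && "null pointer");
  }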

llvm-svn: 270982
2016-05-27 11:36:04 +00:00
Benjamin Kramer 4ec6e9d50c [sparc] Remove some unused (and undefined) declarations.
No functionality change.

llvm-svn: 270981
2016-05-27 10:19:03 +00:00
Benjamin Kramer 922efd7a67 [hexagon] Move BlockRanges and RDF stuff into the llvm namespace.
No functional change intended.

llvm-svn: 270980
2016-05-27 10:06:40 +00:00
Benjamin Kramer 797fb96a9c [sparc] Move LEON passes into llvm namespace.
Also give them library visibility while there.

llvm-svn: 270979
2016-05-27 10:06:27 +00:00
Simon Pilgrim 4642a57fbf Revert: r270973 - [X86][SSE] Replace (V)PMOVSX and (V)PMOVZX integer extension intrinsics with generic IR (llvm)
llvm-svn: 270976
2016-05-27 09:02:25 +00:00
Simon Pilgrim c013e5737b [X86][SSE] Replace (V)PMOVSX and (V)PMOVZX integer extension intrinsics with generic IR (llvm)
This patch removes the (V)PMOVSX and (V)PMOVZX sign/zero extension llvm intrinsics and auto-upgrades them to SEXT/ZEXT calls instead. We already did this for SSE41 PMOVSX some time ago, so much of that implementation can be reused.

A companion patch (D20684) removes/auto-upgrade the clang intrinsics.

Differential Revision: http://reviews.llvm.org/D20686

llvm-svn: 270973
2016-05-27 08:49:15 +00:00
Krzysztof Parzyszek da0b9a959e [Hexagon] Enable the post-RA scheduler
The aggressive anti-dependency breaker can rename the restored callee-
saved registers. To prevent this, mark these registers as live on all
paths to the return/tail-call instructions, and add implicit use operands
for them to these instructions.

llvm-svn: 270898
2016-05-26 19:44:28 +00:00
Chad Rosier 14aa2ad1f4 [AArch64] Generate rev16/rev32 from bswap + srl when upper bits are known zero.
Canonicalize (srl (bswap i32 x), 16) to (rotr (bswap i32 x), 16), if the high
16-bits of x are zero. Similarly, canonicalize (srl (bswap i64 x), 32) to
(rotr (bswap i64 x), 32), if the high 32-bits of x are zero.

test_rev_w_srl16:            test_rev_w_srl16:
  and w8, w0, #0xffff          and     w8, w0, #0xffff
  rev w8, w8           --->    rev16   w0, w8
  lsr     w0, w8, #16

test_rev_x_srl32:            test_rev_x_srl32:
  rev x8, x8           --->    rev32   x0, x8
  lsr x0, x8, #32

llvm-svn: 270896
2016-05-26 19:41:33 +00:00
Changpeng Fang 71369b3a39 AMDGPU/SI: Enable load-store-opt by default.
Summary: Enable load-store-opt by default, and update LIT tests.

Reviewers: arsenm

Differential Revision: http://reviews.llvm.org/D20694

llvm-svn: 270894
2016-05-26 19:35:29 +00:00
Artem Belevich 11f69ba0cf Init member structs in constructor.
Fixes a build error on Windows, where MSVC does not
support list initialization inside a member initializer list.
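
The shape of the issue, as a standalone example (names made up):

  struct Counters { unsigned Hits; unsigned Misses; };

  struct Stats {
    Counters C;
    // rejected by the MSVC version in use when written in the initializer list:
    //   Stats() : C{0, 0} {}
    // so initialize in the constructor body instead:
    Stats() { C.Hits = 0; C.Misses = 0; }
  };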

llvm-svn: 270877
2016-05-26 17:29:20 +00:00
Artem Belevich 49e9a81236 [NVPTX] Added NVVMIntrRange pass
NVVMIntrRange adds !range metadata to calls of NVVM intrinsics
that return values within a known, limited range.
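
Attaching such metadata looks roughly like this (bounds and variable names
assumed; Call is the intrinsic CallInst being annotated):

  // give Call a !range of [0, 1024)
  LLVMContext &Ctx = Call->getContext();
  Type *Int32Ty = Type::getInt32Ty(Ctx);
  Metadata *Bounds[] = {
      ConstantAsMetadata::get(ConstantInt::get(Int32Ty, 0)),
      ConstantAsMetadata::get(ConstantInt::get(Int32Ty, 1024))};
  Call->setMetadata(LLVMContext::MD_range, MDNode::get(Ctx, Bounds));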

This allows LLVM to generate optimal code for indexing arrays
based on tid/ctaid, which is a frequently used pattern in CUDA code.

Differential Revision: http://reviews.llvm.org/D20644

llvm-svn: 270872
2016-05-26 17:02:56 +00:00
Artem Tamazov 6edc135d0f [AMDGPU][llvm-mc] s_getreg/setreg* - hwreg - factor out strings/literals etc.
Hwreg(...) syntax implementation unified with sendmsg(...).
Common strings moved to Utils.
MathExtras.h functionality utilized.
Added missing build dependency in Disassembler.

Differential Revision: http://reviews.llvm.org/D20381

llvm-svn: 270871
2016-05-26 17:00:33 +00:00
Artem Tamazov b49c3361e5 Fix build warning introduced in r270552 "[AMDGPU][llvm-mc] Disassembler: support for TTMP/TBA/TMA registers."
llvm-svn: 270859
2016-05-26 15:52:16 +00:00
Simon Pilgrim cf340bd9c1 [X86][SSE] When lowering a 256-bit shuffle as PMOVZX, reduce the input vector to the lower 128-bit subvector.
More often than not this is what it started out as; the extraction is zero-cost on AVX, and the PMOVZX/PMOVSX folding logic is based around 128-bit loads.

llvm-svn: 270858
2016-05-26 15:40:36 +00:00
Krzysztof Parzyszek de37cfb596 [Hexagon] Select the aggressive anti-dependency breaker
llvm-svn: 270857
2016-05-26 15:38:50 +00:00
Diana Picus 81bc3170e8 [AMDGPU] Remove exit-on-error flag from test (PR27762)
Similar to r269948, but for argument lowering.

Fixes PR27762

Differential Revision: http://reviews.llvm.org/D20430

llvm-svn: 270856
2016-05-26 15:24:55 +00:00
Diana Picus 20a8d8e97e [BPF] Remove exit-on-error flag in test (PR27767)
The exit-on-error flag is needed to avoid an assert where
llvm::SelectionDAGISel::LowerArguments doesn't create enough arguments. Fill up
with zeroes to reach the right number of args.

Fixes PR27767.

Differential Revision: http://reviews.llvm.org/D20571

llvm-svn: 270855
2016-05-26 15:23:50 +00:00
Chad Rosier 816a67da49 [AArch64] Generate a BFI/BFXIL from 'or (and X, MaskImm), OrImm'.
If and only if the value being inserted sets only known zero bits.

This combine transforms things like

  and w8, w0, #0xfffffff0
  movz w9, #5
  orr w0, w8, w9

into

  movz w8, #5
  bfxil w0, w8, #0, #4

The combine is tuned to make sure we always reduce the number of instructions.
We avoid churning code for what are expected to be performance-neutral changes
(e.g., converting AND+OR to OR+BFI).

Differential Revision: http://reviews.llvm.org/D20387

llvm-svn: 270846
2016-05-26 13:27:56 +00:00
Rafael Espindola a224de06bc Use shouldAssumeDSOLocal on AArch64.
This reduces code duplication and now AArch64 also handles PIE.

llvm-svn: 270844
2016-05-26 12:42:55 +00:00
Igor Breger 8437bb70fd [AVX512] Fix intrinsic cmp{sd|ss} lowering.
Differential Revision: http://reviews.llvm.org/D20615

llvm-svn: 270843
2016-05-26 12:42:25 +00:00
Chris Dewhurst 9013d069b0 [Sparc] Extend the assembler printing support for Sparc back-end.
Allows display of floating-point registers and display of assembler meta-data output.

llvm-svn: 270829
2016-05-26 07:28:31 +00:00
Justin Lebar fea8a8d70a [NVPTX] Don't (incorrectly) say that the NVVMReflect pass preserves all analyses.
Reviewers: tra

Subscribers: jholewinski, llvm-commits

Differential Revision: http://reviews.llvm.org/D20585

llvm-svn: 270790
2016-05-25 23:12:38 +00:00
Rafael Espindola 6b93bf5783 Don't repeat name in comment and git-clang-format.
llvm-svn: 270785
2016-05-25 22:44:06 +00:00
Rafael Espindola 6b4baa5f58 Sort includes.
llvm-svn: 270769
2016-05-25 21:37:29 +00:00
Simon Pilgrim 4810683ddf Simplify std::all_of/any_of predicates by using llvm::all_of/any_of. NFCI.
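The range-based helpers drop the begin/end boilerplate, e.g. (toy predicate):

  #include "llvm/ADT/STLExtras.h"
  #include <vector>

  bool allPositive(const std::vector<int> &Vals) {
    // before: std::all_of(Vals.begin(), Vals.end(), [](int V) { return V > 0; })
    return llvm::all_of(Vals, [](int V) { return V > 0; });
  }
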
llvm-svn: 270753
2016-05-25 20:41:11 +00:00
Rafael Espindola 84f0562064 Fix shouldAssumeDSOLocal for private linkage.
llvm-svn: 270746
2016-05-25 19:55:16 +00:00
Matt Arsenault e57206d81b AMDGPU: Fix v2i64/v2f64 bitcasts
These operations tend to get promoted away to v4i32 so
this doesn't happen often.

llvm-svn: 270740
2016-05-25 18:07:36 +00:00
Matt Arsenault 1cc4991412 AMDGPU: Fix inconsistent lowering of select of vectors
f32 vectors would use a sequence of BFI instructions instead
of unrolled cmp + select. This was better in the case of a VALU
select with SGPR inputs, but we don't have a way of dealing with that
in the DAG.

llvm-svn: 270731
2016-05-25 17:34:58 +00:00
Sanjay Patel aedc347b29 [x86] avoid code explosion from LoopVectorizer for gather loop (PR27826)
By making pointer extraction from a vector more expensive in the cost model,
we avoid the vectorization of a loop that is very likely to be memory-bound:
https://llvm.org/bugs/show_bug.cgi?id=27826
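
The kind of loop in question (illustrative only): the indexed loads dominate,
so the loop is memory-bound and vectorizing the address math does not pay off
without hardware gather support.

  float gatherSum(const float *Data, const int *Idx, int N) {
    float Sum = 0.0f;
    for (int I = 0; I != N; ++I)
      Sum += Data[Idx[I]];
    return Sum;
  }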

There are still bugs related to this, so we may need a more general solution
to avoid vectorizing obviously memory-bound loops when we don't have HW gather
support.

Differential Revision: http://reviews.llvm.org/D20601

llvm-svn: 270729
2016-05-25 17:27:54 +00:00
Sanjay Patel 3955360b24 [x86, AVX] allow explicit calls to VZERO* to modify state in VZeroUpperInserter pass (PR27823)
As noted in the review, there are still problems, so this doesn't fix the bug completely.

Differential Revision: http://reviews.llvm.org/D20529

llvm-svn: 270718
2016-05-25 16:39:47 +00:00
Simon Pilgrim 4298d06d0f [X86][SSE] Replace (V)CVTDQ2PD(Y) and (V)CVTPS2PD(Y) lossless conversion intrinsics with generic IR
Follow-up to the D20528 clang patch, this removes the (V)CVTDQ2PD(Y) and (V)CVTPS2PD(Y) llvm intrinsics and auto-upgrades them to sitofp/fpext instead.

Differential Revision: http://reviews.llvm.org/D20568

llvm-svn: 270678
2016-05-25 08:59:18 +00:00
Craig Topper 12e322a8cf [X86] Remove the llvm.x86.sse2.storel.dq intrinsic. It hasn't been used in a long time.
llvm-svn: 270677
2016-05-25 06:56:32 +00:00
Nirav Dave e003bb7ead Soften assertion in AMDGPU emitPrologue.
[AMDGPU] emitPrologue looks for an unused unallocated SGPR that is not
the scratch descriptor. Continue the search if the unused register that was
found fails other requirements.

Reviewers: arsenm, tstellarAMD, nhaehnle

Subscribers: arsenm, llvm-commits, kzhuravl

Differential Revision: http://reviews.llvm.org/D20526

llvm-svn: 270646
2016-05-25 01:45:42 +00:00