Commit Graph

Quentin Colombet d1cd30b218 [AArch64][RegisterBankInfo] G_OR are fine on either GPR or FPR.
Teach AArch64RegisterBankInfo that G_OR can be mapped on either GPR or
FPR for 64-bit or 32-bit values.

Add test cases demonstrating how this information is used to coalesce a
computation on a single register bank.

llvm-svn: 272170
2016-06-08 16:53:32 +00:00
Oliver Stannard b3378e2f3c [ARM] MSR instructions implicitly set CPSR
The MSR instructions can write to the CPSR, but we did not model this
fact, so we could emit them in the middle of IT blocks, changing the
condition flags for later instructions in the block.

The tests use two calls to llvm.write_register.i32 because it is valid
to use these instructions at the end of an IT block, which if-conversion
does in some cases. With two calls, the first clobbers the flags, so a
branch has to be used to make the second one conditional.

Differential Revision: http://reviews.llvm.org/D21139

llvm-svn: 272154
2016-06-08 15:26:34 +00:00
Vasileios Kalintiris a9e5154dc5 [mips] Add a proper file header in MipsFastISel.cpp
llvm-svn: 272138
2016-06-08 13:13:15 +00:00
Krzysztof Parzyszek b16882ddf1 [Hexagon] Modify HexagonExpandCondsets to handle subregisters
Also, switch to using functions from LiveIntervalAnalysis to update
live intervals, instead of performing the updates manually.

Re-committing r272045.

llvm-svn: 272135
2016-06-08 12:31:16 +00:00
Diana Picus 0781d10ac4 [ARM] Remove redundant check. NFC
isSwift is tested earlier and known to be false when we reach this code.

llvm-svn: 272127
2016-06-08 10:29:02 +00:00
Benjamin Kramer 46e38f3678 Avoid copies of std::strings and APInt/APFloats where we only read from it
As suggested by clang-tidy's performance-unnecessary-copy-initialization.
This can easily hit lifetime issues, so I audited every change and ran the
tests under asan, which came back clean.

llvm-svn: 272126
2016-06-08 10:01:20 +00:00
Igor Breger 982e4003a6 [AVX512] Fix cvtusi2sd instruction Opcode, it should be 0x7B instead of 0x2A.
llvm-svn: 272122
2016-06-08 07:48:23 +00:00
Quentin Colombet a4ac7cdac2 [AArch64][RegisterBankInfo] Use the generic implementation of copyCost.
Long term we may want to give a high cost to FPR to/from GPR copies.

llvm-svn: 272086
2016-06-08 01:24:00 +00:00
Quentin Colombet cfbdee2312 [RegisterBankInfo] Add a size argument for the cost of copy.
The cost of a copy may be different based on how many bits we have to
copy around. E.g., an 8-bit copy may be different than a 32-bit copy.

llvm-svn: 272084
2016-06-08 01:11:03 +00:00
Nicolai Haehnle c00e03b8f5 AMDGPU: Add amdgpu-ps-wqm-outputs function attributes
Summary:
The presence of this attribute indicates that VGPR outputs should be computed
in whole quad mode. This will be used by Mesa for prolog pixel shaders, so
that derivatives can be taken of shader inputs computed by the prolog, fixing
a bug.

The generated code could certainly be improved: if a prolog pixel shader is
used (which isn't common in modern OpenGL - they're used for gl_Color, polygon
stipples, and forcing per-sample interpolation), Mesa will use this attribute
unconditionally, because it has to be conservative. So WQM may be used in the
prolog when it isn't really needed, and furthermore a silly back-and-forth
switch is likely to happen at the boundary between prolog and main shader
parts.

Fixing this is a bit involved: we'd first have to add a mechanism by which
LLVM writes the WQM-related input requirements to the main shader part binary,
and then Mesa specializes the prolog part accordingly. At that point, we may
as well just compile a monolithic shader...

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=95130

Reviewers: arsenm, tstellarAMD, mareko

Subscribers: arsenm, llvm-commits, kzhuravl

Differential Revision: http://reviews.llvm.org/D20839

llvm-svn: 272063
2016-06-07 21:37:17 +00:00
Eric Christopher 538d09d0dd Revert "Differential Revision: http://reviews.llvm.org/D20557"
Author: Wei Ding <wei.ding2@amd.com>
Date:   Tue Jun 7 19:04:44 2016 +0000

    Differential Revision: http://reviews.llvm.org/D20557

    git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@272044
    91177308-0d34-0410-b5e6-96231b3b80d8

as it was breaking the bots.

This reverts commit r272044.

llvm-svn: 272056
2016-06-07 20:27:12 +00:00
Etienne Bergeron 22bfa83208 [stack-protection] Add support for MSVC buffer security check
Summary:
This patch adds support for the MSVC buffer security check implementation.

The buffer security check is turned on with the '/GS' compiler switch.
  * https://msdn.microsoft.com/en-us/library/8dbf701c.aspx
  * To be added to clang here: http://reviews.llvm.org/D20347

Some overview of buffer security check feature and implementation:
  * https://msdn.microsoft.com/en-us/library/aa290051(VS.71).aspx
  * http://www.ksyash.com/2011/01/buffer-overflow-protection-3/
  * http://blog.osom.info/2012/02/understanding-vs-c-compilers-buffer.html


For the following example:
```
int example(int offset, int index) {
  char buffer[10];
  memset(buffer, 0xCC, index);
  return buffer[index];
}
```

The MSVC compiler adds these instructions to perform the stack integrity check:
```
        push        ebp  
        mov         ebp,esp  
        sub         esp,50h  
  [1]   mov         eax,dword ptr [__security_cookie (01068024h)]  
  [2]   xor         eax,ebp  
  [3]   mov         dword ptr [ebp-4],eax  
        push        ebx  
        push        esi  
        push        edi  
        mov         eax,dword ptr [index]  
        push        eax  
        push        0CCh  
        lea         ecx,[buffer]  
        push        ecx  
        call        _memset (010610B9h)  
        add         esp,0Ch  
        mov         eax,dword ptr [index]  
        movsx       eax,byte ptr buffer[eax]  
        pop         edi  
        pop         esi  
        pop         ebx  
  [4]   mov         ecx,dword ptr [ebp-4]  
  [5]   xor         ecx,ebp  
  [6]   call        @__security_check_cookie@4 (01061276h)  
        mov         esp,ebp  
        pop         ebp  
        ret  
```

In the instrumentation above:
  * [1] loads the global security canary,
  * [3] stores the locally computed ([2]) canary to the guard slot,
  * [4] loads the guard slot and ([5]) re-computes the global canary,
  * [6] validates the resulting canary with '__security_check_cookie' and performs the error handling.

Overview of the current stack-protection implementation:
  * lib/CodeGen/StackProtector.cpp
    * There is a default stack-protection implementation applied on intermediate representation.
    * The target can overload 'getIRStackGuard' method if it has a standard location for the stack protector cookie.
    * An intrinsic 'Intrinsic::stackprotector' is added to the prologue. It will be expanded by the instruction selection pass (DAG or Fast).
    * Basic blocks are added to every instrumented function to receive the code for stack guard validation and error handling.
    * Guard manipulation and comparison are added directly to the intermediate representation.

  * lib/CodeGen/SelectionDAG/SelectionDAGISel.cpp
  * lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp
    * There is an implementation that adds instrumentation during instruction selection (for better handling of sibling calls).
      * see long comment above 'class StackProtectorDescriptor' declaration.
    * The target needs to override 'getSDagStackGuard' to activate SDAG stack protection generation. (note: getIRStackGuard MUST be nullptr).
      * 'getSDagStackGuard' returns the appropriate stack guard (security cookie)
    * The code is generated by 'SelectionDAGBuilder.cpp' and 'SelectionDAGISel.cpp'.

  * include/llvm/Target/TargetLowering.h
    * Contains a function to retrieve the default Guard 'Value'; it should be overridden by each target to select which implementation is used and to provide the Guard 'Value'.

  * lib/Target/X86/X86ISelLowering.cpp
    * Contains the x86 specialisation; Guard 'Value' used by the SelectionDAG algorithm.

Function-based Instrumentation:
  * MSVC doesn't inline the stack guard comparison in every function. Instead, a call to '__security_check_cookie' is added to the epilogue before every return instruction.
  * To support function-based instrumentation, this patch is
    * adding a function to get the function-based check (llvm 'Value', see include/llvm/Target/TargetLowering.h),
      * if provided, the stack protection instrumentation won't be inlined and a call to that function will be added to the prologue,
    * modifying SelectionDAGISel.cpp to avoid producing the basic blocks used for inline instrumentation,
    * generating the function-based instrumentation during the ISel pass (SelectionDAGBuilder.cpp),
    * falling back, when FastISel is used (not SelectionDAG), to the same function-based check implemented over the intermediate representation (StackProtector.cpp).

Modifications
  * adding support for MSVC (lib/Target/X86/X86ISelLowering.cpp)
  * adding support for function-based instrumentation (lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp, .h)

Results

  * IR generated instrumentation:
```
clang-cl /GS test.cc /Od /c -mllvm -print-isel-input
```

```
*** Final LLVM Code input to ISel ***

; Function Attrs: nounwind sspstrong
define i32 @"\01?example@@YAHHH@Z"(i32 %offset, i32 %index) #0 {
entry:
  %StackGuardSlot = alloca i8*                                                  <<<-- Allocated guard slot
  %0 = call i8* @llvm.stackguard()                                              <<<-- Loading Stack Guard value
  call void @llvm.stackprotector(i8* %0, i8** %StackGuardSlot)                  <<<-- Prologue intrinsic call (store to Guard slot)
  %index.addr = alloca i32, align 4
  %offset.addr = alloca i32, align 4
  %buffer = alloca [10 x i8], align 1
  store i32 %index, i32* %index.addr, align 4
  store i32 %offset, i32* %offset.addr, align 4
  %arraydecay = getelementptr inbounds [10 x i8], [10 x i8]* %buffer, i32 0, i32 0
  %1 = load i32, i32* %index.addr, align 4
  call void @llvm.memset.p0i8.i32(i8* %arraydecay, i8 -52, i32 %1, i32 1, i1 false)
  %2 = load i32, i32* %index.addr, align 4
  %arrayidx = getelementptr inbounds [10 x i8], [10 x i8]* %buffer, i32 0, i32 %2
  %3 = load i8, i8* %arrayidx, align 1
  %conv = sext i8 %3 to i32
  %4 = load volatile i8*, i8** %StackGuardSlot                                  <<<-- Loading Guard slot
  call void @__security_check_cookie(i8* %4)                                    <<<-- Epilogue function-based check
  ret i32 %conv
}
```

  * SelectionDAG generated instrumentation:

```
clang-cl /GS test.cc /O1 /c /FA
```

```
"?example@@YAHHH@Z":                    # @"\01?example@@YAHHH@Z"
# BB#0:                                 # %entry
        pushl   %esi
        subl    $16, %esp
        movl    ___security_cookie, %eax                                        <<<-- Loading Stack Guard value
        movl    28(%esp), %esi
        movl    %eax, 12(%esp)                                                  <<<-- Store to Guard slot
        leal    2(%esp), %eax
        pushl   %esi
        pushl   $204
        pushl   %eax
        calll   _memset
        addl    $12, %esp
        movsbl  2(%esp,%esi), %esi
        movl    12(%esp), %ecx                                                  <<<-- Loading Guard slot
        calll   @__security_check_cookie@4                                      <<<-- Epilogue function-based check
        movl    %esi, %eax
        addl    $16, %esp
        popl    %esi
        retl
```

Reviewers: kcc, pcc, eugenis, rnk

Subscribers: majnemer, llvm-commits, hans, thakis, rnk

Differential Revision: http://reviews.llvm.org/D20346

llvm-svn: 272053
2016-06-07 20:15:35 +00:00
Krzysztof Parzyszek 8b61759c05 Revert r272045 since GCC doesn't know how to compile it.
llvm-svn: 272048
2016-06-07 19:25:28 +00:00
Krzysztof Parzyszek 8424280eef [Hexagon] Modify HexagonExpandCondsets to handle subregisters
Also, switch to using functions from LiveIntervalAnalysis to update
live intervals, instead of performing the updates manually.

llvm-svn: 272045
2016-06-07 19:06:23 +00:00
Wei Ding a70216f1b3 Differential Revision: http://reviews.llvm.org/D20557
llvm-svn: 272044
2016-06-07 19:04:44 +00:00
Geoff Berry 486f49cc63 Reapply [AArch64] Fix isLegalAddImmediate() to return true for valid negative values.
Originally reviewed here: http://reviews.llvm.org/D17463

llvm-svn: 272023
2016-06-07 16:48:43 +00:00
Oliver Stannard 8de5f24d10 [ARM] Accept conditional versions of BXNS and BLXNS
These instructions end in "S" but are not flag-setting, so they need to be
included in the list of special cases in the assembly parser.

Differential Revision: http://reviews.llvm.org/D21077

llvm-svn: 272015
2016-06-07 14:58:48 +00:00
Simon Pilgrim 35c06a0282 [X86][SSE] Add general lowering of nontemporal vector loads (fixed bad merge)
Currently the only way to use the (V)MOVNTDQA nontemporal vector load instructions is through the int_x86_sse41_movntdqa style builtins.

This patch adds support for lowering nontemporal loads from general IR, allowing us to remove the movntdqa builtins in a future patch.

We currently still fold nontemporal loads into suitable instructions; we should probably look at removing this (and nontemporal stores as well), or at least make the target's folding implementation aware that it's dealing with a nontemporal memory transaction.

There is also an issue that VMOVNTDQA only acts on 128-bit vectors on pre-AVX2 hardware - so currently a normal ymm load is still used on AVX1 targets.
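
As a rough illustration (my own example, not from the patch, and assuming clang's __builtin_nontemporal_load builtin): a nontemporal vector load expressed in generic IR rather than through the movntdqa builtin.

```c
#include <emmintrin.h>

/* Hypothetical example: a nontemporal 128-bit vector load expressed as a
   generic IR load with !nontemporal metadata (via clang's
   __builtin_nontemporal_load) instead of the int_x86_sse41_movntdqa builtin. */
__m128i stream_load(__m128i *p) {
  return __builtin_nontemporal_load(p);
}
```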

Differential Review: http://reviews.llvm.org/D20965

llvm-svn: 272011
2016-06-07 13:47:23 +00:00
Simon Pilgrim 9a89623b57 [X86][SSE] Add general lowering of nontemporal vector loads
Currently the only way to use the (V)MOVNTDQA nontemporal vector load instructions is through the int_x86_sse41_movntdqa style builtins.

This patch adds support for lowering nontemporal loads from general IR, allowing us to remove the movntdqa builtins in a future patch.

We currently still fold nontemporal loads into suitable instructions; we should probably look at removing this (and nontemporal stores as well), or at least make the target's folding implementation aware that it's dealing with a nontemporal memory transaction.

There is also an issue that VMOVNTDQA only acts on 128-bit vectors on pre-AVX2 hardware - so currently a normal ymm load is still used on AVX1 targets.

Differential Review: http://reviews.llvm.org/D20965

llvm-svn: 272010
2016-06-07 13:34:24 +00:00
James Molloy b101383fb5 [Thumb-1] Add optimized constant materialization for integers [256..512)
We can materialize these integers using a MOV; ADDi8 pair.
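
For illustration only (the constant and its expansion below are my own assumption, not from the commit):

```c
/* Hypothetical example: a Thumb-1 constant in [256..512), which per the
   commit can be materialized as a MOV; ADDi8 pair, e.g. roughly
   "movs r0, #255 / adds r0, #45" for 300 (assumed expansion). */
int const_300(void) { return 300; }
```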

llvm-svn: 272007
2016-06-07 13:10:14 +00:00
Igor Breger 61e628591f [AVX512] Fix load opcode for fast isel.
Differential Revision: http://reviews.llvm.org/D21067

llvm-svn: 272006
2016-06-07 13:08:45 +00:00
Ulrich Weigand 6b0634b304 [PowerPC] Support multiple return values with fast isel
Using an LLVM IR aggregate return value type containing three
or more integer values causes an abort in the fast isel pass.

This patch adds two more registers to RetCC_PPC64_ELF_FIS to
allow returning up to four integers with fast isel, just the
same as is currently supported with regular isel (RetCC_PPC).

This is needed for Swift and (possibly) other non-clang frontends.

Fixes PR26190.

llvm-svn: 272005
2016-06-07 12:48:22 +00:00
Simon Pilgrim ca1da1bf07 [X86][SSE] Improved blend+zero target shuffle combining to use combined shuffle mask directly
We currently only combine to blend+zero if the target value type has 8 elements or less, but this was missing a lot of cases where the combined mask had been widened.

This change makes it so we use the combined mask to determine the blend value type, allowing us to catch more widened cases.

llvm-svn: 272003
2016-06-07 12:20:14 +00:00
James Molloy 53298a1808 [ARM] Shrink post-indexed LDR and STR to LDM/STM
A Thumb-2 post-indexed LDR instruction such as:

  ldr.w r0, [r1], #4

Can be rewritten as:

  ldm.n r1!, {r0}

LDMs can be more expensive than LDRs on some cores, so this has been enabled only in minsize mode.

llvm-svn: 272002
2016-06-07 12:13:34 +00:00
James Molloy 75afc95112 [ARM] Transform LDMs into writeback form to save code size
If we have an LDM that uses only low registers and doesn't write to its base register:

  ldm.w r0, {r1, r2, r3}

And that base register is dead after the LDM, then we can convert it to writeback form and use a narrow encoding:

  ldm.n r0!, {r1, r2, r3}

Obviously, this introduces a new register write and so can cause WAW hazards, so I've enabled it only in minsize mode. This is a code size trick that ARM Compiler 5 ("armcc") does that we don't.

llvm-svn: 272000
2016-06-07 11:47:24 +00:00
Peter Smith 353a2286e2 [ARM] Incorrect relocation type for Thumb2 B<cond>.w
The Thumb2 conditional branch B<cond>.W has a different encoding (T3) 
to the unconditional branch B.W (T4) as it needs to record <cond>. 
As the encoding is different the B<cond>.W is given a different 
relocation type. 

ELF for the ARM Architecture 4.6.1.6 (Table-13) states that 
R_ARM_THM_JUMP19 should be used for B<cond>.W. At present the 
MC layer is using the R_ARM_THM_JUMP24 from B.W.

This change makes B<cond>.W use R_ARM_THM_JUMP19 and alters the 
existing test that checks for R_ARM_THM_JUMP24 to expect 
R_ARM_THM_JUMP19.

llvm-svn: 271997
2016-06-07 10:34:33 +00:00
Craig Topper 2f90c1fedf [AVX512] Allow avx2 and sse41 nontemporal load intrinsics to select EVEX encoded instructions when VLX is enabled.
llvm-svn: 271988
2016-06-07 07:27:57 +00:00
Craig Topper e1cac15feb [AVX512] Remove unnecessary mayLoad, mayStore, hasSideEffects flags from instructions that have patterns that imply them. Add the same set of flags to instructions that don't have patterns to imply them.
llvm-svn: 271987
2016-06-07 07:27:54 +00:00
Craig Topper 0fcf925699 [AVX512] Add NoVLX to a couple patterns that have VLX equivalents. Ordering of the patterns in the .td file protects this, but it's better to be explicit.
llvm-svn: 271986
2016-06-07 07:27:51 +00:00
Saleem Abdulrasool 532dcbc2c5 ARM: correct TLS access on WoA
TLS access requires an offset from the TLS index.  The index itself is the
section-relative distance of the symbol.  For ARM, the relevant relocation
(IMAGE_REL_ARM_SECREL) is applied as a constant.  This means that the value may
not be an immediate and must be lowered into a constant pool.  This offset will
not be base relocated.  We were previously emitting the actual address of the
symbol which would be base relocated and would therefore be the vaue offset by
the ImageBase + TLS Offset.

llvm-svn: 271974
2016-06-07 03:15:07 +00:00
Saleem Abdulrasool ce4eee4951 ARM: clang-format a couple of switches, add comments
clang-format a couple of switches in preparation for a future change.  Add some
enumeration comments.

llvm-svn: 271973
2016-06-07 03:15:01 +00:00
Saleem Abdulrasool 0cd0cb992a ARM: normalise space in the patterns
Just adjust the whitespace for the selection patterns.  NFC.

llvm-svn: 271972
2016-06-07 03:14:57 +00:00
Matt Arsenault 02458c2d27 AMDGPU: Add function for getting instruction size
llvm-svn: 271936
2016-06-06 20:10:33 +00:00
Matt Arsenault 3b2e2a59e8 AMDGPU: Fix constantexpr addrspacecasts
If we had a constant group address space cast, the queue pointer
wasn't enabled for the function, resulting in a crash on a noreg
later.

llvm-svn: 271935
2016-06-06 20:03:31 +00:00
Artem Tamazov 135487767b [AMDGPU][llvm-mc] v_cndmask_b32: src2 is mandatory; do not enforce VOP2 when src2 == VCC.
Another step toward unifying the llvm assembler/disassembler with sp3.
The CodeGen output is also slightly improved, hence the changes in CodeGen tests.
Assembler/Disassembler tests updated/added.

Differential Revision: http://reviews.llvm.org/D20796

llvm-svn: 271900
2016-06-06 15:23:43 +00:00
Igor Breger edafb0595e [KNL] Fix UMULO lowering.
Differential Revision: http://reviews.llvm.org/D21013

llvm-svn: 271891
2016-06-06 12:24:52 +00:00
Benjamin Kramer f21beb2c74 Remove dead function with incredibly broken assert.
Found by clang-tidy's misc-assert-side-effect.

llvm-svn: 271887
2016-06-06 12:10:42 +00:00
Filipe Cabecinhas 6e7d5467c0 [NFC] Silence gcc warning (-Wsign-compare)
llvm-svn: 271882
2016-06-06 10:49:56 +00:00
Craig Topper 143446d5c1 [AVX512] Add PALIGNR shuffle lowering for v32i16 and v16i32.
llvm-svn: 271870
2016-06-06 05:39:10 +00:00
Simon Pilgrim 64c6de4525 [X86][XOP] Added VPERMIL2PD/VPERMIL2PS raw mask decoding for target shuffle combines
llvm-svn: 271834
2016-06-05 15:21:30 +00:00
Simon Pilgrim 478295dadd [X86][XOP] Added VPERMIL2PD/VPERMIL2PS as a target shuffle type
llvm-svn: 271831
2016-06-05 15:01:45 +00:00
Simon Pilgrim 163987a235 [X86][XOP] Tidied up DecodeVPERMIL2PMask to more closely match DecodeVPERMILPMask.
llvm-svn: 271830
2016-06-05 14:33:43 +00:00
Craig Topper 8eeda57a40 [AVX512] Add support for lowering PALIGNR for v64i8.
Could do this for other types too, but this is what's needed to replace the intrinsic with native IR in clang.

llvm-svn: 271828
2016-06-05 06:29:12 +00:00
Craig Topper 9f51c9ef15 [AVX512] Fix PANDN combining for v4i32/v8i32 when VLX is enabled.
v4i32/v8i32 ANDs aren't promoted to v2i64/v4i64 when VLX is enabled.

llvm-svn: 271826
2016-06-05 05:35:11 +00:00
Simon Pilgrim 2ead861d07 [X86][XOP] Added VPERMIL2PD/VPERMIL2PS shuffle mask comment decoding
llvm-svn: 271809
2016-06-04 21:44:28 +00:00
Craig Topper e609bd6600 [X86] Add the VR128L/H and VR256L/H to the list of vector register classes for inline asm constraints. Also fix the comment on the function.
llvm-svn: 271802
2016-06-04 20:15:08 +00:00
Saleem Abdulrasool 1fcdc23a6e X86: enable TLS on Windows itanium
Windows itanium is nearly identical to windows-msvc (MS ABI for C, itanium for
C++).  Enable the TLS support for the target similar to the MSVC model.

llvm-svn: 271797
2016-06-04 18:27:22 +00:00
Simon Pilgrim fd2eda4f64 [X86][AVX2] Fix v16i16 SHL lowering (PR27730)
The AVX2 v16i16 shift lowering works by unpacking to 2 x v8i32, performing the shift and then truncating the result.

The unpacking is used to place the values in the upper 16-bits so that we can correctly sign-extend for SRA shifts. Unfortunately we weren't ensuring that the lower 16-bits were zero to ensure that SHL correctly shifts in zero bits.

llvm-svn: 271796
2016-06-04 16:45:33 +00:00
Craig Topper 6ae375c9ba [X86] Use smaller types to shrink the intrinsic lowering tables by about 12K.
llvm-svn: 271776
2016-06-04 04:32:17 +00:00
Craig Topper 5250634334 [X86] Use X86ISD::ABS for lowering pabs SSSE3/AVX intrinsics to match AVX512. Should allow those intrinsics to use the EVEX encoded instructions and get the extra registers when available.
llvm-svn: 271775
2016-06-04 04:32:15 +00:00
Chad Rosier be879ea751 [AArch64] Spot SBFX-compatible code expressed with sign_extend.
This is very similar to r271677, but for extracts from i32 with the SIGN_EXTEND
acting on an arithmetic shift.
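
A hedged sketch of the shape involved (my own example, not the committed test): a sign_extend applied to an arithmetic shift right, which is SBFX-compatible.

```c
/* Hypothetical example: sign_extend(sra x, 3) on an i32, extractable as a
   single SBFX; assumes the usual arithmetic-shift behaviour of >> on a
   negative int. */
long long sext_of_asr(int x) {
  return (long long)(x >> 3);
}
```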

llvm-svn: 271717
2016-06-03 20:05:49 +00:00
Derek Schuff 5859a9ed80 [WebAssembly] Emit type signatures for declared functions
Under emscripten, C code can take the address of a function implemented
in Javascript (which is exposed via an import in wasm). Because imports
do not have linear memory addresses in wasm, we need to generate a thunk
to be the target of the indirect call; the thunk calls the import directly.

To make this possible, LLVM needs to emit the type signatures for these
functions, because they may not be called directly or referred to other
than where the address is taken.

This uses a new .s directive (.functype) which specifies the signature.
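
A rough illustration of the scenario (the names below are made up, not from the commit):

```c
/* Hypothetical emscripten-style example: js_log is implemented in JavaScript
   and only imported into the wasm module; taking its address forces the
   thunk described above, which needs the emitted type signature. */
extern void js_log(int code);              /* provided by JS, a wasm import */
typedef void (*handler_t)(int);
handler_t get_handler(void) { return &js_log; }
```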

Differential Revision: http://reviews.llvm.org/D20891

Re-apply r271599 but instead of bailing with an error when a declared
function has multiple returns, replace it with a pointer argument. Also
add the test case I forgot to 'git add' last time around.

llvm-svn: 271703
2016-06-03 18:34:36 +00:00
Sjoerd Meijer 9bc93f6298 Code size optimisation: do not inline memcpy if this expansion results
in more instructions than the library call.

Differential Revision: http://reviews.llvm.org/D20958

llvm-svn: 271678
2016-06-03 15:38:55 +00:00
Chad Rosier 2d658703e1 [AArch64] Spot SBFX-compatible code expressed with sign_extend_inreg.
We were assuming all SBFX-like operations would have the shl/asr form, but often
when the field being extracted is an i8 or i16, we end up with a
SIGN_EXTEND_INREG acting on a shift instead.

This is a port of r213754 from ARM to AArch64.
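
A hedged sketch of the SIGN_EXTEND_INREG form (my own example, not the committed test):

```c
/* Hypothetical example: extracting an i8 field via a shift plus a narrowing
   cast; the DAG tends to be sign_extend_inreg(srl x, 3, i8) rather than the
   shl/asr pair, yet it is still SBFX-compatible. */
int extract_signed_byte(int x) {
  return (signed char)(x >> 3);   /* sign-extend bits [10:3] of x */
}
```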

llvm-svn: 271677
2016-06-03 15:00:09 +00:00
Artem Tamazov f88397c84c [test/AMDGPU] Square-braced-syntax for registers: add macro test/example.
Test added as per discussion in http://reviews.llvm.org/D20588.
The macro is just a demonstration, useless in practice.
Coding style fixes.

Differential Revision: http://reviews.llvm.org/D20797

llvm-svn: 271675
2016-06-03 14:41:17 +00:00
Sjoerd Meijer d906bf1369 RAS extensions are part of ARMv8.2-A. This change enables them by introducing a
new instruction to ARM and AArch64 targets and several system registers.

Patch by: Roger Ferrer Ibanez and Oliver Stannard

Differential Revision: http://reviews.llvm.org/D20282

llvm-svn: 271670
2016-06-03 14:03:27 +00:00
Sjoerd Meijer 9da258d8e5 ARM target does not use printAliasInstr machinery which
forces special checks in ArmInstPrinter::printInstruction. This
patch addresses this issue.

Not all special checks could be removed: either they involve elaborate
conditions under which the alias is emitted (e.g. ldm/stm on sp may be
pop/push but only if the number of registers is >= 2) or the number
of registers is multivalued (as happens again with ldm/stm) and they
do not match the InstAlias pattern, which assumes single-valued operands
in the pattern.

Patch by: Roger Ferrer Ibanez

Differential Revision: http://reviews.llvm.org/D20237

llvm-svn: 271667
2016-06-03 13:19:43 +00:00
Sam Kolton a4a99ad1bc [AMDGPU] Assembler: More tests for SDWA instructions. Fix for SDWA float modifiers.
Summary: Depends on D20625

Reviewers: tstellarAMD, vpykhtin, artem.tamazov

Subscribers: arsenm, kzhuravl

Differential Revision: http://reviews.llvm.org/D20674

llvm-svn: 271662
2016-06-03 11:43:09 +00:00
Daniel Sanders 43750eab82 [mips] EABI CodeGen is completely untested and seems to have bitrotted. Remove it.
Summary:
There are no tests*, no EABI buildbots, and simple test cases do not work.

* There is a single MIPS16 test using a mips*-gnueabi triple but this test
  doesn't test EABI and the triple doesn't cause EABI to be used.

Reviewers: sdardis

Subscribers: tberghammer, danalbert, srhines, dsanders, sdardis, llvm-commits

Differential Revision: http://reviews.llvm.org/D20906

llvm-svn: 271658
2016-06-03 10:38:09 +00:00
Sam Kolton 05ef1c940f [AMDGPU] Assembler: Custom converters for SDWA instructions. Support for _dpp and _sdwa suffixes in mnemonics.
Summary:
Added custom converters for SDWA instructions to support optional operands and modifiers.
Added support for _dpp and _sdwa suffixes that allow forcing the DPP or SDWA encoding for instructions.

Reviewers: tstellarAMD, vpykhtin, artem.tamazov

Subscribers: arsenm, kzhuravl

Differential Revision: http://reviews.llvm.org/D20625

llvm-svn: 271655
2016-06-03 10:27:37 +00:00
Chandler Carruth 9ac86efd4d Remove bogus initialization of the PPC and Hexagon SelectionDAGISel
subclasses. These are not passes proper. We don't support registering
them, they can't be constructed with default arguments, and the ID is
actually in a base class.

Only these two targets even had any boilerplate to try to do this, and
it had to be munged out of the INITIALIZE_PASS macros to work. What's
worse, the boilerplate has rotted and the "name" of the pass is
actually the description string now!!! =/ All of this is completely
unnecessary. No other target bothers, and nothing breaks if you don't
initialize them because CodeGen has an entirely separate initialization
path that is somewhat more durable than relying on the implicit
initialization the way the 'opt' tool does for registered passes.

llvm-svn: 271650
2016-06-03 10:13:31 +00:00
Chandler Carruth d474144a7a Use the standard INITIALIZE_PASS macro rather than hand rolling a (not
entirely correct) version of its contents.

llvm-svn: 271649
2016-06-03 10:13:29 +00:00
Daniel Sanders 6ba3dd6b71 [mips] Implement 'la' macro in PIC mode for O32.
Summary:
N32 support will follow in a later patch since the symbol version of 'la'
incorrectly believes N32 to have 64-bit pointers and rejects it early.

This fixes the three incorrectly expanded 'la' macros found in bionic.

Reviewers: sdardis

Subscribers: dsanders, llvm-commits, sdardis

Differential Revision: http://reviews.llvm.org/D20820

llvm-svn: 271644
2016-06-03 09:53:06 +00:00
Simon Pilgrim e85506b6e0 [X86][XOP] Support for VPERMIL2PD/VPERMIL2PS 2-input shuffle instructions
This patch begins adding support for lowering to the XOP VPERMIL2PD/VPERMIL2PS shuffle instructions - adding the X86ISD::VPERMIL2 opcode and cleaning up the usage.

The internal llvm intrinsics were assuming the shuffle mask operand was the same type as the float/double input operands (I guess to simplify the intrinsic definitions in X86InstrXOP.td to a single value type). These needed changing to integer types (matching the clang builtin and the AMD intrinsics definitions); an auto-upgrade path is added to convert old calls.

Mask decoding/target shuffle support will be added in future patches.

Differential Revision: http://reviews.llvm.org/D20049

llvm-svn: 271633
2016-06-03 08:06:03 +00:00
Craig Topper 5e3d314488 [X86] Fix some isel patterns to remove an operand from some multiclasses. NFC
llvm-svn: 271631
2016-06-03 05:58:52 +00:00
Craig Topper e7ae106147 [AVX512] Ensure EVEX vpshufd, vpshuflw, and vpshufhw have isel priority over the VEX encoded ones.
llvm-svn: 271629
2016-06-03 05:31:04 +00:00
Craig Topper 01f53b1773 [AVX512] Fix shuffle comment printing for EVEX encoded PSHUFD, PSHUFHW, and PSHUFLW.
llvm-svn: 271628
2016-06-03 05:31:00 +00:00
Craig Topper dc70d8a4b7 [X86] Simplify a multiclass to remove a parameter. NFC
llvm-svn: 271627
2016-06-03 05:30:56 +00:00
Craig Topper 2388b4610a [X86] Remove unnecessary pattern predicates from the vector bit cast patterns. The types have to be legal and there are no alternative patterns. Saves almost 200 bytes in isel table.
llvm-svn: 271625
2016-06-03 04:15:27 +00:00
Craig Topper 19462f02bb [X86] Cleanup formatting a bit to align similar parts of adjacent lines.
llvm-svn: 271624
2016-06-03 04:15:25 +00:00
Craig Topper 895897f85b [X86] Remove redundant bitcast patterns for 128/256-bit vectors. These only differ from the SSE/AVX versions by the register class, but register class has no bearing on isel.
llvm-svn: 271623
2016-06-03 04:15:22 +00:00
Derek Schuff f5bae9c1ce Revert "[WebAssembly] Emit type signatures for declared functions"
This reverts r271599; it broke the integration tests.
More places than I expected had nontrivial return types in imports, or
else the check was wrong.

llvm-svn: 271606
2016-06-02 23:02:44 +00:00
Derek Schuff 23b7d65fe5 [WebAssembly] Emit type signatures for declared functions
Under emscripten, C code can take the address of a function implemented
in Javascript (which is exposed via an import in wasm). Because imports
do not have linear memory addresses in wasm, we need to generate a thunk
to be the target of the indirect call; the thunk calls the import directly.

To make this possible, LLVM needs to emit the type signatures for these
functions, because they may not be called directly or referred to other
than where the address is taken.

This uses a new .s directive (.functype) which specifies the signature.

Differential Revision: http://reviews.llvm.org/D20891

llvm-svn: 271599
2016-06-02 21:34:18 +00:00
Matt Arsenault 43578ec2a8 AMDGPU: Handle flat in getMemOpBaseRegImmOfs
It can still report the base register, and the uses give up when it
fails.

llvm-svn: 271575
2016-06-02 20:05:20 +00:00
Sanjay Patel dba8b4c04d transform obscured FP sign bit ops into a fabs/fneg using TLI hook
This is effectively a revert of:
http://reviews.llvm.org/rL249702 - [InstCombine] transform masking off of an FP sign bit into a fabs() intrinsic call (PR24886)
and:
http://reviews.llvm.org/rL249701 - [ValueTracking] teach computeKnownBits that a fabs() clears sign bits
and a reimplementation as a DAG combine for targets that have IEEE754-compliant fabs/fneg instructions.

This is intended to resolve the objections raised on the dev list:
http://lists.llvm.org/pipermail/llvm-dev/2016-April/098154.html
and:
https://llvm.org/bugs/show_bug.cgi?id=24886#c4

In the interest of patch minimalism, I've only partly enabled AArch64. PowerPC, MIPS, x86 and others can enable later.
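
A minimal sketch of the kind of "obscured" sign-bit operation involved (my own example, assuming the usual IEEE754 single-precision layout):

```c
#include <string.h>

/* Hypothetical example: fabsf() written as an integer mask over the float's
   bit pattern; the TLI-gated DAG combine can turn this back into fabs on
   targets with IEEE754-compliant fabs/fneg instructions. */
float masked_fabs(float x) {
  unsigned bits;
  memcpy(&bits, &x, sizeof bits);   /* bitcast f32 -> i32 */
  bits &= 0x7fffffffu;              /* clear the sign bit  */
  memcpy(&x, &bits, sizeof bits);   /* bitcast i32 -> f32  */
  return x;
}
```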

Differential Revision: http://reviews.llvm.org/D19391

llvm-svn: 271573
2016-06-02 20:01:37 +00:00
Matt Arsenault d1097a38e2 AMDGPU: Cleanup load tests
There are a lot of different kinds of loads to test for,
and these were scattered around inconsistently with
some redundancy. Try to comprehensively test all loads
in a consistent way.

llvm-svn: 271571
2016-06-02 19:54:26 +00:00
Matt Arsenault 52dec8d36a AMDGPU: Temporary fix for broken store combine
llvm-svn: 271567
2016-06-02 19:00:55 +00:00
Matt Arsenault 8e00194be8 AMDGPU: Fix crashes on unknown processor name
If the processor name failed to parse for amdgcn,
the resulting output would have R600 ISA in it.

If the processor name was missing or invalid for R600,
the wavefront size would not be set and there would be
crashes from missing itinerary data.

Fixes crashes in future commit caused by dividing by the unset/0
wavefront size.

llvm-svn: 271561
2016-06-02 18:37:16 +00:00
Ahmed Bougacha 63f78b0206 [X86] Define segment MI operands as regs instead of i8imm.
We've been pretending that segments are i8imm since the initial
support (r68645), predating the addition of the SEGMENT_REG class
(r81895).  That happens to work, but is wrong, and inconsistent
with how we print (e.g., X86ATTInstPrinter::printMemReference)
and parse them (e.g., X86Operand::addMemOperands).

This change shouldn't affect any tool users, but is visible to
library users or out-of-tree tablegen backends: this causes
MCOperandInfo for the segment op to have an RC instead of "unknown",
and TII::getRegClass to actually return something.  As the registers
are reserved and no vregs of the class ever created, that shouldn't
change anything.

No test change; no suspicious getRegClass() in X86 and CodeGen.

llvm-svn: 271559
2016-06-02 18:29:15 +00:00
Matthias Braun 651cff42c4 AArch64: Do not test for CPUs, use SubtargetFeatures
Testing for specific CPUs has a number of problems, better use subtarget
features:
- When some tweak is added for a specific CPU it is often desirable for
  the next version of that CPU as well, yet we often forget to add it.
- It is hard to keep track of checks scattered around the target code;
  Declaring all target specifics together with the CPU in the tablegen
  file is a clear representation.
- Subtarget features can be tweaked from the command line.

To discourage people from using CPU checks in the future I removed the
isCortexXX(), isCyclone(), ... functions. I added a getProcFamily()
function for exceptional circumstances but made it clear in the comment
that usage is discouraged.

Reformat feature list in AArch64.td to have 1 feature per line in
alphabetical order to simplify merging and sorting for out of tree
tweaks.

No functional change intended.

Differential Revision: http://reviews.llvm.org/D20762

llvm-svn: 271555
2016-06-02 18:03:53 +00:00
Dimitry Andric 6a482a73d6 Only attempt to detect AVG if SSE2 is available
Summary:
In PR29973 Sanjay Patel reported an assertion failure when a certain
loop was optimized for a target without SSE2 support.  It turned out
this was because of the AVG pattern detection introduced in rL253952.

Prevent the assertion failure by bailing out early in
`detectAVGPattern()`, if the target does not support SSE2.

Also add a minimized test case.
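
A hedged reduction of the kind of loop involved (my own example, not the committed test case):

```c
/* Hypothetical example: a rounding byte average; once vectorized this is the
   shape detectAVGPattern() matches, and it now bails out early when SSE2 is
   not available. */
void byte_avg(unsigned char *dst, const unsigned char *a,
              const unsigned char *b, int n) {
  for (int i = 0; i < n; ++i)
    dst[i] = (unsigned char)((a[i] + b[i] + 1) >> 1);
}
```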

Reviewers: congh, eli.friedman, spatel

Subscribers: emaste, llvm-commits

Differential Revision: http://reviews.llvm.org/D20905

llvm-svn: 271548
2016-06-02 17:30:49 +00:00
Geoff Berry 66f6b65fed [PEI, AArch64] Use empty spaces in stack area for local stack slot allocation.
Summary:
If the target requests it, use empty spaces in the fixed and
callee-save stack area to allocate local stack objects.

AArch64: Change last callee-save reg stack object alignment instead of
size to leave a gap to take advantage of above change.

Reviewers: t.p.northover, qcolombet, MatzeB

Subscribers: rengolin, mcrosier, llvm-commits, aemerson

Differential Revision: http://reviews.llvm.org/D20220

llvm-svn: 271527
2016-06-02 16:22:07 +00:00
Krzysztof Parzyszek 3d6fc83a58 [Hexagon] Expand COPY pseudo-instruction
Handle it locally instead of having the target-independent pass deal
with it. The generic pass does not preserve implicit uses, which may
be necessary.

llvm-svn: 271520
2016-06-02 14:33:08 +00:00
Krzysztof Parzyszek f69ff7120b [RDF] Ignore implicit defs when resetting <kill> flags
llvm-svn: 271519
2016-06-02 14:30:09 +00:00
Simon Pilgrim 0afd5a4d80 [X86][SSE] Replace (V)CVTTPS2DQ and VCVTTPD2DQ truncating (round to zero) f32/f64 to i32 with generic IR (llvm)
This patch removes the llvm intrinsics (V)CVTTPS2DQ and VCVTTPD2DQ truncation (round to zero) conversions and auto-upgrades to FP_TO_SINT calls instead.

Note: I looked at updating CVTTPD2DQ as well but this still requires a lot more work to correctly lower.

Differential Revision: http://reviews.llvm.org/D20860

llvm-svn: 271510
2016-06-02 10:55:21 +00:00
Sjoerd Meijer 0b7bb16e5b This adds support for Cortex-A73 as an available target.
Differential Revision: http://reviews.llvm.org/D20865

llvm-svn: 271508
2016-06-02 10:48:52 +00:00
Craig Topper 048a08af66 [AVX512] Add 512-bit load/stores to fast isel.
llvm-svn: 271486
2016-06-02 04:51:37 +00:00
Craig Topper 292a86db5b [X86] No need to use 256-bit VMOVNTPS for integer types when only AVX1 is supported. VMOVNTDQ is available with AVX1.
We were getting this right for v4i64 but not the other integer types.

llvm-svn: 271482
2016-06-02 04:19:48 +00:00
Craig Topper ca9c0801e1 [X86] Add AVX 256-bit load and stores to fast isel.
I'm not sure why this was missing for so long.

This also exposed that we were picking floating point 256-bit VMOVNTPS for some integer types in normal isel for AVX1 even though VMOVNTDQ is available. In practice it doesn't matter due to the execution dependency fix pass, but it required extra isel patterns. Fixing that in a follow-up commit.

llvm-svn: 271481
2016-06-02 04:19:45 +00:00
Craig Topper 6611188633 [X86] Use uint16_t for a couple arrays of instruction opcodes. NFC
llvm-svn: 271480
2016-06-02 04:19:42 +00:00
Craig Topper 5bb9cda620 [AVX512] Remove LOADA/LOADU/STOREA/STOREU intrinsic types now that they are unused.
llvm-svn: 271479
2016-06-02 04:19:40 +00:00
Craig Topper f10fbfa738 [AVX512] Remove masked load intrinsics. Clang now emits generic masked load intrinsics instead.
The intrinsics will be autoupgraded to the same generic masked loads.

llvm-svn: 271478
2016-06-02 04:19:36 +00:00
Matt Arsenault 598f55387a AMDGPU: Fix incorrectly setting kill flag when copying register tuples
This fixes some verifier errors when trackLivenessAfterRegAlloc is
enabled.

llvm-svn: 271446
2016-06-02 00:04:30 +00:00
Matt Arsenault d3e4c646ea AMDGPU: SIDebuggerInsertNops preserves CFG
This saves an additional run of the DominatorTree and
MachineLoopInfo

llvm-svn: 271444
2016-06-02 00:04:22 +00:00
Rafael Espindola 41410cc812 Avoid a load for local functions.
llvm-svn: 271437
2016-06-01 21:57:11 +00:00
Keno Fischer 5573483c5b [PPC64] Fix SUBFC8 Defs list
Fix PR27943 "Bad machine code: Using an undefined physical register".
SUBFC8 implicitly defines the CR0 register, but this was omitted in
the instruction definition.

Patch by Jameson Nash <jameson@juliacomputing.com>

Reviewers: hfinkel
Differential Revision: http://reviews.llvm.org/D20802

llvm-svn: 271425
2016-06-01 20:31:07 +00:00
Michael Zuckerman 6a894956fc Adding back-end support for two bit scanning intrinsics
Adding LLVM back-end support for two intrinsics dealing with bit scan: _bit_scan_forward and _bit_scan_reverse.
Their functionality is as described in the Intel intrinsics guide:
https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_bit_scan_forward&expand=371,370
https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_bit_scan_reverse&expand=371,370
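
A short usage sketch (my own example; it assumes the _bit_scan_* wrappers are available via x86intrin.h):

```c
#include <x86intrin.h>

/* Hypothetical usage: _bit_scan_forward/_bit_scan_reverse return the index
   of the lowest/highest set bit of a non-zero 32-bit value. */
int set_bit_span(int mask) {
  int lo = _bit_scan_forward(mask);   /* index of least significant set bit */
  int hi = _bit_scan_reverse(mask);   /* index of most significant set bit  */
  return hi - lo + 1;
}
```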

Commit on behalf of Omer Paparo Bivas


Differential Revision: http://reviews.llvm.org/D19915

llvm-svn: 271386
2016-06-01 12:02:37 +00:00
Oliver Stannard 92ca83cccd [ARM] Add additional matching for UBFX instructions
This adds an additional matcher to select UBFX(..) from SRL(AND(..)) in
ARMISelDAGToDAG to help with code size.
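
A hedged sketch of the SRL(AND(..)) shape (my own example, not the committed test):

```c
/* Hypothetical example: (x & 0xF8) >> 3 is srl(and(x, 0xF8), 3), i.e. an
   unsigned extract of bits [7:3], now selectable as a single UBFX. */
unsigned extract_field(unsigned x) {
  return (x & 0xF8u) >> 3;
}
```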

Patch by David Green.

Differential Revision: http://reviews.llvm.org/D20667

llvm-svn: 271384
2016-06-01 12:01:01 +00:00
Chris Dewhurst 53bde954db [Sparc] Allow passing of empty structs.
Passing an empty struct as a function call argument is now supported.

Unit tests for various scenarios added.

llvm-svn: 271374
2016-06-01 08:48:56 +00:00
Craig Topper 4f2d5a68d3 Revert r271362 "[AVX512] Remove masked load intrinsics. Clang now emits generic masked load intrinsics instead."
Looks like something isn't quite right still. Also forgot to move the test cases to an autoupgrade test.

llvm-svn: 271363
2016-06-01 05:57:55 +00:00
Craig Topper dacd9d2bac [AVX512] Remove masked load intrinsics. Clang now emits generic masked load intrinsics instead.
The intrinsics will be autoupgraded to the same generic masked loads.

llvm-svn: 271362
2016-06-01 05:35:16 +00:00
Kevin B. Smith ed0b620a65 [X86]: Add a pattern that uses GR16_ABCD rather than GR32_ABCD to avoid falsely marking the whole 32-bit register as live.
Differential Revision: http://reviews.llvm.org/D20649

llvm-svn: 271341
2016-05-31 22:00:12 +00:00
Matthias Braun fe725c9241 ARM: Do not attempt to modify register class of physregs.
Physregs have no associated register class; do not attempt to modify it
in Thumb2InstrInfo::storeRegToStackSlot()/loadFromStackSlot().

llvm-svn: 271339
2016-05-31 21:39:12 +00:00
Rafael Espindola 4d29099f7f Delete AArch64II::MO_CONSTPOOL.
A constant pool holding the address of a variable is equivalent to
a got entry. It produces exactly the same instruction sequence as a
got use, and unlike a got use it is not uniqued by the linker.

llvm-svn: 271311
2016-05-31 18:31:14 +00:00
Simon Dardis b60833c0ca [mips] Enforce compact branch register restrictions
Enforce compact branch register restrictions such as the use of the zero
register or both operands being the same register. Emit a clear error in
such cases, as the issue is subtle.

For bovc and bnvc, silently fix up such cases when emitting objects directly,
as LLVM started doing in rL269899.

Reviewers: vkalintiris, dsanders

Differential Review: http://reviews.llvm.org/D20475

llvm-svn: 271301
2016-05-31 17:34:42 +00:00
Matt Arsenault ec30eb507e AMDGPU: Remove unused address space
Also return a single StringRef instead of building a string.

llvm-svn: 271296
2016-05-31 16:57:45 +00:00
Rafael Espindola 7ad97b2fe4 Add a use of shouldAssumeDSOLocal to ARM.
Now this code path knows about position independent executables.

llvm-svn: 271290
2016-05-31 15:31:55 +00:00
Krzysztof Parzyszek a580273b3f [Hexagon] Disable expanding MUX instructions that define a subregister
The code in HexagonExpandCondsets.cpp does not handle those cases at the
moment.

llvm-svn: 271281
2016-05-31 14:27:10 +00:00
Yaron Keren a34bfa000f Do not modify a std::vector while looping it.
Introduced in r271244, this is probably undefined behaviour and asserts when
compiled with Visual C++ debug mode. 

On a further note, the loop is quadratic with regard to the number of
successors, since removeSuccessor is linear; it could probably be made
linear.
llvm-svn: 271278
2016-05-31 13:45:05 +00:00
Ranjeet Singh 16c24f4d6e [ARM] Add backend support for load/store intrinsics.
Added support to map intrinsics
__builtin_arm_{ldc,ldcl,ldc2,ldc2l,stc,stcl,stc2,stc2l}
to their ARM instructions.

Differential Revision: http://reviews.llvm.org/D20564

llvm-svn: 271271
2016-05-31 12:39:30 +00:00
Simon Pilgrim e05dc45897 [X86][SSE] Add load-folding patterns for (V)CVTDQ2PD (PR27291)
Added patterns for (V)CVTDQ2PD -> 2f64 loading from a 64-bit source.

llvm-svn: 271269
2016-05-31 12:04:35 +00:00
Simon Dardis 03676dc969 [mips] bnec/beqc register constraint fix
beqc and bnec cannot have $rs == $rt. Inhibit compact branch creation
if that would occur.

Reviewers: vkalintiris, dsanders

Differential Revision: http://reviews.llvm.org/D20624

llvm-svn: 271260
2016-05-31 09:54:55 +00:00
Igor Breger 73ee8ba9b0 [AVX512] Fix intrinsic vcvtps2ph lowering.
Differential Revision: http://reviews.llvm.org/D20788

llvm-svn: 271255
2016-05-31 08:04:21 +00:00
Igor Breger 52bd1d5fcc Fix intrinsic vbroadcast{i32|f32}x2 lowering.
Differential Revision: http://reviews.llvm.org/D20780

llvm-svn: 271254
2016-05-31 07:43:39 +00:00
Craig Topper 50f85c22c5 [AVX512] Remove masked store intrinsics. Clang now emits generic masked store intrinsics instead.
The intrinsics will be autoupgraded to the same generic masked stores.

llvm-svn: 271245
2016-05-31 01:50:02 +00:00
Saleem Abdulrasool d2f705ddf9 X86: permit using SjLj EH on x86 targets as an option
This adds support to the backend to actually support SjLj EH as an exception
model.  This is *NOT* the default model, and requires explicitly opting into it
from the frontend.  GCC supports this model and for MinGW it can still be enabled
via the `--using-sjlj-exceptions` option.

Addresses PR27749!

llvm-svn: 271244
2016-05-31 01:48:07 +00:00
Craig Topper 8287fd8abd [X86] Remove SSE/AVX unaligned store intrinsics as clang no longer uses them. Auto upgrade to native unaligned store instructions.
llvm-svn: 271236
2016-05-30 23:15:56 +00:00
Rafael Espindola 4f1062adb8 Fix a crash when producing COFF.
llvm-svn: 271229
2016-05-30 20:18:53 +00:00
Diana Picus f353a5e06d [BPF] Remove exit-on-error from tests (PR27768, PR27769)
The exit-on-error flag is necessary to avoid some assertions/unreachables. We
can get past them by creating a few dummy nodes.

Fixes PR27768, PR27769.

Differential Revision: http://reviews.llvm.org/D20726

llvm-svn: 271200
2016-05-30 08:28:34 +00:00
Rafael Espindola 9768d73c74 Move RelaxELFRel out to llvm-mc.
llvm-svn: 271160
2016-05-29 01:11:00 +00:00
Simon Pilgrim 9602d678cb [X86][SSE] (Reapplied) Replace (V)PMOVSX and (V)PMOVZX integer extension intrinsics with generic IR (llvm)
This patch removes the llvm (V)PMOVSX and (V)PMOVZX sign/zero extension intrinsics and auto-upgrades to SEXT/ZEXT calls instead. We already did this for SSE41 PMOVSX some time ago, so much of that implementation can be reused.

Reapplied now that the the companion patch (D20684) removes/auto-upgrade the clang intrinsics has been committed.

Differential Revision: http://reviews.llvm.org/D20686

llvm-svn: 271131
2016-05-28 18:03:41 +00:00
Rafael Espindola 52bd330500 Fix production of R_X86_64_GOTPCRELX/R_X86_64_REX_GOTPCRELX.
We were producing R_X86_64_GOTPCRELX for invalid instructions and
sometimes producing R_X86_64_GOTPCRELX instead of
R_X86_64_REX_GOTPCRELX.

llvm-svn: 271118
2016-05-28 15:51:38 +00:00
Sanjay Patel 97c2c108fd [x86] avoid printing unnecessary sign bits of hex immediates in asm comments (PR20347)
It would be better to check the valid/expected size of the immediate operand, but this is
generally better than what we print right now.

Differential Revision: http://reviews.llvm.org/D20385

llvm-svn: 271114
2016-05-28 14:58:37 +00:00
Ahmed Bougacha a3dc1ba142 [X86] Try to zero elts when lowering 256-bit shuffle with PSHUFB.
Otherwise we fallback to a blend of PSHUFBs later on.

Differential Revision: http://reviews.llvm.org/D19661

llvm-svn: 271113
2016-05-28 14:38:04 +00:00
Rafael Espindola 2d39bb3c6a Simplify and clang-format a table.
llvm-svn: 271112
2016-05-28 11:13:34 +00:00
Rafael Espindola fe796dca90 Fix default reloc model on ARM.
llvm-svn: 271111
2016-05-28 10:41:15 +00:00
Renato Golin 9be88629d5 Revert "Revert "Map DynamicNoPIC to Static on non-darwin.""
This reverts commit r271096, as reverting it broke even more buildbots!

But that also means I'll break on ARM again... :(

llvm-svn: 271099
2016-05-28 04:47:13 +00:00
Renato Golin 4f22c51b09 Revert "Map DynamicNoPIC to Static on non-darwin."
This reverts commit r271052, as it broke some ARM buildbots.

llvm-svn: 271096
2016-05-28 04:24:26 +00:00
Krzysztof Parzyszek 07d7518540 [Hexagon] Add option to enable subregister liveness tracking
llvm-svn: 271088
2016-05-28 02:02:51 +00:00
Krzysztof Parzyszek 96bb4fe539 [Hexagon] Separate C8 and USR to avoid unwanted subregister composition
Composing subreg_loreg with subreg_overflow leads to strange results with
lane masks for register classes with subreg_loreg. In particular, dead
lane detection generates incorrect code.

llvm-svn: 271087
2016-05-28 01:51:16 +00:00
Matthias Braun bcfd23673b AArch64: Fix indentation
llvm-svn: 271084
2016-05-28 01:06:51 +00:00
Matt Arsenault d8d304d1d6 AMDGPU: Fix trailing whitespace
llvm-svn: 271081
2016-05-28 00:50:51 +00:00
Matt Arsenault 7401516985 AMDGPU: Add fract intrinsic
Remove broken patterns matching it. This was matching the
unsafe math pattern and expanding the fix for the buggy instruction
from the pattern. The problems are also on CI. Remove the workarounds
and only use fract with unsafe math or from the intrinsic.

llvm-svn: 271078
2016-05-28 00:19:52 +00:00
Rafael Espindola eece113105 Start using shouldAssumeDSOLocal on ARM.
Given where this is used it should be a nop.

llvm-svn: 271066
2016-05-27 22:41:51 +00:00
Matthias Braun 27b6692fe2 AArch64Subtarget: Use default member initializers
llvm-svn: 271057
2016-05-27 22:14:09 +00:00
Rafael Espindola f9bda6805b Map DynamicNoPIC to Static on non-darwin.
DynamicNoPIC was only ever used on darwin. This maps it to static on
ELF. It matches what is done on X86.

llvm-svn: 271052
2016-05-27 21:44:18 +00:00
Krzysztof Parzyszek 764fed98e6 [Hexagon] Use standard macros to initialize HexagonExpandCondsets pass
llvm-svn: 271045
2016-05-27 21:15:34 +00:00
Krzysztof Parzyszek d0f8e1cf0d [Hexagon] Do not create passes in the constructor of HexagonPassConfig
When running mir tests, a pass created in that constructor would not be
freed, leading to memory leaks.

llvm-svn: 271043
2016-05-27 20:48:39 +00:00
Michael Kuperstein a75c77b127 [X86] Detect SAD patterns and emit psadbw instructions.
This recommits r267649 with a fix for PR27539.
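
A hedged reduction of the kind of loop involved (my own example, not the committed test):

```c
#include <stdlib.h>

/* Hypothetical example: a byte sum-of-absolute-differences reduction of the
   shape the new detection matches and lowers to psadbw once vectorized. */
int sad8(const unsigned char *a, const unsigned char *b, int n) {
  int sum = 0;
  for (int i = 0; i < n; ++i)
    sum += abs(a[i] - b[i]);
  return sum;
}
```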

Differential Revision: http://reviews.llvm.org/D20598

llvm-svn: 271033
2016-05-27 18:53:22 +00:00
Ahmed Bougacha 346b98011c [X86] Clarify PSHUFB+blend lowering function name. NFC.
Also guard against v32i8 users.

llvm-svn: 271024
2016-05-27 17:58:17 +00:00
Ahmed Bougacha 655c2deaf6 [ARM] Remove tBLXr Pat made redundant by r269101. NFCI.
llvm-svn: 271023
2016-05-27 17:58:03 +00:00
Benjamin Kramer f6f815bf39 Use StringRef::startswith instead of find(...) == 0.
It's faster and easier to read.

llvm-svn: 271018
2016-05-27 16:54:57 +00:00
Benjamin Kramer 1cda54f369 [sparc] Simplify a slow and verbose way of checking if a string starts with "ld".
PR27904.

llvm-svn: 271016
2016-05-27 16:45:37 +00:00
Benjamin Kramer 82de7d323d Apply clang-tidy's misc-move-constructor-init throughout LLVM.
No functionality change intended, maybe a tiny performance improvement.

llvm-svn: 270997
2016-05-27 14:27:24 +00:00
Simon Dardis 4ccda502d5 [mips] Weaken asm predicate for memory offsets
The isMemWithSimmOffset predicate rejects relocations, which is incorrect
behaviour. Linkers and other tools should handle|warn|error when the
field overflows.

Reviewers: dsanders, vkalintiris

Differential Revision: http://reviews.llvm.org/D20727

llvm-svn: 270995
2016-05-27 13:56:36 +00:00
Artem Tamazov 7da9b82e02 [AMDGPU][llvm-mc] Square-braced-syntax for registers - make ":expr2" optional.
Register numbers may be specified as assembly-time expressions.
This feature can be useful in macros and the like. However, expressions
are supported within square braces only.

Square braces were initially intended to support specifying multiple
(pairs/quads...) registers. Syntax like v[8:8], which specifies a single register,
is also supported. That allows expressions but looks a bit unnatural.

This change supports syntax REG[EXPR].
Tests added.

Differential Revision: http://reviews.llvm.org/D20588

llvm-svn: 270990
2016-05-27 12:50:13 +00:00
Benjamin Kramer 4fed928f53 Avoid some copies by using const references.
clang-tidy's performance-unnecessary-copy-initialization with some manual
fixes. No functional changes intended.

llvm-svn: 270988
2016-05-27 12:30:51 +00:00
Benjamin Kramer 3e9a5d3468 Apply clang-tidy's misc-static-assert where it makes sense.
Also fold conditions into assert(0) where it makes sense. No functional
change intended.

llvm-svn: 270982
2016-05-27 11:36:04 +00:00
Benjamin Kramer 4ec6e9d50c [sparc] Remove some unused (and undefined) declarations.
No functionality change.

llvm-svn: 270981
2016-05-27 10:19:03 +00:00
Benjamin Kramer 922efd7a67 [hexagon] Move BlockRanges and RDF stuff into the llvm namespace.
No functional change intended.

llvm-svn: 270980
2016-05-27 10:06:40 +00:00
Benjamin Kramer 797fb96a9c [sparc] Move LEON passes into llvm namespace.
Also give them library visibility while there.

llvm-svn: 270979
2016-05-27 10:06:27 +00:00
Simon Pilgrim 4642a57fbf Revert: r270973 - [X86][SSE] Replace (V)PMOVSX and (V)PMOVZX integer extension intrinsics with generic IR (llvm)
llvm-svn: 270976
2016-05-27 09:02:25 +00:00
Simon Pilgrim c013e5737b [X86][SSE] Replace (V)PMOVSX and (V)PMOVZX integer extension intrinsics with generic IR (llvm)
This patch removes the llvm (V)PMOVSX and (V)PMOVZX sign/zero extension intrinsics and auto-upgrades to SEXT/ZEXT calls instead. We already did this for SSE41 PMOVSX some time ago, so much of that implementation can be reused.

A companion patch (D20684) removes/auto-upgrade the clang intrinsics.

Differential Revision: http://reviews.llvm.org/D20686

llvm-svn: 270973
2016-05-27 08:49:15 +00:00
Krzysztof Parzyszek da0b9a959e [Hexagon] Enable the post-RA scheduler
The aggressive anti-dependency breaker can rename the restored callee-saved
registers. To prevent this, mark these registers as live on all
paths to the return/tail-call instructions, and add implicit use operands
for them to these instructions.

llvm-svn: 270898
2016-05-26 19:44:28 +00:00
Chad Rosier 14aa2ad1f4 [AArch64] Generate rev16/rev32 from bswap + srl when upper bits are known zero.
Canonicalize (srl (bswap i32 x), 16) to (rotr (bswap i32 x), 16), if the high
16-bits of x are zero. Similarly, canonicalize (srl (bswap i64 x), 32) to
(rotr (bswap i64 x), 32), if the high 32-bits of x are zero.

test_rev_w_srl16:            test_rev_w_srl16:
  and w8, w0, #0xffff          and     w8, w0, #0xffff
  rev w8, w8           --->    rev16   w0, w8
  lsr     w0, w8, #16

test_rev_x_srl32:            test_rev_x_srl32:
  rev x8, x8           --->    rev32   x0, x8
  lsr x0, x8, #32
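
A hedged guess at C source producing the left-hand pattern above (my own example):

```c
/* Hypothetical example: byte-swapping a value whose upper 16 bits are known
   zero; (bswap >> 16) can then be canonicalized to a rotate and emitted as
   rev16. */
unsigned rev16_shape(unsigned x) {
  return __builtin_bswap32(x & 0xFFFFu) >> 16;
}
```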

llvm-svn: 270896
2016-05-26 19:41:33 +00:00
Changpeng Fang 71369b3a39 AMDGPU/SI: Enable load-store-opt by default.
Summary: Enable load-store-opt by default, and update LIT tests.

Reviewers: arsenm

Differential Revision: http://reviews.llvm.org/D20694

llvm-svn: 270894
2016-05-26 19:35:29 +00:00
Artem Belevich 11f69ba0cf Init member structs in constructor.
Fixes a build error on Windows where MSVC does not
support list initialization inside a member initializer list.

llvm-svn: 270877
2016-05-26 17:29:20 +00:00
Artem Belevich 49e9a81236 [NVPTX] Added NVVMIntrRange pass
NVVMIntrRange adds !range metadata to calls of NVVM intrinsics
that return values within a known limited range.

This allows LLVM to generate optimal code for indexing arrays
based on tid/ctaid which is a frequently used pattern in CUDA code.

Differential Revision: http://reviews.llvm.org/D20644

llvm-svn: 270872
2016-05-26 17:02:56 +00:00
Artem Tamazov 6edc135d0f [AMDGPU][llvm-mc] s_getreg/setreg* - hwreg - factor out strings/literals etc.
Hwreg(...) syntax implementation unified with sendmsg(...).
Common strings moved to Utils.
MathExtras.h functionality utilized.
Added missing build dependency in Disassembler.

Differential Revision: http://reviews.llvm.org/D20381

llvm-svn: 270871
2016-05-26 17:00:33 +00:00
Artem Tamazov b49c3361e5 Fix build warning introduced in r270552 "[AMDGPU][llvm-mc] Disassembler: support for TTMP/TBA/TMA registers."
llvm-svn: 270859
2016-05-26 15:52:16 +00:00
Simon Pilgrim cf340bd9c1 [X86][SSE] When lowering a 256-bit shuffle as PMOVZX, reduce the input vector to the lower 128-bit subvector.
More often than not this is what it started out as; the extraction is zero-cost on AVX and the PMOVZX/PMOVSX folding logic is based around 128-bit loads.

llvm-svn: 270858
2016-05-26 15:40:36 +00:00
Krzysztof Parzyszek de37cfb596 [Hexagon] Select the aggressive anti-dependency breaker
llvm-svn: 270857
2016-05-26 15:38:50 +00:00
Diana Picus 81bc3170e8 [AMDGPU] Remove exit-on-error flag from test (PR27762)
Similar to r269948, but for argument lowering.

Fixes PR27762

Differential Revision: http://reviews.llvm.org/D20430

llvm-svn: 270856
2016-05-26 15:24:55 +00:00
Diana Picus 20a8d8e97e [BPF] Remove exit-on-error flag in test (PR27767)
The exit-on-error flag is needed to avoid an assert where
llvm::SelectionDAGISel::LowerArguments doesn't create enough arguments. Fill up
with zeroes to reach the right number of args.

Fixes PR27767.

Differential Revision: http://reviews.llvm.org/D20571

llvm-svn: 270855
2016-05-26 15:23:50 +00:00
Chad Rosier 816a67da49 [AArch64] Generate a BFI/BFXIL from 'or (and X, MaskImm), OrImm'.
If and only if the value being inserted sets only known zero bits.

This combine transforms things like

  and w8, w0, #0xfffffff0
  movz w9, #5
  orr w0, w8, w9

into

  movz w8, #5
  bfxil w0, w8, #0, #4

The combine is tuned to make sure we always reduce the number of instructions.
We avoid churning code for what are expected to be performance-neutral changes
(e.g., converting AND+OR to OR+BFI).

Differential Revision: http://reviews.llvm.org/D20387

llvm-svn: 270846
2016-05-26 13:27:56 +00:00
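A minimal C++ source-level analogue of the transformed pattern (illustrative only; the assembly above is the commit's own example): OR-ing in a constant whose set bits all fall inside the region the mask just cleared.

  #include <cstdint>

  // (x & 0xfffffff0) | 5 sets only bits known to be zero after the mask, so it
  // can be implemented as a single bitfield insert (BFXIL) instead of AND +
  // MOVZ + ORR.
  uint32_t insert_low_nibble(uint32_t x) {
    return (x & 0xfffffff0u) | 0x5u;
  }
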
Rafael Espindola a224de06bc Use shouldAssumeDSOLocal on AArch64.
This reduces code duplication and now AArch64 also handles PIE.

llvm-svn: 270844
2016-05-26 12:42:55 +00:00
Igor Breger 8437bb70fd [AVX512] Fix intrinsic cmp{sd|ss} lowering.
Differential Revision: http://reviews.llvm.org/D20615

llvm-svn: 270843
2016-05-26 12:42:25 +00:00
Chris Dewhurst 9013d069b0 [Sparc] Extend the assembler printing support for Sparc back-end.
Allows display of floating-point registers and display of assembler meta-data output.

llvm-svn: 270829
2016-05-26 07:28:31 +00:00
Justin Lebar fea8a8d70a [NVPTX] Don't (incorrectly) say that the NVVMReflect pass preserves all analyses.
Reviewers: tra

Subscribers: jholewinski, llvm-commits

Differential Revision: http://reviews.llvm.org/D20585

llvm-svn: 270790
2016-05-25 23:12:38 +00:00
Rafael Espindola 6b93bf5783 Don't repeat name in comment and git-clang-format.
llvm-svn: 270785
2016-05-25 22:44:06 +00:00
Rafael Espindola 6b4baa5f58 Sort includes.
llvm-svn: 270769
2016-05-25 21:37:29 +00:00
Simon Pilgrim 4810683ddf Simplify std::all_of/any_of predicates by using llvm::all_of/any_of. NFCI.
llvm-svn: 270753
2016-05-25 20:41:11 +00:00
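A minimal C++ sketch of the simplification (illustrative values; llvm::all_of lives in llvm/ADT/STLExtras.h): the range-based helpers drop the explicit begin()/end() iterator pair.

  #include "llvm/ADT/STLExtras.h"
  #include <algorithm>
  #include <vector>

  static bool allPositive(const std::vector<int> &Values) {
    // Before: iterator-pair form.
    //   return std::all_of(Values.begin(), Values.end(),
    //                      [](int V) { return V > 0; });
    // After: range form.
    return llvm::all_of(Values, [](int V) { return V > 0; });
  }
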
Rafael Espindola 84f0562064 Fix shouldAssumeDSOLocal for private linkage.
llvm-svn: 270746
2016-05-25 19:55:16 +00:00
Matt Arsenault e57206d81b AMDGPU: Fix v2i64/v2f64 bitcasts
These operations tend to get promoted away to v4i32 so
this doesn't happen often.

llvm-svn: 270740
2016-05-25 18:07:36 +00:00
Matt Arsenault 1cc4991412 AMDGPU: Fix inconsistent lowering of select of vectors
f32 vectors would use a sequence of BFI instructions instead
of unrolled cmp + select. This was better in the case of a VALU
select with SGPR inputs, but we don't have a way of dealing with that
in the DAG.

llvm-svn: 270731
2016-05-25 17:34:58 +00:00
Sanjay Patel aedc347b29 [x86] avoid code explosion from LoopVectorizer for gather loop (PR27826)
By making pointer extraction from a vector more expensive in the cost model,
we avoid the vectorization of a loop that is very likely to be memory-bound:
https://llvm.org/bugs/show_bug.cgi?id=27826

There are still bugs related to this, so we may need a more general solution
to avoid vectorizing obviously memory-bound loops when we don't have HW gather
support.

Differential Revision: http://reviews.llvm.org/D20601

llvm-svn: 270729
2016-05-25 17:27:54 +00:00
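An illustrative C++ gather-style loop (not the PR27826 test case): every iteration loads through an index array, so without hardware gather support the vectorizer has to extract each pointer from a vector, which this change now prices higher in the cost model.

  // A loop like this is typically memory-bound; vectorizing it without a
  // hardware gather mostly adds extract/insert overhead.
  float gather_sum(const float *data, const int *idx, int n) {
    float sum = 0.0f;
    for (int i = 0; i < n; ++i)
      sum += data[idx[i]];
    return sum;
  }
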
Sanjay Patel 3955360b24 [x86, AVX] allow explicit calls to VZERO* to modify state in VZeroUpperInserter pass (PR27823)
As noted in the review, there are still problems, so this doesn't fix the bug completely.

Differential Revision: http://reviews.llvm.org/D20529

llvm-svn: 270718
2016-05-25 16:39:47 +00:00
Simon Pilgrim 4298d06d0f [X86][SSE] Replace (V)CVTDQ2PD(Y) and (V)CVTPS2PD(Y) lossless conversion intrinsics with generic IR
Followup to the D20528 clang patch, this removes the (V)CVTDQ2PD(Y) and (V)CVTPS2PD(Y) llvm intrinsics and auto-upgrades them to sitofp/fpext instead.

Differential Revision: http://reviews.llvm.org/D20568

llvm-svn: 270678
2016-05-25 08:59:18 +00:00
Craig Topper 12e322a8cf [X86] Remove the llvm.x86.sse2.storel.dq intrinsic. It hasn't been used in a long time.
llvm-svn: 270677
2016-05-25 06:56:32 +00:00
Nirav Dave e003bb7ead Soften assertion in AMDGPU emitPrologue.
[AMDGPU] emitPrologue looks for an unused, unallocated SGPR that is not
the scratch descriptor. Continue the search if the unused register that was
found fails the other requirements.

Reviewers: arsenm, tstellarAMD, nhaehnle

Subscribers: arsenm, llvm-commits, kzhuravl

Differential Revision: http://reviews.llvm.org/D20526

llvm-svn: 270646
2016-05-25 01:45:42 +00:00
Dan Gohman d530f68d45 [WebAssembly] Put __stack_pointer in the offset field of loads and stores.
Instead of this:

i32.const       $push10=, __stack_pointer
i32.load        $push11=, 0($pop10)

Emit this:

i32.const       $push10=, 0
i32.load        $push11=, __stack_pointer($pop10)

It's not currently clear which is better, though there's a chance the second
form may be better at overall compression. We can revisit this when we have
more data; for now it makes sense to make PEI consistent with isel.

Differential Revision: http://reviews.llvm.org/D20411

llvm-svn: 270635
2016-05-24 23:47:41 +00:00
Konstantin Zhuravlyov 29ddd2b2f2 [AMDGPU][NFC] Rename ReserveTrapVGPRs -> ReserveRegs
Differential Revision: http://reviews.llvm.org/D20081

llvm-svn: 270594
2016-05-24 18:37:18 +00:00
Sam Kolton 11de370cca [AMDGPU] Assembler: rework parsing of optional operands.
Summary:
Change the process of parsing optional operands. All optional operands use the same parsing method: parseOptionalOperand().
No default values are added to OperandsVector.
Get rid of WORKAROUND_USE_DUMMY_OPERANDS_INSTEAD_MUTIPLE_DEFAULT_OPERANDS.

Reviewers: tstellarAMD, vpykhtin, artem.tamazov, nhaustov

Subscribers: arsenm, kzhuravl

Differential Revision: http://reviews.llvm.org/D20527

llvm-svn: 270556
2016-05-24 12:38:33 +00:00
Artem Tamazov 212a251c8d [AMDGPU][llvm-mc] Disassembler: support for TTMP/TBA/TMA registers.
Differential Revision: http://reviews.llvm.org/D20476

llvm-svn: 270552
2016-05-24 12:05:16 +00:00
Igor Breger 23c2090606 [llvm][AVX512][intrinsics] Fix vperm{b|w|d|q|ps|pd} intrinsics. The index is the second argument to the builtin function, but it is the first instruction operand.
Differential Revision: http://reviews.llvm.org/D20515

llvm-svn: 270548
2016-05-24 11:06:22 +00:00
Sagar Thakur 672c710de4 [MIPS][LLVM-MC] Fix Disassemble of Negative Offset
Patch by Nitesh Jain.

Summary: The type of Imm in MipsDisassembler.cpp was incorrect, since SignExtend64 returns an int64_t. As per the MIPSr6 documentation, the offset is added to the address of the instruction following the branch (not the branch itself) to form a PC-relative effective target address, hence 4 is added to the offset. The offsets of some test cases are updated to reflect the change due to the "+ 4" offset, and new test cases for negative offsets are added.

Reviewers: dsanders, vkalintiris
Differential Revision: http://reviews.llvm.org/D17540

llvm-svn: 270542
2016-05-24 09:57:10 +00:00
Simon Pilgrim 14000b3cea [CostModel][X86][XOP] Added XOP costmodel for BITREVERSE
This is possible now that we have a nice fast VPPERM solution. A framework for future intrinsic costs is added as well.

llvm-svn: 270537
2016-05-24 08:17:50 +00:00
Dan Gohman 73d7a555b9 [WebAssembly] Basic TargetTransformInfo support for SIMD128.
llvm-svn: 270508
2016-05-23 22:47:07 +00:00
James Y Knight fdcc727da6 [SPARC] Fix 8 and 16-bit atomic load and store.
They were accidentally using the 32-bit load/store instructions for
8/16-bit operations, due to incorrect patterns.

(8/16-bit cmpxchg and atomicrmw will be fixed in subsequent changes)

llvm-svn: 270486
2016-05-23 20:33:00 +00:00
Sanjay Patel 8099fb7e0a fix typo; NFC
llvm-svn: 270469
2016-05-23 18:01:20 +00:00
Sanjay Patel 13a0d49813 use range-loop; NFCI
llvm-svn: 270467
2016-05-23 18:00:50 +00:00
Dan Gohman 6c8f20d73d [WebAssembly] Speed up LiveIntervals updating.
Use the more specific LiveInterval::removeSegment instead of
LiveInterval::shrinkToUses when we know the specific range that's
being removed.

llvm-svn: 270463
2016-05-23 17:42:57 +00:00
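A minimal C++ sketch of the idea (surrounding names and the exact call site are assumed; header path as of this era): when the dead range is known exactly, dropping just that segment is cheaper than having shrinkToUses rescan every remaining use.

  #include "llvm/CodeGen/LiveIntervalAnalysis.h"

  static void removeKnownDeadRange(llvm::LiveIntervals &LIS, unsigned Reg,
                                   llvm::SlotIndex Start, llvm::SlotIndex End) {
    llvm::LiveInterval &LI = LIS.getInterval(Reg);
    // Targeted update: remove only the segment known to be dead, instead of
    // LIS.shrinkToUses(&LI), which recomputes liveness from all remaining uses.
    LI.removeSegment(Start, End);
  }
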
Krzysztof Parzyszek 4a3e285ec3 [Hexagon] Move some debug-only variable declarations into DEBUG
llvm-svn: 270459
2016-05-23 17:31:30 +00:00
Aaron Ballman a81264ba09 Removing a switch statement that contains only a default label; NFC.
llvm-svn: 270444
2016-05-23 15:52:59 +00:00
Diana Picus b2da61196e [BPF] Remove exit-on-error flag in test (PR27766)
The exit-on-error flag on the many_args1.ll test is needed to avoid an
unreachable in BPFTargetLowering::LowerCall. We can also avoid it by ignoring
any superfluous arguments to the call (i.e. any arguments after the first 5).

Fixes PR27766.

Differential Revision: http://reviews.llvm.org/D20471

v2 of r270419

llvm-svn: 270440
2016-05-23 14:57:19 +00:00
Renato Golin 2546b5ac5f Reverts "[BPF] Remove exit-on-error flag in test (PR27766)"
This patch reverts r270419 because it broke a lot of buildbots,
mostly Windows. We'd like help in investigating the issues, but
for now, it should stay out.

llvm-svn: 270433
2016-05-23 13:02:11 +00:00
Diana Picus eaf34cf67e [BPF] Remove exit-on-error flag in test (PR27766)
The exit-on-error flag on the many_args1.ll test is needed to avoid an
unreachable in BPFTargetLowering::LowerCall. We can also avoid it by ignoring
any superfluous arguments to the call (i.e. any arguments after the first 5).

Fixes PR27766

llvm-svn: 270419
2016-05-23 12:33:34 +00:00
Chris Dewhurst 6bc3e13133 [Sparc] LEON erratum fix - Delay Slot Filler modification.
This code should have been included with the previous check-in (r270417). It prevents the DelaySlotFiller pass from being used in functions where the erratum fix has been applied, as filling delay slots there would break the run-time code.

llvm-svn: 270418
2016-05-23 11:52:28 +00:00
Chris Dewhurst 4f7cac3674 [Sparc][LEON] LEON Erratum fix. Insert NOP after LD or LDF instruction.
Due to an erratum in some versions of LEON, we must insert a NOP after any LD or LDF instruction to ensure the processor has time to load the value correctly before using it. This pass will implement that erratum fix.

The code has no effect on other (non-LEON) Sparc processors.

Differential Revision: http://reviews.llvm.org/D20353

llvm-svn: 270417
2016-05-23 10:56:36 +00:00
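A minimal C++ sketch of such an erratum-workaround pass body (the SP::NOP opcode name and the pass boilerplate are assumed, not taken from the patch): place a NOP after every instruction that may load.

  #include "llvm/CodeGen/MachineFunction.h"
  #include "llvm/CodeGen/MachineInstrBuilder.h"
  #include "llvm/Target/TargetInstrInfo.h"

  // Sketch only: walk each block and insert a NOP after load instructions.
  static bool insertNopsAfterLoads(llvm::MachineFunction &MF,
                                   const llvm::TargetInstrInfo *TII) {
    bool Changed = false;
    for (llvm::MachineBasicBlock &MBB : MF)
      for (auto MI = MBB.begin(), E = MBB.end(); MI != E; ++MI)
        if (MI->mayLoad()) {
          llvm::BuildMI(MBB, std::next(MI), MI->getDebugLoc(),
                        TII->get(SP::NOP)); // opcode name assumed
          Changed = true;
        }
    return Changed;
  }
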
Sam Kolton 1bdcef7697 [AMDGPU] Assembler: refactor parsing of modifiers and immediates. Allow modifiers for imms.
Reviewers: nhaustov, tstellarAMD

Subscribers: kzhuravl, arsenm

Differential Revision: http://reviews.llvm.org/D20166

llvm-svn: 270415
2016-05-23 09:59:02 +00:00
Jacob Baungard Hansen 1e35ffe520 Test commit
llvm-svn: 270414
2016-05-23 09:41:44 +00:00
Craig Topper a6d0104823 [X86] Use instruction aliases to replace custom asm parser code for optimizing moves to use 2 byte VEX prefix.
llvm-svn: 270394
2016-05-23 04:02:27 +00:00
Craig Topper 95bdabd338 [AVX512] Add patterns to implement stores of extracts of least significant subvectors using XMM or YMM stores instead of the vector extract instructions.
Something similar is already done for AVX, and we had lost it going to AVX512VL.

llvm-svn: 270383
2016-05-22 23:44:33 +00:00
Sanjay Patel 2959ff4a88 [x86, AVX] don't add a vzeroupper if that's what the code is already doing (PR27823)
This isn't the complete fix, but it handles the trivial examples of duplicate vzero* ops in PR27823:
https://llvm.org/bugs/show_bug.cgi?id=27823
...and amusingly, the bogus cases already exist as regression tests, so let's take this baby step.

We'll need to do more in the general case where there's legitimate AVX usage in the function + there's
already a vzero in the code.

Differential Revision: http://reviews.llvm.org/D20477

llvm-svn: 270378
2016-05-22 20:22:47 +00:00
Igor Breger 2ba64ab9ae [AVX512] Implement missing patterns for any_extend load lowering.
Differential Revision: http://reviews.llvm.org/D20513

llvm-svn: 270357
2016-05-22 10:21:04 +00:00
Craig Topper 5f3fef884f [AVX512] The AVX512 file only need subtract_subvector index 0 patterns where the source is 512-bits. The 256-bit source patterns were redundant with AVX.
llvm-svn: 270356
2016-05-22 07:40:58 +00:00
Craig Topper a1041ff001 [AVX512] Add an AddedComplexity line to the 512-bit insert_subvector undef index 0 patterns. This gives them higher priority than the memory patterns. This matches AVX1/2.
llvm-svn: 270355
2016-05-22 07:40:40 +00:00
Craig Topper de5498546e [AVX512] Change the AddedComplexity on some patterns to match their AVX/SSE equivalents. This helps group them close together in the isel tables and enable table compression.
llvm-svn: 270354
2016-05-22 06:09:34 +00:00
Craig Topper 33c550cb95 [AVX512] Add a couple patterns to fix some cases where two vector mask inversions could appear in a row.
llvm-svn: 270344
2016-05-22 00:39:30 +00:00
Craig Topper dbac1ff9c1 [AVX512] Remove seemingly unnecessary AddedComplexity adjustment.
llvm-svn: 270343
2016-05-22 00:39:27 +00:00
Craig Topper 3fc3b4453d [X86] Remove unnecessary alignment check on patterns that use VEXTRACTF128 for integer types when only AVX1 is supported.
llvm-svn: 270335
2016-05-21 22:50:18 +00:00
Craig Topper db960eddfa [AVX512] Add patterns for extracting subvectors and storing to memory.
llvm-svn: 270334
2016-05-21 22:50:14 +00:00
Craig Topper 03b849eb44 [AVX512] Capitalize the Z in VEXTRACTPSzmr. Lowercase z has primarily been used to indicate the zero-masking behavior, which is not the case here. NFC
llvm-svn: 270333
2016-05-21 22:50:11 +00:00
Craig Topper d5da6a39f2 [AVX512] Rename vector extract instructions to use 'mr' instead of 'rm' to reflect the fact that memory is the destination.
llvm-svn: 270332
2016-05-21 22:50:09 +00:00
Craig Topper 08a6857c82 [AVX512] Fix a copy/paste mistake I made in a comment.
llvm-svn: 270331
2016-05-21 22:50:04 +00:00
Michael Zuckerman a63a129749 [Clang][AVX512][intrinsics] Fix rcp and sqrt intrinsics.
Differential Revision: http://reviews.llvm.org/D20438

llvm-svn: 270322
2016-05-21 14:44:18 +00:00
Michael Zuckerman 11b55b29d1 [Clang][AVX512][intrinsics] Fix vscalef intrinsics.
Differential Revision: http://reviews.llvm.org/D20324

llvm-svn: 270321
2016-05-21 11:09:53 +00:00
Craig Topper 02626c076b [AVX512] Add patterns for VEXTRACT v16i16->v8i16 and v32i8->v16i8. Disable AVX2 versions of vector extract when AVX512VL is enabled.
llvm-svn: 270318
2016-05-21 07:08:56 +00:00
Craig Topper 22ae353207 [AVX512] Disable AVX2 VPERMD, VPERMQ, VPERMPS, and VPERMPD patterns when AVX512VL is enabled. Also add shuffle comment printing for AVX512VL VPERMPD/VPERMQ to keep some tests that now use these instructions instead of the AVX2 ones.
llvm-svn: 270317
2016-05-21 06:07:18 +00:00
Craig Topper 6be70deda3 [AVX512] Disable AVX/AVX2 VBROADCASTSS/VBROADCASTSD patterns when AVX512VL is enabled.
llvm-svn: 270316
2016-05-21 05:47:25 +00:00
Matt Arsenault 7f9eabd2c2 AMDGPU: Define priorities for register classes
Allocating larger register classes first should give better allocation
results (and more importantly for myself, make the lit tests more stable
with respect to scheduler changes).

Patch by Matthias Braun

llvm-svn: 270312
2016-05-21 03:55:07 +00:00
Craig Topper 97565ded80 [AVX512] Disable AVX/AVX2 patterns for VPSADBW and VPMULUDQ when the AVX512VL/AVX512BWI equivalents are available.
llvm-svn: 270311
2016-05-21 03:52:32 +00:00
Craig Topper b395105584 [X86] Convert some SSE2/AVX2 intrinsics to ISD opcodes during lowering instead of pattern matching the intrinsics. This unifies handling with AVX512 and allows these intrinsics to select EVEX encoded instructions to increase available registers.
llvm-svn: 270310
2016-05-21 03:52:28 +00:00
Matt Arsenault 71e6676169 AMDGPU: Cleanup lowering actions
These are kind of a mess and hard to follow, particularly
for loads and stores. Fix various redundant, unnecessary
and dead settings.

llvm-svn: 270307
2016-05-21 02:27:49 +00:00
Matt Arsenault 81a709503d AMDGPU: Fix high bits after division optimization
This is essentially doing a 24-bit signed division with FP.
We need to truncate to the N bit result.

llvm-svn: 270305
2016-05-21 01:53:33 +00:00
Dylan McKay 52ed0aa203 [AVR] Add AVRMCAsmInfo
llvm-svn: 270302
2016-05-21 01:06:37 +00:00
Matt Arsenault b6e1cc2a92 AMDGPU: Fix verifier error when spilling SGPRs
The current SGPR spilling test does not stress this
because it is using s_buffer_load instructions to
increase SGPR pressure and spill, but their output
operands have the same SReg_32_XM0 constraint. This fixes
an error when the SReg_32 output from most instructions
is spilled.

llvm-svn: 270301
2016-05-21 00:53:42 +00:00
Matt Arsenault 8f5e008534 AMDGPU: Fix relationship between SReg_32 and SReg_32_XM0
llvm-svn: 270300
2016-05-21 00:53:28 +00:00
Dylan McKay 28ae31731e [AVR] Fix header files in MCTargetDesc
Everything now compiles successfully, but there are still undefined
references.

llvm-svn: 270298
2016-05-21 00:35:07 +00:00
Matt Arsenault 4945905f5f AMDGPU: Handle cbranch vccz/vccnz
llvm-svn: 270297
2016-05-21 00:29:40 +00:00
Matt Arsenault 72fcd5f597 AMDGPU: Implement ReverseBranchCondition
llvm-svn: 270296
2016-05-21 00:29:34 +00:00
Matt Arsenault 6d09380532 AMDGPU: Implement AnalyzeBranch
Original patch by Tom Stellard

llvm-svn: 270295
2016-05-21 00:29:27 +00:00
Dan Gohman b7c2400fa7 [WebAssembly] Optimize away return instructions using fallthroughs.
This saves a small amount of code size, and is a first small step toward
passing values on the stack across block boundaries.

Differential Review: http://reviews.llvm.org/D20450

llvm-svn: 270294
2016-05-21 00:21:56 +00:00
Dylan McKay be8e2e0fa8 [AVR] Fix signature of AVRTargetMachine constructor
llvm-svn: 270292
2016-05-20 23:39:04 +00:00
Justin Bogner dc8af06b27 SDAG: Implement Select instead of SelectImpl in PPCDAGToDAGISel
- Where we were returning a node before, call ReplaceNode instead.
- Where we would return null to fall back to another selector, rename
  the method to try* and return a bool for success.
- Where we were calling SelectNodeTo, just return afterwards.

Part of llvm.org/pr26808.

llvm-svn: 270283
2016-05-20 21:43:23 +00:00
Jacques Pienaar 5ffdef55f0 [lanai] Change reloc to use PIC_ by default and cleanup.
* Change reloc to PIC_;
* Cleanup (clang-format & modify test);

llvm-svn: 270282
2016-05-20 21:41:53 +00:00
David Majnemer 498f2fd11b Address post-review for r270246
This gets rid of some unnecessary SmallStrings in
X86TargetMachine::getSubtargetImpl.

No functionality change is intended.

llvm-svn: 270270
2016-05-20 20:41:24 +00:00
Jun Bum Lim b21d4e17a2 [AArch64] Disable narrow load merge by default
Summary:
As this optimization converts two loads into one load with two shift instructions,
it could potentially hurt performance if a loop is arithmetic operation intensive.

Reviewers: t.p.northover, mcrosier, jmolloy

Subscribers: evandro, jmolloy, aemerson, rengolin, mcrosier, llvm-commits

Differential Revision: http://reviews.llvm.org/D20172

llvm-svn: 270251
2016-05-20 18:45:49 +00:00
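An illustrative C++ view of what a merged narrow load looks like at the source level (little-endian layout assumed; not the actual AArch64 codegen): two 16-bit loads become one 32-bit load plus shift/mask, which adds ALU work that can hurt arithmetic-heavy loops.

  #include <cstdint>
  #include <cstring>

  static void load_pair_merged(const uint8_t *p, uint16_t &a, uint16_t &b) {
    uint32_t w;
    std::memcpy(&w, p, sizeof(w));       // one wide load replaces two narrow ones
    a = static_cast<uint16_t>(w);        // low half
    b = static_cast<uint16_t>(w >> 16);  // high half, at the cost of a shift
  }
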
David Majnemer ca29023b02 [X86] Reduce memory allocations in X86TargetMachine::getSubtargetImpl
We performed a number of memory allocations each time getTTI was called;
remove them by using SmallString.
No functionality change intended.

llvm-svn: 270246
2016-05-20 18:16:06 +00:00
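A minimal C++ sketch of the technique (function and variable names assumed; not the actual X86 code): building the lookup key in a stack-backed SmallString avoids per-call heap allocations as long as the result fits in the inline buffer.

  #include "llvm/ADT/SmallString.h"
  #include "llvm/ADT/StringRef.h"

  static llvm::SmallString<512> makeSubtargetKey(llvm::StringRef CPU,
                                                 llvm::StringRef FS) {
    llvm::SmallString<512> Key;
    Key += CPU; // stays in the inline buffer for typical string sizes
    Key += FS;
    return Key;
  }
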
Sanjay Patel 5496a23316 fix comments; NFC
llvm-svn: 270237
2016-05-20 17:07:19 +00:00
Sanjay Patel 1dc57cb944 use range-loops; NFCI
llvm-svn: 270236
2016-05-20 17:00:10 +00:00
Sanjay Patel 8bc63b2f47 fix documentation comments; NFC
llvm-svn: 270234
2016-05-20 16:46:01 +00:00
Simon Pilgrim 55ef3da27b [X86][AVX] Generalized matching for target shuffle combines
This patch is a first step towards a more extendible method of matching combined target shuffle masks.

Initially this just pulls out the existing basic mask matches and adds support for some 256/512 bit equivalents. Future patterns will require a number of features to be added but I wanted to keep this patch simple.

I hope we can avoid duplication between shuffle lowering and combining and share more complex pattern match functions in future commits.

Differential Revision: http://reviews.llvm.org/D19198

llvm-svn: 270230
2016-05-20 16:19:30 +00:00
Rafael Espindola c7e9813228 Refactor X86 symbol access classification.
This refactors the logic in X86 to avoid code duplication. It also
splits it in two steps: it first decides if a symbol is local to the DSO
and then uses that information to decide how to access it.

The first part is implemented by shouldAssumeDSOLocal. It is not in any
way specific to X86. In a followup patch I intend to move it to
somewhere common and reused it in other backends.

llvm-svn: 270209
2016-05-20 12:20:10 +00:00
Rafael Espindola 8571aa3d5d Simplify handling of hidden stubs on PowerPC.
We now handle them just like non-hidden ones. This was already the case
on x86 (r207518) and arm (r207517).

llvm-svn: 270205
2016-05-20 12:00:52 +00:00
NAKAMURA Takumi 0e57b13743 SparcISelLowering.cpp: Add missing StringSwitch.h
llvm-svn: 270200
2016-05-20 10:53:56 +00:00
Chris Dewhurst ad74117af4 [Sparc] Implement getRegisterByName.
Allows Sparc registers to be specifically referred to in inline assembly.

llvm-svn: 270198
2016-05-20 10:21:01 +00:00
Chris Dewhurst 0dfa6bc004 [Sparc] Enable more inline assembly constraints.
Note: This is specifically to allow GCC's test pr44707 to pass.

Trivial change, not put up as a differential revision. Test included.

llvm-svn: 270192
2016-05-20 09:03:01 +00:00
Craig Topper b182715a52 [X86] Fix another AVX pattern to only be disabled if VLX and BWI are supported.
llvm-svn: 270182
2016-05-20 05:10:27 +00:00
Jacques Pienaar 813e83734d [lanai] Use Optional<Reloc> in LanaiTargetMachine.
Follow r269988 and use Optional<Reloc>.

llvm-svn: 270176
2016-05-20 03:21:37 +00:00