Commit Graph

James Molloy 4eba0154fb [AArch64] Slight cleanup in FPLoadBalancing
Instead of the convoluted if-statement, we can just use getColor. This also fixes
a bug where we relied upon the parity of tablegen-generated register indexes
(instead of using the machine encoding).
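
A minimal standalone sketch of the distinction, with hypothetical names rather than the pass's actual code:

```c++
#include <cassert>

// Hypothetical stand-ins; the real pass defines its own color helper.
enum class Color { Even, Odd };

// Robust: derive the color from the machine encoding of the register,
// which is fixed by the ISA.
Color getColor(unsigned MachineEncoding) {
    return (MachineEncoding % 2) ? Color::Odd : Color::Even;
}

int main() {
    // Fragile (old) approach: keying off the parity of a tablegen-assigned
    // index, an arbitrary enumeration order that can silently change.
    unsigned TablegenIndex = 7; // tool-assigned, not architectural
    (void)TablegenIndex;        // the fix: never consult this

    unsigned MachineEncoding = 4;
    assert(getColor(MachineEncoding) == Color::Even);
}
```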

llvm-svn: 261990
2016-02-26 09:10:53 +00:00
Craig Topper c929349912 [X86] Null out some redundant patterns for masked vector register to register moves. These can be accomplished with both aligned and unaligned opcodes.
Currently the aligned opcodes are the ones being used, so remove the redundant patterns for the unaligned versions. But don't do this for the byte and word vector types, since they don't have aligned versions.

llvm-svn: 261985
2016-02-26 06:50:29 +00:00
Craig Topper d50b5f8abc [X86] Add test cases for r261977 and fix a grammatical error.
llvm-svn: 261983
2016-02-26 06:50:24 +00:00
Craig Topper 4d187630de [X86] Remove a couple returns after llvm_unreachables. NFC
llvm-svn: 261979
2016-02-26 05:29:39 +00:00
Craig Topper a11be0be89 [X86] Use inclusive ranges for XMM/YMM/ZMM registers in is32Extended and isX86_64ExtendedReg. NFC
llvm-svn: 261978
2016-02-26 05:29:35 +00:00
Craig Topper 29c2273369 [X86] Explicitly diagnose use of %xmm16-%xmm31, %ymm16-%ymm31 and %zmm16-%zmm31 when AVX512 is not enabled in the asm parser.
llvm-svn: 261977
2016-02-26 05:29:32 +00:00
David L Kreitzer 602ba70a0b Reformatted a comment to fit the 80-column limit. NFC.
llvm-svn: 261916
2016-02-25 18:50:45 +00:00
Hongbin Zheng 3f97840721 Introduce analysis pass to compute PostDominators in the new pass manager. NFC
Differential Revision: http://reviews.llvm.org/D17537

llvm-svn: 261902
2016-02-25 17:54:07 +00:00
Tim Northover aa35bd26c7 ARM: disallow pc as a base register in Thumb2 memory ops.
These should all be deferring to the "OP (literal)" variant according to the
ARM ARM.

llvm-svn: 261895
2016-02-25 16:54:52 +00:00
Geoff Berry 62d472507e [AArch64] Clean up callee-save CFI emission. NFC.
Summary:
Avoid special case for FP, LR CFI emission and just allow general
AArch64FrameLowering::emitCalleeSavedFrameMoves() to handle them.  Also,
stop recalculating the stack offsets in emitCalleeSavedFrameMoves()
since we can just reuse the previously calculated offset stored in the
MachineFrameInfo.

Depends on D17000

Reviewers: t.p.northover, rengolin, mcrosier, jmolloy

Subscribers: aemerson, rengolin, mcrosier, llvm-commits

Differential Revision: http://reviews.llvm.org/D17004

llvm-svn: 261885
2016-02-25 16:36:08 +00:00
Nikolay Haustov 161a158e5c [AMDGPU] Disassembler: Support for all VOP1 instructions.
Support all instructions with the VOP1 encoding and 32- or 64-bit operands for the VI subtarget:

VGPR_32 and VReg_64 operand register classes
VS_32 and VS_64 operand register classes with inline and literal constants
Tests for VOP1 instructions.

Patch by: skolton

Reviewers: arsenm, tstellarAMD

Review: http://reviews.llvm.org/D17194
llvm-svn: 261878
2016-02-25 16:09:14 +00:00
Igor Breger 45ef10f110 AVX512F: Add GATHER/SCATTER assembler Intel syntax tests for knl/skx/avx. Change memory operand parser handling.
Differential Revision: http://reviews.llvm.org/D17564

llvm-svn: 261862
2016-02-25 13:30:17 +00:00
Hrvoje Varga 46458d0bcc [mips][microMIPS] Implement DINSU, DINSM, DINS instructions
Differential Revision: http://reviews.llvm.org/D16181

llvm-svn: 261860
2016-02-25 12:53:29 +00:00
Nikolay Haustov 2e4c72977c [AMDGPU] Assembler: Simplify handling of optional operands
Resubmit with the index problem fixed. Verified with valgrind.

Prepare to support DPP encodings.

For DPP encodings, we want row_mask/bank_mask/bound_ctrl to be optional operands.
However, this means that when parsing an instruction which has no mnemonic prefix,
we cannot add default values for both the VOP3 and the DPP optional operands
to the OperandVector - neither instruction would match. So add default values
for optional operands to the MCInst during conversion instead.

Mark more operands as IsOptional = 1 in .td files.
Do not add default values for optional operands to OperandVector in AMDGPUAsmParser.
Add default values for optional operands during conversion using the new helper addOptionalImmOperand.
Change cvtVOP3_2_mod to check an instruction flag instead of the presence of modifiers. In the future, the cvtVOP3* functions can be combined into one.
Separate cvtFlat and cvtFlatAtomic.
Fix CNDMASK_B32 definition to have no modifiers.
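
A standalone sketch of the conversion-time approach, using a simplified stand-in for MCInst (only addOptionalImmOperand is named by this commit; everything else below is illustrative):

```c++
#include <cstdint>
#include <vector>

// Simplified stand-in for MCInst; the real conversion works on MC-layer
// types inside AMDGPUAsmParser.
struct Inst { std::vector<int64_t> Operands; };

// Append an optional immediate while converting matched operands into the
// instruction: use the parsed value if the parser saw the operand,
// otherwise fall back to the encoding's default.
void addOptionalImmOperand(Inst &I, bool ParserSawIt, int64_t Parsed,
                           int64_t Default) {
    I.Operands.push_back(ParserSawIt ? Parsed : Default);
}

int main() {
    Inst I;
    // e.g. row_mask was omitted in the source, so the default is emitted.
    addOptionalImmOperand(I, /*ParserSawIt=*/false, /*Parsed=*/0,
                          /*Default=*/0xF);
    return I.Operands.back() == 0xF ? 0 : 1;
}
```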

Review: http://reviews.llvm.org/D17445
llvm-svn: 261856
2016-02-25 10:58:54 +00:00
Simon Pilgrim e4178ae510 [X86][SSE3] Added combine support for MOVDDUP/MOVSHDUP/MOVSLDUP target shuffles
This is possible now that PerformShuffleCombine can handle unary shuffles.

llvm-svn: 261843
2016-02-25 09:12:12 +00:00
NAKAMURA Takumi 3d3d0f4151 Revert r261742, "[AMDGPU] Assembler: Simplify handling of optional operands"
It introduced undefined behavior.

llvm-svn: 261839
2016-02-25 08:35:27 +00:00
Elena Demikhovsky e5bbca6ae2 Optimized loading (zextload) of i1 value from memory.
This patch is a partial revert of https://llvm.org/svn/llvm-project/llvm/trunk@237793.
Extra "and" causes performance degradation.

We assume that i1 is stored in zero-extended form. And store operation is responsible for zeroing upper bits.
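
The invariant, modeled in plain C++ rather than SelectionDAG terms:

```c++
#include <cassert>
#include <cstdint>

// The store side writes an i1 as a fully zero-extended byte (0 or 1)...
void store_i1(uint8_t *Mem, bool V) {
    *Mem = V ? 1 : 0; // the store zeroes the upper bits
}

// ...so the load side can zero-extend with a plain byte load and no
// extra AND to mask the upper bits.
uint32_t zextload_i1(const uint8_t *Mem) {
    return *Mem;
}

int main() {
    uint8_t Mem;
    store_i1(&Mem, true);
    assert(zextload_i1(&Mem) == 1);
}
```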

Differential Revision: http://reviews.llvm.org/D17541

llvm-svn: 261828
2016-02-25 07:05:12 +00:00
Tim Northover ca8e7e2e23 AArch64: remove CRC feature from Cyclone.
Turns out we don't actually support those instructions.

llvm-svn: 261759
2016-02-24 18:10:17 +00:00
Anton Korobeynikov 064dbac212 `MSP430InstrInfo::loadRegFromStackSlot` forgets to set register def.
Summary:
For instance, compiling the IR below results in an assertion failure:

```
llc: ../lib/CodeGen/InlineSpiller.cpp:1140: bool (anonymous namespace)::InlineSpiller::foldMemoryOperand(ArrayRef<std::pair<MachineInstr *, unsigned int> >, llvm::MachineInstr *): Assertion `MO->isDead() && "Cannot fold physreg def"' failed.
#0 0x00007f50fbcf353e llvm::sys::PrintStackTrace(llvm::raw_ostream&) /home/h/3rd/llvm/build/../lib/Support/Unix/Signals.inc:321:15
#1 0x00007f50fbcf3929 PrintStackTraceSignalHandler(void*) /home/h/3rd/llvm/build/../lib/Support/Unix/Signals.inc:380:1
#2 0x00007f50fbcf22a3 llvm::sys::RunSignalHandlers() /home/h/3rd/llvm/build/../lib/Support/Signals.cpp:45:5
#3 0x00007f50fbcf3bb4 SignalHandler(int) /home/h/3rd/llvm/build/../lib/Support/Unix/Signals.inc:210:1
#4 0x00007f50fa87a180 (/lib/x86_64-linux-gnu/libc.so.6+0x35180)
#5 0x00007f50fa87a107 gsignal (/lib/x86_64-linux-gnu/libc.so.6+0x35107)
#6 0x00007f50fa87b4e8 abort (/lib/x86_64-linux-gnu/libc.so.6+0x364e8)
#7 0x00007f50fa873226 (/lib/x86_64-linux-gnu/libc.so.6+0x2e226)
#8 0x00007f50fa8732d2 (/lib/x86_64-linux-gnu/libc.so.6+0x2e2d2)
#9 0x00007f50fddd9287 (anonymous namespace)::InlineSpiller::foldMemoryOperand(llvm::ArrayRef<std::pair<llvm::MachineInstr*, unsigned int> >, llvm::MachineInstr*) /home/h/3rd/llvm/build/../lib/CodeGen/InlineSpiller.cpp:1141:21
#10 0x00007f50fddd9ee9 (anonymous namespace)::InlineSpiller::spillAroundUses(unsigned int) /home/h/3rd/llvm/build/../lib/CodeGen/InlineSpiller.cpp:1286:9
#11 0x00007f50fddd388b (anonymous namespace)::InlineSpiller::spillAll() /home/h/3rd/llvm/build/../lib/CodeGen/InlineSpiller.cpp:1338:21
#12 0x00007f50fddd221d (anonymous namespace)::InlineSpiller::spill(llvm::LiveRangeEdit&) /home/h/3rd/llvm/build/../lib/CodeGen/InlineSpiller.cpp:1391:3
#13 0x00007f50fdfd921b (anonymous namespace)::RAGreedy::selectOrSplitImpl(llvm::LiveInterval&, llvm::SmallVectorImpl<unsigned int>&, llvm::SmallSet<unsigned int, 16u, std::less<unsigned int> >&, unsigned int) /home/h/3rd/llvm/build/../lib/CodeGen/RegAllocGreedy.cpp:2555:5
#14 0x00007f50fdfd647b (anonymous namespace)::RAGreedy::selectOrSplit(llvm::LiveInterval&, llvm::SmallVectorImpl<unsigned int>&) /home/h/3rd/llvm/build/../lib/CodeGen/RegAllocGreedy.cpp:2221:12
#15 0x00007f50fdfc89f9 llvm::RegAllocBase::allocatePhysRegs() /home/h/3rd/llvm/build/../lib/CodeGen/RegAllocBase.cpp:110:14
#16 0x00007f50fdfd6337 (anonymous namespace)::RAGreedy::runOnMachineFunction(llvm::MachineFunction&) /home/h/3rd/llvm/build/../lib/CodeGen/RegAllocGreedy.cpp:2611:3
#17 0x00007f50fded33ee llvm::MachineFunctionPass::runOnFunction(llvm::Function&) /home/h/3rd/llvm/build/../lib/CodeGen/MachineFunctionPass.cpp:43:3
#18 0x00007f50fd6cdc6f llvm::FPPassManager::runOnFunction(llvm::Function&) /home/h/3rd/llvm/build/../lib/IR/LegacyPassManager.cpp:1550:23
#19 0x00007f50fd6cdf85 llvm::FPPassManager::runOnModule(llvm::Module&) /home/h/3rd/llvm/build/../lib/IR/LegacyPassManager.cpp:1571:16
#20 0x00007f50fd6ce71a (anonymous namespace)::MPPassManager::runOnModule(llvm::Module&) /home/h/3rd/llvm/build/../lib/IR/LegacyPassManager.cpp:1627:23
#21 0x00007f50fd6ce246 llvm::legacy::PassManagerImpl::run(llvm::Module&) /home/h/3rd/llvm/build/../lib/IR/LegacyPassManager.cpp:1730:16
#22 0x00007f50fd6cec31 llvm::legacy::PassManager::run(llvm::Module&) /home/h/3rd/llvm/build/../lib/IR/LegacyPassManager.cpp:1761:3
#23 0x0000000000415bdc compileModule(char**, llvm::LLVMContext&) /home/h/3rd/llvm/build/../tools/llc/llc.cpp:405:5
#24 0x0000000000414571 main /home/h/3rd/llvm/build/../tools/llc/llc.cpp:211:13
#25 0x00007f50fa866b45 __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x21b45)
#26 0x0000000000414296 _start (/home/h/3rd/llvm/build/bin/llc+0x414296)
Stack dump:
0.	Program arguments: ./bin/llc -mtriple msp430 loadstore.ll 
1.	Running pass 'Function Pass Manager' on module 'loadstore.ll'.
2.	Running pass 'Greedy Register Allocator' on function '@inc'
```

Original IR:

```llvm
%struct.VeryLarge = type { i8, i32, i32, i32, i32, i32, i32, i32, i32, i32, i32, i32, i32, i32, i32, i32, i32, i32, i32, i32, i32, i32, i32, i32, i32, i32, i32, i32, i32, i32, i32, i32, i32 }

; Function Attrs: norecurse nounwind
define void @inc(%struct.VeryLarge* noalias nocapture sret %agg.result, %struct.VeryLarge* byval align 1 %s) #0 {
entry:
  %p0 = getelementptr inbounds %struct.VeryLarge, %struct.VeryLarge* %s, i32 0, i32 0
  %0 = load i8, i8* %p0, align 1, !tbaa !1
  %p1 = getelementptr inbounds %struct.VeryLarge, %struct.VeryLarge* %s, i32 0, i32 1
  %1 = load i32, i32* %p1, align 1, !tbaa !6
  %p2 = getelementptr inbounds %struct.VeryLarge, %struct.VeryLarge* %s, i32 0, i32 2
  %2 = load i32, i32* %p2, align 1, !tbaa !7
  %p3 = getelementptr inbounds %struct.VeryLarge, %struct.VeryLarge* %s, i32 0, i32 3
  %3 = load i32, i32* %p3, align 1, !tbaa !8
  %p4 = getelementptr inbounds %struct.VeryLarge, %struct.VeryLarge* %s, i32 0, i32 4
  %4 = load i32, i32* %p4, align 1, !tbaa !9
  %p5 = getelementptr inbounds %struct.VeryLarge, %struct.VeryLarge* %s, i32 0, i32 5
  %5 = load i32, i32* %p5, align 1, !tbaa !10
  %p6 = getelementptr inbounds %struct.VeryLarge, %struct.VeryLarge* %s, i32 0, i32 6
  %6 = load i32, i32* %p6, align 1, !tbaa !11
  %p7 = getelementptr inbounds %struct.VeryLarge, %struct.VeryLarge* %s, i32 0, i32 7
  %7 = load i32, i32* %p7, align 1, !tbaa !12
  %p8 = getelementptr inbounds %struct.VeryLarge, %struct.VeryLarge* %s, i32 0, i32 8
  %8 = load i32, i32* %p8, align 1, !tbaa !13
  %p9 = getelementptr inbounds %struct.VeryLarge, %struct.VeryLarge* %s, i32 0, i32 9
  %9 = load i32, i32* %p9, align 1, !tbaa !14
  %p10 = getelementptr inbounds %struct.VeryLarge, %struct.VeryLarge* %s, i32 0, i32 10
  %10 = load i32, i32* %p10, align 1, !tbaa !15
  %p11 = getelementptr inbounds %struct.VeryLarge, %struct.VeryLarge* %s, i32 0, i32 11
  %11 = load i32, i32* %p11, align 1, !tbaa !16
  %p12 = getelementptr inbounds %struct.VeryLarge, %struct.VeryLarge* %s, i32 0, i32 12
  %12 = load i32, i32* %p12, align 1, !tbaa !17
  %p13 = getelementptr inbounds %struct.VeryLarge, %struct.VeryLarge* %s, i32 0, i32 13
  %13 = load i32, i32* %p13, align 1, !tbaa !18
  %p14 = getelementptr inbounds %struct.VeryLarge, %struct.VeryLarge* %s, i32 0, i32 14
  %14 = load i32, i32* %p14, align 1, !tbaa !19
  %p15 = getelementptr inbounds %struct.VeryLarge, %struct.VeryLarge* %s, i32 0, i32 15
  %15 = load i32, i32* %p15, align 1, !tbaa !20
  %p16 = getelementptr inbounds %struct.VeryLarge, %struct.VeryLarge* %s, i32 0, i32 16
  %16 = load i32, i32* %p16, align 1, !tbaa !21
  %p17 = getelementptr inbounds %struct.VeryLarge, %struct.VeryLarge* %s, i32 0, i32 17
  %17 = load i32, i32* %p17, align 1, !tbaa !22
  %p18 = getelementptr inbounds %struct.VeryLarge, %struct.VeryLarge* %s, i32 0, i32 18
  %18 = load i32, i32* %p18, align 1, !tbaa !23
  %p19 = getelementptr inbounds %struct.VeryLarge, %struct.VeryLarge* %s, i32 0, i32 19
  %19 = load i32, i32* %p19, align 1, !tbaa !24
  %p20 = getelementptr inbounds %struct.VeryLarge, %struct.VeryLarge* %s, i32 0, i32 20
  %20 = load i32, i32* %p20, align 1, !tbaa !25
  %p21 = getelementptr inbounds %struct.VeryLarge, %struct.VeryLarge* %s, i32 0, i32 21
  %21 = load i32, i32* %p21, align 1, !tbaa !26
  %p22 = getelementptr inbounds %struct.VeryLarge, %struct.VeryLarge* %s, i32 0, i32 22
  %22 = load i32, i32* %p22, align 1, !tbaa !27
  %p23 = getelementptr inbounds %struct.VeryLarge, %struct.VeryLarge* %s, i32 0, i32 23
  %23 = load i32, i32* %p23, align 1, !tbaa !28
  %p24 = getelementptr inbounds %struct.VeryLarge, %struct.VeryLarge* %s, i32 0, i32 24
  %24 = load i32, i32* %p24, align 1, !tbaa !29
  %p25 = getelementptr inbounds %struct.VeryLarge, %struct.VeryLarge* %s, i32 0, i32 25
  %25 = load i32, i32* %p25, align 1, !tbaa !30
  %p26 = getelementptr inbounds %struct.VeryLarge, %struct.VeryLarge* %s, i32 0, i32 26
  %26 = load i32, i32* %p26, align 1, !tbaa !31
  %p27 = getelementptr inbounds %struct.VeryLarge, %struct.VeryLarge* %s, i32 0, i32 27
  %27 = load i32, i32* %p27, align 1, !tbaa !32
  %p28 = getelementptr inbounds %struct.VeryLarge, %struct.VeryLarge* %s, i32 0, i32 28
  %28 = load i32, i32* %p28, align 1, !tbaa !33
  %p29 = getelementptr inbounds %struct.VeryLarge, %struct.VeryLarge* %s, i32 0, i32 29
  %29 = load i32, i32* %p29, align 1, !tbaa !34
  %p30 = getelementptr inbounds %struct.VeryLarge, %struct.VeryLarge* %s, i32 0, i32 30
  %30 = load i32, i32* %p30, align 1, !tbaa !35
  %p31 = getelementptr inbounds %struct.VeryLarge, %struct.VeryLarge* %s, i32 0, i32 31
  %31 = load i32, i32* %p31, align 1, !tbaa !36
  %p32 = getelementptr inbounds %struct.VeryLarge, %struct.VeryLarge* %s, i32 0, i32 32
  %32 = load i32, i32* %p32, align 1, !tbaa !37
  %add = add i8 %0, 1
  store i8 %add, i8* %p0, align 1, !tbaa !1
  %add2 = add i32 %1, 2
  store i32 %add2, i32* %p1, align 1, !tbaa !6
  %add3 = add i32 %2, 3
  store i32 %add3, i32* %p2, align 1, !tbaa !7
  %add4 = add i32 %3, 4
  store i32 %add4, i32* %p3, align 1, !tbaa !8
  %add5 = add i32 %4, 5
  store i32 %add5, i32* %p4, align 1, !tbaa !9
  %add6 = add i32 %5, 6
  store i32 %add6, i32* %p5, align 1, !tbaa !10
  %add7 = add i32 %6, 7
  store i32 %add7, i32* %p6, align 1, !tbaa !11
  %add8 = add i32 %7, 8
  store i32 %add8, i32* %p7, align 1, !tbaa !12
  %add9 = add i32 %8, 9
  store i32 %add9, i32* %p8, align 1, !tbaa !13
  %add10 = add i32 %9, 10
  store i32 %add10, i32* %p9, align 1, !tbaa !14
  %add11 = add i32 %10, 11
  store i32 %add11, i32* %p10, align 1, !tbaa !15
  %add12 = add i32 %11, 12
  store i32 %add12, i32* %p11, align 1, !tbaa !16
  %add13 = add i32 %12, 13
  store i32 %add13, i32* %p12, align 1, !tbaa !17
  %add14 = add i32 %13, 14
  store i32 %add14, i32* %p13, align 1, !tbaa !18
  %add15 = add i32 %14, 15
  store i32 %add15, i32* %p14, align 1, !tbaa !19
  %add16 = add i32 %15, 16
  store i32 %add16, i32* %p15, align 1, !tbaa !20
  %add17 = add i32 %16, 17
  store i32 %add17, i32* %p16, align 1, !tbaa !21
  %add18 = add i32 %17, 18
  store i32 %add18, i32* %p17, align 1, !tbaa !22
  %add19 = add i32 %18, 19
  store i32 %add19, i32* %p18, align 1, !tbaa !23
  %add20 = add i32 %19, 20
  store i32 %add20, i32* %p19, align 1, !tbaa !24
  %add21 = add i32 %20, 21
  store i32 %add21, i32* %p20, align 1, !tbaa !25
  %add22 = add i32 %21, 22
  store i32 %add22, i32* %p21, align 1, !tbaa !26
  %add23 = add i32 %22, 23
  store i32 %add23, i32* %p22, align 1, !tbaa !27
  %add24 = add i32 %23, 24
  store i32 %add24, i32* %p23, align 1, !tbaa !28
  %add25 = add i32 %24, 25
  store i32 %add25, i32* %p24, align 1, !tbaa !29
  %add26 = add i32 %25, 26
  store i32 %add26, i32* %p25, align 1, !tbaa !30
  %add27 = add i32 %26, 27
  store i32 %add27, i32* %p26, align 1, !tbaa !31
  %add28 = add i32 %27, 28
  store i32 %add28, i32* %p27, align 1, !tbaa !32
  %add29 = add i32 %28, 29
  store i32 %add29, i32* %p28, align 1, !tbaa !33
  %add30 = add i32 %29, 30
  store i32 %add30, i32* %p29, align 1, !tbaa !34
  %add31 = add i32 %30, 31
  store i32 %add31, i32* %p30, align 1, !tbaa !35
  %add32 = add i32 %31, 32
  store i32 %add32, i32* %p31, align 1, !tbaa !36
  %add33 = add i32 %32, 33
  store i32 %add33, i32* %p32, align 1, !tbaa !37
  %33 = getelementptr inbounds %struct.VeryLarge, %struct.VeryLarge* %agg.result, i32 0, i32 0
  call void @llvm.memcpy.p0i8.p0i8.i32(i8* %33, i8* %p0, i32 129, i32 1, i1 false), !tbaa.struct !38
  ret void
}

; Function Attrs: argmemonly nounwind
declare void @llvm.memcpy.p0i8.p0i8.i32(i8* nocapture, i8* nocapture readonly, i32, i32, i1) #1

attributes #0 = { norecurse nounwind "disable-tail-calls"="false" "less-precise-fpmad"="false" "no-frame-pointer-elim"="true" "no-frame-pointer-elim-non-leaf" "no-infs-fp-math"="false" "no-nans-fp-math"="false" "stack-protector-buffer-size"="8" "unsafe-fp-math"="false" "use-soft-float"="false" }
attributes #1 = { argmemonly nounwind }

!llvm.ident = !{!0}

!0 = !{!"clang version 3.8.0 (git://github.com/llvm-mirror/clang 40ef2b7531472c41212c4719a9294aeb7bddebbc) (git://github.com/llvm-mirror/llvm c601eaf55606dfb9ad372b514b77aa00d1409be1)"}
!1 = !{!2, !3, i64 0}
!2 = !{!"", !3, i64 0, !5, i64 1, !5, i64 5, !5, i64 9, !5, i64 13, !5, i64 17, !5, i64 21, !5, i64 25, !5, i64 29, !5, i64 33, !5, i64 37, !5, i64 41, !5, i64 45, !5, i64 49, !5, i64 53, !5, i64 57, !5, i64 61, !5, i64 65, !5, i64 69, !5, i64 73, !5, i64 77, !5, i64 81, !5, i64 85, !5, i64 89, !5, i64 93, !5, i64 97, !5, i64 101, !5, i64 105, !5, i64 109, !5, i64 113, !5, i64 117, !5, i64 121, !5, i64 125}
!3 = !{!"omnipotent char", !4, i64 0}
!4 = !{!"Simple C/C++ TBAA"}
!5 = !{!"int", !3, i64 0}
!6 = !{!2, !5, i64 1}
!7 = !{!2, !5, i64 5}
!8 = !{!2, !5, i64 9}
!9 = !{!2, !5, i64 13}
!10 = !{!2, !5, i64 17}
!11 = !{!2, !5, i64 21}
!12 = !{!2, !5, i64 25}
!13 = !{!2, !5, i64 29}
!14 = !{!2, !5, i64 33}
!15 = !{!2, !5, i64 37}
!16 = !{!2, !5, i64 41}
!17 = !{!2, !5, i64 45}
!18 = !{!2, !5, i64 49}
!19 = !{!2, !5, i64 53}
!20 = !{!2, !5, i64 57}
!21 = !{!2, !5, i64 61}
!22 = !{!2, !5, i64 65}
!23 = !{!2, !5, i64 69}
!24 = !{!2, !5, i64 73}
!25 = !{!2, !5, i64 77}
!26 = !{!2, !5, i64 81}
!27 = !{!2, !5, i64 85}
!28 = !{!2, !5, i64 89}
!29 = !{!2, !5, i64 93}
!30 = !{!2, !5, i64 97}
!31 = !{!2, !5, i64 101}
!32 = !{!2, !5, i64 105}
!33 = !{!2, !5, i64 109}
!34 = !{!2, !5, i64 113}
!35 = !{!2, !5, i64 117}
!36 = !{!2, !5, i64 121}
!37 = !{!2, !5, i64 125}
!38 = !{i64 0, i64 1, !39, i64 1, i64 4, !40, i64 5, i64 4, !40, i64 9, i64 4, !40, i64 13, i64 4, !40, i64 17, i64 4, !40, i64 21, i64 4, !40, i64 25, i64 4, !40, i64 29, i64 4, !40, i64 33, i64 4, !40, i64 37, i64 4, !40, i64 41, i64 4, !40, i64 45, i64 4, !40, i64 49, i64 4, !40, i64 53, i64 4, !40, i64 57, i64 4, !40, i64 61, i64 4, !40, i64 65, i64 4, !40, i64 69, i64 4, !40, i64 73, i64 4, !40, i64 77, i64 4, !40, i64 81, i64 4, !40, i64 85, i64 4, !40, i64 89, i64 4, !40, i64 93, i64 4, !40, i64 97, i64 4, !40, i64 101, i64 4, !40, i64 105, i64 4, !40, i64 109, i64 4, !40, i64 113, i64 4, !40, i64 117, i64 4, !40, i64 121, i64 4, !40, i64 125, i64 4, !40}
!39 = !{!3, !3, i64 0}
!40 = !{!5, !5, i64 0}
```



Reviewers: asl

Subscribers: qcolombet

Differential Revision: http://reviews.llvm.org/D17441

llvm-svn: 261746
2016-02-24 15:15:02 +00:00
Simon Pilgrim 3b6feeaa7c [X86][SSE41] Combine vector blends with zero
Part 2 of 2.
This patch adds support for combining target shuffles into blends-with-zero.

Differential Revision: http://reviews.llvm.org/D17483

llvm-svn: 261745
2016-02-24 15:14:21 +00:00
Simon Pilgrim dd01f70085 [X86][SSE41] Combine insertion of zero scalars into vector blends with zero
Part 1 of 2.
This patch attempts to replace the insertion of zero scalars with a vector blend with zero, avoiding the use of the integer insertion instructions (which are particularly slow on many targets).
(Part 2 will add support for combining multiple blends-with-zero).
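
A scalar model of what the blend-with-zero computes (illustrative only, not the X86 lowering itself):

```c++
#include <array>
#include <cassert>
#include <cstddef>
#include <cstdint>

// Inserting a zero into lane L is the same as blending against an all-zero
// vector with a one-bit mask, which maps to a cheap blend instruction
// instead of a slow integer insertion.
template <std::size_t N>
std::array<uint32_t, N> blendWithZero(std::array<uint32_t, N> V,
                                      unsigned Mask) {
    for (std::size_t L = 0; L < N; ++L)
        if ((Mask >> L) & 1)
            V[L] = 0; // masked lanes are taken from the zero vector
    return V;
}

int main() {
    std::array<uint32_t, 4> V{{1, 2, 3, 4}};
    auto R = blendWithZero(V, 0x2); // "insert" zero into lane 1
    assert(R[0] == 1 && R[1] == 0 && R[2] == 3 && R[3] == 4);
}
```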

Differential Revision: http://reviews.llvm.org/D17483

llvm-svn: 261743
2016-02-24 14:53:27 +00:00
Nikolay Haustov 4f073ca7fa [AMDGPU] Assembler: Simplify handling of optional operands
Prepare to support DPP encodings.

For DPP encodings, we want row_mask/bank_mask/bound_ctrl to be optional operands. However, this means that when parsing an instruction which has no mnemonic prefix, we cannot add default values for both the VOP3 and the DPP optional operands to the OperandVector - neither instruction would match. So add default values for optional operands to the MCInst during conversion instead.

Mark more operands as IsOptional = 1 in .td files.
Do not add default values for optional operands to OperandVector in AMDGPUAsmParser.
Add default values for optional operands during conversion using the new helper addOptionalImmOperand.
Change cvtVOP3_2_mod to check an instruction flag instead of the presence of modifiers. In the future, the cvtVOP3* functions can be combined into one.
Separate cvtFlat and cvtFlatAtomic.
Fix CNDMASK_B32 definition to have no modifiers.

Review: http://reviews.llvm.org/D17445

Reviewers: tstellarAMD
llvm-svn: 261742
2016-02-24 14:22:47 +00:00
Nikolay Haustov 3c728e43aa [AMDGPU] fix amd_kernel_code_t bit field position as per spec (added missing reserved fields)
The lit tests passed before and after because they don't test the binary representation of amd_kernel_code_t.

Patch by: Valery Pykhtin (Valery.Pykhtin@amd.com)

Reviewers: arsenm
llvm-svn: 261732
2016-02-24 10:54:25 +00:00
David Majnemer e2ad73759d [CodeView] Describe variables live in x87 registers
We didn't have a mapping from LLVM's x87 floating point registers to
CodeView's encoding.

llvm-svn: 261730
2016-02-24 10:01:24 +00:00
Simon Pilgrim c291c03702 [X86][SSE] Don't get target shuffle operands prematurely.
PerformShuffleCombine should be usable by unary and binary target shuffles, but it was attempting to get the first two operands regardless of the instruction type. Since these are only used for VECTOR_SHUFFLE instructions in one particular combine, I've moved them inside the relevant if statement.

llvm-svn: 261727
2016-02-24 09:07:47 +00:00
Michael Zuckerman a1f2d27da2 [LLVM][AVX512][PSHUFHW][PSHUFLW] Change imm8 to int
Differential Revision: http://reviews.llvm.org/D17538

llvm-svn: 261725
2016-02-24 08:39:05 +00:00
Igor Breger c7ba5699c5 AVX512: Add vpmovzxbw/d/q, vpmovzxw/d/q, vpmovzxbdq lowering patterns that support 256-bit inputs like the AVX patterns (which are disabled in case of HasVLX; see SS41I_pmovx_avx2_patterns).
Differential Revision: http://reviews.llvm.org/D17504

llvm-svn: 261724
2016-02-24 08:15:20 +00:00
Justin Bogner 38e5217b1e X86: Wrap a helper for an assert in #ifndef NDEBUG
This function is used in exactly one place, and only in asserts
builds. Move it a few lines up before the use and only define it when
asserts are enabled. Fixes the release build under -Werror.

Also remove the forward declaration and commentary that was basically
identical to the code itself.
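
The general shape of the fix, in a standalone sketch (the helper and table below are stand-ins, not the actual function from this commit):

```c++
#include <algorithm>
#include <cassert>
#include <vector>

#ifndef NDEBUG
// Defined only in asserts builds; in release builds (-DNDEBUG) the helper
// vanishes along with the assert that calls it, so -Werror never sees an
// unused static function.
static bool isSorted(const std::vector<int> &V) {
    return std::is_sorted(V.begin(), V.end());
}
#endif

int main() {
    std::vector<int> Table{1, 2, 3};
    assert(isSorted(Table) && "table must be sorted");
    (void)Table;
}
```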

llvm-svn: 261722
2016-02-24 07:58:02 +00:00
Matt Arsenault cd09961fb3 AMDGPU: Check cheaper condition before SignBitIsZero
Don't do an expensive computeKnownBits call when we
can do the cheap check for legal offsets first.
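
The ordering idea in isolation, with hypothetical helpers standing in for the legality check and the known-bits query:

```c++
#include <cstdint>

// Hypothetical stand-in for a trivial range check.
static bool isLegalOffset(int64_t Off) { return Off >= 0 && Off < 4096; }

// Hypothetical stand-in for an expensive computeKnownBits-style query.
static bool signBitIsZero(int64_t V) { return V >= 0; }

// && short-circuits, so the cheap legality test runs first and the
// expensive analysis only runs when the cheap test passes.
bool canFold(int64_t Base, int64_t Off) {
    return isLegalOffset(Off) && signBitIsZero(Base);
}

int main() { return canFold(8, 16) ? 0 : 1; }
```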

llvm-svn: 261720
2016-02-24 04:55:29 +00:00
Derek Schuff f9c0a5c377 Revert "[WebAssembly] Stackify code emitted by eliminateFrameIndex"
This reverts r261685 due to wasm test breakage.

llvm-svn: 261702
2016-02-23 22:13:21 +00:00
Tim Northover 87442c1133 AArch64: rename compact unwind forms back to UNWIND_ARM64_*. NFC.
Looks like the global rename last year was a bit over-zealous. These things
really are referred to as ARM64 elsewhere (ld64, libunwind, ...).

llvm-svn: 261698
2016-02-23 21:49:05 +00:00
Derek Schuff b21570cc1d [WebAssembly] Stackify code emitted by eliminateFrameIndex
llvm-svn: 261685
2016-02-23 21:25:17 +00:00
Tim Northover b9edc5ca21 ARM: fix handling of movw/movt relocations with addend.
We were emitting only one half of the paired relocations needed for these
instructions because we decided that an offset needed a scattered relocation.
In fact, movw/movt relocations can be paired without being scattered.

llvm-svn: 261679
2016-02-23 20:20:23 +00:00
Geoff Berry 27b1ded41a [AArch64] Generate csinv instruction more often
Reviewers: t.p.northover, jmolloy

Subscribers: aemerson, rengolin, mcrosier, llvm-commits

Differential Revision: http://reviews.llvm.org/D17546

llvm-svn: 261675
2016-02-23 19:34:13 +00:00
Davide Italiano 62b7f7a398 [X86ISelLowering] Stop typing the same return over and over and over.
llvm-svn: 261666
2016-02-23 18:39:38 +00:00
Weiming Zhao 5e0c3ebe9e Fix PR25339: ARM Constant Island
Summary:
Currently, the ARM constant island pass may not converge (or may converge only slowly).
This patch lets it move to the closest water after the user if it hasn't converged after 15 iterations.

This addresses https://llvm.org/bugs/show_bug.cgi?id=25339
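
The shape of the fallback, sketched with hypothetical names:

```c++
#include <cstdio>

// Pretend the optimal placement never settles, to exercise the fallback.
static bool tryPlaceOptimally(int /*Iter*/) { return false; }

int main() {
    const int MaxIters = 15; // the cap this patch introduces
    for (int I = 0; I < MaxIters; ++I)
        if (tryPlaceOptimally(I))
            return 0; // converged
    // Bounded fallback instead of iterating forever:
    std::puts("move island to the closest water after the user");
}
```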

Reviewers: t.p.northover, srhines, kristof.beyls, aadg, rengolin

Subscribers: weimingz, aemerson, rengolin, llvm-commits

Differential Revision: http://reviews.llvm.org/D16890

llvm-svn: 261665
2016-02-23 18:39:19 +00:00
Derek Schuff 770bdfe30b [WebAssembly] Add TODO comment to revisit red zone size
llvm-svn: 261664
2016-02-23 18:17:46 +00:00
Derek Schuff 4b3bb213b2 [WebAssembly] Implement red zone for user stack
Implements a mostly-conventional red zone for the userspace
stack. Because we have unsigned load/store offsets, we continue to use a
local SP subtracted from the incoming SP, but do not write it back to
memory.
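
A conceptual sketch of the local-SP scheme (stand-in names; the real code manipulates the __stack_pointer global through the backend):

```c++
#include <cstdint>

// Stand-in for the __stack_pointer global the real backend updates.
static uint32_t GlobalSP = 0x10000;

uint32_t enterFrame(uint32_t FrameSize) {
    // Offsets are unsigned, so frame accesses add to a base below the
    // data: a local SP derived from the incoming (global) SP.
    uint32_t LocalSP = GlobalSP - FrameSize;
    // Deliberately no "GlobalSP = LocalSP" store: leaving the global
    // untouched is what makes the frame a red zone.
    return LocalSP;
}

int main() { return enterFrame(64) == 0x10000 - 64 ? 0 : 1; }
```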

Differential Revision: http://reviews.llvm.org/D17525

llvm-svn: 261662
2016-02-23 18:13:07 +00:00
Geoff Berry a1c6269c91 [AArch64] Fix fastcc -tailcallopt epilog code generation.
Summary:
Fix a bug in epilog generation where the incoming stack arguments were
not being popped for fastcc functions when -tailcallopt was passed.

Reviewers: t.p.northover, mcrosier, jmolloy, rengolin

Subscribers: aemerson, rengolin, mcrosier, llvm-commits

Differential Revision: http://reviews.llvm.org/D16894

llvm-svn: 261650
2016-02-23 16:54:36 +00:00
Aaron Ballman 8374c1f785 Silencing a signed vs unsigned mismatch.
llvm-svn: 261640
2016-02-23 15:02:43 +00:00
Chad Rosier 44a6d443d1 [AArch64] Fix comment typo in Cyclone scheduling defs. NFC.
llvm-svn: 261637
2016-02-23 14:05:13 +00:00
Junmo Park 453f4aa4dd [ARM] fix initialization of PredictableSelectIsExpensive
Summary:
If we want to classify a core as out-of-order or not, using getSchedModel().isOutOfOrder()
is a more proper way than using Subtarget->isLikeA9().
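
A sketch of the swap with simplified stand-in types; the flag polarity shown is an assumption, the point is only where the answer comes from:

```c++
// Simplified stand-ins; the real ones are the ARM subtarget and its
// scheduling model.
struct SchedModel {
    bool OutOfOrder;
    bool isOutOfOrder() const { return OutOfOrder; }
};

struct Subtarget {
    SchedModel SM;
    const SchedModel &getSchedModel() const { return SM; }
    bool isLikeA9() const { return false; } // old, CPU-identity predicate
};

int main() {
    Subtarget ST{{true}};
    // Before: keyed off one specific core.
    //   bool Expensive = ST.isLikeA9();
    // After: keyed off the scheduling model (polarity assumed here).
    bool PredictableSelectIsExpensive = ST.getSchedModel().isOutOfOrder();
    return PredictableSelectIsExpensive ? 0 : 1;
}
```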

Reviewers: jmolloy, rengolin

Differential Revision: http://reviews.llvm.org/D17433

llvm-svn: 261623
2016-02-23 09:56:58 +00:00
Nikolay Haustov 2a62b3c244 [AMDGPU] Fix operands of S_BFE_U64 and S_BFM_B64
src1 of s_bfe_u64 is 32-bit (same as s_bfe_i64).
src0 and src1 of s_bfm_b64 are 32-bit.
Update tests.

Review: http://reviews.llvm.org/D17480

Reviewers: arsenm
llvm-svn: 261621
2016-02-23 09:19:14 +00:00
Igor Breger 1360f4db47 AVX512: Fix the predicate of the AVX pcmpeqw/b and pcmpgtb/w/d instructions. The AVX512 versions of these instructions return their result in a kmask register, so the AVX patterns should not be disabled.
Differential Revision: http://reviews.llvm.org/D17517

llvm-svn: 261619
2016-02-23 08:55:33 +00:00
Duncan P. N. Exon Smith 6307eb5518 CodeGen: TII: Take MachineInstr& in predicate API, NFC
Change TargetInstrInfo API to take `MachineInstr&` instead of
`MachineInstr*` in the functions related to predicated instructions
(I'll try to come back later and get some of the rest).  All of these
functions require non-null parameters already, so references are more
clear.  As a bonus, this happens to factor away a host of implicit
iterator => pointer conversions.

No functionality change intended.
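
The signature change in miniature, with a simplified stand-in type:

```c++
#include <cassert>

struct MachineInstr { bool Predicated = false; }; // simplified stand-in

// Pointer form: the non-null requirement lives only in an assert.
bool isPredicatedPtr(const MachineInstr *MI) {
    assert(MI && "expected a non-null instruction");
    return MI->Predicated;
}

// Reference form: the signature itself encodes the non-null contract, and
// iterator-holding callers stop converting implicitly through pointers.
bool isPredicated(const MachineInstr &MI) { return MI.Predicated; }

int main() {
    MachineInstr MI;
    return isPredicated(MI) ? 1 : 0;
}
```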

llvm-svn: 261605
2016-02-23 02:46:52 +00:00
David Majnemer 964b70d559 [X86] Create mergeable constant pool entries for AVX
We supported creating mergeable constant pool entries for smaller
constants but not for 32-byte AVX constants.

llvm-svn: 261584
2016-02-22 22:23:11 +00:00
Derek Schuff 27e3b8a6e3 [WebAssembly] Fix writeback of stack pointer with dynamic alloca
Previously the stack pointer was only written back to memory in the
prolog. But this is wrong for dynamic allocas, for which
target-independent codegen handles SP updates after the prolog (and
possibly even in another BB). Instead update the SP global in
ADJCALLSTACKDOWN which is generated after the SP update sequence.
This will have further refinements when we add red zone support.

llvm-svn: 261579
2016-02-22 21:57:17 +00:00
Duncan P. N. Exon Smith d84f600653 CodeGen: Bring back MachineBasicBlock::iterator::getInstrIterator()...
This is a little embarrassing.

When I reverted r261504 (getIterator() => getInstrIterator()) in
r261567, I did a `git grep` to see if there were new calls to
`getInstrIterator()` that I needed to migrate.  There were 10-20 hits,
and I blindly did a `sed ...` before calling `ninja check`.

However, these were `MachineInstrBundleIterator::getInstrIterator()`,
which predated r261567.  Perhaps coincidentally, these had an identical
name and return type.

This commit undoes my careless sed and restores
`MachineBasicBlock::iterator::getInstrIterator()`.

llvm-svn: 261577
2016-02-22 21:30:15 +00:00
Davide Italiano 2ec4717c2c [X86ISelLowering] Consolidate duplicated code in a single place.
llvm-svn: 261573
2016-02-22 21:06:46 +00:00
Matt Arsenault fa67bdbde0 AMDGPU/R600: Implement allowsMisalignedMemoryAccess
This avoids some test regressions in a future commit, where unaligned
operations that have custom lowering get expanded.
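
A standalone model of the decision such a hook makes (the real override lives in the R600 target lowering and its exact signature is not reproduced here; the width threshold is purely illustrative):

```c++
// Report whether a misaligned access of BitWidth bits is allowed in the
// given address space, and optionally whether it is also fast.
static bool allowsMisalignedAccess(unsigned BitWidth, unsigned AddrSpace,
                                   bool *IsFast) {
    bool OK = BitWidth >= 32; // assumption for illustration only
    if (IsFast)
        *IsFast = OK && AddrSpace == 0;
    return OK;
}

int main() {
    bool Fast = false;
    return allowsMisalignedAccess(64, 0, &Fast) && Fast ? 0 : 1;
}
```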

llvm-svn: 261570
2016-02-22 21:04:16 +00:00