Commit Graph

22524 Commits

Author SHA1 Message Date
Paul Robinson 746edea0ae Revert "Fix out-of-order stepping behavior in programs with sunk instructions."
This reverts commit 30419e150cd940893a13b345e85f96053850208f.
aka r318679.  It caused the "sanitizer-windows" bot to fail.

llvm-svn: 318684
2017-11-20 19:07:52 +00:00
Paul Robinson f0b02965e4 Fix out-of-order stepping behavior in programs with sunk instructions.
MachineSink attempts to place instructions near the basic blocks where
they are needed.  Once an instruction has been sunk, its location
relative to other instructions is no longer consistent with the
original source code. In order to ensure correct single-stepping and
profiling, the debug location for sunk instructions is either merged
with the insertion point or erased if the target successor block is
empty.
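As a hedged illustration (hypothetical source, not from the patch), this is the kind of code where sinking creates confusing stepping:

  // The multiply on the line marked (A) is only used inside the branch, so
  // MachineSink moves it down there. Without fixing its debug location,
  // single-stepping into the 'if' body appears to jump back to line (A),
  // i.e. out-of-order stepping.
  int f(int x, bool flag) {
    int t = x * 42;   // (A) candidate for sinking into the branch below
    if (flag)
      return t + 1;   // after sinking, the multiply is emitted here
    return 0;
  }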

Patch by Matthew Voss!

Differential Revision: https://reviews.llvm.org/D39933

llvm-svn: 318679
2017-11-20 18:42:17 +00:00
Dmitry Preobrazhensky a0342dc9eb [AMDGPU][MC][GFX8][GFX9] Corrected names of integer v_{add/addc/sub/subrev/subb/subbrev}
See bug 34765: https://bugs.llvm.org//show_bug.cgi?id=34765

Reviewers: tamazov, SamWot, arsenm, vpykhtin

Differential Revision: https://reviews.llvm.org/D40088

llvm-svn: 318675
2017-11-20 18:24:21 +00:00
Tony Jiang f75f4d6573 [MachineCSE] Add a new callback for caller-preserved or constant physregs
The instructions addis, addi, and bl are used to calculate the address of TLS
thread-local variables. These TLS access code sequences are generated
repeatedly every time the thread-local variable is accessed. By communicating
to Machine CSE that X2 is guaranteed to have the same value within the same
function call (a so-called Caller Preserved Physical Register), the redundant
TLS access code sequences are cleaned up.
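A hypothetical source-level example (names made up) of the redundancy this removes:

  // Each access to a thread_local on PPC64 ELF expands to an addis/addi/bl
  // TLS address sequence. Once X2 (the TOC pointer) is known to be preserved
  // across the call, MachineCSE can reuse a single computed address here.
  thread_local int tls_counter = 0;

  int bump_twice() {
    tls_counter += 1;   // first TLS access sequence
    tls_counter += 2;   // previously a second, redundant sequence
    return tls_counter;
  }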

Differential Revision: https://reviews.llvm.org/D39173

llvm-svn: 318661
2017-11-20 16:55:07 +00:00
Yaxun Liu 45d25e12ab [AMDGPU] Update test r600.amdgpu-alias-analysis.ll
Manually update test r600.amdgpu-alias-analysis.ll for amdgiz environment
since it cannot be done by script.

The two pointers are swapped in the output because PrintResults in
AliasAnalysisEvaluator.cpp sorts the strings obtained from printAsOperand
before printing them.

Differential Revision: https://reviews.llvm.org/D40131

llvm-svn: 318660
2017-11-20 16:53:13 +00:00
Simon Dardis 1631d6ce13 [mips] Reorder target specific passes
Move the hazard scheduling pass to after the long branch pass, as the
long branch pass can create forbidden slot hazards. Rather than complicating
the implementation of the long branch pass to handle forbidden slot hazards,
just reorder the passes.

llvm-svn: 318657
2017-11-20 15:59:18 +00:00
Jonas Paulsson 12e3a58842 [SystemZ] Bugfix for handling of subregisters in getRegAllocationHints().
The 32 bit subreg indices of GR128 registers must also be checked for in
getRC32().

Review: Ulrich Weigand.
llvm-svn: 318652
2017-11-20 14:54:03 +00:00
Tony Jiang 438bf4a66b [PPC] Heuristic to choose between a X-Form VSX ld/st vs a X-Form FP ld/st.
The VSX versions have the advantage of a full 64-register target whereas the FP
ones have the advantage of lower latency and higher throughput. So what we’re
after is using the faster instructions in low register pressure situations and
using the larger register file in high register pressure situations.

The heuristic chooses between the following 7 pairs of instructions.
PPC::LXSSPX vs PPC::LFSX
PPC::LXSDX vs PPC::LFDX
PPC::STXSSPX vs PPC::STFSX
PPC::STXSDX vs PPC::STFDX
PPC::LXSIWAX vs PPC::LFIWAX
PPC::LXSIWZX vs PPC::LFIWZX
PPC::STXSIWX vs PPC::STFIWX

Differential Revision: https://reviews.llvm.org/D38486

llvm-svn: 318651
2017-11-20 14:38:30 +00:00
Sander de Smalen 0c5a29b6be [AArch64][TableGen] Skip tied result operands for InstAlias
Summary:
This patch fixes an issue so that the right alias is printed when the instruction has tied operands. It checks the number of operands in the resulting instruction, as opposed to the alias, and then skips over tied operands that should not be printed in the alias.

This allows us to generate the preferred assembly syntax for the AArch64 'ins' instruction, which should always be displayed as 'mov' according to the ARM Architecture Reference Manual. Several unit tests have changed as a result, but only to reflect the preferred disassembly. Some other InstAlias patterns (movk/bic/orr) needed a slight adjustment to stop them from becoming the default and breaking other unit tests.

Please note that the patch is mostly the same as https://reviews.llvm.org/D29219 which was reverted because of an issue found when running TableGen with the Address Sanitizer. That issue has been addressed in this iteration of the patch.


Reviewers: rengolin, stoklund, huntergr, SjoerdMeijer, rovka

Reviewed By: rengolin, SjoerdMeijer

Subscribers: fhahn, aemerson, javed.absar, kristof.beyls, llvm-commits

Differential Revision: https://reviews.llvm.org/D40030

llvm-svn: 318650
2017-11-20 14:36:40 +00:00
Valery Pykhtin f2fe9725ea AMDGPU: Partial ILP scheduler port from SelectionDAG to SchedulingDAG (experimental)
Differential revision: https://reviews.llvm.org/D39897

llvm-svn: 318649
2017-11-20 14:35:53 +00:00
Diana Picus 3ac504035a [ARM GlobalISel] Add test for RSBri. NFC
Add instruction selector test for RSBri, which is derived from
AsI1_rbin_irs, and make sure it doesn't get mistaken for SUBri, which is
derived from the very similar AsI1_bin_irs pattern.

llvm-svn: 318643
2017-11-20 11:05:31 +00:00
Diana Picus 6db48f7d6b [ARM GlobalISel] Clean up binary operator tests. NFC
Remove some of the instruction selector tests for binary operators (and,
or, xor). These are all derived from the same kind of TableGen pattern,
AsI1_bin_irs, so there's no point in testing all of them.

llvm-svn: 318642
2017-11-20 10:35:35 +00:00
Mohammed Agabaria 115f68ea3e [LV][X86] Support AVX2 gather code generation and update LV accordingly
This patch depends on: https://reviews.llvm.org/D35348

Support pattern selection of masked gathers for AVX2 (X86/AVX2 code gen).
Update LoopVectorize to generate gathers for AVX2 processors.

Reviewers: delena, zvi, RKSimon, craig.topper, aaboud, igorb

Reviewed By: delena, RKSimon

Differential Revision: https://reviews.llvm.org/D35772

llvm-svn: 318641
2017-11-20 08:18:12 +00:00
Craig Topper 198f7d78d3 [X86] Regenerate a test with broadcast comments. NFC
llvm-svn: 318640
2017-11-20 08:15:04 +00:00
Sanjay Patel c0218923e1 [x86] add sqrt tests for partially-inline-libcalls (PR31455)
llvm-svn: 318630
2017-11-19 17:31:37 +00:00
Craig Topper bece74c694 [X86] Add test cases for rndscaless/sd intrinsics.
Also fix the memop in the ins for these instructions. Not sure what effect this has.

llvm-svn: 318624
2017-11-19 06:24:26 +00:00
Craig Topper 512e9e7f3f [X86] Improve load folding of scalar rcp28 and rsqrt28 instructions using sse_load_f32/f64.
llvm-svn: 318623
2017-11-19 05:42:54 +00:00
Alexei Starovoitov 9a67245d88 [bpf] allow direct and indirect calls
The kernel verifier is becoming smarter and will soon support
direct and indirect function calls.
Remove the obsolete error from the BPF backend.
Make the call instruction use a PCRel_4 fixup.
'bpf to bpf' calls are distinguished from 'bpf to kernel' calls
by insn->src_reg == BPF_PSEUDO_CALL == 1, which is used as a relocation
indicator, similar to ld_imm64->src_reg == BPF_PSEUDO_MAP_FD == 1.
The actual 'call' instruction remains the same for both
'bpf to kernel' and 'bpf to bpf' calls.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
llvm-svn: 318614
2017-11-19 01:35:00 +00:00
Craig Topper 9a94dfc457 [X86] Switch cannonlake to use the SkylakeServer scheduling model instead of Haswell.
Cannonlake comes after skylake and supports avx512 so this is probably a closer model for now.

llvm-svn: 318613
2017-11-19 01:25:30 +00:00
Craig Topper 81037f385e [X86] Add skeleton support for icelake CPU.
There are several patches out for review right now to implement Icelake features. This adds a CPU to collect them under.

llvm-svn: 318612
2017-11-19 01:12:00 +00:00
Simon Pilgrim 41fc45c4e6 [X86][AVX512VL] Add AVX512VL tests to the vselect packss tests.
PR34553 has gone, adding tests to ensure it doesn't come back.

vselect_packss_v16i64 still has some awful codegen on AVX512 targets....

llvm-svn: 318599
2017-11-18 19:47:59 +00:00
Craig Topper 65b8a2c9e1 [X86] Add another gather test with v8i8 sign extended indices.
This requires the indices to be legalized and sign extended.

llvm-svn: 318597
2017-11-18 19:25:35 +00:00
Sanjay Patel c0d1b2ee2e [x86] add tests for unnecessary shuffling; NFC
llvm-svn: 318592
2017-11-18 16:25:38 +00:00
Martin Storsjo 94b59240e2 [X86] Output cfi directives for saved XMM registers even if no GPRs are saved
This makes sure that functions that only clobber xmm registers
(on win64) also get the right cfi directives, if dwarf exceptions
are enabled.

Differential Revision: https://reviews.llvm.org/D40191

llvm-svn: 318591
2017-11-18 06:23:48 +00:00
Quentin Colombet c0d34d38cb [AArch64] Map G_LOAD on FPR when the definition goes to a copy to FPR
We used to detect loads feeding fp instructions, but we were
failing to take into account cases where this happens through copies.
For instance, loads can feed copies coming from the ABI lowering
of floating point arguments/results.

llvm-svn: 318589
2017-11-18 04:28:59 +00:00
Quentin Colombet 63816c0957 [AArch64] Map G_STORE on FPR when the source comes from a FPR copy
We used to detect that stores were fed by fp instructions, but we were
failing to take into account cases where this happens through copies.
For instance, stores can be fed by copies coming from the ABI lowering
of floating point arguments.

llvm-svn: 318588
2017-11-18 04:28:58 +00:00
Quentin Colombet fe538f7145 [RegisterBankInfo] Relax the assert of having matching type sizes on default mappings
Instead of asserting that the type sizes are exactly equal, we check
that the new size is big enough to contain the original type.
We have to relax this constraint because, right now, we sometimes
specify that things that are smaller than a storage type are legal
instead of widening everything to the size of a storage type.
E.g., we say that G_AND s16 is legal and we map that on GPR32.

This is something we may revisit in the future (either by changing
the legalization process or keeping track separately of the storage
size and the size of the type), but let us reflect the reality of
the situation for now.
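A minimal sketch of the relaxed check, with identifiers assumed rather than taken from the patch:

  // Instead of requiring an exact size match between a value and its default
  // bank mapping, accept any mapping at least as wide as the value
  // (e.g. a 16-bit G_AND mapped onto a 32-bit GPR).
  bool mappingIsWideEnough(unsigned ValueSizeInBits, unsigned MappedSizeInBits) {
    return MappedSizeInBits >= ValueSizeInBits;
  }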

llvm-svn: 318587
2017-11-18 04:28:58 +00:00
Quentin Colombet 91801f68aa [AArch64][RegisterBankInfo] Teach instruction mapping about gpr32 -> fpr16 cross copies
It turns out these copies can actually occur because of the way we lower the
ABI for half.

llvm-svn: 318586
2017-11-18 04:28:56 +00:00
Matt Arsenault a41351e37c AMDGPU: Move hazard avoidance out of waitcnt pass.
This is mostly moving VMEM clause breaking into
the hazard recognizer. Also move another hazard
currently handled in the waitcnt pass.

Also stops breaking clauses unless xnack is enabled.

llvm-svn: 318557
2017-11-17 21:35:32 +00:00
Simon Pilgrim 7834009726 [X86] Merge scheduling tests for SHLD/SHRD
Reduces the space used and makes it easier to compare the two values for the equivalent instructions.

llvm-svn: 318541
2017-11-17 18:35:49 +00:00
Dmitry Preobrazhensky 682a654758 [AMDGPU][MC][GFX9][disassembler] Corrected decoding of op_sel_hi for v_mad_mix*
See bug 35148: https://bugs.llvm.org//show_bug.cgi?id=35148

Reviewers: tamazov, SamWot, arsenm

Differential Revision: https://reviews.llvm.org/D39492

llvm-svn: 318526
2017-11-17 15:15:40 +00:00
Martin Storsjo b4c907edd7 [ARM] Use dwarf exception handling on MinGW
Enabling and using dwarf exceptions seems like an easier path to take
than making the COFF/ARM backend output EHABI directives.
Previously, no EH model was enabled at all on this target.

There's no point in setting UseIntegratedAssembler to false since
GNU binutils doesn't support Windows on ARM, and since we don't
need to support external assembler, we don't need to use register
numbers in cfi directives.

Differential Revision: https://reviews.llvm.org/D39532

llvm-svn: 318510
2017-11-17 08:04:40 +00:00
Matt Arsenault 03c67d1eb2 AMDGPU: Fix breaking SMEM clauses
This was completely ignoring subregisters,
so was not very useful. Also only break them
if xnack is actually enabled.

llvm-svn: 318505
2017-11-17 04:18:24 +00:00
Aditya Nandakumar 69855491ee [GISel]: DCE copy instructions during legalization
We might have instructions such as ext(copy(trunc)), and while cleaning
up legalization artifacts, we can also dce the copies that are in
between legalization artifacts.

llvm-svn: 318501
2017-11-17 02:44:55 +00:00
Yi Kong 39bcd4ed3e [ARM] 't' asm constraint should accept i32
't' constraint normally only accepts f32 operands, but for VCVT the
operands can be i32. LLVM is overly restrictive and rejects asm like:

  float foo() {
    float result;
    __asm__ __volatile__(
      "vcvt.f32.s32 %[result], %[arg1]\n"
      : [result]"=t"(result)
      : [arg1]"t"(0x01020304) );
    return result;
  }

Relax the value type for 't' constraint to either f32 or i32.

Differential Revision: https://reviews.llvm.org/D40137

llvm-svn: 318472
2017-11-16 23:38:17 +00:00
Craig Topper 089082378f [X86] Add DAG combine to remove sext i32->i64 from gather/scatter instructions.
Only do this pre-legalize in case we're using the sign extend to legalize for KNL.

This recovers all of the tests that changed when I stopped SelectionDAGBuilder from deleting sign extends.

There's more work that could be done here, particularly to fix the i8->i64 test case that experienced a split.

llvm-svn: 318468
2017-11-16 23:09:06 +00:00
Craig Topper 1120bb9f59 [X86] Add gather test with index sign extended from i8 type.
Previously SelectionDAGBuilder would remove this sign extend leading to a failure during isel.

The codegen here isn't very nice as we ended up triggering a split.

llvm-svn: 318467
2017-11-16 23:09:03 +00:00
Craig Topper 242374e219 [X86] Don't remove sign extend of gather/scatter indices during SelectionDAGBuilder.
The sign extend might be from an i16 or i8 type and was inserted by InstCombine to match the pointer width. X86 gather legalization isn't currently detecting this to reinsert a sign extend to make things legal.

It's a bit weird for the SelectionDAGBuilder to do this kind of optimization in the first place. With this removed we can at least lean on InstCombine somewhat to ensure the index is i32 or i64.

I'll work on trying to recover some of the test cases by removing sign extends in the backend when it's safe to do so with an understanding of the current legalizer capabilities.

This should fix PR30690.

llvm-svn: 318466
2017-11-16 23:08:57 +00:00
Craig Topper e85ff4f732 [X86] Pre-truncate gather/scatter indices that have element sizes larger than 64-bits before Legalize.
The wider element type will normally cause legalize to try to split and scalarize the gather/scatter, but we can't handle that. Instead, truncate the index early so the gather/scatter node is insulated from the legalization.

This really shouldn't happen in practice since InstCombine will normalize index types to the same size as pointers.

llvm-svn: 318452
2017-11-16 20:23:22 +00:00
Yonghong Song ce96738dee bpf: print backward branch target properly
Currently, it prints the backward branch offset as an unsigned value,
like below:
       7:       7d 34 0b 00 00 00 00 00         if r4 s>= r3 goto 11 <LBB0_3>
       8:       b7 00 00 00 00 00 00 00         r0 = 0
LBB0_2:
       9:       07 00 00 00 01 00 00 00         r0 += 1
      ......
      17:       bf 31 00 00 00 00 00 00         r1 = r3
      18:       6d 32 f6 ff 00 00 00 00         if r2 s> r3 goto 65526 <LBB0_3+0x7FFB0>

The correct output for insn 18 should be:
      18:       6d 32 f6 ff 00 00 00 00         if r2 s> r3 goto -10 <LBB0_2>

To provide better clarity and be consistent with the kernel verifier output,
the insn 7 output is changed to the following, with a "+" added to
non-negative branch offsets:
       7:       7d 34 0b 00 00 00 00 00         if r4 s>= r3 goto +11 <LBB0_3>

Signed-off-by: Yonghong Song <yhs@fb.com>
Acked-by: Alexei Starovoitov <ast@kernel.org>
llvm-svn: 318442
2017-11-16 19:15:36 +00:00
Guozhi Wei 433e8d3e04 [PPC] Change i32 constant in store instruction to i64
This patch changes all i32 constants in store instructions to i64 with truncation, to increase the chance that the referenced constant can be shared with other i64 constants.

Differential Revision: https://reviews.llvm.org/D39352

llvm-svn: 318436
2017-11-16 18:27:34 +00:00
Yaxun Liu 407ca36b27 Let llvm.invariant.group.barrier accept pointers to any address space
llvm.invariant.group.barrier may accept pointers to an arbitrary address space.

This patch lets it accept a pointer to i8 in any address space and return a
pointer to i8 in the same address space.

Differential Revision: https://reviews.llvm.org/D39973

llvm-svn: 318413
2017-11-16 16:32:16 +00:00
Simon Pilgrim e8e6acdac9 [X86] Add scheduling tests for SHLD/SHRD
llvm-svn: 318402
2017-11-16 14:13:48 +00:00
Diana Picus bfdf7b6c39 [ARM GlobalISel] Add tests for BIC. NFC
Add instruction selector tests for BICrr and BICri, which are handled by
TableGen.

llvm-svn: 318398
2017-11-16 13:32:47 +00:00
Diana Picus 4d242b18b2 [ARM GlobalISel] Add tests for REVSH patterns. NFC
Add instruction selector tests for some of the REVSH patterns handled by
TableGen.

llvm-svn: 318393
2017-11-16 12:29:28 +00:00
Yaxun Liu 0844ff2aa7 Fix pointer EVT in SelectionDAGBuilder::visitAlloca
SelectionDAGBuilder::visitAlloca assumes the alloca address space is 0, which
is incorrect for the triple amdgcn---amdgiz and causes an isel failure.

This patch fixes that.

Differential Revision: https://reviews.llvm.org/D40095

llvm-svn: 318392
2017-11-16 12:22:19 +00:00
Sam Parker 43fa5911a1 [DAGCombine] Enable more srl -> load combines
Change the calculation for the desired ValueType for non-sign
extending loads, as in those cases we don't care about the
higher bits. This creates a smaller ExtVT and allows for such
combinations as:
(srl (zextload i16, [addr]), 8) -> (zextload i8, [addr + 1])
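A hypothetical source pattern that exposes this shape (not from the patch):

  // Only the top byte of the zero-extended 16-bit load is used, so on a
  // little-endian target the combine can shrink this to a single byte load
  // at address p + 1 and drop the shift.
  unsigned top_byte(const unsigned short *p) {
    return *p >> 8;
  }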

Differential Revision: https://reviews.llvm.org/D40034

llvm-svn: 318390
2017-11-16 11:28:26 +00:00
Craig Topper 46a5d58b8c [X86] Update TTI to report that v1iX/v1fX types aren't legal for masked gather/scatter/load/store.
The type legalizer will try to scalarize these operations if it sees them, but there is no handling for scalarizing them. This leads to a fatal error. With this change they will now be scalarized by the mem intrinsic scalarizing pass before SelectionDAG.

llvm-svn: 318380
2017-11-16 06:02:05 +00:00
Yaxun Liu 4d9a4d7ac8 Fix APInt bit size in processDbgDeclares
processDbgDeclares assumes the pointer size is the same for different address
spaces. It uses the pointer size of address space 0 for all pointers, which
causes an assertion in stripAndAccumulateInBoundsConstantOffsets for
amdgcn---amdgiz, since pointers in address space 5 have a different size than
those in address space 0.

This patch fixes that.

Differential Revision: https://reviews.llvm.org/D40085

llvm-svn: 318370
2017-11-16 02:54:49 +00:00
Yonghong Song 4c3ce59e61 bpf: enable llvm-objdump to print out symbolized jmp target
Add a hook in the BPF backend so that llvm-objdump can print out
the jmp target with label names, e.g.,
  ...
  if r1 != 2 goto 6 <LBB0_2>
  ...
  goto 7 <LBB0_4>
  ...
 LBB0_2:
  ...
 LBB0_4:
  ...

Signed-off-by: Yonghong Song <yhs@fb.com>
Acked-by: Alexei Starovoitov <ast@kernel.org>
llvm-svn: 318358
2017-11-16 00:52:30 +00:00
Matt Arsenault 301162c4fe AMDGPU: Replace i64 add/sub lowering
Use VOP3 add/addc like usual.

This has some tradeoffs. Inline immediates fold
a little better, but other constants are worse off.
SIShrinkInstructions could be made smarter to handle
these cases.

This allows us to avoid selecting scalar adds where we
need to track the carry in scc and replace its users.
This makes it easier to use the carryless VALU adds.

llvm-svn: 318340
2017-11-15 21:51:43 +00:00
Dan Gohman 89bf88c87c [WebAssembly] Update cfg-stackify.ll to remove the workaround added in r318288.
Remove -switch-peel-threshold=100 and update the expected results in test10
in cfg-stackify.ll.

llvm-svn: 318338
2017-11-15 21:38:33 +00:00
Sean Fertile 0f0837e84e [PowerPC] Implement mayBeEmittedAsTailCall for PPC
Implements TargetLowering callback 'mayBeEmittedAsTailCall' that enables
CodeGenPrepare to duplicate returns when they might enable a tail-call.

Differential Revision: https://reviews.llvm.org/D39777

llvm-svn: 318321
2017-11-15 18:58:27 +00:00
Simon Pilgrim 56415772d6 [X86] Add CBW/CDQ/CDQE/CQO/CWD/CWDE to WriteALU schedule class
Some CPUs are already overriding these sign extension instructions but we should be able to use the WriteALU schedule class by default.

Differential Revision: https://reviews.llvm.org/D39899

llvm-svn: 318308
2017-11-15 17:11:24 +00:00
Petar Jovanovic cd729ead01 [mips] Improve genConstMult() to work with arbitrary precision
APInt is now used instead of uint64_t in the function genConstMult(), allowing
multiplication optimizations with constants of arbitrary length.
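For reference, a sketch of the style of shift/add rewrite such constant-multiply optimizations perform, shown here on a plain 64-bit value (the patch lets the same logic operate on APInt constants wider than 64 bits):

  #include <cstdint>

  // Multiplying by 9 (== 8 + 1) can be rewritten as a shift plus an add,
  // avoiding a hardware multiply.
  uint64_t mul9(uint64_t x) {
    return (x << 3) + x;
  }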

Patch by Milos Stojanovic.

Differential Revision: https://reviews.llvm.org/D38130

llvm-svn: 318296
2017-11-15 15:24:04 +00:00
Ilya Biryukov ee7a96229e Workaround CodeGen/WebAssembly/cfg-stackify.ll failure after r318202
By disabling the introduced optimization.

llvm-svn: 318288
2017-11-15 10:50:43 +00:00
Matt Arsenault 45b98189bd AMDGPU: Don't use MUBUF vaddr if address may overflow
Effectively revert r263964. Before we would not
allow this if vaddr was not known to be positive.

llvm-svn: 318240
2017-11-15 00:45:43 +00:00
Matt Arsenault c8903125cd AMDGPU: Handle or in multi-use shl ptr combine
llvm-svn: 318223
2017-11-14 23:46:42 +00:00
Hans Wennborg 1403100b6b Fix switch-lower-peel-top-case.ll isel pass is not registered error
The test was doing -stop-after=isel, but that pass is actually the
AMDGPUDAGToDAGISel pass, which might not be built when targeting x86_64.
This changes the test to -stop-after=expand-isel-pseudos instead.

Follow-up to r318202.

llvm-svn: 318220
2017-11-14 23:30:28 +00:00
Tim Renouf 39e7ce8f21 [AMDGPU] updated PAL metadata record keys
Summary: The ABI changed before specification was finalized.

Reviewers: kzhuravl, dstuttard

Subscribers: wdng, nhaehnle, yaxunl, t-tye, llvm-commits

Differential Revision: https://reviews.llvm.org/D39807

llvm-svn: 318213
2017-11-14 23:05:36 +00:00
Aditya Nandakumar e6201c8724 [GISel]: Rework legalization algorithm for better elimination of
artifacts along with DCE

Legalization artifacts are all those insts that are there to make the
type system happy. Currently, the target needs to say all combinations
of extends and truncs are legal, and there is no way of verifying that,
post legalization, we only have *truly* legal instructions. This patch
roughly changes the legalization algorithm to process all illegal insts
in one go, and then separately process all truncs/extends that were added
to satisfy the type constraints, trying to combine trivial cases until
they converge. This has the added benefit that the target LegalizerInfo
only has to say which truncs and extends are okay, and the artifact
combiner will combine away the other exts and truncs.

The legalization algorithm is updated to roughly the following pseudocode:

WorkList Insts, Artifacts;
collect_all_insts_and_artifacts(Insts, Artifacts);

do {
  for (Inst in Insts)
    legalizeInstrStep(Inst, Insts, Artifacts);
  for (Artifact in Artifacts)
    tryCombineArtifact(Artifact, Insts, Artifacts);
} while (!Insts.empty());

Also wrote a simple wrapper equivalent to SetVector, except that for
erasing it avoids moving all elements over by one and instead just
nulls them out.

llvm-svn: 318210
2017-11-14 22:42:19 +00:00
Rong Xu dc07ae259e [CodeGen] Fix the test case added in r318202
Add the -mtriple option to filter some platforms.

llvm-svn: 318206
2017-11-14 22:08:37 +00:00
Rong Xu 3573d8da36 [CodeGen] Peel off the dominant case in switch statement in lowering
This patch peels off the top case in a switch statement into a branch if its
probability exceeds a threshold. This helps branch prediction and
avoids the extra compares when lowering into a chain of branches.

Differential Revision: http://reviews.llvm.org/D39262

llvm-svn: 318202
2017-11-14 21:44:09 +00:00
Hans Wennborg e1ecd61b98 Rename CountingFunctionInserter and use for both mcount and cygprofile calls, before and after inlining
Clang implements the -finstrument-functions flag inherited from GCC, which
inserts calls to __cyg_profile_func_{enter,exit} on function entry and exit.

This is useful for getting a trace of how the functions in a program are
executed. Normally, the calls remain even if a function is inlined into another
function, but it is useful to be able to turn this off for users who are
interested in a lower-level trace, i.e. one that reflects what functions are
called post-inlining. (We use this to generate link order files for Chromium.)

LLVM already has a pass for inserting similar instrumentation calls to
mcount(), which it does after inlining. This patch renames and extends that
pass to handle calls both to mcount and the cygprofile functions, before and/or
after inlining as controlled by function attributes.
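For context, a minimal pair of hooks matching the interface documented for -finstrument-functions; the printing here is purely illustrative:

  #include <cstdio>

  // Declarations with the documented signatures; the hooks themselves are
  // excluded from instrumentation so they do not recurse.
  extern "C" void __cyg_profile_func_enter(void *fn, void *call_site)
      __attribute__((no_instrument_function));
  extern "C" void __cyg_profile_func_exit(void *fn, void *call_site)
      __attribute__((no_instrument_function));

  extern "C" void __cyg_profile_func_enter(void *fn, void *call_site) {
    std::fprintf(stderr, "enter %p (called from %p)\n", fn, call_site);
  }
  extern "C" void __cyg_profile_func_exit(void *fn, void *call_site) {
    std::fprintf(stderr, "exit  %p (called from %p)\n", fn, call_site);
  }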

Differential Revision: https://reviews.llvm.org/D39287

llvm-svn: 318195
2017-11-14 21:09:45 +00:00
Matt Arsenault 9ba465a972 AMDGPU: Error on stack size overflow
llvm-svn: 318189
2017-11-14 20:33:14 +00:00
Ulrich Weigand 5f4373a2fc [SystemZ] Do not crash when selecting an OR of two constants
In rare cases, common code will attempt to select an OR of two
constants.  This confuses the logic in splitLargeImmediate,
causing an internal error during isel.  Fixed by simply leaving
this case to common code to handle.

This fixes PR34859.

llvm-svn: 318187
2017-11-14 20:00:34 +00:00
Easwaran Raman 0d55b55bb6 [CodeGenPrepare] Disable div bypass when working set size is huge.
Summary:
Bypass of slow divs based on operand values is currently disabled for
-Os. Do the same when a profile summary is available and the working set
size of the application is huge. This is similar to how loop peeling is
guarded by hasHugeWorkingSetSize. In the div bypass case, the generated
extra code (and the extra branch) tends to outweigh the benefits of the
bypass. This results in a noticeable performance improvement on an
internal application.
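For reference, a scalar sketch of the bypass being disabled here (illustrative only; the real transform is done on IR by CodeGenPrepare):

  #include <cstdint>

  // If both operands fit in 32 bits, use the cheaper 32-bit divide and skip
  // the slow 64-bit one; the extra compare-and-branch is the code-size and
  // branch cost the patch avoids when the working set is huge.
  uint64_t divide(uint64_t a, uint64_t b) {
    if (((a | b) >> 32) == 0)
      return uint64_t(uint32_t(a) / uint32_t(b));
    return a / b;
  }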

Reviewers: davidxl

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D39992

llvm-svn: 318179
2017-11-14 19:31:51 +00:00
Ulrich Weigand 55b8590e03 [SystemZ] Fix invalid codegen using RISBMux on out-of-range bits
Before using the 32-bit RISBMux set of instructions we need to
verify that the input bits are actually within range of the 32-bit
instruction.  This fixes PR35289.

llvm-svn: 318177
2017-11-14 19:20:46 +00:00
Simon Dardis 35d90aea7a [mips] Simplify test for 5.0.1 (NFC)
Simplify testing that an emergency spill slot is used when MSA
is used so that it can be included in the 5.0.1 release.

llvm-svn: 318172
2017-11-14 19:11:45 +00:00
Yaxun Liu 0b2f73fd84 CodeGen: Fix TargetLowering::LowerCallTo for sret value type
TargetLowering::LowerCallTo assumes that the sret value type corresponds to a
pointer in the default address space, which is incorrect, since the sret value
type should correspond to a pointer in the alloca address space, which may not
be the default address space. This causes an assertion for the amdgcn target
in the amdgiz environment.

This patch fixes that.

Differential Revision: https://reviews.llvm.org/D39996

llvm-svn: 318167
2017-11-14 18:46:52 +00:00
Ilya Biryukov e7329a7882 Use input redirection in WebAssembly/comdat.ll test.
To match how the other tests do it.

llvm-svn: 318153
2017-11-14 14:26:42 +00:00
Simon Pilgrim 600174e740 [X86][AVX] Add scheduling test for vmovntdq 256-bit store
Needs to use inline asm, as the domain will otherwise be changed to float (vmovntps).

llvm-svn: 318151
2017-11-14 14:03:29 +00:00
Tim Northover 5cdc4f9c33 ARM: correctly update CFG when splitting BB to fix branch.
Because the block-splitting code is multi-purpose, we have to meddle with the
branches when using it to fix up a conditional branch destination. We got the
code right, but forgot to update the CFG so the verifier complained when
expensive checks were on.

Probably harmless since constant-islands comes so late, but best to fix it
anyway.

llvm-svn: 318148
2017-11-14 11:43:54 +00:00
Diana Picus 21a42bcc0b [ARM GlobalISel] Remove C++ code for G_CONSTANT
Get rid of the handwritten instruction selector code for handling
G_CONSTANT. This code wasn't checking all the preconditions correctly
anyway, so it's better to leave it to TableGen, which can handle at
least some cases correctly (e.g. MOVi, MOVi16, folding into binary
operations). Also add tests to cover those cases.

llvm-svn: 318146
2017-11-14 11:20:32 +00:00
Momchil Velikov dc86e1444d [ARM] Fix incorrect conversion of a tail call to an ordinary call
When we emit a tail call for Armv8-M, but then discover that the caller needs to
save/restore `LR`, we convert the tail call to an ordinary one, since restoring
`LR` takes extra instructions, which may negate the benefits of the tail
call. If the callee, however, takes stack arguments, this conversion is
incorrect, since nothing has been done to pass the stack arguments.

Thus the patch reverts https://reviews.llvm.org/rL294000

Also, we improve the instruction sequence for popping `LR` in the case when we
couldn't immediately find a scratch low register, but we can use as a temporary
one of the callee-saved low registers and restore `LR` before popping other
callee-saves.

Differential Revision: https://reviews.llvm.org/D39599

llvm-svn: 318143
2017-11-14 10:36:52 +00:00
Matt Arsenault b3a255eaf9 AMDGPU: Fix test
llvm-svn: 318138
2017-11-14 06:40:00 +00:00
Dylan McKay 8443bcc898 [AVR] Remove the select-mbb-placement-bug.ll test
This test was originally added when an old bug was fixed that caused
broken iterator code to break basic block placement.

The issue has an extremely low chance of ever being a problem again.

This specific test is very flaky and fails often due to upstream
changes.

I have removed this test because it costs more than the value it provides.

llvm-svn: 318134
2017-11-14 04:32:49 +00:00
Matt Arsenault 57c37b2dcd AMDGPU: Fix producing saveexec when the copy is spilled
If the register from the copy from exec was spilled, the copy before the
spill was deleted, leaving a spill-of-undefined-register verifier error and a
miscompile. Check for other use instructions of the copied register.

llvm-svn: 318132
2017-11-14 02:16:54 +00:00
Sam Clegg 999660761e [WebAssembly] Explicily disable comdat support for wasm output
For now at least.  We clearly need some kind of comdat or
linkonce_odr support for wasm but currently COMDAT is not
supported.

Disable COMDAT support in the same way we do for Mach-O.  This
also causes clang not to generate COMDATs.

Differential Revision: https://reviews.llvm.org/D39873

llvm-svn: 318123
2017-11-14 00:49:16 +00:00
Matt Arsenault 4b7938c658 AMDGPU: Fix not converting d16 load/stores to offset
Fixes missed optimization with new MUBUF instructions.

llvm-svn: 318106
2017-11-13 23:24:26 +00:00
Matt Arsenault 4eea3f3da3 AMDGPU: Implement computeKnownBitsForTargetNode for mbcnt
llvm-svn: 318100
2017-11-13 22:55:05 +00:00
Evgeniy Stepanov 76d5ac4906 [arm] Fix Unnecessary reloads from GOT.
Summary:
This fixes PR35221.
Use pseudo-instructions to let MachineCSE hoist global address computation.

Subscribers: aemerson, javed.absar, kristof.beyls, llvm-commits, hiraditya

Differential Revision: https://reviews.llvm.org/D39871

llvm-svn: 318081
2017-11-13 20:45:38 +00:00
Daniel Sanders b78ac6e322 [globalisel][tablegen] Add support for extload.
llvm-svn: 318068
2017-11-13 18:30:23 +00:00
Craig Topper c314f461dd [X86] Allow X86ISD::Wrapper to be folded into the base of gather/scatter address
If the base of our gather corresponds to something contained in X86ISD::Wrapper we should be able to fold it into the address.

This patch refactors some of the address matching to more fully use the X86ISelAddressMode struct and the getAddressOperands helper. A new helper function matchVectorAddress is added to call matchWrapper or fall back to matchAddressBase.

We should also be able to support constant offsets from a wrapper, but I'll look into that in a future patch. We may even be able to completely reuse matchAddress here, but I wanted to start simple and work up to it.

Differential Revision: https://reviews.llvm.org/D39927

llvm-svn: 318057
2017-11-13 17:53:59 +00:00
Diana Picus 69aa20e3ca [ARM GlobalISel] Update legalizer test
Make one of the legalizer tests a bit more robust by making sure all
values we're interested in are used (either in a store or a return) and
by using loads instead of constants for obtaining values on fewer than
32 bits. This should make the test less fragile to changes in the
legalize combiner, since those loads are legal (as opposed to the
constants, which were being widened and thus produced opportunities for
the legalize combiner).

llvm-svn: 318047
2017-11-13 16:02:42 +00:00
Omer Paparo Bivas 4c679e1435 Inserting a base test for X86 performance nops
Change-Id: I69da08b617d7fae8024c5aee04720eb465f39b81
llvm-svn: 318041
2017-11-13 15:02:39 +00:00
Uriel Korach 2aa707bdaa [X86] test/testn intrinsics lowering to IR. llvm part.
Remove builtins from llvm and add AutoUpgrade support.
Also add fast-isel tests for the TEST and TESTN instructions.

Differential Revision: https://reviews.llvm.org/D38736

llvm-svn: 318036
2017-11-13 12:51:18 +00:00
Momchil Velikov 842aa90192 [ARM] Place jump table as the first operand in additions
When generating table jump code for switch statements, place the jump
table label as the first operand in the various addition instructions
in order to enable addressing mode selectors to better match index
computation and possibly fold them into the addressing mode of the
table entry load instruction.

Differential revision: https://reviews.llvm.org/D39752

llvm-svn: 318033
2017-11-13 11:56:48 +00:00
Jina Nahias 9a7f9f123c [x86][AVX512] Lowering shuffle i/f intrinsics to LLVM IR
This patch, together with a matching clang patch (https://reviews.llvm.org/D38672), implements the lowering of X86 shuffle i/f intrinsics to IR.

Differential Revision: https://reviews.llvm.org/D38671

Change-Id: I1e7d359a74743e995ec356237a85214ce55d3661
llvm-svn: 318026
2017-11-13 09:16:39 +00:00
Gadi Haber c9f2300652 [X86][SKX] Adding scheduling info of non-intrinsic + commutable SKX opcodes.
Updated the scheduling information of the SKX subtarget  in the file X86SchedSkylakeServer.td under lib/Target/X86 to:
1. add regular opcodes in addition to the suffixed "_Int" opcodes
2. add the (V)MAXCPD/MAXCPS/MAXCSD/MAXCSS/MINCPD/MINCPS/MINCSD/MINCSS
    instructions that are equivalent to their counterparts without the 'C' as they are part of a hack to
    make floating point min/max commutable under fast math.

Reviewers: zvi, RKSimon, craig.topper
Differential Revision: https://reviews.llvm.org/D39833

Change-Id: Ie13702a5ce1b1a08af91ca637a52b6962881e7d6
llvm-svn: 318024
2017-11-13 08:42:07 +00:00
Craig Topper 75d71540f8 [X86] Use sse_load_f32/f64 to improve load folding of scalar vfscalefss/sd, vrcp14ss/sd, rsqrt14ss/sd instructions.
llvm-svn: 318022
2017-11-13 08:07:33 +00:00
Craig Topper c748455e51 [X86] Regenerate test. NFC
llvm-svn: 318021
2017-11-13 08:07:31 +00:00
Craig Topper ca8abedb2a [X86] Use sse_load_f32/f64 to improve load folding for scalar VFPCLASS intrinsics.
llvm-svn: 318019
2017-11-13 06:46:48 +00:00
Craig Topper bf328f263e [X86] Add tests for missed opportunities to fold a 128-bit vector load into vfpclassss and vpfpclasssd.
llvm-svn: 318018
2017-11-13 06:46:46 +00:00
Craig Topper d4f6094091 [X86] Fix SQRTSS/SQRTSD/RCPSS/RCPSD intrinsics to use sse_load_f32/sse_load_f64 to increase load folding opportunities.
llvm-svn: 318016
2017-11-13 05:25:24 +00:00
Craig Topper 24389c6746 [X86] Add tests for full vector loads to fold-load-unops.ll.
We should be able to fold a full vector load into a scalar intrinsic, since it's legal to narrow a load.

llvm-svn: 318015
2017-11-13 05:25:23 +00:00
Craig Topper a95a1fd42d [X86] Regenerate fold-load-unops.ll and add an avx512f command line.
llvm-svn: 318014
2017-11-13 05:25:21 +00:00
Matt Arsenault fbe9533509 AMDGPU: Fix multi-use shl/add combine
This was using a custom function that didn't handle the
addressing modes properly for private. Use
isLegalAddressingMode to avoid duplicating this.

Additionally, skip the combine if there is only one use
since the standard combine will handle it.

llvm-svn: 318013
2017-11-13 05:11:54 +00:00
Craig Topper deee24b83c [X86] Use sse_load_f32/f64 in patterns for the memory forms of VRNDSCALESS/SD.
llvm-svn: 318009
2017-11-13 02:03:01 +00:00
Craig Topper 63157c4784 [X86] Use EVEX encoded VRNDSCALE instructions to implement the legacy round intrinsics.
The VRNDSCALE instructions implement a superset of the (V)ROUND instructions. They are equivalent if the upper 4-bits of the immediate are 0.

This patch lowers the legacy intrinsics to the VRNDSCALE ISD node and masks the upper bits of the immediate to 0. This allows us to take advantage of the larger register encoding space.
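A small sketch of the immediate mapping relied on here (helper name is made up):

  // The legacy ROUND immediate only carries the low four control bits; with
  // the upper (scale) bits forced to zero, VRNDSCALE behaves the same way.
  unsigned roundImmForRndscale(unsigned RoundImm) {
    return RoundImm & 0xF;
  }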

We should maybe consider converting VRNDSCALE back to VROUND in the EVEX to VEX pass if the extended registers are not being used.

I notice some load folding opportunities being missed for the VRNDSCALESS/SD instructions that I'll try to fix in future patches.

llvm-svn: 318008
2017-11-13 02:03:00 +00:00
Matt Arsenault 90e4f719e1 Fix some misc. -enable-var-scope violations
llvm-svn: 318006
2017-11-13 01:47:52 +00:00
Matt Arsenault e1cd482fda AMDGPU: Select d16 loads into low component of register
llvm-svn: 318005
2017-11-13 00:22:09 +00:00
Matt Arsenault 70b9282015 AMDGPU: Fix -enable-var-scope violations
llvm-svn: 318004
2017-11-12 23:53:44 +00:00
Matt Arsenault cf9b6d8d57 AMDGPU: Fix missing gfx9 atomic inc/dec tests
The global instructions weren't tested. Plus there
were also some -enable-var-scope violations and
broken check prefixes.

llvm-svn: 318003
2017-11-12 23:40:12 +00:00
Craig Topper b42a23ff8f [X86] Add an X86ISD::RANGES opcode to use for the scalar intrinsics.
This fixes a bug where we selected packed instructions for scalar intrinsics.

llvm-svn: 317999
2017-11-12 18:51:09 +00:00
Craig Topper 6b53c4a982 [X86] Add test cases and command lines demonstrating how we accidentally select vrangeps/vrangepd from vrangess/vrangesd intrinsics when the rounding mode is CUR_DIRECTION
llvm-svn: 317998
2017-11-12 18:51:08 +00:00
Craig Topper ac250825c6 [X86] Use vrndscaleps/pd for 128/256 ffloor/ftrunc/fceil/fnearbyint/frint when avx512vl is enabled.
This matches what we do for scalar and 512-bit types.

llvm-svn: 317991
2017-11-11 21:44:51 +00:00
Craig Topper ae9ffa1f5a [X86] Remove avx512-round.ll. The 512-bit rounding tests are now in vec_floor.ll with 128/256 sizes.
llvm-svn: 317990
2017-11-11 21:44:50 +00:00
Craig Topper e44fc7836e [X86] Add avx512vl command line to vec_floor.ll. Add 512-bit test cases.
llvm-svn: 317989
2017-11-11 21:44:49 +00:00
Craig Topper a9f48803d7 [X86] Add avx512f command line to rounding-ops.ll
llvm-svn: 317988
2017-11-11 21:44:48 +00:00
Craig Topper 1a20db2108 [X86] Regenerate rounding-ops.ll with update_llc_test_checks.py
llvm-svn: 317987
2017-11-11 21:44:47 +00:00
Craig Topper 0ccec70ff5 [X86] Add scalar register class versions of VRNDSCALE instructions and rename the existing versions to _Int.
This is consistent with our normal implementation of scalar instructions.

While there, disable load folding for the patterns with IMPLICIT_DEF unless optimizing for size, which is also our standard practice.

llvm-svn: 317977
2017-11-11 08:24:15 +00:00
Craig Topper 4d80c5dafc [X86] Regenerate avx512-round.ll test.
llvm-svn: 317976
2017-11-11 08:24:13 +00:00
Craig Topper 1a093934a9 [X86] Set the execution domain for vptest instruction to the integer domain.
llvm-svn: 317973
2017-11-11 06:19:12 +00:00
Daniel Sanders 7e52367398 [globalisel][tablegen] Import signextload and zeroextload.
Allow a pattern rewriter to be installed in CodeGenDAGPatterns and use it to
correct situations where SelectionDAG and GlobalISel disagree on
representation. For example, it would rewrite:
  (sextload:i32 $ptr)<<unindexedload>><<sextload>><<sextloadi16>
to:
  (sext:i32 (load:i16 $ptr)<<unindexedload>>)

I'd have preferred to replace the fragments and have the expansion happen
naturally as part of PatFrag expansion but the type inferencing system can't
cope with loads of types narrower than those mentioned in register classes.
This is because the SDTCisInt's on the sext constrain both the result and
operand to the 'legal' integer types (where legal is defined as 'a register
class can contain the type') which immediately rules the narrower types out.
Several targets (those with only one legal integer type) would then go on to
crash on the SDTCisOpSmallerThanOp<> when it removes all the possible types
for the result of the extend.

Also, improve isObviouslySafeToFold() slightly to automatically return true for
neighbouring instructions. There can't be any re-ordering problems if
re-ordering isn't happening. We'll need to improve it further to handle
sign/zero-extending loads when the extend and load aren't immediate neighbours
though.

llvm-svn: 317971
2017-11-11 03:23:44 +00:00
Craig Topper 0eb4a43384 [X86] Correct the execution domain on ROUND/VROUND instructions.
llvm-svn: 317968
2017-11-11 02:26:05 +00:00
Craig Topper ffd48e3c27 [SelectionDAG] Teach SelectionDAGBuilder's getUniformBase for gather/scatter handling to accept GEPs with more than 2 operands if the middle operands are all 0s
Currently we can only get a uniform base from a simple GEP with 2 operands. This causes us to miss address folding opportunities for simple global array accesses as the test case shows.

This patch adds support for larger GEPs if the other indices are 0 since those don't require any additional computations to be inserted.

We may also want to handle constant splats of zero here, but I'm leaving that for future work when I have a real world example.
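A hypothetical source-level example of the simple global array access this enables folding for:

  // The address of global_table[idx[i]] is a GEP on @global_table with a
  // leading zero index plus the variable index; treating @global_table as
  // the uniform base lets the gather fold the address computation.
  extern int global_table[1024];

  int sum_indexed(const int *idx, int n) {
    int s = 0;
    for (int i = 0; i < n; ++i)
      s += global_table[idx[i]];   // may vectorize into a gather
    return s;
  }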

Differential Revision: https://reviews.llvm.org/D39911

llvm-svn: 317947
2017-11-10 22:50:50 +00:00
Craig Topper cad1c95b31 [X86] Add test case to demonstrate failure to fold the address computation of a simple gather from a global array. NFC
llvm-svn: 317905
2017-11-10 18:48:18 +00:00
Jatin Bhateja aaa5944ad4 [WebAssembly] Fix stack offsets of return values from call lowering.
Summary: Fixes PR35220

Reviewers: vadimcn, alexcrichton

Reviewed By: alexcrichton

Subscribers: pepyakin, alexcrichton, jfb, dschuff, sbc100, jgravelle-google, llvm-commits, aheejin

Differential Revision: https://reviews.llvm.org/D39866

llvm-svn: 317895
2017-11-10 16:26:04 +00:00
Simon Pilgrim c0eef8f6b0 [X86] Add scheduling tests for DAA/DAS
llvm-svn: 317892
2017-11-10 15:49:41 +00:00
Simon Pilgrim f9f1064993 [X86] Test non-i64 shld/shll tests on x86_64 targets as well as i686
llvm-svn: 317888
2017-11-10 13:43:04 +00:00
Simon Pilgrim a9d58fae6a [X86] Add scheduling tests
- CBW etc. sign extensions
- CLC/CLD/CMC flag modifiers
- CPUID

llvm-svn: 317885
2017-11-10 12:32:34 +00:00
Alexander Timofeev 28da06778f [AMDGPU] Prevent Machine Copy Propagation from replacing live copy with the dead one
Differential revision: https://reviews.llvm.org/D38754

llvm-svn: 317884
2017-11-10 12:21:10 +00:00
Simon Pilgrim 7ca61e31ac [X86] Added TODO list for missing generic x86 instruction scheduling tests.
Not sure if we want to add the more exotic system instructions (IRET etc.) as well?

llvm-svn: 317882
2017-11-10 12:04:39 +00:00
Jonas Paulsson 4b017e682d [RegAlloc, SystemZ] Increase number of LOCRs by passing "hard" regalloc hints.
* The method getRegAllocationHints() is now of bool type instead of void. If
true is returned, regalloc (AllocationOrder) will *only* try to allocate the
hints, as opposed to merely trying them before non-hinted registers.

* TargetRegisterInfo::getRegAllocationHints() is implemented for SystemZ with
an increase in number of LOCRs.

In this case, it is desired to force the hints even though there is a slight
increase in spilling, because if a non-hinted register would be allocated,
the LOCRMux pseudo would have to be expanded with a jump sequence. The LOCR
(Load On Condition) SystemZ instruction must have both operands in either the
low or high part of the 64 bit register.

Reviewers: Quentin Colombet and Ulrich Weigand
https://reviews.llvm.org/D36795

llvm-svn: 317879
2017-11-10 08:46:26 +00:00
Craig Topper 1a0da2db5f [X86] Add support for combining FMADDSUB(A, B, FNEG(C))->FMSUBADD(A, B, C)
Support the opposite direction as well. Also add a TODO for not being able to combine FMSUB/FNMADD/FNMSUB with FNEG.
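The per-lane identity behind the combine, checked with a small scalar sketch (lane convention follows the usual even-subtract/odd-add definition; only the duality matters):

  #include <cassert>

  // FMADDSUB subtracts c in even lanes and adds it in odd lanes; FMSUBADD is
  // the opposite. Negating c therefore turns one into the other.
  double fmaddsub_lane(double a, double b, double c, bool even) {
    return even ? a * b - c : a * b + c;
  }
  double fmsubadd_lane(double a, double b, double c, bool even) {
    return even ? a * b + c : a * b - c;
  }

  int main() {
    for (bool even : {false, true})
      assert(fmaddsub_lane(2.0, 3.0, -5.0, even) ==
             fmsubadd_lane(2.0, 3.0, 5.0, even));
  }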

llvm-svn: 317878
2017-11-10 08:22:37 +00:00
Yaxun Liu 35845f06a4 [AMDGPU] Fix pointer info for lowering load/store for r600 for amdgiz environment
r600 uses dummy pointer info for lowering load/store. Since dummy pointer info
assumes address space 0, this causes an isel failure when temporary load/store
SDNodes are generated for the amdgiz environment.

Since the offset is not constant, the FixedStack pseudo source value cannot be
used to create the pointer info. This patch creates the pointer info using an
llvm undef value. At least this provides the correct address space so that
isel can be done correctly.

Differential Revision: https://reviews.llvm.org/D39698

llvm-svn: 317862
2017-11-10 02:03:28 +00:00
Yaxun Liu 920cc2f813 [AMDGPU] Fix pointer info for pseudo source for r600
The pointer info for the pseudo source for r600 is not correct when the
alloca address space is not 0, which causes an invalid SDNode for r600---amdgiz.

This patch fixes that.

Differential Revision: https://reviews.llvm.org/D39670

llvm-svn: 317861
2017-11-10 01:53:24 +00:00
Ulrich Weigand d39e9dca1b [SystemZ] Add support for the "o" inline asm constraint
We don't really need any special handling of "offsettable"
memory addresses, but since some existing code uses inline
asm statements with the "o" constraint, add support for this
constraint for compatibility purposes.
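A hypothetical example of the pre-existing style of inline asm this keeps compiling (the instruction choice is purely illustrative):

  // The "o" constraint asks for an offsettable memory operand; on SystemZ it
  // is now accepted and handled like an ordinary memory operand.
  long load_value(long *p) {
    long v;
    __asm__("lg %0, %1" : "=r"(v) : "o"(*p));
    return v;
  }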

llvm-svn: 317807
2017-11-09 16:31:57 +00:00
Simon Dardis c2d3e38ba6 [mips] Correct microMIPS's jump and add unconditional branch pseudo
Correct the definition of 'j' as being unavailable for microMIPS32R6 and
provide the 'b' assembly idiom for codegen purposes for microMIPS32r3.

Provide the necessary 'br' pattern for microMIPS32R6 as it no longer
incorrectly uses the 'j' instruction.

Reviewers: atanasyan

Differential Revision: https://reviews.llvm.org/D39741

llvm-svn: 317801
2017-11-09 16:02:18 +00:00
Alex Bradbury 18ff303bed [RISCV] Re-generate test/CodeGen/RISCV/alu32.ll using update_llc_test_checks.py
No real change, but makes it marginally easier to merge the remainder of the
out-of-tree patches.

llvm-svn: 317796
2017-11-09 15:45:42 +00:00
Andrew V. Tischenko f8c75b8794 Sched model improvements for btver2: JFPU01 resource, vtestp* for xmm.
Differential Revision: https://reviews.llvm.org/D39802

llvm-svn: 317785
2017-11-09 14:19:59 +00:00
Andrew V. Tischenko 3543f0a712 Add -print-schedule scheduling comments to inline asm.
Differential Revision: https://reviews.llvm.org/D39728

llvm-svn: 317782
2017-11-09 12:45:40 +00:00
Craig Topper 7a6e294a6c [X86] Make X86ISD::FMADDS3 isel patterns commutable.
This was missed when FMADDS3 was split from X86ISD::FMADDS3_RND.

llvm-svn: 317769
2017-11-09 06:17:05 +00:00
Marek Olsak 58410f37ff AMDGPU: Merge BUFFER_STORE_DWORD_OFFEN/OFFSET into x2, x4
Summary:
Only 56 shaders (out of 48486) are affected.

Totals from affected shaders (changed stats only):
SGPRS: 2420 -> 2460 (1.65 %)
Spilled VGPRs: 94 -> 112 (19.15 %)
Scratch size: 524 -> 528 (0.76 %) dwords per thread
Code Size: 187400 -> 184992 (-1.28 %) bytes

One DiRT Showdown shader spills 6 more VGPRs.
One Grid Autosport shader spills 12 more VGPRs.

The other 54 shaders only have a decrease in code size.
(I'm ignoring the SGPR noise)

Reviewers: arsenm, nhaehnle

Subscribers: kzhuravl, wdng, yaxunl, dstuttard, tpr, llvm-commits, t-tye

Differential Revision: https://reviews.llvm.org/D39012

llvm-svn: 317755
2017-11-09 01:52:55 +00:00
Marek Olsak 5cec64195c AMDGPU: Lower buffer store and atomic intrinsics manually
Summary:
Without this, SIMemoryLegalizer inserts s_waitcnt vmcnt(0) before every
buffer store and atomic instruction.

Reviewers: arsenm, nhaehnle

Subscribers: kzhuravl, wdng, yaxunl, dstuttard, tpr, llvm-commits, t-tye

Differential Revision: https://reviews.llvm.org/D39060

llvm-svn: 317754
2017-11-09 01:52:48 +00:00
Marek Olsak 4c421a2db2 AMDGPU: Merge BUFFER_LOAD_DWORD_OFFSET into x2, x4
Summary: Only 3 (out of 48486) shaders are affected.

Reviewers: arsenm, nhaehnle

Subscribers: kzhuravl, wdng, yaxunl, dstuttard, tpr, t-tye, llvm-commits

Differential Revision: https://reviews.llvm.org/D38951

llvm-svn: 317753
2017-11-09 01:52:36 +00:00
Marek Olsak 6a0548acaa AMDGPU: Merge BUFFER_LOAD_DWORD_OFFEN into x2, x4
Summary:
-9.9% code size decrease in affected shaders.

Totals (changed stats only):
SGPRS: 2151462 -> 2170646 (0.89 %)
VGPRS: 1634612 -> 1640288 (0.35 %)
Spilled SGPRs: 8942 -> 8940 (-0.02 %)
Code Size: 52940672 -> 51727288 (-2.29 %) bytes
Max Waves: 373066 -> 371718 (-0.36 %)

Totals from affected shaders:
SGPRS: 283520 -> 302704 (6.77 %)
VGPRS: 227632 -> 233308 (2.49 %)
Spilled SGPRs: 3966 -> 3964 (-0.05 %)
Code Size: 12203080 -> 10989696 (-9.94 %) bytes
Max Waves: 44070 -> 42722 (-3.06 %)

Reviewers: arsenm, nhaehnle

Subscribers: kzhuravl, wdng, yaxunl, dstuttard, tpr, llvm-commits, t-tye

Differential Revision: https://reviews.llvm.org/D38950

llvm-svn: 317752
2017-11-09 01:52:30 +00:00
Marek Olsak b953cc36e2 AMDGPU: Merge S_BUFFER_LOAD_DWORD_IMM into x2, x4
Summary:
Only constant offsets (*_IMM opcodes) are merged.
It reuses code for LDS load/store merging.
It relies on the scheduler to group loads.

The results are mixed, I think they are mostly positive. Most shaders are
affected, so here are total stats only:

 SGPRS: 2072198 -> 2151462 (3.83 %)
 VGPRS: 1628024 -> 1634612 (0.40 %)
 Spilled SGPRs: 7883 -> 8942 (13.43 %)
 Spilled VGPRs: 97 -> 101 (4.12 %)
 Scratch size: 1488 -> 1492 (0.27 %) dwords per thread
 Code Size: 60222620 -> 52940672 (-12.09 %) bytes
 Max Waves: 374337 -> 373066 (-0.34 %)

There is 13.4% increase in SGPR spilling, DiRT Showdown spills a few more
VGPRs (now 37), but 12% decrease in code size.

These are the new stats for SGPR spilling. We already spill a lot SGPRs,
so it's uncertain whether more spilling will make any difference since
SGPRs are always spilled to VGPRs:

 SGPR SPILLING APPS   Shaders SpillSGPR AvgPerSh
 alien_isolation         2938       100      0.0
 batman_arkham_origins    589         6      0.0
 bioshock-infinite       1769         4      0.0
 borderlands2            3968        22      0.0
 counter_strike_glob..   1142        60      0.1
 deus_ex_mankind_div..   1410        79      0.1
 dirt-showdown            533         4      0.0
 dirt_rally               364      1163      3.2
 divinity                1052         2      0.0
 dota2                   1747         7      0.0
 f1-2015                  776      1515      2.0
 grid_autosport          1767      1505      0.9
 hitman                  1413       273      0.2
 left_4_dead_2           1762         4      0.0
 life_is_strange         1296        26      0.0
 mad_max                  358        96      0.3
 metro_2033_redux        2670        60      0.0
 payday2                 1362        22      0.0
 portal                   474         3      0.0
 saints_row_iv           1704         8      0.0
 serious_sam_3_bfe        392      1348      3.4
 shadow_of_mordor        1418        12      0.0
 shadow_warrior          3956       239      0.1
 talos_principle          324      1735      5.4
 thea                     172        17      0.1
 tomb_raider             1449       215      0.1
 total_war_warhammer      242        56      0.2
 ue4_effects_cave         295        55      0.2
 ue4_elemental            572        12      0.0
 unigine_tropics          210        56      0.3
 unigine_valley           278       152      0.5
 victor_vran             1262        84      0.1
 yofrankie                 82         2      0.0

Reviewers: arsenm, nhaehnle

Subscribers: kzhuravl, wdng, yaxunl, dstuttard, tpr, llvm-commits, t-tye

Differential Revision: https://reviews.llvm.org/D38949

llvm-svn: 317751
2017-11-09 01:52:23 +00:00
Marek Olsak ffadcb744b AMDGPU: Fold immediate offset into BUFFER_LOAD_DWORD lowered from SMEM
Summary:
-5.3% code size in affected shaders.

Changed stats only:

48486 shaders in 30489 tests
Totals:
SGPRS: 2086406 -> 2072430 (-0.67 %)
VGPRS: 1626872 -> 1627960 (0.07 %)
Spilled SGPRs: 7865 -> 7912 (0.60 %)
Code Size: 60978060 -> 60188764 (-1.29 %) bytes
Max Waves: 374530 -> 374342 (-0.05 %)

Totals from affected shaders:
SGPRS: 299664 -> 285688 (-4.66 %)
VGPRS: 233844 -> 234932 (0.47 %)
Spilled SGPRs: 3959 -> 4006 (1.19 %)
Code Size: 14905272 -> 14115976 (-5.30 %) bytes
Max Waves: 46202 -> 46014 (-0.41 %)

Reviewers: arsenm, nhaehnle

Subscribers: kzhuravl, wdng, yaxunl, dstuttard, tpr, llvm-commits, t-tye

Differential Revision: https://reviews.llvm.org/D38915

llvm-svn: 317750
2017-11-09 01:52:17 +00:00
Craig Topper 93e27d2ecc [X86] Make sure we don't read too many operands from X86ISD::FMADDS1/FMADDS3 nodes when doing FNEG combine.
r317453 added new ISD nodes without rounding modes that were added to an existing if/else chain. But all the previous nodes handled there included a rounding mode. The final code after this if/else chain expected an extra operand that isn't present for the new nodes.

llvm-svn: 317748
2017-11-09 01:06:47 +00:00
Craig Topper 61f81f9637 [X86] Preserve memory refs when folding loads into divides.
This is similar to what we already do for multiplies. Without this we can't unfold and hoist an invariant load.

llvm-svn: 317732
2017-11-08 22:26:39 +00:00
Dan Gohman 2c74fe977d Add an @llvm.sideeffect intrinsic
This patch implements Chandler's idea [0] for supporting languages that
require support for infinite loops with side effects, such as Rust, providing
part of a solution to bug 965 [1].

Specifically, it adds an `llvm.sideeffect()` intrinsic, which has no actual
effect, but which appears to optimization passes to have obscure side effects,
such that they don't optimize away loops containing it. It also teaches
several optimization passes to ignore this intrinsic, so that it doesn't
significantly impact optimization in most cases.
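A hedged illustration of the kind of loop this is meant to protect (the intrinsic itself is emitted by the frontend, not written by hand):

  // An intentionally infinite loop with no visible side effects. Under C++
  // forward-progress rules LLVM may delete it; a frontend for a language such
  // as Rust would emit a call to @llvm.sideeffect() in the body to keep it.
  [[noreturn]] void spin_forever() {
    for (;;) {
      // no loads, stores, or calls here
    }
  }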

As discussed on llvm-dev [2], this patch is the first of two major parts.
The second part, to change LLVM's semantics to have defined behavior
on infinite loops by default, with a function attribute for opting into
potential-undefined-behavior, will be implemented and posted for review in
a separate patch.

[0] http://lists.llvm.org/pipermail/llvm-dev/2015-July/088103.html
[1] https://bugs.llvm.org/show_bug.cgi?id=965
[2] http://lists.llvm.org/pipermail/llvm-dev/2017-October/118632.html

Differential Revision: https://reviews.llvm.org/D38336

llvm-svn: 317729
2017-11-08 21:59:51 +00:00
Reid Kleckner 7adb2fdbba Revert "Correct dwarf unwind information in function epilogue for X86"
This reverts r317579, originally committed as r317100.

There is a design issue with marking CFI instructions duplicatable. Not
all targets support the CFIInstrInserter pass, and targets like Darwin
can't cope with duplicated prologue setup CFI instructions. The compact
unwind info emission fails.

When the following code is compiled for arm64 on Mac at -O3, the CFI
instructions end up getting tail duplicated, which causes compact unwind
info emission to fail:
  int a, c, d, e, f, g, h, i, j, k, l, m;
  void n(int o, int *b) {
    if (g)
      f = 0;
    for (; f < o; f++) {
      m = a;
      if (l > j * k > i)
        j = i = k = d;
      h = b[c] - e;
    }
  }

We get assembly that looks like this:
; BB#1:                                 ; %if.then
Lloh3:
	adrp	x9, _f@GOTPAGE
Lloh4:
	ldr	x9, [x9, _f@GOTPAGEOFF]
	mov	 w8, wzr
Lloh5:
	str		wzr, [x9]
	stp	x20, x19, [sp, #-16]!   ; 8-byte Folded Spill
	.cfi_def_cfa_offset 16
	.cfi_offset w19, -8
	.cfi_offset w20, -16
	cmp		w8, w0
	b.lt	LBB0_3
	b	LBB0_7
LBB0_2:                                 ; %entry.if.end_crit_edge
Lloh6:
	adrp	x8, _f@GOTPAGE
Lloh7:
	ldr	x8, [x8, _f@GOTPAGEOFF]
Lloh8:
	ldr		w8, [x8]
	stp	x20, x19, [sp, #-16]!   ; 8-byte Folded Spill
	.cfi_def_cfa_offset 16
	.cfi_offset w19, -8
	.cfi_offset w20, -16
	cmp		w8, w0
	b.ge	LBB0_7
LBB0_3:                                 ; %for.body.lr.ph

Note the multiple .cfi_def* directives. Compact unwind info emission
can't handle that.

llvm-svn: 317726
2017-11-08 21:31:14 +00:00
Dan Gohman 7726026061 [WebAssembly] Add a test for inline-asm "m" constraints.
llvm-svn: 317711
2017-11-08 19:37:24 +00:00
Dan Gohman 0828ba1e1e [WebAssembly] Call signExtend to get sign extended register
Patch by Jatin Bhateja!

Differential Revision: https://reviews.llvm.org/D39529

llvm-svn: 317710
2017-11-08 19:24:21 +00:00
Dan Gohman b465aa0504 [WebAssembly] Revise the strategy for inline asm.
Previously, an "r" constraint would mean the compiler provides a value
on WebAssembly's operand stack. This was tricky to use properly,
particularly since it isn't possible to declare a new local from within
an inline asm string.

With this patch, "r" provides the value in a WebAssembly local, and the
local index is provided to the inline asm string. This requires inline
asm to use get_local and set_local to read the register. This does
potentially result in larger code size; however, inline asm should
hopefully be quite rare in WebAssembly.
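
A rough C++ sketch of the new convention (a hypothetical example based on the
description above, not taken from the patch; it assumes GNU-style substitution
of the local indices into the asm string):

  int copy_through_asm(int x) {
    int y;
    // "r" now names a WebAssembly local; the asm body reads and writes it
    // explicitly with get_local / set_local.
    asm("get_local %1\n\tset_local %0" : "=r"(y) : "r"(x));
    return y;
  }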

This also means that the "m" constraint can no longer be supported, as
WebAssembly has nothing like a "memory operand" that includes an
implicit get_local.

This fixes PR34599 for the wasm32-unknown-unknown-wasm target (though
not for the ELF target).

llvm-svn: 317707
2017-11-08 19:18:08 +00:00
Simon Pilgrim 1c1fd15959 [X86] Add some initial scheduling tests for generic x86 instructions
These will use inline asm to ensure we have coverage that we're unlikely to get from lowering of basic IR.

Currently waiting for D39728 to land to add support for scheduler comments for inline asm.

llvm-svn: 317698
2017-11-08 16:35:42 +00:00
Alex Bradbury a337675cdb [RISCV] Initial support for function calls
Note that this is just enough for simple function call examples to generate 
working code. Support for varargs etc follows in future patches.

Differential Revision: https://reviews.llvm.org/D29936

llvm-svn: 317691
2017-11-08 13:41:21 +00:00
Alex Bradbury 74913e1c70 [RISCV] Codegen for conditional branches
A good portion of this patch is the extra functions that needed to be 
implemented to support the test case. e.g. storeRegToStackSlot, 
loadRegFromStackSlot, eliminateFrameIndex.

Setting ISD::BR_CC to Expand may appear non-obvious on an architecture with 
branch+cmp instructions. However, I found it much easier to deal with matching 
the expanded form.

I had to change simm13_lsb0 and simm21_lsb0 to inherit from the 
Operand<OtherVT> class rather than Operand<i32> in order to keep tablegen 
happy. This isn't a big deal, but it does seem a shame to lose the uniformity 
across immediate types when there's not an obvious benefit (I'm hoping a 
tablegen expert will educate me on what I'm missing here!).

Differential Revision: https://reviews.llvm.org/D29935

llvm-svn: 317690
2017-11-08 13:31:40 +00:00
Alex Bradbury ec8aa91305 [RISCV] Codegen support for memory operations on global addresses
Differential Revision: https://reviews.llvm.org/D39103

llvm-svn: 317688
2017-11-08 13:24:21 +00:00
Alex Bradbury cfa6291bb1 [RISCV] Codegen support for memory operations
This required the implementation of RISCVTargetInstrInfo::copyPhysReg. Support
for lowering global addresses follows in the next patch.

Differential Revision: https://reviews.llvm.org/D29934

llvm-svn: 317685
2017-11-08 12:20:01 +00:00
Alex Bradbury 0f0e1b54f0 [RISCV] Codegen support for materializing constants
Differential Revision: https://reviews.llvm.org/D39101

llvm-svn: 317684
2017-11-08 12:02:22 +00:00
Simon Dardis 789f7ca265 [mips] Guard indirect and tailcall pseudo instructions correctly.
Previously these pseudo instructions were not guarded by ISA, so their
selection was dependent on the ordering of the entries in the DAG matcher.

Reviewers: atanasyan

Differential Revision: https://reviews.llvm.org/D39723

llvm-svn: 317681
2017-11-08 11:13:44 +00:00
Craig Topper 65e6d0b758 [X86] Add patterns to fold EVEX store with EVEX encoded vcvtps2ph instructions. Remove a bad pattern that had v4f32 vcvtps2ph storing 128 bits.
llvm-svn: 317662
2017-11-08 04:00:31 +00:00
Craig Topper b832ee68b4 [X86] Allow legacy vcvtps2ph intrinsics to select EVEX encoded instructions. Rely on EVEX->VEX to convert back.
Missed store folding opportunities will be fixed in a subsequent commit.

llvm-svn: 317661
2017-11-08 04:00:30 +00:00
Matt Arsenault 4709ab9124 AMDGPU: Set correct sched model on v_mad_u64_u32
llvm-svn: 317645
2017-11-08 00:48:25 +00:00
Sriraman Tallam 056b3fd6fb Attribute nonlazybind should not affect calls to functions with hidden visibility.
Differential Revision: https://reviews.llvm.org/D39625

llvm-svn: 317639
2017-11-08 00:01:05 +00:00
Justin Lebar da9e0bd3a2 [NVPTX] Implement __nvvm_atom_add_gen_d builtin.
Summary:
This just seems to have been an oversight.  We already supported the f64
atomic add with an explicit scope (e.g. "cta"), but not the scopeless
version.

Reviewers: tra

Subscribers: jholewinski, sanjoy, cfe-commits, llvm-commits, hiraditya

Differential Revision: https://reviews.llvm.org/D39638

llvm-svn: 317623
2017-11-07 22:10:54 +00:00
Graham Yiu 5cd044e8c8 Use the new vector insert half-word and byte instructions when we see insertelement on '8 x i16' and '16 x i8' types. Also extend the existing lit testcase to cover these cases.
Differential Revision: https://reviews.llvm.org/D34630

llvm-svn: 317613
2017-11-07 20:55:43 +00:00
Petar Jovanovic e2a585dddc Reland "Correct dwarf unwind information in function epilogue for X86"
Reland r317100 with a minor fix regarding the ComputeCommonTailLength function in
BranchFolding.cpp. Skipping the top block of CFI instructions needs to be executed
at several more return points in ComputeCommonTailLength().

Original r317100 message:

"Correct dwarf unwind information in function epilogue for X86"

This patch aims to provide correct dwarf unwind information in function
epilogue for X86.

It consists of two parts. The first part inserts CFI instructions that set
appropriate cfa offset and cfa register in emitEpilogue() in
X86FrameLowering. This part is X86 specific.

The second part is platform independent and ensures that:

- CFI instructions do not affect code generation
- Unwind information remains correct when a function is modified by
  different passes. This is done in a late pass by analyzing information
  about cfa offset and cfa register in BBs and inserting additional CFI
  directives where necessary.

Changed CFI instructions so that they:

- are duplicable
- are not counted as instructions when tail duplicating or tail merging
- can be compared as equal

Added CFIInstrInserter pass:

- analyzes each basic block to determine cfa offset and register valid at
  its entry and exit
- verifies that outgoing cfa offset and register of predecessor blocks match
  incoming values of their successors
- inserts additional CFI directives at basic block beginning to correct the
  rule for calculating CFA

Having CFI instructions in function epilogue can cause incorrect CFA
calculation rule for some basic blocks. This can happen if, due to basic
block reordering, or the existence of multiple epilogue blocks, some of the
blocks have wrong cfa offset and register values set by the epilogue block
above them.

CFIInstrInserter is currently run only on X86, but can be used by any target
that implements support for adding CFI instructions in epilogue.

Patch by Violeta Vukobrat.

llvm-svn: 317579
2017-11-07 14:40:27 +00:00
Simon Pilgrim 9a6b720f4f [X86] Regenerate select tests
llvm-svn: 317571
2017-11-07 13:21:02 +00:00
Kristof Beyls af9814a1fc [GlobalISel] Enable legalizing non-power-of-2 sized types.
This changes the interface of how targets describe how to legalize, see
the below description.

1. Interface for targets to describe how to legalize.

In GlobalISel, the API in the LegalizerInfo class is the main interface
for targets to specify which types are legal for which operations, and
what to do to turn illegal type/operation combinations into legal ones.

For each operation the type sizes that can be legalized without having
to change the size of the type are specified with a call to setAction.
This isn't different to how GlobalISel worked before. For example, for a
target that supports 32 and 64 bit adds natively:

  for (auto Ty : {s32, s64})
    setAction({G_ADD, 0, Ty}, Legal);

or for a target that needs a library call for a 32 bit division:

  setAction({G_SDIV, s32}, Libcall);

The main conceptual change to the LegalizerInfo API, is in specifying
how to legalize the type sizes for which a change of size is needed. For
example, in the above example, how to specify how all types from i1 to
i8388607 (apart from s32 and s64 which are legal) need to be legalized
and expressed in terms of operations on the available legal sizes
(again, i32 and i64 in this case). Before, the implementation only
allowed specifying power-of-2-sized types (e.g. setAction({G_ADD, 0,
s128}, NarrowScalar)).  A worse limitation was that if you'd wanted to
specify how to legalize all the sized types as allowed by the LLVM-IR
LangRef, i1 to i8388607, you'd have to call setAction 8388607-3 times
and probably would need a lot of memory to store all of these
specifications.

Instead, the legalization actions that need to change the size of the
type are specified now using a "SizeChangeStrategy".  For example:

   setLegalizeScalarToDifferentSizeStrategy(
       G_ADD, 0, widenToLargerAndNarrowToLargest);

This example indicates that for type sizes for which there is a larger
size that can be legalized towards, do it by Widening the size.
For example, G_ADD on s17 will be legalized by first doing WidenScalar
to make it s32, after which it's legal.
The "NarrowToLargest" indicates what to do if there is no larger size
that can be legalized towards. E.g. G_ADD on s92 will be legalized by
doing NarrowScalar to s64.

Another example, taken from the ARM backend is:
   for (unsigned Op : {G_SDIV, G_UDIV}) {
     setLegalizeScalarToDifferentSizeStrategy(Op, 0,
         widenToLargerTypesUnsupportedOtherwise);
     if (ST.hasDivideInARMMode())
       setAction({Op, s32}, Legal);
     else
       setAction({Op, s32}, Libcall);
   }

For this example, G_SDIV on s8, on a target without a divide
instruction, would be legalized by first doing action (WidenScalar,
s32), followed by (Libcall, s32).

The same principle is also followed for when the number of vector lanes
on vector data types need to be changed, e.g.:

   setAction({G_ADD, LLT::vector(8, 8)}, LegalizerInfo::Legal);
   setAction({G_ADD, LLT::vector(16, 8)}, LegalizerInfo::Legal);
   setAction({G_ADD, LLT::vector(4, 16)}, LegalizerInfo::Legal);
   setAction({G_ADD, LLT::vector(8, 16)}, LegalizerInfo::Legal);
   setAction({G_ADD, LLT::vector(2, 32)}, LegalizerInfo::Legal);
   setAction({G_ADD, LLT::vector(4, 32)}, LegalizerInfo::Legal);
   setLegalizeVectorElementToDifferentSizeStrategy(
       G_ADD, 0, widenToLargerTypesUnsupportedOtherwise);

As currently implemented here, vector types are legalized by first
making the vector element size legal, followed by making the number
of lanes legal. The strategy to follow in the first step is set by a
call to setLegalizeVectorElementToDifferentSizeStrategy, see example
above.  The strategy followed in the second step is
"moreToWiderTypesAndLessToWidest" (see code for its definition),
indicating that vectors are widened to more elements so they map to
natively supported vector widths, or when there isn't a legal wider
vector, split the vector to map it to the widest vector supported.

Therefore, for the above specification, some example legalizations are:
  * getAction({G_ADD, LLT::vector(3, 3)})
    returns {WidenScalar, LLT::vector(3, 8)}
  * getAction({G_ADD, LLT::vector(3, 8)})
    then returns {MoreElements, LLT::vector(8, 8)}
  * getAction({G_ADD, LLT::vector(20, 8)})
    returns {FewerElements, LLT::vector(16, 8)}


2. Key implementation aspects.

How to legalize a specific (operation, type index, size) tuple is
represented by mapping intervals of integers representing a range of
size types to an action to take, e.g.:

        setScalarAction({G_ADD, LLT::scalar(1)},
                       {{1, WidenScalar},  // bit sizes [ 1, 31[
                        {32, Legal},       // bit sizes [32, 33[
                        {33, WidenScalar}, // bit sizes [33, 64[
                        {64, Legal},       // bit sizes [64, 65[
                        {65, NarrowScalar} // bit sizes [65, +inf[
                       });

Please note that most of the code to do the actual lowering of
non-power-of-2 sized types is currently missing; this is just trying to
make it possible for targets to specify what is legal, and how non-legal
types should be legalized.  Probably quite a bit of further work is
needed in the actual legalizing and the other passes in GlobalISel to
support non-power-of-2 sized types.

I hope the documentation in LegalizerInfo.h and the examples provided in the
various {Target}LegalizerInfo.cpp and LegalizerInfoTest.cpp explain well
enough how this is meant to be used.

This drops the need for LLT::{half,double}...Size().


Differential Revision: https://reviews.llvm.org/D30529

llvm-svn: 317560
2017-11-07 10:34:34 +00:00
Bjorn Steinbrink c02b237e46 [X86] Don't clobber reserved registers with stack adjustments
Summary:
Calls using invoke in funclet based functions are assumed to clobber
all registers, which causes the stack adjustment using pops to consider
all registers not defined by the call to be undefined, which can
unfortunately include the base pointer, if one is needed.

To prevent this (and possibly other hazards), skip reserved registers
when looking for candidate registers.

This fixes issue #45034 in the Rust compiler.

Reviewers: mkuper

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D39636

llvm-svn: 317551
2017-11-07 08:50:21 +00:00
Craig Topper e7fb300226 [X86] Add patterns to fold a 64-bit load into the EVEX vcvtph2ps instructions.
llvm-svn: 317548
2017-11-07 07:13:07 +00:00
Craig Topper 0231b1d445 [X86] Add patterns for folding a v16i8 with the VEX vcvtph2ps intrinsics.
Disable the peephole pass to prove that the pattern is working.

llvm-svn: 317547
2017-11-07 07:13:06 +00:00
Craig Topper 65fc53320b [X86] Add a test for a 128-bit vector load feeding a cvtph2ps intrinsic.
The instruction only loads 64-bits, but we should be able to fold a wider load and let it be narrowed.

llvm-svn: 317546
2017-11-07 07:13:05 +00:00
Craig Topper 8942b33f84 [X86] Remove alignment from a load in the f16c intrinsic test. The alignment shouldn't be required for load folding.
llvm-svn: 317545
2017-11-07 07:13:04 +00:00
Craig Topper cf8e6d0a76 [X86] Add support for using EVEX instructions for the legacy vcvtph2ps intrinsics.
Looks like there's some missed load folding opportunities for i64 loads.

llvm-svn: 317544
2017-11-07 07:13:03 +00:00
Craig Topper 75510dd6f7 [X86] Add AVX512VL command line to f16c intrinsic test to show missed EVEX opportunities for the legacy intrinsics.
llvm-svn: 317543
2017-11-07 07:13:01 +00:00
Craig Topper afc3c8206e [X86] Use IMPLICIT_DEF in VEX/EVEX vcvtss2sd/vcvtsd2ss patterns instead of a COPY_TO_REGCLASS.
ExeDepsFix pass should take care of making the registers match.

llvm-svn: 317542
2017-11-07 04:44:22 +00:00
Craig Topper 428a4e6374 [X86] Make FeatureAVX512 imply FeatureF16C.
The EVEX to VEX pass is already assuming this is true under AVX512VL. We had special patterns to use zmm instructions if VLX and F16C weren't available.

Instead just make AVX512 imply F16C to make the EVEX to VEX behavior explicitly legal and remove the extra patterns.

All known CPUs with AVX512 have F16C, so this should be safe for now.

llvm-svn: 317521
2017-11-06 22:49:04 +00:00
Bjorn Pettersson a42ed3e361 [MIRPrinter] Use %subreg.xxx syntax for subregister index operands
Summary:
Print %subreg.<subregidxname> instead of just the subregister
index when printing immediate operands corresponding to subreg
indices in INSERT_SUBREG, EXTRACT_SUBREG, SUBREG_TO_REG and
REG_SEQUENCE.

Reviewers: qcolombet, MatzeB

Reviewed By: MatzeB

Subscribers: nhaehnle, javed.absar, llvm-commits

Differential Revision: https://reviews.llvm.org/D39696

llvm-svn: 317513
2017-11-06 21:46:06 +00:00
Graham Yiu 030621bbcb Adds code to PPC ISEL lowering to recognize byte inserts from vector_shuffles, and use P9 shift and vector insert byte instructions instead of vperm. Extends tests from vector insert half-word.
Differential Revision: https://reviews.llvm.org/D34497

llvm-svn: 317503
2017-11-06 20:18:30 +00:00
Guozhi Wei e3b8d9a312 [PPC] Use xxbrd to speed up bswap64
Power doesn't have bswap instructions, so llvm generates the following code sequence for bswap64.

  rotldi   5, 3, 16
  rotldi   4, 3, 8
  rotldi   9, 3, 24
  rotldi   10, 3, 32
  rotldi   11, 3, 48
  rotldi   12, 3, 56
  rldimi 4, 5, 8, 48
  rldimi 4, 9, 16, 40
  rldimi 4, 10, 24, 32
  rldimi 4, 11, 40, 16
  rldimi 4, 12, 48, 8
  rldimi 4, 3, 56, 0

But Power9 has vector bswap instructions, which can also be used to speed up the scalar bswap intrinsic. With this patch, bswap64 can be translated to:

  mtvsrdd 34, 3, 3
  xxbrd 34, 34
  mfvsrld 3, 34
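
For reference, the kind of source that lowers to the llvm.bswap.i64 intrinsic
(a hedged C++ example, not taken from the patch's tests):

  #include <cstdint>

  uint64_t byte_reverse(uint64_t x) {
    // __builtin_bswap64 maps to llvm.bswap.i64; on a Power9 target this
    // should now select the mtvsrdd/xxbrd/mfvsrld sequence shown above.
    return __builtin_bswap64(x);
  }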

Differential Revision: https://reviews.llvm.org/D39510

llvm-svn: 317499
2017-11-06 19:09:38 +00:00
Matt Arsenault 4f6318fe1b AMDGPU: Select v_mad_u64_u32 and v_mad_i64_i32
llvm-svn: 317492
2017-11-06 17:04:37 +00:00
Yaxun Liu cc56a8b108 [AMDGPU] Change alloca addr space of r600 to 5 for amdgiz environment
Differential Revision: https://reviews.llvm.org/D39657

llvm-svn: 317479
2017-11-06 14:32:33 +00:00
Yaxun Liu 1ac16619d2 [AMDGPU] Fix assertion due to assuming pointer in default addr space is 32 bit
The backend assumes pointer in default addr space is 32 bit, which is not
true for the new addr space mapping and causes assertion for unresolved
functions.

This patch fixes that.

Differential Revision: https://reviews.llvm.org/D39643

llvm-svn: 317476
2017-11-06 13:01:33 +00:00
Uriel Korach bb86686a8b [X86][AVX512] Improve lowering of AVX512 test intrinsics
Added TESTM and TESTNM to the list of instructions that already zero the unused upper bits
and so do not need the redundant shift-left and shift-right instructions afterwards.
Added a pattern for TESTM and TESTNM in ISel lowering, so now icmp(ne, and(X, Y), 0) folds into TESTM
and icmp(eq, and(X, Y), 0) folds into TESTNM.
This commit is a preparation for lowering the test and testn X86 intrinsics to IR.
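
For illustration, the existing AVX-512 C intrinsics that map onto these
instructions (a hedged example, not code from this patch):

  #include <immintrin.h>

  // icmp(ne, and(X, Y), 0) per 64-bit element -> vptestmq
  __mmask8 testm_like(__m512i x, __m512i y) {
    return _mm512_test_epi64_mask(x, y);
  }

  // icmp(eq, and(X, Y), 0) per 64-bit element -> vptestnmq
  __mmask8 testnm_like(__m512i x, __m512i y) {
    return _mm512_testn_epi64_mask(x, y);
  }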

Differential Revision: https://reviews.llvm.org/D38732

llvm-svn: 317465
2017-11-06 09:22:38 +00:00
Zvi Rackover 3122698040 X86 ISel: Basic support for variable-index vector permutations
Summary:
Try to lower a BUILD_VECTOR composed of extract-extract chains that can be
reasoned to be a permutation of a vector by indices in a non-constant vector.

We saw this pattern created by ISPC, which resorts to creating it due to the
requirement that shufflevector's mask operand be a *constant* vector.
I didn't check this, but we could possibly use this pattern for lowering the X86 permute
C intrinsics instead of the llvm.x86 intrinsics.
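
The scalar shape of the pattern, for intuition (a hypothetical C++ sketch, not
the ISPC output itself):

  // Gather lanes of 'v' at non-constant indices; shufflevector cannot
  // express this because its mask must be a constant vector, so the
  // frontend emits an extract/insert (BUILD_VECTOR) chain instead.
  void permute4(const int *v, const int *idx, int *out) {
    for (int i = 0; i < 4; ++i)
      out[i] = v[idx[i]];
  }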

This change can be followed by more improvements:
1. Handle vectors with undef elements.
2. Utilize pshufb and zero-mask-blending to support more efficient
   construction of vectors with constant-0 elements.
3. Use smaller-element vectors of same width, and "interpolate" the indices,
   when no native operation available.

Reviewers: RKSimon, craig.topper

Reviewed By: RKSimon

Subscribers: chandlerc, DavidKreitzer

Differential Revision: https://reviews.llvm.org/D39126

llvm-svn: 317463
2017-11-06 08:25:46 +00:00
Jina Nahias 7b705f1f91 [x86][AVX512] Lowering Broadcastm intrinsics to LLVM IR
This patch, together with a matching clang patch (https://reviews.llvm.org/D38683), implements the lowering of X86 broadcastm intrinsics to IR.

Differential Revision: https://reviews.llvm.org/D38684

Change-Id: I709ac0b34641095397e994c8ff7e15d1315b3540
llvm-svn: 317458
2017-11-06 07:09:24 +00:00
Craig Topper 70eaeae7f0 [X86] Use EVEX encoded intrinsics for legacy FMA intrinsics when possible.
llvm-svn: 317454
2017-11-06 05:48:26 +00:00
Craig Topper 25cfa4cb55 [X86] Add avx512vl command line to fma-instrinsics-x86.ll
Some of these demonstrate a missed EVEX to VEX compression because we aren't preferring EVEX instructions during isel.

llvm-svn: 317452
2017-11-06 05:48:24 +00:00
Craig Topper 7e48aa89c7 [X86] Simplify command lines on the fma-instrinsics-x86.ll test and add -show-mc-encoding.
Use feature names instead of CPU names.

A future commit will add avx512vl command lines to demonstrate missed use of EVEX instructions.

llvm-svn: 317451
2017-11-06 05:48:23 +00:00
Craig Topper eff606cc0e [X86] Use EVEX encoded instructions for legacy scalar sqrt intrinsics.
Fixes PR35161.

llvm-svn: 317445
2017-11-06 04:04:01 +00:00
Craig Topper 4e2f53511a [X86] Remove some more RCP and RSQRT patterns from InstrAVX512.td that I missed in r317413.
llvm-svn: 317441
2017-11-05 21:14:05 +00:00
Simon Pilgrim 879c5b15c4 [X86][SSE] Tests for integer min/max horizontal reductions
Matching patterns that vectorizers should have created for us. 

The experimental intrinsics should probably be added as well.

llvm-svn: 317439
2017-11-05 19:48:24 +00:00
Simon Pilgrim f8105cf357 [X86][AVX] Regenerate test. NFCI.
llvm-svn: 317424
2017-11-04 21:18:06 +00:00
Craig Topper 692c8efe30 [X86] Don't use RCP14 and RSQRT14 for reciprocal estimations or for legacy SSE rcp/rsqrt intrinsics when AVX512 features are enabled.
Summary:
AVX512 added RCP14 and RSQRT14 instructions which improve accuracy over the legacy RCP and RSQRT instructions, but not enough to remove the need for a Newton-Raphson refinement.

Currently we use these new instructions for the legacy packed SSE intrinsics, but not the scalar intrinsics. We also use them for fast-math optimization of division and reciprocal sqrt.

I think switching the legacy intrinsics may be surprising to the user since it changes the answer based on which processor you're using, regardless of any fast-math settings. It's also weird that we did something different between scalar and packed.

As for the reciprocal estimation, I think it creates unnecessary deltas in our output behavior (and prevents EVEX->VEX). A little playing around with gcc, icc, and godbolt suggests they don't change which instructions they use here.

This patch adds new X86ISD nodes for the RCP14/RSQRT14 and uses those for the new intrinsics. Leaving the old intrinsics to use the old instructions.

Going forward I think our focus should be on
-Supporting 512-bit vectors, which will have to use the RCP14/RSQRT14.
-Using RSQRT28/RCP28 to remove the Newton Raphson step on processors with AVX512ER
-Supporting double precision.

Reviewers: zvi, DavidKreitzer, RKSimon

Reviewed By: RKSimon

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D39583

llvm-svn: 317413
2017-11-04 18:26:41 +00:00
Craig Topper be1f219050 [X86] Regenerate a couple more tests that I missed in r317410.
llvm-svn: 317412
2017-11-04 18:26:39 +00:00
Craig Topper e5d44cefea [X86] Teach EVEX->VEX pass to turn SHUFI32X4/SHUFF32X4/SHUFI64X/SHUFF64X2 into VPERM2F128/VPERM2I128.
This recovers some of the tests that were changed by r317403.

llvm-svn: 317410
2017-11-04 18:10:03 +00:00
Yaxun Liu 0d9673cff2 [AMDGPU] Remove hardcoded address space value from AMDGPULibFunc
AMDGPULibFunc hardcodes address space values of the old address space mapping,
which causes invalid addrspacecast instructions and undefined functions in
APPSDK sample MonteCarloAsianDP.

This patch fixes that.

Differential Revision: https://reviews.llvm.org/D39616

llvm-svn: 317409
2017-11-04 17:37:43 +00:00
Craig Topper a96d62b360 [X86] Teach shuffle lowering to use 256-bit SHUF128 when possible.
This allows masked operations to be used and allows the register allocator to use YMM16-31 if necessary.

As a follow up I'll look into teaching EVEX->VEX how to turn this back into PERM2X128 if any of the additional features don't work out.

llvm-svn: 317403
2017-11-04 06:44:47 +00:00
Craig Topper d21a53f246 [X86] Give unary PERMI priority over SHUF128 in lowerV8I64VectorShuffle to make it possible to fold a load.
llvm-svn: 317382
2017-11-03 22:48:13 +00:00
Evandro Menezes 9dcf099944 [AArch64] Fix the number of iterations for the Newton series
The number of iterations was incorrectly determined for DP FP vector types
and the tests were insufficient to flag this issue.

Differential revision: https://reviews.llvm.org/D39507

llvm-svn: 317349
2017-11-03 18:56:36 +00:00
Jun Bum Lim f5fb3d745d [LICM] sink through non-trivially replicable PHI
Summary:
The current LICM allows sinking an instruction only when it is exposed to exit
blocks through a trivially replaceable PHI of which all incoming values are the
same instruction. This change enhances LICM to sink a sinkable instruction
through non-trivially replaceable PHIs by splitting predecessors of loop
exits.

Reviewers: hfinkel, majnemer, davidxl, bmakam, mcrosier, danielcdh, efriedma, jtony

Reviewed By: efriedma

Subscribers: nemanjai, dberlin, llvm-commits

Differential Revision: https://reviews.llvm.org/D37163

llvm-svn: 317335
2017-11-03 16:24:53 +00:00
Simon Dardis d3b9f61c52 [mips] Match 'ins' and its variants with C++ code
Change the ISel matching of 'ins', 'dins[mu]' from tablegen code to
C++ code. This resolves an issue where ISel would select 'dins' instead
of 'dinsm' when the instruction's size and position were individually in 
range but their sum was out of range according to the ISA specification.

Reviewers: atanasyan

Differential Revision: https://reviews.llvm.org/D39117

llvm-svn: 317331
2017-11-03 15:35:13 +00:00
Andrew V. Tischenko 0916c6b654 Fix for Bug 34475 - LOCK/REP/REPNE prefixes emitted as instruction on their own.
Differential Revision: https://reviews.llvm.org/D39546

llvm-svn: 317330
2017-11-03 15:25:13 +00:00
Clement Courbet 063bed9baf Re-land "[ExpandMemCmp] Split ExpandMemCmp from CodeGen into its own pass."
Fix undefined references: ExpandMemCmp belongs to CodeGen/, not Scalar/.

llvm-svn: 317318
2017-11-03 12:12:27 +00:00
Simon Pilgrim ae1f013495 [X86][SSE] Add PACKUS support to combineVectorTruncation
Similar to the existing code to lower to PACKSS, we can use PACKUS if the input vector's leading zero bits extend all the way to the packed/truncated value.

We have to account for pre-SSE41 targets not supporting PACKUSDW

llvm-svn: 317315
2017-11-03 11:33:48 +00:00
Diana Picus d1b618177a [globalisel][tablegen] Skip src child predicates
The GlobalISel TableGen backend didn't check for predicates on the
source children. This caused it to generate code for ARM patterns such
as SMLABB or similar, but without properly checking for the sext_16_node
part of the operands. This in turn meant that we would select SMLABB
instead of MLA for simple sequences such as s32 + s32 * s32, which is
wrong (we want a MLA on the full operands, not just their bottom 16
bits).

This patch forces TableGen to skip patterns with predicates on the src
children, so it doesn't generate code for SMLABB and other similar ARM
instructions at all anymore. AArch64 and X86 are not affected.

Differential Revision: https://reviews.llvm.org/D39554

llvm-svn: 317313
2017-11-03 10:30:19 +00:00
Martin Storsjo 9befcd7d8d [AArch64] Use dwarf exception handling on MinGW
Ideally we should probably produce WinEH here as well, but until
then, we can use dwarf exceptions, without any further changes
required in clang, libunwind or libcxxabi.

Differential Revision: https://reviews.llvm.org/D39535

llvm-svn: 317304
2017-11-03 07:33:20 +00:00
Craig Topper 333897ec31 [X86] Remove PALIGNR/VALIGN handling from combineBitcastForMaskedOp and move to isel patterns instead. Prefer 128-bit VALIGND/VALIGNQ over PALIGNR during lowering when possible.
llvm-svn: 317299
2017-11-03 06:48:02 +00:00
Sriraman Tallam 7cdb10f1aa Avoid PLT for external calls when attribute nonlazybind is used.
Differential Revision: https://reviews.llvm.org/D39065

llvm-svn: 317292
2017-11-03 00:10:19 +00:00
Vedant Kumar fb15180054 [Verifier] Remove the -verify-debug-info cl::opt
This cl::opt has been dead for a while. It's no longer possible to run
the verifier without also verifying debug info.

llvm-svn: 317288
2017-11-02 23:44:20 +00:00
Quentin Colombet b6afac1f9a [AArch64][RegisterBankInfo] Add mapping for G_FPEXT.
This fixes http://llvm.org/PR32560. We were missing a description for
half floating point type and as a result were using the FPR 32 mapping.
Because of the size mismatch the generic code was complaining that the
default mapping is not appropriate. Fix the mapping description so that
the default mapping can be properly applied.

llvm-svn: 317287
2017-11-02 23:38:19 +00:00
Craig Topper 086c04c8a7 [X86] Give AVX512VL instructions priority over their AVX equivalents.
I thought we had gotten all these priority bugs worked out, but I guess not.

llvm-svn: 317283
2017-11-02 23:23:37 +00:00
Krzysztof Parzyszek 058014fca5 [Hexagon] Prefer L2_loadrub_io over L4_loadrub_rr
If the offset is an immediate, avoid putting it in a register
to get Rs+Rt<<#0.

llvm-svn: 317275
2017-11-02 21:56:59 +00:00
Clement Courbet 82bade615b Revert "[ExpandMemCmp] Split ExpandMemCmp from CodeGen into its own pass."
undefined reference to `llvm::TargetPassConfig::ID' on
clang-ppc64le-linux-multistage

This reverts commit eea333c33fa73ad225ef28607795984829f65688.

llvm-svn: 317213
2017-11-02 15:53:10 +00:00
Clement Courbet 1dc37b9c3b [ExpandMemCmp] Split ExpandMemCmp from CodeGen into its own pass.
Summary:
This is mostly a noop (most of the test diffs are renamed blocks).
There are a few temporary register renames (eax<->ecx) and a few blocks are
shuffled around.

See the discussion in PR33325 for more details.

Reviewers: spatel

Subscribers: mgorny

Differential Revision: https://reviews.llvm.org/D39456

llvm-svn: 317211
2017-11-02 15:02:51 +00:00
Ayman Musa a37d1130d7 [X86] Fix bug in legalize vector types - Split large loads
When splitting a large load into smaller legally-typed loads, the last load should be padded to reach the size of the previous one so a CONCAT_VECTORS node can reunite them again.
The code currently pads the last load to reach the size of the first load (instead of the previous one).

Differential Revision: https://reviews.llvm.org/D38495

Change-Id: Ib60b55ed26ce901fabf68108daf52683fbd5013f
llvm-svn: 317206
2017-11-02 13:07:06 +00:00
Simon Dardis 725acb2d91 [mips] Use register scavenging with MSA.
MSA stores and loads to the stack are more likely to require an
emergency GPR spill slot due to the smaller offsets available
with those instructions.

Handle this by overestimating the size of the stack: determine the
largest offset presuming that all callee-saved registers are spilled,
and account for incoming arguments when determining whether an
emergency spill slot is required.

Reviewers: atanasyan

Differential Revision: https://reviews.llvm.org/D39056

llvm-svn: 317204
2017-11-02 12:47:22 +00:00
Michael Zuckerman 0c20b690a2 Adding a test for extract subvector load and store on avx512
Change-Id: Iefcb0ec6b6aa1b530ce5358081f02e6e522a8e50
llvm-svn: 317202
2017-11-02 12:19:36 +00:00
Francis Visoiu Mistrih 66d2c269dc [AsmPrinterDwarf] Add support for .cfi_restore directive
As of today we only use .cfi_offset to specify the offset of a CSR, but
we never use .cfi_restore when the CSR is restored.

If we want to perform a more advanced type of shrink-wrapping, we need
to use .cfi_restore in order to switch the CFI state between blocks.

This patch only aims at adding support for the directive.

Differential Revision: https://reviews.llvm.org/D36114

llvm-svn: 317199
2017-11-02 12:00:58 +00:00
Sam Parker 242052c6b4 [ARM] and, or, xor and add with shl combine
The generic dag combiner will fold:

(shl (add x, c1), c2) -> (add (shl x, c2), c1 << c2)
(shl (or x, c1), c2) -> (or (shl x, c2), c1 << c2)

This can create constants which are too large to use as an immediate.
Many ALU operations are also able of performing the shl, so we can
unfold the transformation to prevent a mov imm instruction from being
generated.

Other patterns, such as b + ((a << 1) | 510), can also be simplified
in the same manner.
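
A scalar C++ illustration of the equivalence the unfold relies on (hypothetical
function names; the constants are taken from the example above):

  // Folded form: the constant 510 may not fit an ARM modified-immediate.
  unsigned folded(unsigned a, unsigned b)   { return b + ((a << 1) | 510u); }
  // Unfolded form: computes the same value, but 255 fits as an immediate
  // and the ALU operation can perform the shift itself.
  unsigned unfolded(unsigned a, unsigned b) { return b + ((a | 255u) << 1); }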

Differential Revision: https://reviews.llvm.org/D38084

llvm-svn: 317197
2017-11-02 10:43:10 +00:00
Andrew V. Tischenko 3c8bf5ec37 The patch updates sched numbers for YMM AVX instrs such as VMOVx, VORx, VXOR, VPERMILx, VBROADCASTx, etc.
PR32857 should be closed.
Differential Revision: https://reviews.llvm.org/D39227

llvm-svn: 317196
2017-11-02 10:33:41 +00:00
Steven Wu d0f59f0a5b [X86] Fix fast-isel-int-float-conversion test
Test is failing due to the revert in r317136. Fix the test to make all
the bots happy.

llvm-svn: 317153
2017-11-02 01:34:15 +00:00
Petar Jovanovic bb5c84fb57 Revert "Correct dwarf unwind information in function epilogue for X86"
This reverts r317100 as it introduced sanitizer-x86_64-linux-autoconf
buildbot failure (build #15606).

llvm-svn: 317136
2017-11-01 23:05:52 +00:00
Simon Pilgrim e152c2c447 [X86][SSE] Add PACKUS support to LowerTruncate
Similar to the existing code to lower to PACKSS, we can use PACKUS if the input vector's leading zero bits extend all the way to the packed/truncated value.

We have to account for pre-SSE41 targets not supporting PACKUSDW

llvm-svn: 317128
2017-11-01 21:52:29 +00:00
Craig Topper 4e56ba271e [X86] Add custom code to EVEX to VEX pass to turn unmasked 128-bit VPALIGND/Q into VPALIGNR if the extended registers aren't being used.
This will enable us to prefer VALIGND/Q during shuffle lowering in order to get the extended register encoding space when BWI isn't available. But if we end up not using the extended registers we can switch VPALIGNR for the shorter VEX encoding.

Differential Revision: https://reviews.llvm.org/D39401

llvm-svn: 317122
2017-11-01 21:00:59 +00:00
Daniel Sanders 9cbe7c7f93 [globalisel][tablegen] Add support for multi-insn emission
The importer will now accept nested instructions in the result pattern such as
(ADDWrr $a, (SUBWrr $b, $c)). This is only valid when the nested instruction
def's a single vreg and the parent instruction consumes a single vreg where a
nested instruction is specified. The importer will automatically create a vreg
to connect the two using the type information from the pattern. This vreg will
be constrained to the register classes given in the instruction definitions*.

* REG_SEQUENCE is explicitly rejected because of this. The definition doesn't
  constrain to a register class and it therefore needs special handling.

llvm-svn: 317117
2017-11-01 19:57:57 +00:00
Craig Topper ca1aa83cbe [X86] Prevent fast isel from folding loads into the instructions listed in hasPartialRegUpdate.
This patch moves the check for opt size and hasPartialRegUpdate into the lower level implementation of foldMemoryOperandImpl to catch the entry point that fast isel uses.

We're still folding undef register instructions in AVX that we should also probably disable, but that's a problem for another patch.

Unfortunately, this requires reordering a bunch of functions which is why the diff is so large. I can do the function reordering separately if we want.

Differential Revision: https://reviews.llvm.org/D39402

llvm-svn: 317112
2017-11-01 18:10:06 +00:00
Graham Yiu 671526148c Adds code to PPC ISEL lowering to recognize half-word inserts from vector_shuffles, and use P9 shift and vector insert instructions instead of vperm.
Differential Revision: https://reviews.llvm.org/D34160

llvm-svn: 317111
2017-11-01 18:06:56 +00:00
Craig Topper 56db9d6bec [X86] Regenerate test to attempt to fix build bot failure.
llvm-svn: 317107
2017-11-01 17:44:12 +00:00
Craig Topper 5ae677e102 [X86] Add 64-bit int to float/double conversion with AVX to X86FastISel::X86SelectSIToFP
Summary:
[X86] Teach fast isel to handle i64 sitofp with AVX.

For some reason we only handled i32 sitofp with AVX. But with SSE only we support i64 so we should do the same with AVX.

Also add i686 command lines for the 32-bit tests. 64-bit tests are in a separate file to avoid a fast-isel abort failure in 32-bit mode.

Reviewers: RKSimon, zvi

Reviewed By: RKSimon

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D39450

llvm-svn: 317102
2017-11-01 16:23:06 +00:00
Andrew V. Tischenko 3d971e39f8 Update VCVTx, VMOVNTPx and VROUNDYPx instruction scheduling on btver2.
Differential Revision: https://reviews.llvm.org/D39059

llvm-svn: 317101
2017-11-01 16:10:20 +00:00
Petar Jovanovic f2faee92aa Correct dwarf unwind information in function epilogue for X86
This patch aims to provide correct dwarf unwind information in function
epilogue for X86.

It consists of two parts. The first part inserts CFI instructions that set
appropriate cfa offset and cfa register in emitEpilogue() in
X86FrameLowering. This part is X86 specific.

The second part is platform independent and ensures that:

- CFI instructions do not affect code generation
- Unwind information remains correct when a function is modified by
  different passes. This is done in a late pass by analyzing information
  about cfa offset and cfa register in BBs and inserting additional CFI
  directives where necessary.

Changed CFI instructions so that they:

- are duplicable
- are not counted as instructions when tail duplicating or tail merging
- can be compared as equal

Added CFIInstrInserter pass:

- analyzes each basic block to determine cfa offset and register valid at
  its entry and exit
- verifies that outgoing cfa offset and register of predecessor blocks match
  incoming values of their successors
- inserts additional CFI directives at basic block beginning to correct the
  rule for calculating CFA

Having CFI instructions in function epilogue can cause incorrect CFA
calculation rule for some basic blocks. This can happen if, due to basic
block reordering, or the existence of multiple epilogue blocks, some of the
blocks have wrong cfa offset and register values set by the epilogue block
above them.

CFIInstrInserter is currently run only on X86, but can be used by any target
that implements support for adding CFI instructions in epilogue.


Patch by Violeta Vukobrat.

Differential Revision: https://reviews.llvm.org/D35844

llvm-svn: 317100
2017-11-01 16:04:11 +00:00
Simon Pilgrim 2b09f39b4d Regenerate PACKUS/TRUNCS test (PR31773)
llvm-svn: 317096
2017-11-01 15:27:23 +00:00
Roger Ferrer Ibanez 9dfbc10522 Revert r313618 "[ARM] Use ADDCARRY / SUBCARRY"
That change causes PR35103, so reverting until I figure it out.

llvm-svn: 317092
2017-11-01 14:06:57 +00:00
Simon Pilgrim f657ba0cb6 [X86][SSE] Truncate with PACKSS any input with sufficient sign-bits
So far we've only been using PACKSS truncations with 'all-bits or zero-bits' patterns (vector comparison results etc.), when really we can safely use it for any case as long as the number of sign bits reaches down to the last 16 bits (or 8 bits if we're truncating to bytes).
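
To see why this is safe, here is a scalar model of one PACKSSDW lane (a hedged
C++ sketch, not the backend code):

  #include <algorithm>
  #include <cassert>
  #include <cstdint>

  // PACKSS performs signed saturation; when the input already sign-extends
  // into its low 16 bits, saturation is a no-op and the result is a plain
  // truncation.
  int16_t packss_lane(int32_t v) {
    return static_cast<int16_t>(std::clamp<int32_t>(v, -32768, 32767));
  }

  int main() {
    for (int32_t v : {-32768, -1, 0, 42, 32767})
      assert(packss_lane(v) == static_cast<int16_t>(v));
  }
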

The next steps after this is add the equivalent support for PACKUS and to support packing to sub-128 bit vectors for truncating stores etc.

Differential Revision: https://reviews.llvm.org/D39476

llvm-svn: 317086
2017-11-01 11:47:44 +00:00
Marek Olsak 5914ece6aa AMDGPU: Select s_buffer_load_dword with a non-constant SGPR offset
Summary:
Apps that benefit:
- alien isolation
- bioshock infinite
- civilization: beyond earth
- company of heroes 2
- dirt showdown
- dota 2
- F1 2015
- grid autosport
- hitman
- legend of grimrock
- serious sam 3: bfe
- shadow warrior
- talos principle
- total war: warhammer
- UE4 demos: effects cave, elemental, sun temple

Reviewers: arsenm, nhaehnle

Subscribers: kzhuravl, wdng, yaxunl, dstuttard, tpr, llvm-commits, t-tye

Differential Revision: https://reviews.llvm.org/D38914

llvm-svn: 317038
2017-10-31 21:06:42 +00:00
Simon Pilgrim a5793820fd [X86][AVX512] Regenerate tests to remove retl/retq regex
These are only testing 64-bit targets so we don't need the regex

llvm-svn: 317021
2017-10-31 18:43:24 +00:00
Simon Pilgrim c1927f12db [X86][AVX512] Split AVX512F and AVX512BW bool-vector bitcast tests
llvm-svn: 317020
2017-10-31 18:41:48 +00:00
Craig Topper beed653135 [X86] Make AVX512_512_SET0 XMM16-31 lower to 128-bit XOR when AVX512VL is enabled. Use 128-bit VLX instruction when VLX is enabled.
Unfortunately, this weakens our ability to do domain fixing when AVX512DQ is not enabled, but it is consistent with our 256-bit behavior.

Maybe we should add custom handling to domain fixing to allow EVEX integer XOR/AND/OR/ANDN to switch to VEX encoded fp instructions if the high registers aren't being used?

llvm-svn: 316978
2017-10-31 06:01:04 +00:00
Craig Topper 9f01f6093c [X86] Add AVX512 support to fast isel's X86ChooseCmpOpcode.
llvm-svn: 316955
2017-10-30 21:09:19 +00:00
Stefan Pintilie 6262fd4b0a Revert "[PowerPC] Try to simplify a Swap if it feeds a Splat"
Revert r316478.
A test case has failed.
Will recommit this change once we find and fix the failure.

This reverts commit 7c330fabaedaba3d02c58bc3cc1198896c895f34.

llvm-svn: 316952
2017-10-30 19:55:38 +00:00
Simon Pilgrim 017f896adb [SelectionDAG] Add VSELECT demanded elts support to computeKnownBits
llvm-svn: 316947
2017-10-30 19:31:08 +00:00
Zvi Rackover dfd9e7dda8 X86 Tests: Update the variable-index permute tests with FP types. NFC.
These cases will be addressed in a future update to D39126.

llvm-svn: 316946
2017-10-30 19:29:15 +00:00
Simon Pilgrim 28b6219bd6 [X86][SSE] Add another computeKnownBits test showing missing VSELECT demandedelts support
llvm-svn: 316945
2017-10-30 19:19:58 +00:00
Simon Pilgrim 96a0b9ef54 [SelectionDAG] Add VSELECT support to computeKnownBits
llvm-svn: 316944
2017-10-30 19:08:21 +00:00
Simon Pilgrim b81fbf44c7 [X86][SSE] computeKnownBits tests showing missing VSELECT demandedelts support
llvm-svn: 316940
2017-10-30 18:48:31 +00:00
Simon Pilgrim 10d9e27067 [X86][AVX512] Cleanup scheduler tests - split GENERIC and SKX targets
llvm-svn: 316938
2017-10-30 18:37:27 +00:00
Simon Pilgrim 5da11dfd24 [SelectionDAG] Add SELECT demanded elts support to ComputeNumSignBits
llvm-svn: 316933
2017-10-30 17:53:51 +00:00
Simon Pilgrim ce82f7b694 [X86][SSE] ComputeNumSignBits tests showing missing VSELECT demandedelts support
llvm-svn: 316932
2017-10-30 17:46:50 +00:00
Simon Pilgrim ad62cbe5d9 [X86][AVX] Add missing vcvtpd2dq/vcvtps2dq scheduling tests
llvm-svn: 316926
2017-10-30 17:23:17 +00:00
Simon Pilgrim 76d36c5f0f [X86][SSE] Add clflush scheduling test
llvm-svn: 316925
2017-10-30 17:20:50 +00:00
Jina Nahias 5bf6620b15 [X86][AVX512] Adding a pattern for broadcastm intrinsic.
Differential Revision: https://reviews.llvm.org/D38312

Change-Id: I71c8605a8e4c98013ef25289694afc5cfd46bb0b
llvm-svn: 316921
2017-10-30 16:37:28 +00:00
Fangrui Song 2696db90d1 [PPC CodeGen] Fix the bitreverse.i64 intrinsic.
Summary: The two 32-bit words were swapped. Update a test omitted in reverted r316270.

Reviewers: jtony, aaron.ballman

Subscribers: nemanjai, kbarton

Differential Revision: https://reviews.llvm.org/D39163

llvm-svn: 316916
2017-10-30 16:03:44 +00:00
Craig Topper 4e13d4de52 [X86] Make sure we don't create locked inc/dec instructions when the carry flag is being used.
Summary:
INC/DEC don't update the carry flag so we need to make sure we don't try to use it.

This patch introduces new X86ISD opcodes for locked INC/DEC. Teaches lowerAtomicArithWithLOCK to emit these nodes if INC/DEC is not slow or the function is being optimized for size. An additional flag is added that allows the INC/DEC to be disabled if the caller determines that the carry flag is being requested.

The test_sub_1_cmp_1_setcc_ugt test is currently showing this bug. The other test case changes are recovering cases that were regressed in r316860.
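
A hedged C++ sketch of the problematic shape (hypothetical, not the actual test case):

  #include <atomic>

  // The atomic -1 could be emitted as a locked DEC, but the unsigned
  // 'ugt 1' compare wants the CF/ZF produced by a locked SUB; DEC leaves
  // the carry flag unchanged, so the result could be wrong.
  bool was_greater_than_one(std::atomic<unsigned> &v) {
    unsigned old = v.fetch_sub(1, std::memory_order_seq_cst);
    return old > 1;
  }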

This should fully fix PR35068 finishing the fix started in r316860.

Reviewers: RKSimon, zvi, spatel

Reviewed By: zvi

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D39411

llvm-svn: 316913
2017-10-30 14:51:37 +00:00
Craig Topper 367cc12fa9 [X86] Remove AVX512 early out from X86FastISel::X86SelectCmp.
This shouldn't be needed anymore since i1 isn't a legal type.

llvm-svn: 316912
2017-10-30 14:50:11 +00:00
Craig Topper 87a148ad0e [X86] Regenerate test using update_llc_test_checks.py
llvm-svn: 316911
2017-10-30 14:50:10 +00:00
Yaxun Liu c928f2a6d4 [AMDGPU] Emit metadata for hidden arguments for kernel enqueue
Identifies kernels which perform device-side kernel enqueues and emits
metadata for the associated hidden kernel arguments. Such kernels are
marked with calls-enqueue-kernel function attribute by
AMDGPUOpenCLEnqueueKernelLowering pass and later on
hidden kernel arguments metadata HiddenDefaultQueue and
HiddenCompletionAction are emitted for them.

Differential Revision: https://reviews.llvm.org/D39255

llvm-svn: 316907
2017-10-30 14:30:28 +00:00
Clement Courbet b2c3eb8cf1 [CodeGen][ExpandMemcmp] Allow memcmp to expand to vector loads (2).
- Targets that want to support memcmp expansions now return the list of
   supported load sizes.
- Expansion codegen does not assume that all power-of-two load sizes
   smaller than the max load size are valid. For example, this is not the
   case for x86 (32-bit) + sse2.

Fixes PR34887.

llvm-svn: 316905
2017-10-30 14:19:33 +00:00
Javed Absar 5cde1ccb29 [GlobalISel|ARM] : Allow legalizing G_FSUB
Adding support for VSUB.
Reviewed by: @rovka
Differential Revision: https://reviews.llvm.org/D39261

llvm-svn: 316902
2017-10-30 13:51:56 +00:00
Diana Picus 6e41e6ac6c [ARM GlobalISel] Fixup r316572. NFC
Just missed a few spots...

llvm-svn: 316897
2017-10-30 11:58:09 +00:00
Jina Nahias e63db55c67 Revert "[X86][AVX512] Adding a pattern for broadcastm intrinsic."
This reverts commit r316890.

Change-Id: I683cceee9848ef309b452293086b1f26a941950d
llvm-svn: 316894
2017-10-30 10:35:53 +00:00
Jina Nahias 70280f9a0d [X86][AVX512] Adding a pattern for broadcastm intrinsic.
Differential Revision: https://reviews.llvm.org/D38312

Change-Id: I6551fb13879e098aed74de410e29815cf37d9ab5
llvm-svn: 316890
2017-10-30 09:59:52 +00:00
Simon Pilgrim 601ae238b7 [SelectionDAG] Add SEXT/AND/XOR/Or demanded elts support to ComputeNumSignBits
llvm-svn: 316875
2017-10-29 22:03:37 +00:00
Simon Pilgrim aa65159b0a [X86][SSE] Split ComputeNumSignBits SEXT/AND/XOR/OR demandedelts test
The max depth was being exceeded, which could prevent some combines from working.

llvm-svn: 316871
2017-10-29 21:35:28 +00:00
Simon Pilgrim 61921f779f [X86][SSE] ComputeNumSignBits tests showing missing SEXT/AND/XOR/OR demandedelts support
llvm-svn: 316868
2017-10-29 20:49:27 +00:00
Simon Pilgrim 7613a7b564 [SelectionDAG] Add SRA/SHL demanded elts support to ComputeNumSignBits
Introduce an isConstOrDemandedConstSplat helper function that can recognise a constant splat build vector for at least the demanded elts we care about.

llvm-svn: 316866
2017-10-29 18:19:37 +00:00
Simon Pilgrim b56fb4a2bb [X86][SSE] ComputeNumSignBits tests showing missing SHL/SRA demandedelts support
llvm-svn: 316865
2017-10-29 18:01:31 +00:00
Craig Topper 41d363fea1 [X86] Add a slow-incdec command line to atomic-eflags-reuse.ll
I believe the test_sub_1_cmp_1_setcc_ugt test case is being miscompiled in the fast inc/dec case.

llvm-svn: 316864
2017-10-29 17:15:09 +00:00
Craig Topper 495a1bc893 [X86] Remove combine that turns X86ISD::LSUB into X86ISD::LADD. Update patterns that depended on this.
If the carry flag is being used, this transformation isn't safe.

This does prevent some test cases from using DEC now, but I'll try to look into that separately.

Fixes PR35068.

llvm-svn: 316860
2017-10-29 06:51:04 +00:00
Craig Topper 5f2289a13c [X86] Add AVX512 support to X86FastISel::X86SelectFPExt and X86FastISel::X86SelectFPTrunc.
llvm-svn: 316856
2017-10-29 02:50:31 +00:00
Craig Topper 8373336a22 [X86] Use update_llc_test_checks.py to regenerate fast-isel-int-float-conversion.ll
llvm-svn: 316855
2017-10-29 02:25:48 +00:00
Craig Topper 7d1ed9ec83 [X86] Use update_llc_test_checks.py to regenerate fast-isel-fptrunc-fpext.ll
llvm-svn: 316854
2017-10-29 02:18:43 +00:00
Craig Topper 1e30d783dd [X86] Add AVX512 support to X86FastISel::X86MaterializeFP
llvm-svn: 316853
2017-10-29 02:18:41 +00:00
Simon Pilgrim b37a24e82f [SelectionDAG] Add support for INSERT_SUBVECTOR to computeKnownBits
llvm-svn: 316847
2017-10-28 22:10:40 +00:00
Simon Pilgrim 294f88dfa0 [X86][SSE] Combine 128-bit target shuffles to PACKSS/PACKUS.
llvm-svn: 316845
2017-10-28 20:51:27 +00:00
Sanjay Patel b049173157 [SimplifyCFG] use pass options and remove the latesimplifycfg pass
This is no-functional-change-intended.

This is repackaging the functionality of D30333 (defer switch-to-lookup-tables) and 
D35411 (defer folding unconditional branches) with pass parameters rather than a named
"latesimplifycfg" pass. Now that we have individual options to control the functionality,
we could decouple when these fire (but that's an independent patch if desired). 

The next planned step would be to add another option bit to disable the sinking transform
mentioned in D38566. This should also make it clear that the new pass manager needs to
be updated to limit simplifycfg in the same way as the old pass manager.

Differential Revision: https://reviews.llvm.org/D38631

llvm-svn: 316835
2017-10-28 18:43:07 +00:00
Craig Topper abe5dbafff [X86] Correct the alignments on the aligned test cases in fast-isel-vecload.ll to make sure they test selection of aligned loads.
llvm-svn: 316833
2017-10-28 17:37:51 +00:00
Simon Pilgrim d09c1ac20f [SelectionDAG] Support 'bit preserving' floating point bitcasts on computeKnownBits/ComputeNumSignBits
For cases where we know the floating point representations match the bitcasted integer equivalent, allow bitcasting to these types.

This is especially useful for the X86 floating point compare results which return all/zero bits but as a floating point type.

Differential Revision: https://reviews.llvm.org/D39289

llvm-svn: 316831
2017-10-28 14:27:53 +00:00
Craig Topper 39cfdc664d [X86] Add avx command lines to fast-isel-constpool.ll to improve coverage.
llvm-svn: 316829
2017-10-28 06:31:48 +00:00
Craig Topper ea83f85da0 [X86] Use update_llc_test_checks.py to regenerate fast-isel-constpool.ll
llvm-svn: 316828
2017-10-28 06:31:46 +00:00
Craig Topper 8ca5863dd8 [X86] Add a fast-isel test for the i8 pseudo cmov.
llvm-svn: 316827
2017-10-28 06:10:03 +00:00
Craig Topper fd0a35a649 [X86] Add avx command lines to two fast-isel tests to get coverage of selecting vucomiss/vucomisd.
The selection of these shows up as a code coverage hole when looking at the llvm-cov link on llvm.org

llvm-svn: 316823
2017-10-28 02:03:59 +00:00
Craig Topper 4390c61fad [X86] Use update_llc_test_checks.py to regenerate fast-isel-select-cmov2.ll
llvm-svn: 316822
2017-10-28 02:03:58 +00:00
Tom Stellard d0c6cf2e8c AMDGPU/GlobalISel: Mark 32-bit G_FADD as legal
Reviewers: arsenm

Reviewed By: arsenm

Subscribers: kzhuravl, wdng, nhaehnle, yaxunl, rovka, kristof.beyls, igorb, dstuttard, tpr, t-tye, llvm-commits

Differential Revision: https://reviews.llvm.org/D38439

llvm-svn: 316815
2017-10-27 23:57:41 +00:00
Krzysztof Parzyszek 4dc04e6a70 [Hexagon] Adjust patterns to reflect instruction selection preferences
llvm-svn: 316804
2017-10-27 22:24:49 +00:00
Guozhi Wei 7c67009fe5 [DAGCombine] Don't combine sext with extload if sextload is not supported and extload has multi users
In function DAGCombiner::visitSIGN_EXTEND_INREG, sext can be combined with extload even if sextload is not supported by the target. In that case:

  if sext is the only user of the extload, there is no big difference: no harm, no benefit.
  if the extload has more than one user, the combined sextload may block the extload from combining with other zexts, causing extra zext instructions to be generated, as demonstrated by the attached test case.

This patch adds the constraint that, when sextload is not supported by the target, sext can only be combined with extload if it is the only user of the extload.
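
A hedged C++ sketch of the multi-user shape described above (hypothetical source, not the attached test case):

  #include <cstdint>

  // The i8 load has two users: a sign-extending one and a zero-extending
  // one. Folding the sext into the load would stop the load from combining
  // with the zext, leaving an extra zext on targets without sextload.
  int64_t both_extensions(const int8_t *p, uint64_t *zext_out) {
    int8_t v = *p;
    *zext_out = static_cast<uint8_t>(v);
    return v;
  }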

Differential Revision: https://reviews.llvm.org/D39108

llvm-svn: 316802
2017-10-27 21:54:24 +00:00
Rafael Espindola 2393c3b4e1 Handle undefined weak hidden symbols on all architectures.
We were handling the non-hidden case in lib/Target/TargetMachine.cpp,
but the hidden case was handled in architecture dependent code and
only X86_64 and AArch64 were covered.

While it is true that some code sequences in some ABIs might be able
to produce the correct value at runtime, that doesn't seem to be the
common case.

I left the AArch64 code in place since it also forces a got access for
non-pic code. It is not clear if that is needed, but it is probably
better to change that in another commit.

llvm-svn: 316799
2017-10-27 21:18:48 +00:00
Craig Topper b904c70005 [X86] Add fast-isel tests for integer shifts. We definitely had no coverage of i16 and i32/i64 are only tested by larger tests.
llvm-svn: 316796
2017-10-27 21:00:56 +00:00
Artur Gainullin af7ba8ff6b Improve clamp recognition in ValueTracking.
Summary:
ValueTracking was not recognizing all variations of clamp. Recognition of
selects with the true and false values swapped was added to fix this problem.
The first patch was reverted because it caused a miscompile in the NVPTX
target. Added corresponding test cases.
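
For intuition, two equivalent source-level spellings of a clamp, the second
with the select's true and false values swapped (hypothetical C++ examples,
not the patch's test cases):

  #include <cstdint>

  int32_t clamp_a(int32_t x, int32_t lo, int32_t hi) {
    int32_t t = x > hi ? hi : x;   // select(x > hi, hi, x)
    return t < lo ? lo : t;
  }

  int32_t clamp_b(int32_t x, int32_t lo, int32_t hi) {
    int32_t t = x <= hi ? x : hi;  // same select with arms swapped
    return t >= lo ? t : lo;
  }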

Reviewers: spatel, majnemer, efriedma, reames

Subscribers: llvm-commits, jholewinski

Differential Revision: https://reviews.llvm.org/D39240

llvm-svn: 316795
2017-10-27 20:53:41 +00:00
Craig Topper 58fe564e93 [X86] Add avx512vl command line to fast-isel-nontemporal.ll
llvm-svn: 316789
2017-10-27 20:13:06 +00:00
Krzysztof Parzyszek 92a2635bbd [Hexagon] Fix an incorrect assertion in HexagonConstExtenders.cpp
Making sure that an instruction has fewer operands than required and then
attempting to access one out of range is going to fail.

llvm-svn: 316785
2017-10-27 18:52:28 +00:00
Simon Pilgrim 1bfaa453a3 [X86][SSE] Add tests for inserting all-bits (-1) into a vector
We should be able to do this by re-materializing an all-bits vector and then blending with it

llvm-svn: 316779
2017-10-27 18:14:12 +00:00
Clement Courbet be684eee82 [CodeGen][ExpandMemCmp][NFC] Simplify load sequence generation.
llvm-svn: 316763
2017-10-27 12:34:18 +00:00
Matt Arsenault 878827d93a DAG: Fold fma (fneg x), K, y -> fma x, -K, y
llvm-svn: 316753
2017-10-27 09:06:07 +00:00
Clement Courbet b237f044a4 [CodeGen][ExpandMemcmp][NFC] Make tests more complete.
llvm-svn: 316749
2017-10-27 08:33:51 +00:00
Sean Fertile 57d46b8436 Add subclass data to the FoldingSetNode for MemIntrinsicSDNodes.
Not having the subclass data on MemIntrinsicSDNodes meant it was possible
to try to fold 2 nodes with the same operands but differing MMO flags. This
would trip an assertion when trying to refine the alignment between the 2
MachineMemOperands.

Differential Revision: https://reviews.llvm.org/D38898

llvm-svn: 316737
2017-10-27 04:02:51 +00:00
Eli Friedman d5dfb62de7 [ARM] Honor -mfloat-abi for libcall calling convention
As far as I can tell, this matches gcc: -mfloat-abi determines the
calling convention for all functions except those explicitly defined as
soft-float in the ARM RTABI.

This change only affects cases where the user specifies -mfloat-abi to
override the default calling convention derived from the target triple.

Fixes https://bugs.llvm.org//show_bug.cgi?id=34530.

Differential Revision: https://reviews.llvm.org/D38299

llvm-svn: 316708
2017-10-26 21:42:32 +00:00
Craig Topper b8d7d4d683 [X86] Improve handling of UDIVREM8_ZEXT_HREG/SDIVREM8_SEXT_HREG to support 64-bit extensions.
If the extend type is 64-bits, emit a 32-bit -> 64-bit extend after the UDIVREM8_ZEXT_HREG/UDIVREM8_SEXT_HREG operation.

This gives a shorter encoding for the second extend in the sext case, and allows us to completely remove the second extend in the zext case.

This also adds known bit and num sign bits support for UDIVREM8_ZEXT_HREG/SDIVREM8_SEXT_HREG.

Differential Revision: https://reviews.llvm.org/D38275

llvm-svn: 316702
2017-10-26 21:12:03 +00:00
Sanjay Patel ac50f3e907 [x86] use an insert op to put one variable element into a vector of constants
Instead of loading (a potential ton of) scalar constants, load those as a vector and then insert into it.
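
A hedged C++ sketch of the kind of code this affects (hypothetical example
using an SSE intrinsic, not from the patch's tests):

  #include <emmintrin.h>

  // All lanes are constants except one variable lane; instead of building
  // the vector from scalar constant loads, the whole constant vector can be
  // loaded once and the variable element inserted into it.
  __m128i mostly_constant(int x) {
    return _mm_setr_epi32(1, 2, x, 4);
  }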

Differential Revision: https://reviews.llvm.org/D38756

llvm-svn: 316685
2017-10-26 18:27:55 +00:00
Konstantin Zhuravlyov 6f8e515185 AMDGPU: Commit missing fence-barrier test
This should have been committed with memory model implementation

llvm-svn: 316680
2017-10-26 17:54:09 +00:00
Sean Fertile c70d28bff5 Represent runtime preemption in the IR.
Currently we do not represent runtime preemption in the IR, which has several
drawbacks:

  1) The semantics of GlobalValues differ depending on the object file format
     you are targeting (as well as the relocation-model and -fPIE value).
  2) We have no way of disabling inlining of run time interposable functions,
     since in the IR we only know if a function is link-time interposable.
     Because of this llvm cannot support elf-interposition semantics.
  3) In LTO builds of executables we will have extra knowledge that a symbol
     resolved to a local definition and can't be preemptable, but have no way to
     propagate that knowledge through the compiler.

This patch adds preemptability specifiers to the IR with the following meaning:

dso_local --> means the compiler may assume the symbol will resolve to a
 definition within the current linkage unit and the symbol may be accessed
 directly even if the definition is not within this compilation unit.

dso_preemptable --> means that the compiler must assume the GlobalValue may be
replaced with a definition from outside the current linkage unit at runtime.

To ease transitioning, dso_preemptable is treated as a 'default' in that
low-level codegen will still do the same checks it did previously to see if a
symbol should be accessed indirectly. Eventually when IR producers emit the
specifiers on all Globalvalues we can change dso_preemptable to mean 'always
access indirectly', and remove the current logic.

Differential Revision: https://reviews.llvm.org/D20217

llvm-svn: 316668
2017-10-26 15:00:26 +00:00
Marek Olsak 2232243863 AMDGPU: Handle s_buffer_load_dword hazard on SI
Reviewers: arsenm, nhaehnle

Subscribers: kzhuravl, wdng, yaxunl, dstuttard, tpr, llvm-commits, t-tye

Differential Revision: https://reviews.llvm.org/D39171

llvm-svn: 316666
2017-10-26 14:43:02 +00:00
Simon Dardis 13452383cd [mips] Fix PR35071
PR35071 exposed the fact that MipsInstrInfo::removeBranch did not walk past
debug instructions when removing branches for the control flow optimizer, which
led to duplicated conditional branches. If the target of the branch was a
removable block, only the conditional branch in the terminating position would
have its MBB operands updated, leaving the first branch with a dangling MBB
operand. The MIPS long branch pass would then trigger an assertion when
attempting to examine the instruction with dangling MBB operand.

This resolves PR35071.

Thanks to Alex Richardson for reporting the issue!

Reviewers: atanasyan

Differential Revision: https://reviews.llvm.org/D39288

llvm-svn: 316654
2017-10-26 10:58:36 +00:00
Hiroshi Inoue b72b1fb0de [PowerPC] Use record-form instruction for Less-or-Equal -1 and Greater-or-Equal 1
Currently a record-form instruction is used for comparison of "greater than -1" and "less than 1" by modifying the predicate (e.g. LT 1 into LE 0) in addition to the naive case of comparison against 0.
This patch also enables emitting a record-form instruction for "less than or equal to -1" (i.e. "less than 0") and "greater than or equal to 1" (i.e. "greater than 0") to increase the optimization opportunities.
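
The integer predicate rewrites involved, as a hedged C++ illustration
(hypothetical function names):

  #include <cstdint>

  bool gt_minus_one(int64_t x) { return x > -1; }   // same as x >= 0 (already handled)
  bool lt_one(int64_t x)       { return x < 1; }    // same as x <= 0 (already handled)
  bool le_minus_one(int64_t x) { return x <= -1; }  // same as x < 0  (newly handled)
  bool ge_one(int64_t x)       { return x >= 1; }   // same as x > 0  (newly handled)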

Differential Revision: https://reviews.llvm.org/D38941

llvm-svn: 316647
2017-10-26 09:01:51 +00:00
Alexander Richardson d3aa475988 Fix CodeGen/AMDGPU/fcanonicalize-elimination.ll on FreeBSD 11.0
Summary:
On FreeBSD11.0 the FileCheck NOT string "1.0" will be matched by
`.amd_amdgpu_isa "amdgcn-unknown-freebsd11.0--gfx802"` at the end of the
file. Add a CHECK for that directive to avoid failing the test.

Reviewers: rampitec, kzhuravl

Reviewed By: rampitec, kzhuravl

Subscribers: emaste, kzhuravl, wdng, nhaehnle, yaxunl, dstuttard, tpr, t-tye, llvm-commits, krytarowski

Differential Revision: https://reviews.llvm.org/D39306

llvm-svn: 316616
2017-10-25 21:44:21 +00:00