Commit Graph

99251 Commits

Author SHA1 Message Date
Tim Northover 79f43f195c GlobalISel: translate memset & memmove.
llvm-svn: 293541
2017-01-30 19:33:07 +00:00
Matt Arsenault af635240d5 AMDGPU: Undo sub x, c -> add x, -c canonicalization
This is worse if the original constant is an inline immediate.

This should also be done for 64-bit adds, but requires fixing
operand folding bugs first.

llvm-svn: 293540
2017-01-30 19:30:24 +00:00
Krzysztof Parzyszek 3695d06a10 [RDF] Add support for regmasks
llvm-svn: 293538
2017-01-30 19:16:30 +00:00
Tim Northover 480609d0f3 GlobalISel: permit unused vregs without a register-class after ISel.
This can happen if earlier combining has removed all uses of some VReg, which
is fine and shouldn't flag an error.

llvm-svn: 293537
2017-01-30 19:12:50 +00:00
Benjamin Kramer a9df941403 Fix the GCC build.
This is fairly ugly, but apparently GCC still doesn't understand C++11.

llvm-svn: 293535
2017-01-30 19:05:09 +00:00
Simon Pilgrim ffe2535cf6 Use SelectionDAG::getBuildVector helper function where possible. NFCI.
llvm-svn: 293532
2017-01-30 18:53:45 +00:00
Benjamin Kramer a846e0b082 [MC] Remove global constructors from MCSectionMachO.cpp.
llvm-svn: 293526
2017-01-30 18:46:26 +00:00
Matt Arsenault 0c3293844b AMDGPU: Run AMDGPUCodeGenPrepare after inlining
When run before inlining, the pass makes nonsensical decisions
for leaf functions based on the uniformity of their arguments.

llvm-svn: 293525
2017-01-30 18:40:29 +00:00
Sanjay Patel 373db5ba6c [InstCombine] enable (X >>?exact C1) << C2 --> X >>?exact (C1-C2) for vectors with splat constants
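
A minimal IR sketch of the splat-vector case this enables (illustrative
values, assuming C1 = 5 and C2 = 2):

  %a = lshr exact <2 x i32> %x, <i32 5, i32 5>
  %b = shl <2 x i32> %a, <i32 2, i32 2>
    =>
  %b = lshr exact <2 x i32> %x, <i32 3, i32 3>
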
llvm-svn: 293524
2017-01-30 18:40:23 +00:00
Justin Bogner 8f520a73b2 SDAG: Update ChainNodesMatched during UpdateChains if a node is replaced
Previously, we would hit UB (or the ISD::DELETED_NODE assert) if we
happened to replace a node during UpdateChains, because it would be
left in the list we were iterating over. This nulls out the pointer
when that happens so that we can avoid the issue.

Fixes llvm.org/PR31710

llvm-svn: 293522
2017-01-30 18:29:46 +00:00
Simon Pilgrim 0a5ab5c4db Use SelectionDAG::getBuildVector/getSplatBuildVector helper functions where possible. NFCI.
llvm-svn: 293520
2017-01-30 18:20:42 +00:00
Marcos Pividori d2406ea900 [libFuzzer] Implement TmpDir() for Windows.
Differential Revision: https://reviews.llvm.org/D28977

llvm-svn: 293516
2017-01-30 18:14:53 +00:00
Daniel Berlin a53a72243a NewGVN: Instead of changeToUnreachable, insert an instruction SimplifyCFG will turn into unreachable when it runs
llvm-svn: 293515
2017-01-30 18:12:56 +00:00
Matt Arsenault ee3f0acf20 AMDGPU: Make i32 uaddo/usubo legal
llvm-svn: 293514
2017-01-30 18:11:38 +00:00
Matt Arsenault 32e6bfa20f DAG: Fold fneg into compare with constant into the constant
fcmp (fneg x), c, pred -> fcmp x, -c, (swap pred)

InstCombine already does this.
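
A rough IR-level illustration of the equivalent fold InstCombine performs
(a sketch with an arbitrary constant):

  %n = fsub float -0.0, %x              ; fneg x
  %c = fcmp olt float %n, 2.0
    =>
  %c = fcmp ogt float %x, -2.0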

llvm-svn: 293512
2017-01-30 17:57:28 +00:00
Krzysztof Parzyszek 49ffff12e5 [RDF] Extract the physical register information into a separate class
llvm-svn: 293510
2017-01-30 17:46:56 +00:00
Tom Stellard 7a19d56f73 Revert "AMDGPU/GlobalISel: Add support for simple shaders"
This reverts commit r293503.

Revert while I investigate some of the buildbot failures.

llvm-svn: 293509
2017-01-30 17:42:41 +00:00
Sanjay Patel 062c14af5c [InstCombine] use auto with obvious type; NFC
llvm-svn: 293508
2017-01-30 17:38:55 +00:00
Sanjay Patel 77732d5033 [InstCombine] enable (X <<nsw C1) >>s C2 --> X <<nsw (C1-C2) for vectors with splat constants
llvm-svn: 293507
2017-01-30 17:19:32 +00:00
David Blaikie a66696f210 unique_ptrify some containers in GlobalISel::RegisterBankInfo
To simplify/clarify memory ownership, make leaks (as one was found/fixed
recently) harder to write, etc.

(also, while I was there - removed a duplicate lookup in a container)

llvm-svn: 293506
2017-01-30 17:13:56 +00:00
Matt Arsenault 41c1499504 AMDGPU: Fix atomic_inc/atomic_dec + ds_swizzle not being divergent
llvm-svn: 293504
2017-01-30 17:09:47 +00:00
Tom Stellard e48f60aec8 AMDGPU/GlobalISel: Add support for simple shaders
Summary: We can select constant/global G_LOAD, global G_STORE, and G_GEP.

Reviewers: qcolombet, MatzeB, t.p.northover, ab, arsenm

Subscribers: mehdi_amini, vkalintiris, kzhuravl, wdng, nhaehnle, mgorny, yaxunl, tony-tye, modocache, llvm-commits, dberris

Differential Revision: https://reviews.llvm.org/D26730

llvm-svn: 293503
2017-01-30 17:09:15 +00:00
Daniel Berlin e19f0e01a8 Revert "NewGVN: Make unreachable blocks be marked with unreachable"
This reverts commit r293196

Besides making things look nicer, ATM, we'd like to preserve analysis
more than we'd like to destroy the CFG.  We'll probably revisit in the future

llvm-svn: 293501
2017-01-30 17:06:55 +00:00
Simon Pilgrim 098998aef0 [X86][SSE] Add support for combining PINSRW+ASSERTZEXT+PEXTRW patterns with target shuffles
llvm-svn: 293500
2017-01-30 16:58:34 +00:00
Matt Arsenault 0c687390fe DAG: Constant fold fp16_to_fp/fp_to_fp16
This fixes emitting conversions of constants on targets
without legal f16 that need to use these for legalization.

llvm-svn: 293499
2017-01-30 16:57:41 +00:00
Sanjay Patel 8e644c08ee [InstCombine] fixed to propagate 'exact' on lshr
The original shift is bigger, so this may qualify as 'obvious', 
but here's an attempt at an Alive-based proof:

Name: exact
Pre: (C1 u< C2)
%a = shl i8 %x, C1
%b = lshr exact i8 %a, C2 
  =>
%c = lshr exact i8 %x, C2 - C1
%b = and i8 %c, ((1 << width(C1)) - 1) u>> C2

Optimization is correct!

llvm-svn: 293498
2017-01-30 16:53:03 +00:00
Benjamin Kramer 585756568c [Coroutines] Add header guard to header that's missing one.
llvm-svn: 293494
2017-01-30 16:32:20 +00:00
Adam Nemet e7bdf227f6 [Inliner] Fold analysis remarks into missed remarks
This significantly reduces the noise level of these messages.

llvm-svn: 293492
2017-01-30 16:22:45 +00:00
Krzysztof Parzyszek b561cf953a [RDF] Add phis for entry block live-ins (in addition to function live-ins)
llvm-svn: 293491
2017-01-30 16:20:30 +00:00
Haicheng Wu f8dc2d8c8b [Inliner] Fix a comment to match the code. NFC.
TotalAltCost => TotalSecondaryCost

Differential Revision: https://reviews.llvm.org/D29231

llvm-svn: 293490
2017-01-30 16:15:14 +00:00
Sanjay Patel 1196d7cd7f [InstCombine] enable lshr(shl X, C1), C2 folds for vectors with splat constants
llvm-svn: 293489
2017-01-30 16:11:40 +00:00
Rafael Espindola e0eba3c493 Only print architecture dependent flags for that architecture.
Different architectures can have different meanings for flags in the
SHF_MASKPROC mask, so we should always check which architecture is in use
before checking the flag.

NFC for now, but will allow fixing the value of an xmos flag.

llvm-svn: 293484
2017-01-30 15:38:43 +00:00
Benjamin Kramer 73564981fe [Hexagon] Make header self-contained.
llvm-svn: 293482
2017-01-30 14:55:33 +00:00
Asaf Badouh e11d2d73bf [X86][MCU] Minor bug fix for r293469 + test case
llvm-svn: 293478
2017-01-30 13:14:37 +00:00
Marek Olsak e81adb52b1 AMDGPU: Remove a useless VI SMRD pattern
Summary: already covered by complex patterns

Reviewers: arsenm, nhaehnle, tstellarAMD

Subscribers: kzhuravl, wdng, yaxunl, tony-tye

Differential Revision: https://reviews.llvm.org/D28995

llvm-svn: 293477
2017-01-30 12:25:14 +00:00
Marek Olsak 8e93529020 AMDGPU: Fix assembler encoding for EXP instructions on VI
Reviewers: arsenm, tstellarAMD

Subscribers: kzhuravl, wdng, nhaehnle, yaxunl, tony-tye

Differential Revision: https://reviews.llvm.org/D28992

llvm-svn: 293476
2017-01-30 12:25:03 +00:00
Daniel Berlin 9d8a335ce0 Revert "[MemorySSA] Revert r293361 and r293363, as the tests fail under asan."
This reverts commit r293471, reapplying r293361 and r293363 with a fix
for an out-of-bounds read.

llvm-svn: 293474
2017-01-30 11:35:39 +00:00
Sam McCall b9d6c10c2d [MemorySSA] Revert r293361 and r293363, as the tests fail under asan.
llvm-svn: 293471
2017-01-30 09:19:50 +00:00
Kristof Beyls 65a12c012f [GlobalISel] Add support for indirectbr
Differential Revision: https://reviews.llvm.org/D28079

llvm-svn: 293470
2017-01-30 09:13:18 +00:00
Asaf Badouh 53713df0c2 [X86][MCU] replace select with bit manipulation instead of branches
Differential Revision: https://reviews.llvm.org/D28354

llvm-svn: 293469
2017-01-30 08:16:59 +00:00
Craig Topper f6df4a6978 [AVX-512] Remove duplicate CodeGenOnly patterns for scalar register broadcast. We can use COPY_TO_REGCLASS like AVX does.
This sometimes causes stack spill slots to be oversized, but the same should already be happening with AVX.

llvm-svn: 293464
2017-01-30 06:59:06 +00:00
Sam McCall a682dfb3e5 Include LLVMDumpValue in release builds.
This part of the C API is still used in language bindings.

llvm-svn: 293460
2017-01-30 05:40:52 +00:00
Jonas Paulsson 3f71d6a38e [LoopVectorize] Improve getVectorCallCost() getScalarizationOverhead() call.
By calling getScalarizationOverhead with the CallInst instead of the types of
its arguments, we make sure that only unique call arguments are added to the
scalarization cost.

getScalarizationOverhead() is extended to handle calls by only passing on the
actual call arguments (which are not all of the operands).

This also eliminates a wrapper function with the same name.

review: Hal Finkel
llvm-svn: 293459
2017-01-30 05:38:05 +00:00
Craig Topper 0265a39472 [AVX-512] Remove KSET0B/KSET1B in favor of the patterns that select KSET0W/KSET1W for v8i1.
llvm-svn: 293458
2017-01-30 05:37:47 +00:00
Davide Italiano 6c77de0367 [MemorySSA] Correct an assertion by surrounding it with parentheses.
llvm-svn: 293453
2017-01-30 03:16:43 +00:00
Craig Topper 3b7e823f92 [AVX-512] Don't reuse VSHLI/VSRLI for mask register shifts. VSHLI/VSRLI shift within elements while KSHIFT moves whole elements.
llvm-svn: 293448
2017-01-30 00:06:01 +00:00
Chris Ray 30b3fafb94 [X86][Disassembler] Added SALC instruction
Reviewers: joe.abbey, craig.topper

Reviewed By: craig.topper

Subscribers: majnemer, llvm-commits

Differential Revision: https://reviews.llvm.org/D29201

llvm-svn: 293447
2017-01-29 23:02:47 +00:00
Craig Topper db919caf1b [AVX-512] Fix lowering for mask register concatenation with undef in the lower half.
Previously this test case fired an assertion in getNode because we tried to create an insert_subvector with both input types the same size and the index pointing to half the vector width.

llvm-svn: 293446
2017-01-29 22:53:33 +00:00
Chris Ray ba3741cb2b [X86] Fixing flag usage for RCL and RCR
Summary: The RCL and RCR instructions use the carry flag.

Reviewers: craig.topper

Reviewed By: craig.topper

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D29237

llvm-svn: 293441
2017-01-29 20:05:30 +00:00
Matthias Braun a4976c6166 MachineInstr: Remove parameter from dump()
The primary use of the dump() functions in LLVM is for use in a
debugger. Unfortunately, lldb does not seem to handle default arguments,
so using `p SomeMI.dump()` fails and you have to type the longer `p
SomeMI.dump(nullptr)`. Remove the parameter to make the most common use
easy. (You can always construct something like `p
SomeMI.print(dbgs(), MyTII)` if you need more features.)

Differential Revision: https://reviews.llvm.org/D29241

llvm-svn: 293440
2017-01-29 18:20:42 +00:00
Simon Pilgrim 76073f8d22 [X86][SSE] Lower scalar_to_vector(0) to zero vector
Replaces an xor+movd/movq with an xorps, which is shorter in codesize, avoids an int-fpu transfer, allows modern cores to fast-path the result during decode, and helps other combines recognise an all-zero vector.

The only reason I can think of that we'd want to keep scalar_to_vector in this case is to help recognise that the upper elts are undef, but this doesn't seem to be a problem.

Differential Revision: https://reviews.llvm.org/D29097

llvm-svn: 293438
2017-01-29 18:13:37 +00:00
Matthias Braun de58b61b5d llvm-c: Keep LLVMDumpModule() even in release builds
While this should probably be considered a debugger dump utility, the C
API currently has no other way to print a module to stderr for error
reporting purposes, so keep it even in release builds.

llvm-svn: 293436
2017-01-29 17:52:03 +00:00
Sanjay Patel 062adaab83 [InstCombine] enable (X >>?,exact C1) << C2 --> X << (C2 - C1) for vectors with splats
llvm-svn: 293435
2017-01-29 17:11:18 +00:00
Saleem Abdulrasool 5282eed06c ARM: support `-mlong-calls` with AEABI TLS on ELF
Support lowering AEABI TLS access (__aeabi_read_tp) with long calls.
This requires adjusting the call sequence to use an indirect call to get
full addressability.

Resolves PR31769!

llvm-svn: 293433
2017-01-29 16:46:22 +00:00
Sanjay Patel 14a4b8185f [ValueTracking] clean up lookThroughCast; NFCI
1. Use auto with dyn_cast.
2. Don't use else after return.
3. Convert chain of 'else if' to switch.
4. Improve variable names.

llvm-svn: 293432
2017-01-29 16:34:57 +00:00
Elena Demikhovsky 17fe27f1f2 [X86 Codegen] Fixed a bug in unsigned saturation
PACKUSWB converts signed words to unsigned bytes (and similarly for doublewords), so it can't be used for the umin+truncate pattern.
AVX-512 VPMOVUS* instructions fit the pattern since they convert unsigned to unsigned.

See https://llvm.org/bugs/show_bug.cgi?id=31773

Differential Revision: https://reviews.llvm.org/D29196

llvm-svn: 293431
2017-01-29 13:18:30 +00:00
Daniel Berlin 9f376b7b37 NewGVN: Fix where newline is printed in debug printing of memory equivalence
llvm-svn: 293428
2017-01-29 10:26:03 +00:00
Igor Breger 9ea154d4ad [X86][GlobalISel] Add limited argument lowering support to the IRTranslator.
Summary:
Add limited (i8/i16/i32/i64)  argument lowering support to the IRTranslator.
Inspired by commit 289940.

Reviewers: t.p.northover, qcolombet, ab, zvi, rovka

Reviewed By: rovka

Subscribers: dberris, rovka, kristof.beyls, llvm-commits

Differential Revision: https://reviews.llvm.org/D28987

llvm-svn: 293427
2017-01-29 08:35:42 +00:00
Chandler Carruth 8e9c0a8472 [ArgPromote] Move static helpers to modern LLVM naming conventions while
here. NFC.

Simple refactoring while prepping a port to the new PM.

Differential Revision: https://reviews.llvm.org/D29249

llvm-svn: 293426
2017-01-29 08:03:21 +00:00
Chandler Carruth ae9ce3d402 [ArgPromote] Run clang-format to normalize remarkably idiosyncratic
formatting that has evolved here over the past years prior to making
somewhat invasive changes to thread new PM support through the business
logic.

Differential Revision: https://reviews.llvm.org/D29248

llvm-svn: 293425
2017-01-29 08:03:19 +00:00
Chandler Carruth cd836cd4ee [ArgPromote] Re-arrange the code in a more typical, logical way.
This arranges the static helpers in an order where they are defined
prior to their use to avoid the need of forward declarations, and
collect the core pass components at the bottom below their helpers.

This also folds one trivial function into the pass itself. Factoring
this 'runImpl' was an attempt to help porting to the new pass manager,
however in my attempt to begin this port in earnest it turned out to not
be a substantial help. I think it will be easier to factor things
without it.

This is an NFC change and makes a minimal number of edits overall.
Subsequent NFC cleanups will normalize the formatting with clang-format
and improve the basic doxygen commenting.

Differential Revision: https://reviews.llvm.org/D29247

llvm-svn: 293424
2017-01-29 08:03:16 +00:00
Craig Topper 135da1faf5 [SelectionDAG] Make SDNode::getConstantOperandVal an inline method.
Its operation is already done manually in many places without using the method.

llvm-svn: 293421
2017-01-29 06:08:02 +00:00
Justin Hibbits 10b6147e23 Add some Book-E instructions to the asm parser and printer.
Summary:
Adds the following instructions:
* mfpmr
* mtpmr
* icblc
* icblq
* icbtls

Fix the scheduling for mtspr on e5500, which uses CFX0, instead of
SFX0/SFX1 as on e500mc.

Addresses PR 31538.

Differential Revision: https://reviews.llvm.org/D29002

llvm-svn: 293417
2017-01-29 04:55:57 +00:00
Craig Topper 4753736abf [DAGCombiner] Use unsigned for a constant vector index instead of APInt.
The type system requires that the number of vector elements fit in 32 bits, so this should be safe.

llvm-svn: 293414
2017-01-29 04:38:21 +00:00
Craig Topper d15730902b [DAGCombiner] Remove unnecessary check on the size of the type of the index of EXTRACT_SUBVECTOR.
The type system already requires that the number of vector elements fit in 32 bits, so an index should as well. Even if the type of the index were larger, all we care about is that the constant index fits in 64 bits so that we can call getZExtValue.

llvm-svn: 293413
2017-01-29 04:38:19 +00:00
Craig Topper 24cdbe8fa6 [DAGCombiner] Make sure index of EXTRACT_SUBVECTOR is a constant before trying to use getConstantOperandVal.
llvm-svn: 293412
2017-01-29 04:38:16 +00:00
Xinliang David Li fd3f645f9d Add support to dump dot graph block layout after MBP
Differential Revision: https://reviews.llvm.org/D29141

llvm-svn: 293408
2017-01-29 01:57:02 +00:00
Davide Italiano 9d8f6f8a45 Remove inclusion of SSAUpdater from several passes.
It is, in fact, unused. Found while reviewing Danny's new
SSAUpdater and porting passes to it to see what the new API
looks like.

llvm-svn: 293407
2017-01-29 01:55:24 +00:00
Craig Topper 6533e40e9d [X86] Fix vector ANDN matching to work correctly when both inputs to the AND are XORs.
llvm-svn: 293403
2017-01-28 23:52:09 +00:00
Davide Italiano 9b8738d7c8 [PM] MLSM has been enabled for a while. Reclaim a cl::opt.
llvm-svn: 293401
2017-01-28 23:45:37 +00:00
Kostya Serebryany ac2a633467 [libfuzzer] include errno.h. On Ubuntu 14.04 we got away w/o it, but other systems seem to require it
llvm-svn: 293389
2017-01-28 18:56:05 +00:00
Will Dietz f47d26ac2b RuntimeDyldELF: Don't abort on R_X86_64_NONE, it's a no-op.
llvm-svn: 293388
2017-01-28 18:39:01 +00:00
Will Dietz 10294b932c AMDGPU: Add GlobalISel to required_libraries.
llvm-svn: 293387
2017-01-28 18:13:08 +00:00
Mohammad Shahid 3121334d32 [SLP] Vectorize loads of consecutive memory accesses, accessed in a non-consecutive (jumbled) way.
The jumbled scalar loads will be sorted while building the tree, and these accesses will be marked so that a shufflevector with the proper mask is generated after the vectorized load.

Reviewers: hfinkel, mssimpso, mkuper

Differential Revision: https://reviews.llvm.org/D26905

Change-Id: I9c0c8e6f91a00076a7ee1465440a3f6ae092f7ad
llvm-svn: 293386
2017-01-28 17:59:44 +00:00
Arpith Chacko Jacob 2b156edf56 [NVPTX] Add intrinsics to support named barriers.
Support for barrier synchronization between a subset of threads
in a CTA through one of sixteen explicitly specified barriers.
These intrinsics are not directly exposed in CUDA but are
critical for forthcoming support of OpenMP on NVPTX GPUs.

The intrinsics allow the synchronization of an arbitrary
(multiple of 32) number of threads in a CTA at one of 16
distinct barriers. The two intrinsics added are as follows:

call void @llvm.nvvm.barrier.n(i32 10)
waits for all threads in a CTA to arrive at named barrier #10.

call void @llvm.nvvm.barrier(i32 15, i32 992)
waits for 992 threads in a CTA to arrive at barrier #15.

A detailed description of these intrinsics is available in the PTX manual.
http://docs.nvidia.com/cuda/parallel-thread-execution/#parallel-synchronization-and-communication-instructions

Reviewers: hfinkel, jlebar
Differential Revision: https://reviews.llvm.org/D17657

llvm-svn: 293384
2017-01-28 16:38:15 +00:00
Daniel Sanders b96a945bf5 stripDebugInfo() should remove DILocation's found in !llvm.loop metadata
Summary:
Patch by Michele Scandale
(with a small tweak to 'CHECK-NOT' the last DILocation in the test)

Subscribers: bogner, llvm-commits

Differential Revision: https://reviews.llvm.org/D27980

llvm-svn: 293377
2017-01-28 11:22:05 +00:00
Taewook Oh 505a25aec5 [InstCombine] Merge DebugLoc when speculatively hoisting store instruction
Summary: Along with https://reviews.llvm.org/D27804, debug locations need to be merged when hoisting store instructions as well. Not sure if just dropping debug locations would make more sense for this case, but as the branch instruction will have at least a different discriminator from the hoisted store instruction, I think there will be no difference in practice.

Reviewers: aprantl, andreadb, danielcdh

Reviewed By: aprantl

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D29062

llvm-svn: 293372
2017-01-28 07:05:43 +00:00
Matthias Braun 194ded551c Use print() instead of dump() in code
llvm-svn: 293371
2017-01-28 06:53:55 +00:00
Richard Trieu 3de487b2e8 [WebAssembly] Use print instead of dump method.
This fixes non-debug non-assert builds after r293359.

llvm-svn: 293368
2017-01-28 03:23:49 +00:00
Matthias Braun 25bcaba50e Use print() instead of dump() in code
The dump() functions are meant to be used in a debugger, code should
typically use something like print(errs());

llvm-svn: 293365
2017-01-28 02:47:46 +00:00
Daniel Berlin ee6e3a598a MemorySSA: Allow movement to arbitrary places
Summary: Extend the MemorySSAUpdater API to allow movement to arbitrary places

Reviewers: davide, george.burgess.iv

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D29239

llvm-svn: 293363
2017-01-28 02:26:39 +00:00
Quentin Colombet 8cf1163c4f [RegisterBankInfo] Emit proper type for remapped registers.
When the OperandsMapper creates virtual registers, it used to just create
plain scalar registers with the right size. This may confuse the
instruction selector because we lose the information about what the
instruction using those registers is supposed to do. The MachineVerifier
already complains about that.

With this patch, the OperandsMapper still creates plain scalar registers,
but the expectation is for the mapping function to remap the type
properly. The default mapping function has been updated to do that.

rdar://problem/30231850

llvm-svn: 293362
2017-01-28 02:23:48 +00:00
Daniel Berlin 2f1ab4ba79 MemorySSA: Fix block numbering invalidation and replacement bugs discovered by updater
llvm-svn: 293361
2017-01-28 02:22:52 +00:00
Matthias Braun 8c209aa877 Cleanup dump() functions.
We had various variants of defining dump() functions in LLVM. Normalize
them; this should just consistently implement the things discussed in
http://lists.llvm.org/pipermail/cfe-dev/2014-January/034323.html

For reference:
- Public headers should just declare the dump() method but not use
  LLVM_DUMP_METHOD or #if !defined(NDEBUG) || defined(LLVM_ENABLE_DUMP)
- The definition of a dump method should look like this:
  #if !defined(NDEBUG) || defined(LLVM_ENABLE_DUMP)
  LLVM_DUMP_METHOD void MyClass::dump() {
    // print stuff to dbgs()...
  }
  #endif

llvm-svn: 293359
2017-01-28 02:02:38 +00:00
Daniel Berlin ae6b8b6933 MemorySSA: Move updater to its own file
llvm-svn: 293357
2017-01-28 01:35:02 +00:00
Daniel Berlin 60ead05f80 Introduce a basic MemorySSA updater, that supports insertDef,
insertUse, moveBefore and moveAfter operations.

Summary:
This creates a basic MemorySSA updater that handles arbitrary
insertion of uses and defs into MemorySSA, as well as arbitrary
movement around the CFG. It replaces the current splice API.

It can be made to handle arbitrary control flow changes.
Currently, it uses the same updater algorithm from D28934.

The main difference is that, because MemorySSA is single-variable, we have
the complete def and use list, and don't need anyone to give it to us
as part of the API.  We also have to rename stores below us in some
cases.

If we go that direction in that patch, I will merge all the updater
implementations (using an updater_traits or something to provide the
get* functions we use, called read*/write* in that patch).

Sadly, the current SSAUpdater algorithm is way too slow to use for
what we are doing here.

I have updated the tests we have to basically build MemorySSA
incrementally using the updater API, and make sure it still comes out
the same.

Reviewers: george.burgess.iv

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D29047

llvm-svn: 293356
2017-01-28 01:23:13 +00:00
Quentin Colombet 351099022a [RegisterCoalescing] Recommit the patch "Remove partial redundent copy".
In r292621, the recommit fixes a bug related to live interval update
after the partially redundant copy is moved.

This recommit solves an additional bug related to the lack of update of
subranges.

The original patch solves the performance problem described in
PR27827. Register coalescing sometimes cannot remove a copy because of
interference. But if we can find a reverse copy in one of the predecessor
blocks of the copy, the copy is partially redundant and we may remove the
copy partially by moving it to the predecessor block without the
reverse copy.

Differential Revision: https://reviews.llvm.org/D28585

Re-apply r292621

Revert "Revert rL292621. Caused some internal build bot failures in apple."

This reverts commit r292984.

Original patch: Wei Mi <wmi@google.com>
Subrange fix: Mostly Matthias Braun <matze@braunis.de>

llvm-svn: 293353
2017-01-28 01:05:27 +00:00
Evgeniy Stepanov d0852873e5 Fix memory leak in globalisel.
#0 0x89cdeb in operator new[](unsigned long) /code/llvm/projects/compiler-rt/lib/asan/asan_new_delete.cc:84:37
    #1 0x4ec87c4 in llvm::RegisterBankInfo::ValueMapping const* llvm::RegisterBankInfo::getOperandsMapping<llvm::RegisterBankInfo::ValueMapping const* const*>(llvm::RegisterBankInfo::ValueMapping const* const*, llvm::RegisterBankInfo::ValueMapping const* const*) const /code/llvm/lib/CodeGen/GlobalISel/RegisterBankInfo.cpp:297:9
    #2 0x9327ee in llvm::AArch64RegisterBankInfo::getInstrMapping(llvm::MachineInstr const&) const /code/llvm/lib/Target/AArch64/AArch64RegisterBankInfo.cpp:540:30
    #3 0x4eb8d07 in llvm::RegBankSelect::assignInstr(llvm::MachineInstr&) /code/llvm/lib/CodeGen/GlobalISel/RegBankSelect.cpp:546:24
    #4 0x4eb9dd2 in llvm::RegBankSelect::runOnMachineFunction(llvm::MachineFunction&) /code/llvm/lib/CodeGen/GlobalISel/RegBankSelect.cpp:624:12
    #5 0x3141875 in llvm::MachineFunctionPass::runOnFunction(llvm::Function&) /code/llvm/lib/CodeGen/MachineFunctionPass.cpp:62:13
    #6 0x396128d in llvm::FPPassManager::runOnFunction(llvm::Function&) /code/llvm/lib/IR/LegacyPassManager.cpp:1513:27
    #7 0x3961832 in llvm::FPPassManager::runOnModule(llvm::Module&) /code/llvm/lib/IR/LegacyPassManager.cpp:1534:16
    #8 0x3962540 in runOnModule /code/llvm/lib/IR/LegacyPassManager.cpp:1590:27
    #9 0x3962540 in llvm::legacy::PassManagerImpl::run(llvm::Module&) /code/llvm/lib/IR/LegacyPassManager.cpp:1693
    #10 0x8ae368 in compileModule(char**, llvm::LLVMContext&) /code/llvm/tools/llc/llc.cpp:562:8
    #11 0x8a7a1b in main /code/llvm/tools/llc/llc.cpp:316:22

llvm-svn: 293351
2017-01-28 00:46:30 +00:00
Eugene Zelenko e79c077ef9 [ARM] Fix some Clang-tidy modernize and Include What You Use warnings; other minor fixes (NFC).
llvm-svn: 293348
2017-01-27 23:58:02 +00:00
Tim Northover 12bd22fbee GlobalISel: don't leak super-entry BB when merging with IR-level one.
We have to delete the block manually or it leaks. That triggers failures in
-fsanitize=leak bots (unsurprisingly), which should be fixed by this patch.

llvm-svn: 293347
2017-01-27 23:54:31 +00:00
Sanjay Patel febcb9ce54 [InstCombine] move icmp transforms that might be recognized as min/max and cause an inf-loop (PR31751)
This is a minimal patch to avoid the infinite loop in:
https://llvm.org/bugs/show_bug.cgi?id=31751

But the general problem is bigger: we're not canonicalizing all of the min/max forms reported
by value tracking's matchSelectPattern(), and we don't define min/max consistently. Some code
uses matchSelectPattern(), other code uses matchers like m_Umax, and others have their own
inline definitions which may be subtly different from any of the above.

The reason that the test cases in this patch need a cast op to trigger is because we don't
(yet) canonicalize all min/max forms based on matchSelectPattern() in 
canonicalizeMinMaxWithConstant(), but we do make min/max+cast transforms based on 
matchSelectPattern() in visitSelectInst().

The location of the icmp transforms that trigger the inf-loop seems arbitrary at best, so
I'm moving those behind the min/max fence in visitICmpInst() as the quick fix.

llvm-svn: 293345
2017-01-27 23:26:27 +00:00
Peter Collingbourne 5ad775f2e8 Analysis: Add appropriate const qualification to functions in TypeMetadataUtils.cpp. NFC.
llvm-svn: 293341
2017-01-27 22:55:30 +00:00
Kostya Serebryany 6d58dbb62f [libFuzzer] make shmem more robust in the presence of signals
llvm-svn: 293339
2017-01-27 22:41:30 +00:00
Artem Tamazov 33b01e9cfe [AMDGPU][mc] Fix memory corruption uncovered by AddressSanitizer during coverage/smoke Gfx7/8 testing.
Coverage/smoke Gfx7/8 tests were committed r292922 but then reverted
by r292974 due to AddressSanitizer failure, which is fixed by this patch.
Tests to be re-committed soon.

llvm-svn: 293338
2017-01-27 22:19:42 +00:00
Tim Northover d8b85584f2 GlobalISel: set correct regclass for LOAD_STACK_GUARD.
Since it's not actually a generic MI, its register operands need a RegClass,
which is conveniently the target's pointer RegClass.

llvm-svn: 293335
2017-01-27 21:31:24 +00:00
Tim Northover c9bc8a5580 GlobalISel: mark incoming landing-pad registers as live.
Should fix machine verifier failures.

llvm-svn: 293334
2017-01-27 21:31:17 +00:00
Krzysztof Parzyszek 35ce5dac7f [Hexagon] Remove unused variable (and silence a warning)
llvm-svn: 293331
2017-01-27 20:40:14 +00:00
Mehdi Amini 453ab3522b Fix ASAN failure in cxa_demangle
Found with ASAN + libFuzzer by Kostya Serebryany <kcc@google.com>

llvm-svn: 293330
2017-01-27 20:32:16 +00:00
Mehdi Amini 888dee444b Global DCE performance improvement
Change the original algorithm so that it scales better when meeting
very large bitcode where not every instruction implies a global.

The target query is "how do you get all the globals referenced by
another global?"

Before this patch, it was doing this by walking the body (or the
initializer) and collecting the references. What this patch does is
precompute the answer to this query for the whole module by
walking the use-list of every global instead.

Patch by: Serge Guelton <serge.guelton@telecom-bretagne.eu>

Differential Revision: https://reviews.llvm.org/D28549

llvm-svn: 293328
2017-01-27 19:48:57 +00:00
Xinliang David Li d289e4541f [PGO] add debug option to view raw count after prof use annotation
Differential Revision: https://reviews.llvm.org/D29045

llvm-svn: 293325
2017-01-27 19:06:25 +00:00
Matthias Braun c91e28af4b ScheduleDAGInstrs: Do not try to toggle kill flags on debug uses
Preparation for upcoming changes. No testcase as none of the public
targets bundles early enough and has a post machine scheduler enabled at
the same time. The error is also easily caught by asserts.

llvm-svn: 293324
2017-01-27 18:53:07 +00:00
Matthias Braun 26e8c350f9 ScheduleDAGInstrs: Cleanup toggleKillFlag(); NFC
llvm-svn: 293323
2017-01-27 18:53:05 +00:00
Matthias Braun bd7d91838e ScheduleDAGInstrs: Cleanup; NFC
Comment, doxygen and a bit of whitespace cleanup.

llvm-svn: 293322
2017-01-27 18:53:00 +00:00
Tom Stellard 08efb7ebf6 AMDGPU/SI: Move some ISel helpers into utils so they can be shared with GISel
Reviewers: arsenm

Reviewed By: arsenm

Subscribers: kzhuravl, wdng, nhaehnle, yaxunl, llvm-commits, tony-tye

Differential Revision: https://reviews.llvm.org/D29068

llvm-svn: 293321
2017-01-27 18:41:14 +00:00
Konstantin Zhuravlyov a304c83608 [AMDGPU] Grab MCSubtargetInfo from TargetMachine instead of constructing it
Differential Revision: https://reviews.llvm.org/D29224

llvm-svn: 293318
2017-01-27 18:32:40 +00:00
Chris Ray 535e7d1547 [X86] Adding FFREEP instruction.
Summary: Small change to get the FFREEP instruction to decode properly.

Reviewers: craig.topper

Reviewed By: craig.topper

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D29193

llvm-svn: 293314
2017-01-27 18:02:53 +00:00
Anna Thomas e7d865e34e NFC: Add debug tracing for more cases where loop unrolling fails.
llvm-svn: 293313
2017-01-27 17:57:05 +00:00
Matt Arsenault d8f7ea381f AMDGPU: Enable FeatureFlatForGlobal on Volcanic Islands
Accomplishes what r292982 was supposed to, which ended up
only really making the necessary test changes.

This should be applied to the 4.0 branch.

Patch by Vedran Miletić <vedran@miletic.net>

llvm-svn: 293310
2017-01-27 17:42:26 +00:00
Matt Arsenault 32b9600a7e NVPTX: Make NVPTXInferAddressSpaces preserve CFG
llvm-svn: 293308
2017-01-27 17:30:39 +00:00
Jun Bum Lim b99a06b7c9 [CodeGenPrep] No negative cost in the ExtLd promotion
Summary: This change prevents the signed value of the cost from being negative, as the value is passed as an unsigned argument.

Reviewers: mcrosier, jmolloy, qcolombet, javed.absar

Reviewed By: mcrosier, qcolombet

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D28871

llvm-svn: 293307
2017-01-27 17:16:37 +00:00
Stanislav Mekhanoshin f6c1feb8c3 [AMDGPU] Turn AMDGPUUnifyMetadata back into module pass
With the adjustPassManager interface it is now possible to use
custom early module passes.

Differential Revision: https://reviews.llvm.org/D29189

llvm-svn: 293300
2017-01-27 16:38:10 +00:00
Mehdi Amini 1726fc698c Fix BasicAA incorrect assumption on GEP
This fixes PR31761: BasicAA was deducing NoAlias
on the result of the GEP if the base pointer is itself NoAlias.

This is possible only if the NoAlias on the base pointer is
deduced with a non-sized query: this should guarantee that
the pointers belong to different memory allocations
and that the GEP can't legally jump from one to another.

Differential Revision: https://reviews.llvm.org/D29216

llvm-svn: 293293
2017-01-27 16:12:22 +00:00
Ivan Krasin c05c9db364 Avoid using unspecified ordering in MetadataLoader::MetadataLoaderImpl::parseOneMetadata.
Summary:
MetadataLoader::MetadataLoaderImpl::parseOneMetadata uses
the following construct in a number of places:

```
MetadataList.assignValue(<...>, NextMetadataNo++);
```

There, NextMetadataNo gets incremented, and since the order
of arguments evaluation is not specified, that can happen
before or after other arguments are evaluated.

In a few cases the other arguments indirectly use NextMetadataNo.
For instance, it's

```
MetadataList.assignValue(
    GET_OR_DISTINCT(DIModule,
                    (Context, getMDOrNull(Record[1]),
                     getMDString(Record[2]), getMDString(Record[3]),
                     getMDString(Record[4]), getMDString(Record[5]))),
    NextMetadataNo++);
```

getMDOrNull calls getMD that uses NextMetadataNo:

```
MetadataList.getMetadataFwdRef(NextMetadataNo);
```

Therefore, the order of evaluation becomes important. That caused
a very subtle LLD crash that only happens if compiled with GCC or
if LLD is built with LTO. In the case where LLD is compiled with Clang
in the regular linking mode, everything worked as intended.

This change extracts incrementing of NextMetadataNo outside of
the arguments list to guarantee the correct order of evaluation.

For the record, this has taken 3 days to track to the origin. It all
started with a ThinLTO bot in Chrome not being able to link a target
if debug info is enabled.

Reviewers: pcc, mehdi_amini

Reviewed By: mehdi_amini

Subscribers: aprantl, llvm-commits

Differential Revision: https://reviews.llvm.org/D29204

llvm-svn: 293291
2017-01-27 15:54:49 +00:00
Simon Dardis ca74dd79e9 [mips] Recommit: "N64 static relocation model support"
This patch makes one change to GOT handling and two changes to N64's
relocation model handling. Furthermore, the jumptable encodings have
been corrected for static N64.

Big GOT handling is now done via a new SDNode MipsGotHi - this node is
unconditionally lowered to an lui instruction.

The first change to N64's relocation handling is the lifting of the
restriction that N64 always uses PIC. Now it is possible to target static
environments.

The second change adds support for 64 bit symbols and enables them by
default. Previously N64 had patterns for sym32 mode only. In this mode all
symbols are assumed to have 32 bit addresses. sym32 mode support
is selectable with attribute 'sym32'. A follow on patch for clang will
add the necessary frontend parameter.

This partially resolves PR/23485.

Thanks to Brooks Davis for reporting the issue!

This version corrects a "Conditional jump or move depends on uninitialised
value(s)" error, detected by valgrind, that was present in the original commit.

Reviewers: dsanders, seanbruno, zoran.jovanovic, vkalintiris

Differential Revision: https://reviews.llvm.org/D23652

llvm-svn: 293279
2017-01-27 11:36:52 +00:00
Alexey Bataev 4015bf8372 [SLP] Refactoring of horizontal reduction analysis, NFC.
Some checks in SLP horizontal reduction analysis function are performed
several times, though it is enough to perform these checks only once
during an initial attempt at adding candidate for the reduction
instruction/reduced value.

Differential Revision: https://reviews.llvm.org/D29175

llvm-svn: 293274
2017-01-27 10:54:04 +00:00
Chandler Carruth fd2d7c72fc [LICM] When we are recomputing the alias sets for a subloop, we cannot
skip sub-subloops.

The logic to skip subloops dated from when this code was shared with the
cached case. Once it was factored out to only run in the case of
recomputed subloops it became a dangerous bug. If a subsubloop contained
an interfering instruction it would be silently skipped from the alias
sets for LICM.

With the old pass manager this was extremely hard to trigger as it would
require failing to visit these subloops with the LICM pass but then
visiting the outer loop somehow. I've not yet contrived any test case
that actually manages to trigger this.

But with the new pass manager we don't do the cross-loop caching hack
that the old PM does and so we recompute alias set information from
first principles. While this seems much cleaner and simpler it exposed
this bug and would subtly miscompile code due to failing to correctly
model the aliasing constraints of deeply nested loops.

llvm-svn: 293273
2017-01-27 10:27:32 +00:00
Jonas Paulsson bb0ed3e732 [DAGTypeLegalizer] Handle SIGN/ZERO_EXTEND in WidenVecRes_Convert().
In case of a SIGN/ZERO_EXTEND of an incomplete vector type (using only a
partial number of available vector elements), WidenVecRes_Convert() used to
resort to scalarization.

This patch adds a handling of the (common) case where an input vector can be
found of same width as the widened result vector, by converting the node to
SIGN/ZERO_EXTEND_VECTOR_INREG.
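
For illustration, an extension of this shape (a hypothetical example; the
exact trigger depends on the target's legal vector types):

  %r = sext <2 x i16> %v to <2 x i32>

If the result is widened to a larger legal vector, the node can now become
a SIGN_EXTEND_VECTOR_INREG of a same-width input instead of being scalarized.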

Review: Eli Friedman
llvm-svn: 293268
2017-01-27 07:46:26 +00:00
Richard Trieu 0b79aa3373 Fix unused variable warning.
llvm-svn: 293260
2017-01-27 06:06:05 +00:00
Saleem Abdulrasool 26c00e3700 ARM: fix vectorized division on WoA
The Windows on ARM target uses custom division for normal division as
the backend needs to insert division-by-zero checks.  However, it is
designed to only handle non-vectorized division.  ARM has custom
lowering for vectorized division as that can avoid loading registers
with the values and invoke a division routine for each one, preferring
to lower using NEON instructions.  Fall back to the custom lowering for
the NEON instructions if we encounter a vectorized division.

Resolves PR31778!

llvm-svn: 293259
2017-01-27 03:41:53 +00:00
Daniel Berlin c479686af2 NewGVN: Add basic dead and redundant store elimination
Summary:
This adds basic dead and redundant store elimination to
NewGVN.  Unlike our current DSE, it will happily do cross-block DSE if
it meets our requirements.

We get a bunch of DSE's simple.ll cases, and some stuff it doesn't.
Unlike DSE, however, we only try to eliminate stores of the same value
to the same memory location, not just general stores to the same
memory location.

Reviewers: davide

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D29149

llvm-svn: 293258
2017-01-27 02:37:11 +00:00
NAKAMURA Takumi 0d299191d0 NVPTXCodeGen: Add IPO to libdeps, since r293189.
llvm-svn: 293256
2017-01-27 02:11:10 +00:00
Tim Shen 601ba8c583 [APFloat] Reduce some dispatch boilerplates. NFC.
Summary: This is an attempt to reduce the verbose manual dispatching code in APFloat. This doesn't handle multiple dispatch on a single discriminator (e.g. APFloat::add(const APFloat&)), nor multiple dispatch on multiple discriminators (e.g. APFloat::convert()).

Reviewers: hfinkel, echristo, jlebar

Subscribers: mehdi_amini, llvm-commits

Differential Revision: https://reviews.llvm.org/D29161

llvm-svn: 293255
2017-01-27 02:11:07 +00:00
Justin Lebar 25ebe2d767 [NVPTX] [InstCombine] Add llvm_unreachable to appease MSVC.
llvm-svn: 293253
2017-01-27 02:04:07 +00:00
Justin Lebar e3ac0fb948 [NVPTX] Fix use-after-stack-free bug in InstCombineCalls.
Introduced in r293244.

llvm-svn: 293251
2017-01-27 01:49:39 +00:00
Xin Tong e5f8d643d4 Constant fold switch inst when looking for trivial conditions to unswitch on.
Summary: Constant fold switch inst when looking for trivial conditions to unswitch on.

Reviewers: sanjoy, chenli, hfinkel, efriedma

Subscribers: llvm-commits, mzolotukhin

Differential Revision: https://reviews.llvm.org/D29037

llvm-svn: 293250
2017-01-27 01:42:20 +00:00
Chandler Carruth baabda9317 [PM] Port LoopLoadElimination to the new pass manager and wire it into
the main pipeline.

This is a very straightforward port. Nothing weird or surprising.

This brings the number of missing passes from the new PM's pipeline down
to three.

llvm-svn: 293249
2017-01-27 01:32:26 +00:00
Quentin Colombet 89dbea06f1 [ARM][LegalizerInfo] Specify the type of the opcode.
This is to fix the win7 bot that does not seem to be very
good at inferring the type when it gets used in an initializer list.

llvm-svn: 293248
2017-01-27 01:30:46 +00:00
Quentin Colombet 24203cf997 [AArch64][LegalizerInfo] Specify the type of the opcode.
This is an attempt to fix the win7 bot that does not seem to be very
good at inferring the type when it gets used in an initializer list.

llvm-svn: 293246
2017-01-27 01:13:30 +00:00
Quentin Colombet e15e460c05 Revert "[AArch64][LegalizerInfo] Specify the type of the initialization list."
This reverts commit r293238.
Even with that the win7 bot is still failing:
http://lab.llvm.org:8011/builders/lld-x86_64-win7/builds/3862

llvm-svn: 293245
2017-01-27 01:13:25 +00:00
Justin Lebar 698c31b8db [NVPTX] Upgrade NVVM intrinsics in InstCombineCalls.
Summary:
There are many NVVM intrinsics that we can't entirely get rid of, but
that nonetheless often correspond to target-generic LLVM intrinsics.

For example, if flush denormals to zero (ftz) is enabled, we can convert
@llvm.nvvm.ceil.ftz.f to @llvm.ceil.f32.  On the other hand, if ftz is
disabled, we can't do this, because @llvm.ceil.f32 will be lowered to a
non-ftz PTX instruction.  In this case, we can, however, simplify the
non-ftz nvvm ceil intrinsic, @llvm.nvvm.ceil.f, to @llvm.ceil.f32.
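
For example, sketched in IR (assuming the non-ftz case just described):

  %r = call float @llvm.nvvm.ceil.f(float %x)
    =>
  %r = call float @llvm.ceil.f32(float %x)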

These transformations are particularly useful because they let us
constant fold instructions that appear in libdevice, the bitcode library
that ships with CUDA and essentially functions as its libm.

Reviewers: tra

Subscribers: hfinkel, majnemer, llvm-commits

Differential Revision: https://reviews.llvm.org/D28794

llvm-svn: 293244
2017-01-27 00:58:58 +00:00
Justin Lebar 322c127bee [ValueTracking] Add comment that CannotBeOrderedLessThanZero does the wrong thing for powi.
Summary:
CannotBeOrderedLessThanZero(powi(x, exp)) returns true if
CannotBeOrderedLessThanZero(x).  But powi(-0, exp) is negative if exp is
odd, so we actually want to return SignBitMustBeZero(x).

Except that also isn't right, because we want to return true if x is
NaN, even if x has a negative sign bit.

What we really need in order to fix this is a consistent approach in
this function to handling the sign bit of NaNs.  Without this it's very
difficult to say what the correct behavior here is.

Reviewers: hfinkel, efriedma, sanjoy

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D28927

llvm-svn: 293243
2017-01-27 00:58:34 +00:00
Justin Lebar cb9b41dd76 [LangRef] Make @llvm.sqrt(x) return undef, rather than have UB, for negative x.
Summary:
Some frontends emit a speculate-and-select idiom for sqrt, wherein they compute
sqrt(x), check if x is negative, and select NaN if it is:

  %cmp = fcmp olt double %a, -0.000000e+00
  %sqrt = call double @llvm.sqrt.f64(double %a)
  %ret = select i1 %cmp, double 0x7FF8000000000000, double %sqrt

This is technically UB as the LangRef is written today if %a is ever less than
-0.  But emitting code that's compliant with the current definition of sqrt
would require a branch, which would then prevent us from matching this idiom in
SelectionDAG (which we do today -- ISD::FSQRT has defined behavior on negative
inputs), because SelectionDAG looks at one BB at a time.

Nothing in LLVM takes advantage of this undefined behavior, as far as we can
tell, and the fact that llvm.sqrt has UB dates from its initial addition to the
LangRef.

Reviewers: arsenm, mehdi_amini, hfinkel

Subscribers: wdng, llvm-commits

Differential Revision: https://reviews.llvm.org/D28797

llvm-svn: 293242
2017-01-27 00:58:03 +00:00
Chandler Carruth a95ff38924 [PM] Flesh out almost all of the late loop passes.
With this the per-module pass pipeline is *extremely* close to the
legacy PM. The missing pieces are:
- PruneEH (or some equivalent)
- ArgumentPromotion
- LoopLoadElimination
- LoopUnswitch

I'm going to work through those in essentially that order but this seems
like a worthwhile incremental step toward the end state.

One difference in what I have here from the legacy PM is that I've
consolidated some of the per-function passes at the very end of the
pipeline into the main optimization function pipeline. The intervening
passes are *really* uninteresting and so this seems very unlikely to have
any effect other than a minor improvement to locality.

Note that there are still some failures in the test suite, but the
compiler doesn't crash or assert.

Differential Revision: https://reviews.llvm.org/D29114

llvm-svn: 293241
2017-01-27 00:50:21 +00:00
Kostya Serebryany 70182deaae [libFuzzer] simplify the value profiling callback further: don't use (idx MOD prime) on the hot path where it is useless anyway
llvm-svn: 293239
2017-01-27 00:39:12 +00:00
Quentin Colombet 86fc8305ec [AArch64][LegalizerInfo] Specify the type of the initialization list.
This is an attempt to fix the win7 bot that does not seem to be very
good at inferring the type.

llvm-svn: 293238
2017-01-27 00:39:03 +00:00
Kostya Serebryany 8e9ac42742 [libFuzzer] make sure (again) that __builtin_popcountl is compiled into popcnt
llvm-svn: 293237
2017-01-27 00:20:55 +00:00
Kostya Serebryany 7f058972ee [libFuzzer] simplify the value profile code and disable asan/msan on it
llvm-svn: 293236
2017-01-27 00:09:59 +00:00
Adrian McCarthy 8f713190e7 NFC: Rename PDB_ReaderType::Raw to Native for consistency with the NativeSession rename.
llvm-svn: 293235
2017-01-27 00:01:55 +00:00
Eugene Zelenko e6cf4374b0 [ARM] Fix some Clang-tidy modernize and Include What You Use warnings; other minor fixes (NFC).
llvm-svn: 293229
2017-01-26 23:40:06 +00:00
Tim Northover 09aac4ad2a GlobalISel: support debug intrinsics.
The translation scheme is mostly cribbed from FastISel, and it's not entirely
convincing semantically. But it does seem to work in the common cases and allow
variables to be printed so it can't be all wrong.

llvm-svn: 293228
2017-01-26 23:39:14 +00:00
Sanjoy Das 7516192a71 Revert a couple of InstCombine/Guard checkins
This change reverts:

r293061: "[InstCombine] Canonicalize guards for NOT OR condition"
r293058: "[InstCombine] Canonicalize guards for AND condition"

They miscompile cases like:

```
declare void @llvm.experimental.guard(i1, ...)

define void @test_guard_not_or(i1 %A, i1 %B) {
  %C = or i1 %A, %B
  %D = xor i1 %C, true
  call void(i1, ...) @llvm.experimental.guard(i1 %D, i32 20, i32 30)[ "deopt"() ]
  ret void
}
```

because they do transfer the `i32 20, i32 30` parameters to newly
created guard instructions.

llvm-svn: 293227
2017-01-26 23:38:11 +00:00
Andrew Kaylor a0a1164ce4 Add intrinsics for constrained floating point operations
This commit introduces a set of experimental intrinsics intended to prevent
optimizations that make assumptions about the rounding mode and floating point
exception behavior.  These intrinsics will later be extended to specify
flush-to-zero behavior.  More work is also required to model instruction
dependencies in machine code and to generate these instructions from clang
(when required by pragmas and/or command line options that are not currently
supported).

Differential Revision: https://reviews.llvm.org/D27028

llvm-svn: 293226
2017-01-26 23:27:59 +00:00
Chandler Carruth 79b733bc6b [PM] Enable the main loop pass pipelines with everything but
loop-unswitch in the main pipelines for the new PM.

All of these now work, and Clang built using this pipeline can build the
test suite and SPEC without hitting any asserts or ASan failures.

There are still some bugs hiding though -- 7 tests regress with the new
PM. I'm going to be investigating these, but it seems worthwhile to at
least get the pipelines in place so that others can play with them, and
they aren't completely broken.

Differential Revision: https://reviews.llvm.org/D29113

llvm-svn: 293225
2017-01-26 23:21:17 +00:00
Krzysztof Parzyszek d6c8e3c9ce [Hexagon] Require IPO library in Hexagon build
This should unbreak the Hexagon build bots.

llvm-svn: 293221
2017-01-26 23:03:22 +00:00
Daniel Berlin 1ea5f324bd NewGVN: Fix bug exposed by PR31761
Summary:
This does not actually fix the testcase in PR31761 (discussion is
ongoing on the testcase), but does fix a bug it exposes, where stores
were not properly clobbering loads.

We accomplish this by unifying the memory equivalence infrastructure
back into the normal congruence infrastructure, and then properly
destroying congruence classes when memory state leaders disappear.

Reviewers: davide

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D29195

llvm-svn: 293216
2017-01-26 22:21:48 +00:00
Sanjay Patel 50753f02c2 [InstCombine] fold (X >>u C) << C --> X & (-1 << C)
We already have this fold when the lshr has one use, but it doesn't need that
restriction. We may be able to remove some code from foldShiftedShift().
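
A minimal sketch of the fold with C = 4 (illustrative value):

  %a = lshr i32 %x, 4
  %b = shl i32 %a, 4
    =>
  %b = and i32 %x, -16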

Also, move the similar:
(X << C) >>u C --> X & (-1 >>u C)
...directly into visitLShr to help clean up foldShiftByConstOfShiftByConst().

That whole function seems questionable since it is called by commonShiftTransforms(),
but there's really not much in common if we're checking the shift opcodes for every
fold.

llvm-svn: 293215
2017-01-26 22:08:10 +00:00
Krzysztof Parzyszek c8b943860f [Hexagon] Add Hexagon-specific loop idiom recognition pass
llvm-svn: 293213
2017-01-26 21:41:10 +00:00
Daniel Berlin db3c7be069 NewGVN: Add algorithm overview
llvm-svn: 293212
2017-01-26 21:39:49 +00:00
Sanjay Patel b0d96d327e [InstCombine] use m_APInt to allow (X << C) >>u C --> X & (-1 >>u C) with splat vectors
llvm-svn: 293208
2017-01-26 20:52:27 +00:00
Balaram Makam b73d2962ba [AArch64] Refine Kryo Machine Model
Summary: Refine floating point SQRT and DIV with accurate latency information.

Reviewers: mcrosier

Subscribers: aemerson, rengolin, llvm-commits

Differential Revision: https://reviews.llvm.org/D29191

llvm-svn: 293204
2017-01-26 20:10:41 +00:00
Kyle Butt c4614b3e76 [IfConversion] Use reverse_iterator to simplify. NFC
This simplifies skipping debug instructions and shrinking ranges.

llvm-svn: 293202
2017-01-26 20:02:47 +00:00
Sean Fertile 3c8c385a77 [PPC] cleanup of mayLoad/mayStore flags and memory operands.
1) Explicitly sets mayLoad/mayStore property in the tablegen files on load/store
   instructions.
2) Updated the flags on a number of intrinsics indicating that they write
    memory.
3) Added SDNPMemOperand flags for some target dependent SDNodes so that they
   propagate their memory operand

Review: https://reviews.llvm.org/D28818
llvm-svn: 293200
2017-01-26 18:59:15 +00:00
Daniel Berlin 2b83492eee NewGVN: Make unreachable blocks be marked with unreachable
llvm-svn: 293196
2017-01-26 18:30:29 +00:00
Stanislav Mekhanoshin 81598117b6 Replace addEarlyAsPossiblePasses callback with adjustPassManager
This change introduces the adjustPassManager target callback, giving a
target an opportunity to tweak the PassManagerBuilder before pass
managers are populated.

This generalizes and replaces the addEarlyAsPossiblePasses target
callback. In particular, it can be used to add custom passes to
extension points other than EP_EarlyAsPossible.

Differential Revision: https://reviews.llvm.org/D28336

llvm-svn: 293189
2017-01-26 16:49:08 +00:00
Nirav Dave d32a421f75 Revert "In visitSTORE, always use FindBetterChain, rather than only when UseAA is enabled."
This reverts commit r293184 which is failing in LTO builds

llvm-svn: 293188
2017-01-26 16:46:13 +00:00
Serge Rogatch e09ba748cf [XRay][Arm32] Reduce the portion of the stub and implement more staging for tail calls - in LLVM
Summary:
This patch provides more staging for tail calls in XRay Arm32. When the logging part of XRay is ready for tail calls, its support in the core part of XRay Arm32 may be as easy as changing the number passed to the handler from 1 to 2.
Coupled patch:
- https://reviews.llvm.org/D28674

Reviewers: dberris, rengolin

Reviewed By: dberris

Subscribers: llvm-commits, iid_iunknown, aemerson, rengolin, dberris

Differential Revision: https://reviews.llvm.org/D28673

llvm-svn: 293185
2017-01-26 16:17:03 +00:00
Nirav Dave de6516c466 In visitSTORE, always use FindBetterChain, rather than only when UseAA is enabled.
* Simplify Consecutive Merge Store Candidate Search

    Now that address aliasing is much less conservative, push through a
    simplified store-merging search and chain alias analysis which only
    checks for parallel stores through the chain subgraph. This is cleaner,
    as it separates the non-interfering loads/stores from the
    store-merging logic.

    When merging stores, search up the chain through a single load, and
    find all possible stores by looking down through a load and a
    TokenFactor to all stores visited.

    This improves the quality of the output SelectionDAG and the output
    Codegen (save perhaps for some ARM cases where we correctly construct
    wider loads, but then promote them to float operations which appear
    to require more expensive constant generation).

    Some minor peephole optimizations to deal with improved SubDAG shapes (listed below)

    Additional Minor Changes:

      1. Finishes removing unused AliasLoad code

      2. Unifies the chain aggregation in the merged stores across code
         paths

      3. Re-add the Store node to the worklist after calling
         SimplifyDemandedBits.

      4. Increase GatherAllAliasesMaxDepth from 6 to 18. That number is
         arbitrary, but seems sufficient to not cause regressions in
         tests.

      5. Remove Chain dependencies of Memory operations on CopyfromReg
         nodes as these are captured by data dependence

      6. Forward loads-store values through tokenfactors containing
          {CopyToReg,CopyFromReg} Values.

      7. Peephole to convert buildvector of extract_vector_elt to
         extract_subvector if possible (see
         CodeGen/AArch64/store-merge.ll)

      8. Store merging for the ARM target is restricted to 32-bit as
         in some contexts invalid 64-bit operations are being
         generated. This can be removed once appropriate checks are
         added.

    This finishes the change Matt Arsenault started in r246307 and
    jyknight's original patch.

    Many tests required some changes as memory operations are now
    reorderable, improving load-store forwarding. One test in
    particular is worth noting:

      CodeGen/PowerPC/ppc64-align-long-double.ll - Improved load-store
      forwarding converts a load-store pair into a parallel store and
      a memory-realized bitcast of the same value. However, because we
      lose the sharing of the explicit and implicit store values we
      must create another local store. A similar transformation
      happens before SelectionDAG as well.

    Reviewers: arsenm, hfinkel, tstellarAMD, jyknight, nhaehnle

llvm-svn: 293184
2017-01-26 16:02:24 +00:00
Rafael Espindola 82149a1aa9 Use shouldAssumeDSOLocal in classifyGlobalReference.
And teach shouldAssumeDSOLocal that ppc has no copy relocations.

The resulting code handles a few more cases than before. For example, it
knows that a weak symbol can be resolved to another .o file, but it
will still be in the main executable.

llvm-svn: 293180
2017-01-26 15:02:31 +00:00
Simon Pilgrim 027bb453d9 [X86][SSE] Add support for combining ANDNP byte masks with target shuffles
llvm-svn: 293178
2017-01-26 14:31:12 +00:00
Daniil Fukalov b09dac59fc [SCEV] Introduce add operation inlining limit
Inlining in getAddExpr() can cause abnormally long compile times in some cases.
A new parameter, -scev-addops-inline-threshold, is introduced with a default value of 500.

Reviewers: sanjoy

Subscribers: mzolotukhin, llvm-commits

Differential Revision: https://reviews.llvm.org/D28812

llvm-svn: 293176
2017-01-26 13:33:17 +00:00
Simon Pilgrim 3057fd53f9 [X86][SSE] Pull out target shuffle resolve code into helper. NFCI.
Pulled out code that removed unused inputs from a target shuffle mask into a helper function to allow it to be reused in a future commit.

llvm-svn: 293175
2017-01-26 13:06:02 +00:00
Valery Pykhtin 75d1de903f [AMDGPU] Fix typo in GCNSchedStrategy
Differential revision: https://reviews.llvm.org/D28980

llvm-svn: 293171
2017-01-26 10:51:47 +00:00
Simon Dardis 5b67a4f75f Revert "[mips] N64 static relocation model support"
This reverts commit r293164. There are multiple tests failing.

llvm-svn: 293170
2017-01-26 10:46:07 +00:00
Chandler Carruth 6f4ed077d0 [LV] Fix an issue where forming LCSSA in the place that we did would
change the set of uniform instructions in the loop, causing an assert
failure.

The problem is that the legalization checking also builds data
structures mapping various facts about the loop body. The immediate
cause was the set of uniform instructions. If these then change when
LCSSA is formed, the data structures would already have been built and
become stale. The included test case triggered an assert in loop
vectorize that was reduced out of the new PM's pipeline.

The solution is to form LCSSA early enough that no information is cached
across the changes made. The only really obvious position is outside of
the main logic to vectorize the loop. This also has the advantage of
removing one case where forming LCSSA could mutate the loop but we
wouldn't track that as a "Changed" state.

If it is significantly advantageous to do some legalization checking
prior to this, we can do a more careful positioning but it seemed best
to just back off to a safe position first.

llvm-svn: 293168
2017-01-26 10:41:09 +00:00
Simon Dardis 09e65efd09 [mips] N64 static relocation model support
This patch makes one change to GOT handling and two changes to N64's
relocation model handling. Furthermore, the jumptable encodings have
been corrected for static N64.

Big GOT handling is now done via a new SDNode MipsGotHi - this node is
unconditionally lowered to an lui instruction.

The first change to N64's relocation handling is the lifting of the
restriction that N64 always uses PIC. Now it is possible to target static
environments.

The second change adds support for 64 bit symbols and enables them by
default. Previously N64 had patterns for sym32 mode only. In this mode all
symbols are assumed to have 32 bit addresses. sym32 mode support
is selectable with attribute 'sym32'. A follow on patch for clang will
add the necessary frontend parameter.

This partially resolves PR/23485.

Thanks to Brooks Davis for reporting the issue!

Reviewers: dsanders, seanbruno, zoran.jovanovic, vkalintiris

Differential Revision: https://reviews.llvm.org/D23652

llvm-svn: 293164
2017-01-26 10:19:02 +00:00
Diana Picus 278c722e6d [ARM] GlobalISel: Load i1, i8 and i16 args from stack
Add support for loading i1, i8 and i16 arguments from the stack, with or without
the ABI extension flags.

When the ABI extension flags are present, we load a 4-byte value; otherwise we
preserve the size of the load and let the instruction selector replace it with an
LDRB/LDRH. This generates the same thing as DAGISel.
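
For illustration (hand-written C++, not part of the patch), under AAPCS the
fifth integer argument below is passed on the stack, and the 16-bit parameter
carries the signext ABI extension flag, so it hits the 4-byte stack load case:

  // r0-r3 hold a..d, so e arrives on the stack; the frontend marks the
  // 16-bit parameter signext under AAPCS.
  int callee(int a, int b, int c, int d, short e) {
    return a + e;
  }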

Differential Revision: https://reviews.llvm.org/D27803

llvm-svn: 293163
2017-01-26 09:20:47 +00:00
Chandler Carruth 41421df02b [PM] Use PoisoningVH correctly when merely deleting entries in a map
with it.

This code was dereferencing the PoisoningVH, which isn't allowed once it
is poisoned. But the code itself really doesn't need to access the
pointer; it is just doing the safe work of clearing out data structures
keyed on the pointer value.

Change the code to use iterators to erase directly from a DenseMap. This
is also substantially more efficient as it avoids lots of hashing and
lookups to do the erasure. DenseMap supports erasing behind the
iteration, which is fairly easy to implement.
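
A generic sketch of the erase-during-iteration pattern (using
std::unordered_map here in place of llvm::DenseMap; names are made up):

  #include <unordered_map>

  void eraseMatching(std::unordered_map<void *, int> &Map, void *Key) {
    for (auto It = Map.begin(); It != Map.end();) {
      if (It->first == Key)
        It = Map.erase(It); // erase returns the next valid iterator
      else
        ++It;
    }
  }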

Sadly, I don't have a test case here. I'm not even close and I don't
know that I ever will be. The issue is that several of the tricky
aspects of fixing this only show up when you cause the stack's
SmallVector to be in *EXACTLY* the right location. I only ever got
a reproduction for those with Clang, and only with *exactly* the right
command line flags. Any adjustment, even to seemingly unrelated flags,
would make partial and half-way solutions magically start to "work". In
good news, all of this was caught with the LLVM test suite. Also, there
is no *specific* code here that is untested, just that the old pattern
of code won't immediately fail on any test case I've managed to
contrive.

llvm-svn: 293160
2017-01-26 08:31:54 +00:00
Craig Topper bad53cce26 [AVX-512] Move the combine that runs combineBitcastForMaskedOp to the last DAG combine phase where I had originally meant to put it.
llvm-svn: 293157
2017-01-26 07:17:58 +00:00
Craig Topper f0bab7b739 [X86] When bitcasting INSERT_SUBVECTOR/EXTRACT_SUBVECTOR to match masked operations, use the correct type for the immediate operand.
llvm-svn: 293156
2017-01-26 07:17:53 +00:00
Jonas Paulsson 8e2f948ef0 [TargetTransformInfo] Refactor and improve getScalarizationOverhead()
Refactoring to remove duplication of this method.

New method getOperandsScalarizationOverhead() looks at the unique operands
that are present and adds extract costs for them. The old behaviour was to
always add extract costs for one operand of the type, which still happens in
getArithmeticInstrCost() if no operands are provided by the caller.

This is a good start of improving on this, but there are more places
that can be improved by using getOperandsScalarizationOverhead().

Review: Hal Finkel
https://reviews.llvm.org/D29017

llvm-svn: 293155
2017-01-26 07:03:25 +00:00
Craig Topper 001aad7da7 [DAGCombiner] Fold extract_subvector of undef to undef. Fold away inserting undef subvectors.
llvm-svn: 293152
2017-01-26 05:38:46 +00:00
Craig Topper b6122122c9 [X86] Add demanded elts support for the inputs to pclmul intrinsic
This intrinsic uses bit 0 and bit 4 of an immediate argument to determine which bits of its inputs to read. This patch uses this information to simplify the demanded elements of the input vectors.
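
For example (hand-written, not from the patch), only one 64-bit half of each
input is ever read, selected by the immediate:

  #include <wmmintrin.h>

  // imm8 bit 0 selects the low/high qword of a, bit 4 the low/high qword of
  // b; here 0x10 reads the low half of a and the high half of b, so the
  // other halves are not demanded.
  __attribute__((target("pclmul")))
  __m128i clmul_lo_hi(__m128i a, __m128i b) {
    return _mm_clmulepi64_si128(a, b, 0x10);
  }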

Differential Revision: https://reviews.llvm.org/D28979

llvm-svn: 293151
2017-01-26 05:17:13 +00:00
Taewook Oh 0d26a5376c Revert test commit
llvm-svn: 293150
2017-01-26 04:34:25 +00:00
Taewook Oh d3f1ec9962 test commit
llvm-svn: 293148
2017-01-26 04:32:40 +00:00
Chandler Carruth eab3b90a14 [PM] Simplify the new PM interface to the loop unroller and expose two
factory functions for the two modes in which the loop unroller is actually
used in-tree: simplified full unrolling and the entire thing, including
partial unrolling.

I've also wired these up to nice names so you can express both of these
being in a pipeline easily. This is a precursor to actually enabling
these parts of the O2 pipeline.

Differential Revision: https://reviews.llvm.org/D28897

llvm-svn: 293136
2017-01-26 02:13:50 +00:00
Kostya Serebryany 419634bdb8 [libFuzzer] remove a bit of stale code
llvm-svn: 293129
2017-01-26 01:45:54 +00:00
Kostya Serebryany 7856fb36b0 [libFuzzer] further simplify __sanitizer_cov_trace_pc_guard
llvm-svn: 293128
2017-01-26 01:34:58 +00:00
Matt Arsenault 53f0cc238c AMDGPU: Fold fneg into round instructions
llvm-svn: 293127
2017-01-26 01:25:36 +00:00
Kostya Serebryany d0ecb4c69e [libFuzzer] simplify the code for __sanitizer_cov_trace_pc_guard and make sure it is not asan/msan-instrumented
llvm-svn: 293125
2017-01-26 01:04:54 +00:00
Michael Kuperstein 5dd55e8405 [LoopUnroll] Properly update loopinfo for runtime unrolling by 2
Even when we don't create a remainder loop (that is, when we unroll by 2), we
may duplicate nested loops into the remainder. This is complicated by the fact
that the remainder may itself be either inserted into an outer loop or at the top
level. In the latter case, we may need to create new top-level loops.

Differential Revision: https://reviews.llvm.org/D29156

llvm-svn: 293124
2017-01-26 01:04:11 +00:00
Davide Italiano ccbbc8313f [NewGVN] Skip uses in unreachable blocks.
Otherwise we ask for a domtree node that's not there, and we crash.

Differential Revision:  https://reviews.llvm.org/D29145

llvm-svn: 293122
2017-01-26 00:42:42 +00:00
Adam Nemet 916923e689 [llc] Add -pass-remarks-output
This is the opt/llc counterpart of -fsave-optimization-record to output
optimization remarks in a YAML file.

llvm-svn: 293121
2017-01-26 00:39:51 +00:00
Peter Collingbourne 1df6e858ef LowerTypeTests: Ignore external globals with type metadata.
Thanks to Davide Italiano for finding the problem and providing a test case.

llvm-svn: 293119
2017-01-26 00:32:15 +00:00
Kostya Serebryany 7c021afef2 [libFuzzer] don't call GetPreviousInstructionPc on the hot path -- only when dumping the PCs
llvm-svn: 293117
2017-01-26 00:22:08 +00:00
Tim Shen 7117e698bf [APFloat] Fix comments. NFC.
Summary: Fix comments in response to jlebar's comments in D27872.

Reviewers: jlebar

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D29109

llvm-svn: 293116
2017-01-26 00:11:07 +00:00
Justin Lebar 7e3184c412 [ValueTracking] Implement SignBitMustBeZero correctly for sqrt.
Summary:
Previously we assumed that the result of sqrt(x) always had 0 as its
sign bit.  But sqrt(-0) == -0.
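
A small standalone check (not from the patch) showing the case the old
assumption missed:

  #include <cmath>
  #include <cstdio>

  int main() {
    double r = std::sqrt(-0.0);
    // Prints: sqrt(-0.0) = -0, signbit = 1
    std::printf("sqrt(-0.0) = %g, signbit = %d\n", r, (int)std::signbit(r));
    return 0;
  }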

Reviewers: hfinkel, efriedma, sanjoy

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D28928

llvm-svn: 293115
2017-01-26 00:10:26 +00:00
Davide Italiano b3886dd84f [NewGVN] Simplify folding a lambda used only once. NFCI.
llvm-svn: 293112
2017-01-25 23:37:49 +00:00
Adam Nemet a964066705 New OptimizationRemarkEmitter pass for MIR
This allows MIR passes to emit optimization remarks with the same level
of functionality that is available to IR passes.

It also hooks up the greedy register allocator to report spills.  This
allows for interesting use cases like increasing interleaving on a loop
until spilling of registers is observed.

I still need to experiment with whether reporting every spill scales, but this
demonstrates for now that the functionality works from llc
using -pass-remarks*=<pass>.

Differential Revision: https://reviews.llvm.org/D29004

llvm-svn: 293110
2017-01-25 23:20:33 +00:00
Adam Nemet 484f93db30 [OptDiag] Split code region out of DiagnosticInfoOptimizationBase
Code region is the only part of this class that is IR-specific.  Code
region is moved down in the inheritance tree to a new derived class,
called DiagnosticInfoIROptimization.

All the existing remarks are derived from this new class now.

This allows the new MIR pass-remark classes to be derived from
DiagnosticInfoOptimizationBase.

Also because we keep the name DiagnosticInfoOptimizationBase, the clang
parts don't need any adjustment.

Differential Revision: https://reviews.llvm.org/D29003

llvm-svn: 293109
2017-01-25 23:20:25 +00:00
Adrian McCarthy 6b6b8c4fb9 NFC: Rename (PDB) RawSession to NativeSession
This eliminates one overload on the term Raw.

Differential Revision: https://reviews.llvm.org/D29098

llvm-svn: 293104
2017-01-25 22:38:55 +00:00
Daniel Jasper 65144c852d Revert "[PPC] Give unaligned memory access lower cost on processor that supports it"
This reverts commit r292680. It is causing significantly worse
performance and test timeouts in our internal builds. I have already
routed reproduction instructions your way.

llvm-svn: 293092
2017-01-25 21:21:08 +00:00
Zachary Turner 29da5db7a0 [pdb] Correctly parse the hash adjusters table from TPI stream.
This is not a list of pairs; it is a hash table data structure. We now
correctly parse this out and dump it from llvm-pdbdump.

We still need to understand the conditions that lead to a type
getting an entry in the hash adjuster table.  That will be done
in a followup investigation / patch.

Differential Revision: https://reviews.llvm.org/D29090

llvm-svn: 293090
2017-01-25 21:17:40 +00:00
Tim Northover 470f070b7d SDag: fix how initial loads are formed when splitting vector ops.
Later code expects the vector loads produced to be directly
concatenable, which means we shouldn't pad anything except the last load
produced with UNDEF.

llvm-svn: 293088
2017-01-25 20:58:26 +00:00
Tim Northover 9e35f1e21c GlobalISel: rework getOrCreateVReg to avoid double lookup. NFC.
Thanks to Quentin for suggesting the refactoring.

llvm-svn: 293087
2017-01-25 20:58:22 +00:00
Tim Northover 5d27063eb4 DebugInfo: remove unused parameter from function. NFC.
I think it's a hold-over from some previous iteration, but it's never
set to true in LLVM as it exists now.

llvm-svn: 293086
2017-01-25 20:58:07 +00:00
Daniel Berlin d602e04c9e MemorySSA: Link all defs together into an intrusive defslist, to make updater easier
Summary:
This is the first in a series of patches to add a simple, generalized updater to MemorySSA.

For MemorySSA, every def is a may-def, instead of the normal must-def.
(The best way to think of MemorySSA is "everything is really one variable, with different versions of that variable at different points in the program".)
This means that when updating, we end up having to do a bunch of work to touch defs below and above us.

In order to support this quickly, I have ilist'd all the defs for each block.  ilist supports tags, so this is quite easy. The only slightly messy part is that you can't have two iplists for the same type that differ only in whether they have the ownership part enabled or not, because the traits are for the value type.
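
A generic sketch of the idea (hand-written; not MemorySSA's actual ilist
machinery): the link pointers live inside the def itself, so a def can be
spliced into or out of its block's def list in O(1) with no side allocation:

  struct DefNode {
    DefNode *PrevDef = nullptr;
    DefNode *NextDef = nullptr;

    // Link this def into the list immediately after Pos.
    void insertAfter(DefNode *Pos) {
      PrevDef = Pos;
      NextDef = Pos->NextDef;
      if (Pos->NextDef)
        Pos->NextDef->PrevDef = this;
      Pos->NextDef = this;
    }
  };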

The verifiers have been updated to test that the def order is correct.

Reviewers: george.burgess.iv

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D29046

llvm-svn: 293085
2017-01-25 20:56:19 +00:00
Konstantin Zhuravlyov 400771edd6 [AMDGPU] Bump up n_type for metadata v2
Differential Revision: https://reviews.llvm.org/D29115

llvm-svn: 293083
2017-01-25 20:47:17 +00:00
Matt Arsenault 5d9101941f AMDGPU: Set call_convention bit in kernel_code_t
According to the documentation this is supposed to be -1
if indirect calls are not supported.

llvm-svn: 293081
2017-01-25 20:21:57 +00:00
Serge Rogatch bc2d34394d [XRay][AArch64] More staging for tail call support in XRay on AArch64 - in LLVM
Summary:
This patch prepares more for tail call support in XRay. Until the logging part supports tail calls, this is just staging, so it seems the LLVM part is mostly ready with this patch.
Related: https://reviews.llvm.org/D28948 (compiler-rt)

Reviewers: dberris, rengolin

Reviewed By: dberris

Subscribers: llvm-commits, iid_iunknown, aemerson

Differential Revision: https://reviews.llvm.org/D28947

llvm-svn: 293080
2017-01-25 20:21:49 +00:00
Krzysztof Parzyszek ee9aa3ffee Add iterator_range<regclass_iterator> to {Target,MC}RegisterInfo, NFC
llvm-svn: 293077
2017-01-25 19:29:04 +00:00
Chad Rosier 4f724dce42 Revert "Do not verify dominator tree if it has no roots"
This reverts commit r293033, per Danny's comment.  In short, we require
domtrees to have roots at all times.

llvm-svn: 293075
2017-01-25 17:15:48 +00:00
Matthias Braun aeb8e33968 PowerPC: Slight cleanup of getReservedRegs(); NFC
Change getReservedRegs() to not mark a register as reserved and then
revert that decision in some cases. Motivated by the discussion in
https://reviews.llvm.org/D29056

llvm-svn: 293073
2017-01-25 17:12:10 +00:00
Krzysztof Parzyszek 0fd6296b82 Add loop pass insertion point EP_LateLoopOptimizations
Differential Revision: https://reviews.llvm.org/D28694

llvm-svn: 293067
2017-01-25 16:12:25 +00:00
Artur Pilipenko 8fb3d57e67 [Guards] Introduce loop-predication pass
This patch introduces a guard-based loop predication optimization. The new LoopPredication pass tries to convert loop-variant range checks to loop-invariant ones by widening checks across loop iterations. For example, it will convert

  for (i = 0; i < n; i++) {
    guard(i < len);
    ...
  }

to

  for (i = 0; i < n; i++) {
    guard(n - 1 < len);
    ...
  }

After this transformation the condition of the guard is loop invariant, so loop-unswitch can later unswitch the loop on this condition, which basically predicates the loop by the widened condition:

  if (n - 1 < len)
    for (i = 0; i < n; i++) {
      ...
    } 
  else
    deoptimize

This patch relies on an NFC change to make ScalarEvolution::isMonotonicPredicate public (revision 293062).

Reviewed By: sanjoy

Differential Revision: https://reviews.llvm.org/D29034

llvm-svn: 293064
2017-01-25 16:00:44 +00:00
Chad Rosier 072e70b365 [AArch64] Minor code refactoring. NFC.
llvm-svn: 293063
2017-01-25 15:56:59 +00:00
Artur Pilipenko b85f7a5d99 [InstCombine] Canonicalize guards for NOT OR condition
This is a partial fix for Bug 31520 - [guards] canonicalize guards in instcombine

Reviewed By: apilipenko

Differential Revision: https://reviews.llvm.org/D29075

Patch by Maxim Kazantsev.

llvm-svn: 293061
2017-01-25 14:45:12 +00:00
Simon Pilgrim 6f6b279109 [InstCombine][SSE] Add support for PACKSS/PACKUS constant folding
Differential Revision: https://reviews.llvm.org/D28949

llvm-svn: 293060
2017-01-25 14:37:24 +00:00
Martin Bohme 8396e14e7f [ARM] GlobalISel: Fix stack-use-after-scope bug.
Summary:
Lifetime extension wasn't triggered on the result of BuildMI because the
reference was non-const. However, instead of adding a const, I've
removed the reference entirely as RVO should kick in anyway.

Reviewers: rovka, bkramer

Reviewed By: bkramer

Subscribers: aemerson, rengolin, dberris, llvm-commits, kristof.beyls

Differential Revision: https://reviews.llvm.org/D29124

llvm-svn: 293059
2017-01-25 14:28:19 +00:00
Artur Pilipenko 4df4c4a4aa [InstCombine] Canonicalize guards for AND condition
This is a partial fix for Bug 31520 - [guards] canonicalize guards in instcombine
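
A source-level analogue of the canonicalization (hand-written sketch;
guard() below is a stand-in for @llvm.experimental.guard, not the real
intrinsic):

  #include <cstdlib>

  // Stand-in: "deoptimize" (here just exit) when the condition fails.
  static void guard(bool Cond) { if (!Cond) std::exit(1); }

  // Before: one guard on the AND of both conditions.
  void before(bool A, bool B) { guard(A && B); }

  // After: the guard is split, exposing each condition to further folds.
  void after(bool A, bool B) { guard(A); guard(B); }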

Reviewed By: apilipenko

Differential Revision: https://reviews.llvm.org/D29074

Patch by Maxim Kazantsev.

llvm-svn: 293058
2017-01-25 14:20:52 +00:00
Artur Pilipenko e812ca00bb [InstCombine] Allow InstrCombine to remove one of adjacent guards if they are equivalent
This is a partial fix for Bug 31520 - [guards] canonicalize guards in instcombine

Reviewed By: majnemer, apilipenko

Differential Revision: https://reviews.llvm.org/D29071

Patch by Maxim Kazantsev.

llvm-svn: 293056
2017-01-25 14:12:12 +00:00
Alexey Bataev d28ab559a7 [SLP] Improve horizontal vectorization for non-power-of-2 number of
instructions.

If the number of instructions in the horizontal reduction list is not a power
of 2, then only the last PowerOf2Floor(NumberOfInstructions) elements are
actually vectorized; the other instructions remain scalar. The patch tries to
vectorize the remaining elements as well.
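
For instance (hand-written example, not from the patch), a reduction over
seven addends: previously only the last PowerOf2Floor(7) = 4 terms would be
vectorized and the rest stayed scalar; with this change the leftover terms
are also considered for vectorization:

  int sum7(const int *a) {
    return a[0] + a[1] + a[2] + a[3] + a[4] + a[5] + a[6];
  }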

Differential Revision: https://reviews.llvm.org/D28959

llvm-svn: 293042
2017-01-25 09:54:38 +00:00
whitequark 16f1e5f1ca Mark @llvm.powi.* as safe to speculatively execute.
Floating point intrinsics in LLVM are generally not speculatively
executed, since most of them are defined to behave the same as libm
functions, which set errno.

However, the @llvm.powi.* intrinsics do not correspond to any libm
function, and lack any defined error-handling semantics in LangRef.
They most certainly do not alter errno.
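
For reference, clang's __builtin_powi maps to this intrinsic (hand-written
example, not from the patch):

  // Unlike pow(), this has no libm counterpart and never touches errno,
  // so speculating it is safe.
  double cube(double x) {
    return __builtin_powi(x, 3);
  }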

llvm-svn: 293041
2017-01-25 09:32:30 +00:00
Mohammed Agabaria 20caee95e1 [X86] enable memory interleaving for X86\SLM arch.
Differential Revision: https://reviews.llvm.org/D28547

llvm-svn: 293040
2017-01-25 09:14:48 +00:00
Artur Pilipenko bc93452420 Fix buildbot failures introduced by 293036
Fix an unused variable and specify types explicitly to make the VC compiler happy.

llvm-svn: 293039
2017-01-25 09:10:07 +00:00
Artur Pilipenko 41c0005aa3 [DAGCombiner] Match load by bytes idiom and fold it into a single load. Attempt #2.
The previous patch (https://reviews.llvm.org/rL289538) got reverted because of a bug. Chandler also requested some changes to the algorithm.
http://lists.llvm.org/pipermail/llvm-commits/Week-of-Mon-20161212/413479.html

This is an updated patch. The key difference is that collectBitProviders (renamed to calculateByteProvider) now collects the origin of one byte, not the whole value. It simplifies the implementation and allows us to stop the traversal earlier if we know that the result won't be used.

From the original commit:

Match a pattern where a wide scalar value is loaded by several narrow loads and combined by shifts and ors. Fold it into a single load, or a load and a bswap, if the target supports it.

Assuming little endian target:
  i8 *a = ...
  i32 val = a[0] | (a[1] << 8) | (a[2] << 16) | (a[3] << 24)
=>
  i32 val = *((i32)a)

  i8 *a = ...
  i32 val = (a[0] << 24) | (a[1] << 16) | (a[2] << 8) | a[3]
=>
  i32 val = BSWAP(*((i32)a))

This optimization was discussed on llvm-dev some time ago in the "Load combine pass" thread. We came to the conclusion that we want to do this transformation late in the pipeline because, in the presence of atomic loads, load widening is an irreversible transformation and it might hinder other optimizations.

Eventually we'd like to support folding patterns like this where the offset has a variable and a constant part:
  i32 val = a[i] | (a[i + 1] << 8) | (a[i + 2] << 16) | (a[i + 3] << 24)

Matching the pattern above is easier at SelectionDAG level since address reassociation has already happened and the fact that the loads are adjacent is clear. Understanding that these loads are adjacent at IR level would have involved looking through geps/zexts/adds while looking at the addresses.

The general scheme is to match OR expressions by recursively calculating the origin of the individual bytes which constitute the resulting OR value. If all the OR bytes come from memory, verify that they are adjacent and match the little- or big-endian encoding of a wider value. If so, and the load of the wider type (and bswap if needed) is allowed by the target, generate a load and a bswap if needed.

Reviewed By: RKSimon, filcab, chandlerc 

Differential Revision: https://reviews.llvm.org/D27861

llvm-svn: 293036
2017-01-25 08:53:31 +00:00
Diana Picus d83df5d372 [ARM] GlobalISel: Support i1 add and ABI extensions
Add support for:
* i1 add
* i1 function arguments, if passed through registers
* i1 returns, with ABI signext/zeroext

Differential Revision: https://reviews.llvm.org/D27706

llvm-svn: 293035
2017-01-25 08:47:40 +00:00
Diana Picus 8b6c6bedcb [ARM] GlobalISel: Support i8/i16 ABI extensions
At the moment, this means supporting the signext/zeroext attribute on the return
type of the function. For function arguments, signext/zeroext should be handled
by the caller, so there's nothing for us to do until we start lowering calls.

Note that this does not include support for other extensions (i8 to i16), those
will be added later.

Differential Revision: https://reviews.llvm.org/D27705

llvm-svn: 293034
2017-01-25 08:10:40 +00:00
Serge Pavlov 43a7759f4b Do not verify dominator tree if it has no roots
If dominator tree has no roots, the pass that calculates it is
likely to be skipped. It occures, for instance, in the case of
entities with linkage available_externally. Do not run tree
verification in such case.

Differential Revision: https://reviews.llvm.org/D28767

llvm-svn: 293033
2017-01-25 07:58:10 +00:00
Coby Tayree 77807d93af [X86] Enable the use of 'mov' with a 64-bit GPR and a large immediate
Enable the following form (Intel style):
"mov <reg64>, <largeImm>"
which should be available,
where <largeImm> stands for immediates that exceed the range of a signed 32-bit integer

Differential Revision: https://reviews.llvm.org/D28988

llvm-svn: 293030
2017-01-25 07:09:42 +00:00
Diana Picus 1d8eaf4387 [ARM] GlobalISel: Bail out on Thumb. NFC
Thumb is not supported yet, so bail out early.

llvm-svn: 293029
2017-01-25 07:08:53 +00:00
Matt Arsenault 74a576e7d3 AMDGPU: Check nsz instead of unsafe math
llvm-svn: 293028
2017-01-25 06:27:02 +00:00
Akira Hatanaka 4ec7b20ef6 [SimplifyCFG] Do not sink and merge inline-asm instructions.
Conservatively disable sinking and merging inline-asm instructions as doing so
can potentially create arguments that cannot satisfy the inline-asm constraints.

For example, SimplifyCFG used to do the following transformation:

(before)
if.then:
  %0 = call i32 asm "rorl $2, $0", "=&r,0,n"(i32 %r6, i32 8)
  br label %if.end
if.else:
  %1 = call i32 asm "rorl $2, $0", "=&r,0,n"(i32 %r6, i32 6)
  br label %if.end

(after)
  %.sink = select i1 %tobool, i32 6, i32 8
  %0 = call i32 asm "rorl $2, $0", "=&r,0,n"(i32 %r6, i32 %.sink)

This would result in a crash in the backend since only immediate integer operands
are permitted for constraint "n".

rdar://problem/30110806

Differential Revision: https://reviews.llvm.org/D29111

llvm-svn: 293025
2017-01-25 06:21:51 +00:00
Matt Arsenault 732a531506 DAG: Recognize no-signed-zeros-fp-math attribute
clang already emits this with -cl-no-signed-zeros, but codegen
doesn't do anything with it. Treat it like the other fast math
attributes, and change one place to use it.

llvm-svn: 293024
2017-01-25 06:08:42 +00:00
Justin Bogner 4844573eb1 GlobalISel: Fix typo in error message
llvm-svn: 293023
2017-01-25 06:02:10 +00:00
Matt Arsenault 8a27aee6ae DAGCombiner: Allow negating ConstantFP after legalize
llvm-svn: 293019
2017-01-25 04:54:34 +00:00
NAKAMURA Takumi 28dc4d5122 Rewind instantiations of OuterAnalysisManagerProxy in r289317, r291651, and r291662.
I found that the root class should be instantiated for the variadic template to instantiate the static member explicitly.

This will fix failures in the mingw DLL build.

llvm-svn: 293017
2017-01-25 04:26:29 +00:00
Matt Arsenault 9f5e0ef0c5 AMDGPU: Implement early ifcvt target hooks.
Leave early ifcvt disabled for now since there are some
shader-db regressions.

This causes some immediate improvements, but could be better.
The cost checking that the pass does is based on critical path
length for out-of-order CPUs, which we do not want, so it skips
many cases we do want.

llvm-svn: 293016
2017-01-25 04:25:02 +00:00
Ahmed Bougacha eb185e1f64 Try to prevent build breakage by touching a CMakeLists.txt.
Looks like our cmake goop for handling .inc->td dependencies doesn't
track the .td files.

This manifests as cmake complaining about missing files since r293009.

Force a rerun to avoid that.

llvm-svn: 293012
2017-01-25 02:55:24 +00:00
Chandler Carruth ce40fa13ce [PM] Teach LoopUnroll to update the LPM infrastructure as it unrolls
loops.

We do this by reconstructing the newly added loops after the unroll
completes to avoid threading pass manager details through all the mess
of the unrolling infrastructure.

I've enabled some extra assertions in the LPM to try and catch issues
here and enabled a bunch of unroller tests to try and make sure this is
sane.

Currently, I'm manually running loop-simplify when needed. That should
go away once it is folded into the LPM infrastructure.

Differential Revision: https://reviews.llvm.org/D28848

llvm-svn: 293011
2017-01-25 02:49:01 +00:00
Ahmed Bougacha 05a5f7dc0b [GlobalISel] Generate selector for more integer binop patterns.
This surprisingly isn't NFC because there are patterns to select GPR
sub to SUBSWrr (rather than SUBWrr/rs); SUBS is later optimized to
SUB if NZCV is dead.  From ISel's perspective, both are fine.

llvm-svn: 293010
2017-01-25 02:41:38 +00:00
Gor Nishanov df3d71a7a9 [coroutines] Spill the result of the invoke instruction correctly
Summary:
When we decide that the result of the invoke instruction needs to be spilled, we need to insert the spill into a block that is on the normal edge coming out of the invoke instruction. (Prior to this change, the code would insert the spill immediately after the invoke instruction, which breaks the IR, since invoke is a terminator instruction.)

In the following example, we will split the edge going into %cont and insert the spill there.

```
  %r = invoke double @print(double 0.0) to label %cont unwind label %pad

  cont:
    %0 = call i8 @llvm.coro.suspend(token none, i1 false)
    switch i8 %0, label %suspend [i8 0, label %resume
                                  i8 1, label %cleanup]
  resume:
    call double @print(double %r)
```

Reviewers: majnemer

Reviewed By: majnemer

Subscribers: mehdi_amini, llvm-commits, EricWF

Differential Revision: https://reviews.llvm.org/D29102

llvm-svn: 293006
2017-01-25 02:25:54 +00:00
Tom Stellard 2f3f9855f0 AMDGPU add support for spilling to a user sgpr pointed buffers
Summary:
This lets you select which sort of spilling you want, either s[0:1] or 64-bit loads from s[0:1].

Patch By: Dave Airlie

Reviewers: nhaehnle, arsenm, tstellarAMD

Reviewed By: arsenm

Subscribers: mareko, llvm-commits, kzhuravl, wdng, yaxunl, tony-tye

Differential Revision: https://reviews.llvm.org/D25428

llvm-svn: 293000
2017-01-25 01:25:13 +00:00
Eugene Zelenko 11f6907f40 [AArch64] Fix some Clang-tidy modernize and Include What You Use warnings; other minor fixes (NFC).
llvm-svn: 292996
2017-01-25 00:29:26 +00:00
Justin Bogner a029531e10 GlobalISel: Use the correct types when translating landingpad instructions
There was a bug here where we were using p0 instead of s32 for the
selector type in the landingpad. Instead of hardcoding these types we
should get the types from the landingpad instruction directly.

Note that we replicate an assert from SDAG here to only support
two-valued landingpads.

llvm-svn: 292995
2017-01-25 00:16:53 +00:00
Kevin Enderby 7a165755ba Fix llvm-objdump so it picks a good CPU for Mach-O files
for CPU_SUBTYPE_ARM_V7S and CPU_SUBTYPE_ARM_V7K.

These two cpusubtypes should default to a cortex-a7 CPU
to give proper disassembly without a -mcpu= flag.

rdar://27431703

llvm-svn: 292993
2017-01-24 23:41:04 +00:00
Eugene Zelenko 8c6ed0f3a0 [XCore] Fix some Clang-tidy modernize and Include What You Use warnings; other minor fixes (NFC).
llvm-svn: 292988
2017-01-24 23:02:48 +00:00
Matt Arsenault bf67cf7e4b AMDGPU: Remove spurious out branches after a kill
The sequence like this:
  v_cmpx_le_f32_e32 vcc, 0, v0
  s_branch BB0_30
  s_cbranch_execnz BB0_30
  ; BB#29:
  exp null off, off, off, off done vm
  s_endpgm
  BB0_30:
  ; %endif110

is likely wrong. The s_branch instruction will unconditionally jump
to BB0_30 and the skip block (exp done + endpgm) inserted for
performing the kill instruction will never be executed. This results
in a GPU hang with Star Ruler 2.

The s_branch instruction is added during the "Control Flow Optimizer"
pass which seems to re-organize the basic blocks, and we assume
that SI_KILL_TERMINATOR is always the last instruction inside a
basic block. Thus, after inserting a skip block we just go to the
next BB without looking at the subsequent instructions after the
kill, and the s_branch op is never removed.

Instead, we should remove the unconditional out branches and
skip the two instructions if the exec mask is non-zero.

This patch fixes the GPU hang and doesn't introduce any regressions
with "make check".

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=99019

Patch by Samuel Pitoiset <samuel.pitoiset@gmail.com>

llvm-svn: 292985
2017-01-24 22:18:39 +00:00
Wei Mi f1cf0278e8 Revert rL292621. Caused some internal build bot failures in apple.
llvm-svn: 292984
2017-01-24 22:15:06 +00:00
Eugene Zelenko 3943d2b0d7 [SystemZ] Fix some Clang-tidy modernize and Include What You Use warnings; other minor fixes (NFC).
llvm-svn: 292983
2017-01-24 22:10:43 +00:00
Matt Arsenault 7aad8fd8f4 Enable FeatureFlatForGlobal on Volcanic Islands
This switches to the workaround that HSA defaults to
for the mesa path.

This should be applied to the 4.0 branch.

Patch by Vedran Miletić <vedran@miletic.net>

llvm-svn: 292982
2017-01-24 22:02:15 +00:00
Dehao Chen a5eb1689dc Explicitly promote indirect calls before sample profile annotation.
Summary: In iterative sample PGO, where the profile is collected from a PGOed binary, we may see indirect call targets promoted and inlined in the profile. Before profile annotation, we need to make this happen in order to annotate correctly on the IR. This patch explicitly promotes these indirect calls and inlines them before profile annotation.
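
A source-level picture of the promotion (hand-written sketch; names are made
up): the indirect call through fp is rewritten into a guarded direct call to
the hot target recorded in the profile, which the inliner can then handle
before annotation:

  int hot_target(int x) { return x + 1; }

  int call(int (*fp)(int), int x) {
    if (fp == &hot_target)
      return hot_target(x); // promoted direct call, now inlinable
    return fp(x);           // fallback indirect call
  }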

Reviewers: xur, davidxl

Reviewed By: davidxl

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D29040

llvm-svn: 292979
2017-01-24 21:05:51 +00:00
Saleem Abdulrasool 85824ee618 Demangle: correct demangling for CV-qualified functions
When demangling a CV-qualified function type with a final reference type
parameter, we would treat the reference type parameter as an r-value ref
accidentally.  This would result in the improper decoration of the
function type itself.
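
For example (hand-written, not from the patch), a const member function whose
final parameter is an lvalue reference produces this shape of symbol:

  // Mangles to _ZNK1S1fERi, which should demangle to "S::f(int&) const";
  // before the fix the trailing reference parameter could be treated as an
  // r-value reference.
  struct S {
    void f(int &) const;
  };
  void S::f(int &) const {}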

Resolves PR31741!

llvm-svn: 292976
2017-01-24 20:04:58 +00:00
Saleem Abdulrasool 25ee0a62ac Demangle: use named values for CV qualifiers
Rather than hard-coding magic values of 1, 2, 4 (bit-field), use an enum
to name the values.  NFC.

llvm-svn: 292975
2017-01-24 20:04:56 +00:00
Daniel Berlin 390dfde0f3 Remove the load hoisting code of MLSM, it is completely subsumed by GVNHoist
Summary:
GVNHoist performs all the optimizations that MLSM does to loads, in a
more general way, and in a faster time bound (MLSM is N^3 in most
cases, N^4 in a few edge cases).

This disables the load portion.
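
As a hand-written illustration (not from the patch) of the load case both
passes handle, a diamond where the same load appears in both arms can have
the load hoisted into the head of the diamond:

  int diamond(int *p, bool c) {
    int v;
    if (c)
      v = *p + 1;
    else
      v = *p - 1;
    return v;
  }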

Note that the way ld_hoist_st_sink.ll is written makes one think that
the loads should be moved to the while.preheader block, but

1. Neither MLSM nor GVNHoist do it (they both move them to identical places).

2. MLSM couldn't possibly do it anyway, as the while.preheader block
is not the head of the diamond, while.body is.  (GVNHoist could do it
if it was legal).

3. At a glance, it's not legal anyway because the in-loop load
conflicts with the in-loop store, so the loads must stay in-loop.

I am happy to update the test to use update_test_checks so that
checking is tighter, just was going to do it as a followup.

Note that I can find no particular benefit to the store portion on any
real testcase/benchmark I have (even size-wise).  If we really still
want it, I am happy to commit to writing a targeted store sinker, just
taking the code from the MemorySSA port of MergedLoadStoreMotion
(which is N^2 worst case, and N most of the time).

We can do what it does in a much better time bound.

We also should be both hoisting and sinking stores, not just sinking
them, anyway, since whether we should hoist or sink to merge depends
basically on luck of the draw of where the blockers are placed.

Nonetheless, i have left it alone for now.

Reviewers: chandlerc, davide

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D29079

llvm-svn: 292971
2017-01-24 19:55:36 +00:00
Changpeng Fang c85abbd955 AMDGPU/SI: Give up in promote alloca when a pointer may be captured.
Differential Revision:
  http://reviews.llvm.org/D28970

Reviewer:
  Matt

llvm-svn: 292966
2017-01-24 19:06:28 +00:00
Saleem Abdulrasool c38cd326fc Demangle: avoid butchering parameter type
When demangling a CV-qualified function type with a final parameter with
a reference type, we would insert the CV qualification on the parameter
rather than the function, and in the process adjust the insertion point
by one extra, splitting the type name.  This avoids doing so, even
though the attribution is still incorrect.

llvm-svn: 292965
2017-01-24 18:52:19 +00:00
Chad Rosier 8e11fbd15d [AArch64] Fix typo. NFC.
llvm-svn: 292959
2017-01-24 18:08:10 +00:00
Amaury Sechet d90f5f6698 Use InstCombine's builder in foldSelectCttzCtlz instead of creating a new one.
Summary: As per title. This will add the instructions we are interested in to the worklist.

Reviewers: mehdi_amini, majnemer, andreadb

Differential Revision: https://reviews.llvm.org/D29081

llvm-svn: 292957
2017-01-24 17:48:25 +00:00
Stanislav Mekhanoshin 22a56f2f5a [AMDGPU] Add VGPR copies post regalloc fix pass
Regalloc creates COPY instructions which do not formally use VALU.
That results in v_mov instructions being placed after the exec mask modification.
One pass which does this is SIOptimizeExecMasking, but potentially it can be
done by other passes too.

This patch adds a pass immediately after regalloc to add implicit exec
use operand to all VGPR copy instructions.

Differential Revision: https://reviews.llvm.org/D28874

llvm-svn: 292956
2017-01-24 17:46:17 +00:00
Evandro Menezes 7784cacd91 [AArch64] Rename 'no-quad-ldst-pairs' to 'slow-paired-128'
In order to follow the pattern of the existing 'slow-misaligned-128store'
option, rename the option 'no-quad-ldst-pairs' to 'slow-paired-128'.

llvm-svn: 292954
2017-01-24 17:34:31 +00:00