Commit Graph

42253 Commits

Author SHA1 Message Date
Ahmed Bougacha 05a5f7dc0b [GlobalISel] Generate selector for more integer binop patterns.
This surprisingly isn't NFC because there are patterns to select GPR
sub to SUBSWrr (rather than SUBWrr/rs); SUBS is later optimized to
SUB if NZCV is dead.  From ISel's perspective, both are fine.

llvm-svn: 293010
2017-01-25 02:41:38 +00:00
Gor Nishanov df3d71a7a9 [coroutines] Spill the result of the invoke instruction correctly
Summary:
When we decide that the result of the invoke instruction needs to be spilled, we need to insert the spill into a block that is on the normal edge coming out of the invoke instruction. (Prior to this change, the code would insert the spill immediately after the invoke instruction, which breaks the IR, since invoke is a terminator instruction.)

In the following example, we will split the edge going into %cont and insert the spill there.

```
  %r = invoke double @print(double 0.0) to label %cont unwind label %pad

  cont:
    %0 = call i8 @llvm.coro.suspend(token none, i1 false)
    switch i8 %0, label %suspend [i8 0, label %resume
                                  i8 1, label %cleanup]
  resume:
    call double @print(double %r)
```
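
A minimal sketch of the IR after the edge split; the block name cont.spill and the frame slot %r.spill.addr are illustrative only, not names produced by the patch:

```
  %r = invoke double @print(double 0.0) to label %cont.spill unwind label %pad

  cont.spill:                                ; split normal edge; spill goes here
    store double %r, double* %r.spill.addr   ; store into the coroutine frame
    br label %cont

  cont:
    %0 = call i8 @llvm.coro.suspend(token none, i1 false)
    ...
```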

Reviewers: majnemer

Reviewed By: majnemer

Subscribers: mehdi_amini, llvm-commits, EricWF

Differential Revision: https://reviews.llvm.org/D29102

llvm-svn: 293006
2017-01-25 02:25:54 +00:00
Justin Bogner a029531e10 GlobalISel: Use the correct types when translating landingpad instructions
There was a bug here where we were using p0 instead of s32 for the
selector type in the landingpad. Instead of hardcoding these types we
should get the types from the landingpad instruction directly.

Note that we replicate an assert from SDAG here to only support
two-valued landingpads.
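
For reference, a typical two-valued landingpad looks like the sketch below; the selector is the i32 second element, which is why its low-level type should be s32 rather than p0:

```
  %lp  = landingpad { i8*, i32 }
           cleanup
  %exn = extractvalue { i8*, i32 } %lp, 0   ; exception pointer -> p0
  %sel = extractvalue { i8*, i32 } %lp, 1   ; selector value    -> s32
```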

llvm-svn: 292995
2017-01-25 00:16:53 +00:00
Kevin Enderby 7a165755ba Fix llvm-objdump so it picks a good default CPU for Mach-O files
with CPU_SUBTYPE_ARM_V7S and CPU_SUBTYPE_ARM_V7K.

These two CPU subtypes should default to a cortex-a7 CPU
to give proper disassembly without a -mcpu= flag.

rdar://27431703

llvm-svn: 292993
2017-01-24 23:41:04 +00:00
Matt Arsenault bf67cf7e4b AMDGPU: Remove spurious out branches after a kill
A sequence like this:
  v_cmpx_le_f32_e32 vcc, 0, v0
  s_branch BB0_30
  s_cbranch_execnz BB0_30
  ; BB#29:
  exp null off, off, off, off done vm
  s_endpgm
  BB0_30:
  ; %endif110

is likely wrong. The s_branch instruction will unconditionally jump
to BB0_30 and the skip block (exp done + endpgm) inserted for
performing the kill instruction will never be executed. This results
in a GPU hang with Star Ruler 2.

The s_branch instruction is added during the "Control Flow Optimizer"
pass which seems to re-organize the basic blocks, and we assume
that SI_KILL_TERMINATOR is always the last instruction inside a
basic block. Thus, after inserting a skip block we just go to the
next BB without looking at the subsequent instructions after the
kill, and the s_branch op is never removed.

Instead, we should remove the unconditional out branches and let the
conditional branch skip the two instructions if the exec mask is non-zero.

This patch fixes the GPU hang and doesn't introduce any regressions
with "make check".

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=99019

Patch by Samuel Pitoiset <samuel.pitoiset@gmail.com>

llvm-svn: 292985
2017-01-24 22:18:39 +00:00
Wei Mi f1cf0278e8 Revert rL292621. Caused some internal build bot failures at Apple.
llvm-svn: 292984
2017-01-24 22:15:06 +00:00
Matt Arsenault 7aad8fd8f4 Enable FeatureFlatForGlobal on Volcanic Islands
This switches to the workaround that HSA defaults to
for the mesa path.

This should be applied to the 4.0 branch.

Patch by Vedran Miletić <vedran@miletic.net>

llvm-svn: 292982
2017-01-24 22:02:15 +00:00
Dehao Chen a5eb1689dc Explicitly promote indirect calls before sample profile annotation.
Summary: In iterative sample PGO, where the profile is collected from a PGOed binary, we may see indirect call targets promoted and inlined in the profile. Before profile annotation, we need to make the same promotion and inlining happen in order to annotate the IR correctly. This patch explicitly promotes these indirect calls and inlines them before profile annotation.
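
A rough sketch of the promotion in IR, with made-up names (@hot_target standing in for a target recorded in the profile):

```
  ; Before: an indirect call whose promoted/inlined target appears in the profile.
  ;   %r = call i32 %fptr(i32 %a)
  ;
  ; After explicit promotion, the call is guarded by a target check so the
  ; direct call can be inlined and then annotated:
  %is.hot = icmp eq i32 (i32)* %fptr, @hot_target
  br i1 %is.hot, label %direct, label %indirect

direct:
  %r.direct = call i32 @hot_target(i32 %a)    ; direct call, now inlinable
  br label %merge

indirect:
  %r.indirect = call i32 %fptr(i32 %a)        ; cold fallback stays indirect
  br label %merge

merge:
  %r = phi i32 [ %r.direct, %direct ], [ %r.indirect, %indirect ]
```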

Reviewers: xur, davidxl

Reviewed By: davidxl

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D29040

llvm-svn: 292979
2017-01-24 21:05:51 +00:00
Ivan Krasin 34e89ad0a4 Revert [AMDGPU][mc][tests][NFC] Add coverage/smoke tests for Gfx7 and Gfx8.
Reason: broke ASAN bots with a global buffer overflow.
http://lab.llvm.org:8011/builders/sanitizer-x86_64-linux-fast/builds/2291

Each test contains 20-30K test cases but takes only several (from 4 to 10)
seconds to complete on an average machine. The tests cover the majority of
AMDGPU Gfx7/Gfx8 instructions, including many dark corners, and are intended
to quickly find out if something is broken.

llvm-svn: 292974
2017-01-24 19:58:59 +00:00
Daniel Berlin 390dfde0f3 Remove the load hoisting code of MLSM, it is completely subsumed by GVNHoist
Summary:
GVNHoist performs all the optimizations that MLSM does to loads, in a
more general way, and in a faster time bound (MLSM is N^3 in most
cases, N^4 in a few edge cases).

This disables the load portion.

Note that the way ld_hoist_st_sink.ll is written makes one think that
the loads should be moved to the while.preheader block, but

1. Neither MLSM nor GVNHoist do it (they both move them to identical places).

2. MLSM couldn't possibly do it anyway, as the while.preheader block
is not the head of the diamond, while.body is.  (GVNHoist could do it
if it was legal).

3. At a glance, it's not legal anyway because the in-loop load
conflicts with the in-loop store, so the loads must stay in-loop.

I am happy to update the test to use update_test_checks so that
checking is tighter; I was just going to do it as a followup.

Note that I can find no particular benefit to the store portion on any
real testcase/benchmark I have (even size-wise).  If we really still
want it, I am happy to commit to writing a targeted store sinker, just
taking the code from the MemorySSA port of MergedLoadStoreMotion
(which is N^2 worst case, and N most of the time).

We can do what it does in a much better time bound.

We should also be both hoisting and sinking stores, not just sinking
them, anyway, since whether we should hoist or sink to merge depends
basically on the luck of the draw of where the blockers are placed.

Nonetheless, I have left it alone for now.

Reviewers: chandlerc, davide

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D29079

llvm-svn: 292971
2017-01-24 19:55:36 +00:00
Changpeng Fang c85abbd955 AMDGPU/SI: Give up in promote alloca when a pointer may be captured.
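A minimal sketch of the kind of alloca the pass must now skip (function names are illustrative):

```
declare void @may_capture(i32*)

define void @promote_blocked() {
  %buf = alloca [4 x i32]
  %p = getelementptr [4 x i32], [4 x i32]* %buf, i32 0, i32 0
  call void @may_capture(i32* %p)   ; the pointer may be captured here,
  ret void                          ; so promoting %buf is not safe
}
```
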
Differential Revision:
  http://reviews.llvm.org/D28970

Reviewer:
  Matt

llvm-svn: 292966
2017-01-24 19:06:28 +00:00
Stanislav Mekhanoshin 22a56f2f5a [AMDGPU] Add VGPR copies post regalloc fix pass
Regalloc creates COPY instructions which do not formally use the VALU.
That can result in v_mov instructions being placed after an exec mask modification.
One pass which does this is SIOptimizeExecMasking, but potentially it can be
done by other passes too.

This patch adds a pass immediately after regalloc to add implicit exec
use operand to all VGPR copy instructions.

Differential Revision: https://reviews.llvm.org/D28874

llvm-svn: 292956
2017-01-24 17:46:17 +00:00
Evandro Menezes 7784cacd91 [AArch64] Rename 'no-quad-ldst-pairs' to 'slow-paired-128'
In order to follow the pattern of the existing 'slow-misaligned-128store'
option, rename the option 'no-quad-ldst-pairs' to 'slow-paired-128'.

llvm-svn: 292954
2017-01-24 17:34:31 +00:00
Sanjay Patel 562272536a [InstSimplify] try to eliminate icmp Pred (add nsw X, C1), C2
I was surprised to see that we're missing icmp folds based on 'add nsw' in InstCombine, 
but we should handle the InstSimplify cases first because that could make the InstCombine
code simpler.

Here are Alive-based proofs for the logic:

Name: add_neg_constant
Pre: C1 < 0 && (C2 > ((1<<(width(C1)-1)) + C1))
%a = add nsw i7 %x, C1
%b = icmp sgt %a, C2
  =>
%b = false

Name: add_pos_constant
Pre: C1 > 0 && (C2 < ((1<<(width(C1)-1)) + C1 - 1))
%a = add nsw i6 %x, C1
%b = icmp slt %a, C2
  =>
%b = false

Name: nuw
Pre: C1 u>= C2
%a = add nuw i11 %x, C1
%b = icmp ult %a, C2
  =>
%b = false
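
A concrete instance of the first proof, with constants chosen purely for illustration:

```
define i1 @add_neg_constant_example(i8 %x) {
  ; C1 = -20 < 0 and C2 = 120 > (1 << 7) + C1 = 108, so the precondition holds.
  ; With nsw, %a is at most 127 + (-20) = 107, so %a sgt 120 is always false.
  %a = add nsw i8 %x, -20
  %b = icmp sgt i8 %a, 120
  ret i1 %b                          ; InstSimplify can fold this to false
}
```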

Differential Revision: https://reviews.llvm.org/D29053

llvm-svn: 292952
2017-01-24 17:03:24 +00:00
Simon Pilgrim 5bdb95c078 [X86][AVX2] Regenerate test.
llvm-svn: 292950
2017-01-24 16:58:22 +00:00
Reid Kleckner 11cf053bd1 [CodeView] Fix off-by-one error in def range gap emission
Also fixes a much worse bug where we emitted the wrong gap size for the
def range uncovered by the test for this issue.

Fixes PR31726.

llvm-svn: 292949
2017-01-24 16:57:55 +00:00
Simon Pilgrim 1b80a14685 [X86][AVX2] Removed FIXME comment and regenerated test.
The comment talked about replacing vpmovzxwd+vpslld+vpsrad with vpmovsxwd, which isn't valid as we're sign-extending an <8 x i1> bool vector, not an all-bits/no-bits <8 x i16>.

llvm-svn: 292948
2017-01-24 16:56:23 +00:00
Simon Pilgrim fa6afd1bc6 [X86][AVX2] Cleaned up test triple and regenerated tests.
llvm-svn: 292946
2017-01-24 16:53:09 +00:00
Geoff Berry 92a286ae5a [SelectionDAG] Handle inverted conditions when splitting into multiple branches.
Summary:
When conditional branches with complex conditions are split into
multiple branches in SelectionDAGBuilder::FindMergedConditions, also
handle inverted conditions.  These may sometimes appear without having
been optimized by InstCombine when CodeGenPrepare decides to sink and
duplicate cmp instructions, causing them to have only one use.  This
problem can be made worse by, e.g., GVNHoist hiding more cmps from
InstCombine by combining equivalent cmps from different blocks.

For example, codegen X & !(Y | Z) as:
    jmp_if_X TmpBB
    jmp FBB
  TmpBB:
    jmp_if_notY Tmp2BB
    jmp FBB
  Tmp2BB:
    jmp_if_notZ TBB
    jmp FBB
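
In IR terms, the condition above corresponds to something like this sketch (value names are illustrative):

```
  %or   = or i1 %Y, %Z
  %not  = xor i1 %or, true          ; !(Y | Z)
  %cond = and i1 %X, %not           ; X & !(Y | Z)
  br i1 %cond, label %TBB, label %FBB
```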

Reviewers: bogner, MatzeB, qcolombet

Subscribers: llvm-commits, hiraditya, mcrosier, sebpop

Differential Revision: https://reviews.llvm.org/D28380

llvm-svn: 292944
2017-01-24 16:36:07 +00:00
Chandler Carruth 6acdca78a0 [PH] Replace uses of AssertingVH from members of analysis results with
a lazy-asserting PoisoningVH.

AssertingVH is fundamentally incompatible with cache-invalidation of
analysis results. The invalidation happens after the AssertingVH has
already fired. Instead, use a PoisoningVH that will assert if the
dangling handle is ever used, rather than merely being assigned or
destroyed.

This patch also removes all of the (numerous) doomed attempts to work
around this fundamental incompatibility. It is a pretty significant
simplification IMO.

The most interesting change is in the Inliner where we still do some
clearing because we don't want to rely on the coarse grained
invalidation strategy of the containing pass manager. However, I prefer
the approach that contains this logic to the cleanup phase of the
Inliner, and I think we could enhance the CGSCC analysis management
layer to make this even better in the future if desired.

The rest is straight cleanup.

I've also added a test for one of the harder cases to work around: when
a *module analysis* contains many AssertingVHes pointing at functions.

Differential Revision: https://reviews.llvm.org/D29006

llvm-svn: 292928
2017-01-24 12:55:57 +00:00
Artem Tamazov 819da50d12 [AMDGPU][mc][tests][NFC] Add coverage/smoke tests for Gfx7 and Gfx8.
Each test contains 20-30K test cases but takes only several (from 4 to 10)
seconds to complete on an average machine. The tests cover the majority of
AMDGPU Gfx7/Gfx8 instructions, including many dark corners, and are intended
to quickly find out if something is broken.

llvm-svn: 292922
2017-01-24 12:22:01 +00:00
Simon Pilgrim e1ec9072f6 [X86][SSE] Add support for constant folding vector arithmetic shift by immediates
llvm-svn: 292919
2017-01-24 11:46:13 +00:00
Simon Pilgrim 6340e54861 [X86][SSE] Add support for constant folding vector logical shift by immediates
llvm-svn: 292915
2017-01-24 11:21:57 +00:00
Alexey Bataev 992ac2d5c2 [SLP] Additional test for checking that instruction with extra args is
not reconstructed.

llvm-svn: 292911
2017-01-24 10:44:00 +00:00
Serge Pavlov 098ee2fe02 Update domtree incrementally in loop peeling.
With this change, the dominator tree remains in sync after each step of loop
peeling.

Differential Revision: https://reviews.llvm.org/D29029

llvm-svn: 292895
2017-01-24 06:58:39 +00:00
Wei Ding ee21a36f8a AMDGPU: Add trap handler support.
llvm-svn: 292893
2017-01-24 06:41:21 +00:00
Chandler Carruth d7e0e6b514 [PM] Further fixes to the test case in r292863.
This should hopefully fix the remaining MSVC failures.

llvm-svn: 292887
2017-01-24 05:30:41 +00:00
Craig Topper ff272ad4f3 [SelectionDAG] Teach getNode to simplify a couple easy cases of EXTRACT_SUBVECTOR
Summary:
This teaches getNode to simplify extracting from Undef. This is similar to what is done for EXTRACT_VECTOR_ELT. It also adds support for extracting from CONCAT_VECTOR when we can reuse one of the inputs to the concat. These seem like simple non-target specific optimizations.

For X86 we currently handle undef in extractSubvector, but not all EXTRACT_SUBVECTOR creations go through there.

Ultimately, my motivation here is to simplify extractSubvector and remove custom lowering for EXTRACT_SUBVECTOR since we don't do anything but handle undef and BUILD_VECTOR optimizations, but those should be DAG combines.

Reviewers: RKSimon, delena

Reviewed By: RKSimon

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D29000

llvm-svn: 292876
2017-01-24 02:36:59 +00:00
Davide Italiano ea2dc02668 [PM] Try to make all three compilers happy when it comes to pretty printing.
Modeled after a similar change from Michael Kuperstein. Let's hope this
sticks together.

llvm-svn: 292872
2017-01-24 01:45:53 +00:00
Matthias Braun b901d33461 LiveIntervalAnalysis: Calculate liveness even if a superreg is reserved.
A register unit may be allocatable and non-reserved, but some of the
registers (or register tuples) built from it may be reserved. We still need to calculate
liveness in this case.

Note to out-of-tree targets: If you start seeing machine verifier errors
with this commit, it probably means that you do not properly mark super
registers of reserved registers as reserved. See for example r292836 or
r292870 for how to fix that.

rdar://29996737

Differential Revision: https://reviews.llvm.org/D28881

llvm-svn: 292871
2017-01-24 01:12:58 +00:00
Davide Italiano 9a6d49f451 [LTO] Add test to show we don't support ThinLTO yet.
llvm-svn: 292865
2017-01-24 00:59:00 +00:00
Davide Italiano 0dd200e04d [LTO] Teach lib/LTO about the new pass manager.
Differential Revision:  https://reviews.llvm.org/D28997

llvm-svn: 292864
2017-01-24 00:58:24 +00:00
Davide Italiano 089a912365 [PM] Flesh out the new pass manager LTO pipeline.
Differential Revision:  https://reviews.llvm.org/D28996

llvm-svn: 292863
2017-01-24 00:57:39 +00:00
Kostya Serebryany 4b2ff07c11 [sanitizer-coverage] emit __sanitizer_cov_trace_pc_guard w/o a preceding 'if' by default. Update the docs, also add deprecation notes around other parts of sanitizer coverage
llvm-svn: 292862
2017-01-24 00:57:31 +00:00
Matt Arsenault 954a624fb9 SimplifyLibCalls: Replace more unary libcalls with intrinsics
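Which libcalls are covered isn't stated here; as a generic sketch of this kind of rewrite (exp2 chosen only as an example):

```
declare double @exp2(double)

define double @use(double %x) {
  ; A recognized unary libcall...
  %r = call double @exp2(double %x)
  ; ...may be replaced with the corresponding intrinsic:
  ;   %r = call double @llvm.exp2.f64(double %x)
  ret double %r
}
```
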
llvm-svn: 292855
2017-01-23 23:55:08 +00:00
Michael Kuperstein 461aa57ad3 [LoopUnroll] First form LCSSA, then loop-simplify
Running non-LCSSA-preserving LoopSimplify followed by LCSSA on (roughly) the
same loop is incorrect, since LoopSimplify may break LCSSA arbitrarily higher
in the loop nest. Instead, run LCSSA first, and then run LCSSA-preserving
LoopSimplify on the result.
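
For reference, LCSSA form inserts a PHI node in each exit block for every loop-defined value used outside the loop; a minimal sketch:

```
define i32 @lcssa_form(i32 %n) {
entry:
  br label %loop

loop:
  %iv = phi i32 [ 0, %entry ], [ %iv.next, %loop ]
  %iv.next = add nsw i32 %iv, 1
  %cmp = icmp slt i32 %iv.next, %n
  br i1 %cmp, label %loop, label %exit

exit:
  ; LCSSA phi: the only out-of-loop use of %iv.next goes through it
  %iv.lcssa = phi i32 [ %iv.next, %loop ]
  ret i32 %iv.lcssa
}
```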

This fixes PR31718.

Differential Revision: https://reviews.llvm.org/D29055

llvm-svn: 292854
2017-01-23 23:45:42 +00:00
Matt Arsenault 3aef809384 AMDGPU: Custom lower more vector operations
This avoids stack usage.

llvm-svn: 292846
2017-01-23 23:09:58 +00:00
Matt Arsenault 4e305c6c1e DAG: Don't fold vector extract into load if target doesn't want to
Fixes turning a 32-bit scalar load into an extending vector load
for AMDGPU when dynamically indexing a vector.

llvm-svn: 292842
2017-01-23 22:48:53 +00:00
Sanjay Patel ce9d6faed6 [InstSimplify] add tests to show missing folds from 'icmp (add nsw)'; NFC
llvm-svn: 292841
2017-01-23 22:42:55 +00:00
Tim Shen fd1e5aa8df [APFloat] Switch from (PPCDoubleDoubleImpl, IEEEdouble) layout to (IEEEdouble, IEEEdouble)
Summary:
This patch changes the layout of DoubleAPFloat, and adjusts all
operations to do either:
1) (IEEEdouble, IEEEdouble) -> (uint64_t, uint64_t) -> PPCDoubleDoubleImpl,
   then run the old algorithm.
2) Do the right thing directly.

1) includes multiply, divide, remainder, mod, fusedMultiplyAdd, roundToIntegral,
   convertFromString, next, convertToInteger, convertFromAPInt,
   convertFromSignExtendedInteger, convertFromZeroExtendedInteger,
   convertToHexString, toString, getExactInverse.
2) includes makeZero, makeLargest, makeSmallest, makeSmallestNormalized,
   compare, bitwiseIsEqual, bitcastToAPInt, isDenormal, isSmallest,
   isLargest, isInteger, ilogb, scalbn, frexp, hash_value, Profile.

I could split this into two patches, e.g. use
1) for all operations first, then incrementally change some of them to
2). I didn't do that, because 1) involves code that converts data between
PPCDoubleDoubleImpl and (IEEEdouble, IEEEdouble) back and forth, and may
pessimize the compiler. Instead, I find easy functions and use
approach 2) for them directly.

The next step is to move multiply and divide from 1) to 2). I don't
have plans for other functions in 1).

Differential Revision: https://reviews.llvm.org/D27872

llvm-svn: 292839
2017-01-23 22:39:35 +00:00
Matt Arsenault a6867fd441 AMDGPU: Combine fp16/fp64 subtarget features
The same control register controls both, and both are set to
the same defaults. Keep the old names around as aliases.

llvm-svn: 292837
2017-01-23 22:31:03 +00:00
Kevin Enderby c3a035d86f Add support for x86_thread_state32_t and,
in llvm-objdump for Mach-O files, add printing of the
x86_thread_state32_t in the same format as
otool-classic(1) on Darwin.

To do this, the 32-bit x86 general thread state
needed to be defined in include/llvm/Support/MachO.h.

rdar://30110111

llvm-svn: 292829
2017-01-23 21:13:29 +00:00
Ahmed Bougacha b6137063eb [AArch64][GlobalISel] Legalize narrow scalar fp->int conversions.
Since we're now avoiding operations using narrow scalar integer types,
we have to legalize the integer side of the FP conversions.

This requires teaching the legalizer how to do that.

llvm-svn: 292828
2017-01-23 21:10:14 +00:00
Ahmed Bougacha cfb384d39d [AArch64][GlobalISel] Legalize narrow scalar ops again.
Since r279760, we've been marking as legal operations on narrow integer
types that have wider legal equivalents (for instance, G_ADD s8).
Compared to legalizing these operations, this reduced the amount of
extends/truncates required, but was always a weird legalization decision
made at selection time.

So far, we haven't been able to formalize it in a way that permits the
selector generated from SelectionDAG patterns to be sufficient.

Using a wide instruction (say, s64), when a narrower instruction exists
(s32) would introduce register class incompatibilities (when one narrow
generic instruction is selected to the wider variant, but another is
selected to the narrower variant).

It's also impractical to limit which narrow operations are matched for
which instruction, as restricting "narrow selection" to ranges of types
clashes with potentially incompatible instruction predicates.

Concerns were also raised regarding  MIPS64's sign-extended register
assumptions, as well as wrapping behavior.
See discussions in https://reviews.llvm.org/D26878.

Instead, legalize the operations.

Should we ever revert to selecting these narrow operations, we should
try to represent this more accurately: for instance, by separating
a "concrete" type on operations, and an "underlying" type on vregs, we
could move the "this narrow-looking op is really legal" decision to the
legalizer, and let the selector use the "underlying" vreg type only,
which would be guaranteed to map to a register class.

In any case, we eventually should mitigate:
- the performance impact by selecting no-op extract/truncates to COPYs
  (which we currently do), and the COPYs to register reuses (which we
  don't do yet).
- the compile-time impact by optimizing away extract/truncate sequences
  in the legalizer.

llvm-svn: 292827
2017-01-23 21:10:05 +00:00
Steven Wu 69af145767 Attempt to fix the testcase in r292824
Try to fix the testcase from r292824 (failing on some bots) by reducing it to the
minimum. If this fix doesn't work, I will revert this test.

llvm-svn: 292826
2017-01-23 20:42:17 +00:00
Javed Absar 00cce41752 [ARM] Classification Improvements to ARM Sched-Models. NFCI.
This is a series of patches to make adding machine sched
models for ARM processors easier and more compact. They define new
sched-readwrites for groups of ARM instructions. This has been
missing so far, and as a consequence, machine scheduler models
for individual sub-targets have tended to be larger than they
needed to be. 

The current patch focuses on floating-point instructions.

Reviewers: Diana Picus (rovka), Renato Golin (rengolin)

Differential Revision: https://reviews.llvm.org/D28194

llvm-svn: 292825
2017-01-23 20:20:39 +00:00
Steven Wu 5b54a42c0f Add LC_BUILD_VERSION load command
Summary:
Add a new load command, LC_BUILD_VERSION. It is a generic version of
the LC_*_VERSION_MIN load commands used on Apple platforms. Instead of having
a separate load command for each platform, LC_BUILD_VERSION records the
platform as an enum. It also records the SDK version, min_os, and the tools
used to build the binary.

rdar://problem/29781291

Reviewers: enderby

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D29044

llvm-svn: 292824
2017-01-23 20:07:55 +00:00
Alexey Bataev 95d176242b [SLP] Additional test with extra args in horizontal reductions.
llvm-svn: 292821
2017-01-23 19:28:23 +00:00
Matt Arsenault 7030661364 DAG: Allow legalization of fcanonicalize vector types
llvm-svn: 292814
2017-01-23 18:52:26 +00:00
Piotr Padlewski 4040d6918e [MemorySSA] Add new tests for invariant.groups
Summary:
Next round of extra tests for MSSA.
I have a prototype invariant.group handling implementation
that fixes all the FIXMEs, and I think it will be
easier to see the difference if I post this first
and then fix the FIXMEs.

Reviewers: george.burgess.iv, dberlin

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D29022

llvm-svn: 292797
2017-01-23 16:38:10 +00:00