Commit Graph

92895 Commits

Tim Northover 75ad077330 GlobalISel: implement Legalization querying framework.
This adds an (incomplete, inefficient) framework for deciding what to do with
some operation on a given type.

llvm-svn: 276184
2016-07-20 21:13:29 +00:00
Ahmed Bougacha a0cdd79070 [AArch64][FastISel] Select -O0 legal cmpxchg.
At -O0, cmpxchg survives AtomicExpand: it's mostly straightforward
to select it in fast-isel, and let the pseudo be expanded later.

extractvalues on the result are the tricky part: the generic logic
only works for legal types (and it would be painful to make it
support illegal types), so we can only support i32/i64 cmpxchg.

llvm-svn: 276183
2016-07-20 21:12:32 +00:00
Ahmed Bougacha b0674d1143 [AArch64][FastISel] Select atomic stores into STLR.
llvm-svn: 276182
2016-07-20 21:12:27 +00:00
David Majnemer bd21012c6c [GVNHoist] Don't hoist PHI nodes
We hoisted PHIs without respecting their special insertion point in the
block, leading to verifier errors.

This fixes PR28626.

llvm-svn: 276181
2016-07-20 21:05:01 +00:00
Davide Italiano 15ff2d6d0c [SCCP] Zap multiple return values.
We can replace the return values with undef if we replaced all
the call uses with a constant/undef.

Differential Revision:  https://reviews.llvm.org/D22336

llvm-svn: 276174
2016-07-20 20:17:13 +00:00
Justin Lebar a272c12b73 [LSV] Don't move stores across may-load instrs, and loosen restrictions on moving loads.
Summary:
Previously we wouldn't move loads/stores across instructions that had
side-effects, where that was defined as may-write or may-throw.  But
this is not sufficiently restrictive: Stores can't safely be moved
across instructions that may load.

This patch also adds a DEBUG check that all instructions in our chain
are either loads or stores.
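
To make the hazard concrete, here is a small standalone C++ illustration (not
part of the patch; all names are made up) of why a store must not be reordered
past an instruction that may read the same memory:

```
// Standalone illustration (not LSV code): reordering a store past a may-load
// instruction changes the value the load observes when the two may alias.
#include <cstdio>

int main() {
  int a[2] = {0, 0};
  int *p = &a[0], *q = &a[0]; // p and q may alias; here they do
  *p = 1;                     // store
  int loaded = *q;            // may-load: observes the store, loaded == 1
  // If the store were sunk below the load, 'loaded' would be 0 instead,
  // which is why stores may not be moved across may-load instructions.
  std::printf("%d\n", loaded);
  return 0;
}
```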

Reviewers: asbirlea

Subscribers: llvm-commits, jholewinski, arsenm, mzolotukhin

Differential Revision: https://reviews.llvm.org/D22547

llvm-svn: 276171
2016-07-20 20:07:37 +00:00
Justin Lebar 62b03e344e [LSV] Vectorize up to side-effecting instructions.
Summary:
Previously if we had a chain that contained a side-effecting
instruction, we wouldn't vectorize it at all.  Now we'll vectorize
everything that comes before the side-effecting instruction.

Reviewers: asbirlea

Subscribers: arsenm, jholewinski, llvm-commits, mzolotukhin

Differential Revision: https://reviews.llvm.org/D22536

llvm-svn: 276170
2016-07-20 20:07:34 +00:00
George Burgess IV 400ae40348 [MSSA] Add an overload for getClobberingMemoryAccess.
A seemingly common use for the walker's getClobberingMemoryAccess
function is:

```
MemoryAccess *getClobber(MemorySSAWalker *W, MemoryUseOrDef *MUD) {
  const Instruction *I = MUD->getMemoryInst();
  return W->getClobberingMemoryAccess(I);
}
```

This is somewhat redundant, since walkers will ultimately query MSSA to
find out which MemoryAccess `I` maps to (...which is always `MUD`).

So, this patch adds an overload of getClobberingMemoryAccess that
accepts MemoryAccesses directly. As a result, the Instruction overload
of getClobberingMemoryAccess becomes a lightweight wrapper around our
new overload.

Additionally, this patch un`virtual`izes the Instruction overload of
getClobberingMemoryAccess, since there doesn't seem to be a walker that
benefits from that being virtual, and I can't think of how else one
would implement it. Happy to make it virtual again if we would benefit
from doing so.
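
A rough sketch of the relationship described above (member and helper names
are assumptions, not the actual implementation): the Instruction overload now
just maps `I` to its MemoryAccess and forwards to the new overload.

```
// Hedged sketch, not the real code: the Instruction overload becomes a thin
// wrapper around the new MemoryAccess-based overload.
MemoryAccess *MemorySSAWalker::getClobberingMemoryAccess(const Instruction *I) {
  MemoryAccess *MA = MSSA->getMemoryAccess(I); // assumed lookup; yields I's MemoryUseOrDef
  return getClobberingMemoryAccess(MA);        // new overload does the real work
}
```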

llvm-svn: 276169
2016-07-20 19:51:34 +00:00
Tim Northover 62ae568bbb GlobalISel: implement low-level type with just size & vector lanes.
This should be all the low-level instruction selection needs to determine how
to implement an operation, with the remaining context taken from the opcode
(e.g. G_ADD vs G_FADD) or other flags not based on type (e.g. fast-math).
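
As a rough illustration (a toy model, not LLVM's actual LLT class), such a
type only needs to carry a per-lane bit width and a lane count:

```
// Toy model of the idea: a low-level type that records only the bit width of
// each lane and the number of vector lanes.
#include <cassert>

struct LowLevelType {
  unsigned ScalarSizeInBits;
  unsigned NumElements; // 1 means scalar

  bool isVector() const { return NumElements > 1; }
  unsigned getSizeInBits() const { return ScalarSizeInBits * NumElements; }
};

int main() {
  LowLevelType S32{32, 1};   // a plain 32-bit value
  LowLevelType V4S16{16, 4}; // four 16-bit lanes
  // Whether an operation is integer or FP comes from the opcode (G_ADD vs
  // G_FADD), not from the type, so the type needs no further detail.
  assert(!S32.isVector());
  assert(V4S16.isVector() && V4S16.getSizeInBits() == 64);
  return 0;
}
```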

llvm-svn: 276158
2016-07-20 19:09:30 +00:00
Alina Sbirlea b86aa17b06 Properly ifdef the use of cpuid.
llvm-svn: 276156
2016-07-20 18:54:26 +00:00
Artem Belevich 74158b5061 [NVPTX] deal with all aggregate return types.
Fixes a crash in llvm_unreachable when a function has an array return type.

Differential Revision: https://reviews.llvm.org/D22524

llvm-svn: 276154
2016-07-20 18:39:52 +00:00
Artem Belevich b2e76a5e7a [NVPTX] Improve lowering of byval args of device functions.
Avoid unnecessary spills of byval arguments of device functions to
local space at the SASS level, and the pointer conversion to generic
address space that follows. Instead, make a local copy in IR, provide
a way to access the arguments directly, and let LLVM optimize the copy
away when possible.

Differential Revision: https://reviews.llvm.org/D21421

llvm-svn: 276153
2016-07-20 18:39:47 +00:00
Alina Sbirlea 33588b14a7 [cpu-detection] Cleanup of Host.cpp.
Summary:
Mirroring most of the cleanup changes from compiler-rt/lib/builtins/cpu_model.
x86 methods are still returning a bool.

Reviewers: llvm-commits, echristo, craig.topper, sanjoy

Subscribers: mehdi_amini

Differential Revision: https://reviews.llvm.org/D22480

llvm-svn: 276149
2016-07-20 18:15:29 +00:00
Sanjay Patel 683170bf56 move decomposeBitTestICmp() to Transforms/Utils; NFC
As noted in https://reviews.llvm.org/D22537 , we can use this functionality in 
visitSelectInstWithICmp() and InstSimplify, but currently we have duplicated
code.

llvm-svn: 276140
2016-07-20 17:18:45 +00:00
Wei Mi db80c0c77f Use ValueOffsetPair to enhance value reuse during SCEV expansion.
In D12090, the ExprValueMap was added to reuse existing values during SCEV expansion.
However, constant folding and sext/zext distribution can still make the reuse difficult.

A simplified case is: suppose we know S1 expands to V1 in ExprValueMap, and
  S1 = S2 + C_a
  S3 = S2 + C_b
where C_a and C_b are different SCEVConstants. Then we'd like to expand S3 as
V1 - C_a + C_b instead of expanding S2 literally. This is helpful when S2 is a
complex SCEV expression that has no entry in ExprValueMap, which is usually caused
by the fact that S3 is generated from S1 after constant folding.

In order to do that, we represent ExprValueMap as a mapping from SCEV to
ValueOffsetPair. We will save both S1->{V1, 0} and S2->{V1, C_a} into the
ExprValueMap when we create SCEV for V1. When S3 is expanded, it will first
expand S2 to V1 - C_a because of S2->{V1, C_a} in the map, then expand S3 to
V1 - C_a + C_b.
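
A standalone arithmetic sketch of the reuse (plain C++, not SCEV expander
code; the variable names just mirror the notation above):

```
// Once S1 = S2 + Ca has been expanded to V1, the map entry S2 -> {V1, Ca}
// lets S3 = S2 + Cb be emitted as (V1 - Ca) + Cb without re-expanding S2.
#include <cstdio>

int main() {
  long Ca = 5, Cb = 7;
  long S2 = 100;            // stands in for a complex expression
  long V1 = S2 + Ca;        // expansion of S1; record S1->{V1,0}, S2->{V1,Ca}
  long S3 = (V1 - Ca) + Cb; // expansion of S3 reuses V1 via the stored offset
  std::printf("S3 = %ld, expected %ld\n", S3, S2 + Cb);
  return 0;
}
```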

Differential Revision: https://reviews.llvm.org/D21313

llvm-svn: 276136
2016-07-20 16:40:33 +00:00
Sanjay Patel be53c65fab fix documentation comments; NFC
llvm-svn: 276135
2016-07-20 16:30:55 +00:00
Yaxun Liu 4b1d9f7f18 AMDGPU: Fix bug causing crash due to invalid opencl version metadata.
Differential Revision: https://reviews.llvm.org/D22526

llvm-svn: 276119
2016-07-20 14:38:06 +00:00
Benjamin Kramer b4d64cf27d Revert "[InstCombine] Enable cast-folding in logic(cast(icmp), cast(icmp))"
Makes InstCombine infloop when compiling v8.

This reverts commit r275989 and r276105.

llvm-svn: 276106
2016-07-20 11:40:16 +00:00
Simon Pilgrim 1b4f511aaa [X86][SSE] Add cost model values for CTPOP of vectors
This patch adds costs for the vectorized implementations of CTPOP; the default values were seriously underestimating the cost of these and were encouraging vectorization on targets where serialized use of POPCNT would be much better.

Differential Revision: https://reviews.llvm.org/D22456

llvm-svn: 276104
2016-07-20 10:41:28 +00:00
Diana Picus f345d40ae2 [ARM] Skip inline asm memory operands in DAGToDAGISel
Retry r275776 (no changes, we suspect the issue was with another commit).

The current logic for handling inline asm operands in DAGToDAGISel interprets
the operands by looking for constants, which should represent the flags
describing the kind of operand we're dealing with (immediate, memory, register
def etc). The operands representing actual data are skipped only if they are
non-const, with the exception of immediate operands which are skipped explicitly
when a flag describing an immediate is found.

The oversight is that memory operands may be const too (e.g. for device drivers
reading a fixed address), so we should explicitly skip the operand following a
flag describing a memory operand. If we don't, we risk interpreting that
constant as a flag, which is definitely not intended.
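
A toy model of the parsing hazard (purely illustrative; not the DAGToDAGISel
code): the operand stream alternates flag and data words, so a constant data
word following a memory flag must be skipped explicitly or it would be
misread as the next flag.

```
// Toy model, not the actual ISel code: [flag][data] pairs, where the data
// word after a memory flag must be skipped even when it is a constant.
#include <cstddef>
#include <cstdio>
#include <vector>

enum Kind { Reg = 1, Imm = 2, Mem = 3 };

int main() {
  // Mem flag with a constant address (e.g. a fixed device register), then a
  // Reg flag with its data operand.
  std::vector<unsigned> Ops = {Mem, 0x1000, Reg, 7};
  for (std::size_t i = 0; i < Ops.size(); i += 2) {
    unsigned Flag = Ops[i];
    unsigned Data = Ops[i + 1]; // always skip the data word, const or not
    std::printf("flag %u, data %u\n", Flag, Data);
  }
  return 0;
}
```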

Fixes PR26038

Differential Revision: https://reviews.llvm.org/D22103

llvm-svn: 276101
2016-07-20 09:48:24 +00:00
Craig Topper 09b99c3a75 [AVX512] Add a missing NoVLX to give priority to the AVX512 version of the pattern.
llvm-svn: 276088
2016-07-20 05:05:50 +00:00
Craig Topper 7a95428f7c [X86] Use 'HasAVX1Only' to properly give priority to the AVX2 version without relying on file ordering.
llvm-svn: 276087
2016-07-20 05:05:48 +00:00
Craig Topper cde436a663 [X86] Create some multiclasses to reduce the repeated patterns for VEXTRACT(F/I)128/VINSERT(I/F)128. NFC
llvm-svn: 276086
2016-07-20 05:05:46 +00:00
Craig Topper 55913ead3d [X86] Create some wrapper multiclasses to create AVX and SSE shift instructions with less repeated code. NFC
llvm-svn: 276085
2016-07-20 05:05:44 +00:00
David Majnemer 5d26127752 Revert "Disable this-return argument forwarding on ARM/AArch64"
Inference of the 'returned' attribute was fixed in r276008, lets try
turning the backend support back on.

This reverts commit r275677.

llvm-svn: 276081
2016-07-20 04:13:01 +00:00
Adam Nemet 67c8929a2c [LV] Add hotness attribute to missed-optimization remarks
The new OptimizationRemarkEmitter analysis pass is hooked up to both new
and old PM passes.

llvm-svn: 276080
2016-07-20 04:03:43 +00:00
Michael Zolotukhin 6bc56d552a Revert "Revert r275883 and r275891. They seem to cause PR28608."
This reverts commit r276064, and thus reapplies r275891 and r275883 with
a fix for PR28608.

llvm-svn: 276077
2016-07-20 01:55:27 +00:00
Justin Lebar 6114b37838 [LSV] Don't assume that loads/stores appear in address order in the BB.
Summary:
getVectorizablePrefix previously didn't work properly in the face of
aliasing loads/stores.  It unwittingly assumed that the loads/stores
appeared in the BB in address order.  If they didn't, it would do the
wrong thing.

Reviewers: asbirlea, tstellarAMD

Subscribers: arsenm, llvm-commits, mzolotukhin

Differential Revision: https://reviews.llvm.org/D22535

llvm-svn: 276072
2016-07-20 00:55:12 +00:00
Matthias Braun 5b9722d6c7 Revert "RegScavenging: Add scavengeRegisterBackwards()"
Reverting this commit for now as it seems to be causing failures on
test-suite tests on the clang-ppc64le-linux-lnt bot.

This reverts commit r276044.

llvm-svn: 276068
2016-07-20 00:21:32 +00:00
Kyle Butt d2b886e569 Codegen: Tail Duplication: Only duplicate into layout pred if it is a CFG Pred.
Add a check that the layout predecessor of a block is an actual CFG
predecessor of the block as well. No current code fails this check, but
upcoming patches can trigger this, and it makes sense to separate it
out.

llvm-svn: 276066
2016-07-20 00:01:51 +00:00
Sean Silva 554efb28d2 Revert r275883 and r275891. They seem to cause PR28608.
Revert "[LoopSimplify] Update LCSSA after separating nested loops."

This reverts commit r275891.

Revert "[LCSSA] Post-process PHI-nodes created by SSAUpdate when constructing LCSSA form."

This reverts commit r275883.

llvm-svn: 276064
2016-07-19 23:54:29 +00:00
Sean Silva e3c18a5ae8 [PM] Port LoopUnroll.
We just always set PreserveLCSSA to true, since we don't have an
analogous method `mustPreserveAnalysisID(LCSSA)`.

Also port LoopInfo verifier pass to test LoopUnrollPass.

llvm-svn: 276063
2016-07-19 23:54:23 +00:00
Kyle Butt 9e52c064c2 Codegen: Factor out canTailDuplicate
canTailDuplicate accepts two blocks and returns true if the first can be
duplicated into the second successfully. Use this function to
encapsulate the heuristic.

llvm-svn: 276062
2016-07-19 23:54:21 +00:00
Justin Lebar ea9598b1f4 Get rid of call to StringRef::substr that's never used.
Summary: substr doesn't modify the string, so this line has no effect.

Reviewers: majnemer

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D22540

llvm-svn: 276057
2016-07-19 23:19:22 +00:00
Justin Lebar 8778c62629 [LSV] Insert stores at the right point.
Summary:
Previously, the insertion point for stores was the last instruction in
Chain *before calling getVectorizablePrefixEndIdx*.  Thus if
getVectorizablePrefixEndIdx didn't return Chain.size(), we would still
insert at the last instruction in Chain.

This patch changes our internal API a bit in an attempt to make it less
prone to this sort of error.  As a result, we end up recalculating the
Chain's boundary instructions, but I think worrying about the speed hit
of this is a premature optimization right now.

Reviewers: asbirlea, tstellarAMD

Subscribers: mzolotukhin, arsenm, llvm-commits

Differential Revision: https://reviews.llvm.org/D22534

llvm-svn: 276056
2016-07-19 23:19:20 +00:00
Justin Lebar 2cf2c22870 [LSV] Use make_range, and reformat a DEBUG message. NFC
Summary:
The DEBUG message was hard to read because two Values were being printed
on the same line with only the delimiter "aliases".  This change makes
us print each Value on its own line.

Reviewers: asbirlea

Subscribers: llvm-commits, arsenm, mzolotukhin

Differential Revision: https://reviews.llvm.org/D22533

llvm-svn: 276055
2016-07-19 23:19:18 +00:00
Justin Lebar 4ee8a2d024 [LSV] Nix two global (ish) variables in the LoadStoreVectorizer. NFC
Reviewers: asbirlea

Subscribers: mzolotukhin, llvm-commits, arsenm

Differential Revision: https://reviews.llvm.org/D22532

llvm-svn: 276054
2016-07-19 23:19:16 +00:00
Kostya Serebryany 0ccf06f467 [libFuzzer] extend the messages printed by afl_driver
llvm-svn: 276052
2016-07-19 23:18:28 +00:00
Matt Arsenault a1fe17c9ad AMDGPU: Change fdiv lowering based on !fpmath metadata
If 2.5 ulp is acceptable, denormals are not required, and the operation
isn't a reciprocal (which will already be handled), replace it
with a faster fdiv.

Simplify the lowering tests by using per-function
subtarget features.

llvm-svn: 276051
2016-07-19 23:16:53 +00:00
Daniel Berlin 1986030b62 Fix unused variable
llvm-svn: 276050
2016-07-19 23:08:08 +00:00
Paul Robinson 2d23c029f7 Make GVN Hoisting obey optnone/bisect.
Differential Revision: http://reviews.llvm.org/D22545

llvm-svn: 276048
2016-07-19 22:57:14 +00:00
Daniel Berlin 5c46b943db Make MemorySSA::dominates/locallydominates constant time
Summary: Make MemorySSA::dominates/locallydominates constant time

Reviewers: george.burgess.iv, gberry

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D22527

llvm-svn: 276046
2016-07-19 22:49:43 +00:00
Chandler Carruth 2aff750cb8 Add AIX support to Path.inc, Host.h, and CMake.
Patch by Andrew Paprocki!

Differential Revision: https://reviews.llvm.org/D18359

llvm-svn: 276045
2016-07-19 22:46:39 +00:00
Matthias Braun 84fd4bee6c RegScavenging: Add scavengeRegisterBackwards()
This is a variant of scavengeRegister() that works for
enterBasicBlockEnd()/backward(). The benefit of the backward mode is
that it is not affected by incomplete kill flags.

This patch also changes
PrologEpilogInserter::doScavengeFrameVirtualRegs() to use the register
scavenger in backwards mode.

Differential Revision: http://reviews.llvm.org/D21885

llvm-svn: 276044
2016-07-19 22:37:09 +00:00
Matthias Braun 4cb68e1048 RegisterScavenger: Introduce backward() mode.
This adds two pieces:
- RegisterScavenger::enterBasicBlockEnd(), which behaves similarly to
  enterBasicBlock() but starts tracking at the end of the basic block.
- A RegisterScavenger::backward() method. It is subtly different
  from the existing unprocess() method which only considers uses with
  the kill flag set: If a value is dead at the end of a basic block with
  a last use inside the basic block, unprocess() will fail to mark it as
  live. However we cannot change/fix this behaviour because unprocess()
  needs to perform the exact reverse operation of forward().

Differential Revision: http://reviews.llvm.org/D21873

llvm-svn: 276043
2016-07-19 22:37:02 +00:00
Evandro Menezes 238fa76574 [AArch64] Properly validate the reciprocal estimation.
Add a check for legal data types when expanding into a Newton series.

Differential Revision: https://reviews.llvm.org/D22267

llvm-svn: 276041
2016-07-19 22:31:11 +00:00
Sanjay Patel 2d477e59e8 [InstCombine] fold add(zext(xor X, C), C) --> sext X when C is INT_MIN in the source type
The pattern may look more obviously like a sext if written as:

  define i32 @g(i16 %x) {
    %zext = zext i16 %x to i32
    %xor = xor i32 %zext, 32768
    %add = add i32 %xor, -32768
    ret i32 %add
  }

We already have that fold in visitAdd().
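
A quick standalone check of the arithmetic identity behind the fold (not part
of the patch): for every i16 value x, (zext(x) ^ 0x8000) + (-32768) equals
sext(x).

```
// Exhaustively verify the identity used by the fold over all i16 values.
#include <cassert>
#include <cstdint>

int main() {
  for (int32_t i = INT16_MIN; i <= INT16_MAX; ++i) {
    int16_t x = static_cast<int16_t>(i);
    uint32_t zext = static_cast<uint16_t>(x);                      // zext i16 -> i32
    int32_t folded = static_cast<int32_t>(zext ^ 0x8000u) - 32768; // xor, then add
    assert(folded == static_cast<int32_t>(x));                     // sext i16 -> i32
  }
  return 0;
}
```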

Differential Revision: https://reviews.llvm.org/D22477

llvm-svn: 276035
2016-07-19 22:09:34 +00:00
George Burgess IV 22a0f1a0b9 Attempt to appease MSVC buildbots.
Broken by r276026.

llvm-svn: 276032
2016-07-19 21:35:47 +00:00
Davide Italiano 63e5968033 [AMDGPU] Remove spurious line (should've been removed in r276029).
llvm-svn: 276030
2016-07-19 21:16:30 +00:00
Davide Italiano 1576e38598 [AMDGPU] Remove dead code.
LGTM'd by Matt Arsenault.

llvm-svn: 276029
2016-07-19 21:10:49 +00:00