Commit Graph

41975 Commits

Author SHA1 Message Date
Daniel Jasper bf56ad36cb Revert "[GlobalISel] track predecessor mapping during switch lowering."
This reverts commit r291973.

The test fails in a Release build with LLVM_BUILD_GLOBAL_ISEL enabled.
AFAICT, llc segfaults. I'll add a few more details to the original
commit.

llvm-svn: 292061
2017-01-15 09:41:49 +00:00
Chandler Carruth 0952750fae [PM] Clean up the testing for IVUsers, especially with the new PM.
First, I've moved a test of IVUsers from the LSR tree to a dedicated
IVUsers test directory. I've also simplified its RUN line now that the
new pass manager's loop PM is providing analyses on their own.

No functionality changed, but it makes subsequent changes cleaner.

llvm-svn: 292060
2017-01-15 09:29:27 +00:00
Chandler Carruth 1ae34c35ba [PM] Teach the optimization remarks emitter to handle invalidation
events.

This pass sometimes has a pointer to BlockFrequencyInfo, so it needs
custom invalidation logic. It is also otherwise immutable, so we can
substantially reduce the number of invalidations that happen.

llvm-svn: 292058
2017-01-15 08:20:50 +00:00
Chandler Carruth 2f19a324cb [PM] The assumption cache is fundamentally designed to be self-updating;
mark it as never invalidated in the new PM.

The old PM already required this to work, and after a discussion with
Hal this seems to really be the only sensible answer. The cache
gracefully degrades as the IR is mutated, and most things which do this
should already be incrementally updating the cache.

This gets rid of a bunch of logic preserving and testing the
invalidation of this analysis.

llvm-svn: 292039
2017-01-15 00:26:18 +00:00
Chandler Carruth 5edfd4d99e [PM] Fix instcombine's analysis preservation in the new pass manager to
cover domtree and alias analysis. These are the pretty clear analyses
that we would always want to survive this pass.

To make these survive, we also need to preserve the assumption cache.

Added a test that verifies the important bits of this preservation.

llvm-svn: 292037
2017-01-14 23:25:22 +00:00
Sanjay Patel f8ed0a5e93 [InstCombine] add test to show missed vector fold; NFC
llvm-svn: 292035
2017-01-14 23:12:29 +00:00
Simon Pilgrim d419b73a42 [CostModel][X86] Updated vXi64 ASHR costs on AVX512 targets now that D28604 has landed
llvm-svn: 292023
2017-01-14 19:24:23 +00:00
Simon Pilgrim 8e5ecf8ad1 [X86][XOP] Added support for VPMADCSWD 'extend+hadd' IFMA patterns
VPMADCSWD acts as VPADDD( VPMADDWD( x, y ), z ) - multiply+extend+hadd, then add to a v4i32 accumulator
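
A hedged IR sketch of that multiply+extend+hadd+accumulate shape (a reconstruction for illustration, with hypothetical value names; not the patch's actual test):

define <4 x i32> @vpmadcswd_sketch(<8 x i16> %x, <8 x i16> %y, <4 x i32> %z) {
  %xe   = sext <8 x i16> %x to <8 x i32>
  %ye   = sext <8 x i16> %y to <8 x i32>
  %mul  = mul <8 x i32> %xe, %ye
  %even = shufflevector <8 x i32> %mul, <8 x i32> undef, <4 x i32> <i32 0, i32 2, i32 4, i32 6>
  %odd  = shufflevector <8 x i32> %mul, <8 x i32> undef, <4 x i32> <i32 1, i32 3, i32 5, i32 7>
  %hadd = add <4 x i32> %even, %odd     ; the VPMADDWD part: widen, multiply, horizontally add adjacent pairs
  %res  = add <4 x i32> %hadd, %z       ; the VPADDD part: accumulate into the v4i32 operand
  ret <4 x i32> %res
}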

llvm-svn: 292021
2017-01-14 18:52:13 +00:00
Simon Pilgrim b290805e94 [X86][XOP] Added support for VPMACSDQH/VPMACSDQL 'extension' IFMA patterns
VPMACSDQH/VPMACSDQL act as VPADDQ( VPMULDQ( x, y ), z ) - multiply-extending either the odd or the even v4i32 input elements and adding to a v2i64 accumulator
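
A hedged sketch of the even-element (VPMACSDQL) form, with hypothetical value names; the VPMACSDQH form would take the odd elements (<i32 1, i32 3>) instead:

define <2 x i64> @vpmacsdql_sketch(<4 x i32> %x, <4 x i32> %y, <2 x i64> %z) {
  %xlo = shufflevector <4 x i32> %x, <4 x i32> undef, <2 x i32> <i32 0, i32 2>
  %ylo = shufflevector <4 x i32> %y, <4 x i32> undef, <2 x i32> <i32 0, i32 2>
  %xse = sext <2 x i32> %xlo to <2 x i64>
  %yse = sext <2 x i32> %ylo to <2 x i64>
  %mul = mul <2 x i64> %xse, %yse       ; the VPMULDQ part: sign-extending multiply of the even lanes
  %res = add <2 x i64> %mul, %z         ; the VPADDQ part: add into the v2i64 accumulator
  ret <2 x i64> %res
}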

llvm-svn: 292020
2017-01-14 18:08:54 +00:00
Simon Pilgrim a1631749f8 [X86][XOP] Added support for VPMACSWW/VPMACSDD 'lossy' IFMA patterns
VPMACSWW/VPMACSDD act as add( mul( x, y ), z ) - ignoring any upper bits from both the multiply and add stages
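A minimal hedged IR sketch of the 'lossy' shape (hypothetical values, not the patch's test):

define <8 x i16> @vpmacsww_sketch(<8 x i16> %x, <8 x i16> %y, <8 x i16> %z) {
  %mul = mul <8 x i16> %x, %y           ; any upper bits of the products are discarded
  %res = add <8 x i16> %mul, %z         ; wrapping add, so overflow bits are ignored here too
  ret <8 x i16> %res
}

VPMACSDD would be the analogous mul/add pair on <4 x i32>.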

llvm-svn: 292019
2017-01-14 17:13:52 +00:00
Simon Pilgrim 3a1a800128 [X86][XOP] Add tests for integer fused multiply add
Tests showing missed opportunities to use XOP's integer fma instructions

Some of these are pretty awkward to match as they often have implicit sext/trunc stages, but many just ignore overflow bits, which makes things pretty straightforward.

llvm-svn: 292017
2017-01-14 13:07:22 +00:00
Craig Topper 63e2cd6caa [AVX-512] Teach two address instruction pass to replace masked move instructions with blendm instructions when it's beneficial.
Isel now selects masked move instructions for vselect instead of blendm. But sometimes it is beneficial for register allocation to remove the tied register constraint by using blendm instructions.

This also picks up cases where the masked move was created due to a masked load intrinsic.

Differential Revision: https://reviews.llvm.org/D28454

llvm-svn: 292005
2017-01-14 07:50:52 +00:00
Craig Topper 09b7e0f01d [AVX-512] Replace V_SET0 in AVX-512 patterns with AVX512_128_SET0. Enhance AVX512_128_SET0 expansion to make this possible.
We'll now expand AVX512_128_SET0 to an EVEX VXORD if VLX is available. Or, if it's not but register allocation has selected a non-extended register, we will use VEX VXORPS. And if it's an extended register without VLX, we'll use a 512-bit XOR. Do the same for AVX512_FsFLD0SS/SD.

This makes it possible for the register allocator to have all 32 registers available to work with.

llvm-svn: 292004
2017-01-14 07:29:24 +00:00
Craig Topper 9850210d03 [AVX-512] Change blend mask in lowerVectorShuffleAsBlend to a 64-bit value. Also add 32-bit mode command lines to the test case that exercises this just to make sure we sanely handle the 64-bit immediate there.
This fixes an undefined-behavior sanitizer failure from r291888.

llvm-svn: 291994
2017-01-14 04:19:35 +00:00
Daniel Berlin 01831b7db2 NewGVN: Fix PR31613 test regex naming
llvm-svn: 291979
2017-01-13 23:54:10 +00:00
Sanjay Patel 40f401776b [InstCombine] optimize unsigned icmp of increment
Allows LLVM to optimize sequences like the following:

%add = add nuw i32 %x, 1
%cmp = icmp ugt i32 %add, %y

Into:

%cmp = icmp uge i32 %x, %y

Previously, only signed comparisons were being handled.

Decrements could also be handled, but 'sub nuw %x, 1' is currently canonicalized to
'add %x, -1' in InstCombineAddSub, losing the nuw flag. Removing that canonicalization
seems like it might have far-reaching ramifications so I kept this simple for now.

Patch by Matti Niemenmaa!

Differential Revision: https://reviews.llvm.org/D24700

llvm-svn: 291975
2017-01-13 23:25:46 +00:00
Tim Northover 2b57987827 [GlobalISel] track predecessor mapping during switch lowering.
Correctly populating Machine PHIs relies on knowing exactly how the IR level
CFG was lowered to MachineIR. This needs to be tracked by any translation
phases that meddle (currently only SwitchInst handling).

llvm-svn: 291973
2017-01-13 23:11:37 +00:00
Sanjay Patel 2d4b456427 [InstCombine] use m_APInt to allow lshr folds for vectors with splat constants
llvm-svn: 291972
2017-01-13 23:04:10 +00:00
Sanjay Patel d511dde2ec [InstCombine / InstSimplify] add and move tests for lshr transforms; NFC
llvm-svn: 291970
2017-01-13 22:54:12 +00:00
Daniel Berlin c0431fd02d NewGVN: Move leaders around properly to ensure we have a canonical dominating leader. Fixes PR 31613.
Summary:
This is a testcase where phi node cycling happens, and because we do
not order the leaders by domination or anything similar, the leader
keeps changing.

Using std::set for the members is too expensive, and we actually don't
need them sorted all the time, only at leader changes.

We could keep both a set and a vector, and keep them mostly sorted and
resort as necessary, or use a set and a fibheap, but all of this seems
premature.

After running some statistics, we are able to avoid the vast majority
of sorting by keeping a "next leader" field.  Most congruence classes only have
leader changes once or twice during GVN.

Reviewers: davide

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D28594

llvm-svn: 291968
2017-01-13 22:40:01 +00:00
David Majnemer bba17390c7 [LoopStrengthReduce] Don't bother rewriting PHIs in catchswitch blocks
The catchswitch instruction cannot be split, so don't bother trying to
rewrite it.

This fixes PR31627.

llvm-svn: 291966
2017-01-13 22:24:27 +00:00
Artem Belevich 64dc9be7b4 [NVPTX] Added support for half-precision floating point.
Only scalar half-precision operations are supported at the moment.

- Adds general support for 'half' type in NVPTX.
- fp16 math operations are supported on sm_53+ GPUs only
  (can be disabled with --nvptx-no-f16-math).
- Type conversions to/from fp16 are supported on all GPU variants.
- On GPU variants that do not have full fp16 support (or if it's disabled),
  fp16 operations are promoted to fp32 and results are converted back
  to fp16 for storage.
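
As a hedged illustration of that promotion path (hypothetical function, not from the patch):

define half @fadd_f16(half %a, half %b) {
  %r = fadd half %a, %b       ; native fp16 add on sm_53+; otherwise promoted to fp32 and converted back, per the notes above
  ret half %r
}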

Differential Revision: https://reviews.llvm.org/D28540

llvm-svn: 291956
2017-01-13 20:56:17 +00:00
Konstantin Zhuravlyov 7d88275577 [AMDGPU] Implement f16 fcopysign and fcopysign(f32, f64)
Differential Revision: https://reviews.llvm.org/D28496

llvm-svn: 291954
2017-01-13 19:49:25 +00:00
James Y Knight 99a2ce2af2 Check for register clobbers when merging a vreg live range with a
reserved physreg in RegisterCoalescer.

Previously, we only checked for clobbers when merging into a READ of
the physreg, but not when merging from a WRITE to the physreg.

Differential Revision: https://reviews.llvm.org/D28527

llvm-svn: 291942
2017-01-13 19:08:36 +00:00
Artem Belevich d109f46573 [NVPTX] Only lower sin/cos to approximate instructions if unsafe math is allowed.
Previously we'd always lower @llvm.{sin,cos}.f32 to the {sin,cos}.approx.f32
instructions even when unsafe FP math was not allowed.

Clang-generated IR is not affected by this as it uses precise sin/cos
from CUDA's libdevice when unsafe math is disabled.
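
A hedged sketch of the kind of IR affected (hypothetical function, not the patch's test):

declare float @llvm.sin.f32(float)

define float @sinf_precise(float %x) {
  ; without unsafe FP math this may no longer be lowered to sin.approx.f32
  %r = call float @llvm.sin.f32(float %x)
  ret float %r
}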

Differential Revision: https://reviews.llvm.org/D28619

llvm-svn: 291936
2017-01-13 18:48:13 +00:00
Sanjay Patel b22f6c5f26 [InstCombine] use m_APInt to allow shl folds for vectors with splat constants
llvm-svn: 291934
2017-01-13 18:39:09 +00:00
Sanjay Patel bbc1c1e46b [InstCombine] add tests to show missing transforms for vector shl; NFC
llvm-svn: 291926
2017-01-13 18:27:23 +00:00
Simon Pilgrim 79fb07066c [X86][AVX] Bad v4f64/v4i64 '1z3z' shuffle test case
This lowers to SHUFPD if the input is zeroinitializer, but not with a demanded-elts-optimized build vector.

llvm-svn: 291924
2017-01-13 18:23:47 +00:00
Simon Pilgrim 7f329becb7 Regenerate test.
llvm-svn: 291920
2017-01-13 17:44:28 +00:00
Sanjay Patel 5178363687 [InstCombine] if the condition of a select may be known via assumes, eliminate the select
This is a limited solution for PR31512:
https://llvm.org/bugs/show_bug.cgi?id=31512

The motivation is that we will need to increase usage of llvm.assume and/or metadata to solve PR28430:
https://llvm.org/bugs/show_bug.cgi?id=28430

...and this kind of simplification is needed to take advantage of that extra information.
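
A hedged sketch of the shape this enables (hypothetical function, not the patch's test case):

declare void @llvm.assume(i1)

define i32 @select_known_cond(i1 %c, i32 %x, i32 %y) {
  call void @llvm.assume(i1 %c)
  %sel = select i1 %c, i32 %x, i32 %y   ; %c is known true via the assume, so the select can fold to %x
  ret i32 %sel
}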

The 'not' test case would be handled by:
https://reviews.llvm.org/D28485

Differential Revision:
https://reviews.llvm.org/D28337

llvm-svn: 291915
2017-01-13 17:02:42 +00:00
Ivan Krasin 1ed7896c1b Revert r291903 and r291898. Reason: they break check-lld on the bots.
Summary:
Revert [ARM] Fix ubig32_t read in ARMAttributeParser

Now using support functions to read data instead of trying to
perform casts.
===========================================================

Revert [ARM] Enable objdump to construct triple for ARM

Now that the ARMAttributeParser has been moved into the library,
it has been modified so that it can parse the attributes without
printing them and stores them in a map. ELFObjectFile now queries
the attributes to fill out the architecture details of a provided
triple for 'arm' and 'thumb' targets. llvm-objdump uses this new
functionality.

Subscribers: llvm-commits, samparker, aemerson, mgorny

Differential Revision: https://reviews.llvm.org/D28683

llvm-svn: 291911
2017-01-13 16:45:15 +00:00
Simon Pilgrim e4bbd88116 Regenerate test with update_llc_test_checks.py
llvm-svn: 291910
2017-01-13 16:37:38 +00:00
Saleem Abdulrasool 6ef45916c6 ARM: match GCC's behaviour for builtins
GCC changes the CC between the user code and the builtins based on the
value of `-target` rather than `-mfloat-abi`.  When an HF target is used,
the VFP variant of the AAPCS CC is used.  Otherwise, the AAPCS variant
is used.  In all cases, the AEABI functions use the AAPCS CC.  Adjust
the calling convention based on the target.

Resolves PR30543!

llvm-svn: 291909
2017-01-13 16:25:33 +00:00
George Rimar 8f5976ec00 [llvm-dwp] - Reuse object::Decompressor class
One more place where the Decompressor class can be reused.

Differential revision: https://reviews.llvm.org/D28679

llvm-svn: 291906
2017-01-13 15:58:55 +00:00
Simon Pilgrim 7f2a6d5e8c [X86][AVX512] Add support for variable ASHR v2i64/v4i64 support without VLX
Use v8i64 variable ASHR instructions if we don't have VLX.

This is a reduced version of D28537 that just adds support for variable shifts - I'll continue with that patch (for just constant/uniform shifts) once I've fixed the type legalization issue in avx512-cvt.ll.
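
A minimal hedged example of an affected shift (illustration only, not the patch's test):

define <4 x i64> @ashr_v4i64_var(<4 x i64> %a, <4 x i64> %b) {
  ; without VLX this variable arithmetic shift can be performed with the v8i64 instruction
  %r = ashr <4 x i64> %a, %b
  ret <4 x i64> %r
}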

Differential Revision: https://reviews.llvm.org/D28604

llvm-svn: 291901
2017-01-13 13:16:19 +00:00
Sam Parker 770ceb69ba [ARM] Enable objdump to construct triple for ARM
Now that the ARMAttributeParser has been moved into the library,
it has been modified so that it can parse the attributes without
printing them and stores them in a map. ELFObjectFile now queries
the attributes to fill out the architecture details of a provided
triple for 'arm' and 'thumb' targets. llvm-objdump uses this new
functionality.

Differential Revision: https://reviews.llvm.org/D28281

llvm-svn: 291898
2017-01-13 11:04:21 +00:00
Michael Zuckerman 558a4d8419 [X86][AVX512] Adding missing shuffle lowering to blend mask instructions
Some shuffles can be lowered to blend mask instructions (VPBLENDMB/VPBLENDMW/VPBLENDMD/VPBLENDMQ).
In this patch, I added a new pattern match for this case.

Reviewers:
1. craig.topper
2. guyblank
3. RKSimon
4. igorb     

Differential Revision: https://reviews.llvm.org/D28483

llvm-svn: 291888
2017-01-13 09:06:00 +00:00
Serge Pavlov d409411ef1 Track validity of pass results
Running tests with expensive checks enabled exhibits some problems with
verification of pass results.

First, pass verification may require the results of analyses that are not
available. For instance, verification of loop info requires the results of
dominator tree analysis. A pass may be marked as preserving loop info yet need
not depend on DominatorTreePass. When a pass manager tries to verify that loop
info is valid, it needs the dominator tree, but the corresponding analysis may
already have been destroyed because no users of it remained.

Another case is a pass that is skipped. For instance, entities with
available_externally linkage do not need code generation, so such passes are
skipped for them. In this case result verification must also be skipped.

To solve these problems, this change introduces a special flag in the Pass
structure that marks passes with valid results. If this flag is reset,
verifications dependent on the pass result are skipped.

Differential Revision: https://reviews.llvm.org/D27190

llvm-svn: 291882
2017-01-13 06:09:54 +00:00
Adam Nemet 6117caab58 Move test of lazy BFI with ORE to a generic directory
llvm-svn: 291862
2017-01-13 00:16:23 +00:00
Evgeniy Stepanov f01c70fec0 [asan] Don't overalign global metadata.
Other than on COFF with incremental linking, global metadata should
not need any extra alignment.

Differential Revision: https://reviews.llvm.org/D28628

llvm-svn: 291859
2017-01-12 23:26:20 +00:00
Evgeniy Stepanov 5d31d08a21 [asan] Refactor instrumentation of globals.
llvm-svn: 291858
2017-01-12 23:03:03 +00:00
Teresa Johnson 83aaf358fd [ThinLTO] Import static functions from the same module as caller
Summary:
We can sometimes end up with multiple copies of a local function that
have the same GUID in the index. This happens when there are local
functions with the same name in source files that themselves share the
same name (but live in different directories), and each was compiled from
within its own directory, so the files had the same path at compile time.

In this case, make sure we import the copy in the caller's module. While
it isn't a correctness problem (the renamed reference, which is based on the
module IR hash, will be unique, since the module must have had an
externally visible function that was imported), importing the wrong copy
will result in a lost performance opportunity, since that copy won't be
referenced and inlined.
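
A hedged sketch of the colliding-GUID setup (hypothetical file and function names): two source files named utils.c, each compiled from within its own directory, both define a local helper, so each module ends up with an internal definition whose GUID in the index is the same:

; built from dir1/utils.c and, separately, from dir2/utils.c
define internal i32 @helper(i32 %x) {
  ret i32 %x
}

With this change, when the caller of such a helper is imported, the importer prefers the copy that lives in the caller's own module.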

Reviewers: mehdi_amini

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D28440

llvm-svn: 291841
2017-01-12 22:04:45 +00:00
Robert Lougher 426851e608 [DebugInfo] Handle same locations in DILocation::getMergedLocation
Revision 289661 introduced the function DILocation::getMergedLocation for
merging of debug locations. At the time it was simply a stub which always
returned no location. This patch modifies getMergedLocation to handle the
case where the two locations are the same or can't be discriminated.

Differential Revision: https://reviews.llvm.org/D28521

llvm-svn: 291809
2017-01-12 20:34:35 +00:00
Nikolai Bozhenov f02ac0eeb2 [X86] Replace AND+IMM64 with SRL/SHL
Emit SHRQ/SHLQ instead of ANDQ with a 64-bit constant mask if the result
is unused and the mask has only higher/lower bits set. For example, with
this patch LLVM emits

  shrq $41, %rdi
  je

instead of

  movabsq $0xFFFFFE0000000000, %rcx
  testq   %rcx, %rdi
  je

This reduces the number of instructions, code size, and register pressure.
The transformation is applied only for cases where the mask cannot be
encoded as an immediate value within the TESTQ instruction.
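
A hedged IR sketch of an input that now takes the shift form (constants chosen to line up with the assembly above, not taken from the patch's tests):

define i1 @high_bits_zero(i64 %x) {
  %masked = and i64 %x, -2199023255552   ; 0xFFFFFE0000000000: only bits 41-63 set
  %cmp = icmp eq i64 %masked, 0          ; the AND result has no other use, so shrq $41 + je suffices
  ret i1 %cmp
}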

Differential Revision: https://reviews.llvm.org/D28198

llvm-svn: 291806
2017-01-12 19:54:27 +00:00
Nikolai Bozhenov 3db8bcdbba [X86] Modify BypassSlowDivision tests to match their new names (NFC)
- bypass-slow-division-32.ll:
  tests verifying correctness of divl-to-divb bypassing

- bypass-slow-division-64.ll:
  tests verifying correctness of divq-to-divl bypassing

- bypass-slow-division-tune.ll:
  tests verifying that bypassing is enabled only when appropriate

Differential Revision: https://reviews.llvm.org/D28551

llvm-svn: 291804
2017-01-12 19:48:01 +00:00
Nikolai Bozhenov 6684aeb137 [X86] Rename tests for bypassing slow division (NFC)
For tests on bypassing slow division there's no need to be
Atom-specific. The patch renames all tests on division bypassing
and makes their names more consistent:

  atom-bypass-slow-division.ll -> bypass-slow-division-32.ll
  (tests verifying correctness of divl-to-divb bypassing)

  atom-bypass-slow-division-64.ll -> bypass-slow-division-64.ll
  (tests verifying correctness of divq-to-divl bypassing)

  slow-div.ll -> bypass-slow-division-tune.ll
  (tests verifying that bypassing is enabled only when appropriate)

Differential Revision: https://reviews.llvm.org/D28197

llvm-svn: 291802
2017-01-12 19:41:27 +00:00
Nikolai Bozhenov 6bdf92cec7 [X86] Tune bypassing of slow division for Intel CPUs
64-bit integer division in Intel CPUs is extremely slow, much slower
than 32-bit division. On the other hand, 8-bit and 16-bit divisions
aren't any faster. The only important exception is Atom where DIV8
is fastest. Because of that, the patch
1) Enables bypassing of 64-bit division for Atom, Silvermont and
   all big cores.
2) Modifies 64-bit bypassing to use 32-bit division instead of the
   16-bit one. This doesn't make the shorter division slower but
   increases the chances of taking it. Moreover, it's much more likely
   that a value can be proven at compile time to fit in 32 bits and so
   doesn't require a run-time check (e.g. a zext from i32 to i64), as
   in the sketch below.
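
A hedged sketch of such a provably-narrow operand (hypothetical function, not from the patch's tests):

define i64 @div64(i64 %a, i32 %b) {
  %bext = zext i32 %b to i64   ; this operand provably fits in 32 bits
  %r = udiv i64 %a, %bext
  ret i64 %r
}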

Differential Revision: https://reviews.llvm.org/D28196

llvm-svn: 291800
2017-01-12 19:34:15 +00:00
Nikolai Bozhenov 05b4095990 [X86] Update LLC tests for slow division bypassing (NFC)
Run update_llc_test_checks.py on

    CodeGen/X86/atom-bypass-slow-division.ll
    CodeGen/X86/atom-bypass-slow-division-64.ll
    CodeGen/X86/slow-div.ll

Differential Revision: https://reviews.llvm.org/D28469

llvm-svn: 291799
2017-01-12 19:29:18 +00:00
Matt Arsenault 45337df08f AMDGPU: Skip fneg/select combine if it can fold into other
llvm-svn: 291792
2017-01-12 18:58:15 +00:00
Matt Arsenault 31c039ef2e AMDGPU: Fold free fneg into sin
llvm-svn: 291790
2017-01-12 18:48:09 +00:00