Commit Graph

115341 Commits

Shiva Chen f5938bfbf9 Revert "[DebugInfo] Generate DWARF debug information for labels."
This reverts commit b454fa1b4079b6c0a5b1565982d16516385838d7.

llvm-svn: 337812
2018-07-24 06:17:45 +00:00
Shiva Chen d6b2cdf9d4 [DebugInfo] Generate DWARF debug information for labels.
There are two forms for label debug information in DWARF format.

1. Labels in a non-inlined function:

DW_TAG_label
  DW_AT_name
  DW_AT_decl_file
  DW_AT_decl_line
  DW_AT_low_pc

2. Labels in an inlined function:

DW_TAG_label
  DW_AT_abstract_origin
  DW_AT_low_pc

We will collect label information from DBG_LABEL. Before every DBG_LABEL,
we will generate a temporary symbol to denote the location of the label.
The symbol could be used to get DW_AT_low_pc afterwards. So, we create a
mapping between 'inlined label' and DBG_LABEL MachineInstr in DebugHandlerBase.
The DBG_LABEL in the mapping is used to query the symbol before it.

The AbstractLabels in DwarfCompileUnit are used to process labels in inlined
functions.

We also keep a mapping between scopes and labels in DwarfFile to help
generate the correct tree structure of DIEs.
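
As a hedged source-level illustration (function and file names are made up),
a label that would be described by the first, non-inlined form:

```
// foo.c — the "done" label yields a DW_TAG_label whose DW_AT_name is
// "done", whose DW_AT_decl_file/DW_AT_decl_line point at this line, and
// whose DW_AT_low_pc is the temporary symbol emitted before its DBG_LABEL.
int foo(int x) {
  if (x < 0)
    goto done;
  x *= 2;
done:
  return x;
}
```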

Differential Revision: https://reviews.llvm.org/D45556

Patch by Hsiangkai Wang.

llvm-svn: 337799
2018-07-24 02:22:55 +00:00
Tom Stellard b7f19e6d1e AMDGPU/GlobalISel: Legalize G_INSERT
Reviewers: arsenm

Subscribers: kzhuravl, wdng, nhaehnle, yaxunl, rovka, kristof.beyls, dstuttard, tpr, t-tye, llvm-commits

Differential Revision: https://reviews.llvm.org/D49601

llvm-svn: 337798
2018-07-24 02:19:20 +00:00
Tom Stellard 2d37929c10 AMDGPU/GlobalISel: Remove unnecessary legality constraint for G_EXTRACT
Summary:
We were marking G_EXTRACT operations unsupported if the output type
was larger than the input type.  I don't see how this could ever actually
happen, so I dropped the constraint.  Doing this makes it possible to
reuse the same legality code for G_INSERT.

Reviewers: arsenm

Reviewed By: arsenm

Subscribers: kzhuravl, wdng, nhaehnle, yaxunl, rovka, kristof.beyls, dstuttard, tpr, t-tye, llvm-commits

Differential Revision: https://reviews.llvm.org/D49600

llvm-svn: 337794
2018-07-24 01:43:49 +00:00
Andres Freund 376a3d3659 Add PerfJITEventListener for perf profiling support.
This new JIT event listener supports generating profiling data for
the linux 'perf' profiling tool, allowing it to generate function and
instruction level profiles.

Currently this functionality is not enabled by default, but must be
enabled with LLVM_USE_PERF=yes.  Given that the listener has no
dependencies, it might be sensible to enable by default once the
initial issues have been shaken out.

I followed existing precedent in registering the listener by default
in lli. Should there be a decision to enable this by default on linux,
that should probably be changed.

Please note that until https://reviews.llvm.org/D47343 is resolved,
using this functionality with mcjit rather than orcjit will not
reliably work.
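
A hedged sketch of registering the listener on an ExecutionEngine, assuming
the factory function added by this patch (the surrounding code is
illustrative):

```
#include "llvm/ExecutionEngine/ExecutionEngine.h"
#include "llvm/ExecutionEngine/JITEventListener.h"

void addPerfListener(llvm::ExecutionEngine &EE) {
  // createPerfJITEventListener() returns null when LLVM was built
  // without LLVM_USE_PERF=yes.
  if (auto *Listener = llvm::JITEventListener::createPerfJITEventListener())
    EE.RegisterJITEventListener(Listener);
}
```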

That caveat aside, here's an example:

$ cat /tmp/expensive_loop.c

#include <stdbool.h>
#include <stdint.h>

bool stupid_isprime(uint64_t num)
{
        if (num == 2)
                return true;
        if (num < 1 || num % 2 == 0)
                return false;
        for(uint64_t i = 3; i < num / 2; i+= 2) {
                if (num % i == 0)
                        return false;
        }
        return true;
}

int main(int argc, char **argv)
{
        int numprimes = 0;

        for (uint64_t num = argc; num < 100000; num++)
        {
                if (stupid_isprime(num))
                        numprimes++;
        }

        return numprimes;
}

$ clang -ggdb -S -c -emit-llvm /tmp/expensive_loop.c -o /tmp/expensive_loop.ll

$ perf record -o perf.data -g -k 1 ./bin/lli -jit-kind=mcjit /tmp/expensive_loop.ll 1

$ perf inject --jit -i perf.data -o perf.jit.data

$ perf report -i perf.jit.data
-   92.59%  lli      jitted-5881-2.so                   [.] stupid_isprime
     stupid_isprime
     main
     llvm::MCJIT::runFunction
     llvm::ExecutionEngine::runFunctionAsMain
     main
     __libc_start_main
     0x4bf6258d4c544155
+    0.85%  lli      ld-2.27.so                         [.] do_lookup_x

And line-level annotations also work:
       │              for(uint64_t i = 3; i < num / 2; i+= 2) {
       │1 30:   movq   $0x3,-0x18(%rbp)
  0.03 │1 38:   mov    -0x18(%rbp),%rax
  0.03 │        mov    -0x10(%rbp),%rcx
       │        shr    $0x1,%rcx
  3.63 │     ┌──cmp    %rcx,%rax
       │     ├──jae    6f
       │     │                if (num % i == 0)
  0.03 │     │  mov    -0x10(%rbp),%rax
       │     │  xor    %edx,%edx
 89.00 │     │  divq   -0x18(%rbp)
       │     │  cmp    $0x0,%rdx
  0.22 │     │↓ jne    5f
       │     │                        return false;
       │     │  movb   $0x0,-0x1(%rbp)
       │     │↓ jmp    73
       │     │        }
  3.22 │1 5f:│↓ jmp    61
       │     │        for(uint64_t i = 3; i < num / 2; i+= 2) {

Subscribers: mgorny, llvm-commits

Differential Revision: https://reviews.llvm.org/D44892

llvm-svn: 337789
2018-07-24 00:54:06 +00:00
Chandler Carruth 66fbbbca60 [x86/SLH] Simplify the code for hardening a loaded value. NFC.
This is in preparation for extracting this into a re-usable utility in
this code.

llvm-svn: 337785
2018-07-24 00:35:36 +00:00
Chandler Carruth b46c22de00 [x86/SLH] Remove complex SHRX-based post-load hardening.
This code was really nasty, had several bugs in it originally, and
wasn't carrying its weight. While Zen has all 4 ports available for
SHRX, on all of the Intel parts in Agner's tables SHRX can only
execute on 2 ports, giving it half the throughput of OR.

Worse, all too often this pattern required two SHRX instructions in
a chain, hurting the critical path by a lot.

Even if we end up needing to save/restore EFLAGS, that is no longer so
bad. We pay for a uop to save the flag, but we very likely get fusion
when it is used by forming a test/jCC pair or something similar. In
practice, I don't expect the SHRX to be a significant savings here, so
I'd like to avoid the complex code required. We can always resurrect
this if/when someone has a specific performance issue addressed by it.

llvm-svn: 337781
2018-07-24 00:21:59 +00:00
Fangrui Song 5bad9d835a [DWARF] Use deque in place of SmallVector to fix use-after-free issue
Summary: SmallVector's elements are moved when it resizes, which caused a use-after-free.
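
A hedged, self-contained illustration of the failure mode (not the original
code): references into a SmallVector can dangle after growth, while deque
elements never move:

```
#include <deque>
#include "llvm/ADT/SmallVector.h"

void demo() {
  llvm::SmallVector<int, 2> V = {1, 2};
  int &R = V[0];   // points into the inline storage
  V.push_back(3);  // grows past the inline capacity; R now dangles
  // Reading R here would be a use-after-free.

  std::deque<int> D = {1, 2};
  int &S = D[0];
  D.push_back(3);  // deque growth never relocates existing elements
  (void)S;         // still valid
}
```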

Reviewers: probinson, dblaikie

Subscribers: JDevlieghere, llvm-commits

Differential Revision: https://reviews.llvm.org/D49702

llvm-svn: 337772
2018-07-23 23:27:45 +00:00
Wolfgang Pieb 790d86cefc Embed a template specialization in a namespace to work around a gcc bug.
llvm-svn: 337770
2018-07-23 23:14:23 +00:00
Wolfgang Pieb 439801ba1d [DWARF v5] Refactor range lists dumping by using a more generic way of handling tables of lists.
The intent is to use it for location list tables as well. The change is almost NFC, with the
exception of the spelling of some strings used during dumping (all lowercase now).

Reviewer: JDevlieghere

Differential Revision: https://reviews.llvm.org/D49500

llvm-svn: 337763
2018-07-23 22:37:17 +00:00
Teresa Johnson b963c0b658 [LTO] Handle __imp_ (dllimport) symbols consistently with lld
Summary:
Similar to what lld already does for dllimport symbols which are
prefaced with __imp_ (see lld patch r240620), strip off the __imp_
prefix in LTO. Otherwise we can get 2 separate GlobalResolution for
a single symbol, the dllimport declaration, and the definition, which
leads to incorrect LTO handling.

Fixes PR38105.
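
A hedged sketch of the normalization involved (names illustrative; not the
actual LTO code):

```
#include "llvm/ADT/StringRef.h"

// Map a dllimport symbol name to the name used for resolution, so the
// __imp_ declaration and the definition share one GlobalResolution.
llvm::StringRef stripImpPrefix(llvm::StringRef Name) {
  Name.consume_front("__imp_"); // no-op when the prefix is absent
  return Name;
}
```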

Reviewers: pcc

Subscribers: mehdi_amini, inglorion, steven_wu, dexonsmith, llvm-commits

Differential Revision: https://reviews.llvm.org/D49138

llvm-svn: 337762
2018-07-23 22:33:57 +00:00
Erik Pilkington 28e08a0a61 [demangler] call terminate() if allocation failed
We really should set *status to memory_alloc_failure, but we need to refactor
the demangler a bit to properly propagate the failure up the stack. Until then,
it's better to explicitly terminate than to rely on a null dereference crash.
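
A hedged sketch of the pattern (not the demangler's actual allocator code):

```
#include <cstdlib>
#include <exception>

void *allocOrTerminate(std::size_t N) {
  if (void *P = std::malloc(N))
    return P;
  // Until the failure can be propagated up as memory_alloc_failure,
  // terminating beats crashing on a null dereference later.
  std::terminate();
}
```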

rdar://31240372

llvm-svn: 337759
2018-07-23 22:23:04 +00:00
Martin Storsjo db42d51ee3 [MC] Add a separate flag for skipping comdat constant sections for MinGW. NFC.
This actually has nothing to do with the associative comdat sections
that aren't supported by GNU binutils ld.

Clarify the comments from SVN r335918 and use a separate flag for it.

Differential Revision: https://reviews.llvm.org/D49645

llvm-svn: 337757
2018-07-23 22:15:25 +00:00
Martin Storsjo 100fc97051 [COFF] Fix assembly output of comdat sections without an attached symbol
Since SVN r335286, the .xdata sections are produced without an attached
symbol, which requires using a different syntax when printing assembly
output.

Instead of the usual syntax of '.section <name>,"dr",discard,<symbol>',
use '.section <name>,"dr"' + '.linkonce discard' (which is what GCC
uses for all assembly output).

This fixes PR38254.

Differential Revision: https://reviews.llvm.org/D49651

llvm-svn: 337756
2018-07-23 22:15:19 +00:00
Martin Storsjo c2b701408e [AArch64] Use MCAsmInfoMicrosoft and MCAsmInfoGNUCOFF as base classes
This matches the structure used on X86 and ARM. This requires
a little bit of duplication of the parts that are equal in both
AArch64 COFF variants though.

Before SVN r335286, these classes didn't add anything that MCAsmInfoCOFF
didn't, but now they do.

This makes AArch64 match X86 in how comdat is used for float constants
for MinGW.

Differential Revision: https://reviews.llvm.org/D49637

llvm-svn: 337755
2018-07-23 22:15:14 +00:00
Vedant Kumar 22bd6f99fa [SelectionDAG] Reduce DanglingDebugInfo memory traffic, NFC
This avoids approx. 2 x 10^5 DenseMap insertions in both non-debug and
debug -O2 builds of the sqlite3 amalgamation.

llvm-svn: 337751
2018-07-23 21:59:04 +00:00
Teresa Johnson e214fdeb69 [ThinLTO] Ensure the TargetLibraryInfo is constructed early enough
Summary:
Without this change, the WholeProgramDevirt pass, which requires the
TargetLibraryInfo, will construct one from the default triple.

Fixes PR38139.

Reviewers: pcc

Subscribers: mehdi_amini, inglorion, steven_wu, dexonsmith, llvm-commits

Differential Revision: https://reviews.llvm.org/D49278

llvm-svn: 337750
2018-07-23 21:58:19 +00:00
George Burgess IV b00fb46479 [DebugCounters] Keep track of total counts
This patch makes debug counters keep track of the total number of times
we've called `shouldExecute` for each counter, so it's easier to build
automated tooling on top of these.

A patch to print these counts is coming soon.
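
For context, a typical debug-counter call site looks like this (a hedged
sketch; the counter name is made up). With this patch, every shouldExecute()
call bumps the counter's total, whether or not the transform runs:

```
#include "llvm/Support/DebugCounter.h"
using namespace llvm;

DEBUG_COUNTER(MyXformCounter, "my-xform",
              "Controls which instances of my transform are applied");

bool tryTransform() {
  if (!DebugCounter::shouldExecute(MyXformCounter))
    return false; // skipped, but still counted toward the total
  // ... perform the transform ...
  return true;
}
```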

Patch by Zhizhou Yang!

Differential Revision: https://reviews.llvm.org/D49560

llvm-svn: 337748
2018-07-23 21:49:36 +00:00
Manoj Gupta f9f50f634d ConstantFolding: Avoid a crash.
Summary:
Check if the parent basic block and caller exist
before calling CS.getCaller when constant folding
the strip.invariant.group intrinsic.

This avoids a crash when the function containing the intrinsic
is being inlined. The instruction is checked for any simplification
but has not yet been added to a basic block.
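
A hedged sketch of the guard (illustrative; the real check lives in the
constant folder):

```
#include "llvm/IR/CallSite.h"
#include "llvm/IR/Function.h"

const llvm::Function *callerIfAny(llvm::ImmutableCallSite CS) {
  const llvm::Instruction *I = CS.getInstruction();
  // During inlining the new call is not in a basic block yet, so
  // asking for the caller would dereference a null parent.
  if (!I || !I->getParent() || !I->getParent()->getParent())
    return nullptr;
  return CS.getCaller();
}
```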

Reviewers: Prazek, rsmith, efriedma

Reviewed By: efriedma

Subscribers: eraman, llvm-commits

Differential Revision: https://reviews.llvm.org/D49690

llvm-svn: 337742
2018-07-23 21:20:00 +00:00
Reid Kleckner 980c4df037 Re-land r335297 "[X86] Implement more of x86-64 large and medium PIC code models"
Don't try to generate large PIC code for non-ELF targets. Neither COFF
nor MachO have relocations for large position independent code, and
users have been using "large PIC" code models to JIT 64-bit code for a
while now. With this change, if they are generating ELF code, their
JITed code will truly be PIC, but if they target MachO or COFF, it will
contain 64-bit immediates that directly reference external symbols. For
a JIT, that's perfectly fine.

llvm-svn: 337740
2018-07-23 21:14:35 +00:00
David Greene 4345df3dad Fix RegScavenger::unprocess
RegScavenger::unprocess walks backward, so it should undo the effects
of defs before undoing effects of kills. Previously it did things in
the opposite order, leaving a register apparently unused (dead) in the
case where an instruction both used (killed) and defined a register.

Differential Revision: https://reviews.llvm.org/D42200

llvm-svn: 337735
2018-07-23 20:23:50 +00:00
Krzysztof Parzyszek 9500a24fce [Hexagon] Handle unnamed globals in HexagonConstExpr
Instead of comparing names, compare positions in the parent module.

llvm-svn: 337723
2018-07-23 18:30:17 +00:00
Reid Kleckner dbae8cdb89 [Demangle] Attempt to fix arena memory leak
llvm-svn: 337720
2018-07-23 18:21:43 +00:00
Fangrui Song 58407ca045 [ARM] Use unique_ptr to fix memory leak introduced in r337701
llvm-svn: 337714
2018-07-23 17:43:21 +00:00
Jordan Rupprecht e5daf61229 OpChain has subclasses, so add a virtual destructor.
Summary:
OpChain has subclasses, so add a virtual destructor.

This fixes an issue when deleting subclasses of OpChain (see MatchSMLAD() specifically) in r337701.
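
The underlying C++ rule, as a minimal hedged example (type names made up):
deleting a derived object through a base pointer is undefined behavior unless
the base destructor is virtual:

```
struct OpChainBase {
  virtual ~OpChainBase() = default; // without this, the delete below is UB
};

struct BinOpChainLike : OpChainBase {
  // derived members whose destructor must actually run
};

void destroy(OpChainBase *C) {
  delete C; // reaches ~BinOpChainLike only through the virtual destructor
}
```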

Reviewers: javed.absar

Subscribers: llvm-commits, SjoerdMeijer, samparker

Differential Revision: https://reviews.llvm.org/D49681

llvm-svn: 337713
2018-07-23 17:38:05 +00:00
Matt Morehouse d773cb99f1 [ARM] Follow-up to r337709.
Fix double-free.

llvm-svn: 337711
2018-07-23 17:22:53 +00:00
Matt Morehouse a70685f63a [ARM] Add doFinalization() to ARMCodeGenPrepare pass.
Attempt to fix the leak introduced in r337687 and make sanitizer
buildbots green again.

llvm-svn: 337709
2018-07-23 17:00:45 +00:00
Nirav Dave eac2ca4e28 [Legalize] Elide MERGE_VALUES created by scalarizeVectorLoad.
scalarizeVectorLoad creates MERGE_VALUES nodes which are immediately
decomposed in expandLoad. Elide the node in these cases.

llvm-svn: 337708
2018-07-23 16:43:42 +00:00
Sam Parker 89a3799a69 [ARM][NFC] ParallelDSP reorganisation
In preparation for allowing the ARMParallelDSP pass to parallelise more
than smlads, I've restructured some elements:

- The ParallelMAC struct has been renamed to BinOpChain.
- The BinOpChain struct holds two value lists: LHS and RHS, as well
  as inheriting from the OpChain base class.
- The OpChain struct holds all the values of the represented chain
  and has had the memory locations functionality inserted into it.
- ParallelMACList becomes OpChainList and it now holds pointers
  instead of objects.

Differential Revision: https://reviews.llvm.org/D49020

llvm-svn: 337701
2018-07-23 15:25:59 +00:00
Jonas Paulsson 59c94bec0d [SystemZ] Fix dumpSU() method in SystemZHazardRecognizer.
Two minor issues: The new MCD SchedWrite name does not contain "Unit" like
all the others, so a check is needed. Also, print "LSU" instead of "LS".

Review: Ulrich Weigand
llvm-svn: 337700
2018-07-23 15:08:35 +00:00
Cameron McInally 2c9bcffc92 [FPEnv] Legalize double width StrictFP vector operations
Differential Revision: https://reviews.llvm.org/D48809

llvm-svn: 337698
2018-07-23 14:40:17 +00:00
Sam Parker 3828c6ff94 [ARM] ARMCodeGenPrepare backend pass
Arm-specific codegen prepare is implemented to perform type promotion
on icmp operands, which can enable the removal of uxtb and uxth
(unsigned extend) instructions. This is possible because performing
type promotion before ISel takes this duty away from the DAG builder,
which has to perform legalisation but has only a limited view of data
ranges.
    
The pass visits any instruction operand of an icmp and creates a
worklist to traverse the use-def tree to determine whether the values
can simply be promoted. Our concern is values in the registers
overflowing the narrow (i8, i16) data range, so instructions marked
with nuw can be promoted easily. For add and sub instructions, we are
able to use the parallel dsp instructions to operate on scalar data
types and avoid overflowing bits. Underflowing adds and subs are also
permitted when the result is only used by an unsigned icmp.
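
A hedged source-level example of the pattern being targeted (function made
up):

```
// The i8 add below may set bits above bit 7, so without promotion the
// narrowed operands are zero-extended (uxtb) before the unsigned compare.
// The pass promotes the whole computation to i32, removing the extends,
// when it can prove that i8 overflow doesn't change the compare result.
bool cmp8(unsigned char a, unsigned char b, unsigned char c) {
  return (unsigned char)(a + b) > c;
}
```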

Differential Revision: https://reviews.llvm.org/D48832

llvm-svn: 337687
2018-07-23 12:27:47 +00:00
John Brawn fc18a6ad7d [GVN] Don't use the eliminated load as an available value in phi construction
In ConstructSSAForLoadSet if an available value is actually the load that we're
doing SSA construction to eliminate, then we can omit it, as SSAUpdater will add
in the value for the phi that will be replacing it anyway. This can result in
simpler IR which can allow further optimisation.

Differential Revision: https://reviews.llvm.org/D44160

llvm-svn: 337686
2018-07-23 12:14:45 +00:00
Alexandros Lamprineas bf6009c234 [MemorySSAUpdater] Update Phi operands after trivial Phi elimination
Bug fix for PR37445. The underlying problem and its fix are similar to PR37808.
The bug lies in MemorySSAUpdater::getPreviousDefRecursive(), where PhiOps is
computed before the call to tryRemoveTrivialPhi() and it ends up being out of
date, pointing to stale data. We have now turned each of the PhiOps into a
TrackingVH<MemoryAccess>.
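
A hedged sketch of the fix's mechanism: a TrackingVH follows its value through
replaceAllUsesWith, so the cached operands cannot go stale:

```
#include "llvm/ADT/SmallVector.h"
#include "llvm/Analysis/MemorySSA.h"
#include "llvm/IR/ValueHandle.h"
using namespace llvm;

// Instead of caching raw MemoryAccess pointers, which dangle once
// tryRemoveTrivialPhi() replaces a Phi, track the values through updates:
using PhiOpsTy = SmallVector<TrackingVH<MemoryAccess>, 4>;
```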

Differential Revision: https://reviews.llvm.org/D49425

llvm-svn: 337680
2018-07-23 10:56:30 +00:00
Sam McCall 4bb7883d09 [Support] Add a UniqueStringSaver: like StringSaver, but deduplicating.
Summary: Clarify the contract of StringSaver (it null-terminates; callers rely on that).
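
Hedged usage sketch of the new class alongside the old one:

```
#include "llvm/Support/Allocator.h"
#include "llvm/Support/StringSaver.h"
using namespace llvm;

void demo() {
  BumpPtrAllocator Alloc;

  StringSaver SS(Alloc);
  StringRef A = SS.save("foo"), B = SS.save("foo");
  // Both null-terminated, but two separate copies: A.data() != B.data().

  UniqueStringSaver USS(Alloc);
  StringRef C = USS.save("foo"), D = USS.save("foo");
  // Deduplicated: C.data() == D.data().
  (void)A; (void)B; (void)C; (void)D;
}
```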

Reviewers: hokein

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D49596

llvm-svn: 337677
2018-07-23 10:44:40 +00:00
Roman Lebedev 52b85377eb [NFC][MCA] ZnVer1: Update RegisterFile to identify false dependencies on partially written registers.
Summary:
Pretty mechanical follow-up for D49196.

As microarchitecture.pdf notes, "20 AMD Ryzen pipeline",
"20.8 Register renaming and out-of-order schedulers":
  The integer register file has 168 physical registers of 64 bits each.
  The floating point register file has 160 registers of 128 bits each.
"20.14 Partial register access":
  The processor always keeps the different parts of an integer register together.
  ...
  An instruction that writes to part of a register will therefore have a false dependence
  on any previous write to the same register or any part of it.

Reviewers: andreadb, courbet, RKSimon, craig.topper, GGanesh

Reviewed By: GGanesh

Subscribers: gbedwell, llvm-commits

Differential Revision: https://reviews.llvm.org/D49393

llvm-svn: 337676
2018-07-23 10:10:13 +00:00
Alexandros Lamprineas 592cc78dd8 [GVNHoist] safeToHoistLdSt allows illegal hoisting
Bug fix for PR36787. When reasoning if it's safe to hoist a load we
want to make sure that the defining memory access dominates the new
insertion point of the hoisted instruction. safeToHoistLdSt calls
firstInBB(InsertionPoint,DefiningAccess) which returns false if
InsertionPoint == DefiningAccess, and therefore it falsely thinks
it's safe to hoist.

Differential Revision: https://reviews.llvm.org/D49555

llvm-svn: 337674
2018-07-23 09:42:35 +00:00
Chandler Carruth 1d926fb9f4 [x86/SLH] Fix a bug where we would harden tail calls twice -- once as
a call, and then again as a return.

Also added a comment to try and explain better why we would be doing
what we're doing when hardening the (non-call) returns.

llvm-svn: 337673
2018-07-23 07:56:15 +00:00
Chandler Carruth 0477b40137 [x86/SLH] Rename and comment the main hardening function. NFC.
This provides an overview of the algorithm used to harden specific
loads. It also brings our terminology further in line with
hardening rather than checking.

Differential Revision: https://reviews.llvm.org/D49583

llvm-svn: 337667
2018-07-23 04:01:34 +00:00
Jiading Gai a269437258 Test commit, fix a minor typo.
llvm-svn: 337657
2018-07-22 20:04:42 +00:00
Craig Topper b2a626b52e [X86] Remove the max vector width restriction from combineLoopMAddPattern and rely on splitOpsAndApply to handle splitting.
This seems to be a net improvement. There's still an issue under avx512f where we have a 512-bit vpaddd, but not vpmaddwd, so we end up doing two 256-bit vpmaddwds and inserting the results before a 512-bit vpaddd. It might be better to do two 512-bit paddds with zeros in the upper half. Same number of instructions, but breaks a dependency.

llvm-svn: 337656
2018-07-22 19:44:35 +00:00
Xin Tong 023e25ad14 [ORE] Move loop invariant ORE checks outside the PM loop.
Summary:
These checks take 22ms out of a ~20s compile of sqlite3.c, because we
run them for every unit of compilation and every pass.

Reviewers: paquette, anemet

Subscribers: mehdi_amini, llvm-commits

Differential Revision: https://reviews.llvm.org/D49586

llvm-svn: 337654
2018-07-22 05:27:41 +00:00
Craig Topper b8b21d2fca [SelectionDAGBuilder] Use APInt::isZero instead of comparing APInt::getZExtValue to 0 in a place where we can't be sure contents of the APInt fit in a uint64_t.
This is used on an extract vector element index, which in most cases is going to be an i32 or i64 with a valid element number. But it is possible to construct IR with a larger type and a large out-of-range value.
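
A hedged illustration of why the direct check matters (mirroring the method
named in the subject line):

```
#include "llvm/ADT/APInt.h"

bool indexIsZero(const llvm::APInt &Idx) {
  // For an index wider than 64 bits holding a huge value,
  // Idx.getZExtValue() asserts that the value fits in a uint64_t,
  // so "Idx.getZExtValue() == 0" could crash. This cannot:
  return Idx.isZero();
}
```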

llvm-svn: 337652
2018-07-22 05:16:50 +00:00
Craig Topper bf61113e4f [SelectionDAGBuilder] Restrict vector reduction check to types with a power of 2 number of elements.
The check for the shuffle's uses probably isn't correct for non-power-of-2 vectors.

llvm-svn: 337651
2018-07-22 05:16:49 +00:00
Simon Atanasyan 2c5b18f70f [mips] Factor out register class selection for global base register. NFC
Factor out register class selection for global base register into a
separate function to escape long chain of ternary operators.

llvm-svn: 337647
2018-07-21 16:16:08 +00:00
Simon Atanasyan ecd1e0afdd [mips] Move out the WrapperPat declaration from the NotInMicroMips predicate
This is a follow-up to rL335185. That commit adds some WrapperPat
patterns for the microMIPS target. But the declaration of the WrapperPat
class is under the NotInMicroMips predicate, so the microMIPS patterns
cannot be selected, because the predicate (Subtarget->inMicroMipsMode())
&& (!Subtarget->inMicroMipsMode()) is always false.

This change moves the WrapperPat class declaration out of the
NotInMicroMips predicate and enables the microMIPS WrapperPat patterns.

Differential revision: https://reviews.llvm.org/D49533

llvm-svn: 337646
2018-07-21 16:16:03 +00:00
Aditya Kumar 373ce7eca5 Early exit with cheaper checks
Reviewers: sebpop, davide, fhahn, trentxintong
Differential Revision: https://reviews.llvm.org/D49617

llvm-svn: 337643
2018-07-21 14:13:44 +00:00
Chen Zheng 69bb064539 [InstrSimplify] fold sdiv if two operands are negated and non-overflow
Differential Revision: https://reviews.llvm.org/D49382

llvm-svn: 337642
2018-07-21 12:27:54 +00:00
Lang Hames 960246dbee [ORC] Re-apply r336760 with fixes.
llvm-svn: 337637
2018-07-21 00:12:05 +00:00
Lang Hames a48d108353 Re-apply r337595 with fix for LLVM_ENABLE_THREADS=Off.
llvm-svn: 337626
2018-07-20 22:22:19 +00:00
Peter Collingbourne acf005676e Change the cap on the amount of padding for each vtable to 32 bytes (previously it was 128 bytes)
We tested different cap values with a recent commit of Chromium. Our results show that the 32-byte cap yields the smallest binary and all the caps yield similar performance.
Based on the results, we propose to change the cap value to 32-byte.

Patch by Zhaomo Yang!

Differential Revision: https://reviews.llvm.org/D49405

llvm-svn: 337622
2018-07-20 21:43:20 +00:00
Matt Arsenault 5ae765e68c AMDGPU: Use existing function to check for VGPRs
llvm-svn: 337621
2018-07-20 21:20:36 +00:00
Benjamin Kramer 64c7fa3201 Revert "[X86][AVX] Convert X86ISD::VBROADCAST demanded elts combine to use SimplifyDemandedVectorElts"
This reverts commit r337547. It triggers an infinite loop.

llvm-svn: 337617
2018-07-20 20:59:46 +00:00
Martin Storsjo 509f1668e5 [COFF] Use symbolic constants instead of hardcoded numbers. NFCI.
Patch by Martell Malone.

llvm-svn: 337614
2018-07-20 20:48:33 +00:00
Martin Storsjo a6ffc9c8df [COFF] Adjust how we flag weak externals
This fixes PR36096.

Originally based on a patch by Martell Malone.

Differential Revision: https://reviews.llvm.org/D44357

llvm-svn: 337613
2018-07-20 20:48:29 +00:00
Reid Kleckner 7ed83591c2 Revert r337595 "[ORC] Add new symbol lookup methods to ExecutionSessionBase in preparation for"
Breaks the build with LLVM_ENABLE_THREADS=OFF.

llvm-svn: 337608
2018-07-20 20:20:45 +00:00
Roman Tereshin 31d52847ef Reapply "[LSV] Refactoring + supporting bitcasts to a type of different size"
This reapplies commit r337489 reverted by r337541
Additionally, this commit contains a speculative fix to the issue reported in r337541
(the report does not contain an actionable reproducer, just a stack trace)

llvm-svn: 337606
2018-07-20 20:10:04 +00:00
Martin Storsjo 0f2abd803b Remove a superfluous semicolon
llvm-svn: 337599
2018-07-20 18:43:42 +00:00
Zachary Turner 9d72aa9006 [Demangler] Correctly factor in assignment when allocating.
Incidentally, all allocations that we currently perform were
properly aligned, but this was only by accident.

Thanks to Erik Pilkington for catching this.

llvm-svn: 337596
2018-07-20 18:35:06 +00:00
Lang Hames 732d116a96 [ORC] Add new symbol lookup methods to ExecutionSessionBase in preparation for
deprecating SymbolResolver and AsynchronousSymbolQuery.

Both lookup overloads take a VSO search order to perform the lookup. The first
overload is non-blocking and takes OnResolved and OnReady callbacks. The second
is blocking, takes a boolean flag to indicate whether to wait until all symbols
are ready, and returns a SymbolMap. Both overloads take a RegisterDependencies
function to register symbol dependencies (if any) on the query.

llvm-svn: 337595
2018-07-20 18:31:53 +00:00
Lang Hames d4df0f1733 [ORC] Simplify VSO::lookupFlags to return the flags map.
This discards the unresolved symbols set and returns the flags map directly
(rather than mutating it via the first argument).

The unresolved symbols result made it easy to chain lookupFlags calls, but such
chaining should be rare to non-existent (especially now that symbol resolvers
are being deprecated) so the simpler method signature is preferable.

llvm-svn: 337594
2018-07-20 18:31:52 +00:00
Lang Hames fd0c1e7169 [ORC] Replace SymbolResolvers in the new ORC layers with search orders on VSOs.
A search order is a list of VSOs to be searched linearly to find symbols. Each
VSO now has a search order that will be used when fixing up definitions in that
VSO. Each VSO's search order defaults to just that VSO itself.

This is a first step towards removing symbol resolvers from ORC altogether. In
practice symbol resolvers tended to be used to implement a search order anyway,
sometimes with additional programmatic generation of symbols. Now that VSOs
support programmatic generation of definitions via fallback generators, search
orders provide a cleaner way to achieve the desired effect (while removing a lot
of boilerplate).

llvm-svn: 337593
2018-07-20 18:31:50 +00:00
Benjamin Kramer a2e18bba30 [Demangler] Add missing overrides
-Winconsistent-missing-override complains about this.

llvm-svn: 337592
2018-07-20 18:22:12 +00:00
Zachary Turner 91ecedd29e Fix a few warnings and style issues in MS demangler.
Also remove a broken test case.

llvm-svn: 337591
2018-07-20 18:07:33 +00:00
Craig Topper 28ac623f6f [X86] Remove isel patterns for MOVSS/MOVSD ISD opcodes with integer types.
Ideally our ISD node types going into the isel table would have types consistent with their instruction domain. This prevents us from having to duplicate patterns with different types for the same instruction.

Unfortunately, it seems our shuffle combining currently relies on this a little to remove some bitcasts. This seems to enable some switching between shufps and shufd. Hopefully there's some way we can address this in the combining.

Differential Revision: https://reviews.llvm.org/D49280

llvm-svn: 337590
2018-07-20 17:57:53 +00:00
Craig Topper 6194ccf8c7 [X86] Remove what appear to be unnecessary uses of DCI.CombineTo
CombineTo is most useful when you need to replace multiple results, avoid the worklist management, or need to do something else after the combine, etc. Otherwise you should be able to just return the new node and let DAGCombiner go through its usual worklist code.

All of the places changed in this patch look to be standard cases where we should be able to use the more standard behavior of just returning the new node.

Differential Revision: https://reviews.llvm.org/D49569

llvm-svn: 337589
2018-07-20 17:57:42 +00:00
Zachary Turner f435a7eada Add a Microsoft Demangler.
This adds initial support for a demangling library (LLVMDemangle)
and tool (llvm-undname) for demangling Microsoft names.  This
doesn't cover 100% of cases and there are some known limitations
which I intend to address in followup patches, at least until such
time that we have (near) 100% test coverage matching up with all
of the test cases in clang/test/CodeGenCXX/mangle-ms-*.

Differential Revision: https://reviews.llvm.org/D49552

llvm-svn: 337584
2018-07-20 17:27:48 +00:00
Alina Sbirlea 20c2962585 [MemorySSA] Add API to update MemoryPhis, following CFG changes.
Summary:
When splitting predecessors in BasicBlockUtils, we create a new block as an immediate predecessor of the original BB, then we connect a given set of predecessors to the new block.
The API in this patch will be used to update MemoryPhis for this CFG change.
If all predecessors are being moved, we move the MemoryPhi directly. Otherwise we create a new MemoryPhi in the NewBB and populate its incoming values, while deleting them from BB's Phi.
[Split from D45299 for easier review]

Reviewers: george.burgess.iv

Subscribers: sanjoy, jlebar, Prazek, llvm-commits

Differential Revision: https://reviews.llvm.org/D49156

llvm-svn: 337581
2018-07-20 17:13:05 +00:00
Simon Pilgrim 70fcd0f481 [X86][XOP] Fix SUB constant folding for VPSHA/VPSHL shift lowering
We can safely use getConstant here as we're still lowering, which allows constant folding to kick in and simplify the vector shift codegen.

Noticed while working on D49562.

llvm-svn: 337578
2018-07-20 16:55:18 +00:00
Alexander Potapenko 80c6f41581 [MSan] Hotfix compilation
Make sure NewSI is used in materializeStores()

llvm-svn: 337577
2018-07-20 16:52:12 +00:00
Evandro Menezes fffa9b5897 [ARM] Add new feature to enable optimizing the VFP registers
Enable the optimization of operations on DPR and SPR via a feature instead
of checking the target.

Differential revision: https://reviews.llvm.org/D49463

llvm-svn: 337575
2018-07-20 16:49:28 +00:00
Alexander Potapenko 5ff3abbc31 [MSan] run materializeChecks() before materializeStores()
When pointer checking is enabled, it's important that every pointer is
checked before its value is used.
For stores MSan used to generate code that calculates shadow/origin
addresses from a pointer before checking it.
For userspace this isn't a problem, because the shadow calculation code
is quite simple and the compiler is able to move it after the check at -O2.
But for KMSAN getShadowOriginPtr() creates a runtime call, so we want the
check to be performed strictly before that call.

Swapping materializeChecks() and materializeStores() resolves the issue:
both functions insert code before the given IR location, so the new
insertion order guarantees that the code calculating shadow address is
between the address check and the memory access.

llvm-svn: 337571
2018-07-20 16:28:49 +00:00
Simon Pilgrim c7132031a2 [X86][SSE] Use SplitOpsAndApply to improve HADD/HSUB lowering
Improve AVX1 256-bit vector HADD/HSUB matching by using SplitOpsAndApply to split into 128-bit instructions.

llvm-svn: 337568
2018-07-20 16:20:45 +00:00
Simon Pilgrim a85b86a982 [X86][AVX] Add support for i16 256-bit vector horizontal op redundant shuffle removal
llvm-svn: 337566
2018-07-20 15:51:01 +00:00
Pavel Labath 0120691f53 Fix build breakage from r337562
I changed a variable's type from pointer to reference, but forgot to
update the assert-only code.

llvm-svn: 337564
2018-07-20 15:40:24 +00:00
Nirav Dave 25802ac9fd [DAG] Avoid Node Update assertion due to AND simplification
Check for construction-time folding for incomplete AND nodes in
BackwardsPropagateMask.

Fixes PR38185.

Reviewers: RKSimon, samparker

Reviewed By: samparker

Subscribers: llvm-commits, hiraditya

Differential Revision: https://reviews.llvm.org/D49444

llvm-svn: 337563
2018-07-20 15:27:24 +00:00
Pavel Labath 7f7e60694e DwarfDebug: Reduce duplication in addAccel*** methods
Summary:
Each of the four methods had a dozen lines and was doing almost exactly
the same thing: get the appropriate accelerator table kind and insert an
entry into it. I move this common logic to a helper function and make
these methods delegate to it.

This came up in the context of D49493, where I've needed to make adding
a string to a string pool slightly more complicated, and it seemed to
make sense to do it in one place instead of five.

To make this work I've needed to unify the interface of the AccelTable
data types, as some used to store DIE& and others DIE*. I chose to unify
to a reference as that's what the caller uses.

This technically isn't NFC, because it changes the StringPool used for
apple tables in the DWO case (now it uses the main file like DWARF v5
instead of the DWO file). However, that shouldn't matter, as DWO is not
a thing on apple targets (clang frontend simply ignores -gsplit-dwarf).

Reviewers: JDevlieghere, aprantl, probinson

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D49542

llvm-svn: 337562
2018-07-20 15:24:13 +00:00
Simon Pilgrim 7c56bce996 [X86][AVX] Add support for 32/64 bits 256-bit vector horizontal op redundant shuffle removal
llvm-svn: 337561
2018-07-20 15:24:12 +00:00
Nirav Dave 5a4e11ad9c [DAG] Fix Memory ordering check in ReduceLoadOpStore.
When merging through a TokenFactor we need to check that the
load may be ordered such that no other aliasing memory operations may
happen. It is not sufficient to just check that the load is a member
of the chain token factor, as there may be an indirect chain. Require
that the load's chain has only one use.

This fixes PR37826.

Reviewers: spatel, davide, efriedma, craig.topper, RKSimon

Subscribers: hiraditya, llvm-commits

Differential Revision: https://reviews.llvm.org/D49388

llvm-svn: 337560
2018-07-20 15:20:50 +00:00
Florian Hahn ec3ca89a17 [IPSCCP] Fix for bot failure caused by r337548
llvm-svn: 337554
2018-07-20 14:37:10 +00:00
Florian Hahn 0a560d5d9c Recommit r328307: [IPSCCP] Use constant range information for comparisons of parameters.
This version contains a fix that adds values whose state in ParamState changed
to the worklist even if their state in ValueState did not change. To avoid adding the
same value multiple times, mergeInValue returns true if it added the value to
the worklist. The value is added to the worklist depending on its state in
ValueState.

Original message:
For comparisons with parameters, we can use the ParamState lattice
elements which also provide constant range information. This improves
the code for PR33253 further and gets us closer to use
ValueLatticeElement for all values.

Also, as we are using the range information in the solver directly, we
do not need tryToReplaceWithConstantRange afterwards anymore.

Reviewers: dberlin, mssimpso, davide, efriedma

Reviewed By: mssimpso

Differential Revision: https://reviews.llvm.org/D43762

llvm-svn: 337548
2018-07-20 13:29:12 +00:00
Simon Pilgrim 6fb8b68b2d [X86][AVX] Convert X86ISD::VBROADCAST demanded elts combine to use SimplifyDemandedVectorElts
This is an early step towards using SimplifyDemandedVectorElts for target shuffle combining - this merely moves the existing X86ISD::VBROADCAST simplification code to use the SimplifyDemandedVectorElts mechanism.

Adds X86TargetLowering::SimplifyDemandedVectorEltsForTargetNode to handle X86ISD::VBROADCAST - in time we can support all target shuffles (and other ops) here.

llvm-svn: 337547
2018-07-20 13:26:51 +00:00
Chen Zheng f801d0fea9 [InstSimplify] fold srem instruction if its two operands are negated.
Differential Revision: https://reviews.llvm.org/D49423

llvm-svn: 337545
2018-07-20 13:00:47 +00:00
Pavel Labath f9adc20aef [DebugInfo] Generate .debug_names section when it makes sense
Summary:
This patch makes us generate the debug_names section in response to some
user-facing commands (previously it was only generated if explicitly
selected via the -accel-tables option).

My goal was to make this work for DWARF>=5 (as it's an official part of
that standard), and also, as an extension, for DWARF<5 if one is
explicitly tuning for lldb as a debugger (because it brings a large
performance improvement there).

This is slightly complicated by the fact that the debug_names tables are
incompatible with the DWARF v4 type units (they assume that the type
units are in the debug_info section), and unfortunately, right now we
generate DWARF v4-style type units even for -gdwarf-5. For this reason,
I disable all accelerator tables if the user requested type unit
generation. I do this even for apple tables, as they have the same
problem (in fact generating type units for apple targets makes us crash
even before we get around to emitting the accelerator tables).

Reviewers: JDevlieghere, aprantl, dblaikie, echristo, probinson

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D49420

llvm-svn: 337544
2018-07-20 12:59:05 +00:00
Sam McCall 57743883f1 Revert "[LSV] Refactoring + supporting bitcasts to a type of different size"
This reverts commit r337489.
It causes asserts to fire in some TensorFlow tests, e.g.
tensorflow/compiler/tests/gather_test.py on GPU.

Example stack trace:
Start test case: GatherTest.testHigherRank
assertion failed at third_party/llvm/llvm/lib/Support/APInt.cpp:819 in llvm::APInt llvm::APInt::trunc(unsigned int) const: width && "Can't truncate to 0 bits"
    @     0x5559446ebe10  __assert_fail
    @     0x55593ef32f5e  llvm::APInt::trunc()
    @     0x55593d78f86e  (anonymous namespace)::Vectorizer::lookThroughComplexAddresses()
    @     0x55593d78f2bc  (anonymous namespace)::Vectorizer::areConsecutivePointers()
    @     0x55593d78d128  (anonymous namespace)::Vectorizer::isConsecutiveAccess()
    @     0x55593d78c926  (anonymous namespace)::Vectorizer::vectorizeInstructions()
    @     0x55593d78c221  (anonymous namespace)::Vectorizer::vectorizeChains()
    @     0x55593d78b948  (anonymous namespace)::Vectorizer::run()
    @     0x55593d78b725  (anonymous namespace)::LoadStoreVectorizer::runOnFunction()
    @     0x55593edf4b17  llvm::FPPassManager::runOnFunction()
    @     0x55593edf4e55  llvm::FPPassManager::runOnModule()
    @     0x55593edf563c  (anonymous namespace)::MPPassManager::runOnModule()
    @     0x55593edf5137  llvm::legacy::PassManagerImpl::run()
    @     0x55593edf5b71  llvm::legacy::PassManager::run()
    @     0x55593ced250d  xla::gpu::IrDumpingPassManager::run()
    @     0x55593ced5033  xla::gpu::(anonymous namespace)::EmitModuleToPTX()
    @     0x55593ced40ba  xla::gpu::(anonymous namespace)::CompileModuleToPtx()
    @     0x55593ced33d0  xla::gpu::CompileToPtx()
    @     0x55593b26b2a2  xla::gpu::NVPTXCompiler::RunBackend()
    @     0x55593b21f973  xla::Service::BuildExecutable()
    @     0x555938f44e64  xla::LocalService::CompileExecutable()
    @     0x555938f30a85  xla::LocalClient::Compile()
    @     0x555938de3c29  tensorflow::XlaCompilationCache::BuildExecutable()
    @     0x555938de4e9e  tensorflow::XlaCompilationCache::CompileImpl()
    @     0x555938de3da5  tensorflow::XlaCompilationCache::Compile()
    @     0x555938c5d962  tensorflow::XlaLocalLaunchBase::Compute()
    @     0x555938c68151  tensorflow::XlaDevice::Compute()
    @     0x55593f389e1f  tensorflow::(anonymous namespace)::ExecutorState::Process()
    @     0x55593f38a625  tensorflow::(anonymous namespace)::ExecutorState::ScheduleReady()::$_1::operator()()
*** SIGABRT received by PID 7798 (TID 7837) from PID 7798; ***

llvm-svn: 337541
2018-07-20 12:03:00 +00:00
Jonas Paulsson c88d3f6a99 [SystemZ] Reimplement SchedModel IssueWidth and WriteRes/ReadAdvance mappings.
As a consequence of recent discussions
(http://lists.llvm.org/pipermail/llvm-dev/2018-May/123164.html), this patch
changes the SystemZ SchedModels so that the IssueWidth is 6, which is the
decoder capacity, and NumMicroOps become the number of decoder slots needed
per instruction.

In addition, the SchedWrite latencies now match the MachineInstructions
def-operand indexes, and ReadAdvances have been added on instructions with
one register operand and one memory operand.

Review: Ulrich Weigand
https://reviews.llvm.org/D47008

llvm-svn: 337538
2018-07-20 09:40:43 +00:00
Andrew V. Tischenko ee2e3144ba Improved sched model for X86 BSWAP* instrs.
Differential Revision: https://reviews.llvm.org/D49477

llvm-svn: 337537
2018-07-20 09:39:14 +00:00
Matt Arsenault 4bec7d4261 Reapply "AMDGPU: Fix handling of alignment padding in DAG argument lowering"
Reverts r337079 with fix for msan error.

llvm-svn: 337535
2018-07-20 09:05:08 +00:00
Sander de Smalen 33f588acb9 [AArch64][SVE] Asm: Support for bit/byte reverse operations.
This patch adds the following instructions:

  RBIT      reverse bits within each active element (predicated), e.g.
                rbit z0.d, p0/m, z1.d

            for 8, 16, 32 and 64 bit elements.

  REV       reverse order of elements in data/predicate vector
            (unpredicated), e.g.
                rev z0.d, z1.d
                rev p0.d, p1.d

            for 8, 16, 32 and 64 bit elements.

  REVB      reverse order of bytes within each active element, e.g.
                revb z0.d, p0/m, z1.d

            for 16, 32 and 64 bit elements.

  REVH      reverse order of 16-bit half-words within each active
            element, e.g.
                revh z0.d, p0/m, z1.d

            for 32 and 64 bit elements.

  REVW      reverse order of 32-bit words within each active element,
            e.g.
                revw z0.d, p0/m, z1.d

            for 64 bit elements.

llvm-svn: 337534
2018-07-20 09:00:44 +00:00
Sander de Smalen 3ed7f81ce1 [AArch64][SVE] Asm: Support for FTMAD instruction.
Floating-point trigonometric multiply-add coefficient,
e.g.

  ftmad z0.h, z0.h, z1.h, #7

with variants for 16, 32 and 64-bit elements.

llvm-svn: 337533
2018-07-20 08:47:26 +00:00
Craig Topper d8734450a2 [DAGCombiner] Fold X - (-Y *Z) -> X + (Y * Z)
llvm-svn: 337518
2018-07-20 01:40:03 +00:00
Duncan P. N. Exon Smith c03b04d533 Reapply "ADT: Shrink size of SmallVector by 8B on 64-bit platforms"
I'm optimistically reverting commit r337511, effectively reapplying
r337504 *without* changes.

The failing bots that had `SmallVector` in the backtrace recovered after
the unrelated commit r337508.  The backtraces looked bogus anyway, with
`SmallVector::size()` calling (e.g.) `ConstantArray::get()`.

Here's the original commit message:

    ADT: Shrink size of SmallVector by 8B on 64-bit platforms

    Represent size and capacity directly as unsigned and calculate
    `end()` using `begin() + size()`.

    This limits the maximum size/capacity of a vector to UINT32_MAX.

    https://reviews.llvm.org/D48518
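
A hedged sketch of the layout change (field names illustrative):

```
#include <cstdint>

// Before: three pointers, 24 bytes of header on 64-bit.
struct OldLayout {
  void *BeginX, *EndX, *CapacityX;
};

// After: one pointer plus two 32-bit counts, 16 bytes; end() is derived
// as begin() + size(), and size/capacity are capped at UINT32_MAX.
struct NewLayout {
  void *BeginX;
  uint32_t Size, Capacity;
};

static_assert(sizeof(OldLayout) - sizeof(NewLayout) == 8,
              "8 bytes saved per vector on 64-bit platforms");
```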

llvm-svn: 337514
2018-07-20 00:44:58 +00:00
Heejin Ahn 022a37af77 [WebAssembly] Disable a test that violates DR1696
Summary:
lifetime2.C violates DR1696, which prevents reference members from being
initialized to temporaries, whose lifetime would end at the end of the ctor.

Reviewers: sbc100

Subscribers: dschuff, sunfish, llvm-commits

Differential Revision: https://reviews.llvm.org/D49577

llvm-svn: 337512
2018-07-20 00:13:42 +00:00
Duncan P. N. Exon Smith 42f20f3c55 Revert "ADT: Shrink size of SmallVector by 8B on 64-bit platforms"
This reverts commit r337504 while I investigate a TSan bot failure that
seems related:

http://lab.llvm.org:8011/builders/sanitizer-x86_64-linux-autoconf/builds/26526

    #8 0x000055581f2895d8 (/b/sanitizer-x86_64-linux-autoconf/build/tsan_debug_build/bin/clang-7+0x1eb45d8)
    #9 0x000055581f294323 llvm::ConstantAggrKeyType<llvm::ConstantArray>::create(llvm::ArrayType*) const /b/sanitizer-x86_64-linux-autoconf/build/llvm/lib/IR/ConstantsContext.h:409:0
    #10 0x000055581f294323 llvm::ConstantUniqueMap<llvm::ConstantArray>::create(llvm::ArrayType*, llvm::ConstantAggrKeyType<llvm::ConstantArray>, std::pair<unsigned int, std::pair<llvm::ArrayType*, llvm::ConstantAggrKeyType<llvm::ConstantArray> > >&) /b/sanitizer-x86_64-linux-autoconf/build/llvm/lib/IR/ConstantsContext.h:635:0
    #11 0x000055581f294323 llvm::ConstantUniqueMap<llvm::ConstantArray>::getOrCreate(llvm::ArrayType*, llvm::ConstantAggrKeyType<llvm::ConstantArray>) /b/sanitizer-x86_64-linux-autoconf/build/llvm/lib/IR/ConstantsContext.h:654:0
    #12 0x000055581f2944cb llvm::ConstantArray::get(llvm::ArrayType*, llvm::ArrayRef<llvm::Constant*>) /b/sanitizer-x86_64-linux-autoconf/build/llvm/lib/IR/Constants.cpp:964:0
    #13 0x000055581fa27e19 llvm::SmallVectorBase::size() const /b/sanitizer-x86_64-linux-autoconf/build/llvm/include/llvm/ADT/SmallVector.h:53:0
    #14 0x000055581fa27e19 llvm::SmallVectorImpl<llvm::Constant*>::resize(unsigned long) /b/sanitizer-x86_64-linux-autoconf/build/llvm/include/llvm/ADT/SmallVector.h:347:0
    #15 0x000055581fa27e19 (anonymous namespace)::EmitArrayConstant(clang::CodeGen::CodeGenModule&, clang::ConstantArrayType const*, llvm::Type*, unsigned int, llvm::SmallVectorImpl<llvm::Constant*>&, llvm::Constant*) /b/sanitizer-x86_64-linux-autoconf/build/llvm/tools/clang/lib/CodeGen/CGExprConstant.cpp:669:0

llvm-svn: 337511
2018-07-20 00:09:56 +00:00
Chandler Carruth a3a03ac247 [x86/SLH] Clean up helper naming for return instruction handling and
remove dead declaration of a call instruction handling helper.

This moves to the 'harden' terminology that I've been trying to settle
on for returns. It also adds a really detailed comment explaining what
all we're trying to accomplish with return instructions and why.
Hopefully this makes it much more clear what exactly is being
"hardened".

Differential Revision: https://reviews.llvm.org/D49571

llvm-svn: 337510
2018-07-19 23:46:24 +00:00
Eli Friedman a3c78f5981 [SCCP] Don't use markForcedConstant on branch conditions.
It's more aggressive than we need to be, and leads to strange
workarounds in other places like call return value inference. Instead,
just directly mark an edge viable.

Tests by Florian Hahn.

Differential Revision: https://reviews.llvm.org/D49408

llvm-svn: 337507
2018-07-19 23:02:07 +00:00
Stephen Canon 8995c5f0f6 Skip out of SimplifyDemandedBits for BITCAST of f16 to i16
Mirrors the existing exit path for f128, avoiding a crash later on.

Differential Revision: https://reviews.llvm.org/D49524

llvm-svn: 337506
2018-07-19 22:46:42 +00:00
Duncan P. N. Exon Smith 6acb8cee05 ADT: Shrink size of SmallVector by 8B on 64-bit platforms
Represent size and capacity directly as unsigned and calculate
`end()` using `begin() + size()`.

This limits the maximum size/capacity of a vector to UINT32_MAX.

https://reviews.llvm.org/D48518

llvm-svn: 337504
2018-07-19 22:29:47 +00:00
Teresa Johnson 0c432b1a70 [ThinLTO] Only emit referenced type id records in index files
Summary:
Currently all type ids are emitted into the index file when it is
written. For distributed ThinLTO, that meant that all type ids were
being duplicated into every single distributed index file, regardless of
whether they were referenced, leading to huge amounts of unnecessary
duplication and size bloat.

Keep track of the type id GUIDs actually referenced by the GV summary
records being emitted, and only emit those type IDs.

Add a new test, and fix test/Assembler/thinlto-summary.ll so that all
type ids are referenced to prevent deletion in that test.

Reviewers: pcc

Subscribers: mehdi_amini, inglorion, eraman, steven_wu, dexonsmith, vitalybuka, llvm-commits

Differential Revision: https://reviews.llvm.org/D49565

llvm-svn: 337503
2018-07-19 22:25:56 +00:00
Craig Topper c12c5d421f [DAGCombiner] Teach DAGCombiner that A-(-B) is A+B.
We already knew A+(-B) is A-B in visitAdd. This does the opposite for visitSub.
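
The identity is sound for any bit width in two's complement arithmetic; a
hedged stand-alone check:

```
#include <cassert>
#include <cstdint>

int main() {
  // A - (0 - B) == A + B (mod 2^32), which is what the combine relies on.
  uint32_t A = 0x12345678u, B = 0xDEADBEEFu;
  assert(A - (0u - B) == A + B);
  return 0;
}
```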

llvm-svn: 337502
2018-07-19 22:24:43 +00:00
Simon Pilgrim 1d181bc992 [X86][AVX] Use extract_subvector to reduce vector op widths (PR36761)
We have a number of cases where we fail to reduce vector op widths, performing the op in a larger vector and then extracting a subvector. This is often because by default it would create illegal types.

This peephole patch attempts to handle a few common cases detailed in PR36761, which typically involved extension+conversion to vX2f64 types.

Differential Revision: https://reviews.llvm.org/D49556

llvm-svn: 337500
2018-07-19 21:52:06 +00:00
Craig Topper 9888670c6b [X86] Fix some 'return SDValue()' after DCI.CombineTo instead return the output of CombineTo
Returning SDValue() means nothing was changed. Returning the result of CombineTo returns the first argument of CombineTo. This is specially detected by DAGCombiner as meaning that something changed, but worklist management was already taken care of.

I think the only real effect of this change is that we now properly update the Statistic that counts the number of combines performed. That's the only thing between the check for null and the check for N in the DAGCombiner.

llvm-svn: 337491
2018-07-19 20:10:44 +00:00
Roman Tereshin b49b2a601f [LSV] Refactoring + supporting bitcasts to a type of different size
This is mostly preparation work for adding limited support for
select instructions. That proved difficult to do due to the size and
irregularity of Vectorizer::isConsecutiveAccess; I believe this is
fixed here.

It also turned out that these changes make it simpler to finish one of
the TODOs and fix a number of other small issues, namely:

1. Looking through bitcasts to a type of a different size (requires
careful tracking of the original load/store size and some math
converting sizes in bytes to expected differences in indices of GEPs).

2. Reusing the partial analysis of pointers done by the first attempt at
proving them consecutive instead of starting from scratch. This added limited
support for nested GEPs co-existing with difficult sext/zext
instructions. This also required a careful handling of negative
differences between constant parts of offsets.

3. Handling a case where the first pointer index is not an add, but
something else (a function parameter for instance).

I observe an increased number of successful vectorizations on a large
set of shader programs. Only few shaders are affected, but those that
are affected sport >5% less loads and stores than before the patch.

Reviewed By: rampitec

Differential-Revision: https://reviews.llvm.org/D49342
llvm-svn: 337489
2018-07-19 19:42:43 +00:00
Stefan Pintilie 0c122d5a41 [Power9] Code Cleanup - Remove needsAggressiveScheduling()
As we already return true from needsAggressiveScheduling() for the most recent
hardware, it would be cleaner to just return true for all PowerPC hardware.

Differential Revision: https://reviews.llvm.org/D48663

llvm-svn: 337488
2018-07-19 19:34:18 +00:00
Shoaib Meenai a57d7139b3 [Analysis] Fix typo in assert. NFC
Test commit to see if my mailing list woes have been resolved.

llvm-svn: 337485
2018-07-19 19:11:29 +00:00
Krzysztof Parzyszek 55a0dcee07 [APInt] Keep the original bit width in quotient and remainder
Some trivial cases in udivrem were handled by directly assigning 0 or 1
to APInt objects. This would set the bit width to 1, instead of the bit
width of the inputs. A potentially undesirable side effect of that is
that with the bit width of 1, 1 equals -1.
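
A hedged example of the trap using the public API:

```
#include "llvm/ADT/APInt.h"
using namespace llvm;

void demo() {
  APInt A(8, 7), B(8, 7), Q(8, 0), R(8, 0);
  APInt::udivrem(A, B, Q, R); // trivial case: quotient 1, remainder 0
  // Before this fix, the trivial path could assign a fresh 1-bit APInt,
  // and a 1-bit value of 1 reads back as -1:
  //   APInt(1, 1).getSExtValue() == -1
  (void)Q; (void)R;
}
```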

Differential Revision: https://reviews.llvm.org/D49554

llvm-svn: 337478
2018-07-19 18:07:56 +00:00
Farhana Aleen 8c7a30baea [LoadStoreVectorizer] Use getMinusScev() to compute the distance between two pointers.
Summary: Currently, isConsecutiveAccess() detects two pointers (PtrA and PtrB) as consecutive by
         comparing PtrB with BaseDelta+PtrA. This works when both pointers are factorized or
         both of them are not factorized. But isConsecutiveAccess() fails if one of the
         pointers is factorized but the other one is not.

         Here is an example:
         PtrA = 4 * (A + B)
         PtrB = 4 + 4A + 4B

         This patch uses getMinusSCEV() to compute the distance between two pointers.
         getMinusSCEV() allows combining the expressions and computing the simplified distance.

Author: FarhanaAleen

Reviewed By: rampitec

Differential Revision: https://reviews.llvm.org/D49516

llvm-svn: 337471
2018-07-19 16:50:27 +00:00
Andrea Di Biagio b6022aa8d9 [X86][BtVer2] correctly model the latency/throughput of LEA instructions.
This patch fixes the latency/throughput of LEA instructions in the BtVer2
scheduling model.

On Jaguar, a 3-operand LEA has a latency of 2cy and a reciprocal throughput of
1. That is because it uses one cycle of SAGU followed by 1cy of ALU1. An LEA
with a "Scale" operand is also slow, and it has the same latency profile as the
3-operand LEA. An LEA16r has a latency of 3cy and a throughput of 0.5 (i.e. an
RThroughput of 2.0).

This patch adds a new TIIPredicate named IsThreeOperandsLEAFn to X86Schedule.td.
The tablegen backend (for instruction-info) expands that definition into this
(file X86GenInstrInfo.inc):
```
static bool isThreeOperandsLEA(const MachineInstr &MI) {
  return (
    (
      MI.getOpcode() == X86::LEA32r
      || MI.getOpcode() == X86::LEA64r
      || MI.getOpcode() == X86::LEA64_32r
      || MI.getOpcode() == X86::LEA16r
    )
    && MI.getOperand(1).isReg()
    && MI.getOperand(1).getReg() != 0
    && MI.getOperand(3).isReg()
    && MI.getOperand(3).getReg() != 0
    && (
      (
        MI.getOperand(4).isImm()
        && MI.getOperand(4).getImm() != 0
      )
      || (MI.getOperand(4).isGlobal())
    )
  );
}
```

A similar method is generated in the X86_MC namespace, and included into
X86MCTargetDesc.cpp (the declaration lives in X86MCTargetDesc.h).

Back to the BtVer2 scheduling model:
A new scheduling predicate named JSlowLEAPredicate now checks if either the
instruction is a three-operand LEA, or it is an LEA with a Scale value
different from 1.
A variant scheduling class uses that new predicate to correctly select the
appropriate latency profile.

Differential Revision: https://reviews.llvm.org/D49436

llvm-svn: 337469
2018-07-19 16:42:15 +00:00
Teresa Johnson 28023dbed7 [ThinLTO] Enable ThinLTO WholeProgramDevirt and LowerTypeTests in new PM
Summary:
Enable these passes for CFI and WPD in ThinLTO and LTO with the new pass
manager. Add a couple of tests for both PMs based on the clang tests
tools/clang/test/CodeGen/thinlto-distributed-cfi*.ll, but just test
through llvm-lto2 and not with distributed ThinLTO.

Reviewers: pcc

Subscribers: mehdi_amini, inglorion, eraman, steven_wu, dexonsmith, llvm-commits

Differential Revision: https://reviews.llvm.org/D49429

llvm-svn: 337461
2018-07-19 14:51:32 +00:00
Tim Northover a7c18ee8c4 ARM: switch armv7em MachO triple to hard-float defaults and libcalls.
We were emitting incorrect calls to libm functions that LLVM had decided it
knew about because the default is soft-float.

Recommitted without breaking ELF this time.

llvm-svn: 337450
2018-07-19 12:44:51 +00:00
Chandler Carruth 4b0028a3d1 [x86/SLH] Major refactoring of SLH implementation. There are two big
changes that are intertwined here:

1) Extracting the tracing of predicate state through the CFG to its own
   function.
2) Creating a struct to manage the predicate state used throughout the
   pass.

Doing #1 necessitates and motivates the particular approach for #2 as
now the predicate management is spread across different functions
focused on different aspects of it. A number of simplifications then
fell out as a direct consequence.

I went with an Optional to make it more natural to construct the
MachineSSAUpdater object.

This is probably the single largest outstanding refactoring step I have.
Things get a bit more surgical from here. My current goal, beyond
generally making this maintainable long-term, is to implement several
improvements to how we do interprocedural tracking of predicate state.
But I don't want to do that until the predicate state management and
tracing is in reasonably clear state.

Differential Revision: https://reviews.llvm.org/D49427

llvm-svn: 337446
2018-07-19 11:13:58 +00:00
Simon Pilgrim dd7bf598cc Fix spelling mistake in comments. NFCI.
llvm-svn: 337442
2018-07-19 09:14:39 +00:00
Max Kazantsev d41faecc49 [SCEV] Fix buggy behavior in getAddExpr with truncs
SCEV tries to constant-fold arguments of trunc operands in SCEVAddExpr, and when it does
that, it passes wrong flags into the recursion. It is only valid to pass flags that are proved for
narrow type into a computation in wider type if we can prove that trunc instruction doesn't
actually change the value. If it did lose some meaningful bits, we may end up proving wrong
no-wrap flags for the sum of the trunc's arguments.

In the provided test we end up with `nuw` where it shouldn't be because of this bug.

The solution is to conservatively pass `SCEV::FlagAnyWrap` which is always a valid thing to do.

Reviewed By: sanjoy
Differential Revision: https://reviews.llvm.org/D49471

llvm-svn: 337435
2018-07-19 01:46:21 +00:00
Peter Collingbourne 4a653fa7f1 Rename __asan_gen_* symbols to ___asan_gen_*.
This prevents gold from printing a warning when trying to export
these symbols via the asan dynamic list after ThinLTO promotes them
from private symbols to external symbols with hidden visibility.

Differential Revision: https://reviews.llvm.org/D49498

llvm-svn: 337428
2018-07-18 22:23:14 +00:00
Heejin Ahn 47068a42d2 [WebAssembly] Add missing -mattr=+exception-handling guards
Summary:
The use of exception handling instructions should only be enabled with the
`-mattr=+exception-handling` option.

Reviewers: jgravelle-google

Subscribers: dschuff, sbc100, sunfish, llvm-commits

Differential Revision: https://reviews.llvm.org/D49391

llvm-svn: 337425
2018-07-18 21:42:22 +00:00
Tim Northover da142d10d9 Revert "ARM: switch armv7em triple to hard-float defaults and libcalls."
This reverts commit r337385 until it can be targeted at MachO only.

llvm-svn: 337424
2018-07-18 21:32:49 +00:00
Simon Pilgrim d4b82da113 [X86][SSE] Canonicalize scalar fp arithmetic shuffle patterns
As discussed on PR38197, this canonicalizes MOVS*(N0, OP(N0, N1)) --> MOVS*(N0, SCALAR_TO_VECTOR(OP(N0[0], N1[0])))

This returns the scalar-fp codegen lost by rL336971.

Additionally it handles the OP(N1, N0) case for commutable (FADD/FMUL) ops.

Differential Revision: https://reviews.llvm.org/D49474

llvm-svn: 337419
2018-07-18 19:55:19 +00:00
Xin Tong 074ccf32ce Skip debuginfo intrinsic in markLiveBlocks.
Summary:
The optimizer is 10%+ slower with vs without debuginfo. I started checking where
the difference is coming from.

I compiled sqlite3.c from CTMark with and without debug info and compared the time difference.

I used Xcode Instruments to find where the time is spent. This saves about 20ms out of ~20s.

Reviewers: davide, hfinkel

Reviewed By: hfinkel

Subscribers: hfinkel, aprantl, JDevlieghere, llvm-commits

Differential Revision: https://reviews.llvm.org/D49337

llvm-svn: 337416
2018-07-18 18:40:45 +00:00
David Blaikie d66140514d [DebugInfo] Dwarfv5: Avoid unnecessary base_address specifiers in rnglists
Since DWARFv5 rnglists are self descriptive and have distinct encodings
for base-relative (offset_pair) and absolute (start_length) entries,
there's no need to use a base address specifier when describing a lone
address range in a section.

Use that, and improve the test coverage a bit here to include cases like
this and others.

llvm-svn: 337411
2018-07-18 18:04:42 +00:00
Nirav Dave a747d3ca60 [ScheduleDAG] Fix unfolding of SUnits to already existent nodes.
Summary:
If unfolding an SUnit results in both the load and the operation using it
already existing in the DAG, abort the unfold if they are already scheduled.
If not, make sure we don't add duplicate dependencies.

This fixes PR37916.

Reviewers: davide, eli.friedman, fhahn, bogner

Subscribers: MatzeB, hiraditya, llvm-commits

Differential Revision: https://reviews.llvm.org/D48666

llvm-svn: 337409
2018-07-18 18:01:03 +00:00
Wei Mi 10ef7d20b2 [RegAlloc][NFC] Fix the help string of the option "huge-size-for-split".
llvm-svn: 337402
2018-07-18 16:56:33 +00:00
Nirav Dave e24fcd5382 [MC] Fix nested macro body parsing
Add missing .rep case in nestlevel checking for macro body parsing.

llvm-svn: 337398
2018-07-18 16:17:03 +00:00
Simon Atanasyan e9d7b3198a [mips] Fix predicate for the MipsTruncIntFP pattern
This is a follow-up to rL337171. This patch fixes a regression
introduced by r337171 and enables the MipsTruncIntFP pattern.

Differential revision: https://reviews.llvm.org/D49469

llvm-svn: 337392
2018-07-18 14:11:22 +00:00
Simon Pilgrim 2b37ddce4b [SLPVectorizer] Avoid duplicate scalar cost calculations in BoUpSLP::getEntryCost. NFCI.
Pulled out from D49225, we have a lot of repeated scalar cost calculations, often with arguments that don't look the same but turn out to be.

llvm-svn: 337390
2018-07-18 13:53:55 +00:00
Tim Northover 43d64b0b36 [Support] Build fix for Haiku when checking for a local filesystem
Haiku does not expose information about local versus remote mounts, so just
return false, like Cygwin.

Patch by Niels Sascha Reedijk.

llvm-svn: 337389
2018-07-18 13:42:18 +00:00
Simon Pilgrim 3a45369b9e [X86][SSE] Remove BLENDPD canonicalization from combineTargetShuffle
When rL336971 removed the scalar-fp isel patterns, we lost the need for this canonicalization - commutation/folding can handle everything else.

llvm-svn: 337387
2018-07-18 13:01:20 +00:00
Tim Northover e00cf4fc68 ARM: stop explicitly marking armv7k libcalls as hard-float. NFC.
Since the triple's default is hard float, the libcalls will already use VFP
registers.

llvm-svn: 337386
2018-07-18 12:37:43 +00:00
Tim Northover d4abd14c1b ARM: switch armv7em triple to hard-float defaults and libcalls.
We were emitting incorrect calls to libm functions that LLVM had decided it
knew about because the default is soft-float.

llvm-svn: 337385
2018-07-18 12:37:04 +00:00
Tim Northover 097a3e3d95 ARM: deduplicate hard-float detection code. NFC.
ARMSubtarget had a copy/pasted block to determine whether the target was
hard-float, but it just delegated to triple features anyway so it's better at
the TargetMachine level.

llvm-svn: 337384
2018-07-18 12:36:25 +00:00
Sander de Smalen 330d887d72 [AArch64][SVE] Asm: Support for unpredicated FP operations.
This patch adds support for the following unpredicated
floating-point instructions:

  FADD      Floating point add
  FSUB      Floating point subtract
  FMUL      Floating point multiplication
  FTSMUL    Floating point trigonometric starting value
  FRECPS    Floating point reciprocal step
  FRSQRTS   Floating point reciprocal square root step

The instructions have the following assembly format:
  fadd z0.h, z1.h, z2.h
and have variants for 16, 32 and 64-bit FP elements.

llvm-svn: 337383
2018-07-18 11:59:12 +00:00
Roman Lebedev 3cb87e905c [InstCombine] Re-commit: Fold 'check for [no] signed truncation' pattern
Summary:
[[ https://bugs.llvm.org/show_bug.cgi?id=38149 | PR38149 ]]

As discussed in https://reviews.llvm.org/D49179#1158957 and later,
the IR for 'check for [no] signed truncation' pattern can be improved:
https://rise4fun.com/Alive/gBf
^ that pattern will be produced by Implicit Integer Truncation sanitizer,
https://reviews.llvm.org/D48958 https://bugs.llvm.org/show_bug.cgi?id=21530
in the signed case, therefore it is probably a good idea to improve it.

The DAGCombine will reverse this transform, see
https://reviews.llvm.org/D49266

This transform is surprisingly frustrating.
This does not deal with non-splat shift amounts, or with undef shift amounts.
I've outlined what i think the solution should be:
```
  // Potential handling of non-splats: for each element:
  //  * if both are undef, replace with constant 0.
  //    Because (1<<0) is OK and is 1, and ((1<<0)>>1) is also OK and is 0.
  //  * if both are not undef, and are different, bailout.
  //  * else, only one is undef, then pick the non-undef one.
```

This is a re-commit, as the original patch, committed in rL337190
was reverted in rL337344 as it broke chromium build:
https://bugs.llvm.org/show_bug.cgi?id=38204 and
https://crbug.com/864832
Proofs that the fixed folds are ok: https://rise4fun.com/Alive/VYM
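
As a hedged C-level illustration (assuming the usual 'does this value fit in
N signed bits' reading of the pattern; the constants below check that a
32-bit value fits in 8 signed bits):
```
#include <cstdint>

// Shift-based form of the check, the shape the sanitizer emits in IR terms.
bool fits_shift(int32_t x) {
  return (int32_t(uint32_t(x) << 24) >> 24) == x;
}

// Add-and-compare form the fold produces: one add plus an unsigned compare.
bool fits_add(int32_t x) {
  return uint32_t(x) + 128u < 256u;
}
```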

Differential Revision: https://reviews.llvm.org/D49320

llvm-svn: 337376
2018-07-18 10:55:17 +00:00
Daniel Cederman 959c8bf51c Revert "[Sparc] Use the IntPair reg class for r constraints with value type f64"
This reverts commit 55222c9183c6e07f53a54c4061677734f54feac1.

I missed that this patch has a dependency on https://reviews.llvm.org/D49219
that has not been approved yet.

llvm-svn: 337373
2018-07-18 10:05:30 +00:00
Sander de Smalen ccdc7ebc1d [AArch64][SVE] Asm: Support for UDOT/SDOT instructions.
The signed/unsigned DOT instructions perform a dot-product on
quadruplets from two source vectors and accumulate the result in
the destination register. The instructions come in two forms:

Vector form, e.g.
  sdot  z0.s, z1.b, z2.b     - signed dot product on four 8-bit quadruplets,
                               accumulating results in 32-bit elements.

  udot  z0.d, z1.h, z2.h     - unsigned dot product on four 16-bit quadruplets,
                               accumulating results in 64-bit elements.

Indexed form, e.g.
  sdot  z0.s, z1.b, z2.b[3]  - signed dot product on four 8-bit quadruplets
                               with the specified quadruplet from the second
                               source vector, accumulating results in 32-bit
                               elements.
  udot  z0.d, z1.h, z2.h[1]  - unsigned dot product on four 16-bit quadruplets
                               with the specified quadruplet from the second
                               source vector, accumulating results in 64-bit
                               elements.

llvm-svn: 337372
2018-07-18 09:37:51 +00:00
Daniel Cederman 4e38df18ea [Sparc] Use the IntPair reg class for r constraints with value type f64
Summary: This is how it appears to be handled in GCC and it prevents an
"Unknown mismatch" error in the SelectionDAGBuilder.

Reviewers: venkatra, jyknight, jrtc27

Reviewed By: jyknight, jrtc27

Subscribers: eraman, fedor.sergeev, jrtc27, llvm-commits

Differential Revision: https://reviews.llvm.org/D49218

llvm-svn: 337370
2018-07-18 09:25:33 +00:00
Sander de Smalen 889fe81ce5 [AArch64][SVE] Asm: Integer divide instructions.
This patch adds the following predicated instructions:

  UDIV    Unsigned divide active elements
  UDIVR   Unsigned divide active elements, reverse form.
  SDIV    Signed divide active elements
  SDIVR   Signed divide active elements, reverse form.

e.g.
  udiv  z0.s, p0/m, z0.s, z1.s
    (unsigned divide active elements in z0 by z1, store result in z0)

  sdivr z0.s, p0/m, z0.s, z1.s
    (signed divide active elements in z1 by z0, store result in z0)

llvm-svn: 337369
2018-07-18 09:17:29 +00:00
Simon Pilgrim 3cb3056fca Fix -Wdocumentation warning. NFCI.
llvm-svn: 337367
2018-07-18 09:07:54 +00:00
Sander de Smalen ac0cb5bf75 [AArch64][SVE] Asm: Support for integer MUL instructions.
This patch adds the following instructions:
  MUL   - multiply vectors, e.g.
    mul z0.h, p0/m, z0.h, z1.h
        - multiply with immediate, e.g.
    mul z0.h, z0.h, #127

  SMULH - signed multiply returning high half, e.g.
    smulh z0.h, p0/m, z0.h, z1.h

  UMULH - unsigned multiply returning high half, e.g.
    umulh z0.h, p0/m, z0.h, z1.h

llvm-svn: 337358
2018-07-18 08:10:03 +00:00
Craig Topper 92ea7a7b48 [X86] Enable commuting of VUNPCKHPD to VMOVLHPS to enable load folding by using VMOVLPS with a modified address.
This required an annoying amount of tablegen multiclass changes to make only VUNPCKHPDZ128rr commutable.

llvm-svn: 337357
2018-07-18 07:31:32 +00:00
Hiroshi Inoue cd83d459bc [NFC] fix trivial typos in comments
llvm-svn: 337351
2018-07-18 06:04:43 +00:00
Justin Hibbits 22e939a15b Fix build failures from r337347, found by clang
* Delete a no-longer-used override, and mark the other
getRegisterTypeForCallingConv() as override.
* SPE only supports i32, not i64, as the internal type, so simply remove
the type check so that DestReg and Opc are provably always set.

GCC 6.4 did not warn about either of the above.

llvm-svn: 337350
2018-07-18 05:19:25 +00:00
Craig Topper 95063a45b8 [X86] Remove patterns that mix X86ISD::MOVLHPS/MOVHLPS with v2i64/v2f64 types.
The X86ISD::MOVLHPS/MOVHLPS nodes should now be emitted only for SSE1. This means that the v2i64/v2f64 types would be illegal, so we don't need these patterns.

llvm-svn: 337349
2018-07-18 05:10:53 +00:00
Craig Topper 1425e10cc6 [X86] Generate v2f64 X86ISD::UNPCKL/UNPCKH instead of X86ISD::MOVLHPS/MOVHLPS for unary v2f64 {0,0} and {1,1} shuffles with SSE2.
I'm trying to restrict the MOVLHPS/MOVHLPS ISD nodes to SSE1 only. With SSE2 we can use unpcks. I believe this will allow some patterns to be cleaned up to require fewer bitcasts.

I've put in an odd isel hack to still select the MOVHLPS instruction from the unpckh node to avoid changing tests and because movhlps is a shorter encoding. Ideally we'd do execution domain switching on this, but the operands are in the wrong order and are tied. We might be able to try a commute in the domain switching using custom code.

We already support domain switching for UNPCKLPD and MOVLHPS.

llvm-svn: 337348
2018-07-18 05:10:51 +00:00
Justin Hibbits d52990c71b Introduce codegen for the Signal Processing Engine
Summary:
The Signal Processing Engine (SPE) is found on NXP/Freescale e500v1,
e500v2, and several e200 cores.  This adds support targeting the e500v2,
as this is more common than the e500v1, and is in SoCs still on the
market.

This patch is very intrusive because the SPE is binary incompatible with
the traditional FPU.  After discussing with others, the cleanest
solution was to make both SPE and FPU features on top of a base PowerPC
subset, so all FPU instructions are now wrapped with HasFPU predicates.

Supported by this are:
* Code generation following the SPE ABI at the LLVM IR level (calling
conventions)
* Single- and Double-precision math at the level supported by the APU.

Still to do:
* Vector operations
* SPE intrinsics

As this changes the Callee-saved register list order, one test, which
tests the precise generated code, was updated to account for the new
register order.

Reviewed by: nemanjai
Differential Revision: https://reviews.llvm.org/D44830

llvm-svn: 337347
2018-07-18 04:25:10 +00:00
Justin Hibbits 4fa4fa6a73 Complete the SPE instruction set patterns
This is the lead-up to having SPE codegen.  Add the rest of the
instructions, along with MC tests.

Differential Revision:  https://reviews.llvm.org/D44829

llvm-svn: 337346
2018-07-18 04:24:57 +00:00
Justin Hibbits ceb3cd96f7 Add PowerPC e500(v2) core scheduler and directives.
Differential Revision: https://reviews.llvm.org/D44828

llvm-svn: 337345
2018-07-18 04:24:49 +00:00
Bob Haarman 4ebe5d59b6 Revert "[InstCombine] Fold 'check for [no] signed truncation' pattern"
This reverts r337190 (and a few follow-up commits), which caused the
Chromium build to fail. See
https://bugs.llvm.org/show_bug.cgi?id=38204 and
https://crbug.com/864832

llvm-svn: 337344
2018-07-18 02:18:28 +00:00
Peter Collingbourne fc50498ced CodeGen: Don't create address significance table entries for thread-local variables.
The presence of these symbols in the symbol table can cause symbol type
mismatch errors (or undefined symbol errors on emulated TLS targets)
and they can't be ICF'd anyway.

llvm-svn: 337338
2018-07-18 00:21:40 +00:00
Craig Topper a29f58dc31 [X86] Remove the vector alignment requirement from the patterns added in r337320.
The resulting instruction will only load 64 bits, so alignment isn't required.

llvm-svn: 337334
2018-07-17 23:26:20 +00:00
Peter Collingbourne bd9d313d5c CodeGen: Add a target option for emitting .addrsig directives for all address-significant symbols.
Differential Revision: https://reviews.llvm.org/D48143

llvm-svn: 337331
2018-07-17 22:40:08 +00:00
Peter Collingbourne 3e22733698 MC: Implement support for new .addrsig and .addrsig_sym directives.
Part of the address-significance tables proposal:
http://lists.llvm.org/pipermail/llvm-dev/2018-May/123514.html

Differential Revision: https://reviews.llvm.org/D47744

llvm-svn: 337328
2018-07-17 22:17:18 +00:00
Craig Topper 9ef92865ec [X86] Add patterns for folding full vector load into MOVHPS and MOVLPS with SSE1 only.
llvm-svn: 337320
2018-07-17 20:16:18 +00:00
Fangrui Song 5391bb62fb [Demangle] Add missing header files
llvm-svn: 337318
2018-07-17 19:50:41 +00:00
Zachary Turner 427bce1114 Add missing include.
llvm-svn: 337317
2018-07-17 19:48:46 +00:00
Zachary Turner 8a0efd0919 Add some helper functions to the demangle utility classes.
These are all methods that, while not currently used in the
Itanium demangler, are generally useful enough that it's
likely the itanium demangler could find a use for them.  More
importantly, they are all necessary for the Microsoft demangler
which is up and coming in a subsequent patch.  Rather than
combine these into a single monolithic patch, I think it makes
sense to commit this utility code first since it is very simple;
this way it won't detract from the substance of the MS demangler
patch.

llvm-svn: 337316
2018-07-17 19:42:29 +00:00
Vedant Kumar 9ece818291 [InstCombine] Preserve debug value when simplifying cast-of-select
InstCombine has a cast transform that matches a cast-of-select:

  Orig = cast (Src = select Cond TV FV)

And tries to replace it with a select which has the cast folded in:

  NewSel = select Cond (cast TV) (cast FV)

The combiner does RAUW(Orig, NewSel), so any debug values for Orig would
survive the transform. But debug values for Src would be lost.

This patch teaches InstCombine to replace all debug uses of Src with
NewSel (taking care of doing any necessary DIExpression rewriting).

Differential Revision: https://reviews.llvm.org/D49270

llvm-svn: 337310
2018-07-17 18:08:36 +00:00
Chandler Carruth c0cb5731fc [x86/SLH] Flesh out the data-invariant instruction table a bit based on feedback from Craig.
Summary:
The only thing he suggested that I've skipped here is the double-wide
multiply instructions. Multiply is an area where I'm nervous about hidden
data-dependent behavior, and it doesn't seem important for any benchmarks
I have, so I'm skipping it and sticking with the minimal multiply support
that matches what I know is widely used in existing crypto libraries. We
can always add double-wide multiply when we have clarity from vendors
about its behavior and guarantees.

I've tried to at least cover the fundamentals here with tests, although
I've not tried to cover every width or permutation. I can add more tests
where folks think it would be helpful.

Reviewers: craig.topper

Subscribers: sanjoy, mcrosier, hiraditya, llvm-commits

Differential Revision: https://reviews.llvm.org/D49413

llvm-svn: 337308
2018-07-17 18:07:59 +00:00
Sam Clegg 28b3e99482 [WebAssembly] Update WebAssemblyLowerEmscriptenEHSjLj to handle separate compilation
Previously we were assuming whole program compilation. Now that
separate compilation is a thing we need to update this pass.
Firstly, it can no longer assert on the existence of malloc and free.
These functions might not be in the current translation unit. If we
need them, we will generate imports for them.

Secondly the global helper function we create should be marked as
weak since we will be generating a separate copy in each translation
unit.

Finally the names of the symbols used must be unique and fixed since
they need to agree across translation units.

Differential Revision: https://reviews.llvm.org/D49263

llvm-svn: 337301
2018-07-17 16:40:03 +00:00
Craig Topper 9187bca71b [X86] Remove some standalone patterns in favor of the patterns in the MOVLPD instruction definitions.
Previously we passed 'null_frag' into the instruction definition. The multiclass is shared with MOVHPD, which doesn't use null_frag. It turns out that passing X86Movsd produces patterns equivalent to some standalone patterns.

llvm-svn: 337299
2018-07-17 16:24:33 +00:00
Sander de Smalen 7b33faaf38 [AArch64][SVE]: Integer multiply-add/subtract instructions.
This patch adds support for the following instructions:
  MLA  mul-add, writing addend       (Zda = Zda +   Zn * Zm)
  MLS  mul-sub, writing addend       (Zda = Zda +  -Zn * Zm)
  MAD  mul-add, writing multiplicand (Zdn =  Za +  Zdn * Zm)
  MSB  mul-sub, writing multiplicand (Zdn =  Za + -Zdn * Zm)

llvm-svn: 337293
2018-07-17 15:41:58 +00:00
Petar Jovanovic f10e4798b4 [Mips][FastISel] Fix handling of icmp with i1 type
The Mips FastISel back-end does not extend i1 values while lowering icmp.
Ensure that we bail into DAG ISel when handling this case.

Patch by Dragan Mladjenovic.

Differential Revision: https://reviews.llvm.org/D49290

llvm-svn: 337288
2018-07-17 14:57:46 +00:00
Florian Hahn d95761d9d0 [IPSCCP] Run Solve each time we resolved an undef in a function.
Once we resolved an undef in a function we can run Solve, which could
lead to finding a constant return value for the function, which in turn
could turn undefs into constants in other functions that call it, before
resolving undefs there.

Computationally the amount of work we are doing stays the same, just the
order we process things is slightly different and potentially there are
a few less undefs to resolve.

We are still relying on the order of functions in the IR, which means
depending on the order, we are able to resolve the optimal undef first
or not. For example, if @test1 comes before @testf, we find the constant
return value of @testf too late and we cannot use it while solving
@test1.

This on its own does not lead to more constants removed in the
test-suite, probably because currently we have to be very lucky to visit
applicable functions in the right order.

Maybe we manage to come up with a better way of resolving undefs in more
'profitable' functions first.

Reviewers: efriedma, mssimpso, davide

Reviewed By: efriedma, davide

Differential Revision: https://reviews.llvm.org/D49385

llvm-svn: 337283
2018-07-17 14:04:59 +00:00
Sander de Smalen f41dd122d6 [AArch64][SVE] Asm: FP fused multiply-add/subtract instructions.
This patch adds support for the following instructions:

  FMLA    mul-add, writing addend                (Zda =  Zda +   Zn * Zm)
  FNMLA   negated mul-add, writing addend        (Zda = -Zda +  -Zn * Zm)

  FMLS    mul-sub, writing addend                (Zda =  Zda +  -Zn * Zm)
  FNMLS   negated mul-sub, writing addend        (Zda = -Zda +   Zn * Zm)

  FMAD    mul-add, writing multiplicand          (Zdn =  Za  +  Zdn * Zm)
  FNMAD   negated mul-add, writing multiplicand  (Zdn = -Za  + -Zdn * Zm)

  FMSB    mul-sub, writing multiplicand          (Zdn =  Za  + -Zdn * Zm)
  FNMSB   negated mul-sub, writing multiplicand  (Zdn = -Za  +  Zdn * Zm)

llvm-svn: 337282
2018-07-17 13:58:46 +00:00
Simon Pilgrim 1a4f3c93fb [SLPVectorizer] Don't attempt horizontal reduction on pointer types (PR38191)
TTI::getMinMaxReductionCost typically can't handle pointer types - until this is changed it's better to limit horizontal reduction to integer/float vector types only.

llvm-svn: 337280
2018-07-17 13:43:33 +00:00
Tim Renouf e1016f1bc7 More fixes for subreg join failure in RegCoalescer
Summary:
Part of the adjustCopiesBackFrom method wasn't correctly dealing with SubRange
intervals when updating.

Two changes. The first ensures that bogus SubRange Segments aren't propagated when
encountering Segments of the form [1234r, 1234d:0) when preparing to merge value
numbers. These can be removed in this case.

The second forces a shrinkToUses call if SubRanges end on the copy index
(instead of just the parent register).

V2: Addressed review comments, plus MIR test instead of ll test

Subscribers: MatzeB, qcolombet, nhaehnle

Differential Revision: https://reviews.llvm.org/D40308

Change-Id: I1d2b2b4beea802fce11da01edf71feb2064aab05
llvm-svn: 337273
2018-07-17 12:38:39 +00:00
Sander de Smalen 5dabcf887b [AArch64][SVE] Asm: Support for predicated FP operations (FP immediate)
This patch completes support for the following floating point
instructions that take FP immediates:
  FADD*  (addition)
  FSUB   (subtract)
  FSUBR  (subtract reverse form)
  FMUL*  (multiplication)
  FMAX*  (maximum)
  FMAXNM (maximum number)
  FMIN   (minimum)
  FMINNM (minimum number)

All operations are predicated and take a FP immediate operand,
e.g.

  fadd z0.h, p0/m, z0.h, #0.5
  fmin z0.s, p0/m, z0.s, #1.0
        ^___________^ (tied)

* Instructions added in a previous patch.

llvm-svn: 337272
2018-07-17 12:36:08 +00:00
Joerg Sonnenberger 7f457d79ec Don't assert that a size_t fits into 64 bits.
Avoids tautological compare warnings on 32-bit platforms.

llvm-svn: 337269
2018-07-17 12:30:34 +00:00
whitequark a41b24f32d [LLVM-C] Fix name mangling on AggressiveInstCombine
Similarly to rL336736, at least one more C API function does not
properly get declared as extern "C" due to a missing header, causing
name mangling and linking errors.

This patch fixes calls to LLVMAddAggressiveInstCombinerPass().

Differential Revision: https://reviews.llvm.org/D49416

Reviewed By: whitequark

llvm-svn: 337264
2018-07-17 11:13:58 +00:00
whitequark 7c4a074505 [LLVM-C] Add target triple normalization to the C API.
rL333307 was introduced to remove automatic target triple
normalization when calling sys::getDefaultTargetTriple(), arguing
that users of the latter already called Triple::normalize()
if necessary. However, users of the C API currently have no way of
doing target triple normalization.

This patch introduces an LLVMNormalizeTargetTriple function to
the C API which wraps Triple::normalize() and can be used on
the result of LLVMGetDefaultTargetTriple to achieve the same effect.
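
A hedged usage sketch (error handling omitted; both returned strings are
owned by the caller and released with LLVMDisposeMessage):
```
#include <llvm-c/Core.h>
#include <llvm-c/TargetMachine.h>
#include <stdio.h>

int main(void) {
  char *Triple = LLVMGetDefaultTargetTriple();
  char *Normalized = LLVMNormalizeTargetTriple(Triple);
  printf("%s -> %s\n", Triple, Normalized);
  LLVMDisposeMessage(Normalized);
  LLVMDisposeMessage(Triple);
  return 0;
}
```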

Differential Revision: https://reviews.llvm.org/D49414

Reviewed By: whitequark

llvm-svn: 337263
2018-07-17 10:57:39 +00:00
Sander de Smalen 3b9e342ae1 [AArch64][SVE] Asm: Support for predicated FP operations.
This patch adds support for the following floating point
instructions:
  FABD   (absolute difference)
  FADD   (addition)
  FSUB   (subtract)
  FSUBR  (subtract reverse form)
  FDIV   (divide)
  FDIVR  (divide reverse form)
  FMAX   (maximum)
  FMAXNM (maximum number)
  FMIN   (minimum)
  FMINNM (minimum number)
  FSCALE (adjust exponent)
  FMULX  (multiply extended)

All operations are predicated and binary form, e.g.

  fadd z0.h, p0/m, z0.h, z1.h
        ^___________^ (tied)

Supporting 16, 32 and 64-bit FP elements.

llvm-svn: 337259
2018-07-17 09:48:57 +00:00
Simon Pilgrim e4d12bb2d6 [DAGCombiner] Call SimplifyDemandedVectorElts from EXTRACT_VECTOR_ELT
If we are only extracting vector elements via EXTRACT_VECTOR_ELT(s) we may be able to use SimplifyDemandedVectorElts to avoid unnecessary vector ops.

Differential Revision: https://reviews.llvm.org/D49262

llvm-svn: 337258
2018-07-17 09:45:35 +00:00
Simon Pilgrim a0220b0570 Fix MSVC "result of 32-bit shift implicitly converted to 64 bits" warning. NFCI.
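The warning typically flags code of this shape (a hedged reconstruction, not
the exact site that was fixed):
```
#include <cstdint>

uint64_t bit(unsigned n) {
  // return 1u << n;        // shift performed in 32 bits, widened afterwards
  return uint64_t(1) << n;  // shift performed in 64 bits, as intended
}
```
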
llvm-svn: 337257
2018-07-17 09:39:55 +00:00
Sander de Smalen ec229abb9b [AArch64][SVE] Asm: Support for SPLICE instruction.
The SPLICE instruction splices two vectors into one vector using a
predicate. It copies the active elements from the first vector, and
then fills the remaining elements with the low-numbered elements from
the second vector.

The instruction has the following form, e.g.

  splice z0.b, p0, z0.b, z1.b

for 8-bit elements. It also supports 16, 32 and
64-bit elements.

llvm-svn: 337253
2018-07-17 08:52:45 +00:00
Sander de Smalen 78fe30a662 [AArch64][SVE] Asm: Support for EXT instruction.
This patch adds an instruction that allows extracting
a vector from a pair of vectors, given an immediate index
that describes the element position to extract from.

The instruction has the following assembly:
  ext z0.b, z0.b, z1.b, #imm

where #imm is an immediate between 0 and 255.

llvm-svn: 337251
2018-07-17 08:39:48 +00:00
Craig Topper 880f92ad71 [X86] Properly qualify some MOVSS/MOVSD patterns with OptSize.
These are integer versions of patterns that I already fixed for floating point.

llvm-svn: 337240
2018-07-17 06:24:16 +00:00
Daniel Cederman c812dcc3db [Sparc] Do not depend on icc for ta 1
The ta instruction will always trap, regardless of the value
of the integer condition codes. TRAPri is marked as using icc,
so we cannot use a pattern for TRAPri to implement ta 1, as
verify-machineinstrs can complain that icc is not defined.
Instead we implement ta 1 the same way as ta 5.

llvm-svn: 337236
2018-07-17 05:49:33 +00:00
Craig Topper c376a1916b [X86] Add full set of patterns for turning ceil/floor/trunc/rint/nearbyint into rndscale with loads, broadcast, and masking.
This amounts to a pretty ridiculous number of patterns. Ideally we'd canonicalize the X86ISD::VRNDSCALE earlier to reuse those patterns. I briefly looked into doing that, but some strict FP operations could still get converted to rint and nearbyint during isel. It's probably still worthwhile to look into. This patch is meant as a starting point to work from.

llvm-svn: 337234
2018-07-17 05:48:48 +00:00
Craig Topper 6751727d76 [X86] Add a missing FMA3 scalar intrinsic pattern.
This allows us to use 231 form to fold an insertelement on the add input to the fma. There is technically no software intrinsic that can use this until AVX512F, but it can be manually built up from other intrinsics.

llvm-svn: 337223
2018-07-16 23:10:58 +00:00
Sam Clegg cf2a9e28b1 [WebAssembly] Remove ELF file support.
This support was partial and temporary.  Now that we have
wasm object file support, it's no longer needed.

Differential Revision: https://reviews.llvm.org/D48744

llvm-svn: 337222
2018-07-16 23:09:29 +00:00
Sanjay Patel c71adc8040 [Intrinsics] define funnel shift IR intrinsics + DAG builder support
As discussed here:
http://lists.llvm.org/pipermail/llvm-dev/2018-May/123292.html
http://lists.llvm.org/pipermail/llvm-dev/2018-July/124400.html

We want to add rotate intrinsics because the IR expansion of that pattern is 4+ instructions, 
and we can lose pieces of the pattern before it gets to the backend. Generalizing the operation 
by allowing 2 different input values (plus the 3rd shift/rotate amount) gives us a "funnel shift" 
operation which may also be a single hardware instruction.

Initially, I thought we needed to define new DAG nodes for these ops, and I spent time working 
on that (much larger patch), but then I concluded that we don't need it. At least as a first 
step, we have all of the backend support necessary to match these ops...because it was required. 
And shepherding these through the IR optimizer is the primary concern, so the IR intrinsics are 
likely all that we'll ever need.

There was also a question about converting the intrinsics to the existing ROTL/ROTR DAG nodes
(along with improving the oversized shift documentation). Again, I don't think that's strictly 
necessary (as the test results here prove). That can be an efficiency improvement as a small 
follow-up patch.

So all we're left with is documentation, definition of the IR intrinsics, and DAG builder support. 
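
A hedged sketch of the intended semantics for the 32-bit left funnel shift,
modeled with a double-wide value (the shift amount is taken modulo the bit
width):
```
#include <cstdint>

uint32_t fshl32(uint32_t a, uint32_t b, uint32_t s) {
  // Concatenate a:b into 64 bits, shift left, and keep the high half.
  uint64_t wide = (uint64_t(a) << 32) | b;
  return uint32_t((wide << (s % 32)) >> 32);
}

// A rotate is the special case where both inputs are the same value.
uint32_t rotl32(uint32_t x, uint32_t s) { return fshl32(x, x, s); }
```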

Differential Revision: https://reviews.llvm.org/D49242

llvm-svn: 337221
2018-07-16 22:59:31 +00:00
Zachary Turner 21808b1037 Add missing includes.
llvm-svn: 337218
2018-07-16 21:34:25 +00:00
Zachary Turner fac2da1ba7 [LLVMDemangle] Move some utility classes to header files.
In a followup I'm looking to add a Microsoft demangler.  Doing
so needs a lot of the same utility classes and feature test
macros which are already implemented in ItaniumDemangle.cpp.
So move all of these things into header files so that they
can be re-used by a new demangler.

Differential Revision: https://reviews.llvm.org/D49399

llvm-svn: 337217
2018-07-16 21:24:03 +00:00
Fangrui Song cb0bab86b3 [CodeGen] Fix inconsistent declaration parameter name
llvm-svn: 337200
2018-07-16 18:51:40 +00:00
Farhana Aleen c370d7b33d [AMDGPU] Support a fdot2 pattern.
Summary: Optimize fma((float)S0.x, (float)S1.x, fma((float)S0.y, (float)S1.y, z))
                   -> fdot2((v2f16)S0, (v2f16)S1, (float)z)

Author: FarhanaAleen

Reviewed By: rampitec, b-sumner

Subscribers: AMDGPU

Differential Revision: https://reviews.llvm.org/D49146

llvm-svn: 337198
2018-07-16 18:19:59 +00:00
Mandeep Singh Grang 20239b18bb [llvm] Change 2 instances of std::sort to llvm::sort
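The change is mechanical; a hedged before/after sketch (as I understand the
motivation, llvm::sort can shuffle the range first in expensive-checks
builds, exposing comparators that depend on the incoming order):
```
#include "llvm/ADT/STLExtras.h"
#include <vector>

void sortValues(std::vector<int> &V) {
  // std::sort(V.begin(), V.end());  // before
  llvm::sort(V.begin(), V.end());    // after
}
```
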
llvm-svn: 337192
2018-07-16 17:26:37 +00:00
Roman Lebedev b79b4f539b [InstCombine] Fold 'check for [no] signed truncation' pattern
Summary:
[[ https://bugs.llvm.org/show_bug.cgi?id=38149 | PR38149 ]]

As discussed in https://reviews.llvm.org/D49179#1158957 and later,
the IR for 'check for [no] signed truncation' pattern can be improved:
https://rise4fun.com/Alive/gBf
^ that pattern will be produced by Implicit Integer Truncation sanitizer,
https://reviews.llvm.org/D48958 https://bugs.llvm.org/show_bug.cgi?id=21530
in the signed case, therefore it is probably a good idea to improve it.

Proofs for this transform: https://rise4fun.com/Alive/mgu
This transform is surprisingly frustrating.
This does not deal with non-splat shift amounts, or with undef shift amounts.
I've outlined what i think the solution should be:
```
  // Potential handling of non-splats: for each element:
  //  * if both are undef, replace with constant 0.
  //    Because (1<<0) is OK and is 1, and ((1<<0)>>1) is also OK and is 0.
  //  * if both are not undef, and are different, bailout.
  //  * else, only one is undef, then pick the non-undef one.
```

The DAGCombine will reverse this transform, see
https://reviews.llvm.org/D49266

Reviewers: spatel, craig.topper

Reviewed By: spatel

Subscribers: JDevlieghere, rkruppe, llvm-commits

Differential Revision: https://reviews.llvm.org/D49320

llvm-svn: 337190
2018-07-16 16:45:42 +00:00
Wei Mi 40c4aa7637 [RegAlloc] Skip global splitting if the live range is huge and its spill is
trivially rematerializable.

We run into a case where machineLICM hoists a large number of live ranges
outside of a big loop because it thinks those live ranges are trivially
rematerializable. In regalloc, global splitting is tried out first for those
live ranges before they are spilled and rematerialized. Because the global
splitting algorithm is quadratic, a large increase in global splitting
candidates causes a huge compile-time increase (50s to 1400s on my local
machine when compiling a module).

However, we think that for live ranges which are very large and trivially
rematerializable, it is better to just skip global splitting so as to save
compile time with little chance of sacrificing performance.  We use the
segment size of the live range to indirectly evaluate whether the global
splitting of the live range can introduce high cost, and use an option
as a knob to adjust the size limit threshold.

Differential Revision: https://reviews.llvm.org/D49353

llvm-svn: 337186
2018-07-16 15:42:20 +00:00
Teresa Johnson d68935c5ac Restore "[ThinLTO] Ensure we always select the same function copy to import"
This reverts commit r337081, therefore restoring r337050 (and the fix in
r337059), with a test fix for the bot failure described after the original
description below.

In order to always import the same copy of a linkonce function,
even when encountering it with different thresholds (a higher one then a
lower one), keep track of the summary we decided to import.
This ensures that the backend only gets a single definition to import
for each GUID, so that it doesn't need to choose one.

Move the largest threshold the GUID was considered for import into the
current module out of the ImportMap (which is part of a larger map
maintained across the whole index), and into a new map just maintained
for the current module we are computing imports for. This saves some
memory since we no longer have the thresholds maintained across the
whole index (and throughout the in-process backends when doing a normal
non-distributed ThinLTO build), at the cost of some additional
information being maintained for each invocation of ComputeImportForModule
(the selected summary pointer for each import).

There is an additional map lookup for each callee being considered for
importing; however, this was able to subsume a map lookup in the
Worklist iteration that invokes computeImportForFunction. We also are
able to avoid calling selectCallee if we already failed to import at the
same or higher threshold.

I compared the run time and peak memory for the SPEC2006 471.omnetpp
benchmark (running in-process ThinLTO backends), as well as for a large
internal benchmark with a distributed ThinLTO build (so just looking at
the thin link time/memory). Across a number of runs with and without
this change there was no significant change in the time and memory.

(I tried a few other variations of the change but they also didn't
improve time or peak memory).

The new commit removes a test that no longer makes sense
(Transforms/FunctionImport/hotness_based_import2.ll), as exposed by the
reverse-iteration bot. The test depends on the order of processing the
summary call edges, and actually depended on the old problematic
behavior of selecting more than one summary for a given GUID when
encountered with different thresholds. There was no guarantee even
before that we would eventually pick the linkonce copy with the hottest
call edges, it just happened to work with the test and the old code, and
there was no guarantee that we would end up importing the selected
version of the copy that had the hottest call edges (since the backend
would effectively import only one of the selected copies).

Reviewers: davidxl

Subscribers: mehdi_amini, inglorion, llvm-commits

Differential Revision: https://reviews.llvm.org/D48670

llvm-svn: 337184
2018-07-16 15:30:27 +00:00
Chandler Carruth fa065aa75c [x86/SLH] Completely rework how we sink post-load hardening past data
invariant instructions to be both more correct and much more powerful.

While testing, I continued to find issues with sinking post-load
hardening. Unfortunately, it was amazingly hard to create any useful
tests of this because we were mostly sinking across copies and other
loading instructions. The fact that we couldn't sink past normal
arithmetic was really a big oversight.

So first, I've ported roughly the same set of instructions from the data
invariant loads to also have their non-loading varieties understood to
be data invariant. I've also added a few instructions that came up so
often it again made testing complicated: inc, dec, and lea.

With this, I was able to shake out a few nasty bugs in the validity
checking. We need to restrict to hardening single-def instructions with
defined registers that match a particular form: GPRs that don't have
a NOREX constraint directly attached to their register class.

The (tiny!) test case included catches all of the issues I was seeing
(once we can sink the hardening at all) except for the NOREX issue. The
only test I have there is horrible. It is large, inexplicable, and
doesn't even produce an error unless you try to emit encodings. I can
keep looking for a way to test it, but I'm out of ideas really.

Thanks to Ben for giving me at least a sanity-check review. I'll follow
up with Craig to go over this more thoroughly post-commit, but without
it SLH crashes everywhere so landing it for now.

Differential Revision: https://reviews.llvm.org/D49378

llvm-svn: 337177
2018-07-16 14:58:32 +00:00
Simon Atanasyan 738a6e449e [mips] Eliminate the usage of hasStdEnc in MipsPat.
Instead, the pattern is tagged with the correct predicate when
it is declared. Some patterns have been duplicated as necessary.

Patch by Simon Dardis.

Differential revision: https://reviews.llvm.org/D48365

llvm-svn: 337171
2018-07-16 13:52:41 +00:00
Petar Jovanovic 021e4c82eb [MIPS GlobalISel] Select instructions to load and store i32 on stack
Add code for selection of G_LOAD, G_STORE, G_GEP, G_FRAMEINDEX and
G_CONSTANT. Support loads and stores of i32 values.

Patch by Petar Avramovic.

Differential Revision: https://reviews.llvm.org/D48957

llvm-svn: 337168
2018-07-16 13:29:32 +00:00
Roman Lebedev de506632aa [X86][AArch64][DAGCombine] Unfold 'check for [no] signed truncation' pattern
Summary:

[[ https://bugs.llvm.org/show_bug.cgi?id=38149 | PR38149 ]]

As discussed in https://reviews.llvm.org/D49179#1158957 and later,
the IR for 'check for [no] signed truncation' pattern can be improved:
https://rise4fun.com/Alive/gBf
^ that pattern will be produced by Implicit Integer Truncation sanitizer,
https://reviews.llvm.org/D48958 https://bugs.llvm.org/show_bug.cgi?id=21530
in the signed case, therefore it is probably a good idea to improve it.

But the IR-optimal patter does not lower efficiently, so we want to undo it..

This handles the simple pattern.
There is a second pattern with predicate and constants inverted.

NOTE: we do not check uses here; we always do the transform.

Reviewers: spatel, craig.topper, RKSimon, javed.absar

Reviewed By: spatel

Subscribers: kristof.beyls, llvm-commits

Differential Revision: https://reviews.llvm.org/D49266

llvm-svn: 337166
2018-07-16 12:44:10 +00:00
Daniel Cederman 92a700a215 [Sparc] Use the correct encoding for ta 3
Summary: The old encoding generated a "tn %g1 + 3" instruction instead
of the expected "ta 3".

Reviewers: venkatra, jyknight

Reviewed By: jyknight

Subscribers: fedor.sergeev, jrtc27, llvm-commits

Differential Revision: https://reviews.llvm.org/D49171

llvm-svn: 337165
2018-07-16 12:28:26 +00:00
Daniel Cederman ab09da7b57 [Sparc] Use the names .rem and .urem instead of __modsi3 and __umodsi3
Summary: These are the names used in libgcc.

Reviewers: venkatra, jyknight, ekedaigle

Reviewed By: jyknight

Subscribers: joerg, fedor.sergeev, jrtc27, llvm-commits

Differential Revision: https://reviews.llvm.org/D48915

llvm-svn: 337164
2018-07-16 12:22:08 +00:00
Daniel Cederman 68765757d4 [Sparc] Generate ta 1 for the @llvm.debugtrap intrinsic
Summary: Software trap number one is the trap used for breakpoints
in the Sparc ABI.
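
A hedged usage sketch: clang exposes the intrinsic as a builtin, so compiling
the following for Sparc should now emit a `ta 1` instruction:
```
// Inspect the generated assembly: the builtin should lower to "ta 1".
void stop_here(void) {
  __builtin_debugtrap();
}
```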

Reviewers: jyknight, venkatra

Reviewed By: jyknight

Subscribers: fedor.sergeev, jrtc27, llvm-commits

Differential Revision: https://reviews.llvm.org/D48637

llvm-svn: 337163
2018-07-16 12:16:53 +00:00
Daniel Cederman c3d8002c2e Avoid losing Hi part when expanding VAARG nodes on big endian machines
Summary:
If the high part of the load is not used, the offset to the next element
will not be set correctly.

For example, on Sparc V8, the following code will read val2 from offset 4
instead of 8.

```
int val = __builtin_va_arg(va, long long);  // 8-byte read; next arg starts at offset 8
int val2 = __builtin_va_arg(va, int);       // without the fix, read from offset 4
```

Reviewers: jyknight

Reviewed By: jyknight

Subscribers: fedor.sergeev, jrtc27, llvm-commits

Differential Revision: https://reviews.llvm.org/D48595

llvm-svn: 337161
2018-07-16 12:14:17 +00:00
Chandler Carruth e66a6f48e3 [x86/SLH] Fix a bug where we would try to post-load harden non-GPRs.
Found cases that hit the assert I added. This patch factors the validity
checking into a nice helper routine and calls it when deciding to harden
post-load, and asserts it when doing so later.

I've added tests for the various ways of loading a floating point type,
as well as loading all vector permutations. Even though many of these go
to identical instructions, it seems good to somewhat comprehensively
test them.

I'm confident there will be more fixes needed here, I'll try to add
tests each time as I get this predicate adjusted.

llvm-svn: 337160
2018-07-16 11:38:48 +00:00
Alexander Potapenko d1a381b17a MSan: minor fixes, NFC
- remove an extra space after |ID| declaration
- drop the unused |FirstInsn| parameter in getShadowOriginPtrUserspace()

llvm-svn: 337159
2018-07-16 10:57:19 +00:00
Jonas Devlieghere 98062cb396 [AccelTable] Provide DWARF5AccelTableStaticData for dsymutil.
For dsymutil we want to store offsets in the accelerator table entries
rather than DIE pointers. In addition, we need a way to communicate
which CU a DIE belongs to. This patch provides support for both of these
issues.

Differential revision: https://reviews.llvm.org/D49102

llvm-svn: 337158
2018-07-16 10:52:27 +00:00
Chandler Carruth 3620b9957a [x86/SLH] Extract another small helper function, add better comments and
use better terminology. NFC.

llvm-svn: 337157
2018-07-16 10:46:16 +00:00
Mark Searles f4e7025299 [AMDGPU][Waitcnt] Re-apply fix for "comparison of integers of different signs" build error
Re-apply "[AMDGPU][Waitcnt] fix "comparison of integers of different signs" build error"
(fe0a456510131f268e388c4a18a92f575c0db183), which was inadvertently reverted via
2b2ee080f0164485562593b1b87291a48cea4a9a.

llvm-svn: 337156
2018-07-16 10:21:36 +00:00
Alexander Potapenko 725a4ddc9e [MSan] factor userspace-specific declarations into createUserspaceApi(). NFC
This patch introduces createUserspaceApi() that creates function/global
declarations for symbols used by MSan in the userspace.
This is a step towards the upcoming KMSAN implementation patch.

Reviewed at https://reviews.llvm.org/D49292

llvm-svn: 337155
2018-07-16 10:03:30 +00:00
Mark Searles 72da47df25 run post-RA hazard recognizer pass late
Memory legalizer, waitcnt, and shrink passes can perturb the instructions,
which means that the post-RA hazard recognizer pass should run after them.
Otherwise, one of those passes may invalidate the work done by the hazard
recognizer. Note that this has the adverse side effect that any consecutive
S_NOP 0's, emitted by the hazard recognizer, will not be shrunk into a
single S_NOP <N>. This should be addressed in a follow-on patch.

Differential Revision: https://reviews.llvm.org/D49288

llvm-svn: 337154
2018-07-16 10:02:41 +00:00
Mark Searles c2d5d9adb5 Revert "[AMDGPU][Waitcnt] fix "comparison of integers of different signs" build error"
This reverts commit fe0a456510131f268e388c4a18a92f575c0db183.

llvm-svn: 337153
2018-07-16 10:02:40 +00:00
Alexandros Lamprineas f854ce84c4 [MemorySSAUpdater] Remove deleted trivial Phis from active workset
Bug fix for PR37808. The regression test is a reduced version of the
original reproducer attached to the bug report. As stated in the report,
the problem was that InsertedPHIs was keeping dangling pointers to
deleted MemoryPhis. MemoryPhis are created eagerly and sometimes get
zapped shortly afterwards. I've used WeakVH instead of an expensive
removal operation from the active workset.

Differential Revision: https://reviews.llvm.org/D48372

llvm-svn: 337149
2018-07-16 07:51:27 +00:00
Craig Topper 07a1787501 [X86] Merge the FR128 and VR128 regclass since they have identical spill and alignment characteristics.
This unfortunately requires a bunch of bitcasts to be added to SUBREG_TO_REG, COPY_TO_REGCLASS, and instructions in output patterns. Otherwise tablegen seems to default to picking f128 and then we fail when something tries to get the register class for f128, which isn't always valid.

The test changes are because we were previously mixing fr128 and vr128 due to constrainRegClass finding FR128 first, and passes like live range shrinking weren't handling that well.

llvm-svn: 337147
2018-07-16 06:56:09 +00:00
Chandler Carruth bc46bca99e [x86/SLH] Fix an unused variable warning in release builds after
r337144.

llvm-svn: 337145
2018-07-16 04:42:27 +00:00
Chandler Carruth cdf0addc65 [x86/SLH] Teach speculative load hardening to correctly harden the
indices used by AVX2 and AVX-512 gather instructions.

The index vector is hardened by broadcasting the predicate state
into a vector register and then or-ing. We don't even have to worry
about EFLAGS here.

I've added a test for all of the gather intrinsics to make sure that we
don't miss one. A particularly interesting creation is the gather
prefetch, which needs to be marked as potentially "loading" to get the
correct behavior. It's a memory access in many ways, and is actually
relevant for SLH. Based on discussion with Craig in review, I've moved
it to be `mayLoad` and `mayStore` rather than generic side effects. This
matches how we model other prefetch instructions.

Many thanks to Craig for the review here.

Differential Revision: https://reviews.llvm.org/D49336

llvm-svn: 337144
2018-07-16 04:17:51 +00:00
Chen Zheng ccc8422464 [InstCombine] add more SPFofSPF folding
Differential Revision: https://reviews.llvm.org/D49238

llvm-svn: 337143
2018-07-16 02:23:00 +00:00
Chen Zheng b972273f98 [InstCombine] fold icmp pred (sub 0, X) C for vector type
Differential Revision: https://reviews.llvm.org/D49283

llvm-svn: 337141
2018-07-16 00:51:40 +00:00
Michael J. Spencer 7bb2767fba Recommit r335794 "Add support for generating a call graph profile from Branch Frequency Info." with fix for removed functions.
llvm-svn: 337140
2018-07-16 00:28:24 +00:00
Chandler Carruth 3ffcc0383e [x86/SLH] Extract one of the bits of logic to its own function. NFC.
This is just a refactoring to start cleaning up the code here and make
it more readable and approachable.

llvm-svn: 337138
2018-07-15 23:46:36 +00:00
Craig Topper cdb0ed2910 [X86] Add custom execution domain fixing for 128/256-bit integer logic operations with AVX512F, but not AVX512DQ.
AVX512F only has integer domain logic instructions. AVX512DQ added FP domain logic instructions.

Execution domain fixing runs before EVEX->VEX. So if we have AVX512F and not AVX512DQ we fail to do execution domain switching of the logic operations. This leads to mismatches in execution domain and more test differences.

This patch adds custom domain fixing that switches EVEX integer logic operations to VEX fp logic operations if XMM16-31 are not used.

llvm-svn: 337137
2018-07-15 23:32:36 +00:00
Craig Topper d553ff3e2e [X86] Add load patterns for cases where we select X86Movss/X86Movsd to blend instructions.
This allows us to fold the load during isel without waiting for the peephole pass to do it.

llvm-svn: 337136
2018-07-15 21:49:01 +00:00
Craig Topper ec0038398a [X86] Use 128-bit blends instead vmovss/vmovsd for 512-bit vzmovl patterns to match AVX.
llvm-svn: 337135
2018-07-15 18:51:08 +00:00
Craig Topper 8f34858779 [X86] Use 128-bit ops for 256-bit vzmovl patterns.
128-bit ops implicitly zero the upper bits. This should address the comment about domain crossing for the integer version without AVX2 since we can use a 128-bit VBLENDW without AVX2.

The only bad thing I see here is that we failed to reuse a vxorps in some of the tests, but I think that's a known issue.

llvm-svn: 337134
2018-07-15 18:51:07 +00:00
Sanjay Patel 79a423cfa2 [DAGCombiner] fix typo in comment; NFC
llvm-svn: 337132
2018-07-15 17:09:35 +00:00
Sanjay Patel 9d2099cc03 [InstCombine] Corrections in comments for division transformation (NFC)
The actual code seems to be correct, but the comments were misleading.

Patch by Aaron Puchert!

Differential Revision: https://reviews.llvm.org/D49276

llvm-svn: 337131
2018-07-15 17:06:59 +00:00
Sanjay Patel a41c886c55 [DAGCombiner] extend(ifpositive(X)) -> shift-right (not X)
This is almost the same as an existing IR canonicalization in instcombine, 
so I'm assuming this is a good early generic DAG combine too.

The motivation comes from reduced bit-hacking for select-of-constants in IR 
after rL331486. We want to restore that functionality in the DAG as noted in
the commit comments for that change and the llvm-dev discussion here:
http://lists.llvm.org/pipermail/llvm-dev/2018-July/124433.html

The PPC and AArch tests show that those targets are already doing something 
similar. x86 will be neutral in the minimal case and generally better when 
this pattern is extended with other ops as shown in the signbit-shift.ll tests.

Note the asymmetry: we don't include the (extend (ifneg X)) transform because 
it already exists in SimplifySelectCC(), and that is verified in the later 
unchanged tests in the signbit-shift.ll files. Without the 'not' op, the 
general transform to use a shift is always a win because that's a single 
instruction.

Alive proofs:
https://rise4fun.com/Alive/ysli

Name: if pos, get -1
  %c = icmp sgt i16 %x, -1
  %r = sext i1 %c to i16
  =>
  %n = xor i16 %x, -1
  %r = ashr i16 %n, 15

Name: if pos, get 1
  %c = icmp sgt i16 %x, -1
  %r = zext i1 %c to i16
  =>
  %n = xor i16 %x, -1
  %r = lshr i16 %n, 15
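
The same two folds mirrored in hedged C++ (assuming the usual arithmetic
right shift for signed values):
```
#include <cstdint>

int16_t if_pos_all_ones(int16_t x) {    // sext form of the proof above
  return int16_t(int16_t(~x) >> 15);    // -1 when x >= 0, else 0
}

uint16_t if_pos_one(int16_t x) {        // zext form of the proof above
  return uint16_t(uint16_t(~x) >> 15);  // 1 when x >= 0, else 0
}
```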

Differential Revision: https://reviews.llvm.org/D48970

llvm-svn: 337130
2018-07-15 16:27:07 +00:00
Sanjay Patel 92d0c1c129 [InstSimplify] fold minnum/maxnum with NaN arg
This fold is repeated/misplaced in instcombine, but I'm
not sure if it's safe to remove that yet because some
other folds appear to be asserting that the transform
has occurred within instcombine itself.

This isn't the best fix for PR37776, but it probably
hides the bug with the given code example:
https://bugs.llvm.org/show_bug.cgi?id=37776

We have another test to demonstrate the more general bug.
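
The intrinsics are documented to match libm's fmin/fmax NaN behavior, so a
hedged illustration of the fold's semantics:
```
#include <cassert>
#include <cmath>

int main() {
  // When exactly one operand is NaN, the other operand is returned;
  // this is the behavior minnum/maxnum share with C99 fmin/fmax.
  assert(std::fmin(3.0, NAN) == 3.0);
  assert(std::fmax(NAN, 2.0) == 2.0);
  return 0;
}
```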

llvm-svn: 337127
2018-07-15 14:52:16 +00:00
Andrea Di Biagio ff630c2cdc [llvm-mca][BtVer2] teach how to identify false dependencies on partially written
registers.

The goal of this patch is to improve the throughput analysis in llvm-mca for the
case where instructions perform partial register writes.

On x86, partial register writes are quite difficult to model, mainly because
different processors tend to implement different register merging schemes in
hardware.

When the code contains partial register writes, the IPC (instructions per
cycle) estimated by llvm-mca tends to diverge quite significantly from the
observed IPC (using perf).

Modern AMD processors (at least, from Bulldozer onwards) don't rename partial
registers. Quoting Agner Fog's microarchitecture.pdf:
" The processor always keeps the different parts of an integer register together.
For example, AL and AH are not treated as independent by the out-of-order
execution mechanism. An instruction that writes to part of a register will
therefore have a false dependence on any previous write to the same register or
any part of it."

This patch is a first important step towards improving the analysis of partial
register updates. It changes the semantics of RegisterFile descriptors in
tablegen, and teaches llvm-mca how to identify false dependences in the presence
of partial register writes (for more details: see the new code comments in
include/Target/TargetSchedule.h - class RegisterFile).

This patch doesn't address the case where a write to a part of a register is
followed by a read from the whole register.  On Intel chips, high8 registers
(AH/BH/CH/DH) can be stored in separate physical registers. However, a later
(dirty) read of the full register (example: AX/EAX) triggers a merge uOp, which
adds extra latency (and potentially affects the pipe usage).
This is a very interesting article on the subject with a very informative answer
from Peter Cordes:
https://stackoverflow.com/questions/45660139/how-exactly-do-partial-registers-on-haswell-skylake-perform-writing-al-seems-to

In future, the definition of RegisterFile can be extended with extra information
that may be used to identify delays caused by merge opcodes triggered by a dirty
read of a partial write.

Differential Revision: https://reviews.llvm.org/D49196

llvm-svn: 337123
2018-07-15 11:01:38 +00:00
Dylan McKay 0603bae41c [AVR] Document some public functions
llvm-svn: 337122
2018-07-15 07:24:27 +00:00
Craig Topper 60ce856134 [X86] Add some optsize patterns for 256-bit X86vzmovl.
These patterns use VMOVSS/SD. Without optsize we use BLENDI instead.

llvm-svn: 337119
2018-07-15 06:03:19 +00:00
Roman Lebedev c7bc4c02eb [NFC][InstCombine] foldICmpWithLowBitMaskedVal(): update comments.
All predicates are handled.
There do not seem to be any other possible folds here.
There are some more folds possible with an inverted mask, though.

llvm-svn: 337112
2018-07-14 20:08:52 +00:00
Roman Lebedev b972fc3e8a [InstCombine] Fold x & (-1 >> y) s< x to x s> (-1 >> y)
https://bugs.llvm.org/show_bug.cgi?id=38123
https://rise4fun.com/Alive/I3O

This pattern is not commutative!
We must make sure not to fold the commuted version!
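
A hedged exhaustive check of the fold at 8 bits (skipping the degenerate
y == 0 case, where the mask is all-ones and the pattern should not apply):
```
#include <cassert>
#include <cstdint>

int main() {
  for (int xi = -128; xi <= 127; ++xi) {
    for (int y = 1; y <= 7; ++y) {
      int8_t x = int8_t(xi);
      int8_t mask = int8_t(uint8_t(0xFF) >> y); // -1 lshr y: a low-bit mask
      bool before = int8_t(x & mask) < x;       // x & (-1 >> y) s< x
      bool after = x > mask;                    // x s> (-1 >> y)
      assert(before == after);
    }
  }
  return 0;
}
```
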

llvm-svn: 337111
2018-07-14 20:08:47 +00:00
Roman Lebedev f14426101e [InstCombine] Fold x & (-1 >> y) s>= x to x s<= (-1 >> y)
https://bugs.llvm.org/show_bug.cgi?id=38123
https://rise4fun.com/Alive/I3O

This pattern is not commutative!
We must make sure not to fold the commuted version!

llvm-svn: 337109
2018-07-14 20:08:37 +00:00
Roman Lebedev 1e61e358a4 [InstCombine] Fold x s<= x & (-1 >> y) to x s<= (-1 >> y)
https://bugs.llvm.org/show_bug.cgi?id=38123
https://rise4fun.com/Alive/I3O

This pattern is not commutative!
We must make sure not to fold the commuted version!

llvm-svn: 337107
2018-07-14 20:08:26 +00:00
Roman Lebedev 859e14aeaa [InstCombine] Fold x s> x & (-1 >> y) to x s> (-1 >> y)
https://bugs.llvm.org/show_bug.cgi?id=38123
https://rise4fun.com/Alive/I3O

This pattern is not commutative!
We must make sure not to fold the commuted version!

llvm-svn: 337105
2018-07-14 20:08:16 +00:00
Roman Lebedev 0f5ec8921b [InstCombine] Fold x u<= x & C to x u<= C
https://bugs.llvm.org/show_bug.cgi?id=38123
https://rise4fun.com/Alive/Fqp

This pattern is not commutative. But InstSimplify will
already have taken care of the 'commutative' variant.

llvm-svn: 337102
2018-07-14 16:44:54 +00:00
Roman Lebedev 74f611a1f5 [InstCombine] Fold x u> x & C to x u> C
https://bugs.llvm.org/show_bug.cgi?id=38123
https://rise4fun.com/Alive/JvS

This pattern is not commutative. But InstSimplify will
already have taken care of the 'commutative' variant.

llvm-svn: 337100
2018-07-14 16:44:43 +00:00
Roman Lebedev e3dc587ae0 [InstCombine] Fold x & (-1 >> y) u< x to x u> (-1 >> y)
https://bugs.llvm.org/show_bug.cgi?id=38123
https://rise4fun.com/Alive/ocb

This pattern is not commutative. But InstSimplify will
already have taken care of the 'commutative' variant.

llvm-svn: 337098
2018-07-14 12:20:16 +00:00
Roman Lebedev fac48474ce [InstCombine] Fold x & (-1 >> y) u>= x to x u<= (-1 >> y)
https://bugs.llvm.org/show_bug.cgi?id=38123
https://rise4fun.com/Alive/azI

This pattern is not commutative. But InstSimplify will
already have taken care of the 'commutative' variant.

llvm-svn: 337096
2018-07-14 12:20:06 +00:00
Francis Visoiu Mistrih f905bf1408 [MachineOutliner] Check the last instruction from the sequence when updating liveness
The MachineOutliner was doing a std::for_each from the call (inserted
before the outlined sequence) to the iterator at the end of the
sequence.

std::for_each needs the iterator past the end, so the last instruction
was not taken into account when propagating the liveness information.

This fixes the machine verifier issue in machine-outliner-disubprogram.ll.
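
A minimal sketch of the iterator pitfall (std::vector stands in for the
machine basic block; names are invented):

#include <algorithm>
#include <iterator>
#include <vector>

void propagateLiveness(std::vector<int>::iterator Call,
                       std::vector<int>::iterator LastInSeq) {
  // std::for_each visits the half-open range [first, last), so passing
  // an iterator *at* the last instruction silently skips it:
  //   std::for_each(Call, LastInSeq, visit);        // buggy
  std::for_each(Call, std::next(LastInSeq),          // fixed
                [](int &MI) { /* update liveness for MI */ });
}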

Differential Revision: https://reviews.llvm.org/D49295

llvm-svn: 337090
2018-07-14 09:40:01 +00:00
Chandler Carruth fb503ac027 [x86/SLH] Fix an issue where we wouldn't harden any loads if we found
no conditions.

This is only valid to do if we're hardening calls and rets with LFENCE
which results in an LFENCE guarding the entire entry block for us.

llvm-svn: 337089
2018-07-14 09:32:37 +00:00
Craig Topper 7426cf6717 [X86] Fix a subtle bug in the custom execution domain fixing for blends.
The code tried to find the immediate by using getNumOperands() on the MachineInstr, but there might be implicit-defs after the immediate that get counted.

Instead use getNumOperands() from the instruction description which will only count the operands that are defined in the td file.
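
A hedged sketch of the distinction (not the actual domain-fixing code):

#include "llvm/CodeGen/MachineInstr.h"

unsigned blendImmOperandIdx(const llvm::MachineInstr &MI) {
  // MI.getNumOperands() also counts implicit operands attached at
  // runtime, so implicit-defs after the immediate would skew an index
  // computed from it. The MCInstrDesc count covers only the operands
  // declared in the .td file, so the immediate is reliably the last of
  // those.
  return MI.getDesc().getNumOperands() - 1;
}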

llvm-svn: 337088
2018-07-14 06:30:30 +00:00
Nico Weber 17172c6b80 Give llvm-lib rudimentary help output.
https://reviews.llvm.org/D49318

llvm-svn: 337084
2018-07-14 02:29:44 +00:00
Craig Topper f0b164415c [X86] Prefer blendi over movss/sd when avx512 is enabled unless optimizing for size.
AVX512 doesn't have an immediate-controlled blend instruction. But blend throughput is still better than movss/sd on SKX.

This commit changes AVX512 to use the AVX blend instructions instead of MOVSS/MOVSD. This constrains the register allocation since it won't be able to use XMM16-31, but hopefully the increased throughput and reduced port 5 pressure make up for that.

llvm-svn: 337083
2018-07-14 02:05:08 +00:00
Teresa Johnson b78c5d0602 Revert "[ThinLTO] Ensure we always select the same function copy to import"
This reverts commits r337050 and r337059. Caused failure in
reverse-iteration bot that needs more investigation.

llvm-svn: 337081
2018-07-14 01:45:49 +00:00
Evgeniy Stepanov 1971ba097d Revert "AMDGPU: Fix handling of alignment padding in DAG argument lowering"
This reverts commit r337021.

WARNING: MemorySanitizer: use-of-uninitialized-value
    #0 0x1415cd65 in void write_signed<long>(llvm::raw_ostream&, long, unsigned long, llvm::IntegerStyle) /code/llvm-project/llvm/lib/Support/NativeFormatting.cpp:95:7
    #1 0x1415c900 in llvm::write_integer(llvm::raw_ostream&, long, unsigned long, llvm::IntegerStyle) /code/llvm-project/llvm/lib/Support/NativeFormatting.cpp:121:3
    #2 0x1472357f in llvm::raw_ostream::operator<<(long) /code/llvm-project/llvm/lib/Support/raw_ostream.cpp:117:3
    #3 0x13bb9d4 in llvm::raw_ostream::operator<<(int) /code/llvm-project/llvm/include/llvm/Support/raw_ostream.h:210:18
    #4 0x3c2bc18 in void printField<unsigned int, &(amd_kernel_code_s::amd_kernel_code_version_major)>(llvm::StringRef, amd_kernel_code_s const&, llvm::raw_ostream&) /code/llvm-project/llvm/lib/Target/AMDGPU/Utils/AMDKernelCodeTUtils.cpp:78:23
    #5 0x3c250ba in llvm::printAmdKernelCodeField(amd_kernel_code_s const&, int, llvm::raw_ostream&) /code/llvm-project/llvm/lib/Target/AMDGPU/Utils/AMDKernelCodeTUtils.cpp:104:5
    #6 0x3c27ca3 in llvm::dumpAmdKernelCode(amd_kernel_code_s const*, llvm::raw_ostream&, char const*) /code/llvm-project/llvm/lib/Target/AMDGPU/Utils/AMDKernelCodeTUtils.cpp:113:5
    #7 0x3a46e6c in llvm::AMDGPUTargetAsmStreamer::EmitAMDKernelCodeT(amd_kernel_code_s const&) /code/llvm-project/llvm/lib/Target/AMDGPU/MCTargetDesc/AMDGPUTargetStreamer.cpp:161:3
    #8 0xd371e4 in llvm::AMDGPUAsmPrinter::EmitFunctionBodyStart() /code/llvm-project/llvm/lib/Target/AMDGPU/AMDGPUAsmPrinter.cpp:204:26

[...]

Uninitialized value was created by an allocation of 'KernelCode' in the stack frame of function '_ZN4llvm16AMDGPUAsmPrinter21EmitFunctionBodyStartEv'
    #0 0xd36650 in llvm::AMDGPUAsmPrinter::EmitFunctionBodyStart() /code/llvm-project/llvm/lib/Target/AMDGPU/AMDGPUAsmPrinter.cpp:192

llvm-svn: 337079
2018-07-14 01:20:53 +00:00
Chandler Carruth a24fe067c0 [x86/SLH] Add an assert to catch if we ever end up trying to harden
post-load a register that isn't valid for use with OR or SHRX.

llvm-svn: 337078
2018-07-14 00:52:09 +00:00
Tim Shen a064622bd3 Re-apply "[SCEV] Strengthen StrengthenNoWrapFlags (reapply r334428)."
llvm-svn: 337075
2018-07-13 23:58:46 +00:00
Krzysztof Parzyszek 7ced04c0fd [Hexagon] Avoid introducing calls into coalesced range of HVX vector pairs
If an HVX vector register is to be coalesced into a vector pair, make
sure that the vector pair will not have a function call in its live range,
unless it already had one. All HVX vector registers are volatile, so
any vector register live across a function call will have to be spilled.

If a vector needs to be spilled and it's coalesced into a vector pair,
then the whole pair will need to be spilled (even if only a part of it is
live), taking extra stack space.

llvm-svn: 337073
2018-07-13 23:42:29 +00:00
Tim Shen 9e25d5d2ce [LSR] If no Use is interesting, early return.
Summary:
By looking at the callers of getUse(), we can see that even though
IVUsers may offer uses, they may not be interesting to
LSR. It's possible that none of them is interesting.

Reviewers: sanjoy

Subscribers: jlebar, hiraditya, bixia, llvm-commits

Differential Revision: https://reviews.llvm.org/D49049

llvm-svn: 337072
2018-07-13 23:40:00 +00:00
Craig Topper 8315667d99 [X86][SLH] Remove PDEP and PEXT from isDataInvariantLoad
Ryzen has something like an 18-cycle latency on these, based on Agner's data. AMD's own xls is blank. So it seems like there might be something tricky here.

Agner's data for Intel CPUs indicates these are a single uop there.

Probably safest to remove them. We never generate them without an intrinsic so this should be ok.

Differential Revision: https://reviews.llvm.org/D49315

llvm-svn: 337067
2018-07-13 22:41:52 +00:00
Craig Topper 41fa858262 [X86][SLH] Add VEX and EVEX conversion instructions to isDataInvariantLoad
-Drop the intrinsic versions of conversion instructions. These should be handled when we do vectors. They shouldn't show up in scalar code.
-Add the float<->double conversions which were missing.
-Add the AVX512 and AVX versions of the conversion instructions, including the unsigned integer conversions unique to AVX512

Differential Revision: https://reviews.llvm.org/D49313

llvm-svn: 337066
2018-07-13 22:41:50 +00:00
Craig Topper 445abf700d [X86][SLH] Regroup the instructions in isDataInvariantLoad a little. NFC
-Move BSF/BSR to the same group as TZCNT/LZCNT/POPCNT.
-Split some of the bit manipulation instructions away from TZCNT/LZCNT/POPCNT. These are things like 'x & (x - 1)' which are composed of a few simple arithmetic operations (see the sketch below). These aren't nearly as complicated/surprising as counting bits.
-Move BEXTR/BZHI into their own group. They aren't like a simple arithmetic op or the bit manipulation instructions. They're more like a shift+and.
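
For instance (a hedged one-liner, not code from the patch), the BLSR-style
operation is just a subtract and an and:

// Reset the lowest set bit of x: the 'x & (x - 1)' idiom above.
unsigned resetLowestSetBit(unsigned x) { return x & (x - 1); }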

Differential Revision: https://reviews.llvm.org/D49312

llvm-svn: 337065
2018-07-13 22:41:46 +00:00
Vedant Kumar d37e078d39 Fix comments which mixed up 'before' and 'after', NFC
llvm-svn: 337061
2018-07-13 22:39:31 +00:00
Craig Topper 2260e4149a [X86] Use the correct types in some recently added isel patterns.
These were supposed to be integer types since we are selecting integer instructions.

Found while preparing to remove these patterns for another patch.

llvm-svn: 337057
2018-07-13 22:27:53 +00:00
Tom Stellard ac68471326 AMDGPU/GlobalISel: Implement select() for 32-bit @llvm.minnum and @llvm.maxnum
Reviewers: arsenm, nhaehnle

Subscribers: kzhuravl, wdng, yaxunl, rovka, kristof.beyls, dstuttard, tpr, llvm-commits, t-tye

Differential Revision: https://reviews.llvm.org/D46172

llvm-svn: 337056
2018-07-13 22:16:03 +00:00
Craig Topper da424ba1c5 [X86][FastISel] Support uitofp with avx512.
llvm-svn: 337055
2018-07-13 22:09:30 +00:00
Eli Friedman 835297a951 [LTO] Fix linking with an alias defined using another alias.
When we're linking an alias which will be defined later, we need to
build a GlobalAlias, or else we'll crash later in
IRLinker::linkGlobalValueBody.

clang sometimes constructs aliases like this for C++ destructors.
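
A hedged example (whether aliases are actually emitted depends on the
target, optimization level, and -mconstructor-aliases): C++ code of
roughly this shape can make clang emit one destructor symbol as an alias
of another.

struct Base {
  virtual ~Base();
};
struct Derived final : Base {
  // The complete-object destructor (D1) may be emitted as an alias of
  // the base-object destructor (D2) since they are identical here.
  ~Derived() override {}
};
Base::~Base() {}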

Differential Revision: https://reviews.llvm.org/D49316

llvm-svn: 337053
2018-07-13 21:58:55 +00:00