Commit Graph

144084 Commits

Tim Northover 293f74355b GlobalISel: merge invoke and call translation paths.
Well, sort of. But the lower-level code that invoke used to be using completely
botched the handling of varargs functions, which hopefully won't be possible if
they're using the same code.

llvm-svn: 293670
2017-01-31 18:36:11 +00:00
Peter Collingbourne d763c4cc85 MC: Introduce the ABS8 symbol modifier.
@ABS8 can be applied to symbols which appear as immediate operands to
instructions that have an 8-bit immediate form for that operand. It causes
the assembler to use the 8-bit form and an 8-bit relocation (e.g. R_386_8
or R_X86_64_8) for the symbol.

Differential Revision: https://reviews.llvm.org/D28688

llvm-svn: 293667
2017-01-31 18:28:44 +00:00
Simon Pilgrim 4628451742 [X86][XOP] Add test showing failure to combine build vector to vpermil2ps shuffle
llvm-svn: 293663
2017-01-31 18:10:34 +00:00
Kevin Enderby 02d3a37857 Fix a bug in llvm-objdump(1) with the -macho flag when disassembling an object
file without symbols that makes calls through a symbol stub; those calls were
not correctly being annotated with “## symbol stub for: _foo”.

Just adds the same parameters for getting the annotations from
DisAsm->getInstruction() and passing them to IP->printInst(), matching the
code above that runs when the boolean variable symbolTableWorked is true.

rdar://29791952

llvm-svn: 293662
2017-01-31 18:09:10 +00:00
Davide Italiano aec4617dc8 [InstCombine] Combine consecutive identical fences
Differential Revision: https://reviews.llvm.org/D29314

llvm-svn: 293661
2017-01-31 18:09:05 +00:00
Arnold Schwaighofer c368563bd6 Don't combine stores to a swifterror pointer operand to a different type
llvm-svn: 293658
2017-01-31 17:53:49 +00:00
Dehao Chen 274df5ea41 Explicitly promote indirect calls before sample profile annotation.
Summary: In iterative sample PGO, where the profile is collected from a PGO'ed binary, we may see indirect call targets promoted and inlined in the profile. Before profile annotation, we need to make this happen on the IR in order to annotate it correctly. This patch explicitly promotes these indirect calls and inlines them before profile annotation.

Reviewers: xur, davidxl

Reviewed By: davidxl

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D29040

llvm-svn: 293657
2017-01-31 17:49:37 +00:00
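For readers unfamiliar with indirect call promotion, here is a minimal, source-level sketch of the kind of transformation the commit above performs on IR before sample profile annotation. The names (hot_target, call_promoted) are made up for illustration; the real pass works on LLVM IR and picks the targets from the sample profile.

```cpp
#include <cstdio>

// Hypothetical hot target identified in the sample profile.
static void hot_target() { std::puts("hot path"); }

// Before promotion: a plain indirect call through a function pointer.
static void call_indirect(void (*fp)()) { fp(); }

// After promotion: compare against the profiled target and call it
// directly, so the inliner can see (and inline) the call site.
static void call_promoted(void (*fp)()) {
  if (fp == &hot_target)
    hot_target();   // direct call, inlinable
  else
    fp();           // fallback indirect call
}

int main() {
  call_indirect(hot_target);
  call_promoted(hot_target);
}
```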
Sanjay Patel c0abce81c1 [InstCombine] add test for possible zext-phi transform; NFC
The datalayout doesn't include i1, so we don't do a potential shrink and sink transform.

Example based on discussion here:
http://lists.llvm.org/pipermail/llvm-dev/2017-January/109631.html

llvm-svn: 293656
2017-01-31 17:43:00 +00:00
Matt Arsenault d5d78510c7 AMDGPU: Use source mods with fcanonicalize
llvm-svn: 293654
2017-01-31 17:28:40 +00:00
Sanjay Patel 2217f75ad1 fix formatting; NFC
llvm-svn: 293652
2017-01-31 17:25:42 +00:00
Nirav Dave a7c041d147 [X86] Implement -mfentry
Summary: Insert calls to __fentry__ at function entry.

Reviewers: hfinkel, craig.topper

Subscribers: mgorny, llvm-commits

Differential Revision: https://reviews.llvm.org/D28000

llvm-svn: 293648
2017-01-31 17:00:27 +00:00
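As a rough illustration of what -mfentry provides: the backend inserts a call to __fentry__ at the very start of every instrumented function, before the prologue, so a tracer (for example the Linux kernel's ftrace) can hook function entry. The C++ below is only a conceptual analogue with a stub __fentry__ so it links standalone; it is not how the instrumentation is actually emitted.

```cpp
#include <cstdio>

// Stub for the symbol a tracer would normally provide.
extern "C" void __fentry__() { std::puts("entered a function"); }

int work(int x) {
  __fentry__(); // stands in for the compiler-inserted entry call
  return x * 2;
}

int main() { return work(21) == 42 ? 0 : 1; }
```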
David Bozier 60b80d2233 Add support for demangling C++11 thread_local variables.
In Clang, the mangling grammar for these names is "<special-name> ::= TW <object name>" for wrapper variables and "<special-name> ::= TH <object name>" for initialization variables.

Initial change was made in libcxxabi r293638

llvm-svn: 293643
2017-01-31 15:56:36 +00:00
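A quick way to see the TW/TH special names in action is to feed mangled names for a thread_local variable to the Itanium demangler. The snippet below is only a sketch: the exact output text depends on the demangler version, and support for these names requires a demangler new enough to contain the change referenced above.

```cpp
#include <cstdio>
#include <cstdlib>
#include <cxxabi.h>

// Prints the demangled form of an Itanium-mangled name, if the host
// demangler understands it.
static void demangle(const char *mangled) {
  int status = 0;
  char *out = abi::__cxa_demangle(mangled, nullptr, nullptr, &status);
  std::printf("%s -> %s\n", mangled, status == 0 ? out : "<not demangled>");
  std::free(out);
}

int main() {
  // "TW <object name>" / "TH <object name>" special names for a
  // thread_local variable `x`; expected output is along the lines of
  // "thread-local wrapper routine for x" and "thread-local initialization
  // routine for x", depending on the demangler.
  demangle("_ZTW1x");
  demangle("_ZTH1x");
}
```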
Artem Tamazov e0c3b40dde [AMDGPU][mc][tests][NFC] Revert coverage/smoke Gfx7 asm test
Reason: http://lab.llvm.org:8080/green/job/clang-stage2-cmake-RgSan_check/2916/testReport/junit/LLVM/MC_AMDGPU/gfx7_asm_all_s/

This seems to reveal an AMDGPU/mc issue which needs to be triaged & fixed prior to re-committing the test.

llvm-svn: 293642
2017-01-31 15:50:11 +00:00
Tom Stellard 124f5cc8c2 AMDGPU/SI: Fix inst-select-load-smrd.mir on some builds
Summary:
For some reason instructions are being inserted in the wrong order with some
builds.  I'm not sure why this is happening.

Reviewers: arsenm

Subscribers: kzhuravl, wdng, nhaehnle, yaxunl, tony-tye, tpr, llvm-commits

Differential Revision: https://reviews.llvm.org/D29325

llvm-svn: 293639
2017-01-31 15:24:11 +00:00
Simon Pilgrim 1b39d5db7b [X86][SSE] Add support for combining PINSRB into a target shuffle.
llvm-svn: 293637
2017-01-31 14:59:44 +00:00
Nicolai Haehnle 8813d5d221 [DAGCombine] require UnsafeFPMath for re-association of addition
Summary:
The affected transforms all implicitly use associativity of addition,
for which we usually require unsafe math to be enabled.

The "Aggressive" flag is only meant to convey information about the
performance of the fused ops relative to a fmul+fadd sequence.

Fixes Bug 31626.

Reviewers: spatel, hfinkel, mehdi_amini, arsenm, tstellarAMD

Subscribers: jholewinski, nemanjai, wdng, llvm-commits

Differential Revision: https://reviews.llvm.org/D28675

llvm-svn: 293635
2017-01-31 14:35:37 +00:00
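A minimal example of why these reassociation combines need unsafe math: floating-point addition is not associative, so (a + b) + c and a + (b + c) can give different results.

```cpp
#include <cstdio>

int main() {
  // Reassociating (a + b) + c into a + (b + c) changes the result here,
  // which is why the combines above are gated on unsafe math.
  float a = 1e20f, b = -1e20f, c = 1.0f;
  std::printf("(a + b) + c = %f\n", (a + b) + c); // 1.000000
  std::printf("a + (b + c) = %f\n", a + (b + c)); // 0.000000, c is absorbed
  return 0;
}
```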
Sam Parker 9bf658d5fe [ARM] Avoid using ARM instructions in Thumb mode
The Requires class overrides the target requirements of an instruction,
rather than adding to them, so all ARM instructions need to include the
IsARM predicate when their requirements have been overridden.

This caused the swp and swpb instructions to be allowed in Thumb mode
assembly, and the ARM encoding of CDP to be selected in codegen (which
is different for conditional instructions).

Differential Revision: https://reviews.llvm.org/D29283

llvm-svn: 293634
2017-01-31 14:35:01 +00:00
Benjamin Kramer 94a833962c [X86] Silence unused variable warning in Release builds.
llvm-svn: 293631
2017-01-31 14:13:53 +00:00
Silviu Baranga c6d21eba0e [InstCombine] Make sure that LHS and RHS have the same type in
transformToIndexedCompare

If they don't have the same type, the size of the constant
index would need to be adjusted (and this wouldn't always be
possible).

Alternatively we could try the analysis with the initial
RHS value, which would guarantee that the two sides have
the same type. However it is unlikely that in practice this
would pass our transformation requirements.

Fixes PR31808 (https://llvm.org/bugs/show_bug.cgi?id=31808).

llvm-svn: 293629
2017-01-31 14:04:15 +00:00
Simon Pilgrim 4eab18f6b8 [X86][SSE] Detect unary PBLEND shuffles.
These can appear during shuffle combining.

llvm-svn: 293628
2017-01-31 13:58:01 +00:00
Simon Pilgrim c29eab52e8 [X86][SSE] Add support for combining PINSRW into a target shuffle.
Also add the ability to recognise PINSR(Vex, 0, Idx).

Target shuffle combines won't replace multiple insertions with a bit mask until a depth of 3 or more, so we avoid codesize bloat.

The unnecessary vpblendw in clearupper8xi16a will be fixed in an upcoming patch.

llvm-svn: 293627
2017-01-31 13:51:10 +00:00
Nemanja Ivanovic 2f2a6ab991 [PowerPC][Altivec] Add vmr extended mnemonic
Just adds the vmr (Vector Move Register) mnemonic for the VOR instruction in
the PPC back end.

Committing on behalf of brunoalr (Bruno Rosa).

Differential Revision: https://reviews.llvm.org/D29133

llvm-svn: 293626
2017-01-31 13:43:11 +00:00
Florian Hahn 5364cf3b56 [LoopUnroll] Use addClonedBlockToLoopInfo to clone the top level loop (NFC)
Summary:
rL293124 added the necessary infrastructure to properly add the cloned
top level loop to LoopInfo, which means we do not have to do it manually
in CloneLoopBlocks.

@mkuper sorry for not pointing this out during my review of D29156; I just
realized that today.

Reviewers: mzolotukhin, chandlerc, mkuper

Reviewed By: mkuper

Subscribers: llvm-commits, mkuper

Differential Revision: https://reviews.llvm.org/D29173

llvm-svn: 293615
2017-01-31 11:13:44 +00:00
Simon Dardis 12850eeac5 [mips] Addition of the immediate cases for the instructions [d]div, [d]divu
Related to http://reviews.llvm.org/D15772

Depends on http://reviews.llvm.org/D16888

Adds support for an immediate operand for the [D]DIV[U] instructions.

Patch By: Srdjan Obucina

Reviewers: zoran.jovanovic, vkalintiris, dsanders, obucina

Differential Revision: https://reviews.llvm.org/D16889

llvm-svn: 293614
2017-01-31 10:49:24 +00:00
Craig Topper 2cfa2071bd [AVX-512] Don't bother looking into the AVX512DQ execution domain fixing tables if AVX512DQ isn't supported, since we can't do any conversion anyway.
llvm-svn: 293608
2017-01-31 06:49:55 +00:00
Craig Topper 797e32dd98 [X86] Add AVX and SSE2 version of MOVSDmr to execution domain fixing table. AVX-512 already did this for the EVEX version.
llvm-svn: 293607
2017-01-31 06:49:53 +00:00
Craig Topper 779e4c5bb4 [AVX-512] Fix copy and paste bug in execution domain fixing tables so that we can convert 256-bit movnt instructions.
llvm-svn: 293606
2017-01-31 06:49:50 +00:00
Justin Lebar 1c9692a46f [NVPTX] Implement NVPTXTargetLowering::getSqrtEstimate.
Summary:

This lets us lower to sqrt.approx and rsqrt.approx under more
circumstances.

* Now we emit sqrt.approx and rsqrt.approx for calls to @llvm.sqrt.f32,
  when fast-math is enabled.  Previously, we only would emit it for
  calls to @llvm.nvvm.sqrt.f.  (With this patch we no longer emit
  sqrt.approx for calls to @llvm.nvvm.sqrt.f; we rely on instcombine to
  simplify llvm.nvvm.sqrt.f into llvm.sqrt.f32.)

* Now we emit the ftz version of rsqrt.approx when ftz is enabled.
  Previously, we only emitted rsqrt.approx when ftz was disabled.

Reviewers: hfinkel

Subscribers: llvm-commits, tra, jholewinski

Differential Revision: https://reviews.llvm.org/D28508

llvm-svn: 293605
2017-01-31 05:58:22 +00:00
Craig Topper 06e038c6de [X86] Update the broadcast fallback patterns to use shuffle instructions from the appropriate execution domain.
llvm-svn: 293603
2017-01-31 05:18:29 +00:00
Craig Topper 88b0a47312 [X86] Add test cases for AVX1 broadcast fallback patterns when load can't be folded.
Also add test cases that do an insertelement into all elements for the 8-element vector tests.

llvm-svn: 293602
2017-01-31 05:18:27 +00:00
Craig Topper e9e84c8284 [AVX-512] Fix the ExeDomain for VMOVDDUP, VMOVSLDUP, and VMOVSHDUP.
llvm-svn: 293601
2017-01-31 05:18:24 +00:00
Matt Arsenault f84e5d9a27 AMDGPU: Generalize matching of v_med3_f32
I think this is safe as long as no inputs are known to ever
be nans.

Also add an intrinsic for fmed3 to be able to handle all safe
math cases.

llvm-svn: 293598
2017-01-31 03:07:46 +00:00
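For reference, a scalar sketch of the median-of-three operation that v_med3_f32 (and the new fmed3 intrinsic) computes, using the usual min/max identity. This deliberately ignores the NaN behaviour the commit is careful about: the combine is only considered safe when the inputs are known not to be NaNs.

```cpp
#include <algorithm>
#include <cassert>

// Median of three values via min/max: med3(a,b,c) =
// max(min(a,b), min(max(a,b), c)). NaN inputs are not handled here.
static float med3(float a, float b, float c) {
  return std::max(std::min(a, b), std::min(std::max(a, b), c));
}

int main() {
  assert(med3(1.0f, 2.0f, 3.0f) == 2.0f);
  assert(med3(3.0f, 1.0f, 2.0f) == 2.0f);
  assert(med3(2.0f, 3.0f, 1.0f) == 2.0f);
  return 0;
}
```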
Matt Arsenault 973c4aebad InferAddressSpaces: Rename constant
llvm-svn: 293594
2017-01-31 02:17:41 +00:00
Matt Arsenault 72f259b8eb InferAddressSpaces: Handle icmp
llvm-svn: 293593
2017-01-31 02:17:32 +00:00
Craig Topper d064cc93b2 [X86] Remove patterns for X86VPermilpi with integer types. I don't think we've formed these since the shuffle lowering rewrite.
llvm-svn: 293592
2017-01-31 02:09:53 +00:00
Craig Topper 85935f69fb [X86] Remove duplicate patterns for X86VPermilpv that already exist in the instructions themselves.
llvm-svn: 293591
2017-01-31 02:09:51 +00:00
Craig Topper ced68315ce [X86] Remove patterns for selecting PSHUFD with FP types. We don't seem to do this anymore and the AVX case definitely should be using VPERMILPS anyway.
llvm-svn: 293590
2017-01-31 02:09:49 +00:00
Craig Topper b76494e017 [X86] Remove 'else' after 'return'. NFC
llvm-svn: 293589
2017-01-31 02:09:46 +00:00
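For context, this is the general LLVM coding-standard cleanup of dropping a redundant 'else' when the 'if' branch returns; the functions below are made-up examples, not the actual code the commit touched.

```cpp
// Before: the 'else' is redundant because the 'if' branch returns.
int beforeStyle(int x) {
  if (x < 0)
    return -x;
  else
    return x;
}

// After: same behavior, one less level of nesting.
int afterStyle(int x) {
  if (x < 0)
    return -x;
  return x;
}

int main() { return beforeStyle(-5) == afterStyle(-5) ? 0 : 1; }
```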
Craig Topper f9d901f0ea [X86] Use integer broadcast instructions for integer broadcast patterns.
I'm not sure why we were using an FP instruction before, with a comment calling attention to it but not justifying it.

llvm-svn: 293588
2017-01-31 02:09:43 +00:00
Matt Arsenault 6d5a8d48fd InferAddressSpaces: Support memory intrinsics
llvm-svn: 293587
2017-01-31 01:56:57 +00:00
Matt Arsenault 6c907a9bb3 InferAddressSpaces: Support atomics
llvm-svn: 293584
2017-01-31 01:40:38 +00:00
Matt Arsenault d89a6e11a7 InferAddressSpaces: Don't replace volatile users
llvm-svn: 293582
2017-01-31 01:30:16 +00:00
Matt Arsenault b6491cc854 AMDGPU: Implement hook for InferAddressSpaces
For now just port some of the existing NVPTX tests, plus some from an old
HSAIL optimization pass which approximately did the same thing.

Don't enable the pass yet until more testing is done.

llvm-svn: 293580
2017-01-31 01:20:54 +00:00
Matt Arsenault 850657a439 NVPTX: Move InferAddressSpaces to generic code
llvm-svn: 293579
2017-01-31 01:10:58 +00:00
Eugene Zelenko 342257ea92 [ARM] Fix some Clang-tidy modernize and Include What You Use warnings; other minor fixes (NFC).
llvm-svn: 293578
2017-01-31 00:56:17 +00:00
Saleem Abdulrasool 6f5f001fdc TableGen: use fully qualified name for StringLiteral
Use the qualified name for StringLiteral (llvm::StringLiteral) when
generating the sources.  This is needed as the generated files may be
used out-of-tree (e.g. in Swift), where you may not have a
`using namespace llvm;` in scope, resulting in a failed name lookup.

llvm-svn: 293577
2017-01-31 00:45:01 +00:00
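A small illustration of the lookup problem (assuming the LLVM headers are on the include path; the array below is made up, not actual TableGen output): an unqualified StringLiteral in generated code only compiles in files that have `using namespace llvm;` in scope, whereas the fully qualified name works anywhere.

```cpp
#include "llvm/ADT/StringRef.h"

// static const StringLiteral Names[] = {"foo"};           // error without
//                                                         // `using namespace llvm;`
static const llvm::StringLiteral Names[] = {"foo", "bar"}; // OK anywhere

int main() { return Names[0].size() == 3 ? 0 : 1; }
```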
Eli Friedman 10d1ff64fe [SCEV] Simplify/generalize howFarToZero solving.
Make SolveLinEquationWithOverflow take the start as a SCEV, so we can
solve more cases. With that implemented, get rid of the special case
for powers of two.

The additional functionality probably isn't particularly useful,
but it might help a little for certain cases involving pointer
arithmetic.

Differential Revision: https://reviews.llvm.org/D28884

llvm-svn: 293576
2017-01-31 00:42:42 +00:00
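The arithmetic behind SolveLinEquationWithOverflow, sketched at a toy 8-bit width (illustrative only; the real code operates on APInt/SCEV expressions and handles even steps and the overflow preconditions separately): howFarToZero for {Start,+,Step} solves the linear congruence Step * n ≡ -Start (mod 2^W), which for an odd Step has a unique solution via the multiplicative inverse of Step.

```cpp
#include <cassert>
#include <cstdint>

// Multiplicative inverse of an odd 8-bit value modulo 2^8, by Newton /
// Hensel lifting: each step doubles the number of correct low bits.
static uint8_t inverseMod256(uint8_t a) {
  uint8_t x = a;                 // correct modulo 2^3 for any odd a
  x = uint8_t(x * (2 - a * x));  // correct modulo 2^6
  x = uint8_t(x * (2 - a * x));  // correct modulo 2^12 > 2^8
  return x;
}

int main() {
  const uint8_t Start = 200, Step = 3;           // {200,+,3} counting to 0
  const uint8_t N = uint8_t(inverseMod256(Step) * uint8_t(0 - Start));
  assert(uint8_t(Start + Step * N) == 0);        // N steps reach zero

  // Cross-check against brute force.
  uint8_t Brute = 0;
  while (uint8_t(Start + Step * Brute) != 0)
    ++Brute;
  assert(Brute == N);
  return 0;
}
```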
Reid Kleckner 71012aa945 Remove LLVM_CONFIG from config headers
It appears to be dead, and it needlessly caused me to rebuild all of
LLVM when I changed CMAKE_INSTALL_PREFIX.

llvm-svn: 293574
2017-01-31 00:34:23 +00:00
Vedant Kumar 359785ddad Fix llvm-readobj build error after r293569
Clang complains about an ambiguous call to printNumber() because it
can't work out what size_t should convert to. I picked uint64_t.

llvm-svn: 293573
2017-01-30 23:58:51 +00:00
Keno Fischer 578cf7aae7 [ExecutionDepsFix] Improve clearance calculation for loops
Summary:
In revision rL278321, ExecutionDepsFix learned how to pick a better
register for undef register reads, e.g. for instructions such as
`vcvtsi2sdq`. While this revision improved performance on a good number
of our benchmarks, it unfortunately also caused significant regressions
(up to 3x) on others. This regression turned out to be caused by loops
such as:

PH -> A -> B (xmm<Undef> -> xmm<Def>) -> C -> D -> EXIT
      ^                                  |
      +----------------------------------+

In the previous version of the clearance calculation, we would visit
the blocks in order, remembering for each whether there were any
incoming backedges from blocks that we hadn't processed yet and if
so queuing up the block to be re-processed. However, for loop structures
such as the above, this is clearly insufficient, since the block B
does not have any unknown backedges, so we do not see the false
dependency from the previous iteration's Def of xmm registers in B.

To fix this, we need to consider all blocks that are part of the loop
and reprocess them once the correct clearance values are known. As
an optimization, we also want to avoid reprocessing any later blocks
that are not part of the loop.

In summary, the iteration order is as follows:
Before: PH A B C D A'
Corrected (Naive): PH A B C D A' B' C' D'
Corrected (w/ optimization): PH A B C A' B' C' D

To facilitate this optimization we introduce two new counters for each
basic block. The first counts how many of its predecessors have
completed primary processing. The second counts how many of its
predecessors have completed all processing (we will call such a block
*done*). Now, the criterion to reprocess a block is as follows:
    - All Predecessors have completed primary processing
    - With x being the number of predecessors that had completed primary
      processing *at the time this block itself was primary processed*,
      the number of predecessors that are done has reached x.

The intuition behind this criterion is as follows:
We need to perform primary processing on all predecessors in order to
find out any direct defs in those predecessors. When predecessors are
done, we also know that we have information about indirect defs (e.g.
defs in block B that were inherited through B->C->A->B). However,
we can't wait for all predecessors to be done, since that would
cause cyclic dependencies. It is, however, guaranteed that all those
predecessors that are prior to us in reverse postorder will be done
before us. Since we iterate over the basic blocks in reverse postorder,
the number x above is precisely the number of predecessors prior to us
in reverse postorder.

Reviewers: myatsina
Differential Revision: https://reviews.llvm.org/D28759

llvm-svn: 293571
2017-01-30 23:37:03 +00:00
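To make the two counters and the reprocessing criterion concrete, here is a toy, self-contained model of the scheduling logic on the loop from the summary above (PH -> A -> B -> C -> D, with a back edge C -> A). The data structures and names are invented for illustration and are not the pass's real ones. Running it prints primary processing in reverse postorder and the final (re)processing of A, B, C once the loop's clearances are known, with PH and D finishing right after their single visit, matching the corrected order PH A B C A' B' C' D.

```cpp
#include <cstdio>
#include <functional>
#include <vector>

// Toy model of the reprocessing criterion; requires C++14 or later.
struct Block {
  const char *Name;
  std::vector<int> Succs;
  int NumPreds = 0;
  int PrimaryDonePreds = 0; // preds that have finished primary processing
  int DonePreds = 0;        // preds that are fully done
  int PrimaryAtVisit = -1;  // PrimaryDonePreds when *we* were visited (x)
  bool Done = false;
};

int main() {
  // CFG: 0:PH -> 1:A -> 2:B -> 3:C -> 4:D, plus the back edge C -> A.
  std::vector<Block> BB = {{"PH", {1}}, {"A", {2}}, {"B", {3}},
                           {"C", {1, 4}}, {"D", {}}};
  for (Block &B : BB)
    for (int S : B.Succs)
      ++BB[S].NumPreds;

  // A block is (re)processed with final clearance values once all of its
  // predecessors have finished primary processing and at least x of them
  // are done, where x is the number of predecessors that had already
  // finished primary processing when the block itself was visited.
  std::function<void(int)> tryFinish = [&](int I) {
    Block &B = BB[I];
    if (B.Done || B.PrimaryAtVisit < 0 || B.PrimaryDonePreds != B.NumPreds ||
        B.DonePreds < B.PrimaryAtVisit)
      return;
    B.Done = true;
    std::printf("done    %s\n", B.Name);
    for (int S : B.Succs) {
      ++BB[S].DonePreds;
      tryFinish(S);
    }
  };

  // Primary processing in reverse postorder: PH A B C D.
  for (int I = 0; I < (int)BB.size(); ++I) {
    BB[I].PrimaryAtVisit = BB[I].PrimaryDonePreds;
    std::printf("primary %s\n", BB[I].Name);
    for (int S : BB[I].Succs) {
      ++BB[S].PrimaryDonePreds;
      tryFinish(S);
    }
    tryFinish(I);
  }
}
```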