Commit Graph

6468 Commits

Duncan P. N. Exon Smith 814b8e91c7 DI: Require subprogram definitions to be distinct
As a follow-up to r246098, require `DISubprogram` definitions
(`isDefinition: true`) to be 'distinct'.  Specifically, add an assembler
check, a verifier check, and bitcode upgrading logic to combat testcase
bitrot after the `DIBuilder` change.

While working on the testcases, I realized that
test/Linker/subprogram-linkonce-weak-odr.ll isn't relevant anymore.  Its
purpose was to check for a corner case in PR22792 where two subprogram
definitions match exactly and share the same metadata node.  The new
verifier check, requiring that subprogram definitions are 'distinct',
precludes that possibility.

I updated almost all the IR with the following script:

    git grep -l -E -e '= !DISubprogram\(.* isDefinition: true' |
    grep -v test/Bitcode |
    xargs sed -i '' -e 's/= \(!DISubprogram(.*, isDefinition: true\)/= distinct \1/'

Likely some variant of this script would work for out-of-tree testcases.

llvm-svn: 246327
2015-08-28 20:26:49 +00:00
David Majnemer 9c51053dd5 Test case for r246304.
llvm-svn: 246306
2015-08-28 17:19:54 +00:00
Sanjay Patel 7c912898a5 [x86] enable machine combiner reassociations for scalar 'and' insts
llvm-svn: 246300
2015-08-28 14:09:48 +00:00
Reid Kleckner 0e2882345d [WinEH] Add some support for code generating catchpad
We can now run 32-bit programs with empty catch bodies.  The next step
is to change PEI so that we get funclet prologues and epilogues.

llvm-svn: 246235
2015-08-27 23:27:47 +00:00
Ahmed Bougacha 87166905c8 [CodeGen] Check FoldConstantArithmetic result before using it.
Fixes PR24602: r245689 introduced an unguarded use of
SelectionDAG::FoldConstantArithmetic, which returns 0 when it fails
because of opaque (hoisted) constants.

llvm-svn: 246217
2015-08-27 21:46:04 +00:00
Cong Hou 08cb4fc688 Fixed a bug where edge weights were not assigned correctly when lowering a switch statement.
This is a one-line change that moves the update of UnhandledWeights to the correct position: it should be updated for all clusters, not just range clusters.

Differential Revision: http://reviews.llvm.org/D12391

llvm-svn: 246129
2015-08-27 00:37:40 +00:00
Cong Hou 03127700d5 Assign weights to edges to the jump table / bit test header when lowering a switch statement.
Currently, when lowering a switch statement, if a new basic block is built for a jump table / bit test header, the edge to this new block is not assigned a correct weight. This patch collects the edge weights from all of its successors and assigns that sum to the edge (and also to the other fall-through edge). Test cases are adjusted accordingly.

Differential Revision: http://reviews.llvm.org/D12166#fae6eca7

llvm-svn: 246104
2015-08-26 23:15:32 +00:00
Matthias Braun 4e7ded834f SelectionDAGBuilder: Fix SPDescriptor not resetting GuardReg
This was causing problems when some functions use a GuardReg and some
don't, as can happen when mixing SelectionDAG and FastISel generated
functions.

llvm-svn: 246075
2015-08-26 20:46:52 +00:00
Matthias Braun 4816b18d86 FastISel: Avoid adding a successor block twice for degenerate IR.
This fixes http://llvm.org/PR24581

Differential Revision: http://reviews.llvm.org/D12350

llvm-svn: 246074
2015-08-26 20:46:49 +00:00
Charles Davis 119525914c Make variable argument intrinsics behave correctly in a Win64 CC function.
Summary:
This change makes the variable argument intrinsics, `llvm.va_start` and
`llvm.va_copy`, and the `va_arg` instruction behave as they do on Windows
inside a `CallingConv::X86_64_Win64` function. It's needed for a Clang patch
I have that adds support for GCC's `__builtin_ms_va_list` constructs.

Reviewers: nadav, asl, eugenis

CC: llvm-commits

Differential Revision: http://llvm-reviews.chandlerc.com/D1622

llvm-svn: 245990
2015-08-25 23:27:41 +00:00
Cong Hou cd59591396 Remove the final bit test when lowering a switch statement if all cases in the bit test cover a contiguous range.
When lowering a switch statement, if bit tests are used then LLVM will always generate a jump to the default statement from the last bit test. However, this is not necessary when all cases in the bit tests cover a contiguous range. This is because when generating the bit tests header MBB, there is a range check that guarantees cases in the bit tests won't go outside of [low, high], where low and high are the minimum and maximum case values in the bit tests. This patch checks whether this is the case and, if so, doesn't emit the jump to the default statement, saving a bit test and a branch.

Differential Revision: http://reviews.llvm.org/D12249

llvm-svn: 245976
2015-08-25 21:34:38 +00:00
Sanjay Patel deb8f826a5 make fast unaligned memory accesses implicit with SSE4.2 or SSE4a
This is a follow-on from the discussion in http://reviews.llvm.org/D12154.

This change allows memset/memcpy to use SSE or AVX memory accesses for any chip that has
generally fast unaligned memory ops.

A motivating use case for this change is a clang invocation that doesn't explicitly set
the CPU, but does target a feature that we know only exists on a CPU that supports fast
unaligned memops. For example:
$ clang -O1 foo.c -mavx

This resolves a difference in lowering noted in PR24449:
https://llvm.org/bugs/show_bug.cgi?id=24449

Before this patch, we used different store types depending on whether the example can be
lowered as a memset or not.

Differential Revision: http://reviews.llvm.org/D12288

llvm-svn: 245950
2015-08-25 16:29:21 +00:00
Michael Kuperstein 6e3fee07f7 [X86] Remove references to _ftol2
As of r245924, _ftol2 is no longer used for fptoui on MS platforms.
Remove the dead code associated with it.

llvm-svn: 245925
2015-08-25 07:58:33 +00:00
Michael Kuperstein 8515893be8 [X86] Fix fptoui conversions
This fixes two issues in x86 fptoui lowering.
1) Makes conversions from f80 go through the right path on AVX-512.
2) Implements an inline sequence for fptoui i64 instead of a library
call. This improves performance by 6X on SSE3+ and 3X otherwise.
Incidentally, it also removes the use of ftol2 for fptoui, which was
wrong to begin with, as ftol2 converts to a signed i64, producing
wrong results for values >= 2^63.
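
A minimal IR sketch of the kind of conversion this affects (the function name is illustrative, not taken from the patch's testcases); on SSE3 and later it should now be selected as an inline sequence rather than a library call:

    define i64 @f64_to_u64(double %x) {
      %r = fptoui double %x to i64
      ret i64 %r
    }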

Patch by: mitch.l.bodart@intel.com
Differential Revision: http://reviews.llvm.org/D11316

llvm-svn: 245924
2015-08-25 07:42:09 +00:00
Steve King 5cdbd20cc3 Pass function attributes instead of boolean in isIntDivCheap().
llvm-svn: 245921
2015-08-25 02:31:21 +00:00
Simon Pilgrim 342f60384e [X86][SSE] Added tests for zero-extension vector shuffles that don't extend starting from the 0'th lane.
llvm-svn: 245878
2015-08-24 21:28:13 +00:00
Sanjay Patel 453b4df973 remove FIXME; fixed by r245733
llvm-svn: 245819
2015-08-23 20:43:25 +00:00
Simon Pilgrim 2a7049abe0 [DAGCombiner] Fold CONCAT_VECTORS of bitcasted EXTRACT_SUBVECTOR
Minor generalization of D12125 - peek through any bitcast to the original vector that we're extracting from.

llvm-svn: 245814
2015-08-23 15:22:14 +00:00
Sanjay Patel f0bc07f7a5 [x86] enable machine combiner reassociations for 256-bit vector min/max
llvm-svn: 245735
2015-08-21 21:04:21 +00:00
Sanjay Patel dddad10241 remove 'FeatureSlowUAMem' from AMD CPUs based on 10H micro-arch or later
See discussion in D12154 ( http://reviews.llvm.org/D12154 ), AMD Software
Optimization Guides for 10H/12H/15H/16H, and Agner Fog's experimental data.

llvm-svn: 245733
2015-08-21 20:39:17 +00:00
Sanjay Patel cf942fa905 [x86] enable machine combiner reassociations for 128-bit vector min/max
llvm-svn: 245715
2015-08-21 18:06:49 +00:00
Sanjay Patel c9f4aa6b8c save some testing time; get rid of the non-SSE chips in this test
It doesn't matter what slow/fast unaligned attribute the old chips
have - they can't use anything more than 4-byte stores.

llvm-svn: 245709
2015-08-21 17:16:51 +00:00
Sanjay Patel 44d7bef0fa add a test case to check the fast-unaligned-mem attribute per CPU
This will confirm that the patch in D12154 is actually NFC.
It will also confirm that the proposed changes for the AMD chips
are behaving as expected.

llvm-svn: 245704
2015-08-21 16:08:26 +00:00
Ahmed Bougacha 0cdc7719f0 [X86] Look for scalar through one bitcast when lowering to VBROADCAST.
Fixes PR23464: one way to use the broadcast intrinsics is:

  _mm256_broadcastw_epi16(_mm_cvtsi32_si128(*(int*)src));

We don't currently fold this, but now that we use native IR for
the intrinsics (r245605), we can look through one bitcast to find
the broadcast scalar.

Differential Revision: http://reviews.llvm.org/D10557

llvm-svn: 245613
2015-08-20 21:02:39 +00:00
Ahmed Bougacha 69a17acb74 [X86] Add some broadcast-from-memory tests.
llvm-svn: 245612
2015-08-20 20:59:41 +00:00
Ahmed Bougacha 1a498705e4 [X86] Replace avx2 broadcast intrinsics with native IR.
Since r245605, the clang headers don't use these anymore.
r245165 updated some of the tests already; update the others, add
an autoupgrade, remove the intrinsics, and clean up the definitions.

Differential Revision: http://reviews.llvm.org/D10555

llvm-svn: 245606
2015-08-20 20:36:19 +00:00
David Majnemer cfc1df553e [X86] Fix the (shl (and (setcc_c), c1), c2) -> (and setcc_c, (c1 << c2)) fold
We didn't check for the necessary preconditions before folding a
mask/shift into a single mask.

This fixes PR24516.

llvm-svn: 245544
2015-08-20 09:00:56 +00:00
Sanjay Patel 9e5927fdc3 [x86] enable machine combiner reassociations for scalar double-precision min/max
llvm-svn: 245506
2015-08-19 21:27:27 +00:00
Sanjay Patel 4e3ee1e548 [x86] enable machine combiner reassociations for scalar single-precision maximums
llvm-svn: 245504
2015-08-19 21:18:46 +00:00
Simon Pilgrim 35f528262f [DAGCombiner] Added SMAX/SMIN/UMAX/UMIN constant folding
We still need to add constant folding of vector comparisons to fold the tests for targets that don't support the respective min/max nodes.

I needed to update 2011-12-06-AVXVectorExtractCombine to load a vector instead of using a constant vector, to prevent it from being folded.

Differential Revision: http://reviews.llvm.org/D12118

llvm-svn: 245503
2015-08-19 21:11:58 +00:00
David Majnemer f25fe64716 [X86] Emit more efficient >= comparisons against 0
We don't do a great job with >= 0 comparisons against zero when the
result is used as an i8.

Given something like:
  void f(long long LL, bool *B) {
    *B = LL >= 0;
  }

We used to generate:
  shrq    $63, %rdi
  xorb    $1, %dil
  movb    %dil, (%rsi)

Now we generate:
  testq   %rdi, %rdi
  setns   (%rsi)

Differential Revision: http://reviews.llvm.org/D12136

llvm-svn: 245498
2015-08-19 20:51:40 +00:00
Simon Pilgrim 989cbbd2f5 [DAGCombiner] Fold CONCAT_VECTORS of EXTRACT_SUBVECTOR (or undef) to VECTOR_SHUFFLE.
Check to see if this is a CONCAT_VECTORS of a bunch of EXTRACT_SUBVECTOR operations. If so, and if the EXTRACT_SUBVECTOR vector inputs come from at most two distinct vectors the same size as the result, attempt to turn this into a legal shuffle.

Differential Revision: http://reviews.llvm.org/D12125

llvm-svn: 245490
2015-08-19 20:09:50 +00:00
Bruno Cardoso Lopes 27fd06922b [PeepholeOptimizer] Look through PHIs to find additional register sources
Reintroduce r245442. Remove an overly conservative assertion introduced
in r245442. We could replace the assertion with one that uses
`shareSameRegisterFile` instead, but at that point in `insertPHI` we have
already lost the original Def subreg to check against. So drop the
assertion completely.

Original commit message:

- Teaches the ValueTracker in the PeepholeOptimizer to look through PHI
instructions.
- Add a findNextSourceAndRewritePHI method to look up the multiple sources
returned by the ValueTracker and rewrite PHIs with new sources.

With these changes we can find more register sources and rewrite more
copies to allow coalescing of bitcast instructions. Hence, we eliminate
unnecessary VR64 <-> GR64 copies in x86, but it could be extended to
other archs by marking "isBitcast" on target specific instructions. The
x86 example follows:

A:
  psllq %mm1, %mm0
  movd  %mm0, %r9
  jmp C

B:
  por %mm1, %mm0
  movd  %mm0, %r9
  jmp C

C:
  movd  %r9, %mm0
  pshufw  $238, %mm0, %mm0

Becomes:

A:
  psllq %mm1, %mm0
  jmp C

B:
  por %mm1, %mm0
  jmp C

C:
  pshufw  $238, %mm0, %mm0

Differential Revision: http://reviews.llvm.org/D11197
rdar://problem/20404526

llvm-svn: 245479
2015-08-19 18:53:36 +00:00
Derek Schuff 55817ee604 x32. Fixes a bug in x32 exception handling.
This patch updates the X86 lowering so that the Exception Pointer and Selector
are 64 bits wide only if Subtarget.isTarget64BitLP64 is true.

Patch by João Porto

Reviewers: dschuff, rnk
Differential Revision: http://reviews.llvm.org/D12111

llvm-svn: 245454
2015-08-19 16:28:21 +00:00
JF Bastien 5ab87edbb4 x32. Fixes jmp %reg in x32
x32 has 32-bit pointers; x86-64 can't jmp %r32. This patch addresses this issue by explicitly zero-extending brind's target to 64 bits.

Author: jpp

Reviewers: jfb, dschuff, pavel.v.chupin

Subscribers: llvm-commits

Differential revision: http://reviews.llvm.org/D12112

llvm-svn: 245452
2015-08-19 16:17:08 +00:00
Bruno Cardoso Lopes 61009142b8 Revert "[PeepholeOptimizer] Look through PHIs to find additional register sources"
Revert r245442 while investigating a fix. An assertion hit in
http://lab.llvm.org:8080/green/job/clang-stage1-configure-RA_build/11380

llvm-svn: 245446
2015-08-19 15:10:32 +00:00
Bruno Cardoso Lopes 0a1c126684 [PeepholeOptimizer] Look through PHIs to find additional register sources
Reapply r243486.

- Teaches the ValueTracker in the PeepholeOptimizer to look through PHI
instructions.
- Add a findNextSourceAndRewritePHI method to look up the multiple sources
returned by the ValueTracker and rewrite PHIs with new sources.

With these changes we can find more register sources and rewrite more
copies to allow coalescing of bitcast instructions. Hence, we eliminate
unnecessary VR64 <-> GR64 copies in x86, but it could be extended to
other archs by marking "isBitcast" on target specific instructions. The
x86 example follows:

A:
  psllq %mm1, %mm0
  movd  %mm0, %r9
  jmp C

B:
  por %mm1, %mm0
  movd  %mm0, %r9
  jmp C

C:
  movd  %r9, %mm0
  pshufw  $238, %mm0, %mm0

Becomes:

A:
  psllq %mm1, %mm0
  jmp C

B:
  por %mm1, %mm0
  jmp C

C:
  pshufw  $238, %mm0, %mm0

Differential Revision: http://reviews.llvm.org/D11197
rdar://problem/20404526

llvm-svn: 245442
2015-08-19 14:34:41 +00:00
Tobias Grosser 85508e804b Revert "[X86] Widen the 'AND' mask if doing so shrinks the encoding size"
This reverts commit 245169 which miscompiles MultiSource/Applications/siod
from LNT.

llvm-svn: 245432
2015-08-19 11:35:10 +00:00
Michael Kuperstein 9fe42604aa [X86] Do not lower scalar sdiv/udiv to a shifts + mul sequence when optimizing for minsize
There are some cases where the mul sequence is smaller, but for the most part,
using a div is preferable. This does not apply to vectors, since x86 doesn't
have vector idiv, and a vector mul/shifts sequence ought to be smaller than a
scalarized division.
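
As a sketch of the affected pattern (hypothetical function, not one of the patch's tests), at minsize a scalar division by a constant is now kept as a div instead of being expanded:

    define i32 @div_by_19(i32 %x) minsize {
      ; previously expanded to a magic-number multiply + shifts; now left as a div at -Oz
      %d = udiv i32 %x, 19
      ret i32 %d
    }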

Differential Revision: http://reviews.llvm.org/D12082

llvm-svn: 245431
2015-08-19 11:21:43 +00:00
Simon Pilgrim 4bce6d6b73 [X86] Refreshed sign extension tests.
llvm-svn: 245358
2015-08-18 21:21:35 +00:00
Simon Pilgrim ce30ae62e2 [X86][AVX] Added shuffle concatenation tests
llvm-svn: 245351
2015-08-18 20:51:15 +00:00
Simon Pilgrim edaba3b7c3 Updated constants to give more useful min/max constant folding tests
llvm-svn: 245348
2015-08-18 20:46:48 +00:00
Simon Pilgrim 9f4374d361 Fixed max/min typo in test names
llvm-svn: 245278
2015-08-18 09:02:51 +00:00
Simon Pilgrim 19ffd57f45 [X86][SSE] Added constant SMAX/SMIN/UMAX/UMIN tests
Constant folding patch to follow soon

llvm-svn: 245276
2015-08-18 08:52:43 +00:00
Simon Pilgrim 08d823afe4 [X86][SSE] Added extra vector truncation tests.
Including cases for PR14866

llvm-svn: 245274
2015-08-18 08:37:09 +00:00
David Majnemer 1a59e49f3c [X86] Widen the 'AND' mask if doing so shrinks the encoding size
We can set additional bits in a mask given that we know the other
operand of an AND already has some bits set to zero.  This can be more
efficient if doing so allows us to use an instruction which implicitly
sign extends the immediate.

This fixes PR24085.

Differential Revision: http://reviews.llvm.org/D11289

llvm-svn: 245169
2015-08-16 04:52:11 +00:00
Sanjay Patel 40d4eb40f6 [x86] enable machine combiner reassociations for scalar single-precision minimums
llvm-svn: 245166
2015-08-15 17:01:54 +00:00
Simon Pilgrim d65ace84c7 Updated broadcast stack folding test to avoid use of broadcast intrinsics.
llvm-svn: 245165
2015-08-15 16:54:18 +00:00
Sanjay Patel 3b7e3677e3 fix typos; NFC
llvm-svn: 245164
2015-08-15 16:53:08 +00:00
Sanjay Patel 9f6c7dddd2 add test case to show current codegen
llvm-svn: 245163
2015-08-15 16:49:50 +00:00
Simon Pilgrim 0750c84623 [DAGCombiner] Attempt to mask vectors before zero extension instead of after.
For cases where we TRUNCATE and then ZERO_EXTEND to a larger size (often from vector legalization), see if we can mask the source data and then ZERO_EXTEND (instead of after an ANY_EXTEND). This can help avoid having to generate a larger mask, and having to apply it to several sub-vectors.

(zext (truncate x)) -> (zext (and (x, m)))

Includes a minor patch to SystemZ to better recognise 8/16-bit zero extension patterns from RISBG bit-extraction code.

This is the first of a number of minor patches to help improve the conversion of byte masks to clear mask shuffles.
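
A small IR example of the pattern in question (illustrative only; the real cases typically come out of vector legalization):

    define <4 x i32> @trunc_then_zext(<4 x i64> %x) {
      ; matches (zext (truncate x)); the mask can be applied before the extension
      %t = trunc <4 x i64> %x to <4 x i16>
      %z = zext <4 x i16> %t to <4 x i32>
      ret <4 x i32> %z
    }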

Differential Revision: http://reviews.llvm.org/D11764

llvm-svn: 245160
2015-08-15 13:27:30 +00:00
Sanjay Patel 7332e0455f make current codegen visible in the checks, so we can decide if it's right
llvm-svn: 245120
2015-08-14 23:03:01 +00:00
Pat Gavlin b399095c3f Add a target environment for CoreCLR.
Although targeting CoreCLR is similar to targeting MSVC, there are
certain important differences that the backend must be aware of
(e.g. differences in stack probes, EH, and library calls).

Differential Revision: http://reviews.llvm.org/D11012

llvm-svn: 245115
2015-08-14 22:41:43 +00:00
Sanjay Patel dd175bc6c4 make current codegen visible in the checks, so we can decide if it's right
llvm-svn: 245108
2015-08-14 22:10:59 +00:00
Sanjay Patel ed502905f7 [x86] fix allowsMisalignedMemoryAccesses() implementation
This patch fixes the x86 implementation of allowsMisalignedMemoryAccesses() to correctly
return the 'Fast' output parameter for 32-byte accesses. To test that, an existing load
merging optimization is changed to use the TLI hook. This exposes a shortcoming in the
current logic and results in the regression test update. Changing other direct users of
the isUnalignedMem32Slow() x86 CPU attribute would be a follow-on patch.

Without the fix in allowsMisalignedMemoryAccesses(), we will infinite loop when targeting
SandyBridge because LowerINSERT_SUBVECTOR() creates 32-byte loads from two 16-byte loads
while PerformLOADCombine() splits them back into 16-byte loads.

Differential Revision: http://reviews.llvm.org/D10662

llvm-svn: 245075
2015-08-14 17:53:40 +00:00
Rafael Espindola dbaf0498a9 Revert "Centralize the information about which object format we are using."
This reverts commit r245047.

It was failing on the darwin bots. The problem was that when running

./bin/llc -march=msp430

llc gets to

  if (TheTriple.getTriple().empty())
    TheTriple.setTriple(sys::getDefaultTargetTriple());

Which means that we go with an arch of msp430 but a triple of
x86_64-apple-darwin14.4.0 which fails badly.

That code has to be updated to select a triple based on the value of
march, but that is not a trivial fix.

llvm-svn: 245062
2015-08-14 15:48:41 +00:00
Rafael Espindola 90eb70c8a7 Centralize the information about which object format we are using.
Other than some places that were handling unknown as ELF, this should
be no functional change. The test updates are because we were detecting
arm-coff or x86_64-win64-coff as ELF targets before.

It is not clear if the enum should live on the Triple. At least now it lives
in a single location and should be easier to move somewhere else.

llvm-svn: 245047
2015-08-14 13:31:17 +00:00
Simon Pilgrim 1fdc177ccf Renamed min tests (typo)
llvm-svn: 245038
2015-08-14 11:03:31 +00:00
Alex Lorenz 5022f6bb81 MIR Serialization: Change MIR syntax - use custom syntax for MBBs.
This commit modifies the way the machine basic blocks are serialized - now the
machine basic blocks are serialized using a custom syntax instead of relying on
YAML primitives. Instead of using YAML mappings to represent the individual
machine basic blocks in a machine function's body, the new syntax uses a single
YAML block scalar which contains all of the machine basic blocks and
instructions for that function.

This is an example of a function's body that uses the old syntax:

    body:
      - id: 0
        name: entry
        instructions:
          - '%eax = MOV32r0 implicit-def %eflags'
          - 'RETQ %eax'
    ...

The same body is now written like this:

    body: |
      bb.0.entry:
        %eax = MOV32r0 implicit-def %eflags
        RETQ %eax
    ...

This syntax change is motivated by the fact that the bundled machine
instructions didn't map that well to the old syntax which was using a single
YAML sequence to store all of the machine instructions in a block. The bundled
machine instructions internally use flags like BundledPred and BundledSucc to
determine the bundles, and serializing them as MI flags using the old syntax
would have had a negative impact on the readability and the ease of editing
for MIR files. The new syntax allows me to serialize the bundled machine
instructions using a block construct without relying on the internal flags,
for example:

   BUNDLE implicit-def dead %itstate, implicit-def %s1 ... {
      t2IT 1, 24, implicit-def %itstate
      %s1 = VMOVS killed %s0, 1, killed %cpsr, implicit killed %itstate
   }

This commit also converts the MIR testcases to the new syntax. I developed
a script that can convert from the old syntax to the new one. I will post the
script on the llvm-commits mailing list in the thread for this commit.

llvm-svn: 244982
2015-08-13 23:10:16 +00:00
Simon Pilgrim 829091e310 [X86][SSE] Tests for SMAX/SMIN/UMAX/UMIN vector instructions
llvm-svn: 244944
2015-08-13 20:31:03 +00:00
Simon Pilgrim a5737a44da Cleaned up test. NFCI.
llvm-svn: 244765
2015-08-12 17:00:50 +00:00
Michael Kuperstein fe0d9bb6eb [X86] Disable mul -> shl + lea combine when compiling for minsize
Differential Revision: http://reviews.llvm.org/D11904

llvm-svn: 244740
2015-08-12 11:27:26 +00:00
Michael Kuperstein bc7f99a3ab [X86] Allow x86 call frame optimization to fold more loads into pushes
This abstracts away the test for "when can we fold across a MachineInstruction"
into the MI interface, and changes the call-frame optimization to use the same
test the peephole optimizer uses.

Differential Revision: http://reviews.llvm.org/D11945

llvm-svn: 244729
2015-08-12 10:14:58 +00:00
Simon Pilgrim 8c049d5c03 [InstCombine] Move SSE/AVX vector blend folding to instcombiner
As discussed in D11886, this patch moves the SSE/AVX vector blend folding to the instcombiner from PerformINTRINSIC_WO_CHAINCombine (which allows us to remove it completely).

InstCombiner already had partial support for this; I just had to add support for zero (ConstantAggregateZero) masks and also the case where both selection inputs are the same (allowing us to ignore the mask).
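
For example, a blend with an all-zero mask (a sketch, not one of the committed tests) can now be simplified by instcombine because every byte comes from the first operand:

    declare <16 x i8> @llvm.x86.sse41.pblendvb(<16 x i8>, <16 x i8>, <16 x i8>)

    define <16 x i8> @blend_zero_mask(<16 x i8> %a, <16 x i8> %b) {
      ; an all-zero mask selects every byte from %a, so the call folds to %a
      %r = call <16 x i8> @llvm.x86.sse41.pblendvb(<16 x i8> %a, <16 x i8> %b, <16 x i8> zeroinitializer)
      ret <16 x i8> %r
    }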

I also moved all the relevant combine tests into InstCombine/blend_x86.ll

Differential Revision: http://reviews.llvm.org/D11934

llvm-svn: 244723
2015-08-12 08:08:56 +00:00
Sanjay Patel 260b6d36f4 [x86] enable machine combiner reassociations for 256-bit vector FP mul/add
llvm-svn: 244705
2015-08-12 00:29:10 +00:00
Sanjay Patel 2c6a01570d [x86] enable machine combiner reassociations for 128-bit vector single/double multiplies
llvm-svn: 244657
2015-08-11 20:19:23 +00:00
Sanjay Patel 82d91ddb4f fix minsize detection: minsize attribute implies optimizing for size
Also, add a test for optsize because this was not part of any existing regression test.

llvm-svn: 244651
2015-08-11 19:39:36 +00:00
Sanjay Patel 070df89928 fix minsize detection: minsize attribute implies optimizing for size
llvm-svn: 244631
2015-08-11 17:04:31 +00:00
Sanjay Patel caddda56aa add missing tests for powi expansion with size optimizations
The minsize test will be fixed in the next commit.

llvm-svn: 244630
2015-08-11 16:58:49 +00:00
Sanjay Patel c3e8349a3e fixed to use FileCheck
llvm-svn: 244627
2015-08-11 16:51:31 +00:00
Sanjay Patel 605b6adf31 fixed to test attribute, rather than CPU
llvm-svn: 244625
2015-08-11 16:43:18 +00:00
Michael Kuperstein 243c073a2e [X86] Allow merging of immediates within a basic block for code size savings
First step in preventing immediates that occur more than once within a single
basic block from being pulled into their users, in order to prevent unnecessarily
large instruction encodings. Currently enabled only when optimizing for size.
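
A sketch of the kind of block this targets (hypothetical, not from the committed tests): the same 4-byte immediate appears in two users, and at -Oz it can be materialized in a register once instead of being encoded twice:

    define void @store_imm_twice(i32* %p, i32* %q) minsize {
      store i32 305419896, i32* %p
      store i32 305419896, i32* %q
      ret void
    }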

Patch by: zia.ansari@intel.com
Differential Revision: http://reviews.llvm.org/D11363

llvm-svn: 244601
2015-08-11 14:10:58 +00:00
Michael Kuperstein 7337ee23d8 [X86] When optimizing for minsize, use POP for small post-call stack clean-up
When optimizing for size, replace "addl $4, %esp" and "addl $8, %esp"
following a call by one or two pops, respectively. We don't try to do it in
general, but only when the stack adjustment immediately follows a call - which
is the most common case.

That allows taking a short-cut when trying to find a free register to pop into,
instead of a full-blown liveness check. If the adjustment immediately follows a
call, then every register the call clobbers but doesn't define should be dead at
that point, and can be used.

Differential Revision: http://reviews.llvm.org/D11749

llvm-svn: 244578
2015-08-11 08:48:48 +00:00
Michael Kuperstein 82814f63c0 Allow PeepholeOptimizer to fold a few more cases
The condition for clearing the folding candidate list was lumped together
with the "uninteresting instruction" condition. This is too conservative;
e.g. we don't need to clear the list when encountering an IMPLICIT_DEF.

Differential Revision: http://reviews.llvm.org/D11591

llvm-svn: 244577
2015-08-11 08:19:43 +00:00
Sanjay Patel d967a878fa fix minsize detection: minsize attribute implies optimizing for size
llvm-svn: 244528
2015-08-10 23:07:26 +00:00
Alex Lorenz e5101e2016 MachineVerifier: Handle the optional def operand in a PATCHPOINT instruction.
The PATCHPOINT instructions have a single optional defined register operand,
but the machine verifier can't verify the optional defined register operands.
This commit makes sure that the machine verifier won't report an error when a
PATCHPOINT instruction doesn't have its optional defined register operand.
This change will allow us to enable the machine verifier for the code
generation tests for the patchpoint intrinsics.

Reviewers: Juergen Ributzka
llvm-svn: 244513
2015-08-10 21:47:36 +00:00
Alex Lorenz 2f43dd5a12 StackMap: FastISel: Add an appropriate number of immediate operands to the
frame setup instruction.

This commit ensures that the stack map lowering code in FastISel adds an
appropriate number of immediate operands to the frame setup instruction.

The previous code added just one immediate operand, which was fine for a target
like AArch64, but on X86 the ADJCALLSTACKDOWN64 instruction needs two explicit
operands. This caused the machine verifier to report an error when the old code
added just one.

Reviewers: Juergen Ributzka

Differential Revision: http://reviews.llvm.org/D11853

llvm-svn: 244508
2015-08-10 21:27:03 +00:00
JF Bastien fa9746dc8d x86: Emit LAHF/SAHF instead of PUSHF/POPF
NaCl's sandbox doesn't allow PUSHF/POPF because of security concerns (privileged emulators have forgotten to mask system bits in the past, and EFLAGS's DF bit is a constant source of hilarity). Commit r220529 fixed PR20376 by saving cmpxchg's flags result using EFLAGS; this commit now generates LAHF/SAHF instead, for all of x86 (not just NaCl), because it leads to an overall performance gain over PUSHF/POPF.

As with the previous patch this code generation is pretty bad because it occurs very late, after register allocation, and in many cases it rematerializes flags which were already available (e.g. already in a register through SETE). Fortunately it's somewhat rare that this code needs to fire.

I did [[ https://github.com/jfbastien/benchmark-x86-flags | a bit of benchmarking ]], the results on an Intel Haswell E5-2690 CPU at 2.9GHz are:

| Time per call (ms)  | Runtime (ms) | Benchmark                      |
| 0.000012514         |      6257    | sete.i386                      |
| 0.000012810         |      6405    | sete.i386-fast                 |
| 0.000010456         |      5228    | sete.x86-64                    |
| 0.000010496         |      5248    | sete.x86-64-fast               |
| 0.000012906         |      6453    | lahf-sahf.i386                 |
| 0.000013236         |      6618    | lahf-sahf.i386-fast            |
| 0.000010580         |      5290    | lahf-sahf.x86-64               |
| 0.000010304         |      5152    | lahf-sahf.x86-64-fast          |
| 0.000028056         |     14028    | pushf-popf.i386                |
| 0.000027160         |     13580    | pushf-popf.i386-fast           |
| 0.000023810         |     11905    | pushf-popf.x86-64              |
| 0.000026468         |     13234    | pushf-popf.x86-64-fast         |

Clearly `PUSHF`/`POPF` are suboptimal. It doesn't really seem to be worth teaching LLVM about individual flags, at least not for this purpose.

Reviewers: rnk, jvoung, t.p.northover

Subscribers: llvm-commits

Differential revision: http://reviews.llvm.org/D6629

llvm-svn: 244503
2015-08-10 20:59:36 +00:00
Sanjay Patel d09391c8cd fix minsize detection: minsize attribute implies optimizing for size
llvm-svn: 244499
2015-08-10 20:45:44 +00:00
Sanjay Patel 178f8cba51 [x86, SSE] add missing tests for load folding with partial register update
The minsize case is wrong; that will be fixed in the next commit.

llvm-svn: 244498
2015-08-10 20:34:34 +00:00
Simon Pilgrim a3a72b41de [InstCombine] Move SSE2/AVX2 arithmetic vector shift folding to instcombiner
As discussed in D11760, this patch moves the (V)PSRA(WD) arithmetic shift-by-constant folding to InstCombine to match the logical shift implementations.
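
For illustration, the kind of shift-by-constant intrinsic call that can now be folded in InstCombine (a sketch; see the InstCombine tests for the authoritative cases):

    declare <4 x i32> @llvm.x86.sse2.psrai.d(<4 x i32>, i32)

    define <4 x i32> @ashr_splat_3(<4 x i32> %v) {
      ; a constant shift amount lets instcombine rewrite this as a plain 'ashr'
      %r = call <4 x i32> @llvm.x86.sse2.psrai.d(<4 x i32> %v, i32 3)
      ret <4 x i32> %r
    }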

Differential Revision: http://reviews.llvm.org/D11886

llvm-svn: 244495
2015-08-10 20:21:15 +00:00
Jonathan Roelofs f45295c366 Fix a few more cases of 'CHECK[^:]*$'. NFCI
llvm-svn: 244491
2015-08-10 19:56:39 +00:00
Jonathan Roelofs 49e46ce8e2 Fix a bunch of trivial cases of 'CHECK[^:]*$' in the tests. NFCI
I looked into adding a warning / error for this to FileCheck, but there doesn't
seem to be a good way to avoid it triggering on the instances of it in RUN lines.

llvm-svn: 244481
2015-08-10 19:01:27 +00:00
Sanjay Patel 10294b59de fix minsize detection: minsize attribute implies optimizing for size
llvm-svn: 244464
2015-08-10 17:15:17 +00:00
Sanjay Patel 0f12d71b49 fix minsize detection: minsize attribute implies optimizing for size
llvm-svn: 244463
2015-08-10 17:00:44 +00:00
Sanjay Patel 68b0325a9e fix minsize detection: minsize attribute implies optimizing for size
llvm-svn: 244460
2015-08-10 16:47:47 +00:00
Sanjay Patel 9a9003d94c fix minsize detection: minsize attribute implies optimizing for size
llvm-svn: 244458
2015-08-10 16:43:20 +00:00
Robert Lougher 11a44b78a3 Trace copies when checking for rematerializability in spill weight calculation
PR24139 contains an analysis of poor register allocation. One of the findings
was that when calculating the spill weight, a rematerializable interval once
split is no longer rematerializable. This is because the isRematerializable
check in CalcSpillWeights.cpp does not follow the copies introduced by live
range splitting (after splitting, the live interval register definition is a
copy which is not rematerializable).

Reviewers: qcolombet

Differential Revision: http://reviews.llvm.org/D11686

llvm-svn: 244439
2015-08-10 11:59:44 +00:00
Sanjay Patel e0178262d4 [x86] enable machine combiner reassociations for 128-bit vector single/double adds
llvm-svn: 244403
2015-08-08 19:08:20 +00:00
Sanjay Patel 0f51d14957 add a missing regression test for a DAGCombiner FDIV optimization
There's no test for this transform in any backend. Discovered
while debugging fast-math-flag propagation in the DAG (r244053).

llvm-svn: 244373
2015-08-07 23:19:41 +00:00
Sanjay Patel 8d768b5b3a redo r244360 (tighten checks...) after specifying triple
llvm-svn: 244361
2015-08-07 21:42:24 +00:00
Sanjay Patel 0e1d731631 tighten checks using update_llc_test_checks.py
llvm-svn: 244360
2015-08-07 21:38:53 +00:00
Kit Barton a7bf96ab5c Fix possible infinite loop in shrink wrapping when searching for save/restore
points.

There is an infinite loop that can occur in Shrink Wrapping while searching 
for the Save/Restore points. 

Part of this search checks whether the save/restore points are located in
different loop nests and if so, uses the (post) dominator trees to find the
immediate (post) dominator blocks. However, if the current block does not have
any immediate (post) dominators then this search will result in an infinite
loop. This can occur in code containing an infinite loop.

The modification checks whether the immediate (post) dominator is different from
the current save/restore block. If it is not, then the search terminates and the
current location is not considered as a valid save/restore point for shrink wrapping.

Phabricator: http://reviews.llvm.org/D11607
llvm-svn: 244247
2015-08-06 19:01:57 +00:00
Michael Kuperstein 868dc65444 [X86] Improve EmitLoweredSelect for contiguous CMOV pseudo instructions.
This change improves EmitLoweredSelect() so that multiple contiguous CMOV pseudo
instructions with the same (or exactly opposite) conditions get lowered using a single
new basic-block. This eliminates unnecessary extra basic-blocks (and CFG merge points)
when contiguous CMOVs are being lowered.
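
A minimal example of IR that produces contiguous CMOV pseudos with the same condition (hypothetical, not one of the patch's tests):

    define i32 @two_selects(i1 %c, i32 %a, i32 %b, i32 %d, i32 %e) {
      ; both selects use %c, so their CMOV pseudos can share one new basic block
      %x = select i1 %c, i32 %a, i32 %b
      %y = select i1 %c, i32 %d, i32 %e
      %r = add i32 %x, %y
      ret i32 %r
    }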

Patch by: kevin.b.smith@intel.com
Differential Revision: http://reviews.llvm.org/D11428

llvm-svn: 244202
2015-08-06 08:45:34 +00:00
Reid Kleckner 10cac7a190 Fix Windows test failure with triple instead of using the native OS
llvm-svn: 244159
2015-08-05 22:27:08 +00:00
JF Bastien 8662083770 x86 atomic: optimize a.store(reg op a.load(acquire), release)
Summary: PR24191 finds that the expected memory-register operations aren't generated when relaxed { load ; modify ; store } is used. This is similar to PR17281 which was addressed in D4796, but only for memory-immediate operations (and for memory orderings up to acquire and release). This patch also handles some floating-point operations.
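
The relaxed { load ; modify ; store } shape in IR looks roughly like this (a sketch, not taken from the patch's tests), which this patch allows to be selected as a single memory-register operation:

    define void @add_relaxed(i32* %p, i32 %v) {
      %old = load atomic i32, i32* %p monotonic, align 4
      %new = add i32 %old, %v
      store atomic i32 %new, i32* %p monotonic, align 4
      ret void
    }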

Reviewers: reames, kcc, dvyukov, nadav, morisset, chandlerc, t.p.northover, pete

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D11382

llvm-svn: 244128
2015-08-05 21:04:59 +00:00
JF Bastien 7c4218f49c Revert "Fix MO's analyzePhysReg, it was confusing sub- and super-registers. Problem pointed out by Michael Hordijk."
I mistakenly committed the patch for D6629, and was trying to commit another. Reverting until it gets proper signoff.

llvm-svn: 244121
2015-08-05 20:53:56 +00:00
JF Bastien ce5256f5c5 Fix MO's analyzePhysReg, it was confusing sub- and super-registers. Problem pointed out by Michael Hordijk.
llvm-svn: 244120
2015-08-05 20:49:46 +00:00
Sanjay Patel 75ced2782b [x86] machine combiner reassociation: mark EFLAGS operand as 'dead'
In the commentary for D11660, I wasn't sure if it was alright to create new
integer machine instructions without also creating the implicit EFLAGS operand. 
From what I can see, the implicit operand is always created by the MachineInstrBuilder
based on the instruction type, so we don't have to do that explicitly. However, in
reviewing the debug output, I noticed that the operand was not marked as 'dead'. 
The machine combiner should do that to preserve future optimization opportunities 
that may be checking for that dead EFLAGS operand themselves.

Differential Revision: http://reviews.llvm.org/D11696

llvm-svn: 243990
2015-08-04 15:21:56 +00:00
Duncan P. N. Exon Smith 55ca964e94 DI: Disallow uniquable DICompileUnits
Since r241097, `DIBuilder` has only created distinct `DICompileUnit`s.
The backend is liable to start relying on that (if it hasn't already),
so make uniquable `DICompileUnit`s illegal and automatically upgrade old
bitcode.  This is a nice cleanup, since we can remove an unnecessary
`DenseSet` (and the associated uniquing info) from `LLVMContextImpl`.

Almost all the testcases were updated with this script:

    git grep -e '= !DICompileUnit' -l -- test |
    grep -v test/Bitcode |
    xargs sed -i '' -e 's,= !DICompileUnit,= distinct !DICompileUnit,'

I imagine something similar should work for out-of-tree testcases.

llvm-svn: 243885
2015-08-03 17:26:41 +00:00
Simon Pilgrim 87368fdd30 [X86][SSE] Refreshed sse2 vector shift tests
llvm-svn: 243850
2015-08-02 15:23:53 +00:00
Simon Pilgrim 503a2594c3 [DAGCombiner] Convert constant AND masks to shuffle clear masks down to the byte level
The XformToShuffleWithZero method currently checks AND masks at the per-lane level for all-one and all-zero constants and attempts to convert them to legal shuffle clear masks.

This patch generalises XformToShuffleWithZero, splitting and checking the sub-lanes of the constants down to the byte level to see if any legal shuffle clear masks are possible. This allows a lot of masks (often from legalization or truncation) to be folded into existing shuffle patterns and removes a lot of constant mask loading.
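
For example, a constant AND mask whose sub-lanes are all-ones or all-zero at the byte level (illustrative IR, not one of the committed tests) can become a shuffle with a zero vector instead of a loaded constant:

    define <4 x i32> @byte_clear_mask(<4 x i32> %x) {
      ; each byte of the mask is either 0x00 or 0xff
      %m = and <4 x i32> %x, <i32 255, i32 -1, i32 0, i32 65535>
      ret <4 x i32> %m
    }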

There are a few examples of poor shuffle lowering that are exposed by this patch that will be cleaned up in future patches (e.g. merging shuffles that are separated by bitcasts, x86 legalized v8i8 zero extension uses PMOVZX+AND+AND instead of AND+PMOVZX, etc.)

Differential Revision: http://reviews.llvm.org/D11518

llvm-svn: 243831
2015-08-01 10:01:46 +00:00
Duncan P. N. Exon Smith ed013cd221 DI: Remove DW_TAG_arg_variable and DW_TAG_auto_variable
Remove the fake `DW_TAG_auto_variable` and `DW_TAG_arg_variable` tags,
using `DW_TAG_variable` in their place. Stop exposing the `tag:` field at
all in the assembly format for `DILocalVariable`.

Most of the testcase updates were generated by the following sed script:

    find test/ -name "*.ll" -o -name "*.mir" |
    xargs grep -l 'DILocalVariable' |
    xargs sed -i '' \
      -e 's/tag: DW_TAG_arg_variable, //' \
      -e 's/tag: DW_TAG_auto_variable, //'

There were only a handful of tests in `test/Assembly` that I needed to
update by hand.

(Note: a follow-up could change `DILocalVariable::DILocalVariable()` to
set the tag to `DW_TAG_formal_parameter` instead of `DW_TAG_variable`
(as appropriate), instead of having that logic magically in the backend
in `DbgVariable`.  I've added a FIXME to that effect.)

llvm-svn: 243774
2015-07-31 18:58:39 +00:00
Sanjay Patel 9ff4626028 [x86] reassociate integer multiplies using machine combiner pass
Add i16, i32, i64 imul machine instructions to the list of reassociation
candidates.

A new bit of logic is needed to handle integer instructions: they have an
implicit EFLAGS operand, so we have to make sure it's dead in order to do
any reassociation with integer ops.

Differential Revision: http://reviews.llvm.org/D11660

llvm-svn: 243756
2015-07-31 16:21:55 +00:00
Sanjay Patel 1166f2ff9f fix memcpy/memset/memmove lowering when optimizing for size
Fixing MinSize attribute handling was discussed in D11363. 
This is a prerequisite patch to doing that.

The handling of OptSize when lowering mem* functions was broken
on Darwin because it wants to ignore -Os for these cases, but the
existing logic also made it ignore -Oz (MinSize).

The Linux change demonstrates a widespread problem. The backend
doesn't usually recognize the MinSize attribute by itself; it
assumes that if the MinSize attribute exists, then the OptSize 
attribute must also exist. 

Fixing this more generally will be a follow-on patch or two.
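
A sketch of the kind of call whose lowering is affected (hypothetical function, not one of the patch's tests):

    declare void @llvm.memset.p0i8.i64(i8* nocapture, i8, i64, i32, i1)

    define void @zero_buffer(i8* %p) minsize {
      ; with only 'minsize' present, the backend previously behaved as if
      ; neither -Os nor -Oz was in effect when expanding this call
      call void @llvm.memset.p0i8.i64(i8* %p, i8 0, i64 128, i32 1, i1 false)
      ret void
    }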

Differential Revision: http://reviews.llvm.org/D11568

llvm-svn: 243693
2015-07-30 21:41:50 +00:00
Nick Lewycky c3890d2969 Fix typo "fuction" noticed in comments in AssumptionCache.h, and also all the other files that have the same typo. All comments, no functionality change! (Merely a "fuctionality" change.)
Bonus change to remove emacs major mode marker from SystemZMachineFunctionInfo.cpp because emacs already knows it's C++ from the extension. Also fix typo "appeary" in AMDGPUMCAsmInfo.h.

llvm-svn: 243585
2015-07-29 22:32:47 +00:00
Simon Pilgrim ba10f76705 [X86][SSE] Keep 32-bit target i64 vector shifts on SSE unit.
This patch improves the 32-bit target i64 constant matching to detect the shuffle vector splats that are introduced by i64 vector shift vectorization (D8416).

Differential Revision: http://reviews.llvm.org/D11327

llvm-svn: 243577
2015-07-29 21:44:27 +00:00
Simon Pilgrim 86478c6909 [X86][SSE] Vectorize i64 ASHR operations
This patch vectorizes the v2i64/v4i64 ASHR shift operations - the last remaining integer vector shifts that were still being transferred to/from the scalar unit to be computed.
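
The operation being vectorized, as a minimal IR sketch:

    define <2 x i64> @ashr_v2i64(<2 x i64> %x, <2 x i64> %y) {
      ; previously each element was moved to the scalar unit, shifted, and moved back
      %s = ashr <2 x i64> %x, %y
      ret <2 x i64> %s
    }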

Differential Revision: http://reviews.llvm.org/D11439

llvm-svn: 243569
2015-07-29 20:31:45 +00:00
Bruno Cardoso Lopes 38c0250679 Revert "[PeepholeOptimizer] Look through PHIs to find additional register sources"
Reported to break some internal tests: PR24303

This reverts commit r243486.

llvm-svn: 243540
2015-07-29 17:46:47 +00:00
Sanjay Patel 133e68b45c ignore duplicate divisor uses when transforming into reciprocal multiplies (PR24141)
PR24141: https://llvm.org/bugs/show_bug.cgi?id=24141
contains a test case where we have duplicate entries in a node's uses() list.

After r241826, we use CombineTo() to delete dead nodes when combining the uses into
reciprocal multiplies, but this fails if we encounter the just-deleted node again in
the list.

The solution in this patch is to not add duplicate entries to the list of users that
we will subsequently iterate over. For the test case, this avoids triggering the
combine divisors logic entirely because there really is only one user of the divisor.
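
For context, the reciprocal-multiply combine targets IR like the following (an illustrative sketch; the actual crasher is the testcase attached to PR24141):

    define float @two_divs(float %x, float %y, float %z) {
      ; two fast-math divisions by the same divisor can share one reciprocal
      %a = fdiv fast float %x, %z
      %b = fdiv fast float %y, %z
      %r = fadd fast float %a, %b
      ret float %r
    }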

Differential Revision: http://reviews.llvm.org/D11345

llvm-svn: 243500
2015-07-28 23:28:22 +00:00
Bruno Cardoso Lopes 3c235763e5 [PeepholeOptimizer] Look through PHIs to find additional register sources
Reapply 243271 with more fixes; although we are not handling multiple
sources with coalescable copies, we were not properly skipping this
case.

- Teaches the ValueTracker in the PeepholeOptimizer to look through PHI
instructions.
- Add a findNextSourceAndRewritePHI method to look up the multiple sources
returned by the ValueTracker and rewrite PHIs with new sources.

With these changes we can find more register sources and rewrite more
copies to allow coalescing of bitcast instructions. Hence, we eliminate
unnecessary VR64 <-> GR64 copies in x86, but it could be extended to
other archs by marking "isBitcast" on target specific instructions. The
x86 example follows:

A:
  psllq %mm1, %mm0
  movd  %mm0, %r9
  jmp C

B:
  por %mm1, %mm0
  movd  %mm0, %r9
  jmp C

C:
  movd  %r9, %mm0
  pshufw  $238, %mm0, %mm0

Becomes:

A:
  psllq %mm1, %mm0
  jmp C

B:
  por %mm1, %mm0
  jmp C

C:
  pshufw  $238, %mm0, %mm0

Differential Revision: http://reviews.llvm.org/D11197
rdar://problem/20404526

llvm-svn: 243486
2015-07-28 21:45:50 +00:00
Alex Lorenz 305e8f6312 Add a test case for r242191 ([MMX] Use the appropriate instructions for
GR64 <-> VR64 copies).

This commit adds a MIR test case for the commit r242191, which was committed
without one. This test case verifies that the ExpandPostRA pass expands the
GR64 <-> VR64 copies into the appropriate MMX_MOV instructions.

llvm-svn: 243457
2015-07-28 17:52:59 +00:00
Chih-Hung Hsieh 9843f406ec Move unit tests to target specific directories.
Differential Revision: http://reviews.llvm.org/D10522

llvm-svn: 243454
2015-07-28 17:32:49 +00:00
Sanjay Patel 94a7433cde add tests to show broken current behavior of minsize attribute
llvm-svn: 243451
2015-07-28 17:18:25 +00:00
Chih-Hung Hsieh 1e859582d6 Implement target independent TLS compatible with glibc's emutls.c.
The 'common' section TLS is not implemented.
Current C/C++ TLS variables are not placed in common section.
DWARF debug info to get the address of TLS variables is not generated yet.

clang and driver changes in http://reviews.llvm.org/D10524

  Added -femulated-tls flag to select the emulated TLS model,
  which will be used for old targets like Android that do not
  support ELF TLS models.

Added TargetLowering::LowerToTLSEmulatedModel as a target-independent
function to convert an SDNode for a TLS variable's address into a function
call to __emutls_get_address.

Added code in lib/Target/*/*ISelLowering.cpp to call LowerToTLSEmulatedModel
for TLSModel::Emulated. Although all targets supporting ELF TLS models are
enhanced, the emulated TLS model has been tested only for Android ELF targets.
Modified AsmPrinter.cpp to print the emutls_v.* and emutls_t.* variables for
emulated TLS variables.
Modified DwarfCompileUnit.cpp to skip some DIEs for emulated TLS variables.

TODO: Add proper DIE for emulated TLS variables.
      Added new unit tests with emulated TLS.
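
As a rough illustration (hypothetical variable and function names), a TLS access like the following is what gets rewritten into a call to __emutls_get_address under the emulated model:

    @counter = thread_local global i32 0

    define i32 @read_counter() {
      ; with -femulated-tls the address of @counter is obtained by calling
      ; __emutls_get_address on the corresponding emutls_v.* control variable
      %v = load i32, i32* @counter
      ret i32 %v
    }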

Differential Revision: http://reviews.llvm.org/D10522

llvm-svn: 243438
2015-07-28 16:24:05 +00:00
Simon Pilgrim df984f58ad [X86][SSE] Use bitmasks instead of shuffles where possible.
VPAND is a lot faster than VPSHUFB and VPBLENDVB - this patch ensures we attempt to lower to a basic bitmask before lowering to the slower byte shuffle/blend instructions.

Split off from D11518.

Differential Revision: http://reviews.llvm.org/D11541

llvm-svn: 243395
2015-07-28 08:54:41 +00:00
Igor Breger 8352a0ddf2 AVX512: Implemented encoding and intrinsics for VGETEXPSS/D instructions
Added tests for intrinsics and encoding.

Differential Revision: http://reviews.llvm.org/D11528

llvm-svn: 243390
2015-07-28 06:53:28 +00:00
Sanjay Patel 8c13e3680d fix invalid load folding with SSE/AVX FP logical instructions (PR22371)
This is a follow-up to the FIXME that was added with D7474 ( http://reviews.llvm.org/rL229531 ).
I thought this load folding bug had been made hard-to-hit, but it turns out to be very easy
when targeting 32-bit x86 and causes a miscompile/crash in Wine:
https://bugs.winehq.org/show_bug.cgi?id=38826
https://llvm.org/bugs/show_bug.cgi?id=22371#c25

The quick fix is to simply remove the scalar FP logical instructions from the load folding table
in X86InstrInfo, but that causes us to miss load folds that should be possible when lowering fabs,
fneg, fcopysign. So the majority of this patch is altering those lowerings to use *vector* FP
logical instructions (because that's all x86 gives us anyway). That lets us do the load folding 
legally.
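
A small sketch of the problematic shape (illustrative, not one of the committed tests): folding the 4-byte load directly into a 16-byte logical instruction would read past the loaded value, so the lowering now uses the vector forms and keeps the fold legal:

    declare float @llvm.fabs.f32(float)

    define float @abs_of_load(float* %p) {
      ; the 4-byte load must not be folded into a 16-byte wide logical op
      %x = load float, float* %p
      %a = call float @llvm.fabs.f32(float %x)
      ret float %a
    }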

Differential Revision: http://reviews.llvm.org/D11477

llvm-svn: 243361
2015-07-28 00:48:32 +00:00
NAKAMURA Takumi 69ed7170dc Tweak llvm/test/CodeGen/X86/virtual-registers-cleared-in-machine-functions-liveins.ll not to fail for targeting win32.
llvm-svn: 243341
2015-07-27 23:01:41 +00:00
Simon Pilgrim c363e7d8e0 [X86][SSE] Added shuffle tests to demonstrate missed bitmask.
llvm-svn: 243324
2015-07-27 20:41:57 +00:00
Bruno Cardoso Lopes b20841df44 Revert "[PeepholeOptimizer] Look through PHIs to find additional register sources"
Still breaks some ARM buildbots. This reverts r243271.

llvm-svn: 243318
2015-07-27 20:26:04 +00:00
Alex Lorenz 10b23525cc Reset the virtual registers in liveins when clearing the virtual registers.
This commit zeroes out the virtual register references in the machine
function's liveins in the class 'MachineRegisterInfo' when the virtual
register definitions are cleared.

Reviewers: Matthias Braun
llvm-svn: 243290
2015-07-27 17:51:59 +00:00
Bruno Cardoso Lopes 669c921bfd [PeepholeOptimizer] Look through PHIs to find additional register sources
Reapply r242295 with fixes in the implementation.

- Teaches the ValueTracker in the PeepholeOptimizer to look through PHI
instructions.
- Add a findNextSourceAndRewritePHI method to look up the multiple sources
returned by the ValueTracker and rewrite PHIs with new sources.

With these changes we can find more register sources and rewrite more
copies to allow coalescing of bitcast instructions. Hence, we eliminate
unnecessary VR64 <-> GR64 copies in x86, but it could be extended to
other archs by marking "isBitcast" on target specific instructions. The
x86 example follows:

A:
  psllq %mm1, %mm0
  movd  %mm0, %r9
  jmp C

B:
  por %mm1, %mm0
  movd  %mm0, %r9
  jmp C

C:
  movd  %r9, %mm0
  pshufw  $238, %mm0, %mm0

Becomes:

A:
  psllq %mm1, %mm0
  jmp C

B:
  por %mm1, %mm0
  jmp C

C:
  pshufw  $238, %mm0, %mm0

Differential Revision: http://reviews.llvm.org/D11197
rdar://problem/20404526

llvm-svn: 243271
2015-07-27 14:39:46 +00:00
Simon Pilgrim 65d35a14b7 [X86][SSE] Refreshed vector bit count tests.
llvm-svn: 243249
2015-07-26 17:02:25 +00:00
Simon Pilgrim 8a9c1d7d88 [X86][AVX2] Refreshed avx2 conversion tests
llvm-svn: 243248
2015-07-26 17:01:16 +00:00
Igor Breger f2460112ad Implemented encoding and intrinsics of the following instructions
vunpckhps/pd, vunpcklps/pd, 
  vpunpcklbw, vpunpckhbw, vpunpcklwd, vpunpckhwd, vpunpckldq, vpunpckhdq, vpunpcklqdq, vpunpckhqdq
Added tests for intrinsics and encoding.

Differential Revision: http://reviews.llvm.org/D11509

llvm-svn: 243246
2015-07-26 14:41:44 +00:00
Simon Pilgrim 944a5777bb [X86][SSE] Added additional vector sign/zero load extension tests.
llvm-svn: 243216
2015-07-25 14:07:20 +00:00
Simon Pilgrim 20dc35aff6 [X86][SSE] Added additional vector sign/zero extension tests.
llvm-svn: 243212
2015-07-25 11:17:35 +00:00
Duncan P. N. Exon Smith 56b893b364 DI/Verifier: Fix argument bitrot in DILocalVariable
Add a verifier check that `DILocalVariable`s of tag
`DW_TAG_arg_variable` always have a non-zero 'arg:' field, and those of
tag `DW_TAG_auto_variable` always have a zero 'arg:' field.  These are
the only configurations that are properly understood by the backend.

(Also, fix the bad examples in LangRef and test/Assembler, and fix the
bug in Kaleidoscope Ch8.)

A large number of testcases seem to have bitrotted their way forward
from some ancient version of the debug info hierarchy that didn't have
`arg:` parameters.  If you have out-of-tree testcases that start failing
in the verifier and you don't care enough to get the `arg:` right, you
may have some luck just calling:

    sed -e 's/, arg: 0/, arg: 1/'

or some such, but I hand-updated the ones in tree.

llvm-svn: 243183
2015-07-24 23:59:25 +00:00
Igor Breger 074a64e72c AVX-512: Implemented encoding, DAG lowering and intrinsics for Integer Truncate with/without saturation
Added tests for DAG lowering, encoding and intrinsics.

Differential Revision: http://reviews.llvm.org/D11218

llvm-svn: 243122
2015-07-24 17:24:15 +00:00
Sanjay Patel f2fa58e744 fix crash in machine trace metrics due to processing dbg_value instructions (PR24199)
The test in PR24199 ( https://llvm.org/bugs/show_bug.cgi?id=24199 ) crashes because machine
trace metrics was not ignoring dbg_value instructions when calculating data dependencies.

The machine-combiner pass asks machine trace metrics to calculate an instruction trace, 
does some reassociations, and calls MachineInstr::eraseFromParentAndMarkDBGValuesForRemoval()
along with MachineTraceMetrics::invalidate(). The dbg_value instructions have their operands
invalidated, but the instructions are not expected to be deleted.

On a subsequent loop iteration of the machine-combiner pass, machine trace metrics would be
called again and die while accessing the invalid debug instructions.

Differential Revision: http://reviews.llvm.org/D11423

llvm-svn: 243057
2015-07-23 22:56:53 +00:00
Michael Kuperstein 454d145395 [X86] Allow load folding into PUSH instructions
Adds pushes to the folding tables.
This also required a fix to the TD definition, since the memory forms of 
the push instructions did not have the right mayLoad/mayStore flags.

Differential Revision: http://reviews.llvm.org/D11340

llvm-svn: 243010
2015-07-23 12:23:45 +00:00
Elena Demikhovsky 482b303254 X86: Fixed assertion failure in 32-bit mode
The DAG node "SCALAR_TO_VECTOR" may only be created if the type of the scalar element is legal.
Added a check for the scalar type before creating this node.
Added a test that fails with an assertion on the current version.

Differential Revision: http://reviews.llvm.org/D11413

llvm-svn: 242994
2015-07-23 08:25:23 +00:00
Chandler Carruth fe414353db Revert r242990: "AVX-512: Implemented encoding , DAG lowering and ..."
This commit broke the build. Numerous build bots broken, and it was
blocking my progress so reverting.

It should be trivial to reproduce -- enable the BPF backend and it
should fail when running llvm-tblgen.

llvm-svn: 242992
2015-07-23 08:03:44 +00:00
Igor Breger da1b2ea955 AVX-512: Implemented encoding, DAG lowering and intrinsics for Integer Truncate with/without saturation
Added tests for DAG lowering, encoding and intrinsics.

Differential Revision: http://reviews.llvm.org/D11218

llvm-svn: 242990
2015-07-23 07:39:21 +00:00
Igor Breger 87e6397fb1 AVX: Fix ISA disabling in the AVX512VL case; some instructions should be disabled only if both AVX512BW and AVX512VL are present.
Tests added.

Differential Revision: http://reviews.llvm.org/D11414

llvm-svn: 242987
2015-07-23 07:11:14 +00:00
Asaf Badouh a5b2e5e2a7 [X86][AVX512] add reduce/range/scalef/rndScale
Includes encoding and intrinsics.

Differential Revision: http://reviews.llvm.org/D11222

llvm-svn: 242896
2015-07-22 12:00:43 +00:00
Elena Demikhovsky a26f10ce18 AVX-512: Added intrinsics for VCVT* instructions.
All SKX forms. All VCVT instructions for float/double/int/long types.

Differential Revision: http://reviews.llvm.org/D11343

llvm-svn: 242877
2015-07-22 08:56:00 +00:00
Igor Breger f7fd547e27 AVX512: Implemented the VPMADDUBSW and VPMADDWD instructions.
Added tests for intrinsics and encoding.

Differential Revision: http://reviews.llvm.org/D11351

llvm-svn: 242761
2015-07-21 07:11:28 +00:00
Sanjoy Das 93d608c3c3 [ImplicitNullChecks] Work with implicit defs.
Summary:
This change generalizes the implicit null checks pass to work with
instructions that don't have any explicit register defs.  This lets us
use X86's `cmp` against memory as faulting load instructions.

Reviewers: reames, JosephTremoulet

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D11286

llvm-svn: 242703
2015-07-20 20:31:39 +00:00
Simon Pilgrim 23a29dafda [X86][SSE] Tidied up vector CTLZ/CTTZ. NFCI.
llvm-svn: 242645
2015-07-19 17:09:43 +00:00
Elena Demikhovsky 17b906058e AVX-512: Floating point conversions for SKX - DAG Lowering.
SKX supports conversion for all FP types. Integer types include doublewords and quadwords.
I added "Legal" status for these nodes and a bunch of tests.
I added "NoVLX" for AVX DAG selection to force VLX instruction selection when VLX is supported.

Differential Revision: http://reviews.llvm.org/D11255

llvm-svn: 242637
2015-07-19 10:17:33 +00:00
Simon Pilgrim 8c81678dfa [X86][SSE] Added additional fp/int tests.
Demonstrates some shortfalls in subvector(cvt(x)) compared to cvt(subvector(x)) patterns - especially on AVX/AVX2 targets.

llvm-svn: 242614
2015-07-18 17:05:39 +00:00
Simon Pilgrim c77f72c4e4 Refreshed tests.
llvm-svn: 242613
2015-07-18 16:53:51 +00:00
Simon Pilgrim 036db52673 Refreshed tests and reordered in descending integer size.
llvm-svn: 242610
2015-07-18 16:14:56 +00:00
Simon Pilgrim 27e130f11c Tidyup shufflevector calls - don't repeat inputs if you can avoid it.
llvm-svn: 242609
2015-07-18 15:56:33 +00:00
Rafael Espindola 494a381ede Use small encodings for constants when possible.
llvm-svn: 242493
2015-07-17 00:57:52 +00:00
Simon Pilgrim 769d58f9df [X86][SSE] Added nounwind attribute to vector shift tests.
Stop i686 codegen from generating cfi directives.

llvm-svn: 242443
2015-07-16 21:14:26 +00:00
Simon Pilgrim 88cfb3a748 [X86][SSE] Updated vector conversion test names.
I'll be adding further tests shortly so need a more thorough naming convention.

llvm-svn: 242440
2015-07-16 21:00:57 +00:00
James Molloy 7395a8182c [Codegen] Add intrinsics 'absdiff' and corresponding SDNodes for absolute difference operation
This adds new intrinsics "*absdiff" for absolute difference ops to facilitate efficient code generation for the "sum of absolute differences" operation.
The patch also contains the introduction of corresponding SDNodes and basic legalization support. Sanity of the generated code is tested on X86.

This is 1st of the three patches.

Patch by Shahid Asghar-ahmad!

llvm-svn: 242409
2015-07-16 15:22:46 +00:00