Commit Graph

Tim Northover 43c0d2db50 DeadArgElim: arguments affect all returned sub-values by default.
Unless we meet an insertvalue on a path from some value to a return, that value
will be live if *any* of the return's components are live, so all of those
components must be added to the MaybeLiveUses.

Previously we were deleting arguments if sub-value 0 turned out to be dead.
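
For illustration, a minimal IR sketch of the shape involved (function name is
hypothetical): an argument that reaches an aggregate return without passing
through an insertvalue has to be treated as used by every sub-value of that
return.

  define { i32, i32 } @callee({ i32, i32 } %agg) {
    ; %agg flows to the return with no intervening insertvalue, so it is
    ; MaybeLive for both sub-value 0 and sub-value 1; it may only be
    ; removed if *both* sub-values turn out to be dead.
    ret { i32, i32 } %agg
  }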

llvm-svn: 228731
2015-02-10 19:49:18 +00:00
Bill Schmidt 82f1c775a0 [PowerPC] Fix reverted patch r227976 to avoid register assignment issues
See full discussion in http://reviews.llvm.org/D7491.

We now hide the add-immediate and call instructions together in a
separate pseudo-op, which is tagged to define GPR3 and clobber the
call-killed registers.  The PPCTLSDynamicCall pass prior to RA now
expands this op into the two separate addi and call ops, with explicit
definitions of GPR3 on both instructions, and explicit clobbers on the
call instruction.  The pass is now marked as requiring and preserving
the LiveIntervals and SlotIndexes analyses, and fixes these up after
the replacement sequences are introduced.

Self-hosting has been verified on LE P8 and BE P7 with various
optimization levels, etc.  It has also been verified with the
--no-tls-optimize flag workaround removed.

llvm-svn: 228725
2015-02-10 19:09:05 +00:00
David Majnemer a7d908eb2b X86: Emit Win64 SaveXMM opcodes at the right offset in the right order
Walk the instructions marked FrameSetup and consider any stores of XMM
registers to the stack as needing a SaveXMM opcode.

This fixes PR22521.

Differential Revision: http://reviews.llvm.org/D7527

llvm-svn: 228724
2015-02-10 19:01:47 +00:00
Hal Finkel 57c6ac5e41 [PowerPC] Support the (old) cntlz instruction alias
Some old assembly code uses the cntlz alias for cntlzw, binutils supports this,
and we should too. Fixes PR22519.

llvm-svn: 228719
2015-02-10 18:45:02 +00:00
Michael Zolotukhin 03e3518c91 Add a test case for new unrolling heuristics.
The heuristics were added in r228265 and r228434.

llvm-svn: 228713
2015-02-10 17:54:54 +00:00
Zoran Jovanovic 416886793f [mips][microMIPS] Implement movep instruction
Differential Revision: http://reviews.llvm.org/D7465

llvm-svn: 228703
2015-02-10 16:36:20 +00:00
Paul Robinson 848cf6aa3a Explicitly initialize a flag in a default constructor.
Works around a Visual C++ issue.

Patch by Douglas Yung!

llvm-svn: 228699
2015-02-10 15:30:02 +00:00
Bradley Smith e997b45076 [ARM] Add armv6s[-]m as an alias to armv6[-]m
llvm-svn: 228696
2015-02-10 15:15:08 +00:00
Simon Pilgrim d142ab7d08 [X86][AVX2] Missing AVX2 memory folding instructions
Added most of the missing vector folding patterns for AVX2 (as well as fixing the vpermpd and vpermq patterns).

Differential Revision: http://reviews.llvm.org/D7492

llvm-svn: 228688
2015-02-10 13:22:57 +00:00
Jozef Kolek e76eb41c21 [mips][microMIPS] Add disassembler tests for 16-bit instructions BREAK16 and SDBBP16
Differential Revision: http://reviews.llvm.org/D7443

llvm-svn: 228687
2015-02-10 13:20:51 +00:00
Simon Pilgrim cd32254a35 [X86][XOP] Added XOP memory folding patterns + tests
This patch adds the complete AMD Bulldozer XOP instruction set to the memory folding pattern tables for stack folding, etc.

Note: Many of the XOP instructions have multiple table entries as they can fold loads from different sources.

Differential Revision: http://reviews.llvm.org/D7484

llvm-svn: 228685
2015-02-10 12:57:17 +00:00
Jozef Kolek d68d424abf [mips][microMIPS] Fix disassembling of 16-bit microMIPS instructions LWM16 and SWM16
Differential Revision: http://reviews.llvm.org/D7436

llvm-svn: 228683
2015-02-10 12:41:13 +00:00
Andrea Di Biagio 62622d2396 [X86][FastIsel] Avoid introducing legacy SSE instructions if the target has AVX.
This patch teaches X86FastISel how to select AVX instructions for scalar
float/double convert operations.

Before this patch, X86FastISel always selected legacy SSE instructions
for FPExt (from float to double) and FPTrunc (from double to float).

For example:
\code
  define double @foo(float %f) {
    %conv = fpext float %f to double
    ret double %conv
  }
\endcode

Before (with -mattr=+avx -fast-isel) X86FastISel selected a CVTSS2SDrr, which is
legacy SSE:
  cvtss2sd %xmm0, %xmm0

With this patch, X86FastISel selects a VCVTSS2SDrr instead:
  vcvtss2sd %xmm0, %xmm0, %xmm0

Added test fast-isel-fptrunc-fpext.ll to check both the register-register and
the register-memory float/double conversion variants.

Differential Revision: http://reviews.llvm.org/D7438

llvm-svn: 228682
2015-02-10 12:04:41 +00:00
Chandler Carruth 2496910325 Revert r228556: InstCombine: propagate nonNull through assume
This commit isn't using the correct context, and is transforming calls
that are operands to loads rather than calls that are operands to an
icmp feeding into an assume. I've replied on the original review thread
with a very reduced test case and some thoughts on how to rework this.

llvm-svn: 228677
2015-02-10 08:07:32 +00:00
Nick Lewycky 1cbc13a928 Remove non-test files that appear to have been accidentally committed in r228641.
llvm-svn: 228657
2015-02-10 02:39:17 +00:00
Chandler Carruth b65d61a2e8 [x86] Fix PR22524: the DAG combiner was incorrectly handling illegal
nodes when folding bitcasts of constants.

We can't fold things and then check after-the-fact whether it was legal.
Once we have formed the DAG node, arbitrary other nodes may have been
collapsed to it. There is no easy way to go back. Instead, we need to
test for the specific folding cases we're interested in and ensure those
are legal first.

This could in theory make this less powerful for bitcasting from an
integer to some vector type, but AFAICT, that can't actually happen in
the SDAG, so it's fine. Now, we *only* whitelist specific int->fp and
fp->int bitcasts for post-legalization folding. I've added the test case
from the PR.

(Also as a note, this does not appear to be in 3.6, no backport needed)

llvm-svn: 228656
2015-02-10 02:25:56 +00:00
David Majnemer 93c22a45be X86: Emit an ABI compliant prologue and epilogue for Win64
Win64 has specific constraints on what valid prologues and epilogues look
like.  These constraints stem from the flexibility and descriptiveness
of Win64's unwind opcodes.

Prologues previously emitted by LLVM could not be represented by the
unwind opcodes, preventing operations powered by stack unwinding from
working successfully.

Differential Revision: http://reviews.llvm.org/D7520

llvm-svn: 228641
2015-02-10 00:57:42 +00:00
Adrian Prantl 34e7590e0d Debug info: When updating debug info during SROA, do not emit debug info
for any padding introduced by SROA. In particular, do not emit debug info
for an alloca that represents only the padding introduced by a previous
iteration.

Fixes PR22495.

llvm-svn: 228632
2015-02-09 23:57:22 +00:00
Adrian Prantl 27bd01f71c Debug info: Use DW_OP_bit_piece instead of DW_OP_piece in the
intermediate representation. This
- increases consistency by using the same granularity everywhere
- allows for pieces < 1 byte
- DW_OP_piece didn't actually allow storing an offset.

Part of PR22495.

llvm-svn: 228631
2015-02-09 23:57:15 +00:00
Ramkumar Ramachandra 2e4b9e0a37 PlaceSafepoints: modernize gc.result.* -> gc.result
Differential Revision: http://reviews.llvm.org/D7516

llvm-svn: 228625
2015-02-09 23:00:40 +00:00
Duncan P. N. Exon Smith b407bb2789 DebugInfo: Remove DW_TAG_constant
Remove handling for DW_TAG_constant.  We started producing it in
r110656, but reverted that in r110876 without dropping the support.
Finish the job.

llvm-svn: 228623
2015-02-09 22:48:04 +00:00
Philip Reames 0edbf2e407 Introduce more tests for PlaceSafepoints
These test the two optimizations for backedge insertion currently implemented and the split-backedge flag, which is currently off by default.

llvm-svn: 228617
2015-02-09 22:10:15 +00:00
Philip Reames 5fc82fd6b5 Minor test cleanup
a) add gc attribute
b) remove unused param

llvm-svn: 228612
2015-02-09 21:50:31 +00:00
Ramkumar Ramachandra 82ab65c7cd MemDerefPrinter: Require DataLayoutPass for higher accuracy
Without a valid data layout, dereferenceable(N) doesn't get parsed or
propagated. Since this is the key item we are testing, add a dependency
on the pass.

Differential Revision: http://reviews.llvm.org/D7508

llvm-svn: 228611
2015-02-09 21:50:03 +00:00
Philip Reames b1ed02f728 Add basic tests for PlaceSafepoints
This is just adding really simple tests which should have been part of the original submission.  When doing so, I discovered that I'd mistakenly removed required pieces when preparing the patch for upstream submission.  I fixed two such bugs in this submission.

llvm-svn: 228610
2015-02-09 21:48:05 +00:00
Ramkumar Ramachandra a7343d65f4 isDereferenceablePointer: look through gc.relocate calls
While a theoretical GC might change dereferenceability on collection,
there is no such known collector and no need to account for the case
with a flag yet.

Differential Revision: http://reviews.llvm.org/D7454

llvm-svn: 228606
2015-02-09 21:08:03 +00:00
Colin LeMahieu 955c4ff9c3 [Hexagon] Factoring classes out of store patterns.
llvm-svn: 228602
2015-02-09 20:33:46 +00:00
Sanjoy Das bf5d870dfa Bugfix: SCEV incorrectly marks certain add recurrences as nsw
When creating a scev for sext({X,+,Y}), scev checks if the expression
is equivalent to {sext X,+,zext Y}.  If it can prove that, it also
tags the original {X,+,Y} as <nsw>, which is not correct.
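
As a rough IR sketch (names are hypothetical), this is the kind of induction
variable involved; proving that sext({X,+,Y}) equals {sext X,+,zext Y} does
not justify tagging the original 32-bit recurrence as <nsw>:

  loop:
    %iv = phi i32 [ %start, %entry ], [ %iv.next, %loop ]  ; SCEV: {X,+,Y}
    %iv.ext = sext i32 %iv to i64                           ; sext({X,+,Y})
    %iv.next = add i32 %iv, %step
    br i1 %cond, label %loop, label %exit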

In the test case I run `-scalar-evolution` twice because the bug
manifests only once SCEV has run through and seen the `sext`
expressions (and then does an in-place mutation on {X,+,Y}).

Differential Revision: http://reviews.llvm.org/D7495

llvm-svn: 228586
2015-02-09 18:34:55 +00:00
Sanjay Patel 546f26acf3 fixed to test features, not CPUs
llvm-svn: 228581
2015-02-09 17:17:09 +00:00
Kit Barton 0b0cdb1cd4 This change implements the following three logical vector operations:
veqv (vector equivalence)
vnand
vorc
I increased the AddedComplexity for these instructions to 500 to ensure they are generated instead of issuing other VSX instructions.

Phabricator review: http://reviews.llvm.org/D7469

llvm-svn: 228580
2015-02-09 17:03:18 +00:00
Johannes Doerfert 2683e5676c Allow ScalarEvolution to catch more min/max cases
For the attached test case, different types are used in the ICmpInst
and SelectInst that represent the min/max expressions. However, if the
ICmpInst type is smaller, a comparison with the sign/zero extended
operands would have yielded the same result. This situation might
arise after the instruction combination pass was applied.
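
A hypothetical IR sketch of the shape now recognized: the compare happens on
the narrow type while the select produces the wide one, and extending the
compare operands would give the same result, so the select is still an smin:

  %a.ext = sext i32 %a to i64
  %b.ext = sext i32 %b to i64
  %cmp = icmp slt i32 %a, %b                     ; compare on i32
  %min = select i1 %cmp, i64 %a.ext, i64 %b.ext  ; select on i64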

Differential Revision: http://reviews.llvm.org/D7338

llvm-svn: 228572
2015-02-09 12:34:23 +00:00
Akira Hatanaka 8d3cb829ce Fix a bug in DemoteRegToStack where a reload instruction was inserted into the
wrong basic block.

This would happen when the result of an invoke was used by a phi instruction
in the invoke's normal destination block. An instruction to reload the invoke's
value would get inserted before the critical edge was split and a new basic
block (which is the correct insertion point for the reload) was created. This
commit fixes the bug by splitting the critical edge before all the reload
instructions are inserted.
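
A rough IR sketch of the problematic shape (names are hypothetical); the
reload of the demoted value has to be placed in the block created by
splitting the critical edge, not in a block chosen before the split:

  entry:
    %v = invoke i32 @may_throw()
            to label %cont unwind label %lpad
  cont:
    ; the reload feeding this phi belongs on the (split) edge entry -> cont,
    ; since it cannot be inserted after the invoke in %entry itself
    %use = phi i32 [ %v, %entry ], [ 0, %other ]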

Also, hoist the code which computes the insertion point up to the only
place that needs that computation.

rdar://problem/15978721

llvm-svn: 228566
2015-02-09 06:38:23 +00:00
David Majnemer 1de3094d78 MC: Calculate intra-section symbol differences correctly for COFF
This fixes PR22060.

llvm-svn: 228565
2015-02-09 06:31:31 +00:00
Tim Northover 705d2af9e1 DeadArgElim: fix mismatch in accounting of array return types.
Some parts of DeadArgElim were only considering the individual fields
of StructTypes separately, but others (where insertvalue &
extractvalue instructions occur) also looked into ArrayTypes.

This one is an actual bug; the mismatch can lead to an argument being
considered used by a return sub-value that isn't being tracked (and
hence is dead by default). It then gets incorrectly eliminated.

llvm-svn: 228559
2015-02-09 01:21:00 +00:00
Tim Northover 854c927de5 DeadArgElim: assess uses of entire return value aggregate.
Previously, a non-extractvalue use of an aggregate return value meant
the entire return was considered live (the algorithm gave up
entirely). This was correct, but conservative. It's better to actually
look at that Use, making the analysis results apply to all sub-values
under consideration.

E.g.

  %val = call { i32, i32 } @whatever()
  [...]
  ret { i32, i32 } %val

The return is using the entire aggregate (sub-values 0 and 1). We can
still simplify @whatever if we can prove that this return is itself
unused.

Also unifies the logic slightly between aggregate and non-aggregate
cases.

llvm-svn: 228558
2015-02-09 01:20:53 +00:00
Ramkumar Ramachandra a021ee62ca InstCombine: propagate nonNull through assume
Make assume (load (call|invoke) != null) set nonNull return attribute
for the call and invoke. Also include tests.
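
A rough IR sketch of the intended pattern (function name is hypothetical):

  %p = call i8* @get_ptr()
  %nonnull = icmp ne i8* %p, null
  call void @llvm.assume(i1 %nonnull)
  ; after the combine, the call to @get_ptr carries the nonnull return attribute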

Differential Revision: http://reviews.llvm.org/D7107

llvm-svn: 228556
2015-02-09 01:13:13 +00:00
Sanjoy Das f2e931cae9 Bugfix: ScalarEvolution incorrectly assumes that the start of certain
add recurrences doesn't overflow.

This change makes the optimization more restrictive.  It still assumes
that an overflowing `add nsw` is undefined behavior; and this change
will need revisiting once we have a consistent semantics for poison
values.

Differential Revision: http://reviews.llvm.org/D7331

llvm-svn: 228552
2015-02-08 22:52:17 +00:00
Sanjay Patel baf0a2415c fix test attributes; this is an SSE2 test, not a Nehalem test
llvm-svn: 228546
2015-02-08 21:14:27 +00:00
Sanjay Patel e6eed52325 fix test attributes; this is an x86-64 test, not a Nehalem test
llvm-svn: 228545
2015-02-08 21:10:40 +00:00
Sanjay Patel 5cf03374a1 fix test attributes; these are SSE2 tests, not Nehalem tests
llvm-svn: 228544
2015-02-08 21:05:03 +00:00
Sanjay Patel 9be09a3617 fix test attributes; these are SSE2 tests, not Nehalem tests
llvm-svn: 228541
2015-02-08 20:50:58 +00:00
Sanjay Patel ff9dec22ba fix test attributes; these are x86-64 tests, not Nehalem tests
llvm-svn: 228536
2015-02-08 20:05:53 +00:00
Sanjay Patel 972425d221 fix test attributes; these are MMX tests, not Nehalem tests
llvm-svn: 228535
2015-02-08 20:01:12 +00:00
Sanjay Patel c871fe746b fix test attributes; these are SSE2 tests, not Nehalem tests
llvm-svn: 228534
2015-02-08 19:50:55 +00:00
Sanjay Patel 3fa03da6f8 generalize test; nothing Nehalem-specific here
llvm-svn: 228532
2015-02-08 19:38:25 +00:00
Simon Pilgrim e490385843 [X86][AVX2] AVX2 broadcast + permute memory folding tests.
llvm-svn: 228528
2015-02-08 18:33:13 +00:00
Bjorn Steinbrink 5ec7522771 Correctly combine alias.scope metadata by a union instead of intersecting
Summary:
The alias.scope metadata represents sets of things an instruction might
alias with. When generically combining the metadata from two
instructions the result must be the union of the original sets, because
the new instruction might alias with anything any of the original
instructions aliased with.

Reviewers: hfinkel

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D7490

llvm-svn: 228525
2015-02-08 17:07:14 +00:00
Tim Northover 45aa89c925 ARM & AArch64: teach LowerVSETCC that output type size may differ from input.
While various DAG combines try to guarantee that a vector SETCC
operation will have the same output size as input, there's nothing
intrinsic to either creation or LegalizeTypes that actually guarantees
it, so the function needs to be ready to handle a mismatch.

Fortunately this is easy enough, just extend or truncate the naturally
compared result.

I couldn't reproduce the failure in other backends that I know have
SIMD, so it's probably only an issue for these two due to shared
heritage.

Should fix PR21645.

llvm-svn: 228518
2015-02-08 00:50:47 +00:00
Craig Topper 1d472db8cc [X86] Add GETSEC instruction.
llvm-svn: 228514
2015-02-07 23:36:36 +00:00
Simon Pilgrim 7440699267 [X86][AVX2] AVX2 integer stack folding tests.
This adds tests for the remaining AVX2 instructions that currently support memory folding.

llvm-svn: 228513
2015-02-07 23:28:16 +00:00
Simon Pilgrim a2618679a8 [X86][AVX] Added missing stack folding support + test for vptest ymm instruction
llvm-svn: 228509
2015-02-07 21:44:06 +00:00
Simon Pilgrim cbc3b2fdc2 [X86][SSE] Added missing stack folding tests for (v)mpsadbw instruction
llvm-svn: 228506
2015-02-07 21:20:11 +00:00
Benjamin Kramer 17d9015d27 ValueTracking: Make isBytewiseValue simpler and more powerful at the same time.
Turns out there is a simpler way than binary decomposition of checking that
all bytes in a word are equal.

llvm-svn: 228503
2015-02-07 19:29:02 +00:00
Bjorn Steinbrink 71bf3b800a Properly update AA metadata when performing call slot optimization
Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D7482

llvm-svn: 228500
2015-02-07 17:54:36 +00:00
Ahmed Bougacha 29efe3b287 [BasicAA] Try to disambiguate GEPs through arrays of structs into
different fields.

We can show that two GEPs off of the same (possibly multidimensional)
array of structs, into different fields, can't alias.  Quoting:

For two GEPOperators GEP1 and GEP2, if we find that:
- both GEPs begin indexing from the exact same pointer;
- the last indices in both GEPs are constants, indexing into a struct;
- said indices are different, hence, the pointed-to fields are different;
- and both GEPs only index through arrays prior to that;

this lets us determine that the struct that GEP1 indexes into and the
struct that GEP2 indexes into must either precisely overlap or be
completely disjoint.  Because they cannot partially overlap, indexing
into different non-overlapping fields of the struct will never alias.
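
A hypothetical sketch over an array of three-i32 structs (the i32x3 case
mentioned below), using 2015-era IR syntax:

  %struct = type { i32, i32, i32 }
  %f0 = getelementptr [16 x %struct]* %arr, i64 0, i64 %i, i32 0
  %f2 = getelementptr [16 x %struct]* %arr, i64 0, i64 %j, i32 2
  ; same base pointer, array-only indexing up to the final constant struct
  ; indices (0 vs. 2), so the two accessed fields cannot alias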

The other BasicAA::aliasGEP rules worked in some cases, but not all
(for example, the i32x3 struct in the testcase).
We can add this simple ad-hoc rule to complement them.

rdar://19717375
Differential Revision: http://reviews.llvm.org/D7453

llvm-svn: 228498
2015-02-07 17:04:29 +00:00
Simon Pilgrim 0238b96c06 [X86] Force fp stack folding tests to keep to specific domain.
General boolean instructions (AND, ANDN, OR, XOR) need to use a specific domain instruction (and not just the default).

llvm-svn: 228495
2015-02-07 16:14:55 +00:00
Simon Pilgrim 947ce78d49 [X86][AVX2] More AVX2 integer stack folding tests.
llvm-svn: 228494
2015-02-07 16:07:27 +00:00
David Majnemer 5614ea9aae MC: Emit COFF section flags in the "proper" order
COFF section flags are order-dependent:
  'rd' will make a read-write section because 'd' implies write
  'dr' will make a read-only section because 'r' disables write

llvm-svn: 228490
2015-02-07 08:26:40 +00:00
Hal Finkel 291cc7bacd [PowerPC] Handle loop predecessor invokes
If a loop predecessor has an invoke as its terminator, and the return value
from that invoke is used to determine the loop iteration space, then we can't
insert a computation based on that value in the loop predecessor prior to the
terminator (oops). If there's such an invoke, or just no predecessor for that
matter, insert a new loop preheader.

llvm-svn: 228488
2015-02-07 07:32:58 +00:00
Hal Finkel 12b607ae1b [PowerPC] Fixup incomplete revert of test/CodeGen/PowerPC/tls-pic.ll
llvm-svn: 228467
2015-02-06 23:30:06 +00:00
Kevin Enderby 74b43cb403 Add code to llvm-objdump so the -section option with -macho will dump literal
sections with the Mach-O S_{4,8,16}BYTE_LITERALS section types.

llvm-svn: 228465
2015-02-06 23:25:38 +00:00
Simon Pilgrim 76cb85a6c7 [X86][AVX2] Begun adding AVX2 integer stack folding tests.
llvm-svn: 228462
2015-02-06 23:12:15 +00:00
Hal Finkel 0d2a1515d5 Revert "r227976 - [PowerPC] Yet another approach to __tls_get_addr" and related fixups
Unfortunately, even with the workaround of disabling the linker TLS
optimizations in Clang restored (which has already been done), this still
breaks self-hosting on my P7 machine (-O3 -DNDEBUG -mcpu=native).

Bill is currently working on an alternate implementation to address the TLS
issue in a way that also fully elides the linker bug (which, unfortunately,
this approach did not fully), so I'm reverting this now.

llvm-svn: 228460
2015-02-06 23:07:40 +00:00
Duncan P. N. Exon Smith af677ebb41 IR: Allow 32-bits for lines in debug location
Remove unnecessary restriction of 24-bits for line numbers in
`MDLocation`.

The rest of the debug info schema (with the exception of local
variables) uses 32-bits for line numbers.  As I introduce the
specialized nodes, it makes sense to canonicalize on one size or the
other.

llvm-svn: 228455
2015-02-06 22:50:13 +00:00
Evgeniy Stepanov 4e12057760 [msan] Fix "missing origin" in atomic store.
An atomic store always makes the target location fully initialized (in the
current implementation). It should not store an origin. Initialized memory can't
have a meaningful origin, and, due to origin granularity (4 bytes), there is a
chance that this extra store would overwrite a meaningful origin for an adjacent
location.

llvm-svn: 228444
2015-02-06 21:47:39 +00:00
Reid Kleckner 526ec29370 Don't dllexport declarations
Fixes PR22488

llvm-svn: 228411
2015-02-06 17:59:49 +00:00
Matthias Braun 2e404597f4 InstCombine: Combine select sequences into a single select
Normalize
select(C0, select(C1, a, b), b) -> select((C0 & C1), a, b)
select(C0, a, select(C1, a, b)) -> select((C0 | C1), a, b)

This normal form may enable further combines on the And/Or and shortens
paths for the values. Many targets prefer the other but can go back
easily in CodeGen.
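
For the first rule, a minimal IR sketch of the rewrite (value names are
hypothetical):

  ; before
  %inner = select i1 %c1, i32 %a, i32 %b
  %outer = select i1 %c0, i32 %inner, i32 %b
  ; after
  %cond  = and i1 %c0, %c1
  %outer = select i1 %cond, i32 %a, i32 %b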

Differential Revision: http://reviews.llvm.org/D7399

llvm-svn: 228409
2015-02-06 17:49:36 +00:00
Daniel Sanders 808141c2af [mips] Fix FileCheck prefixes with whitespace between 'CHECK' and ':'
llvm-svn: 228403
2015-02-06 16:37:30 +00:00
Peter Zotov 7b9b9d8c85 [OCaml] Add Llvm.build_empty_phi.
llvm-svn: 228395
2015-02-06 13:42:03 +00:00
Craig Topper d193763f1c [X86] Add assembler and disassembler test cases for clflushopt, clwb, pcommit, xsaves, xrstors, xsavec
llvm-svn: 228385
2015-02-06 06:19:28 +00:00
Craig Topper bc7a86b98d [X86] Remove a ton of duplicate test cases for the assembler.
llvm-svn: 228383
2015-02-06 05:50:50 +00:00
Michel Danzer a786077253 R600/SI: Amend a test to ensure WQM is enabled for LDS in pixel shaders
Reviewed-by: Tom Stellard <tom@stellard.net>
llvm-svn: 228374
2015-02-06 02:51:29 +00:00
Michel Danzer d89a557f33 R600/SI: Don't enable WQM for V_INTERP_* instructions v2
Doesn't seem necessary anymore. I think this was mostly compensating for
not enabling WQM for texture sampling instructions.

v2: Add test coverage
Reviewed-by: Tom Stellard <tom@stellard.net>
llvm-svn: 228373
2015-02-06 02:51:25 +00:00
Michel Danzer 494391ba47 R600/SI: Also enable WQM for image opcodes which calculate LOD v3
If whole quad mode isn't enabled for these, the level of detail is
calculated incorrectly for pixels along diagonal triangle edges, causing
artifacts.

v2: Use a TSFlag instead of lots of switch cases
v3: Add test coverage

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=88642
Reviewed-by: Tom Stellard <tom@stellard.net>
llvm-svn: 228372
2015-02-06 02:51:20 +00:00
Ramkumar Ramachandra 8378ac3684 Introduce print-memderefs to test isDereferenceablePointer
Since testing the function indirectly is tricky, introduce a direct
print-memderefs pass, in the same spirit as print-memdeps, which prints
dereferenceability information matched by FileCheck.

Differential Revision: http://reviews.llvm.org/D7075

llvm-svn: 228369
2015-02-06 01:46:42 +00:00
Matthias Braun 8ab869010f AArch64: Make test more robust.
Avoid the creation of select instructions which can result in different
scheduling of the selects.

I also added a bunch of additional volatile stores. Those avoid a
CodeGen problem (bug?) where normalizing and denormalizing the control flow
moves all shift instructions into the first block, where ISel can't match
them together with the cmps.

llvm-svn: 228362
2015-02-05 23:52:14 +00:00
Matthias Braun 5cfba2f573 X86: Test cleanup
Use FileCheck, make it more consistent and do not rely on unoptimized
or(cmp,cmp) getting combined for max to be matched.

llvm-svn: 228361
2015-02-05 23:52:12 +00:00
Colin LeMahieu de68b66b4d [Hexagon] Simplifying and formatting several patterns. Changing a pattern multiply to be expanded.
llvm-svn: 228347
2015-02-05 21:13:25 +00:00
Ahmed Bougacha a7e33112f0 [BasicAA] Add datalayouts to make some tests more useful. NFC.
Fixes PR22462: two of the tests have regressed for a while,
but were using CHECK-NOT to match "May:".  The actual output
was changed to "MayAlias:" at some point, which made the tests
useless.
Two others return MayAlias only because of a lack of analysis;
BasicAA returns PartialAlias in those cases, when a datalayout
is present.

llvm-svn: 228346
2015-02-05 21:10:14 +00:00
Hal Finkel c9dd02066c [PowerPC] Prepare loops for pre-increment loads/stores
PowerPC supports pre-increment load/store instructions (except for Altivec/VSX
vector load/stores). Using these on embedded cores can be very important, but
most loops are not naturally set up to use them. We can often change that,
however, by placing loops into a non-canonical form. Generically, this means
transforming loops like this:

  for (int i = 0; i < n; ++i)
    array[i] = c;

to look like this:

  T *p = &array[-1];
  for (int i = 0; i < n; ++i)
    *++p = c;

The key point is that the addresses accessed are pulled into dedicated PHIs and
"pre-decremented" in the loop preheader. This allows the use of pre-increment
load/store instructions without loop peeling.

A target-specific late IR-level pass (running post-LSR), PPCLoopPreIncPrep, is
introduced to perform this transformation. I've used this code out-of-tree for
generating code for the PPC A2 for over a year. Somewhat to my surprise,
running the test suite + externals on a P7 with this transformation enabled
showed no performance regressions, and one speedup:

External/SPEC/CINT2006/483.xalancbmk/483.xalancbmk
	-2.32514% +/- 1.03736%

So I'm going to enable it on everything for now. I was surprised by this
because, on the POWER cores, these pre-increment load/store instructions are
cracked (and, thus, harder to schedule effectively). But seeing no regressions,
and feeling that it is generally easier to split instructions apart late than
it is to combine them late, this might be the better approach regardless.

In the future, we might want to integrate this functionality into LSR (but
currently LSR does not create new PHI nodes, so (for that and other reasons)
significant work would need to be done).

llvm-svn: 228328
2015-02-05 18:43:00 +00:00
Hal Finkel 65d1cbf9df [PowerPC] Generate pre-increment floating-point ld/st instructions
PowerPC supports pre-increment floating-point load/store instructions, both r+r
and r+i, and we had patterns for them, but they were not marked as legal. Mark
them as legal (and add a test case).

llvm-svn: 228327
2015-02-05 18:42:53 +00:00
Ahmed Bougacha e892d13d90 [CodeGen] Add hook/combine to form vector extloads, enabled on X86.
The combine that forms extloads used to be disabled on vector types,
because "None of the supported targets knows how to perform load and
sign extend on vectors in one instruction."

That's not entirely true, since at least SSE4.1 X86 knows how to do
those sextloads/zextloads (with PMOVS/ZX).
But there are several aspects to getting this right.
First, vector extloads are controlled by a profitability callback.
For instance, on ARM, several instructions have folded extload forms,
so it's not always beneficial to create an extload node (and trying to
match extloads is a whole 'nother can of worms).

The interesting optimization enables folding of s/zextloads to illegal
(splittable) vector types, expanding them into smaller legal extloads.

It's not ideal (it introduces some legalization-like behavior in the
combine) but it's better than the obvious alternative: form illegal
extloads, and later try to split them up.  If you do that, you might
generate extloads that can't be split up, but have a valid ext+load
expansion.  At vector-op legalization time, it's too late to generate
this kind of code, so you end up forced to scalarize. It's better to
just avoid creating egregiously illegal nodes.

This optimization is enabled unconditionally on X86.

Note that the splitting combine is happy with "custom" extloads. As
is, this bypasses the actual custom lowering, and just unrolls the
extload. But from what I've seen, this is still much better than the
current custom lowering, which does some kind of unrolling at the end
anyway (see for instance load_sext_4i8_to_4i64 on SSE2, and the added
FIXME).

Also note that the existing combine that forms extloads is now also
enabled on legal vectors.  This doesn't have a big effect on X86
(because sext+load is usually combined to sext_inreg+aextload).
On ARM it fires on some rare occasions; that's for a separate commit.

Differential Revision: http://reviews.llvm.org/D6904

llvm-svn: 228325
2015-02-05 18:31:02 +00:00
Andrew Trick 7fc4583eda X86 ABI fix for return values > 24 bytes.
The return value's address must be returned in %rax.
i.e. the callee needs to copy the sret argument (%rdi)
into the return value (%rax).

This probably won't manifest as a bug when the caller is LLVM-compiled
code. But it is an ABI guarantee and tools expect it.

llvm-svn: 228321
2015-02-05 18:09:05 +00:00
Tom Stellard eea3f70432 R600/SI: Fix bug in TTI loop unrolling preferences
We should be setting UnrollingPreferences::MaxCount to MAX_UINT instead
of UnrollingPreferences::Count.

Count is a 'forced unrolling factor', while MaxCount sets an upper
limit to the unrolling factor.

Setting Count to MAX_UINT was causing the loop in the testcase to be
unrolled 15 times, when it only had a maximum of 4 iterations.

llvm-svn: 228303
2015-02-05 15:32:18 +00:00
Tom Stellard 0f29de78e6 R600/SI: Fix bug from insertion of llvm.SI.end.cf into loop headers
The llvm.SI.end.cf intrinsic is used to mark the end of if-then blocks,
if-then-else blocks, and loops.  It is responsible for updating the
exec mask to re-enable threads that had been masked during the preceding
control flow block.  For example:

s_mov_b64 exec, 0x3                 ; Initial exec mask
s_mov_b64 s[0:1], exec              ; Saved exec mask
v_cmpx_gt_u32 exec, s[2:3], v0, 0   ; llvm.SI.if
do_stuff()
s_or_b64 exec, exec, s[0:1]         ; llvm.SI.end.cf

The bug fixed by this patch was one where the llvm.SI.end.cf intrinsic
was being inserted into the header of loops.  This would happen when
an if block terminated in a loop header and we would end up with
code like this:

s_mov_b64 exec, 0x3                 ; Initial exec mask
s_mov_b64 s[0:1], exec              ; Saved exec mask
v_cmpx_gt_u32 exec, s[2:3], v0, 0   ; llvm.SI.if
do_stuff()

LOOP:                       ; Start of loop header
s_or_b64 exec, exec, s[0:1] ; llvm.SI.end.cf <- BUG: The exec mask has the
                              same value at the beginning of each loop
                              iteration.
do_stuff();
s_cbranch_execnz LOOP

The fix is to create a new basic block before the loop and insert the
llvm.SI.end.cf there.  This way the exec mask is restored before the
start of the loop instead of at the beginning of each iteration.

llvm-svn: 228302
2015-02-05 15:32:15 +00:00
Bill Schmidt 433b1c3aae [PowerPC] Implement the vclz instructions for PWR8
Patch by Kit Barton.

Add the vector count leading zeros instruction for byte, halfword,
word, and doubleword sizes.  This is a fairly straightforward addition
after the changes made for vpopcnt:

 1. Add the correct definitions for the various instructions in
    PPCInstrAltivec.td
 2. Make the CTLZ operation legal on vector types when using P8Altivec
    in PPCISelLowering.cpp 

Test Plan

Created new test case in test/CodeGen/PowerPC/vec_clz.ll to check the
instructions are being generated when the CTLZ operation is used in
LLVM.

Check the encoding and decoding in test/MC/PowerPC/ppc_encoding_vmx.s
and test/Disassembler/PowerPC/ppc_encoding_vmx.txt respectively.

llvm-svn: 228301
2015-02-05 15:24:47 +00:00
Bruno Cardoso Lopes ab9ae87623 [X86][MMX] Handle i32->mmx conversion using movd
Implement a BITCAST dag combine to transform i32->mmx conversion patterns
into an X86-specific node (MMX_MOVW2D) and guarantee that moves between
i32 and x86mmx are better handled, i.e., don't use store-load to do the
conversion.

llvm-svn: 228293
2015-02-05 13:23:07 +00:00
Bruno Cardoso Lopes cc6089d2e0 [X86][MMX] Add several bitcast tests
Avoid regression in previously supported MMX code by adding different
combinations of tests which exercise MMX bitcasts. Small improvements
to these patterns should come next.

llvm-svn: 228292
2015-02-05 13:22:57 +00:00
Michael Kuperstein d2b6fdbc31 Teach isDereferenceablePointer() to look through bitcast constant expressions.
This fixes a LICM regression due to the new load+store pair canonicalization.

Differential Revision: http://reviews.llvm.org/D7411

llvm-svn: 228284
2015-02-05 09:15:37 +00:00
Matt Arsenault abd271b4e8 R600/SI: Fix i64 truncate to i1
llvm-svn: 228273
2015-02-05 06:05:13 +00:00
Cameron Esfahani 17177d1e84 Value soft float calls as more expensive in the inliner.
Summary: When evaluating floating point instructions in the inliner, ask the TTI whether it is an expensive operation.  By default, it's not an expensive operation.  This keeps the default behavior the same as before.  The ARM TTI has been updated to return back TCC_Expensive for targets which don't have hardware floating point.

Reviewers: chandlerc, echristo

Reviewed By: echristo

Subscribers: t.p.northover, aemerson, llvm-commits

Differential Revision: http://reviews.llvm.org/D6936

llvm-svn: 228263
2015-02-05 02:09:33 +00:00
Ahmed Bougacha c7f241cba9 [ARM] Use patterns instead of hardcoded regs in test. NFC.
llvm-svn: 228259
2015-02-05 01:52:19 +00:00
Ahmed Bougacha daaf3d9b58 [ARM] Make testcase more explicit. NFC.
The q8/d16 thing is silly; I'd be happy to hear about a better
way to write those tests where simple substitution isn't enough.

llvm-svn: 228258
2015-02-05 01:45:28 +00:00
Tom Stellard f6afc80cc0 R600/SI: Enable subreg liveness by default
llvm-svn: 228228
2015-02-04 23:14:18 +00:00
Kevin Enderby 10ba041188 Add code to llvm-objdump so the -section option with -macho will dump ‘C’ string
sections with the Mach-O S_CSTRING_LITERALS section type.

llvm-svn: 228198
2015-02-04 21:38:42 +00:00
Rafael Espindola a092f17580 Don't try to make sections in comdats SHF_MERGE.
Parts of LLVM were not expecting it, and we wouldn't print
the entity size of the section.

Given what comdats are used for, having SHF_MERGE sections would be
just a small improvement, so just disable it for now.

Fixes pr22463.

llvm-svn: 228196
2015-02-04 21:27:24 +00:00
Tom Stellard 33e64c66ac R600/SI: Expand misaligned 16-bit memory accesses
llvm-svn: 228190
2015-02-04 20:49:52 +00:00
Tom Stellard c7e448c92e R600/SI: Make more store operations legal
v2i32, i32, trunc i32 to i16, and trunc i32 to i8 stores are legal for
all address spaces.  We had marked them as custom in order to lower
them for the private address space, but this is no longer necessary.

This enables lowering of misaligned stores of these types in the
DAGLegalizer.

llvm-svn: 228189
2015-02-04 20:49:51 +00:00
Tom Stellard 096b8c1e6d R600: Don't promote i64 stores to v2i32 during DAG legalization
We take care of this during instruction selection now.  This
fixes a potential infinite loop when lowering misaligned stores.

llvm-svn: 228188
2015-02-04 20:49:49 +00:00
Tom Stellard 071ec90b68 StructurizeCFG: Use a reverse post-order traversal
We were previously doing a post-order traversal and operating on the
list in reverse; however, this would occasionally cause backedges for
loops to be visited before some of the other blocks in the loop.

We now use a reverse post-order traversal, which avoids this issue.

The reverse post-order traversal is not completely ideal, so we need
to manually fix up the list to ensure that inner loop backedges are
visited before outer loop backedges.

llvm-svn: 228186
2015-02-04 20:49:44 +00:00
Bill Schmidt caf7e8b147 Add missing test case from r228046
llvm-svn: 228182
2015-02-04 20:00:04 +00:00
Duncan P. N. Exon Smith 920df5c1bb Utils: Resolve cycles under distinct MDNodes
Track unresolved nodes under distinct `MDNode`s during `MapMetadata()`,
and resolve them at the end.  Previously, these cycles wouldn't get
resolved.

llvm-svn: 228180
2015-02-04 19:44:34 +00:00
Matthias Braun 26e7ea6267 MachineCSE: Clear dead-def flag on CSE.
In case CSE reuses a previously unused register, the dead-def flag has to
be cleared on the def operand, as exposed by the arm64-cse.ll test.

This fixes PR22439 and the corresponding rdar://19694987

Differential Revision: http://reviews.llvm.org/D7395

llvm-svn: 228178
2015-02-04 19:35:16 +00:00
Michael Kuperstein cd63c5fa73 Fixes a bug in vector load legalization that confused bits and bytes.
Differential Revision: http://reviews.llvm.org/D7400

llvm-svn: 228168
2015-02-04 18:54:01 +00:00
Colin LeMahieu c0434466e4 [Hexagon] Adding encoding information for absolute-reg mode stores. Xfailing a test until constant extenders are correctly put in the same packet.
llvm-svn: 228158
2015-02-04 17:52:06 +00:00
Bradley Smith 9f4cd59e80 [ARM] Fix subtarget feature set truncation when using .cpu directive
This is a bug caused by storing the feature bitset in a 32-bit
variable when it is a 64-bit mask, discarding the top half of the feature set.

llvm-svn: 228151
2015-02-04 16:23:24 +00:00
Zoran Jovanovic 5a1a780c2a [mips][microMIPS] Implement CodeGen support for SW16 and LW16 instructions
Differential Revision: http://reviews.llvm.org/D6581

llvm-svn: 228149
2015-02-04 15:43:17 +00:00
Daniel Sanders a9aab74304 [mips] Remove unused check prefix from tests. NFC.
Reviewers: vmedic

Reviewed By: vmedic

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D7376

llvm-svn: 228145
2015-02-04 14:48:39 +00:00
Renato Golin 6088504499 Adding support to LLVM for targeting Cortex-A72
Currently, Cortex-A72 is modelled as a Cortex-A57, except that the fp
load balancing pass isn't enabled for Cortex-A72 as it's not
profitable to have it enabled for this core.

Patch by Ranjeet Singh.

llvm-svn: 228140
2015-02-04 13:31:29 +00:00
Chandler Carruth 4d31f58c88 [x86] Give movss and movsd execution domains in the x86 backend.
This associates movss and movsd with the packed single and packed double
execution domains (resp.). While this is largely cosmetic, as we now
don't have weird ping-pong-ing between single and double precision, it
is also useful because it keeps the domain fixing algorithm from seeing
domain breaks that don't actually exist. It will also be much more
important if we have an execution domain default other than packed
single, as that would cause us to mix movss and movsd with integer
vector code on a regular basis, a very bad mixture.

llvm-svn: 228135
2015-02-04 10:58:53 +00:00
Chandler Carruth 78c8dcd9d3 [x86] Remove a low-value test that was just checking how we cleared
a register. We have lots of tests covering this.

llvm-svn: 228133
2015-02-04 10:47:34 +00:00
Chandler Carruth bb525e336b [x86] Mechanically update a bunch of tests' check lines using the latest
version of the script.

Changes include:
- Using the VEX prefix
- Skipping more detail when we have useful shuffle comments to match
- Matching more shuffle comments that have been added to the printer
  (yay!)
- Matching the destination registers of some AVX instructions
- Stripping trailing whitespace that crept in
- Fixing indentation issues

Nothing interesting going on here. I'm just trying really hard to ensure
these changes don't show up in the diffs with actual changes to the
backend.

llvm-svn: 228132
2015-02-04 10:46:53 +00:00
Renato Golin 2a5c0a51ce Reverting VLD1/VST1 base-updating/post-incrementing combining
This reverts patches 223862, 224198, 224203, and 224754, which were all
related to the vector load/store combining and were reverted/reapplied
a few times due to the same alignment problems we're seeing now.

Further tests, mainly self-hosting Clang, will be needed to reapply this
patch in the future.

llvm-svn: 228129
2015-02-04 10:11:59 +00:00
Chandler Carruth 22b1525ae8 [x86] Include the destination register in the check-lines for AVX
instructions.

No actual change here.

llvm-svn: 228127
2015-02-04 09:18:27 +00:00
Chandler Carruth 18ba596609 [x86] Add some tests I missed in the prior commit to cover blends with
zero for v8i16 as well.

These exhibit the same domain badness, but also exhibit other weaknesses
in our blend lowering. More fixes to come.

llvm-svn: 228126
2015-02-04 09:15:46 +00:00
Chandler Carruth 024cf8efd7 [x86] Start to introduce bit-masking based blend lowering.
This is the simplest form of bit-math based blending which only fires
when we are blending with zero and is relatively profitable. I've only
enabled this path on very specific lowering strategies. I'm planning to
widen its applicability in subsequent patches, but so far you'll notice
that even though we get fewer shufps instructions, we *still* do the bit
math in the FP execution port. I'm looking into why this is still
happening.

llvm-svn: 228124
2015-02-04 09:06:05 +00:00
Chandler Carruth 872d80e7a4 [x86] Add tests for blends-with-zero on 4-element vectors.
llvm-svn: 228122
2015-02-04 09:05:58 +00:00
Bill Schmidt 1354f7c5fa [PowerPC] Handle 32-bit targets properly in PPCTLSDynamicCall.cpp
llvm-svn: 228116
2015-02-04 05:51:56 +00:00
Frederic Riss b61f01f1c2 Fix some unnoticed/unwanted behavior change from r222319.
The ARM assembler allows register alias redefinitions as long as it
targets the same register. r222319 broke that. In the AArch64 case
it would just produce a new warning, but in the ARM case it would
error out on previously accepted assembler.

llvm-svn: 228109
2015-02-04 03:10:03 +00:00
Kostya Serebryany 77cc729ad7 [sanitizer] add another workaround for PR 17409: when over a threshold emit coverage instrumentation as calls.
llvm-svn: 228102
2015-02-04 01:21:45 +00:00
Kevin Enderby 95df54c819 Add code to llvm-objdump so the -section option with -macho will disassemble sections
that have attributes indicating they contain instructions.

llvm-svn: 228101
2015-02-04 01:01:38 +00:00
Chandler Carruth abd09a1f35 [x86] Refresh the checks of a number of tests using
update_llc_test_checks.py.

The exact format of the checks has changed over time. This includes
different indenting rules, new shuffle comments that have been added,
and more operand hiding behind regular expressions.

No functional change to the tests are expected here, but this will make
subsequent patches have a clean diff as they change shuffle lowering.

llvm-svn: 228097
2015-02-04 00:58:42 +00:00
Chandler Carruth abde67eb1c [x86] Switch to using the long '--check-prefix' form which the
update_llc_test_checks.py script uses, and refresh the checks in this
test.

No functionality changed here, just bringing this test up to work with
automated updates using the python script.

llvm-svn: 228096
2015-02-04 00:58:40 +00:00
Chandler Carruth 52332dc620 [x86] Port this test to use utils/update_llc_test_checks.py.
This will make it easy to update as I change some parts of the X86
backend, makes it more clear what instruction differences are
introduced, and I find it makes it a bit easier to read as well.

llvm-svn: 228095
2015-02-04 00:58:37 +00:00
Sanjay Patel b82b8d6b84 improved CHECK
llvm-svn: 228086
2015-02-04 00:24:06 +00:00
Owen Anderson 21b1788ad0 Remove a gross usage of environment variables in MachineVerifier, replacing it with support for setting the -verify-machineinstrs flag via an environment variable in LIT.
This preserves the handy functionality of force-enabling the MachineVerifier, without the need to embed usage of environment variables in LLVM client applications.

llvm-svn: 228079
2015-02-04 00:02:59 +00:00
Simon Pilgrim 46cd4f7400 [X86][SSE] psrl(w/d/q) and psll(w/d/q) bit shifts for SSE2
Patch to match cases where shuffle masks can be reduced to bit shifts. Similar to byte shift shuffle matching from D5699.

Differential Revision: http://reviews.llvm.org/D6649

llvm-svn: 228047
2015-02-03 21:58:29 +00:00
Bill Schmidt fe88b18990 [PowerPC] Implement the vpopcnt instructions for POWER8
Patch by Kit Barton.

Add the vector population count instructions for byte, halfword, word,
and doubleword sizes.  There are two major changes here:

    PPCISelLowering.cpp: Make CTPOP legal for vector types.
    PPCRegisterInfo.td: Added v2i64 to the VRRC register
      definition. This is needed for the doubleword variations of the
      integer ops that were added in P8. 

Test Plan

Test the instruction vpcnt* encoding/decoding in ppc64-encoding-vmx.s

Test the generation of the vpopcnt instructions for various vector
data types.  When adding the v2i64 type to the Vector Register set, I
also needed to add the appropriate bit conversion patterns between
v2i64 and the existing vector types.  Testing for these conversions
were also added in the test case by passing a different vector type as
a parameter into the test functions.  There is also a run step that
will ensure the vpopcnt instructions are generated when the vsx
feature is disabled.

llvm-svn: 228046
2015-02-03 21:58:23 +00:00
Chandler Carruth 1fff318a41 [x86] Add two truly horrific test cases for the new vector shuffle
lowering. I'm prepping patches to improve these, and this will let the
delta of those patches show the improvement. =]

llvm-svn: 228044
2015-02-03 21:56:28 +00:00
Chandler Carruth 4ce669d91c [x86] Update the indent and layout of some tests in this file. NFC
This is just to remove noise when using the update_llc_test_checks
script.

llvm-svn: 228043
2015-02-03 21:56:24 +00:00
Duncan P. N. Exon Smith 974860774e AsmParser: Recognize DW_TAG_* constants
Recognize `DW_TAG_` constants in assembly, and output it by default for
`GenericDebugNode`.

llvm-svn: 228042
2015-02-03 21:56:01 +00:00
Duncan P. N. Exon Smith 4e4aa70535 IR: Assembly and bitcode for GenericDebugNode
llvm-svn: 228041
2015-02-03 21:54:14 +00:00
Marek Olsak 37cd4d0f42 R600/SI: Remove the -CHECK suffix from all FileCheck prefixes in LIT tests
llvm-svn: 228040
2015-02-03 21:53:27 +00:00
Marek Olsak 707a6d0c20 R600/SI: Fix B64 VALU shifts on VI
SI only has standard versions. VI only has REV versions.

Tested-by: Michel Dänzer <michel.daenzer@amd.com>
llvm-svn: 228037
2015-02-03 21:53:01 +00:00
Justin Bogner de15817ea2 InstrProf: Remove CoverageMapping::HasCodeBefore, it isn't used
It's not entirely clear to me what this field was meant for, but it's
always false. Remove it.

llvm-svn: 228034
2015-02-03 21:35:36 +00:00
Chandler Carruth a4a77ed59e [x86] Tweak my update script to use test case function names starting
with 'stress' to indicate that the specific output isn't interesting and
relax them to only check the last instruction (a ret).

I've updated the one test case that really uses this to name the one
'stress_test' which was actually producing output we can directly check.
With this, the script doesn't introduce noise when run over the v16 test
file.

llvm-svn: 228033
2015-02-03 21:26:45 +00:00
Colin LeMahieu cd9cb023d7 [Hexagon] Converting XTYPE/SHIFT intrinsics. Cleaning out old intrinsic patterns and updating tests.
llvm-svn: 228026
2015-02-03 20:40:52 +00:00
Daniel Berlin 487aed0d77 Allow PRE to insert no-cost phi nodes
llvm-svn: 228024
2015-02-03 20:37:08 +00:00
Simon Pilgrim d9885856e6 [X86][SSE] Added general integer shuffle matching for MOVQ instruction
This patch adds general shuffle pattern matching for the MOVQ zero-extend instruction (copy lower 64 bits, zero upper) for all 128-bit integer vectors; it is added as a fallback test in lowerVectorShuffleAsZeroOrAnyExtend.

llvm-svn: 228022
2015-02-03 20:09:18 +00:00
Colin LeMahieu cf7248bcaf [Hexagon] Updating XTYPE/PRED intrinsics.
llvm-svn: 228019
2015-02-03 19:43:59 +00:00
Jingyue Wu d7966ff3b9 Add straight-line strength reduction to LLVM
Summary:
Straight-line strength reduction (SLSR) is implemented in GCC but not yet in
LLVM. It has proven to effectively simplify statements derived from an unrolled
loop, and can potentially benefit many other cases too. For example,

LLVM unrolls

  #pragma unroll
  for (int i = 0; i < 3; ++i) {
    sum += foo((b + i) * s);
  }

into

  sum += foo(b * s);
  sum += foo((b + 1) * s);
  sum += foo((b + 2) * s);

However, no optimizations yet reduce the internal redundancy of the three
expressions:

  b * s
  (b + 1) * s
  (b + 2) * s

With SLSR, LLVM can optimize these three expressions into:

  t1 = b * s
  t2 = t1 + s
  t3 = t2 + s

This commit is only an initial step towards implementing a series of such
optimizations. I will implement more (see TODO in the file commentary) in the
near future. This optimization is enabled for the NVPTX backend for now.
However, I am more than happy to push it to the standard optimization pipeline
after more thorough performance tests.

Test Plan: test/StraightLineStrengthReduce/slsr.ll

Reviewers: eliben, HaoLiu, meheff, hfinkel, jholewinski, atrick

Reviewed By: jholewinski, atrick

Subscribers: karthikthecool, jholewinski, llvm-commits

Differential Revision: http://reviews.llvm.org/D7310

llvm-svn: 228016
2015-02-03 19:37:06 +00:00
Colin LeMahieu e5daf3abfe [Hexagon] Updating XTYPE/PERM intrinsics.
llvm-svn: 228015
2015-02-03 19:36:59 +00:00
Simon Pilgrim 6544f815b3 [X86][AVX2] Enabled shuffle matching for the AVX2 zero extension (128bit -> 256bit) vpmovzx* instructions.
Differential Revision: http://reviews.llvm.org/D7251

llvm-svn: 228014
2015-02-03 19:34:09 +00:00
Rafael Espindola 8d911b4419 Fix typo in test/CodeGen/X86/sibcall.ll (pr22331).
llvm-svn: 228011
2015-02-03 19:20:26 +00:00
Colin LeMahieu 99cc7c1070 [Hexagon] Adding missing vector multiply instruction encodings. Converting multiply intrinsics and updating tests.
llvm-svn: 228010
2015-02-03 19:15:11 +00:00
Sanjay Patel b7d5628784 Merge consecutive 16-byte loads into one 32-byte load (PR22329)
This patch detects consecutive vector loads using the existing 
EltsFromConsecutiveLoads() logic. This fixes:
http://llvm.org/bugs/show_bug.cgi?id=22329

This patch effectively reverts the tablegen additions of D6492 / 
http://reviews.llvm.org/rL224344 ...which in hindsight were a horrible hack.

The test cases that were added with that patch are simply modified to load
from varying offsets of a base pointer. These loads did not match the existing
tablegen patterns.

A happy side effect of doing this optimization earlier is that we can now fold
the load into a math op where possible; this is shown in some of the updated
checks in the test file.
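
Roughly, the kind of pattern now recognized (a hypothetical sketch in
2015-era IR syntax):

  %hi.ptr = getelementptr <4 x float>* %base, i64 1
  %lo = load <4 x float>* %base, align 16
  %hi = load <4 x float>* %hi.ptr, align 16
  %v = shufflevector <4 x float> %lo, <4 x float> %hi, <8 x i32> <i32 0, i32 1, i32 2, i32 3, i32 4, i32 5, i32 6, i32 7>
  ; the two adjacent 16-byte loads can now be selected as a single 32-byte (ymm) load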

Differential Revision: http://reviews.llvm.org/D7303

llvm-svn: 228006
2015-02-03 18:54:00 +00:00
Colin LeMahieu a6632452be [Hexagon] Converting complex number intrinsics and adding tests.
llvm-svn: 227995
2015-02-03 18:16:28 +00:00
Colin LeMahieu cdba4e1bcc [Hexagon] Adding vector intrinsics for alu32/alu and xtype/alu.
llvm-svn: 227993
2015-02-03 18:01:45 +00:00
Marek Olsak 191507e0b7 R600/SI: Don't generate non-existent LSHL, LSHR, ASHR B32 variants on VI
This can happen when a REV instruction is commuted.

The trick is not to define the _vi versions of instructions, which has these
consequences:
- code generation will always fail if a pseudo cannot be lowered
  (very useful to catch bugs where an unsupported instruction somehow makes
   it to the printer)
- ability to query if a pseudo can be lowered, which is done in commuteOpcode
  to prevent REV from commuting to non-REV on VI

Tested-by: Michel Dänzer <michel.daenzer@amd.com>
llvm-svn: 227990
2015-02-03 17:38:12 +00:00
Marek Olsak 1bd2463548 R600/SI: Fix dependency between instruction writing M0 and S_SENDMSG on VI (v2)
This fixes a hang when using an empty geometry shader.

v2: - don't add s_nop when followed by s_waitcnt
    - cosmetic changes

Tested-by: Michel Dänzer <michel.daenzer@amd.com>
llvm-svn: 227986
2015-02-03 17:37:52 +00:00