Commit Graph

29484 Commits

Author SHA1 Message Date
Ahmed Bougacha d1655cb1c0 [AArch64, ARM] Enable GlobalMerge with -O3 rather than -O1.
The pass used to be enabled by default with CodeGenOpt::Less (-O1).
This is too aggressive, considering the pass indiscriminately merges
all globals together.

Currently, performance doesn't always improve, and, on code that uses
few globals (e.g., the odd file- or function-static), it is more often
than not degraded by the optimization.  Lengthy discussion can be found
on llvmdev (AArch64-focused;  ARM has similar problems):
  http://lists.cs.uiuc.edu/pipermail/llvmdev/2015-February/082800.html
Also, it makes tooling and debuggers less useful when dealing with
globals and data sections.

GlobalMerge needs to better identify those cases that benefit, and this
will be done separately.  In the meantime, move the pass to run with
-O3 rather than -O1, on both ARM and AArch64.

llvm-svn: 233024
2015-03-23 21:17:36 +00:00
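
To make the trade-off described above concrete, here is a rough C-level
sketch of what GlobalMerge does; the pass itself works on LLVM IR, and the
struct and the _MergedGlobals name below are only illustrative.

  /* Before: two file-static globals, each reached through its own
     base address. */
  static int counter;
  static int limit;
  int step(void) { return ++counter < limit; }

  /* After merging (conceptually): one blob, one base address plus
     constant offsets -- fewer address materializations on ARM/AArch64,
     but symbols that tools and debuggers no longer see individually. */
  static struct { int counter; int limit; } _MergedGlobals;
  int step_merged(void) {
    return ++_MergedGlobals.counter < _MergedGlobals.limit;
  }
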
Chad Rosier 384ade9b11 [AArch64] Add FileCheck that was missing from test in r232967.
llvm-svn: 233013
2015-03-23 20:25:15 +00:00
Matt Arsenault f5b2cd891a R600/SI: Allow commuting compares
This enables very common cases to switch to the
smaller encoding.

All of the standard LLVM canonicalizations of comparisons
are the opposite of what we want. Compares with constants
are moved to the RHS, but the first operand can be an inline
immediate, literal constant, or SGPR using the 32-bit VOPC
encoding.

There are additional bad canonicalizations that should
also be fixed, such as canonicalizing ge x, k to gt x, (k + 1)
if this makes k no longer an inline immediate value.

llvm-svn: 232988
2015-03-23 18:45:30 +00:00
Chad Rosier affe181b39 [AArch64] Enable rematerialization of float 0 values.
Patch by Geoff Berry<gberry@codeaurora.org>.

llvm-svn: 232967
2015-03-23 17:19:34 +00:00
Bradley Smith ae0ad9c95d Revert "[ARM] Add more pattern matching for f16 <-> f64 conversions"
This change is incorrect since it converts double rounding into single rounding,
which can produce different results. Instead this optimization will be done by
modifying Clang's codegen to not produce double rounding in the first place.

This reverts commit r232954.

llvm-svn: 232962
2015-03-23 16:52:52 +00:00
Tom Stellard f0a575f6be R600/SI: Fix crash in SIInstrInfo::areLoadsFromSameBasePtr()
This function assumed that SMRD instructions always have immediate
offsets, which is not always the case.

llvm-svn: 232957
2015-03-23 16:06:01 +00:00
Bradley Smith bc0f0d8c49 [ARM] Add more pattern matching for f16 <-> f64 conversions
Specifically when the conversion is done in two steps, f16 -> f32 -> f64.

For example:

%1 = tail call float @llvm.convert.from.fp16.f32(i16 %0)
%conv = fpext float %1 to double

to:

vcvtb.f64.f16

llvm-svn: 232954
2015-03-23 15:59:54 +00:00
Petar Jovanovic 5b4362276b Fix sign extension for MIPS64 in makeLibCall function
Fixing sign extension in makeLibCall for MIPS64. In MIPS64 architecture all
32 bit arguments (int, unsigned int, float 32 (soft float)) must be sign
extended. This fixes test "MultiSource/Applications/oggenc/".

Patch by Strahinja Petrovic.

Differential Revision: http://reviews.llvm.org/D7791

llvm-svn: 232943
2015-03-23 12:28:13 +00:00
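
The requirement described above comes from the MIPS64 (N64) calling
convention, where 32-bit integer values occupy 64-bit registers in
sign-extended form even when the C type is unsigned. A minimal C sketch of
the register-level effect (my own illustration, not code from the patch):

  #include <stdint.h>

  /* What the callee expects in the 64-bit argument register for a
     32-bit value: the value sign-extended from bit 31, regardless of
     whether the C type is signed. */
  static inline int64_t n64_extend_i32(uint32_t v) {
    return (int64_t)(int32_t)v;            /* sign-extend, not zero-extend */
  }
  /* Example: 0x80000000u must arrive as 0xFFFFFFFF80000000, not
     0x0000000080000000; makeLibCall has to mark such arguments as
     sign-extended so library calls see the right bits. */
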
Hal Finkel 8f7c5a7f18 [SDAG] Don't widen VSETCC during type legalization for split operands
Because the operands of a vector SETCC node can be of a different type from the
result (and often are), it can happen that even if we'd prefer to widen the
result type of the SETCC, the operands have been split instead. In this case,
the SETCC result also must be split. This mirrors what is done in
WidenVecRes_SELECT, and should be NFC elsewhere because if the operands are not
widened the following calls to GetWidenedVector will assert (which is what was
happening in the test case).

llvm-svn: 232935
2015-03-23 08:22:43 +00:00
Lang Hames 1565992679 [Orc] Add missing -use-orcmcjit flag to a number of Orc regression tests.
llvm-svn: 232931
2015-03-23 06:02:49 +00:00
Duncan P. N. Exon Smith 03c37c9099 Prevent CHECK-NOTs from matching file paths
A build directory with a name like `build-Werror` would hit a false
positive on these `CHECK-NOT`s before, since the actual error line looks
like:

    .../build-Werror/bin/llvm-as <stdin>:1:2: error: ...

Switch to using:

    CHECK-NOT: error:

(note the trailing colon) to avoid matching almost any file path.

llvm-svn: 232917
2015-03-22 15:58:21 +00:00
Benjamin Kramer d6aa0ec737 [SimplifyLibCalls] Fix negative shifts being produced by the memchr -> bitfield transform.
llvm-svn: 232903
2015-03-21 22:04:26 +00:00
Benjamin Kramer 7857d723f1 [SimplifyLibCalls] Turn memchr(const, C, const) into a bitfield check.
strchr("123!", C) != nullptr is a common pattern to check if C is one
of 1, 2, 3 or !. If the largest element of the string is smaller than
the target's register size we can easily create a bitfield and just
do a simple test for set membership.

int foo(char C) { return strchr("123!", C) != nullptr; } now becomes

	cmpl	$64, %edi ## range check
	sbbb	%al, %al
	movabsq	$0xE000200000001, %rcx
	btq	%rdi, %rcx ## bit test
	sbbb	%cl, %cl
	andb	%al, %cl ## and the two conditions
	andb	$1, %cl
	movzbl	%cl, %eax ## returning an int
	ret

(imho the backend should expand this into a series of branches, but
that's a different story)

The code is currently limited to bit fields that fit in a register, so
usually 64 or 32 bits. Sadly, this misses anything using alpha chars
or {}. This could be fixed by just emitting an i128 bit field, but that
can generate really ugly code so we have to find a better way. To some
degree this is also recreating switch lowering logic, but we can't
simply emit a switch instruction and thus change the CFG within
instcombine.

llvm-svn: 232902
2015-03-21 21:09:33 +00:00
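
For reference, a hand-written C equivalent of the bit-set test this
transform emits for the strchr("123!", C) example above (the mask matches
the movabsq constant in the assembly; bit 0 is set because strchr also
matches the terminating NUL):

  #include <stdint.h>

  int in_set(unsigned char C) {
    /* Bits set: '\0' (0), '!' (33), '1' (49), '2' (50), '3' (51). */
    const uint64_t mask = 0xE000200000001ULL;
    return C < 64 && ((mask >> C) & 1);    /* range check + bit test */
  }
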
Matt Arsenault da5ece8e35 R600: Cleanup test with multiple check prefixes
llvm-svn: 232901
2015-03-21 19:15:46 +00:00
Benjamin Kramer 691363e7f2 SimplifyLibCalls: Add basic optimization of memchr calls.
This is just memchr(x, y, 0) -> nullptr and constant folding.

llvm-svn: 232896
2015-03-21 15:36:21 +00:00
Simon Pilgrim 307cb8fe5d Tidied up vec_zero_cse.ll test. NFCI.
Added target triple and refactored the CHECKs to be per function.

llvm-svn: 232894
2015-03-21 14:05:12 +00:00
David Majnemer e165502ed7 MemoryDependenceAnalysis: Don't miscompile atomics
r216771 introduced a change to MemoryDependenceAnalysis that allowed it
to reason about acquire/release operations.  However, this change does
not ensure that the acquire/release operations pair.  Unfortunately,
this leads to miscompiles as we won't see an acquire load as properly
affecting memory.  This largely reverts r216771.

This fixes PR22708.

llvm-svn: 232889
2015-03-21 06:19:17 +00:00
Tim Northover 000f994633 AArch64: simplify test case
llvm-svn: 232886
2015-03-21 04:37:08 +00:00
Eric Christopher faad620569 Remove the bare getSubtargetImpl call from the AArch64 port. As part
of this add a test that shows we can generate code for functions
that specifically enable a subtarget feature.

llvm-svn: 232884
2015-03-21 04:04:50 +00:00
Eric Christopher 83eb13c967 Remove the bare getSubtargetImpl call from the PPC port. As part
of this add a test that shows we can generate code
for functions that differ by subtarget feature.

llvm-svn: 232882
2015-03-21 03:36:02 +00:00
Eric Christopher c5a85af3b2 Cache the Function dependent subtarget on the MachineFunction.
As preparation for removing the getSubtargetImpl() call from
TargetMachine go ahead and flip the switch on caching the function
dependent subtarget and remove the bare getSubtargetImpl call
from the X86 port. As part of this add a few tests that show we
can generate code and assemble on X86 based on features/cpu on
the Function.

llvm-svn: 232879
2015-03-21 03:13:10 +00:00
Kostya Serebryany f4e35cc47d [sanitizer] experimental tracing for cmp instructions
llvm-svn: 232873
2015-03-21 01:29:36 +00:00
Ahmed Bougacha 7173b669b4 [CodeGen][IfCvt] Don't re-ifcvt blocks with unanalyzable terminators.
If we couldn't analyze a predicated block's terminator (i.e., it's an
indirectbr, or some other weirdness), we can't safely re-if-convert that
block, because we can't tell whether the predicated terminator can
fall through (it does).

Currently, we would completely ignore the fallthrough successor. In
the added testcase, this means we used to generate:

    ...
  @ %entry:
    cmp   r5, #21
    ittt  ne
  @ %cc1f:
    cmpne r7, #42
  @ %cc2t:
    strne.w       r5, [r8]
    movne pc, r10
  @ %cc1t:
    ...

Whereas the successor of %cc1f was originally %bb1.
With the fix, we get the correct:

    ...
  @ %entry:
    cmp   r5, #21
    itt   eq
  @ %cc1t:
    streq.w       r5, [r11]
    moveq pc, r0
  @ %cc1f:
    cmp   r7, #42
    itt   ne
  @ %cc2t:
    strne.w       r5, [r8]
    movne pc, r10
  @ %bb1:
    ...

rdar://20192768
Differential Revision: http://reviews.llvm.org/D8509

llvm-svn: 232872
2015-03-21 01:23:15 +00:00
Ahmed Bougacha e6bb09ac3f [AArch64] Prefer UZP for concat_vector of illegal truncs.
Follow-up to r232459: prefer a UZP shuffle to the intermediate truncs.

llvm-svn: 232871
2015-03-21 01:08:39 +00:00
Yunzhong Gao f44c06b96f Tell lit.cfg about more Windows triples.
For example, the host triple on my 64-bit PC is x86_64-pc-windows-msvc.

llvm-svn: 232854
2015-03-20 22:08:40 +00:00
Sanjay Patel ccf5f24b7b [X86, AVX] instcombine common cases of vperm2* intrinsics into shuffles
vperm2* intrinsics are just shuffles. 
In a few special cases, they're not even shuffles.

Optimizing intrinsics in InstCombine is better than
handling this in the front-end for at least two reasons:

1. Optimizing custom-written SSE intrinsic code at -O0 makes vector coders
   really angry (and so I have regrets about some patches from last week).

2. Doing mask conversion logic in header files is hard to write and 
   subsequently read.

There are a couple of TODOs in this patch to complete this optimization.

Differential Revision: http://reviews.llvm.org/D8486

llvm-svn: 232852
2015-03-20 21:47:56 +00:00
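
As a hedged illustration of why these intrinsics reduce to shuffles, here
is a scalar C model of the vperm2f128 selection (my own sketch, not code
from the patch): each 128-bit half of the result is simply one of the four
input halves, or zero.

  static void perm2f128_model(const float a[8], const float b[8],
                              int imm, float out[8]) {
    const float *halves[4] = { a, a + 4, b, b + 4 };
    static const float zero[4] = { 0, 0, 0, 0 };
    const float *lo = (imm & 0x08) ? zero : halves[imm & 0x3];
    const float *hi = (imm & 0x80) ? zero : halves[(imm >> 4) & 0x3];
    for (int i = 0; i < 4; ++i) { out[i] = lo[i]; out[i + 4] = hi[i]; }
  }
  /* When neither zero bit is set, the result is a pure lane selection,
     i.e. exactly a shufflevector -- the common case InstCombine now
     recognizes. */
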
Andrew Kaylor 3170e5620e Fixing a bug with WinEH PHI handling
llvm-svn: 232851
2015-03-20 21:42:54 +00:00
Sanjay Patel c88f724fed [X86] Prefer blendps over insertps codegen for one special case
With this patch, for this one exact case, we'll generate:

  blendps %xmm0, %xmm1, $1

instead of:

  insertps %xmm0, %xmm1, $0

If there's a memory operand available for load folding and we're
optimizing for size, we'll still generate the insertps.

The detailed performance data motivation for this may be found in D7866; 
in summary, blendps has 2-3x throughput vs. insertps on widely used chips.

Differential Revision: http://reviews.llvm.org/D8332

llvm-svn: 232850
2015-03-20 21:19:52 +00:00
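
In intrinsics terms, the equivalence being exploited looks roughly like the
following (a sketch under my reading of the commit, not code from the
patch): inserting element 0 of one vector into element 0 of another, with
no zeroing, can be expressed with either instruction.

  #include <smmintrin.h>   /* SSE4.1 */

  /* Both return { b[0], a[1], a[2], a[3] }; blendps generally has the
     better throughput, which is why it is preferred unless the
     load-folding / size caveats above apply. */
  __m128 via_insertps(__m128 a, __m128 b) {
    return _mm_insert_ps(a, b, 0x00);      /* b elt 0 -> a elt 0, no zeroing */
  }
  __m128 via_blendps(__m128 a, __m128 b) {
    return _mm_blend_ps(a, b, 0x1);        /* take lane 0 from b */
  }
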
Rafael Espindola 36a15cb975 Don't declare all text sections at the start of the .s
The code this patch removes was there to make sure the text sections went
before the dwarf sections. That is necessary because MachO uses offsets
relative to the start of the file, so adding a section can change relaxations.

The dwarf sections were being printed at the start just to produce symbols
pointing at the start of those sections.

The underlying issue was fixed in r231898. The dwarf sections are now printed
when they are about to be used, which is after we printed the text sections.

To make sure we don't regress, the patch makes the MachO streamer assert
if CodeGen puts anything unexpected after the DWARF sections.

llvm-svn: 232842
2015-03-20 20:00:01 +00:00
Duncan P. N. Exon Smith 1de3dc5731 Bugpoint: Fix invalid 'inlinedAt:' references in testcase
These are causing crashes in `DebugInfoFinder` after a WIP patch to
increase strictness of `DIDescriptor` accessors.

llvm-svn: 232839
2015-03-20 19:51:34 +00:00
Rafael Espindola bdfbde56e0 Reorganize the x86 ELF relocation selection logic.
The main differences are:

* Split in 32 and 64 bit functions.
* First switch on the Modifier so that we have only one non fully covered
  switch.
* Map the fixup kind first to a x86_64 (or i386) specific enum, to make
  it easy to handle cases like X86::reloc_riprel_4byte_movq_load.
* Switch on IsPCRel last, which reduces code duplication.

Fixes pr22308.

llvm-svn: 232837
2015-03-20 19:48:54 +00:00
Duncan P. N. Exon Smith a3bdc328a5 Verifier: Check that !dbg attachments have the right type
A WIP patch makes `DIDescriptor` accessors more strict, which in turn
causes the `DebugInfoFinder` to crash on wrongly typed `!dbg`
attachments.  Catch that error up front in
`Verifier::visitInstruction()`.

Also remove a test that we "handle" invalid `!dbg` attachments, added
back in r99938.  We don't want to handle those anymore.

Note: I'm *not* recursing and verifying the debug info graph reachable
from this node; that work is already done by `verifyDebugInfo()`.

llvm-svn: 232834
2015-03-20 19:26:58 +00:00
Duncan P. N. Exon Smith 541133b79d Rewrite test/Feature/md_on_instruction.ll
This test is supposed to be testing whether metadata attachments to
instructions work, but it was using invalid debug info to do so.  (This
was causing assertion failures in the `DebugInfoFinder` with a WIP patch
to be more strict about `DIDescriptor` accessors.)

Rather than fix the debug info -- which is better tested elsewhere --
just test the IR feature directly.

llvm-svn: 232828
2015-03-20 18:34:53 +00:00
Wei Mi 6c428d6ff6 Correctly estimate SROA savings for store operands in inline cost analysis.
When estimating SROA savings, we want to see if an address is derived
off an alloca in the caller. For store instructions, operand 1 is the
address operand, but the current code uses operand 0.  Use
getPointerOperand for loads and stores to fix this.

Patch by Easwaran Raman.
http://reviews.llvm.org/D8425

llvm-svn: 232827
2015-03-20 18:33:12 +00:00
John Brawn 1f26a47630 [ARM] Fix handling of thumb1 out-of-range frame offsets
LocalStackSlotPass assumes that isFrameOffsetLegal doesn't change its
answer when the base register changes. Unfortunately this isn't true
in thumb1, where SP-based loads allow a larger offset than
non-SP-based loads, and this causes the base register reuse code to
generate instructions that are unencodable, causing an assertion
failure. 

Solve this by adding a BaseReg parameter to isFrameOffsetLegal, which
ARMBaseRegisterInfo can then make use of to give the correct answer. 

Differential Revision: http://reviews.llvm.org/D8419

llvm-svn: 232825
2015-03-20 17:20:07 +00:00
Daniel Jasper 214997c63b [MBP] Don't outline short optional branches
With the option -outline-optional-branches, LLVM will place optional
branches out of line (more details on r231230).

With this patch, this is not done for short optional branches. A short
optional branch is a branch containing a single block with an
instruction count below a certain threshold (defaulting to 3). Still,
everything is guarded under -outline-optional-branches.

Outlining a short branch can't significantly improve code locality. It
can, however, decrease performance because of the additional jmp,
especially in cases where the optional branch is hot. This fixes a compile time
regression I have observed in a benchmark.

Review: http://reviews.llvm.org/D8108
llvm-svn: 232802
2015-03-20 10:00:37 +00:00
Tom Stellard 8d19f9b681 R600/SI: Add missing CHECK-LABEL lines to a test
llvm-svn: 232797
2015-03-20 03:12:42 +00:00
Nick Lewycky be8af48824 When simplifying a SCEV truncate by distributing, consider it a simplification to replace a cast, even if we end up with a trunc around the term. Fixes PR22960!
llvm-svn: 232794
2015-03-20 02:25:00 +00:00
Peter Collingbourne bbdc26ba1b test: Make a start on a test suite for libLTO.
This works in a similar way to the gold plugin tests. We search for a compatible
linker on $PATH and use it to run tests against our just-built libLTO. To start
with, test the just added opt level functionality.

Differential Revision: http://reviews.llvm.org/D8472

llvm-svn: 232785
2015-03-19 23:55:38 +00:00
Owen Anderson db4201235b Fix a nasty bug in DAGCombine of STORE nodes.
This is very related to the bug fixed in r174431.  The problem is that
SelectionDAG does not include alignment in the uniquing of loads and
stores.  When an otherwise no-op DAGCombine would increase the alignment
of a load or store, the original node would be returned (with the
alignment increased), which would cause the node not to be processed by
any further DAGCombines.

I don't have a direct testcase for this that manifests on an in-tree
target, but I did see some noise in the tests for other targets and have
updated them for it.

llvm-svn: 232780
2015-03-19 22:48:57 +00:00
Reid Kleckner c759fe90bc WinEH: Make llvm.eh.actions emission match the EH docs
This switches the sense of the i32 values and updates the test cases.

We can also use CHECK-SAME to clean up some tests, and reduce the visual
noise from bitcasts.

llvm-svn: 232774
2015-03-19 22:31:02 +00:00
Sanjay Patel d5c2d287f9 [X86, AVX] use blends instead of insert128 with index 0
Another case of x86-specific shuffle strength reduction:
avoid generating insert*128 instructions with index 0 because
they are slower than their non-lane-changing blend equivalents.

Shuffle lowering already catches most of these cases, but
the zero vector case and some other paths such as in the
modified test in vector-shuffle-256-v32.ll were getting
through.

Differential Revision: http://reviews.llvm.org/D8366

llvm-svn: 232773
2015-03-19 22:29:40 +00:00
Peter Collingbourne 994ba3d29c LowerBitSets: Avoid reusing byte set addresses.
Each use of the byte array uses a different alias. This makes the
backend less likely to reuse previously computed byte array addresses,
improving the security of the CFI mechanism based on this pass.

Differential Revision: http://reviews.llvm.org/D8455

llvm-svn: 232770
2015-03-19 22:02:10 +00:00
Peter Collingbourne 070843d60b libLTO, llvm-lto, gold: Introduce flag for controlling optimization level.
This change also introduces a link-time optimization level of 1. This
optimization level runs only the globaldce pass as well as cleanup passes for
passes that run at -O0, specifically simplifycfg which cleans up lowerbitsets.

http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20150316/266951.html

llvm-svn: 232769
2015-03-19 22:01:00 +00:00
Krzysztof Parzyszek 02fd29452c Unxfail test/CodeGen/Generic/vector.ll now passing on Hexagon
llvm-svn: 232758
2015-03-19 20:22:17 +00:00
Peter Collingbourne 152b936683 gold: Make powerpc support optional for the tests.
Differential Revision: http://reviews.llvm.org/D8400

llvm-svn: 232744
2015-03-19 18:23:31 +00:00
Artem Belevich 9e8a039318 Add support for __nvvm_reflect changes in libdevice in CUDA-7.0
Summary:
CUDA 7.0's libdevice uses slightly different IR to call __nvvm_reflect
and that triggers an assertion in nvvm_reflect optimization pass. This
change allows the nvvm_reflect pass to deal with both old and new ways to
pass an argument to __nvvm_reflect.

Test Plan: ninja check-all

Reviewers: eliben, echristo

Subscribers: jholewinski, llvm-commits

Differential Revision: http://reviews.llvm.org/D8399

llvm-svn: 232732
2015-03-19 17:05:35 +00:00
Krzysztof Parzyszek 421133470f [Hexagon] Add support for vector instructions
llvm-svn: 232728
2015-03-19 16:33:08 +00:00
Daniel Jasper 5add63f21e [InstCombine] Don't fold a GEP into itself through a PHI node
This can only occur (I think) through the back-edge of the loop.

However, folding a GEP into itself means that the value of the previous
iteration needs to be stored in the meantime, thus requiring an
additional register variable to be live, but not actually achieving
anything (the gep still needs to be executed once per loop iteration).

The attached test case is derived from:
  typedef unsigned uint32;
  typedef unsigned char uint8;
  inline uint8 *f(uint32 value, uint8 *target) {
    while (value >= 0x80) {
      value >>= 7;
      ++target;
    }
    ++target;
    return target;
  }
  uint8 *g(uint32 b, uint8 *target) {
    target = f(b, f(42, target));
    return target;
  }

What happens is that the GEP stored in incptr2 is folded into itself
through the loop's back-edge and the phi-node stored in loopptr,
effectively incrementing the ptr by "2" in each iteration instead of "1".

In this case, it is actually increasing the number of GEPs required as
the GEP before the loop can't be folded away anymore. For comparison:

With this patch:
  define i8* @test4(i32 %value, i8* %buffer) {
  entry:
    %cmp = icmp ugt i32 %value, 127
    br i1 %cmp, label %loop.header, label %exit

  loop.header:                                      ; preds = %entry
    br label %loop.body

  loop.body:                                        ; preds = %loop.body, %loop.header
    %buffer.pn = phi i8* [ %buffer, %loop.header ], [ %loopptr, %loop.body ]
    %newval = phi i32 [ %value, %loop.header ], [ %shr, %loop.body ]
    %loopptr = getelementptr inbounds i8, i8* %buffer.pn, i64 1
    %shr = lshr i32 %newval, 7
    %cmp2 = icmp ugt i32 %newval, 16383
    br i1 %cmp2, label %loop.body, label %loop.exit

  loop.exit:                                        ; preds = %loop.body
    br label %exit

  exit:                                             ; preds = %loop.exit, %entry
    %0 = phi i8* [ %loopptr, %loop.exit ], [ %buffer, %entry ]
    %incptr3 = getelementptr inbounds i8, i8* %0, i64 2
    ret i8* %incptr3
  }

Without this patch:
  define i8* @test4(i32 %value, i8* %buffer) {
  entry:
    %incptr = getelementptr inbounds i8, i8* %buffer, i64 1
    %cmp = icmp ugt i32 %value, 127
    br i1 %cmp, label %loop.header, label %exit

  loop.header:                                      ; preds = %entry
    br label %loop.body

  loop.body:                                        ; preds = %loop.body, %loop.header
    %0 = phi i8* [ %buffer, %loop.header ], [ %loopptr, %loop.body ]
    %loopptr = phi i8* [ %incptr, %loop.header ], [ %incptr2, %loop.body ]
    %newval = phi i32 [ %value, %loop.header ], [ %shr, %loop.body ]
    %shr = lshr i32 %newval, 7
    %incptr2 = getelementptr inbounds i8, i8* %0, i64 2
    %cmp2 = icmp ugt i32 %newval, 16383
    br i1 %cmp2, label %loop.body, label %loop.exit

  loop.exit:                                        ; preds = %loop.body
    br label %exit

  exit:                                             ; preds = %loop.exit, %entry
    %ptr2 = phi i8* [ %incptr2, %loop.exit ], [ %incptr, %entry ]
    %incptr3 = getelementptr inbounds i8, i8* %ptr2, i64 1
    ret i8* %incptr3
  }

Review: http://reviews.llvm.org/D8245
llvm-svn: 232718
2015-03-19 11:05:08 +00:00
Rafael Espindola 7f45440bd9 Note that we don't support COFF on PPC.
Should bring back the windows bots.

llvm-svn: 232701
2015-03-19 02:40:56 +00:00
Justin Bogner cfb53e49b6 llvm-cov: Only emit colour by default if the output is a tty
This replaces the -no-color flag with a -color={auto|always|never}
option, with auto as the default, which is much saner.

llvm-svn: 232693
2015-03-19 00:02:23 +00:00
Simon Pilgrim cf1d7df2e3 Fixed failing test due to missing target triple causing different results on different buildbots.
llvm-svn: 232685
2015-03-18 22:51:45 +00:00
Rafael Espindola 242548906d Teach getDefaultFormat that we only support ELF on some architectures.
This should bring the windows bots back.

It is a bit ugly, but it is better than what we had before: The triple would
say that the object format was COFF, but llc/llvm-mc would produce an ELF.

llvm-svn: 232683
2015-03-18 22:19:16 +00:00
Simon Pilgrim 5ec5c9cafe [X86][SSE] Avoid scalarization of v2i64 vector shifts (REAPPLIED)
Fixed broken tests.

Differential Revision: http://reviews.llvm.org/D8416

llvm-svn: 232682
2015-03-18 22:18:51 +00:00
Eric Christopher 050f590a0c Revert "[X86][SSE] Avoid scalarization of v2i64 vector shifts" as it
appears to have broken tests/bots.

This reverts commit r232660.

llvm-svn: 232670
2015-03-18 21:01:00 +00:00
Reid Kleckner 0f9e27a371 Use WinEHPrepare to outline SEH finally blocks
No outlining is necessary for SEH catch blocks. Use the blockaddr of the
handler in place of the usual outlined function.

Reviewers: majnemer, andrew.w.kaylor

Differential Revision: http://reviews.llvm.org/D8370

llvm-svn: 232664
2015-03-18 20:26:53 +00:00
Simon Pilgrim 5c837edc2a [X86][SSE] Avoid scalarization of v2i64 vector shifts
Currently, v2i64 vector shifts (non-equal shift amounts) are scalarized, costing 4 x extract, 2 x x86-shifts and 2 x insert instructions - and it gets even more awkward on 32-bit targets.

This patch separately shifts the vector by both shift amounts and then shuffles the partial results back together, costing 2 x shuffles and 2 x sse-shifts instructions (+ 2 movs on pre-AVX hardware).

Note - this patch only improves the SHL / LSHR logical shifts as only these are supported in SSE hardware.

Differential Revision: http://reviews.llvm.org/D8416

llvm-svn: 232660
2015-03-18 19:35:31 +00:00
Matthias Braun 3b36533112 TableGen: Fix register class lane masks being too conservative.
When calculating the lanemask of a register class we have to include the
masks of subregisters supported by any of the class members, not just
the ones supported by all class members.

This fixes problems when coalescing towards a subclass with additional
subregisters available.

The attached testcase works fine as is, but does crash if you enable
subregister liveness on x86 without this change applied.

llvm-svn: 232652
2015-03-18 17:56:09 +00:00
Rafael Espindola 38438bae21 Handle X86::reloc_riprel_4byte in 32 bits mode.
We can get there with .code64.

Fixes pr22349.

llvm-svn: 232651
2015-03-18 17:33:40 +00:00
Sanjay Patel e4cdb8fcb3 Use utils/update_llc_test_checks.py to update all CHECKs
The checks here were so vague that we could nuke intrinsics
from existence and still pass the test because we'd match
the function name.

llvm-svn: 232647
2015-03-18 16:38:44 +00:00
Krzysztof Parzyszek 47ab1f2007 [Hexagon] Intrinsics for circular and bit-reversed loads and stores
llvm-svn: 232645
2015-03-18 16:23:44 +00:00
Sanjay Patel e90d0387d9 fixed to test features, not CPU model
The 'vmovntdq' was only passing due to a fluke in
SandyBridge codegen that splits 32-byte stores in half, 
but that meant that the test was not correctly checking
for the 32-byte store that we thought we were generating.

The lax checking in this file will be addressed in
another commit. There are bigger problems here.

llvm-svn: 232644
2015-03-18 16:07:10 +00:00
Krzysztof Parzyszek 78cc36fed7 [Hexagon] Handle ENDLOOP0 in InsertBranch and RemoveBranch
llvm-svn: 232643
2015-03-18 15:56:43 +00:00
Sid Manning 51c3560c48 Add support for the .ifnes pseudo-op.
llvm-svn: 232636
2015-03-18 14:20:54 +00:00
Daniel Jasper 9ec834036e Change test to accept an additional critical edge split.
The two hot blocks are right next to each other and I verified that
there is no performance regression by compressing/uncompressing some
files with a minigzip built with the different options.

llvm-svn: 232629
2015-03-18 12:45:45 +00:00
John Brawn 0dbcd65442 [ARM] Align stack objects passed to memory intrinsics
Memcpy, and other memory intrinsics, typically try to use LDM/STM if
the source and target addresses are 4-byte aligned. In CodeGenPrepare
look for calls to memory intrinsics and, if the object is on the
stack, 4-byte align it if it's large enough that we expect that memcpy
would want to use LDM/STM to copy it.

Differential Revision: http://reviews.llvm.org/D7908

llvm-svn: 232627
2015-03-18 12:01:59 +00:00
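
A small C example of the situation targeted, assuming ARM code generation
where CodeGenPrepare runs (the size threshold and the alignment decision
are details of the patch, so treat this only as the shape of the input):

  #include <string.h>

  void consume(const char *p, size_t n);

  void f(const char *src) {
    /* A char buffer has no natural 4-byte alignment requirement, so it
       may be only 1-byte aligned on the stack; the patch bumps such
       objects to 4-byte alignment when a large enough memcpy targets
       them, letting the backend expand the copy with LDM/STM. */
    char buf[64];
    memcpy(buf, src, sizeof buf);
    consume(buf, sizeof buf);
  }
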
John Brawn 2063cb6305 Add missing newline to end of test file.
llvm-svn: 232626
2015-03-18 10:45:12 +00:00
Josh Magee 89f5dec0bb Add testcases for BEXTR.
These BEXTR cases are a check for the 64-bit load form and two negative cases where the bitrange is non-contiguous.  From a private patch equivalent to r189742/PR17028.

llvm-svn: 232580
2015-03-18 01:34:06 +00:00
Krzysztof Parzyszek d5972bdaf8 Missed testcase for r232577
llvm-svn: 232578
2015-03-18 00:44:46 +00:00
Sanjoy Das cb8bca1777 [SCEV] Make isImpliedCond smarter.
Summary:
This change teaches isImpliedCond to infer things like "X sgt 0" => "X -
1 sgt -1".  The `ConstantRange` class has the logic to do the heavy
lifting, this change simply gets ScalarEvolution to exploit that when
reasonable.

Depends on D8345

Reviewers: atrick

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D8346

llvm-svn: 232576
2015-03-18 00:41:29 +00:00
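
A C-level example of the kind of fact this lets ScalarEvolution derive (my
own illustration; the actual reasoning happens on SCEV expressions):

  /* From the guard we know n > 0 (signed).  The new logic lets SCEV
     also conclude n - 1 > -1, i.e. n - 1 >= 0, via ConstantRange
     arithmetic, which helps reason about the bound below. */
  long sum(const int *a, long n) {
    long s = 0;
    if (n > 0)                              /* known: n sgt 0      */
      for (long i = 0; i <= n - 1; ++i)     /* wanted: n-1 sgt -1  */
        s += a[i];
    return s;
  }
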
David Majnemer e48237df95 DAGCombiner: fold (xor (shl 1, x), -1) -> (rotl ~1, x)
Targets which provide a rotate make it possible to replace a sequence of
(XOR (SHL 1, x), -1) with (ROTL ~1, x).  This saves an instruction on
architectures like X86 and POWER(64).

Differential Revision: http://reviews.llvm.org/D8350

llvm-svn: 232572
2015-03-18 00:03:36 +00:00
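
Written out in C, the identity the fold relies on is that clearing bit x of
an all-ones value equals rotating the constant ~1 left by x (a sketch; the
real fold operates on SelectionDAG nodes):

  #include <stdint.h>

  static inline uint32_t rotl32(uint32_t v, unsigned n) {
    n &= 31;
    return (v << n) | (v >> ((32 - n) & 31));
  }

  /* ~(1u << x) has every bit set except bit x; so does rotl(~1u, x),
     because the single zero bit of 0xFFFFFFFE moves to position x.
     On targets with a rotate, the right-hand form is one instruction. */
  uint32_t clear_bit(unsigned x) {
    return rotl32(~1u, x);                  /* == ~(1u << x) for x in [0,31] */
  }
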
David Majnemer 7db449a6e7 COFF: Let globals with private linkage reside in their own section
COFF COMDATs (for selection kinds other than 'select any') require at
least one non-section symbol in the symbol table.
Satisfy this by morally enhancing the linkage from private to internal.

Differential Revision: http://reviews.llvm.org/D8394

llvm-svn: 232570
2015-03-17 23:54:51 +00:00
Pirama Arumuga Nainar 12aeefc63b Fix bug while building FP16 constant vectors for AArch64
Summary: Building FP16 constant vectors caused the FP16 data to be bitcast to i64.  This patch creates a BITCAST node with the correct value, and adds a test to verify correct handling.

Reviewers: mcrosier

Reviewed By: mcrosier

Subscribers: mcrosier, jmolloy, ab, srhines, llvm-commits, rengolin, aemerson

Differential Revision: http://reviews.llvm.org/D8369

llvm-svn: 232562
2015-03-17 23:10:29 +00:00
Kevin Enderby 8e29ec9ec2 Add the option -no-symbolic-operands to llvm-objdump used with -macho and
-disassemble to not print symbolic operands when disassembling.

llvm-svn: 232558
2015-03-17 22:26:11 +00:00
Rafael Espindola 7fce7e62db Emit the offset directly instead of creating a dummy expression.
We were creating an expression of the form (S+C)-S which is just C.

Patch by Frédéric Riss. I just added the testcase.

llvm-svn: 232549
2015-03-17 21:30:21 +00:00
Kevin Enderby ab5e6c9925 Add the option, -no-leading-addr, to llvm-objdump used with -macho and
-disassemble or -section to not print the leading addresses on each line.

llvm-svn: 232547
2015-03-17 21:07:39 +00:00
David Majnemer 63b1d99943 Revert "COFF: Let globals with private linkage reside in their own section"
This reverts commit r232539.  This was committed accidently.

llvm-svn: 232543
2015-03-17 20:41:11 +00:00
David Majnemer 47e3842982 COFF: Let globals with private linkage reside in their own section
Summary:
COFF COMDATs (for selection kinds other than 'select any') require at
least one non-section symbol in the symbol table.
Satisfy this by morally enhancing the linkage from private to internal.

Reviewers: rafael

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D8374

llvm-svn: 232539
2015-03-17 20:39:25 +00:00
Rafael Espindola 9ab09237dc Centralize the handling of unique ids for temporary labels.
Before this patch code wanting to create temporary labels for a given entity
(function, cu, exception range, etc) had to keep its own counter to have stable
symbol names.

createTempSymbol would still add a suffix to make sure a new symbol was always
returned, but it kept a single counter. Because of that, if we were to use
just createTempSymbol("cu_begin"), the label could change from cu_begin42 to
cu_begin43 because some other code started using temporary labels.

Simplify this by just keeping one counter per prefix and removing the various
specialized counters.

llvm-svn: 232535
2015-03-17 20:07:06 +00:00
Michael Zolotukhin 6d8a2aa976 TLI: Add addVectorizableFunctionsFromVecLib.
Also, add several entries to vectorizable functions table, and
corresponding tests. The table isn't complete, it'll be populated later.

Review: http://reviews.llvm.org/D8131
llvm-svn: 232531
2015-03-17 19:50:55 +00:00
Michael Zolotukhin c3d60efb1d TTI: Honour cost model for estimating cost of vector-intrinsic and calls.
Review: http://reviews.llvm.org/D8096
llvm-svn: 232528
2015-03-17 19:37:28 +00:00
Richard Barton 30934c0926 [ARM] Fix offset calculation in ARMBaseRegisterInfo::needsFrameBaseReg
The input offset to needsFrameBaseReg is a negative value below the top of the
stack frame, but when converting to a positive offset from the bottom of the
stack frame this value was negated, causing the final offset to be too large
by twice the input offset's magnitude. Fix that by not negating the offset.

Patch by John Brawn

Differential Revision: http://reviews.llvm.org/D8316

llvm-svn: 232513
2015-03-17 18:20:47 +00:00
Michael Liao 24fcae8fa0 [SwitchLowering] Remove incoming values in the reverse order
- To prevent invalidating *successive* indices.
 

llvm-svn: 232510
2015-03-17 18:03:10 +00:00
Kevin Enderby 6a22175d59 Add the option, -dis-symname to llvm-objdump used with -macho and
-disassemble to disassemble just one symbol’s instructions.

llvm-svn: 232503
2015-03-17 17:10:57 +00:00
Dmitry Vyukov 618d580ec9 asan: optimization experiments
The experiments can be used to evaluate potential optimizations that remove
instrumentation (assess false negatives). Instead of completely removing
some instrumentation, you set Exp to a non-zero value (mask of optimization
experiments that want to remove instrumentation of this instruction).
If Exp is non-zero, this pass will emit special calls into runtime
(e.g. __asan_report_exp_load1 instead of __asan_report_load1). These calls
make runtime terminate the program in a special way (with a different
exit status). Then you run the new compiler on a buggy corpus, collect
the special terminations (ideally, you don't see them at all -- no false
negatives) and make the decision on the optimization.

The exact reaction to experiments in runtime is not implemented in this patch.
It will be defined and implemented in a subsequent patch.

http://reviews.llvm.org/D8198

llvm-svn: 232502
2015-03-17 16:59:19 +00:00
Samuel Antao f68156015d Fix R0 use in PowerPC VSX store for FastIsel.
The VSX stores are sometimes generated with an undefined index register, causing %noreg to be used and R0 to be emitted later on. The semantics of the VSX store (e.g. stdsdx) requires R0 to be used as base if we want zero to be used in the computation of the effective address instead of the content of R0. This patch checks if no index register was generated and forces R0 to be used as base address.

llvm-svn: 232486
2015-03-17 15:00:57 +00:00
Rafael Espindola ccfbbd5596 Use createTempSymbol to avoid collisions instead of an ad hoc method.
llvm-svn: 232483
2015-03-17 14:50:32 +00:00
Rafael Espindola ec8da3de01 Call EmitFunctionHeader just before EmitFunctionBody.
This avoids switching to .AMDGPU.config and back and hardcoding the
section it switches back to.

llvm-svn: 232479
2015-03-17 14:34:42 +00:00
Rafael Espindola dc4263c760 Move the EH symbol to the asm printer and use it for the SJLJ case too.
llvm-svn: 232475
2015-03-17 13:57:48 +00:00
Toma Tabacu dcebf5b901 [mips] [IAS] Add support for the XOR $reg,imm pseudo-instruction.
Summary:
This adds a MipsInstAlias which expands to XORi $reg,$reg,imm. For example, "xor $6, 0x3A" should be expanded to "xori $6, $6, 58".
This should work for all MIPS ISAs.

Reviewers: dsanders

Reviewed By: dsanders

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D8284

llvm-svn: 232473
2015-03-17 13:17:44 +00:00
Rafael Espindola ba41539548 Replace a use of GetTempSymbol with createTempSymbol.
This is cleaner and avoids a crash in a corner case.

llvm-svn: 232471
2015-03-17 12:54:04 +00:00
Renato Golin 1235060734 [ARM] Add support for ARMV6K subtarget (LLVM)
ARMv6K is another layer between ARMV6 and ARMV6T2. This is the LLVM
side of the changes.

ARMV6 family LLVM implementation.

+-------------------------------------+
| ARMV6                               |
+----------------+--------------------+
| ARMV6M (thumb) | ARMV6K (arm,thumb) | <- From ARMV6K and ARMV6M processors
+----------------+--------------------+    have support for hint instructions
| ARMV6T2 (arm,thumb,thumb2)          |    (SEV/WFE/WFI/NOP/YIELD). They can
+-------------------------------------+    be either real or default to NOP.
| ARMV7 (arm,thumb,thumb2)            |    The two processors also use
+-------------------------------------+    different encoding for them.

Patch by Vinicius Tinti.

llvm-svn: 232468
2015-03-17 11:55:28 +00:00
Ahmed Bougacha e0afb1fe6c [AArch64] Use intermediate step for concat_vectors of illegal truncs.
Optimize concat_vectors of truncated vectors, where the intermediate
type is illegal, to avoid said illegality,  e.g.,
  (v4i16 (concat_vectors (v2i16 (truncate (v2i64))),
                         (v2i16 (truncate (v2i64)))))
->
  (v4i16 (truncate (v4i32 (concat_vectors (v2i32 (truncate (v2i64))),
                                          (v2i32 (truncate (v2i64)))))))

This isn't really target-specific, and, as such, would best go in the
DAGCombiner.  However, ISD::TRUNCATE legality isn't keyed on both input
and result type, so we might generate worse code when we don't know
better.  On AArch64 we know it's fine for v2i64->v4i16 and v4i32->v8i8.
rdar://20022387

llvm-svn: 232459
2015-03-17 03:23:09 +00:00
Sanjoy Das 1ac93a5d89 [IRCE] Re-commit tests cases.
Re-commit the test cases added in r232444.  These now use
-irce-print-changed-loops and -irce-print-range-checks so they run
correctly on a without asserts build of llvm.

llvm-svn: 232452
2015-03-17 01:40:24 +00:00
Sanjoy Das 95d041215a [IRCE] Delete two tests.
I accidentally checked in two tests that used -debug-only -- these fail
on a release LLVM build.  Temporarily delete these from the repo to keep
the bots green while I fix this locally.

llvm-svn: 232446
2015-03-17 00:54:50 +00:00
Sanjoy Das e2cde6f195 [IRCE] Support half-range checks.
This change to IRCE gets it to recognize "half" range checks.  Half
range checks are range checks that only either check if the index is
`slt` some positive integer ("length") or if the index is `sge` `0`.

The range solver does not try to be clever / aggressive about solving
half-range checks -- it transforms "I < L" to "0 <= I < L" and "0 <= I"
to "0 <= I < INT_SMAX".  This is safe, but not always optimal.

llvm-svn: 232444
2015-03-17 00:42:13 +00:00
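
In source terms, the shape IRCE now additionally recognizes looks something
like this (my own illustration):

  /* The guard tests only the upper bound ("i < len") -- a "half" range
     check.  IRCE treats it as the full check 0 <= i < len, which is
     safe here because i starts at 0 and only increases. */
  void copy_guarded(int *dst, const int *src, int n, int len) {
    for (int i = 0; i < n; ++i) {
      if (i < len)                          /* half range check */
        dst[i] = src[i];
      else
        return;                             /* out-of-bounds path */
    }
  }
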
Justin Bogner d0a6243529 llvm-cov: Warn instead of error if a .gcda has arcs from an exit block
Patch by Vanderson M. Rosario. Thanks!

llvm-svn: 232443
2015-03-17 00:18:51 +00:00
Justin Bogner 3faa76bfab GCOV: Make the exit block placement from r223193 optional
By default we want our gcov emission to stay 4.2 compatible, which
means we need to continue emit the exit block last by default. We add
an option to emit it before the body for users that need it.

llvm-svn: 232438
2015-03-16 23:52:03 +00:00
Peter Collingbourne ad0bdcd238 LowerBitSets: do not use private aliases at all on Darwin.
LLVM currently turns these into linker-private symbols, which can be dead
stripped by the Darwin linker.

llvm-svn: 232435
2015-03-16 23:36:24 +00:00
David Blaikie 12cf5d70e8 Add testing for mismatched explicit type on a gep operator when loading from bitcode
llvm-svn: 232427
2015-03-16 22:03:50 +00:00
David Blaikie c695cc7e58 Add testing for mismatched explicit type on a load instruction when loading from bitcode
llvm-svn: 232424
2015-03-16 21:48:46 +00:00
Justin Bogner a438717638 InstrProf: Fix CoverageMappingReader on big endian
This makes the reader check the endianness of the object file its
given and behave appropriately. For the test I dug up a really old
linker and created a ppc-apple-darwin file for llvm-cov to read.

llvm-svn: 232422
2015-03-16 21:40:18 +00:00
David Majnemer a20616ec10 CodeGen: @llvm.eh.typeid.for replaced @llvm.eh.typeid.for.i32
We removed @llvm.eh.typeid.for.i32 and replaced it with
@llvm.eh.typeid.for quite some time ago.  Fix up some test cases which
never got updated.

llvm-svn: 232421
2015-03-16 21:36:38 +00:00
David Blaikie 675e8cb09e Test bitcode parsing error-handling for incorrect explicit type
(turns out I had regressed this when sinking handling of this type down
into GetElementPtrInst::Create - since that asserted before the error
handling was performed)

llvm-svn: 232420
2015-03-16 21:35:48 +00:00
Duncan P. N. Exon Smith b786572d7c DebugInfo: Fix testcases that fail -verify-debug-info=true
As part of PR22777, fix testcases that fail the debug info verifier.
The changes fall into the following categories:

  - Empty `filename:` fields in `MDFile`s.  Compile units and some types
    require non-empty filenames.  A number of testcases have empty
    filenames, probably due to hand-reduction of testcases.
  - Not-quite empty arrays: `!{i32 0}`.  This used to be equivalent in
    the debug info schema to `!{}`.  They cause problems for
    `!MDSubroutineType`'s `types:` array, since it requires all operands
    to be valid types.  (Note that `!{null}` is the correct type array
    for functions that take no arguments and return `void`.)
  - Significantly bitrotted testcases.  Nodes got left behind a few
    upgrades ago because of missing or invalid tags.

llvm-svn: 232415
2015-03-16 21:10:12 +00:00
Duncan P. N. Exon Smith 18e92078f2 Verifier: Remove unnecessary double-checks
Turns out `visitIntrinsicFunctionCall()` descends into all operands
already, so explicitly descending in `visitDbgIntrinsic()` (part of
r232296) isn't useful.

Updating a testcase that doesn't really need `-verify-debug-info` (since
r231082) as confirmation.

llvm-svn: 232408
2015-03-16 20:24:02 +00:00
Kevin Enderby bc847fa4ed Add the options, -dylibs-used and -dylib-id to llvm-objdump used with -macho
to print the Mach-O dynamic shared libraries used by a linked image or the
library id of a shared library.

llvm-svn: 232406
2015-03-16 20:08:09 +00:00
Duncan P. N. Exon Smith 3eea196a4f AsmParser: Stop requiring 'name:' when it's not printed
r230877 optimized which fields are written out for `CHECK`-ability, but
apparently missed changing some of them to optional in `LLParser`.

Fixes PR22921.

llvm-svn: 232400
2015-03-16 19:01:54 +00:00
Sanjay Patel 11ce908e4c fixed to test feature, not CPU
llvm-svn: 232398
2015-03-16 18:24:28 +00:00
Sanjay Patel a8ec726bb6 add CHECK-LABELs for more reliable testing
llvm-svn: 232391
2015-03-16 17:59:07 +00:00
Sanjay Patel 43ec821cb8 fixed to test feature, not CPU; removed unnecessary declaration
llvm-svn: 232387
2015-03-16 17:01:34 +00:00
Tom Stellard 7c840bcbf3 R600/SI: don't try min3/max3/med3 with f64
There are no opcodes for this. This also adds a test case.

v2: make test more robust

Patch by: Grigori Goronzy

llvm-svn: 232386
2015-03-16 15:53:55 +00:00
Petar Jovanovic b592a75f5e [MIPS] Fix justify error for small structures
Fix justify error for small structures bigger than 32 bits in fixed
arguments for MIPS64 big endian. There was a problem when small structures
are passed as fixed arguments. The structures that are bigger than 32 bits
but smaller than 64 bits were not left justified properly on MIPS64 big
endian. This is fixed by shifting the value to make it left justified when
appropriate.

Patch by Aleksandar Beserminji.

Differential Revision: http://reviews.llvm.org/D8174

llvm-svn: 232382
2015-03-16 15:01:09 +00:00
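
For reference, the affected arguments are by-value aggregates wider than 32
bits but narrower than 64 bits, for example (illustrative only):

  /* sizeof(struct five) == 5: bigger than 32 bits, smaller than 64.
     Passed by value it occupies one 64-bit register; on big-endian
     MIPS64 its bytes must sit in the most-significant (left-justified)
     part of that register, which is the shift the patch adds. */
  struct five { unsigned char b[5]; };

  unsigned char first_byte(struct five s) { return s.b[0]; }
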
Rafael Espindola 933f51af54 Use the i8 immediate cmp instructions when possible.
llvm-svn: 232378
2015-03-16 14:25:08 +00:00
Justin Bogner 0a582d766c InstrProf: Remove xfails for big-endian from coverage tests
This still doesn't actually work correctly for big endian input files,
but since these tests all use little endian input files they don't
actually fail. I'll be committing a real fix for big endian soon, but
I don't have proper tests for it yet.

llvm-svn: 232354
2015-03-16 07:29:49 +00:00
Michael Gottesman dd60f9bb09 [objc-arc] Make the ARC optimizer more conservative by forcing it to be non-safe in both directions, but mitigate the problem by noting that we just care if there was a further use.
The problem here is the infamous one direction known safe. I was
hesitant to turn it off before b/c of the potential for regressions
without an actual bug from users hitting the problem. This is that bug ;
).

The main performance impact of having known safe in both directions is
that often times it is very difficult to find two releases without a use
in-between them since we are so conservative with determining potential
uses. The one direction known safe gets around that problem by taking
advantage of many situations where we have two retains in a row,
allowing us to avoid that problem. That being said, the one direction
known safe is unsafe. Consider the following situation:

retain(x)
retain(x)
call(x)
call(x)
release(x)

Then we know the following about the reference count of x:

// rc(x) == N (for some N).
retain(x)
// rc(x) == N+1
retain(x)
// rc(x) == N+2
call A(x)
call B(x)
// rc(x) >= 1 (since we can not release a deallocated pointer).
release(x)
// rc(x) >= 0

That is all the information that we can know statically. That means that
we know that A(x), B(x) together can release (x) at most N+1 times. Lets
say that we remove the inner retain, release pair.

// rc(x) == N (for some N).
retain(x)
// rc(x) == N+1
call A(x)
call B(x)
// rc(x) >= 1
release(x)
// rc(x) >= 0

We knew before that A(x), B(x) could release x up to N+1 times meaning
that rc(x) may be zero at the release(x). That is not safe. On the other
hand, consider the following situation where we have a must use of
release(x) that x must be kept alive for after the release(x)**. Then we
know that:

// rc(x) == N (for some N).
retain(x)
// rc(x) == N+1
retain(x)
// rc(x) == N+2
call A(x)
call B(x)
// rc(x) >= 2 (since we know that we are going to release x and that that release can not be the last use of x).
release(x)
// rc(x) >= 1 (since we can not deallocate the pointer since we have a must use after x).
…
// rc(x) >= 1
use(x)

Thus we know that statically the calls to A(x), B(x) can together only
release rc(x) N times. Thus if we remove the inner retain, release pair:

// rc(x) == N (for some N).
retain(x)
// rc(x) == N+1
call A(x)
call B(x)
// rc(x) >= 1
…
// rc(x) >= 1
use(x)

We are still safe unless in the final … there are unbalanced retains,
releases which would have caused the program to blow up anyways even
before optimization occurred. The simplest form of must use is an
additional release that has not been paired up with any retain (if we
had paired the release with a retain and removed it we would not have
the additional use). This fits nicely into the ARC framework since
basically what you do is say that given any nested releases regardless
of what is in between, the inner release is known safe. This enables us to get
back the lost performance.

<rdar://problem/19023795>

llvm-svn: 232351
2015-03-16 07:02:36 +00:00
Justin Bogner 7b33cc9dbc InstrProf: Do a better job of reading coverage mapping data.
This code was casting regions of a memory buffer to a couple of
different structs. This is wrong in a few ways:

1. It breaks aliasing rules.
2. If the buffer isn't aligned, it hits undefined behaviour.
3. It completely ignores endianness differences.
4. The structs being defined for this aren't specifying their padding
   properly, so this doesn't even represent the data properly on some
   platforms.

This commit is mostly NFC, except that it fixes reading coverage for
32 bit binaries as a side effect of getting rid of the mispadded
structs. I've included a test for that.

I've also baked in that we only handle little endian more explicitly,
since that was true in practice already. I'll fix this to handle
endianness properly in a followup commit.

llvm-svn: 232346
2015-03-16 06:55:45 +00:00
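
The general replacement for "cast a buffer region to a struct" parsing is
to read fields byte by byte, which side-steps aliasing, alignment, and
endianness at once. A generic sketch, not the actual reader code:

  #include <stdint.h>

  /* Read a little-endian 32-bit field from an arbitrarily aligned byte
     buffer.  No pointer casts, so no aliasing or alignment issues, and
     the result is the same on big- and little-endian hosts. */
  static uint32_t read_le32(const unsigned char *p) {
    return (uint32_t)p[0]         | ((uint32_t)p[1] << 8) |
           ((uint32_t)p[2] << 16) | ((uint32_t)p[3] << 24);
  }
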
Frederic Riss bce93ff011 [dsymutil] Add support to generate .debug_pubnames and .debug_pubtypes
The information gathering part of the patch stores a bit more information
than what is strictly necessary for these 2 sections. The rest will
become useful when we start emitting __apple_* type accelerator tables.

llvm-svn: 232342
2015-03-16 02:05:10 +00:00
NAKAMURA Takumi 3289643c28 Rework r232337. Let llvm/test/tools/dsymutil/X86/basic-linking-x86.test be dospath-tolerant.
llvm-svn: 232339
2015-03-16 00:40:47 +00:00
NAKAMURA Takumi 208caf22d9 Suppress llvm/test/tools/dsymutil/X86/basic-linking-x86.test for now. Will fix later.
llvm-svn: 232337
2015-03-15 23:07:16 +00:00
NAKAMURA Takumi 6e40ce9f40 llvm/test/tools/dsymutil/X86/basic-lto-*-linking-x86.test: Relax expressions to meet dos path.
llvm-svn: 232336
2015-03-15 23:07:05 +00:00
Frederic Riss 63786b016f [dsymutil] Add support for linking line tables.
This code comes with a lot of cruft that is meant to mimic darwin's
dsymutil behavior. A much simpler approach (described in the numerous
FIXMEs that I put in there) gives the right output for the vast
majority of cases. The extra corner cases that are handled differently
need to be investigated: they seem to correctly handle debug info that
is in the input, but that info looks suspicious in the first place.

Anyway, the current code needs to handle this, but I plan to revisit it
as soon as the big round of validation against the classic dsymutil is
over.

llvm-svn: 232333
2015-03-15 20:45:43 +00:00
Simon Pilgrim 253fbb17e3 [SSE} Added tests for float4-float3 conversions (PR11580)
llvm-svn: 232324
2015-03-15 16:19:15 +00:00
David Majnemer f45bbd0da3 llvm-cxxdump: Rename llvm-vtabledump to llvm-cxxdump
llvm-vtabledump has grown enough functionality not related to vtables
that it deserves a name which is more descriptive.

llvm-svn: 232301
2015-03-15 01:30:58 +00:00
Frederic Riss 912d0f1261 [dsymutil] Add function size to the debug map.
The debug map embedded by ld64 in binaries contains function sizes.
These sizes are less precise than the ones given by the debug information
(byte granularity vs linker atom granularity), but they might cover code
that is referenced in the line table but not in the DIE tree (that might
very well be a compiler bug that I need to investigate later).
Anyway, extracting that information is necessary to be able to mimic
dsymutil's behavior exactly.

llvm-svn: 232300
2015-03-15 01:29:30 +00:00
Duncan P. N. Exon Smith 166121ad0b Verifier: Check debug info intrinsic arguments
Verify that debug info intrinsic arguments are valid.  (These checks
will not recurse through the full debug info graph, so they don't need
to be cordoned of in `DebugInfoVerifier`.)

With those checks in place, changing the `DbgIntrinsicInst` accessors to
downcast to `MDLocalVariable` and `MDExpression` is natural (added isa
specializations in `Metadata.h` to support this).

Added tests to `test/Verifier` for the new -verify checks, and fixed the
debug info in all the in-tree tests.

If you have out-of-tree testcases that have started to fail to -verify,
hopefully the verify checks are helpful.  The most likely problem is
that the expression argument is `!{}` (instead of `!MDExpression()`).

llvm-svn: 232296
2015-03-15 01:21:30 +00:00
Duncan P. N. Exon Smith 87c7686664 Assembler: Rewrite test for function-local metadata
This test for function-local metadata did strange things, and never
really sent in valid arguments for `llvm.dbg.declare` and
`llvm.dbg.value` intrinsics.  Those that might have once been valid have
bitrotted.

Rewrite it to be a targeted test for function-local metadata --
unrelated to debug info, which is tested elsewhere -- and rename it to
better match other metadata-related tests.

(Note: the scope of function-local metadata changed drastically during
the metadata/value split, but I didn't properly clean up this testcase.
Most of the IR in this file, while invalid for debug info intrinsics,
used to provide coverage for various (now illegal) forms of
function-local metadata.)

llvm-svn: 232290
2015-03-15 00:45:51 +00:00
Simon Pilgrim ece7475951 Simplified some stack folding tests.
Replaced explicit pmovzx* intrinsic tests with general shuffles

llvm-svn: 232286
2015-03-14 23:16:43 +00:00
Mehdi Amini b344ac9afe Update InstCombine to transform aggregate stores into scalar stores.
Summary: This is a first step toward getting proper support for aggregate loads and stores.

Test Plan: Added unittests

Reviewers: reames, chandlerc

Reviewed By: chandlerc

Subscribers: majnemer, joker.eph, chandlerc, llvm-commits

Differential Revision: http://reviews.llvm.org/D7780

Patch by Amaury Sechet

From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 232284
2015-03-14 22:19:33 +00:00
Frederic Riss dfb9790a3d [dsymutil] Add support for debug_loc section.
There is no need to look into the location expressions to transfer them,
the only modification to apply is to patch their base address to reflect
the linked function address.

llvm-svn: 232267
2015-03-14 15:49:07 +00:00
Daniel Jasper 15e6954aea [MachineLICM] First steps of sinking GEPs near calls.
Specifically, if there are copy-like instructions in the loop header
they are moved into the loop close to their uses. This reduces the live
intervals of the values and can avoid register spills.

This is working towards a fix for http://llvm.org/PR22230.
Review: http://reviews.llvm.org/D7259

Next steps:
- Find a better cost model (which non-copy instructions should be sunk?)
- Make this dependent on register pressure

llvm-svn: 232262
2015-03-14 10:58:38 +00:00
Frederic Riss 563b1b057a [dsymutil] Generate debug_aranges section.
This actually shares most of its implementation with the  generation
of the debug_ranges (the absence of 'a' is not a typo) contribution
for the unit's DW_AT_ranges attribute.

llvm-svn: 232246
2015-03-14 03:46:51 +00:00
Ahmed Bougacha 082c5c707a Add a bunch of CHECK missing colons in tests. NFC.
Some wouldn't pass;  fixed most, the rest will be fixed separately.

llvm-svn: 232239
2015-03-14 01:43:57 +00:00
Peter Collingbourne c9f277f754 LowerBitSets: Do not export symbols for bit set referenced globals on Darwin.
The linker on that platform may re-order symbols or strip dead symbols, which
will break bit set checks. Avoid this by hiding the symbols from the linker.

llvm-svn: 232235
2015-03-14 00:00:49 +00:00
Frederic Riss 25440876b0 [dsymutil] Implement DW_AT_ranges linking.
Nothing fancy, just a straightforward offset to apply to the original
debug_ranges entries to get them in line with the linked addresses.

llvm-svn: 232232
2015-03-13 23:30:31 +00:00
Rafael Espindola e7ce9ec398 Use add32ri8 and friends on fast isel.
This fixes pr22854.

The core issue on the bug is that there are multiple instructions that
print the same in assembly. In fact, there doesn't seem to be any
syntax for specifying that a constant that fits in 8 bits should use a 32 bit
immediate.

The attached patch changes fast isel to consider i16immSExt8,
i32immSExt8, and i64immSExt8. They were disabled because fastisel didn’t know
to call the predicate back in the day.

llvm-svn: 232223
2015-03-13 22:18:18 +00:00
Robert Lougher 1858ba7626 Reapply "[Reassociate] Add initial support for vector instructions."
This reapplies the patch previously committed at revision 232190.  This was
reverted at revision 232196 as it caused test failures in tests that did not
expect operands to be commuted.  I have made the tests more resilient to
reassociation in revision 232206.

llvm-svn: 232209
2015-03-13 20:53:01 +00:00
Duncan P. N. Exon Smith be95b4afc6 instcombine: alloca: Canonicalize scalar allocation array size
As a follow-up to r232200, add an `-instcombine` to canonicalize scalar
allocations to `i32 1`.  Since r232200, `iX 1` (for X != 32) are only
created by RAUWs, so this shouldn't fire too often.  Nevertheless, it's
a cheap check and a nice cleanup.

llvm-svn: 232202
2015-03-13 19:42:09 +00:00
Duncan P. N. Exon Smith 720762e2c0 AsmWriter: Write alloca array size explicitly (and -instcombine fixup)
Write the `alloca` array size explicitly when it's non-canonical.
Previously, if the array size was `iX 1` (where X is not 32), the type
would mutate to `i32` when round-tripping through assembly.

The testcase I added fails in `verify-uselistorder` (as well as
`FileCheck`), since the use-lists for `i32 1` and `i64 1` change.
(Manman Ren came across this when running `verify-uselistorder` on some
non-trivial, optimized code as part of PR5680.)

The type mutation started with r104911, which allowed array sizes to be
something other than an `i32`.  Starting with r204945, we
"canonicalized" to `i64` on 64-bit platforms -- and then on every
round-trip through assembly, mutated back to `i32`.

I bundled a fixup for `-instcombine` to avoid r204945 on scalar
allocations.  (There wasn't a clean way to sequence this into two
commits, since the assembly change on its own caused testcase churn, and
the `-instcombine` change can't be tested without the assembly changes.)

An obvious alternative fix -- change `AllocaInst::AllocaInst()`,
`AsmWriter` and `LLParser` to treat `intptr_t` as the canonical type for
scalar allocations -- was rejected out of hand, since this required
teaching them each about the data layout.

A follow-up commit will add an `-instcombine` to canonicalize the scalar
allocation array size to `i32 1` rather than leaving `iX 1` alone.

rdar://problem/20075773

llvm-svn: 232200
2015-03-13 19:30:44 +00:00
Robert Lougher 5e0ea66d59 Revert: "[Reassociate] Add initial support for vector instructions."
This reverts revision 232190 due to buildbot failure reported on clang-hexagon-elf
for test arm64_vtst.c.  To be investigated.

llvm-svn: 232196
2015-03-13 19:20:46 +00:00
Frederic Riss 6afcfce2d9 [dsymutil] Fix handling of cross-cu forward references.
We recorded the forward references in the CU that holds the referenced
DIE, but this is wrong as those will get resolved *after* the CU that
holds the reference. Record the references in their originating CU along
with a pointer to the remote CU to be able to compute the fixed up
offset at the right time.

llvm-svn: 232193
2015-03-13 18:35:57 +00:00
Frederic Riss 5a62dc3793 [dsymutil] Add relocation of compile_units low_pc/high_pc.
They need to be handled specifically as they could vary pretty
widely depending on how the linker moves functions around.

llvm-svn: 232192
2015-03-13 18:35:54 +00:00
Frederic Riss 111a0a8305 [dsymutil] Fix location cloning for newer dwarf versions.
The typo went unnoticed because we were testing only on Dwarf 2. Add a
Dwarf4 test that exercises the code path, and also tests some newer
FORMs that the other test doesn't cover.

llvm-svn: 232191
2015-03-13 18:35:39 +00:00
Robert Lougher 1bad505c3c [Reassociate] Add initial support for vector instructions.
This patch adds initial support for vector instructions to the reassociation
pass. It enables most parts of the pass to work with vectors but to keep the
size of the patch small, optimization of Xor trees, canonicalization of
negative constants and converting shifts to muls, etc., have been left out.
This will be handled in later patches.

The patch is based on an initial patch by Chad Rosier.

Differential Revision: http://reviews.llvm.org/D7566

llvm-svn: 232190
2015-03-13 18:33:27 +00:00
Sanjoy Das f1e9e1df25 [SCEV] Fix PR22856.
Summary:
ScalarEvolutionExpander assumes that the header block of a loop is a
legal place to have a use for a phi node.  This is true only for phis
that are either in the header or dominate the header block, but it is
not true for phi nodes that are strictly internal to the loop body.

This change teaches ScalarEvolutionExpander to place uses of PHI nodes
in the basic block the PHI nodes belong to.  This is always legal, and
`hoistIVInc` ensures that the said position dominates `IsomorphicInc`.

Reviewers: atrick

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D8311

llvm-svn: 232189
2015-03-13 18:31:19 +00:00
David Blaikie f72d05bc7b [opaque pointer type] Add textual IR support for explicit type parameter to gep operator
Similar to gep (r230786) and load (r230794) changes.

A similar migration script can be used to update test cases; it
successfully migrated all of LLVM and Polly, but about 4 test cases
needed manual changes in Clang.

(this script will read the contents of stdin and massage it into stdout
- wrap it in the 'apply.sh' script shown in previous commits + xargs to
apply it over a large set of test cases)

import fileinput
import sys
import re

rep = re.compile(r"(getelementptr(?:\s+inbounds)?\s*\()((<\d*\s+x\s+)?([^@]*?)(|\s*addrspace\(\d+\))\s*\*(?(3)>)\s*)(?=$|%|@|null|undef|blockaddress|getelementptr|addrspacecast|bitcast|inttoptr|zeroinitializer|<|\[\[[a-zA-Z]|\{\{)", re.MULTILINE | re.DOTALL)

def conv(match):
  line = match.group(1)
  line += match.group(4)
  line += ", "
  line += match.group(2)
  return line

line = sys.stdin.read()
off = 0
for match in re.finditer(rep, line):
  sys.stdout.write(line[off:match.start()])
  sys.stdout.write(conv(match))
  off = match.end()
sys.stdout.write(line[off:])

llvm-svn: 232184
2015-03-13 18:20:45 +00:00
Kevin Enderby f064075e54 Add the option, -non-verbose to llvm-objdump used with -macho to print things
using numeric values and not their symbolic constant names.

The routines that print Mach-O stuff already had a verbose parameter, so this
change just switches from passing true to passing !NonVerbose, with just a
couple of fixes and a bunch of test case updates.

llvm-svn: 232182
2015-03-13 17:56:32 +00:00
Andrea Di Biagio 510feca1b8 [X86][AVX] Fix wrong lowering of v4x64 shuffles into concat_vector plus extract_subvector nodes.
This patch fixes a bug in the shuffle lowering logic implemented by function
'lowerV2X128VectorShuffle'.

There are a few cases where function 'lowerV2X128VectorShuffle' wrongly expands a
shuffle of two v4X64 vectors into a CONCAT_VECTORS of two EXTRACT_SUBVECTOR
nodes. The problematic expansion only occurs when the shuffle mask M has an
'undef' element at position 2, and M is equivalent to mask <0,1,4,5>.
In that case, the algorithm propagates the wrong vector to one of the two
new EXTRACT_SUBVECTOR nodes.

Example:
;;
define <4 x double> @test(<4 x double> %A, <4 x double> %B) {
entry:
  %0 = shufflevector <4 x double> %A, <4 x double> %B, <4 x i32><i32 undef, i32 1, i32 undef, i32 5>
  ret <4 x double> %0
}
;;

Before this patch, llc (-mattr=+avx) generated:
  vinsertf128 $1, %xmm0, %ymm0, %ymm0

With this patch, llc correctly generates:
  vinsertf128 $1, %xmm1, %ymm0, %ymm0

Added test lower-vec-shuffle-bug.ll

Differential Revision: http://reviews.llvm.org/D8259

llvm-svn: 232179
2015-03-13 17:29:49 +00:00
Matt Arsenault 314eac7477 R600/SI: Add test for min / max with immediate
Make sure this isn't getting confused by canonicalizations
of comparisons with a constant.

llvm-svn: 232177
2015-03-13 16:43:48 +00:00
David Majnemer e2a4b856d8 ConstantFold: Fix big shift constant folding
Constant folding for shift IR instructions ignores all bits above 32 of
second argument (shift amount).
Because of that, some undef results are not recognized and APInt can
raise an assert failure if second argument has more than 64 bits.
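
As a rough standalone illustration of the corrected logic (made-up helper,
not the ConstantFold code itself): the full shift amount has to be consulted,
and any amount at or above the bit width yields an undefined result.

#include <cstdint>
#include <cstdio>

// Decide whether a shift of a BitWidth-bit value is well defined. ShAmtLow64
// is the low 64 bits of the amount; HighBitsZero says whether every bit of
// the (possibly wider) amount above bit 63 is zero.
bool shiftIsDefined(unsigned BitWidth, uint64_t ShAmtLow64, bool HighBitsZero) {
  if (!HighBitsZero)
    return false;                 // the amount is astronomically large: undef
  return ShAmtLow64 < BitWidth;   // compare the full amount, not (amount & 31)
}

int main() {
  // 0x100000003 truncated to 32 bits looks like a shift by 3, but the real
  // amount is >= 32, so the result should be recognized as undefined.
  printf("%d\n", shiftIsDefined(32, 0x100000003ULL, true)); // 0
  printf("%d\n", shiftIsDefined(32, 3, true));              // 1
}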

Patch by Paweł Bylica!

Differential Revision: http://reviews.llvm.org/D7701

llvm-svn: 232176
2015-03-13 16:39:46 +00:00
Owen Anderson 41a185c521 Teach TBAA analysis to report errors on cyclic TBAA metadata rather than hanging.
llvm-svn: 232144
2015-03-13 07:09:33 +00:00
Owen Anderson 08f46e1de6 Fix an infinite recursion in the verifier caused by calling isSized on a recursive type.
llvm-svn: 232143
2015-03-13 06:41:26 +00:00
Hao Liu 04183242b3 [MachineCopyPropagation] Fix a bug causing incorrect removal for the instruction sequences as follows
%Q5_Q6<def> = COPY %Q2_Q3
   %D5<def> =
   %D3<def> =
   %D3<def> = COPY %D6     // Incorrectly removed in MachineCopyPropagation
   Using %D3 then gives an incorrect result ...

   Reviewed in http://reviews.llvm.org/D8242 

llvm-svn: 232142
2015-03-13 05:15:23 +00:00
Nick Lewycky b6ef9a14de When forming an addrec out of a phi don't just look at the last computation and steal its flags for our own; there may be other computations in the middle. Check whether the LHS of the computation is the phi itself, in which case we know it's safe to steal the flags. Fixes PR22795.
There's a missed optimization opportunity where we could look at the full chain of computation and take the intersection of the flags instead of only looking one instruction deep.

llvm-svn: 232134
2015-03-13 01:37:52 +00:00
Sanjay Patel 4339abe66f [X86, AVX2] Replace inserti128 and extracti128 intrinsics with generic shuffles
This should complete the job started in r231794 and continued in r232045:
We want to replace as much custom x86 shuffling via intrinsics
as possible because pushing the code down the generic shuffle
optimization path allows for better codegen and less complexity
in LLVM.

AVX2 introduced proper integer variants of the hacked integer insert/extract
C intrinsics that were created for this same functionality with AVX1.

This should complete the removal of insert/extract128 intrinsics.
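
As a rough sketch of the mapping (assumed helper, not the actual lowering
code): extracting 128-bit lane L of a vector with NumElts elements is just a
shuffle mask selecting the contiguous half that starts at L * NumElts/2.

#include <cstdio>
#include <vector>

std::vector<int> extract128Mask(int NumElts, int Lane) {
  int Half = NumElts / 2;          // elements per 128-bit lane of a 256-bit vector
  std::vector<int> Mask(Half);
  for (int i = 0; i < Half; ++i)
    Mask[i] = Lane * Half + i;     // indices into the wide source vector
  return Mask;
}

int main() {
  for (int M : extract128Mask(/*NumElts=*/8, /*Lane=*/1))  // upper half of a v8i32
    printf("%d ", M);                                      // prints: 4 5 6 7
  printf("\n");
}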

The Clang precursor patch for this change was checked in at r232109.

llvm-svn: 232120
2015-03-12 23:16:18 +00:00
Simon Pilgrim fb53eded5f Removed useless palignr test - we don't actually provide a llvm.x86.ssse3.palign.r.128 intrinsic
Differential Revision: http://reviews.llvm.org/D8302

llvm-svn: 232108
2015-03-12 21:42:03 +00:00
Tom Stellard c0503926f5 R600/SI: Remove _e32 and _e64 suffixes from mnemonics
Instead print them as part of the $dst operand.  The AsmMatcher
requires the 32-bit and 64-bit encodings have the same mnemonic in
order to parse them correctly.

llvm-svn: 232105
2015-03-12 21:34:22 +00:00
Andrew Kaylor 147385bbac Adding WinEHPrepare tests (currently XFAILs)
llvm-svn: 232104
2015-03-12 21:32:59 +00:00
Krzysztof Parzyszek 99f2276adf Unxfail passing test on Hexagon
test/CodeGen/Generic/2008-02-20-MatchingMem.ll

llvm-svn: 232098
2015-03-12 20:38:10 +00:00
Quentin Colombet f59b2d034c [X86] Fix a regression introduced by r223641.
The permps and permd instructions have their operands swapped compared to the
intrinsic definition. Therefore, they do not fall into the INTR_TYPE_2OP
category.

I did not create a new category for those two, as they are the only ones AFAICT
in that case.

<rdar://problem/20108262>

llvm-svn: 232085
2015-03-12 19:34:12 +00:00
Krzysztof Parzyszek a29622a8c5 Remove unused complex patterns for addressing modes on Hexagon.
llvm-svn: 232057
2015-03-12 16:44:50 +00:00
Andrea Di Biagio de2fb00a16 [X86] Fix wrong target specific combine on SETCC nodes.
Part of the folding logic implemented by function 'PerformISDSETCCCombine'
only worked under the assumption that the condition code in input could have
been either SETNE or SETEQ.
Unfortunately that assumption was incorrect, and in some cases the algorithm
ended up incorrectly folding SETCC nodes.

The incorrect folding only affected SETCC dag nodes where:
 - one of the operands was a build_vector of all zeroes;
 - the other operand was a SIGN_EXTEND from a vector of MVT:i1 elements;
 - the condition code was neither SETNE nor SETEQ.

Example:
  (setcc (v4i32 (sign_extend v4i1:%A)), (v4i32 VectorOfAllZeroes), setge)

Before this patch, the entire dag node sequence from the example was
incorrectly folded to node %A.

With this patch, the dag node sequence is folded to a
  (xor %A, (v4i1 VectorOfAllOnes)).
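
A tiny standalone check of the scalar arithmetic behind the corrected fold
(illustration only): a sign-extended i1 compared setge against zero gives the
logical NOT of the original bit, which is why the xor with all-ones is right.

#include <cstdint>
#include <cstdio>

int main() {
  for (int a = 0; a <= 1; ++a) {       // the i1 element value
    int32_t sext = a ? -1 : 0;         // sign_extend i1 -> i32
    bool setge = (sext >= 0);          // the SETCC being folded
    printf("a=%d  (sext a) >= 0 -> %d  !a -> %d\n", a, setge, !a);
  }
}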

Added test setcc-combine.ll.

Thanks to Greg Bedwell for spotting this issue.

llvm-svn: 232046
2015-03-12 15:16:58 +00:00
Sanjay Patel af1846c097 [X86, AVX] replace vextractf128 intrinsics with generic shuffles
Now that we've replaced the vinsertf128 intrinsics, 
do the same for their extract twins.

This is very much like D8086 (checked in at r231794):
We want to replace as much custom x86 shuffling via intrinsics
as possible because pushing the code down the generic shuffle
optimization path allows for better codegen and less complexity
in LLVM.

This is also the LLVM sibling to the cfe D8275 patch.

Differential Revision: http://reviews.llvm.org/D8276

llvm-svn: 232045
2015-03-12 15:15:19 +00:00
Simon Pilgrim 4952a0cba2 [X86][AVX2] Added missing palignr stack folding test
llvm-svn: 232033
2015-03-12 13:12:33 +00:00
Elena Demikhovsky 5d06b4c80c AVX-512: Added encoding tests for VPROR, VPROL instructions,
fixed opcode.

llvm-svn: 232018
2015-03-12 07:28:41 +00:00
Kevin Qin 49bc764310 Reapply 'Run LICM pass after loop unrolling pass.'
It was first committed at r231630 and reverted at r231635.

The InstructionSimplifier function pass is inserted as a barrier to
make sure the loop unroll pass won't affect the LICM pass.

llvm-svn: 232011
2015-03-12 05:36:01 +00:00
Jingyue Wu e8290f21b5 [NVPTXAsmPrinter] do not print .align on function headers
Summary:
PTX does not allow .align directives on function headers.

Fixes PR21551.

Test Plan: test/Codegen/NVPTX/function-align.ll

Reviewers: eliben, jholewinski

Reviewed By: eliben, jholewinski

Subscribers: llvm-commits, eliben, jpienaar, jholewinski

Differential Revision: http://reviews.llvm.org/D8274

llvm-svn: 232004
2015-03-12 01:50:30 +00:00
Reid Kleckner e1b22ece4c Remove some CHECK-NOT lines in favor of CHECK-NEXT
NFC, this is just shorter.

llvm-svn: 232000
2015-03-12 01:38:48 +00:00
Reid Kleckner 47c8e7a0e7 Stop calling DwarfEHPrepare from WinEHPrepare
Instead, run both EH preparation passes, and have them both ignore
functions with unrecognized EH personalities. Pass delegation involved
some hacky code for creating an AnalysisResolver that we don't need now.

llvm-svn: 231995
2015-03-12 00:36:20 +00:00
Reid Kleckner 016c6b2104 Handle big index in getelementptr instruction
CodeGen incorrectly ignores (assert from APInt) constant index bigger
than 2^64 in getelementptr instruction. This is a test and fix for that.

Patch by Paweł Bylica!

Reviewed By: rnk

Subscribers: majnemer, rnk, mcrosier, resistor, llvm-commits

Differential Revision: http://reviews.llvm.org/D8219

llvm-svn: 231984
2015-03-11 23:36:10 +00:00
Andrew Kaylor 6b67d42773 Extended support for native Windows C++ EH outlining
Differential Review: http://reviews.llvm.org/D7886

llvm-svn: 231981
2015-03-11 23:22:06 +00:00
Kevin Enderby cd66be5dda Add the option, -info-plist to llvm-objdump used with -macho to print the
Mach-O info plist section as strings.

llvm-svn: 231974
2015-03-11 22:06:32 +00:00
Jozef Kolek 4468947fce [mips][microMIPS] Make use of NOT16 in the code generator
Differential Revision: http://reviews.llvm.org/D7748

llvm-svn: 231963
2015-03-11 20:28:31 +00:00
Sanjay Patel f5b673dd50 add CHECK-LABELs for better reliability
llvm-svn: 231962
2015-03-11 20:12:07 +00:00
Rafael Espindola ab447e436d Put jump tables in unique sections on COFF.
If a function is going in an unique section (because of -ffunction-sections
for example), putting a jump table in .rodata will keep .rodata alive and
that will keep alive any other function that also has a jump table.

Instead, put the jump table in a unique section that is associated with the
function.

llvm-svn: 231961
2015-03-11 19:58:37 +00:00
Tim Northover 8cda34f5e7 ARM: simplify and extend byval handling
The main issue being fixed here is that APCS targets handling a "byval align N"
parameter with N > 4 were miscounting what objects were where on the stack,
leading to FrameLowering setting the frame pointer incorrectly and clobbering
the stack.

But byval handling had grown over many years, and had multiple layers of cruft
trying to compensate for each other and calculate padding correctly. This only
really needs to be done once, in the HandleByVal function. Elsewhere should
just do what it's told by that call.

I also stripped out unnecessary APCS/AAPCS distinctions (now that Clang emits
byvals with the correct C ABI alignment), which simplified HandleByVal.

rdar://20095672

llvm-svn: 231959
2015-03-11 18:54:22 +00:00
Frederic Riss 31da324aaf [dsymutil] Correctly clone address attributes.
DW_AT_low_pc on functions is taken care of by the relocation processing, but
DW_AT_high_pc and DW_AT_low_pc on other lexical scopes need special handling.

llvm-svn: 231955
2015-03-11 18:45:52 +00:00
David Majnemer d61a6fd8ed InstCombine: Don't fold call bitcast into args if callee is byval
This fixes a bug reported here:
http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20150309/265341.html

llvm-svn: 231948
2015-03-11 18:03:05 +00:00
Juergen Ributzka 0c598cdaf1 Add the "vbroadcasti128" instruction back.
This is a follow-up to r231182. This adds the "vbroadcasti128" instruction
back, but without the intrinsic mapping. Also add a test to check the
instruction encoding.

This is related to rdar://problem/18742778.

llvm-svn: 231945
2015-03-11 17:29:03 +00:00
Derek Schuff 072f93fe72 Make NaCl's use of .init_array for static constructors match Linux
Summary:
The generic ELF TargetObjectFile defaults to .ctors, but Linux's
defaults to .init_array by calling InitializeELF with the value of
UseInitArray from TargetMachine. Make NaCl's behavior match.

Reviewers: jvoung
Differential Revision: http://reviews.llvm.org/D8240

llvm-svn: 231934
2015-03-11 16:16:09 +00:00
Sanjay Patel c04b6f242c Inliner should not add callgraph edges for intrinsic calls (PR22857)
The CallGraphNode function "addCalledFunction()" asserts that edges are not to intrinsics.

This patch makes sure that the Inliner does not add such an edge to the callgraph.

Fix for clang crash by assertion: https://llvm.org/bugs/show_bug.cgi?id=22857

Differential Revision: http://reviews.llvm.org/D8231

llvm-svn: 231927
2015-03-11 15:12:32 +00:00
Benjamin Kramer 23445cadb2 Prefer pipes over temporary files in a feeble attempt to stabilize this test on windows.
llvm-svn: 231923
2015-03-11 13:55:41 +00:00
Rafael Espindola 9eb4ddc54c Relax CHECK to match mips syntax.
llvm-svn: 231919
2015-03-11 12:48:24 +00:00
Elena Demikhovsky 0b9dbe33aa AVX-512: Added SKX forms of shift instructions.
Added rotation instructions, encoding only.
Added encoding tests for all these forms.

llvm-svn: 231916
2015-03-11 10:25:42 +00:00
Justin Bogner f19da3c33a Now that r231902's test is executed, make it actually pass
As of r231908, the test I added in r231902 actually gets run - but I'd
checked in a stale version of the input so it didn't pass. Fix the
input and un-xfail the test.

llvm-svn: 231911
2015-03-11 08:17:25 +00:00
Owen Anderson a3c68fdf82 Fix another verifier crash where a GC intrinsic would look at the internals of another intrinsic in order to verify itself.
This causes a crash if the referenced intrinsic was malformed.  In this case, we
would already have reported an error on the referenced intrinsic, but then
crashed on the second one when it tried to introspect the first without
error checking.

llvm-svn: 231910
2015-03-11 06:57:30 +00:00
Daniel Jasper 92e701b16f Make test added in r231902 actually be executed.
There were also errors in the CHECK line, which I fixed, but the test
doesn't actually pass, as the "100" is on the wrong line. Not sure
whether this is a test failure or a coverage failure, so I'm making the
test XFAIL for now.

llvm-svn: 231908
2015-03-11 06:44:51 +00:00
Rafael Espindola 23562fcfe9 Don't print labels that on ELF are never used.
llvm-svn: 231904
2015-03-11 04:20:31 +00:00
Justin Bogner 4379535e3f InstrProf: Teach llvm-cov to handle universal binaries when given -arch
llvm-svn: 231902
2015-03-11 02:30:51 +00:00
Rafael Espindola d9f6e5daaa Relax label CHECK to match COFF syntax.
Should fix the cygwin bots.

I added a cygwin specific test that would have caught this on Linux.

llvm-svn: 231899
2015-03-11 01:08:32 +00:00
Rafael Espindola f1a13f5ad5 Print section start labels when first switching to the section.
This is less brittle and avoids polluting the start of the file with every
debug section.

llvm-svn: 231898
2015-03-11 00:51:37 +00:00
Rafael Espindola 9b8a4e301a Split test in two to handle building without x86.
llvm-svn: 231886
2015-03-10 23:44:12 +00:00
Rafael Espindola b03bc79bed Add missing section symbol to COFF's .debug_types.dwo.
Should bring the cygwin bots back.

I added a triple to the test that was failing so that it would have failed
on Linux.

llvm-svn: 231882
2015-03-10 23:06:32 +00:00
Philip Reames 71c4035c18 If a conditional branch jumps to the same target, remove the condition
Given that large parts of instcombine are restricted to instructions which have one use, getting rid of a use on the condition can help the effectiveness of the optimizer. Also, it allows the condition to potentially be deleted by instcombine rather than waiting for another pass.

I noticed this completely by accident in another test case. It's not anything that actually came from a real workload.

p.s. We should probably do the same thing for switch instructions.

Differential Revision: http://reviews.llvm.org/D8220

llvm-svn: 231881
2015-03-10 22:52:37 +00:00
Paul Robinson 857b4434df Emit correct linkage-name attribute based on DWARF version.
There are still 4 tests that check for DW_AT_MIPS_linkage_name,
because they specify DWARF 2 or 3 in the module metadata. So, I didn't
create an explicit version-based test for the attribute.

Differential Revision: http://reviews.llvm.org/D8227

llvm-svn: 231880
2015-03-10 22:44:45 +00:00
Philip Reames 1c29227144 Infer known bits from dominating conditions
This patch adds limited support in ValueTracking for inferring known bits of a value from conditional expressions which must be true to reach the instruction we're trying to optimize. At this time, the feature is off by default. Once landed, I'm hoping for feedback from others on both profitability and compile time impact.

Forms of conditional value propagation have been tried in LLVM before and have failed due to compile time problems.  In an attempt to side step that, this patch only considers conditions where the edge leaving the branch dominates the context instruction. It does not attempt full dataflow.  Even with that restriction, it handles many interesting cases:
 * Early exits from functions
 * Early exits from loops (for context instructions in the loop and after the check)
 * Conditions which control entry into loops, including multi-version loops (such as those produced during vectorization, IRCE, loop unswitch, etc..)

Possible applications include optimizing using information provided by constructs such as: preconditions, assumptions, null checks, & range checks.

This patch implements two approaches to the problem that need further benchmarking.  Approach 1 is to directly walk the dominator tree looking for interesting conditions.  Approach 2 is to inspect other uses of the value being queried for interesting comparisons.  From initial benchmarking, it appears that Approach 2 is faster than Approach 1, but this needs to be further validated.  
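
As a minimal sketch of the flavor of inference involved (hypothetical helper,
not the ValueTracking implementation): a dominating guard of the form
(x & Mask) == Value pins down the masked bits of x inside the guarded block.

#include <cstdint>
#include <cstdio>

struct KnownBits { uint64_t Zero = 0, One = 0; };   // known-zero / known-one masks

// Bits of x implied by a dominating condition "(x & Mask) == Value".
KnownBits knownFromMaskedEquality(uint64_t Mask, uint64_t Value) {
  KnownBits K;
  K.One  = Mask & Value;    // these bits must be 1 for the guard to hold
  K.Zero = Mask & ~Value;   // these bits must be 0 for the guard to hold
  return K;
}

int main() {
  // e.g. the branch "if ((x & 0xF0) == 0x30)" dominates the use of x
  KnownBits K = knownFromMaskedEquality(0xF0, 0x30);
  printf("known-one: 0x%llx  known-zero: 0x%llx\n",
         (unsigned long long)K.One, (unsigned long long)K.Zero);  // 0x30 / 0xc0
}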

Differential Revision: http://reviews.llvm.org/D7708

llvm-svn: 231879
2015-03-10 22:43:20 +00:00
Quentin Colombet 1b274f99ad [CodeGenPrepare] Refine the cost model provided by the promotion helper.
- Use TargetLowering to check for the actual cost of each extension.
- Provide a factorized method to check for the cost of an extension:
  TargetLowering::isExtFree.
- Provide a virtual method TargetLowering::isExtFreeImpl for targets to be able
  to tune the cost of non-free extensions.

This refactoring offers a better granularity to model what really happens on
different targets.

No performance changes and very few code differences.

Part of <rdar://problem/19267165> 

llvm-svn: 231855
2015-03-10 21:48:15 +00:00
Nemanja Ivanovic 0adf26b9b0 Add support for part-word atomics for PPC
http://reviews.llvm.org/D8090#inline-67337

llvm-svn: 231843
2015-03-10 20:51:07 +00:00
Ahmed Bougacha fab5892f8b [AArch64] Avoid going through GPRs for across-vector instructions.
This adds new node types for each intrinsic.
For instance, for addv, we have AArch64ISD::UADDV, such that:
  (v4i32 (uaddv ...))
is the same as
  (v4i32 (scalar_to_vector (i32 (int_aarch64_neon_uaddv ...))))
that is,
  (v4i32 (INSERT_SUBREG (v4i32 (IMPLICIT_DEF)),
           (i32 (int_aarch64_neon_uaddv ...)), ssub)

In a combine, we transform all such across-vector-lanes intrinsics to:

  (i32 (extract_vector_elt (uaddv ...), 0))

This has one big advantage: by making the extract_element explicit, we
enable the existing patterns for lane-aware instructions to fire.
This lets us avoid needlessly going through the GPRs.  Consider:

    uint32x4_t test_mul(uint32x4_t a, uint32x4_t b) {
        return vmulq_n_u32(a, vaddvq_u32(b));
    }

We now generate:
    addv.4s  s1, v1
    mul.4s   v0, v0, v1[0]
instead of the previous:
    addv.4s  s1, v1
    fmov     w8, s1
    dup.4s   v1, w8
    mul.4s   v0, v1, v0

rdar://20044838

llvm-svn: 231840
2015-03-10 20:45:38 +00:00
Bruno Cardoso Lopes b3a58b4c3c [AsmPrinter][TLOF] Reintroduce AArch64 test
Follow up from r231505.

Fix the non-determinism by using a MapVector and reintroduce the AArch64
testcase. Defer deleting the got candidates up to the end and remove
them in a bulk, avoiding linear time removal of each element.

Thanks to Renato Golin for trying it out on other platforms.

llvm-svn: 231830
2015-03-10 20:05:23 +00:00
Kit Barton 20d3981e15 Change the generation of the vmuluwm instruction to be based on the MUL opcode.
Phabricator review: http://reviews.llvm.org/D8185

llvm-svn: 231827
2015-03-10 19:49:38 +00:00
Adam Nemet 58913d65ad [LoopAccesses 3/3] Print the dependences with -analyze
The dependences are now expose through the new getInterestingDependences
API so we can use that with -analyze too and fix the FIXME.

This lets us remove the test that relied on -debug to check the
dependences.

llvm-svn: 231807
2015-03-10 17:40:43 +00:00
Igor Laevsky 85f7f727d3 Teach lowering to correctly handle invoke statepoint and gc results tied to them. Note that we still can not lower gc.relocates for invoke statepoints.
Also it extracts getCopyFromRegs helper function in SelectionDAGBuilder as we need to be able to customize type of the register exported from basic block during lowering of the gc.result.
(Resubmitting this change after not being able to reproduce buildbot failure)

Differential Revision: http://reviews.llvm.org/D7760

llvm-svn: 231800
2015-03-10 16:26:48 +00:00
Sanjay Patel 19792fb270 [X86, AVX] replace vinsertf128 intrinsics with generic shuffles
We want to replace as much custom x86 shuffling via intrinsics
as possible because pushing the code down the generic shuffle
optimization path allows for better codegen and less complexity
in LLVM.

This is the sibling patch for the Clang half of this change:
http://reviews.llvm.org/D8088

Differential Revision: http://reviews.llvm.org/D8086

llvm-svn: 231794
2015-03-10 16:08:36 +00:00
Karthik Bhat 8d7f7eda14 Fix a memory corruption in Dependency Analysis.
This crash occurs due to memory corruption when trying to update dependency
direction based on Constraints.

This crash was observed during lnt regression of Polybench benchmark test case dynprog.

Review: http://reviews.llvm.org/D8059
llvm-svn: 231788
2015-03-10 14:32:02 +00:00
Karthik Bhat 8d0099bdab Fix a crash in Dependency Analysis.
This crash in Dependency Analysis occurs because we assume that, in the case of UsefulGEP,
both source and destination have the same number of operands, which may not be true.
This incorrect assumption results in a crash while populating Pairs. Fix that.

This crash was observed during lnt regression for code such as-
  struct s{
    int A[10][10];
    int C[10][10][10]; 
  } S;
  void dep_constraint_crash_test(int k,int N)  {
     for( int i=0;i<N;i++)
       for( int j=0;j<N;j++)
         S.A[0][0] = S.C[0][0][k];
  }
Review: http://reviews.llvm.org/D8162

llvm-svn: 231784
2015-03-10 13:31:03 +00:00
Owen Anderson 58364dc4da Fix a crash in InstCombine where we could try to truncate a switch comparison to zero width.
llvm-svn: 231761
2015-03-10 06:51:39 +00:00
Owen Anderson e90f992b21 Fix a stack overflow in the assembler when checking that GEPs must be over sized types.
We failed to use a marking set to properly handle recursive types, which caused us
to recurse infinitely and eventually overflow the stack.
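
A self-contained sketch of the general fix pattern (simplified types, not the
verifier's code): thread a visited set through the walk so a self-referential
type terminates the recursion.

#include <cstdio>
#include <set>
#include <vector>

struct Type { std::vector<Type *> Contained; };

// Returns once a type is revisited instead of recursing forever on a cyclic
// (recursive) type graph.
bool walkType(Type *T, std::set<Type *> &Visited) {
  if (!Visited.insert(T).second)
    return true;               // already on the walk: don't recurse again
  for (Type *Elt : T->Contained)
    if (!walkType(Elt, Visited))
      return false;
  return true;                 // every reachable member was examined once
}

int main() {
  Type A, B;
  A.Contained.push_back(&B);
  B.Contained.push_back(&A);   // recursive: A -> B -> A
  std::set<Type *> Visited;
  printf("%d\n", walkType(&A, Visited));  // terminates and prints 1
}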

llvm-svn: 231760
2015-03-10 06:34:57 +00:00
Owen Anderson 3e7e67b5ed Fix an issue in the verifier where we could try to read information out of a malformed statepoint intrinsic.
In this situation we would always have already flagged an error on the statepoint intrinsic,
but then we carry on to parse other, related GC intrinsics, and could end up crashing during that
verification when they try to access data from the malformed statepoint.

llvm-svn: 231759
2015-03-10 05:58:21 +00:00
Owen Anderson 51b75b8c34 Fix an infinite loop in InstCombine when an instruction with no users and side effects can be constant folded.
ReplaceInstUsesWith needs to return nullptr when the input has no users,
because in that case it does not mutate the program.  Otherwise, we can
get stuck in an infinite loop of repeatedly attempting to constant fold
and instruction with no users.

llvm-svn: 231755
2015-03-10 05:13:47 +00:00
Rafael Espindola fcc2821882 Use a better name for compile unit labels.
They mark the start of a compile unit, so name them .Lcu_*. Using
Section->getLabelBeginName() makes it looks like they mark the start of the
section.

While at it, switch to createTempSymbol to avoid collisions with labels
created in inline assembly. Not sure if a "don't crash" test is worth it.

With this getLabelBeginName is dead, delete it.

llvm-svn: 231750
2015-03-10 03:58:36 +00:00
George Burgess IV ab03af277b Added ConstantExpr support to CFLAA.
CFLAA didn't know how to properly handle ConstantExprs; it would silently
ignore them. This was a problem if the ConstantExpr is, say, a GEP of a global,
because CFLAA wouldn't realize that there's a global there. :)

llvm-svn: 231743
2015-03-10 02:58:15 +00:00
George Burgess IV b54a8d62a4 Added special handling for inttoptr in CFLAA.
We now treat pointers given to ptrtoint and pointers retrieved from
inttoptr as similar to arguments or globals (can alias anything, etc.)

This solves some of the problems we were having with giving incorrect
results.

llvm-svn: 231741
2015-03-10 02:40:06 +00:00
Kostya Serebryany 48a4023f40 [sanitizer] fix instrumentation with -mllvm -sanitizer-coverage-block-threshold=0 to actually do something useful.
llvm-svn: 231736
2015-03-10 01:58:27 +00:00
Frederic Riss 0e9a50f5b5 DwarfAccelTable: Fix handling of hash collisions.
It turns out accelerator tables were totally broken if they contained
entries with colliding hashes. The failure mode is pretty bad, as it not
only impacted the colliding entries, but would basically make all the
entries after the first hash collision point to the wrong place.

The testcase uses the symbol names that were found to collide during a
clang build.

From a performance point of view, the patch adds a sort and a linear
walk over each bucket contents. While it has a measurable impact on the
accelerator table emission, it's not showing up significantly in clang
profiles (and I'd argue that correctness is priceless :-)).

llvm-svn: 231732
2015-03-10 00:46:31 +00:00
Colin LeMahieu fa79110cc7 [Hexagon] Removing unused patterns.
llvm-svn: 231723
2015-03-09 23:08:46 +00:00
David Blaikie 8d75794bfb LLParser: gep: Simplify parsing error handling
llvm-svn: 231722
2015-03-09 23:08:44 +00:00
Ahmed Bougacha c809761dc0 [CodeGen] Replace the reused stores' chain for extractelt expansion.
This fixes a subtle issue that was introduced in r205153.

When reusing a store for the extractelement expansion (to load directly
from it, instead of going through the stack), later stores to the
same location might have overwritten the data we were expecting to
extract from.

To fix that, we need to explicitly replace the chain going out of the
reused store, so that later stores also have an explicit dependency on
the generated element-extracting loads, and can't clobber them.

rdar://20066785
Differential Revision: http://reviews.llvm.org/D8180

llvm-svn: 231721
2015-03-09 22:51:05 +00:00
Ahmed Bougacha 540469d8a2 [X86] Add nounwind to vector-idiv.ll testcases. NFC.
In preparation for a patch where cfi directives get in the way.

llvm-svn: 231720
2015-03-09 22:46:02 +00:00
Reid Kleckner be0a05060f Reland r229944: EH: Prune unreachable resume instructions during Dwarf EH preparation
Fix the double-deletion of AnalysisResolver when delegating through to
Dwarf EH preparation by creating one from scratch. Hopefully the new
pass manager simplifies this.

This reverts commit r229952.

llvm-svn: 231719
2015-03-09 22:45:16 +00:00
Rafael Espindola 4f4ef15ade Use a MapVector instead of an extra sort.
This also has the advantage of not depending on the brittle getLabelBeginName.

llvm-svn: 231714
2015-03-09 22:08:37 +00:00
Colin LeMahieu 2efa2d01d7 [Hexagon] Reapply r231699. Remove assumption that second operand is an immediate when checking if A2_tfrsi is combinable.
llvm-svn: 231710
2015-03-09 21:48:13 +00:00
Sanjoy Das 91b5477aad [SCEV] Unify getUnsignedRange and getSignedRange
Summary:
This removes some duplicated code, and also helps optimization: e.g. in
the test case added, `%idx ULT 128` in `@x` is not currently optimized
to `true` by `-indvars` but will be, after this change.

The only functional change in this commit is that for add recurrences,
ScalarEvolution::getRange will be more aggressive -- computing the
unsigned (resp. signed) range for a SCEVAddRecExpr will now look at the
NSW (resp. NUW) bits and check for signed (resp. unsigned) overflow.
This can be a strict improvement in some cases (such as the attached
test case), and should be no worse in other cases.

Reviewers: atrick, nlewycky

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D8142

llvm-svn: 231709
2015-03-09 21:43:43 +00:00
Sanjoy Das f257452986 [SCEV] Add a `scalar-evolution-print-constant-ranges' option
Summary:
Unused in this commit, but will be used in a subsequent change (D8142)
by a FileCheck test.

Reviewers: atrick

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D8143

llvm-svn: 231708
2015-03-09 21:43:39 +00:00
Colin LeMahieu ed853397c6 [Hexagon] Reverting r231699
llvm-svn: 231703
2015-03-09 21:19:02 +00:00
Colin LeMahieu 8c4dfaa13b [Hexagon] Updating constant set to simpler versions.
llvm-svn: 231699
2015-03-09 20:33:12 +00:00
Rafael Espindola 14862d3e37 Don't prime the section map.
This was just creating unused labels for .text when the module had no
functions.

llvm-svn: 231694
2015-03-09 20:09:58 +00:00
Colin LeMahieu 96bfaa9766 [Hexagon] Eliminating immediate condition set.
llvm-svn: 231693
2015-03-09 19:57:18 +00:00
Justin Bogner 24ee64ba87 InstrProf: Use the proftext format for these coverage tests
This format's easier to understand and update by hand.

llvm-svn: 231686
2015-03-09 18:54:58 +00:00
Justin Bogner f95ca0758c InstrProf: Allow hexadecimal function hashes in proftext format
llvm-svn: 231685
2015-03-09 18:54:49 +00:00
Rafael Espindola a60017902c Print jump tables before exception tables.
In the case where jump tables are part of the function section, this produces
more readable assembly by avoiding switching to the eh section and back
to .text.

This would also break with non unique section names, as trying to switch to
a unique section actually creates a new one.

llvm-svn: 231677
2015-03-09 18:29:12 +00:00
Reed Kotler 07d3a2f6b2 Add logical ops to Mips fast-isel
Summary:
Code is mostly copied from AArch64 port and modified where needed for Mips.

This handles the "non" legal cases of logical ops. Legal cases are handled by tablegen patterns.

Test Plan:
Make check test logopm.ll

All of test-suite passes at O0/O2 and mips32 r1/r2 with this new change.

Reviewers: dsanders

Reviewed By: dsanders

Subscribers: echristo, llvm-commits, aemerson, rfuhler

Differential Revision: http://reviews.llvm.org/D6599

llvm-svn: 231665
2015-03-09 16:28:10 +00:00
Marek Olsak 4d00dd2b93 R600/SI: Limit SGPRs to 80 on Tonga and Iceland
This is a candidate for stable.

llvm-svn: 231659
2015-03-09 15:48:09 +00:00
Andrea Di Biagio 228d9d4399 Fix line ending in test CodeGen/X86/pr22774.ll. NFC.
Also, replaced line with 'target triple' with flag -mtriple on the RUN line.
Removed the data layout string as it is not needed.

llvm-svn: 231654
2015-03-09 15:02:01 +00:00
Kevin Qin 65b07b8e1b Revert r231630 - Run LICM pass after loop unrolling pass.
As it broke llvm bootstrap.

llvm-svn: 231635
2015-03-09 07:26:37 +00:00
Owen Anderson f8f259df48 Fix a bug in the LLParser where we failed to diagnose landingpads with non-constant clause operands.
Fixing this also exposed a related issue where the landingpad under construction was not
cleaned up when an error was raised, which would cause bad reference errors before the
error could actually be printed.

llvm-svn: 231634
2015-03-09 07:13:42 +00:00
Kevin Qin aef68418de [AArch64] Enable partial & runtime unrolling on cortex-a57
For the inner loop of a nested loop, it is more likely to be hot,
and the runtime check can be promoted out (from patch 0001), so the
overhead is lower; we can try a doubled threshold to unroll more loops.

llvm-svn: 231632
2015-03-09 06:14:28 +00:00
Kevin Qin 715b01e979 Introduce runtime unrolling disable metadata and use it to mark the scalar loop from vectorization.
Runtime unrolling is an expensive optimization which brings a benefit
only if the loop is hot and the iteration count is relatively large.
For some loops, we know they are not worth runtime unrolling.
The scalar loop left over from vectorization is one such case.

llvm-svn: 231631
2015-03-09 06:14:18 +00:00
Kevin Qin a998735def Run LICM pass after loop unrolling pass.
Runtime unrolling will introduce a runtime check in the loop prologue.
If the unrolled loop is an inner loop, then the prologue will be inside
the outer loop. The LICM pass can help to promote the runtime check out
if the checked value is loop invariant.

llvm-svn: 231630
2015-03-09 06:14:07 +00:00
Mehdi Amini eb242a5041 InstCombine: fix fold "fcmp x, undef" to account for NaN
Summary:
See the two test cases.

; Can fold fcmp with undef on one side by choosing NaN for the undef

; Can fold fcmp with undef on both sides
;   fcmp u_pred undef, undef -> true
;   fcmp o_pred undef, undef -> false
; because whatever you choose for the first undef
; you can choose NaN for the other undef
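
A standalone demonstration of the floating-point fact the fold leans on (not
the InstCombine code): with NaN chosen for the undef operand, every ordered
predicate is false and every unordered predicate is true.

#include <cmath>
#include <cstdio>

int main() {
  double x = 1.0, nan = std::nan("");
  // ordered predicates require neither operand to be NaN -> false here
  printf("oeq: %d  olt: %d\n", x == nan, x < nan);        // 0  0
  // unordered predicates are true if either operand is NaN -> true here
  printf("une: %d  ult: %d\n", x != nan, !(x >= nan));    // 1  1
}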

Reviewers: hfinkel, chandlerc, majnemer

Reviewed By: majnemer

Subscribers: majnemer, llvm-commits

Differential Revision: http://reviews.llvm.org/D7617

From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 231626
2015-03-09 03:20:25 +00:00
Owen Anderson 7e621e9d5e Teach DataLayout to infer a plausible alignment for things even when nothing is specified by the user.
llvm-svn: 231613
2015-03-08 21:53:59 +00:00
Andrea Di Biagio 6c7d70469c [X86][AVX] Fix wrong lowering of VPERM2X128 nodes
There were cases where the backend computed a wrong permute mask for a VPERM2X128 node.

Example:
\code
define <8 x float> @foo(<8 x float> %a, <8 x float> %b) {
  %shuffle = shufflevector <8 x float> %a, <8 x float> %b, <8 x i32> <i32 undef, i32 undef, i32 6, i32 7, i32 undef, i32 undef, i32 6, i32 7>
  ret <8 x float> %shuffle
}
\code end

Before this patch, llc (with -mattr=+avx) emitted the following vperm2f128:
  vperm2f128 $0, %ymm0, %ymm0, %ymm0  # ymm0 = ymm0[0,1,0,1]

With this patch, llc emits a vperm2f128 with a correct permute mask:
  vperm2f128 $17, %ymm0, %ymm0, %ymm0  # ymm0 = ymm0[2,3,2,3]

Differential Revision: http://reviews.llvm.org/D8119

llvm-svn: 231601
2015-03-08 16:28:47 +00:00
David Majnemer 1a666e0f69 ExecutionEngine: Preliminary support for dynamically loadable coff objects
Provide basic support for dynamically loadable coff objects. Only handles a subset of x64 currently.

Patch by Andy Ayers!

Differential Revision: http://reviews.llvm.org/D7793

llvm-svn: 231574
2015-03-07 20:21:27 +00:00
Andrea Di Biagio c9d79e8103 [DAGCombiner] Fix wrong folding of AND dag nodes.
This patch fixes the logic in the DAGCombiner that folds an AND node according
to rule: (and (X (load V)), C) -> (X (load V))

An AND between a vector load 'X' and a constant build_vector 'C' can be folded
into the load itself only if we can prove that the AND operation is redundant.
The algorithm implemented by 'visitAND' firstly computes the splat value 'S'
from C, and then checks if S has the lower 'B' bits set (where B is the size in
bits of the vector element type). The algorithm takes into account also the
'undef' bits in the splat mask.

Unfortunately, the algorithm only worked under the assumption that the size of S
is a multiple of the vector element type. With this patch, we conservatively
avoid folding the AND if the splat bits are not compatible with the vector
element type.
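
The splat check can be pictured with a small standalone sketch (assumed helper
name, not visitAND itself): an AND with splat value C is redundant after a
zero-extension from EltBits bits only when the low EltBits bits of C are all
ones.

#include <cstdint>
#include <cstdio>

// True if masking with splat value C cannot change an element that was
// zero-extended from EltBits bits.
bool andIsRedundant(uint64_t C, unsigned EltBits) {
  uint64_t Low = (EltBits >= 64) ? ~0ULL : ((1ULL << EltBits) - 1);
  return (C & Low) == Low;
}

int main() {
  printf("%d\n", andIsRedundant(0xFFFF, 16)); // 1: safe to drop the AND
  printf("%d\n", andIsRedundant(0x00FF, 16)); // 0: the AND really masks bits
}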

Added X86 test and-load-fold.ll

Differential Revision: http://reviews.llvm.org/D8085

llvm-svn: 231563
2015-03-07 12:24:55 +00:00
Simon Pilgrim bede80a440 [DAGCombiner] SCALAR_TO_VECTOR(EXTRACT_VECTOR_ELT(V,C)) -> VECTOR_SHUFFLE
This patch attempts to convert a SCALAR_TO_VECTOR using an operand from an EXTRACT_VECTOR_ELT into a VECTOR_SHUFFLE.

This prevents many cases of spilling scalar data between the gpr + simd registers. 

At present the optimization only accepts cases where there is no TRUNC of the scalar type (i.e. all types must match).

Differential Revision: http://reviews.llvm.org/D8132

llvm-svn: 231554
2015-03-07 05:52:42 +00:00
Eric Christopher e035e26655 Remove use of misched-bench from this test and replace it with
non-temporary enabling options. This is part of removing misched-bench
as an option.

llvm-svn: 231546
2015-03-07 01:39:06 +00:00
Frederic Riss 23e20e95e9 [dsymutil] Apply relocations to DIE data before cloning.
Doing this gets function's low_pc and global variable's locations right
in the output debug info. It also could get right other attributes
that need to be relocated (in linker terms), but I don't know of any
other than the address attributes.

This doesn't fixup low_pc attributes in compile_unit, lexical_block
or inlined subroutine, nor does it get right high_pc attributes
for function. This will come in a subsequent commit.

llvm-svn: 231544
2015-03-07 01:25:09 +00:00
Eric Christopher 7e70aba1a8 Recommit r231324 with a fix to the ARM execution domain code
to disable lane switching if we don't actually have the instruction
set we want to switch to. Models the earlier check above the
conditional for the pass.

The testcase is one that triggered with the assert that's added
as part of the fix, use it to avoid adding a new testcase as it
highlights the same problem.

llvm-svn: 231539
2015-03-07 00:12:22 +00:00
Frederic Riss 9833de65a7 [dsymutil] Support cloning DIE reference attributes.
Reference attributes are mainly handled by just creating DIEEntry
attributes for them. There is a special case for DW_FORM_ref_addr
attributes though, because the DIEEntry code needs a DwarfDebug
object to emit them (and we don't have one, as we do no CodeGen).
In that case, just use DIEInteger attributes with the right form.

llvm-svn: 231531
2015-03-06 23:22:53 +00:00
Olivier Sallenave 049d803ce0 Do not restrict interleaved unrolling to small loops, depending on the target.
llvm-svn: 231528
2015-03-06 23:12:04 +00:00
Quentin Colombet 66b616351c [AArch64][LoadStoreOptimizer] Generate LDP + SXTW instead of LD[U]R + LD[U]RSW.
Teach the load store optimizer how to sign extend a result of a load pair when
it helps create more pairs.
The rationale is that loads are more expensive than sign extensions, so if we
gather some in one instruction this is better!

<rdar://problem/20072968>

llvm-svn: 231527
2015-03-06 22:42:10 +00:00
Sanjay Patel 3fee49b236 fixed to test features, not CPUs
llvm-svn: 231524
2015-03-06 21:50:42 +00:00
Sanjay Patel a800b6c04b fixed to test features, not CPUs
llvm-svn: 231523
2015-03-06 21:50:27 +00:00
Sanjay Patel 4593045f01 loosen checking for buildbots
llvm-svn: 231522
2015-03-06 21:30:18 +00:00
Sanjay Patel 3fd51f3c4d fixed to test only the feature, not the feature and a CPU
llvm-svn: 231521
2015-03-06 21:24:56 +00:00
Sanjay Patel eb60f0728d fixed to test only the feature, not the feature and a CPU
llvm-svn: 231520
2015-03-06 21:19:32 +00:00
Sanjay Patel 9c04ad5ed7 fixed test to use FileCheck
llvm-svn: 231519
2015-03-06 21:16:15 +00:00
Sanjay Patel 9881f9531c fixed to use CHECK-LABELs
llvm-svn: 231517
2015-03-06 21:05:02 +00:00
Sanjay Patel 6a53998a48 fixed to test only the feature, not the feature and a CPU
llvm-svn: 231516
2015-03-06 20:58:15 +00:00
Sanjay Patel 869cea48cc fixed to test only the feature, not the feature and a CPU
llvm-svn: 231515
2015-03-06 20:57:40 +00:00
Sanjay Patel dba8012f69 fixed to test feature, not CPU
llvm-svn: 231513
2015-03-06 20:51:25 +00:00
Sanjay Patel 7c6eaf03d7 fixed to test features, not CPUs
llvm-svn: 231512
2015-03-06 20:46:16 +00:00
Sanjay Patel 829c7347d1 fixed test to use SSE2 attribute
llvm-svn: 231510
2015-03-06 20:38:55 +00:00
Sanjay Patel 2b7229c34d fixed to test only the feature, not the feature and a CPU
llvm-svn: 231509
2015-03-06 20:34:20 +00:00
Matthias Braun 898d11e864 DAGCombiner: Canonicalize select(and/or,x,y) depending on target.
This is based on the following equivalences:
select(C0 & C1, X, Y) <=> select(C0, select(C1, X, Y), Y)
select(C0 | C1, X, Y) <=> select(C0, X, select(C1, X, Y))

Many targets cannot perform and/or on the CPU flags, and therefore the
right-hand side should be chosen to avoid materializing the i1 flags in an
integer register. If the target can perform this operation efficiently,
we normalize to the left form.
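
The two equivalences are easy to sanity-check exhaustively in a standalone
snippet (illustration only, unrelated to the DAGCombiner code):

#include <cassert>
#include <cstdio>

static int sel(bool C, int X, int Y) { return C ? X : Y; }

int main() {
  for (int C0 = 0; C0 <= 1; ++C0)
    for (int C1 = 0; C1 <= 1; ++C1)
      for (int X = 10; X <= 11; ++X)
        for (int Y = 20; Y <= 21; ++Y) {
          assert(sel(C0 && C1, X, Y) == sel(C0, sel(C1, X, Y), Y));
          assert(sel(C0 || C1, X, Y) == sel(C0, X, sel(C1, X, Y)));
        }
  puts("both select equivalences hold for all boolean inputs");
}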

Differential Revision: http://reviews.llvm.org/D7622

llvm-svn: 231507
2015-03-06 19:49:10 +00:00
Bruno Cardoso Lopes 61b9fd4686 [AsmPrinter][TLOF] Remove AArch64 test to appease buildbots
Follow up from r231497. Using XFAIL would still trigger fail on some
buildbots. Will re-introduce it as soon as I have a fix.

llvm-svn: 231505
2015-03-06 19:42:18 +00:00
Bruno Cardoso Lopes 6e38693507 [AsmPrinter][TLOF] XFAIL AArch64 test to appease buildbots
The checking for extgotequiv and localgotequiv relies on the emission
order, which is not guaranteed because we use DenseMap to hold the GOT
equivalents. XFAIL this now until I get time to use MapVector and test
out the solution. In the meantime, appease buildbots.

llvm-svn: 231497
2015-03-06 18:38:42 +00:00
Frederic Riss ef648462d2 [dsymutil] Add debug_str construction support.
With this comes the ability to correctly clone string attributes in DIEs.

llvm-svn: 231493
2015-03-06 17:56:30 +00:00
Bruno Cardoso Lopes 5b75f4a356 [AsmPrinter][TLOF] Make AArch64 test a bit more flexible
llvm-svn: 231481
2015-03-06 15:11:41 +00:00
Bruno Cardoso Lopes 2d54aa496e [AsmPrinter][TLOF] Split tests and move to appropriate directories
Follow up from r231474 and 231475 to appease buildbots

llvm-svn: 231480
2015-03-06 14:41:56 +00:00
Bruno Cardoso Lopes 618c67a018 [AsmPrinter][TLOF] 32-bit MachO support for replacing GOT equivalents
Add MachO 32-bit (i.e. arm and x86) support for replacing global GOT equivalent
symbol accesses. Unlike 64-bit targets, there's no GOTPCREL relocation, and
access through a non_lazy_symbol_pointers section is used instead.

-- before

    _extgotequiv:
       .long _extfoo

    _delta:
       .long _extgotequiv-_delta

-- after

    _delta:
       .long L_extfoo$non_lazy_ptr-_delta

       .section __IMPORT,__pointers,non_lazy_symbol_pointers
    L_extfoo$non_lazy_ptr:
       .indirect_symbol _extfoo
       .long 0

llvm-svn: 231475
2015-03-06 13:49:05 +00:00
Bruno Cardoso Lopes 52b1391df6 [AsmPrinter][TLOF] ARM64 MachO support for replacing GOT equivalents
Follow up r230264 and add ARM64 support for replacing global GOT
equivalent symbol accesses by references to the GOT entry for the final
symbol instead, example:

-- before

   .globl  _foo
  _foo:
   .long   42

   .globl  _gotequivalent
  _gotequivalent:
   .quad   _foo

   .globl  _delta
  _delta:
   .long   _gotequivalent-_delta

-- after

   .globl  _foo
  _foo:
   .long   42

   .globl  _delta
  Ltmp3:
   .long _foo@GOT-Ltmp3

llvm-svn: 231474
2015-03-06 13:48:45 +00:00
Toma Tabacu 4e0cf8e211 [mips] [IAS] Add missing constraints and improve testing for the .module directive.
Summary:
None of the .set directives can be used before the .module directives. The .set mips0/pop/push were not triggering this constraint.
Also added testing for all the other implemented directives which are supposed to trigger this constraint.

Reviewers: dsanders

Reviewed By: dsanders

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D7140

llvm-svn: 231465
2015-03-06 12:15:12 +00:00
Karthik Bhat 88db86dd29 Add a new pass "Loop Interchange"
This pass interchanges loops to provide a more cache-friendly memory access.

For e.g. given a loop like -
  for(int i=0;i<N;i++)
    for(int j=0;j<N;j++)
      A[j][i] = A[j][i]+B[j][i];

is interchanged to -
  for(int j=0;j<N;j++)
    for(int i=0;i<N;i++)
      A[j][i] = A[j][i]+B[j][i];

This pass is currently disabled by default.

To give a brief introduction it consists of 3 stages-

LoopInterchangeLegality : Checks the legality of loop interchange based on Dependency matrix.
LoopInterchangeProfitability: A very basic heuristic has been added to check for profitability. This will evolve over time.
LoopInterchangeTransform : Which does the actual transform.

LNT Performance tests show improvement in Polybench/linear-algebra/kernels/mvt and Polybench/linear-algebra/kernels/gemver benchmarks.

TODO:
1) Add support for reductions and lcssa phi.
2) Improve profitability model.
3) Improve loop selection algorithm to select best loop for interchange. Currently the innermost loop is selected for interchange.
4) Improve compile time regression found in llvm lnt due to this pass.
5) Fix issues in Dependency Analysis module.

A special thanks to Hal for reviewing this code.
Review: http://reviews.llvm.org/D7499

llvm-svn: 231458
2015-03-06 10:11:25 +00:00
David Majnemer b61f4e403d X86: Form IMGREL relocations for LLVM Functions
We supported forming IMGREL relocations from ConstantExprs involving
__ImageBase if the minuend was a GlobalVariable.  Extend this
functionality to all GlobalObjects.

llvm-svn: 231456
2015-03-06 08:11:32 +00:00
Michael Zolotukhin 03dd1082ad LegalizeTypes: Handle shift by 0 in ExpandShiftByConstant.
Though such shifts are usually optimized away by the combiner, we can still
encounter them after a vector shift is legalized.
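
A standalone sketch of why the zero-amount case needs special handling in a
by-parts expansion (illustrative, not the LegalizeTypes code): the carry term
shifts by 32 - amt, which is out of range when amt is 0.

#include <cstdint>
#include <cstdio>

// Expand a 64-bit left shift into 32-bit halves. The general formula's carry
// term is Lo >> (32 - Amt), which is undefined when Amt == 0, so that case is
// handled separately.
uint64_t shl64ByParts(uint32_t Lo, uint32_t Hi, unsigned Amt) {
  uint32_t NewLo, NewHi;
  if (Amt == 0) {                         // shift by zero: just recombine
    NewLo = Lo;
    NewHi = Hi;
  } else {                                // sketch assumes 0 < Amt < 32
    NewLo = Lo << Amt;
    NewHi = (Hi << Amt) | (Lo >> (32 - Amt));
  }
  return ((uint64_t)NewHi << 32) | NewLo;
}

int main() {
  printf("%016llx\n", (unsigned long long)shl64ByParts(0x89abcdefu, 0x01234567u, 0));
  printf("%016llx\n", (unsigned long long)shl64ByParts(0x89abcdefu, 0x01234567u, 4));
}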

llvm-svn: 231443
2015-03-06 01:13:01 +00:00
Rafael Espindola a5b9e1cf39 Remember to move a type to the correct set when setting the body.
We would set the body of a struct type (therefore making it non-opaque)
but were forgetting to move it to the non-opaque set.

Fixes pr22807.

llvm-svn: 231442
2015-03-06 00:50:21 +00:00
Michael Gottesman f6bcb81000 [objc-arc] Remove annotations code.
It will always be in the history if it is needed again. Now it is just dead
code.

llvm-svn: 231435
2015-03-06 00:34:29 +00:00
Nadav Rotem c99a38796c Teach ComputeNumSignBits about signed remainder.
This optimization is a continuation of r231140, which reasoned about signed div.

llvm-svn: 231433
2015-03-06 00:23:58 +00:00
Philip Reames e21ce4540c [RewriteStatepointsForGC] Yet more test cases for relocation
At this point, we should have decent coverage of the involved code.  I've got a few more test cases to cleanup and submit, but what's here is already reasonable.

I've got a collection of liveness tests which will be posted for review along with a decent liveness algorithm in the next few days.  Once those are in, the code in this file should be well tested and I can start renaming things without risk of serious breakage.  

llvm-svn: 231414
2015-03-05 22:28:06 +00:00
Sanjay Patel 302404b277 [AVX] Lower / fast-isel scalar FP selects into VBLENDV instructions (PR22483)
This patch reduces code size for all AVX targets and increases speed for some chips.

SSE 4.1 introduced the useless (see code comments) 2-register form of BLENDV and
only in the packed float/double flavors.

AVX subsequently made the instruction useful by adding a 4-register operand form.

So we just need to paper over the lack of scalar forms of this instruction, complicate
the code to choose float or double forms, and use blendv on scalars since all FP is in
xmm registers anyway.
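
For context, a hedged scalar model of the blendv selection rule the lowering
relies on (not the backend code): each element is chosen by the sign bit of
the corresponding mask element.

#include <cstdint>
#include <cstring>
#include <cstdio>

// Select between A and B based on the sign bit (MSB) of the mask element.
double blendvLikeSelect(double A, double B, double Mask) {
  uint64_t Bits;
  std::memcpy(&Bits, &Mask, sizeof Bits);
  return (Bits >> 63) ? B : A;
}

int main() {
  // A compare produces an all-ones mask when true; any value with the sign
  // bit set behaves the same for blendv, so -1.0 stands in for it here.
  double Mask = (1.0 < 2.0) ? -1.0 : 0.0;
  printf("%g\n", blendvLikeSelect(10.0, 20.0, Mask));  // 20
}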

This gives us an approximately 50% speed up for a blendv microbenchmark sequence
on SandyBridge and Haswell:
blendv : 29.73 cycles/iter
logic : 43.15 cycles/iter

No new test cases with this patch because:

1. fast-isel-select-sse.ll tests the positive side for regular X86 lowering and fast-isel
2. sse-minmax.ll and fp-select-cmp-and.ll confirm that we're not firing for scalar selects without AVX
3. fp-select-cmp-and.ll and logical-load-fold.ll confirm that we're not firing for scalar selects with constants.

http://llvm.org/bugs/show_bug.cgi?id=22483

Differential Revision: http://reviews.llvm.org/D8063

llvm-svn: 231408
2015-03-05 21:46:54 +00:00
Ahmed Bougacha 1b67630cb3 [AArch64] Teach AsmPrinter about GlobalAddress operands.
Fixes PR22761, rdar://20024866.
Differential Revision: http://reviews.llvm.org/D8042

llvm-svn: 231400
2015-03-05 20:04:21 +00:00
Philip Reames 03ea8642b1 [RewriteStatepointsForGC] Add additional tests around relocation
These are focused around the actual relocation rewriting itself, not the rest of the infrastructure.

llvm-svn: 231399
2015-03-05 19:52:13 +00:00
Rafael Espindola 092b619e55 Use the correct func begin symbol in all places in ppc.
I missed an occurrence of the old symbol in my previous patch.

llvm-svn: 231398
2015-03-05 19:47:50 +00:00
Ahmed Bougacha 4200cc95b4 [ARM] Enable vector extload combine for legal types.
This commit enables forming vector extloads for ARM.
It only does so for legal types, and when we can't fold the extension
in a wide/long form of the user instruction.

Enabling it for larger types isn't as good an idea on ARM as it is on
X86, because: 
- we pretend that extloads are legal, but end up generating vld+vmov
- we have instructions like vld {dN, dM}, which can't be generated
  when we "manually expand" extloads to vld+vmov.

For legal types, the combine doesn't fire that often: in the
integration tests only in a big endian testcase, where it removes a
pointless AND.

Related to rdar://19723053
Differential Revision: http://reviews.llvm.org/D7423

llvm-svn: 231396
2015-03-05 19:37:53 +00:00
Rafael Espindola 86bd6a1202 Use the generic Lfunc_begin label on ppc.
This removes yet another custom label to mark the start of a function.

llvm-svn: 231390
2015-03-05 18:55:50 +00:00
David Majnemer 71b9b6be1b X86: Optimize address mode matching for FRAME_ALLOC_RECOVER nodes
We know that the absolute symbol will be less than 2GB and thus will
always fit.

llvm-svn: 231389
2015-03-05 18:50:12 +00:00
Reid Kleckner cfb9ce53c1 Replace llvm.frameallocate with llvm.frameescape
Turns out it's pretty straightforward and simplifies the implementation.

Reviewers: andrew.w.kaylor

Differential Revision: http://reviews.llvm.org/D8051

llvm-svn: 231386
2015-03-05 18:26:34 +00:00
Simon Pilgrim 7189084bef [DagCombiner] Allow shuffles to merge through bitcasts
Currently shuffles may only be combined if they are of the same type, despite the fact that bitcasts are often introduced in between shuffle nodes (e.g. x86 shuffle type widening).

This patch allows a single input shuffle to peek through bitcasts and if the input is another shuffle will merge them, shuffling using the smallest sized type, and re-applying the bitcasts at the inputs and output instead.
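
A small sketch of the kind of mask rewriting this involves (hypothetical
helper, not the DAGCombiner implementation): a mask written on the wider
element type is rescaled onto the narrower element type seen through the
bitcast.

#include <cstdio>
#include <vector>

// Remap a shuffle mask from wide elements onto narrow elements, where
// Scale = wide element size / narrow element size.
std::vector<int> rescaleShuffleMask(const std::vector<int> &WideMask, int Scale) {
  std::vector<int> Narrow;
  for (int M : WideMask)
    for (int i = 0; i < Scale; ++i)
      Narrow.push_back(M < 0 ? -1 : M * Scale + i);  // -1 marks an undef lane
  return Narrow;
}

int main() {
  // A v4i32 mask <0,2,1,3> viewed as a v8i16 shuffle (scale 2).
  for (int M : rescaleShuffleMask({0, 2, 1, 3}, 2))
    printf("%d ", M);                                // 0 1 4 5 2 3 6 7
  printf("\n");
}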

Dropped old ShuffleToZext test - this patch removes the use of the zext and vector-zext.ll covers these anyhow.

Differential Revision: http://reviews.llvm.org/D7939

llvm-svn: 231380
2015-03-05 17:14:04 +00:00
Kit Barton e48b1e1c4f While reviewing the changes to Clang to add builtin support for the vsld, vsrd, and vsrad instructions, it was pointed out that the builtins are generating the LLVM opcodes (shl, lshr, and ashr) not calls to the intrinsics. This patch changes the implementation of the vsld, vsrd, and vsrad instructions from from intrinsics to VXForm_1 instructions and makes them legal with P8 Altivec. It also removes the definition of the int_ppc_altivec_vsld, int_ppc_altivec_vsrd, and int_ppc_altivec_vsrad intrinsics.
llvm-svn: 231378
2015-03-05 16:24:38 +00:00
Igor Laevsky 8d0851f509 Revert change r231366 as it broke clang-native-arm-cortex-a9 Analysis/properties.m test.
llvm-svn: 231374
2015-03-05 15:41:14 +00:00
Elena Demikhovsky de05f10de2 AVX-512, SKX: Enabled masked_load/store operations for this target.
Added lowering for ISD::CONCAT_VECTORS and ISD::INSERT_SUBVECTOR for i1 vectors,
it is needed to pass all masked_memop.ll tests for SKX.

llvm-svn: 231371
2015-03-05 15:11:35 +00:00
Igor Laevsky 1725997f14 Teach lowering to correctly handle invoke statepoint and gc results tied to them. Note that we still can not lower gc.relocates for invoke statepoints.
Also it extracts getCopyFromRegs helper function in SelectionDAGBuilder as we need to be able to customize type of the register exported from basic block during lowering of the gc.result.

llvm-svn: 231366
2015-03-05 14:11:21 +00:00
Michael Kuperstein bcb26d6880 [InstCombine] Fix an assertion when fmul has a ConstantExpr operand
isNormalFp and isFiniteNonZeroFp should not assume vector operands can not be constant expressions.

Patch by Pawel Jurek <pawel.jurek@intel.com>
Differential Revision: http://reviews.llvm.org/D8053

llvm-svn: 231359
2015-03-05 08:38:57 +00:00
Craig Topper 0ee8470a43 [X86] Use vmovss to handle inserting an element into index 0 of a v8f32 vector of zeros.
llvm-svn: 231354
2015-03-05 06:38:42 +00:00
Rafael Espindola 07c03d316d Use the existing begin and end symbol for debug info.
llvm-svn: 231338
2015-03-05 02:05:42 +00:00
Kostya Serebryany 83ce8779d5 [sanitizer] add nosanitize metadata to more coverage instrumentation instructions
llvm-svn: 231333
2015-03-05 01:20:05 +00:00
Chandler Carruth af7e99f2f4 [MBP] Revert r231238 which attempted to fix a nasty bug where MBP is
just arbitrarily interleaving unrelated control flows once they get
moved "out-of-line" (both outside of natural CFG ordering and with
diamonds that cannot be fully laid out by chaining fallthrough edges).

This easy solution doesn't work in practice, and it isn't just a small
bug. It looks like a very different strategy will be required. I'm
working on that now, and it'll again go behind some flag so that
everyone can experiment and make sure it is working well for them.

llvm-svn: 231332
2015-03-05 01:07:03 +00:00
Paul Robinson 49e38965dc Turn off .debug_pubnames/pubtypes for PS4.
Differential Revision: http://reviews.llvm.org/D8067

llvm-svn: 231322
2015-03-05 00:08:27 +00:00
Matthias Braun eca5151780 Improve test robustness
Improve test robustness in preparation for coming commits:
- Avoid undefs which may get propagated too much.
- Remove several pointless 'add 0' instructions

llvm-svn: 231307
2015-03-04 22:31:18 +00:00