Commit Graph

26280 Commits

Author SHA1 Message Date
Michael Liao d617a3015d Fix PR18054
- Fix bug in (vsext (vzext x)) -> (vsext x) in SIGN_EXTEND_IN_REG
  lowering where we need to check whether x is a vector type (in-reg
  type) of i8, i16 or i32; otherwise, that optimization is not valid.
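
A minimal C++ sketch of the guard described above; the helper name and the use
of EVT here are illustrative, not the exact patch:

  #include "llvm/CodeGen/ValueTypes.h"
  using namespace llvm;

  // Only treat (sext_in_reg (vzext x)) as a plain sign-extension when the
  // in-register type is a vector of i8, i16 or i32 elements.
  static bool isValidInRegVectorType(EVT InRegVT) {
    if (!InRegVT.isVector())
      return false;
    EVT EltVT = InRegVT.getVectorElementType();
    return EltVT == MVT::i8 || EltVT == MVT::i16 || EltVT == MVT::i32;
  }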

llvm-svn: 195779
2013-11-26 20:31:31 +00:00
Tim Northover fa36dfeeca Darwin-ARM: use movw/movt for static relocations
llvm-svn: 195759
2013-11-26 12:45:05 +00:00
Richard Sandiford dd7dd930d1 [SystemZ] Fix incorrect use of RISBG for a zero-extended right shift
We would wrongly transform the testcase into the equivalent of an AND with 1.
The problem was that, when testing whether the shifted-in bits of the right
shift were significant, we used the width of the final zero-extended result
rather than the width of the shifted value.

llvm-svn: 195731
2013-11-26 10:53:16 +00:00
Kevin Qin 599c47d0de Refactored the implementation of the AArch64 NEON instructions ZIP, UZP
and TRN.
Fix a bug triggered by mixed use of vget_high_u8() and vuzp_u8().

llvm-svn: 195716
2013-11-26 03:26:47 +00:00
Kevin Qin 33ca18fdcf [AArch64] Implement 128-bit register copy with NEON.
llvm-svn: 195713
2013-11-26 02:33:42 +00:00
Andrew Trick 391dbadb51 StackMap: Implement support for DirectMemRefOp.
A Direct stack map location records the address of a frame index. This
address is itself the value that the runtime requested. This differs
from IndirectMemRefOp locations, which refer to stack locations from
which the requested values must be loaded. Direct locations can
directly communicate the address of an alloca, while IndirectMemRefOp
locations handle register spills.

For example:

entry:
  %a = alloca i64...
  llvm.experimental.stackmap(i32 <ID>, i32 <shadowBytes>, i64* %a)

Since both the alloca and the stackmap intrinsic are in the entry block,
and the intrinsic takes the address of the alloca, the runtime can
assume that LLVM will not substitute the alloca with any intervening
value. The runtime must verify this by checking that the stack
map's location is a Direct location type. The runtime can then
determine the alloca's relative location on the stack immediately after
compilation, or at any time thereafter. This differs from Register and
Indirect locations, because the runtime can only read the values in
those locations when execution reaches the instruction address of the
stack map.
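
A small self-contained C++ sketch of the distinction; the type and function
names are invented for illustration and are not LLVM's stack map runtime API:

  #include <cstdint>

  enum class LocKind { Register, Direct, Indirect };

  struct Location {
    LocKind Kind;
    uint64_t RegContents; // contents of the recorded register
    int32_t Offset;       // offset added to the register contents
  };

  // Direct: RegContents + Offset *is* the requested value (e.g. the address
  // of the alloca's stack slot). Indirect: the requested value must be
  // loaded from RegContents + Offset (e.g. a register spill slot).
  uint64_t readLocation(const Location &L) {
    switch (L.Kind) {
    case LocKind::Register:
      return L.RegContents;
    case LocKind::Direct:
      return L.RegContents + L.Offset;
    case LocKind::Indirect:
      return *reinterpret_cast<const uint64_t *>(L.RegContents + L.Offset);
    }
    return 0;
  }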

llvm-svn: 195712
2013-11-26 02:03:25 +00:00
Andrew Trick d3ab37cfeb whitespace
llvm-svn: 195711
2013-11-26 02:03:20 +00:00
Cameron McInally c592e5251c Add an intrinsic for the SSE2 PAUSE instruction.
llvm-svn: 195697
2013-11-26 00:20:43 +00:00
Rafael Espindola a834e30130 Do the string comparison in the constructor instead of once per nop.
Thanks to Roman Divacky for the suggestion.

llvm-svn: 195684
2013-11-25 20:50:03 +00:00
Rafael Espindola 1b8bfdaae3 Don't use nopl in cpus that don't support it.
Patch by Mikulas Patocka. I added the test. I checked that for cpu names that
gas knows about, it also doesn't generate nopl.

The modified cpus:
i686 - there are i686-class CPUs that don't have nopl: Via c3, Transmeta
        Crusoe, Microsoft VirtualBox - see
        https://bbs.archlinux.org/viewtopic.php?pid=775414
k6, k6-2, k6-3, winchip-c6, winchip2 - these are 586-class CPUs
via c3, c3-2 - see https://bugs.archlinux.org/task/19733 as proof that
        Via c3 and c3-Nehemiah don't have nopl

llvm-svn: 195679
2013-11-25 20:15:14 +00:00
Tim Northover d34094e525 Fix indentation typo
llvm-svn: 195660
2013-11-25 17:04:35 +00:00
Tim Northover db962e2c45 ARM: remove special cases for Darwin dynamic-no-pic mode.
These are handled almost identically to static mode (and ELF's global address
materialisation), except that a symbol may have "$non_lazy_ptr" appended. This
can be handled by passing appropriate flags along with the instruction instead
of using entirely separate pseudo-instructions.

llvm-svn: 195655
2013-11-25 16:24:52 +00:00
Tim Northover dfe2156c91 ARM: remove unused patterns.
There is no sane way for an LEApcrel (= single ADR) instruction to generate a
global address on any ARM target I know of. Fortunately, no-one was trying to
do so any more, but there were vestigial patterns.

llvm-svn: 195644
2013-11-25 14:40:57 +00:00
Amara Emerson 34df448f7c [ARM] Enable FeatureMP for Cortex-A5 by default.
Patch by Oliver Stannard.

llvm-svn: 195640
2013-11-25 13:17:15 +00:00
Tim Northover 89ccb616bd X86: enable AVX2 under Haswell native compilation
Patch by Adam Strzelecki

llvm-svn: 195632
2013-11-25 09:52:59 +00:00
Hao Liu fbd2b4484c Fixed a bug in disassembling AArch64 post-index load/store single element instructions.
i.e. echo "0x00 0x04 0x80 0x0d" | ../bin/llvm-mc -triple=aarch64 -mattr=+neon -disassemble
     echo "0x00 0x00 0x80 0x0d" | ../bin/llvm-mc -triple=aarch64 -mattr=+neon -disassemble
were both disassembled into the same instruction st1 {v0.b}[0], [x0], x0.

llvm-svn: 195591
2013-11-25 01:53:26 +00:00
NAKAMURA Takumi edbeaee857 SparcFrameLowering.cpp: Prune 'DL' [-Wunused-variable]
llvm-svn: 195590
2013-11-25 00:52:46 +00:00
Venkatraman Govindaraju 1116868a0d [Sparc] Emit large negative adjustments to SP/FP with sethi+xor instead of sethi+or. This generates correct code for both sparc32 and sparc64.
llvm-svn: 195576
2013-11-24 20:23:25 +00:00
Venkatraman Govindaraju 9c338504e5 [Sparc]: Implement LEA pattern for sparcv9.
llvm-svn: 195575
2013-11-24 20:07:35 +00:00
Venkatraman Govindaraju f79528c132 [SparcV9]: Do not emit .register directives for global registers that are clobbered by calls but not used in the function itself.
llvm-svn: 195574
2013-11-24 18:41:49 +00:00
Venkatraman Govindaraju 0510db0597 [SparcV9] Enable custom lowering of DYNAMIC_STACKALLOC in sparc64.
llvm-svn: 195573
2013-11-24 17:41:41 +00:00
Reed Kotler a787aa2b1e Make sure that, for C++, emitting LwConstant32 pseudos corresponds
to what is needed for constant islands. The prescan method for Mips16 constant
islands will eventually go away. It is only temporary and should be done
earlier, when the instructions are first created or from the DAG. If we keep
it here, we need to better handle the situation where constant islands
is called multiple times, since we don't want to prescan more than once.

llvm-svn: 195569
2013-11-24 06:18:50 +00:00
Reed Kotler d3b28ebe03 Fix a funny bug I introduced during conversion of ARM constant islands to Mips.
I had to move some code, and in doing so I moved a declaration forward past its
first use in the function; but by nutty coincidence there was another variable of
the same name and type, with a completely unrelated function, declared globally
in the class, so no compilation error ensued.
It required some unusual conditions for it to even matter. It caused test
case casts.c in test-suite to fail during compilation with a duplicate
symbol error. I would have noticed it during final code review for this port.

llvm-svn: 195565
2013-11-24 02:53:09 +00:00
Tom Stellard c0845334da R600/SI: Fixing handling of condition codes
We were ignoring the ordered/unordered bits and also the signed/unsigned
bits of condition codes when lowering the DAG to MachineInstrs.

NOTE: This is a candidate for the 3.4 branch.
llvm-svn: 195514
2013-11-22 23:07:58 +00:00
Jim Grosbach 860934a924 X86: Perform integer comparisons at i32 or larger.
Utilizing the 8 and 16 bit comparison instructions, even when an input can
be folded into the comparison instruction itself, is typically not worth it.
There are too many partial register stalls as a result, leading to significant
slowdowns. By always performing comparisons on at least 32-bit
registers, performance of the calculation chain leading to the
comparison improves. Continue to use the smaller comparisons when
minimizing size, as that allows better folding of loads into the
comparison instructions.

rdar://15386341

llvm-svn: 195496
2013-11-22 19:57:47 +00:00
Paul Robinson d89125a5d8 Teach ISel not to optimize 'optnone' functions (revised).
Improvements over r195317:
- Set/restore EnableFastISel flag instead of just running FastISel within
  SelectAllBasicBlocks; the flag is checked in various places, and
  FastISel won't run properly if those places don't do the right thing.
- Test looks for normal ISel versus FastISel behavior, and not
  something more subtle that doesn't work everywhere.

Based on work by Andrea Di Biagio.

llvm-svn: 195491
2013-11-22 19:11:24 +00:00
Michael Liao 02160d580b Fix PR18014
- When simplifying the mask generation for BLEND, check whether that mask is
  also consumed by other non-BLEND insns. If true, skip that simplification.
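
A minimal sketch of one way to express that condition (illustrative only, not
the exact fix): require the BLEND to be the mask's only user before rewriting it.

  #include "llvm/CodeGen/SelectionDAG.h"
  using namespace llvm;

  // If the mask value has other users besides the BLEND, simplifying it
  // would also change what those users see, so leave it alone.
  static bool canSimplifyBlendMask(SDValue Mask) {
    return Mask.hasOneUse();
  }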

llvm-svn: 195476
2013-11-22 17:56:57 +00:00
Richard Sandiford f03789ca3f [SystemZ] Fix TMHH and TMHL usage for z10 with -O0
I've no idea why I decided to handle TMxx differently from all the other
high/low logic operations, but it was a stupid thing to do.  The high
registers aren't available as separate 32-bit registers on z10,
so subreg_h32 can't be used on a GR64 there.

I've normally been testing with z196 and with -O3 and so hadn't noticed
this until now.

llvm-svn: 195473
2013-11-22 17:28:28 +00:00
Rafael Espindola 5a8e985ad3 Don't produce tail calls when the caller is x86_thiscallcc.
The callee will not pop the stack for us.

llvm-svn: 195467
2013-11-22 15:18:28 +00:00
Daniel Sanders d40aea8768 Fix typo in a comment added in r195455.
Credit to Matheus Almeida for spotting it.

llvm-svn: 195456
2013-11-22 13:22:52 +00:00
Daniel Sanders 630dbe0a14 [mips][msa] Fix corner case for integer constant splats with undef values.
lowerBUILD_VECTOR() was treating integer constant splats as being legal
regardless of whether they had undef values. This caused instruction
selection failures when the undefs were legalized to zero, making the
constant non-splat.

Fixed this by requiring HasAnyUndef to be false for an integer constant
splat to be legal. If it is true, a new node is generated with the undefs
replaced with the necessary values to remain a splat.
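
A hedged sketch of the legality check described above, written against
BuildVectorSDNode::isConstantSplat (the exact parameter list is from memory and
should be treated as approximate):

  #include "llvm/CodeGen/SelectionDAG.h"
  using namespace llvm;

  // A constant splat is only considered legal when none of its elements are
  // undef; otherwise legalizing the undefs (e.g. to zero) could turn the
  // node into a non-splat vector.
  static bool isLegalConstantSplat(const BuildVectorSDNode *BV) {
    APInt SplatValue, SplatUndef;
    unsigned SplatBitSize;
    bool HasAnyUndefs;
    return BV->isConstantSplat(SplatValue, SplatUndef, SplatBitSize,
                               HasAnyUndefs) &&
           !HasAnyUndefs;
  }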

llvm-svn: 195455
2013-11-22 13:14:06 +00:00
Richard Barton c31078cded Add support for Cortex-A12.
Patch by Oliver Stannard!

llvm-svn: 195448
2013-11-22 11:53:16 +00:00
Daniel Sanders fd8e416879 [mips][msa] Float vector constants cannot use ldi.[wd] directly. Bitcast from the appropriate integer vector type.
Fixes an instruction selection failure detected by llvm-stress.

llvm-svn: 195444
2013-11-22 11:24:50 +00:00
Kostya Serebryany 4007009815 Revert r195318 as it causes miscompilation (PR18029)
llvm-svn: 195439
2013-11-22 10:30:39 +00:00
Hao Liu e8bdc8c864 Fix a Cygwin build failure caused by enum values starting with '_', which conflict with some platform macros.
This patch only renames variables, no functional change.

llvm-svn: 195432
2013-11-22 09:24:41 +00:00
Hao Liu 25aed9bb5b Fix bugs in AArch64 load/store of vector types and in bitcasts between i64 and vector types.
e.g. "%tmp = load <2 x i64>* %ptr" can't be selected. 
     "%tmp = bitcast i64 %in to <2 x i32>" can't be selected.

llvm-svn: 195424
2013-11-22 08:47:22 +00:00
Hao Liu 91ae869692 Revert last change by haoliu because of buildbot failure.
llvm-svn: 195423
2013-11-22 08:34:54 +00:00
Hao Liu b75d80fdf0 Fix a Cygwin build failure caused by enum values starting with '_', which conflict with some platform macros.
This solution only renames variables, no functional change.

NOTE: This is a candidate for the 3.4 branch.
llvm-svn: 195421
2013-11-22 08:17:16 +00:00
Jiangning Liu a91633a435 For AArch64 back-end instruction selection, lower Neon_Lowxxx with EXTRACT_SUBREG.
llvm-svn: 195408
2013-11-22 02:45:13 +00:00
Lang Hames 1ca1123598 Fix a typo where we were creating <def,kill> operands instead of
<def,dead> ones.

Add an assertion to make sure we catch this in the future.

Fixes <rdar://problem/15464559>.

llvm-svn: 195401
2013-11-22 00:46:32 +00:00
Tom Stellard cd6b0a658a R600: Implement TargetInstrInfo::isLegalToSplitMBBAt()
Splitting a basic block will create a new ALU clause, so we need to make
sure we aren't moving uses of registers that are local to their
current clause into a new one.

I had a test case for this, but unfortunately unrelated schedule changes
invalidated it, and I wasn't able to come up with another one.

NOTE: This is a candidate for the 3.4 branch.
llvm-svn: 195399
2013-11-22 00:41:08 +00:00
Ekaterina Romanova d5fa55470c SHLD/SHRD are VectorPath (microcode) instructions known to have poor latency on certain architectures. While generating SHLD/SHRD instructions is acceptable when optimizing for size, optimizing for speed on these platforms should instead use alternative sequences composed of the DirectPath instructions add, adc, shr, shl, or, and, lea. These alternative instructions not only have lower latency but also increase the decode bandwidth by allowing simultaneous decoding of a third DirectPath instruction.
AMD's processor families K7, K8, K10, K12, K15 and K16 are known to have SHLD/SHRD instructions with very poor latency. Optimization guides for these processors recommend using an alternative sequence of instructions. For these AMD processors, I disabled folding (or (x << c) | (y >> (64 - c))) when we are not optimizing for size.

It might be beneficial to disable this folding for some of Intel's processors as well. However, since I couldn't find specific recommendations regarding the use of SHLD/SHRD instructions on Intel processors, I haven't disabled this peephole for Intel.
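
For reference, a minimal C++ example of the affected expression; on the AMD
processors listed above (and when not optimizing for size) it is now lowered as
separate shifts and an OR rather than a single SHLD/SHRD:

  #include <cstdint>

  // The (or (x << c) | (y >> (64 - c))) pattern mentioned above.
  // Assumes 0 < c < 64 so neither shift amount is out of range.
  uint64_t doubleShift(uint64_t x, uint64_t y, unsigned c) {
    return (x << c) | (y >> (64 - c));
  }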

llvm-svn: 195383
2013-11-21 23:21:26 +00:00
Daniel Sanders c8c50fb41f [mips][msa] Fix a corner case in performORCombine() when combining nodes into VSELECT.
Mask == ~InvMask asserts if the widths of Mask and InvMask differ.
The combine isn't valid (with two exceptions, see below) if the widths differ,
so test for this before testing Mask == ~InvMask.

In the specific cases of Mask=~0 and InvMask=0, as well as Mask=0 and
InvMask=~0, the combine is still valid. However, there are more appropriate
combines that could be used in these cases such as folding x & 0 to 0, or
x & ~0 to x.
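
A minimal sketch of the guard, using llvm::APInt-style masks (an illustration,
not the actual performORCombine change):

  #include "llvm/ADT/APInt.h"
  using namespace llvm;

  // Comparing APInts of different widths asserts, and the combine is not
  // valid in that case anyway (modulo the all-zero/all-one exceptions noted
  // above), so check the widths before comparing Mask with ~InvMask.
  static bool masksAreComplementary(const APInt &Mask, const APInt &InvMask) {
    if (Mask.getBitWidth() != InvMask.getBitWidth())
      return false;
    return Mask == ~InvMask;
  }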

llvm-svn: 195364
2013-11-21 16:11:31 +00:00
Artyom Skrobov 468ee230ea [ARM] add basic Cortex-A7 support to LLVM backend
llvm-svn: 195358
2013-11-21 14:03:21 +00:00
Daniel Sanders 6e664bcef3 [mips][msa/dsp] Only do DSP combines if DSP is enabled.
Fixes a crash (null pointer dereference) when MSA is enabled.

llvm-svn: 195343
2013-11-21 11:40:14 +00:00
NAKAMURA Takumi 66c95430b8 Whitespace.
llvm-svn: 195341
2013-11-21 11:08:31 +00:00
NAKAMURA Takumi 43aa939625 Revert r195317 (and r195333), "Teach ISel not to optimize 'optnone' functions."
It broke, at least, the i686 target. It is reproducible with "llc -mtriple=i686-unknown".

FYI, it didn't appear to add either "-O0" or "-fast-isel".

llvm-svn: 195339
2013-11-21 10:55:15 +00:00
Ana Pazos 9ac2fc85d2 Implemented Neon scalar vdup_lane intrinsics.
Fixed scalar dup alias and added test case.

llvm-svn: 195330
2013-11-21 08:16:15 +00:00
Ana Pazos fbc1adbaa7 Implemented Neon scalar by element intrinsics.
Intrinsics implemented: vqdmull_lane, vqdmulh_lane, vqrdmulh_lane,
vqdmlal_lane, vqdmlsl_lane scalar Neon intrinsics.

llvm-svn: 195327
2013-11-21 07:37:04 +00:00
Bill Wendling 07787f8747 The basic problem is that some mainstream programs cannot deal with the way
clang optimizes tail calls, as in this example:

int foo(void);
int bar(void) {
 return foo();
}

where the call is transformed to:

  calll .L0$pb
.L0$pb:
  popl  %eax
.Ltmp0:
  addl  $_GLOBAL_OFFSET_TABLE_+(.Ltmp0-.L0$pb), %eax
  movl  foo@GOT(%eax), %eax
  popl  %ebp
  jmpl  *%eax                   # TAILCALL

However, the GOT references must all be resolved at dlopen() time, and so this
approach cannot be used with lazy dynamic linking (e.g. using RTLD_LAZY), which
usually populates the PLT with stubs that perform the actual resolving.

This patch changes X86TargetLowering::LowerCall() to skip tail call
optimization, if the called function is a global or external symbol.

Patch by Dimitry Andric!

PR15086
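
A hedged sketch of the shape of that check (illustrative; the real change lives
in X86TargetLowering::LowerCall and is guarded by more conditions, such as the
relocation model):

  #include "llvm/CodeGen/SelectionDAG.h"
  using namespace llvm;

  // If the callee is a global or external symbol, don't form a tail call, so
  // the call can go through the PLT instead of a GOT-loaded pointer.
  static bool calleeBlocksTailCall(SDValue Callee) {
    return isa<GlobalAddressSDNode>(Callee) || isa<ExternalSymbolSDNode>(Callee);
  }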

llvm-svn: 195318
2013-11-21 07:04:30 +00:00