Commit Graph

32592 Commits

Tom Stellard 6b42f2d8aa R600/SI: Remove unused register class
llvm-svn: 231491
2015-03-06 17:00:16 +00:00
Bruno Cardoso Lopes 618c67a018 [AsmPrinter][TLOF] 32-bit MachO support for replacing GOT equivalents
Add MachO 32-bit (i.e. arm and x86) support for replacing global GOT equivalent
symbol accesses. Unlike 64-bit targets, there's no GOTPCREL relocation, and
access through a non_lazy_symbol_pointers section is used instead.

-- before

    _extgotequiv:
       .long _extfoo

    _delta:
       .long _extgotequiv-_delta

-- after

    _delta:
       .long L_extfoo$non_lazy_ptr-_delta

       .section __IMPORT,__pointers,non_lazy_symbol_pointers
    L_extfoo$non_lazy_ptr:
       .indirect_symbol _extfoo
       .long 0

llvm-svn: 231475
2015-03-06 13:49:05 +00:00
Bruno Cardoso Lopes 52b1391df6 [AsmPrinter][TLOF] ARM64 MachO support for replacing GOT equivalents
Follow up r230264 and add ARM64 support for replacing global GOT
equivalent symbol accesses by references to the GOT entry for the final
symbol instead, example:

-- before

   .globl  _foo
  _foo:
   .long   42

   .globl  _gotequivalent
  _gotequivalent:
   .quad   _foo

   .globl  _delta
  _delta:
   .long   _gotequivalent-_delta

-- after

   .globl  _foo
  _foo:
   .long   42

   .globl  _delta
  Ltmp3:
   .long _foo@GOT-Ltmp3

llvm-svn: 231474
2015-03-06 13:48:45 +00:00
Toma Tabacu 4e0cf8e211 [mips] [IAS] Add missing constraints and improve testing for the .module directive.
Summary:
None of the .set directives can be used before the .module directives. The .set mips0/pop/push were not triggering this constraint.
Also added testing for all the other implemented directives which are supposed to trigger this constraint.

Reviewers: dsanders

Reviewed By: dsanders

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D7140

llvm-svn: 231465
2015-03-06 12:15:12 +00:00
David Majnemer b61f4e403d X86: Form IMGREL relocations for LLVM Functions
We supported forming IMGREL relocations from ConstantExprs involving
__ImageBase if the minuend was a GlobalVariable.  Extend this
functionality to all GlobalObjects.
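
A minimal C++ sketch of the user-level pattern (the function and helper names are
hypothetical; __ImageBase is the COFF linker-provided base-of-image symbol):

    #include <cstdint>

    extern "C" char __ImageBase;   // base-of-image symbol provided by the COFF linker

    void callback();               // a function, i.e. a GlobalObject that is not a GlobalVariable

    unsigned imageRelOfCallback() {
      // The sort of __ImageBase-relative arithmetic that, expressed as an IR
      // ConstantExpr, can now fold to a 32-bit IMGREL relocation even when the
      // minuend is a function rather than a GlobalVariable.
      return static_cast<unsigned>(reinterpret_cast<std::uintptr_t>(&callback) -
                                   reinterpret_cast<std::uintptr_t>(&__ImageBase));
    }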

llvm-svn: 231456
2015-03-06 08:11:32 +00:00
Ahmed Bougacha c6dcf7a7cc [X86] Remove stale comment. NFC.
It turns out 256-bit V[SZ]EXT nodes are still
generated by the new shuffle lowering, so this
is here to stay!

llvm-svn: 231422
2015-03-05 23:18:41 +00:00
Sanjay Patel 302404b277 [AVX] Lower / fast-isel scalar FP selects into VBLENDV instructions (PR22483)
This patch reduces code size for all AVX targets and increases speed for some chips.

SSE 4.1 introduced the useless (see code comments) 2-register form of BLENDV and
only in the packed float/double flavors.

AVX subsequently made the instruction useful by adding a 4-register operand form.

So we just need to paper over the lack of scalar forms of this instruction, complicate
the code to choose float or double forms, and use blendv on scalars since all FP is in
xmm registers anyway.
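
A hypothetical source-level example of the pattern this targets: a scalar FP select
whose condition is an FP compare, so everything stays in xmm registers:

    // With AVX this can be lowered to vcmpsd + vblendvpd instead of a branch
    // or a three-instruction and/andn/or logic sequence.
    double selectScalar(double a, double b, double c, double d) {
      return (a < b) ? c : d;
    }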

This gives us an approximately 50% speed up for a blendv microbenchmark sequence
on SandyBridge and Haswell:
blendv : 29.73 cycles/iter
logic : 43.15 cycles/iter

No new test cases with this patch because:

1. fast-isel-select-sse.ll tests the positive side for regular X86 lowering and fast-isel
2. sse-minmax.ll and fp-select-cmp-and.ll confirm that we're not firing for scalar selects without AVX
3. fp-select-cmp-and.ll and logical-load-fold.ll confirm that we're not firing for scalar selects with constants.

http://llvm.org/bugs/show_bug.cgi?id=22483

Differential Revision: http://reviews.llvm.org/D8063

llvm-svn: 231408
2015-03-05 21:46:54 +00:00
Ahmed Bougacha 1b67630cb3 [AArch64] Teach AsmPrinter about GlobalAddress operands.
Fixes PR22761, rdar://20024866.
Differential Revision: http://reviews.llvm.org/D8042

llvm-svn: 231400
2015-03-05 20:04:21 +00:00
Rafael Espindola 092b619e55 Use the correct func begin symbol in all places in ppc.
I missed an occurrence of the old symbol in my previous patch.

llvm-svn: 231398
2015-03-05 19:47:50 +00:00
Ahmed Bougacha 4200cc95b4 [ARM] Enable vector extload combine for legal types.
This commit enables forming vector extloads for ARM.
It only does so for legal types, and when we can't fold the extension
in a wide/long form of the user instruction.

Enabling it for larger types isn't as good an idea on ARM as it is on
X86, because: 
- we pretend that extloads are legal, but end up generating vld+vmov
- we have instructions like vld {dN, dM}, which can't be generated
  when we "manually expand" extloads to vld+vmov.

For legal types, the combine doesn't fire that often: in the
integration tests only in a big endian testcase, where it removes a
pointless AND.

Related to rdar://19723053
Differential Revision: http://reviews.llvm.org/D7423

llvm-svn: 231396
2015-03-05 19:37:53 +00:00
Rafael Espindola 86bd6a1202 Use the generic Lfunc_begin label on ppc.
This removes yet another custom label to mark the start of a function.

llvm-svn: 231390
2015-03-05 18:55:50 +00:00
David Majnemer 71b9b6be1b X86: Optimize address mode matching for FRAME_ALLOC_RECOVER nodes
We know that the absolute symbol will be less than 2GB and thus will
always fit.

llvm-svn: 231389
2015-03-05 18:50:12 +00:00
Reid Kleckner cfb9ce53c1 Replace llvm.frameallocate with llvm.frameescape
Turns out it's pretty straightforward and simplifies the implementation.

Reviewers: andrew.w.kaylor

Differential Revision: http://reviews.llvm.org/D8051

llvm-svn: 231386
2015-03-05 18:26:34 +00:00
Kit Barton e48b1e1c4f While reviewing the changes to Clang to add builtin support for the vsld, vsrd, and vsrad instructions, it was pointed out that the builtins are generating the LLVM opcodes (shl, lshr, and ashr), not calls to the intrinsics. This patch changes the implementation of the vsld, vsrd, and vsrad instructions from intrinsics to VXForm_1 instructions and makes them legal with P8 Altivec. It also removes the definition of the int_ppc_altivec_vsld, int_ppc_altivec_vsrd, and int_ppc_altivec_vsrad intrinsics.
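
A hedged source-level sketch using the GCC/clang vector extension; element-wise
shifts like this map to the generic LLVM shift opcodes, so no target intrinsic
is needed:

    typedef long long v2i64 __attribute__((vector_size(16)));

    // Emitted as 'shl <2 x i64>' in IR, which the backend can now select to
    // vsld on a P8-Altivec target.
    v2i64 shiftLeft(v2i64 v, v2i64 amt) {
      return v << amt;
    }
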
llvm-svn: 231378
2015-03-05 16:24:38 +00:00
Elena Demikhovsky de05f10de2 AVX-512, SKX: Enabled masked_load/store operations for this target.
Added lowering for ISD::CONCAT_VECTORS and ISD::INSERT_SUBVECTOR for i1 vectors,
it is needed to pass all masked_memop.ll tests for SKX.

llvm-svn: 231371
2015-03-05 15:11:35 +00:00
Craig Topper 0ee8470a43 [X86] Use vmovss to handle inserting an element into index 0 of a v8f32 vector of zeros.
llvm-svn: 231354
2015-03-05 06:38:42 +00:00
Hans Wennborg 6d8e6d5ee4 Revert r231324 "Remove the conditional addition of the execution dependency fixing"
See PR22799.

llvm-svn: 231348
2015-03-05 03:24:49 +00:00
Eric Christopher 385f4b36d8 Remove the conditional addition of the execution dependency fixing
pass from the ARM backend as the pass itself will detect any use
of the appropriate register class.

llvm-svn: 231324
2015-03-05 00:28:55 +00:00
Eric Christopher 63b44882ef Cleanup and remove a chunk of getARMSubtarget calls in the
ARM TargetMachine pass pipeline construction by pushing them down
into the appropriate pass.

llvm-svn: 231323
2015-03-05 00:23:40 +00:00
Nemanja Ivanovic e8effe1edb Add LLVM support for PPC cryptography builtins
Review: http://reviews.llvm.org/D7955

llvm-svn: 231285
2015-03-04 20:44:33 +00:00
Mehdi Amini 46a43556db Make DataLayout Non-Optional in the Module
Summary:
DataLayout keeps the string used for its creation.

As a side effect it is no longer needed in the Module.
This is "almost" NFC, the string is no longer
canonicalized, you can't rely on two "equals" DataLayout
having the same string returned by getStringRepresentation().

Get rid of DataLayoutPass: the DataLayout is in the Module

The DataLayout is "per-module", let's enforce this by not
duplicating it more than necessary.
One more step toward non-optionality of the DataLayout in the
module.

Make DataLayout Non-Optional in the Module

Module->getDataLayout() will never return nullptr anymore.
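
A minimal sketch of the resulting usage (assuming the pointer-returning accessor
described above; the helper name is hypothetical):

    #include "llvm/IR/DataLayout.h"
    #include "llvm/IR/Module.h"

    unsigned pointerSizeOf(const llvm::Module &M) {
      // The module now always carries a DataLayout, so no null check is needed.
      const llvm::DataLayout *DL = M.getDataLayout();
      return DL->getPointerSize();
    }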

Reviewers: echristo

Subscribers: resistor, llvm-commits, jholewinski

Differential Revision: http://reviews.llvm.org/D7992

From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 231270
2015-03-04 18:43:29 +00:00
Marek Olsak d2af89df10 R600/SI: Add an intrinsic for S_FLBIT_I32 / V_FFBH_I32
Required by OpenGL (ARB_gpu_shader5).

llvm-svn: 231259
2015-03-04 17:33:45 +00:00
Nemanja Ivanovic d384cd9907 Test commit. Removed an unnecessary space
llvm-svn: 231257
2015-03-04 17:09:12 +00:00
JF Bastien f14889ee34 Mutate TargetLowering::shouldExpandAtomicRMWInIR to specifically dictate how AtomicRMWInsts are expanded.
Summary:
In PNaCl, most atomic instructions have their own @llvm.nacl.atomic.* function, each one, with a few exceptions, represents a consistent behaviour across all NaCl-supported targets. Unfortunately, the atomic RMW operations nand, [u]min, and [u]max aren't directly represented by any such @llvm.nacl.atomic.* function. This patch refines shouldExpandAtomicRMWInIR in TargetLowering so that a future `Le32TargetLowering` class can selectively inform the caller how the target desires the atomic RMW instruction to be expanded (ie via load-linked/store-conditional for ARM/AArch64, via cmpxchg for X86/others?, or not at all for Mips) if at all.

This does not represent a behavioural change and as such no tests were added.

Patch by: Richard Diamond.

Reviewers: jfb

Reviewed By: jfb

Subscribers: jfb, aemerson, t.p.northover, llvm-commits

Differential Revision: http://reviews.llvm.org/D7713

llvm-svn: 231250
2015-03-04 15:47:57 +00:00
Jozef Kolek c925808ee5 [mips][microMIPS] Make use of ADDU16 and SUBU16 in the code generator
Differential Revision: http://reviews.llvm.org/D7609

llvm-svn: 231249
2015-03-04 15:47:42 +00:00
Bill Schmidt d90aff2c4f [PowerPC] Remove unnecessary and incomplete commentary
This "itinerary class map" in PPCSchedule.td is incomplete and
redundant with the actual code.  As it provides no value, we've
decided to remove it.

No functional change.

llvm-svn: 231246
2015-03-04 14:56:05 +00:00
Andrea Di Biagio df93ccf49a [X86][FastISel] Simplify the logic in method X86SelectSIToFP.
The target-independent selection algorithm in FastISel already knows how
to select a SINT_TO_FP if the target is SSE but not AVX.

On targets that have SSE but not AVX, the tablegen'd 'fastEmit' functions
for ISD::SINT_TO_FP know how to select instruction X86::CVTSI2SSrr
(for an i32 to f32 conversion) and X86::CVTSI2SDrr (for an i32 to f64
conversion).
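
Hypothetical source-level examples of the two conversions named above:

    float  toF32(int x) { return static_cast<float>(x); }   // i32 -> f32: CVTSI2SSrr on SSE
    double toF64(int x) { return static_cast<double>(x); }  // i32 -> f64: CVTSI2SDrr on SSE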

This patch simplifies the logic in method X86SelectSIToFP knowing that
the code would not be reachable if the subtarget doesn't have AVX.
No functional change intended.

llvm-svn: 231243
2015-03-04 14:23:25 +00:00
Toma Tabacu e1e3ffe71d [mips] Rename the LA/LI/DLI TableGen definitions and classes. NFC.
Summary:
Use more reasonable names for these pseudo-instructions.
As there's only one definition tied to any one of these classes, I named them with abbreviated versions of their respective class' name.

Reviewers: dsanders

Reviewed By: dsanders

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D7831

llvm-svn: 231240
2015-03-04 13:01:14 +00:00
Vasileios Kalintiris 8761490d2e [mips] Keep the parameter list of Filler::searchRange() consistent. NFC.
Summary:
Move the "Filler" parameter to the end of the parameter list as it is,
conceptually, the only output parameter of that function.

Reviewers: dsanders

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D7726

llvm-svn: 231239
2015-03-04 12:37:58 +00:00
Vasileios Kalintiris 2ef2888273 [mips] Specify the correct value type when combining a CMovFP node.
This commit fixes a bug introduced in r230956 where we were creating
CMovFP_{T,F} nodes with multiple return value types (one for each operand).
With this change the return value type of the new node is the same as the
value type of the True/False operands of the original node.

llvm-svn: 231237
2015-03-04 12:10:18 +00:00
Kristof Beyls aea8461820 Fix PR22408 - LLVM producing AArch64 TLS relocations that GNU linkers cannot handle yet.
As is described at http://llvm.org/bugs/show_bug.cgi?id=22408, the GNU linkers
ld.bfd and ld.gold currently only support a subset of the whole range of AArch64
ELF TLS relocations. Furthermore, they assume that some of the code sequences to
access thread-local variables are produced in a very specific sequence.
When the sequence is not what the linker expects, it can silently mis-relax/mis-optimize
the instructions.
Even if that were not the case, it's good to produce the exact sequence,
as that ensures that linkers can perform optimizing relaxations.
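
A hypothetical source-level example of the kind of thread-local access whose code
sequence must match what the linker expects (general-dynamic when built for a
shared object):

    thread_local int tls_counter = 0;

    // Each access is emitted as one of the fixed TLS code sequences
    // (general-dynamic, local-dynamic, ...) that the GNU linkers may relax.
    int bumpCounter() { return ++tls_counter; }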

This patch:

* implements support for 16MiB TLS area size instead of 4GiB TLS area size. Ideally clang
  would grow an -mtls-size option to allow support for both, but that's not part of this patch.
* by default doesn't produce local dynamic access patterns, as even modern ld.bfd and ld.gold
  linkers do not support the associated relocations. An option (-aarch64-elf-ldtls-generation)
  is added to enable generation of local dynamic code sequence, but is off by default.
* makes sure that the exact expected code sequence for local dynamic and general dynamic
  accesses is produced, by making use of a new pseudo instruction. The patch also removes
  two (AArch64ISD::TLSDESC_BLR, AArch64ISD::TLSDESC_CALL) pre-existing AArch64-specific pseudo
  SDNode instructions that are superseded by the new one (TLSDESC_CALLSEQ).

llvm-svn: 231227
2015-03-04 09:12:08 +00:00
Davide Italiano fcae934c03 [MC][Target] Implement support for R_X86_64_SIZE{32,64}.
Differential Revision:	D7990
Reviewed by:	rafael, majnemer

llvm-svn: 231216
2015-03-04 06:49:39 +00:00
Pete Cooper ef21bd444d Remove MCStreamer.h include from MCContext.h and explictly include it where necessary. NFC
llvm-svn: 231193
2015-03-04 01:24:11 +00:00
Juergen Ributzka 1f7a17661c Remove 'llvm.x86.avx2.vbroadcasti128' intrinsic.
The intrinsic is no longer generated by the front-end. Remove the intrinsic and
auto-upgrade it to a vector shuffle.

Reviewed by Nadav

This is related to rdar://problem/18742778.

llvm-svn: 231182
2015-03-04 00:13:25 +00:00
Eric Christopher 6f1e5680f6 Remove subtarget dependence in pass pipeline setup for AArch64.
llvm-svn: 231165
2015-03-03 23:22:40 +00:00
David Blaikie 01a9a412a7 Avoid copying LiveInterval, this could lead to a double-delete
llvm-svn: 231154
2015-03-03 22:25:48 +00:00
David Blaikie 7f1e0565b3 Revert "Remove the explicit SDNodeIterator::operator= in favor of the implicit default"
Accidentally committed a few more of these cleanup changes than
intended. Still breaking these out & tidying them up.

This reverts commit r231135.

llvm-svn: 231136
2015-03-03 21:18:16 +00:00
David Blaikie bb8da4c08f Remove the explicit SDNodeIterator::operator= in favor of the implicit default
There doesn't seem to be any need to assert that iterator assignment is
between iterators over the same node - if you want to reuse an iterator
variable to iterate another node, that's perfectly acceptable. Just
don't mix comparisons between iterators into disjoint sequences, as
usual.

llvm-svn: 231135
2015-03-03 21:17:08 +00:00
Paul Robinson 06a8eb8343 [X86][ELF] Correct relocation for DWARF TLS references
Previously we had only Linux using DTPOFF for these; all X86 ELF
targets should. Fixes a side issue mentioned in PR21077.

Differential Revision: http://reviews.llvm.org/D8011

llvm-svn: 231130
2015-03-03 21:01:27 +00:00
Sanjay Patel 36a2dc895f remove enum value names from comments; NFC
llvm-svn: 231129
2015-03-03 20:58:35 +00:00
Sanjay Patel 948602bd17 use bool operator shortcut; NFC
llvm-svn: 231123
2015-03-03 20:41:27 +00:00
Kit Barton 0cfa7b7ad0 Add the following 64-bit vector integer arithmetic instructions added in POWER8:
vaddudm
vsubudm
vmulesw
vmulosw
vmuleuw
vmulouw
vmuluwm
vmaxsd
vmaxud
vminsd
vminud
vcmpequd
vcmpequd.
vcmpgtsd
vcmpgtsd.
vcmpgtud
vcmpgtud.
vrld
vsld
vsrd
vsrad

Phabricator review: http://reviews.llvm.org/D7959

llvm-svn: 231115
2015-03-03 19:55:45 +00:00
Eric Christopher 43768e311c 80-column fixup.
llvm-svn: 231088
2015-03-03 17:54:39 +00:00
Chad Rosier 8e38f30e49 [AArch64] When combining constant mul of -3, prefer (sub x, (shl x, N)).
This change only affects codegen when the constant is -3.

llvm-svn: 231085
2015-03-03 17:31:01 +00:00
Michael Kuperstein 84dff4c94c [X86][Haswell][SchedModel] Fix patterns for scalar FMA3 variants.
llvm-svn: 231073
2015-03-03 15:47:02 +00:00
Elena Demikhovsky d207f17fa1 AVX-512: Moved patterns for masked load/store under avx_store, avx_load classes.
No functional changes.

llvm-svn: 231069
2015-03-03 15:03:35 +00:00
Craig Topper ef04b2b505 [X86] Remove some unused code from disassembler.
llvm-svn: 231055
2015-03-03 05:24:03 +00:00
Ahmed Bougacha afbd6887c4 [X86] Special-case 2x CMOV when custom-inserting.
This lets us avoid a few copies that are otherwise hard to get rid of.
The way this is done is, the custom-inserter looks at the following
instruction for another CMOV, and replaces both at the same time.
A previous version used a new CMOV2 opcode, but the custom inserter
is expected to be able to return a different basic block anyway, which
means it's OK - though far from ideal - to alter that block's contents.
Explicitly document that, in case it ever makes a difference.
Alternatives welcome!

Follow-up to r231045.

rdar://19767934
Closes http://reviews.llvm.org/D8019

llvm-svn: 231046
2015-03-03 01:21:16 +00:00
Ahmed Bougacha 066d0b8e64 [X86] Combine (cmov (and/or (setcc) (setcc))) into (cmov (cmov)).
Fold and/or of setcc's to double CMOV:

(CMOV F, T, ((cc1 | cc2) != 0)) -> (CMOV (CMOV F, T, cc1), T, cc2)
(CMOV F, T, ((cc1 & cc2) != 0)) -> (CMOV (CMOV T, F, !cc1), F, !cc2)

When we can't use the CMOV instruction, it might increase branch
mispredicts.  When we can, or when there is no mispredict, this
improves throughput and reduces register pressure.

These can't be caught by generic combines, because the pattern can
appear when legalizing some instructions (such as fcmp une).
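
A hypothetical source-level example that produces the or-of-setccs pattern:

    // The or of two compares feeding a select becomes
    // (CMOV (CMOV F, T, cc1), T, cc2) under this combine.
    int pick(int a, int b, int f, int t) {
      return (a < 0 || b < 0) ? t : f;
    }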

rdar://19767934
http://reviews.llvm.org/D7634

llvm-svn: 231045
2015-03-03 01:09:14 +00:00
Paul Robinson c583abc888 Remove useless .debug_macinfo section setup.
llvm-svn: 231001
2015-03-02 19:52:42 +00:00
Jan Vesely 468e055f54 R600: Use c++11 style for loop
Signed-off-by: Jan Vesely <jan.vesely@rutgers.edu>
Reviewed-by: Tom Stellard <tom@stellard.net>
llvm-svn: 230987
2015-03-02 18:56:52 +00:00
Paul Robinson 9f4cfc574e Revert r230979, should apply to all X86 ELF.
llvm-svn: 230985
2015-03-02 18:50:18 +00:00
Paul Robinson 10ae2e52de [PS4] Correct relocation for DWARF TLS references.
llvm-svn: 230979
2015-03-02 17:44:52 +00:00
Elena Demikhovsky 18fd49602b AVX-512: Add assembly parser support for Rounding mode
By Asaf Badouh <asaf.badouh@intel.com>

llvm-svn: 230962
2015-03-02 15:00:34 +00:00
Benjamin Kramer e86b0f186b NVPTX: Remove dead code.
Fun fact: This file was never referenced since the initial checkin of
the NVPTX backend.

llvm-svn: 230957
2015-03-02 13:16:28 +00:00
Vasileios Kalintiris e741eb2c7d [mips] Optimize conditional moves where RHS is zero.
Summary:
When the RHS of a conditional move node is zero, we can utilize the $zero
register by inverting the conditional move instruction and by swapping the
order of its True/False operands.

Reviewers: dsanders

Differential Revision: http://reviews.llvm.org/D7945

llvm-svn: 230956
2015-03-02 12:47:32 +00:00
Elena Demikhovsky 2689d78909 AVX-512: Simplified MOV patterns, no functional changes.
llvm-svn: 230954
2015-03-02 12:46:21 +00:00
Craig Topper 9c26bcca5a [X86] There are only 8 mask registers. Fail disassembly if instruction tries to reference more.
llvm-svn: 230931
2015-03-02 03:33:11 +00:00
Craig Topper 09b27e7b24 [X86] Fix diassembler crash on AVX512 cmpps/cmppd with immediate that doesn't fit in 5-bits. Fixes PR22743.
llvm-svn: 230924
2015-03-02 00:22:29 +00:00
Sanjoy Das e5d1466ab3 [AArch64] fix an invalid-iterator-use bug.
Summary:
In AArch64PromoteConstant::appendAndTransferDominatedUses,
`InsertPts[NewPt]` invalidates IPI.  Therefore, `InsertPts[NewPt] =
std::move(IPI->second)` is not legal.

This was caught by running `make check` with
http://reviews.llvm.org/D7931.

Reviewers: t.p.northover, grosbach, bkramer

Reviewed By: bkramer

Subscribers: aemerson, llvm-commits

Differential Revision: http://reviews.llvm.org/D7988

llvm-svn: 230923
2015-03-02 00:17:18 +00:00
Benjamin Kramer 42cc33e816 X86: Replace variadic function with init list. NFC.
llvm-svn: 230911
2015-03-01 21:47:40 +00:00
Benjamin Kramer 030133c5db ArrayRef: Remove the equals helper with many arguments.
With initializer lists there is a really neat idiomatic way to write
this, 'ArrayRef.equals({1, 2, 3, 4, 5})'. Remove the equal method which
always had a hard limit on the number of arguments. I considered
rewriting it with variadic templates but that's not really a good fit
for a function with homogeneous arguments.

'ArrayRef == {1, 2, 3, 4, 5}' would've been even more awesome, but C++11
doesn't allow init lists with binary operators.
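
A minimal sketch of the remaining idiom (the function name is hypothetical):

    #include "llvm/ADT/ArrayRef.h"

    bool isExpected(llvm::ArrayRef<int> Vals) {
      // The initializer list converts to an ArrayRef, so no fixed-arity
      // equals(...) overload is needed.
      return Vals.equals({1, 2, 3, 4, 5});
    }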

llvm-svn: 230907
2015-03-01 21:05:05 +00:00
Benjamin Kramer 7149aabf8b Make some non-constant static variables non-static or fully const.
Otherwise we have to emit thread-safe initialization for them. NFC.

llvm-svn: 230894
2015-03-01 18:09:56 +00:00
Elena Demikhovsky 0995479e67 Reverted 230471 - gather scatter handling in table gen.
llvm-svn: 230892
2015-03-01 08:23:41 +00:00
Elena Demikhovsky 02ffd26023 AVX-512: Added mask and rounding mode for scalar arithmetics
Added more tests for scalar instructions to distinguish between AVX and AVX-512 forms.

llvm-svn: 230891
2015-03-01 07:44:04 +00:00
Craig Topper 782d620657 [X86] Remove the blendpd/blendps/pblendw/pblendd intrinsics. They can represented by shuffle_vector instructions.
llvm-svn: 230860
2015-02-28 19:33:17 +00:00
Alexei Starovoitov 1b7b56fbcc bpf: fix build
complete the plumbing of passing TargetRegisterInfo through
computeRegisterProperties started by r230583

llvm-svn: 230858
2015-02-28 18:03:04 +00:00
Benjamin Kramer 5fbfe2ffdc Convert push_back loops into append calls.
No functionality change intended.

llvm-svn: 230849
2015-02-28 13:20:15 +00:00
Benjamin Kramer f1362f6196 ArrayRefize memory operand folding. NFC.
llvm-svn: 230846
2015-02-28 12:04:00 +00:00
Benjamin Kramer 4f6ac16292 Replace std::copy with a back inserter with vector append where feasible
All of the cases were just appending from random access iterators to a
vector. Using insert/append can grow the vector to the perfect size
directly and moves the growing out of the loop. No intended functionality
change.
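
A hedged sketch of the general conversion (std::vector used for illustration; the
in-tree changes use LLVM's own containers):

    #include <vector>

    void collect(std::vector<int> &Out, const int *Begin, const int *End) {
      // before: std::copy(Begin, End, std::back_inserter(Out));
      // Appending the whole range at once grows the vector to its final size in
      // one step instead of growing repeatedly inside the copy loop.
      Out.insert(Out.end(), Begin, End);
    }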

llvm-svn: 230845
2015-02-28 10:11:12 +00:00
Bill Schmidt e3959eb54e [PowerPC] Fix PR22711 - Misaligned .toc section
Straightforward patch to emit an alignment directive when emitting a
TOC entry.  The test case was generated from the test in PR22711 that
demonstrated a misaligned .toc section.  The object code is run
through llvm-readobj to verify that the correct alignment has been
applied to the .toc section.

Thanks to Ulrich Weigand for running down where the fix was needed.

llvm-svn: 230801
2015-02-27 22:14:10 +00:00
Charles Davis 83687fb9e6 Target/X86: Never use the redzone for Win64 ABI functions.
Summary:
Until now, we did this (among other things) based on whether or not the
target was Windows. This is clearly wrong, not just for Win64 ABI functions
on non-Windows, but for System V ABI functions on Windows, too. In this
change, we make this decision based on the ABI the calling convention
specifies instead.

Reviewers: rnk

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D7953

llvm-svn: 230793
2015-02-27 21:11:16 +00:00
Hal Finkel 5c3cacf5c0 [PowerPC] Use vector types for memcpy and friends (sometimes)
When using Altivec, we can use vector loads and stores for aligned memcpy and
friends. Starting with the P7 and VXS, we have reasonable unaligned vector
stores. Starting with the P8, we have fast unaligned loads too.

For QPX, we use vector loads and stores, but only for aligned memory accesses.
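
A hypothetical source-level example of a memcpy the backend can now expand with
vector loads and stores (aligned, small constant size):

    #include <cstring>

    struct alignas(16) Block { char bytes[64]; };

    void copyBlock(Block *dst, const Block *src) {
      // Known 16-byte alignment and a small constant size make this eligible
      // for expansion with Altivec/VSX (or, when aligned, QPX) vector ops.
      std::memcpy(dst, src, sizeof(Block));
    }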

llvm-svn: 230788
2015-02-27 19:58:28 +00:00
Renato Golin a78995c0a0 Like NetBSD, Bitrig/ARM uses the Itanium ABI.
Patch by Patrick Wildt.

llvm-svn: 230762
2015-02-27 16:35:27 +00:00
Zoran Jovanovic 71a33e2ad6 [mips][microMIPS] Change register class for GP register
Differential Revision: http://reviews.llvm.org/D7934

llvm-svn: 230760
2015-02-27 15:03:50 +00:00
Tom Stellard aec94b3bf3 R600/SI: Add missing mubuf instructions
llvm-svn: 230759
2015-02-27 14:59:46 +00:00
Tom Stellard 49282c92c5 R600/SI: Consistently put soffset before the offset operand for mubuf instructions
This matches the assembly syntax.

llvm-svn: 230758
2015-02-27 14:59:44 +00:00
Tom Stellard 1f9939fba6 R600/SI: Add slc, glc, and tfe to non-atomic _ADDR64 instructions
llvm-svn: 230757
2015-02-27 14:59:41 +00:00
Chandler Carruth 9ad2ffac23 [x86] Run most of the rest of the shuffle combining over non-128-bit
vectors. This lets us fix the rest of the v16 lowering problems when
pshufb is clearly better.

We might still be able to improve some of the lowerings by enabling the
other combine-based rewriting to fire for non-128-bit vectors, but this
at least should remove any regressions from using the fancy v16i16
lowering strategy.

llvm-svn: 230753
2015-02-27 12:13:14 +00:00
Chandler Carruth 66b705bc64 [x86] Teach a bunch of the x86-specific shuffle combining to work with
256-bit vectors as well as 128-bit vectors. Fixes some of the redundant
shuffles for v16i16.

llvm-svn: 230752
2015-02-27 11:45:13 +00:00
Chandler Carruth 97f3260f57 [x86] Make the v8i16 clever single-input shuffle lowering usable for
repeated 128-bit lane shuffles of wider vector types and use it to lower
256-bit v16i16 vector shuffles where applicable.

This should let us perfectly lowering the pattern of pshuflw and pshufhw
even for AVX2 256-bit patterns.

I've not added AVX-512 support, but it should be trivial for someone
working on that to wire up.

Note that currently this generates bad, long shuffle chains because we
don't combine 256-bit target shuffles. The subsequent patches will fix
that.

llvm-svn: 230751
2015-02-27 11:33:46 +00:00
Toma Tabacu 344c167436 [mips] Remove redundant periods from -mattr=help descriptions for MIPS.
Summary: Also fixes an infringement of the 80-column limit rule.

Reviewers: dsanders

Reviewed By: dsanders

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D7910

llvm-svn: 230748
2015-02-27 10:44:02 +00:00
Chandler Carruth ddc4d085cc [x86] Make the single-input v8i16 lowering directly recurse rather than
going back through the entire vector shuffle lowering.

This is an important step to being able to re-use this logic.

llvm-svn: 230743
2015-02-27 09:11:38 +00:00
Vasileios Kalintiris 18581f16b4 [mips] Account for constant-zero operands in ADDE nodes.
Summary:
We identify the cases where the operand to an ADDE node is a constant
zero. In such cases, we can avoid generating an extra ADDu instruction
disguised as an identity move alias (ie. addu $r, $r, 0 --> move $r, $r).
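
A hypothetical source-level case: a 64-bit add on 32-bit MIPS is legalized into
an ADDC/ADDE pair, and the constant's high word is zero:

    // The low words are added with ADDC; the high words with ADDE, whose second
    // operand here is the constant 0, so the redundant "addu $r, $r, 0" goes away.
    long long addFive(long long a) {
      return a + 5;
    }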

Reviewers: dsanders

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D7906

llvm-svn: 230742
2015-02-27 09:01:39 +00:00
Charles Davis 84d28de627 Target/X86: Save Win64 non-volatile registers in a Win64 ABI function.
Summary:
This change causes us to actually save non-volatile registers in a Win64
ABI function that calls a System V ABI function, and vice-versa.

Reviewers: rnk

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D7919

llvm-svn: 230714
2015-02-27 00:57:01 +00:00
Eric Christopher 1cdefae9c4 Rewrite MachineOperand::print and MachineInstr::print to avoid
uses of TM->getSubtargetImpl and propagate to all calls.

This could be a debugging regression in places where we had a
TargetMachine and/or MachineFunction but don't have it as part
of the MachineInstr. Fixing this would require passing a
MachineFunction/Function down through the print operator, but
none of the existing uses in tree seem to do this.

llvm-svn: 230710
2015-02-27 00:11:34 +00:00
Eric Christopher 11e4df73c8 getRegForInlineAsmConstraint wants to use TargetRegisterInfo for
a lookup, pass that in rather than use a naked call to getSubtargetImpl.
This involved passing down and around either a TargetMachine or
TargetRegisterInfo. Update all callers/definitions around the targets
and SelectionDAG.

llvm-svn: 230699
2015-02-26 22:38:43 +00:00
Chandler Carruth 653773d004 [x86] Fix PR22706 where we would incorrectly try lower a v32i8 dynamic
blend as legal.

We made the same mistake in two different places. Whenever we are custom
lowering a v32i8 blend we need to check whether we are custom lowering
it only for constant conditions that can be shuffled, or whether we
actually have AVX2 and full dynamic blending support on bytes. Both are
fixed, with comments added to make it clear what is going on and a new
test case.

llvm-svn: 230695
2015-02-26 22:15:34 +00:00
Chandler Carruth 7bd840d058 [x86] Restructure the comments and the conditions for handling
dynamic blends.

This makes it much more clear what is going on. The case we're handling
is that of dynamic conditions, and we're bailing when the nature of the
vector types and subtarget preclude lowering the dynamic condition
vselect as an actual blend.

No functionality changed here, but this will make a subsequent bug-fix
to this code much more clear.

llvm-svn: 230690
2015-02-26 21:29:06 +00:00
Chandler Carruth efc6819041 [x86] Re-order the combines of select in the X86 backend. This doesn't
change functionality, but makes it more clear that the dynamic case and
the shuffle case don't overlap in any interesting way.

llvm-svn: 230689
2015-02-26 21:21:36 +00:00
Chandler Carruth 0757f14c69 [x86] Add an assert to catch if we ever try to blend a v32i8 without
AVX2.

llvm-svn: 230688
2015-02-26 21:18:20 +00:00
Reid Kleckner e81017248c Don't sibcall between SysV and Win64 convention functions
The shadow stack space expectations won't match.

Fixes PR22709.

llvm-svn: 230667
2015-02-26 19:43:20 +00:00
Petar Jovanovic 90ec1b175e Fix justify error for small structures in varargs for MIPS64BE
There was a problem when passing structures as variable arguments.
The structures smaller than 64 bit were not left justified on MIPS64
big endian. This is now fixed by shifting the value to make it left-
justified when appropriate.
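
A hedged source-level sketch of the affected case (the struct layout is chosen
only for illustration):

    #include <cstdarg>

    struct Small { short a; short b; };   // 32 bits: smaller than the 64-bit vararg slot

    int firstField(int count, ...) {
      va_list ap;
      va_start(ap, count);
      // On big-endian MIPS64 the struct must be left-justified in its slot for
      // this read to see the caller's value.
      Small s = va_arg(ap, Small);
      va_end(ap);
      return s.a;
    }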

This fixes the bug http://llvm.org/bugs/show_bug.cgi?id=21608

Patch by Aleksandar Beserminji.

Differential Revision: http://reviews.llvm.org/D7881

llvm-svn: 230657
2015-02-26 18:35:15 +00:00
Sumanth Gundapaneni 28a3b86b06 Use ".arch_extension" ARM directive to support hwdiv on krait
In case of "krait" CPU, asm printer doesn't emit any ".cpu" so the
features bits are not computed. This patch lets the asm printer
emit ".cpu cortex-a9" directive for krait and the hwdiv feature is
enabled through ".arch_extension". In short, krait is treated
as "cortex-a9" with hwdiv. We can not emit ".krait" as CPU since
it is not supported bu GNU GAS yet

llvm-svn: 230651
2015-02-26 18:08:41 +00:00
Sumanth Gundapaneni a9049ea368 Use ".arch_extension" ARM directive to specify the additional CPU features
This patch is in response to r223147, where the available features are
computed based on the ".cpu" directive. This works cleanly for the standard
variants like cortex-a9. For custom variants which rely on standard CPU names
for assembly, the additional features of a CPU should be propagated. This can be
done via ".arch_extension" as long as the assembler supports it. The
implementation for krait, along with a unit test, will be submitted in the next patch.

llvm-svn: 230650
2015-02-26 18:07:35 +00:00
Tom Stellard eb05c610b4 R600/SI: Remove M0 from DS assembly strings
This matches the assembly syntax for the proprietary compiler.

llvm-svn: 230645
2015-02-26 17:08:43 +00:00
Michael Kuperstein 4af7449659 [X86][Haswell][SchedModel] Fix WriteMULm latency.
The latency for the WriteMULm class was set to 4, which is actually lower than the latency for WriteMULr (5). 
A better estimate would be 4 added to WriteMULr, that is, 9.

llvm-svn: 230634
2015-02-26 14:30:09 +00:00
Chandler Carruth 8e0a3ea52c [x86] Sink the single-input v8i16 lowering code that is actually
formulaic into the top v8i16 lowering routine.

This makes the generalized lowering a completely general and single path
lowering which will allow generalizing it in turn for multiple 128-bit
lanes.

llvm-svn: 230623
2015-02-26 11:00:40 +00:00
Chandler Carruth 11e7f6b50a [x86] Remove a SimpleTy usage. No need for it here, we already have the
MVT.

llvm-svn: 230622
2015-02-26 10:37:01 +00:00
Chandler Carruth d283cb6203 [x86] Make the vector shuffle helpers order the SDLoc and MVT arguments.
This ordering matches that of DAG.getNode.

llvm-svn: 230617
2015-02-26 08:19:24 +00:00
Reid Kleckner e2008ae475 Pass /nologo to ml64 for quieter builds
It still prints "Assembling path/to/X86CompilationCallback_Win64.asm",
but linking does the same thing.

llvm-svn: 230596
2015-02-26 00:51:33 +00:00
Eric Christopher 5f54195e4a Remove a FIXME.
Explanation: This function is in TargetLowering because it uses
RegClassForVT which would need to be moved to TargetRegisterInfo
and would necessitate moving isTypeLegal over as well - a massive
change that would just require TargetLowering having a TargetRegisterInfo
class member that it would use.

llvm-svn: 230585
2015-02-26 00:00:35 +00:00
Eric Christopher 23a3a7c871 Remove an argument-less call to getSubtargetImpl from TargetLoweringBase.
This required plumbing a TargetRegisterInfo through computeRegisterProperties
and into findRepresentativeClass which uses it for register class
iteration. This required passing a subtarget into a few target specific
initializations of TargetLowering.

llvm-svn: 230583
2015-02-26 00:00:24 +00:00
Hal Finkel cf59921670 [PowerPC] Make LDtocL and friends invariant loads
LDtocL, and other loads that roughly correspond to the TOC_ENTRY SDAG node,
represent loads from the TOC, which is invariant. As a result, these loads can
be hoisted out of loops, etc. In order to do this, we need to generate
GOT-style MMOs for TOC_ENTRY, which requires treating it as a legitimate memory
intrinsic node type. Once this is done, the MMO transfer is automatically
handled for TableGen-driven instruction selection, and for nodes generated
directly in PPCISelDAGToDAG, we need to transfer the MMOs manually.

Also, we were not transferring MMOs associated with pre-increment loads, so do
that too.

Lastly, this fixes an exposed bug where R30 was not added as a defined operand of
UpdateGBR.

This problem was highlighted by an example (used to generate the test case)
posted to llvmdev by Francois Pichet.

llvm-svn: 230553
2015-02-25 21:36:59 +00:00
David Majnemer e1bbad9eb2 X86, Win64: Allow 'mov' to restore the stack pointer if we have a FP
The Win64 epilogue structure is very restrictive, it permits a very
small number of opcodes and none of them are 'mov'.

This means that given:
  mov %rbp, %rsp
  pop %rbp

The mov isn't the epilogue, only the pop is.  This is problematic unless
a frame pointer is present in which case we are free to do whatever we'd
like in the "body" of the function.  If a frame pointer is present,
unwinding will undo the prologue operations in reverse order regardless
of the fact that we are at an instruction which is reseting the stack
pointer.

llvm-svn: 230543
2015-02-25 21:13:37 +00:00
Hal Finkel 0746211811 [PowerPC] Cleanup unused target-specific SDAG nodes
We had somehow accumulated a few target-specific SDAG nodes dealing with PPC64
TOC access that were referenced only in TableGen patterns. The associated
(pseudo-)instructions are used, but are being generated directly. NFC.

llvm-svn: 230518
2015-02-25 18:06:45 +00:00
Matthias Braun 02892ec62d AArch64: Add debug message for large shift constants.
As requested in code review.

llvm-svn: 230517
2015-02-25 18:03:50 +00:00
Vladimir Medic bcb7467540 [MIPS] Multiply and add instructions for Mips are currently available in mips32r2/mips64r2 and later, but should also be available in mips4, mips5, and mips64. This patch fixes the requested features and updates the corresponding test files.
llvm-svn: 230500
2015-02-25 15:24:37 +00:00
Bruno Cardoso Lopes ab7afa9144 [X86][MMX] Reapply: Add MMX instructions to foldable tables
Reapply r230248.

Teach the peephole optimizer to work with MMX instructions by adding
entries into the foldable tables. This covers folding opportunities not
handled during isel.

llvm-svn: 230499
2015-02-25 15:14:02 +00:00
Bruno Cardoso Lopes 48b10681f9 [X86][MMX] Prevent MMX_MOVD64rm folding
MMX_MOVD64rm zero-extends i32 load results into i64 registers.

The peephole optimizer will try to fold it into other MMX foldable
instructions, which is the wrong thing to do, since there's no MMX memory
instruction that loads from i32 and does implicit zero extension.

Remove 'canFoldAsLoad' from MOVD64rm in order to prevent such folding.
The current MMX tests already test this, but since there are no MMX
instructions in the foldable tables yet, this did not trigger. This
commit prepares the addition of those instructions.

llvm-svn: 230498
2015-02-25 15:13:52 +00:00
Renato Golin b9887ef32a Improve handling of stack accesses in Thumb-1
Thumb-1 only allows SP-based LDR and STR to be word-sized, and SP-base LDR,
STR, and ADD only allow offsets that are a multiple of 4. Make some changes
to better make use of these instructions:

* Use word loads for anyext byte and halfword loads from the stack.
* Enforce 4-byte alignment on objects accessed in this way, to ensure that
  the offset is valid.
* Do the same for objects whose frame index is used, in order to avoid having
  to use more than one ADD to generate the frame index.
* Correct how many bits of offset we think AddrModeT1_s has.

Patch by John Brawn.

llvm-svn: 230496
2015-02-25 14:41:06 +00:00
Aaron Ballman 5561ed448b Silencing a "result of 32-bit shift implicitly converted to 64 bits (was 64-bit shift intended?)" warning in MSVC; NFC.
llvm-svn: 230489
2015-02-25 13:05:24 +00:00
Aaron Ballman 70c27ded97 Silencing a -Wsign-compare warning triggered in MSVC; NFC.
llvm-svn: 230488
2015-02-25 13:02:23 +00:00
Elena Demikhovsky 56eadcf5ce AVX-512: Gather and Scatter patterns
Gather and scatter instructions additionally write to one of the source operands - mask register.
In this case Gather has 2 destination values - the loaded value and the mask.
Until now we did not support a codegen pattern for gather - the instruction was generated
from the intrinsic only and the machine node was hardcoded.
When we introduce the masked_gather node, we need to select the instruction automatically,
in the standard way.
I added a flag "hasTwoExplicitDefs" that allows handling 2 destination operands.

(Some code in the X86InstrFragmentsSIMD.td is commented out, just to split one big
patch in many small patches)

llvm-svn: 230471
2015-02-25 09:46:31 +00:00
Hal Finkel c93a9a2cb4 [PowerPC] Add support for the QPX vector instruction set
This adds support for the QPX vector instruction set, which is used by the
enhanced A2 cores on the IBM BG/Q supercomputers. QPX vectors are 256 bytes
wide, holding 4 double-precision floating-point values. Boolean values, modeled
here as <4 x i1> are actually also represented as floating-point values
(essentially  { -1, 1 } for { false, true }). QPX shares many features with
Altivec and VSX, but is distinct from both of them. One major difference is
that, instead of adding completely-separate vector registers, QPX vector
registers are extensions of the scalar floating-point registers (lane 0 is the
corresponding scalar floating-point value). The operations supported on QPX
vectors mirrors that supported on the scalar floating-point values (with some
additional ones for permutations and logical/comparison operations).

I've been maintaining this support out-of-tree, as part of the bgclang project,
for several years. This is not the entire bgclang patch set, but is most of the
subset that can be cleanly integrated into LLVM proper at this time. Adding
this to the LLVM backend is part of my efforts to rebase bgclang to the current
LLVM trunk, but is independently useful (especially for codes that use LLVM as
a JIT in library form).

The assembler/disassembler test coverage is complete. The CodeGen test coverage
is not, but I've included some tests, and more will be added as follow-up work.

llvm-svn: 230413
2015-02-25 01:06:45 +00:00
Eric Christopher fe59972bbc Rename UpdateRegAllocHint to match style guidelines.
llvm-svn: 230357
2015-02-24 19:10:57 +00:00
Matthias Braun 7526035155 AArch64: Relax assert about large shift sizes.
These large shift sizes happen because OpaqueConstants
currently inhibit a lot of DAG combining, but that has to be addressed in
another commit (like the proposal in D6946).

Differential Revision: http://reviews.llvm.org/D6940

llvm-svn: 230355
2015-02-24 18:52:04 +00:00
Tom Stellard ecc419c31d R600/SI: Remove isel mubuf legalization
We legalize mubuf instructions post-instruction selection, so this
code is no longer needed.

llvm-svn: 230352
2015-02-24 17:59:19 +00:00
Tim Northover e95c5b3236 ARM: treat [N x i32] and [N x i64] as AAPCS composite types
The logic is almost there already, with our special homogeneous aggregate
handling. Tweaking it like this allows front-ends to emit AAPCS compliant code
without ever having to count registers or add discarded padding arguments.

Only arrays of i32 and i64 are needed to model AAPCS rules, but I decided to
apply the logic to all integer arrays for more consistency.

llvm-svn: 230348
2015-02-24 17:22:34 +00:00
Sanjay Patel a709f3a5ae simplify control flow; NFC
llvm-svn: 230342
2015-02-24 16:26:02 +00:00
Michael Kuperstein d2f3b87812 [x32] Mark RBX as reserved when EBX is the base pointer.
This should have gone into r230334.

llvm-svn: 230339
2015-02-24 16:13:16 +00:00
Sanjay Patel 2898548598 fix typo in comment; NFC
llvm-svn: 230338
2015-02-24 16:11:05 +00:00
Michael Kuperstein 8ffb409135 [x32] x32 should use ebx as the base pointer.
This fixes the original issue in PR22655, but not the secondary one.

llvm-svn: 230334
2015-02-24 15:27:13 +00:00
Toma Tabacu a90f144a1d [mips] Reformat some TableGen definitions. NFC.
Summary: Separated some instruction and pseudo-instruction definitions from InstAlias definitions, added a banner for pseudo-instructions, and removed redundant whitespace from a pseudo-instruction definition. No functional change.

Reviewers: dsanders

Reviewed By: dsanders

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D7552

llvm-svn: 230327
2015-02-24 11:52:19 +00:00
Craig Topper cf51397c48 [X86] Remove the AbsMem32 type from the assembly parser. Only really need the 16-bit version which will automatically get prioritized over AbsMem.
llvm-svn: 230313
2015-02-24 08:02:13 +00:00
Reed Kotler 5fb7d8b508 Beginning of alloca implementation for Mips fast-isel
Summary: Begin to add various address modes; including alloca.

Test Plan: Make sure there are no regressions in test-suite at O0/02 in mips32r1/r2

Reviewers: dsanders

Reviewed By: dsanders

Subscribers: echristo, rfuhler, llvm-commits

Differential Revision: http://reviews.llvm.org/D6426

llvm-svn: 230300
2015-02-24 02:36:45 +00:00
Bob Wilson 8e29dec986 Fix handling of negative offsets for AddrModeT2_i8s4 in rewriteT2FrameIndex.
This is a follow up to r230233 to fix something that I noticed by
inspection. The AddrModeT2_i8s4 addressing mode does not support
negative offsets. I spent a good chunk of the day trying to come up with
a testcase for this but was not successful. This addressing mode is used
to spill and restore GPRPair registers in Thumb2 code and that does not
happen often. We also make very limited used of negative offsets when
lowering frame indexes. I am going ahead with the change anyway, because
I am pretty confident that it is correct. I also added a missing assertion
to check that the low bits of the scaled offset are zero.

llvm-svn: 230297
2015-02-24 01:37:31 +00:00
David Majnemer 3aa0bd81a2 X86: Only use 'lea' in Win64 epilogues if a frame pointer exists
We can only use 'add' in epilogues, 'lea' is not permitted unless we've
established a frame pointer in the prologue.

llvm-svn: 230286
2015-02-24 00:11:32 +00:00
David Majnemer 006c490ba8 X86: Use a smaller 'mov' instruction for stack probe calls
Prologue emission, in some cases, requires calls to a stack probe helper
function.  The amount of stack to probe is passed as a register
argument in the Win64 ABI but the instruction sequence used is
pessimistic: it assumes that the number of bytes to probe is greater
than 4 GB.

Instead, select a more appropriate opcode depending on the number of
bytes we are going to probe.

llvm-svn: 230270
2015-02-23 21:50:30 +00:00
David Majnemer 31d868b618 X86: Use 'mov' instead of 'lea' in Win64 SEH prologues when possible
'mov' and 'lea' are equivalent when the displacement applied with 'lea'
is zero.  However, 'mov' should encode smaller.

llvm-svn: 230269
2015-02-23 21:50:27 +00:00
David Majnemer b85e023b8b X86: Explain why we cannot use a 'mov' in a Win64 epilogue
llvm-svn: 230268
2015-02-23 21:50:25 +00:00
David Majnemer 086f6a7e6e X86: Consistently use 'epilogue' instead of 'epilog'
llvm-svn: 230267
2015-02-23 21:50:18 +00:00
Bruno Cardoso Lopes 24492b057e [AsmPrinter] Access pointers to globals via pcrel GOT entries
Front-ends could use global unnamed_addr to hold pointers to other
symbols, like @gotequivalent below:

@foo = global i32 42
@gotequivalent = private unnamed_addr constant i32* @foo

@delta = global i32 trunc (i64 sub (i64 ptrtoint (i32** @gotequivalent to i64),
                                    i64 ptrtoint (i32* @delta to i64))
                           to i32)

The global @delta holds a data "PC"-relative offset to @gotequivalent,
an unnamed pointer to @foo. The darwin/x86-64 assembly output for this follows:

 .globl  _foo
_foo:
 .long   42

 .globl  _gotequivalent
_gotequivalent:
 .quad   _foo

 .globl  _delta
_delta:
 .long   _gotequivalent-_delta

Since unnamed_addr indicates that the address is not significant, only
the content, we can optimize the case above by replacing pc-relative
accesses to "GOT equivalent" globals, by a PC relative access to the GOT
entry of the final symbol instead. Therefore, "delta" can contain a pc
relative relocation to foo's GOT entry and we avoid the emission of
"gotequivalent", yielding the assembly code below:

 .globl  _foo
_foo:
 .long   42

 .globl  _delta
_delta:
 .long   _foo@GOTPCREL+4

There are a couple of advantages of doing this: (1) Front-ends that need
to emit a great deal of data to store pointers to external symbols could
save space by not emitting such "got equivalent" globals and (2) IR
constructs combined with this opt opens a way to represent GOT pcrel
relocations by using the LLVM IR, which is something we previously had
no way to express.

Differential Revision: http://reviews.llvm.org/D6922

rdar://problem/18534217

llvm-svn: 230264
2015-02-23 21:26:18 +00:00
Bruno Cardoso Lopes 32173cdf06 Revert "[X86][MMX] Add MMX instructions to foldable tables"
This reverts commit r230226 since it breaks win buildbots.

llvm-svn: 230248
2015-02-23 19:53:37 +00:00
Eric Christopher ed47b22951 Rewrite the global merge pass to be subprogram agnostic for now.
It was previously using the subtarget to get values for the global
offset without actually checking each function as it was generating
code. Go ahead and solidify the current behavior and make the
existing FIXMEs more prominent.

As a note the ARM backend previously had a thumb1 and non-thumb1
set of defaults. Only the former was tested so I've changed the
behavior to only use that for now.

llvm-svn: 230245
2015-02-23 19:28:45 +00:00
Chad Rosier 543900539f Prevent hoisting fmul from THEN/ELSE to IF if there is fmsub/fmadd opportunity.
This patch adds the isProfitableToHoist API.  For AArch64, we want to prevent a
fmul from being hoisted in cases where it is more profitable to form a
fmsub/fmadd.
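
A hypothetical source-level example of the pattern the new hook protects:

    // Hoisting the common a*b out of both arms would leave a plain fmul plus an
    // add/sub, blocking the fmadd/fmsub that can be formed per-arm instead.
    double fused(bool c, double a, double b, double x, double y) {
      return c ? (a * b + x)    // fmadd candidate
               : (a * b - y);   // fmsub candidate
    }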

Phabricator Review: http://reviews.llvm.org/D7299
Patch by Lawrence Hu <lawrence@codeaurora.org>

llvm-svn: 230241
2015-02-23 19:15:16 +00:00
Daniel Sanders afe27c7d27 [mips] Honour -mno-odd-spreg for vector insert/extract when MSA is enabled.
Summary:
-mno-odd-spreg prohibits the use of odd-numbered single-precision floating
point registers. However, vector insert/extract was still using them when
manipulating the subregisters of an MSA register. Fixed this by ensuring
that insertion/extraction is only performed on even-numbered vector
registers when -mno-odd-spreg is given.

Reviewers: vmedic, sstankovic

Reviewed By: sstankovic

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D7672

llvm-svn: 230235
2015-02-23 17:22:16 +00:00
Bob Wilson 89e94fc3ad Fix incorrect immediate size for AddrModeT2_i8s4 in rewriteT2FrameIndex.
The natural way to handle this addressing mode would be to say that it has
8 bits and gets scaled by 4, but since the MC layer is expecting the scaling
to be already reflected in the immediate value, we have been setting the
Scale to 1. That's fine, but then NumBits needs to be adjusted to reflect
the effective increase in the range of the immediate. That adjustment was
missing.

The consequence is that the register scavenger can fail.
The estimateRSStackSizeLimit() function in ARMFrameLowering.cpp correctly
assumes that the AddrModeT2_i8s4 address mode can handle scaled offsets up to
1020. Under just the right circumstances, we fail to reserve space for the
scavenger because it thinks that nothing will be needed. However, the overly
pessimistic behavior in rewriteT2FrameIndex causes some frame indexes to be
out of range and require scavenged registers, and so the scavenger asserts.

Unfortunately I have not been able to come up with a testcase for this. I
can only reproduce it on an internal branch where the frame layout and
register allocation is slightly different than trunk. We really need a
way to serialize MachineInstr-level IR to write reasonable tests for things
like this.

rdar://problem/19909005

llvm-svn: 230233
2015-02-23 16:57:19 +00:00
Bruno Cardoso Lopes f488e2ae69 [X86][MMX] Add MMX instructions to foldable tables
Teach the peephole optimizer to work with MMX instructions by adding
entries into the foldable tables. This covers folding opportunities not
handled during isel.

llvm-svn: 230226
2015-02-23 15:23:22 +00:00
Bruno Cardoso Lopes 9e1c4c17d9 [X86][MMX] Support folding loads in psll, psrl and psra intrinsics
llvm-svn: 230225
2015-02-23 15:23:14 +00:00
Elena Demikhovsky 52e81bc499 AVX-512: recommitted 229837 + bugfix + test
llvm-svn: 230223
2015-02-23 15:12:31 +00:00
Elena Demikhovsky 145e5b4409 restructured X86 scalar unary operation templates
I made the templates general; there is no need to define a pattern separately for each instruction/intrinsic.
Now we only need to add the r_Int pattern for AVX.

llvm-svn: 230221
2015-02-23 14:14:02 +00:00
NAKAMURA Takumi 3d61760bd6 Fix a warning on HexagonMCCodeEmitter::MCII. [-Wunused-private-field]
llvm-svn: 230170
2015-02-22 09:58:29 +00:00
Craig Topper 8659344d93 [X86] Add some missing redundant MMX and SSE encodings for disassembler.
llvm-svn: 230165
2015-02-22 07:50:41 +00:00
Matt Arsenault f07833057c R600/SI: Use v_madmk_f32
llvm-svn: 230149
2015-02-21 21:29:10 +00:00
Matt Arsenault 0325d3d27f R600/SI: Try to use v_madak_f32
This is a code size optimization when the constant
only has one use.

llvm-svn: 230148
2015-02-21 21:29:07 +00:00
Matt Arsenault 657b1cb739 R600/SI: Don't crash when getting immediate operand size
llvm-svn: 230147
2015-02-21 21:29:04 +00:00
Matt Arsenault 70120fa813 R600/SI: Fix mad*k definitions
llvm-svn: 230146
2015-02-21 21:29:00 +00:00
Benjamin Kramer cd8e4ebadb Remove dead prototype.
llvm-svn: 230137
2015-02-21 14:35:00 +00:00
Benjamin Kramer 71bfe3d1a4 X86: Remove custom lowering of SIGN_EXTEND_INREG
This was just replicating logic from the legalizer. Covered by existing
tests.

llvm-svn: 230136
2015-02-21 14:31:29 +00:00
Eric Christopher cde54069f9 Remove obsolete comment.
llvm-svn: 230134
2015-02-21 08:48:23 +00:00
Eric Christopher 327fc9721c Have the MipsAsmPrinter fp stub emission code take a custom
MCSubtargetInfo as the MachineFunction has gone away and we need
to emit code at the module level.

llvm-svn: 230133
2015-02-21 08:48:22 +00:00
Eric Christopher d5bc07e866 Turn an if+llvm_unreachable into an assert and reword comment.
llvm-svn: 230132
2015-02-21 08:32:38 +00:00
Eric Christopher bb40164e48 Endianness can be gotten from the DataLayout which we already
have. Also, the subtarget is invalid at this point.

llvm-svn: 230131
2015-02-21 08:32:22 +00:00
David Majnemer d5ab35f265 X86: Call __main using the SelectionDAG
Synthesizing a call directly using the MI layer would confuse the frame
lowering code.  This is problematic as frame lowering is highly
sensitive the particularities of calls, etc.

llvm-svn: 230129
2015-02-21 05:49:45 +00:00
Tim Northover 3b6b7ca2bc CodeGen: convert CCState interface to using ArrayRefs
Everyone except R600 was manually passing the length of a static array
at each callsite, calculated in a variety of interesting ways. Far
easier to let ArrayRef handle that.

There should be no functional change, but out of tree targets may have
to tweak their calls as with these examples.
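
A hedged sketch of the callsite shape (the callee here is hypothetical, not the
CCState API itself):

    #include "llvm/ADT/ArrayRef.h"

    static const unsigned RegList[] = {3, 4, 5};

    void analyze(llvm::ArrayRef<unsigned> Regs);   // length travels with the reference

    void caller() {
      // before: analyze(RegList, sizeof(RegList) / sizeof(RegList[0]));
      analyze(RegList);   // the C array converts to ArrayRef and carries its size
    }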

llvm-svn: 230118
2015-02-21 02:11:17 +00:00
David Majnemer 89d0564b6a Win64: Stack alignment constraints aren't applied during SET_FPREG
Stack realignment occurs after the prolog, not during, for Win64.
Because of this, don't factor in the maximum stack alignment when
establishing a frame pointer.

This fixes PR22572.

llvm-svn: 230113
2015-02-21 01:04:47 +00:00
Reid Kleckner 8142a08ce7 X86: Remove pre-2010 dead code in mergeSPUpdatesDown
llvm-svn: 230075
2015-02-20 22:13:25 +00:00
Simon Pilgrim b7875837c7 LowerScalarImmediateShift - Merged v16i8 and v32i8 shift lowering. NFC.
llvm-svn: 230074
2015-02-20 22:13:03 +00:00
Matt Arsenault 20711b7bae R600/SI: Remove v_sub_f64 pseudo
The expansion code does the same thing. Since
the operands were not defined with the correct
types, this has the side effect of fixing operand
folding since the expanded pseudo would never use
SGPRs or inline immediates.

llvm-svn: 230072
2015-02-20 22:10:45 +00:00
Matt Arsenault 8d6300346f R600: Use new fmad node.
This enables a few useful combines that used to only
use fma.

Also since v_mad_f32 apparently does not support denormals,
disable the existing cases that are custom handled if they are
requested.

llvm-svn: 230071
2015-02-20 22:10:41 +00:00
Jozef Kolek 0365675522 Reverted revision 229706. The reason is a regression caused by
CodeGen's use of the ADDU16 instruction. For this instruction an improper
register is allocated, i.e. a register that is not from the register set defined
for the instruction.

llvm-svn: 230053
2015-02-20 20:26:52 +00:00
Eric Christopher db51e2a52a Fix an asan use-after-free bug introduced by the asm printer
changes to remove non-Function based subtargets out of the asm
printer. For module level emission we'll need to construct up
an MCSubtargetInfo so that we can encode instructions for
emission.

llvm-svn: 230050
2015-02-20 19:54:07 +00:00
Andrea Di Biagio 7035178aeb [X86][FastIsel] Teach how to select float-half conversion intrinsics.
This patch teaches X86FastISel how to select intrinsic 'convert_from_fp16' and
intrinsic 'convert_to_fp16'.
If the target has F16C, we can select VCVTPS2PHrr for a float-half conversion,
and VCVTPH2PSrr for a half-float conversion.
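
A hedged source-level sketch (assuming the frontend supports the __fp16
storage-only type on this target); conversions like these are what the fp16
conversion intrinsics model:

    float  widen(__fp16 h) { return h; }                       // half -> float: VCVTPH2PSrr with F16C
    __fp16 narrow(float f) { return static_cast<__fp16>(f); }  // float -> half: VCVTPS2PHrr with F16C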

Differential Revision: http://reviews.llvm.org/D7673

llvm-svn: 230043
2015-02-20 19:37:14 +00:00
Eric Christopher 9cda4b7ed9 Remove a use of the Subtarget in the darwin ppc asm printer.
EmitFunctionStubs is called from doFinalization and so can't
depend on the Subtarget existing. It's also irrelevant as
we know we're darwin since we're in the darwin asm printer.

llvm-svn: 230039
2015-02-20 18:53:42 +00:00
Eric Christopher 1df0c519fc Get the cached subtarget off the MachineFunction rather than
inquiring for a new one from the TargetMachine.

llvm-svn: 230037
2015-02-20 18:44:15 +00:00
Sanjay Patel 1b53f74c4c canonicalize a v2f64 blendi of 2 registers
This canonicalization step saves us 3 pattern matching possibilities * 4 math ops
for scalar FP math that uses xmm regs. The backend can re-commute the operands
post-instruction-selection if that makes register allocation better.

The tests in llvm/test/CodeGen/X86/sse-scalar-fp-arith.ll cover this scenario already,
so there are no new tests with this patch.

Differential Revision: http://reviews.llvm.org/D7777

llvm-svn: 230024
2015-02-20 16:55:27 +00:00
Kit Barton 263edb99ab I incorrectly marked the VORC instruction as isCommutable when I added it.
This fix removes the VORC instruction definition from the isCommutable block.

Phabricator review: http://reviews.llvm.org/D7772

llvm-svn: 230020
2015-02-20 15:54:58 +00:00
Chandler Carruth 0c5f059865 [x86] Switching the shuffle equivalence test to a variadic template was
the wrong answer. We also got initializer lists, which are *way* cleaner
for this kind of thing. Let's use those and make this a normal, boring
function accepting ArrayRef.
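
A minimal sketch of the resulting shape (names invented here, not the actual
helper), showing how an ArrayRef parameter accepts a braced initializer list
at the call site:

    #include "llvm/ADT/ArrayRef.h"

    // Plain function taking ArrayRef instead of a variadic template.
    bool masksMatch(llvm::ArrayRef<int> Mask, llvm::ArrayRef<int> Expected) {
      return Mask.equals(Expected);
    }
    // Call site: masksMatch(Mask, {0, 4, 1, 5});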

llvm-svn: 230004
2015-02-20 10:47:28 +00:00
Eric Christopher 0218f8cfed Fix wording and grammar in Mips subtarget options.
llvm-svn: 230001
2015-02-20 08:42:34 +00:00
Eric Christopher 3ee30d0607 Get the cached subtarget off the MachineFunction rather than
inquiring for a new one from the TargetMachine.

llvm-svn: 230000
2015-02-20 08:39:06 +00:00
Eric Christopher 22b2ad265f Get the cached subtarget off the MachineFunction rather than
inquiring for a new one from the TargetMachine.

llvm-svn: 229999
2015-02-20 08:24:37 +00:00
Eric Christopher 155290edf9 Get the cached subtarget off the MachineFunction rather than
inquiring for a new one from the TargetMachine.

llvm-svn: 229998
2015-02-20 08:24:34 +00:00
Eric Christopher ad1ef04ab1 Save the MachineFunction in startFunction so that we can use it for
lookups of the subtarget later.

llvm-svn: 229996
2015-02-20 08:01:55 +00:00
Eric Christopher 4369c9b42c Use the cached subtarget from the MachineFunction rather than
doing a lookup on the TargetMachine.

llvm-svn: 229995
2015-02-20 08:01:52 +00:00
Eric Christopher 1947a9e2e2 Make the TargetMachine::getSubtarget that takes a Function argument
take a reference to match the getSubtargetImpl that takes a Function
argument.

llvm-svn: 229994
2015-02-20 07:32:59 +00:00
Nick Lewycky a2bda08806 Fix build in release mode, -Wunused-variable on this lambda function used only in an assert.
llvm-svn: 229977
2015-02-20 07:16:17 +00:00
David Blaikie 9a3644c472 Fix -Wunused-variable warning in non-asserts build, and optimize a little bit while I'm here.
llvm-svn: 229970
2015-02-20 06:28:38 +00:00
Hal Finkel e5aaf3f2cd [PowerPC] Loop Data Prefetching for the BG/Q
The IBM BG/Q supercomputer's A2 cores have a hardware prefetching unit, the
L1P, but it does not prefetch directly into the A2's L1 cache. Instead, it
prefetches into its own L1P buffer, and the latency to access that buffer is
significantly higher than that to the L1 cache (although smaller than the
latency to the L2 cache). As a result, especially when multiple hardware
threads are not actively busy, explicitly prefetching data into the L1 cache is
advantageous.

I've been using this pass out-of-tree for data prefetching on the BG/Q for well
over a year, and it has worked quite well. It is enabled by default only for
the BG/Q, but can be enabled for other cores as well via a command-line option.

Eventually, we might want to add some TTI interfaces and move this into
Transforms/Scalar (there is nothing particularly target dependent about it,
although only machines like the BG/Q will benefit from its simplistic
strategy).
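
As a rough illustration of what the pass arranges (written with
__builtin_prefetch purely for exposition; the pass inserts the equivalent
llvm.prefetch calls itself, and the distance below is an assumed value):

    // Streaming loop with a software prefetch a fixed distance ahead.
    void sum(const double *A, double *Out, long N) {
      const long Dist = 64; // assumed prefetch distance, in elements
      double S = 0.0;
      for (long I = 0; I < N; ++I) {
        __builtin_prefetch(&A[I + Dist], /*rw=*/0, /*locality=*/3);
        S += A[I];
      }
      *Out = S;
    }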

llvm-svn: 229966
2015-02-20 05:08:21 +00:00
Chandler Carruth 4041f2217b [x86] Remove the old vector shuffle lowering code and its flag.
The new shuffle lowering has been the default for some time. I've
enabled the new legality testing by default with no really blocking
regressions. I've fuzz tested this very heavily (many millions of fuzz
test cases have passed at this point). And this cleans up a ton of code.
=]

Thanks again to the many folks that helped with this transition. There
was a lot of work by others that went into the new shuffle lowering to
make it really excellent.

In case you aren't using a diff algorithm that can handle this:
  X86ISelLowering.cpp: 22 insertions(+), 2940 deletions(-)

llvm-svn: 229964
2015-02-20 04:25:04 +00:00
Chandler Carruth eb206aa1ea [x86] Now that the new vector shuffle legality is enabled and everything
is going well, remove the flag and the code for the old legality tests.

This is the first step toward removing the entire old vector shuffle
lowering. *Much* more code to delete coming up next.

llvm-svn: 229963
2015-02-20 03:59:35 +00:00
Chandler Carruth d2b14b296c [x86] Make the new vector shuffle legality test on by default, which
reflects the fact that the x86 backend can in fact lower any shuffle you
want it to with reasonably high code quality.

My recent work on the new vector shuffle has made this regress *very*
little. The diff in the test cases makes me very, very happy.

llvm-svn: 229958
2015-02-20 03:05:47 +00:00
Eric Christopher 0d94fa98e5 Revert "AVX-512: Full implementation for VRNDSCALESS/SD instructions and intrinsics."
The instructions were being generated on architectures that don't support avx512.

This reverts commit r229837.

llvm-svn: 229942
2015-02-20 00:45:28 +00:00
Eric Christopher 06b32cdfed Add a license header to the AVX512 file.
llvm-svn: 229941
2015-02-20 00:36:53 +00:00
Ahmed Bougacha db141ac37d [ARM] Re-re-apply VLD1/VST1 base-update combine.
This re-applies r223862, r224198, r224203, and r224754, which were
reverted in r228129 because they exposed Clang misalignment problems
when self-hosting.

The combine caused the crashes because we turned ISD::LOAD/STORE nodes
to ARMISD::VLD1/VST1_UPD nodes.  When selecting addressing modes, we
were very lax for the former, and only emitted the alignment operand
(as in "[r1:128]") when it was larger than the standard alignment of
the memory type.

However, for ARMISD nodes, we just used the MMO alignment, no matter
what.  In our case, we turned ISD nodes to ARMISD nodes, and this
caused the alignment operands to start being emitted.

And that's how we exposed alignment problems that were ignored before
(but which I believe would have been caught with SCTRL.A==1?).

To fix this, we can just mirror the hack done for ISD nodes:  only
take into account the MMO alignment when the access is overaligned.

Original commit message:
We used to only combine intrinsics, and turn them into VLD1_UPD/VST1_UPD
when the base pointer is incremented after the load/store.

We can do the same thing for generic load/stores.

Note that we can only combine the first load/store+adds pair in
a sequence (as might be generated for a v16f32 load for instance),
because other combines turn the base pointer addition chain (each
computing the address of the next load, from the address of the last
load) into independent additions (common base pointer + this load's
offset).

rdar://19717869, rdar://14062261.
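
For reference, the source-level shape of the pattern the combine targets
(illustrative only, using a NEON intrinsic; the actual combine works on
generic ISD::LOAD/STORE nodes):

    #include <arm_neon.h>

    // A load whose base pointer is incremented right afterwards; this pair
    // can become a single post-incremented VLD1 after the combine.
    float32x4_t loadAndAdvance(const float *&P) {
      float32x4_t V = vld1q_f32(P);
      P += 4;
      return V;
    }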

llvm-svn: 229932
2015-02-19 23:52:41 +00:00
Ahmed Bougacha dfdf54bed0 [ARM] Minor cleanup to CombineBaseUpdate. NFC.
In preparation for a future patch:
- rename isLoad to isLoadOp: the former is confusing, and can be taken
  to refer to the fact that the node is an ISD::LOAD.  (it isn't, yet.)
- change formatting here and there.
- add some comments.
- const-ify bools.

llvm-svn: 229929
2015-02-19 23:30:37 +00:00
Ahmed Bougacha 4c2b0781a5 [CodeGen] Use ArrayRef instead of std::vector&. NFC.
The former lets us use SmallVectors.  Do so in ARM and AArch64.

llvm-svn: 229925
2015-02-19 23:13:10 +00:00
Colin LeMahieu 1174fea31c [Hexagon] Moving remaining methods off of HexagonMCInst in to HexagonMCInstrInfo and eliminating HexagonMCInst class.
llvm-svn: 229914
2015-02-19 21:10:50 +00:00
Eric Christopher 64d35be6d6 Remove unused argument from emitInlineAsmStart.
llvm-svn: 229907
2015-02-19 19:52:25 +00:00
Colin LeMahieu 745c4710db [Hexagon] Moving more functions off of HexagonMCInst and in to HexagonMCInstrInfo.
llvm-svn: 229903
2015-02-19 19:49:27 +00:00
Colin LeMahieu af304e5192 [Hexagon] Creating HexagonMCInstrInfo namespace as landing zone for static functions detached from HexagonMCInst.
llvm-svn: 229885
2015-02-19 19:00:00 +00:00
Colin LeMahieu f08a3ccf50 [Hexagon] Removing static variable holding MCInstrInfo.
llvm-svn: 229872
2015-02-19 17:38:39 +00:00
Benjamin Kramer ea68a944a1 Demote vectors to arrays. No functionality change.
llvm-svn: 229861
2015-02-19 15:26:17 +00:00
Chandler Carruth 5d1a84b7b8 [x86] Delete still more piles of complex code now that we have a good
systematic lowering of v8i16.

This required a slight strategy shift to prefer unpack lowerings in more
places. While this isn't a cut-and-dry win in every case, it is in the
overwhelming majority. There are only a few places where the old
lowering would probably be a touch faster, and then only by a small
margin.

In some cases, this is yet another significant improvement.

llvm-svn: 229859
2015-02-19 15:21:57 +00:00
Chandler Carruth 0b39536390 [x86] Teach the unpack lowering how to lower with an initial unpack in
addition to lowering to trees rooted in an unpack.

This saves shuffles and or registers in many various ways, lets us
handle another class of v4i32 shuffles pre SSE4.1 without domain
crosses, etc.

llvm-svn: 229856
2015-02-19 15:06:13 +00:00
Chandler Carruth 352eba1c29 [x86] Dramatically improve v8i16 shuffle lowering by not using its
terribly complex partial blend logic.

This code path was one of the more complex and bug-prone when it first
went in, and it hasn't fared much better. Ultimately, with the simpler
basis for unpack lowering and support for bit-math blending, this is
completely obsolete. In the worst case without this we generate
different but equivalent instructions. However, in many cases we
generate much better code. This is especially true when blends or pshufb
is available.

This does expose one (minor) weakness of the unpack lowering that I'll
try to address.

In case you were wondering, this is actually a big part of what I've
been trying to pull off in the recent string of commits.

llvm-svn: 229853
2015-02-19 14:08:24 +00:00
Chandler Carruth 2c0390ca4b [x86] Remove the final fallback in the v8i16 lowering that isn't really
needed, and significantly improve the SSSE3 path.

This makes the new strategy much more clear. If we can blend, we just go
with that. If we can't blend, we try to permute into an unpack so
that we handle cases where the unpack doing the blend also simplifies
the shuffle. If that fails and we've got SSSE3, we now call into
factored-out pshufb lowering code so that we leverage the fact that
pshufb can set up a blend for us while shuffling. This generates great
code, especially because we *know* we don't have a fast blend at this
point. Finally, we fall back on decomposing into permutes and blends
because we do at least have a bit-math-based blend if we need to use
that.

This pretty significantly improves some of the v8i16 code paths. We
never need to form pshufb for the single-input shuffles because we have
effective target-specific combines to form it there, but we were missing
its effectiveness in the blends.

llvm-svn: 229851
2015-02-19 13:56:49 +00:00
Chandler Carruth f0f0d27391 [x86] Simplify the pre-SSSE3 v16i8 lowering significantly by decomposing
them into permutes and a blend with the generic decomposition logic.

This works really well in almost every case and lets the code only
manage the expansion of a single input into two v8i16 vectors to perform
the actual shuffle. The blend-based merging is often much nicer than the
pack based merging that this replaces. The only place where it isn't we
end up blending between two packs when we could do a single pack. To
handle that case, just teach the v2i64 lowering to handle these blends
by digging out the operands.

With this we're down to only really random permutations that cause an
explosion of instructions.

llvm-svn: 229849
2015-02-19 13:15:12 +00:00
Chandler Carruth 8817e5e01b [x86] Remove the insanely over-aggressive unpack lowering strategy for
v16i8 shuffles, and replace it with new facilities.

This uses precise patterns to match exact unpacks, and the new
generalized unpack lowering only when we detect a case where we will
have to shuffle both inputs anyways and they terminate in exactly
a blend.

This fixes all of the blend horrors that I uncovered by always lowering
blends through the vector shuffle lowering. It also removes *sooooo*
much of the crazy instruction sequences required for v16i8 lowering
previously. Much cleaner now.

The only "meh" aspect is that we sometimes use pshufb+pshufb+unpck when
it would be marginally nicer to use pshufb+pshufb+por. However, the
difference there is *tiny*. In many cases its a win because we re-use
the pshufb mask. In others, we get to avoid the pshufb entirely. I've
left a FIXME, but I'm dubious we can really do better than this. I'm
actually pretty happy with this lowering now.

For SSE2 this exposes some horrors that were really already there. Those
will have to fixed by changing a different path through the v16i8
lowering.

llvm-svn: 229846
2015-02-19 12:10:37 +00:00
Jozef Kolek 5d171fc291 [mips][microMIPS] Enable usage of AND16, OR16 and XOR16 by the code generator
Differential Revision: http://reviews.llvm.org/D7611

llvm-svn: 229845
2015-02-19 11:51:32 +00:00
Chandler Carruth 38dea42ddf [x86] The SELECT x86 DAG combine also does legalization. It used to rely
on things not being marked as either custom or legal, but we now do
custom lowering of more VSELECT nodes. To cope with this, manually
replicate the legality tests here. These have to stay in sync with the
set of tests used in the custom lowering of VSELECT.

Ideally, we wouldn't do any of this combine-based-legalization when we
have an actual custom legalization step for VSELECT, but I'm not going
to be able to rewrite all of that today.

I don't have a test case for this currently, but it was found when
compiling a number of the test-suite benchmarks. I'll try to reduce
a test case and add it.

This should at least fix the test-suite fallout on build bots.

llvm-svn: 229844
2015-02-19 11:43:37 +00:00
Michael Kuperstein efd7a96d2e Reverting r229831 due to multiple ARM/PPC/MIPS build-bot failures.
llvm-svn: 229841
2015-02-19 11:38:11 +00:00
Elena Demikhovsky 69e8b45b13 AVX-512: Full implementation for VRNDSCALESS/SD instructions and intrinsics.
llvm-svn: 229837
2015-02-19 10:48:04 +00:00
Chandler Carruth bcb6c5f62d [x86] Add support for bit-wise blending and use it in the v8 and v16
lowering paths. I'm going to be leveraging this to simplify a lot of the
overly complex lowering of v8 and v16 shuffles in pre-SSSE3 modes.

Sadly, this isn't profitable on v4i32 and v2i64. There, the float and
double blending instructions for pre-SSE4.1 are actually pretty good,
and we can't beat them with bit math. And once SSE4.1 comes around we
have direct blending support and this ceases to be relevant.

Also, some of the test cases look odd because the domain fixer
canonicalizes these to floating point domain. That's OK, it'll use the
integer domain when it matters and some day I may be able to update
enough of LLVM to canonicalize the other way.

This restores almost all of the regressions from teaching x86's vselect
lowering to always use vector shuffle lowering for blends. The remaining
problems are because the v16 lowering path is still doing crazy things.
I'll be re-arranging that strategy in more detail in subsequent commits
to finish recovering the performance here.
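
The bit-math blend itself is the classic and/andnot/or idiom; a small sketch
with SSE2 intrinsics (for illustration, not the lowering code):

    #include <emmintrin.h>

    // Selects bits of A where Mask bits are set and bits of B elsewhere.
    __m128i bitBlend(__m128i Mask, __m128i A, __m128i B) {
      return _mm_or_si128(_mm_and_si128(Mask, A), _mm_andnot_si128(Mask, B));
    }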

llvm-svn: 229836
2015-02-19 10:46:52 +00:00
Chandler Carruth b89464a9b6 [x86,sdag] Two interrelated changes to the x86 and sdag code.
First, don't combine bit masking into vector shuffles (even ones the
target can handle) once operation legalization has taken place. Custom
legalization of vector shuffles may exist for these patterns (making the
predicate return true) but that custom legalization may in some cases
produce the exact bit math this matches. We only really want to handle
this prior to operation legalization.

However, the x86 backend, in a fit of awesome, relied on this. What it
would do is mark VSELECTs as expand, which would turn them into
arithmetic, which this would then match back into vector shuffles, which
we would then lower properly. Amazing.

Instead, the second change is to teach the x86 backend to directly form
vector shuffles from VSELECT nodes with constant conditions, and to mark
all of the vector types we support lowering blends as shuffles as custom
VSELECT lowering. We still mark the forms which actually support
variable blends as *legal* so that the custom lowering is bypassed, and
the legal lowering can even be used by the vector shuffle legalization
(yes, i know, this is confusing. but that's how the patterns are
written).

This makes the VSELECT lowering much more sensible, and in fact should
fix a bunch of bugs with it. However, as you'll see in the test cases,
right now what it does is point out the *hilarious* deficiency of the
new vector shuffle lowering when it comes to blends. Fortunately, my
very next patch fixes that. I can't submit it yet, because that patch,
somewhat obviously, forms the exact and/or pattern that the DAG combine
is matching here! Without this patch, teaching the vector shuffle
lowering to produce the right code infloops in the DAG combiner. With
this patch alone, we produce terrible code but at least lower through
the right paths. With both patches, all the regressions here should be
fixed, and a bunch of the improvements (like using 2 shufps with no
memory loads instead of 2 andps with memory loads and an orps) will
stay. Win!

There is one other change worth noting here. We had hilariously wrong
vectorization cost estimates for vselect because we fell through to the
code path that assumed all "expand" vector operations are scalarized.
However, the "expand" lowering of VSELECT is vector bit math, most
definitely not scalarized. So now we go back to the correct if horribly
naive cost of "1" for "not scalarized". If anyone wants to add actual
modeling of shuffle costs, that would be cool, but this seems an
improvement on its own. Note the removal of 16 and 32 "costs" for doing
a blend. Even in SSE2 we can blend in fewer than 16 instructions. ;] Of
course, we don't right now because of OMG bad code, but I'm going to fix
that. Next patch. I promise.

llvm-svn: 229835
2015-02-19 10:36:19 +00:00
Michael Kuperstein ba5b04c798 Use std::bitset for SubtargetFeatures
Previously, subtarget features were a bitfield with the underlying type being uint64_t.
Since several targets (X86 and ARM, in particular) have hit or were very close to hitting this bound, switch the features to use a bitset.

No functional change.

Differential Revision: http://reviews.llvm.org/D7065
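
A minimal sketch of the difference (the width below is an assumption for
illustration, not the value used in tree):

    #include <bitset>
    #include <cstdint>

    // Before: a plain 64-bit mask caps each target at 64 feature flags.
    using OldFeatureBits = uint64_t;

    // After: a std::bitset of configurable width removes that ceiling while
    // keeping cheap test/set operations.
    constexpr unsigned MaxSubtargetFeatures = 96; // assumed width
    using FeatureBitset = std::bitset<MaxSubtargetFeatures>;

    bool hasFeature(const FeatureBitset &Bits, unsigned Feature) {
      return Bits.test(Feature);
    }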

llvm-svn: 229831
2015-02-19 09:01:04 +00:00
Eric Christopher d84f5d30e2 Remove the local subtarget variable from the SystemZ asm printer
and update the two calls accordingly.

llvm-svn: 229805
2015-02-19 01:26:28 +00:00
Eric Christopher 0795a2ef0c Remove a few more calls to TargetMachine::getSubtarget from the
R600 port.

llvm-svn: 229804
2015-02-19 01:10:55 +00:00
Eric Christopher 7edca437f5 Grab the subtarget off of the machine function for the R600
asm printer and clean up a bunch of uses.

llvm-svn: 229803
2015-02-19 01:10:53 +00:00
Eric Christopher 96caeda730 Remove the DisasmEnabled AsmPrinter variable and just look it
up on the subtarget where it's set anyhow than looking it up
2-3 times in the same place.

llvm-svn: 229802
2015-02-19 01:10:49 +00:00
Peter Collingbourne fb8002cbe0 MC: Remove NullStreamer hook, as it is redundant with NullTargetStreamer.
llvm-svn: 229799
2015-02-19 00:45:07 +00:00
Peter Collingbourne 20c7259ce9 Introduce Target::createNullTargetStreamer and use it from IRObjectFile.
A null MCTargetStreamer allows IRObjectFile to ignore target-specific
directives. Previously we were crashing.

Differential Revision: http://reviews.llvm.org/D7711

llvm-svn: 229797
2015-02-19 00:45:02 +00:00
Eric Christopher ca929f2469 Avoid using a self-referential initializer and fix up uses.
llvm-svn: 229790
2015-02-19 00:22:47 +00:00
Eric Christopher 111de895a0 80-column fixups.
llvm-svn: 229789
2015-02-19 00:15:33 +00:00
Eric Christopher 02389e3886 Remove all use of is64bit off of NVPTXSubtarget and clean up code
accordingly. This changes the constructors of a number of classes
that don't need to know the subtarget's 64-bitness.

llvm-svn: 229787
2015-02-19 00:08:27 +00:00
Eric Christopher beffc4e84f Remove all use of getDrvInterface off of NVPTXSubtarget and clean
up code accordingly. Delete code that was checking for all cases
of an enum.

llvm-svn: 229786
2015-02-19 00:08:23 +00:00
Eric Christopher 6aad8b1801 Migrate the NVPTX backend asm printer to a per function subtarget.
This involved moving two non-subtarget dependent features (64-bitness
and the driver interface) to the NVPTX target machine and updating
the uses (or migrating around the subtarget use for ease of review).
Otherwise use the cached subtarget or create a default subtarget
based on the TargetMachine cpu and feature string for the module
level assembler emission.

llvm-svn: 229785
2015-02-19 00:08:14 +00:00
Marek Olsak 9b8f32eed1 R600/SI: Fix READLANE and WRITELANE lane select for VI
VOP2 declares vsrc1, but VOP3 declares src1.
We can't use the same "ins" if the operands have different names in VOP2
and VOP3 encodings.

This fixes a hang in geometry shaders which spill M0 on VI.
(BTW it doesn't look like M0 needs spilling and the spilling seems
duplicated 3 times)

llvm-svn: 229752
2015-02-18 22:12:45 +00:00
Marek Olsak 8eeebcccb5 R600/SI: Simplify verification of AMDGPU::OPERAND_REG_INLINE_C
llvm-svn: 229751
2015-02-18 22:12:41 +00:00
Marek Olsak b8c818337d R600/SI: Remove explicit VOP operand checking
This should be handled by the OperandType checking.

llvm-svn: 229750
2015-02-18 22:12:37 +00:00
Jozef Kolek 3c6724f442 [mips][microMIPS] Enable usage of ADDU16 and SUBU16 by the code generator
Differential Revision: http://reviews.llvm.org/D7609

llvm-svn: 229706
2015-02-18 17:33:56 +00:00
Jozef Kolek 1fd6548297 [mips][microMIPS] Implement JALX instruction
Differential Revision: http://reviews.llvm.org/D5047

llvm-svn: 229702
2015-02-18 17:15:48 +00:00
Daniel Sanders 1779314e3c [mips] Add backend support for Mips32r[35] and Mips64r[35].
Summary:
These ISAs didn't add any instructions, so they are almost identical to
Mips32r2 and Mips64r2. Even the ELF e_flags are the same; however, the ISA
revision in .MIPS.abiflags is 3 or 5 respectively instead of 2.

Reviewers: vmedic

Reviewed By: vmedic

Subscribers: tomatabacu, llvm-commits, atanasyan

Differential Revision: http://reviews.llvm.org/D7381

llvm-svn: 229695
2015-02-18 16:24:50 +00:00
Kit Barton 298beb5e86 This patch adds the VSX logical instructions introduced in the Power ISA 2.07. It also removes the added complexity that favors VMX versions of the three instructions.
Phabricator review: http://reviews.llvm.org/D7616

Committing on Nemanja's behalf.

llvm-svn: 229694
2015-02-18 16:21:46 +00:00
Tom Stellard 1ca873bbc5 R600/SI: Don't set isCodeGenOnly = 1 on all instructions
We only need to set this on pseudo instructions which won't
be used by the assembler.

llvm-svn: 229689
2015-02-18 16:08:17 +00:00
Tom Stellard c34c37ae66 R600/SI: Add missing VOP1 instructions
llvm-svn: 229688
2015-02-18 16:08:15 +00:00
Tom Stellard 894b9883f4 R600/SI: Add missing VOP2 instructions
llvm-svn: 229687
2015-02-18 16:08:14 +00:00
Tom Stellard 0c0008cb6e R600/SI: Add definition for S_CBRANCH_G_FORK
llvm-svn: 229686
2015-02-18 16:08:13 +00:00
Tom Stellard ce449ade7e R600/SI: Add missing SOP1 instructions
llvm-svn: 229685
2015-02-18 16:08:11 +00:00
Tom Stellard ee21faa029 R600/SI: Refactor SOP2 definitions
llvm-svn: 229684
2015-02-18 16:08:09 +00:00
Vasileios Kalintiris 611cb70b83 [mips] Avoid redundant sign extension of the result of binary bitwise instructions.
Reviewers: dsanders

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D7581

llvm-svn: 229675
2015-02-18 14:57:05 +00:00
Benjamin Kramer 6ca8992018 X86: Use bitset to manage a bag of bits. NFC.
Doesn't matter in terms of memory usage or perf here, but it's a neat
simplification.

llvm-svn: 229672
2015-02-18 14:10:44 +00:00
Toma Tabacu 8874eac5e6 [mips] [IAS] Fix using .cpsetup with local labels (PR22518).
Summary:
Parse for an MCExpr instead of an Identifier and use the symbol for relocations, not just the symbol's name.

This fixes errors when using local labels in .cpsetup (PR22518).

Reviewers: dsanders

Reviewed By: dsanders

Subscribers: seanbruno, emaste, llvm-commits

Differential Revision: http://reviews.llvm.org/D7697

llvm-svn: 229671
2015-02-18 13:46:53 +00:00
Chandler Carruth bbb377c3a1 [x86] Tighten the assertions to document that canonicalization has
actually removed all but a *very* small number of choices for v2i64.
Also remove dead code handling cases that simply cannot arise.

llvm-svn: 229670
2015-02-18 11:46:29 +00:00
Chandler Carruth 811f0ee8c1 [x86] Switch an if which is trivially true to an assert. NFC
llvm-svn: 229669
2015-02-18 11:46:27 +00:00
Chandler Carruth 8f3e585b17 [x86] Remove some more 'bit' nomenclature from the generic shift
lowering.

llvm-svn: 229668
2015-02-18 11:46:23 +00:00
Chandler Carruth 672a98ea28 [x86] Fold together the two shift lowering strategies. They were doing
quite literally the same work; we just need to special-case the >64-bit
element shift code emission to emit the byte shift instructions and
offsets. This also makes reasoning about each of the vector lowering
strategies easier as we don't have to remember to use both forms.

llvm-svn: 229662
2015-02-18 10:40:38 +00:00
Bradley Smith 26c9922a59 [ARM] Add missing M/R class CPUs
Add some of the missing M and R class Cortex CPUs, namely:

Cortex-M0+ (called Cortex-M0plus for GCC compatibility)
Cortex-M1
SC000
SC300
Cortex-R5

llvm-svn: 229660
2015-02-18 10:33:30 +00:00
Ulrich Weigand b7e5909a42 [SystemZ] Clean up warning
Removed (unreachable) default case in switch to clean up warning:

lib/Target/SystemZ/SystemZISelLowering.cpp:1974:5:
error: default label in switch which covers all enumeration values
[-Werror,-Wcovered-switch-default]

llvm-svn: 229658
2015-02-18 09:42:23 +00:00
Chandler Carruth 48cc6c623a [x86] Refactor the bit shift code the same as I just did the byte shift
code.

While this didn't have the miscompile (it used MatchLeft consistently)
it missed some cases where it could use right shifts. I've added a test
case Craig Topper came up with to exercise the right shift matching.

This code is really identical between the two. I'm going to merge them
next so that we don't keep two copies of all of this logic.

llvm-svn: 229655
2015-02-18 09:19:58 +00:00
Ulrich Weigand 7db6918e2b [SystemZ] Support all TLS access models - CodeGen part
The current SystemZ back-end only supports the local-exec TLS access model.
This patch adds all required CodeGen support for the other TLS models, which
means in particular:

- Expand initial-exec TLS accesses by loading TLS offsets from the GOT
  using @indntpoff relocations.

- Expand general-dynamic and local-dynamic accesses by generating the
  appropriate calls to __tls_get_offset.  Note that this routine has
  a non-standard ABI and requires loading the GOT pointer into %r12,
  so the patch also adds support for the GLOBAL_OFFSET_TABLE ISD node.

- Add a new platform-specific optimization pass to remove redundant
  __tls_get_offset calls in the local-dynamic model (modeled after
  the corresponding X86 pass).

- Add test cases verifying all access models and optimizations.
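
For context, a source-level example of an access that exercises the
general-dynamic model when built as position-independent code (illustrative
only; the exact sequence emitted is target-specific):

    // Accessing an external thread-local from a shared object goes through
    // __tls_get_offset under the general-dynamic model.
    extern thread_local int Counter;

    int bump() { return ++Counter; }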

llvm-svn: 229654
2015-02-18 09:13:27 +00:00
Ulrich Weigand 7bdd7c2346 [SystemZ] Support all TLS access models - MC part
The current SystemZ back-end only supports the local-exec TLS access model.
This patch adds all required MC support for the other TLS models, which
means in particular:

- Support additional relocation types for
  Initial-exec model: R_390_TLS_IEENT
  Local-dynamic-model: R_390_TLS_LDO32, R_390_TLS_LDO64,
                       R_390_TLS_LDM32, R_390_TLS_LDM64, R_390_TLS_LDCALL
  General-dynamic model: R_390_TLS_GD32, R_390_TLS_GD64, R_390_TLS_GDCALL

- Support assembler syntax to generate additional relocations
  for use with __tls_get_offset calls:
    :tls_gdcall:
    :tls_ldcall:

The patch also adds a new test to verify fixups and relocations,
and removes the (already unused) FK_390_PLT16DBL/FK_390_PLT32DBL
fixup kinds.

llvm-svn: 229652
2015-02-18 09:11:36 +00:00
Elena Demikhovsky 714f23bcdb AVX-512: Added support for FP instructions with embedded rounding mode.
By Asaf Badouh <asaf.badouh@intel.com>

llvm-svn: 229645
2015-02-18 07:59:20 +00:00
Chandler Carruth 55553f5299 [x86] Rewrite the byte shift detection to not use boolean variables to
track state.

I didn't like this in the code review because the pattern tends to be
error prone, but I didn't see a clear way to rewrite it. Turns out that
there were bugs here, I found them when fuzz testing our shuffle
lowering for correctness on x86.

The core of the problem is that we need to consistently test all our
preconditions for the same directionality of shift and the same input
vector. Instead, formulate this as two predicates (one doesn't depend on
the input in any way), pass things like the directionality and input
vector as inputs, and loop over the alternatives.

This fixes a pattern of very rare miscompiles coming out of this code.
Turned up roughly 4 out of every 1 million v8 shuffles in my fuzz
testing. The new code is over half a million test runs with no failures
yet. I've also fuzzed every other function in the lowering code with
over 3.5 million test cases and not discovered any other miscompiles.
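
A structural sketch of the reformulation (identifiers invented; this is not
the lowering code itself): the same pair of predicates is applied to every
direction/input alternative instead of threading state through booleans.

    struct Candidate {
      bool Left;  // try matching a left byte shift
      int Input;  // which shuffle input (0 or 1) the kept lanes come from
    };

    // MaskFits does not depend on the inputs; InputFits checks the chosen input.
    template <typename Pred1, typename Pred2>
    int pickByteShift(Pred1 MaskFits, Pred2 InputFits) {
      const Candidate Cands[] = {{true, 0}, {true, 1}, {false, 0}, {false, 1}};
      for (int I = 0; I < 4; ++I)
        if (MaskFits(Cands[I]) && InputFits(Cands[I]))
          return I;   // all preconditions agree on direction and input
      return -1;      // no byte-shift lowering applies
    }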

llvm-svn: 229642
2015-02-18 07:13:48 +00:00
Craig Topper b324e43aed [X86] Remove AVX2 and SSE2 pslldq and psrldq intrinsics. We can represent them in IR with vector shuffles now. All their uses have been removed from clang in favor of shuffles.
llvm-svn: 229640
2015-02-18 06:24:44 +00:00
Matt Arsenault 0ba644b66b R600/SI: Rename dst encoding field to be consistent with docs
The docs call this vdst instead of just dst.

llvm-svn: 229614
2015-02-18 02:15:37 +00:00
Matt Arsenault e3dbcf6656 R600/SI: Consistently capitalize encoding field names
Some formats capitalized these, but most didn't. Change
them all to be consistently lowercase.

Now, non-encoding fields and convenience bits are capitalized.
Also remove weird looking empty line in some of the formats.

llvm-svn: 229613
2015-02-18 02:15:35 +00:00
Matt Arsenault 1ecac06a6f R600/SI: Set noNamedPositionallyEncodedOperands
llvm-svn: 229612
2015-02-18 02:15:32 +00:00
Matt Arsenault 096ec1e10c R600/SI: Fix src1_modifiers for class instructions
src1 doesn't have modifiers, but the operand was missing,
resulting in an encoding build error when all fields
are required.

llvm-svn: 229611
2015-02-18 02:15:30 +00:00
Matt Arsenault 65fa1c425d R600/SI: Fix not setting clamp / omod for v_cndmask_b32_e64
Rename the multiclass since it now applies to the output
modifiers as well.

llvm-svn: 229610
2015-02-18 02:15:27 +00:00
Matt Arsenault 284d7dfb53 R600: Fix operand encoding error
llvm-svn: 229609
2015-02-18 02:10:42 +00:00
Matt Arsenault 1991f5e40b R600/SI: Fix encoding error from glc bit on VI SMRD instructions
llvm-svn: 229608
2015-02-18 02:10:40 +00:00
Matt Arsenault e6c5241814 R600/SI: Fix operand encoding for flat instructions
llvm-svn: 229607
2015-02-18 02:10:37 +00:00
Matt Arsenault 07e3bb153f R600/SI: Fix error from vdst on no return atomics
Set the ignored field to 0 so we can enable
noNamedPositionallyEncodedOperands.

llvm-svn: 229606
2015-02-18 02:10:35 +00:00
Matt Arsenault caa1288fff R600/SI: Add missing offset operand to buffer bothen
llvm-svn: 229605
2015-02-18 02:04:38 +00:00
Matt Arsenault 2ad8bab7ee R600/SI: Add missing soffset operand to global atomics
llvm-svn: 229604
2015-02-18 02:04:35 +00:00
Matt Arsenault 3c34ae293c R600/SI: Fix brace indentation
llvm-svn: 229603
2015-02-18 02:04:31 +00:00
Eric Christopher 8af49b3214 Make the Mips AsmPrinter independent of global subtarget
initialization. Initialize the subtarget once per function and
migrate EmitStartOfAsmFile to either use calls on the
TargetMachine or get information from the subtarget we'd use
for assembling.

The top-level-ness of the MIPS attribute output for assembly is,
by nature, contrary to how we'd want to do this for an LTO
situation where we have multiple CPU architectures, so this
solution is good enough for now.

llvm-svn: 229596
2015-02-18 01:01:57 +00:00
Eric Christopher bbe6ff50f3 Unify selectMipsCPU implementations.
llvm-svn: 229595
2015-02-18 00:55:06 +00:00
Andrea Di Biagio e7b58ee555 [X86][FastIsel] Teach how to select scalar integer to float/double conversions.
This patch teaches fast-isel how to select a (V)CVTSI2SSrr for an integer to 
float conversion, and how to select a (V)CVTSI2SDrr for an integer to double
conversion.

Added test 'fast-isel-int-float-conversion.ll'.

Differential Revision: http://reviews.llvm.org/D7698
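
A tiny source-level example of the conversions this lets fast-isel handle
directly (function names are illustrative):

    float toFloat(int X) { return static_cast<float>(X); }    // (V)CVTSI2SSrr
    double toDouble(int X) { return static_cast<double>(X); } // (V)CVTSI2SDrr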

llvm-svn: 229589
2015-02-17 23:40:58 +00:00
Rafael Espindola df19519800 Add r228939 back with a fix.
The problem in the original patch was not switching back to .text after printing
an eh table.

Original message:

On ELF, put PIC jump tables in a non executable section.

Fixes PR22558.

llvm-svn: 229586
2015-02-17 23:34:51 +00:00
Sanjay Patel e951a3839a rename variables again because these tables also deal with stores; NFC
Suggestion by Simon Pilgrim

llvm-svn: 229574
2015-02-17 22:38:06 +00:00
Simon Pilgrim 1d89a02abb [X86][SSE] Generalised unpckl/unpckh shuffle matching
Added commuted unpckl/unpckh shuffle matching patterns as many cases containing undefined lanes fail to commute by themselves.

Differential Revision: http://reviews.llvm.org/D7564

llvm-svn: 229571
2015-02-17 22:24:32 +00:00
Sanjay Patel 1a20fdf36f Add comment to explain a non-obvious setting; NFC.
This is paraphrased from Simon Pilgrim's comment in:
http://reviews.llvm.org/D7492

llvm-svn: 229566
2015-02-17 22:09:54 +00:00
Sanjay Patel 203ee500e9 remove function names from comments; NFC
llvm-svn: 229558
2015-02-17 21:55:20 +00:00
Sanjay Patel 52f9f7c0f3 replace meaningless variable names; NFCI
llvm-svn: 229549
2015-02-17 21:37:28 +00:00
Tom Stellard 7b3aa88ac1 R600/SI: Fix asan errors in SIFoldOperands
We were trying to fold into implicit uses, which led to out-of-bounds
access of the MCInstrDesc::OpInfo array.

llvm-svn: 229533
2015-02-17 20:11:54 +00:00
Sanjay Patel b811c1d6a5 prevent folding a scalar FP load into a packed logical FP instruction (PR22371)
Change the memory operands in sse12_fp_packed_scalar_logical_alias from scalars to vectors. 
That's what the hardware packed logical FP instructions define: 128-bit memory operands.
There are no scalar versions of these instructions...because this is x86.

Generating the wrong code (folding a scalar load into a 128-bit load) is still possible
using the peephole optimization pass and the load folding tables. We won't completely
solve this bug until we either fix the lowering in fabs/fneg/fcopysign and any other
places where scalar FP logic is created or fix the load folding in foldMemoryOperandImpl()
to make sure it isn't changing the size of the load.

Differential Revision: http://reviews.llvm.org/D7474
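
A source-level shape of the hazard, for illustration (not a test from the
patch): fabs on a scalar is implemented with a packed logical instruction, so
folding the 8-byte load of *P into that instruction's 128-bit memory operand
would over-read.

    #include <cmath>

    double absOf(const double *P) { return std::fabs(*P); }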

llvm-svn: 229531
2015-02-17 20:08:21 +00:00
Eric Christopher a49d68e078 Make the ARM AsmPrinter independent of global subtarget
initialization. Initialize the subtarget once per function and
migrate Emit{Start|End}OfAsmFile to either use attributes on the
TargetMachine or get information from the subtarget we'd use
for assembling. One bit (getISAEncoding) touched the general
AsmPrinter and the debug output. Handle this one by passing
the function for the subprogram down and updating all callers
and users.

The top-level-ness of the ARM attribute output for assembly is,
by nature, contrary to how we'd want to do this for an LTO
situation where we have multiple CPU architectures, so this
solution is good enough for now.

llvm-svn: 229528
2015-02-17 20:02:32 +00:00
Tom Stellard bc3776803b R600/SI: Extend private extload pattern to include zext loads
llvm-svn: 229507
2015-02-17 16:36:00 +00:00
Benjamin Kramer 6cd780ff21 Prefer SmallVector::append/insert over push_back loops.
Same functionality, but hoists the vector growth out of the loop.
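
A before/after sketch (names are illustrative):

    #include "llvm/ADT/SmallVector.h"

    void copyAll(const llvm::SmallVectorImpl<int> &Src,
                 llvm::SmallVectorImpl<int> &Dst) {
      // Before: one growth check per element.
      //   for (int V : Src)
      //     Dst.push_back(V);
      // After: a single append reserves and copies once.
      Dst.append(Src.begin(), Src.end());
    }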

llvm-svn: 229500
2015-02-17 15:29:18 +00:00
Andrea Di Biagio eb97f92489 [X86] Silence -Wsign-compare warnings.
GCC 4.8 reported two new warnings due to comparisons
between signed and unsigned integer expressions. The new warnings were
accidentally introduced by revision 229480.
Added explicit casts to silence the warnings. No functional change intended.

llvm-svn: 229488
2015-02-17 11:20:11 +00:00
Elena Demikhovsky ba84672519 AVX-512: changes in intel_ocl_bi calling conventions
- added mask types v8i1 and v16i1 to possible function parameters
- enabled passing 512-bit vectors in standard CC
- added a test for KNL intel_ocl_bi conventions

llvm-svn: 229482
2015-02-17 09:20:12 +00:00
Michael Kuperstein ff5acaf50c [X86] Combine vector anyext + and into a vector zext
Vector zext tends to get legalized into a vector anyext, represented as a vector shuffle with an undef vector + a bitcast, that gets ANDed with a mask that zeroes the undef elements.
Combine this into an explicit shuffle with a zero vector instead. This allows shuffle lowering to match it as a zext, instead of matching it as an anyext and emitting an explicit AND.
This combine only covers a subset of the cases, but it's a start.

Differential Revision: http://reviews.llvm.org/D7666

llvm-svn: 229480
2015-02-17 08:22:51 +00:00
Eric Christopher 5c0e009d3a Make the PowerPC AsmPrinter independent of global subtarget
initialization. Initialize the subtarget once per function and
migrate EmitStartOfAsmFile to either use attributes on the
TargetMachine or get information from all of the various
subtargets.

llvm-svn: 229475
2015-02-17 07:21:21 +00:00
Eric Christopher 75dc3904a5 Add a FIXME to move IsLittleEndian to the target machine.
llvm-svn: 229472
2015-02-17 06:45:17 +00:00
Eric Christopher fee6aaf683 Move ABI handling and 64-bitness to the PowerPC target machine.
This required changing how the computation of the ABI is handled
and how some of the checks for ABI/target are done.

llvm-svn: 229471
2015-02-17 06:45:15 +00:00
Chandler Carruth 55db07016e [x86] Teach the unpack lowering to try wider element unpacks.
This allows it to match still more places where previously we would have
to fall back on floating point shuffles or other more complex lowering
strategies.

I'm hoping to replace some of the hand-rolled unpack matching with this
routine is it gets more and more clever.

llvm-svn: 229463
2015-02-17 02:12:24 +00:00
Hal Finkel 5cedafb8cd [PowerPC] Support non-direct-sub/superclass VSX copies
Our register allocation has become better recently, it seems, and is now
starting to generate cross-block copies into inflated register classes. These
copies are not transformed into subregister insertions/extractions by the
PPCVSXCopy class, and so need to be handled directly by
PPCInstrInfo::copyPhysReg. The code to do this was *almost* there, but not
quite (it was unnecessarily restricting itself to only the direct
sub/super-register-class case (not copying between, for example, something in
VRRC and the lower-half of VSRC which are super-registers of F8RC).

Triggering this behavior manually is difficult; I'm including two
bugpoint-reduced test cases from the test suite.

llvm-svn: 229457
2015-02-16 23:46:30 +00:00
Simon Atanasyan 79ba8407d2 [Mips] Add .MIPS.options section descriptor kinds enumeration
No functional changes.

llvm-svn: 229452
2015-02-16 22:59:29 +00:00
Ahmed Bougacha bf2b90e92d [ARM] Remove unused declaration. NFC.
GlobalMerge was moved to lib/CodeGen a while ago, and is no longer
called "ARMGlobalMerge".

llvm-svn: 229448
2015-02-16 22:30:08 +00:00
Cameron McInally c5764cbe4e [AVX512] Make 512b vector floating point rounds legal on AVX512.
llvm-svn: 229445
2015-02-16 22:15:42 +00:00
Simon Pilgrim b2c00f3286 [X86][SSE] Add SSE MOVQ instructions to SSEPackedInt domain
Patch to explicitly add the SSE MOVQ (rr,mr,rm) instructions to SSEPackedInt domain - prevents a number of costly domain switches.

Differential Revision: http://reviews.llvm.org/D7600

llvm-svn: 229439
2015-02-16 21:50:56 +00:00
Craig Topper 49df44e2e2 [X86] Remove the multiply by 8 that goes into the shift constant for X86ISD::VSHLDQ and X86ISD::VSRLDQ. This simplifies the pattern matching in isel and allows these nodes to become the patterns embedded in the instruction.
llvm-svn: 229431
2015-02-16 20:52:07 +00:00
Craig Topper 44026efa88 [X86] Remove x86.avx2.psll.dq.bs and x86.avx2.psrl.dq.bs intrinsics.
llvm-svn: 229430
2015-02-16 20:51:59 +00:00
Matthias Braun d6b108e445 ARM: Transfer kill flag when lowering VSTMQIA to VSTMDIA.
llvm-svn: 229425
2015-02-16 19:34:30 +00:00
Aaron Ballman da9501b25c We require MSVC 1800 as our minimum, so these checks can safely go away; NFC. (It seems this code has been copy/pasted around, unfortunately.)
llvm-svn: 229417
2015-02-16 18:34:57 +00:00
Andrew Trick 05938a5481 AArch64: Safely handle the incoming sret call argument.
This adds a safe interface to the machine independent InputArg struct
for accessing the index of the original (IR-level) argument. When a
non-native return type is lowered, we generate the hidden
machine-level sret argument on-the-fly. Before this fix, we were
representing this argument as OrigArgIndex == 0, which is an outright
lie. In particular this crashed in the AArch64 backend where we
actually try to access the type of the original argument.

Now we use a sentinel value for machine arguments that have no
original argument index. AArch64, ARM, Mips, and PPC now check for this
case before accessing the original argument.

Fixes <rdar://19792160> Null pointer assertion in AArch64TargetLowering
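
A minimal sketch of the sentinel idea (struct and names simplified here, not
the exact InputArg interface):

    #include <cassert>

    struct InputArgSketch {
      static constexpr unsigned NoArgIndex = ~0u; // "no original IR argument"
      unsigned OrigArgIndex = NoArgIndex;

      bool isOrigArg() const { return OrigArgIndex != NoArgIndex; }
      unsigned getOrigArgIndex() const {
        assert(isOrigArg() && "hidden sret argument has no IR-level argument");
        return OrigArgIndex;
      }
    };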

llvm-svn: 229413
2015-02-16 18:10:47 +00:00
Chandler Carruth 1e57e2deb8 [x86] Add a generic unpack-targeted lowering technique. This can be used
to generically lower blends and is particularly nice because it is
available frome SSE2 onward. This removes a lot of the remaining domain
crossing blends in SSE2 code.

I'm hoping to replace some of the "interleaved" lowering hacks with
something closer to this which should be more principled. First, this
needs to learn how to detect and use other interleavings besides that of
the natural type provided. That will be a follow-up patch though.

llvm-svn: 229378
2015-02-16 12:28:18 +00:00
Chandler Carruth c802085b3a [x86] Add initial basic support for forming blends of v16i8 vectors.
This blend instruction is ... really lame. The register usage is insane.
As a consequence this is probably only *barely* better than 2 pshufbs
followed by a por, and that mostly because it only has to read from
a single memory location.

However, this doesn't fix as much as I kind of expected, so more to go.
Pretty sure that the ordering and delegation of v16i8 is just really,
really bad.

llvm-svn: 229373
2015-02-16 10:58:23 +00:00
Chandler Carruth e63bbd97a7 [x86] Switch my usage of VariadicFunction to a "normal" variadic
template now that we can use them.

This is, of course, horribly ugly because of the required recursive
formulation. Suggestions for making it less ugly welcome.

llvm-svn: 229367
2015-02-16 09:59:48 +00:00
Craig Topper 7e8dcef094 [X86] Add support for lowering shuffles to 256-bit PALIGNR instruction.
llvm-svn: 229359
2015-02-16 06:29:06 +00:00
Chandler Carruth 87e580a659 [x86] Teach the 128-bit vector shuffle lowering routines to take
advantage of the existence of a reasonable blend instruction.

The 256-bit vector shuffle lowering has leveraged the general technique
of decomposed shuffles and blends for quite some time, but this never
made it back into the 128-bit code, and there are a large number of
patterns where this is substantially better. For example, this removes
almost all domain crossing in vector shuffles that involve some blend
and some permutation with SSE4.1 and later. See the massive reduction
in 'shufps' for integer test cases in this commit.

This isn't perfect yet for a few reasons:

1) The v8i16 shuffle lowering continues to plague me. We don't always
   form an unpack-based blend when that would be better. But the wins
   pretty drastically outstrip the losses here.
2) The v16i8 shuffle lowering is just a disaster here. I never went and
   implemented blend support here for some terrible reason. I'll do
   that next probably. I've not updated it for now.

More variations on this technique are coming as well -- we don't
shuffle-into-unpack or shuffle-into-palignr, both of which would also be
profitable.

Note that some test cases grow significantly in the number of
instructions, but I expect to actually be faster. We use
pshufd+pshufd+blendw instead of a single shufps, but the pshufd's are
very likely to pipeline well (two ports on most modern intel chips) and
the blend is a *very* fast instruction. The domain switch penalty will
essentially always be more than a blend instruction, which is the only
increase in tree height.

llvm-svn: 229350
2015-02-16 01:52:02 +00:00
Aaron Ballman f9a1897c72 Removing LLVM_DELETED_FUNCTION, as MSVC 2012 was the last reason for requiring the macro. NFC; LLVM edition.
llvm-svn: 229340
2015-02-15 22:54:22 +00:00
Aaron Ballman b46962fe5d Removing LLVM_EXPLICIT, as MSVC 2012 was the last reason for requiring the macro. NFC; LLVM edition.
llvm-svn: 229335
2015-02-15 22:00:20 +00:00
Simon Pilgrim 2a7bedb73e Coding style fixes to recent patches. NFC.
llvm-svn: 229312
2015-02-15 14:19:29 +00:00
Simon Pilgrim 00bd79d794 [X86][AVX2] vpslldq/vpsrldq byte shifts for AVX2
This patch refactors the existing lowerVectorShuffleAsByteShift function to add support for 256-bit vectors on AVX2 targets.

It also fixes a tablegen issue that prevented the lowering of vpslldq/vpsrldq vec256 instructions.

Differential Revision: http://reviews.llvm.org/D7596

llvm-svn: 229311
2015-02-15 13:19:52 +00:00
Chandler Carruth bf0fb06e0d [x86] Teach the decomposed shuffle/blend lowering to use an early blend
when that will allow it to lower with a single permute instead of
multiple permutes.

It tries to detect when it will only have to do a single permute in
either case to maximize folding of loads and such.

This cuts a *lot* of the avx2 shuffle permute counts in half. =]

llvm-svn: 229309
2015-02-15 12:42:15 +00:00
Chandler Carruth 75d9a97569 [x86] Teach the shuffle mask equivalence test to look through build
vectors and detect equivalent inputs.

This lets the code match unpck-style instructions when only one of the
inputs is lined up but the other input is a splat, and so which lanes we
pull from doesn't matter. Today, this doesn't really happen, but just by
accident. I have a patch that normalizes how we shuffle splats, and with
that patch this will be necessary for a lot of the mask equivalence
tests to work.

I don't really know how to write a test case for this specific change
until the other change lands though.

llvm-svn: 229307
2015-02-15 12:07:55 +00:00
Chandler Carruth 4fe214b1f2 [x86] Tweak the ordering of unpack matching vs. element insertion, and
don't try to do element insertion for non-zero-index floating point
vectors.

We don't have any useful patterns or lowering for element insertion into
high elements of a floating point vector, and the generic shuffle
lowering will end up being better -- namely it will fall back to unpck.
But we should try to handle other forms of element insertion before
matching unpck patterns.

While this doesn't matter much right now, I'm working on a patch that
makes unpck matching much more powerful, and that patch will break
without this re-ordering.

llvm-svn: 229306
2015-02-15 12:01:14 +00:00
Chandler Carruth 56e0ceda0d [x86] Stop shuffling zero vectors. =]
I was somewhat surprised this pattern really came up, but it does. It
seems better to just directly handle it than try to special case every
place where we end up forming a shuffle that devolves to a shuffle of
a zero vector.

llvm-svn: 229301
2015-02-15 10:34:52 +00:00
Chandler Carruth 3d272daaed [x86] Use a more helpful parenthesizing of these comparisons. Silences
a -Wparentheses complaint from GCC.

llvm-svn: 229300
2015-02-15 10:15:20 +00:00
Chandler Carruth 62558c1d4d [x86] When splitting 256-bit vectors into 128-bit vectors, don't extract
subvectors from buildvectors. That doesn't really make any sense and it
breaks all of the down-stream matching of buildvectors to cleverly lower
shuffles.

With this, we now get the shift-based lowering of 256-bit vector
shuffles with AVX1 when we split them into 128-bit vectors. We also do
much better on the zero-extension patterns, although there remains quite
a bit of room for improvement here.

llvm-svn: 229299
2015-02-15 10:12:02 +00:00
Chandler Carruth a6f8a3661c [x86] Make computing the zeroable elements slightly more powerful, at
least in theory.

I don't actually have a test case that benefits from this, but
theoretically, it could come up, and I don't want to try to think about
whether this is the culprit or something else is, so I'd rather just
make this code powerful. =/ Makes me sad that I can't really test it
though.

llvm-svn: 229298
2015-02-15 09:33:36 +00:00
Chandler Carruth 0ddfe0c7c5 [x86] Add a slight variation on some of the other generic shuffle
lowerings -- one which decomposes into an initial blend followed by
a permute.

Particularly on newer chips, blends are handled independently of
shuffles and so this is much less bottlenecked on the single port that
floating point shuffles are executed with on Intel.

I'll be adding this lowering to a bunch of other code paths in
subsequent commits to handle still more places where we can effectively
leverage blends when they're available in the ISA.

llvm-svn: 229292
2015-02-15 08:26:30 +00:00
Craig Topper 78c424dfca [X86] Add assembly parser support for mnemonic aliases for AVX-512 vpcmp instructions.
llvm-svn: 229287
2015-02-15 07:13:48 +00:00
Craig Topper f02ad93270 [X86] Add assembler predicates for the rest of the AVX512 feature flags. This makes the assembly matching consistent across all AVX512 instructions. Without this we were allowing some AVX512 instructions to be parsed always, but not the foundation instructions.
llvm-svn: 229280
2015-02-15 04:54:55 +00:00
Craig Topper a3776de242 [X86] Add the remaining 11 possible exact ModRM formats. This makes their encodings linear which can then be used to simplify some other code.
llvm-svn: 229279
2015-02-15 04:16:44 +00:00
Simon Pilgrim 31457d54f7 [X86][XOP] Enable commutation for XOP instructions
Patch to allow XOP instructions (integer comparison and integer multiply-add) to be commuted. The comparison instructions sometimes require the compare mode to be flipped but the remaining instructions can use default commutation modes.

This patch also sets the SSE domains of all the XOP instructions.

Differential Revision: http://reviews.llvm.org/D7646

llvm-svn: 229267
2015-02-14 22:40:46 +00:00
Craig Topper 43860838dc [X86] Improve parsing support AVX/SSE floating point compare instruction mnemonic aliases. They'll now print with the alias the parser received instead of converting to the explicit immediate form.
llvm-svn: 229266
2015-02-14 21:54:03 +00:00
Duncan P. N. Exon Smith 025c0ad74c Target: Canonicalize access to function attributes, NFC
Canonicalize access to function attributes to use the simpler API.

getAttributes().getAttribute(AttributeSet::FunctionIndex, Kind)
  => getFnAttribute(Kind)

getAttributes().hasAttribute(AttributeSet::FunctionIndex, Kind)
  => hasFnAttribute(Kind)
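
A minimal usage sketch (the attribute name is just an example):

    #include "llvm/IR/Function.h"

    bool useSoftFloat(const llvm::Function &F) {
      // Old form:
      //   F.getAttributes().hasAttribute(llvm::AttributeSet::FunctionIndex,
      //                                  "use-soft-float")
      return F.hasFnAttribute("use-soft-float") &&
             F.getFnAttribute("use-soft-float").getValueAsString() == "true";
    }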

llvm-svn: 229261
2015-02-14 15:36:52 +00:00
Duncan P. N. Exon Smith b5054333ec NVPTX: Canonicalize access to function attributes, NFC
Canonicalize access to function attributes to use the simpler API.

getAttributes().getAttribute(AttributeSet::FunctionIndex, Kind)
  => getFnAttribute(Kind)

getAttributes().hasAttribute(AttributeSet::FunctionIndex, Kind)
  => hasFnAttribute(Kind)

llvm-svn: 229260
2015-02-14 15:35:43 +00:00
Simon Pilgrim b0bac23fcd Line ending fix. NFC.
llvm-svn: 229256
2015-02-14 13:27:53 +00:00
Chandler Carruth 003ed332bf Remove a variable only used in an assert and sink its initializer into
the assert. Fixes -Wunused-variable on non-asserts builds.

llvm-svn: 229250
2015-02-14 09:14:44 +00:00
Matt Arsenault 0bbcd8ba2f R600/SI: Implement correct f64 fdiv
This version passes the OpenCL conformance test.

llvm-svn: 229239
2015-02-14 04:30:08 +00:00
Matt Arsenault 044f1d19cf R600/SI: Use complex operand folding for div_scale
llvm-svn: 229238
2015-02-14 04:24:28 +00:00
Matt Arsenault 1bc9d95047 R600/SI: Fix implicit vcc operand to v_div_fmas_*
This should allow finally fixing the f64 fdiv implementation.

Test is disabled for VI since there seems to be a problem with one
of the buffer load instructions on it.

llvm-svn: 229236
2015-02-14 04:22:00 +00:00
Matt Arsenault 6e26b8d854 R600/SI: Fix schedule model for v_div_scale_{f32|f64}
llvm-svn: 229235
2015-02-14 04:03:18 +00:00
Matt Arsenault 35733e2dec R600/SI: Really fix size of VReg_1
llvm-svn: 229234
2015-02-14 03:54:32 +00:00
Matt Arsenault 1bcc8cba5a R600/SI: Rename encoding field to match docs for VOP3b
llvm-svn: 229233
2015-02-14 03:54:29 +00:00
Matt Arsenault 31ec598a2a R600/SI: Fix not encoding src2 for v_div_scale_{f32|f64}
This apparently got lost in the VI changes.

llvm-svn: 229230
2015-02-14 03:40:35 +00:00
Matt Arsenault 692acf1438 R600/SI: Fix VOP3b encoding on VI
llvm-svn: 229228
2015-02-14 03:02:23 +00:00
Matt Arsenault 95546b46ab R600/SI: Fix phys reg copies in SIFoldOperands
llvm-svn: 229227
2015-02-14 02:55:57 +00:00
Matt Arsenault 9998168982 R600/SI: Fix copies from SGPR to VCC
This shows up without optimizations when vcc is required
to be used.

llvm-svn: 229226
2015-02-14 02:55:56 +00:00
Matt Arsenault 834b1aa806 R600/SI: Add hack to copy from a VGPR to VCC
This hopefully should be fixed when VReg_1 is removed.

llvm-svn: 229225
2015-02-14 02:55:54 +00:00
Duncan P. N. Exon Smith 5bedaf934f PowerPC: Canonicalize access to function attributes, NFC
Canonicalize access to function attributes to use the simpler API.

getAttributes().getAttribute(AttributeSet::FunctionIndex, Kind)
  => getFnAttribute(Kind)

getAttributes().hasAttribute(AttributeSet::FunctionIndex, Kind)
  => hasFnAttribute(Kind)

llvm-svn: 229224
2015-02-14 02:54:07 +00:00
Matt Arsenault f417ff8f2a R600/SI: Fix size of VReg_1
This is really a 32-bit register; if we try to check its size,
we want 32 bits.

llvm-svn: 229223
2015-02-14 02:51:44 +00:00
Duncan P. N. Exon Smith 8480c87ce6 R600: Canonicalize access to function attributes, NFC
Canonicalize access to function attributes to use the simpler API.

getAttributes().getAttribute(AttributeSet::FunctionIndex, Kind)
  => getFnAttribute(Kind)

getAttributes().hasAttribute(AttributeSet::FunctionIndex, Kind)
  => hasFnAttribute(Kind)

llvm-svn: 229222
2015-02-14 02:45:45 +00:00
Duncan P. N. Exon Smith 2e75314352 Mips: Canonicalize access to function attributes, NFC
Canonicalize access to function attributes to use the simpler API.

getAttributes().getAttribute(AttributeSet::FunctionIndex, Kind)
  => getFnAttribute(Kind)

getAttributes().hasAttribute(AttributeSet::FunctionIndex, Kind)
  => hasFnAttribute(Kind)

llvm-svn: 229221
2015-02-14 02:37:48 +00:00
Duncan P. N. Exon Smith 2cff9e19a2 ARM: Canonicalize access to function attributes, NFC
Canonicalize access to function attributes to use the simpler API.

getAttributes().getAttribute(AttributeSet::FunctionIndex, Kind)
  => getFnAttribute(Kind)

getAttributes().hasAttribute(AttributeSet::FunctionIndex, Kind)
  => hasFnAttribute(Kind)

llvm-svn: 229220
2015-02-14 02:24:44 +00:00
Duncan P. N. Exon Smith 003bb7d96e AArch64: Canonicalize access to function attributes, NFC
Canonicalize access to function attributes to use the simpler API.

getAttributes().getAttribute(AttributeSet::FunctionIndex, Kind)
  => getFnAttribute(Kind)

getAttributes().hasAttribute(AttributeSet::FunctionIndex, Kind)
  => hasFnAttribute(Kind)

llvm-svn: 229218
2015-02-14 02:09:06 +00:00
Duncan P. N. Exon Smith 5975a703e6 X86: Canonicalize access to function attributes, NFC
Canonicalize access to function attributes to use the simpler API.

getAttributes().getAttribute(AttributeSet::FunctionIndex, Kind)
  => getFnAttribute(Kind)

getAttributes().hasAttribute(AttributeSet::FunctionIndex, Kind)
  => hasFnAttribute(Kind)

llvm-svn: 229214
2015-02-14 01:59:52 +00:00
Ahmed Bougacha 8f2b4f0be8 [X86] Factor out the CMOV pseudo definitions. NFCI.
llvm-svn: 229206
2015-02-14 01:36:53 +00:00
Matthias Braun 33cc10724d Revert "On ELF, put PIC jump tables in a non executable section."
This reverts commit r228939.

The commit broke something in the output of exception handling tables on
darwin x86-64.

llvm-svn: 229203
2015-02-14 01:16:54 +00:00
Eric Christopher b2a5fa98e4 Use the template method to grab the target specific subtarget.
llvm-svn: 229191
2015-02-14 00:09:46 +00:00
Eric Christopher fcd3d87ad8 The base pointer save offset can be computed at initialization time,
do so and fix up the calls.

llvm-svn: 229169
2015-02-13 22:48:53 +00:00
Eric Christopher a10d58dba8 Move the target machine variable so that it's initialized early
enough we can use it to initialize frame lowering.

llvm-svn: 229168
2015-02-13 22:48:51 +00:00
Eric Christopher e8dbfe1cf8 Stash the TargetMachine on the subtarget so we can access it later.
Clean up a subtarget function that has it passed in while we're at it.

llvm-svn: 229164
2015-02-13 22:23:04 +00:00
Eric Christopher a4ae213193 PPC LinkageSize can be computed at initialization time, do so.
llvm-svn: 229163
2015-02-13 22:22:57 +00:00
Sanjay Patel baa6bc378f [SSE/AVX] Use multiclasses to reduce the mass of scalar math patterns; NFCI
This takes the preposterous number of patterns in this section
that were last added to in r219033 down to just plain obnoxious.

With a little more work, we might get this down to just comical.

I've added more test cases to the existing file that checks these
patterns, but it seems that some of these patterns simply don't
exist with today's shuffle lowering.

llvm-svn: 229158
2015-02-13 21:52:42 +00:00
Sanjay Patel 34da52a894 fix typos; NFC
llvm-svn: 229155
2015-02-13 21:07:22 +00:00
Tom Stellard e1e4a2d310 R600/SI: Refactor SOP1 classes
llvm-svn: 229152
2015-02-13 21:02:37 +00:00
Tom Stellard 6c65e9a99a R600/SI: Lowercase register names
llvm-svn: 229151
2015-02-13 21:02:36 +00:00
Tom Stellard d09fa9cec8 R600/SI: Remove some unused TableGen classes
llvm-svn: 229150
2015-02-13 21:02:33 +00:00
Vasileios Kalintiris 99eeb8aae4 [mips] Refactor and simplify MipsSEDAGToDAGISel::selectIntAddrLSL2MM(). NFC.
Reviewers: dsanders

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D7618

llvm-svn: 229140
2015-02-13 19:14:22 +00:00
Vasileios Kalintiris 46963f6e73 [mips] Use isa<> instead of dyn_cast<> with unused value. NFC.
Reviewers: dsanders

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D7615

llvm-svn: 229138
2015-02-13 19:12:16 +00:00
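A minimal sketch of the cleanup, using a hypothetical helper and LoadInst as the checked type (illustrative, not the actual Mips code):

    #include "llvm/IR/Instructions.h"
    using namespace llvm;

    static bool feedsFromLoad(const Value *V) {
      // Before: if (dyn_cast<LoadInst>(V)) return true;  // returned pointer unused
      // After: isa<> makes the "only the check matters" intent explicit.
      return isa<LoadInst>(V);
    }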
Matt Arsenault 774e20b42a R600/SI: Remove handling of fpimm
llvm-svn: 229136
2015-02-13 19:05:07 +00:00
Matt Arsenault 11a4d6774b R600/SI: Allow f64 inline immediates in i64 operands
This requires considering the size of the operand when
checking immediate legality.

llvm-svn: 229135
2015-02-13 19:05:03 +00:00
Jozef Kolek 650a61a943 [mips][microMIPS] Delay slot filler: Replace the microMIPS JR with the JRC
This patch adds functionality to the MIPS delay slot filler: if the filler
would have to put a NOP instruction into the delay slot of a microMIPS JR
instruction, it instead replaces the JR with the compact jump instruction
JRC.

Differential Revision: http://reviews.llvm.org/D7522

llvm-svn: 229128
2015-02-13 17:51:27 +00:00
Toma Tabacu 16a74499af [mips] Improve support for the .set at/noat assembler directives.
Summary:
Made the following changes:
  Added calls to emitDirectiveSetNoAt() and emitDirectiveSetAt().
  Added special emit function for .set at=$reg, emitDirectiveSetAtWithArg(unsigned RegNo).
  Improved parsing error checks for .set at.
  Refactored parser code for .set at.
  Improved testing of both directives.
  Improved code readability and comments.

Reviewers: dsanders

Reviewed By: dsanders

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D7176

llvm-svn: 229097
2015-02-13 10:30:57 +00:00
Chandler Carruth 30d69c2e36 [PM] Remove the old 'PassManager.h' header file at the top level of
LLVM's include tree and the use of using declarations to hide the
'legacy' namespace for the old pass manager.

This undoes the primary modules-hostile change I made to keep
out-of-tree targets building. I sent an email inquiring about whether
this would be reasonable to do at this phase and people seemed fine with
it, so making it a reality. This should allow us to start bootstrapping
with modules to a certain extent along with making it easier to mix and
match headers in general.

The updates to any code for users of LLVM are very mechanical. Switch
from including "llvm/PassManager.h" to "llvm/IR/LegacyPassManager.h".
Qualify the types which now produce compile errors with "legacy::". The
most common ones are "PassManager", "PassManagerBase", and
"FunctionPassManager".

llvm-svn: 229094
2015-02-13 10:01:29 +00:00
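A minimal sketch of the mechanical update for out-of-tree users, assuming a hypothetical runLegacyPasses helper (illustrative, not part of the commit itself):

    #include "llvm/IR/LegacyPassManager.h"   // was: #include "llvm/PassManager.h"
    #include "llvm/IR/Module.h"

    static void runLegacyPasses(llvm::Module &M) {
      llvm::legacy::PassManager PM;          // was: llvm::PassManager PM;
      // ... PM.add(...) calls unchanged ...
      PM.run(M);
    }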
Chandler Carruth 71f308adb7 Re-sort #include lines using my handy dandy ./utils/sort_includes.py
script. This is in preparation for changes to lots of include lines.

llvm-svn: 229088
2015-02-13 09:09:03 +00:00
Craig Topper 916708f152 [X86] Add support for parsing and printing the mnemonic aliases for the XOP VPCOM instructions.
llvm-svn: 229078
2015-02-13 07:42:25 +00:00
Craig Topper 007a713ebf Fix a typo in a comment. NFC
llvm-svn: 229071
2015-02-13 06:07:29 +00:00
Craig Topper 4e0700f365 [X86] Remove int_x86_sse2_psll_dq_bs and int_x86_sse2_psrl_dq_bs intrinsics. The builtins aren't used by clang.
llvm-svn: 229069
2015-02-13 06:07:24 +00:00
Matt Arsenault 63bef0d177 R600/SI: Remove unnecessary check for fpimm
llvm-svn: 229034
2015-02-13 02:47:22 +00:00
Eric Christopher dc3a8a4a66 PPCFrameLowering's FramePointerOffset can be computed at initialization
time. Do so.

llvm-svn: 228998
2015-02-13 00:39:38 +00:00
Eric Christopher 736d39e189 The TOC save offset can be computed at compile time, do so and
propagate changes.

llvm-svn: 228997
2015-02-13 00:39:36 +00:00
Eric Christopher f71609b5dd The return save offset can be computed at initialization time - do
so and save the value.

llvm-svn: 228996
2015-02-13 00:39:27 +00:00
David Majnemer a12fcb790f X86: Don't crash if we can't decode the pshufb mask
Constant pool entries are uniqued by their contents regardless of their
type.  This means that a pshufb can have a shuffle mask which isn't a
simple array of bytes.

The code path which attempts to decode the mask didn't check for
failure, causing PR22559.

llvm-svn: 228979
2015-02-12 23:26:26 +00:00
Rafael Espindola e4bcad4754 Learn that __DATA,__objc_classrefs is not atomized via symbols.
This should hopefully fix objc on AArch64.

llvm-svn: 228976
2015-02-12 23:11:59 +00:00
Olivier Sallenave 05e69157b6 Change max interleave factor to 12 for POWER7 and POWER8.
llvm-svn: 228973
2015-02-12 22:57:58 +00:00
Rafael Espindola 3105fd8335 Remove mostly unused setters.
Most of the code was setting the TargetOptions directly.

llvm-svn: 228961
2015-02-12 21:16:34 +00:00
Reed Kotler aa150ed780 Add bulk of returning of values to Mips fast-isel
Summary:
Implement the bulk of returning values in Mips fast-isel

Test Plan:
reatabi.ll

Passes test-suite at -O0,-O2 and with mips32r2 and mips32r1.
Reviewers: dsanders

Reviewed By: dsanders

Subscribers: llvm-commits, aemerson, rfuhler

Differential Revision: http://reviews.llvm.org/D5920

llvm-svn: 228958
2015-02-12 21:05:12 +00:00
Simon Pilgrim 295eaad2b3 Relaxed over-zealous alignment requirement for VEX-encoded AES instructions
llvm-svn: 228953
2015-02-12 20:01:03 +00:00
Rafael Espindola 203c5b9f39 On ELF, put PIC jump tables in a non executable section.
Fixes PR22558.

llvm-svn: 228939
2015-02-12 17:46:49 +00:00
Rafael Espindola 29786d4c16 Put each jump table in an independent section if the function is too.
This allows the linker to GC both, fixing pr22557.

llvm-svn: 228937
2015-02-12 17:16:46 +00:00
Benjamin Kramer 5f6a907288 MathExtras: Bring Count(Trailing|Leading)Ones and CountPopulation in line with countTrailingZeros
Update all callers.

llvm-svn: 228930
2015-02-12 15:35:40 +00:00
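A hedged usage sketch, assuming the updated helpers are spelled countTrailingOnes, countLeadingOnes, and countPopulation to match the countTrailingZeros convention the message refers to (illustrative only):

    #include "llvm/Support/MathExtras.h"
    #include <cstdint>

    static unsigned bitStats(uint32_t V) {
      // For V = 0x0000ffff: 16 trailing ones, 0 leading ones, 16 bits set.
      return llvm::countTrailingOnes(V) + llvm::countLeadingOnes(V) +
             llvm::countPopulation(V);
    }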
Michael Kuperstein f4d1aca568 [X86] Call frame optimization - allow stack-relative movs to be folded into a push
Since we track esp precisely, there's no reason not to allow this.

llvm-svn: 228924
2015-02-12 14:17:35 +00:00
Asiri Rathnayake e045e378ad ARM: Fix another regression introduced in r223113
The changes in r223113 (ARM modified-immediate syntax) have broken
instructions like:
  mov r0, #~0xffffff00
The problem is that I've added a spurious range check on the immediate
operand to ensure that it lies between INT32_MIN and UINT32_MAX. While
this range check is correct in theory, it causes problems because the
operand is stored in an int64_t (by MC). So valid 32-bit constants like
#~0xffffff00 become out of range. The solution is to simply remove this
range check. It is not possible to validate the range of the immediate
operand with the current setup because: 1) The operand is stored in an
int64_t by MC, 2) The immediate can be of the forms #imm, #-imm, #~imm
or even #((~imm)) etc. So we just chop the value to 32 bits and use it.

Also noted that the original range check was not tested by any of the
unit tests. I've added a new test to cover #~imm kind of operands.

Change-Id: I411e90d84312a2eff01b732bb238af536c4a7599
llvm-svn: 228920
2015-02-12 13:37:28 +00:00
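A standalone illustration of the chop-to-32-bits behaviour described above, using a hypothetical chopTo32 helper (not the actual MC code):

    #include <cassert>
    #include <cstdint>

    static uint32_t chopTo32(int64_t Imm) {
      return static_cast<uint32_t>(Imm);     // keep only the low 32 bits
    }

    int main() {
      int64_t Imm = ~INT64_C(0xffffff00);    // stored by MC as 0xFFFFFFFF000000FF
      assert(chopTo32(Imm) == 0x000000ffu);  // the intended 32-bit value
      return 0;
    }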
Elena Demikhovsky d2cb3c8876 AVX-512: Fixed the "test" operation for i1 type
Using KORTESTW to compare an i1 value with zero was wrong, since the instruction tests 16 bits.
KORTESTW may be used together with KSHIFTL+KSHIFTR, which clear the upper 15 bits.
I removed the (X86cmp i1, 0) pattern; the i1 is now zero-extended to i8 and tested with TESTB.

There are some cases where the i1 is in a mask register and the upper bits are already zeroed.
In those cases KORTESTW is the better solution, but that is left for a later optimization.
For now, I'm fixing the correctness issue.

llvm-svn: 228916
2015-02-12 08:40:34 +00:00
Michael Kuperstein db95d04be4 [X86] A heuristic to estimate the size impact for converting stack-relative parameter movs to pushes
This gives a rough estimate of whether using pushes instead of movs is profitable, in terms of size.
We go over all calls in the MachineFunction and compute:
a) For each callsite that can not use pushes, the penalty of not having a reserved call frame.
b) For each callsite that can use pushes, the gain of actually replacing the movs with pushes (and the potential penalty of having to readjust the stack).

Differential Revision: http://reviews.llvm.org/D7561

llvm-svn: 228915
2015-02-12 08:36:35 +00:00
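A conceptual sketch of the bookkeeping behind that estimate, with made-up field and function names (illustrative pseudo-logic, not the actual X86 pass):

    #include <vector>

    struct CallSiteSizeInfo {
      bool CanUsePush;            // movs at this call site could become pushes
      int MovToPushGain;          // bytes saved by replacing the movs
      int NoReservedFramePenalty; // bytes lost by giving up the reserved frame
    };

    static int netSizeImpact(const std::vector<CallSiteSizeInfo> &Calls) {
      int Delta = 0;
      for (const CallSiteSizeInfo &C : Calls)
        Delta += C.CanUsePush ? -C.MovToPushGain : C.NoReservedFramePenalty;
      return Delta;  // negative => the conversion is expected to shrink the code
    }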
Hal Finkel 7a0516ea66 [PowerPC] Mark jumps as expensive (when using CR bits)
On PowerPC, which has a full set of logical operations on (its multiple sets
of) condition-register bits, it is not profitable to break up complex
conditions feeding a jump into multiple jumps. We can turn off this feature of
CGP/SDAGBuilder by marking jumps as "expensive".

P7 test-suite speedups (no regressions):
MultiSource/Benchmarks/FreeBench/pcompress2/pcompress2
	-0.626647% +/- 0.323583%
MultiSource/Benchmarks/Olden/power/power
	-18.2821% +/- 8.06481%

llvm-svn: 228895
2015-02-12 01:02:52 +00:00
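A plain-C++ illustration of the idea (not LLVM code): when logical operations on predicate bits are cheap, the whole condition can be evaluated first and branched on once, instead of being split into a chain of jumps:

    static int sumIfBothPositive(int a, int b) {
      // Split form (what CGP/SDAGBuilder would otherwise tend toward):
      //   if (a > 0) if (b > 0) return a + b;   -> two conditional branches
      // Single-branch form, using a logic op on the comparison results:
      bool BothPositive = (a > 0) & (b > 0);
      return BothPositive ? a + b : 0;
    }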
Tom Stellard 0648588e7d R600/SI: Disable subreg liveness
This is temporary while we try to fix a crash in the register coalescer.

llvm-svn: 228861
2015-02-11 18:24:53 +00:00
Tom Stellard de5b7b180a R600: Split AMDGPUPassConfig into R600PassConfig and GCNPassConfig
llvm-svn: 228850
2015-02-11 17:11:51 +00:00
Tom Stellard c65b36061a R600: Create an R600TargetMachine for pre-gcn GPUs
No functionality change. R600TargetMachine inherits from
AMDGPUTargetMachine.

llvm-svn: 228849
2015-02-11 17:11:50 +00:00
Daniel Sanders a19216c8f4 [mips] Merge disassemblers into a single implementation.
Summary:
Currently we have Mips32 and Mips64 disassemblers and this causes the target
triple to affect the disassembly despite all the relevant information being in
the ELF header. These implementations do not need to be separate.

This patch merges them together such that the appropriate tables are checked
for the subtarget (e.g. Mips64 is checked when GP64 is enabled).

Reviewers: vmedic

Reviewed By: vmedic

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D7498

llvm-svn: 228825
2015-02-11 11:28:56 +00:00
Michael Kuperstein 1921d3d6f3 [X86] Split information collection from actual transformation in call frame optimization
This splits collecting information from actually performing the transformation, so that we can add a heuristic in between the two.
NFC.

Differential Revision: http://reviews.llvm.org/D7497

llvm-svn: 228817
2015-02-11 08:53:55 +00:00
Arnaud A. de Grandmaison de79026d5e [PBQP] Cautiously update edge costs in the solver
The NodeMetadata are maintained incrementally. When an edge between
two nodes has its cost updated, in the course of graph reduction for example,
the NodeMetadata first need to have the old edge cost removed and then the new
edge cost added. Only once the NodeMetadata have been fully updated does it
become safe to consider promoting the nodes to the
ConservativelyAllocatable or OptimallyReducible sets. Previously, this
promotion occurred right after removing the old cost, which broke the
assumption that a ConservativelyAllocatable node should not be
spilled.

This patch also adds asserts to:
 - enforce the invariant that a node's reduction cannot be downgraded,
 - ensure that only nodes that are not provably allocatable or optimally reducible can be spilled.

llvm-svn: 228816
2015-02-11 08:25:36 +00:00
Zachary Turner 3bd47cee78 Use ADDITIONAL_HEADER_DIRS in all LLVM CMake projects.
This allows IDEs to recognize the entire set of header files for
each of the core LLVM projects.

Differential Revision: http://reviews.llvm.org/D7526
Reviewed By: Chris Bieneman

llvm-svn: 228798
2015-02-11 03:28:02 +00:00
Tom Stellard 94b7231740 R600/SI: Store immediate offsets > 12-bits in soffset
This will save us from having to extend these offsets to 64 bits
and store them in a pair of VGPRs.

llvm-svn: 228776
2015-02-11 00:34:35 +00:00
Tom Stellard c53861ab84 R600/SI: Add soffset operand to mubuf addr64 instruction
We were previously hard-coding soffset to 0.

llvm-svn: 228775
2015-02-11 00:34:32 +00:00
David Majnemer ca19485f08 X86: @llvm.frameaddress should defer to SelectionDAG for Win CFI
llvm-svn: 228754
2015-02-10 22:00:34 +00:00
David Majnemer 13d0b11d7b X86: Make @llvm.frameaddress work correctly with Windows unwind codes
Simply loading or storing the frame pointer is not sufficient for
Windows targets.  Instead, create a synthetic frame object that we will
lower later.  References to this synthetic object will be replaced with
the correct reference to the frame address.

llvm-svn: 228748
2015-02-10 21:22:05 +00:00
Bill Schmidt 67f36bd0d8 Fix up r228725, missed change in PPCSubtarget definition
llvm-svn: 228728
2015-02-10 19:31:55 +00:00
Bill Schmidt 82f1c775a0 [PowerPC] Fix reverted patch r227976 to avoid register assignment issues
See full discussion in http://reviews.llvm.org/D7491.

We now hide the add-immediate and call instructions together in a
separate pseudo-op, which is tagged to define GPR3 and clobber the
call-killed registers.  The PPCTLSDynamicCall pass prior to RA now
expands this op into the two separate addi and call ops, with explicit
definitions of GPR3 on both instructions, and explicit clobbers on the
call instruction.  The pass is now marked as requiring and preserving
the LiveIntervals and SlotIndexes analyses, and fixes these up after
the replacement sequences are introduced.

Self-hosting has been verified on LE P8 and BE P7 with various
optimization levels, etc.  It has also been verified with the
--no-tls-optimize flag workaround removed.

llvm-svn: 228725
2015-02-10 19:09:05 +00:00
David Majnemer a7d908eb2b X86: Emit Win64 SaveXMM opcodes at the right offset in the right order
Walk the instructions marked FrameSetup and consider any stores of XMM
registers to the stack as needing a SaveXMM opcode.

This fixes PR22521.

Differential Revision: http://reviews.llvm.org/D7527

llvm-svn: 228724
2015-02-10 19:01:47 +00:00
Hal Finkel 57c6ac5e41 [PowerPC] Support the (old) cntlz instruction alias
Some old assembly code uses the cntlz alias for cntlzw; binutils supports this,
and we should too. Fixes PR22519.

llvm-svn: 228719
2015-02-10 18:45:02 +00:00
Colin LeMahieu 404d5b242d [Hexagon] Adding vector load with post-increment instructions. Adding decoder function for 64bit control register class.
llvm-svn: 228708
2015-02-10 16:59:36 +00:00
Zoran Jovanovic 416886793f [mips][microMIPS] Implement movep instruction
Differential Revision: http://reviews.llvm.org/D7465

llvm-svn: 228703
2015-02-10 16:36:20 +00:00
Simon Pilgrim d142ab7d08 [X86][AVX2] Missing AVX2 memory folding instructions
Added most of the missing vector folding patterns for AVX2 (as well as fixing the vpermpd and vpermq patterns)

Differential Revision: http://reviews.llvm.org/D7492

llvm-svn: 228688
2015-02-10 13:22:57 +00:00
Simon Pilgrim cd32254a35 [X86][XOP] Added XOP memory folding patterns + tests
This patch adds the complete AMD Bulldozer XOP instruction set to the memory folding pattern tables for stack folding, etc.

Note: Many of the XOP instructions have multiple table entries as they can fold loads from different sources.

Differential Revision: http://reviews.llvm.org/D7484

llvm-svn: 228685
2015-02-10 12:57:17 +00:00
Jozef Kolek d68d424abf [mips][microMIPS] Fix disassembling of 16-bit microMIPS instructions LWM16 and SWM16
Differential Revision: http://reviews.llvm.org/D7436

llvm-svn: 228683
2015-02-10 12:41:13 +00:00
Andrea Di Biagio 62622d2396 [X86][FastIsel] Avoid introducing legacy SSE instructions if the target has AVX.
This patch teaches X86FastISel how to select AVX instructions for scalar
float/double convert operations.

Before this patch, X86FastISel always selected legacy SSE instructions
for FPExt (from float to double) and FPTrunc (from double to float).

For example:
\code
  define double @foo(float %f) {
    %conv = fpext float %f to double
    ret double %conv
  }
\end code

Before (with -mattr=+avx -fast-isel) X86FastIsel selected a CVTSS2SDrr which is
legacy SSE:
  cvtss2sd %xmm0, %xmm0

With this patch, X86FastIsel selects a VCVTSS2SDrr instead:
  vcvtss2sd %xmm0, %xmm0, %xmm0

Added test fast-isel-fptrunc-fpext.ll to check both the register-register and
the register-memory float/double conversion variants.

Differential Revision: http://reviews.llvm.org/D7438

llvm-svn: 228682
2015-02-10 12:04:41 +00:00
Craig Topper 9e71b82f40 [X86] Preserve mem refs on newly created 'Store' node instead of 'Load' node when handling store unfolding.
Bug spotted by Steve King.

I have no idea how to test this.

llvm-svn: 228672
2015-02-10 06:29:28 +00:00
Craig Topper f7e92f10b6 [X86] Remove unnecessary alignment checks from the load folding tables.
llvm-svn: 228671
2015-02-10 05:10:50 +00:00
David Majnemer 93c22a45be X86: Emit an ABI compliant prologue and epilogue for Win64
Win64 has specific constraints on what valid prologues and epilogues look
like.  These constraints are borne of the flexibility and descriptiveness
of Win64's unwind opcodes.

Prologues previously emitted by LLVM could not be represented by the
unwind opcodes, preventing operations powered by stack unwinding from
working successfully.

Differential Revision: http://reviews.llvm.org/D7520

llvm-svn: 228641
2015-02-10 00:57:42 +00:00
Eric Christopher d49868080e Migrate PPCAsmPrinter's subtarget from reference to pointer in
preparation for making it MachineFunction dependent.

llvm-svn: 228638
2015-02-10 00:44:17 +00:00
David Blaikie 36a036909c Fix the clang -Werror build (-Wunused-variable)
llvm-svn: 228635
2015-02-10 00:16:36 +00:00
Colin LeMahieu 328b1633d7 [Hexagon] Adding missing load instructions and removing an unused multiclass parameter.
llvm-svn: 228630
2015-02-09 23:45:24 +00:00