Commit Graph

10005 Commits

Hal Finkel f5c07ada1d Revert "[PPC] Use alias symbols in address computation."
This reverts commit r209638 because it broke self-hosting on ppc64/Linux. (The
Clang-compiled TableGen would segfault because it jumped to an invalid address
from within _ZNK4llvm17ManagedStaticBase21RegisterManagedStaticEPFPvvEPFvS1_E
(which is within the command-line parameter registration process)).

llvm-svn: 209745
2014-05-28 15:25:06 +00:00
Tilmann Scheller 7c747fc7c6 [AArch64] Add store post-index update folding regression tests for the load/store optimizer.
Add regression tests for the following transformation:

  str X, [x20]
   ...
  add x20, x20, #32
   ->
  str X, [x20], #32

with X being either w0, x0, s0, d0 or q0.

llvm-svn: 209715
2014-05-28 06:43:00 +00:00
Tilmann Scheller 35e451461f [AArch64] Add load post-index update folding regression tests for the load/store optimizer.
Add regression tests for the following transformation:

 ldr X, [x20]
  ...
 add x20, x20, #32
  ->
 ldr X, [x20], #32

with X being either w0, x0, s0, d0 or q0.

llvm-svn: 209711
2014-05-28 05:44:14 +00:00
Sasa Stankovic e41db2fe31 [mips] Optimize long branch for MIPS64 by removing %higher and %highest.
%higher and %highest can have non-zero values only for offsets greater
than 2GB, which is highly unlikely, if not impossible, when compiling a
single function. This makes the MIPS64 long branch sequence three instructions shorter.

Differential Revision: http://llvm-reviews.chandlerc.com/D3281.diff

llvm-svn: 209678
2014-05-27 18:53:06 +00:00
Tim Northover 9a6217b650 AArch64: add test for NZCV cross-copy save.
llvm-svn: 209665
2014-05-27 16:50:09 +00:00
Tim Northover de9402d345 AArch64: add AArch64-specific test for 'c' and 'n'.
llvm-svn: 209664
2014-05-27 16:50:03 +00:00
Bill Schmidt 71dddd51d9 [PATCH] Correct type used for VADD_SPLAT optimization on PowerPC
In PPCISelLowering.cpp: PPCTargetLowering::LowerBUILD_VECTOR(), there
is an optimization for certain patterns to generate one or two vector
splats followed by a vector add or subtract.  This operation is
represented by a VADD_SPLAT in the selection DAG.  Prior to this
patch, it was possible for the VADD_SPLAT to be assigned the wrong
data type, causing incorrect code generation.  This patch corrects the
problem.

Specifically, the code previously assigned the value type of the
BUILD_VECTOR node to the newly generated VADD_SPLAT node.  This is
correct much of the time, but not always.  The problem is that the
call to isConstantSplat() may return a SplatBitSize that is not the
same as the number of bits in the original element vector type.  The
correct type to assign is a vector type with the same element bit size
as SplatBitSize.

The included test case shows an example of this, where the
BUILD_VECTOR node has a type of v16i8.  The vector to be built is {0,
16, 0, 16, 0, 16, 0, 16, 0, 16, 0, 16, 0, 16, 0, 16}.  isConstantSplat
detects that we can generate a splat of 16 for type v8i16, which is
the type we must assign to the VADD_SPLAT node.  If we do not, we
generate a vspltisb of 8 and a vaddubm, which generates the incorrect
result {16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16,
16}.  The correct code generation is a vspltish of 8 and a vadduhm.

This patch also corrected code generation for
CodeGen/PowerPC/2008-07-10-SplatMiscompile.ll, which had been marked
as an XFAIL, so we can remove the XFAIL from the test case.

llvm-svn: 209662
2014-05-27 15:57:51 +00:00
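
For illustration, a minimal C sketch (using GCC/Clang vector extensions; the
typedef and function name are invented here, not taken from the patch) of a
constant that produces the kind of BUILD_VECTOR described above:

  typedef unsigned char v16u8 __attribute__((vector_size(16)));

  /* The bytes {0, 16, 0, 16, ...} form a v16i8 BUILD_VECTOR that, on
     big-endian PowerPC, is really a v8i16 splat of 16 -- the case where
     the VADD_SPLAT node must be typed v8i16 (vspltish 8 + vadduhm). */
  v16u8 build_splat(void) {
    return (v16u8){0, 16, 0, 16, 0, 16, 0, 16,
                   0, 16, 0, 16, 0, 16, 0, 16};
  }
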
Amara Emerson ceeb1c4830 [ARM] Emit correct build attributes for the relocation models.
Patch by Asiri Rathnayake.

llvm-svn: 209656
2014-05-27 13:30:21 +00:00
Tim Northover 4f1909f1da ARM: teach AAPCS-VFP to deal with Cortex-M4.
Cortex-M4 only has single-precision floating point support, so any LLVM
"double" type will have been split into 2 i32s by now. Fortunately, the
consecutive-register framework turns out to be precisely what's needed to
reconstruct the double and follow AAPCS-VFP correctly!

rdar://problem/17012966

llvm-svn: 209650
2014-05-27 10:43:38 +00:00
Filipe Cabecinhas 82ac07c283 Convert some X86 blendv* intrinsics into IR.
Summary:
Implemented an InstCombine transformation that takes a blendv* intrinsic
call and translates it into an IR select, if the mask is constant.

This will eventually get lowered into blends with immediates if possible,
or pblendvb (with an option to further optimize if we can transform the
pblendvb into a blend+immediate instruction, depending on the selector).
It will also enable optimizations by the IR passes, which currently give up
when they see the intrinsic.

Both the transformation and the lowering of its result to asm got shiny
new tests.

The transformation is a bit convoluted because of blendvp[sd]'s
definition:

Its mask is a floating point value! This forces us to convert it and get
the highest bit. I suppose this happened because the mask has type
__m128 in Intel's intrinsic and v4sf (for blendps) in gcc's builtin.

I will send an email to llvm-dev to discuss if we want to change this or
not.

Reviewers: grosbach, delena, nadav

Differential Revision: http://reviews.llvm.org/D3859

llvm-svn: 209643
2014-05-27 03:42:20 +00:00
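
As a rough illustration (the function and mask values are invented, not from
the patch), a blendv call whose mask is a compile-time constant, the case this
InstCombine can rewrite into a plain IR select:

  #include <immintrin.h>

  /* Compile with -msse4.1. Lanes whose mask element has its sign bit set
     are taken from b; with a constant mask the intrinsic call can become
     an IR select. */
  __m128 pick_lanes(__m128 a, __m128 b) {
    const __m128 mask = _mm_castsi128_ps(_mm_set_epi32(-1, 0, -1, 0));
    return _mm_blendv_ps(a, b, mask);
  }
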
Rafael Espindola ac69cee6a2 [PPC] Use alias symbols in address computation.
This seems to match what gcc does for ppc and what every other llvm
backend does.

llvm-svn: 209638
2014-05-26 19:08:19 +00:00
Tim Northover 68ae503de9 AArch64: force i1 to be zero-extended at an ABI boundary.
This commit is debatable. There are two possible approaches, neither
of which is really satisfactory:

1. Use "@foo(i1 zeroext)" to mean an extension to 32-bits on Darwin,
   and 8 bits otherwise.
2. Redefine "@foo(i1)" to mean that the i1 is extended by the caller
   to 8 bits. This goes against the spirit of "zeroext" I think, but
   it's a bit of a vague construct anyway (by definition you're going
   to extend to the amount required by the ABI, that's why it's the
   ABI!).

This implements option 2. The DAG machinery really isn't setup for the
first (there's a fairly strong assumption that "zeroext" goes to at
least the smallest register size), and even if it was the resulting
DAG looks like it would be inferior in many cases.

Theoretically we could add AssertZext nodes in the consumers of
ABI-passed values too now, but this actually seems to make the code
worse in practice by making truncation proceed in two steps. The code
produced is equally valid if we continue to assume only the low bit is
defined.

Should fix PR19850

llvm-svn: 209637
2014-05-26 17:22:07 +00:00
Tim Northover 47e003c65d AArch64: simplify calling conventions slightly.
We can eliminate the custom C++ code in favour of some TableGen to
check the same things. Functionality should be identical, except for a
buffer overrun that was present in the C++ code and meant WebKit
failed if any small argument needed to be passed on the stack.

llvm-svn: 209636
2014-05-26 17:21:53 +00:00
Tilmann Scheller cc3ebc8a98 [AArch64] Add store + add folding regression tests for the load/store optimization pass.
Add tests for the following transform:

 str X, [x0, #32]
  ...
 add x0, x0, #32
  ->
 str X, [x0, #32]!

with X being either w1, x1, s0, d0 or q0.

llvm-svn: 209627
2014-05-26 13:36:47 +00:00
Tilmann Scheller 112ada831f [AArch64] Add more regression tests for the load/store optimization pass.
Cover the following cases:

  ldr X, [x0, #32]
   ...
  add x0, x0, #32
   ->
  ldr X, [x0, #32]!

with X being either w1, x1, s0, d0 or q0.

llvm-svn: 209624
2014-05-26 12:15:51 +00:00
Tilmann Scheller 968d599dc4 Remove accidentally committed whitespace.
llvm-svn: 209619
2014-05-26 09:40:40 +00:00
Tilmann Scheller 2d746bcda3 [AArch64] Add a regression test for the load store optimizer.
We have a couple of regression tests for load/store pairing, but (to my knowledge) there are no regression tests for the load/store + add/sub folding.

As a first step towards increased test coverage of this area, this commit adds a test for one instance of a load + add to pre-indexed load transformation.

llvm-svn: 209618
2014-05-26 09:37:19 +00:00
Rafael Espindola f557536f56 Just check the entire string.
Thanks to David Blaikie for the suggestion.

llvm-svn: 209610
2014-05-26 04:08:51 +00:00
Rafael Espindola 4a04c4b69c Emit data or code export directives based on the type.
Currently we look at the Aliasee to decide what type of export
directive to use. It seems better to use the type of the alias
directly. This is similar to how we handle the case where the alias has
the same address as the aliasee but different attributes (linkage,
visibility).

With this patch it is now possible to do things like

target datalayout = "e-m:e-i64:64-f80:128-n8:16:32:64-S128"
target triple = "x86_64-pc-windows-msvc"
@foo = global [6 x i8] c"\B8*\00\00\00\C3", section ".text", align 16
@f = dllexport alias i32 (), [6 x i8]* @foo
!llvm.module.flags = !{!0}
!0 = metadata !{i32 6, metadata !"Linker Options", metadata !1}
!1 = metadata !{metadata !2, metadata !3}
!2 = metadata !{metadata !"/DEFAULTLIB:libcmt.lib"}
!3 = metadata !{metadata !"/DEFAULTLIB:oldnames.lib"}

llvm-svn: 209600
2014-05-25 12:49:07 +00:00
Rafael Espindola d234b0f08c Make these CHECKs a bit more strict.
The " at the end of the line makes sure we matched the entire directive.

llvm-svn: 209599
2014-05-25 12:43:13 +00:00
Tim Northover 3b0846e8f7 AArch64/ARM64: move ARM64 into AArch64's place
This commit starts with a "git mv ARM64 AArch64" and continues out
from there, renaming the C++ classes, intrinsics, and other
target-local objects for consistency.

"ARM64" test directories are also moved, and tests that began their
life in ARM64 use an arm64 triple, those from AArch64 use an aarch64
triple. Both should be equivalent though.

This finishes the AArch64 merge, and everyone should feel free to
continue committing as normal now.

llvm-svn: 209577
2014-05-24 12:50:23 +00:00
Tim Northover cc08e1fe1b AArch64/ARM64: remove AArch64 from tree prior to renaming ARM64.
I'm doing this in two phases for a better "git blame" record. This
commit removes the previous AArch64 backend and redirects all
functionality to ARM64. It also deduplicates test-lines and removes
orphaned AArch64 tests.

The next step will be "git mv ARM64 AArch64" and rewire most of the
tests.

Hopefully LLVM is still functional, though it would be even better if
no-one ever had to care because the rename happens straight
afterwards.

llvm-svn: 209576
2014-05-24 12:42:26 +00:00
Tim Northover e471e43484 ARM64: extract a 32-bit subreg when selecting an inreg extend
After the load/store refactoring, we were sometimes trying to feed a
GPR64 into a 32-bit register offset operand. This failed in
copyPhysReg.

llvm-svn: 209566
2014-05-24 07:05:42 +00:00
Nico Rieck fe2413692f Revert part of "Fix broken FileCheck prefixes"
This reverts part of commit r209538.

llvm-svn: 209544
2014-05-23 19:33:49 +00:00
Rafael Espindola a5bb2f61cf Use alias linkage and visibility to decide tls access mode.
This matches both what we do for the non-thread case and what gcc does.

With this patch clang would match gcc's behaviour in

static __thread int a = 42;
extern __thread int b __attribute__((alias("a")));
int *f(void) { return &a; }
int *g(void) { return &b; }

if not for pr19843. Manually writing the IL does produce the same access modes.

It is also a step in the direction of fixing pr19844.

llvm-svn: 209543
2014-05-23 19:16:56 +00:00
Nico Rieck fb4c55fb64 Remove unused CHECK lines
llvm-svn: 209539
2014-05-23 19:06:44 +00:00
Nico Rieck bd15945ab2 Fix broken FileCheck prefixes
llvm-svn: 209538
2014-05-23 19:06:24 +00:00
Rafael Espindola fbe44200ce Convert test to use FileCheck.
llvm-svn: 209528
2014-05-23 16:51:13 +00:00
Daniel Sanders ac27263512 [mips][mips64r6] [ls][dw][lr] are not available in MIPS32r6/MIPS64r6
Summary:
Instead the system is required to provide some means of handling unaligned
load/store without special instructions. Options include full hardware
support, full trap-and-emulate, and hybrids such as hardware support within
a cache line and trap-and-emulate for multi-line accesses.

MipsSETargetLowering::allowsUnalignedMemoryAccesses() has been configured to
assume that unaligned accesses are 'fast' on the basis that I expect few
hardware implementations will opt for pure-software handling of unaligned
accesses. The ones that do handle it purely in software can override this.

mips64-load-store-left-right.ll has been merged into load-store-left-right.ll

The stricter testing revealed a Bits!=Bytes bug in passByValArg(). This has
been fixed and the variables renamed to clarify the units they hold.

Reviewers: zoran.jovanovic, jkolek, vmedic

Reviewed By: vmedic

Differential Revision: http://reviews.llvm.org/D3872

llvm-svn: 209512
2014-05-23 13:18:02 +00:00
Jiangning Liu 4b5b757d65 [ARM64] Fix a bug in shuffle vector lowering to generate corect vext ISD with swapped input vectors.
llvm-svn: 209495
2014-05-23 02:54:50 +00:00
Matt Arsenault 05e96f4444 R600: Try to convert BFE back to standard bit ops when possible.
This allows existing DAG combines to work on them, and then
we can re-match to BFE if necessary during instruction selection.

llvm-svn: 209462
2014-05-22 18:09:12 +00:00
Matt Arsenault 5565f65e13 R600: Add dag combine for BFE
llvm-svn: 209461
2014-05-22 18:09:07 +00:00
Matt Arsenault bf8694d36d R600: Implement ComputeNumSignBitsForTargetNode for BFE
llvm-svn: 209460
2014-05-22 18:09:03 +00:00
Matt Arsenault 493c5f1bc4 R600: Expand mul24 for GPUs without it
llvm-svn: 209458
2014-05-22 18:00:24 +00:00
Matt Arsenault f15a05623e R600: Expand mad24 for GPUs without it
llvm-svn: 209457
2014-05-22 18:00:20 +00:00
Matt Arsenault eb260206c3 R600: Add intrinsics for mad24
llvm-svn: 209456
2014-05-22 18:00:15 +00:00
Andrea Di Biagio c8dd1ad85b [X86] Improve the lowering of BITCAST from MVT::f64 to MVT::v4i16/MVT::v8i8.
This patch teaches the x86 backend how to efficiently lower ISD::BITCAST dag
nodes from MVT::f64 to MVT::v4i16 (and vice versa), and from MVT::f64 to
MVT::v8i8 (and vice versa).

This patch extends the logic from revision 208107 to also handle MVT::v4i16
and MVT::v8i8. Also, this patch correctly propagates Undef values when
performing the widening of a vector (example: when widening from v2i32 to
v4i32, the upper 64bits of the resulting vector are 'undef').

llvm-svn: 209451
2014-05-22 16:21:39 +00:00
Tim Northover f9e798ba6a Segmented stacks: omit __morestack call when there's no frame.
Patch by Florian Zeitz

llvm-svn: 209436
2014-05-22 13:03:43 +00:00
Daniel Sanders 18b68e16af [mips] Make unalignedload.ll test stricter and easier to modify for MIPS32r6/MIPS64r6
Summary:
* Split into two functions, one to test each struct.
* R0 and R2 must be defined by an lw with a %got reference to the correct
  symbol.
* Test for $4 (first argument) where appropriate instead of accepting any
  register.
* Test that the two lbu's are correctly combined into $4

Depends on D3844

Reviewers: jkolek, zoran.jovanovic, vmedic

Reviewed By: vmedic

Differential Revision: http://reviews.llvm.org/D3845

llvm-svn: 209424
2014-05-22 11:55:04 +00:00
Daniel Sanders e1f46e58c9 [mips] Change lwl and lwr in inlineasm_constraint.ll to lw
Summary:
lwl and lwr are not available in MIPS32r6/MIPS64r6. The purpose of the test
is to check that the '$1' expands to '0($x)' rather than to test something related
to the lwl or lwr instructions so we can simply switch to lw.

Depends on D3842

Reviewers: jkolek, zoran.jovanovic, vmedic

Reviewed By: vmedic

Differential Revision: http://reviews.llvm.org/D3844

llvm-svn: 209423
2014-05-22 11:51:06 +00:00
Daniel Sanders b230595839 [mips] Use addiu in inline assembly tests since addi is not available in all ISA's
Summary:
This patch is necessary so that they do not fail on MIPS32r6/MIPS64r6 when
-integrated-as is enabled by default and we correctly detect the host CPU.

No functional change since these tests are testing the behaviour of the
constraint used for the third operand rather than the mnemonic.

Depends on D3842

Reviewers: zoran.jovanovic, jkolek, vmedic

Reviewed By: vmedic

Differential Revision: http://reviews.llvm.org/D3843

llvm-svn: 209421
2014-05-22 11:46:58 +00:00
Tim Northover 71d04225cf AArch64/ARM64: enable more AArch64 tests.
llvm-svn: 209408
2014-05-22 07:40:55 +00:00
Saleem Abdulrasool 2bd1262a32 ARM: introduce llvm.arm.undefined intrinsic
This intrinsic permits the emission of platform specific undefined sequences.
ARM has reserved the 0xde opcode which takes a single integer parameter (ignored
by the CPU).  This permits the operating system to implement custom behaviour on
this trap.  The llvm.arm.undefined intrinsic is meant to provide a means for
generating the target specific behaviour from the frontend.  This is
particularly useful for Windows on ARM which has made use of a series of these
special opcodes.

llvm-svn: 209390
2014-05-22 04:46:46 +00:00
Matt Arsenault c3a73c3087 R600/SI: Match fp_to_uint / uint_to_fp for f64
llvm-svn: 209388
2014-05-22 03:20:30 +00:00
David Blaikie 1ea9db2dce DebugInfo: Use the SPMap to find the parent CU of inlined functions as they may not be in the current CU
Committed in r209178 then reverted in r209251 due to LTO breakage,
here's a proper fix for the case of the missing subprogram DIE. The DIEs
were there, just in other compile units. Using the SPMap we can find the
right compile unit to search for and produce cross-unit references to
describe this kind of inlining.

One existing test case needed to be updated because it had a function
that wasn't in the CU's subprogram list, so it didn't appear in the
SPMap.

llvm-svn: 209335
2014-05-21 23:14:12 +00:00
Matt Arsenault 6a57fd8b47 R600: Partially fix constant initializers for structs and vectors.
This should extend the current workaround to work with structs
that only contain legal, scalar types.

llvm-svn: 209331
2014-05-21 22:42:42 +00:00
Matt Arsenault 310f761a42 R600: Add failing testcases for constant initializers.
Constant initializers involving illegal types hit an assertion.

Patch by: Jan Vesely <jan.vesely@rutgers.edu>

llvm-svn: 209330
2014-05-21 22:42:38 +00:00
Quentin Colombet b4d53f1afa [X86] Fix a bug in the lowering of BLENDI introduced in r209043.
ISD::VSELECT mask uses 1 to identify the first argument and 0 to identify the
second argument.
On the other hand, BLENDI uses 0 to identify the first argument and 1 to
identify the second argument.
Fix the generation of the blend mask to account for this difference.

The bug did not show up with r209043, because we were not checking for the
actual arguments of the blend instruction!
This commit also fixes the test cases.

Note: The same mask works for the BLENDr variant because the arguments are
swapped during instruction selection (see the BLENDXXrr patterns).

<rdar://problem/16975435>

llvm-svn: 209324
2014-05-21 22:00:39 +00:00
Dave Estes 4aa7340d14 Test comment commit.
llvm-svn: 209306
2014-05-21 16:19:51 +00:00
Saleem Abdulrasool 8d60fdc50d ARM: correct bundle generation for MOV32T relocations
Although the previous code would construct a bundle and add the correct elements
to it, it would not finalise the bundle.  This resulted in the InternalRead
markers not being added to the MachineOperands nor, more importantly, the
externally visible defs to the bundle itself.  So, although the bundle was not
exposing the def, the generated code would be correct because there were no
optimisations being performed.  When optimisations were enabled, the post
register allocator would kick in, and the hazard recognizer would reorder
operations around the load which would define the value being operated upon.

Rather than manually constructing the bundle, simply construct and finalise the
bundle via the finaliseBundle call after both MIs have been emitted.  This
improves the code generation with optimisations where IMAGE_REL_ARM_MOV32T
relocations are emitted.

The changes to the other tests are the result of the bundle generation
preventing the scheduler from hoisting the moves across the loads.  The net
effect of the generated code is equivalent, but it is much closer to what
is actually being lowered.

llvm-svn: 209267
2014-05-21 01:25:24 +00:00
Alexey Samsonov 60354bd3cf Fix test added in r209242: llc shouldn't create files in source tree
llvm-svn: 209252
2014-05-20 22:40:31 +00:00
Adam Nemet 2ba6492b7b [ARM64] PR19792: Fix cycle in DAG after performPostLD1Combine
Povray and dealII currently assert with "Overran sorted position" in
AssignTopologicalOrder.  The problem is that performPostLD1Combine can
introduce cycles.

Consider:

(insert_vector_elt (INSERT_SUBREG undef,
                                  (load (add %vreg0, Constant<8>), undef),  <= A
                                  TargetConstant<2>),
                   (load %vreg0, undef),                                    <= B
                   Constant<1>)

This is turned into a LD1LANEpost node.  However the address in A is not a
valid user of the post-incremented address of B in LD1LANEpost.

llvm-svn: 209242
2014-05-20 21:47:07 +00:00
Eric Christopher 2feed5fd68 Move the function and data section flags into the options struct and
make the functions to set them non-static.
Move and rename the llvm specific backend options to avoid conflicting
with the clang option.

Paired with a backend commit to update.

llvm-svn: 209238
2014-05-20 21:25:34 +00:00
Quentin Colombet c88baa5c10 [LSR] Canonicalize reg1 + ... + regN into reg1 + ... + 1*regN.
This commit introduces a canonical representation for the formulae.
Basically, as soon as a formula has more than one base register, the scaled
register field is used for one of them. The register put into the scaled
register is preferably a loop variant.
The commit refactors how the formulae are built in order to produce such
representation.
This yields a more accurate, but still perfectible, cost model.

<rdar://problem/16731508>

llvm-svn: 209230
2014-05-20 19:25:04 +00:00
Renato Golin e919fcac05 Avoids DCE on write_register
llvm-svn: 209222
2014-05-20 17:40:03 +00:00
Adam Nemet 571eb5fc91 [PowerPC] PR19796: Also match ISD::TargetConstant in isIntS16Immediate
The SplitIndexingFromLoad changes exposed a latent isel bug in the PowerPC64
backend.  We matched an immediate offset with STWX8 even though it only
supports register offset.

The culprit is the complex-pattern predicate, SelectAddrIdx, which decides
that if the offset is not ISD::Constant it must be a register.

Many thanks to Bill Schmidt for testing this.

llvm-svn: 209219
2014-05-20 17:20:34 +00:00
Benjamin Kramer 7bd6bee385 Legalizer: Make bswap promotion safe for vectors.
llvm-svn: 209202
2014-05-20 09:42:31 +00:00
David Blaikie 424b59b1ce DebugInfo: Assume all subprogram DIEs have been created before any abstract subprograms are constructed.
Since we visit the whole list of subprograms for each CU at module
start, this is clearly true - don't test for the case, just assert it.

A few old test cases seemed to have incomplete subprogram lists, but any
attempt to reproduce them shows full subprogram lists that even include
entities that have been completely inlined and the out of line
definition removed.

llvm-svn: 209178
2014-05-19 23:16:19 +00:00
Chad Rosier b06ed63ebf [ARM64] Adds Cortex-A53 scheduling support for vector load/store post.
Patch by Dave Estes<cestes@codeaurora.org>!
PR19761 http://reviews.llvm.org/D3829

llvm-svn: 209176
2014-05-19 22:59:51 +00:00
Andrea Di Biagio 7a85cadfd6 [X86] Add ISel patterns to improve the selection of TZCNT and LZCNT.
The instructions TZCNT (requires BMI1) and LZCNT (requires LZCNT) always
provide the operand size as output if the input operand is zero.

We can take advantage of this knowledge during the instruction selection
stage in order to simplify a few corner cases.

llvm-svn: 209159
2014-05-19 20:38:59 +00:00
Filipe Cabecinhas dc92102766 Added more insertps optimizations
Summary:
When inserting an element that's coming from a vector load or a broadcast
of a vector (or scalar) load, combine the load into the insertps
instruction.
Added PerformINSERTPSCombine for the case where we need to fix the load
(load of a vector + insertps with a non-zero CountS).
Added patterns for the broadcasts.

Also added tests for SSE4.1, AVX, and AVX2.

Reviewers: delena, nadav, craig.topper

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D3581

llvm-svn: 209156
2014-05-19 19:45:57 +00:00
Jyotsna Verma 9a103563f4 reverting r209132
llvm-svn: 209139
2014-05-19 16:22:11 +00:00
Bradley Smith c3b931d005 [ARM64] Split tbz/tbnz into W/X register variant
llvm-svn: 209134
2014-05-19 15:58:15 +00:00
Jyotsna Verma daeb25d4e0 Hexagon: Add encoding bits to the mpy instructions.
llvm-svn: 209132
2014-05-19 15:32:07 +00:00
Benjamin Kramer f3ad23551d SDAG: Legalize vector BSWAP into a shuffle if the shuffle is legal but the bswap not.
- On ARM/ARM64 we get a vrev because the shuffle matching code is really smart. We still unroll anything that's not v4i32 though.
- On X86 we get a pshufb with SSSE3. Required more cleverness in isShuffleMaskLegal.
- On PPC we get a vperm for v8i16 and v4i32. v2i64 is unrolled.

llvm-svn: 209123
2014-05-19 13:12:38 +00:00
Filipe Cabecinhas cc8c96c133 Change the blend tests to AVX, not AVX2.
llvm-svn: 209107
2014-05-19 04:47:12 +00:00
Saleem Abdulrasool a521845381 ARM: improve WoA ABI conformance for frame register
Windows on ARM uses R11 for the frame pointer even though the environment is a
pure Thumb-2, thumb-only environment.  Replicate this behaviour to improve
Windows ABI compatibility.  This register is used for fast stack walking, and
thus is part of the Windows ABI.

llvm-svn: 209085
2014-05-18 04:12:52 +00:00
Saleem Abdulrasool 2476759673 test: fix copy-paste mistake
Accidental over-quoting of the match string.

llvm-svn: 209058
2014-05-17 04:32:38 +00:00
Saleem Abdulrasool 46fed305db ARM: use the proper target object format for WoA
WoA uses COFF, not ELF.  ARMISelLowering::createTLOF would previously return ELF
for any non-MachO platform.  This was a missed site when the original change for
target format support for Windows on ARM was done.

llvm-svn: 209057
2014-05-17 04:28:08 +00:00
Chandler Carruth c85473143c [x86] Fix a bad predicate I spotted by inspection -- pshufhw and pshuflw
were added in SSE2, not SSSE3. Found this while auditing all uses of
SSSE3 in the X86 target. I don't actually expect this to make
a significant difference on anything and I don't have any detailed test
cases but I updated the existing test cases that already covered some of
this code path.

llvm-svn: 209056
2014-05-17 03:29:20 +00:00
Filipe Cabecinhas 89654da069 Implemented special cases for PerformVSELECTCombine.
vselects with constant masks, after legalization, will get turned into
specialized shuffle_vectors so they can be matched to blend+imm
instructions.

Fixed some tests.

llvm-svn: 209044
2014-05-16 22:47:54 +00:00
Filipe Cabecinhas e15551832c Lower vselects into X86ISD::BLENDI when appropriate.
LowerVSELECT will, if possible, generate a X86ISD::BLENDI DAG node if the
condition is constant and we can emit that instruction, given the
subtarget.

This is not enough for all cases. An additional SELECTCombine optimization
will be committed.

Fixed tests that were expecting variable blends but where a blend+imm can
be generated.
Added test where we can't emit blend+immediate.
Added avx2 blend+imm tests.

llvm-svn: 209043
2014-05-16 22:47:49 +00:00
Tom Stellard f719ee9e76 R600/SI: Promote f32 SELECT to i32
llvm-svn: 209024
2014-05-16 20:56:41 +00:00
Rafael Espindola 6b238633b7 Fix most of PR10367.
This patch changes the design of GlobalAlias so that it doesn't take a
ConstantExpr anymore. It now points directly to a GlobalObject, but its type is
independent of the aliasee type.

To avoid changing all alias-related tests in this patch, I kept the common
syntax

@foo = alias i32* @bar

to mean the same as now. The cases that used to use cast now use the more
general syntax

@foo = alias i16, i32* @bar.

Note that GlobalAlias now behaves a bit more like GlobalVariable. We
know that its type is always a pointer, so we omit the '*'.

For the bitcode, a nice surprise is that we were writing both identical types
already, so the format change is minimal. Auto upgrade is handled by looking
through the casts and no new fields are needed for now. New bitcode will
simply have different types for Alias and Aliasee.

One last interesting point in the patch is that replaceAllUsesWith becomes
smart enough to avoid putting a ConstantExpr in the aliasee. This seems better
than checking and updating every caller.

A followup patch will delete getAliasedGlobal now that it is redundant. Another
patch will add support for an explicit offset.

llvm-svn: 209007
2014-05-16 19:35:39 +00:00
David Blaikie 825f487b68 DebugInfo: Assume the CU's Subprogram list only contains definitions.
DIBuilder maintains this invariant and the current DwarfDebug code could
end up doing weird things if it contained declarations (such as putting
the definition DIE inside a CU that contained the declaration - this
doesn't seem like a good idea, so rather than adding logic to handle
this case we'll just ban it for now & cross that bridge if we come to
it later).

llvm-svn: 209004
2014-05-16 18:26:53 +00:00
Chad Rosier d978ca0308 [ARM64] Increases the Sched Model accuracy for Cortex-A53.
Patch by Dave Estes <cestes@codeaurora.org>
http://reviews.llvm.org/D3769

llvm-svn: 209001
2014-05-16 17:15:33 +00:00
James Molloy a70697e10e Re-enable inline memcpy expansion for Thumb1.
Patch by Moritz Roth!

llvm-svn: 208994
2014-05-16 14:24:22 +00:00
James Molloy 556763d2ef Fix the Load/Store optimization pass to work with Thumb1.
Patch by Moritz Roth!

llvm-svn: 208992
2014-05-16 14:14:30 +00:00
Rafael Espindola 5a52b9f139 Revert "Implement global merge optimization for global variables."
This reverts commit r208934.

The patch depends on aliases to GEPs with non zero offsets. That is not
supported and fairly broken.

The good news is that GlobalAlias is being redesigned and will have support
for offsets, so this patch should be a nice match for it.

llvm-svn: 208978
2014-05-16 13:02:18 +00:00
Tim Northover 5896b066e6 TableGen: fix operand counting for aliases
TableGen has a fairly dubious heuristic to decide whether an alias should be
printed: does the alias have fewer operands than the real instruction? This is
bad enough (particularly with no way to override it), but it should at least be
calculated consistently for both strings.

This patch implements that logic: first get the *correct* string for the
variant, in the same way as the Matcher, without guessing; then count the
number of whitespace chars.

There are basically 4 changes this brings about after the previous
commits; all of these appear to be good, so I have changed the tests:

+ ARM64: we print "neg X, Y" instead of "sub X, xzr, Y".
+ ARM64: we skip implicit "uxtx" and "uxtw" modifiers.
+ Sparc: we print "mov A, B" instead of "or %g0, A, B".
+ Sparc: we print "fcmpX A, B" instead of "fcmpX %fcc0, A, B"

llvm-svn: 208969
2014-05-16 09:42:04 +00:00
Hao Liu 8579f0d4d1 [ARM64]Implement NEON post-increment LD1(lane) and post-increment LD1R.
llvm-svn: 208955
2014-05-16 09:39:02 +00:00
Saleem Abdulrasool 056fc3da4a ARM: add some integer/floating point conversion libcalls
Add some Windows on ARM specific library calls.  These are provided by msvcrt,
and can be used to perform integer to floating-point conversions (and
vice-versa) mirroring similar functions in the RTABI.

llvm-svn: 208949
2014-05-16 05:41:33 +00:00
Jiangning Liu 932e1c3924 Implement global merge optimization for global variables.
This commit implements two command line switches -global-merge-on-external
and -global-merge-aligned, and both of them are false by default, so this
optimization is disabled by default for all targets.

For ARM64, some back-end behaviors need to be tuned to get this optimization
further enabled.

llvm-svn: 208934
2014-05-15 23:45:42 +00:00
Reed Kotler 6280d9711d Finish materialize for ints
Summary: We add code to materialize all integer literals.

Test Plan: simplestorei.ll

Reviewers: dsanders

Reviewed By: dsanders

Differential Revision: http://reviews.llvm.org/D3596

llvm-svn: 208923
2014-05-15 21:54:15 +00:00
NAKAMURA Takumi 2d35a06080 llvm/test/CodeGen/X86/combine-sse41-intrinsics.ll: Add explicit triple.
llvm-svn: 208897
2014-05-15 15:45:31 +00:00
Andrea Di Biagio d621120533 [X86] Teach the backend how to fold SSE4.1/AVX/AVX2 blend intrinsics.
Added target specific combine rules to fold blend intrinsics according
to the following rules:
 1) fold(blend A, A, Mask) -> A;
 2) fold(blend A, B, <allZeros>) -> A;
 3) fold(blend A, B, <allOnes>) -> B.

Added two new tests to verify that the new folding rules work for all
the optimized blend intrinsics.

llvm-svn: 208895
2014-05-15 15:18:15 +00:00
Tom Stellard 73b98ed8cf R600/SI: Only use SALU instructions for 64-bit add in a block of CF depth 0
llvm-svn: 208886
2014-05-15 14:41:54 +00:00
Tom Stellard 365a2b49f2 R600/SI: Use VALU instructions for i1 ops
llvm-svn: 208885
2014-05-15 14:41:50 +00:00
Tim Northover 2509a3fc64 ARM64: print correct aliases for NEON mov & mvn instructions
In all cases, if a "mov" alias exists, it is the canonical form of the
instruction. Now that TableGen can support aliases containing syntax variants,
we can enable them and improve the quality of the asm output.

llvm-svn: 208874
2014-05-15 12:11:02 +00:00
Tim Northover d8d65a69cf TableGen/ARM64: print aliases even if they have syntax variants.
To get at least one use of the change (and some actual tests) in with its
commit, I've enabled the AArch64 & ARM64 NEON mov aliases.

llvm-svn: 208867
2014-05-15 11:16:32 +00:00
Jiangning Liu 09cc564310 [ARM64] Support aggressive fastcc/tailcallopt breaking ABI by popping out argument stack from callee.
llvm-svn: 208837
2014-05-15 01:33:17 +00:00
David Blaikie 9ba7254688 DebugInfo: Sure up subprogram variable list handling with more assertions and fewer conditionals.
Many old tests using prior schemas still had some brokenness here (both
indirect arrays and arrays with single bogus elements). Fixed those up
so they don't hit the new assertions.

Also reduced nesting in some places, etc.

llvm-svn: 208817
2014-05-14 21:52:46 +00:00
Jay Foad a0653a3e6c Rename ComputeMaskedBits to computeKnownBits. "Masked" has been
inappropriate since it lost its Mask parameter in r154011.

llvm-svn: 208811
2014-05-14 21:14:37 +00:00
Christian Pirker 6692e7c116 ARM-BE: test files for vector argument passing
Reviewed at http://reviews.llvm.org/D3766

llvm-svn: 208793
2014-05-14 16:59:44 +00:00
Christian Pirker 85cdab63c4 [ARM64-BE] Fix byte order of CIE and FDE frames for exception handling
Reviewed at http://reviews.llvm.org/D3741

llvm-svn: 208792
2014-05-14 16:51:58 +00:00
Logan Chien 95188b9092 Fix ARM EHABI when function has landingpad and nounwind.
If the function has the landingpad instruction, then the
handlerdata should be emitted even if the function has
the nounwind attribute.  Otherwise, the following code will not
work:

    void test1() noexcept {
      try {
        throw_exception();
      } catch (...) {
        log_unexpected_exception();
      }
    }

This is because cantunwind was incorrectly emitted, leaving the
LSDA unavailable.

llvm-svn: 208791
2014-05-14 16:38:30 +00:00
Logan Chien ba1b6951c3 More test case for r208715.
Commit r208166 caused a regression on ARM EHABI.
This fix has been committed in r208715, and an assertion failure
test case has been committed in r208770.

This commit further extends the unittest so that the actual
value in the handlerdata will be checked.

llvm-svn: 208790
2014-05-14 16:37:32 +00:00
Benjamin Kramer 594f963ea6 X86: If we have an instruction that sets a flag and a zero test on the input of that instruction try to eliminate the test.
For example
	tzcntl	%edi, %ebx
	testl %edi, %edi
	je	.label

can be rewritten into
	tzcntl	%edi, %ebx
	jb 	.label

A minor complication is that tzcnt sets CF instead of ZF when the input
is zero, so we have to rewrite users of the flags from ZF to CF. Currently
we recognize patterns using lzcnt, tzcnt and popcnt.

Differential Revision: http://reviews.llvm.org/D3454

llvm-svn: 208788
2014-05-14 16:14:45 +00:00
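
A hedged C sketch of source that produces the tzcnt + test sequence shown
above (built with -mbmi; the function name is illustrative):

  /* The x == 0 guard becomes the testl/je pair; with this change the
     branch can instead consume the CF flag that tzcnt already set. */
  int first_set_bit(unsigned x, int fallback) {
    if (x == 0)
      return fallback;
    return __builtin_ctz(x);
  }
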
Evgeniy Stepanov b4aa2b422b Regression test for ARM EHABI breakage in r208166.
llvm-svn: 208770
2014-05-14 11:13:31 +00:00
Matt Arsenault 4b0402e317 R600/SI: Try to fix BFE operands when moving to VALU
This was broken by r208479

llvm-svn: 208740
2014-05-13 23:45:50 +00:00
Christian Pirker 39db7ec81f ARMEB: Fix byte order of EH frame unwinding instructions, with modified test file
This commit was already committed as revision rL208689 and discussed in
Phabricator revision D3704.
But the test file was crashing on OS X and windows.

I fixed the test file in the same way as in rL208340.

llvm-svn: 208711
2014-05-13 16:44:30 +00:00
Joey Gouly 12a8bf09d0 [CGP] r205941 changed the logic, so that a cast happens *before* 'Result' is
compared to 'AddrMode.BaseReg'. In the case that 'AddrMode.BaseReg' is
nullptr, 'Result' will also be nullptr, so the cast causes an assertion. We
should use dyn_cast_or_null here to check 'Result' is not null and it is an
instruction.

Bug found by Mats Petersson, and I reduced his IR to get a test case.

llvm-svn: 208705
2014-05-13 15:42:45 +00:00
Rafael Espindola 2e7eceb317 Revert "ARMEB: Fix byte order of EH frame unwinding instructions"
This reverts commit r208689.

The test was crashing on OS X and windows.

llvm-svn: 208704
2014-05-13 15:19:56 +00:00
Christian Pirker ea3514ecdb ARMEB: Fix byte order of EH frame unwinding instructions
llvm-svn: 208689
2014-05-13 11:41:49 +00:00
Weiming Zhao dd83691cc3 Folding into CSEL when there is ZEXT between SETCC and ADD
Normally, patterns like (add x, (setcc cc ...)) will be folded into
(csel x, x+1, not cc). However, if there is a ZEXT after SETCC, they
won't be folded. This patch recognizes the ZEXT and allows the
generation of CSINC.

This patch fixes bug 19680.

llvm-svn: 208660
2014-05-13 00:40:58 +00:00
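
For illustration (function name invented), a minimal C function that produces
the (add x, (zext (setcc ...))) pattern described above:

  /* The comparison yields an i1 that is zero-extended and added to x;
     with this patch the +1 can be selected as a single csinc. */
  long bump_if_less(long x, long a, long b) {
    return x + (a < b);
  }
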
Adam Nemet 5d78558c2b [DAGCombiner] Split up an indexed load if only the base pointer value is live
Right now the load may not get DCE'd because of the side-effect of updating
the base pointer.

This can happen if we lower a read-modify-write of an illegal larger type
(e.g. i48) such that the modification only affects one of the subparts (the
lower i32 part but not the higher i16 part).  See the testcase.

In order to spot the dead load we need to revisit it when SimplifyDemandedBits
decided that the value of the load is masked off.  This is the
CommitTargetLoweringOpt piece.

I checked compile time with ARM64 by sending SPEC bitcode files through llc.
No measurable change.

Fixes <rdar://problem/16031651>

llvm-svn: 208640
2014-05-12 23:00:03 +00:00
Louis Gerbarg b4013235e3 Fix ARM bswap16.ll test on Windows
Windows on ARM only supports thumb mode execution, so we have to
explicitly pick some non-Windows OS to test ARM mode codegen.

llvm-svn: 208638
2014-05-12 22:13:07 +00:00
Reid Kleckner 7a59e0845f Try to fix an SDAG dependence issue with sret
r208453 added support for having sret on the second parameter.  In that
change, the code for copying sret into a virtual register was hoisted
into the loop that lowers formal parameters.  This caused a "Wrong
topological sorting" assertion failure during scheduling when a
parameter is passed in memory.  This change undoes that by creating a
second loop that deals with sret.

I'm worried that this fix is incomplete.  I don't fully understand the
dependence issues.  However, with this change we produce the same DAGs
we used to produce, so if they are broken, they are just as broken as
they have always been.

llvm-svn: 208637
2014-05-12 22:01:27 +00:00
Adam Nemet 63e4b30f79 [Test] Trim unnecessary .c and .cpp from config.suffix in lit.local.cfg
Tested by comparing make check VERBOSE=1 before and after to make sure
no tests are missed.  (VERBOSE=1 prints the list of tests.)

Only one test :( remains where .cpp is required:

tools/llvm-cov/range_based_for.cpp:// RUN: llvm-cov range_based_for.cpp | FileCheck %s --check-prefix=STDOUT

The topic was discussed in this thread:
http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20140428/214905.html

llvm-svn: 208621
2014-05-12 19:57:31 +00:00
Louis Gerbarg efdcf23736 Add support bswap16 to/from memory compiling to rev16 on ARM/Thumb
The current patterns for REV16 miss most __builtin_bswap16() calls because
legalization promotes the operands from loads/stores to i32 and then
truncates/extends them. This patch adds new patterns that catch the resultant
DAGs and codegens them to rev16 instructions. Tests included.

rdar://15353652

llvm-svn: 208620
2014-05-12 19:53:52 +00:00
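
A hedged sketch of the memory-to-memory case the new patterns target (the
function name is illustrative):

  #include <stdint.h>

  /* The 16-bit load and store around __builtin_bswap16 are what
     legalization widens to i32, previously hiding the rev16 opportunity. */
  void bswap16_inplace(uint16_t *p) {
    *p = __builtin_bswap16(*p);
  }
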
Tim Northover ee20caaf82 TableGen: use PrintMethods to print more aliases
llvm-svn: 208607
2014-05-12 18:04:06 +00:00
Matt Arsenault 62b1737081 R600: Add mul24 intrinsics
llvm-svn: 208604
2014-05-12 17:49:57 +00:00
Matt Arsenault 2adca6090f Make SimplifyDemandedBits understand BUILD_PAIR
llvm-svn: 208598
2014-05-12 17:14:48 +00:00
Benjamin Kramer 3b36b72a9c X86: Make sure that we have SSE4.1 before we generate insertps nodes.
PR19721.

llvm-svn: 208552
2014-05-12 13:12:08 +00:00
Christian Pirker 238c7c165b ARM: Implement big endian bit-conversion for NEON type
llvm-svn: 208538
2014-05-12 11:19:20 +00:00
Elena Demikhovsky 8e8fde8e93 AVX-512: changes in intrinsics
1) Changed the gather and scatter intrinsics. They are now aligned with the GCC built-ins. There is no longer a non-masked form; the masked intrinsic receives -1 if all lanes are executed.
2) I changed the function that works with intrinsics inside X86ISelLowering.cpp. I put all intrinsics in one table. I did it for INTRINSICS_W_CHAIN and plan to put all intrinsics from the WO_CHAIN set into the same table in order to avoid the very long "switch". (I wanted to use the static map initialization allowed by C++11, but I wasn't able to compile it on VS2012.)
3) I added gather/scatter prefetch intrinsics.
4) I fixed MRMm encoding for masked instructions.

llvm-svn: 208522
2014-05-12 07:18:51 +00:00
Hal Finkel 0d8db46799 [PowerPC] Add global named register support
Support for the intrinsics that read from and write to global named registers
is added for r1, r2 and r13 (depending on the subtarget).

llvm-svn: 208509
2014-05-11 19:29:11 +00:00
Hal Finkel c4c6c87666 [PowerPC] On PPC32, 128-bit shifts might be runtime calls
The counter-loops formation pass needs to know what operations might be
function calls (because they can't appear in counter-based loops). On PPC32,
128-bit shifts might be runtime calls (even though you can't use __int128 on
PPC32, it seems that SROA might form them).

Fixes PR19709.

llvm-svn: 208501
2014-05-11 16:23:29 +00:00
Filipe Cabecinhas 0e3d1cb5d6 Fixed a bug when lowering build_vector (PR19694)
When lowering build_vector to an insertps, we would still lower it, even
if the source vectors weren't v4x32. This would break on avx if the source
was a v8x32. We now check the type of the source vectors.

llvm-svn: 208487
2014-05-11 08:12:56 +00:00
Vincent Lejeune 29c0c210fc R600/SI: Fold fabs/fneg into src input modifier
llvm-svn: 208480
2014-05-10 19:18:39 +00:00
Vincent Lejeune 94af31fbe8 R600/SI: Prettier display of input modifiers
llvm-svn: 208479
2014-05-10 19:18:33 +00:00
Tim Northover 55b3e22927 ARM64: fix SELECT_CC lowering in absence of NaNs.
We were swapping the true & false results while testing for FMAX/FMIN,
but not putting them back to the original state if the later checks
failed.

Should fix PR19700.

llvm-svn: 208469
2014-05-10 07:37:50 +00:00
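
As a hedged example (assuming a no-NaNs setting such as -ffast-math; the
function name is invented), the compare-and-select shape this FMAX/FMIN
matching inspects:

  /* Lowered through SELECT_CC; when NaNs can be ignored the backend tries
     to match an fmax-style instruction, and the bug was in restoring the
     swapped true/false operands when that match failed. */
  double fast_max(double a, double b) {
    return a > b ? a : b;
  }
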
Reid Kleckner c487d73f41 Revert "[ms-cxxabi] Add a new calling convention that swaps 'this' and 'sret'"
This reverts commit r200561.

This calling convention was an attempt to match the MSVC C++ ABI for
methods that return structures by value.  This solution didn't scale,
because it would have required splitting every CC available on Windows
into two: one for methods and one for free functions.

Now that we can put sret on the second arg (r208453), and Clang does
that (r208458), revert this hack.

llvm-svn: 208459
2014-05-09 22:56:42 +00:00
Reid Kleckner 7941856445 Allow sret on the second parameter as well as the first
MSVC always places the implicit sret parameter after the implicit this
parameter of instance methods.  We used to handle this for
x86_thiscallcc by allocating the sret parameter on the stack and leaving
the this pointer in ecx, but that doesn't handle alternative calling
conventions like cdecl, stdcall, fastcall, or the win64 convention.

Instead, change the verifier to allow sret on the second parameter.

This also requires changing the Mips and X86 backends to return the
argument with the sret parameter, instead of assuming that the sret
parameter comes first.

The Sparc backend also returns sret parameters in a register, but I
wasn't able to update it to handle secondary sret parameters.  It
currently calls report_fatal_error if you feed it an sret in the second
parameter.

Reviewers: rafael.espindola, majnemer

Differential Revision: http://reviews.llvm.org/D3617

llvm-svn: 208453
2014-05-09 22:32:13 +00:00
Reid Kleckner d0eda92845 Fix ARM intrinsics-overflow.ll test on Windows
Windows on ARM only supports thumb mode execution, so we have to
explicitly pick some non-Windows OS to test ARM mode codegen.

llvm-svn: 208448
2014-05-09 21:52:48 +00:00
Louis Gerbarg 3342bf1451 Add custom lowering for add/sub with overflow intrinsics to ARM
This patch adds support to ARM for custom lowering of the
llvm.{u|s}add.with.overflow.i32 intrinsics for i32/i64. This is particularly useful
for handling idiomatic saturating math functions as generated by
InstCombineCompare.

Test cases included.

rdar://14853450

llvm-svn: 208435
2014-05-09 17:02:49 +00:00
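
A hedged sketch of a saturating add written with Clang's checked-arithmetic
builtin (the builtin name is an assumption here; it lowers to
llvm.sadd.with.overflow.i32, the intrinsic this patch custom-lowers):

  #include <limits.h>

  /* __builtin_sadd_overflow reports whether the signed add wrapped;
     signed addition can only overflow when both operands share a sign,
     so the sign of 'a' picks the saturation bound. */
  int sat_add(int a, int b) {
    int r;
    if (__builtin_sadd_overflow(a, b, &r))
      return a > 0 ? INT_MAX : INT_MIN;
    return r;
  }
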
Tom Stellard 4c00b52e1a R600/SI: Teach SIInstrInfo::moveToVALU() how to move S_LOAD_*_IMM instructions
llvm-svn: 208432
2014-05-09 16:42:22 +00:00
Tom Stellard d6cb8e8efd R600/SI: Fix SMRD pattern for offsets > 32 bits
We were dropping the high bits of 64-bit immediate offsets.

llvm-svn: 208431
2014-05-09 16:42:21 +00:00
Tom Stellard a2acad785a R600: Expand i64 SELECT_CC
llvm-svn: 208430
2014-05-09 16:42:19 +00:00
Tom Stellard afa8b532b1 R600: Move MIN/MAX matching from LowerOperation() to PerformDAGCombine()
llvm-svn: 208429
2014-05-09 16:42:16 +00:00
James Molloy dd1aa14a21 Attempt to pacify the bots - this commit requires asserts.
llvm-svn: 208424
2014-05-09 16:20:53 +00:00
Oliver Stannard c24f2171ca ARM: HFAs must be passed in consecutive registers
When using the ARM AAPCS, HFAs (Homogeneous Floating-point Aggregates) must
be passed in a block of consecutive floating-point registers, or on the stack.
This means that unused floating-point registers cannot be back-filled with
part of an HFA, however this can currently happen. This patch, along with the
corresponding clang patch (http://reviews.llvm.org/D3083) prevents this.

llvm-svn: 208413
2014-05-09 14:01:47 +00:00
Daniel Sanders b7f1c6ff3e [mips][mips64r6] Add experimental support for MIPS32r6 and MIPS64r6
Summary:
Adds MIPS32r6/MIPS64r6 and checks the compatibility requirements for these
processors.

I've also included comments to describe removed and re-encoded instructions,
along with placeholder def's for the new instructions but there are no
functional changes to codegen at this point.

Reviewers: jkolek, vmedic

Reviewed By: vmedic

Differential Revision: http://reviews.llvm.org/D3622

llvm-svn: 208399
2014-05-09 09:46:21 +00:00
Saleem Abdulrasool 40bca0afab ARM: support PIC on Windows on ARM
Handle lowering of global addresses for PIC mode compilation on Windows.  Always
use the movw/movt load to load the address as Windows on ARM requires ARMv7+ and
is a pure Thumb environment.

llvm-svn: 208385
2014-05-09 00:58:32 +00:00
Filipe Cabecinhas e4b482b3ed Optimize shufflevector that copies an i64/f64 and zeros the rest.
Summary:
Also ran clang-format on the function. The code added is the last else
if block.

Reviewers: nadav, craig.topper, delena

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D3518

llvm-svn: 208372
2014-05-08 23:16:08 +00:00
Justin Bogner 7833d9facb test/CodeGen: Check that the correct register is used in a store
This tightens up r208351 to ensure that a store is fed with the
correct value.

Thanks to Quentin Colombet for spotting this!

llvm-svn: 208368
2014-05-08 22:45:07 +00:00
Justin Bogner 1de42075fc Make a CodeGen test more robust against vector register selection
llvm-svn: 208351
2014-05-08 18:53:56 +00:00
Andrea Di Biagio e85ba4df52 [X86] Add target specific combine rules to fold SSE2/AVX2 packed arithmetic shift intrinsics.
This patch teaches the backend how to combine packed SSE2/AVX2 arithmetic shift
intrinsics.

The rules are:
 - Always fold a packed arithmetic shift by zero to its first operand;
 - Convert a packed arithmetic shift intrinsic dag node into a ISD::SRA only if
   the shift count is known to be smaller than the vector element size.

This patch also teaches the function 'getTargetVShiftByConstNode' how to fold
target specific vector shifts by zero.

Added two new tests to verify that the DAGCombiner is able to fold
sequences of SSE2/AVX2 packed arithmetic shift calls.

llvm-svn: 208342
2014-05-08 17:44:04 +00:00
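
Two hedged examples of the rules above, written with the SSE2 intrinsics
(function names invented):

  #include <emmintrin.h>

  /* Rule 1: an arithmetic shift by zero folds to the first operand. */
  __m128i shift_by_zero(__m128i v) {
    return _mm_srai_epi32(v, 0);
  }

  /* Rule 2: a count known to be smaller than the 32-bit element size can
     be turned into a plain ISD::SRA node. */
  __m128i shift_by_two(__m128i v) {
    return _mm_srai_epi32(v, 2);
  }
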
Saleem Abdulrasool 39a939d7d2 test: fix test on Windows
When building on Windows, the default target is Windows.  Windows on ARM does
not support ARM mode compilation, resulting in test failures.  Simply specify a
triple to ensure that we are testing the correct behaviour.

llvm-svn: 208340
2014-05-08 17:11:29 +00:00
Christian Pirker b5728191c2 ARM big endian function argument passing
llvm-svn: 208316
2014-05-08 14:06:24 +00:00
James Molloy c42ea14f74 [ARM64-BE] Teach fast-isel about how to set up sub-word stack arguments for big endian calls.
SelectionDAG already knows about this, but fast-isel was ignorant.

llvm-svn: 208307
2014-05-08 12:53:50 +00:00
Tim Northover 18f8bb84fa ARM64: make sure FastISel emits SSA MachineInstrs
We need to use a temporary register for a 2-step operation like REM.

llvm-svn: 208297
2014-05-08 10:30:56 +00:00
Hao Liu 1187a3d8db AArch64/ARM64: Port NEON post-increment load/store with 2/3/4 vectors to ARM64 backend.
llvm-svn: 208284
2014-05-08 07:38:13 +00:00
Filipe Cabecinhas 095d9d573a Lower certain build_vectors to insertps instructions
Summary:
Vectors built with zeros and elements in the same order as another
(source) vector are optimized to be built using a single insertps
instruction.
Also optimize when we move one element in a vector to a different place
in that vector while zeroing out some of the other elements.

Further optimizations are possible, described in TODO comments.
I will be implementing at least some of them in the near future.

Added some tests for different cases where this optimization triggers.

Reviewers: nadav, delena, craig.topper

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D3521

llvm-svn: 208271
2014-05-08 00:25:16 +00:00
Quentin Colombet 60cdff65c7 [X86] Add a test case for r208252.
Prior to r208252, the FMA 231 family was marked as isCommutable. However the
memory variants of this family are not commutable. Therefore, we did not
implemented the findCommutedOpIndices for those variants and missed that
the default implementation (more or less: commute indices 1 and 2) was
firing behind our back.
As a result, as demonstrated in the test case before the fix, we were
transforming a = b * c + a into a = a * c + b.

I.e., before r208252 we were generating for this test case:
vmovaps %xmm0, %xmm1
vmovss (%rsi), %xmm0
vfmadd231ss (%rdi), %xmm1, %xmm0

Instead of:
vmovss (%rsi), %xmm1
vfmadd231ss (%rdi), %xmm1, %xmm0

<rdar://problem/16800495> 

llvm-svn: 208260
2014-05-07 22:52:58 +00:00
Chad Rosier 788e5e3d7c [ARM64][fast-isel] Disable target specific optimizations at -O0. Functionally,
this patch disables the dead register elimination pass and the load/store pair
optimization pass at -O0.  The ILP optimizations don't require the optimization
level to be checked because the call to addILPOpts is predicated with the
necessary check.  The AdvSIMDScalar pass is disabled by default at all
optimization levels.  This patch leaves that pass disabled by default.

Also, move command-line options into ARM64TargetMachine.cpp and add a few
additional flags to aid in debugging.  This fixes an issue with the
-debug-pass=Structure flag where passes were printed, but not actually run
(i.e., AdvSIMDScalar pass).

llvm-svn: 208223
2014-05-07 16:41:55 +00:00
Tim Northover 88a51d983e AArch64/ARM64: optimise vector selects & enable test
When performing a scalar comparison that feeds into a vector select,
it's actually better to do the comparison on the vector side: the
scalar route would be "CMP -> CSEL -> DUP", the vector is "CM -> DUP"
since the vector comparisons are all mask based.

llvm-svn: 208210
2014-05-07 14:10:27 +00:00
James Molloy d3c401a2d0 [ARM64-BE] Fix fast-isel, and add appropriate RUN lines to appropriate tests.
llvm-svn: 208200
2014-05-07 12:33:55 +00:00
James Molloy 36132057da [ARM64-BE] Fix variable-argument saving.
llvm-svn: 208199
2014-05-07 12:33:48 +00:00
James Molloy 4049e4fd77 [ARM64-BE] Implement the lane-twiddling logic at AAPCS boundaries for big endian.
The AAPCS states that values passed in registers must have a value as though
they had been loaded with "LDR". LDR is equivalent to "LD1.64 vX.1D" - that is,
loading scalars to vector registers and loading 1-element vectors are equivalent.

The logic implemented here is to ensure that at all call boundaries and during
formal argument lowering all vectors are treated as their bitwidth-based floating
point scalar counterpart, which is always one of f64 or f128 (v2i32 -> f64,
v4i32 -> f128 etc). A BITCAST is inserted so that the appropriate REV will be
generated during code generation.

llvm-svn: 208198
2014-05-07 12:33:41 +00:00
James Molloy 30e0e11eb4 [ARM64-BE] Implement the crazy bitcast handling for big endian vectors.
Because we've canonicalised on using LD1/ST1, every time we do a bitcast
between vector types we must do an equivalent lane reversal.

Consider a simple memory load followed by a bitconvert then a store.
  v0 = load v2i32
  v1 = BITCAST v2i32 v0 to v4i16
       store v4i16 v1

In big endian mode every memory access has an implicit byte swap. LDR and
STR do a 64-bit byte swap, whereas LD1/ST1 do a byte swap per lane - that
is, they treat the vector as a sequence of elements to be byte-swapped.
The two pairs of instructions are fundamentally incompatible. We've decided
to use LD1/ST1 only to simplify compiler implementation.

LD1/ST1 perform the equivalent of a sequence of LDR/STR + REV. This makes
the original code sequence:

  v0 = load v2i32
  v1 = REV v2i32                  (implicit)
  v2 = BITCAST v2i32 v1 to v4i16
  v3 = REV v4i16 v2               (implicit)
       store v4i16 v3

But this is now broken - the value stored is different to the value loaded
due to lane reordering. To fix this, on every BITCAST we must perform two
other REVs:

  v0 = load v2i32
  v1 = REV v2i32                  (implicit)
  v2 = REV v2i32
  v3 = BITCAST v2i32 v2 to v4i16
  v4 = REV v4i16
  v5 = REV v4i16 v4               (implicit)
       store v4i16 v5

This means an extra two instructions, but actually in most cases the two REV
instructions can be combined into one. For example:
  (REV64_2s (REV64_4h X)) === (REV32_4h X)

There is also no 128-bit REV instruction. This must be synthesized with an
EXT instruction.

Most bitconverts require some sort of conversion. The only exceptions are:
  a) Identity conversions -  vNfX <-> vNiX
  b) Single-lane-to-scalar - v1fX <-> fX or v1iX <-> iX

Even though there are hundreds of changed lines, I have a fairly high confidence
that they are somewhat correct. The changes to add two REV instructions per
bitcast were pretty mechanical, and once I'd done that I threw the resulting
.td at a script I wrote which combined the two REVs together (and added
an EXT instruction, for f128) based on an instruction description I gave it.

This was much less prone to error than doing it all manually, plus my brain
would not just have melted but would have vapourised.

llvm-svn: 208194
2014-05-07 11:28:53 +00:00
James Molloy ccc7f982c1 [ARM64-BE] Make big endian (scalar) argument passing work correctly.
This completes the port of r204814 (cpirker "AArch64_BE function argument
passing for ARM ABI") from AArch64 to ARM64, and fixes a bunch of issues
found during later development along the way. The biggest of these was
that the alignment fixup logic wasn't replicated into all the places it
should have been.

llvm-svn: 208192
2014-05-07 11:28:36 +00:00
Tim Northover df723343fa AArch64/ARM64: run test on ARM64 too.
llvm-svn: 208188
2014-05-07 10:47:04 +00:00
Tim Northover 76a94e6ead AArch64/ARM64: put annotation in test
It makes finding already covered tests much easier with "grep -L
arm64".

llvm-svn: 208187
2014-05-07 10:47:00 +00:00
Joerg Sonnenberger cf86ce136c Allow using normal .eh_frame based unwinding on ARM. Use the same
encodings as x86. Use this exception model for NetBSD.

llvm-svn: 208166
2014-05-07 07:49:34 +00:00
Saleem Abdulrasool acd0338c61 ARM: fix WoA PEI instruction selection
The ARM::BLX instruction is an ARM mode instruction.  The Windows on ARM target
is limited to Thumb instructions.  Correctly use the thumb mode tBLXr
instruction.  This would manifest as an errant write into the object file, since
the instruction is 4 bytes long rather than 2.  The result was a corrupted
object file that eventually produced an executable which crashed at runtime.
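
A hedged sketch of the kind of opcode choice involved (ARMSubtarget and the
ARM::* opcode names come from the ARM backend; this fragment is not the literal
patch):

  // Windows on ARM is Thumb-only, so an indirect call must use the Thumb
  // register-BLX encoding rather than the ARM-mode one.
  static unsigned getIndirectCallOpcode(const ARMSubtarget &ST) {
    return ST.isThumb() ? ARM::tBLXr : ARM::BLX;
  }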

llvm-svn: 208152
2014-05-07 03:03:27 +00:00
Joerg Sonnenberger 818e725158 If a function needs a frame pointer, but r11 (aka fp) has not been used,
remove it from the list of unspilled registers. Otherwise the following
attempt to keep the stack aligned by picking an extra GPR register to
spill will not work as it picks up r11.
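
A rough sketch of the intent (the container and helper names here are
illustrative assumptions, not the exact ARMFrameLowering code):

  #include "llvm/ADT/SmallVector.h"
  #include <algorithm>
  using namespace llvm;

  // Once r11 has been claimed as the frame pointer it must no longer appear
  // in the unspilled callee-saved GPR list, or the stack-alignment logic may
  // pick it as the extra register to spill.
  static void dropFramePointer(SmallVectorImpl<unsigned> &UnspilledGPRs,
                               unsigned FramePtrReg) {
    UnspilledGPRs.erase(std::remove(UnspilledGPRs.begin(), UnspilledGPRs.end(),
                                    FramePtrReg),
                        UnspilledGPRs.end());
  }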

llvm-svn: 208129
2014-05-06 20:43:01 +00:00
Andrea Di Biagio c14ccc9184 [X86] Improve the lowering of BITCAST dag nodes from type f64 to type v2i32 (and vice versa).
Before this patch, for an ISD::BITCAST dag node from type MVT::f64 to type
MVT::v2i32, the backend always emitted a store+load sequence to bitconvert
the f64 input operand to i64. The resulting i64 node was then used to build
a v2i32 vector.

With this patch, the backend now produces a cheaper SCALAR_TO_VECTOR from
MVT::f64 to MVT::v2f64. That SCALAR_TO_VECTOR is then followed by a "free"
bitcast to type MVT::v4i32. The elements of the resulting
v4i32 are then extracted to build a v2i32 vector (which is illegal and
therefore promoted to MVT::v2i64).

This is in general cheaper than emitting a stack store+load sequence
to bitconvert the operand from type f64 to type i64.
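
A rough sketch of the shape of the new lowering (SelectionDAG calls simplified
with 2014-era signatures assumed; Src, DAG and dl stand for the bitcast operand,
the SelectionDAG and the debug location; this is not the exact code in
X86ISelLowering.cpp):

  // Move the f64 into a vector register, reinterpret it as v4i32, and pull
  // the two live i32 halves out - no stack round trip needed.
  SDValue Vec  = DAG.getNode(ISD::SCALAR_TO_VECTOR, dl, MVT::v2f64, Src);
  SDValue Cast = DAG.getNode(ISD::BITCAST, dl, MVT::v4i32, Vec);
  SDValue Lo   = DAG.getNode(ISD::EXTRACT_VECTOR_ELT, dl, MVT::i32, Cast,
                             DAG.getIntPtrConstant(0));
  SDValue Hi   = DAG.getNode(ISD::EXTRACT_VECTOR_ELT, dl, MVT::i32, Cast,
                             DAG.getIntPtrConstant(1));
  SDValue Res  = DAG.getNode(ISD::BUILD_VECTOR, dl, MVT::v2i32, Lo, Hi);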

llvm-svn: 208107
2014-05-06 17:09:03 +00:00
Renato Golin c7aea40ec6 Implementing named register intrinsics
This patch implements the infrastructure to use named register constructs in
programs that need access to specific registers (bare metal, kernels, etc).

So far, only the stack pointer is supported as a technology preview, but as it
is, the intrinsic can already support all non-allocatable registers from any
architecture.

llvm-svn: 208104
2014-05-06 16:51:25 +00:00
Rafael Espindola 52dc5d828f Special case aliases in GlobalValue::getAlignment.
An alias has the address of what it points to, so it also has the same
alignment.

This allows a few optimizations to see past aliases for free.
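
A minimal sketch of the special case (member-function names are 2014-era
assumptions, so treat this as approximate):

  #include "llvm/IR/GlobalAlias.h"
  #include "llvm/IR/GlobalValue.h"
  #include "llvm/Support/Casting.h"
  using namespace llvm;

  // An alias has the address of its aliasee, so it also has its alignment.
  static unsigned getAlignmentThroughAliases(const GlobalValue *GV) {
    if (const GlobalAlias *GA = dyn_cast<GlobalAlias>(GV))
      return GA->getAliasedGlobal()->getAlignment();
    return GV->getAlignment();
  }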

llvm-svn: 208103
2014-05-06 16:48:58 +00:00
Kevin Qin 1353c3405d [ARM64] Enable alignment control option in front-end for ARM64.
This is the llvm part of the modification.

llvm-svn: 208074
2014-05-06 09:48:52 +00:00
Reid Kleckner 4a406d32e9 Fix i128 div/mod on mingw64
The Win64 docs are very clear that anything larger than 8 bytes is
passed by reference, and GCC MinGW64 honors that for __modti3 and
friends.
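
A hedged sketch of the rule the fix relies on (the helper name is hypothetical):

  #include "llvm/CodeGen/ValueTypes.h"
  using namespace llvm;

  // Win64: anything larger than 8 bytes is passed by reference, so the i128
  // operands of __modti3/__divti3 must be handed over as pointers to stack
  // temporaries rather than in registers.
  static bool win64PassesByReference(EVT ArgVT) {
    return ArgVT.getSizeInBits() > 64;
  }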

Patch by Jameson Nash!

llvm-svn: 208029
2014-05-06 01:20:42 +00:00
Tom Stellard 45b3dcd35b R600: Expand i64 ISD:SUB
llvm-svn: 208005
2014-05-05 21:47:15 +00:00
Filipe Cabecinhas fe59062b75 Revert "Optimize shufflevector that copies an i64/f64 and zeros the rest."
This reverts commit 207992. I misread the phab number on the LGTM.

llvm-svn: 207993
2014-05-05 19:40:36 +00:00
Filipe Cabecinhas 263d98c19f Optimize shufflevector that copies an i64/f64 and zeros the rest.
Summary:
Also ran clang-format on the function. The code added is the last else
if block.

Reviewers: nadav, craig.topper

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D3518

llvm-svn: 207992
2014-05-05 19:36:28 +00:00
Michael Zolotukhin e37f33c466 Move test from r207969 to another folder and rename it.
llvm-svn: 207984
2014-05-05 18:10:15 +00:00
Rafael Espindola 595f54205c Remove the -disable-cfi option.
This also adds a release note about it. If this stays I will clean up MC
next week.

llvm-svn: 207977
2014-05-05 17:33:26 +00:00
Rafael Espindola 82ad91915e Modify test to not use -disable-cfi.
llvm-svn: 207974
2014-05-05 16:47:07 +00:00
Rafael Espindola f463b63448 Convert a CodeGen test into a MC test.
llvm-svn: 207971
2014-05-05 15:34:13 +00:00
Saleem Abdulrasool e8a7afef86 CodeGen: correct memset emission for WoA
Windows on ARM does not conform to AEABI.  However, memset would be emitted
using the AEABI signature, resulting in inverted parameters.  Handle this
special case appropriately.
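
A sketch of the signature mismatch being handled (simplified; the operand names
are assumptions, not the actual ARMISelLowering change): AEABI's __aeabi_memset
takes (dest, n, c), while ISO C memset takes (dest, c, n).

  #include "llvm/ADT/SmallVector.h"
  #include "llvm/CodeGen/SelectionDAGNodes.h"
  using namespace llvm;

  // Emit the call operands in the order the callee expects; a non-AEABI
  // target such as Windows on ARM must keep the ISO C order.
  static void orderMemsetOperands(bool TargetIsAEABI, SDValue Dst, SDValue Val,
                                  SDValue Size, SmallVectorImpl<SDValue> &Ops) {
    Ops.push_back(Dst);
    if (TargetIsAEABI) {
      Ops.push_back(Size); // __aeabi_memset(dest, n, c)
      Ops.push_back(Val);
    } else {
      Ops.push_back(Val);  // memset(dest, c, n)
      Ops.push_back(Size);
    }
  }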

llvm-svn: 207943
2014-05-04 23:13:21 +00:00
Saleem Abdulrasool 9c4716e4b6 CodeGen: strengthen WoA AEABI avoidance tests
Add additional test cases for WoA AEABI avoidance checking.

llvm-svn: 207942
2014-05-04 23:13:18 +00:00
Elena Demikhovsky e73333a50f AVX-512: minor change in rndscale intrinsic
llvm-svn: 207937
2014-05-04 13:35:37 +00:00
Saleem Abdulrasool 82b69fa105 X86: repair export compatibility with MinGW/cygwin
Both MinGW and cygwin (i686) construct export directives without the global
leader prefix.  This is mostly due to the fact that they use GNU ld which does
not correctly handle the export directive.  This apparently has been broken
for a while.  However, this was recently reported as being broken by
mingwandroid and diorcety of the msys2 project.

Remove the global leader prefix if targeting MinGW or cygwin, otherwise, retain
the global leader prefix.  Add an explicit test for cygwin's behaviour of export
directives.

llvm-svn: 207926
2014-05-04 00:03:48 +00:00
Joey Gouly b0afd1b929 [ARM64] Correctly select ANDWri in FastISel.
http://reviews.llvm.org/D3598

llvm-svn: 207917
2014-05-03 17:27:06 +00:00
Tim Northover 820e041a3c DAGCombine: prevent formation of illegal ConstantFP nodes.
llvm-svn: 207850
2014-05-02 17:25:02 +00:00
Tom Stellard 3dbf1f8df0 R600: Expand vector sin and cos.
v2: move code to AMDGPUISelLowering.cpp
    squash with tests (both EG and SI)

Signed-off-by: Jan Vesely <jan.vesely@rutgers.edu>
llvm-svn: 207845
2014-05-02 15:41:47 +00:00
Tom Stellard 605e116e8e R600: Expand TruncStore i64 -> {i16,i8}
llvm-svn: 207844
2014-05-02 15:41:46 +00:00
Tim Northover d7360900a8 AArch64/ARM64: add patterns for post-indexed ST1 ops.
llvm-svn: 207840
2014-05-02 14:54:27 +00:00
Tim Northover d0b07e133b AArch64/ARM64: support indexed loads/stores on vector types.
While post-indexed LD1/ST1 instructions do exist for vector loads,
this patch makes use of the more flexible addressing-modes in LDR/STR
instructions.

llvm-svn: 207838
2014-05-02 14:54:15 +00:00
Benjamin Kramer 42d262f410 Allow SelectionDAG::FoldConstantArithmetic to work when it's called with a vector VT but scalar values.
llvm-svn: 207835
2014-05-02 12:35:22 +00:00
Michael J. Spencer 1f10c5ea94 [IR] Make {extract,insert}element accept an index of any integer type.
Given the following C code, llvm currently generates suboptimal code for
x86-64:

__m128 bss4( const __m128 *ptr, size_t i, size_t j )
{
    float f = ptr[i][j];
    return (__m128) { f, f, f, f };
}

=================================================

define <4 x float> @_Z4bss4PKDv4_fmm(<4 x float>* nocapture readonly %ptr, i64 %i, i64 %j) #0 {
  %a1 = getelementptr inbounds <4 x float>* %ptr, i64 %i
  %a2 = load <4 x float>* %a1, align 16, !tbaa !1
  %a3 = trunc i64 %j to i32
  %a4 = extractelement <4 x float> %a2, i32 %a3
  %a5 = insertelement <4 x float> undef, float %a4, i32 0
  %a6 = insertelement <4 x float> %a5, float %a4, i32 1
  %a7 = insertelement <4 x float> %a6, float %a4, i32 2
  %a8 = insertelement <4 x float> %a7, float %a4, i32 3
  ret <4 x float> %a8
}

=================================================

        shlq    $4, %rsi
        addq    %rdi, %rsi
        movslq  %edx, %rax
        vbroadcastss    (%rsi,%rax,4), %xmm0
        retq

=================================================

The movslq is unneeded, but is present because of the trunc to i32 and then
sext back to i64 that the backend adds for vbroadcastss.

We can't remove it because it changes the meaning. The IR that clang
generates is already suboptimal. What clang really should emit is:

  %a4 = extractelement <4 x float> %a2, i64 %j

This patch makes that legal. A separate patch will teach clang to do it.

Differential Revision: http://reviews.llvm.org/D3519

llvm-svn: 207801
2014-05-01 22:12:39 +00:00
Reed Kotler bab3f23da6 Add basic functionality for assignment of ints.
This creates a lot of core infrastructure to which quite a bit more mips
fast-isel functionality can be added with little effort.

Test Plan: simplestore.ll

Reviewers: dsanders

Reviewed By: dsanders

Differential Revision: http://reviews.llvm.org/D3527

llvm-svn: 207790
2014-05-01 20:39:21 +00:00
Matt Arsenault 06028dd7be R600/SI: Fix verifier error with pseudo store instructions.
Use i32 instead of specifying SReg_32. When the register is
the pseudo INDIRECT_BASE_ADDR, specifying SReg_32 would give a bogus
verifier error.

llvm-svn: 207770
2014-05-01 16:37:52 +00:00
Bradley Smith 3567cc1b42 [ARM64] Prefer generation of bzero on Darwin only
llvm-svn: 207760
2014-05-01 13:11:59 +00:00
Tim Northover 534acbdf73 AArch64/ARM64: print BFM instructions as BFI or BFXIL
The canonical form of the BFM instruction is always one of the more explicit
extract or insert operations, which makes reading output much easier.

llvm-svn: 207752
2014-05-01 12:29:38 +00:00
Weiming Zhao 7f6daf1799 [ARM64] Prevent bit extraction from being adjusted by a following shift
For a pattern like ((x >> C1) & Mask) << C2, the DAG combiner may convert it
into (x >> (C1-C2)) & (Mask << C2), which makes pattern matching of ubfx
more difficult.
For example:
Given
  %shr = lshr i64 %x, 4
  %and = and i64 %shr, 15
  %arrayidx = getelementptr inbounds [8 x [64 x i64]]* @arr, i64 0, i64 2, i64 %and
  %0 = load i64* %arrayidx
With the current shift folding, it takes 3 instrs to compute the base address:
  lsr x8, x0, #1
  and x8, x8, #0x78
  add x8, x9, x8

If using ubfx, it only needs 2 instrs:
  ubfx  x8, x0, #4, #4
  add x8, x9, x8, lsl #3

This fixes bug 19589

llvm-svn: 207702
2014-04-30 21:07:24 +00:00
Michael Zolotukhin 1f4a960ccf [X86] Never hoist the shift value of a shift instruction.
There is no need to check if we want to hoist the immediate value of a
shift instruction. Simply return TCC_Free right away.

This change is like r206101, but for X86.
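
A hedged sketch of the TTI hook's behaviour (simplified from the shape of
getIntImmCost; not the literal X86 change):

  #include "llvm/ADT/APInt.h"
  #include "llvm/Analysis/TargetTransformInfo.h"
  #include "llvm/IR/Instruction.h"
  #include "llvm/IR/Type.h"
  using namespace llvm;

  // Immediates used as shift amounts never need to be materialized in a
  // register, so report them as free and the hoisting pass leaves them alone.
  unsigned getIntImmCost(unsigned Opcode, unsigned Idx, const APInt &Imm,
                         Type *Ty) {
    switch (Opcode) {
    case Instruction::Shl:
    case Instruction::LShr:
    case Instruction::AShr:
      if (Idx == 1) // operand 1 is the shift amount
        return TargetTransformInfo::TCC_Free;
      break;
    default:
      break;
    }
    return TargetTransformInfo::TCC_Basic; // fall back to a generic cost
  }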

rdar://problem/16190769

llvm-svn: 207692
2014-04-30 19:17:32 +00:00
Tim Northover a8c577e454 ARM64: print fp immediates without using scientific notation.
llvm-svn: 207669
2014-04-30 16:13:34 +00:00
Tom Stellard 1bd80725b3 R600/SI: Use VALU instructions for copying i1 values
We can't use SALU instructions for this since they ignore the EXEC mask
and are always executed.

This fixes several OpenCV tests.

llvm-svn: 207661
2014-04-30 15:31:33 +00:00
Tom Stellard 0c354f25c9 R600/SI: Teach moveToVALU how to handle some SMRD instructions
llvm-svn: 207660
2014-04-30 15:31:29 +00:00
Chad Rosier 864e35db0a [ARM64][fast-isel] Fast-isel doesn't know how to handle f128.
llvm-svn: 207659
2014-04-30 15:29:57 +00:00
Sasa Stankovic 7b061a42b1 [mips] Fix MipsLongBranch pass to work when the offset from the branch to the
target cannot be determined accurately. This is the case for NaCl where the
sandboxing instructions are added in the MC layer, after the MipsLongBranch pass.
It is also the case when the code has inline assembly. Instead of calculating
offset in the MipsLongBranch pass, use %hi(sym1 - sym2) and %lo(sym1 - sym2)
expressions that are resolved during the fixup.
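
A sketch of how such an operand can be built as a relocatable MC expression
(2014-era factory names; Sym1, Sym2 and Ctx are assumed in-scope MCSymbol
pointers and the MCContext, and the real pass additionally wraps this in the
%hi/%lo target flag):

  #include "llvm/MC/MCContext.h"
  #include "llvm/MC/MCExpr.h"
  using namespace llvm;

  // sym1 - sym2 stays symbolic here and is only resolved when the fixup is
  // applied, so the pass never has to know the exact distance itself.
  const MCExpr *Diff = MCBinaryExpr::CreateSub(
      MCSymbolRefExpr::Create(Sym1, Ctx),
      MCSymbolRefExpr::Create(Sym2, Ctx), Ctx);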

This patch also deletes the microMIPS test file test/CodeGen/Mips/micromips-long-branch.ll
and implements microMIPS CHECKs in a much simpler way in the file
test/CodeGen/Mips/longbranch.ll, together with MIPS32 and MIPS64.

llvm-svn: 207656
2014-04-30 15:06:25 +00:00
Tim Northover 0ac99404f0 ARM64: print lsr instead of lsrv for variable shifts (etc)
The canonical syntax for shifts by a variable amount does not end with 'v', but
that syntax should be supported as an alias (presumably for legacy reasons).

llvm-svn: 207649
2014-04-30 13:37:07 +00:00
Tim Northover 20ad359b77 AArch64/ARM64: use HS instead of CS & LO instead of CC.
On instructions using the NZCV register, a couple of conditions have dual
representations: HS/CS and LO/CC (meaning unsigned-higher-or-same/carry-set and
unsigned-lower/carry-clear). The first of these is more descriptive in most
circumstances, so we should print it.

llvm-svn: 207644
2014-04-30 13:14:03 +00:00
Daniel Sanders e296a0fce5 [mips][msa] Fix vector insertions where the index is variable
Summary:
This isn't supported directly so we rotate the vector by the desired number of
elements, insert to element zero, then rotate back.

The i64 case generates rather poor code on MIPS32. There is an obvious
optimisation to be made in future (do both insert.w's inside a shared 
rotate/unrotate sequence) but for now it's sufficient to select valid code
instead of aborting.
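
A scalar stand-in for the idea (plain C++, not MSA code): when only lane 0 can
be written directly, rotate the wanted lane to position 0, insert, then rotate
back.

  #include <algorithm>
  #include <array>
  #include <cstddef>
  #include <cstdio>

  template <std::size_t N>
  void insertViaRotate(std::array<int, N> &V, std::size_t Idx, int Val) {
    std::rotate(V.begin(), V.begin() + Idx, V.end());           // lane Idx -> lane 0
    V[0] = Val;                                                 // insert to element zero
    std::rotate(V.begin(), V.begin() + (N - Idx) % N, V.end()); // rotate back
  }

  int main() {
    std::array<int, 4> V = {10, 11, 12, 13};
    insertViaRotate(V, 2, 99);
    for (int X : V)
      std::printf("%d ", X); // prints: 10 11 99 13
    std::printf("\n");
    return 0;
  }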

Depends on D3536

Reviewers: matheusalmeida

Reviewed By: matheusalmeida

Differential Revision: http://reviews.llvm.org/D3537

llvm-svn: 207640
2014-04-30 12:09:32 +00:00
Tim Northover 970c4a8d35 ARM64: use hex immediates for movz/movk instructions
Since these are mostly used in "lsl #16", "lsl #32", "lsl #48" combinations to
piece together an immediate in 16-bit chunks, hex is probably the most
appropriate format.

llvm-svn: 207635
2014-04-30 11:19:40 +00:00
Tim Northover 4b2f8a990e ARM64: hexify printing various immediate operands
This is mostly aimed at the NEON logical operations and MOVI/MVNI (since they
accept weird shifts which are more naturally understandable in hex notation).

Also changes BRK/HINT etc, which is probably a neutral change, but easier than
the alternative.

llvm-svn: 207634
2014-04-30 11:19:28 +00:00
Tim Northover cfd6e66544 ARM64: print canonical syntax for add/sub (imm) instructions.
Since these instructions only accept a 12-bit immediate, possibly shifted left
by 12, the canonical syntax used by the architecture reference manual is "#N {,
lsl #12 }". We should accept an immediate that has already been shifted, (e.g.

Also, print a comment giving the full addend since it can be helpful.

llvm-svn: 207633
2014-04-30 11:19:15 +00:00
James Molloy 7c39df37b2 [ARM64] Ensure arm64_be is dealt with when emitting debug info.
This is a partial port of r204816 (cpirker "Elf support for MC-JIT
runtime dynamic linker") from AArch64 to ARM64.

llvm-svn: 207625
2014-04-30 10:15:35 +00:00
Tim Northover 41cec5c3cb ARM64: make sure FastISel uses a GPR64 source in 64-bit extensions.
llvm-svn: 207620
2014-04-30 09:32:01 +00:00