Commit Graph

28399 Commits

Tim Northover 120195542c ARM64: remove dead validation code from the AsmParser.
If this code triggers, any immediate has already been validated, so it can't
possibly produce a diagnostic.

llvm-svn: 208564
2014-05-12 14:13:21 +00:00
Tim Northover 2625a993f9 ARM64: merge "extend" and "shift" addressing-mode enums.
In terms of assembly, these have too much overlap to be neatly modelled as
disjoint classes: in many cases "lsl" is an acceptable alternative to either
"uxtw" or "uxtx".

llvm-svn: 208563
2014-05-12 14:13:17 +00:00
Rafael Espindola 7f4ccced49 Remove an always true argument.
llvm-svn: 208557
2014-05-12 13:30:10 +00:00
Benjamin Kramer 3b36b72a9c X86: Make sure that we have SSE4.1 before we generate insertps nodes.
PR19721.

llvm-svn: 208552
2014-05-12 13:12:08 +00:00
Daniel Sanders aadc357e5f [mips] Marked up instructions added in MIPS32 and tested that IAS for -mcpu=mips2 does not accept them
Summary:
To limit the number of tests required, only one 32-bit and one 64-bit ISA
prior to MIPS32/MIPS64 are explicitly tested.

Depends on D3695

Reviewers: vmedic

Differential Revision: http://reviews.llvm.org/D3696

llvm-svn: 208549
2014-05-12 13:04:32 +00:00
Rafael Espindola 9e1b99cbcd Remove MCUseCFI from TargetMachine.
It was always true.

llvm-svn: 208547
2014-05-12 13:01:42 +00:00
Daniel Sanders 07cdea2baa [mips] Marked up instructions added in MIPS-V and tested that IAS for -mcpu=mips[1234] does not accept them
Summary:
This required a new instruction group representing the 32-bit subset of
MIPS-V that was available in MIPS32R2.

Most of these instructions are correctly rejected, but with the wrong error
message. These have been placed in a separate test for now. This happens
because many of the MIPS-V instructions have not been implemented.

Depends on D3694

Reviewers: vmedic

Reviewed By: vmedic

Differential Revision: http://reviews.llvm.org/D3695

llvm-svn: 208546
2014-05-12 12:52:44 +00:00
Daniel Sanders 070fd1c42a [mips] Fold FeatureBitCount into FeatureMips32 and FeatureMips64
Summary:
DCL[ZO] are now correctly marked as being MIPS64 instructions. This has no
effect on the CodeGen tests since expansion of i64 prevented their use
anyway.

The check for MIPS16 to prevent the use of CLZ no longer prevents DCLZ as
well. This is not a functional change since DCLZ is still prohibited by
being a MIPS64 instruction (MIPS16 is only compatible with MIPS32).

No functional change

Reviewers: vmedic

Reviewed By: vmedic

Differential Revision: http://reviews.llvm.org/D3694

llvm-svn: 208544
2014-05-12 12:41:59 +00:00
Daniel Sanders fcea8102e8 [mips] Fold FeatureSEInReg into FeatureMips32r2
Summary: No functional change

Reviewers: vmedic

Reviewed By: vmedic

Differential Revision: http://reviews.llvm.org/D3693

llvm-svn: 208543
2014-05-12 12:28:15 +00:00
Daniel Sanders 39d0051847 [mips] Fold FeatureSwap into FeatureMips32r2 and FeatureMips64r2
Summary:
dsbh and dshd are not available on Mips32r2. No codegen test changes
required since expansion of i64 prevented the use of these instructions
anyway.

Depends on D3690

Reviewers: vmedic

Reviewed By: vmedic

Differential Revision: http://reviews.llvm.org/D3692

llvm-svn: 208542
2014-05-12 12:15:41 +00:00
Daniel Sanders 94eda2e1ab [mips] Replace FeatureFPIdx with FeatureMips4_32r2
Summary:
No functional change.

The minor change to the MIPS16 code is in preparation for a patch that will handle 32-bit FPIdx instructions separately from 64-bit ones (because they were added in different revisions).

Depends on D3677

Reviewers: rkotler, vmedic

Reviewed By: vmedic

Differential Revision: http://reviews.llvm.org/D3690

llvm-svn: 208541
2014-05-12 11:56:16 +00:00
Bradley Smith bbec45a4f1 [ARM64] Add proper bounds checking/diagnostics to logical shifts
llvm-svn: 208540
2014-05-12 11:49:16 +00:00
Christian Pirker 238c7c165b ARM: Implement big endian bit-conversion for NEON types
llvm-svn: 208538
2014-05-12 11:19:20 +00:00
NAKAMURA Takumi 6c383021c9 X86ISelLowering.cpp:LowerINTRINSIC_W_CHAIN(): Prune impossible "default:" [-Wcovered-switch-default]
llvm-svn: 208533
2014-05-12 10:16:46 +00:00
Bradley Smith d5de13e4d6 [ARM64] Add diagnostics for bitfield extract/insert instructions
Unfortunately, since ARM64 models all these instructions as aliases,
the checks need to be done at the time the alias is seen rather than
during instruction validation, which is how AArch64 does it.

llvm-svn: 208529
2014-05-12 09:44:57 +00:00
Bradley Smith 9ba3c963ff [ARM64] Correct more bounds checks/diagnostics for arithmetic shift operands
llvm-svn: 208528
2014-05-12 09:41:43 +00:00
Bradley Smith ad363d7121 [ARM64] Move register/register MOV handling into tablegen and improve diagnostics
llvm-svn: 208527
2014-05-12 09:38:16 +00:00
Elena Demikhovsky 4f591c0d45 Fixed compilation issue
llvm-svn: 208524
2014-05-12 07:45:41 +00:00
Elena Demikhovsky 8e8fde8e93 AVX-512: changes in intrinsics
1) Changed the gather and scatter intrinsics so that they are aligned with the GCC built-ins. There is no longer a non-masked form; the masked intrinsic receives -1 as the mask when all lanes are executed.
2) Changed the function that handles intrinsics inside X86ISelLowering.cpp: all intrinsics are now in one table. I did this for INTRINSICS_W_CHAIN and plan to move the intrinsics from the WO_CHAIN set into the same table in order to avoid the very long "switch". (I wanted to use the static map initialization allowed by C++11, but I wasn't able to compile it on VS2012.)
3) Added gather/scatter prefetch intrinsics.
4) Fixed the MRMm encoding for masked instructions.

llvm-svn: 208522
2014-05-12 07:18:51 +00:00
Matt Arsenault 46013d903f Fix return before else
llvm-svn: 208510
2014-05-11 21:24:41 +00:00
Hal Finkel 0d8db46799 [PowerPC] Add global named register support
Support for the intrinsics that read from and write to global named registers
is added for r1, r2 and r13 (depending on the subtarget).

llvm-svn: 208509
2014-05-11 19:29:11 +00:00
Hal Finkel f0e086a0bc Pass the value type to TLI::getRegisterByName
We must validate the value type in TLI::getRegisterByName, because if we
don't and the wrong type was used with the IR intrinsic, then we'll assert
(because we won't be able to find a valid register class with which to
construct the requested copy operation). For PPC64, additionally, the type
information is necessary to decide between the 64-bit register and the 32-bit
subregister.

No functionality change.

llvm-svn: 208508
2014-05-11 19:29:07 +00:00
Hal Finkel b33e9872a0 Add 'override' to getRegisterByName in *ISelLowering.h
No functionality change intended.

llvm-svn: 208507
2014-05-11 19:28:55 +00:00
Hal Finkel c4c6c87666 [PowerPC] On PPC32, 128-bit shifts might be runtime calls
The counter-loops formation pass needs to know what operations might be
function calls (because they can't appear in counter-based loops). On PPC32,
128-bit shifts might be runtime calls (even though you can't use __int128 on
PPC32, it seems that SROA might form them).

Fixes PR19709.

llvm-svn: 208501
2014-05-11 16:23:29 +00:00
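
To illustrate the situation described above (r208501), here is a minimal IR sketch (names invented): a 128-bit shift that, on PPC32, may be expanded into a runtime library call and therefore cannot appear inside a counter-based loop.

  define i128 @shift_sketch(i128 %x, i128 %amt) {
  entry:
    ; on PPC32 this shift may become a runtime call rather than inline code
    %r = shl i128 %x, %amt
    ret i128 %r
  }
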
Filipe Cabecinhas 0e3d1cb5d6 Fixed a bug when lowering build_vector (PR19694)
When lowering a build_vector to an insertps, we would still lower it, even
if the source vectors weren't 4 x 32-bit. This would break on AVX if the source
was an 8 x 32-bit vector. We now check the type of the source vectors.

llvm-svn: 208487
2014-05-11 08:12:56 +00:00
Vincent Lejeune 29c0c210fc R600/SI: Fold fabs/fneg into src input modifier
llvm-svn: 208480
2014-05-10 19:18:39 +00:00
Vincent Lejeune 94af31fbe8 R600/SI: Prettier display of input modifiers
llvm-svn: 208479
2014-05-10 19:18:33 +00:00
Vincent Lejeune 79a5834647 R600/SI: Use pseudo instruction for fabs/clamp/fneg
llvm-svn: 208478
2014-05-10 19:18:25 +00:00
Tim Northover 55b3e22927 ARM64: fix SELECT_CC lowering in absence of NaNs.
We were swapping the true & false results while testing for FMAX/FMIN,
but not putting them back to the original state if the later checks
failed.

Should fix PR19700.

llvm-svn: 208469
2014-05-10 07:37:50 +00:00
Reid Kleckner c487d73f41 Revert "[ms-cxxabi] Add a new calling convention that swaps 'this' and 'sret'"
This reverts commit r200561.

This calling convention was an attempt to match the MSVC C++ ABI for
methods that return structures by value.  This solution didn't scale,
because it would have required splitting every CC available on Windows
into two: one for methods and one for free functions.

Now that we can put sret on the second arg (r208453), and Clang does
that (r208458), revert this hack.

llvm-svn: 208459
2014-05-09 22:56:42 +00:00
Reid Kleckner 7941856445 Allow sret on the second parameter as well as the first
MSVC always places the implicit sret parameter after the implicit this
parameter of instance methods.  We used to handle this for
x86_thiscallcc by allocating the sret parameter on the stack and leaving
the this pointer in ecx, but that doesn't handle alternative calling
conventions like cdecl, stdcall, fastcall, or the win64 convention.

Instead, change the verifier to allow sret on the second parameter.

This also requires changing the Mips and X86 backends to return the
argument with the sret parameter, instead of assuming that the sret
parameter comes first.

The Sparc backend also returns sret parameters in a register, but I
wasn't able to update it to handle secondary sret parameters.  It
currently calls report_fatal_error if you feed it an sret in the second
parameter.

Reviewers: rafael.espindola, majnemer

Differential Revision: http://reviews.llvm.org/D3617

llvm-svn: 208453
2014-05-09 22:32:13 +00:00
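
As a rough illustration of the change above (r208453): an instance method that returns a structure by value can now carry sret on its second parameter, after the implicit this pointer. The type and function names below are invented for the sketch, not taken from the patch.

  %struct.S = type { i32, i32, i32 }
  %class.C = type { i8 }

  ; sret is now accepted on the second parameter, after 'this'
  declare x86_thiscallcc void @method(%class.C* %this, %struct.S* sret %agg.result)
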
Jonathan Roelofs 7005e347a2 Fix broken build
ARM64 backend was missing a required_library entry.

llvm-svn: 208437
2014-05-09 18:06:22 +00:00
Louis Gerbarg 3342bf1451 Add custom lowering for add/sub with overflow intrinsics to ARM
This patch adds support to ARM for custom lowering of the
llvm.{u|s}add.with.overflow.i32 intrinsics for i32/i64. This is particularly useful
for handling idiomatic saturating math functions as generated by
InstCombineCompare.

Test cases included.

rdar://14853450

llvm-svn: 208435
2014-05-09 17:02:49 +00:00
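
For reference, a minimal sketch of the IR pattern this lowering targets (the function name and the clamp value are illustrative, not from the patch):

  declare { i32, i1 } @llvm.sadd.with.overflow.i32(i32, i32)

  define i32 @saturating_add_sketch(i32 %a, i32 %b) {
    ; the intrinsic returns the sum together with an overflow flag
    %res = call { i32, i1 } @llvm.sadd.with.overflow.i32(i32 %a, i32 %b)
    %sum = extractvalue { i32, i1 } %res, 0
    %ovf = extractvalue { i32, i1 } %res, 1
    ; clamp to INT_MAX on overflow (one-sided saturation, purely for illustration)
    %sat = select i1 %ovf, i32 2147483647, i32 %sum
    ret i32 %sat
  }
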
Tom Stellard 4c00b52e1a R600/SI: Teach SIInstrInfo::moveToVALU() how to move S_LOAD_*_IMM instructions
llvm-svn: 208432
2014-05-09 16:42:22 +00:00
Tom Stellard d6cb8e8efd R600/SI: Fix SMRD pattern for offsets > 32 bits
We were dropping the high bits of 64-bit immediate offsets.

llvm-svn: 208431
2014-05-09 16:42:21 +00:00
Tom Stellard a2acad785a R600: Expand i64 SELECT_CC
llvm-svn: 208430
2014-05-09 16:42:19 +00:00
Tom Stellard afa8b532b1 R600: Move MIN/MAX matching from LowerOperation() to PerformDAGCombine()
llvm-svn: 208429
2014-05-09 16:42:16 +00:00
Daniel Sanders e57d866ed0 [mips] Marked up instructions added in MIPS-IV and tested that IAS for -mcpu=mips[123] does not accept them
Summary:
This required a new instruction group representing the 32-bit subset of
MIPS-IV that was available in MIPS32.

A small number of instructions are correctly rejected but with the wrong error
message. These have been placed in a separate test for now.

Depends on D3676

Reviewers: vmedic

Reviewed By: vmedic

Differential Revision: http://reviews.llvm.org/D3677

llvm-svn: 208414
2014-05-09 14:06:17 +00:00
Oliver Stannard c24f2171ca ARM: HFAs must be passed in consecutive registers
When using the ARM AAPCS, HFAs (Homogeneous Floating-point Aggregates) must
be passed in a block of consecutive floating-point registers, or on the stack.
This means that unused floating-point registers cannot be back-filled with
part of an HFA; however, this can currently happen. This patch, along with the
corresponding clang patch (http://reviews.llvm.org/D3083), prevents this.

llvm-svn: 208413
2014-05-09 14:01:47 +00:00
Daniel Sanders 395b8181ec [mips] Remove unused CondMov feature bit
Summary:
No functional change

Depends on D3675

Reviewers: vmedic

Reviewed By: vmedic

Differential Revision: http://reviews.llvm.org/D3676

llvm-svn: 208410
2014-05-09 13:15:07 +00:00
Daniel Sanders f2056bef32 [mips] Marked up instructions added in MIPS-III and tested that IAS for -mcpu=mips[12] does not accept them
Summary:
This required a new instruction group representing the 32-bit subset of
MIPS-III that was available in MIPS32.

A small number of instructions are correctly rejected but with the wrong error
message. These have been placed in a separate test for now.

There are some obvious InstAliases that ought to be marked MIPS-III but aren't;
this is because they are not currently tested. I intend to catch these with
a final pass through the tablegen records to find records without
ISA annotations.

Depends on D3674

Reviewers: vmedic

Reviewed By: vmedic

Differential Revision: http://reviews.llvm.org/D3675

llvm-svn: 208408
2014-05-09 13:02:27 +00:00
Andrea Di Biagio 932ff1ddd7 Fix 80 col violation.
No functional change intended.

llvm-svn: 208405
2014-05-09 11:08:23 +00:00
Benjamin Kramer 8bbadc0383 [asan] Stop leaking X86Operands.
llvm-svn: 208400
2014-05-09 09:48:03 +00:00
Daniel Sanders b7f1c6ff3e [mips][mips64r6] Add experimental support for MIPS32r6 and MIPS64r6
Summary:
Adds MIPS32r6/MIPS64r6 and checks the compatibility requirements for these
processors.

I've also included comments to describe removed and re-encoded instructions,
along with placeholder defs for the new instructions, but there are no
functional changes to codegen at this point.

Reviewers: jkolek, vmedic

Reviewed By: vmedic

Differential Revision: http://reviews.llvm.org/D3622

llvm-svn: 208399
2014-05-09 09:46:21 +00:00
Daniel Sanders 52bdd651e5 [mips] Added missing dsra -> dsrav and sra -> srav aliases.
Summary: dsll, dsrl, sll, and srl already exist.

Reviewers: vmedic

Reviewed By: vmedic

Differential Revision: http://reviews.llvm.org/D3673

llvm-svn: 208397
2014-05-09 09:24:49 +00:00
Saleem Abdulrasool 40bca0afab ARM: support PIC on Windows on ARM
Handle lowering of global addresses for PIC mode compilation on Windows.  Always
use a movw/movt pair to load the address, as Windows on ARM requires ARMv7+ and
is a pure Thumb environment.

llvm-svn: 208385
2014-05-09 00:58:32 +00:00
Filipe Cabecinhas e4b482b3ed Optimize shufflevector that copies an i64/f64 and zeros the rest.
Summary:
Also ran clang-format on the function. The code added is the last else
if block.

Reviewers: nadav, craig.topper, delena

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D3518

llvm-svn: 208372
2014-05-08 23:16:08 +00:00
Jyotsna Verma 38901ca4f6 [Hexagon] Add new InstrItinClass to support timing classes.
This patch doesn't introduce any functionality change. Test cases will be
added later when v5 support is added.

llvm-svn: 208349
2014-05-08 18:47:08 +00:00
Rafael Espindola 331380a291 Use for range loops.
llvm-svn: 208348
2014-05-08 18:40:06 +00:00
Matt Arsenault e8a076a253 R600: Promote f64 vector load/stores to i64 for consistency
llvm-svn: 208344
2014-05-08 18:01:56 +00:00
Andrea Di Biagio e85ba4df52 [X86] Add target specific combine rules to fold SSE2/AVX2 packed arithmetic shift intrinsics.
This patch teaches the backend how to combine packed SSE2/AVX2 arithmetic shift
intrinsics.

The rules are:
 - Always fold a packed arithmetic shift by zero to its first operand;
 - Convert a packed arithmetic shift intrinsic dag node into a ISD::SRA only if
   the shift count is known to be smaller than the vector element size.

This patch also teaches the function 'getTargetVShiftByConstNode' how to fold
target-specific vector shifts by zero.

Added two new tests to verify that the DAGCombiner is able to fold
sequences of SSE2/AVX2 packed arithmetic shift calls.

llvm-svn: 208342
2014-05-08 17:44:04 +00:00
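
A minimal sketch of the first rule above (the function name is invented; the intrinsic is the SSE2 psrad-by-immediate intrinsic): a packed arithmetic shift by zero should now be folded to its first operand.

  declare <4 x i32> @llvm.x86.sse2.psrai.d(<4 x i32>, i32)

  define <4 x i32> @fold_ashr_by_zero(<4 x i32> %v) {
    ; the shift count is zero, so the combine folds this call to %v
    %r = call <4 x i32> @llvm.x86.sse2.psrai.d(<4 x i32> %v, i32 0)
    ret <4 x i32> %r
  }
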
Daniel Sanders 7d290b0974 [mips] Add PredicateControl to InstAlias's
Summary:
No functional change

Depends on D3649

Reviewers: vmedic

Reviewed By: vmedic

Differential Revision: http://reviews.llvm.org/D3672

llvm-svn: 208334
2014-05-08 16:12:31 +00:00
Bradley Smith 340ba0c5fe [ARM64] Add diagnostics for expected arithmetic shifts
llvm-svn: 208330
2014-05-08 15:40:39 +00:00
Bradley Smith afaab0a78a [ARM64] Re-work parsing of ADD/SUB shifted immediate operands
The parsing of ADD/SUB shifted immediates needs to be done explicitly so
that better diagnostics can be emitted; as a side effect this also
removes some of the hacks in the current method of handling this operand
type.

Additionally remove manual CMP aliasing to ADD/SUB and use InstAlias
instead.

llvm-svn: 208329
2014-05-08 15:39:58 +00:00
Bradley Smith 4c0274b663 [ARM64] Ensure immediates in extend operands are in a valid range
Also emit a more useful diagnostic when they are not.

llvm-svn: 208318
2014-05-08 14:12:12 +00:00
Bradley Smith 545caa6fa3 [ARM64] Check for proper immediate in shift/extend operands
llvm-svn: 208317
2014-05-08 14:11:16 +00:00
Christian Pirker b5728191c2 ARM big endian function argument passing
llvm-svn: 208316
2014-05-08 14:06:24 +00:00
Daniel Sanders cdbbe08b05 [mips] Implement l[wd]c3, and s[wd]c3.
Summary:
These instructions were added in MIPS-I and MIPS-II but were removed in
MIPS-III. Interestingly, GAS continues to accept them when assembling for
MIPS-III.

For the moment, we will follow GAS and accept these instructions for
MIPS-III and newer, but this will be tightened up when the invalid-*.s
tests are added.

Depends on D3647

Reviewers: vmedic

Reviewed By: vmedic

Differential Revision: http://reviews.llvm.org/D3648

llvm-svn: 208311
2014-05-08 13:02:11 +00:00
James Molloy c42ea14f74 [ARM64-BE] Teach fast-isel about how to set up sub-word stack arguments for big endian calls.
SelectionDAG already knows about this, but fast-isel was ignorant.

llvm-svn: 208307
2014-05-08 12:53:50 +00:00
Daniel Sanders d39320c6b6 [mips] Marked up instructions added in MIPS-II and tested that IAS for -mcpu=mips1 does not accept them
Summary:
A small number of instructions are rejected with the wrong error message.
These have been placed in a separate test for now. There seems to be some
parsing quirk that triggers when these instructions are disabled.

Depends on D3571

Reviewers: vmedic

Reviewed By: vmedic

Differential Revision: http://reviews.llvm.org/D3647

llvm-svn: 208305
2014-05-08 12:40:48 +00:00
Daniel Sanders 8dcb116a3e [mips] Implement tlbp, tlbr, tlbwi, and tlbwr
Reviewers: vmedic, dsanders

Reviewed By: dsanders

Differential Revision: http://reviews.llvm.org/D3571

llvm-svn: 208301
2014-05-08 11:51:18 +00:00
Tim Northover 18f8bb84fa ARM64: make sure FastISel emits SSA MachineInstrs
We need to use a temporary register for a 2-step operation like REM.

llvm-svn: 208297
2014-05-08 10:30:56 +00:00
Evgeniy Stepanov 9661ec0ec3 [asan] Preserve flags in asm instrumentation.
Patch by Yuri Gorshenin.

llvm-svn: 208296
2014-05-08 09:55:24 +00:00
Hal Finkel 6532c20faa Move late partial-unrolling thresholds into the processor definitions
The old method used by X86TTI to determine partial-unrolling thresholds was
messy (because it worked by testing target features), and also would not
correctly identify the target CPU if certain target features were disabled.
After some discussions on IRC with Chandler et al., it was decided that the
processor scheduling models were the right containers for this information
(because it is often tied to special uop dispatch-buffer sizes).

This does represent a small functionality change:
 - For generic x86-64 (which uses the SB model and, thus, will get some
   unrolling).
 - For AMD cores (because they still currently use the SB scheduling model)
 - For Haswell (based on benchmarking by Louis Gerbarg, it was decided to bump
   the default threshold to 50; we're working on a test case for this).
Otherwise, nothing has changed for any other targets. The logic, however, has
been moved into BasicTTI, so other targets may now also opt-in to this
functionality simply by setting LoopMicroOpBufferSize in their processor
model definitions.

llvm-svn: 208289
2014-05-08 09:14:44 +00:00
Hao Liu 1187a3d8db AArch64/ARM64: Port NEON post-increment load/store with 2/3/4 vectors to ARM64 backend.
llvm-svn: 208284
2014-05-08 07:38:13 +00:00
Saleem Abdulrasool fc6b85b185 ARM: support FK_SecRel_2 relocations on WoA
This adds FK_SecRel_2 relocation support to ARM.  This enables the building of
object files for armv7-windows-msvc which enables CodeView line tables for
debugging as opposed to armv7-windows-itanium which currently uses DWARF.

llvm-svn: 208273
2014-05-08 01:35:57 +00:00
Filipe Cabecinhas 095d9d573a Lower certain build_vectors to insertps instructions
Summary:
Vectors built with zeros and elements in the same order as another
(source) vector are optimized to be built using a single insertps
instruction.
Also optimize when we move one element in a vector to a different place
in that vector while zeroing out some of the other elements.

Further optimizations are possible, described in TODO comments.
I will be implementing at least some of them in the near future.

Added some tests for different cases where this optimization triggers.

Reviewers: nadav, delena, craig.topper

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D3521

llvm-svn: 208271
2014-05-08 00:25:16 +00:00
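
A sketch of the kind of pattern the commit above targets (names invented): a vector built from one element of a source vector with the remaining lanes zeroed, which can be matched to a single insertps.

  define <4 x float> @buildvector_sketch(<4 x float> %a) {
    ; element 0 of %a goes to lane 0; all other lanes are zero
    %e = extractelement <4 x float> %a, i32 0
    %v0 = insertelement <4 x float> undef, float %e, i32 0
    %v1 = insertelement <4 x float> %v0, float 0.000000e+00, i32 1
    %v2 = insertelement <4 x float> %v1, float 0.000000e+00, i32 2
    %v3 = insertelement <4 x float> %v2, float 0.000000e+00, i32 3
    ret <4 x float> %v3
  }
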
Hal Finkel f6475bbc4b [X86TTI] Remove the unrolling branch limits
The loop stream detector (LSD) on modern Intel cores, which optimizes the
execution of small loops, has limits on the number of taken branches in
addition to uop-count limits (modern AMD cores have similar limits).
Unfortunately, at the IR level, estimating the number of branches that will be
taken is difficult. For one thing, it strongly depends on later passes (block
placement, etc.). The original implementation took a conservative approach and
limited the maximal BB DFS depth of the loop.  However, fairly-extensive
benchmarking by several of us has revealed that this is the wrong approach. In
fact, there are zero known cases where the branch limit prevents a detrimental
unrolling (but plenty of cases where it does prevent beneficial unrolling).

While we could improve the current branch counting logic by incorporating
branch probabilities, this further complication seems unjustified without a
motivating regression. Instead, unless and until a regression appears, the
branch counting will be removed.

llvm-svn: 208255
2014-05-07 22:25:18 +00:00
Quentin Colombet 246b6fcd28 [X86] Selectively mark the FMA variants inside a family as isCommutable.
Given an FMA family (e.g., 213, 231), not all the variants (i.e., register or
memory) are commutable.
E.g., for the 213 family (with the syntax src1, src2, src3):
fmaXXX213 A, B, reg3/mem3 == fmaXXX213 B, A, reg3/mem3

Now consider the 231 family:
fmaXXX231 A, B, reg3 == fmaXXX231 A, reg3, B
But
fmaXXX231 A, B, mem3 != fmaXXX231 A, mem3, B
Indeed, mem3 cannot be the second argument of the memory variant of fmaXXX231.

Working on a reduced test case!

<rdar://problem/16800495>

llvm-svn: 208252
2014-05-07 21:43:35 +00:00
Eric Christopher b8f9768880 Reformat a couple of functions for clarity.
llvm-svn: 208248
2014-05-07 21:05:47 +00:00
Jyotsna Verma f98a1eca6e [Hexagon] Add New TSFlags to be used in the upcoming patches.
llvm-svn: 208239
2014-05-07 19:07:34 +00:00
Chandler Carruth 32908d7a35 [x86] Make the 'x86-64' cpu, which I see as, and many use as, the generic
default architecture for reasonably modern x86 processors, actually be
modern. This processor model should essentially be "tuned" for modern
x86 chips as much as possible without undue penalties on any specific
architecture. Previously we weren't even using the nice scheduling
models. There are a few other tweaks needed here, but this change at
least I have benchmarked across a decent swath of chips (Intel's
Clovertown, Westmere, and Sandy Bridge; AMD's Istanbul) and seen no
significant regressions.

If anyone has suggestions for ways to test this, just let me know. Somewhat
alarmingly, no existing tests failed.

llvm-svn: 208230
2014-05-07 17:37:03 +00:00
Chad Rosier 788e5e3d7c [ARM64][fast-isel] Disable target specific optimizations at -O0. Functionally,
this patch disables the dead register elimination pass and the load/store pair
optimization pass at -O0.  The ILP optimizations don't require the optimization
level to be checked because the call to addILPOpts is predicated with the
necessary check.  The AdvSIMDScalar pass is disabled by default at all
optimization levels.  This patch leaves that pass disabled by default.

Also, move command-line options into ARM64TargetMachine.cpp and add a few
additional flags to aid in debugging.  This fixes an issue with the
-debug-pass=Structure flag where passes were printed, but not actually run
(i.e., AdvSIMDScalar pass).

llvm-svn: 208223
2014-05-07 16:41:55 +00:00
Daniel Sanders d240953db2 [mips] Add highly experimental support for MIPS-I, MIPS-II, MIPS-III, and MIPS-V
Summary:
These processors will only be available for the integrated assembler at
first (CodeGen will emit a fatal error saying they are not implemented).

The intention is to work through the existing instructions and correctly
annotate the ISA they were added in so that we have a sufficiently good
base to start MIPS64r6 development. MIPS64r6 removes/re-encodes certain
instructions, and I believe it is best to define ISAs using set unions
as far as possible rather than using set subtraction.

Reviewers: vmedic

Subscribers: emaste, llvm-commits

Differential Revision: http://reviews.llvm.org/D3569

llvm-svn: 208221
2014-05-07 16:25:22 +00:00
Rafael Espindola de3e36be38 Use range loop.
llvm-svn: 208218
2014-05-07 14:53:32 +00:00
Daniel Sanders 5b864d0cbb [mips] Add FGR_32/FGR_64/GPR_64 adjectives and use them instead of FGRPredicates/GPRPredicates
Summary:
No functional change (confirmed by diffing tablegen-erated files).

Depends on D3642

Reviewers: vmedic, dsanders

Reviewed By: dsanders

Differential Revision: http://reviews.llvm.org/D3645

llvm-svn: 208213
2014-05-07 14:25:43 +00:00
Daniel Sanders 3872b47231 [mips] Add INSN_<name> adverbs and start using them instead of AdditionalPredicates overrides
Summary:
No functional change

Depends on D3641

Reviewers: vmedic

Reviewed By: vmedic

Differential Revision: http://reviews.llvm.org/D3642

llvm-svn: 208212
2014-05-07 14:11:46 +00:00
Tim Northover 88a51d983e AArch64/ARM64: optimise vector selects & enable test
When performing a scalar comparison that feeds into a vector select,
it's actually better to do the comparison on the vector side: the
scalar route would be "CMP -> CSEL -> DUP", the vector route is "CM -> DUP",
since the vector comparisons are all mask based.

llvm-svn: 208210
2014-05-07 14:10:27 +00:00
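
A minimal sketch of the shape being optimised above (names invented): a scalar comparison whose result selects between two vectors, which can now be lowered as a vector compare plus DUP instead of CMP + CSEL + DUP.

  define <4 x i32> @vector_select_sketch(i32 %a, i32 %b, <4 x i32> %x, <4 x i32> %y) {
    ; scalar condition feeding a vector select
    %cmp = icmp sgt i32 %a, %b
    %sel = select i1 %cmp, <4 x i32> %x, <4 x i32> %y
    ret <4 x i32> %sel
  }
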
Daniel Sanders 9c1b1bec03 [mips] Add ISA_<name> adverbs and start using them instead of AdditionalPredicates overrides
Summary:
One small functional change. The recently added PAUSE instruction now has
the HasStdEnc predicate which was accidentally removed by a Requires<>.

Depends on D3640

Reviewers: vmedic

Reviewed By: vmedic

Differential Revision: http://reviews.llvm.org/D3641

llvm-svn: 208209
2014-05-07 13:57:22 +00:00
Rafael Espindola 566fcfe69b Remove the UseCFI option from createAsmStreamer.
We were already always passing true; this just removes the option.

llvm-svn: 208205
2014-05-07 13:00:43 +00:00
Daniel Sanders 13d7209fa9 [mips] Continue splitting Instruction.Predicates into smaller lists and re-join them with !listconcat
Summary:
Move IsGP64bit into GPRPredicates, and IsFP64bit/NotFP64bit into FGRPredicates

No functional change (confirmed by diffing tablegen-erated files).

Depends on D3639

Reviewers: vmedic

Reviewed By: vmedic

Differential Revision: http://reviews.llvm.org/D3640

llvm-svn: 208201
2014-05-07 12:48:37 +00:00
James Molloy d3c401a2d0 [ARM64-BE] Fix fast-isel, and add appropriate RUN lines to appropriate tests.
llvm-svn: 208200
2014-05-07 12:33:55 +00:00
James Molloy 36132057da [ARM64-BE] Fix variable-argument saving.
llvm-svn: 208199
2014-05-07 12:33:48 +00:00
James Molloy 4049e4fd77 [ARM64-BE] Implement the lane-twiddling logic at AAPCS boundaries for big endian.
The AAPCS states that values passed in registers must have a value as though
they had been loaded with "LDR". LDR is equivalent to "LD1.64 vX.1D" - that is,
loading scalars to vector registers and loading 1-element vectors is equivalent.

The logic implemented here is to ensure that at all call boundaries and during
formal argument lowering all vectors are treated as their bitwidth-based floating
point scalar counterpart, which is always one of f64 or f128 (v2i32 -> f64,
v4i32 -> f128 etc). A BITCAST is inserted so that the appropriate REV will be
generated during code generation.

llvm-svn: 208198
2014-05-07 12:33:41 +00:00
Daniel Sanders 4cd0782bf2 [mips] Move IsFP64bit/NotFP64bit to the front of the AdditionalPredicates list
Summary:
This makes it easier to prove a more complicated change in the next commit
is non-functional.

Reviewers: vmedic

Reviewed By: vmedic

Differential Revision: http://reviews.llvm.org/D3639

llvm-svn: 208197
2014-05-07 12:27:46 +00:00
James Molloy 30e0e11eb4 [ARM64-BE] Implement the crazy bitcast handling for big endian vectors.
Because we've canonicalised on using LD1/ST1, every time we do a bitcast
between vector types we must do an equivalent lane reversal.

Consider a simple memory load followed by a bitconvert then a store.
  v0 = load v2i32
  v1 = BITCAST v2i32 v0 to v4i16
       store v4i16 v1

In big endian mode every memory access has an implicit byte swap. LDR and
STR do a 64-bit byte swap, whereas LD1/ST1 do a byte swap per lane - that
is, they treat the vector as a sequence of elements to be byte-swapped.
The two pairs of instructions are fundamentally incompatible. We've decided
to use LD1/ST1 only to simplify compiler implementation.

LD1/ST1 perform the equivalent of a sequence of LDR/STR + REV. This makes
the original code sequence:

  v0 = load v2i32
  v1 = REV v2i32                  (implicit)
  v2 = BITCAST v2i32 v1 to v4i16
  v3 = REV v4i16 v2               (implicit)
       store v4i16 v3

But this is now broken - the value stored is different to the value loaded
due to lane reordering. To fix this, on every BITCAST we must perform two
other REVs:

  v0 = load v2i32
  v1 = REV v2i32                  (implicit)
  v2 = REV v2i32
  v3 = BITCAST v2i32 v2 to v4i16
  v4 = REV v4i16
  v5 = REV v4i16 v4               (implicit)
       store v4i16 v5

This means an extra two instructions, but actually in most cases the two REV
instructions can be combined into one. For example:
  (REV64_2s (REV64_4h X)) === (REV32_4h X)

There is also no 128-bit REV instruction. This must be synthesized with an
EXT instruction.

Most bitconverts require some sort of conversion. The only exceptions are:
  a) Identity conversions -  vNfX <-> vNiX
  b) Single-lane-to-scalar - v1fX <-> fX or v1iX <-> iX

Even though there are hundreds of changed lines, I have a fairly high confidence
that they are somewhat correct. The changes to add two REV instructions per
bitcast were pretty mechanical, and once I'd done that I threw the resulting
.td at a script I wrote which combined the two REVs together (and added
an EXT instruction, for f128) based on an instruction description I gave it.

This was much less prone to error than doing it all manually, plus my brain
would not just have melted but would have vapourised.

llvm-svn: 208194
2014-05-07 11:28:53 +00:00
James Molloy 3f0da857b4 [ARM64-BE] Predicate VLDR/VSTR for vectors as little-endian only. We must use LD1/ST1 on big-endian.
llvm-svn: 208193
2014-05-07 11:28:45 +00:00
James Molloy ccc7f982c1 [ARM64-BE] Make big endian (scalar) argument passing work correctly.
This completes the port of r204814 (cpirker "AArch64_BE function argument
passing for ARM ABI") from AArch64 to ARM64, and fixes a bunch of issues
found during later development along the way. The biggest of these was
that the alignment fixup logic wasn't replicated into all the places it
should have been.

llvm-svn: 208192
2014-05-07 11:28:36 +00:00
Daniel Sanders 3dc2c016a6 [mips] Split Instruction.Predicates into smaller lists and re-join them with !listconcat
Summary:
The overall idea is to chop the Predicates list into subsets that are
usually overridden independently. This allows subclasses to partially
override the predicates of their superclasses without having to re-add all
the existing predicates.

This patch starts the process by moving HasStdEnc into a new
EncodingPredicates list and almost everything else into
AdditionalPredicates.

It has revealed a couple likely bugs where 'let Predicates' has removed
the HasStdEnc predicate.

No functional change (confirmed by diffing tablegen-erated files).

Depends on D3549, D3506

Reviewers: vmedic

Differential Revision: http://reviews.llvm.org/D3550

llvm-svn: 208184
2014-05-07 10:27:09 +00:00
Daniel Sanders 0e2364149c [mips] Move HasStdEnc to the front of the predicates lists.
Summary:
This will make it easier to prove that a more complicated change in the
following commit is non-functional.

No functional change.

Depends on D3506

Reviewers: vmedic

Reviewed By: vmedic

Differential Revision: http://reviews.llvm.org/D3549

llvm-svn: 208179
2014-05-07 09:58:05 +00:00
Evgeniy Stepanov 3819f02819 [asan] Add a flag to control asm instrumentation.
With this change, asm instrumentation is disabled by default.

llvm-svn: 208167
2014-05-07 07:54:11 +00:00
Joerg Sonnenberger cf86ce136c Allow using normal .eh_frame based unwinding on ARM. Use the same
encodings as x86. Use this exception model for NetBSD.

llvm-svn: 208166
2014-05-07 07:49:34 +00:00
Saleem Abdulrasool 985dcf18a9 ARM: mark additional instructions as MachineFrameSetup
Mark up additional instructions which are part of the function prologue as
MachineFrameSetup.  These instructions are emitted by the PEI pass to set up
the stack for use in the activating frame.

llvm-svn: 208153
2014-05-07 03:03:31 +00:00
Saleem Abdulrasool acd0338c61 ARM: fix WoA PEI instruction selection
The ARM::BLX instruction is an ARM-mode instruction.  The Windows on ARM target
is limited to Thumb instructions.  Correctly use the Thumb-mode tBLXr
instruction.  This would manifest as an errant write into the object file as the
instruction is 4 bytes in length rather than 2.  The result would be a corrupted
object file that would eventually result in an executable that would crash at
runtime.

llvm-svn: 208152
2014-05-07 03:03:27 +00:00
Andrew Trick d0d8cb1d21 Update an embarrassing out-of-date comment.
llvm-svn: 208137
2014-05-06 22:18:43 +00:00
Joerg Sonnenberger 818e725158 If a function needs a frame pointer, but r11 (aka fp) has not been used,
remove it from the list of unspilled registers. Otherwise the following
attempt to keep the stack aligned by picking an extra GPR register to
spill will not work as it picks up r11.

llvm-svn: 208129
2014-05-06 20:43:01 +00:00
Andrea Di Biagio c14ccc9184 [X86] Improve the lowering of BITCAST dag nodes from type f64 to type v2i32 (and vice versa).
Before this patch, the backend always emitted a store+load sequence to
bitconvert the input operand of an ISD::BITCAST dag node from f64 to i64
whenever the node performed a bitconvert from type MVT::f64 to type
MVT::v2i32. The resulting i64 node was then used to build a v2i32 vector.

With this patch, the backend now produces a cheaper SCALAR_TO_VECTOR from
MVT::f64 to MVT::v2f64. That SCALAR_TO_VECTOR is then followed by a "free"
bitcast to type MVT::v4i32. The elements of the resulting
v4i32 are then extracted to build a v2i32 vector (which is illegal and
therefore promoted to MVT::v2i64).

This is in general cheaper than emitting a stack store+load sequence
to bitconvert the operand from type f64 to type i64.

llvm-svn: 208107
2014-05-06 17:09:03 +00:00
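
A minimal sketch of the affected pattern (function name invented): a bitconvert from f64 to v2i32, which previously went through a stack store+load and is now lowered via SCALAR_TO_VECTOR plus element extracts.

  define <2 x i32> @bitcast_sketch(double %d) {
    ; previously lowered through a store+load round trip on the stack
    %v = bitcast double %d to <2 x i32>
    ret <2 x i32> %v
  }
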
Renato Golin c7aea40ec6 Implementing named register intrinsics
This patch implements the infrastructure to use named register constructs in
programs that need access to specific registers (bare metal, kernels, etc).

So far, only the stack pointer is supported as a technology preview, but as it
is, the intrinsic can already support all non-allocatable registers from any
architecture.

llvm-svn: 208104
2014-05-06 16:51:25 +00:00
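
A rough sketch of the intended use (the register name string and the exact metadata spelling are assumptions based on the description above, not copied from the patch): reading a named register, here the stack pointer, through the new intrinsic.

  declare i32 @llvm.read_register.i32(metadata)

  define i32 @read_sp_sketch() nounwind {
    ; read the register named by the metadata string
    %sp = call i32 @llvm.read_register.i32(metadata !0)
    ret i32 %sp
  }

  !0 = metadata !{metadata !"sp"}
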
Tim Northover 618850b6a5 AArch64/ARM64: implement diagnosis of unpredictable loads & stores
llvm-svn: 208091
2014-05-06 14:15:14 +00:00
Tim Northover 15641cd4e1 AArch64/ARM64: make NEON vector list parsing a bit more robust
It doesn't change the results, but it seems silly not to diagnose obvious
problems early on.

llvm-svn: 208083
2014-05-06 12:50:51 +00:00
Tim Northover 339ecf14ee AArch64/ARM64: add more specific diagnostic for floating imm 0.0.
llvm-svn: 208082
2014-05-06 12:50:47 +00:00
Tim Northover 05cbe7c80a AArch64/ARM64: add more specific diagnostic for invalid vector lanes
llvm-svn: 208081
2014-05-06 12:50:44 +00:00
Tim Northover 0f54f309bb AArch64/ARM64: produce more informative diagnostic assembling some immediates
No tests here; they'll be added when the entire neon-diagnostics.s test from
AArch64 is enabled.

llvm-svn: 208079
2014-05-06 11:18:53 +00:00
Christian Pirker fdce7cea93 ARM: For thumb fixups store halfwords high first and low second
llvm-svn: 208076
2014-05-06 10:05:11 +00:00
Kevin Qin 1353c3405d [ARM64] Enable alignment control option in front-end for ARM64.
This is the LLVM part of the modification.

llvm-svn: 208074
2014-05-06 09:48:52 +00:00
Craig Topper 646f64f04a Use X86 memory operand enums instead of hardcoding.
llvm-svn: 208064
2014-05-06 07:04:32 +00:00
Reid Kleckner 4a406d32e9 Fix i128 div/mod on mingw64
The Win64 docs are very clear that anything larger than 8 bytes is
passed by reference, and GCC MinGW64 honors that for __modti3 and
friends.

Patch by Jameson Nash!

llvm-svn: 208029
2014-05-06 01:20:42 +00:00
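
A minimal sketch of the affected operation (function name invented): a 128-bit remainder is lowered to a __modti3 libcall, and on Win64 the i128 arguments, being larger than 8 bytes, are passed by reference.

  define i128 @rem_sketch(i128 %a, i128 %b) {
    ; expands to a __modti3 libcall; Win64 passes the i128 operands by reference
    %r = srem i128 %a, %b
    ret i128 %r
  }
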
Eric Christopher eb0bf5af65 Fix typo.
llvm-svn: 208006
2014-05-05 21:50:57 +00:00
Tom Stellard 45b3dcd35b R600: Expand i64 ISD::SUB
llvm-svn: 208005
2014-05-05 21:47:15 +00:00
Filipe Cabecinhas fe59062b75 Revert "Optimize shufflevector that copies an i64/f64 and zeros the rest."
This reverts commit 207992. I misread the phab number on the LGTM.

llvm-svn: 207993
2014-05-05 19:40:36 +00:00
Filipe Cabecinhas 263d98c19f Optimize shufflevector that copies an i64/f64 and zeros the rest.
Summary:
Also ran clang-format on the function. The code added is the last else
if block.

Reviewers: nadav, craig.topper

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D3518

llvm-svn: 207992
2014-05-05 19:36:28 +00:00
Marek Olsak 82d3b11e85 R600/SI: allow 5 more input SGPRs to a shader
Our OpenGL driver needs 22 SGPRs (16 user SGPRs + 6 streamout non-user SGPRs).

Signed-off-by: Marek Olšák <marek.olsak@amd.com>
llvm-svn: 207990
2014-05-05 19:30:54 +00:00
Saleem Abdulrasool e8a7afef86 CodeGen: correct memset emission for WoA
Windows on ARM does not conform to AEABI.  However, memset would be emitted
using the AEABI signature, resulting in inverted parameters.  Handle this
special case appropriately.

llvm-svn: 207943
2014-05-04 23:13:21 +00:00
Saleem Abdulrasool 729c7a08fb MC: support FK_SecRel_4 for Windows on ARM
Add handling for FK_SecRel_4 (4-byte section-relative relocations).  These are
used by the generation of DWARF debug information (the abbreviations use
section-relative relocations).  This will also be used in the generation of CodeView line
tables.

llvm-svn: 207941
2014-05-04 23:13:15 +00:00
Elena Demikhovsky e73333a50f AVX-512: minor change in rndscale intrinsic
llvm-svn: 207937
2014-05-04 13:35:37 +00:00
Saleem Abdulrasool 3c82b499a0 X86: further range-loopify AsmPrinter
Use more range loops in the X86AsmPrinter.  NFC.

llvm-svn: 207928
2014-05-04 01:54:17 +00:00
Saleem Abdulrasool b942035bae X86: remove X86COFFMachineModuleInfo
Remove dead code.  This is vestigial after r98384.

llvm-svn: 207927
2014-05-04 01:54:12 +00:00
Saleem Abdulrasool 82b69fa105 X86: repair export compatibility with MinGW/cygwin
Both MinGW and cygwin (i686) construct export directives without the global
leader prefix.  This is mostly due to the fact that they use GNU ld, which does
not correctly handle the export directive.  This has apparently been broken
for a while.  However, it was recently reported as broken by
mingwandroid and diorcety of the msys2 project.

Remove the global leader prefix if targeting MinGW or cygwin; otherwise, retain
the global leader prefix.  Add an explicit test for cygwin's behaviour of export
directives.

llvm-svn: 207926
2014-05-04 00:03:48 +00:00
Saleem Abdulrasool 75e68cbd12 X86: refactor export directive generation
Create a helper function to generate the export directive.  This was previously
duplicated inline to handle export directives for variables and functions.  This
also enables the use of range-based iterators for the generation of the
directive rather than the traditional loops.  NFC.

llvm-svn: 207925
2014-05-04 00:03:41 +00:00
Rafael Espindola 3d082fa507 Fix pr19645.
The fix itself is fairly simple: move getAccessVariant to MCValue so that we
replace the old weak expression evaluation with the far more general
EvaluateAsRelocatable.

This then requires that EvaluateAsRelocatable stop when it finds a non-trivial
reference kind. And that in turn requires the ELF writer to look
harder for weak references.

Last but not least, this found a case where we were being bug-for-bug
compatible with gas and accepting an invalid input. I reported pr19647
to track it.

llvm-svn: 207920
2014-05-03 19:57:04 +00:00
Joey Gouly b0afd1b929 [ARM64] Correctly select ANDWri in FastISel.
http://reviews.llvm.org/D3598

llvm-svn: 207917
2014-05-03 17:27:06 +00:00
Benjamin Kramer 6004573ecf Add a description for AMD's bdver4 (aka Excavator).
This is just bdver3 + AVX2 + BMI2.

llvm-svn: 207847
2014-05-02 15:47:07 +00:00
Tom Stellard 10b1502733 R600/SI: Add processor type for Mullins.
Reviewed-by: Tom Stellard <thomas.stellard@amd.com>
Signed-off-by: Samuel Li <samuel.li@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
llvm-svn: 207846
2014-05-02 15:41:49 +00:00
Tom Stellard 3dbf1f8df0 R600: Expand vector sin and cos.
v2: move code to AMDGPUISelLowering.cpp
    squash with tests (both EG and SI)

Signed-off-by: Jan Vesely <jan.vesely@rutgers.edu>
llvm-svn: 207845
2014-05-02 15:41:47 +00:00
Tom Stellard 605e116e8e R600: Expand TruncStore i64 -> {i16,i8}
llvm-svn: 207844
2014-05-02 15:41:46 +00:00
Tom Stellard eba61071d7 R600/SI: Only create one instruction when spilling/restoring register v3
The register spiller assumes that only one new instruction is created
when spilling and restoring registers, so we need to emit pseudo
instructions for vector register spills and lower them after
register allocation.

v2:
  - Fix calculation of lane index
  - Extend VGPR liveness to end of program.

v3:
  - Use SIMM16 field of S_NOP to specify multiple NOPs.

https://bugs.freedesktop.org/show_bug.cgi?id=75005

llvm-svn: 207843
2014-05-02 15:41:42 +00:00
Tim Northover d7360900a8 AArch64/ARM64: add patterns for post-indexed ST1 ops.
llvm-svn: 207840
2014-05-02 14:54:27 +00:00
Tim Northover 523b5a43fb ARM64: refactor NEON post-indexed loads & stores (MC).
Previously, LLVM had no knowledge that these instructions actually
modified their address register: fine if they never end up in CodeGen,
but it becomes a disaster when I'd rather like to write some patterns
for them.

The change is mostly straightforward; I think the most significant
design decision was to *always* put the address write-back first. This
allows loads and stores to be accessed more uniformly, for example
permitting the continued sharing of the InstAlias definitions.

I also discovered that the custom Decode logic is no longer needed, so
I removed it.

No tests, because there should be no functionality change.

llvm-svn: 207839
2014-05-02 14:54:21 +00:00
Tim Northover d0b07e133b AArch64/ARM64: support indexed loads/stores on vector types.
While post-indexed LD1/ST1 instructions do exist for vector loads,
this patch makes use of the more flexible addressing-modes in LDR/STR
instructions.

llvm-svn: 207838
2014-05-02 14:54:15 +00:00
Pranav Bhandarkar 94cb35cb05 Remove HexagonTargetMachine::addPassesForOptimizations; it is not needed any more.
llvm-svn: 207800
2014-05-01 22:10:59 +00:00
Reed Kotler bab3f23da6 Add basic functionality for assignment of ints.
This creates a lot of core infrastructure to which, with little
effort, quite a bit more mips fast-isel functionality can be added.

Test Plan: simplestore.ll

Reviewers: dsanders

Reviewed By: dsanders

Differential Revision: http://reviews.llvm.org/D3527

llvm-svn: 207790
2014-05-01 20:39:21 +00:00
Eli Bendersky a108a65df2 Add an optimization that does CSE in a group of similar GEPs.
This optimization merges the common part of a group of GEPs, so we can compute
each pointer address by adding a simple offset to the common part.

The optimization is currently only enabled for the NVPTX backend, where it has
a large payoff on some benchmarks.

Review: http://reviews.llvm.org/D3462

Patch by Jingyue Wu.

llvm-svn: 207783
2014-05-01 18:38:36 +00:00
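
A hypothetical before/after sketch of the optimization described above (names invented, using the GEP syntax of this era): two GEPs whose indices differ only by a constant are rewritten so the common part is computed once and each address becomes a simple offset from it.

  ; before: each address recomputes %base + %i + constant from scratch
  %idx1 = add i64 %i, 1
  %p1 = getelementptr inbounds float* %base, i64 %idx1
  %idx2 = add i64 %i, 2
  %p2 = getelementptr inbounds float* %base, i64 %idx2

  ; after: the common part (%base + %i) is one GEP, reused with constant offsets
  %common = getelementptr float* %base, i64 %i
  %q1 = getelementptr float* %common, i64 1
  %q2 = getelementptr float* %common, i64 2
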
Matt Arsenault 06028dd7be R600/SI: Fix verifier error with pseudo store instructions.
Use i32 instead of specifying SReg_32. When this is
the pseudo INDIRECT_BASE_ADDR, this would give a bogus
verifier error.

llvm-svn: 207770
2014-05-01 16:37:52 +00:00
Bradley Smith 3567cc1b42 [ARM64] Prefer generation of bzero on Darwin only
llvm-svn: 207760
2014-05-01 13:11:59 +00:00
Rafael Espindola 4a04294882 Don't force symbols to be globals in .thumb_set.
We currently force symbols to be globals in .thumb_set. The intent
seems to be that given

.thumb_set foo, bar

we emit an undefined symbol to bar if it is never defined. The side
effect is that we mark bar as global, even if it is defined, which gas
does not.

Producing an undefined reference to bar is a general difference from MC and gas.
For example, given

a = b

gas will produce an undefined reference to b, MC will not. I would be surprised
if any code depends on this, but it it does, we should fix the general
difference, not special case .thumb_set.

llvm-svn: 207757
2014-05-01 12:45:43 +00:00
Tim Northover 534acbdf73 AArch64/ARM64: print BFM instructions as BFI or BFXIL
The canonical form of the BFM instruction is always one of the more explicit
extract or insert operations, which makes reading output much easier.

llvm-svn: 207752
2014-05-01 12:29:38 +00:00
Richard Barton 3db1d580b3 Correction to assert statement to allow 32-bit unsigned numbers with the top bit set.
This fixes an ARM assembler crash - regression test added.

llvm-svn: 207747
2014-05-01 11:37:44 +00:00
Bradley Smith f57d5ca234 [ARM64] Conditionalize CPU specific system registers on subtarget features
llvm-svn: 207742
2014-05-01 10:25:36 +00:00
Matheus Almeida d92a3fa212 [mips] Move expansion of .cpsetup to target streamer.
Summary:
There are two functional changes:
1) The directive is not expanded for the ASM->ASM code path.
2) If PIC is not set, there's no expansion for the ASM->OBJ code path (same behaviour as GAS).

Reviewers: dsanders

Reviewed By: dsanders

Differential Revision: http://reviews.llvm.org/D3482

llvm-svn: 207741
2014-05-01 10:24:46 +00:00
Daniel Sanders 88fbbcaa30 [mips] Removed two-operand alias for sllv, sr[al]v, rotrv, dsllv, dsr[al]v, and drotrv
GAS doesn't actually accept these particular cases.

The mnemonic without the trailing 'v' still supports two-operand aliases.

llvm-svn: 207740
2014-05-01 10:08:36 +00:00
Saleem Abdulrasool 7158303ad7 ARM: fix memory leak, simplify WoA stack probing
This fixes the memory leak introduced with the initial addition of support for
WoA stack probing.  Now that the pseudo-instruction expansion can handle an
external symbol, use that to generate the load which simplifies the logic as
well as avoids the memory leak.

llvm-svn: 207737
2014-05-01 04:19:59 +00:00
Saleem Abdulrasool d6c0ba3787 ARM: support expanding external symbols in 32-bit moves
This enhances the expansion of the mov32imm pseudo-instruction to support an
external symbol reference.  This is motivated by a simplification of the stack
probe emission for Windows on ARM (and fixing a leak).

llvm-svn: 207736
2014-05-01 04:19:56 +00:00
Joerg Sonnenberger 0f90c95ccf If necessary for indirect encodings, emit stubs.
llvm-svn: 207730
2014-05-01 00:25:15 +00:00
Joerg Sonnenberger 3c10817b92 Prepare support of Itanium ABI on ARM as opposed to EHABI by
conditionally emitting .fnstart and friends only for EHABI.

llvm-svn: 207718
2014-04-30 22:43:13 +00:00
Joerg Sonnenberger fe54364a9d Restore condition incorrectly changed in r96289 to the older state.
llvm-svn: 207716
2014-04-30 22:40:27 +00:00
Weiming Zhao 7f6daf1799 [ARM64] Prevent bit extraction to be adjusted by following shift
For a pattern like ((x >> C1) & Mask) << C2, the DAG combiner may convert it
into (x >> (C1-C2)) & (Mask << C2), which makes pattern matching of ubfx
more difficult.
For example:
Given
  %shr = lshr i64 %x, 4
  %and = and i64 %shr, 15
  %arrayidx = getelementptr inbounds [8 x [64 x i64]]* @arr, i64 0, i64 2, i64 %and
  %0 = load i64* %arrayidx
With current shift folding, it takes 3 instrs to compute the base address:
  lsr x8, x0, #1
  and x8, x8, #0x78
  add x8, x9, x8

If using ubfx, it only needs 2 instrs:
  ubfx  x8, x0, #4, #4
  add x8, x9, x8, lsl #3

This fixes bug 19589

llvm-svn: 207702
2014-04-30 21:07:24 +00:00
Michael Zolotukhin 1f4a960ccf [X86] Never hoist the shift value of a shift instruction.
There is no need to check if we want to hoist the immediate value of a
shift instruction. Simply return TCC_Free right away.

This change is like r206101, but for X86.

rdar://problem/16190769

llvm-svn: 207692
2014-04-30 19:17:32 +00:00
Matheus Almeida e844872830 [mips] Add instruction alias (negu).
Summary: negu $reg is equivalent to negu $reg, $reg.

Reviewers: dsanders

Reviewed By: dsanders

Differential Revision: http://reviews.llvm.org/D3510

llvm-svn: 207673
2014-04-30 16:53:49 +00:00
Matheus Almeida b7be52343d [mips] Add instruction alias (sltu).
Summary:
The pattern sltu $r1, $r2, $imm is found in handwritten assembly; it
is just a shorthand version of sltiu $r1, $r2, $imm.

Reviewers: dsanders

Reviewed By: dsanders

Differential Revision: http://reviews.llvm.org/D3508

llvm-svn: 207671
2014-04-30 16:29:56 +00:00
Tim Northover a8c577e454 ARM64: print fp immediates without using scientific notation.
llvm-svn: 207669
2014-04-30 16:13:34 +00:00