Commit Graph

5897 Commits

Author SHA1 Message Date
Hal Finkel b9a3d61894 Don't crash when a glue node contains an internal CopyToReg
This is necessary to support the existing ppc lowering code for indirect calls.
Fixes PR12071.

llvm-svn: 151373
2012-02-24 17:53:59 +00:00
Kristof Beyls b59291a8e6 test commit. removing unnecessary whitespace.
llvm-svn: 151363
2012-02-24 13:52:45 +00:00
NAKAMURA Takumi b2ef38b999 test/CodeGen/X86/2012-02-23-mmx-inlineasm.ll: Fixup to add -march=x86.
-mcpu does not choose the arch automatically on non-x86 hosts.

llvm-svn: 151362
2012-02-24 13:29:50 +00:00
Pete Cooper 682c76b7d4 Turn avx insert intrinsic calls into INSERT_SUBVECTOR DAG nodes and remove duplicate patterns for selecting the intrinsics
llvm-svn: 151342
2012-02-24 03:51:49 +00:00
Jim Grosbach c01104dfbf Thumb2 size reduction fix for tied operands of tMUL.
The tied source operand of tMUL is the second source operand, not the
first like every other two-address thumb instruction. Special case it
in the size reduction pass to make sure we create the tMUL instruction
properly.

llvm-svn: 151315
2012-02-24 00:33:36 +00:00
Dan Gohman d4a77c4682 When emitting a cmp with 0 for a lowered select, mask out the high
bits of the value carrying the boolean condition, as their contents
are undefined. This fixes rdar://10887484.

llvm-svn: 151310
2012-02-24 00:09:36 +00:00
Bill Wendling 38b31619f6 Allow an integer to be converted into an MMX type when it's used in an inline
asm.
<rdar://problem/10106006>

llvm-svn: 151303
2012-02-23 23:25:25 +00:00
Roman Divacky a2d3608f78 MCize function entry label emission on PowerPC64 properly.
llvm-svn: 151278
2012-02-23 20:28:39 +00:00
Jakob Stoklund Olesen aae960d978 Make tests less sensitive to scheduling changes.
llvm-svn: 151260
2012-02-23 17:19:34 +00:00
Anton Korobeynikov a22828e085 Fix to make sure that a comdat group gets generated correctly for a static member
of instantiated C++ templates.

Patch by Kristof Beyls!

llvm-svn: 151250
2012-02-23 10:36:04 +00:00
Evan Cheng f258a15bdf Canonicalize (srl (bswap x), 16) to (rotr (bswap x), 16) if the high 16 bits
of x are zero. This optimizes rev + lsr 16 to rev16.

rdar://10750814

llvm-svn: 151230
2012-02-23 02:58:19 +00:00
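A minimal C++ sketch (the function name is illustrative, not from the commit) of a source pattern that produces the rev + lsr #16 sequence described above; the zero-extension of the 16-bit argument supplies the zero high bits the combine requires:

  #include <cstdint>

  // Byte-swap a halfword. With the high 16 bits known to be zero, the
  // (srl (bswap x), 16) node can be rewritten to the rotate form and
  // selected as a single rev16 on ARM.
  uint16_t swap_halfword(uint16_t x) {
    return static_cast<uint16_t>(__builtin_bswap32(x) >> 16);
  }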
Evan Cheng e87681cf34 Optimize a couple of common patterns involving conditional moves where the false
value is zero. Instead of a cmov + op, issue a conditional op instead, e.g.
    cmp   r9, r4
    mov   r4, #0
    moveq r4, #1 
    orr   lr, lr, r4

should be:
    cmp   r9, r4
    orreq lr, lr, #1

That is, optimize (or x, (cmov 0, y, cond)) to (or.cond x, y). Similarly extend
this to xor as well as (and x, (cmov -1, y, cond)) => (and.cond x, y).

It's possible to extend this to ADD and SUB but I don't think they are common.

rdar://8659097

llvm-svn: 151224
2012-02-23 01:19:06 +00:00
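A hedged C++ sketch (hypothetical function) of source code whose naive lowering is the cmp/mov/moveq/orr sequence shown in the commit above, and which the new combine shrinks to cmp + orreq:

  // (a == b) typically lowers to a conditional move of 0/1; or'ing it into
  // acc matches (or x, (cmov 0, 1, eq)) and becomes a predicated orreq.
  unsigned or_with_eq_flag(unsigned acc, unsigned a, unsigned b) {
    return acc | (a == b ? 1u : 0u);
  }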
Daniel Dunbar 9646a472da MC: Fix the MCNullStreamer which was broken in r147763.
llvm-svn: 151213
2012-02-22 23:49:50 +00:00
Hal Finkel ad4d9f5848 Allow the use of an alternate symbol for calculating a function's size.
The standard function epilog includes a .size directive, but ppc64 uses
an alternate local symbol to tag the actual start of each function.

Until recently, binutils accepted the .size directive as:
 .size	test1, .Ltmp0-test1
however, using this directive with recent binutils will result in the error:
 .size expression for XXX does not evaluate to a constant
so we must use the label which actually tags the start of the function.

llvm-svn: 151200
2012-02-22 21:11:47 +00:00
Michael J. Spencer 8b98bf2d6b Properly emit _fltused with FastISel. Refactor to share code with SDAG.
Patch by Joe Groff!

llvm-svn: 151183
2012-02-22 19:06:13 +00:00
Aaron Ballman e67173e718 Adding support for Microsoft's thiscall calling convention. LLVM side of the patch.
llvm-svn: 151123
2012-02-22 03:04:40 +00:00
Jakob Stoklund Olesen d0dc3920de Remove a bad PowerPC test.
This test case was way too strict, matching the entire assembly output.
Every non-trivial change to the ppc backend or the -O0 pipeline required
the test to be updated.

It should be replaced with a test of the specific vaarg feature.

llvm-svn: 151105
2012-02-21 23:49:18 +00:00
Evan Cheng 0460ae8d80 Proper support for a bastardized darwin-eabi hybrid ABI.
llvm-svn: 151083
2012-02-21 20:46:00 +00:00
NAKAMURA Takumi c664fdbd4a test/CodeGen/X86/2012-02-20-MachineCPBug.ll: Fix on generic(non-x86) hosts to add -mattr=+sse.
llvm-svn: 151053
2012-02-21 11:56:42 +00:00
Evan Cheng 63618f9ba6 Fix machine-cp by having it check sub-register indices, e.g.
ecx = mov eax
al  = mov ch
The second copy is not a nop because the sub-register indices of ecx/ch are not
the same as those of eax/al.

Re-enabled machine-cp.
PR11940

llvm-svn: 151002
2012-02-20 23:28:17 +00:00
Eric Christopher c767c51b5d Testcase for the previous commit.
llvm-svn: 150852
2012-02-18 00:05:45 +00:00
David Chisnall 8fa1716508 It turns out that putting an 8-byte symbol in a 4-byte section makes Solaris ld sulk. GNU ld is perfectly happy with it, which is worrying for a whole other set of reasons...
Thanks to Anton, Duncan and Rafael for helping me track this down.
Pointy hat to Rafael for introducing the bug in the first place.

llvm-svn: 150811
2012-02-17 16:05:50 +00:00
Chad Rosier fcd29ae390 [fast-isel] Add support for returning non-legal types with no sign- or zero-
extend flag.

llvm-svn: 150774
2012-02-17 01:21:28 +00:00
Bill Wendling ec713ee4b4 Use -mcpu=generic, so that the test will not fail when run on an Intel Atom
processor, due to the Atom scheduler producing an instruction sequence that is
different from that which is expected.
Patch by Michael Spencer!

llvm-svn: 150736
2012-02-16 22:42:48 +00:00
Benjamin Kramer b0d75c2f4e Disable machine copy propagation for now. It's known to be buggy (PR11940) and introduces subtle miscompiles in many places.
llvm-svn: 150703
2012-02-16 17:29:50 +00:00
Eli Bendersky 924f9a671d Replace all instances of the dg.exp file with lit.local.cfg, since all tests are run with LIT now and not Dejagnu. dg.exp is no longer needed.
Patch reviewed by Daniel Dunbar. It will be followed by additional cleanup patches.

llvm-svn: 150664
2012-02-16 06:28:33 +00:00
Bill Wendling 6a26ab7f99 Add a test for generating Objective-C metadata from module flags.
llvm-svn: 150635
2012-02-15 23:43:37 +00:00
Pete Cooper c21ebf5c41 Stop custom lowering for x86 DEC64m from happening if the load in the lowered sequence has more than 1 user
llvm-svn: 150537
2012-02-15 00:33:37 +00:00
Lang Hames 595111f221 Tighten physical register invariants: Allocatable physical registers can
only be live in to a block if it is the function entry point or a landing pad.

llvm-svn: 150494
2012-02-14 18:51:53 +00:00
Nadav Rotem 29984ba033 Fix PR12000. Some vector operations may use scalar operands with types
that are greater than the vector element type. For example BUILD_VECTOR
of type <1 x i1> with a constant i8 operand.
This patch fixes the assertion.

llvm-svn: 150477
2012-02-14 13:06:32 +00:00
Nadav Rotem 0c65064dbe Fix a bug in DAGCombine for the optimization of BUILD_VECTOR. We can't generate a shuffle node from two vectors of different types.
llvm-svn: 150383
2012-02-13 12:42:26 +00:00
Craig Topper 38f2c7f14f Revert accidental commit of a pruned testcase from r150360.
llvm-svn: 150361
2012-02-13 04:33:33 +00:00
Craig Topper 87119fa37f Update CanXFormVExtractWithShuffleIntoLoad to ensure bitcasts of loads only have one use. Matches DAGCombiner and prevents vector_shuffles from reaching isel.
llvm-svn: 150360
2012-02-13 04:30:38 +00:00
Pete Cooper 71be57bb32 Fixed bug when custom lowering DEC64m on x86.
If the DEC node had more than one user, it was doing this lowering but
leaving the original DEC node around and so decrementing twice.

Fixes PR11964.

llvm-svn: 150356
2012-02-13 00:10:03 +00:00
Nadav Rotem 34ca89afa8 This patch addresses the problem of poor code generation for the zext
v8i8 -> v8i32 on AVX machines. The codegen often scalarizes ANY_EXTEND nodes.
The DAGCombiner has two optimizations that can mitigate the problem. First,
if all of the operands of a BUILD_VECTOR node are extracted from ZEXT/ANYEXT
nodes, then it is possible to create a new simplified BUILD_VECTOR which uses
UNDEFS/ZERO values to eliminate the scalar ZEXT/ANYEXT nodes.
Second, another dag combine optimization lowers BUILD_VECTOR into a shuffle
vector instruction.

In the case of zext v8i8->v8i32 on AVX, a value in an XMM register is to be
shuffled into a wide YMM register.

This patch modifies the second optimization and allows the creation of
shuffle vectors even when the newly generated vector and the original vector
from which we extract the values are of different types.

llvm-svn: 150340
2012-02-12 15:05:31 +00:00
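A hedged scalar C++ sketch (names are illustrative) of the kind of source that, once vectorized, becomes the zext v8i8 -> v8i32 pattern discussed in the commit above:

  #include <cstdint>

  // Each i8 lane is zero-extended into an i32 lane of the result vector.
  void zext_v8i8_to_v8i32(const uint8_t *src, uint32_t *dst) {
    for (int i = 0; i < 8; ++i)
      dst[i] = src[i];
  }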
Anton Korobeynikov c6b4017ce2 Add support for implicit TLS model used with MS VC runtime.
Patch by Kai Nacke!

llvm-svn: 150307
2012-02-11 17:26:53 +00:00
Andrew Trick d3f8fe81f4 RegAlloc superpass: includes phi elimination, coalescing, and scheduling.
Creates a configurable regalloc pipeline.

Ensure specific llc options do what they say and nothing more: -regalloc=... has no effect other than selecting the allocator pass itself. This patch introduces a new umbrella flag, "-optimize-regalloc", to enable/disable the optimizing regalloc "superpass". This allows, for example, testing coalescing and scheduling under -O0 or vice-versa.

When a CodeGen pass requires the MachineFunction to have a particular property, we need to explicitly define that property so it can be directly queried rather than naming a specific Pass. For example, to check for SSA, use MRI->isSSA, not addRequired<PHIElimination>.

CodeGen transformation passes are never "required" as an analysis.

ProcessImplicitDefs does not require LiveVariables.

We have a plan to massively simplify some of the early passes within the regalloc superpass.

llvm-svn: 150226
2012-02-10 04:10:36 +00:00
NAKAMURA Takumi d5a504e52b test/CodeGen/X86/atom-lea-sp.ll: Add explicit -mtriple=i686-linux.
llvm-svn: 150151
2012-02-09 05:12:58 +00:00
Evan Cheng 55e7d6aeff Commit Andy Zhang's test for the lea patch.
llvm-svn: 150107
2012-02-08 22:33:17 +00:00
Elena Demikhovsky 1adc1d53dd Fixed a bug in printing "cmp" pseudo ops.
> This IR code
> %res = call <8 x float> @llvm.x86.avx.cmp.ps.256(<8 x float> %a0, <8 x float> %a1, i8 14)
> fails with assertion:
>
> llc: X86ATTInstPrinter.cpp:62: void llvm::X86ATTInstPrinter::printSSECC(const llvm::MCInst*, unsigned int, llvm::raw_ostream&): Assertion `0 && "Invalid ssecc argument!"' failed.
> 0  llc             0x0000000001355803
> 1  llc             0x0000000001355dc9
> 2  libpthread.so.0 0x00007f79a30575d0
> 3  libc.so.6       0x00007f79a23a1945 gsignal + 53
> 4  libc.so.6       0x00007f79a23a2f21 abort + 385
> 5  libc.so.6       0x00007f79a239a810 __assert_fail + 240
> 6  llc             0x00000000011858d5 llvm::X86ATTInstPrinter::printSSECC(llvm::MCInst const*, unsigned int, llvm::raw_ostream&) + 119

I added the full testing for all possible pseudo-ops of cmp.
I extended X86AsmPrinter.cpp and X86IntelInstPrinter.cpp.

You'll also see line alignment changes (unrelated to this fix) in X86ISelLowering.cpp from my previous check-in.

llvm-svn: 150068
2012-02-08 08:37:26 +00:00
Chad Rosier 0ee8c513f7 [fast-isel] Add support for SUBs with non-legal types.
llvm-svn: 150047
2012-02-08 02:45:44 +00:00
Chad Rosier bfe7393d7d Add comment to test case.
llvm-svn: 150046
2012-02-08 02:30:12 +00:00
Chad Rosier bd471255a9 [fast-isel] Add support for ORs with non-legal types.
llvm-svn: 150045
2012-02-08 02:29:21 +00:00
Chad Rosier ded4c99f2e [fast-isel] Add support for indirect branches.
llvm-svn: 150014
2012-02-07 23:56:08 +00:00
Craig Topper b27fd77c3f Add instruction selection for 256-bit VPSHUFD and 128-bit VPERMILPS/VPERMILPD.
llvm-svn: 149968
2012-02-07 06:28:42 +00:00
Chad Rosier 685b20c114 [fast-isel] Add support for ADDs with non-legal types.
llvm-svn: 149934
2012-02-06 23:50:07 +00:00
Duncan Sands be65578460 Testcase for commit 149833 (use of an uninitialized variable noticed
by GCC).

llvm-svn: 149840
2012-02-05 19:27:57 +00:00
Benjamin Kramer 93492b8765 Testing vector code without sse doesn't make much sense.
Should bring arm and ppc testers back to life (they default to -mcpu=generic)

llvm-svn: 149821
2012-02-05 11:19:39 +00:00
Chris Lattner 6987fdb620 Add a test for the miscompilation my recent ConstantDataArray patches introduced, to make sure
we don't regress on it in the future.

llvm-svn: 149803
2012-02-05 02:37:36 +00:00
Craig Topper 4daa67483d Remove most of the intrinsics for XOP VPCMOV instruction. They all aliased to the same instruction with different types. This would be better accomplished with casts in the not yet created xopintrin.h header file.
llvm-svn: 149795
2012-02-05 00:55:56 +00:00
Chad Rosier 6d68c7cf79 [fast-isel] HandlePHINodesInSuccessorBlocks() can promote i8 and i16 types too.
llvm-svn: 149730
2012-02-04 00:39:19 +00:00
Chad Rosier 41f0e78b6c [fast-isel] Add support for FPToUI. Also add test cases for FPToSI.
llvm-svn: 149706
2012-02-03 20:27:51 +00:00
Chad Rosier a8a8ac5d47 [fast-isel] Add support for selecting UIToFP.
llvm-svn: 149704
2012-02-03 19:42:52 +00:00
Nadav Rotem 5399f4d6bf The type-legalizer often scalarizes code. One of the common patterns is extract-and-truncate.
In this patch we optimize this pattern and convert the sequence into extract op of a narrow type.
This allows the BUILD_VECTOR dag optimizations to construct efficient shuffle operations in many cases.

llvm-svn: 149692
2012-02-03 13:18:25 +00:00
Akira Hatanaka f0b08445f6 Add a new MachineJumpTableInfo entry type, EK_GPRel64BlockAddress, which is
needed to emit a 64-bit gp-relative relocation entry. Make changes necessary
for emitting jump tables which have entries with directive .gpdword. This patch
does not implement the parts needed for direct object emission or JIT.

llvm-svn: 149668
2012-02-03 04:33:00 +00:00
Matt Beaumont-Gay 1e8d720743 Unix line endings
llvm-svn: 149615
2012-02-02 19:00:49 +00:00
NAKAMURA Takumi 9df3de538b Move test/CodeGen/Generic/2012-02-01-CoalescerBug.ll to CodeGen/ARM, for now. It requires TARGETS=arm.
I cannot reproduce a fixed issue with other targets.

llvm-svn: 149604
2012-02-02 11:44:58 +00:00
Elena Demikhovsky fb44980b41 Optimization for SIGN_EXTEND operation on AVX.
Special handling was added for v4i32 -> v4i64 and v8i16 -> v8i32
extensions.

llvm-svn: 149600
2012-02-02 09:10:43 +00:00
Lang Hames 0269caafa6 Set EFLAGS correctly in EmitLoweredSelect on X86.
llvm-svn: 149597
2012-02-02 07:48:37 +00:00
Lang Hames 3a20bc3652 PR11868. The previous loop in LiveIntervals::join would sometimes fall over if
more than two adjacent ranges needed to be merged. The new version should be
able to handle an arbitrary sequence of adjacent ranges.

llvm-svn: 149588
2012-02-02 05:37:34 +00:00
Andrew Trick 8523b16ff5 Instruction scheduling itinerary for Intel Atom.
Adds an instruction itinerary to all x86 instructions, giving each a default latency of 1, using the InstrItinClass IIC_DEFAULT.

Sets specific latencies for Atom for the instructions in files X86InstrCMovSetCC.td, X86InstrArithmetic.td, X86InstrControl.td, and X86InstrShiftRotate.td. The Atom latencies for the remainder of the x86 instructions will be set in subsequent patches.

Adds a test to verify that the scheduler is working.

Also changes the scheduling preference to "Hybrid" for i386 Atom, while leaving x86_64 as ILP.

Patch by Preston Gurd!

llvm-svn: 149558
2012-02-01 23:20:51 +00:00
Mon P Wang 9f05206659 Avoid creating an extract element to an illegal type after LegalizeTypes has run.
llvm-svn: 149548
2012-02-01 22:15:20 +00:00
Andrew Trick d06df96a7c VLIW specific scheduler framework that utilizes deterministic finite automaton (DFA).
This new scheduler plugs into the existing selection DAG scheduling framework. It is a top-down critical path scheduler that tracks register pressure and uses a DFA for pipeline modeling.

Patch by Sergei Larin!

llvm-svn: 149547
2012-02-01 22:13:57 +00:00
NAKAMURA Takumi fe8186f8b0 test/CodeGen/X86/avx-minmax.ll: Relax expressions for Win32 targets. YMM arguments are passed as indirect on Win32 x64.
llvm-svn: 149505
2012-02-01 14:35:29 +00:00
Elena Demikhovsky 824eed70a6 Passing AVX 256-bit structures in Win64 was wrong.
Fixed Win64 calling conventions.

llvm-svn: 149494
2012-02-01 10:46:14 +00:00
Elena Demikhovsky 0e48c70ba7 Optimization for "truncate" operation on AVX.
Truncating v4i64 -> v4i32 and v8i32 -> v8i16 may be done with set of shuffles.

llvm-svn: 149485
2012-02-01 07:56:44 +00:00
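A hedged scalar C++ sketch (names are illustrative) of the source behind one of the truncations above, v8i32 -> v8i16, which this change lowers with a shuffle sequence instead of scalarizing:

  #include <cstdint>

  // Each i32 lane is truncated to an i16 lane of the result vector.
  void trunc_v8i32_to_v8i16(const uint32_t *src, uint16_t *dst) {
    for (int i = 0; i < 8; ++i)
      dst[i] = static_cast<uint16_t>(src[i]);
  }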
Craig Topper b85e40f738 Remove pcmpgt/pcmpeq intrinsics as clang is not using them.
llvm-svn: 149367
2012-01-31 06:52:44 +00:00
Bill Wendling d7cd9727ee Remove all references to the old EH.
There was always the current EH. -- Ministry of Truth

llvm-svn: 149335
2012-01-31 02:09:07 +00:00
Chandler Carruth 2c469ff14a Chris's constant data sequence refactoring actually enabled vectors of all
one bits to be printed more cleverly in the AsmPrinter.
Unfortunately, the byte value for all one bits is the same with
-fsigned-char as the error return of '-1'. Force this to be the unsigned
byte value when returning it to avoid this problem, and update the test
case for the shiny new behavior.

Yay for building LLVM and Clang with -funsigned-char.

Chris, please review, and let me know if there is any reason to not
desire this change. It seems good on the surface, and certainly intended
based on the code written.

llvm-svn: 149299
2012-01-30 23:47:44 +00:00
Craig Topper 516cba3380 Fix pattern for memory form of PSHUFD for use with FP vectors to remove bitcast to an integer vector that normal code wouldn't have. Also remove bitcasts from code that turns splat vector loads into a shuffle as it was making the broken pattern necessary.
llvm-svn: 149232
2012-01-30 07:50:31 +00:00
Rafael Espindola bb893fea6b Add r149110 back with a fix for when the vector and the int have the same
width.

llvm-svn: 149151
2012-01-27 23:33:07 +00:00
Rafael Espindola a4062624d1 Revert r149110 and add a testcase that was crashing since that revision.
Unfortunately I also had to disable constant-pool-sharing.ll, since the code it tests has been
updated to use the IL logic.

llvm-svn: 149148
2012-01-27 22:42:48 +00:00
Matt Beaumont-Gay f3a9a38db4 Unix line endings
llvm-svn: 149115
2012-01-27 02:31:29 +00:00
Lang Hames 286fed5c8c Rewrite instruction operands in AdjustCopiesBackFrom. Fixes PR11861.
llvm-svn: 149097
2012-01-27 00:05:42 +00:00
Jakob Stoklund Olesen fc9dce25f7 Handle call-clobbered ymm registers on Win64.
The Win64 calling convention has xmm6-15 as callee-saved while still
clobbering all ymm registers.

Add a YMM_HI_6_15 pseudo-register that aliases the clobbered part of the
ymm registers, and mark that as call-clobbered.  This allows live xmm
registers across calls.

This hack wouldn't be necessary with RegisterMask operands representing
the call clobbers, but they are not quite operational yet.

llvm-svn: 149088
2012-01-26 22:59:28 +00:00
Chad Rosier 1a1531d65e Replace the use of isPredicable() with isPredicated() in
MachineBasicBlock::canFallThrough().  We're interested in the state of the
instruction (i.e., is this a barrier or not?), not if the instruction is
predicable or not.
rdar://10501092

llvm-svn: 149070
2012-01-26 18:24:25 +00:00
Jakob Stoklund Olesen 8c139a5125 Clear kill flags before propagating a copy.
The live range of the source register may be extended when a redundant
copy is eliminated. Make sure any kill flags between the two copies are
cleared.

This fixes PR11765.

llvm-svn: 149069
2012-01-26 17:52:15 +00:00
Victor Umansky 5f29b0e57b Fix for the following bug in AVX codegen for double-to-int conversions:
- "fptosi" and "fptoui" IR instructions are defined with round-to-zero rounding mode.
- Currently for AVX mode, for <4xdouble> and <8xdouble> the "VCVTPD2DQ.128" and "VCVTPD2DQ.256" instructions are selected (for the 'fp_to_sint' DAG node operation) by AVX codegen. However, they use round-to-nearest-even rounding mode.
- Consequently, the conversion produces incorrect numbers.

The fix is to replace selection of VCVTPD2DQ instructions with VCVTTPD2DQ instructions. The latter use truncate (i.e. round-to-zero) rounding mode.
As the 'fp_to_sint' DAG node operation is used only for lowering of "fptosi" and "fptoui" IR instructions, the fix in the X86InstrSSE.td definition file doesn't have an impact on other LLVM flows.

The patch includes changes in the .td file, a LIT test for the changes, and a fix in a legacy LIT test (which produced asm code conflicting with the LLVM IR spec).

llvm-svn: 149056
2012-01-26 08:51:39 +00:00
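A small self-contained C++ check (not part of the patch) of why the rounding mode matters here: C-style double-to-int casts, like the fptosi IR instruction, truncate toward zero, matching VCVTTPD2DQ rather than VCVTPD2DQ:

  #include <cassert>

  int main() {
    double v[2] = {2.7, -2.7};
    int r[2];
    for (int i = 0; i < 2; ++i)
      r[i] = static_cast<int>(v[i]); // fptosi semantics: round toward zero
    assert(r[0] == 2 && r[1] == -2); // round-to-nearest-even would give 3 and -3
    return 0;
  }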
Jakob Stoklund Olesen 4864a81aa3 Improve sub-register def handling in ProcessImplicitDefs.
This boils down to using MachineOperand::readsReg() more.

This fixes PR11829 where a use ended up after the first def when
lowering REG_SEQUENCE instructions involving IMPLICIT_DEFs.

llvm-svn: 148996
2012-01-25 23:36:27 +00:00
Anton Korobeynikov 7722a2d4e3 Properly emit ctors / dtors with priorities into desired sections
and let linker handle the rest.

This finally fixes PR5329

llvm-svn: 148990
2012-01-25 22:24:19 +00:00
Akira Hatanaka 01d3c42f90 Modify MipsFrameLowering::emitPrologue and emitEpilogue.
- Use MipsAnalyzeImmediate to expand immediates that do not fit in 16-bit.
- Change the types of variables so that they are sufficiently large to handle
  64-bit pointers.
- Emit instructions to set register $28 in a function prologue after
  instructions which store callee-saved registers have been emitted. 
 

llvm-svn: 148917
2012-01-25 04:12:04 +00:00
Akira Hatanaka 86d5fadd57 Lower 64-bit immediates using MipsAnalyzeImmediate that has just been added.
Add a test case to show fewer instructions are needed to load an immediate
with the new way of loading immediates.

llvm-svn: 148908
2012-01-25 03:01:35 +00:00
Jakob Stoklund Olesen 1b8e437ab6 Set correct <def,undef> flags when lowering REG_SEQUENCE.
A REG_SEQUENCE instruction is lowered into a sequence of partial defs:

  %vreg7:ssub_0<def,undef> = COPY %vreg20:ssub_0
  %vreg7:ssub_1<def> = COPY %vreg2
  %vreg7:ssub_2<def> = COPY %vreg2
  %vreg7:ssub_3<def> = COPY %vreg2

The first def needs an <undef> flag to indicate it is the beginning of
the live range, while the other defs are read-modify-write.  Previously,
we depended on LiveIntervalAnalysis to notice and fix the missing
<def,undef>, but that solution was never robust, it was causing problems
with ProcessImplicitDefs and the lowering of chained REG_SEQUENCE
instructions.

This fixes PR11841.

llvm-svn: 148879
2012-01-24 23:28:42 +00:00
Akira Hatanaka 77dbd786c8 Pattern for f32 to i64 conversion.
llvm-svn: 148869
2012-01-24 22:05:25 +00:00
Akira Hatanaka 9f7ec1538f 64-bit sign extension in register instructions.
llvm-svn: 148862
2012-01-24 21:41:09 +00:00
Elena Demikhovsky 0b0c5d8c4c ZERO_EXTEND operation is optimized for AVX.
v8i16 -> v8i32, v4i32 -> v4i64 - used vpunpck* instructions.

llvm-svn: 148803
2012-01-24 13:54:13 +00:00
Evgeniy Stepanov 33a7e2f2a1 An option to selectively enable part of ARM EHABI support.
This change adds a new option --arm-enable-ehabi-descriptors that
enables emitting unwinding descriptors. This provides a mode with a
working backtrace() without the (currently broken) exception support.

llvm-svn: 148800
2012-01-24 13:05:33 +00:00
Chandler Carruth ed975232bc Revert r148686 (and r148694, a fix to it) due to a serious layering
violation -- MC cannot depend on CodeGen.

Specifically, the MCTargetDesc component of each target is actually
a subcomponent of the MC library. As such, it cannot depend on the
target-independent code generator, because MC itself cannot depend on
the target-independent code generator. This change moved a flag from the
ARM MCTargetDesc file ARMMCAsmInfo.cpp to the CodeGen layer in
ARMException.cpp, leaving behind an 'extern' to refer back to it. That
layering order isn't viable given the constraints outlined above.
Commandline flags are designed to be static specifically to avoid these
types of bugs.

Fixing this is likely going to require some non-trivial refactoring.

llvm-svn: 148759
2012-01-24 00:30:17 +00:00
Jakob Stoklund Olesen 20948fab69 Fix PR11829. PostRA LICM was too aggressive.
This fixes a typo in r148589.

llvm-svn: 148724
2012-01-23 21:01:15 +00:00
Evgeniy Stepanov 482cdc4ebd An option to selectively enable parts of ARM EHABI support.
This change adds a new value to the --arm-enable-ehabi option that
disables emitting unwinding descriptors. This mode gives a working
backtrace() without the (currently broken) exception support.

llvm-svn: 148686
2012-01-23 07:57:39 +00:00
Anton Korobeynikov 5482b9f535 Add fused multiply+add instructions from VFPv4.
Patch by Ana Pazos!

llvm-svn: 148658
2012-01-22 12:07:33 +00:00
Bob Wilson 6c7aaec077 ARM vector any_extends need to be selected to vmovl. <rdar://problem/10723651>
We have patterns for vector sext and zext operations but were missing
anyext.  Without those patterns, codegen will fail when the selection DAG
has any_extend nodes.

llvm-svn: 148568
2012-01-20 20:59:56 +00:00
Jim Grosbach 90f5780fc1 VST2 four-register w/ update pseudos for fixed/register update.
rdar://10724489

llvm-svn: 148560
2012-01-20 19:16:00 +00:00
Craig Topper 3469212c82 Add support for selecting 256-bit PALIGNR.
llvm-svn: 148532
2012-01-20 05:53:00 +00:00
Eli Friedman b359e67d2d Remove a low-quality test which was failing on Windows; test/CodeGen/X86/sret.ll is a better test for the relevant behavior.
llvm-svn: 148526
2012-01-20 02:06:40 +00:00
Eli Friedman 32c7c25dcb Support MSVC x86-32 sret convention. PR11688. Patch by Joe Groff.
llvm-svn: 148513
2012-01-20 00:05:46 +00:00
Evgeniy Stepanov 4c7eb477b5 Emit ARM EHABI unwinding instructions for 3 more Thumb instructions.
llvm-svn: 148473
2012-01-19 12:53:06 +00:00
Nick Lewycky 9e2c7f659e Space after punctuation.
llvm-svn: 148451
2012-01-19 01:13:47 +00:00
Nick Lewycky ecc0084f72 Add a TargetOption for disabling tail calls.
llvm-svn: 148442
2012-01-19 00:34:10 +00:00
Nadav Rotem 3b8f0cc9fa Fix a bug in the type-legalization of vector integers. When we bitcast one vector type to another, we must not bitcast the result if one type is widened while the other is promoted.
llvm-svn: 148383
2012-01-18 08:33:18 +00:00
Nadav Rotem fb6ddee0e9 Transform: (EXTRACT_VECTOR_ELT( VECTOR_SHUFFLE )) -> EXTRACT_VECTOR_ELT.
llvm-svn: 148337
2012-01-17 21:44:01 +00:00
Nadav Rotem 86e5390dbf Fix PR11769.
In CanXFormVExtractWithShuffleIntoLoad we assumed that EXTRACT_VECTOR_ELT can be later handled by the DAGCombiner.
However, in some cases on AVX, the EXTRACT_VECTOR_ELT is legalized to EXTRACT_SUBVECTOR + EXTRACT_VECTOR_ELT, which
currently is not handled by the DAGCombiner. In this patch I added a check that we only extract from the XMM part.

llvm-svn: 148298
2012-01-17 09:13:19 +00:00
Hal Finkel 8606e3c7e3 AggressiveAntiDepBreaker needs to skip debug values because a debug value does not have a corresponding SUnit
llvm-svn: 148260
2012-01-16 22:53:41 +00:00
Eli Friedman 206ca569aa Make sure the non-SSE lowering for fences correctly clobbers EFLAGS. PR11768.
llvm-svn: 148240
2012-01-16 16:42:21 +00:00
Nadav Rotem 57935243bd [AVX] Optimize x86 VSELECT instructions using SimplifyDemandedBits.
We know that the blend instructions only use the MSB, so if the mask is
sign-extended then we can convert it into a SHL instruction. This is a
common pattern because the type-legalizer sign-extends the i1 type which
is used by the LLVM-IR for the condition.

Added a new optimization in SimplifyDemandedBits for SIGN_EXTEND_INREG -> SHL.

llvm-svn: 148225
2012-01-15 19:27:55 +00:00
Chandler Carruth fd63436553 Relax the FileCheck assertion a bit -- all we really care about is that
we're loading from the global array, not how it is spelled in the asm.
This should fix the MSVC bots.

llvm-svn: 148214
2012-01-15 09:38:59 +00:00
Chandler Carruth 4bb2eb15d4 FileCheck-ize a test, make it more specific to directly test the shift
removal desired.

llvm-svn: 148213
2012-01-15 09:32:57 +00:00
Evan Cheng 6bb95253eb After r147827 and r147902, it's now possible for unallocatable registers to be
live across BBs before register allocation. This miscompiled 197.parser
when a cmp + b are optimized to a cbnz instruction even though the CPSR def
is live-in to a successor.
        cbnz    r6, LBB89_12
...
LBB89_12:
        ble     LBB89_1

The fix consists of two parts. 1) Teach LiveVariables that some unallocatable
registers might be liveouts so don't mark their last use as kill if they are.
2) ARM constantpool island pass shouldn't form cbz / cbnz if the conditional
branch does not kill CPSR.

rdar://10676853

llvm-svn: 148168
2012-01-14 01:53:46 +00:00
Chad Rosier d027d528b5 Cleanup test case by adding checks for test names.
llvm-svn: 148166
2012-01-14 01:46:51 +00:00
Rafael Espindola 78a6477b28 Add a test showing how the Leh_func_endN symbol is used.
llvm-svn: 148161
2012-01-14 00:12:59 +00:00
NAKAMURA Takumi 1b845f712a test/CodeGen/ARM/test-sharedidx.ll: Fix for -Asserts.
llvm-svn: 148107
2012-01-13 07:03:55 +00:00
Craig Topper 9f14d9f939 Add patterns for v16i16 and v32i8 immAllZerosV to select VPXOR to match v4i64 and v8i32.
llvm-svn: 148106
2012-01-13 06:59:47 +00:00
Evan Cheng fa8326334b DAGCombine's logic for forming pre- and post- indexed loads / stores was being
overly conservative. It was concerned about cases where it would prohibit
folding simple [r, c] addressing modes. e.g.
  ldr r0, [r2]
  ldr r1, [r2, #4]
=>
  ldr r0, [r2], #4
  ldr r1, [r2]
Change the logic to look for such cases which allows it to form indexed memory
ops more aggressively.

rdar://10674430

llvm-svn: 148086
2012-01-13 01:37:24 +00:00
Elena Demikhovsky 060f6ccdb8 Fixed a bug in LowerVECTOR_SHUFFLE that caused an assertion failure:
llc: X86ISelLowering.cpp:6480: llvm::SDValue llvm::X86TargetLowering::LowerVECTOR_SHUFFLE(llvm::SDValue, llvm::SelectionDAG&) const: Assertion `V1.getOpcode() != ISD::UNDEF && "Op 1 of shuffle should not be undef"' failed.
Added a test.

llvm-svn: 148044
2012-01-12 20:33:10 +00:00
Rafael Espindola 9a167eeaff Add error-reporting tests for platforms that don't support segmented stacks.
Patch by Brian Anderson.

llvm-svn: 148042
2012-01-12 20:26:13 +00:00
Rafael Espindola 00e861ed57 Support segmented stacks on 64-bit FreeBSD.
This patch uses tcb_spare field in the tcb structure to store info.
Patch by Jyun-Yan You.

llvm-svn: 148041
2012-01-12 20:24:30 +00:00
Rafael Espindola 10745d3381 Support segmented stacks on win32.
Uses the pvArbitrary slot of the TIB, which is reserved for applications. We
only support frames with a static size.

llvm-svn: 148040
2012-01-12 20:22:08 +00:00
Nadav Rotem 0a0a829bea Fix a bug in the AVX 256-bit shuffle code in cases where the splat element is on the boundary of two 128-bit vectors.
The attached testcase was stuck in an endless loop.

llvm-svn: 148027
2012-01-12 15:31:55 +00:00
Benjamin Kramer 5b3aa60b44 X86: Generalize the x << (y & const) optimization to also catch masks with more bits set than 31 or 63.
llvm-svn: 148024
2012-01-12 12:41:34 +00:00
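A hedged C++ sketch (hypothetical name) of the generalized case, a shift count masked with a constant wider than 31:

  #include <cstdint>

  // The DAG sees (shl x, (and y, 255)); since x86 SHL masks its count to
  // 5 bits anyway, the explicit mask can be dropped. At the source level
  // this assumes y & 0xff stays below 32, otherwise the shift is undefined.
  uint32_t shl_masked(uint32_t x, uint32_t y) {
    return x << (y & 0xffu);
  }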
Nadav Rotem b5ce6ee835 On AVX, we can load v8i32 at a time. The bug happens when two uneven loads are used.
When we load the v12i32 type, the GenWidenVectorLoads method generates two loads: v8i32 and v4i32 
and attempts to use CONCAT_VECTORS to join them. In this fix I concat undef values to widen 
the smaller value. The test "widen_load-2.ll" also exposes this bug on AVX.

llvm-svn: 147964
2012-01-11 20:19:17 +00:00
Bill Wendling bb3c4ad135 Check to make sure that the CFString's backing store ends up in the correct section.
llvm-svn: 147961
2012-01-11 19:33:37 +00:00
Rafael Espindola d90466bcbf Support segmented stacks on mac.
This uses TLS slot 90, which actually belongs to JavaScriptCore. We only support
frames with a static size.
Patch by Brian Anderson.

llvm-svn: 147960
2012-01-11 19:00:37 +00:00
Rafael Espindola 1e1b797989 Split segmented stacks tests into tests for static- and dynamic-size frames.
Patch by Brian Anderson.

llvm-svn: 147959
2012-01-11 18:51:03 +00:00
Rafael Espindola 4eecacb9c8 Generate the segmented stack prologue for fastcc too.
Patch by Brian Anderson.

llvm-svn: 147958
2012-01-11 18:41:19 +00:00
Chandler Carruth 3212a34269 Revert r147945 which disabled an addressing mode transformation. I had
hoped this would revive one of the llvm-gcc selfhost build bots, but it
didn't so it doesn't appear that my transform is the culprit.

If anyone else is seeing failures, please let me know!

llvm-svn: 147957
2012-01-11 18:36:12 +00:00
Rafael Espindola 2b89448d60 Use unsigned comparison in segmented stack prologue.
This is a comparison of two addresses, and GCC does the comparison unsigned.

Patch by Brian Anderson.

llvm-svn: 147954
2012-01-11 18:23:35 +00:00
Rafael Espindola 6635ae1c17 Explicitly set the scale to 1 on some segstack prologue instrs.
Patch by Brian Anderson.

llvm-svn: 147952
2012-01-11 18:14:03 +00:00
Jan Sjödin 21f83d9f36 Add XOP Intrinsics and tests
llvm-svn: 147949
2012-01-11 15:20:20 +00:00
Nadav Rotem baae7e4577 Fix a bug in the lowering of BUILD_VECTOR for AVX. SCALAR_TO_VECTOR does not zero untouched elements. Use INSERT_VECTOR_ELT instead.
llvm-svn: 147948
2012-01-11 14:07:51 +00:00
Chandler Carruth 9bc48e5215 Disable the transformation I added in r147936 to see if it fixes some
strange build bot failures that look like a miscompile into an infloop.
I'll investigate this tomorrow, but I'd both like to know whether my
patch is the culprit, and get the bots back to green.

llvm-svn: 147945
2012-01-11 12:17:47 +00:00
Jakob Stoklund Olesen 6039983755 Fix undefined code and reenable test case.
I don't think the compact encoding code is right, but at least it has
defined behavior now.

llvm-svn: 147938
2012-01-11 09:08:04 +00:00
Chandler Carruth 55b2cdee26 Teach the X86 instruction selection to do some heroic transforms to
detect a pattern which can be implemented with a small 'shl' embedded in
the addressing mode scale. This happens in real code as follows:

  unsigned x = my_accelerator_table[input >> 11];

Here we have some lookup table that we look into using the high bits of
'input'. Each entity in the table is 4-bytes, which means this
implicitly gets turned into (once lowered out of a GEP):

  *(unsigned*)((char*)my_accelerator_table + ((input >> 11) << 2));

The shift right followed by a shift left is canonicalized to a smaller
shift right and masking off the low bits. That hides the shift right
which x86 has an addressing mode designed to support. We now detect
masks of this form, and produce the longer shift right followed by the
proper addressing mode. In addition to saving a (rather large)
instruction, this also reduces stalls in Intel chips on benchmarks I've
measured.

In order for all of this to work, one part of the DAG needs to be
canonicalized *still further* than it currently is. This involves
removing pointless 'trunc' nodes between a zextload and a zext. Without
that, we end up generating spurious masks and hiding the pattern.

llvm-svn: 147936
2012-01-11 08:41:08 +00:00
NAKAMURA Takumi 0e60839e11 llvm/test/CodeGen/X86/zext-fold.ll: Relax an expression in stack offset.
llvm-svn: 147928
2012-01-11 07:34:22 +00:00
NAKAMURA Takumi 9e823f6ae1 llvm/test/CodeGen/X86/sub-with-overflow.ll: Add explicit -mtriple=i686-linux.
llvm-svn: 147927
2012-01-11 07:34:14 +00:00
Andrew Trick 642f0f6a40 ARM Ld/St Optimizer fix.
Allow LDRD to be formed from pairs with different LDR encodings. This was the original intention of the pass. Somewhere along the way, the LDR opcodes were refined which broke the optimization. We really don't care what the original opcodes are as long as they both map to the same LDRD and the immediate still fits.

Fixes rdar://10435045 ARMLoadStoreOptimization cannot handle mixed LDRi8/LDRi12

llvm-svn: 147922
2012-01-11 03:56:08 +00:00
Jakob Stoklund Olesen 05ff7f06fb Disable test that seems to expose an unrelated Linux issue.
llvm-svn: 147921
2012-01-11 03:42:27 +00:00
Jakob Stoklund Olesen 8b1d023a4a Detect when a value is undefined on an edge to a landing pad.
Consider this code:

int h() {
  int x;
  try {
    x = f();
    g();
  } catch (...) {
    return x+1;
  }
  return x;
}

The variable x is undefined on the first edge to the landing pad, but it
has the f() return value on the second edge to the landing pad.

SplitAnalysis::getLastSplitPoint() would assume that the return value
from f() was live into the landing pad when f() throws, which is of
course impossible.

Detect these cases, and treat them as if the landing pad wasn't there.
This allows spill code to be inserted after the function call to f().

<rdar://problem/10664933>

llvm-svn: 147912
2012-01-11 02:07:05 +00:00
Chad Rosier a415140a2c Add test case for r147881.
llvm-svn: 147891
2012-01-10 23:09:53 +00:00
Joerg Sonnenberger 96cd35cf6d Default stack alignment for 32bit x86 should be 4 Bytes, not 8 Bytes.
Add a test that checks the stack alignment of a simple function for
Darwin, Linux and NetBSD for 32bit and 64bit mode.

llvm-svn: 147888
2012-01-10 22:43:53 +00:00
Jakob Stoklund Olesen 20f1dd5faf Consider unknown alignment caused by OptimizeThumb2Instructions().
This function runs after all constant islands have been placed, and may
shrink some instructions to their 2-byte forms.  This can actually cause
some constant pool entries to move out of range because of growing
alignment padding.

Treat instructions that may be shrunk the same as inline asm - they
erode the known alignment bits.

Also reinstate an old assertion in verify(). It is correct now that
basic block offsets include alignments.

Add a single large test case that will hopefully exercise many parts of
the constant island pass.

<rdar://problem/10670199>

llvm-svn: 147885
2012-01-10 22:32:14 +00:00
Jim Grosbach 74ac7d50a1 ARM updating VST2 pseudo-lowering fixed vs. register update.
rdar://10663487

llvm-svn: 147876
2012-01-10 21:11:12 +00:00
Nadav Rotem 61bdf79035 Fix a bug in the legalization of shuffle vectors. When we emulate shuffles using BUILD_VECTORS we may be using a BV of different type. Make sure to cast it back.
llvm-svn: 147851
2012-01-10 14:28:46 +00:00
Craig Topper 430f3f1bd6 Fix a crash in AVX2 when trying to broadcast a double into a 128-bit vector. There is no vbroadcastsd xmm, but we do need to support 64-bit integers broadcasted into xmm. Also factor the AVX check into the isVectorBroadcast function. This makes more sense since the AVX2 check was already inside.
llvm-svn: 147844
2012-01-10 08:23:59 +00:00
Evan Cheng 0be4144a68 Allow machine-cse to look across MBB boundary when cse'ing instructions that
define physical registers. It's currently very restrictive, only catching
cases where the CE is in an immediate (and only) predecessor. But it catches
a surprisingly large number of cases.

rdar://10660865

llvm-svn: 147827
2012-01-10 02:02:58 +00:00
Chandler Carruth ea27c16adc Cleanup and FileCheck-ize a test.
llvm-svn: 147772
2012-01-09 09:44:26 +00:00
Craig Topper ef7f5bf8c9 Clean up patterns for MOVNT*. Not sure why there were floating point types on MOVNTPS and MOVNTDQ. And v4i64 was completely missing.
llvm-svn: 147767
2012-01-09 06:52:46 +00:00
Rafael Espindola f28213ca01 Don't print an unused label before .cfi_endproc.
llvm-svn: 147763
2012-01-09 00:17:29 +00:00
Craig Topper 744f6311d3 Don't disable MMX support when AVX is enabled. Fix predicates for MMX instructions that were added along with SSE instructions to check for AVX in addition to SSE level.
llvm-svn: 147762
2012-01-09 00:11:29 +00:00
Victor Umansky 540651cf59 Reverted commit #147601 upon Evan's request.
llvm-svn: 147748
2012-01-08 17:20:33 +00:00
Rafael Espindola 382412032c Don't print a label before .cfi_startproc when we don't need to. This makes
the produced assembly when using CFI just a bit more readable.

llvm-svn: 147743
2012-01-07 22:42:19 +00:00
Jakob Stoklund Olesen 8cdce7e690 Use getRegForValue() to materialize the address of ARM globals.
This enables basic local CSE, giving us 20% smaller code for
consumer-typeset in -O0 builds.

<rdar://problem/10658692>

llvm-svn: 147720
2012-01-07 04:07:22 +00:00
Evan Cheng 00b1a3cd7e Added a late machine instruction copy propagation pass. This catches
opportunities that only present themselves after late optimizations
such as tail duplication, e.g.
## BB#1:
        movl    %eax, %ecx
        movl    %ecx, %eax
        ret

The register allocator also leaves some of them around (due to false
dep between copies from phi-elimination, etc.)

This required some changes in codegen passes. Post-ra scheduler and the
pseudo-instruction expansion passes have been moved after branch folding
and tail merging. They were run before branch folding because it did
not always update block live-ins. That's fixed now. The pass change makes
sense independently, since we want to properly schedule instructions after
branch folding / tail duplication.

rdar://10428165
rdar://10640363

llvm-svn: 147716
2012-01-07 03:02:36 +00:00
Jakob Stoklund Olesen 68f034ee1a Use movw+movt in ARMFastISel::ARMMaterializeGV.
This eliminates a lot of constant pool entries for -O0 builds of code
with many global variable accesses.

This speeds up -O0 codegen of consumer-typeset by 2x because the
constant island pass no longer has to look at thousands of constant pool
entries.

<rdar://problem/10629774>

llvm-svn: 147712
2012-01-07 01:47:05 +00:00
Eric Christopher c206d46709 Make the 'x' constraint work for AVX registers as well.
Fixes rdar://10614894

llvm-svn: 147704
2012-01-07 01:02:09 +00:00
Jakob Stoklund Olesen 68a922c0e9 Enable aligned NEON spilling by default.
Experiments show this to be a small speedup for modern ARM cores.

llvm-svn: 147689
2012-01-06 22:19:37 +00:00
Chandler Carruth e041a30bb9 Prevent a DAGCombine from firing where there are two uses of
a combined-away node and the result of the combine isn't substantially
smaller than the input, it's just canonicalized. This is the first part
of a significant (7%) performance gain for Snappy's hot decompression
loop.

llvm-svn: 147604
2012-01-05 11:05:55 +00:00
Chandler Carruth 6bc151f5d4 Cleanup and FileCheck-ize a test.
llvm-svn: 147603
2012-01-05 11:05:47 +00:00
Victor Umansky 9255b6d9fe Peephole optimization of ptest-conditioned branch in X86 arch. Performs instruction combining of sequences generated by ptestz/ptestc intrinsics to ptest+jcc pair for SSE and AVX.
Testing: passed 'make check' including LIT tests for all sequences being handled (both SSE and AVX)

Reviewers: Evan Cheng, David Blaikie, Bruno Lopes, Elena Demikhovsky, Chad Rosier, Anton Korobeynikov
llvm-svn: 147601
2012-01-05 08:46:19 +00:00
Benjamin Kramer aca1885695 FileCheck hygiene.
llvm-svn: 147580
2012-01-05 00:43:34 +00:00
Jakob Stoklund Olesen d110e2a83f Reapply r146997, "Heed spill slot alignment on ARM."
Now that canRealignStack() understands frozen reserved registers, it is
safe to use it for aligned spill instructions.

It will only return true if the registers reserved at the beginning of
register allocation allow for dynamic stack realignment.

<rdar://problem/10625436>

llvm-svn: 147579
2012-01-05 00:26:57 +00:00
NAKAMURA Takumi 91a3f886ef test/CodeGen/X86/jump_sign.ll: Add -mcpu=pentiumpro for non-x86 hosts. It uses "cmov".
llvm-svn: 147521
2012-01-04 03:52:23 +00:00
Akira Hatanaka c669d7a6db Have getRegForInlineAsmConstraint return the correct register class when target
is Mips64.

llvm-svn: 147516
2012-01-04 02:45:01 +00:00
Evan Cheng 801d98b3f0 Fix more places which should be checking for iOS, not darwin.
llvm-svn: 147513
2012-01-04 01:55:04 +00:00
Evan Cheng 104dbb0fd1 For x86, canonicalize max
(x > y) ? x : y
=>
(x >= y) ? x : y

So for something like
(x - y) > 0 ? (x - y) : 0
It will be
(x - y) >= 0 ? (x - y) : 0

This makes it possible to test the sign bit and eliminate a comparison against
zero. e.g.
subl   %esi, %edi
testl  %edi, %edi
movl   $0, %eax
cmovgl %edi, %eax
=>
xorl   %eax, %eax
subl   %esi, %edi
cmovsl %eax, %edi

rdar://10633221

llvm-svn: 147512
2012-01-04 01:41:39 +00:00
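A hedged C++ sketch (hypothetical name) of source code that produces the (x - y) > 0 ? (x - y) : 0 pattern above; the canonicalization lets the sign flag of the subtraction drive the cmov directly:

  int clamp_diff(int x, int y) {
    int d = x - y;
    return d > 0 ? d : 0; // becomes d >= 0 ? d : 0, testable via the sign bit
  }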
Jakob Stoklund Olesen 1b7f2a7638 Revert r146997, "Heed spill slot alignment on ARM."
This patch caused a miscompilation of oggenc because a frame pointer was
suddenly needed halfway through register allocation.

<rdar://problem/10625436>

llvm-svn: 147487
2012-01-03 22:34:35 +00:00
Nadav Rotem 6d31bac85e Revert 147426 because it caused pr11696.
llvm-svn: 147485
2012-01-03 22:19:42 +00:00
Nadav Rotem 1e7dda13c8 Fix incorrect widening of the bitcast sdnode in case the incoming operand is integer-promoted.
llvm-svn: 147484
2012-01-03 22:12:28 +00:00
Chad Rosier 493c1b3152 Enhance DAGCombine for transforming 128->256 casts into a vmovaps, rather
than a vxorps + vinsertf128 pair if the original vector came from a load.
rdar://10594409

llvm-svn: 147481
2012-01-03 21:05:52 +00:00
Elena Demikhovsky 8ec21a2801 Fixed a bug in SelectionDAG.cpp.
The failure was seen on win32, where the i64 type is illegal.
It happens at the stage of converting VECTOR_SHUFFLE to BUILD_VECTOR.

The failure message is:
llc: SelectionDAG.cpp:784: void VerifyNodeCommon(llvm::SDNode*): Assertion `(I->getValueType() == EltVT || (EltVT.isInteger() && I->getValueType().isInteger() && EltVT.bitsLE(I->getValueType()))) && "Wrong operand type!"' failed.

I added a special test that checks vector shuffle on win32.

llvm-svn: 147445
2012-01-03 11:59:04 +00:00
Nadav Rotem 6c7a0e6c8b Optimize the sequence blend(sign_extend(x)) to blend(shl(x)) since SSE blend instructions only look at the highest bit.
llvm-svn: 147426
2012-01-02 08:05:46 +00:00
Craig Topper b910984458 Allow CRC32 instructions to be selected when AVX is enabled.
llvm-svn: 147411
2012-01-01 19:51:58 +00:00
Craig Topper 1c064e0a89 Fix sfence, lfence, mfence, and clflush to be able to be selected when AVX is enabled. Fix monitor and mwait to require SSE3 or AVX, previously they worked even if SSE3 was disabled. Make prefetch instructions not set the execution domain since they don't use XMM registers.
llvm-svn: 147409
2012-01-01 19:40:22 +00:00
Rafael Espindola d3df940169 Revert 147399. It broke CodeGen/ARM/vext.ll.
llvm-svn: 147400
2012-01-01 17:36:23 +00:00
Elena Demikhovsky 67f80c3432 Fixed a bug in SelectionDAG.cpp.
The failure was seen on win32, where the i64 type is illegal.
It happens at the stage of converting VECTOR_SHUFFLE to BUILD_VECTOR.

The failure message is:
llc: SelectionDAG.cpp:784: void VerifyNodeCommon(llvm::SDNode*): Assertion `(I->getValueType() == EltVT || (EltVT.isInteger() && I->getValueType().isInteger() && EltVT.bitsLE(I->getValueType()))) && "Wrong operand type!"' failed.

I added a special test that checks vector shuffle on win32.

llvm-svn: 147399
2012-01-01 16:22:47 +00:00
Craig Topper d51092d93a Add patterns for integer forms of SHUFPD/VSHUFPD with a memory load.
llvm-svn: 147393
2011-12-31 23:24:49 +00:00
Craig Topper 0e796fee11 Fix typo in a SHUFPD and VSHUFPD pattern that prevented SHUFPD/VSHUFPD with a load from being selected.
llvm-svn: 147392
2011-12-31 23:15:11 +00:00
Craig Topper 6c08930c5e Change FMA4 memory forms to use memopv* instead of alignedloadv*. No need to force alignment on these instructions. Add a couple testcases for memory forms.
llvm-svn: 147361
2011-12-30 02:18:36 +00:00
Craig Topper 2ca79b9d4b Fix load size for FMA4 SS/SD instructions. They need to use f32 and f64 size, but with the special handling to be compatible with the intrinsic expecting a vector. Similar handling is already used elsewhere.
llvm-svn: 147360
2011-12-30 01:49:53 +00:00
Hal Finkel 692d1fb355 Cleanup stack/frame register define/kill states. This fixes two bugs:
1. The ST*UX instructions that store and update the stack pointer did not set define/kill on R1. This became a problem when I activated post-RA scheduling (and had incorrectly adjusted the Frames-large test).

2. eliminateFrameIndex did not kill its scavenged temporary register, and this could cause the scavenger to exhaust all available registers (and its emergency spill slot) when there were a lot of CR values to spill. The 2010-02-12-saveCR test has been adjusted to check for this.

llvm-svn: 147359
2011-12-30 00:34:00 +00:00
Eli Friedman 3a01ddb7e9 Fix type-checking for load transformation which is not legal on floating-point types. PR11674.
llvm-svn: 147323
2011-12-28 21:24:44 +00:00
Nadav Rotem 3c3dd6e588 PR11662.
Promotion of the mask operand needs to be done using PromoteTargetBoolean, and not padded with garbage.

llvm-svn: 147309
2011-12-28 13:08:20 +00:00
Elena Demikhovsky b3515a8d4b Fixed a bug in LowerVECTOR_SHUFFLE and LowerBUILD_VECTOR.
Matching the MOVLP mask for AVX (256-bit vectors) was wrong.
The failure was detected by conformance tests.

llvm-svn: 147308
2011-12-28 08:14:01 +00:00
Eli Friedman e96286cdf2 Make sure DAGCombiner doesn't introduce multiple loads from the same memory location. PR10747, part 2.
llvm-svn: 147283
2011-12-26 22:49:32 +00:00
Chandler Carruth a3d54fe0ae Use standard promotion for i8 CTTZ nodes and i8 CTLZ nodes when the
LZCNT instructions are available. Force promotion to i32 to get
a smaller encoding since the fix-ups necessary are just as complex for
either promoted type.

We can't do standard promotion for CTLZ when lowering through BSR
because it results in poor code surrounding the 'xor' at the end of this
instruction. Essentially, if we promote the entire CTLZ node to i32, we
end up doing the xor on a 32-bit CTLZ implementation, and then
subtracting appropriately to get back to an i8 value. Instead, our
custom logic just uses the knowledge of the incoming size to compute
a perfect xor. I'd love to know of a way to fix this, but so far I'm
drawing a blank. I suspect the legalizer could be more clever and/or it
could collude with the DAG combiner, but how... ;]

llvm-svn: 147251
2011-12-24 12:12:34 +00:00
Chandler Carruth 38ce24455d Add systematic testing for cttz as well, and fix the bug I spotted by
inspection earlier.

llvm-svn: 147250
2011-12-24 11:46:10 +00:00
Chandler Carruth 103ca80f59 Add i8 and i64 testing for ctlz on x86. Also simplify the i16 test.
llvm-svn: 147249
2011-12-24 11:26:59 +00:00
Chandler Carruth 44cf07228b Tidy up this rather crufty test. Put the declarations at the top to make
my C-brain happy. Remove the unnecessary bits of pedantic IR fluff like
nounwind. Remove stray uses comments. Name things semantically rather
than tN so that adding a new test in the middle doesn't cause pain, and
so that new tests can be grouped semantically.

This exposes how little systematic testing is going on here. I noticed
this by finding several bugs via inspection and wondering why this test
wasn't catching any of them. =[

llvm-svn: 147248
2011-12-24 11:26:57 +00:00
Chandler Carruth c9fcde2347 Expand more when we have a nice 'tzcnt' instruction, to avoid generating
'bsf' instructions here.

This one is actually debatable to my eyes. It's not clear that any chip
implementing 'tzcnt' would have a slow 'bsf' for any reason, and unless
EFLAGS or a zero input matters, 'tzcnt' is just a longer encoding.
Still, this restores the old behavior with 'tzcnt' enabled for now.

llvm-svn: 147246
2011-12-24 11:11:38 +00:00
Chandler Carruth eeb3a1ce3e Tidy up some of these tests.
llvm-svn: 147245
2011-12-24 11:11:36 +00:00
Chandler Carruth 7e9453e916 Switch the lowering of CTLZ_ZERO_UNDEF from a .td pattern back to the
X86ISelLowering C++ code. Because this is lowered via an xor wrapped
around a bsr, we want the dagcombine which runs after isel lowering to
have a chance to clean things up. In particular, it is very common to
see code which looks like:

  (sizeof(x)*8 - 1) ^ __builtin_clz(x)

Which is trying to compute the most significant bit of 'x'. That's
actually the value computed directly by the 'bsr' instruction, but if we
match it too late, we'll get completely redundant xor instructions.

The more naive code for the above (subtracting rather than using an xor)
still isn't handled correctly due to the dagcombine getting confused.

Also, while here fix an issue spotted by inspection: we should have been
expanding the zero-undef variants to the normal variants when there is
an 'lzcnt' instruction. Do so, and test for this. We don't want to
generate unnecessary 'bsr' instructions.

These two changes fix some regressions in encoding and decoding
benchmarks. However, there is still a *lot* to be improve on in this
type of code.

llvm-svn: 147244
2011-12-24 10:55:54 +00:00
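A hedged C++ sketch (hypothetical name) of the most-significant-bit computation quoted in the commit above, which is exactly the value bsr produces:

  #include <cstdint>

  // x must be non-zero: __builtin_clz(0) is undefined, just like bsr with a
  // zero input. 31 ^ clz(x) equals 31 - clz(x), the index of the highest set bit.
  unsigned msb_index(uint32_t x) {
    return 31u ^ static_cast<unsigned>(__builtin_clz(x));
  }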
Chandler Carruth 15075d4b19 Cleanup this test a bit, sorting things and grouping them more clearly.
llvm-svn: 147243
2011-12-24 10:55:42 +00:00
Akira Hatanaka 79329ce425 Test case for r147232.
llvm-svn: 147233
2011-12-24 03:05:43 +00:00
Jakob Stoklund Olesen 0965585cb1 Experimental support for aligned NEON spills.
ARM targets with NEON units have access to aligned vector loads and
stores that are potentially faster than unaligned operations.

Add support for spilling the callee-saved NEON registers to an aligned
stack area using 16-byte aligned NEON loads and stores.

This feature is off by default, controlled by an -align-neon-spills
command line option.

llvm-svn: 147211
2011-12-23 00:36:18 +00:00
Chad Rosier 7248bda595 Fix a couple of copy-n-paste bugs. Noticed by George Russell!
llvm-svn: 147064
2011-12-21 18:56:22 +00:00
Evan Cheng dc8a1aaea6 Fix a couple of copy-n-paste bugs. Noticed by George Russell.
llvm-svn: 147032
2011-12-21 03:04:10 +00:00
Akira Hatanaka 964c891e61 Fix bug in zero-store peephole pattern reported in pr11615.
The patch and test case were originally written by Mans Rullgard.

llvm-svn: 147024
2011-12-21 00:31:10 +00:00
Akira Hatanaka 1d8efaba7e Expand 64-bit CTLZ nodes if target architecture does not support it. Add test
case for DCLO and DCLZ.

llvm-svn: 147022
2011-12-21 00:20:27 +00:00
Akira Hatanaka bd95275f7a Test case for r147017.
llvm-svn: 147018
2011-12-20 23:58:36 +00:00
Akira Hatanaka cb2a85bc22 Add function MipsDAGToDAGISel::SelectMULT and factor out code that generates
nodes needed for multiplication. Add code for selecting 64-bit MULHS and MULHU
nodes.

llvm-svn: 147008
2011-12-20 23:10:57 +00:00
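A hedged C++ sketch (hypothetical name; relies on the GCC/Clang __int128 extension) of a signed high-part multiply, the operation a 64-bit MULHS node represents and which this change now selects directly; on MIPS64 this presumably maps to a dmult followed by mfhi:

  #include <cstdint>

  // The upper 64 bits of the 128-bit product correspond to MULHS.
  int64_t mulhs64(int64_t a, int64_t b) {
    return static_cast<int64_t>((static_cast<__int128>(a) * b) >> 64);
  }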
Akira Hatanaka cf10f08825 64-bit data directive.
llvm-svn: 147005
2011-12-20 22:52:19 +00:00
Akira Hatanaka 494fdf1499 32-to-64-bit sext_inreg pattern.
llvm-svn: 147004
2011-12-20 22:40:40 +00:00
Akira Hatanaka dac1d48d8d Add code in MipsDAGToDAGISel for selecting constant +0.0.
MIPS64 can generate constant +0.0 with a single DMTC1 instruction. 

llvm-svn: 146999
2011-12-20 22:25:50 +00:00
Jakob Stoklund Olesen b95c102c2f Heed spill slot alignment on ARM.
Use the spill slot alignment as well as the local variable alignment to
determine when the stack needs to be realigned. This works now that the
ARM target can always realign the stack by using a base pointer.

Still respect the ARMBaseRegisterInfo::canRealignStack() function
vetoing a realigned stack.  Don't use aligned spill code in that case.

llvm-svn: 146997
2011-12-20 22:15:04 +00:00
Evan Cheng 68132d8093 ARM target code clean up. Check for iOS, not Darwin where it makes sense.
llvm-svn: 146981
2011-12-20 18:26:50 +00:00
Elena Demikhovsky ec7e6e0946 This is the second fix related to VZEXT_MOVL node.
The failure that I see in the current version is:

LLVM ERROR: Cannot select: 0x18b8f70: v4i64 = X86ISD::VZEXT_MOVL 0x18beee0 [ID=14]
  0x18beee0: v4i64 = insert_subvector 0x18b8c70, 0x18b9170, 0x18b9570 [ID=13]
    0x18b8c70: v4i64 = insert_subvector 0x18b9870, 0x18bf4e0, 0x18b9970 [ID=12]
      0x18b9870: v4i64 = undef [ID=4]
      0x18bf4e0: v2i64 = bitcast 0x18bf3e0 [ID=10]
        0x18bf3e0: v4i32 = BUILD_VECTOR 0x18b9770, 0x18b9770, 0x18b9770, 0x18b9770 [ID=8]
          0x18b9770: i32 = TargetConstant<0> [ID=6]
          0x18b9770: i32 = TargetConstant<0> [ID=6]
          0x18b9770: i32 = TargetConstant<0> [ID=6]
          0x18b9770: i32 = TargetConstant<0> [ID=6]
      0x18b9970: i32 = Constant<0> [ID=3]
    0x18b9170: v2i64 = undef [ORD=1] [ID=1]
    0x18b9570: i32 = Constant<2> [ID=5]

llvm-svn: 146975
2011-12-20 13:34:28 +00:00
Chandler Carruth 24680c24d8 Begin teaching the X86 target how to efficiently codegen patterns that
use the zero-undefined variants of CTTZ and CTLZ. These are just simple
patterns for now, there is more to be done to make real world code using
these constructs be optimized and codegen'ed properly on X86.

The existing tests are spiffed up to check that we no longer generate
unnecessary cmov instructions, and that we generate the very important
'xor' to transform bsr which counts the index of the most significant
one bit to the number of leading (most significant) zero bits. Also they
now check that when the variant with defined zero result is used, the
cmov is still produced.

llvm-svn: 146974
2011-12-20 11:19:37 +00:00
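A hedged C++ sketch (hypothetical name) of the defined-at-zero counting variant the commit contrasts with the zero-undef builtins; the explicit zero check is what the cmov implements when lzcnt is unavailable:

  #include <cstdint>

  unsigned count_leading_zeros(uint32_t x) {
    // __builtin_clz(0) is undefined, so the defined-at-zero form needs a select.
    return x ? static_cast<unsigned>(__builtin_clz(x)) : 32u;
  }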
Bob Wilson 75f12cc3fe Mark ARM eh_sjlj_dispatchsetup as clobbering all registers. Radar 10567930.
We used to rely on the *eh_sjlj_setjmp instructions to mark that a function
with setjmp/longjmp exception handling clobbers all the registers.  But with
the recent reorganization of ARM EH, those eh_sjlj_setjmp instructions are
expanded away earlier, before PEI can see them to determine what registers to
save and restore.  Mark the dispatchsetup instruction in the same way, since
that instruction cannot be expanded early.  This also more accurately reflects
when the registers are clobbered.

llvm-svn: 146949
2011-12-20 01:29:27 +00:00
Evan Cheng 3bfaefe9e7 Move tests to FileCheck.
llvm-svn: 146923
2011-12-19 23:26:44 +00:00
Akira Hatanaka 37c45db189 Add a test case for r146900.
llvm-svn: 146901
2011-12-19 20:24:28 +00:00
Akira Hatanaka db47e0c49d Add patterns for matching immediates whose lower 16 bits are cleared. These
patterns emit a single LUi instruction instead of a pair of LUi and ORi.

llvm-svn: 146900
2011-12-19 20:21:18 +00:00
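As a rough C model of the materialization being trimmed here (the helper below is hypothetical, written only to illustrate the idea): a 32-bit immediate whose low half is zero is just its high half shifted left, so a single LUi covers it and the ORi is only needed for a nonzero low half.

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical model: LUi materializes the high 16 bits, ORi fills in a
     * nonzero low half; when the low half is zero, the ORi can be dropped. */
    static uint32_t materialize_imm(uint16_t hi, uint16_t lo) {
        uint32_t v = (uint32_t)hi << 16;   /* LUi rd, hi                          */
        if (lo != 0)
            v |= lo;                       /* ORi rd, rd, lo (skipped if lo == 0) */
        return v;
    }

    int main(void) {
        printf("0x%08x\n", materialize_imm(0x1234, 0));  /* one LUi is enough */
        return 0;
    }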
Akira Hatanaka 2a232d81f6 Remove definitions of doubleword shift-plus-32 instructions. The assembler or
direct-object emitter should emit the appropriate shift instruction depending
on the shift amount.

llvm-svn: 146893
2011-12-19 19:44:09 +00:00
Akira Hatanaka 3c9f336361 Remove the restriction on the first operand of the add node in SelectAddr.
This change reduces the number of instructions generated.

For example, 
(load (add (sub $n0, $n1), (MipsLo got(s))))

results in the following sequence of instructions:
1. sub $n2, $n0, $n1
2. lw got(s)($n2)

Previously, three instructions were needed.
1. sub $n2, $n0, $n1
2. addiu $n3, $n2, got(s)
3. lw 0($n3)

llvm-svn: 146888
2011-12-19 19:28:37 +00:00
Evan Cheng 903231bc58 Fix a CPSR liveness tracking bug introduced when I converted IT blocks to MI bundles.
llvm-svn: 146805
2011-12-17 01:25:34 +00:00
Lang Hames da07b3ad42 Make sure that the lower bits on the VSELECT condition are properly set.
llvm-svn: 146800
2011-12-17 01:08:46 +00:00
Jakob Stoklund Olesen 9790187b6c Fix off-by-one error in bucket sort.
The bad sorting caused a misaligned basic block when building 176.vpr in
ARM mode.

<rdar://problem/10594653>

llvm-svn: 146767
2011-12-16 23:00:05 +00:00
Benjamin Kramer 9ca2e7293b Hexagon: Fix a nasty order-of-initialization bug.
Reenable the tests.

llvm-svn: 146750
2011-12-16 19:08:59 +00:00
Craig Topper a4d411cb1b Don't try to match 'unpackl/h v, v' for 32xi8 and 16xi16 when only AVX1 is supported. Fix 'unpackh v, v' for 256-bit types to understand 128-bit lanes.
llvm-svn: 146726
2011-12-16 08:06:31 +00:00
Chad Rosier 41dbf59e12 Add missing zmovl AVX patterns which were causing crashes.
Patch by Elena Demikhovsky <elena.demikhovsky@intel.com>!

llvm-svn: 146689
2011-12-15 22:11:31 +00:00
Chad Rosier 75ed9dcbc6 Fix assert in LowerBUILD_VECTOR for v16i16 type on AVX.
Patch by Elena Demikhovsky <elena.demikhovsky@intel.com>!

llvm-svn: 146684
2011-12-15 21:34:44 +00:00
Lang Hames 918f976e66 Set specific target cpu for testcase.
llvm-svn: 146678
2011-12-15 20:22:34 +00:00
Lang Hames 2d6d3a2f96 Added test case for r146671.
llvm-svn: 146675
2011-12-15 19:56:07 +00:00
Hal Finkel 750366f014 Add a test case to make sure that the nop really does follow the bl on ppc64 elf
llvm-svn: 146666
2011-12-15 17:59:23 +00:00
Eli Friedman 2ec824966d Don't try to form FGETSIGN after legalization; it is possible in some cases, but the existing code can't do it correctly. PR11570.
llvm-svn: 146630
2011-12-15 02:07:20 +00:00
Chad Rosier 1940baa76b Add support for lowering fneg when AVX is enabled.
rdar://10566486

llvm-svn: 146625
2011-12-15 01:02:25 +00:00
Devang Patel c268688643 Do not sink an instruction if it is not profitable.
On ARM, the peephole optimization for ABS creates a trivial CFG triangle that tempts machine sinking to sink instructions in what is really straight-line code. Sometimes this sinking alters the register allocator's input so that the use and def of a register are separated by a branch, which may result in extra spills. Machine sinking now avoids sinking when the final sink destination is a post-dominator.

Radar 10266272.

llvm-svn: 146604
2011-12-14 23:20:38 +00:00
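For a sense of the shape involved, an illustrative C fragment (not from the patch): the conditional negate that implements abs() introduces a small compare-and-branch triangle into otherwise straight-line code, which is what used to tempt the sinking pass.

    /* Illustrative only: abs() lowers to a compare plus a conditionally
     * executed negate, i.e. a tiny CFG triangle in straight-line code. */
    int abs_plus(int x, int y) {
        int a = (x < 0) ? -x : x;   /* compare + conditional negate */
        return a + y;               /* straight-line use of 'a'     */
    }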
Akira Hatanaka bff84e1914 Add support for local dynamic TLS model in LowerGlobalTLSAddress. Direct object
emission is not supported yet, but a patch that adds the support should follow
soon.

llvm-svn: 146572
2011-12-14 18:26:41 +00:00
Evan Cheng 7fae11b231 - Add MachineInstrBundle.h and MachineInstrBundle.cpp. This includes a function
to finalize MI bundles (i.e. adding the BUNDLE instruction and computing the
  register def and use lists of the BUNDLE instruction) and a pass to unpack bundles.
- Teach more of the MachineBasicBlock and MachineInstr methods to be bundle aware.
- Switch the Thumb2 IT block to MI bundles and delete the hazard recognizer hack that
  prevented IT blocks from being broken apart.

llvm-svn: 146542
2011-12-14 02:11:42 +00:00
Chad Rosier 4020ae75ea Add newline at EOF.
llvm-svn: 146538
2011-12-14 01:34:39 +00:00
Chad Rosier 563de603f7 [fast-isel] Unaligned loads of floats are not supported. Therefore, convert to a regular
load and then move the result from a GPR to an FPR.

llvm-svn: 146502
2011-12-13 19:22:14 +00:00
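A rough C analogue of that lowering (the helper name is made up for illustration; this is not the fast-isel code itself): perform the unaligned access as an integer load, then move the bits into a floating-point value.

    #include <stdint.h>
    #include <string.h>

    /* Illustration: integer (GPR) load of the raw bits, then a GPR-to-FPR
     * move, e.g. a vmov on ARM. */
    float load_float_unaligned(const void *p) {
        uint32_t bits;
        memcpy(&bits, p, sizeof bits);   /* regular word load into a GPR */
        float f;
        memcpy(&f, &bits, sizeof f);     /* move the bits into an FPR    */
        return f;
    }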
Akira Hatanaka 341850fdc6 Move direct object emitter test to directory test/MC/Mips. Rename it to
elf-relsym.ll.

llvm-svn: 146470
2011-12-13 03:50:34 +00:00
Akira Hatanaka e41963ce47 Emit relocations against a symbol instead of against the section. We had some extreme
test cases where there were a lot of relocations applied relative to a large
rodata section. Gas would create a symbol for each of these whereas we would
be relative to the beginning of the rodata section. This change mimics what
gas does.

Patch by Jack Carter.

llvm-svn: 146468
2011-12-13 02:27:40 +00:00
Tony Linthicum 525ca5fc69 Temporarily disable Hexagon tests. They are failing on OS X.
llvm-svn: 146455
2011-12-13 00:33:45 +00:00
Akira Hatanaka 9e5908ae3a Test case for r146432 by Jack Carter.
llvm-svn: 146433
2011-12-12 22:41:39 +00:00
Bob Wilson fadc2c83e5 Implement 'e' and 'f' modifiers for Neon inline asm. <rdar://problem/10551006>
These modifiers simply select either the low or high D subregister of a Neon
Q register.  I've also removed the unimplemented 'p' modifier, which turns out
to be a bit different than the comment here suggests and as far as I can tell
was only intended for internal use in Apple's version of gcc.

llvm-svn: 146417
2011-12-12 21:45:15 +00:00
Tony Linthicum 1213a7a57f Hexagon backend support
llvm-svn: 146412
2011-12-12 21:14:40 +00:00
Chandler Carruth 6b0e34c445 Manually upgrade the test suite to specify the flag to cttz and ctlz.
I followed three heuristics for deciding whether to set 'true' or
'false':

- Everything target independent got 'true' as that is the expected
  common output of the GCC builtins.
- If the target arch only has one way of implementing this operation,
  set the flag in the way that exercises the most of codegen. For most
  architectures this is also the likely path from a GCC builtin, with
  'true' being set. It will (eventually) require lowering away that
  difference, and then lowering to the architecture's operation.
- Otherwise, set the flag differently depending on which target
  operation should be tested.

Let me know if anyone has any issue with this pattern or would like
specific tests of another form. This should allow the x86 codegen to
just iteratively improve as I teach the backend how to differentiate
between the two forms, and everything else should remain exactly the
same.

llvm-svn: 146370
2011-12-12 11:59:10 +00:00
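The true/false flag records whether a zero input is undefined. GCC-style builtins leave zero undefined, which is why 'true' is the expected common output; a hedged C sketch of the two behaviours:

    /* __builtin_ctz is undefined for a zero input, matching the 'true'
     * (zero-is-undef) form of the intrinsic; a defined-at-zero count needs
     * an explicit guard, matching the 'false' form. */
    unsigned ctz_zero_undef(unsigned x)   { return (unsigned)__builtin_ctz(x); }
    unsigned ctz_zero_defined(unsigned x) { return x ? (unsigned)__builtin_ctz(x) : 32u; }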
Stepan Dyatkovskiy 4683740967 Fixed bug 9905: Failure in code selection for llvm intrinsics sqrt/exp (fix for FSQRT, FSIN, FCOS, FPOWI, FPOW, FLOG, FLOG2, FLOG10, FEXP, FEXP2). Third attempt: simplified checks in test for armv7-apple-darwin11.
llvm-svn: 146341
2011-12-11 14:35:48 +00:00
Chad Rosier 1c468af854 Revert the associated SelectInsertValue test as well.
llvm-svn: 146332
2011-12-10 21:34:28 +00:00
Chad Rosier 6641294e3b Revert r146322 to appease buildbots. Original commit message:
Fixed bug 9905: Failure in code selection for llvm intrinsics sqrt/exp (fix for
FSQRT, FSIN, FCOS, FPOWI, FPOW, FLOG, FLOG2, FLOG10, FEXP, FEXP2). Second
attempt.

llvm-svn: 146328
2011-12-10 19:55:03 +00:00
Stepan Dyatkovskiy df0b779e9f Fixed bug 9905: Failure in code selection for llvm intrinsics sqrt/exp (fix for FSQRT, FSIN, FCOS, FPOWI, FPOW, FLOG, FLOG2, FLOG10, FEXP, FEXP2). Second attempt.
llvm-svn: 146322
2011-12-10 08:42:24 +00:00
Hal Finkel 67a7f18faf Make CR spill and restore use a reserved register. These operations cannot use the register scavenger because the scavenger can only scavenge one register and frame-index elimination may have already grabbed it.
llvm-svn: 146318
2011-12-10 04:50:53 +00:00
Eli Friedman 4e36a934dc Splats can contain undef's; make sure to handle them correctly. PR11526.
llvm-svn: 146299
2011-12-09 23:54:42 +00:00
Evan Cheng 1d54d2210a Update test to something more sensible.
llvm-svn: 146282
2011-12-09 21:54:10 +00:00
Chad Rosier dd998ff4df [fast-isel] Add support for selecting insertvalue.
rdar://10530851

llvm-svn: 146276
2011-12-09 20:09:54 +00:00
Benjamin Kramer 16bbfbec66 X86: Add patterns for the various rounding ops for SSE4.1 and AVX.
llvm-svn: 146257
2011-12-09 15:44:03 +00:00
Evan Cheng 5895fa79d6 Forgot to set -march.
llvm-svn: 146244
2011-12-09 06:15:00 +00:00
Akira Hatanaka 8e16aac534 jalr should use t9 ($25) for indirect calls regardless of the relocation model
specified.

llvm-svn: 146229
2011-12-09 01:45:12 +00:00
Eli Friedman 053a724483 Fix a couple of logic bugs in TargetLowering::SimplifyDemandedBits. PR11514.
llvm-svn: 146219
2011-12-09 01:16:26 +00:00
Evan Cheng b96bca81e7 Add 256-bit variant vmovss and vmovsd patterns. rdar://10538417
llvm-svn: 146196
2011-12-08 22:30:45 +00:00
Evan Cheng 2a217be25f Add various missing AVX patterns which were causing crashes. Sadly, the generated
code looks pretty bad compared to SSE.

rdar://10538793

llvm-svn: 146191
2011-12-08 22:05:28 +00:00
Owen Anderson 0b9b9da6c8 Teach SelectionDAG to match more calls to libm functions onto existing SDNodes. Mark these nodes as illegal by default, unless the target declares otherwise.
llvm-svn: 146171
2011-12-08 19:32:14 +00:00
Evan Cheng 3294538546 Add test for r146163.
llvm-svn: 146167
2011-12-08 19:21:39 +00:00
Daniel Dunbar c09e4593b2 Revert r146143, "Fix bug 9905: Failure in code selection for llvm intrinsics
sqrt/exp (fix for FSQRT, FSIN, FCOS, FPOWI, FPOW, FLOG, FLOG2, FLOG10, FEXP,
FEXP2).", it is failing tests.

llvm-svn: 146157
2011-12-08 17:32:18 +00:00
NAKAMURA Takumi 0faa233439 test/CodeGen/X86/vec_compare-2.ll: Add explicit -mtriple=i686-linux.
llvm-svn: 146152
2011-12-08 15:24:09 +00:00
Nadav Rotem 26edb291ac Fix a bug in the integer-promotion of bitcast operations on vector types.
We must not issue a bitcast operation for integer-promotion of vector types, because the
location of the values in the vector may be different.

llvm-svn: 146150
2011-12-08 13:10:01 +00:00
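To see why a plain bitcast is the wrong tool here, compare widening each element with reinterpreting the same bytes; the C program below is only an illustration of the lane mismatch, not the legalizer code.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        uint16_t v[4] = {1, 2, 3, 4};
        uint32_t widened[4], reinterpreted[2];

        for (int i = 0; i < 4; ++i)
            widened[i] = v[i];               /* promotion: one value per wider lane  */
        memcpy(reinterpreted, v, sizeof v);  /* bitcast: same bytes, different lanes */

        /* On a little-endian host this prints "1 vs 131073": each reinterpreted
         * lane packs two of the original elements, so values change position. */
        printf("%u vs %u\n", widened[0], reinterpreted[0]);
        return 0;
    }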
Stepan Dyatkovskiy a4bcf27dae Fix bug 9905: Failure in code selection for llvm intrinsics sqrt/exp (fix for FSQRT, FSIN, FCOS, FPOWI, FPOW, FLOG, FLOG2, FLOG10, FEXP, FEXP2).
llvm-svn: 146143
2011-12-08 07:55:03 +00:00
Akira Hatanaka ae378af667 32-to-64-bit zext pattern.
llvm-svn: 146096
2011-12-07 23:14:41 +00:00
Akira Hatanaka b2e05cb6b1 64-bit WrapperPICPat patterns.
llvm-svn: 146086
2011-12-07 22:11:43 +00:00
Akira Hatanaka c5b5a8d8b1 Modify LowerFCOPYSIGN to handle Mips64.
llvm-svn: 146080
2011-12-07 21:48:50 +00:00
Akira Hatanaka 4a04a56a36 Fix 64-bit immediate patterns.
llvm-svn: 146059
2011-12-07 20:10:24 +00:00
Eli Friedman ed8b3e38ec Support vector bitcasts in the AsmPrinter. PR11495.
llvm-svn: 146001
2011-12-07 00:50:54 +00:00
Eli Friedman 0e58cba286 Fix an optimization involving EXTRACT_SUBVECTOR in DAGCombine so it behaves correctly. PR11494.
llvm-svn: 145996
2011-12-07 00:11:56 +00:00
Hal Finkel 0fc34bc2d3 Delaying restore-cr changed assigned registers in some tests.
llvm-svn: 145963
2011-12-06 20:55:46 +00:00
Hal Finkel 0702bc1b28 Add a test case that uses RESTORE_CR.
llvm-svn: 145962
2011-12-06 20:55:41 +00:00
Justin Holewinski 04424665c3 PTX: Continue to fix up the register mess.
llvm-svn: 145947
2011-12-06 17:39:48 +00:00
Craig Topper 6572e0f203 Fix a bunch of SSE/AVX patterns to use v2i64/v4i64 loads since all other integer vector loads are promoted to those.
llvm-svn: 145927
2011-12-06 09:04:59 +00:00
Craig Topper bf41eb3a98 Merge isSHUFPMask and isCommutedSHUFPMask into a single function that can do both. Do the same for the 256-bit version. Use loops to reduce the size of isVSHUFPYMask. Fix test cases that were incorrectly passing because isCommutedSHUFPMask did not check that the vector is 128-bit, which caused some 256-bit shuffles to be incorrectly commuted.
llvm-svn: 145921
2011-12-06 04:59:07 +00:00
Chad Rosier c77830d21e [arm-fast-isel] Doublewords only require word-alignment.
rdar://10528060

llvm-svn: 145891
2011-12-06 01:44:17 +00:00
Jakob Stoklund Olesen 2e05db2fa0 Align ARM constant pool islands via their basic block.
Previously, all ARM::CONSTPOOL_ENTRY instructions had a hardwired
alignment of 4 bytes emitted by ARMAsmPrinter.  Now the same alignment
is set on the basic block.

This is in preparation of supporting ARM constant pool islands with
different alignments.

llvm-svn: 145890
2011-12-06 01:43:02 +00:00
Akira Hatanaka 20cee2eba1 Add definitions of 64-bit extract and insert instructions and make
PerformANDCombine and PerformOrCombine aware of them. Test cases are included
too.

llvm-svn: 145853
2011-12-05 21:26:34 +00:00
Akira Hatanaka 34e3df76f9 Have LowerJumpTable support Mips64. Modify 2010-07-20-Switch.ll to test N64 and
O32 with relocation-model=pic too.

llvm-svn: 145850
2011-12-05 21:03:03 +00:00
Hal Finkel 97a6028b3a Add a test case: this input used to crash because of duplicate generation of SPILL_CRs.
llvm-svn: 145820
2011-12-05 17:55:22 +00:00
Hal Finkel 8f6834dfa5 Enable PPC register scavenging by default (update tests and remove some FIXMEs).
llvm-svn: 145819
2011-12-05 17:55:17 +00:00
Hal Finkel e18c72689c Remove wasted space for extra bit copies of CR2 subregs.
llvm-svn: 145817
2011-12-05 17:55:06 +00:00
NAKAMURA Takumi e6efe405de test/CodeGen/X86/pointer-vector.ll: Add explicit -mtriple=i686-linux.
llvm-svn: 145805
2011-12-05 07:54:57 +00:00
Nadav Rotem 3924cb0267 Add support for vectors of pointers.
llvm-svn: 145801
2011-12-05 06:29:09 +00:00
Anton Korobeynikov 965e0c6de2 Emit the ctors in the proper order on ARM/EABI.
Maybe some targets should use this as well.

Patch by Evgeniy Stepanov!

llvm-svn: 145781
2011-12-03 23:49:37 +00:00
Venkatraman Govindaraju 6dae604f50 Sparc CodeGen: Fix AnalyzeBranch for PR 10282. Removing addSuccessor() since
AnalyzeBranch doesn't change the successor, just the order.

llvm-svn: 145779
2011-12-03 21:24:48 +00:00
Sanjoy Das 006e43bcc0 Check for stack space more intelligently.
libgcc sets the stack limit field in TCB to 256 bytes above the actual
allocated stack limit.  This means if the function's stack frame needs
less than 256 bytes, we can just compare the stack pointer with the
stack limit.  This should result in fewer calls to __morestack.

llvm-svn: 145766
2011-12-03 09:32:07 +00:00
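A hedged C sketch of that check (the names below are invented for illustration; this is not the libgcc or LLVM code): because the stored limit already sits 256 bytes above the real one, a frame smaller than 256 bytes can compare the untouched stack pointer against it directly.

    #include <stddef.h>
    #include <stdint.h>

    /* 'tcb_limit' models the stack limit field kept in the TCB, which libgcc
     * sets to real_limit + 256. */
    static int needs_morestack(uintptr_t sp, uintptr_t tcb_limit, size_t frame_size) {
        if (frame_size < 256)
            return sp < tcb_limit;                    /* slack already covers the frame   */
        return sp - frame_size < tcb_limit - 256;     /* general case: use the real limit */
    }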
Sanjoy Das 165ca1d4ba Fix a bug in the x86-32 code generated for segmented stacks.
Currently LLVM pads the call to __morestack with an add and sub of 8
bytes to esp.  This isn't correct since __morestack expects the call
to be followed directly by a ret.

This commit also adjusts the relevant test-case.

llvm-svn: 145765
2011-12-03 09:21:07 +00:00
Chad Rosier ec3b77e00d [arm-fast-isel] Unaligned stores of floats require special care.
rdar://10510150

llvm-svn: 145742
2011-12-03 02:21:57 +00:00
Akira Hatanaka 430f917fbe Test cases for 64-bit multiplication and division.
llvm-svn: 145717
2011-12-02 22:31:36 +00:00
Akira Hatanaka bbc5555bee Fix test cases to use FileCheck.
llvm-svn: 145716
2011-12-02 22:28:09 +00:00
Chad Rosier 9fd0e55e91 [arm-fast-isel] After promoting a function parameter, be sure to update the
argument value type.  Otherwise, the sign/zero-extend has no effect on arguments
passed via the stack (i.e., undefined high-order bits).
rdar://10515467

llvm-svn: 145701
2011-12-02 20:25:18 +00:00
Hal Finkel d87f7af1f3 Specify a CPU for the test to fix a failure on some Darwin systems with a G4+ CPU.
llvm-svn: 145699
2011-12-02 19:38:17 +00:00
Craig Topper abeb79eee3 Add instruction selection support for horizontal add/sub of 256-bit floating point vectors. Also add the test case for 256-bit integer vectors.
llvm-svn: 145680
2011-12-02 07:16:01 +00:00
Hal Finkel 9286705955 Adjust the instruction ordering in some PPC tests: changes due to the post-RA hazard recognizer.
llvm-svn: 145678
2011-12-02 04:58:12 +00:00
Eric Christopher 9da7f305a4 For 64-bit, the rest of the general regs are OK for the q constraint. Make
sure we can emit both the high and low versions of those registers.

Fixes rdar://10392864

llvm-svn: 145579
2011-12-01 08:12:41 +00:00
Eli Friedman d61887dd0a Pass AVX vectors which are arguments to varargs functions on the stack. <rdar://problem/10463281>.
llvm-svn: 145573
2011-12-01 04:49:21 +00:00
Jan Sjödin 9430e284a9 Support for encoding all FMA4 instructions, and tablegen patterns for all
remaining FMA4 instructions and intrinsics, with tests.

llvm-svn: 145525
2011-11-30 22:09:42 +00:00
Eli Friedman 6cff9df298 Make GlobalMerge honor the preferred alignment on globals without an explicitly specified alignment.
<rdar://problem/10497732>.

llvm-svn: 145523
2011-11-30 21:54:15 +00:00
Nadav Rotem 0a1801015c Add a test arch to make it pass on non-x86 targets.
llvm-svn: 145498
2011-11-30 17:34:28 +00:00
Nadav Rotem 66427bcce9 Add a triple to the test.
llvm-svn: 145489
2011-11-30 11:20:56 +00:00
Nadav Rotem 96923cc2bb X86: PerformOrCombine introduced a vselect node with the wrong operand order. This bug was introduced when a dedicated blend sdnode was replaced with the vselect node (in r139479).
llvm-svn: 145488
2011-11-30 10:13:37 +00:00
Jakob Stoklund Olesen f50d2eafdb FileCheckize.
llvm-svn: 145452
2011-11-29 23:09:16 +00:00
Akira Hatanaka dc25f9f38a Change names for MIPS "generic" processors defined in Mips.td to match what GNU
tools use. Patch by Simon Atanasyan.

"mips32r1" => "mips32"
"4ke" => mips32r2"
"mips64r1" => "mips64"

llvm-svn: 145451
2011-11-29 23:08:41 +00:00
Evan Cheng 648e48d02e Add another missing pattern. llvm-gcc likes f64 but clang likes i64 so it was generating poor code for some SSE builtins.
llvm-svn: 145448
2011-11-29 22:48:34 +00:00
Jakob Stoklund Olesen bde32d36bb Make X86::FsFLD0SS / FsFLD0SD real pseudo-instructions.
Like V_SET0, these instructions are expanded by ExpandPostRA to xorps /
vxorps so they can participate in execution domain swizzling.

This also makes the AVX variants redundant.

llvm-svn: 145440
2011-11-29 22:27:25 +00:00
Chad Rosier 46addb9e07 If fast-isel fails, remove dead instructions generated during the failed
attempt.  

llvm-svn: 145425
2011-11-29 19:40:47 +00:00
Elena Demikhovsky 7a81dea516 Fixed vsqrt.ss intrinsic usage: the order of input operands was wrong.
Added a test.
Thanks to Bruno for reviewing the patch.

llvm-svn: 145403
2011-11-29 15:00:45 +00:00