Commit Graph

20459 Commits

Rafael Espindola 4eecacb9c8 Generate the segmented stack prologue for fastcc too.
Patch by Brian Anderson.

llvm-svn: 147958
2012-01-11 18:41:19 +00:00
Chandler Carruth 3212a34269 Revert r147945, which disabled an addressing mode transformation. I had
hoped this would revive one of the llvm-gcc selfhost build bots, but it
didn't, so it doesn't appear that my transform is the culprit.

If anyone else is seeing failures, please let me know!

llvm-svn: 147957
2012-01-11 18:36:12 +00:00
Rafael Espindola 2b89448d60 Use unsigned comparison in segmented stack prologue.
This is a comparison of two addresses, and GCC does the comparison unsigned.

Patch by Brian Anderson.

llvm-svn: 147954
2012-01-11 18:23:35 +00:00
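
A minimal C sketch (not from the patch) of why this comparison must be unsigned: addresses have no sign, and a signed compare gives the wrong answer once the two pointers straddle the 0x80000000 boundary.

  #include <stdint.h>
  #include <stdio.h>

  int main(void) {
    /* Two addresses on opposite sides of the sign boundary. */
    uint32_t sp    = 0x80001000u;  /* stack pointer */
    uint32_t limit = 0x7fffff00u;  /* stack limit   */

    printf("unsigned: %d\n", sp > limit);                    /* 1: correct */
    printf("signed:   %d\n", (int32_t)sp > (int32_t)limit);  /* 0: wrong   */
    return 0;
  }
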
Rafael Espindola 6635ae1c17 Explicitly set the scale to 1 on some segstack prologue instrs.
Patch by Brian Anderson.

llvm-svn: 147952
2012-01-11 18:14:03 +00:00
Jan Sjödin 21f83d9f36 Add XOP Intrinsics and tests
llvm-svn: 147949
2012-01-11 15:20:20 +00:00
Nadav Rotem baae7e4577 Fix a bug in the lowering of BUILD_VECTOR for AVX. SCALAR_TO_VECTOR does not zero untouched elements. Use INSERT_VECTOR_ELT instead.
llvm-svn: 147948
2012-01-11 14:07:51 +00:00
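
A hedged sketch of the shape of this fix against the LLVM 3.0-era SelectionDAG API (DAG, dl, VT, Elts, and NumElems are assumed context from the lowering routine, not names taken from the commit):

  // SCALAR_TO_VECTOR defines only element 0; the remaining lanes are
  // undef, not zero. Chain INSERT_VECTOR_ELT nodes to define every lane.
  SDValue V = DAG.getUNDEF(VT);
  for (unsigned i = 0; i != NumElems; ++i)
    V = DAG.getNode(ISD::INSERT_VECTOR_ELT, dl, VT, V, Elts[i],
                    DAG.getIntPtrConstant(i));
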
Chandler Carruth 9bc48e5215 Disable the transformation I added in r147936 to see if it fixes some
strange build bot failures that look like a miscompile into an infloop.
I'll investigate this tomorrow, but I'd like both to know whether my
patch is the culprit and to get the bots back to green.

llvm-svn: 147945
2012-01-11 12:17:47 +00:00
Chandler Carruth 3eacfb83fa Hoist a really redundant code pattern into a helper function, and delete
lots of lines of code. No functionality changed.

llvm-svn: 147942
2012-01-11 11:04:36 +00:00
Chandler Carruth b0049f4a43 Simplify the AND-rooted mask+shift checking code to match that of the
SRL-rooted code.

llvm-svn: 147941
2012-01-11 09:35:04 +00:00
Chandler Carruth 3dbcda8478 Unify the interface of the three mask+shift transform helpers, and
factor the differences that were hiding in one of them into its other
caller, the SRL handling code. No change in behavior.

llvm-svn: 147940
2012-01-11 09:35:02 +00:00
Chandler Carruth aa01e6661a Clarify and make explicit some of the requirements for transforming
mask+shift pairs at the beginning of the ISD::AND case block, and then
hoist the final pattern into a helper function, simplifying and
reflowing it appropriately. This should have no observable behavior
change, but several simplifications fell out of this such as directly
computing the new mask constant, etc.

llvm-svn: 147939
2012-01-11 09:35:00 +00:00
Jakob Stoklund Olesen 6039983755 Fix undefined code and reenable test case.
I don't think the compact encoding code is right, but at least it has
defined behavior now.

llvm-svn: 147938
2012-01-11 09:08:04 +00:00
Chandler Carruth 51d3076bbf Hoist the logic to transform shift+mask combinations into sub-register
extracts and scaled addressing modes into its own helper function. No
functionality changed here, just hoisting and layout fixes falling out
of that hoisting.

llvm-svn: 147937
2012-01-11 08:48:20 +00:00
Chandler Carruth 55b2cdee26 Teach the X86 instruction selection to do some heroic transforms to
detect a pattern which can be implemented with a small 'shl' embedded in
the addressing mode scale. This happens in real code as follows:

  unsigned x = my_accelerator_table[input >> 11];

Here we have some lookup table that we look into using the high bits of
'input'. Each entity in the table is 4-bytes, which means this
implicitly gets turned into (once lowered out of a GEP):

  *(unsigned*)((char*)my_accelerator_table + ((input >> 11) << 2));

The shift right followed by a shift left is canonicalized to a smaller
shift right and masking off the low bits. That hides the shift right,
which x86 has an addressing mode designed to support. We now detect
masks of this form, and produce the longer shift right followed by the
proper addressing mode. In addition to saving a (rather large)
instruction, this also reduces stalls in Intel chips on benchmarks I've
measured.

In order for all of this to work, one part of the DAG needs to be
canonicalized *still further* than it currently is. This involves
removing pointless 'trunc' nodes between a zextload and a zext. Without
that, we end up generating spurious masks and hiding the pattern.

llvm-svn: 147936
2012-01-11 08:41:08 +00:00
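
The canonicalization in question, made concrete (an illustrative sketch, not the commit's test): for a 32-bit unsigned x, (x >> 11) << 2 and (x >> 9) & ~3u compute the same value, but the masked form hides the shift that the addressing-mode scale could have absorbed.

  #include <assert.h>
  #include <stdint.h>

  int main(void) {
    for (uint32_t x = 0; x < (1u << 20); ++x)
      /* DAG canonicalization turns the left form into the right one. */
      assert(((x >> 11) << 2) == ((x >> 9) & ~3u));
    return 0;
  }
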
Rafael Espindola 647841b181 Add big endian mips support. Based on a patch by Jack Carter.
llvm-svn: 147924
2012-01-11 04:04:14 +00:00
Rafael Espindola 870c4e92b9 Add the skeleton of an asm parser for mips.
llvm-svn: 147923
2012-01-11 03:56:41 +00:00
Andrew Trick 642f0f6a40 ARM Ld/St Optimizer fix.
Allow LDRD to be formed from pairs with different LDR encodings. This was the original intention of the pass. Somewhere along the way, the LDR opcodes were refined, which broke the optimization. We really don't care what the original opcodes are as long as they both map to the same LDRD and the immediate still fits.

Fixes rdar://10435045 ARMLoadStoreOptimization cannot handle mixed LDRi8/LDRi12

llvm-svn: 147922
2012-01-11 03:56:08 +00:00
Lang Hames 995c63329a Fixed order of operands in comment to match code.
llvm-svn: 147890
2012-01-10 22:53:20 +00:00
Joerg Sonnenberger 96cd35cf6d Default stack alignment for 32-bit x86 should be 4 bytes, not 8 bytes.
Add a test that checks the stack alignment of a simple function for
Darwin, Linux and NetBSD in 32-bit and 64-bit mode.

llvm-svn: 147888
2012-01-10 22:43:53 +00:00
Jakob Stoklund Olesen 20f1dd5faf Consider unknown alignment caused by OptimizeThumb2Instructions().
This function runs after all constant islands have been placed, and may
shrink some instructions to their 2-byte forms.  This can actually cause
some constant pool entries to move out of range because of growing
alignment padding.

Treat instructions that may be shrunk the same as inline asm - they
erode the known alignment bits.

Also reinstate an old assertion in verify(). It is correct now that
basic block offsets include alignments.

Add a single large test case that will hopefully exercise many parts of
the constant island pass.

<rdar://problem/10670199>

llvm-svn: 147885
2012-01-10 22:32:14 +00:00
Chad Rosier 1a8f0ccd8c Add missing VEX predicates to VMOVSDto64rr/VMOVSDto64mr. This fixes a few
failing test cases on our internal AVX nightly tester.
rdar://10663637

llvm-svn: 147881
2012-01-10 22:14:06 +00:00
Jim Grosbach 74ac7d50a1 ARM updating VST2 pseudo-lowering fixed vs. register update.
rdar://10663487

llvm-svn: 147876
2012-01-10 21:11:12 +00:00
Benjamin Kramer 233149cf06 Fix some leftover control reaches end of non-void function warnings.
llvm-svn: 147874
2012-01-10 20:47:20 +00:00
Richard Smith ad5b42c02f Move default case for covered enum outside of switch.
llvm-svn: 147870
2012-01-10 19:43:09 +00:00
Bill Wendling d5ab02600e For i386, don't use the generic code.
As the comment around 7746 says, it's better to use the x87 extended precision
here than SSE. And the generic code doesn't know how to do that. It also regains
the speed lost for the uint64_to_float.c testcase.
<rdar://problem/10669858>

llvm-svn: 147869
2012-01-10 19:41:30 +00:00
Richard Smith 3f1035410f Fix a -Wreturn-type warning in g++.
llvm-svn: 147867
2012-01-10 19:10:22 +00:00
Chandler Carruth f3e8502cc1 Add 'llvm_unreachable' to pacify GCC's understanding of the constraints
of several newly un-defaulted switches. This also helps optimizers
(including LLVM's) recognize that every case is covered, and we should
assume as much.

llvm-svn: 147861
2012-01-10 18:08:01 +00:00
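
The pattern being applied, sketched with a hypothetical enum (llvm_unreachable lives in llvm/Support/ErrorHandling.h):

  #include "llvm/Support/ErrorHandling.h"

  enum Color { Red, Green, Blue };  // hypothetical, for illustration

  static unsigned encode(Color C) {
    switch (C) {
    case Red:   return 0;
    case Green: return 1;
    case Blue:  return 2;
    }
    // Every enumerator is handled above. This silences GCC's
    // -Wreturn-type warning and tells the optimizer the switch is total.
    llvm_unreachable("unknown Color");
  }
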
Devang Patel 67bf992a8f Add definition for intel asm variant.
Right now, this just adds additional entries in the match table. The parser does not use them yet.

llvm-svn: 147859
2012-01-10 17:51:54 +00:00
David Blaikie edbb58c577 Remove unnecessary default cases in switches that cover all enum values.
llvm-svn: 147855
2012-01-10 16:47:17 +00:00
Benjamin Kramer 077ae1d760 Add definitions for AMD's bobcat (aka btver1)
llvm-svn: 147846
2012-01-10 11:50:02 +00:00
Craig Topper 430f3f1bd6 Fix a crash in AVX2 when trying to broadcast a double into a 128-bit vector. There is no vbroadcastsd xmm, but we do need to support 64-bit integers broadcasted into xmm. Also factor the AVX check into the isVectorBroadcast function. This makes more sense since the AVX2 check was already inside.
llvm-svn: 147844
2012-01-10 08:23:59 +00:00
Craig Topper b0c0f72ae6 Remove hasXMM/hasXMMInt functions. Move callers to hasSSE1/hasSSE2. This is the final piece to remove the AVX hack that disabled SSE.
llvm-svn: 147843
2012-01-10 06:54:16 +00:00
Craig Topper d97bbd7b60 Remove hasSSE*orAVX functions and change all callers to use just hasSSE*. AVX is now an SSE level and no longer disables SSE checks.
llvm-svn: 147842
2012-01-10 06:37:29 +00:00
Craig Topper eb8f9e9e5b Instruction selection priority fixes to remove the XMM/XMMInt/orAVX predicates. Another commit will remove orAVX functions from X86SubTarget.
llvm-svn: 147841
2012-01-10 06:30:56 +00:00
Jakob Stoklund Olesen f09a316542 Accurately model hardware alignment rounding.
On Thumb, the displacement computation hardware uses the address of the
current instruction rounded down to a multiple of 4.  Include this
rounding in the UserOffset we compute for each instruction.

When inline asm is present, the instruction alignment may not be known.
Constrain the maximum displacement instead in that case.

This makes it possible for CreateNewWater() and OffsetIsInRange() to
agree about the valid displacements.  When they disagree, infinite
looping happens.

As always, test cases for this stuff are insane.

<rdar://problem/10660175>

llvm-svn: 147825
2012-01-10 01:34:59 +00:00
Rafael Espindola 5cb98f1062 Remove the logging streamer.
llvm-svn: 147820
2012-01-10 00:40:39 +00:00
Jakob Stoklund Olesen 1a80e3a26b Catch runaway ARMConstantIslandPass even in -Asserts builds.
The pass is prone to looping, and it is better to crash than loop
forever, even in a -Asserts build.

<rdar://problem/10660175>

llvm-svn: 147806
2012-01-09 22:16:24 +00:00
Devang Patel 29ba4f97e6 Fix asm string wrt variants.
llvm-svn: 147805
2012-01-09 21:32:02 +00:00
Devang Patel 85d684a4d9 Split AsmParser into two components - AsmParser and AsmParserVariant
AsmParser holds info specific to target parser.
AsmParserVariant holds info specific to asm variants supported by the target.

llvm-svn: 147787
2012-01-09 19:13:28 +00:00
Chandler Carruth c16622daff Don't rely on the fact that shift values are never very large, and thus
this subtraction will result in small negative numbers at worst, which
become very large positive numbers on assignment and are thus caught by
the <= 4 check on the next line. The > 0 check was clearly intended to
catch these as negative numbers.

Spotted by inspection, and impossible to trigger given the shift widths
that can be used.

llvm-svn: 147773
2012-01-09 09:47:25 +00:00
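
The hazard described above, in a short C sketch (hypothetical values): a negative result assigned to an unsigned variable wraps to a huge value, which the subsequent <= 4 bound rejects only by accident.

  #include <stdio.h>

  int main(void) {
    unsigned MaskBits = 3, ShiftBits = 5;  /* hypothetical widths */
    unsigned Diff = MaskBits - ShiftBits;  /* "-2" wraps to 4294967294 */
    printf("%u\n", Diff);  /* huge, so a later Diff <= 4 test rejects it */
    return 0;
  }
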
Craig Topper f287a4509e Remove AVX hack in X86Subtarget. AVX/AVX2 are now treated as an SSE level. Predicate functions have been altered to maintain previous names and behavior.
llvm-svn: 147770
2012-01-09 09:02:13 +00:00
Craig Topper b89805c77d Add HasAVX predicate to some of the AVX patterns.
llvm-svn: 147769
2012-01-09 08:34:00 +00:00
Craig Topper a51f7f75c2 Reorder a bunch of patterns to put the AVX version first thus giving it priority over the SSE version. Another step towards trying to remove the AVX hack that disables SSE from X86Subtarget.
llvm-svn: 147768
2012-01-09 08:10:38 +00:00
Craig Topper ef7f5bf8c9 Clean up patterns for MOVNT*. Not sure why there were floating point types on MOVNTPS and MOVNTDQ. And v4i64 was completely missing.
llvm-svn: 147767
2012-01-09 06:52:46 +00:00
Craig Topper c1f5622ad3 Mark MOVNTI as being supported in SSE2 OR AVX mode. This instruction has no AVX equivalent so we should use the SSE version.
llvm-svn: 147766
2012-01-09 06:38:55 +00:00
Craig Topper a081644f8a Move SSE2 logical operations PAND/POR/PXOR/PANDN above SSE1 logical operations ANDPS/ORPS/XORPS/ANDNPS. This fixes a pattern ordering issue that meant that the SSE2 instructions could never be directly selected since the SSE1 patterns would always match first. This is largely moot with the ExeDepsFix pass, but I'm trying to audit for all such ordering issues.
llvm-svn: 147765
2012-01-09 05:07:01 +00:00
Craig Topper 210e4f81b3 Change some places that were checking for AVX OR SSE1/2 to use hasXMM/hasXMMInt instead. Also fix one place that checked SSE3 but accidentally excluded AVX, making it use hasSSE3orAVX. This is a step towards removing the AVX hack from X86Subtarget.h.
llvm-svn: 147764
2012-01-09 02:28:15 +00:00
Craig Topper 744f6311d3 Don't disable MMX support when AVX is enabled. Fix predicates for MMX instructions that were added along with SSE instructions to check for AVX in addition to SSE level.
llvm-svn: 147762
2012-01-09 00:11:29 +00:00
Craig Topper c1ab7afec8 Enable FISTTP* instructions when AVX is enabled.
llvm-svn: 147758
2012-01-08 23:04:21 +00:00
Evan Cheng 4882e488f7 Don't forget to transfer implicit uses of return instruction.
llvm-svn: 147752
2012-01-08 20:41:16 +00:00
Victor Umansky 540651cf59 Reverted commit #147601 upon Evan's request.
llvm-svn: 147748
2012-01-08 17:20:33 +00:00
Jakob Stoklund Olesen 083dbdca7f Match SelectionDAG logic for enabling movt.
Darwin doesn't do static, and ELF targets only support static.

llvm-svn: 147740
2012-01-07 20:49:15 +00:00
Craig Topper f210619d08 Fix typo in the X86 backend readme. Patch from Jaeden Amero.
llvm-svn: 147739
2012-01-07 20:35:21 +00:00
Benjamin Kramer 6898db6269 Remove VectorExtras. This unused helper was written for a type of API that is discouraged now.
llvm-svn: 147738
2012-01-07 19:42:13 +00:00
Craig Topper ca66bba45e Remove unnecessary check of hasAVX(). It's already included in hasXMM().
llvm-svn: 147734
2012-01-07 18:48:43 +00:00
Jakob Stoklund Olesen 8cdce7e690 Use getRegForValue() to materialize the address of ARM globals.
This enables basic local CSE, giving us 20% smaller code for
consumer-typeset in -O0 builds.

<rdar://problem/10658692>

llvm-svn: 147720
2012-01-07 04:07:22 +00:00
Rafael Espindola 0708209642 Split Finish into Finish and FinishImpl to have a common place to do end of
file error checking. Use that to error on an unfinished cfi_startproc.

The error message is not nice, but it is already better than a segmentation fault.

llvm-svn: 147717
2012-01-07 03:13:18 +00:00
Evan Cheng 501e3095e8 Copy implicit defs (e.g. r0) when changing tBX_RET to tPOP_RET. This bug is
exposed by an upcoming change that would delete the copy to the return register
because there is no use! It's amazing anything works.

llvm-svn: 147715
2012-01-07 02:55:54 +00:00
Jakob Stoklund Olesen 68f034ee1a Use movw+movt in ARMFastISel::ARMMaterializeGV.
This eliminates a lot of constant pool entries for -O0 builds of code
with many global variable accesses.

This speeds up -O0 codegen of consumer-typeset by 2x because the
constant island pass no longer has to look at thousands of constant pool
entries.

<rdar://problem/10629774>

llvm-svn: 147712
2012-01-07 01:47:05 +00:00
Eric Christopher c206d46709 Make the 'x' constraint work for AVX registers as well.
Fixes rdar://10614894

llvm-svn: 147704
2012-01-07 01:02:09 +00:00
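
A small sketch of the constraint in use (hypothetical function, GCC/Clang inline-asm syntax): with this fix, an "x" operand of 256-bit vector type can be assigned a YMM register instead of being rejected.

  #include <immintrin.h>

  __m256 passthrough(__m256 v) {
    __m256 out;
    // "x" requests an SSE-class register; for a 256-bit operand it can
    // now be satisfied with a YMM register under AVX.
    __asm__("vmovaps %1, %0" : "=x"(out) : "x"(v));
    return out;
  }
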
Jakob Stoklund Olesen 68a922c0e9 Enable aligned NEON spilling by default.
Experiments show this to be a small speedup for modern ARM cores.

llvm-svn: 147689
2012-01-06 22:19:37 +00:00
Jakob Stoklund Olesen 690511137c Abort AdjustBBOffsetsAfter early when possible.
llvm-svn: 147685
2012-01-06 21:40:15 +00:00
Chad Rosier 64dc8aa44f Initializing to false makes better sense. Thanks, David.
llvm-svn: 147679
2012-01-06 20:11:59 +00:00
Chad Rosier a3d90a9467 Fix uninitialized variable warning.
llvm-svn: 147676
2012-01-06 20:02:49 +00:00
Chad Rosier 6b64c3c683 Fix uninitialized variable warning.
llvm-svn: 147675
2012-01-06 19:59:58 +00:00
Craig Topper 29b0737452 Mark scalar FMA4 instructions as ignoring the VEX.L bit.
llvm-svn: 147602
2012-01-05 08:56:10 +00:00
Victor Umansky 9255b6d9fe Peephole optimization of ptest-conditioned branches in the X86 arch. Combines instruction sequences generated by the ptestz/ptestc intrinsics into a ptest+jcc pair, for both SSE and AVX.
Testing: passed 'make check' including LIT tests for all sequences being handled (both SSE and AVX)

Reviewers: Evan Cheng, David Blaikie, Bruno Lopes, Elena Demikhovsky, Chad Rosier, Anton Korobeynikov
llvm-svn: 147601
2012-01-05 08:46:19 +00:00
Bill Wendling ac27f0c830 Replace the uint64_t -> double conversion algorithm with one that's more efficient.
This small bit of ASM code is sufficient to do what the old algorithm did:

     movq       %rax,  %xmm0
     punpckldq  (c0),  %xmm0  // c0: (uint4){ 0x43300000U, 0x45300000U, 0U, 0U }
     subpd      (c1),  %xmm0  // c1: (double2){ 0x1.0p52, 0x1.0p52 * 0x1.0p32 }
   #ifdef __SSE3__
     haddpd   %xmm0, %xmm0          
   #else
     pshufd   $0x4e, %xmm0, %xmm1 
     addpd    %xmm1, %xmm0
   #endif

It's arguably faster. One caveat: the 'haddpd' instruction isn't very fast on
all processors.
<rdar://problem/7719814>

llvm-svn: 147593
2012-01-05 02:13:20 +00:00
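
For reference, the same bit trick in portable scalar C (a sketch, not the committed code): each 32-bit half is deposited into the mantissa of a double whose exponent field is preset, the bias is subtracted away exactly, and the two halves are summed -- precisely what the punpckldq/subpd/haddpd sequence above does in parallel.

  #include <stdint.h>
  #include <string.h>

  double u64_to_double(uint64_t x) {
    /* 0x43300000.. is the bit pattern of 0x1.0p52; 0x45300000.. of 0x1.0p84. */
    uint64_t lo_bits = ((uint64_t)0x43300000u << 32) | (uint32_t)x;
    uint64_t hi_bits = ((uint64_t)0x45300000u << 32) | (uint32_t)(x >> 32);
    double lo, hi;
    memcpy(&lo, &lo_bits, 8);  /* lo = 2^52 + low half          */
    memcpy(&hi, &hi_bits, 8);  /* hi = 2^84 + high half * 2^32  */
    return (hi - 0x1.0p84) + (lo - 0x1.0p52);
  }
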
Jakob Stoklund Olesen d110e2a83f Reapply r146997, "Heed spill slot alignment on ARM."
Now that canRealignStack() understands frozen reserved registers, it is
safe to use it for aligned spill instructions.

It will only return true if the registers reserved at the beginning of
register allocation allow for dynamic stack realignment.

<rdar://problem/10625436>

llvm-svn: 147579
2012-01-05 00:26:57 +00:00
Jakob Stoklund Olesen 9cb477db25 Avoid reserving an ARM base pointer during register allocation.
Once register allocation has started the reserved registers are frozen.

Fix the ARM canRealignStack() hook to respect the frozen register state.
Now the hook returns false if register allocation was started with frame
pointer elimination enabled.

It also returns false if register allocation started without a reserved
base pointer, and stack realignment would require a base pointer.  This
bug was breaking oggenc on armv6.

No test case, an upcoming patch will use this functionality to realign
the stack for spill slots when possible.

llvm-svn: 147578
2012-01-05 00:26:52 +00:00
Benjamin Kramer 9c48f26341 Silence warnings of a mysterious compiler that still defaults to C89.
llvm-svn: 147553
2012-01-04 22:06:45 +00:00
Akira Hatanaka aac3e06bf7 Enable -soft-float for MIPS.
llvm-svn: 147541
2012-01-04 19:29:11 +00:00
Akira Hatanaka 3b775b8cc3 Rename immLUiOpnd.
llvm-svn: 147519
2012-01-04 03:09:26 +00:00
Akira Hatanaka b89a4bfe41 - Define base classes for Jump-and-link instructions and make 32-bit and 64-bit
versions derive from them.
- JALR64 is not needed since N64 does not emit jal. 
- Add template parameter to BranchLink that sets the rt field. 
- Fix the set of temporary registers for O32 and N64.

llvm-svn: 147518
2012-01-04 03:02:47 +00:00
Akira Hatanaka c669d7a6db Have getRegForInlineAsmConstraint return the correct register class when target
is Mips64.

llvm-svn: 147516
2012-01-04 02:45:01 +00:00
Evan Cheng 801d98b3f0 Fix more places which should be checking for iOS, not darwin.
llvm-svn: 147513
2012-01-04 01:55:04 +00:00
Evan Cheng 104dbb0fd1 For x86, canonicalize max
(x > y) ? x : y
=>
(x >= y) ? x : y

So for something like
(x - y) > 0 ? (x - y) : 0
It will be
(x - y) >= 0 ? (x - y) : 0

This makes it possible to test the sign bit and eliminate a comparison
against zero, e.g.
subl   %esi, %edi
testl  %edi, %edi
movl   $0, %eax
cmovgl %edi, %eax
=>
xorl   %eax, %eax
subl   %esi, %edi
cmovsl %eax, %edi

rdar://10633221

llvm-svn: 147512
2012-01-04 01:41:39 +00:00
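
The shape of the source pattern as a C sketch (hypothetical function): with the >= form, the select's condition is exactly the sign bit of the subtraction, so the separate compare against zero folds away.

  int clamp_diff(int x, int y) {
    int d = x - y;
    /* (d > 0) ? d : 0 and (d >= 0) ? d : 0 agree (both arms are 0 when
       d == 0), and the latter lets the flags from the subtraction feed
       a cmovs directly. */
    return d >= 0 ? d : 0;
  }
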
Chad Rosier 6ca97df951 Fix 80-column violations.
llvm-svn: 147495
2012-01-03 23:19:12 +00:00
Jakob Stoklund Olesen 1b7f2a7638 Revert r146997, "Heed spill slot alignment on ARM."
This patch caused a miscompilation of oggenc because a frame pointer was
suddenly needed halfway through register allocation.

<rdar://problem/10625436>

llvm-svn: 147487
2012-01-03 22:34:35 +00:00
Nadav Rotem 6d31bac85e Revert 147426 because it caused pr11696.
llvm-svn: 147485
2012-01-03 22:19:42 +00:00
Chad Rosier 493c1b3152 Enhance DAGCombine for transforming 128->256 casts into a vmovaps, rather
than a vxorps + vinsertf128 pair if the original vector came from a load.
rdar://10594409

llvm-svn: 147481
2012-01-03 21:05:52 +00:00
Matt Beaumont-Gay b982d8eb65 Fix malformed assert.
If anybody has strong feelings about 'default: assert(0 && "blah")' vs
'default: llvm_unreachable("blah")', feel free to regularize the instances of
each in this file.

llvm-svn: 147459
2012-01-03 19:03:59 +00:00
Devang Patel c1215324a3 Intel style asm variant does not need '%' prefix.
llvm-svn: 147453
2012-01-03 18:22:10 +00:00
Craig Topper 5bacb7e9e5 Miscellaneous shuffle lowering cleanup. No functional changes. Primarily converting the indexing loops to unsigned to be consistent across functions.
llvm-svn: 147430
2012-01-02 09:17:37 +00:00
Craig Topper 53d559641f Make CanXFormVExtractWithShuffleIntoLoad reject loads with multiple uses. Also make it return false if there's not even a load at all. This makes the code better match the code in DAGCombiner that it tries to match. These two changes prevent some cases where vector_shuffles were making it to instruction selection and causing the older shuffle selection code to be triggered. Also needed to fix a bad pattern that this change exposed. This is the first step towards getting rid of the old shuffle selection support. No test cases yet because there's no way to tell whether a shuffle was handled in the legalize stage or at instruction selection.
llvm-svn: 147428
2012-01-02 08:46:48 +00:00
Nadav Rotem 6c7a0e6c8b Optimize the sequence blend(sign_extend(x)) to blend(shl(x)) since SSE blend instructions only look at the highest bit.
llvm-svn: 147426
2012-01-02 08:05:46 +00:00
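
A sketch of the observation with SSE4.1 intrinsics (hypothetical helper): variable blends read only the sign bit of each lane, so shifting a selector bit up to bit 31 is as good as sign-extending it across the whole lane.

  #include <smmintrin.h>

  /* Select a[i] where bit 0 of mask[i] is 1, else b[i]. */
  __m128 select_ps(__m128 a, __m128 b, __m128i bit0_mask) {
    /* BLENDVPS tests only the MSB of each 32-bit lane, so a shift by 31
       suffices; sign-extending bit 0 across the lane would be redundant. */
    __m128i msb = _mm_slli_epi32(bit0_mask, 31);
    return _mm_blendv_ps(b, a, _mm_castsi128_ps(msb));
  }
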
Craig Topper b910984458 Allow CRC32 instructions to be selected when AVX is enabled.
llvm-svn: 147411
2012-01-01 19:51:58 +00:00
Craig Topper 1c064e0a89 Fix sfence, lfence, mfence, and clflush to be able to be selected when AVX is enabled. Fix monitor and mwait to require SSE3 or AVX, previously they worked even if SSE3 was disabled. Make prefetch instructions not set the execution domain since they don't use XMM registers.
llvm-svn: 147409
2012-01-01 19:40:22 +00:00
Benjamin Kramer 47aecca51a X86Disassembler: Fix undefined behavior found by GCC 4.6
llvm-svn: 147404
2012-01-01 17:55:36 +00:00
Craig Topper 6e54ba7eee Merge X86 SHUFPS and SHUFPD node types.
llvm-svn: 147394
2011-12-31 23:50:21 +00:00
Craig Topper d51092d93a Add patterns for integer forms of SHUFPD/VSHUFPD with a memory load.
llvm-svn: 147393
2011-12-31 23:24:49 +00:00
Craig Topper 0e796fee11 Fix typo in a SHUFPD and VSHUFPD pattern that prevented SHUFPD/VSHUFPD with a load from being selected.
llvm-svn: 147392
2011-12-31 23:15:11 +00:00
Bruno Cardoso Lopes cd1d447d62 Cleanup Mips code and rename some variables. Patch by Jack Carter
llvm-svn: 147383
2011-12-30 21:09:41 +00:00
Bruno Cardoso Lopes d5b2834fb7 Improve Mips JIT.
Implement encoder methods getJumpTargetOpValue and getBranchTargetOpValue
for jmptarget and brtarget Mips tablegen operand types in the code emitter
for old-style JIT. Rename the pc relative relocation for branches - new
name is Mips::reloc_mips_pc16.

Patch by Sasa Stankovic

llvm-svn: 147382
2011-12-30 21:04:30 +00:00
Craig Topper a5d1fc2cc7 Make FMA4 imply AVX so that YMM registers would be available. Necessitates removing it from the Bulldozer CPU types since it would enable AVX code generation implicitly. Also make SSE4A imply SSE3. Without some level of SSE implied, XMM registers wouldn't be legal.
llvm-svn: 147369
2011-12-30 07:16:00 +00:00
Craig Topper 2ba766ae84 Add disassembler support for VPERMIL2PD and VPERMIL2PS.
llvm-svn: 147368
2011-12-30 06:23:39 +00:00
Craig Topper 03a0beda88 Add FMA4 instructions to disassembler.
llvm-svn: 147367
2011-12-30 05:20:36 +00:00
Craig Topper cd93de93fa Separate the concept of having memory access in operand 4 from the concept of having the W bit set for XOP instructions. Removes ORing W-bits in the encoder and will similarly simplify the disassembler implementation.
llvm-svn: 147366
2011-12-30 04:48:54 +00:00
Craig Topper c0f9bcb5d5 Combine FMA4 SS/SD patterns with the instruction definitions.
llvm-svn: 147365
2011-12-30 03:33:59 +00:00
Craig Topper 51fe43fcd9 Combine FMA4 PS/PD patterns with the instruction definitions.
llvm-svn: 147364
2011-12-30 03:17:15 +00:00