Commit Graph

4949 Commits

Author SHA1 Message Date
Chris Lattner 58a9622957 expose intrinsic info to the targets.
llvm-svn: 27070
2006-03-24 18:44:11 +00:00
Chris Lattner d589dd1352 Fix a bad JIT encoding of VPERM. Why is VPERM D,A,B,C but vfmadd is D,A,C,B ??
llvm-svn: 27069
2006-03-24 18:24:43 +00:00
Chris Lattner f2286d5917 Like the comment says, prefer using the implicit add done by [r+r] addressing
modes over emitting an explicit add and using a base of r0.  This implements
Regression/CodeGen/PowerPC/mem-rr-addr-mode.ll

llvm-svn: 27068
2006-03-24 17:58:06 +00:00
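A minimal sketch of the trade-off described in the entry above (the register
numbers are illustrative, not taken from the commit): the indexed form folds
the add into the load,
        lwzx r3, r4, r5         ; EA = r4 + r5, one instruction
rather than materializing the sum first and loading through it:
        add r2, r4, r5
        lwz r3, 0(r2)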
Jim Laskey 864e444749 Clean up some commentary.
llvm-svn: 27064
2006-03-24 10:00:56 +00:00
Chris Lattner a90b7141ed Disable the i32->float G5 optimization. It is unsafe, as documented in the
comment.

This fixes 177.mesa, and McCat/09-vor with the td scheduler.

llvm-svn: 27060
2006-03-24 07:53:47 +00:00
Chris Lattner ab882abce8 add support for using vxor to build zero vectors. This implements
Regression/CodeGen/PowerPC/vec_zero.ll

llvm-svn: 27059
2006-03-24 07:48:08 +00:00
Evan Cheng 082c8785ef Handle BUILD_VECTOR with all zero elements.
llvm-svn: 27056
2006-03-24 07:29:27 +00:00
Chris Lattner f5efddf80b Gabor points out that we can't spell. :)
llvm-svn: 27049
2006-03-24 07:12:19 +00:00
Evan Cheng a91d8a5b43 All v2f64 shuffle cases can be handled.
llvm-svn: 27044
2006-03-24 06:40:32 +00:00
Evan Cheng 2595a687da More efficient v2f64 shuffle using movlhps, movhlps, unpckhpd, and unpcklpd.
llvm-svn: 27040
2006-03-24 02:58:06 +00:00
Evan Cheng 6afb3c2de7 A new entry
llvm-svn: 27039
2006-03-24 02:57:03 +00:00
Reid Spencer f9c3dcfdc1 Ignore the burg output files.
llvm-svn: 27033
2006-03-24 02:21:35 +00:00
Evan Cheng d27fb3e85e Handle more shuffle cases with SHUFP* instructions.
llvm-svn: 27024
2006-03-24 01:18:28 +00:00
Evan Cheng 4b5b4e373b Typo
llvm-svn: 27008
2006-03-23 23:24:51 +00:00
Chris Lattner cbcfe46556 add a note
llvm-svn: 27000
2006-03-23 21:28:44 +00:00
Evan Cheng f842ea57bb Typo
llvm-svn: 26997
2006-03-23 20:26:04 +00:00
Chris Lattner 81137629e0 Add PPC vector bit-convert support
llvm-svn: 26995
2006-03-23 19:54:27 +00:00
Jim Laskey 3c43609f1f Add support to locate local variables in frames (early version).
llvm-svn: 26994
2006-03-23 18:12:57 +00:00
Jim Laskey cf0166fbeb Change interface to DwarfWriter.
llvm-svn: 26991
2006-03-23 18:09:44 +00:00
Jim Laskey 267d39d128 Modify how CBE handles #lines.
llvm-svn: 26990
2006-03-23 18:08:29 +00:00
Chris Lattner ce0206e119 Fix the encodings of these new instructions, hopefully fixing the JIT
failures from last night

llvm-svn: 26981
2006-03-23 16:13:50 +00:00
Evan Cheng 82ed4a42f9 Following icc's lead: use movdqa to load / store 128-bit integer vectors
llvm-svn: 26980
2006-03-23 07:44:07 +00:00
Chris Lattner 6f95ab7abb Eliminate IntrinsicLowering from TargetMachine.
Make the CBE and V9 backends create their own, since they're the only ones that use it.

llvm-svn: 26974
2006-03-23 05:43:16 +00:00
Chris Lattner 811dd8d009 remove always-null IntrinsicLowering argument.
llvm-svn: 26971
2006-03-23 05:28:02 +00:00
Evan Cheng 7055878170 Add v4i32 <-> v4f32 bitconvert patterns.
llvm-svn: 26969
2006-03-23 02:36:37 +00:00
Evan Cheng b9b0550dc6 Add 128-bit integer vector load and add (for testing).
llvm-svn: 26967
2006-03-23 01:57:24 +00:00
Nate Begeman fb6e02931c Add support for 8 bit immediates with 16/32 bit cmp instructions
llvm-svn: 26966
2006-03-23 01:29:48 +00:00
Evan Cheng 021bb7c956 Added a ValueType operand to isShuffleMaskLegal(). For now, x86 will not do
64-bit vector shuffles.

llvm-svn: 26964
2006-03-22 22:07:06 +00:00
Evan Cheng ed794cd27b SHUFP* are two-address instructions.
llvm-svn: 26959
2006-03-22 20:08:18 +00:00
Evan Cheng bc04722860 Some clean up.
llvm-svn: 26957
2006-03-22 19:22:18 +00:00
Evan Cheng d4e1557941 - Supposedly movlhps is faster / better than unpcklpd.
- Don't forget pshufd is only available with sse2.

llvm-svn: 26956
2006-03-22 19:16:21 +00:00
Evan Cheng 68ad48bd1a - Implement X86ISelLowering::isShuffleMaskLegal(). We currently only support
splat and PSHUFD cases.
- Clean up shuffle / splat matching code.

llvm-svn: 26954
2006-03-22 18:59:22 +00:00
Evan Cheng 8fdbdf20cd - VECTOR_SHUFFLE of v4i32 / v4f32 with an undef second vector always matches
PSHUFD. We can make permute entries that point into the undef vector point to
  anything we want.
- Change some names to appease Chris.

llvm-svn: 26951
2006-03-22 08:01:21 +00:00
Chris Lattner e24cf9dfa1 add a note
llvm-svn: 26950
2006-03-22 07:33:46 +00:00
Evan Cheng 3617caf526 Fix PSHUF* and SHUF* jit code emission problems
llvm-svn: 26949
2006-03-22 07:10:28 +00:00
Chris Lattner eccf46950c This has been implemented. Tweak it into another note
llvm-svn: 26944
2006-03-22 05:33:23 +00:00
Chris Lattner 4a66d69433 When possible, custom lower 32-bit SINT_TO_FP to this:
_foo2:
        extsw r2, r3
        std r2, -8(r1)
        lfd f0, -8(r1)
        fcfid f0, f0
        frsp f1, f0
        blr

instead of this:

_foo2:
        lis r2, ha16(LCPI2_0)
        lis r4, 17200
        xoris r3, r3, 32768
        stw r3, -4(r1)
        stw r4, -8(r1)
        lfs f0, lo16(LCPI2_0)(r2)
        lfd f1, -8(r1)
        fsub f0, f1, f0
        frsp f1, f0
        blr

This speeds up Misc/pi from 2.44s->2.09s with LLC and from 3.01->2.18s
with llcbeta (16.7% and 38.1% respectively).

llvm-svn: 26943
2006-03-22 05:30:33 +00:00
Chris Lattner 77373d1bea Add support for "ri" addressing modes where the immediate is a 14-bit field
which is shifted left two bits before use.  Instructions like STD use this
addressing mode.

llvm-svn: 26942
2006-03-22 05:26:03 +00:00
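Put differently, the DS-form displacement used by instructions like STD is a
14-bit value scaled by 4, so only 4-byte-aligned offsets (roughly -32768 to
32764) are directly encodable; the offsets below are illustrative:
        std r3, 16(r1)          ; 16 is a multiple of 4, so it fits the DS field
        std r3, 18(r1)          ; 18 is not, so this needs the indexed (stdx) form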
Chris Lattner f5e36c8bc0 fix a warning
llvm-svn: 26941
2006-03-22 04:18:34 +00:00
Evan Cheng d097e67544 Some splat and shuffle support.
llvm-svn: 26940
2006-03-22 02:53:00 +00:00
Evan Cheng b1d3c64d1f Add a couple more pseudo instructions.
llvm-svn: 26939
2006-03-22 02:52:03 +00:00
Chris Lattner 4e7371758f Fix the JIT encoding of the VAForm_1 instructions, including vmaddfp
llvm-svn: 26935
2006-03-22 01:44:36 +00:00
Evan Cheng baea59c61c Didn't mean to check this in. No MMX support yet.
llvm-svn: 26933
2006-03-21 23:04:23 +00:00
Evan Cheng d5e905d762 - Use movaps to store 128-bit vector integers.
- Each scalar to vector v8i16 and v16i8 is an any_extend followed by a movd.

llvm-svn: 26932
2006-03-21 23:01:21 +00:00
Chris Lattner 00f4683bf6 These targets don't support EXTRACT_VECTOR_ELT, though, in time, X86 will.
llvm-svn: 26930
2006-03-21 20:51:05 +00:00
Chris Lattner 3a2ae6ad3c Don't emit pseudo instructions!
llvm-svn: 26926
2006-03-21 20:19:37 +00:00
Nate Begeman 013127981a Update readme
llvm-svn: 26924
2006-03-21 18:58:20 +00:00
Chris Lattner 139eac5b71 Print absolute memory references like this:
        lwz r2, 8(0)
instead of this:
        lwz r2, 8(r0)

This fixes the llc/llc-beta failures on PPC last night.

llvm-svn: 26922
2006-03-21 17:21:13 +00:00
Evan Cheng 2d819f5fa4 Combine 2 entries
llvm-svn: 26921
2006-03-21 07:18:26 +00:00
Evan Cheng aeebc96099 Add a note about x86 register coalescing
llvm-svn: 26920
2006-03-21 07:12:57 +00:00
Evan Cheng 1208d9179a - Remove scalar to vector pseudo ops. They are just wrong.
- Handle FR32 to VR128:v4f32 and FR64 to VR128:v2f64 with aliases of MOVAPS
and MOVAPD. Mark them as move instructions and *hope* they will be deleted.

llvm-svn: 26919
2006-03-21 07:09:35 +00:00
Chris Lattner bda7310ef7 With Evan's latest tblgen patch, this code is obsolete, thanks Evan!
llvm-svn: 26917
2006-03-21 06:37:40 +00:00
Chris Lattner d2132f87d7 When codegen'ing vector MUL using VFMADD, *add* the 0, don't *mul* the 0.
llvm-svn: 26913
2006-03-21 00:51:38 +00:00
Chris Lattner f194834161 minor note
llvm-svn: 26912
2006-03-21 00:47:09 +00:00
Evan Cheng e4d1416239 x86 ISD::SCALAR_TO_VECTOR support.
llvm-svn: 26911
2006-03-21 00:33:35 +00:00
Evan Cheng fb872b41c0 Junk unused vector register classes.
llvm-svn: 26910
2006-03-21 00:30:59 +00:00
Chris Lattner c8b16d00b9 Handle constant addresses more efficiently, folding the low bits into the
disp field of the load/store if possible.  This compiles
CodeGen/PowerPC/load-constant-addr.ll to:

_test:
        lis r2, 2838
        lfs f1, 26848(r2)
        blr

instead of:

_test:
        lis r2, 2838
        ori r2, r2, 26848
        lfs f1, 0(r2)
        blr

llvm-svn: 26908
2006-03-20 22:38:22 +00:00
Chris Lattner 6d74b09da7 remove dead variable
llvm-svn: 26907
2006-03-20 22:37:23 +00:00
Chris Lattner a1bc294f0c Fix a couple of bugs in permute/splat generation, thanks to Nate for actually
figuring these out! :)

llvm-svn: 26904
2006-03-20 18:26:51 +00:00
Chris Lattner eda030da04 reenable this hack; the tblgen version isn't quite ready
llvm-svn: 26902
2006-03-20 17:54:43 +00:00
Chris Lattner f96d523b8f Fix the pattern for VADDUWM, add i32 splat
llvm-svn: 26901
2006-03-20 17:51:58 +00:00
Evan Cheng 89f3cff0f5 Use tblgen'd VECTOR_SHUFFLE selection code.
llvm-svn: 26900
2006-03-20 08:14:16 +00:00
Chris Lattner a9a1313386 Add support for generating vspltw, instead of a vperm instruction with a
constant pool load.  This generates significantly nicer code for splats.

When tblgen gets bugfixed, we can remove the custom selection code.

llvm-svn: 26898
2006-03-20 06:51:10 +00:00
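Roughly, the improvement looks like this (registers are illustrative): a single
splat instruction,
        vspltw v2, v3, 0        ; broadcast word 0 of v3 into all four elements of v2
replaces loading a 16-byte permute-control mask from the constant pool and permuting:
        lvx v4, 0, r2           ; r2 points at the constant-pool mask
        vperm v2, v3, v3, v4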
Chris Lattner a8fbb6dd3d Implement PPC::isSplatShuffleMask and PPC::getVSPLTImmediate.
llvm-svn: 26897
2006-03-20 06:37:44 +00:00
Chris Lattner ffc475689b fix duplicate definition errors
llvm-svn: 26896
2006-03-20 06:33:01 +00:00
Chris Lattner 80b6bd2746 Add a build_vector node
llvm-svn: 26895
2006-03-20 06:18:01 +00:00
Chris Lattner 382f356bd9 Check in some intermediate code that adds a skeleton for matching vsplt*
instructions

llvm-svn: 26894
2006-03-20 06:15:45 +00:00
Evan Cheng e6448448c2 Move a few things around.
llvm-svn: 26893
2006-03-20 06:04:52 +00:00
Chris Lattner e4e1ac37ba add vector_shuffle
llvm-svn: 26891
2006-03-20 05:40:45 +00:00
Chris Lattner 93d99f9928 fix typo
llvm-svn: 26889
2006-03-20 05:05:55 +00:00
Chris Lattner 366b2514fa add vsplat instructions, fix sched description for vperm
llvm-svn: 26888
2006-03-20 04:47:33 +00:00
Chris Lattner a8713b1ee6 Custom lower arbitrary VECTOR_SHUFFLE's to VPERM.
TODO: leave specific ones as VECTOR_SHUFFLE's and turn them into specialized
operations like vsplt*

llvm-svn: 26887
2006-03-20 01:53:53 +00:00
Chris Lattner 0a8b4eaee9 Claim to have v16i8 for perm masks
llvm-svn: 26886
2006-03-20 01:53:02 +00:00
Chris Lattner e7a058de7d add the vperm instruction
llvm-svn: 26883
2006-03-20 01:00:56 +00:00
Chris Lattner d16f6fdd49 add a note with a testcase
llvm-svn: 26877
2006-03-19 22:27:41 +00:00
Chris Lattner 169e6238ad Add a note about the MUL -> FMADD vector bug.
llvm-svn: 26874
2006-03-19 22:08:08 +00:00
Evan Cheng f7c2e3628b Vector undef's
llvm-svn: 26870
2006-03-19 09:38:54 +00:00
Chris Lattner 7e9440a4fc Custom lower SCALAR_TO_VECTOR into lve*x.
llvm-svn: 26868
2006-03-19 06:55:52 +00:00
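As a rough sketch (registers are illustrative, and the lane that gets written
depends on the address's alignment within 16 bytes), a scalar in memory can be
loaded directly into a vector register:
        lvewx v2, 0, r3         ; load the word at (r3) into one lane of v2;
                                ; the other lanes are undefined, which is all
                                ; SCALAR_TO_VECTOR requires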
Chris Lattner b1ee9c7e24 PPC doesn't have SCALAR_TO_VECTOR
llvm-svn: 26865
2006-03-19 06:17:19 +00:00
Chris Lattner 5b595af956 add support for vector undef
llvm-svn: 26863
2006-03-19 06:10:09 +00:00
Evan Cheng 0a03f789c2 Remind us of exit value substitution
llvm-svn: 26862
2006-03-19 06:09:23 +00:00
Evan Cheng 5111c81a3c Turning on LSR by default
llvm-svn: 26861
2006-03-19 06:08:49 +00:00
Evan Cheng 66a9c0dea7 Remember which tests are hurt by LSR.
llvm-svn: 26860
2006-03-19 06:08:11 +00:00
Chris Lattner 0c9eb670bb minor fixes
llvm-svn: 26857
2006-03-19 05:43:01 +00:00
Chris Lattner ea6468758d notes
llvm-svn: 26856
2006-03-19 05:33:30 +00:00
Chris Lattner 431c90c9fa we don't use lmw/stmw. When we want them, they are easy enough to add
llvm-svn: 26853
2006-03-19 04:33:37 +00:00
Chris Lattner f7b6e7212f rename these nodes
llvm-svn: 26848
2006-03-19 01:13:28 +00:00
Evan Cheng 9bf978dc20 Use the generic vector register classes VR64 / VR128 rather than V4F32,
V8I16, etc.

llvm-svn: 26838
2006-03-18 01:23:20 +00:00
Nate Begeman 21f87d0e4c Fix subfic to match subc by default instead of sub so that it is correctly
cost-modeled as producing a flag.  This fixes the test I just added for neg

llvm-svn: 26835
2006-03-17 22:41:37 +00:00
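For reference, subfic computes an immediate minus a register and, like subc,
defines the carry bit, which is why it must be cost-modeled as flag-producing;
the operands below are illustrative:
        subfic r3, r4, 0        ; r3 = 0 - r4, with CA set by the subtraction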
Evan Cheng b09a56f3a4 Darwin should use _setjmp/_longjmp instead of setjmp/longjmp.
llvm-svn: 26833
2006-03-17 20:31:41 +00:00
Evan Cheng 4f674921d6 Move some pattern fragments to the right files.
llvm-svn: 26831
2006-03-17 19:55:52 +00:00
Chris Lattner 388fc4d9fb Disable x86 fastcc from passing args in registers
llvm-svn: 26824
2006-03-17 17:27:47 +00:00
Chris Lattner 43798850f9 Parameterize the number of integer arguments to pass in registers
llvm-svn: 26818
2006-03-17 05:10:20 +00:00
Evan Cheng bfc2e97383 Also fold MOV8r0, MOV16r0, MOV32r0 + store to MOV8mi, MOV16mi, and MOV32mi.
llvm-svn: 26817
2006-03-17 02:36:22 +00:00
Evan Cheng aca7915b70 Add some missing entries to X86RegisterInfo::foldMemoryOperand(). e.g.
ADD32ri8.

llvm-svn: 26816
2006-03-17 02:25:01 +00:00
Evan Cheng 27750f3287 - Nuke 16-bit SBB instructions. We'll never use them.
- Nuke a bogus comment.

llvm-svn: 26815
2006-03-17 02:24:04 +00:00
Nate Begeman bb01d4f272 Remove BRTWOWAY*
Make the PPC backend not dependent on BRTWOWAY_CC and make the branch
selector smarter about the code it generates, fixing a case in the
readme.

llvm-svn: 26814
2006-03-17 01:40:33 +00:00
Chris Lattner 8bf1c59e7f remove dead variable
llvm-svn: 26813
2006-03-16 23:52:08 +00:00
Evan Cheng c11fcceec5 A new entry.
llvm-svn: 26810
2006-03-16 22:44:22 +00:00
Nate Begeman fb0e36fa56 Notes on how to kill the eeevil brtwoway, and make the PPC branch selector
more target independent, generate better code, and be less conservative.

llvm-svn: 26809
2006-03-16 22:37:48 +00:00