Commit Graph

143 Commits

Author SHA1 Message Date
Nadav Rotem 486ff59a9f Enable element promotion type legalization by default.
Changed tests which assumed that vectors are legalized by widening them.

llvm-svn: 142152
2011-10-16 20:31:33 +00:00
Nadav Rotem bc25b6eb67 Fix a bug in LowerV2I64Splat, which generated a BUILD_VECTOR for which there was
no pattern.

llvm-svn: 142130
2011-10-16 10:02:06 +00:00
Kalle Raiskila 3815de8d50 Mark 'branch indirect' instruction as an indirect branch.
Not marking it as such confused assembly printing of jump tables.

llvm-svn: 141862
2011-10-13 11:40:03 +00:00
Kalle Raiskila f5769c1070 Pass signed (not unsigned) 10 bit field to SPU 'ori' instruction.
llvm-svn: 139004
2011-09-02 10:05:01 +00:00
Chris Lattner 5756c16cdf make the asmparser reject function and type redefinitions. 'Merging' hasn't been
needed since llvm-gcc 3.4 days.

llvm-svn: 133248
2011-06-17 07:06:44 +00:00
Chris Lattner b90ed2233c manually upgrade a bunch of tests to modern syntax, and remove some that
are either unreduced or only test old syntax.

llvm-svn: 133228
2011-06-17 03:14:27 +00:00
Chris Lattner 1c42a4d159 don't test for codegen of 'store undef'
llvm-svn: 129184
2011-04-09 02:31:26 +00:00
Cameron Zwarich 338d362200 Roll r127459 back in:
Optimize trivial branches in CodeGenPrepare, which often get created from the
lowering of objectsize intrinsics. Unfortunately, a number of tests were relying
on llc not optimizing trivial branches, so I had to add an option to allow them
to continue to test what they originally tested.

This fixes <rdar://problem/8785296> and <rdar://problem/9112893>.

llvm-svn: 127498
2011-03-11 21:52:04 +00:00
Daniel Dunbar 94ccb27b43 Revert r127459, "Optimize trivial branches in CodeGenPrepare, which often get
created from the", it broke some GCC test suite tests.

llvm-svn: 127477
2011-03-11 19:30:30 +00:00
Cameron Zwarich cc27b3acc4 Optimize trivial branches in CodeGenPrepare, which often get created from the
lowering of objectsize intrinsics. Unfortunately, a number of tests were relying
on llc not optimizing trivial branches, so I had to add an option to allow them
to continue to test what they originally tested.

This fixes <rdar://problem/8785296> and <rdar://problem/9112893>.

llvm-svn: 127459
2011-03-11 04:54:27 +00:00
Benjamin Kramer 1885d21700 Fix mistyped CHECK lines.
llvm-svn: 127366
2011-03-09 22:07:31 +00:00
Joerg Sonnenberger 62f759791a Be nice to Xcore and the XMOS assembler and avoid quoting section names
that contain only letters, digits and the characters "_" and ".".

llvm-svn: 127028
2011-03-04 20:03:14 +00:00
Kalle Raiskila a1d947dd14 Allow vector shifts (shl,lshr,ashr) on SPU.
There was a previous implementation with patterns that would 
have matched e.g. 
	shl <v4i32> <i32>,
but this is not valid LLVM IR so they never were selected.

llvm-svn: 126998
2011-03-04 13:19:18 +00:00
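
For reference, valid LLVM IR requires the shift amount to be a vector of the same type as the shifted value; a minimal sketch of the form that can now be selected (the function name is illustrative, not taken from the commit):

	; both operands are <4 x i32>; a scalar shift amount would be invalid IR
	define <4 x i32> @shl_v4i32(<4 x i32> %val, <4 x i32> %amt) {
	  %r = shl <4 x i32> %val, %amt
	  ret <4 x i32> %r
	}
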
Kalle Raiskila 3531e9b0d9 Allow load from constant on SPU.
A 'load <4 x i32>* null' crashes llc before this fix.

llvm-svn: 126995
2011-03-04 12:00:11 +00:00
Joerg Sonnenberger 852ab890b5 Bug#9033: For the ELF assembler output, always quote the section name.
llvm-svn: 126963
2011-03-03 22:31:08 +00:00
Chris Lattner 2a720d933a fix visitShift to properly zero extend the shift amount if the provided operand
is narrower than the shift register.  Doing an anyext provides undefined bits in
the top part of the register.

llvm-svn: 125457
2011-02-13 09:02:52 +00:00
Kalle Raiskila 6e5a54b36c Allow sign-extending of i8 and i16 to i128 on SPU.
llvm-svn: 123912
2011-01-20 15:49:06 +00:00
Kalle Raiskila 7e7b4ac751 Don't crash the SPU BE on memory accesses with big alignment.
llvm-svn: 123620
2011-01-17 11:59:20 +00:00
Kalle Raiskila affe15fd67 Don't feed 19-bit immediates to ILA.
Patch (slightly modified) by Visa Putkinen.

llvm-svn: 122052
2010-12-17 09:36:09 +00:00
Devang Patel c24048a718 If dbg_declare() or dbg_value() is not lowered by isel, then emit a DEBUG message instead of creating a DBG_VALUE for an undefined value in reg0.
llvm-svn: 121059
2010-12-06 22:39:26 +00:00
Kalle Raiskila 1ff0bfa28f Handle lshr for i128 correctly on SPU, also when
the shift amount > 7.

llvm-svn: 120288
2010-11-29 14:44:28 +00:00
Kalle Raiskila dc620afd1e Enable PostRA scheduling for SPU.
This speeds up selected test cases by up to
5%; no slowdowns observed.

llvm-svn: 120286
2010-11-29 10:30:25 +00:00
Kalle Raiskila 97fc68774c Allow for 'fcmp ogt' in SPU.
Fix by Visa Putkinen!

llvm-svn: 120090
2010-11-24 11:42:17 +00:00
Kalle Raiskila e1b6c273b8 Division by a power of 2 is not cheap on SPU; do it with
shifts.

llvm-svn: 120022
2010-11-23 13:27:59 +00:00
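
The commit does not show the expansion; a rough sketch of the usual shift-based lowering of a signed divide by a power of two (here 4), where the bias makes negative dividends round toward zero (illustrative IR, not the backend's exact output):

	define i32 @sdiv_by_4(i32 %x) {
	  %sign = ashr i32 %x, 31      ; all ones if %x is negative, else zero
	  %bias = lshr i32 %sign, 30   ; 3 for a negative %x, 0 otherwise
	  %adj  = add i32 %x, %bias    ; bias so the truncation rounds toward zero
	  %q    = ashr i32 %adj, 2     ; arithmetic shift right by log2(4)
	  ret i32 %q
	}
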
Kalle Raiskila 77d11d054c Fix a bug with extractelement on SPU.
In the attached testcase, the element was
never extracted (missing rotate).

llvm-svn: 119973
2010-11-22 16:28:26 +00:00
Kalle Raiskila 0a9dd405a5 Fix memory access lowering on SPU, adding
support for the case where alignment < value size.

These cases were silently miscompiled before this patch.
Now they are overly verbose (especially stores), and
any front-end should still avoid misaligned memory
accesses as much as possible. The bit juggling algorithm
added here probably still has some room for improvement.

llvm-svn: 118889
2010-11-12 10:14:03 +00:00
Kalle Raiskila a49d062234 Change v64 datalayout in SPU.
The SPU ABI does not mention v64, and all examples
in C suggest v128 is treated similarly to arrays,
so we use array alignment for v64 too. This makes the
alignment of e.g. [2 x <2 x i32>] behave "intuitively",
as if the elements were e.g. i32s.

This also makes an "unaligned store" test
aligned, with different (but functionally equivalent)
code generated.

llvm-svn: 117360
2010-10-26 10:45:47 +00:00
Kalle Raiskila 5f2034c455 Improve lowering of sext to i128 on SPU.
The old algorithm inserted a 'rotqmbyi' instruction which was
both redundant and wrong - it made shufb select bytes from the
wrong end of the input quad.

llvm-svn: 116701
2010-10-18 09:34:19 +00:00
Kalle Raiskila 56f7cd255b Zap some redundant 'ori $?, $?, 0' from SPU.
Also remove some code that died in the process.
One now non-existent ori is checked for.

llvm-svn: 115306
2010-10-01 09:20:01 +00:00
Kalle Raiskila c0e9b8d8bb Change SPU register re-interpretations from OR to COPY_TO_REGCLASS instruction.
This cleans up after the mess r108567 left in the CellSPU backend.
ORCvt instructions were used to reinterpret registers, and the ORs were then
removed by isMoveInstr(). This patch now removes 350 instructions of the form:
	or $3, $3, $3
(from the 52 testcases in CodeGen/CellSPU). One case of a nonexistent or is
checked for.

Some moves of the form 'ori $., $., 0' and 'ai $., $., 0' still remain.

llvm-svn: 114074
2010-09-16 12:29:33 +00:00
Kalle Raiskila e542972828 Fix CellSPU vector shuffles, again.
Some cases of lowering to rotate were miscompiled.

llvm-svn: 113355
2010-09-08 11:53:38 +00:00
Kalle Raiskila 1e616572d9 Fix lowering of INSERT_VECTOR_ELT in SPU.
The IDX was treated as byte index, not element index.

llvm-svn: 112422
2010-08-29 12:41:50 +00:00
Kalle Raiskila 7e25bc4145 Fix SPU BE to use all the available return registers.
llc used to assert on the added testcase.

llvm-svn: 111911
2010-08-24 11:50:48 +00:00
Kalle Raiskila e60b5161d1 Fix a bug with insertelement on SPU.
The previous algorithm in LowerVECTOR_SHUFFLE 
didn't check all requirements for "monotonic" shuffles.

llvm-svn: 111361
2010-08-18 10:20:29 +00:00
Kalle Raiskila ab49360f59 Remove all traces of v2[i,f]32 on SPU.
The "half vectors" are now widened to full size by the legalizer.
The only exception is in parameter passing, where half vectors are 
expanded. This causes changes to some dejagnu tests.

llvm-svn: 111360
2010-08-18 10:04:39 +00:00
Kalle Raiskila f3984d1ef6 Change SPU C calling convention to match that described in
"SPU Application Binary Interface Specification, v1.9" by
IBM. 
Specifically: use r3-r74 to pass parameters and the return value.

llvm-svn: 111358
2010-08-18 09:50:30 +00:00
Kalle Raiskila 999da1f3a0 Have SPU handle halfvec stores aligned by 8 bytes.
llvm-svn: 110576
2010-08-09 16:33:00 +00:00
Kalle Raiskila 8b2f70125f Make SPU backend handle insertelement and
store for "half vectors"

llvm-svn: 110198
2010-08-04 13:59:48 +00:00
Kalle Raiskila 77558b7d13 More SPU v2f32 stuff added: insertelement and shuffle.
llvm-svn: 110038
2010-08-02 11:22:10 +00:00
Kalle Raiskila 68b3886678 Add preliminary v2f32 support for SPU. Like with v2i32, we just
duplicate the instructions and operate on half vectors. 

Also reorder code in SPUInstrInfo.td for better coherency.

llvm-svn: 110037
2010-08-02 10:25:47 +00:00
Kalle Raiskila 622f8eb981 Add preliminary v2i32 support for SPU backend. As there are no
such registers in SPU, this support boils down to "emulating" 
them by duplicating instructions on the general purpose registers. 

This adds the most basic operations on v2i32: passing parameters,
addition, subtraction, multiplication and a few others.

llvm-svn: 110035
2010-08-02 08:54:39 +00:00
Jakob Stoklund Olesen 37c42a3d02 Remove many calls to TII::isMoveInstr. Targets should be producing COPY anyway.
TII::isMoveInstr is going to be completely removed.

llvm-svn: 108507
2010-07-16 04:45:42 +00:00
Benjamin Kramer 3bbc52ce3e Fix some tests that didn't test anything.
llvm-svn: 106954
2010-06-26 20:05:06 +00:00
Kalle Raiskila df071b7e42 Add the check to the testcase of r106419.
llvm-svn: 106421
2010-06-21 15:11:51 +00:00
Kalle Raiskila 0ab5a02579 Mark the SPU 'lr' instruction to never have side effects.
This allows the fast register allocator to remove redundant
register moves.
Update a set of tests that depend on the register allocator
being linear scan.

llvm-svn: 106420
2010-06-21 15:08:16 +00:00
Kalle Raiskila d7f50c118a Fix the lowering of VECTOR_SHUFFLE on SPU to handle splats.
llvm-svn: 106419
2010-06-21 14:42:19 +00:00
Kalle Raiskila 6f58190f6f Fix lowering of VECTOR_SHUFFLE on SPU. The old algorithm
used to choke llc on the attached test.

llvm-svn: 106411
2010-06-21 10:17:36 +00:00
Kalle Raiskila 5e0862f7f5 Fix SPU to cope with vector insertelement to an undef position.
We default to inserting to lane 0.

llvm-svn: 105722
2010-06-09 09:58:17 +00:00
Kalle Raiskila 056113a211 Handle loading from/storing to undef pointers on SPU by inserting a
random load/store, rather than crashing llc.

llvm-svn: 105710
2010-06-09 08:29:41 +00:00
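
The kind of input this refers to, in the typed-pointer IR syntax of the time (a sketch, not the actual test case from the commit):

	define i32 @load_from_undef() {
	  %v = load i32* undef   ; used to crash llc when targeting SPU
	  ret i32 %v
	}
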
Kalle Raiskila 8916358f97 Fix handling of 'load' nodes.
llvm-svn: 105269
2010-06-01 13:34:47 +00:00