Commit Graph

124 Commits

Author SHA1 Message Date
Devang Patel c24048a718 If dbg_declare() or dbg_value() is not lowered by isel then emit DEBUG message instead of creating DBG_VALUE for undefined value in reg0.
llvm-svn: 121059
2010-12-06 22:39:26 +00:00
Kalle Raiskila 1ff0bfa28f Handle lshr for i128 correctly on SPU also when
the shift amount is > 7.

llvm-svn: 120288
2010-11-29 14:44:28 +00:00
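For illustration, a minimal sketch (not the committed testcase) of IR that exercises this path - an i128 logical shift right by more than 7:

    define i128 @lshr_i128(i128 %x) {
      ; shift amounts greater than 7 were mishandled before this fix
      %r = lshr i128 %x, 9
      ret i128 %r
    }
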
Kalle Raiskila dc620afd1e Enable PostRA scheduling for SPU.
This speeds up selected test cases by up to
5%; no slowdowns were observed.

llvm-svn: 120286
2010-11-29 10:30:25 +00:00
Kalle Raiskila 97fc68774c Allow for 'fcmp ogt' in SPU.
Fix by Visa Putkinen!

llvm-svn: 120090
2010-11-24 11:42:17 +00:00
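A minimal sketch of the comparison now accepted (illustrative only, not the committed testcase):

    define i1 @ogt(float %a, float %b) {
      ; ordered greater-than comparison, previously not handled by the SPU backend
      %c = fcmp ogt float %a, %b
      ret i1 %c
    }
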
Kalle Raiskila e1b6c273b8 Division by a power of 2 is not cheap on SPU, so do it with
shifts.

llvm-svn: 120022
2010-11-23 13:27:59 +00:00
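As an illustrative sketch (not the committed testcase), a signed division that this change lowers to shift-based code instead of a full division sequence:

    define i32 @div8(i32 %x) {
      ; sdiv by a power of 2 should now be lowered with shifts on SPU
      %q = sdiv i32 %x, 8
      ret i32 %q
    }
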
Kalle Raiskila 77d11d054c Fix a bug with extractelement on SPU.
In the attached testcase, the element was
never extracted (missing rotate).

llvm-svn: 119973
2010-11-22 16:28:26 +00:00
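A sketch of the affected pattern (illustrative, not the attached testcase): extracting a lane that is not already in the preferred slot requires a rotate on SPU:

    define i32 @extract(<4 x i32> %v) {
      ; lane 2 must be rotated into the preferred slot before it can be read
      %e = extractelement <4 x i32> %v, i32 2
      ret i32 %e
    }
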
Kalle Raiskila 0a9dd405a5 Fix memory access lowering on SPU, adding
support for the case where alignment < value size.

These cases were silently miscompiled before this patch.
Now they are overly verbose (especially the stores), and
any front-end should still avoid misaligned memory
accesses as much as possible. The bit-juggling algorithm
added here probably still has some room for improvement.

llvm-svn: 118889
2010-11-12 10:14:03 +00:00
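For illustration (a sketch, not the committed testcase), a store whose alignment is smaller than the value size, the case that was previously silently miscompiled:

    define void @store_unaligned(i32 %v, i32* %p) {
      ; alignment (1 byte) < value size (4 bytes) triggers the new lowering
      store i32 %v, i32* %p, align 1
      ret void
    }
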
Kalle Raiskila a49d062234 Change v64 datalayout in SPU.
The SPU ABI does not mention v64, and all C examples
suggest v128 is treated similarly to arrays, so we use
array alignment for v64 too. This makes the alignment of
e.g. [2 x <2 x i32>] behave "intuitively", as if the
elements were plain i32s.

This also makes an "unaligned store" test become
aligned, with different (but functionally equivalent)
code generated.

llvm-svn: 117360
2010-10-26 10:45:47 +00:00
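An illustrative sketch (not the committed test) of the kind of type whose alignment this change affects:

    ; with array-style alignment for v64, the two <2 x i32> elements are
    ; laid out as if they were plain pairs of i32
    @pair = global [2 x <2 x i32>] zeroinitializer
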
Kalle Raiskila 5f2034c455 Improve lowering of sext to i128 on SPU.
The old algorithm inserted a 'rotqmbyi' instruction which was
both redundant and wrong - it made shufb select bytes from the
wrong end of the input quad.

llvm-svn: 116701
2010-10-18 09:34:19 +00:00
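A minimal sketch of the affected lowering (illustrative only, not the committed testcase):

    define i128 @sext_to_i128(i32 %x) {
      ; the old lowering inserted a redundant rotqmbyi that fed shufb
      ; with bytes from the wrong end of the input quad
      %r = sext i32 %x to i128
      ret i128 %r
    }
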
Kalle Raiskila 56f7cd255b Zap some redundant 'ori $?, $?, 0' from SPU.
Also remove some code that died in the process.
One now-nonexistent ori is checked for.

llvm-svn: 115306
2010-10-01 09:20:01 +00:00
Kalle Raiskila c0e9b8d8bb Change SPU register re-interpretations from OR to COPY_TO_REGCLASS instruction.
This cleans up after the mess r108567 left in the CellSPU backend.
ORCvt instructions were used to reinterpret registers, and the ORs were then
removed by isMoveInstr(). This patch now removes 350 instructions of the form:
	or $3, $3, $3
(from the 52 testcases in CodeGen/CellSPU). One case of a nonexistent or is
checked for.

Some moves of the form 'ori $., $., 0' and 'ai $., $., 0' still remain.

llvm-svn: 114074
2010-09-16 12:29:33 +00:00
Kalle Raiskila e542972828 Fix CellSPU vector shuffles, again.
Some cases of lowering to rotate were miscompiled.

llvm-svn: 113355
2010-09-08 11:53:38 +00:00
Kalle Raiskila 1e616572d9 Fix lowering of INSERT_VECTOR_ELT in SPU.
The IDX was treated as byte index, not element index.

llvm-svn: 112422
2010-08-29 12:41:50 +00:00
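For illustration (not the committed testcase): the index below selects the third 32-bit element; the old code wrongly treated it as a byte offset:

    define <4 x i32> @set_lane(<4 x i32> %v, i32 %x) {
      ; index 2 is an element index, not a byte index
      %r = insertelement <4 x i32> %v, i32 %x, i32 2
      ret <4 x i32> %r
    }
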
Kalle Raiskila 7e25bc4145 Fix SPU BE to use all the available return registers.
llc used to assert on the added testcase.

llvm-svn: 111911
2010-08-24 11:50:48 +00:00
Kalle Raiskila e60b5161d1 Fix a bug with insertelement on SPU.
The previous algorithm in LowerVECTOR_SHUFFLE 
didn't check all requirements for "monotonic" shuffles.

llvm-svn: 111361
2010-08-18 10:20:29 +00:00
Kalle Raiskila ab49360f59 Remove all traces of v2[i,f]32 on SPU.
The "half vectors" are now widened to full size by the legalizer.
The only exception is in parameter passing, where half vectors are 
expanded. This causes changes to some dejagnu tests.

llvm-svn: 111360
2010-08-18 10:04:39 +00:00
Kalle Raiskila f3984d1ef6 Change SPU C calling convention to match that described in
"SPU Application Binary Interface Specification, v1.9" by
IBM. 
Specifically: use r3-r74 to pass parameters and the return value.

llvm-svn: 111358
2010-08-18 09:50:30 +00:00
Kalle Raiskila 999da1f3a0 Have SPU handle halfvec stores aligned by 8 bytes.
llvm-svn: 110576
2010-08-09 16:33:00 +00:00
Kalle Raiskila 8b2f70125f Make SPU backend handle insertelement and
store for "half vectors"

llvm-svn: 110198
2010-08-04 13:59:48 +00:00
Kalle Raiskila 77558b7d13 More SPU v2f32 stuff added: insertelement and shuffle.
llvm-svn: 110038
2010-08-02 11:22:10 +00:00
Kalle Raiskila 68b3886678 Add preliminary v2f32 support for SPU. Like with v2i32, we just
duplicate the instructions and operate on half vectors. 

Also reorder code in SPUInstrInfo.td for better coherency.

llvm-svn: 110037
2010-08-02 10:25:47 +00:00
Kalle Raiskila 622f8eb981 Add preliminary v2i32 support for SPU backend. As there are no
such registers in SPU, this support boils down to "emulating" 
them by duplicating instructions on the general purpose registers. 

This adds the most basic operations on v2i32: passing parameters,
addition, subtraction, multiplication and a few others.

llvm-svn: 110035
2010-08-02 08:54:39 +00:00
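An illustrative sketch (not from the committed tests) of one of the basic v2i32 operations now supported:

    define <2 x i32> @add_v2i32(<2 x i32> %a, <2 x i32> %b) {
      ; handled by duplicating the add on the general purpose registers
      %r = add <2 x i32> %a, %b
      ret <2 x i32> %r
    }
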
Jakob Stoklund Olesen 37c42a3d02 Remove many calls to TII::isMoveInstr. Targets should be producing COPY anyway.
TII::isMoveInstr is going to be completely removed.

llvm-svn: 108507
2010-07-16 04:45:42 +00:00
Benjamin Kramer 3bbc52ce3e Fix some tests that didn't test anything.
llvm-svn: 106954
2010-06-26 20:05:06 +00:00
Kalle Raiskila df071b7e42 Add the check to the testcase of r106419.
llvm-svn: 106421
2010-06-21 15:11:51 +00:00
Kalle Raiskila 0ab5a02579 Mark the SPU 'lr' instruction to never have side effects.
This allows the fast register allocator to remove redundant
register moves.
Update a set of tests that depend on the register allocator
being linear scan.

llvm-svn: 106420
2010-06-21 15:08:16 +00:00
Kalle Raiskila d7f50c118a Fix the lowering of VECTOR_SHUFFLE on SPU to handle splats.
llvm-svn: 106419
2010-06-21 14:42:19 +00:00
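A sketch of a splat-style shuffle of the kind this lowering now handles (illustrative only):

    define <4 x i32> @splat0(<4 x i32> %v) {
      ; a mask that repeats a single lane is a splat
      %s = shufflevector <4 x i32> %v, <4 x i32> undef, <4 x i32> zeroinitializer
      ret <4 x i32> %s
    }
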
Kalle Raiskila 6f58190f6f Fix lowering of VECTOR_SHUFFLE on SPU. The old algorithm
used to choke llc on the attached test.

llvm-svn: 106411
2010-06-21 10:17:36 +00:00
Kalle Raiskila 5e0862f7f5 Fix SPU to cope with vector insertelement to an undef position.
We default to inserting to lane 0.

llvm-svn: 105722
2010-06-09 09:58:17 +00:00
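For illustration (not the committed testcase), an insert at an undef position, which now defaults to lane 0:

    define <4 x i32> @insert_undef_lane(<4 x i32> %v, i32 %x) {
      ; an undef index is treated as lane 0 instead of crashing llc
      %r = insertelement <4 x i32> %v, i32 %x, i32 undef
      ret <4 x i32> %r
    }
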
Kalle Raiskila 056113a211 Handle loading from/storing to undef pointers on SPU by inserting a
random load/store, rather than crashing llc.

llvm-svn: 105710
2010-06-09 08:29:41 +00:00
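A minimal sketch of the kind of IR that used to crash llc (illustrative, not the committed testcase):

    define void @store_to_undef(i32 %v) {
      ; a store through an undef pointer is now lowered to an arbitrary
      ; store instead of crashing the compiler
      store i32 %v, i32* undef
      ret void
    }
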
Kalle Raiskila 8916358f97 Fix handling of 'load' nodes.
llvm-svn: 105269
2010-06-01 13:34:47 +00:00
Kalle Raiskila 9dd3ef8d01 Make SPU backend not assert on jump tables.
llvm-svn: 103466
2010-05-11 11:00:02 +00:00
Kalle Raiskila 92ea401d8f Fix encoding of 'sf' and 'sfh' instructions.
llvm-svn: 103399
2010-05-10 08:13:49 +00:00
Chris Lattner 0185047b3f "on the rare occasion the SPU BE produces illegal assembly - it tries to emit an add instruction of the form 'a reg, reg, imm'."
Patch by Kalle Raiskila!

llvm-svn: 103021
2010-05-04 17:58:46 +00:00
Chris Lattner 38c1a1a247 teach cellspu how to return i8 and i16 from calls,
patch by Kalle Raiskila!

llvm-svn: 101875
2010-04-20 05:36:09 +00:00
Benjamin Kramer 7e4a475929 Make sure this test tests something.
llvm-svn: 100879
2010-04-09 19:03:31 +00:00
Chris Lattner 1ef9826ff8 "On SPU, variables in the .bss section that are allocated with the .lcomm directive are not aligned on 16-byte boundaries. This causes misaligned loads, as the generated assembly assumes this "default" alignment.
This patch disables .lcomm in favour of '.local .comm'.

Patch by Kalle Raiskila!

llvm-svn: 100875
2010-04-09 18:27:03 +00:00
Dale Johannesen f118f9788b Split big test into multiple directories to cater to
those who don't build all targets.

llvm-svn: 100688
2010-04-07 20:43:35 +00:00
Chris Lattner f60c556b91 From Kalle Raiskila:
"the bigstack patch for SPU, with testcase. It is essentially the patch committed as 97091, and reverted as 97099, but with the following additions:
- in vararg handling, registers are marked as live so as not to confuse the register scavenger
- the function prologue and epilogue are not emitted if the stack size is 16. A size of 16 means the stack is empty - there is only the register scavenger's emergency spill slot, which is unused because there is no stack."

llvm-svn: 99819
2010-03-29 17:38:47 +00:00
Chris Lattner f0692603d5 fix bss section printing for cell, patch by Kalle Raiskila!
llvm-svn: 97814
2010-03-05 18:55:36 +00:00
Chris Lattner a6368219ac don't let asm-verbose break the check-next lines in these tests.
llvm-svn: 93869
2010-01-19 06:39:54 +00:00
Evan Cheng 166a4e6caa Teach dag combine to fold the following transformation more aggressively:
(OP (trunc x), (trunc y)) -> (trunc (OP x, y))

Unfortunately this simple change causes dag combine to loop infinitely. The problem is that the shrink-demanded-ops optimizations tend to canonicalize expressions in the opposite manner. That is badness. This patch disables those optimizations in dag combine; instead, the folding is done as a late pass in sdisel.

This also exposes some deficiencies in dag combine and x86 setcc / brcond lowering. Teach them to look past ISD::TRUNCATE in various places.

llvm-svn: 92849
2010-01-06 19:38:29 +00:00
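An illustrative IR-level sketch of the pattern (the fold itself runs on the SelectionDAG, not the committed testcase):

    define i16 @narrow_add(i32 %x, i32 %y) {
      ; (add (trunc x), (trunc y)) can be folded to (trunc (add x, y))
      %tx = trunc i32 %x to i16
      %ty = trunc i32 %y to i16
      %r = add i16 %tx, %ty
      ret i16 %r
    }
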
Dan Gohman fb4193625a Delete useless trailing semicolons.
llvm-svn: 92740
2010-01-05 17:55:26 +00:00
Evan Cheng aadf060b92 Revert this dag combine change:
Fold (zext (and x, cst)) -> (and (zext x), cst)

The DAG combiner likes to optimize the expression in the other direction, so this would end up causing an infinite loop.

llvm-svn: 91574
2009-12-17 00:40:05 +00:00
Evan Cheng d1521ef40c Fold (zext (and x, cst)) -> (and (zext x), cst).
llvm-svn: 91380
2009-12-15 00:52:11 +00:00
Evan Cheng d938faff4b Teach InferPtrAlignment to infer GV+cst alignment and use it to simplify x86 isel lowering code.
llvm-svn: 90925
2009-12-09 01:53:58 +00:00
Dan Gohman ff97acd8f1 Revert the main portion of r31856. It was causing BranchFolding
to break up CFG diamonds by banishing one of the blocks to the end of
the function, which is bad for code density and branch size.

This does pessimize MultiSource/Benchmarks/Ptrdist/yacr2, the
benchmark cited as the reason for the change; however, I've examined
the code and it looks more like a case of gaming a particular
branch than of being generally applicable.

llvm-svn: 84803
2009-10-22 00:03:58 +00:00
Daniel Dunbar 1df7ea05c3 Teach lit that the .c files in 'test/CodeGen/CellSPU/useful-harnesses' aren't tests.
llvm-svn: 84460
2009-10-19 03:53:55 +00:00
Dan Gohman a080159a7c Convert more tests to avoid llvm-as.
llvm-svn: 81545
2009-09-11 18:36:27 +00:00
Dan Gohman c8054d90fb Eliminate more uses of llvm-as and llvm-dis.
llvm-svn: 81293
2009-09-09 00:09:15 +00:00