Commit Graph

11997 Commits

Author SHA1 Message Date
Jakob Stoklund Olesen 460ceae432 Move Blackfin intrinsics into the Target/Blackfin directory.
llvm-svn: 84194
2009-10-15 18:50:52 +00:00
Jakob Stoklund Olesen 923b5aa973 Clean up TargetIntrinsicInfo API. Add pure virtual methods.
llvm-svn: 84192
2009-10-15 18:49:26 +00:00
Daniel Dunbar 3b5bdd536b Revert "Complete Rewrite of AsmPrinter, TargetObjectFile based on new
PIC16Section class", it breaks globals.ll.

llvm-svn: 84184
2009-10-15 15:02:14 +00:00
Sanjiv Gupta 917db86b42 Complete Rewrite of AsmPrinter, TargetObjectFile based on new PIC16Section class
derived from MCSection.

llvm-svn: 84180
2009-10-15 10:10:43 +00:00
Sanjiv Gupta 7357636091 A few changes to comply with the new DebugInfo Metadata representation.
llvm-svn: 84179
2009-10-15 09:48:25 +00:00
Bob Wilson 68ead6c7a8 Be smarter about reusing constant pool entries.
llvm-svn: 84173
2009-10-15 05:52:29 +00:00
Bob Wilson b4f2a85fe4 Fix another problem with ARM constant pools. Radar 7303551.
When ARMConstantIslandPass cannot find any good locations (i.e., "water") to
place constants, it falls back to inserting unconditional branches to make a
place to put them.  My recent change exposed a problem in this area.  We may
sometimes append to the same block more than one unconditional branch.  The
symptoms of this are that the generated assembly has a branch to an undefined
label and running llc with -debug will cause a seg fault.

This happens more easily since my change to prevent CPEs from moving from
lower to higher addresses as the algorithm iterates, but it could have
happened before.  The end of the block may be in range for various constant
pool references, but the insertion point for new CPEs is not right at the end
of the block -- it is at the end of the CPEs that have already been placed
at the end of the block.  The insertion point could be out of range.  When
that happens, the fallback code will always append another unconditional
branch if the end of the block is in range.

The fix is to only append an unconditional branch if the block does not
already end with one.  I also removed a check to see if the constant pool load
instruction is at the end of the block, since that is redundant with
checking if the end of the block is in-range.

There is more to be done here, but I think this fixes the immediate problem.

llvm-svn: 84172
2009-10-15 05:10:36 +00:00
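
A minimal C++ sketch of the guard described in the commit above; the block
representation and names are illustrative stand-ins, not the actual
ARMConstantIslandPass code:

    #include <vector>

    // Simplified stand-in for a machine basic block: just a list of opcodes.
    enum class Opcode { Other, UncondBranch };
    using Block = std::vector<Opcode>;

    // Fallback path: append an unconditional branch to make a place for new
    // constant pool entries, but only if the block does not already end in one.
    void appendBranchForWater(Block &BB) {
      if (!BB.empty() && BB.back() == Opcode::UncondBranch)
        return;                          // a branch is already there; reuse it
      BB.push_back(Opcode::UncondBranch);
    }

The point is simply that the fallback path becomes idempotent: repeated
attempts to make room at the end of the same block no longer pile up extra
unconditional branches to undefined labels.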
Bob Wilson cfcf6bc70d Fix instruction encoding bits for NEON VPADAL.
Patch by Johnny Chen.

llvm-svn: 84146
2009-10-14 21:43:17 +00:00
Bob Wilson ad03cf02f6 Remove unused variables to fix build warning.
llvm-svn: 84144
2009-10-14 21:40:45 +00:00
Jim Grosbach 94068707e1 Inst{11-8} for vshl should be 0b0101, not 0b1111.
Refs: A7-17 & A8-750.

Patch by Johnny Chen.

llvm-svn: 84131
2009-10-14 20:31:01 +00:00
Bob Wilson 1a791eedbf Set instruction encoding bits 4 and 7 for ARM register-register and
register-shifted-register instructions.  Patch by Johnny Chen.

llvm-svn: 84124
2009-10-14 19:00:24 +00:00
Bob Wilson c350cdf3b3 Refactor code to select NEON VST intrinsics.
llvm-svn: 84122
2009-10-14 18:32:29 +00:00
Bob Wilson 12b4799787 Refactor code to select NEON VLD intrinsics.
llvm-svn: 84117
2009-10-14 17:28:52 +00:00
Bob Wilson 93117bc499 More refactoring. NEON vst lane intrinsics can share almost all the code for
vld lane intrinsics.

llvm-svn: 84110
2009-10-14 16:46:45 +00:00
Bob Wilson 4145e3ac8d Refactor code for selecting NEON load lane intrinsics.
llvm-svn: 84109
2009-10-14 16:19:03 +00:00
Dan Gohman 0be8c2b0e3 Make isSafeToClobberEFLAGS more aggressive. Teach it to scan backwards
a few instructions (looking for uses marked kill and defs marked dead) in
addition to forwards. Also, increase the maximum number of instructions
to scan, as it appears to help in a fair number of cases.

llvm-svn: 84061
2009-10-14 00:08:59 +00:00
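
A rough, self-contained C++ model of the bidirectional scan described above;
the instruction summary, scan limit, and function name are assumptions for
illustration, not the real X86 isSafeToClobberEFLAGS implementation:

    #include <cstddef>
    #include <vector>

    // Per-instruction summary of how it touches the flags register.
    struct Instr {
      bool UsesFlags = false;  // reads the flags
      bool KillFlag  = false;  // ...and that use is marked "kill"
      bool DefsFlags = false;  // writes the flags
      bool DeadFlag  = false;  // ...and that def is marked "dead"
    };

    // Clobbering the flags at position Pos is considered safe if the nearest
    // reference found scanning backwards shows them dead, or if the forward
    // scan reaches a redefinition (or the end of the block) before any use.
    bool isSafeToClobberFlags(const std::vector<Instr> &BB, std::size_t Pos,
                              unsigned MaxScan = 8) {
      // Backward: a use marked "kill" or a def marked "dead" means the flags
      // value live at Pos is not needed afterwards.
      for (std::size_t i = Pos, n = 0; i > 0 && n < MaxScan; --i, ++n) {
        const Instr &I = BB[i - 1];
        if (I.UsesFlags) return I.KillFlag;
        if (I.DefsFlags) return I.DeadFlag;
      }
      // Forward: safe if the flags are rewritten before anyone reads them.
      for (std::size_t i = Pos, n = 0; i < BB.size(); ++i, ++n) {
        if (n >= MaxScan)    return false; // scanned too far; give up conservatively
        if (BB[i].UsesFlags) return false; // the old flags are still read
        if (BB[i].DefsFlags) return true;  // overwritten before any use
      }
      return true;                         // fell off the end of the block
    }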
Kevin Enderby 3a80daced0 Correct comment about ARM immediates using '#' not '$' and TODO for modifiers.
llvm-svn: 84055
2009-10-13 23:33:38 +00:00
Devang Patel d7ebfe3963 s/DebugLoc.CompileUnit/DebugLoc.Scope/g
s/DebugLoc.InlinedLoc/DebugLoc.InlinedAtLoc/g

llvm-svn: 84054
2009-10-13 23:28:53 +00:00
Bob Wilson b62d160b3c More Neon clean-up: avoid the need for custom-lowering vld/st-lane intrinsics
by creating TargetConstants during instruction selection instead of during
legalization.

llvm-svn: 84042
2009-10-13 22:29:24 +00:00
Kevin Enderby f50799412c More bits of the ARM target assembler for llvm-mc to parse immediates.
Also fixed a couple of coding style things that crept in.  And added more
to the temporary hacked up ARMAsmParser::MatchInstruction() method for testing.

llvm-svn: 84040
2009-10-13 22:19:02 +00:00
Bob Wilson 1fdbe1152d NEON VLD/VST are now fully implemented. For operations that expand to
multiple instructions, the expansion is done during selection so there is
no need to do anything special during legalization.

llvm-svn: 84036
2009-10-13 21:55:24 +00:00
Bob Wilson 3b51560ae4 Revise ARM inline assembly memory operands to require the memory address to
be in a register.  The previous use of ARM address mode 2 was completely
arbitrary and inappropriate for Thumb.  Radar 7137468.

llvm-svn: 84022
2009-10-13 20:50:28 +00:00
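
A hypothetical user-level example of the kind of inline-asm memory operand
this affects; the function name and asm template are made up for illustration:

    // With this change the "m" operand's address is materialized in a single
    // base register, so "%1" expands to a plain "[rN]" form that is valid for
    // both ARM and Thumb, rather than an arbitrary addressing-mode-2 expression.
    int load_via_asm(int *p) {
      int v;
      asm("ldr %0, %1" : "=r"(v) : "m"(*p));
      return v;
    }

Keeping the operand to a bare base register is the lowest common denominator
that both the ARM and Thumb addressing modes can satisfy.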
Sandeep Patel 7460e0822f Fix method name in comment, per Bob Wilson.
llvm-svn: 84017
2009-10-13 20:25:58 +00:00
Sandeep Patel 423e42b371 Add ARMv6T2 SBFX/UBFX instructions. Approved by Anton Korobeynikov.
llvm-svn: 84009
2009-10-13 18:59:48 +00:00
Ted Kremenek f34311779c Update CMake file (lexically order files).
llvm-svn: 84008
2009-10-13 18:57:27 +00:00
Bob Wilson 453a06e3ac Add some ARM instruction encoding bits.
Patch by Johnny Chen.

llvm-svn: 83983
2009-10-13 17:35:30 +00:00
Bob Wilson d26a26ae7e Fix regression introduced by r83894.
llvm-svn: 83982
2009-10-13 17:29:13 +00:00
Bob Wilson 0bc673de0d Fix a tab. Thanks to Johnny Chen for pointing it out.
llvm-svn: 83973
2009-10-13 15:27:23 +00:00
Kevin Enderby 11b32384f2 Fix two warnings about unused variables that are only used in assert() calls.
llvm-svn: 83917
2009-10-12 22:51:49 +00:00
Bob Wilson 5b07a903d4 Delete a comment that makes no sense to me. The statement that moving a CPE
before its reference is only supported on ARM has not been true for a while.
In fact, until recently, that was only supported for Thumb.  Besides that,
CPEs are always a multiple of 4 bytes in size, so inserting a CPE should have
no effect on Thumb alignment.

llvm-svn: 83916
2009-10-12 22:49:05 +00:00
Kevin Enderby 3f0b9b3db4 Fix a problem in the code where the second argument of ARMAsmParser::ParseShift()
should have been a pointer to a reference.

llvm-svn: 83915
2009-10-12 22:39:54 +00:00
Bob Wilson 3250e7769f Change CreateNewWater method to return NewMBB by reference.
llvm-svn: 83905
2009-10-12 21:39:43 +00:00
Bob Wilson cc121aa750 Last week, ARMConstantIslandPass was failing to converge for the
MultiSource/Benchmarks/MiBench/automotive-susan test.  The failure has
since been masked by an unrelated change (just randomly), so I don't have
a testcase for this now.  Radar 7291928.

The situation where this happened is that a constant pool entry (CPE) was
placed at a lower address than the load that referenced it.  There were in
fact 2 CPEs placed at adjacent addresses and referenced by 2 loads that were
close together in the code.  The distance from the loads to the CPEs was
right at the limit of what they could handle, so that only one of the CPEs
could be placed within range.  On every iteration, the first CPE was found
to be out of range, causing a new CPE to be inserted.  The second CPE had
been in range but the newly inserted entry pushed it too far away.  Thus the
second CPE was also replaced by a new entry, which in turn pushed the first
CPE out of range.  Etc.

Judging from some comments in the code, the initial implementation of this
pass did not support CPEs placed _before_ their references.  In the case
where the CPE is placed at a higher address, the key to making the algorithm
terminate is that new CPEs are only inserted at the end of a group of adjacent
CPEs.  This is implemented by removing a basic block from the "WaterList"
once it has been used, and then adding the newly inserted CPE block to the
list so that the next insertion will come after it.  This avoids the ping-pong
effect where CPEs are repeatedly moved to the beginning of a group of
adjacent CPEs.  This does not work when going backwards, however, because the
entries at the end of an adjacent group of CPEs are closer than the CPEs
earlier in the group.

To make this pass terminate, we need to maintain a property that changes can
only happen in some sort of monotonic fashion.  The fix used here is to require
that the CPE for a particular constant pool load can only move to lower
addresses.  This is a very simple change to the code and should not cause
any significant degradation in the results.

llvm-svn: 83902
2009-10-12 21:23:15 +00:00
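
A small, self-contained C++ sketch of the termination argument described
above, i.e. accepting a new placement for a constant pool entry only if it
moves the entry to a lower address; the types, names, and address model are
illustrative, not the actual pass:

    #include <cstdint>

    // Current placement of one constant pool entry (CPE), by address.
    struct CPE {
      std::uint64_t Addr;
    };

    // Accept a candidate placement only if it moves the entry to a strictly
    // lower address.  Since each entry's address can only decrease and is
    // bounded below, the iteration cannot ping-pong forever.
    bool acceptNewPlacement(CPE &Entry, std::uint64_t NewAddr) {
      if (NewAddr >= Entry.Addr)
        return false;                    // would move to a higher address: reject
      Entry.Addr = NewAddr;
      return true;
    }

Because each entry's address is monotonically decreasing, the back-and-forth
between the two adjacent CPEs described above can no longer recur indefinitely.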
Bob Wilson e4adae267e Another minor clean-up.
llvm-svn: 83897
2009-10-12 20:45:53 +00:00
Bob Wilson 196bf32ab0 Remove redundant parameter.
llvm-svn: 83894
2009-10-12 20:37:23 +00:00
Bob Wilson 3a7326e705 Use early exit to reduce indentation.
llvm-svn: 83874
2009-10-12 19:04:03 +00:00
Bob Wilson 3af34312d4 Change to return a value by reference.
llvm-svn: 83873
2009-10-12 19:01:12 +00:00
Bob Wilson c7a3cf4066 Add a typedef for an iterator.
llvm-svn: 83872
2009-10-12 18:52:13 +00:00
Dale Johannesen 06243d7bf2 Revert the kludge in 76703. I got a clean
bootstrap of FSF-style PPC, so there is some
reason to believe the original bug (which was
never analyzed) has been fixed, probably by
82266.

llvm-svn: 83871
2009-10-12 18:49:00 +00:00
Dan Gohman a698d7ac3c Don't forget to mark RAX as live-out of the function when arranging for
it to hold the address of an sret return value, for x86-64 ABI purposes.

Also, fix the test that was originally intended to test this to actually
test it, using FileCheck.

llvm-svn: 83853
2009-10-12 16:36:12 +00:00
Chris Lattner 0840c823e4 Fix PR5087, patch by Jakub Staszak!
llvm-svn: 83822
2009-10-12 04:22:44 +00:00
Anton Korobeynikov 4b38ce9f25 Add missing mem-mem move patterns
llvm-svn: 83812
2009-10-11 23:03:53 +00:00
Anton Korobeynikov 415c3dc501 Add MSP430 mem-mem insts support. Patch by Brian Lucas with some of my refinements.
llvm-svn: 83811
2009-10-11 23:03:28 +00:00
Anton Korobeynikov 6bce6bbf40 Implement 'm' memory operand properly
llvm-svn: 83785
2009-10-11 19:14:21 +00:00
Anton Korobeynikov a58a3f930a Implement proper asm printing for globals. This eliminates the bogus "call" modifier and also adds support for offsets w.r.t. globals.
llvm-svn: 83784
2009-10-11 19:14:02 +00:00
Anton Korobeynikov 3525a4a268 Implement asm printing for inline asm memory operands
llvm-svn: 83783
2009-10-11 19:13:34 +00:00
Anton Korobeynikov 5b8826b4da It seems that the OR operation does not affect the status register at all.
Remove the implicit def of SRW. This fixes PR4779.

llvm-svn: 83739
2009-10-10 22:17:47 +00:00
Dan Gohman 1faa11521e Remove a no-longer-necessary #include.
llvm-svn: 83697
2009-10-10 00:36:09 +00:00
Dan Gohman e919de5acf Replace X86's CanRematLoadWithDispOperand by calling the target-independent
MachineInstr::isInvariantLoad instead, which has the benefit of being
more complete.

llvm-svn: 83696
2009-10-10 00:34:18 +00:00
Dan Gohman 4a72f7ab53 Mark the LDR instruction with isReMaterializable, as it is rematerializable
when loading from an invariant memory location.

llvm-svn: 83688
2009-10-09 23:28:27 +00:00