Commit Graph

2235 Commits

Author SHA1 Message Date
Lang Hames dab7b06de9 Improved tracking of value number kills. VN kills are now represented
as an (index,bool) pair. The bool flag records whether the kill is a
PHI kill or not. This code will be used to enable splitting of live
intervals containing PHI-kills.

A slight change to live interval weights introduced an extra spill
into lsr-code-insertion (outside the critical sections). The test 
condition has been updated to reflect this.

llvm-svn: 75097
2009-07-09 03:57:02 +00:00
Chris Lattner 44f6bcfa0b remove eh, convert to FileCheck style
llvm-svn: 75087
2009-07-09 01:07:22 +00:00
Chris Lattner 212f44d180 we have no tests for dllimport/export. Add one.
llvm-svn: 75085
2009-07-09 00:53:44 +00:00
Chris Lattner ade55bc8dd * add some assertions for sanity checking.
* remove some old code that was needed when we'd put ESP in the scale instead of 
  the base of some instructions.
* Fix a bug with the P modifier in inline asm that caused us to drop it.

llvm-svn: 75077
2009-07-09 00:27:29 +00:00
Chris Lattner da0cf0b134 add a test for dale's recent change.
llvm-svn: 75074
2009-07-09 00:00:16 +00:00
Chris Lattner f65129cb6a switch test to FileCheck-style and test the P and non-P cases.
llvm-svn: 75071
2009-07-08 23:44:06 +00:00
Chris Lattner 334e561e35 rename a test to make it a feature test.
llvm-svn: 75070
2009-07-08 23:40:57 +00:00
David Goodwin 22c2fba978 Use common code for both ARM and Thumb-2 instruction and register info.
llvm-svn: 75067
2009-07-08 23:10:31 +00:00
Bob Wilson 1d298fd75b Implement NEON vst1 instruction.
llvm-svn: 75037
2009-07-08 20:32:02 +00:00
Chris Lattner d5ffa8ffb6 add some more checks for vector compares.
llvm-svn: 75024
2009-07-08 18:51:25 +00:00
Chris Lattner 072198a2a1 convert a test to "FileCheck" style.
llvm-svn: 75023
2009-07-08 18:48:24 +00:00
Bob Wilson f731a2df6b Implement NEON vld1 instructions.
llvm-svn: 75019
2009-07-08 18:11:30 +00:00
David Goodwin 121563c615 Add rev16 test... xfail for now
llvm-svn: 75012
2009-07-08 16:15:06 +00:00
David Goodwin af7451b674 Checkpoint Thumb2 Instr info work. Generalized base code so that it can be shared between ARM and Thumb2. Not yet activated because register information must be generalized first.
llvm-svn: 75010
2009-07-08 16:09:28 +00:00
Chris Lattner 04bf64d43c eliminate the v[if]cmp versions of these tests, now that [if]cmp+sext works.
llvm-svn: 74980
2009-07-08 00:49:35 +00:00
Chris Lattner dc84b31d94 Change these tests to use [fi]cmp+sext instead of v[fi]cmp. No
functionality change.

llvm-svn: 74979
2009-07-08 00:46:57 +00:00
Chris Lattner 4ac607332d dag combine sext(setcc) -> vsetcc before legalize. To make this safe,
VSETCC must define all bits, which is different than it was documented
to before.  Since all targets that implement VSETCC already have this
behavior, and we don't optimize based on this, just change the 
documentation.  We now get nice code for vec_compare.ll

llvm-svn: 74978
2009-07-08 00:31:33 +00:00
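A minimal LLVM IR sketch (hypothetical function and value names, current IR syntax) of the vector compare-plus-sign-extend pattern that the combine above and the surrounding tests exercise:

    define <4 x i32> @cmp_sext(<4 x i32> %a, <4 x i32> %b) {
      %c = icmp sgt <4 x i32> %a, %b        ; <4 x i1> compare result
      %r = sext <4 x i1> %c to <4 x i32>    ; all-ones / all-zeros per lane, i.e. a vsetcc
      ret <4 x i32> %r
    }
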
Chris Lattner fc74e8241a add support for legalizing an icmp where the result is illegal (4xi1) but
the input is legal (4 x i32)

llvm-svn: 74964
2009-07-07 23:03:54 +00:00
Chris Lattner cbbf747b7b add a trivial test that vector compares work.
llvm-svn: 74963
2009-07-07 22:51:09 +00:00
Chris Lattner 30220d8f98 implement support for spliting and scalarizing vector setcc's. This
finishes off enough support for vector compares to get the icmp/fcmp
version of 2008-07-23-VSetCC.ll passing.

llvm-svn: 74961
2009-07-07 22:47:46 +00:00
Chris Lattner 87d4f309f5 verify that the fcmp version of this works just as well as the
vfcmp version.  We actually get better code for this silly testcase.

llvm-svn: 74954
2009-07-07 22:07:47 +00:00
Evan Cheng d0611f9a37 Add Thumb2 movcc instructions.
llvm-svn: 74946
2009-07-07 20:39:03 +00:00
Evan Cheng 39d8075edc Add missing tests.
llvm-svn: 74945
2009-07-07 20:38:08 +00:00
Evan Cheng d0f6324cdc Add Thumb2 pkhbt / pkhtb.
llvm-svn: 74895
2009-07-07 05:35:52 +00:00
Evan Cheng b24e51e2d9 Add some more Thumb2 multiplication instructions.
llvm-svn: 74889
2009-07-07 01:17:28 +00:00
Evan Cheng 40398233b7 Add bfc to armv6t2.
llvm-svn: 74868
2009-07-06 22:23:46 +00:00
Evan Cheng e63b0e6f79 Added ARM::mls for armv6t2.
llvm-svn: 74866
2009-07-06 22:05:45 +00:00
Evan Cheng ba2410b7ca Avoid adding a duplicate def. This fixes PR4478.
llvm-svn: 74857
2009-07-06 21:34:05 +00:00
Evan Cheng 0e8bde5910 Add thumb2 sign / zero extend with rotate instructions.
llvm-svn: 74755
2009-07-03 01:43:10 +00:00
Evan Cheng 53cdf022b6 Added indexed stores.
llvm-svn: 74740
2009-07-03 00:06:39 +00:00
Evan Cheng 8ecd7eb3f7 Sign extending pre/post indexed loads.
llvm-svn: 74736
2009-07-02 23:16:11 +00:00
Evan Cheng 84c6cda2ef Thumb2 pre/post indexed loads.
llvm-svn: 74696
2009-07-02 07:28:31 +00:00
Chris Lattner 87bb642676 @GOTPCREL is also rip-relative. Fix fast-isel to do the right thing.
This fixes an llvm-gcc bootstrap problem I introduced.

llvm-svn: 74691
2009-07-02 04:22:01 +00:00
Chris Lattner d1c5951615 Fix yet-another bug I introduced into fastisel, this time handling
constant pool references that weren't getting properly rip-relative.

llvm-svn: 74689
2009-07-02 03:14:25 +00:00
Chris Lattner 1f50b61329 Fix codegen for references to available_externally symbols. This fixes
PR4482.

llvm-svn: 74613
2009-07-01 16:53:44 +00:00
Evan Cheng 04f72fc955 CommuteChangesDestination() should check if the to-be-commuted instruction defines any register. Also teaches the default commuteInstruction() to commute instructions without definitions (e.g. X86::test / ARM::tsp).
llvm-svn: 74602
2009-07-01 08:29:08 +00:00
Evan Cheng 2a5efe14a7 Remove special handling of implicit_def. Fix a couple more bugs in liveintervalanalysis and coalescer handling of implicit_def.
Note: the isUndef marker must be placed even on an implicit_def def operand, or else the scavenger will not ignore it. This is necessary because the -O0 path does not use liveintervalanalysis; it treats implicit_def just like any other def.

llvm-svn: 74601
2009-07-01 08:19:36 +00:00
Chris Lattner f95fa1b721 Fix some fast-isel problems selecting global variable addressing in
pic mode.

llvm-svn: 74582
2009-07-01 03:27:19 +00:00
Evan Cheng d379e896ff Handle IMPLICIT_DEF with isUndef operand marker, part 2. This patch moves the code to annotate machineoperands to LiveIntervalAnalysis. It also adds markers for implicit_defs that define physical registers. The rest is just a lot of details.
llvm-svn: 74580
2009-07-01 01:59:31 +00:00
David Goodwin 86c7e20ca6 Add PIC load and store patterns for Thumb-2.
llvm-svn: 74577
2009-07-01 00:01:13 +00:00
David Goodwin d0890a2bad Add thumb-2 store word, halfword, and byte.
llvm-svn: 74555
2009-06-30 22:11:34 +00:00
David Goodwin 28d6d87244 Improve Thumb-2 jump table support.
llvm-svn: 74549
2009-06-30 19:50:22 +00:00
Rafael Espindola 317fd045e2 Fix PR4485.
Avoid unnecessary duplication of operand 0 of X86::FpSET_ST0_80. This duplication would
cause one register to remain on the stack at the function return.

llvm-svn: 74534
2009-06-30 16:40:03 +00:00
Rafael Espindola bd971ffcc6 Fix PR4484.
This was caused by me confounding FP0 and ST(0).

llvm-svn: 74523
2009-06-30 12:18:16 +00:00
Evan Cheng dcf1f59305 Temporarily restore the scavenger implicit_def checking code. The MachineOperand isUndef marker is not being put on implicit_defs of physical registers (created for parameter passing, etc.).
llvm-svn: 74519
2009-06-30 09:19:42 +00:00
Evan Cheng 0dc101b897 Add a bit IsUndef to MachineOperand. This indicates that the def / use register operand is defined by an implicit_def. That means it can def / use any register, and passes (e.g. the register scavenger) can feel free to ignore such operands.
The register allocator, when it allocates a register to a virtual register defined by an implicit_def, can allocate any physical register without worrying about overlapping live ranges. It should mark all operands of the said virtual register so later passes will do the right thing.

This is not the best solution. But it should be a lot less fragile to having the scavenger try to track what is defined by implicit_def.

llvm-svn: 74518
2009-06-30 08:49:04 +00:00
Evan Cheng 57726817aa A few more load instructions.
llvm-svn: 74500
2009-06-30 02:15:48 +00:00
David Goodwin 17512663f5 Enhance tests to include shifted-register operand testing.
llvm-svn: 74490
2009-06-30 01:02:20 +00:00
David Goodwin 76b37950ca Add Thumb-2 support for TEQ and TST.
llvm-svn: 74468
2009-06-29 22:49:42 +00:00
David Goodwin 911edef65b Thumb-2 tests
llvm-svn: 74464
2009-06-29 22:25:22 +00:00
Rafael Espindola 538064d6b1 FIX PR 4459.
Not sure I understand how the temp register gets used,
but this fixes a bug and introduces no regressions.

llvm-svn: 74446
2009-06-29 20:29:59 +00:00
David Goodwin dbf11ba800 Rename ARMcmpNZ to ARMcmpZ and use it to represent comparisons that set only the Z flag (i.e. eq and ne). Make ARMcmpZ commutative.
llvm-svn: 74423
2009-06-29 15:33:01 +00:00
Evan Cheng b23b50d54d Implement Thumb2 ldr.
After much back and forth, I decided to deviate from the ARM design and split LDR into 4 instructions (r + imm12, r + imm8, r + r << imm12, constantpool). The advantage of this is 1) it follows the latest ARM technical manual, and 2) it makes it easier to reduce the width of the instruction later. The downside is this creates more inconsistency between the two sub-targets. We should split the ARM LDR instruction in a similar fashion later. I've added a README entry for this.

llvm-svn: 74420
2009-06-29 07:51:04 +00:00
Chris Lattner 9876bd8257 factor some logic out into a helper function, allow remat of loads from constant
globals.  This implements remat-constant.ll even without aggressive-remat.

llvm-svn: 74373
2009-06-27 04:38:55 +00:00
Chris Lattner fea81da433 Reimplement rip-relative addressing in the X86-64 backend. The new
implementation primarily differs from the former in that the asmprinter
doesn't make a zillion decisions about whether or not something will be
RIP relative or not.  Instead, those decisions are made by isel lowering
and propagated through to the asm printer.  To achieve this, we:

1. Represent RIP relative addresses by setting the base of the X86 addr
   mode to X86::RIP.
2. When ISel Lowering decides that it is safe to use RIP, it lowers to
   X86ISD::WrapperRIP.  When it is unsafe to use RIP, it lowers to
   X86ISD::Wrapper as before.
3. This removes isRIPRel from X86ISelAddressMode, representing it with
   a basereg of RIP instead.
4. The addressing mode matching logic in isel is greatly simplified.
5. The asmprinter is greatly simplified, notably the "NotRIPRel" predicate
   passed through various printoperand routines is gone now.
6. The various symbol printing routines in asmprinter now no longer infer
   when to emit (%rip), they just print the symbol.

I think this is a big improvement over the previous situation.  It does have
two small caveats though: 1. I implemented a horrible "no-rip" modifier for
the inline asm "P" constraint modifier.  This is a short term hack, there is
a much better, but more involved, solution.  2. I had to xfail an 
-aggressive-remat testcase because it isn't handling the use of RIP in the
constant-pool reading instruction.  This specific test is easy to fix without
-aggressive-remat, which I intend to do next.

llvm-svn: 74372
2009-06-27 04:16:01 +00:00
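As a rough illustration of the distinction described above (hypothetical names; the exact instruction depends on code model and relocation model), a global load in the small code model is now printed RIP-relative simply because its address mode carries RIP as the base register:

    @g = external global i32

    define i32 @load_g() {
      %v = load i32, ptr @g     ; typically printed as: movl g(%rip), %eax
      ret i32 %v
    }
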
Chris Lattner df92e147c9 remove some unneeded eh info.
llvm-svn: 74371
2009-06-27 04:07:31 +00:00
Chris Lattner de36afc1fe testcase for PR4466
llvm-svn: 74367
2009-06-27 01:33:35 +00:00
David Goodwin 5285817490 When possible, use "mvn ra, rb" instead of "eor ra, rb, -1" because mvn has a narrow version and eor(i) does not.
llvm-svn: 74355
2009-06-26 23:13:13 +00:00
Dan Gohman d3b930d426 Add some testcases for some of the recent ScalarEvolution bug fixes.
llvm-svn: 74353
2009-06-26 22:54:11 +00:00
David Goodwin 3aaa751712 Thumb-2 tests
llvm-svn: 74345
2009-06-26 22:37:07 +00:00
Chris Lattner b5c2639f83 remove unwind info, add test for asmprinting of jump table labels with (%rip)
llvm-svn: 74337
2009-06-26 22:16:49 +00:00
Evan Cheng 07b016856d Add x86 support for 'n' inline asm modifier. This will be handled target independently as part of MC work.
llvm-svn: 74336
2009-06-26 22:00:19 +00:00
David Goodwin aa294c5593 Thumb-2 has CLZ.
llvm-svn: 74322
2009-06-26 20:47:43 +00:00
David Goodwin 35ee722d42 Use "adcs/sbcs" only when the carry-out is live, otherwise use "adc/sbc".
llvm-svn: 74321
2009-06-26 20:45:56 +00:00
Daniel Dunbar a720af1370 More spelling Count as count.
llvm-svn: 74306
2009-06-26 18:35:07 +00:00
Daniel Dunbar 6b1678d5d8 Spell Count as count.
llvm-svn: 74298
2009-06-26 18:21:54 +00:00
David Goodwin 3bd42afebe Add Thumb-2 tests.
llvm-svn: 74295
2009-06-26 18:10:30 +00:00
David Goodwin 5960e6d974 ADC used to implement adde should use "adcs" opcode instead of "adc".
llvm-svn: 74293
2009-06-26 18:07:25 +00:00
David Goodwin 34f7ede9e7 ORN and BIC tests.
llvm-svn: 74289
2009-06-26 16:20:06 +00:00
David Goodwin 0377f737ff Currently there is a pattern for the thumb-2 MOV 16-bit immediate instruction. That instruction cannot write the flags so it should use T2I instead of T2sI.
Also, added a pattern for the thumb-2 MOV of shifted immediate since that can encode immediates not encodable by the 16-bit immediate.

llvm-svn: 74288
2009-06-26 16:10:07 +00:00
Evan Cheng 7779156b39 Fix tests: Count -> count.
llvm-svn: 74282
2009-06-26 07:05:57 +00:00
Evan Cheng 34c8c7414f Fix a CodeGenDAGPatterns bug. Check if top level predicates match when it's looking for duplicates.
llvm-svn: 74276
2009-06-26 05:59:16 +00:00
Daniel Dunbar 07025e2c02 Fix spelling of 'count'
llvm-svn: 74249
2009-06-26 01:33:02 +00:00
Evan Cheng 97727a61f9 Select ADC, SBC, and RSC instead of the ADCS, SBCS, and RSCS when the carry bit def is not used.
llvm-svn: 74228
2009-06-25 23:34:10 +00:00
David Goodwin 16f357cccf Use MVN for ~t2_so_imm immediates.
llvm-svn: 74223
2009-06-25 23:11:21 +00:00
Bill Wendling 722c6e1b70 Don't grep the -debug output. This isn't the way to test changes.
llvm-svn: 74211
2009-06-25 21:59:32 +00:00
Chris Lattner a4194b1082 down with unwind info :)
llvm-svn: 74206
2009-06-25 21:48:17 +00:00
Evan Cheng c7ea8df67e ISD::ADDE / ISD::SUBE update the carry bit, so they should isel to ADCS and SBCS / RSCS.
llvm-svn: 74200
2009-06-25 20:59:23 +00:00
Evan Cheng 83f979a48b Add Thumb2 pc relative add.
llvm-svn: 74141
2009-06-24 23:47:58 +00:00
Evan Cheng ff1a4a7271 We should run these tests as well.
llvm-svn: 74121
2009-06-24 21:36:26 +00:00
Chris Lattner 01d5049dc2 unwind info not needed.
llvm-svn: 74112
2009-06-24 19:48:04 +00:00
Evan Cheng d76d0aa68a Move thumb and thumb2 tests into separate directories.
llvm-svn: 74068
2009-06-24 06:36:07 +00:00
Evan Cheng 38f2453817 Fix support for inline asm input / output operand tying when an operand spans multiple registers (e.g. two i64 operands in 32-bit mode).
llvm-svn: 74053
2009-06-24 02:05:51 +00:00
Dan Gohman f19aeec3f5 Extend ScalarEvolution's multiple-exit support to compute exact
trip counts in more cases.

Generalize ScalarEvolution's isLoopGuardedByCond code to recognize
And and Or conditions, splitting the code out into an
isNecessaryCond helper function so that it can evaluate Ands and Ors
recursively, and make SCEVExpander be much more aggressive about
hoisting instructions out of loops.

test/CodeGen/X86/pr3495.ll has an additional instruction now, but
it appears to be due to an arbitrary register allocation difference.

llvm-svn: 74048
2009-06-24 01:18:18 +00:00
Evan Cheng 4983e4550e Proper patterns for thumb2 shift and rotate instructions.
llvm-svn: 73987
2009-06-23 19:39:13 +00:00
Bob Wilson 2e076c4e02 Add support for ARM's Advanced SIMD (NEON) instruction set.
This is still a work in progress but most of the NEON instruction set
is supported.

llvm-svn: 73919
2009-06-22 23:27:02 +00:00
Evan Cheng 16ee19738c It's coalescer, not coaleser.
llvm-svn: 73902
2009-06-22 21:09:17 +00:00
Bob Wilson 4582530a2c For Darwin on ARMv6 and newer, make register r9 available for use as a
caller-saved register.

llvm-svn: 73901
2009-06-22 21:01:46 +00:00
Evan Cheng 8cbbc7944d Fix another register coalescer crash: forgot to check if the instruction being updated has already been coalesced.
llvm-svn: 73898
2009-06-22 20:49:32 +00:00
Evan Cheng 3d75d6af57 hasFP should return true if frame address is taken.
llvm-svn: 73893
2009-06-22 18:38:48 +00:00
Rafael Espindola 6ead59f8ed Fix PR4185.
Handle FpSET_ST0_80 being used when ST0 is still alive.

llvm-svn: 73850
2009-06-21 12:02:51 +00:00
Chris Lattner 7d2b049404 change TLS_ADDR lowering to lower to a real mem operand, instead of matching as
a global that gets printed with the :mem modifier.  All operands to lea's
should be handled with the lea32mem operand kind, and this allows the TLS stuff
to do this.  There are several better ways to do this, but I went for the minimal
change since I can't really test this (beyond make check).

This also makes the use of EBX explicit in the operand list in the 32-bit case,
instead of implicit in the instruction.

llvm-svn: 73834
2009-06-20 20:38:48 +00:00
Chris Lattner 1771a852f0 no need for unwind info
llvm-svn: 73832
2009-06-20 19:48:26 +00:00
Chris Lattner fbc9778a1b no need for unwind info here.
llvm-svn: 73831
2009-06-20 19:43:09 +00:00
Evan Cheng c6a8d0dbe9 Fix PR4419: handle defs of partial uses.
llvm-svn: 73816
2009-06-20 04:34:51 +00:00
Dan Gohman cc31110b95 Re-apply r73718, now that the fix in r73787 is in, and add a
hand-crafted testcase which demonstrates the bug that was exposed
in 254.gap.

llvm-svn: 73793
2009-06-19 23:23:27 +00:00
Evan Cheng b4b20bbb7d Enable the ARM pre-allocation load / store multiple optimization pass.
llvm-svn: 73791
2009-06-19 23:17:27 +00:00
Evan Cheng 86076c9e30 Revert 73718. It's breaking 254.gap.
llvm-svn: 73783
2009-06-19 21:15:06 +00:00
Eli Friedman 2fc939c809 Fix for PR2484: add an SSE1 pattern for a shuffle we normally prefer to
handle with an SSE2 instruction.

llvm-svn: 73760
2009-06-19 07:00:55 +00:00
Eli Friedman d984158320 Mark a few Thumb instructions commutable; just happened to spot this
while experimenting.  I'm reasonably sure this is correct, but please 
tell me if these instructions have some strange property which makes this
change unsafe.

llvm-svn: 73746
2009-06-19 01:43:08 +00:00
Evan Cheng de9e36a74e On Darwin, the asm printer should output a second label before a jump table so the linker knows it's a new atom. But this is only needed if the jump table is put in a separate section from the function body.
llvm-svn: 73720
2009-06-18 20:37:15 +00:00
Dan Gohman 8c9ac59455 Generalize LSR's OptimizeSMax to handle unsigned max tests as well
as signed max tests. Along with r73717, this helps CodeGen avoid
emitting code for a maximum operation for this class of loop.

llvm-svn: 73718
2009-06-18 20:23:18 +00:00
Dan Gohman a0348809b6 Remove the code from IVUsers that attempted to handle
casted induction variables in cases where the cast
isn't foldable. It ended up being a pessimization in
many cases. This could be fixed, but it would require
a bunch of complicated code in IVUsers' clients. The
advantages of this approach aren't visible enough to
justify it at this time.

llvm-svn: 73706
2009-06-18 16:54:06 +00:00
Anton Korobeynikov 02bb33c58d Initial support for some Thumb2 instructions.
Patch by Viktor Kutuzov and Anton Korzh from Access Softek, Inc.

llvm-svn: 73622
2009-06-17 18:13:58 +00:00
Anton Korobeynikov 469e8217d4 Make the test target-neutral
llvm-svn: 73547
2009-06-16 20:25:25 +00:00
Anton Korobeynikov 5d28cb204f GNU as refuses to assemble the "pop {}" instruction. Do not emit it
(this is the case when we have a thumb vararg function with a single
callee-saved register, which is handled separately).

llvm-svn: 73529
2009-06-16 18:49:08 +00:00
Evan Cheng cc21a5415a If a val# is defined by an implicit_def and it is being removed, all of the copies off the val# were removed. This causes problems later since the scavenger will see uses of registers without defs. The proper solution is to change the copies into implicit_def's instead.
TurnCopyIntoImpDef turns a copy into an implicit_def and removes the val# defined by it. This causes a scavenger assertion later if the def reaches other blocks. Disable the transformation if the value's live interval extends beyond its def block.

llvm-svn: 73478
2009-06-16 07:12:58 +00:00
Eli Friedman abfad5d61e Add some generic expansion logic for SMULO and UMULO. Fixes UMULO
support for x86, and UMULO/SMULO for many architectures, including PPC 
(PR4201), ARM, and Cell. The resulting expansion isn't perfect, but it's
not bad.

llvm-svn: 73477
2009-06-16 06:58:29 +00:00
Dan Gohman 8e85118943 Update this test to use fmul instead of mul.
llvm-svn: 73436
2009-06-15 22:49:34 +00:00
Evan Cheng b9bff5880a ifcvt should ignore CFGs where the true and false successors are the same.
llvm-svn: 73423
2009-06-15 21:24:34 +00:00
Bill Wendling 20f0adfc0e This test is failing. Revert for now.
llvm-svn: 73404
2009-06-15 19:10:56 +00:00
Bill Wendling 66e104cd11 Add another testcase for r71478.
llvm-svn: 73399
2009-06-15 18:36:34 +00:00
Arnold Schwaighofer cb9046cfc8 CheckTailCallReturnConstraints is missing a check on the
incoming chain of the RETURN node. The incoming chain must
be the outgoing chain of the CALL node. This causes the
backend to identify tail calls that are not tail calls. This
patch fixes this.

llvm-svn: 73387
2009-06-15 14:43:36 +00:00
Evan Cheng 1283c6a066 Part 1.
- Change register allocation hint to a pair of unsigned integers. The hint type is zero (which means prefer the register specified as second part of the pair) or entirely target dependent.
- Allow targets to specify alternative register allocation orders based on allocation hint.

Part 2.
- Use the register allocation hint system to implement more aggressive load / store multiple formation.
- Aggressively form LDRD / STRD. These are formed *before* register allocation. It has to be done this way to shorten the live intervals of the base and offset registers. e.g.
v1025 = LDR v1024, 0
v1026 = LDR v1024, 0
=>
v1025,v1026 = LDRD v1024, 0

If this transformation isn't done before allocation, v1024 will overlap v1025, which makes it more difficult to allocate a register pair.

- Even with the register allocation hint, it may not be possible to get the desired allocation. In that case, the post-allocation load / store multiple pass must fix the ldrd / strd instructions. They can either become ldm / stm instructions or be turned back into a pair of ldr / str instructions.

This is work in progress, not yet enabled.

llvm-svn: 73381
2009-06-15 08:28:29 +00:00
Evan Cheng 185c9ef0a2 Add an ARM-specific pre-allocation pass that re-schedules loads / stores from
consecutive addresses together. This makes it easier for the post-allocation pass
to form ldm / stm.

This is step 1. We are still missing a lot of ldm / stm opportunities because
register allocations are not done in the desired order. More enhancements
coming.

llvm-svn: 73291
2009-06-13 09:12:55 +00:00
Evan Cheng b6cf8dbb96 If a killed register is defined by an implicit_def, do not clear it since its live range may overlap another def of the same register.
llvm-svn: 73255
2009-06-12 21:34:26 +00:00
Evan Cheng d93b5b672f Mark some pattern-less instructions as neverHasSideEffects.
llvm-svn: 73252
2009-06-12 20:46:18 +00:00
Arnold Schwaighofer e3a018d707 Fix Bug 4278: X86-64 with -tailcallopt calling convention
out of sync with regular cc.

The only difference between the tail call cc and the normal
cc was that one parameter register - R9 - was reserved for
calling functions through a function pointer. Over time the
tail call cc has gotten out of sync with the regular cc. 

We can use R11, which is also caller-saved but not used as a
parameter register, for potential function pointers, and
remove the special tail call cc on x86-64.

llvm-svn: 73233
2009-06-12 16:26:57 +00:00
Anton Korobeynikov c745132865 Add testcase for register scavenger assertion fix in r72755
(double def due to livevars)

llvm-svn: 73096
2009-06-08 22:54:15 +00:00
Eli Friedman 0b387fbd1b Fix the run-line for this test to work correctly outside of x86.
llvm-svn: 73025
2009-06-07 09:44:19 +00:00
Eli Friedman 516479d6e7 Tweak the expansion code for BIT_CONVERT to generate better code
converting from an MMX vector to an i64.

llvm-svn: 73024
2009-06-07 09:41:57 +00:00
Eli Friedman 3234587213 Slightly generalize the code that handles shuffles of consecutive loads
on x86 to handle more cases.  Fix a bug in said code that would cause it 
to read past the end of an object.  Rewrite the code in 
SelectionDAGLegalize::ExpandBUILD_VECTOR to be a bit more general. 
Remove PerformBuildVectorCombine, which is no longer necessary with 
these changes.  In addition to simplifying the code, with this change, 
we can now catch a few more cases of consecutive loads.

llvm-svn: 73012
2009-06-07 06:52:44 +00:00
Eli Friedman be1bb0f8b1 PR3628: Add patterns to match SHL/SRL/SRA to the corresponding Altivec
instructions.

llvm-svn: 73009
2009-06-07 01:07:55 +00:00
Eli Friedman c61e357aa6 Fix the expansion for CONCAT_VECTORS so that it doesn't create illegal
types.

llvm-svn: 72993
2009-06-06 07:08:26 +00:00
Eli Friedman 75c496f920 Avoid crashing on a variable-index insertelement with element type i16.
llvm-svn: 72991
2009-06-06 06:32:50 +00:00
Eli Friedman 1b1844ad1f Get rid of some bogus patterns for X86vzmovl. Don't create VZEXT_MOVL
nodes for vectors with an i16 element type.  Add an optimization for 
building a vector which is all zeros/undef except for the bottom 
element, where the bottom element is an i8 or i16.

llvm-svn: 72988
2009-06-06 06:05:10 +00:00
Eli Friedman 868bd6ab52 Fix an obvious typo.
llvm-svn: 72987
2009-06-06 05:55:37 +00:00
Eli Friedman 6c101ebfa8 Get rid of a bogus pattern that interferes with optimization.
llvm-svn: 72985
2009-06-06 04:17:04 +00:00
Eli Friedman b45e8ce69a PR2598: make sure to expand illegal forms of integer/floating-point
conversions for x86, like <2 x i32> -> <2 x float> and <4 x i16> -> 
<4 x float>.

llvm-svn: 72983
2009-06-06 03:57:58 +00:00
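A minimal IR sketch (hypothetical names) of the kind of conversion the PR2598 fix expands instead of crashing on:

    define <2 x float> @conv(<2 x i32> %v) {
      %r = sitofp <2 x i32> %v to <2 x float>   ; illegal as a single x86 operation; now expanded
      ret <2 x float> %r
    }
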
Nate Begeman 624690c6b2 Adapt the x86 build_vector dagcombine to the current state of the legalizer.
build vectors with i64 elements will only appear on 32b x86 before legalize.
Since vector widening occurs during legalize, and produces i64 build_vector 
elements, the dag combiner is never run on these before legalize splits them
into 32b elements.

Teach the build_vector dag combine in x86 back end to recognize consecutive 
loads producing the low part of the vector.

Convert the two uses of TLI's consecutive load recognizer to pass LoadSDNodes
since that was required implicitly.

Add a testcase for the transform.

Old:
	subl	$28, %esp
	movl	32(%esp), %eax
	movl	4(%eax), %ecx
	movl	%ecx, 4(%esp)
	movl	(%eax), %eax
	movl	%eax, (%esp)
	movaps	(%esp), %xmm0
	pmovzxwd	%xmm0, %xmm0
	movl	36(%esp), %eax
	movaps	%xmm0, (%eax)
	addl	$28, %esp
	ret

New:
	movl	4(%esp), %eax
	pmovzxwd	(%eax), %xmm0
	movl	8(%esp), %eax
	movaps	%xmm0, (%eax)
	ret

llvm-svn: 72957
2009-06-05 21:37:30 +00:00
Evan Cheng 3158790e32 Changing allocation ordering from r3 ... r0 back to r0 ... r3. The order change no longer makes sense after the coalescing changes we have made since then.
llvm-svn: 72955
2009-06-05 19:08:58 +00:00
Dan Gohman 5c36f4f40c Fix an erroneous check for isFNeg; the FNeg case is handled
a few lines later on.

llvm-svn: 72904
2009-06-04 23:43:29 +00:00
Dan Gohman a5b9645c4b Split the Add, Sub, and Mul instruction opcodes into separate
integer and floating-point opcodes, introducing
FAdd, FSub, and FMul.

For now, the AsmParser, BitcodeReader, and IRBuilder all preserve
backwards compatibility, and the Core LLVM APIs preserve backwards
compatibility for IR producers. Most front-ends won't need to change
immediately.

This implements the first step of the plan outlined here:
http://nondot.org/sabre/LLVMNotes/IntegerOverflow.txt

llvm-svn: 72897
2009-06-04 22:49:04 +00:00
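A small IR sketch of the opcode split described above (current syntax; per the commit, the floating-point case previously reused the integer Mul opcode):

    define double @scale(double %a, double %b) {
      %r = fmul double %a, %b    ; dedicated floating-point multiply opcode after the split
      ret double %r
    }
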
Devang Patel 72a4d2fec1 Add new function attribute - noredzone.
Update code generator to use this attribute and remove DisableRedZone target option.
Update llc to set this attribute when -disable-red-zone command line option is used.

llvm-svn: 72894
2009-06-04 22:05:33 +00:00
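A minimal sketch (hypothetical function name) of how the new attribute appears in IR; per the commit above, llc -disable-red-zone sets it on functions:

    ; code for this function will not use the x86-64 red zone
    define void @interrupt_handler() noredzone {
      ret void
    }
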
Evan Cheng fa0ac19b82 RALinScan::attemptTrivialCoalescing() was returning a virtual register instead of the physical register it is allocated to. This resulted in virtual register(s) being added to the live-in sets.
llvm-svn: 72890
2009-06-04 20:53:36 +00:00
Evan Cheng 60fdf787a7 A value defined by an implicit_def can be live into a use BB. This is unfortunate. But the register allocator still has to add it to the live-in set of the use BB.
llvm-svn: 72888
2009-06-04 20:25:48 +00:00
Dan Gohman 1aa86203f6 Check in test changes that I accidentally left out of r72872.
llvm-svn: 72875
2009-06-04 18:22:31 +00:00
Eli Friedman 63488f1fbf PR3739, part 2: Use an explicit store to spill XMM registers. (Previously,
the code tried to use "push", which doesn't exist for XMM registers.)

llvm-svn: 72836
2009-06-04 02:32:04 +00:00
Eli Friedman 0cb0c78a26 PR3739, part 1: Disable the red zone on Win64.
llvm-svn: 72830
2009-06-04 02:02:01 +00:00
Evan Cheng 7f5976e11b Re-apply 72756 with fixes. One of those was introduced when we changed the MachineInstrBuilder::addReg() interface.
llvm-svn: 72826
2009-06-04 01:15:28 +00:00
Eli Friedman ee06b752f0 PR4317: Handle splits where the new block is unreachable correctly in
DominatorTreeBase::Split.

llvm-svn: 72810
2009-06-03 21:42:06 +00:00
Evan Cheng ad6f3ff2c0 For Darwin / x86_64, override -relocation-model=static to pic if the output is assembly since the Darwin assembler does not really support -static codegen.
I view this as a temporary workaround until the assembler / linker changes.

llvm-svn: 72806
2009-06-03 21:13:54 +00:00
Evan Cheng b39a6be77a Fix for PR4225: When the rewriter reuses a value in a physical register, it clears the register kill operand marker and its kill ops information. However, the cleared operand may be a def of a super-register. Clear the kill ops info for the super-register's sub-registers as well.
llvm-svn: 72758
2009-06-03 09:00:27 +00:00
Evan Cheng ab0c710fae Temporarily revert 72756 for now.
llvm-svn: 72757
2009-06-03 07:40:47 +00:00
Evan Cheng dfe6e689fd Fold preceding / trailing base inc / dec into the single load / store as well.
llvm-svn: 72756
2009-06-03 06:14:58 +00:00
Dan Gohman fc262babc3 Revert r72734. The Darwin assembler doesn't support the static
relocation model on x86-64. Higher level logic should override
the relocation model to PIC on x86_64-apple-darwin.

llvm-svn: 72746
2009-06-03 00:37:20 +00:00
Dan Gohman 760377effc Fix CodeGenPrepare's address-mode sinking to handle unusual
addresses, involving Base values which do not have Pointer type.
This fixes PR4297.

llvm-svn: 72739
2009-06-02 21:29:13 +00:00
Evan Cheng 448641d87c On Darwin x86_64, the small code model doesn't guarantee that a code address fits in 32 bits.
llvm-svn: 72734
2009-06-02 20:09:31 +00:00
Evan Cheng 7142ad75a1 (i64 (zext (srl GR32 8))) -> movzbl AH is not safe since srl 8 only clears the top 8 bits.
llvm-svn: 72618
2009-05-30 08:43:27 +00:00
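A rough IR-level analogue (hypothetical names) of the pattern above: after a logical shift right by 8, 24 significant bits remain, so the zero-extend cannot be replaced by a movzbl of AH, which keeps only 8 bits:

    define i64 @f(i32 %x) {
      %s = lshr i32 %x, 8
      %z = zext i32 %s to i64
      ret i64 %z
    }
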
Evan Cheng 8e97b85cde Remove an accidental commit.
llvm-svn: 72560
2009-05-29 05:28:52 +00:00
Evan Cheng 716e688fca More h-registers tricks: folding zext nodes.
llvm-svn: 72558
2009-05-29 01:44:43 +00:00
Evan Cheng 86cdb4b345 Do not try to create a MVT type of width 0.
llvm-svn: 72557
2009-05-28 23:52:18 +00:00
Eli Friedman e4b43e60b3 Add explicit test for PR4280.
llvm-svn: 72539
2009-05-28 21:04:35 +00:00
Eli Friedman 1f906448bc Add a testcase which got fixed by recent legalization work.
llvm-svn: 72517
2009-05-28 05:10:20 +00:00
Evan Cheng a9cda8abf2 Added an optimization that narrows load / op / store sequences where the 'op' is a bit-twiddling instruction and its second operand is an immediate. If the bits that are touched by 'op' can be handled with a narrower instruction, reduce the width of the load and store as well. This happens a lot with bitfield manipulation code.
e.g.
orl     $65536, 8(%rax)
=>
orb     $1, 10(%rax)

Since narrowing is not always a win, e.g. i32 -> i16 is a loss on x86, dag combiner consults with the target before performing the optimization.

llvm-svn: 72507
2009-05-28 00:35:15 +00:00
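A minimal IR sketch (hypothetical names) of a load / op / store sequence that qualifies for the narrowing described above: only bit 16 is touched, so the 32-bit load and store can be shrunk to a byte-wide access at the right offset:

    define void @set_flag(ptr %p) {
      %v = load i32, ptr %p
      %o = or i32 %v, 65536
      store i32 %o, ptr %p
      ret void
    }
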
Bill Wendling 0a9ea80013 This looks like it passes now.
llvm-svn: 72485
2009-05-27 17:43:21 +00:00
Torok Edwin be6a9a151a Fix PR4254.
The DAGCombiner created a negative shift amount, stored in an
unsigned variable. Later the optimizer eliminated the shift entirely as being
undefined.
Example: (srl (shl X, 56) 48). ShiftAmt is 4294967288.
Fix it by checking that the shift amount is positive, and storing it in a signed
variable.

llvm-svn: 72331
2009-05-23 17:29:48 +00:00
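An IR sketch (hypothetical names) of the shl/lshr pair from the example above; combining the two shifts must not produce a negative shift amount stored in an unsigned variable:

    define i64 @keep_byte(i64 %x) {
      %t = shl i64 %x, 56
      %r = lshr i64 %t, 48
      ret i64 %r
    }
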
Torok Edwin 7996339dd8 available_externally linkage is not local; this was confusing the code generator,
and it wasn't generating calls through @PLT for these functions.
hasLocalLinkage() is now false for available_externally.
I attempted to fix the inliner and dce to handle available_externally properly.
It passed make check.

llvm-svn: 72328
2009-05-23 14:06:57 +00:00
Eli Friedman 53a71147ba Fix test to account for legalization changes; I think this ends up
running an extra DAGCombine pass which improves the code a bit.

llvm-svn: 72326
2009-05-23 13:15:11 +00:00
Duncan Sands d6fb6501e3 Add a new codegen pass that normalizes dwarf exception handling
code in preparation for code generation.  The main thing it does
is handle the case when eh.exception calls (and, in a future
patch, eh.selector calls) are far away from landing pads.  Right
now in practice you only find eh.exception calls close to landing
pads: either in a landing pad (the common case) or in a landing
pad successor, due to loop passes shifting them about.  However
future exception handling improvements will result in calls far
from landing pads:
(1) Inlining of rewinds.  Consider the following case:
In function @f:
...
  invoke @g to label %normal unwind label %unwinds
...
unwinds:
  %ex = call i8* @llvm.eh.exception()
...

In function @g:
...
  invoke @something to label %continue unwind label %handler
...
handler:
  %ex = call i8* @llvm.eh.exception()
... perform cleanups ...
  "rethrow exception"

Now inline @g into @f.  Currently this is turned into:
In function @f:
...
  invoke @something to label %continue unwind label %handler
...
handler:
  %ex = call i8* @llvm.eh.exception()
... perform cleanups ...
  invoke "rethrow exception" to label %normal unwind label %unwinds
unwinds:
  %ex = call i8* @llvm.eh.exception()
...

However we would like to simplify invoke of "rethrow exception" into
a branch to the %unwinds label.  Then %unwinds is no longer a landing
pad, and the eh.exception call there is then far away from any landing
pads.

(2) Using the unwind instruction for cleanups.
It would be nice to have codegen handle the following case:
  invoke @something to label %continue unwind label %run_cleanups
...
handler:
... perform cleanups ...
  unwind

This requires turning "unwind" into a library call, which
necessarily takes a pointer to the exception as an argument
(this patch also does this unwind lowering).  But that means
you are using eh.exception again far from a landing pad.

(3) Bugpoint simplifications.  When bugpoint is simplifying
exception handling code it often generates eh.exception calls
far from a landing pad, which then causes codegen to assert.
Bugpoint then latches on to this assertion and loses sight
of the original problem.

Note that it is currently rare for this pass to actually do
anything.  And in fact it normally shouldn't do anything at
all given the code coming out of llvm-gcc!  But it does fire
a few times in the testsuite.  As far as I can see this is
almost always due to the LoopStrengthReduce codegen pass
introducing pointless loop preheader blocks which are landing
pads and only contain a branch to another block.  This other
block contains an eh.exception call.  So probably by tweaking
LoopStrengthReduce a bit this can be avoided.

llvm-svn: 72276
2009-05-22 20:36:31 +00:00
Eli Friedman 9030c35eb4 Fix for PR4235: to build a floating-point value from integer parts,
build an integer and cast that to a float.  This fixes a crash 
caused by trying to split an f32 into two f16's.

This changes the behavior in test/CodeGen/XCore/fneg.ll because that 
testcase now triggers a DAGCombine which converts the fneg into an integer
operation.  If someone is interested, it's probably possible to tweak 
the test to generate an actual fneg.

llvm-svn: 72162
2009-05-20 06:02:09 +00:00
Evan Cheng 1fbc2a4754 Fix test on non-darwin hosts.
llvm-svn: 72161
2009-05-20 05:45:36 +00:00
Evan Cheng 960983371c Try again. Allow call to immediate address for ELF or when in static relocation mode.
llvm-svn: 72160
2009-05-20 04:53:57 +00:00
Evan Cheng 61da18645b Cannot use immediate as call absolute target in PIC mode.
llvm-svn: 72154
2009-05-20 01:11:00 +00:00
Bob Wilson e666cc5206 Fix pr4058 and pr4059. Do not split i64 or double arguments between r3 and
the stack.  Patch by Sandeep Patel.

llvm-svn: 72106
2009-05-19 10:02:36 +00:00
Bob Wilson a2c462bbe9 Fix pr4091: Add support for "m" constraint in ARM inline assembly.
llvm-svn: 72105
2009-05-19 05:53:42 +00:00
Dan Gohman b81dd48fd2 Add nounwind to a few tests.
llvm-svn: 72002
2009-05-18 15:16:49 +00:00
Anton Korobeynikov 6de08cd093 Mark rotl/rotr as expand. This generates pretty ugly code, but this is better than nothing.
llvm-svn: 71976
2009-05-17 10:16:28 +00:00
Anton Korobeynikov 6b5523aec2 Typo
llvm-svn: 71975
2009-05-17 10:15:22 +00:00
Jakob Stoklund Olesen 9d7fb58581 Help DejaGnu avoid pipe-jam by producing less output from certain test cases.
When a test fails with more than a pipeful of output on stdout AND stderr, one
of the DejaGnu programs blocks. The problem can be avoided by redirecting
stdout to a file.

llvm-svn: 71919
2009-05-16 00:34:42 +00:00
Dan Gohman 6b43dd7664 Add nounwind to this test.
llvm-svn: 71734
2009-05-13 22:29:12 +00:00
Evan Cheng 85cca64dd7 If the header of an inner loop is aligned, do not align the outer loop header. We don't want to add nops in the outer loop for the sake of aligning the inner loop.
llvm-svn: 71609
2009-05-12 23:58:14 +00:00
Evan Cheng df1aeeeb90 Teach TransferDeadness to delete truly dead instructions if they do not produce side effects.
llvm-svn: 71606
2009-05-12 23:07:00 +00:00
Evan Cheng 79afe74195 Add nounwind.
llvm-svn: 71575
2009-05-12 18:35:43 +00:00
Evan Cheng c94de92c54 Fixed a bug in stack slot coloring with regs: do not update implicit use / def when doing forward / backward propagation.
llvm-svn: 71574
2009-05-12 18:31:57 +00:00
Bob Wilson 9e3d48f10d Fix pr4195: When iterating through predecessor blocks, break out of the loop
after finding the (unique) layout predecessor.  Sometimes a block may be listed
more than once, and processing it more than once in this loop can lead to
inconsistent values for FtTBB/FtFBB, since the AnalyzeBranch method does not
clear these values.  There's no point in continuing the loop regardless.
The testcase for this is reduced from the 2003-05-02-DependentPHI SingleSource
test.

llvm-svn: 71536
2009-05-12 03:48:10 +00:00
Dan Gohman d76d71a291 Factor the code for collecting IV users out of LSR into an IVUsers class,
and generalize it so that it can be used by IndVarSimplify. Implement the
base IndVarSimplify transformation code using IVUsers. This removes
TestOrigIVForWrap and associated code, as ScalarEvolution now has enough
builtin overflow detection and folding logic to handle all the same cases,
and more. Run "opt -iv-users -analyze -disable-output" on your favorite
loop for an example of what IVUsers does.

This lets IndVarSimplify eliminate IV casts and compute trip counts in
more cases. Also, this happens to finally fix the remaining testcases
in PR1301.

Now that IndVarSimplify is being more aggressive, it occasionally runs
into the problem where ScalarEvolutionExpander's code for avoiding
duplicate expansions makes it difficult to ensure that all expanded
instructions dominate all the instructions that will use them. As a
temporary measure, IndVarSimplify now uses a FixUsesBeforeDefs function
to fix up instructions inserted by SCEVExpander. Fortunately, this code
is contained, and can be easily removed once a more comprehensive
solution is available.

llvm-svn: 71535
2009-05-12 02:17:14 +00:00
Evan Cheng 78a4eb844b Teach LSR to optimize more loop exit compares, i.e. change them to use postinc iv value. Previously LSR would only optimize those which are in the loop latch block. However, if LSR can prove it is safe (and profitable), it's now possible to change those not in the latch blocks to use postinc values.
Also, if the compare is the only use, LSR would place the iv increment instruction before the compare instead of in the latch.

llvm-svn: 71485
2009-05-11 22:33:01 +00:00
Dale Johannesen b571463363 Fix PR4188. TailMerging can't tolerate inexact
successor info.

llvm-svn: 71478
2009-05-11 21:54:13 +00:00
Dan Gohman 5b3a56bfac Make this grep line a little more specific so that it doesn't
accidentally match something unrelated.

llvm-svn: 71458
2009-05-11 18:49:56 +00:00
Dan Gohman 9521cadff7 When scalarizing a vector BITCAST, check whether the operand has vector
type, rather than assume that it does. If the operand is not vector, it
shouldn't be run through ScalarizeVectorOp. This fixes one of the
testcases in PR3886.

llvm-svn: 71453
2009-05-11 18:30:42 +00:00
Dan Gohman faf75c8c9a Convert a subtract into a negate and an add when it helps x86
address folding.

llvm-svn: 71446
2009-05-11 18:02:53 +00:00
Dale Johannesen 02cb2bf2e3 Reverse a loop that is counting up to a maximum to
count down to 0 instead, under very restricted
circumstances.  Adjust 4 testcases in which this
optimization fires.

llvm-svn: 71439
2009-05-11 17:15:42 +00:00
Anton Korobeynikov 7706cb8d5d Add MSP430 test for PR4136
llvm-svn: 71392
2009-05-10 14:48:36 +00:00
Evan Cheng 51fa9c7138 Enable loop bb placement optimization.
llvm-svn: 71291
2009-05-08 23:35:49 +00:00
Chris Lattner f1d9b91434 Fix PR4152: asm constraint validation happens before dag combine, so we
need to work a bit to combine things like (x+c1+c2) into x+c3.

llvm-svn: 71232
2009-05-08 18:23:14 +00:00
Evan Cheng 2fa281106a Optimize code placement in loops to eliminate unconditional branches or move an unconditional branch to the outside of the loop. e.g.
///       A:
///       ...
///       <fallthrough to B>
///
///       B:  --> loop header
///       ...
///       jcc <cond> C, [exit]
///
///       C:
///       ...
///       jmp B
///
/// ==>
///
///       A:
///       ...
///       jmp B
///
///       C:  --> new loop header
///       ...
///       <fallthrough to B>
///
///       B:
///       ...
///       jcc <cond> C, [exit]

llvm-svn: 71209
2009-05-08 06:34:09 +00:00
Bob Wilson e20be4183c Fix pr4100. Do not remove no-op copies when they are dead. The register
scavenger gets confused about register liveness if it doesn't see them.
I'm not thrilled with this solution, but it only comes up when there are dead
copies in the code, which is something that hopefully doesn't happen much.

Here is what happens in pr4100: As shown in the following excerpt from the
debug output of llc, the source of a move gets reloaded from the stack,
inserting a new load instruction before the move.  Since that source operand
is a kill, the physical register is free to be reused for the destination
of the move.  The move ends up being a no-op, copying R3 to R3, so it is
deleted.  But, it leaves behind the load to reload %reg1028 into R3, and
that load is not updated to show that its destination operand (R3) is dead.
The scavenger gets confused by that load because it thinks that R3 is live.

Starting RegAlloc of: %reg1025<def,dead> = MOVr %reg1028<kill>, 14, %reg0, %reg0
  Regs have values: 
  Reloading %reg1028 into R3
  Last use of R3[%reg1028], removing it from live set
  Assigning R3 to %reg1025
  Register R3 [%reg1025] is never used, removing it from live set

Alternative solutions might be either marking the load as dead, or zapping
the load along with the no-op copy.  I couldn't see an easy way to do
either of those, though.

llvm-svn: 71196
2009-05-07 23:47:03 +00:00
Bill Wendling 759de47964 This doesn't fail.
llvm-svn: 71142
2009-05-07 01:41:42 +00:00
Bill Wendling 6144f63bd5 Temporarily revert r71010. It was causing massive failures during self-hosting.
llvm-svn: 71138
2009-05-07 01:27:25 +00:00
Evan Cheng cfc0513080 Do not use register as base ptr of pre- and post- inc/dec load / store nodes.
llvm-svn: 71098
2009-05-06 18:25:01 +00:00
Lang Hames ab68cf24bb Renamed Spiller classes (plus uses and related files) to VirtRegRewriter.
llvm-svn: 71057
2009-05-06 02:36:21 +00:00
Evan Cheng cfdbfccd91 Quotes should be printed before private prefix; some code clean up.
llvm-svn: 71032
2009-05-05 22:50:29 +00:00
Dan Gohman bb2f10759e If a MachineBasicBlock has multiple ways of reaching another block,
allow it to have multiple CFG edges to that block. This is needed
to allow MachineBasicBlock::isOnlyReachableByFallthrough to work
correctly. This fixes PR4126.

llvm-svn: 71018
2009-05-05 21:10:19 +00:00
Evan Cheng 9bf9002161 Enable stack coloring with regs at -O3.
llvm-svn: 71010
2009-05-05 20:30:36 +00:00
Chris Lattner be9fa506ad Add basic support for code generation of
addrspace(257) -> FS relative on x86.  Patch by Zoltan Varga!

llvm-svn: 70992
2009-05-05 18:52:19 +00:00
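A small IR sketch (hypothetical names, current syntax) of the new lowering: loads through addrspace(257) pointers are emitted as %fs-relative accesses on x86:

    define i32 @load_fs(ptr addrspace(257) %p) {
      %v = load i32, ptr addrspace(257) %p    ; e.g. movl %fs:(%eax), %eax
      ret i32 %v
    }
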
Dan Gohman bb525f7e02 X86FastISel doesn't support the -tailcallopt ABI.
llvm-svn: 70902
2009-05-04 19:50:33 +00:00
Anton Korobeynikov 2d1e7321f6 Fix code emission for conditional branches.
Patch by Collin Winter!

llvm-svn: 70898
2009-05-04 19:10:38 +00:00
Dan Gohman ff08995589 Previously, RecursivelyDeleteDeadInstructions provided an option
of returning a list of pointers to Values that are deleted. This was
unsafe, because the pointers in the list are, by nature of what
RecursivelyDeleteDeadInstructions does, always dangling. Replace this
with a simple callback mechanism. This may eventually be removed if
all clients can reasonably be expected to use CallbackVH.

Use this to factor out the dead-phi-cycle-elimination code from an LSR
utility function, and generalize it to use the
RecursivelyDeleteTriviallyDeadInstructions utility function.

This makes LSR more aggressive about eliminating dead PHI cycles;
adjust tests to either be less trivial or to simply expect fewer
instructions.

llvm-svn: 70636
2009-05-02 18:29:22 +00:00
Chris Lattner e01821edbd 'The attached patch fixes an issue where llc -march=cpp fails with
"Invalid primitive type" on input containing the x86_fp80 type.'
Patch by Collin Winter!

llvm-svn: 70610
2009-05-01 23:54:26 +00:00
Evan Cheng 99578674fd Mark MOV8mr_NOREX and MOV8rm_NOREX as mayStore / mayLoad respectively.
llvm-svn: 70461
2009-04-30 00:58:57 +00:00
Chris Lattner 5ab42e93c4 fix a regression handling indirect results: these need to be considered
memory operands otherwise the writebacks get lost when the inline asm 
doesn't otherwise have side effects.  This fixes rdar://6839427, though
clang really shouldn't generate these anymore.

llvm-svn: 70455
2009-04-30 00:48:50 +00:00
Nate Begeman 7e6e352735 Fix infinite recursion in the C++ code which handles movddup by making it unnecessary.
llvm-svn: 70425
2009-04-29 22:47:44 +00:00
Evan Cheng 9cce299cc8 spillPhysRegAroundRegDefsUses() may have invalidated iterators stored in fixed_ IntervalPtrs. Reset them.
llvm-svn: 70378
2009-04-29 07:16:34 +00:00
Chris Lattner 7d10386113 Disable the load-shrinking optimization from looking at
anything larger than 64-bits, avoiding a crash.  This should
really be fixed to use APInts, though type legalization happens
to help us out and we get good code on the attached testcase at
least.

This fixes rdar://6836460

llvm-svn: 70360
2009-04-29 03:45:07 +00:00
Bill Wendling 084669a1c9 Second attempt:
Massive check in. This changes the "-fast" flag to "-O#" in llc. If you want to
use the old behavior, the flag is -O0. This change allows for finer-grained
control over which optimizations are run at different -O levels.

Most of this work was pretty mechanical. The majority of the fixes came from
verifying that a "fast" variable wasn't used anymore. The JIT still uses a
"Fast" flag. I'll change the JIT with a follow-up patch.

llvm-svn: 70343
2009-04-29 00:15:41 +00:00
Anton Korobeynikov dac88bae4f Properly print 'P' modifier on inline asm memory operands.
This should fix PR3379 and PR4064.
Patch inspired by Edwin Török!

llvm-svn: 70328
2009-04-28 21:49:33 +00:00
Evan Cheng 7e09994cb8 Fix PR4034. Bug in LiveInterval::join when it's compacting new valno's.
llvm-svn: 70291
2009-04-28 06:24:09 +00:00
Evan Cheng c3884be983 Fix for PR4051. When 2address pass delete an instruction, update kill info when necessary.
llvm-svn: 70279
2009-04-28 02:12:36 +00:00
Bill Wendling 56f2987a87 r70270 isn't ready yet. Back this out. Sorry for the noise.
llvm-svn: 70275
2009-04-28 01:04:53 +00:00
Bill Wendling d0ae15946c Massive check in. This changes the "-fast" flag to "-O#" in llc. If you want to
use the old behavior, the flag is -O0. This change allows for finer-grained
control over which optimizations are run at different -O levels.

Most of this work was pretty mechanical. The majority of the fixes came from
verifying that a "fast" variable wasn't used anymore. The JIT still uses a
"Fast" flag. I'm not 100% sure if it's necessary to change it there...

llvm-svn: 70270
2009-04-28 00:21:31 +00:00
Evan Cheng 093e4c578d Fix PR4076. Correctly create live interval of physical register with two-address update.
llvm-svn: 70245
2009-04-27 20:42:46 +00:00
Dan Gohman e99f98262c Permit ChangeCompareStride to rewrite a comparison when the factor
between the comparison's iv stride and the candidate stride is
exactly -1.

llvm-svn: 70244
2009-04-27 20:35:32 +00:00
Dan Gohman 7646637379 Teach getZeroExtendExpr and getSignExtendExpr to use trip-count
information to simplify [sz]ext({a,+,b}) to {zext(a),+,[zs]ext(b)},
as appropriate.

These functions and the trip count code each call into the other, so
this requires careful handling to avoid infinite recursion. During
the initial trip count computation, conservative SCEVs are used,
which are subsequently discarded once the trip count is actually
known.

Among other benefits, this change lets LSR automatically eliminate
some unnecessary zext-inreg and sext-inreg operation where the
operand is an induction variable.

llvm-svn: 70241
2009-04-27 20:16:15 +00:00
Nate Begeman c79f731531 Revert accidental testcase reduction
llvm-svn: 70226
2009-04-27 18:42:40 +00:00
Nate Begeman 8d6d4b9289 2nd attempt, fixing SSE4.1 issues and implementing feedback from duncan.
PR2957

ISD::VECTOR_SHUFFLE now stores an array of integers representing the shuffle
mask internal to the node, rather than taking a BUILD_VECTOR of ConstantSDNodes
as the shuffle mask.  A value of -1 represents UNDEF.

In addition to eliminating the creation of illegal BUILD_VECTORS just to 
represent shuffle masks, we are better about canonicalizing the shuffle mask,
resulting in substantially better code for some classes of shuffles.

llvm-svn: 70225
2009-04-27 18:41:29 +00:00
Evan Cheng 0f85bd368c Fix PR4056. It's possible a physical register def is dead if its implicit use is deleted by two-address pass.
llvm-svn: 70213
2009-04-27 17:36:47 +00:00
Dan Gohman 3266391e86 Fix the syntax for a PR number in a test.
llvm-svn: 70208
2009-04-27 15:08:34 +00:00
Dan Gohman be36f5ccda When transforming sext(trunc(load(x))) into sext(smaller load(x)),
the trunc is directly replaced with the smaller load, so don't
try to create a new sext node. This fixes PR4050.

llvm-svn: 70179
2009-04-27 02:00:55 +00:00
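A rough IR-level analogue (hypothetical names) of the DAG pattern the commit above describes; during ISel the trunc of the load becomes a narrower i16 load that is then sign-extended:

    define i32 @f(ptr %p) {
      %l = load i32, ptr %p
      %t = trunc i32 %l to i16
      %s = sext i16 %t to i32
      ret i32 %s
    }
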
Evan Cheng 362acf8a56 Do not share a single unknown val# for all the live ranges merged into a physical sub-register live interval. When the coalescer is merging a clobbered virtual register live interval into a physical register live interval, give each virtual register val# a separate val# in the physical register live interval. Otherwise, the coalescer would lose track of the definition information it needs to make correct coalescing decisions.
llvm-svn: 70026
2009-04-25 09:25:19 +00:00
Dale Johannesen 56cb14c874 Fix PR 4057, a crash doing float->char const folding.
This particular one is undefined behavior (although this
isn't related to the crash), so it will no longer do it
at compile time, which seems better.

llvm-svn: 69990
2009-04-24 21:34:13 +00:00
Rafael Espindola c1396a2313 Fix PR 4004 by including the call to __tls_get_addr in X86tlsaddr. This is not
very elegant, but neither is the tls specification :-(

llvm-svn: 69968
2009-04-24 12:59:40 +00:00
Rafael Espindola b93db668b3 Revert 69952. Causes testsuite failures on linux x86-64.
llvm-svn: 69967
2009-04-24 12:40:33 +00:00
Nate Begeman bb881d66f4 PR2957
ISD::VECTOR_SHUFFLE now stores an array of integers representing the shuffle
mask internal to the node, rather than taking a BUILD_VECTOR of ConstantSDNodes
as the shuffle mask.  A value of -1 represents UNDEF.

In addition to eliminating the creation of illegal BUILD_VECTORS just to 
represent shuffle masks, we are better about canonicalizing the shuffle mask,
resulting in substantially better code for some classes of shuffles.

A clean up of x86 shuffle code, and some canonicalizing in DAGCombiner is next.

llvm-svn: 69952
2009-04-24 03:42:54 +00:00
Dan Gohman 723f175bc9 Explicitly pass -tailcallopt=false to these tests so that they
work as intended no matter what the default setting of that
option is.

llvm-svn: 69911
2009-04-23 19:39:41 +00:00
Evan Cheng 1a99a5f501 It has finally happened. Spiller is now using live interval info.
This fixes a very subtle bug. A vr defined by an implicit_def is allowed to overlap with any register since it doesn't actually modify anything. However, if it's used as a two-address use, its live range can be extended and it can be spilled. The spiller must take care not to emit a reload for the value number that's defined by the implicit_def. This is both a correctness and a performance issue.

llvm-svn: 69743
2009-04-21 22:46:52 +00:00
Evan Cheng d67efaa847 Added a linearscan register allocation optimization. When the register allocator spill an interval with multiple uses in the same basic block, it creates a different virtual register for each of the reloads. e.g.
%reg1498<def> = MOV32rm %reg1024, 1, %reg0, 12, %reg0, Mem:LD(4,4) [sunkaddr39 + 0]
        %reg1506<def> = MOV32rm %reg1024, 1, %reg0, 8, %reg0, Mem:LD(4,4) [sunkaddr42 + 0]
        %reg1486<def> = MOV32rr %reg1506
        %reg1486<def> = XOR32rr %reg1486, %reg1498, %EFLAGS<imp-def,dead>
        %reg1510<def> = MOV32rm %reg1024, 1, %reg0, 4, %reg0, Mem:LD(4,4) [sunkaddr45 + 0]

=>

        %reg1498<def> = MOV32rm %reg2036, 1, %reg0, 12, %reg0, Mem:LD(4,4) [sunkaddr39 + 0]
        %reg1506<def> = MOV32rm %reg2037, 1, %reg0, 8, %reg0, Mem:LD(4,4) [sunkaddr42 + 0]
        %reg1486<def> = MOV32rr %reg1506
        %reg1486<def> = XOR32rr %reg1486, %reg1498, %EFLAGS<imp-def,dead>
        %reg1510<def> = MOV32rm %reg2038, 1, %reg0, 4, %reg0, Mem:LD(4,4) [sunkaddr45 + 0]

From linearscan's point of view, reg2036, 2037, and 2038 are separate registers, each "killed" after a single use. The reloaded register is available but is often clobbered right away; e.g. in this case reg1498 is allocated EAX while reg2036 is allocated RAX. This means we end up with multiple reloads from the same stack slot in the same basic block.

Now linearscan recognizes there are other reloads from the same stack slot in the same BB, so it'll "downgrade" RAX (and its aliases) after reg2036 is allocated until the next reload (reg2037) is done. This greatly increases the likelihood that reloads from the stack slot are reused.

This speeds up sha1 from OpenSSL by 5.8%. It is also an across the board win for SPEC2000 and 2006.

llvm-svn: 69585
2009-04-20 08:01:12 +00:00
Chris Lattner 1e7da23983 testcase for PR3898
llvm-svn: 69473
2009-04-18 20:49:22 +00:00
Duncan Sands e4ff21ba4b Don't try to make BUILD_VECTOR operands have the same
type as the vector element type: allow them to be of
a wider integer type than the element type all the way
through the system, and not just as far as LegalizeDAG.
This should be safe because it used to be this way
(the old type legalizer would produce such nodes), so
backends should be able to handle it.  In fact only
targets which have legal vector types with an illegal
promoted element type will ever see this (eg: <4 x i16>
on ppc).  This fixes a regression with the new type
legalizer (vec_splat.ll).  Also, treat SCALAR_TO_VECTOR
the same as BUILD_VECTOR.  After all, it is just a
special case of BUILD_VECTOR.

llvm-svn: 69467
2009-04-18 20:16:54 +00:00
Dale Johannesen 2f6263fea3 Adjust XFAIL syntax, maybe that will help. The other
way worked for me...

llvm-svn: 69414
2009-04-18 02:01:23 +00:00
Dale Johannesen e34fb6b5ce patch 69408 breaks this by removing the opportunity
for the optimization it's testing to kick in (although
it improves the code, getting rid of all spills).
I don't understand the optimization well enough to
rescue the test, so XFAILing.

llvm-svn: 69409
2009-04-18 00:11:50 +00:00
Bob Wilson 9c1ec76084 Rename file to have the correct suffix.
llvm-svn: 69380
2009-04-17 20:40:20 +00:00
Bob Wilson a4c2290e5f Use CallConvLower.h and TableGen descriptions of the calling conventions
for ARM.  Patch by Sandeep Patel.

llvm-svn: 69371
2009-04-17 19:07:39 +00:00
Rafael Espindola 355fe12c82 For general dynamic TLS access we must use
leaq	foo@TLSGD(%rip), %rdi

as part of the instruction sequence. Using a register other than %rdi and then
copying it to %rdi is not valid.

llvm-svn: 69350
2009-04-17 14:35:58 +00:00
Evan Cheng b96a1082a9 Teach spiller to unfold instructions which modref spill slot when a scratch
register is available and when it's profitable.

e.g.
     xorq  %r12<kill>, %r13
     addq  %rax, -184(%rbp)
     addq  %r13, -184(%rbp)
==>
     xorq  %r12<kill>, %r13
     movq  -184(%rbp), %r12
     addq  %rax, %r12
     addq  %r13, %r12
     movq  %r12, -184(%rbp)

Two more instructions, but fewer memory accesses. It can also open up
opportunities for more optimizations.

llvm-svn: 69341
2009-04-17 01:29:40 +00:00
Rafael Espindola 5e42177a0f fix PR3995. A scale must be 1, 2, 4 or 8.
llvm-svn: 69284
2009-04-16 12:34:53 +00:00
Dan Gohman 0a40ad93a9 Expand GEPs in ScalarEvolution expressions. SCEV expressions can now
have pointer types, though in contrast to C pointer types, SCEV
addition is never implicitly scaled. This not only eliminates the
need for special code like IndVars' EliminatePointerRecurrence
and LSR's own GEP expansion code, it also does a better job because
it lets the normal optimizations handle pointer expressions just
like integer expressions.

Also, since LLVM IR GEPs can't directly index into multi-dimensional
VLAs, moving the GEP analysis out of client code and into the SCEV
framework makes it easier for clients to handle multi-dimensional
VLAs the same way as other arrays.

Some existing regression tests show improved optimization.
test/CodeGen/ARM/2007-03-13-InstrSched.ll in particular improved to
the point where if-conversion started kicking in; I turned it off
for this test to preserve the intent of the test.

llvm-svn: 69258
2009-04-16 03:18:22 +00:00
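A rough C sketch of the multi-dimensional VLA situation mentioned above (names are illustrative): LLVM IR GEPs can't index a[n][m] directly, so clients previously had to re-derive the i*m + j arithmetic themselves; with GEPs expanded inside SCEV, the pointer recurrence (step 4 in the inner loop, step m*4 in the outer loop, for 4-byte ints) falls out of the normal analysis.

    void clear(int n, int m, int a[n][m]) {
      for (int i = 0; i < n; ++i)
        for (int j = 0; j < m; ++j)
          a[i][j] = 0;   /* address = a + (i*m + j)*sizeof(int) */
    }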
Dan Gohman 86e43e7999 Fix the RUN lines so that this test actually tests.
llvm-svn: 69096
2009-04-14 22:50:17 +00:00
Dan Gohman 62f4498646 For the h-register addressing-mode trick, use the correct value for
any non-address uses of the address value. This fixes 186.crafty.

llvm-svn: 69094
2009-04-14 22:45:05 +00:00
Dan Gohman e5cd1fcdb9 When the result of an EXTRACT_SUBREG, INSERT_SUBREG, or SUBREG_TO_REG
operator is used by a CopyToReg to export the value to a different
block, don't reuse the CopyToReg's register for the subreg operation
result if the register isn't precisely the right class for the
subreg operation.

Also, rename the h-registers.ll test, now that there is more
than one.

llvm-svn: 69087
2009-04-14 22:17:14 +00:00
Evan Cheng dfbbf5c043 Some of GR8_NOREX registers are only available in 64-bit mode.
llvm-svn: 69049
2009-04-14 16:57:43 +00:00
Dale Johannesen b866ce73b2 Use the output of the asm so the optimizer won't
delete it.

llvm-svn: 69018
2009-04-14 01:51:40 +00:00
Evan Cheng 9787183b9b Fix PR3934 part 2. findOnlyInterestingUse() was not setting IsCopy and IsDstPhys, which are returned by value and used by the callee. This happened to work on the earlier test cases because of a logic error on the caller side.
llvm-svn: 69006
2009-04-14 00:32:25 +00:00
Evan Cheng f0843803a0 PR3934: Fix a bogus two-address pass assertion.
llvm-svn: 68979
2009-04-13 20:04:24 +00:00
Dan Gohman 57d6bd36b2 Implement x86 h-register extract support.
- Add patterns for h-register extract, which avoids a shift and mask,
   and in some cases a temporary register.
 - Add address-mode matching for turning (X>>(8-n))&(255<<n), where
   n is a valid address-mode scale value, into an h-register extract
   and a scaled-offset address.
 - Replace X86's MOV32to32_ and related instructions with the new
   target-independent COPY_TO_SUBREG instruction.

On x86-64 there are complicated constraints on h registers, and
CodeGen doesn't currently provide a high-level way to express all of them,
so they are handled with a bunch of special code. This code currently only
supports extracts where the result is used by a zero-extend or a store,
though these are fairly common.

These transformations are not always beneficial; since there are only
4 h registers, they sometimes require extra move instructions, and
this sometimes increases register pressure because it can force out
values that would otherwise be in one of those registers. However,
this appears to be relatively uncommon.

llvm-svn: 68962
2009-04-13 16:09:41 +00:00
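A hedged C illustration of the (X>>(8-n))&(255<<n) pattern the commit above matches (the function and names are made up): the masked shift below is just the second byte of x scaled by 4, so on x86 it can become an h-register extract plus a scaled-index address instead of an explicit shift and mask.

    unsigned lookup(const unsigned *table, unsigned x) {
      /* (x >> 6) & (255 << 2) == ((x >> 8) & 255) * 4, i.e. byte 1 of x
         scaled by sizeof(unsigned); the scale folds into the address. */
      return table[(x >> 8) & 255];
    }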
Rafael Espindola 6d6c6043ea X86-64 TLS support for local exec and initial exec.
llvm-svn: 68947
2009-04-13 13:02:49 +00:00
Chris Lattner 184f1be4a8 Add a new "available_externally" linkage type. This is intended
to support C99 inline, GNU extern inline, etc.  Related bugzilla's
include PR3517, PR3100, & PR2933.  Nothing uses this yet, but it
appears to work.

llvm-svn: 68940
2009-04-13 05:44:34 +00:00
Rafael Espindola 7186f20a1b In X86DAGToDAGISel::MatchWrapper, if the base or index is set, avoid matching
only if the symbolic address is RIP-relative.

llvm-svn: 68924
2009-04-12 23:00:38 +00:00
Rafael Espindola e4bd8904f7 Add tests for the parts of X86-64 TLS that are already implemented.
llvm-svn: 68901
2009-04-12 10:43:41 +00:00
Chris Lattner ce6bcf0847 fix a cross-block fastisel crash handling overflow intrinsics.
See comment for details.  This fixes rdar://6772169

llvm-svn: 68890
2009-04-12 07:51:14 +00:00
Chris Lattner 4d59f88e60 move a target-specific test into its directory so it isn't run if you
don't configure the ARM target in.

llvm-svn: 68843
2009-04-10 23:58:38 +00:00
Chris Lattner 30c3de6461 fix two problems with machine sinking:
1. Sinking would crash when the first instruction of a block was
   sunk due to iterator problems.
2. Instructions could be sunk to their current block, causing an
   infinite loop.

This fixes PR3968

llvm-svn: 68787
2009-04-10 16:38:36 +00:00
Rafael Espindola bb834f0929 Don't fold a load if the other operand is a TLS address.
With this we generate

movl    %gs:0, %eax
leal    i@NTPOFF(%eax), %eax

instead of

movl    $i@NTPOFF, %eax
addl    %gs:0, %eax

llvm-svn: 68778
2009-04-10 10:09:34 +00:00
Bob Wilson 51856173c8 Fix pr3954. The register scavenger asserts for inline assembly with
register destinations that are tied to source operands.  The
TargetInstrDescr::findTiedToSrcOperand method silently fails for inline
assembly.  The existing MachineInstr::isRegReDefinedByTwoAddr was very
close to doing what is needed, so this revision makes a few changes to
that method and also renames it to isRegTiedToUseOperand (for consistency
with the very similar isRegTiedToDefOperand and because it handles both
two-address instructions and inline assembly with tied registers).

llvm-svn: 68714
2009-04-09 17:16:43 +00:00
Chris Lattner a725028d41 reg0 references are not real registers. This fixes a crash on the
attached testcase.

llvm-svn: 68712
2009-04-09 16:50:43 +00:00
Dan Gohman 0e8d199f91 Generalize ExtendUsesToFormExtLoad to be usable for ANY_EXTEND,
in addition to ZERO_EXTEND and SIGN_EXTEND. Fix a bug in the
way it checked for live-out values, and simplify the way it
finds users by using SDNode::use_iterator's (relatively) new
features. Also, make it slightly more permissive on targets
with free truncates.

In SelectionDAGBuild, avoid creating ANY_EXTEND nodes that are
larger than necessary. If the target's SwitchAmountTy has
enough bits, use it. This exposes the truncate to optimization
early, enabling more optimizations.

llvm-svn: 68670
2009-04-09 03:51:29 +00:00
Rafael Espindola 3b2df10c9e Re-apply 68552.
Tested by bootstrapping llvm-gcc and using that to build llvm.

llvm-svn: 68645
2009-04-08 21:14:34 +00:00
Bob Wilson 8462791237 Add testcase for PR3795.
llvm-svn: 68620
2009-04-08 18:00:55 +00:00
Duncan Sands 5a82613db0 Soft float support for FREM.
llvm-svn: 68614
2009-04-08 16:20:57 +00:00
Duncan Sands fb438caac6 Soft float support for undef. Reported by Xerxes Rånby.
llvm-svn: 68607
2009-04-08 13:33:37 +00:00
Dan Gohman 9e7a137d01 Fully escape the grep string for this test.
llvm-svn: 68580
2009-04-08 00:54:40 +00:00
Dan Gohman bb4ff96e89 Update this test for recent codegen improvements. CodeGen is now
using an lea in place of a mov and an add for this test.

llvm-svn: 68579
2009-04-08 00:51:11 +00:00
Dan Gohman ad3e549a53 Implement support for using modeling implicit-zero-extension on x86-64
with SUBREG_TO_REG, teach SimpleRegisterCoalescing to coalesce
SUBREG_TO_REG instructions (which are similar to INSERT_SUBREG
instructions), and teach the DAGCombiner to take advantage of this on
targets which support it. This eliminates many redundant
zero-extension operations on x86-64.

This adds a new TargetLowering hook, isZExtFree. It's similar to
isTruncateFree, except it only applies to actual definitions, and not
no-op truncates which may not zero the high bits.

Also, this adds a new optimization to SimplifyDemandedBits: transform
operations like x+y into (zext (add (trunc x), (trunc y))) on targets
where all the casts are no-ops. In contexts where the high part of the
add is explicitly masked off, this allows the mask operation to be
eliminated. Fix the DAGCombiner to avoid undoing these transformations
to eliminate casts on targets where the casts are no-ops.

Also, this adds a new two-address lowering heuristic. Since
two-address lowering runs before coalescing, it helps to be able to
look through copies when deciding whether commuting and/or
three-address conversion are profitable.

Also, fix a bug in LiveInterval::MergeInClobberRanges. It didn't handle
the case that a clobber range extended both before and beyond an
existing live range. In that case, multiple live ranges need to be
added. This was exposed by the new subreg coalescing code.

Remove 2008-05-06-SpillerBug.ll. It was bugpoint-reduced, and the
spiller behavior it was looking for no longer occurs with the new
instruction selection.

llvm-svn: 68576
2009-04-08 00:15:30 +00:00
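A small C sketch of the SimplifyDemandedBits case described above (illustrative only): only the low 32 bits of the 64-bit sum are demanded, so the add can be narrowed to 32 bits, and because 32-bit operations on x86-64 implicitly zero the upper 32 bits, the final zero-extension and the explicit mask both disappear.

    unsigned long low32_sum(unsigned long x, unsigned long y) {
      return (x + y) & 0xffffffffUL;   /* -> a single addl, no andq needed */
    }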
Bill Wendling 4aa25b79f9 Temporarily revert r68552. This was causing a failure in the self-hosting LLVM
builds.

--- Reverse-merging (from foreign repository) r68552 into '.':
U    test/CodeGen/X86/tls8.ll
U    test/CodeGen/X86/tls10.ll
U    test/CodeGen/X86/tls2.ll
U    test/CodeGen/X86/tls6.ll
U    lib/Target/X86/X86Instr64bit.td
U    lib/Target/X86/X86InstrSSE.td
U    lib/Target/X86/X86InstrInfo.td
U    lib/Target/X86/X86RegisterInfo.cpp
U    lib/Target/X86/X86ISelLowering.cpp
U    lib/Target/X86/X86CodeEmitter.cpp
U    lib/Target/X86/X86FastISel.cpp
U    lib/Target/X86/X86InstrInfo.h
U    lib/Target/X86/X86ISelDAGToDAG.cpp
U    lib/Target/X86/AsmPrinter/X86ATTAsmPrinter.cpp
U    lib/Target/X86/AsmPrinter/X86IntelAsmPrinter.cpp
U    lib/Target/X86/AsmPrinter/X86ATTAsmPrinter.h
U    lib/Target/X86/AsmPrinter/X86IntelAsmPrinter.h
U    lib/Target/X86/X86ISelLowering.h
U    lib/Target/X86/X86InstrInfo.cpp
U    lib/Target/X86/X86InstrBuilder.h
U    lib/Target/X86/X86RegisterInfo.td

llvm-svn: 68560
2009-04-07 22:35:25 +00:00
Rafael Espindola 1edda06792 Reduce code duplication on the TLS implementation.
This introduces a small regression on the generated code
quality in the case we are just computing addresses, not
loading values.

Will work on it and on X86-64 support.

llvm-svn: 68552
2009-04-07 21:37:46 +00:00
Dan Gohman ca93aabeba Don't attempt to handle aggregate argument values in FastISel; let
SelectionDAG do those. This fixes PR3955.

llvm-svn: 68546
2009-04-07 20:40:11 +00:00
Bob Wilson 0669f6d295 Handle 'a' modifier in ARM inline assembly.
Patch by Richard Pennington.

llvm-svn: 68464
2009-04-06 21:46:51 +00:00
Nick Lewycky 4717538b1c Try SSE2?
llvm-svn: 68423
2009-04-04 10:24:24 +00:00
Nick Lewycky 734dee287d Fix test on non-x86 platforms.
llvm-svn: 68419
2009-04-04 07:20:43 +00:00
Dan Gohman 8bff8a1e87 Fix a TargetLowering optimization so that it doesn't duplicate
loads when an input node has multiple uses.

llvm-svn: 68398
2009-04-03 20:11:30 +00:00
Mon P Wang 9c186c5d27 Added an x86 dag combine to increase the chances to use a
movq for v2i64 on x86-32.

llvm-svn: 68368
2009-04-03 02:43:30 +00:00
Bob Wilson cf1ec2cc68 Fix PR3862: Recognize some ARM-specific constraints for immediates in inline
assembly.

llvm-svn: 68218
2009-04-01 17:58:54 +00:00
Evan Cheng 0d551591ea Fully general expansion of integer shift of any size.
llvm-svn: 68134
2009-03-31 19:39:24 +00:00
Dan Gohman 6161b3ccf6 Add an explicit -asm-verbose to these tests, to make it
possible to run the tests with -asm-verbose defaulting
to false.

llvm-svn: 68124
2009-03-31 18:20:47 +00:00
Owen Anderson 4486c1fac0 Remove the "fast" cases for spill and restore point determination, as these were subtly wrong in obscure cases. Patch the testcase
to account for this change.

llvm-svn: 68093
2009-03-31 08:27:09 +00:00
Dan Gohman 97a20b8dbf Fix live-out reg logic to not insert over-aggressive AssertZExt
instructions. This fixes lua.

llvm-svn: 68083
2009-03-31 01:38:29 +00:00
Evan Cheng 09f5be8146 Turn a 2-address instruction into a 3-address one when it's profitable even if the two-address operand is killed.
e.g.
%reg1024<def> = MOV r1
%reg1025<def> = ADD %reg1024, %reg1026
r0            = MOV %reg1025

If it's not possible / profitable to commute ADD, then turning ADD into a LEA saves a copy.

llvm-svn: 68065
2009-03-30 21:34:07 +00:00
Anton Korobeynikov 71278a5be8 Tweak test for recent relro stuff
llvm-svn: 68035
2009-03-30 15:28:40 +00:00
Evan Cheng 471ed6e460 Forgot this test.
llvm-svn: 68025
2009-03-30 06:17:34 +00:00
Anton Korobeynikov f3cf04f900 Testcase for recent ro/relocs stuff
llvm-svn: 68008
2009-03-29 17:14:57 +00:00
Duncan Sands d21581eaa1 Fix PR3899: add support for extracting floats from vectors
when using -soft-float.
Based on a patch by Jakob Stoklund Olesen.

llvm-svn: 67996
2009-03-29 13:51:06 +00:00
Arnold Schwaighofer e622cbf385 Make check in CheckTailCallReturnConstraints for ignorable instructions between
a CALL and a RET node more generic. Add a test for tail calls with a void
return.

llvm-svn: 67943
2009-03-28 12:36:29 +00:00
Arnold Schwaighofer 83d5420d02 Enable tail call optimization for functions that return a struct (bug 3664) and for functions that return types that need extending (e.g. i1).
llvm-svn: 67934
2009-03-28 08:33:27 +00:00
Evan Cheng fd81c73cde Optimize some 64-bit multiplication by constants into two lea's or one lea + shl since imulq is slow (latency 5). e.g.
x * 40
=>
shlq    $3, %rdi
leaq    (%rdi,%rdi,4), %rax

This has the added benefit of allowing more multiplies to be folded into addressing modes. e.g.
a * 24 + b
=>
leaq    (%rdi,%rdi,2), %rax
leaq    (%rsi,%rax,8), %rax

llvm-svn: 67917
2009-03-28 05:57:29 +00:00
Dan Gohman 3f50cb6cd4 Fix this test so that it doesn't spuriously fail due to some
unrelated debugging output happening to contain the string "store".

llvm-svn: 67849
2009-03-27 16:17:22 +00:00
Evan Cheng 99c16729d3 Add -march=x86.
llvm-svn: 67783
2009-03-26 23:03:32 +00:00
Bill Wendling 55a2cd736d Add -f to RUN line.
llvm-svn: 67744
2009-03-26 06:17:54 +00:00
Chris Lattner f661d328c4 no need for eh info
llvm-svn: 67740
2009-03-26 05:51:18 +00:00
Bill Wendling 996749e912 Add testcase for r67728.
llvm-svn: 67729
2009-03-26 01:52:47 +00:00
Evan Cheng 1e550e15cc Add a test case for PR3779: when to promote the function return value.
llvm-svn: 67702
2009-03-25 20:30:19 +00:00
Evan Cheng 2e9f42bed5 Revert 67132. This is breaking some objective-c apps.
Also fixes SDISel so it *does not* force-promote the return value if the function is not marked signext / zeroext.

llvm-svn: 67701
2009-03-25 20:20:11 +00:00
Evan Cheng 5e5a63cf8f CodeGen still defaults to non-verbose asm, but llc now overrides it and defaults to verbose.
llvm-svn: 67668
2009-03-25 01:47:28 +00:00
Dan Gohman 9fc30d5c30 Add a testcase for the scheduling heuristic introduced in r67586.
llvm-svn: 67622
2009-03-24 16:38:27 +00:00
Evan Cheng a774a99245 Do not emit comments unless -asm-verbose.
llvm-svn: 67580
2009-03-24 00:17:40 +00:00
Evan Cheng 7fe1b0f50f Fix a bug in spill weight computation. If the alias is a super-register, and the super-register is in the register class we are trying to allocate, then add the weight to all sub-registers of the super-register even if they are not aliases.
e.g. allocating for GR32, bh is not used, updating bl spill weight.
     bl should get the same spill weight, otherwise it will be chosen
     as a spill candidate since spilling bh doesn't make ebx available.
This fixes PR2866.

llvm-svn: 67574
2009-03-23 22:57:19 +00:00
Dale Johannesen 93eefa0043 Fix internal representation of fp80 to be the
same as a normal i80 {low64, high16} rather
than its own {high64, low16}.  A depressing number
of places know about this; I think I got them all.
Bitcode readers and writers convert back to the old
form to avoid breaking compatibility.

llvm-svn: 67562
2009-03-23 21:16:53 +00:00
Evan Cheng 4dc0c6697f Update test for pr3864.
llvm-svn: 67545
2009-03-23 18:27:36 +00:00
Evan Cheng f858466018 Fix PR3391 and PR3864. Reg allocator infinite looping.
llvm-svn: 67544
2009-03-23 18:24:37 +00:00
Evan Cheng 968c3b0d6e Model an inline asm constraint which ties an input to an output register as a machine operand TIED_TO constraint. This eliminates the need to pre-allocate registers for these. It also allows the register allocator to eliminate the unneeded copies.
llvm-svn: 67512
2009-03-23 08:01:15 +00:00
Evan Cheng 47c9750f04 Do not fold away subreg_to_reg if the source register has a sub-register index. That means the source register is taking a sub-register of a larger register. e.g. On x86
%RAX<def> = ...
%RAX<def> = SUBREG_TO_REG 0, %EAX:3<kill>, 3
The first def defines RAX, not EAX, so the top bits were not zero-extended.

llvm-svn: 67511
2009-03-23 07:19:58 +00:00
Rafael Espindola d2b64fc65b Add -relocation-model=pic so that the test works
both in Linux and Darwin.

llvm-svn: 67191
2009-03-18 09:38:28 +00:00
Mon P Wang 32c8074be6 Added missing support for widening when splitting a unary op (PR3683)
and expanding a bit convert (PR3711).  In both cases, we extract the
valid part of the widen vector and then do the conversion.

llvm-svn: 67175
2009-03-18 06:24:04 +00:00
Evan Cheng 8df898917f Add another test case for r64440.
llvm-svn: 67156
2009-03-18 02:43:01 +00:00
Chris Lattner a6bed3e950 Disable the "call to immediate" optimization on x86-64. It is
not safe in general because the immediate could be an arbitrary
value that does not fit in a 32-bit pcrel displacement.  
Conservatively fall back to loading the value into a register
and calling through it.

We still do the optzn on X86-32.

llvm-svn: 67142
2009-03-18 00:43:52 +00:00
Bill Wendling 4eaeb4ef22 A more proper -mtriple.
llvm-svn: 67138
2009-03-18 00:19:44 +00:00
Bill Wendling 3aad86fa3f Temporary fix. I think Rafael wanted this to be Linux-only.
llvm-svn: 67137
2009-03-18 00:16:36 +00:00
Chris Lattner 42e9ca42ce LSR shouldn't ever try to hack on integer IV's larger than 64-bits. Right now
it is not APInt clean, but even when it is it needs to be evaluated carefully
to determine whether it is actually profitable.

This fixes a crash on PR3806

llvm-svn: 67134
2009-03-17 23:58:30 +00:00
Rafael Espindola 4606b12108 Don't force promotion of return arguments on the callee.
Some architectures (like x86) don't require it.
This fixes bug 3779.

llvm-svn: 67132
2009-03-17 23:43:59 +00:00
Chris Lattner 4359f3f26f this is apparently passing now. Evan/Dan, please check
to see if this is producing the expected code or not, I'm
not sure what the test was intended to check.

llvm-svn: 67099
2009-03-17 20:23:43 +00:00
Chris Lattner 2363d0b8b9 Fix codegen to compute the size of an allocation by multiplying the
size by the array amount as an i32 value instead of promoting from
i32 to i64 then doing the multiply.  Not doing this broke wrap-around
assumptions that the optimizers (validly) made.  The ultimate real
fix for this is to introduce i64 version of alloca and remove mallocinst.

This fixes PR3829

llvm-svn: 67093
2009-03-17 19:36:00 +00:00
Evan Cheng 8216785663 Add newline at end of file.
llvm-svn: 67085
2009-03-17 17:08:25 +00:00
Scott Michel df52d3d477 CellSPU:
Revert inadvertent mis-fix of fneg.

llvm-svn: 67084
2009-03-17 16:45:16 +00:00
Duncan Sands fb5c74ef4b Reapply r67049, with the test adjusted for darwin
(which produces "call L_f$stub" rather than "call f").

llvm-svn: 67079
2009-03-17 09:46:22 +00:00
Mon P Wang 523c0852c6 Fix a problem with DAGCombine where we were building an illegal build
vector shuffle mask. Forced the mask to be built using i32.  Note: this will
be irrelevant once vector_shuffle no longer takes a build vector for the
shuffle mask.

llvm-svn: 67076
2009-03-17 06:33:10 +00:00
Dan Gohman d6e571b202 Recognize bswapl as bswap too.
llvm-svn: 67072
2009-03-17 02:45:40 +00:00
Dan Gohman 77a9279d80 Recognize "bswapq" as an alternate spelling for the bswap instruction.
llvm-svn: 67071
2009-03-17 02:17:27 +00:00
Evan Cheng 76f1b47ec9 The spiller may unfold load / mod / store instructions as an optimization when the would-be loaded value is available in a register. It needs to check whether it's legal to clobber the register. Also, the register can contain values from multiple spill slots, so make sure to check all of them instead of just the one being unfolded.
llvm-svn: 67068
2009-03-17 01:23:09 +00:00
Scott Michel 839ad0a5f3 CellSPU:
- Fix fabs, fneg for f32 and f64.
- Use BuildVectorSDNode.isConstantSplat, now that the functionality exists
- Continue to improve i64 constant lowering. Lower certain special constants
  to the constant pool when they correspond to SPU's shufb instruction's
  special mask values. This avoids the overhead of performing a shuffle on a
  zero-filled vector just to get the special constant when the memory load
  suffices.

llvm-svn: 67067
2009-03-17 01:15:45 +00:00
Bill Wendling dadaf54e09 --- Reverse-merging (from foreign repository) r67049 into '.':
U    test/CodeGen/X86/2009-03-13-PHIElimBug.ll
D    test/CodeGen/X86/2009-03-16-PHIElimInLPad.ll
U    lib/CodeGen/PHIElimination.cpp

r67049 was causing this failure:

Running /Volumes/Sandbox/Buildbot/llvm/full-llvm/build/llvm.src/test/CodeGen/X86/dg.exp ...
FAIL: /Volumes/Sandbox/Buildbot/llvm/full-llvm/build/llvm.src/test/CodeGen/X86/2009-03-13-PHIElimBug.ll for PR3784
Failed with exit(1) at line 1
while running:  llvm-as < /Volumes/Sandbox/Buildbot/llvm/full-llvm/build/llvm.src/test/CodeGen/X86/2009-03-13-PHIElimBug.ll |  llc -march=x86 | /usr/bin/grep -A 2 {call f} | /usr/bin/grep movl
child process exited abnormally

llvm-svn: 67051
2009-03-16 20:27:20 +00:00
Duncan Sands d3e07c9d09 Tweak the fix for PR3784: be less sensitive about just
how invokes are set up.  The fix could be disturbed by
register copies coming after the EH_LABEL, and also didn't
behave quite right when it was the invoke result that
was used in a phi node.  Also (see new testcase) fix
another phi elimination bug while there: register copies
in the landing pad need to come after the EH_LABEL, because
that's where execution branches to when unwinding.  If they
come before the EH_LABEL then they will never be executed...
Also tweak the original testcase so it doesn't use a no-longer
existing counter.
The accumulated phi elimination changes fix two of seven Ada
testsuite failures that turned up after landing pad critical
edge splitting was turned off.  So there's probably more to come.

llvm-svn: 67049
2009-03-16 19:58:38 +00:00
Scott Michel d1db1aba66 CellSPU:
Incorporate Tilmann's 128-bit operation patch. Evidently, it gets the
llvm-gcc bootstrap a bit further along.

llvm-svn: 67048
2009-03-16 18:47:25 +00:00
Dan Gohman 6c28e72bb1 Add a testcase that covers a wide variety of ABI isel cases.
llvm-svn: 67003
2009-03-14 02:35:10 +00:00
Dan Gohman f98cd1b48a Use %rip-relative addressing on x86-64 whenever practical, as
it has a smaller encoding than absolute addressing.

llvm-svn: 67002
2009-03-14 02:33:41 +00:00
Dan Gohman 638e530509 Add a few more ptrtoint/inttoptr cast tests.
llvm-svn: 66989
2009-03-13 23:54:51 +00:00
Dan Gohman a62e4ab690 Improve FastISel's handling of truncates to i1, and implement
ptrtoint and inttoptr in X86FastISel. These casts aren't always
handled in the generic FastISel code because X86 sometimes needs
custom code to do truncation and zero-extension.

llvm-svn: 66988
2009-03-13 23:53:06 +00:00
Evan Cheng 94419d6fdd Fix PR3784: If the source of a phi comes from a bb ended with an invoke, make sure the copy is inserted before the try range (unless it's used as an input to the invoke, then insert it after the last use), not at the end of the bb.
Also re-apply r66140 which was disabled as a workaround.

llvm-svn: 66976
2009-03-13 22:59:14 +00:00
Dan Gohman c0bb959591 Fix FastISel's assumption that i1 values are always zero-extended
by inserting explicit zero extensions where necessary. Included
is a testcase where SelectionDAG produces a virtual register
holding an i1 value which FastISel previously mistakenly assumed
to be zero-extended.

llvm-svn: 66941
2009-03-13 20:42:20 +00:00
Rafael Espindola 71144973f3 Improve sext and zext of TLS variables.
llvm-svn: 66922
2009-03-13 18:37:06 +00:00
Evan Cheng 1fb8aedd1e Fix some significant problems with constant pools that resulted in unnecessary paddings between constant pool entries, larger than necessary alignments (e.g. 8 byte alignment for .literal4 sections), and potentially other issues.
1. ConstantPoolSDNode alignment field is log2 value of the alignment requirement. This is not consistent with other SDNode variants.
2. MachineConstantPool alignment field is also a log2 value.
3. However, some places are creating ConstantPoolSDNode with alignment value rather than log2 values. This creates entries with artificially large alignments, e.g. 256 for SSE vector values.
4. Constant pool entry offsets are computed when the entries are created. However, the asm printer groups them by section, which means the offsets are no longer valid, yet the asm printer still uses them to determine the size of the padding between entries.
5. The asm printer uses an expensive data structure (a multimap) to track constant pool entries by section.
6. The asm printer iterates over a SmallPtrSet when emitting constant pool entries. This is non-deterministic.


Solutions:
1. ConstantPoolSDNode alignment field is changed to keep non-log2 value.
2. MachineConstantPool alignment field is also changed to keep non-log2 value.
3. Functions that create ConstantPool nodes are passing in non-log2 alignments.
4. MachineConstantPoolEntry no longer keeps an offset field. It's replaced with an alignment field. Offsets are not computed when constant pool entries are created. They are computed on the fly in asm printer and JIT.
5. The asm printer uses a cheaper data structure to group constant pool entries.
6. The asm printer computes entry offsets after grouping is done.
7. Change JIT code to compute entry offsets on the fly.

llvm-svn: 66875
2009-03-13 07:51:59 +00:00
Chris Lattner 99cc133710 generalize the previous code to use the full generality of LEA
for i32/i64 expressions (we could also do i16 on cpus where
i16 lea is fast, but I didn't add this).  On the example, we now
generate:

_test:
	movl	4(%esp), %eax
	cmpl	$42, (%eax)
	setl	%al
	movzbl	%al, %eax
	leal	4(%eax,%eax,8), %eax
	ret

instead of:

_test:
	movl	4(%esp), %eax
	cmpl	$41, (%eax)
	movl	$4, %ecx
	movl	$13, %eax
	cmovg	%ecx, %eax
	ret

llvm-svn: 66869
2009-03-13 05:53:31 +00:00
Chris Lattner 4be6df5d86 optimize the case of cond ? 42 : 41 and friends. This compiles the
example to:

_test:
	movl	4(%esp), %eax
	cmpl	$41, (%eax)
	setg	%al
	movzbl	%al, %eax
	orl	$4294967294, %eax
	ret

instead of:

        movl    4(%esp), %eax
        cmpl    $41, (%eax)
	movl	$4294967294, %ecx
	movl	$4294967295, %eax
	cmova	%ecx, %eax
	ret

which is smaller in code size and faster. rdar://6668608

llvm-svn: 66868
2009-03-13 05:22:11 +00:00
Dan Gohman a1d92423cf Enhance address-mode folding of ISD::ADD to handle cases where the
operands can't both be fully folded at the same time. For example,
in the included testcase, a global variable is being added with
an add of two values. The global variable wants RIP-relative
addressing, so it can't share the address with another base
register, but it's still possible to fold the initial add.

llvm-svn: 66865
2009-03-13 02:25:09 +00:00
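A rough C sketch of the situation the commit above describes (the global name is made up): the address being formed is table + a + b. The RIP-relative reference to the global can't share an addressing mode with another base register, but the initial a + b add can still be folded rather than materialized separately.

    extern char table[];   /* hypothetical global wanting RIP-relative addressing */
    char pick(long a, long b) {
      return table[a + b];
    }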
Evan Cheng 50a839e61f Add this test back.
llvm-svn: 66838
2009-03-12 23:01:35 +00:00
Duncan Sands 1f853d6a2a Revert commit 66140 since it caused several failures
in the Ada testcase.  Reverting this only covers up
the real problem, which is a nasty conceptual difficulty
in the phi elimination pass: when eliminating phi nodes
in landing pads, the register copies need to come before
the invoke, not at the end of the basic block which is
too late...  See PR3784.

llvm-svn: 66826
2009-03-12 21:13:42 +00:00
Evan Cheng 56f9f80bb1 Typo.
llvm-svn: 66797
2009-03-12 17:07:39 +00:00
Evan Cheng f16a991262 Fix test after Chris' select changes.
llvm-svn: 66795
2009-03-12 16:10:08 +00:00
Chris Lattner 4147f08e44 Move 3 "(add (select cc, 0, c), x) -> (select cc, x, (add, x, c))"
related transformations out of target-specific dag combine into the
ARM backend.  These were added by Evan in r37685 with no testcases
and only seems to help ARM (e.g. test/CodeGen/ARM/select_xform.ll).

Add some simple X86-specific (for now) DAG combines that turn things
like cond ? 8 : 0  -> (zext(cond) << 3).  This happens frequently
with the recently added cp constant select optimization, but is a
very general xform.  For example, we now compile the second example
in const-select.ll to:

_test:
        movsd   LCPI2_0, %xmm0
        ucomisd 8(%esp), %xmm0
        seta    %al
        movzbl  %al, %eax
        movl    4(%esp), %ecx
        movsbl  (%ecx,%eax,4), %eax
        ret

instead of:

_test:
        movl    4(%esp), %eax
        leal    4(%eax), %ecx
        movsd   LCPI2_0, %xmm0
        ucomisd 8(%esp), %xmm0
        cmovbe  %eax, %ecx
        movsbl  (%ecx), %eax
        ret

This passes multisource and dejagnu.

llvm-svn: 66779
2009-03-12 06:52:53 +00:00
Evan Cheng ef0b7cc2d5 On x86, if the only use of a i64 load is a i64 store, generate a pair of double load and store instead.
llvm-svn: 66776
2009-03-12 05:59:15 +00:00
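A tiny C sketch of the pattern the commit above targets on 32-bit x86 (names are illustrative): the 64-bit value is only loaded and immediately stored, so it can travel through an SSE register as one movsd load and one movsd store instead of two 32-bit integer load/store pairs.

    void copy64(long long *dst, const long long *src) {
      *dst = *src;   /* the i64 load's only use is the i64 store */
    }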
Chris Lattner 7b87e542dc add no-unwind, remove duplicate run line.
llvm-svn: 66775
2009-03-12 05:56:37 +00:00
Chris Lattner 1d5cf4bcdd add nounwinds
llvm-svn: 66773
2009-03-12 05:35:33 +00:00
Dan Gohman 5637df37cd Revert r66024. The JIT encoding for CALLpcrel32 is wrong -- see PR3773, and the
assembly text output uses an indirect call ("call *") instead of a direct call.

llvm-svn: 66735
2009-03-11 23:01:47 +00:00
Rafael Espindola 294943c99b optimize i8 and i16 tls values.
llvm-svn: 66725
2009-03-11 22:40:04 +00:00
Evan Cheng 6cba561648 My last coalescer fix introduced a subtler one. It aborted a commuting optimization too late and left the live intervals out of sync with the instructions. This fixes 8b10b.
llvm-svn: 66715
2009-03-11 22:18:44 +00:00
Mon P Wang 25c6a46a81 For yonah, fix a vector shuffle case for v16i8 where we didn't properly clear some bits.
llvm-svn: 66684
2009-03-11 18:47:57 +00:00
Mon P Wang ce6a26cb1a Fixed a v8i16 shuffle case that should generate a pshufb instead of a pshuflw/hw.
llvm-svn: 66645
2009-03-11 06:35:11 +00:00
Chris Lattner 43d6377f89 reapply my previous patch (r66358) with a tweak to set the
alignment of the generated constant pool entry to the
desired alignment of a type.  If we don't do this, we end up
trying to do a movsd from 4-byte-aligned memory.  This fixes
450.soplex and 456.hmmer.

llvm-svn: 66641
2009-03-11 05:08:08 +00:00
Evan Cheng 64b3f9d7a7 Two coalescer fixes in one.
1. Use the same value# to represent unknown values being merged into sub-registers.
2. When the coalescer commutes an instruction and the destination is a physical register, update its sub-registers by merging in the extended ranges.

llvm-svn: 66610
2009-03-11 00:03:21 +00:00
Bill Wendling 63144deea9 Readd test, but XFAIL it.
llvm-svn: 66581
2009-03-10 21:31:00 +00:00
Evan Cheng aa887653f4 Revert 66358 for now. It's breaking povray, 450.soplex, and 456.hmmer on x86 / Darwin.
llvm-svn: 66574
2009-03-10 20:47:18 +00:00
Bill Wendling a042a82408 Add radar number.
llvm-svn: 66534
2009-03-10 06:53:54 +00:00
Chris Lattner 1522e2498f wire up support for emitting "special" values from inline asm
format strings with the standard ${:foo} syntax.

llvm-svn: 66527
2009-03-10 05:37:13 +00:00
Chris Lattner 4249b9a698 Fix PR3763 by using proper APInt methods instead of uint64_t's.
llvm-svn: 66434
2009-03-09 20:22:18 +00:00
Evan Cheng ce5dfb692a ARM isLegalAddressImmediate should check whether the type is a simple type now that the optimizer can create values of funky scalar types.
llvm-svn: 66429
2009-03-09 19:15:00 +00:00
Evan Cheng fb8ded911e Yet another case where the spiller marked two uses of the same register on the same instruction as kill. This fixes PR3706.
llvm-svn: 66428
2009-03-09 19:00:05 +00:00
Evan Cheng ec415efb44 Recognize triples starting with armv5-, armv6-, etc., and set the ARM arch version accordingly.
llvm-svn: 66365
2009-03-08 04:02:49 +00:00
Evan Cheng de22116f39 If a MI uses the same register more than once, only mark one of them as 'kill'.
llvm-svn: 66363
2009-03-08 03:58:35 +00:00
Chris Lattner ab5a443144 implement an optimization to codegen c ? 1.0 : 2.0 as load { 2.0, 1.0 } + c*4.
For 2009-03-07-FPConstSelect.ll we now produce:

_f:
	xorl	%eax, %eax
	testl	%edi, %edi
	movl	$4, %ecx
	cmovne	%rax, %rcx
	leaq	LCPI1_0(%rip), %rax
	movss	(%rcx,%rax), %xmm0
	ret

previously we produced:

_f:
	subl	$4, %esp
	cmpl	$0, 8(%esp)
	movss	LCPI1_0, %xmm0
	je	LBB1_2	## entry
LBB1_1:	## entry
	movss	LCPI1_1, %xmm0
LBB1_2:	## entry
	movss	%xmm0, (%esp)
	flds	(%esp)
	addl	$4, %esp
	ret

on PPC the code also improves to:

_f:
	cntlzw r2, r3
	srwi r2, r2, 5
	li r3, lo16(LCPI1_0)
	slwi r2, r2, 2
	addis r3, r3, ha16(LCPI1_0)
	lfsx f1, r3, r2
	blr 

from:

_f:
	li r2, lo16(LCPI1_1)
	cmplwi cr0, r3, 0
	addis r2, r2, ha16(LCPI1_1)
	beq cr0, LBB1_2	; entry
LBB1_1:	; entry
	li r2, lo16(LCPI1_0)
	addis r2, r2, ha16(LCPI1_0)
LBB1_2:	; entry
	lfs f1, 0(r2)
	blr 

This also improves the existing pic-cpool case from:

foo:
	subl	$12, %esp
	call	.Lllvm$1.$piclabel
.Lllvm$1.$piclabel:
	popl	%eax
	addl	$_GLOBAL_OFFSET_TABLE_ + [.-.Lllvm$1.$piclabel], %eax
	cmpl	$0, 16(%esp)
	movsd	.LCPI1_0@GOTOFF(%eax), %xmm0
	je	.LBB1_2	# entry
.LBB1_1:	# entry
	movsd	.LCPI1_1@GOTOFF(%eax), %xmm0
.LBB1_2:	# entry
	movsd	%xmm0, (%esp)
	fldl	(%esp)
	addl	$12, %esp
	ret

to:

foo:
	call	.Lllvm$1.$piclabel
.Lllvm$1.$piclabel:
	popl	%eax
	addl	$_GLOBAL_OFFSET_TABLE_ + [.-.Lllvm$1.$piclabel], %eax
	xorl	%ecx, %ecx
	cmpl	$0, 4(%esp)
	movl	$8, %edx
	cmovne	%ecx, %edx
	fldl	.LCPI1_0@GOTOFF(%eax,%edx)
	ret

This triggers a few dozen times in spec FP 2000.

llvm-svn: 66358
2009-03-08 01:51:30 +00:00
Dan Gohman ff659b5b86 Arithmetic instructions don't set EFLAGS bits OF and CF bits
the same say the "test" instruction does in overflow cases,
so eliminating the test is only safe when those bits aren't
needed, as is the case for COND_E and COND_NE, or if it
can be proven that no overflow will occur. For now, just
restrict the optimization to COND_E and COND_NE and don't
do any overflow analysis.

llvm-svn: 66318
2009-03-07 01:58:32 +00:00
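A minimal C sketch of where the EFLAGS reuse applies (illustrative only): the flags produced by the subtraction itself already answer the == 0 question, so the separate test instruction can be dropped. Signed orderings would also need OF and CF, which is why the commit above restricts the fold to the equality conditions for now.

    int consume(int n) {
      n -= 4;
      if (n == 0)     /* COND_E: ZF from the sub is enough, no extra testl */
        return 1;
      return n;
    }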
Dan Gohman 15af5524a4 Fix ScheduleDAGRRList::CopyAndMoveSuccessors' handling of nodes
with multiple chain operands. This can occur when the scheduler
has added chain operands to a node that already has a chain
operand, in order to handle physical register dependencies.

This fixes an llvm-gcc bootstrap failure on x86-64 introduced
in r66058.

llvm-svn: 66240
2009-03-06 02:23:01 +00:00
Dan Gohman 2c2f192c74 Fix the "test" optimization to recognize "dec" as an add of
negative one, as subtracts of immediates are canonicalized
to adds.

llvm-svn: 66180
2009-03-05 19:32:48 +00:00
Dan Gohman 14cc4a458e Make this test more thorough. Not only should there be no %esi,
there should be no spilling of anything.

llvm-svn: 66179
2009-03-05 19:31:32 +00:00
Evan Cheng b7922dee15 Do not split edges to EH landing pads. It will cause code size explosion.
llvm-svn: 66140
2009-03-05 06:31:26 +00:00
Dan Gohman 55d7b2ac4f Re-apply 66008, now that the unfoldMemoryOperand bug is fixed.
llvm-svn: 66058
2009-03-04 19:44:21 +00:00
Owen Anderson 0dedc114de Add a restore folder, which shaves a dozen or so machineinstrs off oggenc. Update a testcase to check this.
llvm-svn: 66029
2009-03-04 08:52:31 +00:00
Evan Cheng 9edd616b59 Fix PR3666: isel calls to constant addresses.
llvm-svn: 66024
2009-03-04 06:48:53 +00:00
Eli Friedman 7604d37723 PR3686: make the legalizer handle bitcast from i80 to x86 long double.
llvm-svn: 66021
2009-03-04 06:23:34 +00:00
Dan Gohman 6728f892be Revert r66004 for now; it's causing a variety of test failures.
llvm-svn: 66008
2009-03-04 03:54:19 +00:00
Evan Cheng e08f4cb9a1 Rename test.
llvm-svn: 66006
2009-03-04 02:47:25 +00:00
Dan Gohman fe8d71f42a Teach the x86 backend to eliminate "test" instructions by using the EFLAGS
result from add, sub, inc, and dec instructions in simple cases.

llvm-svn: 66004
2009-03-04 02:33:24 +00:00
Evan Cheng b8905c4e2c Fix PR3701. 1. The X86 target renamed the eflags register to flags; this matches what llvm-gcc generates, so codegen knows the flags register is being clobbered by inline asm. 2. The BURR scheduler should also check whether inline asm nodes can clobber "live" physical registers. Previously it was only checking target nodes with implicit defs.
llvm-svn: 65996
2009-03-04 01:41:49 +00:00
Bill Wendling 6d2714738f The DAG combiner was performing a BT combine. The BT combine had a value of -1,
so it changed it into a 31 via the TLO.ShrinkDemandedConstant() call. Then it
would go through the DAG combiner again. This time it had a value of 31, which
was turned into a -1 by TLI.SimplifyDemandedBits(). This would ping pong
forever.

Teach the TLO.ShrinkDemandedConstant() call not to lower a value if the demanded
value is an XOR of all ones.

llvm-svn: 65985
2009-03-04 00:18:06 +00:00
Nate Begeman a9e981225e Fix a problem with DAGCombine on 64b targets where folding
extracts + build_vector into a shuffle would fail, because the
type of the new build_vector would not be legal.  Try harder to
create a legal build_vector type.  Note: this will be totally 
irrelevant once vector_shuffle no longer takes a build_vector for
the shuffle mask.

New:
_foo:
	xorps	%xmm0, %xmm0
	xorps	%xmm1, %xmm1
	subps	%xmm1, %xmm1
	mulps	%xmm0, %xmm1
	addps	%xmm0, %xmm1
	movaps	%xmm1, 0

Old:
_foo:
	xorps	%xmm0, %xmm0
	movss	%xmm0, %xmm1
	xorps	%xmm2, %xmm2
	unpcklps	%xmm1, %xmm2
	pshufd	$80, %xmm1, %xmm1
	unpcklps	%xmm1, %xmm2
	pslldq	$16, %xmm2
	pshufd	$57, %xmm2, %xmm1
	subps	%xmm0, %xmm1
	mulps	%xmm0, %xmm1
	addps	%xmm0, %xmm1
	movaps	%xmm1, 0

llvm-svn: 65791
2009-03-01 23:44:07 +00:00
Evan Cheng c2f95b56db Minor optimization:
Look for situations like this:
%reg1024<def> = MOV r1
%reg1025<def> = MOV r0
%reg1026<def> = ADD %reg1024, %reg1025
r0            = MOV %reg1026
Commute the ADD to hopefully eliminate an otherwise unavoidable copy.

llvm-svn: 65752
2009-03-01 02:03:43 +00:00
Evan Cheng 398dee1c4a Last commit accidentially deleted this code.
llvm-svn: 65679
2009-02-28 06:02:14 +00:00
Rafael Espindola 000421eade Refactor TLS code and add some tests. The tests and expected results are:
pic |  declaration | linkage  | visibility |

!pic |  declaration | external | default    | tls1.ll     tls2.ll     | local exec
 pic |  declaration | external | default    | tls1-pic.ll tls2-pic.ll | general dynamic
!pic | !declaration | external | default    | tls3.ll     tls4.ll     | initial exec
 pic | !declaration | external | default    | tls3-pic.ll tls4-pic.ll | general dynamic

!pic |  declaration | external | hidden     | tls7.ll     tls8.ll     | local exec
 pic |  declaration | external | hidden     | X                       | local dynamic
!pic | !declaration | external | hidden     | tls9.ll     tls10.ll    | local exec
 pic | !declaration | external | hidden     | X                       | local dynamic

!pic |  declaration | internal | default    | tls5.ll     tls6.ll     | local exec
 pic |  declaration | internal | default    | X                       | local dynamic

The ones marked with an X have not been implemented since local dynamic is not implemented.

llvm-svn: 65632
2009-02-27 13:37:18 +00:00
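A small C sketch of the source-level axes the table above is keyed on (variable names are made up): whether the thread-local variable is merely declared or actually defined in the module, combined with PIC, linkage and visibility, selects the row and hence the expected TLS access model.

    extern __thread int tls_decl;   /* a declaration: the "declaration" rows above */
    __thread int tls_def;           /* a definition: the "!declaration" rows above */
    int sum(void) { return tls_decl + tls_def; }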
Evan Cheng 2ad43a97cc Make sure this test passes on linux-ppc.
llvm-svn: 65600
2009-02-27 00:51:50 +00:00
Evan Cheng 8d0b4d4fd6 MachineLICM CSE should match destination register classes; avoid hoisting implicit_def's.
llvm-svn: 65592
2009-02-27 00:02:22 +00:00
Evan Cheng 40abb7b5d0 ADDS{D|S}rr_Int and MULS{D|S}rr_Int are not commutable. The users of these intrinsics expect the high bits will not be modified.
llvm-svn: 65499
2009-02-26 03:12:02 +00:00
Evan Cheng ca2d65467b The last commit was overly conservative. It's ok to reuse a value that's already marked livein.
llvm-svn: 65498
2009-02-26 03:02:21 +00:00
Evan Cheng a49de9de2e Revert BuildVectorSDNode related patches: 65426, 65427, and 65296.
llvm-svn: 65482
2009-02-25 22:49:59 +00:00
Dan Gohman 318d7376ba Fast-isel can't do TLS yet, so it should fall back to SDISel
if it sees TLS addresses.

llvm-svn: 65341
2009-02-23 22:03:08 +00:00
Dan Gohman 0fd45016c2 Use the -stack-alignment option instead of using a target triple
for avoiding dynamic stack realignment.

llvm-svn: 65319
2009-02-23 16:34:46 +00:00
Evan Cheng 9f8fddeed8 Only v1i16 (i.e. _m64) is returned via RAX / RDX.
llvm-svn: 65313
2009-02-23 09:03:22 +00:00
Nate Begeman 9af0d0c97e Make this test use a darwin target triple, to avoid stack traffic on linux.
llvm-svn: 65312
2009-02-23 09:03:06 +00:00
Nate Begeman e684da3e5d Generate better code for v8i16 shuffles on SSE2
Generate better code for v16i8 shuffles on SSE2 (avoids stack)
Generate pshufb for v8i16 and v16i8 shuffles on SSSE3 where it is fewer uops.
Document the shuffle matching logic and add some FIXMEs for later further
  cleanups.
New tests that test the above.

Examples:

New:
_shuf2:
	pextrw	$7, %xmm0, %eax
	punpcklqdq	%xmm1, %xmm0
	pshuflw	$128, %xmm0, %xmm0
	pinsrw	$2, %eax, %xmm0

Old:
_shuf2:
	pextrw	$2, %xmm0, %eax
	pextrw	$7, %xmm0, %ecx
	pinsrw	$2, %ecx, %xmm0
	pinsrw	$3, %eax, %xmm0
	movd	%xmm1, %eax
	pinsrw	$4, %eax, %xmm0
	ret

=========

New:
_shuf4:
	punpcklqdq	%xmm1, %xmm0
	pshufb	LCPI1_0, %xmm0

Old:
_shuf4:
	pextrw	$3, %xmm0, %eax
	movsd	%xmm1, %xmm0
	pextrw	$3, %xmm1, %ecx
	pinsrw	$4, %ecx, %xmm0
	pinsrw	$5, %eax, %xmm0

========

New:
_shuf1:
	pushl	%ebx
	pushl	%edi
	pushl	%esi
	pextrw	$1, %xmm0, %eax
	rolw	$8, %ax
	movd	%xmm0, %ecx
	rolw	$8, %cx
	pextrw	$5, %xmm0, %edx
	pextrw	$4, %xmm0, %esi
	pextrw	$3, %xmm0, %edi
	pextrw	$2, %xmm0, %ebx
	movaps	%xmm0, %xmm1
	pinsrw	$0, %ecx, %xmm1
	pinsrw	$1, %eax, %xmm1
	rolw	$8, %bx
	pinsrw	$2, %ebx, %xmm1
	rolw	$8, %di
	pinsrw	$3, %edi, %xmm1
	rolw	$8, %si
	pinsrw	$4, %esi, %xmm1
	rolw	$8, %dx
	pinsrw	$5, %edx, %xmm1
	pextrw	$7, %xmm0, %eax
	rolw	$8, %ax
	movaps	%xmm1, %xmm0
	pinsrw	$7, %eax, %xmm0
	popl	%esi
	popl	%edi
	popl	%ebx
	ret

Old:
_shuf1:
	subl	$252, %esp
	movaps	%xmm0, (%esp)
	movaps	%xmm0, 16(%esp)
	movaps	%xmm0, 32(%esp)
	movaps	%xmm0, 48(%esp)
	movaps	%xmm0, 64(%esp)
	movaps	%xmm0, 80(%esp)
	movaps	%xmm0, 96(%esp)
	movaps	%xmm0, 224(%esp)
	movaps	%xmm0, 208(%esp)
	movaps	%xmm0, 192(%esp)
	movaps	%xmm0, 176(%esp)
	movaps	%xmm0, 160(%esp)
	movaps	%xmm0, 144(%esp)
	movaps	%xmm0, 128(%esp)
	movaps	%xmm0, 112(%esp)
	movzbl	14(%esp), %eax
	movd	%eax, %xmm1
	movzbl	22(%esp), %eax
	movd	%eax, %xmm2
	punpcklbw	%xmm1, %xmm2
	movzbl	42(%esp), %eax
	movd	%eax, %xmm1
	movzbl	50(%esp), %eax
	movd	%eax, %xmm3
	punpcklbw	%xmm1, %xmm3
	punpcklbw	%xmm2, %xmm3
	movzbl	77(%esp), %eax
	movd	%eax, %xmm1
	movzbl	84(%esp), %eax
	movd	%eax, %xmm2
	punpcklbw	%xmm1, %xmm2
	movzbl	104(%esp), %eax
	movd	%eax, %xmm1
	punpcklbw	%xmm1, %xmm0
	punpcklbw	%xmm2, %xmm0
	movaps	%xmm0, %xmm1
	punpcklbw	%xmm3, %xmm1
	movzbl	127(%esp), %eax
	movd	%eax, %xmm0
	movzbl	135(%esp), %eax
	movd	%eax, %xmm2
	punpcklbw	%xmm0, %xmm2
	movzbl	155(%esp), %eax
	movd	%eax, %xmm0
	movzbl	163(%esp), %eax
	movd	%eax, %xmm3
	punpcklbw	%xmm0, %xmm3
	punpcklbw	%xmm2, %xmm3
	movzbl	188(%esp), %eax
	movd	%eax, %xmm0
	movzbl	197(%esp), %eax
	movd	%eax, %xmm2
	punpcklbw	%xmm0, %xmm2
	movzbl	217(%esp), %eax
	movd	%eax, %xmm4
	movzbl	225(%esp), %eax
	movd	%eax, %xmm0
	punpcklbw	%xmm4, %xmm0
	punpcklbw	%xmm2, %xmm0
	punpcklbw	%xmm3, %xmm0
	punpcklbw	%xmm1, %xmm0
	addl	$252, %esp
	ret

llvm-svn: 65311
2009-02-23 08:49:38 +00:00
Scott Michel 9d31aca679 Introduce the BuildVectorSDNode class that encapsulates the ISD::BUILD_VECTOR
instruction. The class also consolidates the code for detecting constant
splats that's shared across PowerPC and the CellSPU backends (and might be
useful for other backends). Also introduces SelectionDAG::getBUILD_VECTOR() for
generating new BUILD_VECTOR nodes.

llvm-svn: 65296
2009-02-22 23:36:09 +00:00
Richard Pennington d853864705 bug 3610: Test case.
llvm-svn: 65287
2009-02-22 15:54:44 +00:00
Evan Cheng e779595af0 If a use operand is marked isKill, don't forget to add kill to its live interval as well.
llvm-svn: 65279
2009-02-22 08:35:56 +00:00
Evan Cheng e4ffc030e2 Be bug compatible with gcc by returning MMX values in RAX.
llvm-svn: 65274
2009-02-22 08:05:12 +00:00
Anton Korobeynikov 42aae86590 Drop a bunch of half-working stuff in the ext_weak linkage support.
Now we're using one gross, but quite robust hack :) (previous ones
did not work, for example, when an ext_weak symbol was used deep inside
a constant expression in the initializer).

The proper fix of this problem will require some quite huge asmprinter
changes and that's why was postponed. This fixes PR3629 by the way :)

llvm-svn: 65230
2009-02-21 11:53:32 +00:00
Evan Cheng 34806b1fa4 If a two-address def is dead, the instruction does not define other registers, and it doesn't produce side effects, just delete the instruction.
llvm-svn: 65218
2009-02-21 03:14:25 +00:00
Evan Cheng 107b06c4b9 Teach LSR sinking to sink the immediate portion of the common expression back into the uses if it fits in the address modes of all of them.
llvm-svn: 65215
2009-02-21 02:06:47 +00:00
Evan Cheng 8a9481d50d Fix strange logic in CollectIVUsers used to determine whether all uses are
addresses, part 1. This fixes an obvious logic bug. Previously, if the only
in-loop use was a PHI, it would return AllUsesAreAddresses as true.

llvm-svn: 65178
2009-02-20 22:16:49 +00:00
Evan Cheng 2a9bad5ac1 Support return of MMX values in 64-bit mode.
llvm-svn: 65152
2009-02-20 20:43:02 +00:00
Owen Anderson f13820148b Fix a crash in the pre-alloc splitter exposed by recent codegen changes.
llvm-svn: 65121
2009-02-20 10:02:23 +00:00
Chris Lattner 52e8d4cc5d make these tests pass when run on a G5.
llvm-svn: 65117
2009-02-20 07:10:11 +00:00
Dan Gohman 2a12ae7d1f Implement "superhero" strength reduction, or full strength
reduction of address calculations down to basic pointer arithmetic.
This is currently off by default, as it needs a few other features
before it becomes generally useful. And even when enabled, full
strength reduction is only performed when it doesn't increase
register pressure, and when several other conditions are true.

This also factors a bunch of existing LSR code out of
StrengthReduceStridedIVUsers into separate functions, and tidies
up IV insertion. This actually decreases register pressure even
in non-superhero mode. The change in iv-users-in-other-loops.ll
is an example of this; there are two more adds because there are
two fewer leas, and there is less spilling.

llvm-svn: 65108
2009-02-20 04:17:46 +00:00
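A classic C illustration of strength-reducing an address calculation all the way to pointer arithmetic (a rough sketch, not taken from the patch): the multiply in the subscript is replaced by a pointer that is simply bumped by the stride each iteration.

    void fill(int *a, int stride, int n) {
      /* before: the address a + i*stride*4 re-does the multiply each time   */
      /* after full strength reduction: int *p = a; ... *p = 0; p += stride; */
      for (int i = 0; i < n; ++i)
        a[i * stride] = 0;
    }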
Evan Cheng a40d5e14ab A GV with a null initializer shouldn't go to BSS if it's meant for a mergeable-strings section. Currently this only checks for Darwin; someone else please check whether it should apply to other targets as well.
llvm-svn: 64877
2009-02-18 02:19:52 +00:00
Evan Cheng f505cd5ebb A couple of places where reused use operands should be marked kill. This is exposed by recent availability fallthrough changes.
llvm-svn: 64745
2009-02-17 06:41:03 +00:00
Dan Gohman 9cdfd44521 Change these tests to use regular loads instead of llvm.x86.sse2.loadu.dq.
Enhance instcombine to use the preferred field of
GetOrEnforceKnownAlignment in more cases, so that regular IR operations are
optimized in the same way that the intrinsics currently are.

llvm-svn: 64623
2009-02-16 00:44:23 +00:00