Commit Graph

38637 Commits

Author SHA1 Message Date
Matt Fleming 141791c6d1 Grammar fix. This is a test commit.
llvm-svn: 104264
2010-05-20 19:45:09 +00:00
Dan Gohman ab5fb7f559 Minor code cleanups.
llvm-svn: 104263
2010-05-20 19:44:23 +00:00
Dan Gohman ee2fea3cd7 When canonicalizing icmp operand order to put the loop invariant
operand on the left, the interesting operand is on the right. This
fixes a bug where LSR was failing to recognize ICmpZero uses,
which led it to be unable to reverse the induction variable in the
attached testcase.

Delete test/CodeGen/X86/stack-color-with-reg-2.ll, because its test
is extremely fragile and hard to meaningfully update.

llvm-svn: 104262
2010-05-20 19:26:52 +00:00
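A minimal standalone sketch (illustrative C++, not LSR's actual code; the names and the use of plain ints for operands are hypothetical) of the canonicalization described above: when only the right-hand operand is loop-invariant, swap the operands and reverse the predicate so the invariant value ends up on the left and the interesting, loop-varying operand on the right.

  // Hypothetical predicate enum; real code works on ICmpInst predicates.
  enum Pred { LT, GT, LE, GE, EQ, NE };

  static Pred swapPred(Pred P) {
    switch (P) {
    case LT: return GT;
    case GT: return LT;
    case LE: return GE;
    case GE: return LE;
    default: return P;   // EQ and NE are symmetric under operand swap.
    }
  }

  struct Cmp { Pred P; int LHS, RHS; };

  // If only the right operand is invariant, swap operands and the predicate
  // so the invariant value ends up on the left.
  static bool canonicalize(Cmp &C, bool LHSInvariant, bool RHSInvariant) {
    if (LHSInvariant || !RHSInvariant)
      return false;
    int Tmp = C.LHS; C.LHS = C.RHS; C.RHS = Tmp;
    C.P = swapPred(C.P);
    return true;   // report that the IR changed
  }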
Mikhail Glushenkov 3e69aa0399 llvmc: Make segfault detection work on Win32.
llvm-svn: 104261
2010-05-20 19:23:47 +00:00
Dan Gohman fdf9874ba7 Set Changed to true when canonicalizing ICmp operand order; even though
it isn't a very interesting change, it's a change nonetheless.

llvm-svn: 104260
2010-05-20 19:16:03 +00:00
Bob Wilson 5954994bba Handle Neon v2f64 and v2i64 vector shuffles as register copies.
This fixes the remaining issue with pr7167.

llvm-svn: 104257
2010-05-20 18:39:53 +00:00
Jim Grosbach 63d4f68df4 Remove dbg_value workaround and associated command line option
llvm-svn: 104254
2010-05-20 18:34:01 +00:00
Dan Gohman 098a47931c Delete MMX_MOVQ64gmr. It was the same as MMX_MOVQ64mr, but it didn't
have a pattern and it had an invalid encoding.

llvm-svn: 104244
2010-05-20 18:05:01 +00:00
Dale Johannesen d7d6638e3e The PPC MFCR instruction implicitly uses all 8 of the CR
registers.  Currently it is not so marked, which leads to
VCMPEQ instructions that feed into it getting deleted.
If it is so marked, local RA complains about this sequence:
 vreg = MCRF  CR0
 MFCR  <kill of whatever preg got assigned to vreg>
All current uses of this instruction are only interested in
one of the 8 CR registers, so redefine MFCR to be a normal
unary instruction with a CR input (which is emitted only as
a comment).  That avoids all problems.  7739628.

llvm-svn: 104238
2010-05-20 17:48:26 +00:00
Devang Patel e2ff7f3a7d Strip llvm.dbg.lv also.
llvm-svn: 104236
2010-05-20 16:49:22 +00:00
Dan Gohman 981563d0ba Rename a variable to avoid shadowing.
llvm-svn: 104234
2010-05-20 16:41:11 +00:00
Devang Patel e1c53f29d3 Split DbgVariable. Eventually, variable info will be communicated through a frame index, a DBG_VALUE instruction, or a collection of DBG_VALUE instructions. Also, each DbgVariable may not need a label.
llvm-svn: 104233
2010-05-20 16:36:41 +00:00
Dan Gohman 6b733fc189 Minor code simplification.
llvm-svn: 104232
2010-05-20 16:23:28 +00:00
Dan Gohman 29790edb93 Fix assembly parsing and encoding of the pushf and popf family of
instructions.

llvm-svn: 104231
2010-05-20 16:16:00 +00:00
Dan Gohman 5238275478 Set neverHasSideEffects on 64-bit pushf and popf, for consistency with
16-bit and 32-bit pushf and popf.

llvm-svn: 104228
2010-05-20 15:42:55 +00:00
Dan Gohman 80a9608442 Move the code for deleting BaseRegs and LSRUses into helper functions,
and fix a bug that valgrind noticed where the code would std::swap an
element with itself.

llvm-svn: 104225
2010-05-20 15:17:54 +00:00
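A tiny standalone C++ sketch (hypothetical, not the LSR code itself) of the self-swap pitfall valgrind flagged: when deleting element I by swapping it with the last element, the swap should be skipped when I already is the last index.

  #include <utility>
  #include <vector>

  // Remove element I by moving the last element into its slot; guard the
  // swap so an element is never swapped with itself.
  static void eraseBySwap(std::vector<int> &V, unsigned I) {
    if (I != V.size() - 1)
      std::swap(V[I], V[V.size() - 1]);
    V.pop_back();
  }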
Benjamin Kramer 7c3e230cd1 Reduce string trashing.
llvm-svn: 104223
2010-05-20 14:14:22 +00:00
Evan Cheng bdd062dae0 Add a hybrid bottom-up scheduler that reduces register usage while avoiding
pipeline stalls. It's useful for targets like ARM Cortex-A8. NEON has a lot
of long-latency instructions, so a strict register pressure reduction
scheduler does not work well.
Early experiments show this speeds up some NEON loops by over 30%.

llvm-svn: 104216
2010-05-20 06:13:19 +00:00
Nick Lewycky c53cc4f8bf Fix typo in comment.
llvm-svn: 104209
2010-05-20 03:30:09 +00:00
Dan Gohman 1e19eab963 Define the x86 pause instruction.
llvm-svn: 104204
2010-05-20 01:35:50 +00:00
Dan Gohman a3b7570a3a Fix the sfence instruction to use MRM_F8 instead of MRM7r, since it
doesn't have a register operand. Also, use I instead of PSI, for
consistency with mfence and lfence.

llvm-svn: 104203
2010-05-20 01:23:41 +00:00
Eric Christopher 27e7ffc7d4 Partial code for emitting thread local bss data.
llvm-svn: 104197
2010-05-20 00:49:07 +00:00
Dan Gohman 20fab456da Teach LSR how to cope better with unrolled loops on targets where
the addressing modes don't make this trivially easy. This allows
it to avoid falling into the less precise heuristics in more
cases.

llvm-svn: 104186
2010-05-19 23:43:12 +00:00
Bob Wilson 42603958fb Optimize away insertelement of an undef value. This shows up in
test/CodeGen/ARM/reg_sequence.ll but it doesn't affect the generated code
because the coalescer cleans it up.  Radar 7998853.

llvm-svn: 104185
2010-05-19 23:42:58 +00:00
Chris Lattner 7cbfa4462f fix rdar://7986634 - match instruction opcodes case insensitively.
llvm-svn: 104183
2010-05-19 23:34:33 +00:00
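A minimal sketch (hypothetical C++ helper, not the actual AsmParser change) of the case-insensitive mnemonic matching the fix above describes:

  #include <cctype>
  #include <string>

  // Compare two mnemonics ignoring case, e.g. "MOVl" matches "movl".
  static bool mnemonicEquals(const std::string &A, const std::string &B) {
    if (A.size() != B.size())
      return false;
    for (std::string::size_type I = 0; I != A.size(); ++I)
      if (std::tolower((unsigned char)A[I]) != std::tolower((unsigned char)B[I]))
        return false;
    return true;
  }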
Jim Grosbach f98511473e Enable preserving debug information through post-RA scheduling
llvm-svn: 104175
2010-05-19 22:57:47 +00:00
Jim Grosbach 604560c5fe Fix the post-RA instruction scheduler to handle instructions referenced by
more than one dbg_value instruction. rdar://7759363

llvm-svn: 104174
2010-05-19 22:57:06 +00:00
Evan Cheng 70e506e18a Code clean up.
llvm-svn: 104173
2010-05-19 22:42:23 +00:00
Devang Patel a08130864e Revert r104165.
llvm-svn: 104172
2010-05-19 21:58:28 +00:00
Jakob Stoklund Olesen e0eddb21f5 Add support for partial redefs to the fast register allocator.
A partial redef now triggers a reload if required. Also don't add
<imp-def,dead> operands for physical superregisters.

Kill flags are still treated as full register kills, and <imp-use,kill> operands
are added for physical superregisters as before.

llvm-svn: 104167
2010-05-19 21:36:05 +00:00
Devang Patel 0fe341e2e2 There is no need to maintain InsnsBeginScopeSet separately.
llvm-svn: 104165
2010-05-19 21:26:53 +00:00
Jakob Stoklund Olesen 5d4c134a94 Add MachineInstr::readsVirtualRegister() in preparation for proper handling of
partial redefines.

We are going to treat a partial redefine of a virtual register as a
read-modify-write:

  %reg1024:6 = OP

Unless the register is fully clobbered:

  %reg1024:6 = OP, %reg1024<imp-def>

MachineInstr::readsVirtualRegister() knows the difference. The first case is a
read, the second isn't.

llvm-svn: 104149
2010-05-19 20:36:22 +00:00
Evan Cheng 738e920edf Code refactoring: pull SchedPreference enum from TargetLowering.h to TargetMachine.h and put it in its own namespace.
llvm-svn: 104147
2010-05-19 20:19:50 +00:00
Jakob Stoklund Olesen e11cdf8cc8 TwoAddressInstructionPass doesn't really know how to merge live intervals when
lowering REG_SEQUENCE instructions.

Insert copies for REG_SEQUENCE sources that are not killed, to avoid breaking later passes.

llvm-svn: 104146
2010-05-19 20:08:00 +00:00
Mikhail Glushenkov 59a61fd7cc llvmc: report an error if a child process segfaults.
llvm-svn: 104145
2010-05-19 19:24:32 +00:00
Bob Wilson 6a1bfd282b When expanding a vector_shuffle, the element type may not be legal and may
need to be promoted.  The BUILD_VECTOR and EXTRACT_VECTOR_ELT nodes generated
here already allow the promoted type to be used without further changes, so
just do the promotion.  This fixes part of pr7167.

llvm-svn: 104141
2010-05-19 18:48:32 +00:00
Daniel Dunbar 52e37becf6 MC/X86: Add missing entry for TAILJMP_1 to getRelaxedOpcode().
llvm-svn: 104122
2010-05-19 17:20:58 +00:00
Daniel Dunbar d2f78e755f MC/X86: Lower TAILCALLd[64] to JMP_1, to allow relaxation and to avoid the same
prefix byte problem as in r104062.
 - As a total hack to keep the TAILCALL markers in the output, which some tests depend on, this invents a new TAILJMP_1 instruction.

llvm-svn: 104120
2010-05-19 15:26:43 +00:00
Daniel Dunbar b243dfb085 MC/X86: Strip spurious operands from TAILJMPr64 as we do for CALL64r and
CALL64pcrel32, for the same reason.

llvm-svn: 104116
2010-05-19 08:07:12 +00:00
Evan Cheng daeca2d156 t2LEApcrel and tLEApcrel are re-materializable. This makes it possible to hoist more loads during machine LICM.
llvm-svn: 104115
2010-05-19 07:28:01 +00:00
Evan Cheng b7704fee4c Use 'adr' for LEApcrel and LEApcrel. Mark LEApcrel re-materializable.
llvm-svn: 104114
2010-05-19 07:26:50 +00:00
Daniel Dunbar 4f6c7c6d94 MC/X86: Lower MOV{8,16,32,64}{rm,mr} to fixed-register forms, as appropriate.
llvm-svn: 104112
2010-05-19 06:20:44 +00:00
Evan Cheng dd7f566597 Mark pattern-less mayLoad / mayStore instructions neverHasSideEffects. These do not have other un-modeled side effects.
llvm-svn: 104111
2010-05-19 06:07:03 +00:00
Evan Cheng e89f5ae9d4 Target instruction selection should copy memoperands.
llvm-svn: 104110
2010-05-19 06:06:09 +00:00
Daniel Dunbar 45ace40959 MC/X86: Strip spurious operands from CALL64r as we do for CALL64pcrel32, to
avoid the same prefix byte problem as in r104062.

llvm-svn: 104108
2010-05-19 04:31:36 +00:00
Evan Cheng 2c452fcd14 Mark a few more pattern-less instructions with neverHasSideEffects. This is especially important on instructions like t2LEApcrel which are prime candidates for machine LICM.
llvm-svn: 104102
2010-05-19 01:52:25 +00:00
Dan Gohman 744c96dd48 Add a comment explaining why this code uses Append mode.
llvm-svn: 104095
2010-05-19 01:21:34 +00:00
Evan Cheng abd0ad54a4 Intrinsics which do a vector compare (results are all zero or all ones) are modeled as icmp / fcmp + sext. This is turned into a vsetcc by dag combine (yes, not a good long term solution). The targets can then isel the vsetcc to the appropriate instruction.
The trouble arises when the result of a vector cmp + sext is then and'ed with all ones. Instcombine will turn it into a vector cmp + zext, dag combiner will miss turning it into a vsetcc and hell breaks loose after that.

Teach dag combine to turn a vector cmp + zext into a vsetcc + and 1. This fixes rdar://7923010.

llvm-svn: 104094
2010-05-19 01:08:17 +00:00
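The identity behind the combine above, shown as a scalar C++ check (illustrative only; the real code operates on selection DAG nodes): for a boolean compare result, the zero-extension equals the sign-extended all-ones mask ANDed with 1.

  #include <cassert>
  #include <cstdint>

  static int32_t sextBool(bool C) { return C ? -1 : 0; }  // all-ones mask, vsetcc-style
  static int32_t zextBool(bool C) { return C ? 1 : 0; }

  int main() {
    assert(zextBool(false) == (sextBool(false) & 1));
    assert(zextBool(true) == (sextBool(true) & 1));
    return 0;
  }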
Dan Gohman 58c6f21453 Factor out the code for picking integer arithmetic-with-immediate
opcodes into a helper function. This fixes a few places in the code
which were not properly selecting the 8-bit-immediate opcodes.

llvm-svn: 104091
2010-05-19 00:53:19 +00:00
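A hypothetical helper in the spirit of that refactoring (names modeled loosely on the X86 ADD32ri / ADD32ri8 opcode pair; this is not the actual function): choose the sign-extended 8-bit-immediate form whenever the immediate fits in a signed byte.

  #include <cstdint>

  enum Opc { ADD32ri, ADD32ri8 };  // illustrative subset of opcodes

  static Opc selectAddImmOpcode(int64_t Imm) {
    if (Imm >= -128 && Imm <= 127)   // fits in a sign-extended 8-bit immediate
      return ADD32ri8;
    return ADD32ri;
  }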
Dan Gohman beebef4137 Add a comment.
llvm-svn: 104089
2010-05-18 23:55:57 +00:00