Commit Graph

1564 Commits

Duncan Sands b7168c270e Revert commit 93204, since it causes the assembler to barf
on x86-64 linux with messages like this:
Error: Incorrect register `%r14' used with `l' suffix

llvm-svn: 93242
2010-01-12 17:46:16 +00:00
Dan Gohman 9059833de6 Make several tests less fragile.
llvm-svn: 93230
2010-01-12 04:52:47 +00:00
Dan Gohman c119580307 Reapply the MOV64r0 patch, with a fix: MOV64r0 clobbers EFLAGS.
llvm-svn: 93229
2010-01-12 04:42:54 +00:00
Evan Cheng 42b07e9600 Add manual ISD::OR fastisel selection routines. TableGen no longer auto-generates them after 93152 and 93191.
llvm-svn: 93204
2010-01-11 22:59:27 +00:00
Evan Cheng 99789a7a76 Extend r93152 to work on OR r, r. If the source set bits are known not to overlap, then select as an ADD instead.
llvm-svn: 93191
2010-01-11 22:03:29 +00:00
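The identity behind this selection is easy to verify in plain C++ (a minimal sketch, not LLVM's selector code): when two values share no set bits, addition produces no carries, so OR and ADD agree, and ADD gives the selector more flexibility (e.g. three-address forms via LEA).

    #include <cassert>
    #include <cstdint>

    // If a and b share no set bits, there are no carries, so a|b == a+b.
    // This mirrors the known-bits condition the selector checks.
    uint64_t or_as_add(uint64_t a, uint64_t b) {
        assert((a & b) == 0 && "set bits must not overlap");
        return a + b;  // identical to a | b under the assertion
    }

    int main() {
        assert(or_as_add(0xF0, 0x0F) == (0xF0 | 0x0F));
        return 0;
    }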
Chris Lattner f0d26d4b74 reduce this to a sensible testcase.
llvm-svn: 93189
2010-01-11 21:58:19 +00:00
David Greene eb103c404b Shorten up this testcase.
llvm-svn: 93187
2010-01-11 21:50:35 +00:00
Evan Cheng 7bdf339602 Revert 93158. It's breaking quite a few x86_64 tests.
llvm-svn: 93185
2010-01-11 21:13:41 +00:00
Jakob Stoklund Olesen d2a1bee2d4 Avoid adding PHI arguments for a predecessor that has gone away when a BRCOND was constant folded.
This fixes PR5980.

llvm-svn: 93184
2010-01-11 21:02:33 +00:00
Dan Gohman e99a3c191e Use a 32-bit and with implicit zero-extension instead of a 64-bit and if it
has an immediate with at least 32 bits of leading zeros, to avoid needing to
materialize that immediate in a register first.

FileCheckize, tidy, and extend a testcase to cover this case.

This fixes rdar://7527390.

llvm-svn: 93160
2010-01-11 17:58:34 +00:00
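A minimal C++ sketch of why the narrower and is safe (illustrative, not the isel code itself): 32-bit x86-64 operations implicitly zero the upper half of the destination register, and when the mask's top 32 bits are zero, the 64-bit result depends only on the low halves.

    #include <cassert>
    #include <cstdint>

    // When imm has at least 32 leading zero bits, a 64-bit AND equals a
    // 32-bit AND of the low halves, zero-extended -- which is what a
    // 32-bit 'andl' leaves in a 64-bit register.
    uint64_t and_narrow(uint64_t x, uint64_t imm) {
        assert((imm >> 32) == 0 && "immediate needs 32 leading zeros");
        return static_cast<uint64_t>(static_cast<uint32_t>(x) &
                                     static_cast<uint32_t>(imm));
    }

    int main() {
        uint64_t x = 0x123456789ABCDEF0ULL;
        assert(and_narrow(x, 0xFFFF) == (x & 0xFFFF));
        return 0;
    }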
Dan Gohman 3a55686345 Re-instate MOV64r0 and MOV16r0, with adjustments to work with the
new AsmPrinter. This is perhaps less elegant than describing them
in terms of MOV32r0 and subreg operations, but it allows the
current register allocator to rematerialize them.

llvm-svn: 93158
2010-01-11 17:37:57 +00:00
Dan Gohman 31e8637ac2 Generalize this check to avoid depending on a specific register assignment.
llvm-svn: 93157
2010-01-11 17:24:27 +00:00
Dan Gohman 355ebc7f58 Make this test less trivial, to avoid spurious failures.
llvm-svn: 93156
2010-01-11 17:23:56 +00:00
Evan Cheng 64d9f40557 Select an OR with immediate as an ADD if the input bits are known zero. This allows the instruction to be converted to three-address form if needed.
llvm-svn: 93152
2010-01-11 17:03:47 +00:00
David Greene 206351a1ff Implement a feature (-vector-unaligned-mem) to allow targets to
ignore alignment requirements for SIMD memory operands.  This
is useful on architectures like the AMD 10h that do not trap on
unaligned references if a status bit is twiddled at startup time.

llvm-svn: 93151
2010-01-11 16:29:42 +00:00
Jeffrey Yasskin bb857e5d68 Fix http://llvm.org/PR5729: x86-64 tail calls were putting their targets into
R11, and then asserting that the target was in R9.  Since R9 isn't reserved for
the target anymore, and is used as an argument, this patch changes the
assertion.

llvm-svn: 93065
2010-01-09 18:56:43 +00:00
Dan Gohman 6bd3ef82ff Revert an earlier change to SIGN_EXTEND_INREG for vectors. The VTSDNode
really does need to be a vector type, because
TargetLowering::getOperationAction for SIGN_EXTEND_INREG uses that type,
and it needs to be able to distinguish between vectors and scalars.

Also, fix some more issues with legalization of vector casts.

llvm-svn: 93043
2010-01-09 02:13:55 +00:00
Evan Cheng cc6d56bd3b Fix a critical bug in 64-bit atomic operation lowering for 32-bit. The results of the cmpxchg8b instruction are being thrown away when it branches back to the top of the checking loop. This means the loop always compares against the old value, which can result in a deadlock.
llvm-svn: 93028
2010-01-08 23:41:50 +00:00
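A C++ analogy for the pattern being fixed, using std::atomic in place of raw cmpxchg8b (a sketch, not the lowering itself): compare_exchange_weak writes the value it actually observed back into its first argument on failure, and a loop that discards that update keeps comparing against stale data, which is the deadlock described above.

    #include <atomic>
    #include <cstdint>

    // Correct CAS loop: on failure, compare_exchange_weak updates
    // 'expected' with the value it observed, so each retry compares
    // against fresh data rather than the original stale value.
    uint64_t atomic_add64(std::atomic<uint64_t>& v, uint64_t delta) {
        uint64_t expected = v.load();
        while (!v.compare_exchange_weak(expected, expected + delta)) {
            // 'expected' now holds the current value; just retry.
        }
        return expected + delta;
    }

    int main() {
        std::atomic<uint64_t> v{1};
        return atomic_add64(v, 2) == 3 ? 0 : 1;
    }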
Evan Cheng 58ec4fec88 ReplaceAllUsesOfValueWith may delete nodes other than the one being replaced. Do not delete dead nodes again.
llvm-svn: 92988
2010-01-08 02:36:12 +00:00
Chris Lattner dab2cd543f Fix rdar://7517201, a regression introduced by r92849.
When folding an and(any_ext(load)), both the any_ext and the
load must have only a single use.

This removes the anyext-uses.ll testcase which started failing
because it is unreduced and unclear what it is testing.

llvm-svn: 92950
2010-01-07 21:59:23 +00:00
Evan Cheng 90dc43fcf5 Fix a minor regression from my dag combiner changes. One more place needs to look past truncates.
llvm-svn: 92885
2010-01-07 00:54:06 +00:00
Evan Cheng 166a4e6caa Teach dag combine to fold the following transformation more aggressively:
(OP (trunc x), (trunc y)) -> (trunc (OP x, y))

Unfortunately this simple change causes dag combine to loop infinitely. The problem is that the shrink-demanded-ops optimization tends to canonicalize expressions in the opposite manner. That is badness. This patch disables those optimizations in dag combine; instead they are done as a late pass in sdisel.

This also exposes some deficiencies in dag combine and x86 setcc / brcond lowering. Teach them to look past ISD::TRUNCATE in various places.

llvm-svn: 92849
2010-01-06 19:38:29 +00:00
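The algebraic fact the combine relies on can be checked directly in C++ (a sketch under the wrapping semantics of unsigned arithmetic, not DAGCombiner code): truncating the operands first gives the same result as operating wide and truncating once.

    #include <cassert>
    #include <cstdint>

    // For bitwise and wrapping-arithmetic ops, (OP (trunc x), (trunc y))
    // equals (trunc (OP x, y)): the discarded high bits never influence
    // the low bits.
    int main() {
        uint64_t x = 0x1122334455667788ULL, y = 0x99AABBCCDDEEFF00ULL;
        assert((uint32_t)x + (uint32_t)y == (uint32_t)(x + y));
        assert(((uint32_t)x & (uint32_t)y) == (uint32_t)(x & y));
        assert(((uint32_t)x ^ (uint32_t)y) == (uint32_t)(x ^ y));
        return 0;
    }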
Dan Gohman f34b289057 Move this test from test/Transforms/IndVarSimplify to
test/CodeGen/X86, as it doesn't use -indvars, and it does use
llc -march=x86-64.

llvm-svn: 92799
2010-01-05 22:52:54 +00:00
Bill Wendling 03f0af372c Don't assign the shift the same type as the variable being shifted. This could
result in illegal types for the SHL operator.

llvm-svn: 92797
2010-01-05 22:39:10 +00:00
Dan Gohman fb4193625a Delete useless trailing semicolons.
llvm-svn: 92740
2010-01-05 17:55:26 +00:00
Dan Gohman 8c63ee7e28 Make this test more portable.
llvm-svn: 92514
2010-01-04 21:23:34 +00:00
Dan Gohman 52183c3cc9 Add some tests and update an existing test to reflect recent
x86 isel peeps.

llvm-svn: 92509
2010-01-04 20:53:54 +00:00
Chris Lattner 1dae8766b1 fix PR5930, allowing the asmprinter to emit the difference between
two labels as a truncate.

llvm-svn: 92455
2010-01-03 18:33:18 +00:00
Chris Lattner f6a585fc2f add PR#
llvm-svn: 92451
2010-01-03 18:10:58 +00:00
Chris Lattner a7cfc43af8 differences between two blockaddresses don't cause a
global variable initializer to require relocations.

llvm-svn: 92450
2010-01-03 18:09:40 +00:00
Chris Lattner 909c71c96a allow this to work on linux hosts.
llvm-svn: 92407
2010-01-02 00:22:15 +00:00
Chris Lattner 1eea3b0ada Teach codegen to handle:
(X != null) | (Y != null) --> (X|Y) != 0
(X == null) & (Y == null) --> (X|Y) == 0

so that instcombine can stop doing this for pointers.  This is part of PR3351,
which is a case where instcombine doing this for pointers (inserting ptrtoint)
is pessimizing code.

llvm-svn: 92406
2010-01-02 00:00:03 +00:00
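A C++ rendering of the identity (a sketch, not the codegen change itself): two pointers are both null exactly when the OR of their bit patterns is zero, so one test replaces two.

    #include <cassert>
    #include <cstdint>

    // (X == null) & (Y == null)  <=>  (X | Y) == 0
    // (X != null) | (Y != null)  <=>  (X | Y) != 0
    bool both_null(const void* x, const void* y) {
        return (reinterpret_cast<std::uintptr_t>(x) |
                reinterpret_cast<std::uintptr_t>(y)) == 0;
    }

    int main() {
        int v;
        assert(both_null(nullptr, nullptr));
        assert(!both_null(&v, nullptr));
        return 0;
    }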
Chris Lattner 6eef072eb6 rename file.
llvm-svn: 92405
2010-01-01 23:55:04 +00:00
Chris Lattner 39f18e545e Teach codegen to lower llvm.powi to an efficient (but not optimal)
multiply sequence when the power is a constant integer.  Before, our
codegen for std::pow(.., int) always turned into a libcall, which was
really inefficient.

This should also make many gfortran programs happier I'd imagine.

llvm-svn: 92388
2010-01-01 03:32:16 +00:00
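The standard expansion meant here is square-and-multiply, sketched below in C++; the exact multiply sequence LLVM emits for a given constant may differ.

    #include <cassert>

    // Square-and-multiply: pow(x, n) for constant integer n takes
    // O(log n) multiplies instead of a libm call.
    double powi(double x, unsigned n) {
        double result = 1.0;
        while (n) {
            if (n & 1) result *= x;  // odd bit: fold current power in
            x *= x;                  // square for the next bit
            n >>= 1;
        }
        return result;
    }

    int main() {
        assert(powi(2.0, 13) == 8192.0);
        return 0;
    }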
Chris Lattner f5e3ed64d5 handle equality memcmp of 8 bytes on x86-64 with two unaligned loads and a
compare.  On other targets we end up with a call to memcmp because we don't
want 16 individual byte loads.  We should be able to use movups as well, but
we're failing to select the generated icmp.

llvm-svn: 92107
2009-12-24 01:07:17 +00:00
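A C++ sketch of the equality-only trick (illustrative; the actual change lives in SelectionDAG): load both 8-byte regions as integers, using memcpy to sidestep alignment, then compare once.

    #include <cassert>
    #include <cstdint>
    #include <cstring>

    // Equality-only memcmp of 8 bytes: two (possibly unaligned) 64-bit
    // loads and one compare, instead of a libcall or 16 byte loads.
    bool equal8(const void* a, const void* b) {
        uint64_t x, y;
        std::memcpy(&x, a, 8);  // typically lowers to one unaligned load
        std::memcpy(&y, b, 8);
        return x == y;
    }

    int main() {
        const char p[] = "abcdefgh", q[] = "abcdefgh";
        assert(equal8(p, q));
        return 0;
    }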
Chris Lattner 1a32ede6fd move an optimization for memcmp out of simplifylibcalls and into
SDISel.  This optimization was causing simplifylibcalls to 
introduce type-unsafe nastiness.  This is the first step; I'll be 
expanding the memcmp optimizations shortly, covering things that
we really really wouldn't want simplifylibcalls to do.

llvm-svn: 92098
2009-12-24 00:37:38 +00:00
Eric Christopher fdb33458fc Update objectsize intrinsic and associated dependencies. Fix
lowering code and update testcases.

llvm-svn: 91979
2009-12-23 02:51:48 +00:00
Evan Cheng 71d7eaa87e Remove the target attribute break-sse-dep. Instead, do not fold loads into SSE partial-update instructions unless optimizing for size.
llvm-svn: 91910
2009-12-22 17:47:23 +00:00
Evan Cheng b175de6356 Increase opportunities to optimize (brcond (srl (and c1), c2)).
llvm-svn: 91717
2009-12-18 21:31:31 +00:00
Evan Cheng 4cf30b72bf On recent Intel microarchitectures, folding loads into some unary SSE instructions can
be non-optimal. To be precise, we should avoid folding loads if the instructions
only update part of the destination register, and the non-updated part is not
needed. e.g. cvtss2sd, sqrtss. Unfolding the load from these instructions breaks
the partial register dependency and it can improve performance. e.g.

movss (%rdi), %xmm0
cvtss2sd %xmm0, %xmm0

instead of
cvtss2sd (%rdi), %xmm0

An alternative method to break dependency is to clear the register first. e.g.
xorps %xmm0, %xmm0
cvtss2sd (%rdi), %xmm0

llvm-svn: 91672
2009-12-18 07:40:29 +00:00
Dan Gohman 51fbfb726f Tidy up this testcase and add test for tailcall optimization
with unreachable.

llvm-svn: 91650
2009-12-18 01:05:06 +00:00
Dan Gohman 7f4326f8b6 Remove "tail" keywords. These calls are not intended to be tail calls.
This protects this test from depending on codegen not performing the
tail call optimization by default.

llvm-svn: 91648
2009-12-18 01:02:18 +00:00
Sean Callanan 04d8cb74f3 Instruction fixes, added instructions, and AsmString changes in the
X86 instruction tables.

Also (while I was at it) cleaned up the X86 tables, removing tabs and
80-column violations.

This patch was reviewed by Chris Lattner, but please let me know if
there are any problems.

* X86*.td
	Removed tabs and fixed 80-column violations

* X86Instr64bit.td
	(IRET, POPCNT, BT_, LSL, SWPGS, PUSH_S, POP_S, L_S, SMSW)
		Added
	(CALL, CMOV) Added qualifiers
	(JMP) Added PC-relative jump instruction
	(POPFQ/PUSHFQ) Added qualifiers; renamed PUSHFQ to indicate
		that it is 64-bit only (ambiguous since it has no
		REX prefix)
	(MOV) Added rr form going the other way, which is encoded
		differently
	(MOV) Changed immediates to offsets, which is more correct;
		also fixed MOV64o64a to have a 64-bit offset
	(MOV) Fixed qualifiers
	(MOV) Added debug-register and condition-register moves
	(MOVZX) Added more forms
	(ADC, SUB, SBB, AND, OR, XOR) Added reverse forms, which
		(as with MOV) are encoded differently
	(ROL) Made REX.W required
	(BT) Uncommented mr form for disassembly only
	(CVT__2__) Added several missing non-intrinsic forms
	(LXADD, XCHG) Reordered operands to make more sense for
		MRMSrcMem
	(XCHG) Added register-to-register forms
	(XADD, CMPXCHG, XCHG) Added non-locked forms
* X86InstrSSE.td
	(CVTSS2SI, COMISS, CVTTPS2DQ, CVTPS2PD, CVTPD2PS, MOVQ)
		Added
* X86InstrFPStack.td
	(COM_FST0, COMP_FST0, COM_FI, COM_FIP, FFREE, FNCLEX, FNOP,
	 FXAM, FLDL2T, FLDL2E, FLDPI, FLDLG2, FLDLN2, F2XM1, FYL2X,
	 FPTAN, FPATAN, FXTRACT, FPREM1, FDECSTP, FINCSTP, FPREM,
	 FYL2XP1, FSINCOS, FRNDINT, FSCALE, FCOMPP, FXSAVE,
	 FXRSTOR)
		Added
	(FCOM, FCOMP) Added qualifiers
	(FSTENV, FSAVE, FSTSW) Fixed opcode names
	(FNSTSW) Added implicit register operand
* X86InstrInfo.td
	(opaque512mem) Added for FXSAVE/FXRSTOR
	(offset8, offset16, offset32, offset64) Added for MOV
	(NOOPW, IRET, POPCNT, IN, BTC, BTR, BTS, LSL, INVLPG, STR,
	 LTR, PUSHFS, PUSHGS, POPFS, POPGS, LDS, LSS, LES, LFS,
	 LGS, VERR, VERW, SGDT, SIDT, SLDT, LGDT, LIDT, LLDT,
	 LODSD, OUTSB, OUTSW, OUTSD, HLT, RSM, FNINIT, CLC, STC,
	 CLI, STI, CLD, STD, CMC, CLTS, XLAT, WRMSR, RDMSR, RDPMC,
	 SMSW, LMSW, CPUID, INVD, WBINVD, INVEPT, INVVPID, VMCALL,
	 VMCLEAR, VMLAUNCH, VMRESUME, VMPTRLD, VMPTRST, VMREAD,
	 VMWRITE, VMXOFF, VMXON) Added
	(NOOPL, POPF, POPFD, PUSHF, PUSHFD) Added qualifier
	(JO, JNO, JB, JAE, JE, JNE, JBE, JA, JS, JNS, JP, JNP, JL,
	 JGE, JLE, JG, JCXZ) Added 32-bit forms
	(MOV) Changed some immediate forms to offset forms
	(MOV) Added reversed reg-reg forms, which are encoded
		differently
	(MOV) Added debug-register and condition-register moves
	(CMOV) Added qualifiers
	(AND, OR, XOR, ADC, SUB, SBB) Added reverse forms, like MOV
	(BT) Uncommented memory-register forms for disassembler
	(MOVSX, MOVZX) Added forms
	(XCHG, LXADD) Made operand order make sense for MRMSrcMem
	(XCHG) Added register-register forms
	(XADD, CMPXCHG) Added unlocked forms
* X86InstrMMX.td
	(MMX_MOVD, MMX_MOVQ) Added forms
* X86InstrInfo.cpp: Changed PUSHFQ to PUSHFQ64 to reflect table
	change

* X86RegisterInfo.td: Added debug and condition register sets
* x86-64-pic-3.ll: Fixed testcase to reflect call qualifier
* peep-test-3.ll: Fixed testcase to reflect test qualifier
* cmov.ll: Fixed testcase to reflect cmov qualifier
* loop-blocks.ll: Fixed testcase to reflect call qualifier
* x86-64-pic-11.ll: Fixed testcase to reflect call qualifier
* 2009-11-04-SubregCoalescingBug.ll: Fixed testcase to reflect call
  qualifier
* x86-64-pic-2.ll: Fixed testcase to reflect call qualifier
* live-out-reg-info.ll: Fixed testcase to reflect test qualifier
* tail-opts.ll: Fixed testcase to reflect call qualifiers
* x86-64-pic-10.ll: Fixed testcase to reflect call qualifier
* bss-pagealigned.ll: Fixed testcase to reflect call qualifier
* x86-64-pic-1.ll: Fixed testcase to reflect call qualifier
* widen_load-1.ll: Fixed testcase to reflect call qualifier

llvm-svn: 91638
2009-12-18 00:01:26 +00:00
Evan Cheng 1be6286028 Re-enable 91381 with fixes.
llvm-svn: 91489
2009-12-16 00:53:11 +00:00
Dale Johannesen 56f041406d Do better with physical reg operands (typically, from inline asm)
in local register allocator.  If a reg-reg copy has a phys reg
input and a virt reg output, and this is the last use of the phys
reg, assign the phys reg to the virt reg.  If a reg-reg copy has
a phys reg output and we need to reload its spilled input, reload
it directly into the phys reg rather than passing it through another reg.

Following 76208, there is sometimes no dependency between the def of
a phys reg and its use; this creates a window where that phys reg
can be used for spilling (this is true in linear scan also).  This
is bad and needs to be fixed in a better way, although 76208 works too
well in practice to be reverted.  However, there should normally be
no spilling within inline asm blocks.  The patch here goes a long way
towards making this actually be true.

llvm-svn: 91485
2009-12-16 00:29:41 +00:00
Kenneth Uildriks 792f0913ee For fastcc on x86, let ECX be used as a return register after EAX and EDX
llvm-svn: 91410
2009-12-15 03:27:52 +00:00
Evan Cheng fcb5453dc7 Disable 91381 for now. It's miscompiling ARMISelDAG2DAG.cpp.
llvm-svn: 91405
2009-12-15 03:07:11 +00:00
Evan Cheng 852c486946 Make 91378 more conservative.
1. Only perform (zext (shl (zext x), y)) -> (shl (zext x), y) when y is a constant. This makes sure it removes at least one zext.
2. If the shift is a left shift, make sure the original shift cannot shift out bits.

llvm-svn: 91399
2009-12-15 03:00:32 +00:00
Evan Cheng 0e8b9e32d1 Use sbb x, x to materialize the carry bit in a GPR. The result is all ones or all zeros.
llvm-svn: 91381
2009-12-15 00:53:42 +00:00
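What sbb x, x computes is easy to model in C++ (a sketch of the semantics, not the selection code): x - x - CF is 0 - CF, i.e. an all-ones mask when the carry flag is set and zero otherwise.

    #include <cassert>
    #include <cstdint>

    // 'sbb %eax, %eax' yields eax - eax - CF == 0 - CF:
    // all ones if the carry flag is set, all zeros if it is clear.
    uint32_t carry_mask(bool carry) {
        return static_cast<uint32_t>(0) - static_cast<uint32_t>(carry);
    }

    int main() {
        assert(carry_mask(true) == 0xFFFFFFFFu);
        assert(carry_mask(false) == 0u);
        return 0;
    }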
Evan Cheng ca7c690d3b Propagate zext through logical shift.
llvm-svn: 91378
2009-12-15 00:41:36 +00:00