Commit Graph

Eric Christopher da2d2f4d1f Experiment with changing the default 32-bit linux stack alignment to
16 bytes for PR8969. Update all testcases accordingly.

llvm-svn: 123367
2011-01-13 06:47:10 +00:00
Chris Lattner abd2dfd3dc Fix PR8946, a missing reg/reg form of movdqu.
llvm-svn: 123242
2011-01-11 17:04:55 +00:00
Anton Korobeynikov 441ae5b88c Update CMake stuff
llvm-svn: 123171
2011-01-10 12:39:23 +00:00
Anton Korobeynikov 2f93128109 Rename TargetFrameInfo into TargetFrameLowering. Also, put couple of FIXMEs and fixes here and there.
llvm-svn: 123170
2011-01-10 12:39:04 +00:00
Jakob Stoklund Olesen 2fb5b31578 Simplify a bunch of isVirtualRegister() and isPhysicalRegister() logic.
These functions no longer assert when passed 0, but simply return false instead.

No functional change intended.

llvm-svn: 123155
2011-01-10 02:58:51 +00:00
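
For illustration, here is a minimal C++ sketch of the new behavior, assuming the
TargetRegisterInfo static queries of that era; the helper name and include path are
assumptions, not part of the commit.

  // Hedged sketch: register 0 ("no register") now answers false instead of
  // asserting, so callers can drop explicit Reg != 0 guards.
  #include "llvm/Target/TargetRegisterInfo.h"  // assumed include path for this era

  using namespace llvm;

  static bool isRealRegister(unsigned Reg) {   // hypothetical helper
    return TargetRegisterInfo::isVirtualRegister(Reg) ||
           TargetRegisterInfo::isPhysicalRegister(Reg);
  }
  // isRealRegister(0) == false; before this change, passing 0 would assert.
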
Jakob Stoklund Olesen 4a7b48d5f4 Fix the last virtual register enumerations.
llvm-svn: 123102
2011-01-08 23:11:11 +00:00
Evan Cheng 078b0b095e Recognize inline asm 'rev $0, $1' as a bswap intrinsic call.
llvm-svn: 123048
2011-01-08 01:24:27 +00:00
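
As an illustration of the kind of source such a recognizer targets, here is a hedged
C++ sketch using GCC-style inline asm; the function name is hypothetical, and the asm
string mirrors the single ARM "rev" (byte-reverse) form named in the commit.

  // Illustrative only: one "rev" in inline asm that the backend can now treat
  // as a 32-bit byte-swap intrinsic instead of an opaque side-effecting blob.
  unsigned swap_via_rev(unsigned x) {
    unsigned r;
    __asm__("rev %0, %1" : "=r"(r) : "r"(x));  // byte-reverse x into r
    return r;
  }
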
Evan Cheng 6eb516dbea Do not model all INLINEASM instructions as having unmodelled side effects.
Instead encode the LLVM IR-level property "HasSideEffects" in an operand (shared
with IsAlignStack). Added MachineInstr::hasUnmodeledSideEffects() to check
the operand when the instruction is an INLINEASM.

This allows memory instructions to be moved around INLINEASM instructions.

llvm-svn: 123044
2011-01-07 23:50:32 +00:00
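
A minimal sketch, assuming the MachineInstr query named above, of how a machine-level
pass could decide whether memory operations may be moved across an inline-asm
instruction; the helper function is hypothetical.

  #include "llvm/CodeGen/MachineInstr.h"

  using namespace llvm;

  // Hypothetical helper, not from the commit: an INLINEASM only blocks motion of
  // surrounding loads/stores if it still has unmodeled side effects.
  static bool blocksMemoryMotion(const MachineInstr &MI) {
    return MI.isInlineAsm() && MI.hasUnmodeledSideEffects();
  }
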
Evan Cheng a048c83fe4 Revert r122955. It seems using movups to lower memcpy can cause massive regression (even on Nehalem) in edge cases. I also didn't see any real performance benefit.
llvm-svn: 123015
2011-01-07 19:35:30 +00:00
Rafael Espindola 9f9a10691a Correctly disassemble truncated asm.
Patch by Richard Smith.

llvm-svn: 122962
2011-01-06 16:48:42 +00:00
Benjamin Kramer 3aa955e906 Remove dead code and silence warnings.
llvm-svn: 122957
2011-01-06 13:01:02 +00:00
Evan Cheng 7998b1d6fe Use movups to lower memcpy and memset even when unaligned 16-byte operations
are not fast on the target (as they are on a chip like Core i7).
The theory is it's still faster than a pair of movq / a quad of movl. This
will probably hurt older chips like P4 but should run faster on current
and future Intel processors. rdar://8817010

llvm-svn: 122955
2011-01-06 07:58:36 +00:00
Evan Cheng 3ae2b79aa3 Re-implement r122936 with proper target hooks. Now getMaxStoresPerMemcpy
etc. take an OptSize option. If OptSize is true, they return
the inline limit for functions with the OptSize attribute.

llvm-svn: 122952
2011-01-06 06:52:41 +00:00
Bill Wendling 3b949bcaf3 PR8919 - LLVM incorrectly generates "_alloca" as the stack probing call. That
works only on MinGW32. On 64-bit, the function to call is "__chkstk".
Patch by KS Sreeram!

llvm-svn: 122934
2011-01-06 00:50:34 +00:00
Bill Wendling 81d40711f3 PR8918 - When used with MinGW64, LLVM generates a "calll __main" at the
beginning of the "main" function. The assembler complains about the invalid
suffix for the 'call' instruction. The right instruction is "callq __main".
Patch by KS Sreeram!

llvm-svn: 122933
2011-01-06 00:47:10 +00:00
Chris Lattner 872908fdeb fix PR8900, a shuffle miscompilation. Patch by Nadav Rotem!
llvm-svn: 122921
2011-01-05 22:28:46 +00:00
Chris Lattner 2d7df02670 silence more self assignment warnings.
llvm-svn: 122920
2011-01-05 22:26:52 +00:00
Jakob Stoklund Olesen 01d4d86585 Use the EdgeBundles analysis in X86FloatingPoint instead of recomputing CFG
bundles in the pass.

llvm-svn: 122833
2011-01-04 21:10:11 +00:00
Jakob Stoklund Olesen f96ae684c4 Turn the EdgeBundles class into a stand-alone machine CFG analysis pass.
The analysis will be needed by both the greedy register allocator and the
X86FloatingPoint pass. It only needs to be computed once when the CFG doesn't
change.

This pass is very fast, usually showing up as 0.0% wall time.

llvm-svn: 122832
2011-01-04 21:10:05 +00:00
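
A minimal sketch, assuming the EdgeBundles interface this commit introduces, of how
another MachineFunctionPass (such as X86FloatingPoint) can require the shared analysis
instead of recomputing bundles itself; the consumer pass below is hypothetical.

  #include "llvm/CodeGen/EdgeBundles.h"
  #include "llvm/CodeGen/MachineFunctionPass.h"

  using namespace llvm;

  namespace {
  // Hypothetical consumer pass, not from the commit.
  struct BundleUser : public MachineFunctionPass {
    static char ID;
    BundleUser() : MachineFunctionPass(ID) {}

    void getAnalysisUsage(AnalysisUsage &AU) const override {
      AU.setPreservesAll();
      AU.addRequired<EdgeBundles>();   // computed once, shared by every user
      MachineFunctionPass::getAnalysisUsage(AU);
    }

    bool runOnMachineFunction(MachineFunction &MF) override {
      EdgeBundles &Bundles = getAnalysis<EdgeBundles>();
      (void)Bundles.getNumBundles();   // edges merged into one bundle share an id
      return false;                    // analysis consumer; the CFG is untouched
    }
  };
  } // end anonymous namespace

  char BundleUser::ID = 0;
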
Dale Johannesen e45a2389ce Eliminate a warning compiling with llvm-gcc. (IMO the
warning is overzealous but gcc is what it is.)

llvm-svn: 122829
2011-01-04 19:31:24 +00:00
Evan Cheng 65089fc6c7 Use pushq / popq instead of subq $8, %rsp / addq $8, %rsp to adjust stack in
prologue and epilogue if the adjustment is 8. Similarly, use pushl / popl if
the adjustment is 4 in 32-bit mode.

In the epilogue, this takes care to pop into a caller-saved register that's not
live at the exit (either a return or a tail call instruction).
rdar://8771137

llvm-svn: 122783
2011-01-03 22:53:22 +00:00
Benjamin Kramer 25e6e06e42 Try to reuse the value when lowering memset.
This allows us to compile:
  void test(char *s, int a) {
    __builtin_memset(s, a, 15);
  }
into 1 mul + 3 stores instead of 3 muls + 3 stores.

llvm-svn: 122710
2011-01-02 19:57:05 +00:00
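
For illustration only, a rough C++ sketch of the value-reuse idea: splat the fill byte
once with a single multiply, then feed the splatted value to every store. The 4-byte
pieces and the overlapping final store are assumptions made for readability, not the
exact sequence the backend emits.

  #include <cstring>

  // Hypothetical sketch of lowering __builtin_memset(s, a, 15) with one multiply.
  void memset15_sketch(char *s, int a) {
    unsigned v = (unsigned char)a * 0x01010101u;  // one multiply splats the byte
    std::memcpy(s + 0,  &v, 4);                   // the splatted value is reused
    std::memcpy(s + 4,  &v, 4);                   // for each piece of the
    std::memcpy(s + 8,  &v, 4);                   // 15-byte region...
    std::memcpy(s + 11, &v, 4);                   // ...last store overlaps, covering bytes 11-14
  }
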
Oscar Fuentes 68b7bb95d4 A workaround for a bug in cmake 2.8.3 diagnosed on PR 8885.
llvm-svn: 122706
2011-01-02 19:32:31 +00:00
Chris Lattner 51415d26f1 update a bunch of entries.
llvm-svn: 122700
2011-01-02 18:31:38 +00:00
Rafael Espindola d606e54757 Add support for the 'H' modifier.
llvm-svn: 122667
2011-01-01 20:58:46 +00:00
Oscar Fuentes a8eb60436b Add the object file, not the asm file, to the list of CMake files. This
is necessary for executing the custom command that runs the
assembler. Fixes PR8877.

llvm-svn: 122649
2010-12-31 20:15:37 +00:00
Nick Lewycky ee0432ce08 Add another non-commutable instruction that gas accepts commuted forms for.
Fixes PR8861.

llvm-svn: 122641
2010-12-30 22:10:49 +00:00
NAKAMURA Takumi de8fda8908 CMake: Disable optimization on MSVC8 and MSVC10 as a workaround for some files in Target/ARM and Target/X86.
llvm-svn: 122623
2010-12-29 03:59:27 +00:00
Rafael Espindola 2ac8355ecd Add support for the same encodings of the personality function that gnu as
supports.

llvm-svn: 122577
2010-12-27 00:36:05 +00:00
Chris Lattner f9e0a56b94 fix some sort of weird pasto
llvm-svn: 122560
2010-12-26 12:05:11 +00:00
Chris Lattner 424de3498b add a note
llvm-svn: 122559
2010-12-26 03:53:31 +00:00
Evan Cheng 62de0fa671 Code clean up. No functionality change.
llvm-svn: 122528
2010-12-23 23:54:17 +00:00
Chris Lattner 2a0a3b43d7 Flag -> Glue, the ongoing saga
llvm-svn: 122513
2010-12-23 18:28:41 +00:00
Benjamin Kramer b37ae33125 Remove some obsolete README items, add a new one off the top of my head.
llvm-svn: 122495
2010-12-23 15:07:02 +00:00
Benjamin Kramer 6020ed9d99 X86: Lower a select directly to a setcc_carry if possible.
int test(unsigned long a, unsigned long b) { return -(a < b); }
compiles to
  _test:                              ## @test
    cmpq  %rsi, %rdi                  ## encoding: [0x48,0x39,0xf7]
    sbbl  %eax, %eax                  ## encoding: [0x19,0xc0]
    ret                               ## encoding: [0xc3]
instead of
  _test:                              ## @test
    xorl  %ecx, %ecx                  ## encoding: [0x31,0xc9]
    cmpq  %rsi, %rdi                  ## encoding: [0x48,0x39,0xf7]
    movl  $-1, %eax                   ## encoding: [0xb8,0xff,0xff,0xff,0xff]
    cmovael %ecx, %eax                ## encoding: [0x0f,0x43,0xc1]
    ret                               ## encoding: [0xc3]

llvm-svn: 122451
2010-12-22 23:09:28 +00:00
Benjamin Kramer f6ddc4a1de Add some x86 specific dagcombines for conditional increments.
(add Y, (sete  X, 0)) -> cmp X, 1; adc  0, Y
(add Y, (setne X, 0)) -> cmp X, 1; sbb -1, Y
(sub (sete  X, 0), Y) -> cmp X, 1; sbb  0, Y
(sub (setne X, 0), Y) -> cmp X, 1; adc -1, Y

for
  unsigned foo(unsigned a, unsigned b) {
    if (a == 0) b++;
    return b;
  }
we now get:
  foo:
    cmpl  $1, %edi
    movl  %esi, %eax
    adcl  $0, %eax
    ret
instead of:
  foo:
    testl %edi, %edi
    sete  %al
    movzbl  %al, %eax
    addl  %esi, %eax
    ret

llvm-svn: 122364
2010-12-21 21:41:44 +00:00
Chris Lattner 3e5fbd74ed rename MVT::Flag to MVT::Glue. "Flag" is a terrible name for
something that just glues two nodes together, even if it is
sometimes used for flags.

llvm-svn: 122310
2010-12-21 02:38:05 +00:00
Nate Begeman 4b9db07b02 Implement feedback from Bruno on making pblendvb an x86-specific ISD node in addition to being an intrinsic, and convert
lowering to use it.  Hopefully the pattern fragment is doing the right thing with XMM0, looks correct in testing.

llvm-svn: 122277
2010-12-20 22:04:24 +00:00
Daniel Dunbar ca2511d849 Add header...
llvm-svn: 122247
2010-12-20 15:45:51 +00:00
Daniel Dunbar 7da045e59f X86/MC/Mach-O: Split out createX86MachObjectWriter().
llvm-svn: 122246
2010-12-20 15:07:39 +00:00
Chris Lattner 5c00d41688 now that addc/adde are gone, "ADDC" in the X86 backend uses EFLAGS results,
the same as setcc.  Optimize ADDC(0,0,FLAGS) -> SET_CARRY(FLAGS).  This is
a step towards finishing off PR5443.  In the testcase in that bug we now  get:

	movq	%rdi, %rax
	addq	%rsi, %rax
	sbbq	%rcx, %rcx
	testb	$1, %cl
	setne	%dl
	ret

instead of:

	movq	%rdi, %rax
	addq	%rsi, %rax
	movl	$0, %ecx
	adcq	$0, %rcx
	testq	%rcx, %rcx
	setne	%dl
	ret

llvm-svn: 122219
2010-12-20 01:37:09 +00:00
Chris Lattner 46b9efcad7 We lower setb to sbb with the hope that the and will go away; when it
doesn't, match it back to setb.

On a 64-bit version of the testcase before we'd get:

	movq	%rdi, %rax
	addq	%rsi, %rax
	sbbb	%dl, %dl
	andb	$1, %dl
	ret

now we get:

	movq	%rdi, %rax
	addq	%rsi, %rax
	setb	%dl
	ret

llvm-svn: 122217
2010-12-20 01:16:03 +00:00
Chris Lattner 9c26d2711b use for loop over types.
llvm-svn: 122214
2010-12-20 01:03:27 +00:00
Chris Lattner 846c20d4e6 Change the X86 backend to stop using the evil ADDC/ADDE/SUBC/SUBE nodes (which
model their carry dependencies with MVT::Flag operands) and use clean and beautiful
EFLAGS dependences instead.

We do this by changing the modelling of SBB/ADC to have EFLAGS inputs and outputs
(which is what requires the previous scheduler change) and by changing X86 ISelLowering
to custom lower ADDC and friends down to X86ISD::ADD/ADC/SUB/SBB nodes.

With the previous series of changes, this causes no changes in the testsuite, woo.

llvm-svn: 122213
2010-12-20 00:59:46 +00:00
Mon P Wang 1064992c84 Prevents PerformShuffleCombine from creating a node with an illegal type after legalize types
has run, e.g., it no longer creates an i64 node from a v2i64 when i64 is not a legal type.

llvm-svn: 122206
2010-12-19 23:55:53 +00:00
Chris Lattner 9edf3f50bf improve the setcc -> setcc_carry optimization to happen more
consistently by moving it out of lowering into dag combine.

Add some missing patterns for matching away extended versions of setcc_c.

llvm-svn: 122201
2010-12-19 22:08:31 +00:00
Chris Lattner 6dddab2ffe simplify some code to just reuse a setcc if we can instead of
going through the CSE maps to get it.

llvm-svn: 122196
2010-12-19 21:23:48 +00:00
Chris Lattner c37bb023b1 now that generic vector types aren't selected onto MMX operations,
we don't need -disable-mmx anymore.

llvm-svn: 122189
2010-12-19 20:19:20 +00:00
Chris Lattner ae756e1980 reduce copy/paste programming with the power of for loops.
llvm-svn: 122187
2010-12-19 20:07:10 +00:00
Chris Lattner 1e8c032a6e X86 supports i8/i16 overflow ops (except i8 multiplies), so we should
generate them.

Now we compile:

define zeroext i8 @X(i8 signext %a, i8 signext %b) nounwind ssp {
entry:
  %0 = tail call %0 @llvm.sadd.with.overflow.i8(i8 %a, i8 %b)
  %cmp = extractvalue %0 %0, 1
  br i1 %cmp, label %if.then, label %if.end

into:

_X:                                     ## @X
## BB#0:                                ## %entry
	subl	$12, %esp
	movb	16(%esp), %al
	addb	20(%esp), %al
	jo	LBB0_2

Before we were generating:

_X:                                     ## @X
## BB#0:                                ## %entry
	pushl	%ebp
	movl	%esp, %ebp
	subl	$8, %esp
	movb	12(%ebp), %al
	testb	%al, %al
	setge	%cl
	movb	8(%ebp), %dl
	testb	%dl, %dl
	setge	%ah
	cmpb	%cl, %ah
	sete	%cl
	addb	%al, %dl
	testb	%dl, %dl
	setge	%al
	cmpb	%al, %ah
	setne	%al
	andb	%cl, %al
	testb	%al, %al
	jne	LBB0_2

llvm-svn: 122186
2010-12-19 20:03:11 +00:00