Commit Graph

124 Commits

Author SHA1 Message Date
Michael Liao 3237662b65 Re-work X86 code generation of atomic ops with spin-loop
- Rewrite/merge the pseudo-atomic instruction emitters to address the
  following issues:
  * Remove an unnecessary load from the spin-loop

    Previously, the spin-loop looked like this:

        thisMBB:
        newMBB:
          ld  t1 = [bitinstr.addr]
          op  t2 = t1, [bitinstr.val]
          not t3 = t2  (if Invert)
          mov EAX = t1
          lcs dest = [bitinstr.addr], t3  [EAX is implicit]
          bz  newMBB
          fallthrough -->nextMBB

    The 'ld' at the beginning of newMBB can be lifted out of the loop,
    since lcs (CMPXCHG on x86) already loads the current memory value
    into EAX. The loop is refined as follows (a C-level sketch of an op
    that produces such a loop appears after this list):

        thisMBB:
          EAX = LOAD [MI.addr]
        mainMBB:
          t1 = OP [MI.val], EAX
          LCMPXCHG [MI.addr], t1, [EAX is implicitly used & defined]
          JNE mainMBB
        sinkMBB:

  * Remove immopc: so far, all pseudo-atomic instructions have an
    all-register form only, so there is no immediate operand.

  * Remove unnecessary attributes/modifiers from the pseudo-atomic
    instruction .td definitions

  * Fix issues in PR13458

- Add comprehensive tests on atomic ops on various data types.
  NOTE: Some of them are turned off due to missing functionality.

- Revise tests due to the new spin-loop generated.
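
As a rough C-level sketch (not part of the commit), an atomic
read-modify-write whose old value is used is the kind of op that lowers
through such a CMPXCHG spin loop, since a plain lock'ed AND cannot return
the previous memory contents:

    #include <stdatomic.h>

    /* Using the returned old value forces a LOAD + OP + LCMPXCHG loop
       on x86; no single instruction both ANDs memory and yields the
       prior contents. */
    int fetch_and_and(_Atomic int *p, int mask) {
        return atomic_fetch_and(p, mask);
    }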

llvm-svn: 164281
2012-09-20 03:06:15 +00:00
Jakob Stoklund Olesen 3cf3ffce24 Fix the TCRETURNmi64 bug differently.
Add a PatFrag to match X86tcret using 6 fixed registers or less. This
avoids folding loads into TCRETURNmi64 using 7 or more volatile
registers.

<rdar://problem/12282281>

llvm-svn: 163819
2012-09-13 18:31:27 +00:00
Jakob Stoklund Olesen 78b9f8fc67 Revert r163761 "Don't fold indexed loads into TCRETURNmi64."
The patch caused "Wrong topological sorting" assertions.

llvm-svn: 163810
2012-09-13 16:52:17 +00:00
Jakob Stoklund Olesen bfacef45eb Don't fold indexed loads into TCRETURNmi64.
We don't have enough GR64_TC registers when calling a varargs function
with 6 arguments. Since %al holds the number of vector registers used,
only %r11 is available as a scratch register.

This means that addressing modes using both base and index registers
can't be folded into TCRETURNmi64.

<rdar://problem/12282281>

llvm-svn: 163761
2012-09-13 00:25:00 +00:00
Hans Wennborg 789acfb63d Implement the local-dynamic TLS model for x86 (PR3985)
This implements codegen support for accesses to thread-local variables
using the local-dynamic model, and adds a clean-up pass so that the base
address for the TLS block can be re-used between local-dynamic access on
an execution path.
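
As a hypothetical illustration (not taken from the commit), two
thread-local variables touched in one function are the kind of case where
the clean-up pass lets the TLS block base be computed once and re-used:

    /* Sketch only: assumes a PIC build with the local-dynamic model,
       e.g. -fPIC -ftls-model=local-dynamic. */
    static __thread int x;
    static __thread int y;

    int sum_tls(void) {
        /* Both accesses can share a single __tls_get_addr call that
           returns the base of this module's TLS block. */
        return x + y;
    }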

llvm-svn: 157818
2012-06-01 16:27:21 +00:00
Jakob Stoklund Olesen 7e21d617ef Use ptr_rc_tailcall instead of GR32_TC.
The getPointerRegClass() hook will return GR32_TC, or whatever is
appropriate for the current function.

Patch by Yiannis Tsiouris!

llvm-svn: 156459
2012-05-09 01:50:09 +00:00
Manman Ren ef4e0479ec X86: optimization for -(x != 0)
This patch will optimize -(x != 0) on X86
FROM 
cmpl	$0x01,%edi
sbbl	%eax,%eax
notl	%eax
TO
negl %edi
sbbl %eax %eax

In order to generate negl, I added patterns in Target/X86/X86InstrCompiler.td:
def : Pat<(X86sub_flag 0, GR32:$src), (NEG32r GR32:$src)>;
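
For illustration, the source pattern as a complete C function (a sketch,
not the commit's testcase):

    /* With the new pattern this compiles to negl %edi; sbbl %eax,%eax:
       -1 when x is non-zero, 0 otherwise. */
    int neg_nonzero(int x) {
        return -(x != 0);
    }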

rdar: 10961709
llvm-svn: 156312
2012-05-07 18:06:23 +00:00
Rafael Espindola ba0a6cabb8 Always compute all the bits in ComputeMaskedBits.
This allows us to keep passing reduced masks to SimplifyDemandedBits, but
know about all the bits if SimplifyDemandedBits fails. This allows instcombine
to simplify cases like the one in the included testcase.
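
A hypothetical example (not the included testcase) of the kind of fold
that full known-bits information enables:

    /* After the OR the low bit is known to be 1, so instcombine can
       fold the whole expression to the constant 1. */
    unsigned low_bit_known(unsigned x) {
        return (x | 1) & 1;
    }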

llvm-svn: 154011
2012-04-04 12:51:34 +00:00
Lang Hames 5569ce7d56 Make x86 REP_MOV* and REP_STO instructions use the correct operand sizes in 64-bit mode.
llvm-svn: 153680
2012-03-29 19:54:28 +00:00
Preston Gurd 48ccc4df0b This patch adds X86 instruction itineraries for non-pseudo opcodes in
X86InstrCompiler.td.
 
It also adds -mcpu=generic to the legalize-shift-64.ll test, so the test
will pass when run on an Intel Atom CPU, which would otherwise produce an
instruction schedule that differs from the one the test expects.

llvm-svn: 153033
2012-03-19 14:10:12 +00:00
Michael J. Spencer 248d65e78b Add WIN_FTOL_* pseudo-instructions to model the unique calling convention
used by the Win32 _ftol2 runtime function. Patch by Joe Groff!

llvm-svn: 151382
2012-02-24 19:01:22 +00:00
Jakob Stoklund Olesen 97e3115dc2 Use the same CALL instructions for Windows as for everything else.
The different calling conventions and call-preserved registers are
represented with regmask operands that are added dynamically.

llvm-svn: 150708
2012-02-16 17:56:02 +00:00
Eli Friedman 206ca569aa Make sure the non-SSE lowering for fences correctly clobbers EFLAGS. PR11768.
llvm-svn: 148240
2012-01-16 16:42:21 +00:00
Eli Friedman 75e3db4c7a Get rid of unused codegen-only instruction.
llvm-svn: 148239
2012-01-16 16:29:35 +00:00
Benjamin Kramer 5b3aa60b44 X86: Generalize the x << (y & const) optimization to also catch masks with more bits set than 31 or 63.
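
For example (a sketch, not from the commit), a mask wider than the
shift's natural 5-bit count mask can now be dropped during lowering:

    /* The x86 32-bit shift already masks its count to 5 bits, and 0xff
       covers those bits, so the explicit AND can be removed. */
    unsigned shl_masked(unsigned x, unsigned y) {
        return x << (y & 0xff);
    }
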
llvm-svn: 148024
2012-01-12 12:41:34 +00:00
Chandler Carruth 7e9453e916 Switch the lowering of CTLZ_ZERO_UNDEF from a .td pattern back to the
X86ISelLowering C++ code. Because this is lowered via an xor wrapped
around a bsr, we want the dagcombine which runs after isel lowering to
have a chance to clean things up. In particular, it is very common to
see code which looks like:

  (sizeof(x)*8 - 1) ^ __builtin_clz(x)

Which is trying to compute the most significant bit of 'x'. That's
actually the value computed directly by the 'bsr' instruction, but if we
match it too late, we'll get completely redundant xor instructions.
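
The same idiom as a complete C function (illustrative, assuming a 32-bit
unsigned argument):

    /* Index of the most significant set bit -- exactly what BSR
       computes. Undefined for x == 0, matching CTLZ_ZERO_UNDEF. */
    unsigned msb_index(unsigned x) {
        return (sizeof(x) * 8 - 1) ^ __builtin_clz(x);
    }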

The more naive code for the above (subtracting rather than using an xor)
still isn't handled correctly due to the dagcombine getting confused.

Also, while here fix an issue spotted by inspection: we should have been
expanding the zero-undef variants to the normal variants when there is
an 'lzcnt' instruction. Do so, and test for this. We don't want to
generate unnecessary 'bsr' instructions.

These two changes fix some regressions in encoding and decoding
benchmarks. However, there is still a *lot* to improve on in this
type of code.

llvm-svn: 147244
2011-12-24 10:55:54 +00:00
Chandler Carruth 24680c24d8 Begin teaching the X86 target how to efficiently codegen patterns that
use the zero-undefined variants of CTTZ and CTLZ. These are just simple
patterns for now; there is more to be done to make real-world code using
these constructs optimize and codegen properly on X86.

The existing tests are spiffed up to check that we no longer generate
unnecessary cmov instructions, and that we generate the very important
'xor' that transforms bsr (which counts the index of the most significant
one bit) into the number of leading (most significant) zero bits. They also
now check that when the variant with defined zero result is used, the
cmov is still produced.
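
Two small C functions (hypothetical, not the literal tests) showing the
distinction being exercised:

    /* Zero-undefined variant: no cmov should be needed. */
    unsigned ctz_undef(unsigned x) {
        return __builtin_ctz(x);            /* undefined when x == 0 */
    }

    /* Variant with a defined zero result: the select keeps a cmov. */
    unsigned ctz_defined(unsigned x) {
        return x ? __builtin_ctz(x) : 32;
    }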

llvm-svn: 146974
2011-12-20 11:19:37 +00:00
Rafael Espindola b3285224cd Fixes an issue reported by -verify-machineinstrs.
Patch by Sanjoy Das.

llvm-svn: 143064
2011-10-26 21:16:41 +00:00
Rafael Espindola 66393c127d This commit introduces two fake instructions MORESTACK_RET and
MORESTACK_RET_RESTORE_R10; which are lowered to a RET and a RET
followed by a MOV respectively.  Having a fake instruction prevents
the verifier from seeing a MachineBasicBlock end with a
non-terminator (MOV).  It also prevents the rather eccentric case of a
MachineBasicBlock ending with RET but having successors nevertheless.

Patch by Sanjoy Das.

llvm-svn: 143062
2011-10-26 21:12:27 +00:00
Eli Friedman d68a727bd0 Fix the assembler strings for a couple of atomic instructions. Doesn't really matter much in practice, but it's a bit cleaner.
llvm-svn: 139563
2011-09-13 00:27:04 +00:00
Eli Friedman 02f2f89a98 Fix atomic load and store on x86 to pass -verify-machineinstrs (and possibly fix some subtle bugs involving passes which check mayStore()).
This isn't exactly ideal, but it is good enough for the moment.

llvm-svn: 139245
2011-09-07 18:48:32 +00:00
Jakob Stoklund Olesen 1f72dd40c7 Pseudo CMOV instructions don't clobber EFLAGS.
The explanation about a 0 argument being materialized as xor is no
longer valid.  Rematerialization will check if EFLAGS is live before
clobbering it.

The code produced by X86TargetLowering::EmitLoweredSelect does not
clobber EFLAGS.

This causes one less testb instruction to be generated in the cmov.ll
test case.

llvm-svn: 139057
2011-09-02 23:52:55 +00:00
Rafael Espindola 3353017668 Adds a SelectionDAG node X86SegAlloca which will be custom lowered
from DYNAMIC_STACKALLOC.

Two new pseudo instructions (SEG_ALLOCA_32 and SEG_ALLOCA_64) which
will match X86SegAlloca (based on word size) are also added.  They
will be custom emitted to inject the actual stack handling code.

Patch by Sanjoy Das.
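
A hedged C sketch of a dynamic allocation that reaches DYNAMIC_STACKALLOC
(and hence X86SegAlloca when building with segmented stacks, e.g. the
assumed -fsplit-stack flag):

    #include <string.h>

    /* The variable-length array is a dynamic stack allocation; with
       segmented stacks it is custom-emitted to grow the stack on
       demand. */
    void fill(char *dst, unsigned n) {
        char buf[n];
        memset(buf, 0, n);
        memcpy(dst, buf, n);
    }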

llvm-svn: 138814
2011-08-30 19:43:21 +00:00
Eli Friedman 5e5704277f Add support for generating CMPXCHG16B on x86-64 for the cmpxchg IR instruction.
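
For example, a 16-byte compare-and-swap in C (a sketch assuming a target
with the cx16 feature, e.g. -mcx16):

    /* A 128-bit cmpxchg on x86-64 can be emitted as LOCK CMPXCHG16B. */
    __int128 cas16(__int128 *p, __int128 expected, __int128 desired) {
        return __sync_val_compare_and_swap(p, expected, desired);
    }
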
llvm-svn: 138660
2011-08-26 21:21:21 +00:00
Eli Friedman 342e8df0e0 Basic x86 code generation for atomic load and store instructions.
llvm-svn: 138478
2011-08-24 20:50:09 +00:00
Bruno Cardoso Lopes 72323966c8 Add 256-bit support for v8i32, v4i64 and v4f64 ISD::SELECT. Fix PR10556
llvm-svn: 137179
2011-08-09 23:27:13 +00:00
Eli Friedman 4ef2426b87 Fix a couple of ridiculous copy-paste errors. rdar://9914773.
llvm-svn: 137160
2011-08-09 22:17:39 +00:00
Eli Friedman e6d1853e74 X86ISD::MEMBARRIER does not require SSE2; it doesn't actually generate any code, and all x86 processors will honor the required semantics.
llvm-svn: 136249
2011-07-27 19:43:50 +00:00
Dan Gohman 8eb36ef497 Add a comment describing why transforming (shl x, 1) to (add x, x) is to be
considered safe enough in this context.

llvm-svn: 133159
2011-06-16 15:55:48 +00:00
Benjamin Kramer e30b70073a X86: smulo -> add is now done target-independently in DAGCombiner, remove the patterns.
llvm-svn: 131801
2011-05-21 18:32:01 +00:00
Stuart Hastings 91f1d24736 Re-commit 131641 with fixes; de-pseudoize MOVSX16rr8 and friends.
rdar://problem/8614450

llvm-svn: 131746
2011-05-20 19:04:40 +00:00
Stuart Hastings c72240bbd9 Reverting 131641 to investigate 'bot complaint.
llvm-svn: 131654
2011-05-19 17:54:42 +00:00
Stuart Hastings b476b0cc9f Revise MOVSX16rr8/MOVZX16rr8 (and rm variants) to no longer be
pseudos.  rdar://problem/8614450

llvm-svn: 131641
2011-05-19 16:59:50 +00:00
Eric Christopher a1d9e29552 Support XOR and AND optimization with no return value.
Finishes off rdar://8470697

llvm-svn: 131458
2011-05-17 08:10:18 +00:00
Eric Christopher 4a34e61e53 Optimize atomic lock or that doesn't use the result value.
Next up: xor and and.
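
A minimal C illustration (the key assumption: the old value is never
read, so the result-less form applies):

    /* The result is discarded, so this can become a single LOCK OR
       instead of a compare-exchange loop. */
    void set_flag(volatile int *flags, int bit) {
        __sync_fetch_and_or(flags, bit);
    }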

Part of rdar://8470697

llvm-svn: 131171
2011-05-10 23:57:45 +00:00
Eric Christopher e33464663f Refactor lock versions of binary operators to be a little less
cut and paste.

llvm-svn: 131139
2011-05-10 18:36:16 +00:00
Benjamin Kramer d724a590e5 X86: Add a bunch of peeps for add and sub of SETB.
"b + ((a < b) ? 1 : 0)" compiles into
	cmpl	%esi, %edi
	adcl	$0, %esi
instead of
	cmpl	%esi, %edi
	sbbl	%eax, %eax
	andl	$1, %eax
	addl	%esi, %eax

This saves a register, a false dependency on %eax
(Intel's CPUs still don't ignore it) and it's shorter.
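
The source pattern as a complete C function (illustrative only):

    /* Compiles to cmpl + adcl with this peephole, rather than the
       longer sbbl/andl/addl sequence. */
    unsigned add_carry(unsigned a, unsigned b) {
        return b + ((a < b) ? 1 : 0);
    }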

llvm-svn: 131070
2011-05-08 18:36:07 +00:00
Dan Gohman f0f8e14370 The labyrinthine X86 backend no longer appears to require
these patterns.

llvm-svn: 125759
2011-02-17 18:50:19 +00:00
NAKAMURA Takumi 0cfdac078e Target/X86: Tweak win64's tailcall.
llvm-svn: 124272
2011-01-26 02:04:09 +00:00
NAKAMURA Takumi 9d29eff198 Fix whitespace.
llvm-svn: 124270
2011-01-26 02:03:37 +00:00
Eric Christopher 542f8a5221 The stub routine that we're calling uses test and so clobbers
the flags.

llvm-svn: 123712
2011-01-18 01:37:20 +00:00
Chris Lattner 46b9efcad7 We lower setb to sbb with the hope that the and will go away; when it
doesn't, match it back to setb.

On a 64-bit version of the testcase before we'd get:

	movq	%rdi, %rax
	addq	%rsi, %rax
	sbbb	%dl, %dl
	andb	$1, %dl
	ret

now we get:

	movq	%rdi, %rax
	addq	%rsi, %rax
	setb	%dl
	ret
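
A hedged 64-bit C example of the same idea (not the literal testcase):
the carry out of an addition materialized with setb:

    /* Whether a + b wrapped; the flag now comes out via SETB rather
       than sbb + and. */
    _Bool add_overflows(unsigned long a, unsigned long b) {
        return a + b < a;
    }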

llvm-svn: 122217
2010-12-20 01:16:03 +00:00
Chris Lattner 9edf3f50bf improve the setcc -> setcc_carry optimization to happen more
consistently by moving it out of lowering into dag combine.

Add some missing patterns for matching away extended versions of setcc_c.

llvm-svn: 122201
2010-12-19 22:08:31 +00:00
Evan Cheng be69d8e2f3 Only rr forms of ADD*_DB are commutable.
llvm-svn: 121908
2010-12-15 22:57:36 +00:00
Eric Christopher c2dc95ae00 Add rsp to the uses for the same reason as 32-bit.
llvm-svn: 121328
2010-12-09 00:26:41 +00:00
Rafael Espindola c4774795ce Move lowering of TLS_addr32 and TLS_addr64 to X86MCInstLower.
llvm-svn: 120263
2010-11-28 21:16:39 +00:00
Rafael Espindola 5d882894d8 Lower TLS_addr32 and TLS_addr64.
llvm-svn: 120225
2010-11-27 20:43:02 +00:00
Chris Lattner 941c19b7ba reject instructions that contain a \n in their asmstring. Mark
various X86 and ARM instructions that are bitten by this as isCodeGenOnly,
as they are.

llvm-svn: 117884
2010-11-01 00:46:16 +00:00
Chris Lattner 9492c17baf two changes: make the asmmatcher generator ignore ARM pseudos properly,
and make it a hard error for instructions to not have an asm string.
These instructions should be marked isCodeGenOnly.

llvm-svn: 117861
2010-10-31 19:15:18 +00:00
Michael J. Spencer f509c6ca27 X86: Add alloca probing to dynamic alloca on Windows. Fixes PR8424.
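
A hypothetical C example of the kind of dynamic alloca that now gets
probed on Windows:

    #include <string.h>

    /* When n exceeds a page, Windows requires the stack to be touched
       page by page as it grows; the dynamic alloca now emits that
       probing. */
    void scratch(const char *src, size_t n) {
        char *buf = __builtin_alloca(n);
        memcpy(buf, src, n);
    }
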
llvm-svn: 116984
2010-10-21 01:41:01 +00:00
Michael J. Spencer 9cafc872ab Fix Whitespace.
llvm-svn: 116972
2010-10-20 23:40:27 +00:00
Rafael Espindola 2216af3fa8 Fix another case where we were preferring instructions with large
immediates instead of 8 bits ones.

llvm-svn: 116410
2010-10-13 17:14:25 +00:00
Rafael Espindola 8ea9b0eb32 Fix PR8365 by adding a more specialized Pat that checks if an 'and' with
8 bit constants can be used.
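
For instance (a sketch, not the PR's testcase), an AND whose mask
sign-extends from 8 bits:

    /* 0xFFFFFFF0 is -16, which fits the sign-extended imm8 form
       (andl $-16, %edi) instead of a full 32-bit immediate. */
    unsigned align16(unsigned x) {
        return x & 0xFFFFFFF0u;
    }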

llvm-svn: 116403
2010-10-13 13:31:20 +00:00
Dan Gohman 395a898b2b Initial va_arg support for x86-64. Patch by David Meyer!
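
A small C example exercising va_arg on x86-64 (illustrative only):

    #include <stdarg.h>

    /* Sums 'count' int arguments; each va_arg goes through the new
       x86-64 lowering. */
    int sum_ints(int count, ...) {
        va_list ap;
        int total = 0;
        va_start(ap, count);
        for (int i = 0; i < count; i++)
            total += va_arg(ap, int);
        va_end(ap);
        return total;
    }
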
llvm-svn: 116319
2010-10-12 18:00:49 +00:00
Chris Lattner dd77477690 reapply: Use the new TB_NOT_REVERSABLE flag instead of special
reapply: reimplement the second half of the or/add optimization.  We should now

with no changes.  Turns out that one missing "Defs = [EFLAGS]" can upset things
a bit.

llvm-svn: 116040
2010-10-08 03:57:25 +00:00
Chris Lattner 626656a562 reapply the patch reverted in r116033:
"Reimplement (part of) the or -> add optimization.  Matching 'or' into 'add'"

With a critical fix: the add pseudos clobber EFLAGS.

llvm-svn: 116039
2010-10-08 03:54:52 +00:00
Daniel Dunbar 8f21f9c1fb Revert "Reimplement (part of) the or -> add optimization. Matching 'or' into
'add'", which seems to have broken just about everything.

llvm-svn: 116033
2010-10-08 02:07:32 +00:00
Daniel Dunbar efdf08b5b8 Revert "reimplement the second half of the or/add optimization. We should now",
which depends on r116007, which I am about to revert.

llvm-svn: 116031
2010-10-08 02:07:26 +00:00
Chris Lattner 134f415bf8 reimplement the second half of the or/add optimization. We should now
only end up emitting LEA instead of OR.  If we aren't able to promote
something into an LEA, we should never be emitting it as an ADD.

Add some testcases that we emit "or" in cases where we used to produce
an "add".

llvm-svn: 116026
2010-10-08 01:05:10 +00:00
Chris Lattner 4fb38d3cd3 Reimplement (part of) the or -> add optimization. Matching 'or' into 'add'
is general goodness because it allows ORs to be converted to LEA to avoid
inserting copies.  However, this is bad because it makes the generated .s
file less obvious and gives valgrind heartburn (tons of false positives in
bitfield code).

While the general fix should be in valgrind, we can at least try to avoid
emitting ADD instructions that *don't* get promoted to LEA.  This is more
work because it requires introducing pseudo instructions to represent
"add that knows the bits are disjoint", but hey, people really love valgrind.

This fixes this testcase:
https://bugs.kde.org/show_bug.cgi?id=242137#c20

the add r/i cases are coming next.
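
A C sketch (not from the commit) of an 'or' whose operands have provably
disjoint bits -- the case the new pseudo models:

    /* The shifted value has its low four bits clear, so OR-ing in a
       4-bit tag is equivalent to ADD and can be folded into an LEA. */
    unsigned tag_value(unsigned p, unsigned tag4) {
        return (p << 4) | (tag4 & 0xF);
    }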

llvm-svn: 116007
2010-10-07 23:36:18 +00:00
Chris Lattner cff5b0ea36 Move cmov pseudo instructions to InstrCompiler,
convert all the rest of the cmovs to the multiclass,
with good results:

 X86InstrCMovSetCC.td |  598 +--------------------------------------------------
 X86InstrCompiler.td  |   61 +++++
 2 files changed, 77 insertions(+), 582 deletions(-)

llvm-svn: 115707
2010-10-05 23:09:10 +00:00
Chris Lattner 1a1c600110 Use #NAME# to have the CMOV multiclass define things with the same names as before
(e.g. CMOVBE16rr instead of CMOVBErr16).

llvm-svn: 115705
2010-10-05 23:00:14 +00:00
Chris Lattner 7538ed80a9 enhance tblgen to support anonymous defm's, use this to
simplify the X86 CMOVmr's.

llvm-svn: 115702
2010-10-05 22:51:56 +00:00
Chris Lattner fa25dd9548 convert cmov mr patterns to use a multipattern. Death to redundancy
and verbosity

llvm-svn: 115701
2010-10-05 22:42:54 +00:00
Chris Lattner 0067ee02f9 switch CMOVBE to the multipattern:
21 insertions(+), 53 deletions(-)

Moar change coming before I switch the rest.

llvm-svn: 115697
2010-10-05 22:23:58 +00:00
Chris Lattner 8f4f1d1136 move SETB pseudos into the same place in InstrCompiler.td
llvm-svn: 115686
2010-10-05 21:18:04 +00:00
Chris Lattner 84571a1581 move some instructions from Instr64Bit -> InstrInfo.
bswap32 doesn't read eflags.

llvm-svn: 115604
2010-10-05 06:47:35 +00:00
Chris Lattner da8c94ef44 move CMOV_FR32 and friends to InstrCompiler, since they are
pseudo instructions.

Move POPCNT to InstrSSE since they are SSE4 instructions.

llvm-svn: 115603
2010-10-05 06:41:40 +00:00
Chris Lattner 44a5a2b569 move various pattern matching support goop out of X86Instr64Bit, to live
with the 32-bit stuff.

llvm-svn: 115602
2010-10-05 06:37:31 +00:00
Chris Lattner fa9b058eef split conditional moves and setcc's out to their own file.
llvm-svn: 115601
2010-10-05 06:33:16 +00:00
Chris Lattner f9594ba4e7 move string pseudo instructions to InstrCompiler; consolidate 64-bit and 32-bit together.
llvm-svn: 115600
2010-10-05 06:27:48 +00:00
Chris Lattner c184a57e98 move the atomic pseudo instructions out to X86InstrCompiler.td
llvm-svn: 115599
2010-10-05 06:22:35 +00:00
Chris Lattner c793f8bca6 move more pseudo instructions out to X86InstrCompiler.td
llvm-svn: 115598
2010-10-05 06:10:16 +00:00
Chris Lattner ae33f5d93b continue moving stuff out to X86InstrSystem.td. Move
control flow stuff out to X86InstrControl.td.  Move
some compiler pseudo instructions and Pat<> patterns
out to X86InstrCompiler.td

llvm-svn: 115596
2010-10-05 06:04:14 +00:00