Commit Graph

199 Commits

Author SHA1 Message Date
Ben Langmuir 1650175de6 Partial support for Intel SHA Extensions (sha1rnds4)
Add basic assembly/disassembly support for the first Intel SHA
instruction 'sha1rnds4'. Also includes feature flag, and test cases.

Support for the remaining instructions will follow in a separate patch.
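For illustration (operand choice is mine, not taken from the patch's tests), the new mnemonic assembles as:

sha1rnds4 $1, %xmm1, %xmm0   # imm8 (0-3) selects the SHA-1 round-constant set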

llvm-svn: 190611
2013-09-12 15:51:31 +00:00
Benjamin Kramer 8f429384b5 X86: Add a description of the Intel Atom Silvermont CPU.
Currently this is just the atom model with SSE4.2 enabled.

llvm-svn: 189669
2013-08-30 14:05:32 +00:00
Rafael Espindola 94a2c5642d Rename features to match what gcc and clang use.
There is no advantage in being different and using the same names simplifies
clang a bit.

llvm-svn: 189141
2013-08-23 20:21:34 +00:00
Craig Topper 5c94bb8551 Rename mattr names for AVX-512: avx-512 -> avx512f, avx-512-pfi -> avx512pf, avx-512-cdi -> avx512cd, avx-512-eri -> avx512er. This matches better with the official docs and what the gcc patches appear to be using. I didn't touch the has* functions or the feature flag names, to avoid changing the td and lowering files while commits are still happening.
llvm-svn: 188859
2013-08-21 03:57:57 +00:00
Elena Demikhovsky 003e7d73b9 Added encoding prefixes for KNL instructions (EVEX).
Added 512-bit operands printing.
Added instruction formats for KNL instructions.
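As a hedged illustration (not from the commit's own tests), an EVEX-encoded instruction with 512-bit operands now prints as:

vaddps %zmm2, %zmm1, %zmm0   # EVEX prefix, zmm (512-bit) register operands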

llvm-svn: 187324
2013-07-28 08:28:38 +00:00
Elena Demikhovsky 8cfb43f73b I'm starting to commit the KNL backend. I'll push patches one by one. This patch includes support for the extended register set XMM16-31, YMM16-31, ZMM0-31.
The full ISA can be found here: http://software.intel.com/en-us/intel-isa-extensions

llvm-svn: 187030
2013-07-24 11:02:47 +00:00
Benjamin Kramer b44c4275d5 X86: Add target description for btver2; make autodetection logic aware of AVX.
llvm-svn: 181005
2013-05-03 10:20:08 +00:00
Preston Gurd 8b7ab4ba2b This patch adds the X86FixupLEAs pass, which will reduce instruction
latency for certain models of the Intel Atom family, by converting
instructions into their equivalent LEA instructions, when it is both
useful and possible to do so.
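A minimal before/after sketch of the conversion (registers chosen for illustration; the rewrite is only legal when the flags result of the original instruction is unused):

# before: ALU add, also writes EFLAGS
addl $8, %esi
# after: equivalent address computation via LEA
leal 8(%esi), %esi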

llvm-svn: 180573
2013-04-25 20:29:37 +00:00
Chad Rosier 9f7a221fdc [asm parser] Add support for predicating MnemonicAlias based on the assembler
variant/dialect.  Addresses a FIXME in the emitMnemonicAliases function.
Use and test case to come shortly.
rdar://13688439 and part of PR13340.

llvm-svn: 179804
2013-04-18 22:35:36 +00:00
Michael Liao a486a11dcf Add support of RDSEED defined in AVX2 extension
llvm-svn: 178314
2013-03-28 23:41:26 +00:00
Nadav Rotem e7b6a8aa8c Add the Haswell machine model.
llvm-svn: 178301
2013-03-28 22:34:46 +00:00
Preston Gurd 663e6f9558 For the current Atom processor, the fastest way to handle an indirect
call through a memory address is to load the address into
a register and then call indirectly through the register.

This patch implements this improvement by modifying SelectionDAG to
force a function address which is a memory reference to be loaded
into a virtual register.
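A rough before/after sketch in assembly terms (register choice is illustrative; the patch itself works at the SelectionDAG level):

# before: call indirect through a memory operand
call *(%eax)
# after: load the target address, then call indirect through the register
movl (%eax), %ecx
call *%ecx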

Patch by Sriram Murali.

llvm-svn: 178171
2013-03-27 19:14:02 +00:00
Michael Liao e344ec919f Add HLE target feature
llvm-svn: 178082
2013-03-26 22:46:02 +00:00
Jakob Stoklund Olesen 1ac7e662d4 Enable SandyBridgeModel for all modern Intel P6 descendants.
All Intel CPUs since Yonah look a lot alike, at least at the granularity
of the scheduling models. We can add more accurate models for
processors that aren't Sandy Bridge if required. Haswell will probably
need its own.

The Atom processor and anything based on NetBurst is completely
different. So are the non-Intel chips.

llvm-svn: 178080
2013-03-26 22:19:12 +00:00
Michael Liao 5173ee03af Add PREFETCHW codegen support
- Add 'PRFCHW' feature defined in AVX2 ISA extension

llvm-svn: 178040
2013-03-26 17:47:11 +00:00
Kay Tiong Khoo f809c6491d added basic support for Intel ADX instructions
- feature flag, instruction definitions, test cases

llvm-svn: 175196
2013-02-14 19:08:21 +00:00
Preston Gurd a01daace88 Pad Short Functions for Intel Atom
The current Intel Atom microarchitecture has a feature whereby,
when a function returns early, it is slightly faster to execute
a sequence of NOP instructions to wait until the return address is ready,
as opposed to simply stalling on the ret instruction until
the return address is ready.

When compiling for X86 Atom only, this patch will run a pass,
called "X86PadShortFunction" which will add NOP instructions where less
than four cycles elapse between function entry and return.
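A hedged sketch of the padded output for a very short function (the function name is hypothetical, and the exact number of NOPs depends on the cycle count):

short_fn:                 # hypothetical
    movl $1, %eax
    nop                   # padding added by X86PadShortFunction
    nop
    ret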

It includes tests.

This patch has been updated to address Nadav's review comments
- Optimize only at >= O1 and don't do optimization if -Os is set
- Stores MachineBasicBlock* instead of BBNum
- Uses DenseMap instead of std::map
- Fixes placement of braces

Patch by Andy Zhang.

llvm-svn: 171879
2013-01-08 18:27:24 +00:00
Nadav Rotem 478b6a47ec Revert revision 171524. Original message:
URL: http://llvm.org/viewvc/llvm-project?rev=171524&view=rev
Log:
The current Intel Atom microarchitecture has a feature whereby when a function
returns early then it is slightly faster to execute a sequence of NOP
instructions to wait until the return address is ready,
as opposed to simply stalling on the ret instruction
until the return address is ready.

When compiling for X86 Atom only, this patch will run a pass, called
"X86PadShortFunction" which will add NOP instructions where less than four
cycles elapse between function entry and return.

It includes tests.

Patch by Andy Zhang.

llvm-svn: 171603
2013-01-05 05:42:48 +00:00
Preston Gurd e36b685a94 The current Intel Atom microarchitecture has a feature whereby when a function
returns early then it is slightly faster to execute a sequence of NOP
instructions to wait until the return address is ready,
as opposed to simply stalling on the ret instruction
until the return address is ready.

When compiling for X86 Atom only, this patch will run a pass, called
"X86PadShortFunction" which will add NOP instructions where less than four
cycles elapse between function entry and return.

It includes tests.

Patch by Andy Zhang.

llvm-svn: 171524
2013-01-04 20:54:54 +00:00
Chandler Carruth 7a28f95419 Make '-mtune=x86_64' assume fast unaligned memory accesses.
Not all chips targeted by x86_64 have this feature, but a dramatically
increasing number do. Specifying a chip-specific tuning parameter will
continue to turn the feature on or off as appropriate for that
particular chip, but the generic flag should try to achieve the best
performance on the most widely available hardware. Today, the number of
chips with fast UA access dwarfs those without in the x86-64 space.

Note that this also brings LLVM's code generation for this '-march' flag
more in line with that of modern GCCs. Reviewed by Dan Gohman.

llvm-svn: 170269
2012-12-15 09:01:13 +00:00
Chandler Carruth 867c7bff9a Revert "Make '-mtune=x86_64' assume fast unaligned memory accesses."
Accidental commit... git svn betrayed me. Sorry for the noise.

llvm-svn: 169741
2012-12-10 18:23:52 +00:00
Chandler Carruth 7eaa45c738 Make '-mtune=x86_64' assume fast unaligned memory accesses.
Summary:
Not all chips targeted by x86_64 have this feature, but a dramatically
increasing number do. Specifying a chip-specific tuning parameter will
continue to turn the feature on or off as appropriate for that
particular chip, but the generic flag should try to achieve the best
performance on the most widely available hardware. Today, the number of
chips with fast UA access dwarfs those without in the x86-64 space.

Note that this also brings LLVM's code generation for this '-march' flag
more in line with that of modern GCCs.

CC: llvm-commits

Differential Revision: http://llvm-reviews.chandlerc.com/D195

llvm-svn: 169740
2012-12-10 18:22:42 +00:00
Chandler Carruth 0f58558101 Address a FIXME and update the fast unaligned memory feature for newer
Intel chips.

The model number rules were determined by inspecting Intel's
documentation for their newer chip model numbers. My understanding is
that all of the newer Intel chips have fast unaligned memory access, but
if anyone is concerned about a particular chip, just shout.

No tests updated; it's not clear we have dedicated tests for the chips'
various features, but if anyone would like tests (or can point me at
some existing ones), I'm happy to oblige.

llvm-svn: 169730
2012-12-10 09:18:44 +00:00
Michael Liao 73cffddb95 Add support of RTM from TSX extension
- Add RTM code generation support through 3 X86 intrinsics:
  xbegin()/xend() to start/end a transaction region, and xabort() to abort a
  transaction region

llvm-svn: 167573
2012-11-08 07:28:54 +00:00
Michael Liao c6696b04db Atom has SIMD instruction set extensions up to SSSE3
llvm-svn: 166665
2012-10-25 07:06:48 +00:00
Craig Topper 303878108e Fix 80-column violation
llvm-svn: 165089
2012-10-03 03:56:12 +00:00
Roman Divacky fd69009419 Add support for AMD Geode.
llvm-svn: 163710
2012-09-12 14:36:02 +00:00
Preston Gurd cdf540d5d6 Generic Bypass Slow Div
- CodeGenPrepare pass for identifying div/rem ops
- Backend specifies the type mapping using addBypassSlowDivType
- Enabled only for Intel Atom with O2 32-bit -> 8-bit
- Replace IDIV with instructions that test the operand values and use DIVB if
they are positive and less than 256 (see the sketch after this list).
- In the case when the quotient and remainder of a divide are used a DIV
and a REM instruction will be present in the IR. In the non-Atom case
they are both lowered to IDIVs and CSE removes the redundant IDIV instruction,
using the quotient and remainder from the first IDIV. However,
due to this optimization CSE is not able to eliminate redundant
IDIV instructions because they are located in different basic blocks.
This is overcome by calculating both the quotient (DIV) and remainder (REM)
in each basic block that is inserted by the optimization and reusing the result
values when a subsequent DIV or REM instruction uses the same operands.
- Test cases check for the presence of the optimization when calculating
either the quotient, remainder, or both.
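A hedged sketch of the bypass in assembly terms (the pass itself works on IR in CodeGenPrepare; labels and registers are illustrative, assuming the 32-bit dividend is in %eax and the divisor in %ecx, and only the quotient path is shown):

movl   %eax, %edx
orl    %ecx, %edx
testl  $0xffffff00, %edx      # do both operands fit in 8 bits?
jne    .Lslow_div             # hypothetical label
movzbl %al, %eax              # AX = dividend (upper bits already known zero)
divb   %cl                    # fast 8-bit divide: quotient in AL, remainder in AH
jmp    .Ldone
.Lslow_div:
xorl   %edx, %edx
divl   %ecx                   # full 32-bit divide: quotient in EAX, remainder in EDX
.Ldone: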

Patch by Tyler Nowicki!

llvm-svn: 163150
2012-09-04 18:22:17 +00:00
Anitha Boyapati af3e98347f Patch to enable FMA on bdver2 target. Make XOP feature enable FMA4 as well.
llvm-svn: 162012
2012-08-16 04:04:02 +00:00
Anitha Boyapati 426feb61b9 (no commit message)
llvm-svn: 162010
2012-08-16 03:50:04 +00:00
Andrew Trick 87255e340e I'm introducing a new machine model to simultaneously allow simple
subtarget CPU descriptions and support new features of
MachineScheduler.

MachineModel has three categories of data:
1) Basic properties for coarse grained instruction cost model.
2) Scheduler Read/Write resources for simple per-opcode and operand cost model (TBD).
3) Instruction itineraries for detailed per-cycle reservation tables.

These will all live side-by-side. Any subtarget can use any
combination of them. Instruction itineraries will not change in the
near term. In the long run, I expect them to only be relevant for
in-order VLIW machines that have complex constraints and require a
precise scheduling/bundling model. Once itineraries are only actively
used by VLIW-ish targets, they could be replaced by something more
appropriate for those targets.

This tablegen backend rewrite sets things up for introducing
MachineModel type #2: per opcode/operand cost model.

llvm-svn: 159891
2012-07-07 04:00:00 +00:00
Craig Topper 79dbb0c6e4 Rename FMA3 feature flag to just FMA to match gcc so it can be added to clang.
llvm-svn: 157903
2012-06-03 18:58:46 +00:00
Benjamin Kramer a0396e4583 X86: Rename the CLMUL target feature to PCLMUL.
It was renamed in gcc/gas a while ago and causes all kinds of
confusion because it was named differently in llvm and clang.

llvm-svn: 157745
2012-05-31 14:34:17 +00:00
Craig Topper bae0e9ea1d Make XOP and FMA4 require SSE4A to match GCC behavior. Use this to simplify Bulldozer feature list.
llvm-svn: 155897
2012-05-01 06:54:48 +00:00
Craig Topper 43518cc55f Make XOP imply AVX as it's needed to legalize the register types.
llvm-svn: 155891
2012-05-01 05:41:41 +00:00
Craig Topper 29dd148a71 Make CLMUL and AES imply SSE2 since it's needed to legalize the type.
llvm-svn: 155888
2012-05-01 05:28:32 +00:00
Craig Topper 0eacda5f69 Enable AVX and FMA4 for AMD Bulldozer processors.
llvm-svn: 155885
2012-05-01 05:18:13 +00:00
Craig Topper 08ccfbe57b Enable detection of AVX and AVX2 support through CPUID. Add AVX/AVX2 to corei7-avx, core-avx-i, and core-avx2 cpu names.
llvm-svn: 155618
2012-04-26 06:40:15 +00:00
Jia Liu b22310fda6 Emacs tags and some comment fixes for ARM, CellSPU, Hexagon, MBlaze, MSP430, PPC, PTX, Sparc, X86, XCore.
llvm-svn: 150878
2012-02-18 12:03:15 +00:00
Evan Cheng 1b81fddd65 Use LEA to adjust stack ptr for Atom. Patch by Andy Zhang.
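For illustration (a sketch, not the patch's own test case), the stack adjustment changes roughly like this:

# before
subl $16, %esp
# after, on Atom: LEA computes the same address and leaves EFLAGS alone
leal -16(%esp), %esp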
llvm-svn: 150008
2012-02-07 22:50:41 +00:00
Andrew Trick 8523b16ff5 Instruction scheduling itinerary for Intel Atom.
Adds an instruction itinerary to all x86 instructions, giving each a default latency of 1, using the InstrItinClass IIC_DEFAULT.

Sets specific latencies for Atom for the instructions in files X86InstrCMovSetCC.td, X86InstrArithmetic.td, X86InstrControl.td, and X86InstrShiftRotate.td. The Atom latencies for the remainder of the x86 instructions will be set in subsequent patches.

Adds a test to verify that the scheduler is working.

Also changes the scheduling preference to "Hybrid" for i386 Atom, while leaving x86_64 as ILP.

Patch by Preston Gurd!

llvm-svn: 149558
2012-02-01 23:20:51 +00:00
Devang Patel 4a6e778aae Rename X86ATTAsmParser -> X86AsmParser
We are using one parser to parse att as well as intel style syntax.

llvm-svn: 148032
2012-01-12 18:03:40 +00:00
Devang Patel 67bf992a8f Add a definition for the Intel asm variant.
Right now, this just adds additional entries in the match table. The parser does not use them yet.

llvm-svn: 147859
2012-01-10 17:51:54 +00:00
Benjamin Kramer 077ae1d760 Add definitions for AMD's bobcat (aka btver1)
llvm-svn: 147846
2012-01-10 11:50:02 +00:00
Devang Patel 85d684a4d9 Split AsmParser into two components - AsmParser and AsmParserVariant
AsmParser holds info specific to target parser.
AsmParserVariant holds info specific to asm variants supported by the target.

llvm-svn: 147787
2012-01-09 19:13:28 +00:00
Craig Topper f287a4509e Remove AVX hack in X86Subtarget. AVX/AVX2 are now treated as an SSE level. Predicate functions have been altered to maintain previous names and behavior.
llvm-svn: 147770
2012-01-09 09:02:13 +00:00
Craig Topper a5d1fc2cc7 Make FMA4 imply AVX so that YMM registers would be available. Necessitates removing it from the Bulldozer CPU types since it would enable AVX code generation implicitly. Also make SSE4A imply SSE3. Without some level of SSE implied, XMM registers wouldn't be legal.
llvm-svn: 147369
2011-12-30 07:16:00 +00:00
Craig Topper e1bd05128e Make FMA3 imply that AVX needs to be enabled, particularly because 256-bit types aren't valid unless AVX is enabled.
llvm-svn: 147349
2011-12-29 19:46:19 +00:00
Craig Topper a060afb5ba Add FeaturePOPCNT to all CPU types that lost it when it was removed from SSE42/SSE4A in r147339.
llvm-svn: 147347
2011-12-29 18:47:31 +00:00
Craig Topper 7bd3305f3e Make SSE42 and SSE4A not imply POPCNT. POPCNT should be able to be disabled on its own without disabling SSE4.2 or SSE4A.
llvm-svn: 147339
2011-12-29 15:51:45 +00:00
Jan Sjödin 1280eb1d06 Add XOP feature flag.
llvm-svn: 145682
2011-12-02 15:14:37 +00:00
Benjamin Kramer 5feb3dab79 X86: Turns out bulldozer also supports sse42 and lzcnt.
While at it, remove the barcelona/istanbul/shanghai subtargets; they're
unsupported by GCC and look pretty broken.

llvm-svn: 145494
2011-11-30 15:48:16 +00:00
Benjamin Kramer 981f32327d X86: Add subtargets for AMD's bulldozer.
llvm-svn: 145493
2011-11-30 15:27:46 +00:00
Craig Topper 228d9131aa Add intrinsics and feature flag for read/write FS/GS base instructions. Also add AVX2 feature flag.
llvm-svn: 143319
2011-10-30 19:57:21 +00:00
David Meyer 49045ddb4c Remove NaClMode
llvm-svn: 142338
2011-10-18 05:29:23 +00:00
Craig Topper aea148c366 Add X86 BZHI instruction as well as BMI2 feature detection.
llvm-svn: 142122
2011-10-16 07:55:05 +00:00
Craig Topper 3657fe4b17 Add X86 TZCNT instruction and patterns to select it. Also added core-avx2 processor which is gcc's name for Haswell.
llvm-svn: 141939
2011-10-14 03:21:46 +00:00
Bill Wendling 063f55ffdd Revert r141854 because it was causing failures:
http://lab.llvm.org:8011/builders/llvm-x86_64-linux/builds/101

--- Reverse-merging r141854 into '.':
U    test/MC/Disassembler/X86/x86-32.txt
U    test/MC/Disassembler/X86/simple-tests.txt
D    test/CodeGen/X86/bmi.ll
U    lib/Target/X86/X86InstrInfo.td
U    lib/Target/X86/X86ISelLowering.cpp
U    lib/Target/X86/X86.td
U    lib/Target/X86/X86Subtarget.h

llvm-svn: 141857
2011-10-13 07:48:07 +00:00
Craig Topper 8cc9388073 Add X86 TZCNT instruction and patterns to select it. Also added core-avx2 processor which is gcc's name for Haswell.
llvm-svn: 141854
2011-10-13 07:09:14 +00:00
Craig Topper 271064e873 Add X86 LZCNT instruction. Including instruction selection support.
llvm-svn: 141651
2011-10-11 06:44:02 +00:00
Benjamin Kramer 874c519337 X86: Add a subtarget definition for core-avx-i, which is GCC's name for ivy bridge.
llvm-svn: 141571
2011-10-10 19:35:07 +00:00
Benjamin Kramer 42c0330a79 X86: Add patterns for the movbe instruction (mov + bswap, only available on atom)
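For example (illustrative operands), the load-plus-bswap sequence can now be selected as a single movbe:

# before
movl   (%eax), %ecx
bswapl %ecx
# after
movbe  (%eax), %ecx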
llvm-svn: 141563
2011-10-10 18:34:56 +00:00
Craig Topper fe9179fa4f Add Ivy Bridge 16-bit floating point conversion instructions for the X86 disassembler.
llvm-svn: 141505
2011-10-09 07:31:39 +00:00
Craig Topper 786bdb9e14 Add support for MOVBE and RDRAND instructions for the assembler and disassembler. Includes feature flag checking, but no intrinsic support. Fixes PR10832, PR11026 and PR11027.
llvm-svn: 141007
2011-10-03 17:28:23 +00:00
Nick Lewycky 73df7e3830 Add a new MC bit for NaCl (Native Client) mode. NaCl requires that certain
instructions are more aligned than the CPU requires, and adds some additional
directives, to follow in future patches. Patch by David Meyer!

llvm-svn: 139125
2011-09-05 21:51:43 +00:00
Eli Friedman 5e5704277f Add support for generating CMPXCHG16B on x86-64 for the cmpxchg IR instruction.
llvm-svn: 138660
2011-08-26 21:21:21 +00:00
Evan Cheng 13bcc6c1c7 Add Mode64Bit feature and sink it down to MC layer.
llvm-svn: 134641
2011-07-07 21:06:52 +00:00
Benjamin Kramer 0bf26746d9 Rename the "sandybridge" subtarget to "corei7-avx", for GCC compatibility.
llvm-svn: 131730
2011-05-20 15:11:26 +00:00
Michael J. Spencer 9973738b65 Add pentium{3,4}m cpus. Patch by Alexander Best!
llvm-svn: 130749
2011-05-03 03:42:50 +00:00
Chris Lattner 0ab5e2cded Fix a ton of comment typos found by codespell. Patch by
Luis Felipe Strano Moraes!

llvm-svn: 129558
2011-04-15 05:18:47 +00:00
Michael J. Spencer 30088ba110 Add 3DNow! intrinsics.
llvm-svn: 129551
2011-04-15 00:32:41 +00:00
Michael J. Spencer b88784c185 Fix whitespace and tabs.
llvm-svn: 129517
2011-04-14 14:33:36 +00:00
Evan Cheng f8b4c0035b Disable auto-detection of AVX support since AVX codegen support is not ready.
llvm-svn: 121677
2010-12-13 04:23:53 +00:00
Nate Begeman 8b08f5232b Formalize the notion that AVX and SSE are non-overlapping extensions from the compiler's point of view. Per email discussion, we either want to always use VEX-prefixed instructions or never use them, and are taking "HasAVX" to mean "Always use VEX". Passing -mattr=-avx,+sse42 should serve to restore legacy SSE support when desirable.
llvm-svn: 121439
2010-12-10 00:26:57 +00:00
Benjamin Kramer 2f489236ab Add patterns for the x86 popcnt instruction.
- Also adds a new POPCNT subtarget feature that is currently enabled if the target
  supports SSE4.2 (nehalem) or SSE4A (barcelona).

llvm-svn: 120917
2010-12-04 20:32:23 +00:00
Jim Grosbach 4cf25f5ba9 Clean up comments.
llvm-svn: 117785
2010-10-30 13:48:28 +00:00
Jim Grosbach c6e13f7383 Clean up asm writer usage for x86 and msp430 to flag that the writer should
use MC instructions in the printInstruction() method via the tablegen flag
for it rather than a #define prior to including the autogenerated bits.

llvm-svn: 115238
2010-09-30 23:40:25 +00:00
Daniel Dunbar 167b9d7f30 tblgen/AsmMatcher: Always emit the match function as 'MatchInstructionImpl';
target-specific parsers can adapt the TargetAsmParser to this.

llvm-svn: 110888
2010-08-12 00:55:32 +00:00
Bruno Cardoso Lopes d618c8ac64 Declare CLMUL as a subtarget feature
llvm-svn: 109207
2010-07-23 01:22:45 +00:00
Daniel Dunbar b82cd9319b MC/X86: We now match instructions like "incl %eax" correctly for the arch we are
assembling; remove crufty custom cleanup code.

llvm-svn: 108681
2010-07-19 06:14:54 +00:00
Daniel Dunbar 9b816a1bb3 MC/X86: Add "support" for matching ATT style mnemonic prefixes.
- The idea is that when a match fails, we just try to match each of +'b', +'w',
   +'l'. If exactly one matches, we assume this is a mnemonic prefix and accept
   it. If all match, we assume it is width generic, and take the 'l' form (see
   the example after this list).

 - This would be a horrible hack, if it weren't so simple. Therefore it is an
   elegant solution! Chris gets the credit for this particular elegant
   solution. :)

 - Next step to making this more robust is to have the X86 matcher generate the
   mnemonic prefix information. Ideally we would also compute up-front exactly
   which mnemonic to attempt to match, but this may require more custom code in
   the matcher than is really worth it.
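A small illustration of the retry behaviour (input an assembler user might write):

inc %eax       # no direct match; only the +'l' retry fits, so it is accepted as incl %eax
inc (%eax)     # incb/incw/incl all fit: treated as width generic, and the 'l' form is taken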

llvm-svn: 103012
2010-05-04 16:12:42 +00:00
Jakob Stoklund Olesen b93331f3be Replace TSFlagsFields and TSFlagsShifts with a simpler TSFlags field.
When a target instruction wants to set target-specific flags, it should simply
set bits in the TSFlags bit vector defined in the Instruction TableGen class.

This works well because TableGen resolves member references late:

class I : Instruction {
  AddrMode AM = AddrModeNone;
  let TSFlags{3-0} = AM.Value;
}

let AM = AddrMode4 in
def ADD : I;

TSFlags gets the expected bits from AddrMode4 in this example.

llvm-svn: 100384
2010-04-05 03:10:20 +00:00
Eric Christopher 2ef63183a5 Separate out the AES-NI instructions from the SSE4.2 instructions. Add
a new subtarget option for AES and check for its support.  Add the "westmere"
line of processors and add AES-NI support to the Core i7.

Add a couple of TODOs for information I couldn't verify.

llvm-svn: 100231
2010-04-02 21:54:27 +00:00
Evan Cheng 738b0f9ec7 Nehalem unaligned memory access is fast.
llvm-svn: 100089
2010-04-01 05:58:17 +00:00
Jakob Stoklund Olesen f8d7eda663 Teach TableGen to understand X.Y notation in the TSFlagsFields strings.
Remove much horribleness from X86InstrFormats as a result. Similar
simplifications are probably possible for other targets.

llvm-svn: 99539
2010-03-25 18:52:01 +00:00
Jakob Stoklund Olesen 49e121d5e4 Add a late SSEDomainFix pass that twiddles SSE instructions to avoid domain crossings.
On Nehalem and newer CPUs there is a 2 cycle latency penalty on using a register
in a different domain than where it was defined. Some instructions have
equivalents for different domains, like por/orps/orpd.
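For example (an illustrative sequence, not from the patch):

# before: integer-domain OR consumes an FP-domain result -> bypass delay
mulps %xmm1, %xmm0
por   %xmm2, %xmm0
# after: the equivalent FP-domain opcode avoids the domain crossing
mulps %xmm1, %xmm0
orps  %xmm2, %xmm0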

The SSEDomainFix pass tries to minimize the number of domain crossings by
changing between equivalent opcodes where possible.

This is a work in progress, in particular the pass doesn't do anything yet. SSE
instructions are tagged with their execution domain in TableGen using the last
two bits of TSFlags. Note that not all instructions are tagged correctly. Life
just isn't that simple.

The SSE execution domain issue is very similar to the ARM NEON/VFP pipeline
issue handled by NEONMoveFixPass. This pass may become target independent to
handle both.

llvm-svn: 99524
2010-03-25 17:25:00 +00:00
Daniel Dunbar 63ec093b6e MC/X86/AsmMatcher: Use the new instruction cleanup routine to implement a
temporary workaround for matching inc/dec on x86_64 to the correct instruction.
 - This hack will eventually be replaced with a robust mechanism for handling
   matching instructions based on the available target features.

llvm-svn: 98858
2010-03-18 20:06:02 +00:00
Chris Lattner 77f7dba60c all 64-bit cpus have cmov; this should fix CodeGen/X86/cmov.ll
(at least) on non-x86 builders.

llvm-svn: 98520
2010-03-14 22:24:34 +00:00
Chris Lattner 44ac89f517 revert r95949; it turns out that adding new prefixes is not a
great solution for the disassembler, so we'll go with "plan b".

llvm-svn: 95957
2010-02-12 01:55:31 +00:00
Chris Lattner 336f9abb45 add another bit of space for new kinds of instruction prefixes.
llvm-svn: 95949
2010-02-12 01:15:16 +00:00
David Greene 206351a1ff Implement a feature (-vector-unaligned-mem) to allow targets to
ignore alignment requirements for SIMD memory operands.  This
is useful on architectures like the AMD 10h that do not trap on
unaligned references if a status bit is twiddled at startup time.

llvm-svn: 93151
2010-01-11 16:29:42 +00:00
Evan Cheng 71d7eaa87e Remove target attribute break-sse-dep. Instead, do not fold load into sse partial update instructions unless optimizing for size.
llvm-svn: 91910
2009-12-22 17:47:23 +00:00
Evan Cheng 4cf30b72bf On recent Intel microarchitectures, folding loads into some unary SSE instructions can
be non-optimal. To be precise, we should avoid folding loads if the instructions
only update part of the destination register, and the non-updated part is not
needed. e.g. cvtss2sd, sqrtss. Unfolding the load from these instructions breaks
the partial register dependency and it can improve performance. e.g.

movss (%rdi), %xmm0
cvtss2sd %xmm0, %xmm0

instead of
cvtss2sd (%rdi), %xmm0

An alternative method to break dependency is to clear the register first. e.g.
xorps %xmm0, %xmm0
cvtss2sd (%rdi), %xmm0

llvm-svn: 91672
2009-12-18 07:40:29 +00:00
Sean Callanan 04d8cb74f3 Instruction fixes, added instructions, and AsmString changes in the
X86 instruction tables.

Also (while I was at it) cleaned up the X86 tables, removing tabs and
80-line violations.

This patch was reviewed by Chris Lattner, but please let me know if
there are any problems.

* X86*.td
	Removed tabs and fixed 80-line violations

* X86Instr64bit.td
	(IRET, POPCNT, BT_, LSL, SWPGS, PUSH_S, POP_S, L_S, SMSW)
		Added
	(CALL, CMOV) Added qualifiers
	(JMP) Added PC-relative jump instruction
	(POPFQ/PUSHFQ) Added qualifiers; renamed PUSHFQ to indicate
		that it is 64-bit only (ambiguous since it has no
		REX prefix)
	(MOV) Added rr form going the other way, which is encoded
		differently
	(MOV) Changed immediates to offsets, which is more correct;
		also fixed MOV64o64a to have a 64-bit offset
	(MOV) Fixed qualifiers
	(MOV) Added debug-register and condition-register moves
	(MOVZX) Added more forms
	(ADC, SUB, SBB, AND, OR, XOR) Added reverse forms, which
		(as with MOV) are encoded differently
	(ROL) Made REX.W required
	(BT) Uncommented mr form for disassembly only
	(CVT__2__) Added several missing non-intrinsic forms
	(LXADD, XCHG) Reordered operands to make more sense for
		MRMSrcMem
	(XCHG) Added register-to-register forms
	(XADD, CMPXCHG, XCHG) Added non-locked forms
* X86InstrSSE.td
	(CVTSS2SI, COMISS, CVTTPS2DQ, CVTPS2PD, CVTPD2PS, MOVQ)
		Added
* X86InstrFPStack.td
	(COM_FST0, COMP_FST0, COM_FI, COM_FIP, FFREE, FNCLEX, FNOP,
	 FXAM, FLDL2T, FLDL2E, FLDPI, FLDLG2, FLDLN2, F2XM1, FYL2X,
	 FPTAN, FPATAN, FXTRACT, FPREM1, FDECSTP, FINCSTP, FPREM,
	 FYL2XP1, FSINCOS, FRNDINT, FSCALE, FCOMPP, FXSAVE,
	 FXRSTOR)
		Added
	(FCOM, FCOMP) Added qualifiers
	(FSTENV, FSAVE, FSTSW) Fixed opcode names
	(FNSTSW) Added implicit register operand
* X86InstrInfo.td
	(opaque512mem) Added for FXSAVE/FXRSTOR
	(offset8, offset16, offset32, offset64) Added for MOV
	(NOOPW, IRET, POPCNT, IN, BTC, BTR, BTS, LSL, INVLPG, STR,
	 LTR, PUSHFS, PUSHGS, POPFS, POPGS, LDS, LSS, LES, LFS,
	 LGS, VERR, VERW, SGDT, SIDT, SLDT, LGDT, LIDT, LLDT,
	 LODSD, OUTSB, OUTSW, OUTSD, HLT, RSM, FNINIT, CLC, STC,
	 CLI, STI, CLD, STD, CMC, CLTS, XLAT, WRMSR, RDMSR, RDPMC,
	 SMSW, LMSW, CPUID, INVD, WBINVD, INVEPT, INVVPID, VMCALL,
	 VMCLEAR, VMLAUNCH, VMRESUME, VMPTRLD, VMPTRST, VMREAD,
	 VMWRITE, VMXOFF, VMXON) Added
	(NOOPL, POPF, POPFD, PUSHF, PUSHFD) Added qualifier
	(JO, JNO, JB, JAE, JE, JNE, JBE, JA, JS, JNS, JP, JNP, JL,
	 JGE, JLE, JG, JCXZ) Added 32-bit forms
	(MOV) Changed some immediate forms to offset forms
	(MOV) Added reversed reg-reg forms, which are encoded
		differently
	(MOV) Added debug-register and condition-register moves
	(CMOV) Added qualifiers
	(AND, OR, XOR, ADC, SUB, SBB) Added reverse forms, like MOV
	(BT) Uncommented memory-register forms for disassembler
	(MOVSX, MOVZX) Added forms
	(XCHG, LXADD) Made operand order make sense for MRMSrcMem
	(XCHG) Added register-register forms
	(XADD, CMPXCHG) Added unlocked forms
* X86InstrMMX.td
	(MMX_MOVD, MMX_MOVQ) Added forms
* X86InstrInfo.cpp: Changed PUSHFQ to PUSHFQ64 to reflect table
	change

* X86RegisterInfo.td: Added debug and condition register sets
* x86-64-pic-3.ll: Fixed testcase to reflect call qualifier
* peep-test-3.ll: Fixed testcase to reflect test qualifier
* cmov.ll: Fixed testcase to reflect cmov qualifier
* loop-blocks.ll: Fixed testcase to reflect call qualifier
* x86-64-pic-11.ll: Fixed testcase to reflect call qualifier
* 2009-11-04-SubregCoalescingBug.ll: Fixed testcase to reflect call
  qualifier
* x86-64-pic-2.ll: Fixed testcase to reflect call qualifier
* live-out-reg-info.ll: Fixed testcase to reflect test qualifier
* tail-opts.ll: Fixed testcase to reflect call qualifiers
* x86-64-pic-10.ll: Fixed testcase to reflect call qualifier
* bss-pagealigned.ll: Fixed testcase to reflect call qualifier
* x86-64-pic-1.ll: Fixed testcase to reflect call qualifier
* widen_load-1.ll: Fixed testcase to reflect call qualifier

llvm-svn: 91638
2009-12-18 00:01:26 +00:00
Chris Lattner 13306a1fff remove a temporary hack.
llvm-svn: 82395
2009-09-20 07:47:59 +00:00
Chris Lattner 1cbd3ded33 split MCInst printing out of the X86ATTInstPrinter
class into its own X86ATTInstPrinter class.  The inst
printer now has just one dependence on the code generator
(TRI).

llvm-svn: 81703
2009-09-13 19:30:11 +00:00
Chris Lattner cc8c581a5b Add support for modeling whether or not the processor has support for
conditional moves as a subtarget feature.  This is the easy part of 
PR4841.

llvm-svn: 80763
2009-09-02 05:53:04 +00:00
Daniel Dunbar e431871851 llvm-mc/AsmParser: Allow target to specify a comment delimiter, which will be
used to strip hard coded comments out of .td assembly strings.

llvm-svn: 78716
2009-08-11 20:59:47 +00:00
Daniel Dunbar 0033199c50 Match X86 register names to numbers.
llvm-svn: 77404
2009-07-29 00:02:19 +00:00
David Greene 46b56ffae3 Add processor descriptions for Istanbul and Shanghai.
llvm-svn: 74429
2009-06-29 16:54:06 +00:00