to finalize MI bundles (i.e. add a BUNDLE instruction and compute the register def
and use lists of the BUNDLE instruction) and a pass to unpack bundles (see the
sketch below).
- Teach more of the MachineBasicBlock and MachineInstr methods to be bundle aware.
- Switch Thumb2 IT block to MI bundles and delete the hazard recognizer hack to
prevent IT blocks from being broken apart.
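A minimal sketch of that finalize step, assuming the helper is the
finalizeBundle() function declared in llvm/CodeGen/MachineInstrBundle.h (as in
later revisions); the bundleRange wrapper name is made up for illustration:

#include "llvm/CodeGen/MachineBasicBlock.h"
#include "llvm/CodeGen/MachineInstrBundle.h"

using namespace llvm;

// Wrap [FirstMI, LastMI) into one bundle: finalizeBundle() inserts a BUNDLE
// instruction in front of FirstMI and computes the register def and use
// lists of that BUNDLE instruction from its members.
static void bundleRange(MachineBasicBlock &MBB,
                        MachineBasicBlock::instr_iterator FirstMI,
                        MachineBasicBlock::instr_iterator LastMI) {
  finalizeBundle(MBB, FirstMI, LastMI);
}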
llvm-svn: 146542
generator to it. For non-bundle instructions, these behave exactly the same
as the MC layer API.
For properties like mayLoad / mayStore, look into the bundle: if any of the
bundled instructions has the property, return true.
For properties like isPredicable, only return true if *all* of the bundled
instructions have the property.
For properties like canFoldAsLoad and isCompare, conservatively return false
for bundles.
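A rough sketch of those three policies, assuming the bundled instructions have
already been collected by the caller; the real API answers these queries from
the BUNDLE instruction itself, so the helper names and the ArrayRef parameter
are purely illustrative:

#include "llvm/ADT/ArrayRef.h"
#include "llvm/CodeGen/MachineInstr.h"

using namespace llvm;

// "Any of" semantics: the bundle may load if any member may load.
static bool bundleMayLoad(ArrayRef<const MachineInstr *> Bundle) {
  for (const MachineInstr *MI : Bundle)
    if (MI->getDesc().mayLoad())
      return true;
  return false;
}

// "All of" semantics: the bundle is predicable only if every member is.
static bool bundleIsPredicable(ArrayRef<const MachineInstr *> Bundle) {
  for (const MachineInstr *MI : Bundle)
    if (!MI->getDesc().isPredicable())
      return false;
  return true;
}

// Conservative semantics: properties like canFoldAsLoad are denied outright.
static bool bundleCanFoldAsLoad(ArrayRef<const MachineInstr *>) {
  return false;
}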
llvm-svn: 146026
Like V_SET0, these instructions are expanded by ExpandPostRA to xorps /
vxorps so they can participate in execution domain swizzling.
This also makes the AVX variants redundant.
llvm-svn: 145440
This was a bug in keeping track of the available domains when merging
domain values.
The wrong domain mask caused ExecutionDepsFix to try to move VANDPSYrr
to the integer domain which is only available in AVX2.
Also add an assertion to catch future attempts at emitting AVX2
instructions.
llvm-svn: 145096
Two new TargetInstrInfo hooks let the target tell ExecutionDepsFix
about instructions whose partial register updates cause unwanted false
dependencies.
The ExecutionDepsFix pass will break the false dependencies if the
updated register was written in the previous N instructions.
The small loop added to sse-domains.ll runs twice as fast with
dependency-breaking instructions inserted.
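If memory serves, the two hooks are getPartialRegUpdateClearance and
breakPartialRegDependency; the declarations below follow their shape in later
LLVM revisions and are a sketch, not a copy of the r144602 signatures:

#include "llvm/CodeGen/MachineInstr.h"
#include "llvm/CodeGen/TargetRegisterInfo.h"

// In the tree these are virtual members of TargetInstrInfo overridden by
// X86InstrInfo; a plain struct keeps the sketch self-contained.
struct PartialRegUpdateHooks {
  // How many instructions of "clearance" must separate the last write of the
  // register from the partial update at operand OpNum; returning 0 means the
  // instruction never needs a dependency break.
  unsigned getPartialRegUpdateClearance(const llvm::MachineInstr &MI,
                                        unsigned OpNum,
                                        const llvm::TargetRegisterInfo *TRI) const;

  // Invoked by ExecutionDepsFix when the clearance is not met: rewrite or
  // prefix MI (on X86, typically with an xorps of the destination) so the
  // false dependency on the register's old value disappears.
  void breakPartialRegDependency(llvm::MachineInstr &MI, unsigned OpNum,
                                 const llvm::TargetRegisterInfo *TRI) const;
};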
llvm-svn: 144602
The xorps instruction is smaller than pxor, so prefer that encoding.
The ExecutionDepsFix pass will switch the encoding to pxor and xorpd
when appropriate.
llvm-svn: 143996
In 64-bit mode, sub_8bit_hi sub-registers can only be used by NOREX
instructions. The COPY created from the EXTRACT_SUBREG DAG node cannot
target all GR8 registers, only those in GR8_NOREX.
To enforce this, we ensure that all instructions using the
EXTRACT_SUBREG are constrained to GR8_NOREX.
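Not the commit's actual code, just an illustration of how a virtual register
is typically pinned to a _NOREX class through
MachineRegisterInfo::constrainRegClass:

#include "llvm/CodeGen/MachineRegisterInfo.h"
#include "X86RegisterInfo.h" // target-internal; declares X86::GR8_NOREXRegClass

// Narrow VReg's register class to the common subclass with GR8_NOREX; the
// call returns null when the classes are incompatible (not checked here).
static void constrainToGR8NoRex(llvm::MachineRegisterInfo &MRI, unsigned VReg) {
  MRI.constrainRegClass(VReg, &llvm::X86::GR8_NOREXRegClass);
}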
This fixes PR11088.
llvm-svn: 141499
This instruction is explicitly encoded without a REX prefix, so both
operands must be *_NOREX.
Also add an assertion to copyPhysReg() that fires when the MOV8rr_NOREX
constraints are not satisfied.
This fixes a miscompilation in 20040709-2 in the gcc test suite.
llvm-svn: 141410
I am going to unify the SSEDomainFix and NEONMoveFix passes into a
single target independent pass. They are essentially doing the same
thing.
llvm-svn: 140652
We already support GR64 <-> VR128 copies. All of these copies break
partial register dependencies by zeroing the high part of the target
register.
llvm-svn: 140348
alignment check for 256-bit classes more strict. There are no testcases,
but we catch more folding cases for AVX while running the single- and
multi-source tests in the llvm test-suite.
Since some 128-bit AVX instructions have different number of operands
than their SSE counterparts, they are placed in different tables.
256-bit AVX instructions should also be added to the tables soon, and
there are a few more 128-bit versions to handle, which should come in
the following commits.
llvm-svn: 139687
single field (Flags), which is a bitwise OR of items from the TB_*
enum. This makes it easier to add new information in the future.
* Gives every static array an equivalent layout: { RegOp, MemOp, Flags }
* Adds a helper function, AddTableEntry, to avoid duplication of the
insertion code (see the sketch after this list).
* Renames TB_NOT_REVERSABLE to TB_NO_REVERSE.
* Adds TB_NO_FORWARD, which is analogous to TB_NO_REVERSE, except that
it prevents addition of the Reg->Mem entry. (This is going to be used
by Native Client, in the next CL).
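A sketch of the layout and helper described above; the TB_* values, map types,
and struct name are illustrative rather than copied from X86InstrInfo.cpp:

#include <cstdint>
#include <map>
#include <utility>

enum {
  TB_NO_FORWARD = 1 << 0, // suppress the Reg->Mem (folding) entry
  TB_NO_REVERSE = 1 << 1, // suppress the Mem->Reg (unfolding) entry
  // ... alignment and load/store-kind bits would also live here ...
};

struct FoldTableEntry {
  uint16_t RegOp; // register-form opcode
  uint16_t MemOp; // memory-form opcode
  uint32_t Flags; // bitwise OR of TB_* items
};

typedef std::map<unsigned, std::pair<unsigned, unsigned> > FoldMap;

// One insertion point that decides, from Flags, which direction maps get an
// entry - the duplication the AddTableEntry helper removes.
static void AddTableEntry(FoldMap &RegToMem, FoldMap &MemToReg,
                          const FoldTableEntry &E) {
  if (!(E.Flags & TB_NO_FORWARD))
    RegToMem[E.RegOp] = std::make_pair(E.MemOp, E.Flags);
  if (!(E.Flags & TB_NO_REVERSE))
    MemToReg[E.MemOp] = std::make_pair(E.RegOp, E.Flags);
}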
Patch by David Meyer
llvm-svn: 139311
sink them into MC layer.
- Added MCInstrInfo, which captures the tablegen generated static data. Changed
TargetInstrInfo so it's based off MCInstrInfo.
llvm-svn: 134021
we try to branch to them.
Before we were creating successor lists with duplicated entries. Fixing that
found a bug in isBlockOnlyReachableByFallthrough that would cause it to
return the wrong answer for
-----------
...
jne foo
jmp bar
foo:
----------
llvm-svn: 132882
Add TargetRegisterInfo::hasSubClassEq and use it to check for compatible
register classes instead of trying to list all register classes in
X86's getLoadStoreRegOpcode.
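A sketch of the resulting check, modeled on how X86's getLoadStoreRegOpcode
reads in later revisions; the helper name and the single GR32 case are
illustrative:

#include "X86InstrInfo.h" // target-internal; the opcode and register-class enums come from TableGen

// hasSubClassEq(RC) is true for GR32 itself and for every subclass of it
// (GR32_NOREX, GR32_ABCD, ...), so nothing has to be enumerated by hand.
static unsigned pickGR32Opcode(const llvm::TargetRegisterClass *RC, bool Load) {
  if (llvm::X86::GR32RegClass.hasSubClassEq(RC))
    return Load ? llvm::X86::MOV32rm : llvm::X86::MOV32mr;
  return 0; // caller falls through to the other size/class checks
}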
llvm-svn: 132398
after folding ADD32ri to ADD32mi, so don't do that.
This only happens when the greedy register allocator gets itself in trouble and
spills %vreg9 here:
16L %vreg9<def> = MOVPC32r 0, %ESP<imp-use>; GR32:%vreg9
48L %vreg9<def> = ADD32ri %vreg9, <es:_GLOBAL_OFFSET_TABLE_>[TF=1], %EFLAGS<imp-def,dead>; GR32:%vreg9
That should never happen, the live range should be split instead.
llvm-svn: 130625
Now that we have a first-class way to represent unaligned loads, the unaligned
load intrinsics are superfluous.
First part of <rdar://problem/8460511>.
llvm-svn: 129401
regs. This is the only change in this checkin that may affect the
default scheduler. With better register tracking and heuristics, it
doesn't make sense to artificially lower the register limit so much.
Added -sched-high-latency-cycles and X86InstrInfo::isHighLatencyDef to
give the scheduler a way to account for div and sqrt on targets that
don't have an itinerary. It currently defaults to 10 (the actual
number doesn't matter much), but only takes effect on non-default
schedulers: list-hybrid and list-ilp.
Added several heuristics that can be individually disabled for the
non-default sched=list-ilp mode. This helps us determine how much
better we can do on a given benchmark than the default
scheduler. Certain compute intensive loops run much faster in this
mode with the right set of heuristics, and it doesn't seem to have
much negative impact elsewhere. Not all of the heuristics are needed,
but we still need to experiment to decide which should be disabled by
default for sched=list-ilp.
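A sketch of the two pieces named above; the flag name and its default of 10
come from the text, while the cl::opt variable name, the wrapper struct, and
the opcode list are guesses for illustration:

#include "llvm/Support/CommandLine.h"
#include "X86InstrInfo.h" // target-internal; supplies the X86::* opcode enum

// Scheduler-side knob: cycles charged to a "high latency" def on targets that
// provide no itinerary.
static llvm::cl::opt<unsigned> HighLatencyCycles(
    "sched-high-latency-cycles", llvm::cl::Hidden, llvm::cl::init(10),
    llvm::cl::desc("Cycles assumed for high-latency defs with no itinerary"));

// Target-side hook, shaped like X86InstrInfo::isHighLatencyDef: answer true
// for expensive defs (divide, square root) so list-hybrid/list-ilp keep them
// away from their users. Illustrative subset only.
struct HighLatencyQuery {
  static bool isHighLatencyDef(int Opc) {
    switch (Opc) {
    case llvm::X86::DIVSSrr:
    case llvm::X86::DIVSDrr:
    case llvm::X86::SQRTSSr:
    case llvm::X86::SQRTSDr:
      return true;
    default:
      return false;
    }
  }
};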
llvm-svn: 127067
"long latency" enough to hoist even if it may increase spilling. Reloading
a value from a spill slot is often cheaper than performing an expensive
computation in the loop. For X86, that means machine LICM will hoist
SQRT, DIV, etc. ARM will be somewhat aggressive with VFP and NEON
instructions.
- Enable register pressure aware machine LICM by default.
llvm-svn: 116781
The reg-reg copies were no longer being generated since copyPhysReg copies
physical registers only.
The loads and stores are not necessary - the TC constraint is imposed by the
TAILJMP and TCRETURN instructions, so there should be no need for constrained
loads and stores.
llvm-svn: 116314
reapply: reimplement the second half of the or/add optimization. We should now
with no changes. Turns out that one missing "Defs = [EFLAGS]" can upset things
a bit.
llvm-svn: 116040
only end up emitting LEA instead of OR. If we aren't able to promote
something into an LEA, we should never be emitting it as an ADD.
Add some testcases checking that we emit "or" in cases where we used to
produce an "add".
llvm-svn: 116026
is general goodness because it allows ORs to be converted to LEA to avoid
inserting copies. However, this is bad because it makes the generated .s
file less obvious and gives valgrind heartburn (tons of false positives in
bitfield code).
While the general fix should be in valgrind, we can at least try to avoid
emitting ADD instructions that *don't* get promoted to LEA. This is more
work because it requires introducing pseudo instructions to represent
"add that knows the bits are disjoint", but hey, people really love valgrind.
This fixes this testcase:
https://bugs.kde.org/show_bug.cgi?id=242137#c20
the add r/i cases are coming next.
llvm-svn: 116007
operands.
With this done, we can remove the _Int suffixes from the round instructions
without the disassembler blowing up. This allows the assembler to support
them, implementing rdar://8456376 - llvm-mc rejects 'roundss'
llvm-svn: 115019
- Make foldMemoryOperandImpl aware of 256-bit zero vectors folding and support the 128-bit counterparts of AVX too.
- Make sure MOV[AU]PS instructions are only selected when SSE1 is enabled, and duplicate the patterns to match AVX.
- Add a testcase for a simple 128-bit zero vector creation.
llvm-svn: 110946
When a register is defined by a partial load:
%reg1234:sub_32 = MOV32mr <fi#-1>; GR64:%reg1234
That load cannot be folded into an instruction using the full 64-bit register.
It would become a 64-bit load.
This is related to the recent change to have isLoadFromStackSlot return false on
a sub-register load.
llvm-svn: 110874
We do sometimes load from a too small stack slot when dealing with x86 arguments
(varargs and smaller-than-32-bit args). It looks like we know what we are doing
in those cases, so I am going to remove the assert instead of artificially
enlarging stack slot sizes.
The assert in storeRegToStackSlot stays in. We don't want to write beyond the
bounds of a stack slot.
llvm-svn: 109764
subregister operands like this:
%reg1040:sub_32bit<def> = MOV32rm <fi#-2>, 1, %reg0, 0, %reg0, %reg1040<imp-def>; mem:LD4[FixedStack-2](align=8)
Make them return false when subreg operands are present. VirtRegRewriter is
making bad assumptions otherwise.
This fixes PR7713.
llvm-svn: 109489
rip out the implementation of X86InstrInfo::GetInstSizeInBytes.
The code being ripped out just implemented a copied and hacked-up
version of the (old) instruction encoder, and is buggy and
terrible in other ways. Since "GetInstSizeInBytes" is really
only there to support the JIT's "NeedsExactSize" hook (which
no one is using), just rip out the code. I will rip out the
NeedsExactSize hook next.
This resolves rdar://7617809 - switch X86InstrInfo::GetInstSizeInBytes to use X86MCCodeEmitter
llvm-svn: 109149