match splats in the form (splat (scalar_to_vector (load ...))) whenever
the load can be folded. All the logic and instruction emission is
working, but because of PR8156 there is no way to match loads, since
they can never be folded for splats. Thus, the tests are XFAILed, but
I've tested and exercised all the logic using a relaxed version for
checking the foldable loads, as if the bug was already fixed. This
should work out of the box once PR8156 gets fixed since MayFoldLoad will
work as expected.
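For reference, the splat shape being matched corresponds to IR along these
lines (illustrative):
%ld = load float* %p
%v = insertelement <4 x float> undef, float %ld, i32 0
%sp = shufflevector <4 x float> %v, <4 x float> undef, <4 x i32> zeroinitializer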
llvm-svn: 137810
vinsertf128 $1 + vpermilps $0, remove the old code that used to first
do the splat in a 128-bit vector and then insert it into a larger one.
This is better because the handling code gets simpler and it also leaves
more room for the upcoming vbroadcast!
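The splat is now emitted as a two-instruction sequence of the form
(registers illustrative):
vinsertf128 $1, %xmm0, %ymm0, %ymm0
vpermilps $0, %ymm0, %ymm0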
llvm-svn: 137807
making random bad assumptions about instructions which are not explicitly listed.
Includes fix for rdar://9956541, a version of "undef ^ undef should return
0 because it's easier than arguing with users".
llvm-svn: 137777
Thumb1 requires that many arithmetic instruction forms have an 'S'
suffix. For Thumb2, whether the suffix is required or precluded depends
on whether the instruction is in an IT block. Use target parser predicates
to check for these sorts of context-sensitive constraints.
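For example (registers illustrative):
adds r0, r0, r1    @ Thumb1: the S suffix is required
it eq
addeq r0, r0, r1   @ Thumb2, inside an IT block: the S suffix is precluded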
llvm-svn: 137746
there is no support for native 256-bit shuffles, be smarter in some
cases, for example, when you can extract specific 128-bit parts and use
regular 128-bit shuffles for them. Example:
For this shuffle:
shufflevector <4 x i64> %a, <4 x i64> %b, <4 x i32>
<i32 1, i32 0, i32 7, i32 6>
This was expanded to:
vextractf128 $1, %ymm1, %xmm2
vpextrq $0, %xmm2, %rax
vmovd %rax, %xmm1
vpextrq $1, %xmm2, %rax
vmovd %rax, %xmm2
vpunpcklqdq %xmm1, %xmm2, %xmm1
vpextrq $0, %xmm0, %rax
vmovd %rax, %xmm2
vpextrq $1, %xmm0, %rax
vmovd %rax, %xmm0
vpunpcklqdq %xmm2, %xmm0, %xmm0
vinsertf128 $1, %xmm1, %ymm0, %ymm0
ret
Now we get:
vshufpd $1, %xmm0, %xmm0, %xmm0
vextractf128 $1, %ymm1, %xmm1
vshufpd $1, %xmm1, %xmm1, %xmm1
vinsertf128 $1, %xmm1, %ymm0, %ymm0
llvm-svn: 137733
Mips1 does not support double precision loads or stores, therefore two single
precision loads or stores must be used in place of these instructions. This
patch treats double precision loads and stores as if they are legal
instructions until MCInstLowering, instead of generating the single precision
instructions during instruction selection or Prolog/Epilog code insertion.
Without the changes made in this patch, llc produces code that has the same
problem described in r137484 or bails out when
MipsInstrInfo::storeRegToStackSlot or loadRegFromStackSlot is called before
register allocation.
llvm-svn: 137711
This commit includes a mention of the landingpad instruction, but it's not
changing the behavior around it. I think the current behavior is correct,
though. Bill, can you double-check that?
llvm-svn: 137691
Apparently we never added code to expand these pseudo instructions, and in
over a year, no one has noticed. Our register allocator must be awesome!
llvm-svn: 137551
of the instruction.
Note that this change affects the existing non-atomic load and store
instructions; the parser now accepts both forms, and the change is noted
in the release notes.
llvm-svn: 137527
vectors. It operates on 128-bit elements instead of regular scalar
types. Recognize shuffles that are suitable for VPERM2F128 and teach
the x86 legalizer how to handle them.
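For example, this lane-swapping shuffle (illustrative) maps to a single
VPERM2F128:
shufflevector <4 x i64> %a, <4 x i64> %b, <4 x i32> <i32 2, i32 3, i32 4, i32 5>
vperm2f128 $33, %ymm1, %ymm0, %ymm0  # dest.lo = %a.hi, dest.hi = %b.lo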
llvm-svn: 137519
the retains and releases all use the same SSA pointer value.
Also, don't let CFG hazards disrupt nested retain+release pair
optimizations.
llvm-svn: 137399
SCEV unrolling can unroll loops with arbitrary induction variables. It
is a prerequisite for -disable-iv-rewrite performance. It also
easily handles loops of arbitrary structure, including multiple exits,
and is generally more robust.
This is under a temporary option to avoid affecting default
behavior for the next couple of weeks. It is needed so that I can
check in unit tests for updateUnloop.
llvm-svn: 137384
(for example, after an integer operation), do not pack the registers into a YMM
before saving. It's better to save them as two XMM registers.
Before:
vinsertf128 $1, %xmm3, %ymm0, %ymm3
vinsertf128 $0, %xmm1, %ymm3, %ymm1
vmovaps %ymm1, 416(%rsp)
After:
vmovaps %xmm3, 416+16(%rsp)
vmovaps %xmm1, 416(%rsp)
llvm-svn: 137308
data in-register prior to saving it to memory. When we reorder the data in
memory we avoid the need to store multiple scalars and can emit a single
regular store instead.
llvm-svn: 137238
def : Pat<(X86Movss VR128:$src1,
(bc_v4i32 (v2i64 (load addr:$src2)))),
(MOVLPSrm VR128:$src1, addr:$src2)>;
This matches a MOVSS dag node with a MOVLPS instruction. However, MOVSS replaces only the low 32 bits of the register, while the MOVLPS instruction replaces the low 64 bits. A testcase that illustrates the bug is added, and the one that was already present is modified accordingly. Patch by Tanya Lattner.
llvm-svn: 137227
These are not individual bug fixes. I had to rewrite a good chunk of
the unroller to make it sane. I think it was getting lucky on trivial
completely unrolled loops with no early exits. I included some fairly
simple unit tests for partial unrolling. I didn't do much stress
testing, so it may not be perfect, but should be usable now.
llvm-svn: 137190
This new disassembler can correctly decode all the testcases that the old one did, though
some "expected failure" testcases are XFAIL'd for now because it is not (yet) as strict in
operand checking as the old one was.
llvm-svn: 137144
Coalescing can remove copy-like instructions with sub-register operands
that constrained the register class. Examples are:
x86: GR32_ABCD:sub_8bit_hi -> GR32
arm: DPR_VFP2:ssub0 -> DPR
Recompute the register class of any virtual registers that are used by
fewer instructions after coalescing.
This affects code generation for the Cortex-A8 where we use NEON
instructions for f32 operations, c.f. fp_convert.ll:
vadd.f32 d16, d1, d0
vcvt.s32.f32 d0, d16
The register allocator is now free to use d16 for the temporary, and
that comes first in the allocation order because it doesn't interfere
with any s-registers.
llvm-svn: 137133
X86FloatingPoint keeps track of pending ST registers for an upcoming
inline asm instruction with fixed stack register constraints. It does
this by remembering which FP register holds the value that should appear
at a fixed stack position for the inline asm.
When that FP register is killed before the inline asm, make sure to
duplicate it to a scratch register, so the ST register still has a live
FP reference.
This could happen when the same FP register was copied to two ST
registers, or when a spill instruction is inserted between the ST copy
and the inline asm.
This fixes PR10602.
llvm-svn: 137050
recurrence, the initial value's low bits can sometimes be ignored.
To take advantage of this, added FoldIVUser to IndVarSimplify to fold
an IV operand into a udiv/lshr if the operator doesn't affect the
result.
-indvars -disable-iv-rewrite now transforms
i = phi i4
i1 = i0 + 1
idx = i1 >> (2 or more)
i4 = i + 4
into
i = phi i4
idx = i0 >> ...
i4 = i + 4
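(Illustrative: because the IV steps by 4, its low two bits never change; if it
starts at 0 they are always 00, so adding 1 cannot carry into bit 2 and
(i0 + 1) >> 2 equals i0 >> 2.)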
llvm-svn: 137013
More parsing support for indexed loads. Fix pre-indexed with writeback
parsing for register offsets and handle basic post-indexed offsets.
llvm-svn: 136982
Memory operand parsing is a bit haphazard at the moment, in no small part
due to the even more haphazard representations of memory operands in the .td
files. Start cleaning that all up, at least a bit.
The addressing modes in the .td files will be simplified to not be
so monolithic, especially with regards to immediate vs. register offsets
and post-indexed addressing. addrmode3 is on its way with this patch, for
example.
This patch is foundational to enable going back to smaller incremental patches
for the individual memory referencing instructions themselves. It does just
enough to get the basics in place and handle the "make check" regression tests
we already have.
Follow-up work will be fleshing out the details and adding more robust test
cases for the individual instructions, starting with ARM mode and moving from
there into Thumb and Thumb2.
llvm-svn: 136845
Don't replace a gep/bitcast with 'undef' because that will form a "free(undef)"
which in turn means "unreachable". What we wanted was a no-op. Instead, analyze
the whole tree and look for all the instructions we need to delete first, then
delete them second, not relying on the use_list to stay consistent.
llvm-svn: 136752
avoid returning early for v8i32 types, which would only be valid for
vectors of all zeros. Also split the handling of zeros and ones into separate
checking logic since they are handled differently. This fixes PR10547.
llvm-svn: 136642
This includes registers like EFLAGS and ST0-ST7. We don't check for
liveness issues in the verifier and scavenger because registers will
never be allocated from these classes.
While in SSA form, we do care about the liveness of unallocatable
unreserved registers. Liveness of EFLAGS and ST0 needs to be correct for
MachineDCE and MachineSinking.
llvm-svn: 136541
Fix the instruction encoding for operands. Refactor the mode to use explicit
instruction definitions, per the FIXME, to be more consistent with loads/stores.
Fix disassembler accordingly. Add tests.
llvm-svn: 136509
Fill in the missing fixed bits and the register operand bits of the instruction
encoding. Refactor the definition to make the mode explicit, which is
consistent with how loads and stores are normally represented and makes
parsing much easier. Add parsing aliases for pseudo-instruction variants.
Update the disassembler for the new representations. Add tests for parsing and
encoding.
llvm-svn: 136479
This hidden llc option runs the machine code verifier after expanding
ARM pseudo-instructions, but before if-conversion.
The machine code verifier is much better at pointing out liveness errors
that can trip up the register scavenger.
llvm-svn: 136439
Add parsing support for BLX (immediate). Since the register operand version is
predicated and the label operand version is not, we have to use some special
handling to get the operand list right for matching.
llvm-svn: 136406
Code like that would only be produced by bugpoint, but we should still
handle it correctly.
When a register is defined by a REG_SEQUENCE of undefs, the register
itself is undef. Previously, we would create a register with uses but no
defs.
Fixes part of PR10520.
llvm-svn: 136401
Add parsing support that handles converting the lsb+width source into the
odd way we represent the instruction (an inverted bitfield mask).
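(A worked instance: lsb=4 and width=8 are stored as the inverted mask
~(0xff << 4) = 0xfffff00f.)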
llvm-svn: 136399
This can happen in cases where TableGen generated asm matcher cannot check
whether a register operand is in the right register class. e.g. mem operands.
rdar://8204588
llvm-svn: 136292
llvm-mc gives an "invalid operand" error for instructions that take an unsigned
immediate which have the high bit set such as:
pblendw $0xc5, %xmm2, %xmm1
llvm-mc treats all x86 immediates as signed values and range checks them.
A small number of x86 instructions use the imm8 field as a set of bits.
This change affects only those instructions, and only where the high bit is
not ignored. The others remain unchanged.
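(As a worked check: 0xc5 is 197, which falls outside the signed imm8 range
[-128, 127], hence the spurious diagnostic.)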
llvm-svn: 136287
Encode the width operand as it encodes in the instruction, which simplifies
the disassembler and the encoder, by using the imm1_32 operand def. Add a
diagnostic for the context-sensitive constraint that the width must be in
the range [1,32-lsb].
llvm-svn: 136264
usage of the shuffle bitmask. Both work in 128-bit lanes without
crossing, but in the former the mask of the high part is the same
as that used by the low part, while in the latter both lanes have
independent masks. Handle this properly and add support for vpermilpd.
llvm-svn: 136200
These copies would coalesce easily, but the resulting value would be
defined by a deleted instruction. Now we also remove the undefined value
number from the destination register.
This fixes PR10503.
llvm-svn: 136174
On x86 we can't encode an immediate LHS of a sub directly. If the RHS comes from a XOR with a constant we can
fold the negation into the xor and add one to the immediate of the sub. Then we can turn the sub into an add,
which can be commuted and encoded efficiently.
This code is generated for __builtin_clz and friends.
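A worked instance with illustrative constants: since -(x ^ C) == (x ^ ~C) + 1,
31 - (x ^ 31)  ==>  (x ^ ~31) + 32
and the resulting add commutes and encodes directly.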
llvm-svn: 136167
shuffle before inserting on a 256-bit vector.
- Add AVX versions of movd/movq instructions
- Introduce a few COPY patterns to match insert_subvector instructions.
This turns a trivial insert_subvector instruction into a register copy,
coalescing the xmm into a ymm and avoiding the emission of one more instruction.
llvm-svn: 136002
Fix the Rn register encoding for both SSAT and USAT. Update the parsing of the
shift operand to correctly handle the allowed shift types and immediate ranges
and issue meaningful diagnostics when an illegal value or shift type is
specified. Add aliases to parse an omitted shift operand (default value of
'lsl #0').
Add tests for diagnostics and proper encoding.
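(For example, "ssat r0, #8, r1" is now accepted as "ssat r0, #8, r1, lsl #0".)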
llvm-svn: 135990
The .local, .hidden, .internal, and .protected are not legal for all supported
file formats (in particular, they're invalid for MachO). Move the parsing for
them into the ELF assembly parser since that's the format they're for.
Similarly, .weak is used by COFF and ELF, but not MachO, so move the parsing
to the COFF and ELF asm parsers. Previously, using any of these directives
on Darwin would result in an assertion failure in the parser; now we get
a diagnostic as we should.
rdar://9827089
llvm-svn: 135921
This fixes PR10463. A two-address instruction with an <undef> use
operand was incorrectly rewritten so the def and use no longer used the
same register, violating the tie constraint.
Fix this by always rewriting <undef> operands with the register a def
operand would use.
llvm-svn: 135885
and was actually very wrong, fix it and make it simpler. Also remove the
ConcatVectors function, which is unused now.
- Fix an introduction of useless nodes in r126664 and r126264. The
VUNPCKL* nodes should never be introduced, because we don't want duplicate
nodes for 128-bit AVX and non-AVX modes; the actual instruction
difference only exists during isel, but not for target-specific DAG
nodes. We only introduce V* target nodes when there is no 128-bit
version already there.
- Fix a fragile test and make it more useful.
llvm-svn: 135729
size but different element types, so that it filters out the cases
that CreateShuffleVectorCast doesn't handle. This fixes rdar://9786827.
llvm-svn: 135721
Aliases for LDM/STM. The single-register versions should encode to LDR/STR
with writeback, but we don't (yet) get that correct. Neither does Darwin's
system assembler, though, so that's not a deal-breaker of a limitation.
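(For example, "ldm r0!, {r1}" is semantically "ldr r1, [r0], #4".)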
llvm-svn: 135702
instruction introduced in AVX, which can operate on 128- and 256-bit vectors.
It considers a 256-bit vector as two independent 128-bit lanes. It can permute
any 32- or 64-bit elements inside a lane, and restricts the second lane to
have the same permutation as the first one. With the improved splat support
introduced earlier today, adding codegen for this instruction enables more
efficient 256-bit code:
Instead of:
vextractf128 $0, %ymm0, %xmm0
punpcklbw %xmm0, %xmm0
punpckhbw %xmm0, %xmm0
vinsertf128 $0, %xmm0, %ymm0, %ymm1
vinsertf128 $1, %xmm0, %ymm1, %ymm0
vextractf128 $1, %ymm0, %xmm1
shufps $1, %xmm1, %xmm1
movss %xmm1, 28(%rsp)
movss %xmm1, 24(%rsp)
movss %xmm1, 20(%rsp)
movss %xmm1, 16(%rsp)
vextractf128 $0, %ymm0, %xmm0
shufps $1, %xmm0, %xmm0
movss %xmm0, 12(%rsp)
movss %xmm0, 8(%rsp)
movss %xmm0, 4(%rsp)
movss %xmm0, (%rsp)
vmovaps (%rsp), %ymm0
We get:
vextractf128 $0, %ymm0, %xmm0
punpcklbw %xmm0, %xmm0
punpckhbw %xmm0, %xmm0
vinsertf128 $0, %xmm0, %ymm0, %ymm1
vinsertf128 $1, %xmm0, %ymm1, %ymm0
vpermilps $85, %ymm0, %ymm0
llvm-svn: 135662
The system register spec should be case insensitive. The preferred form for
output with mask values of 4, 8, and 12 references APSR rather than CPSR.
Update and tidy up tests accordingly.
llvm-svn: 135532
Correct the handling of the 's' suffix when parsing ARM mode. It's only a
truly separate opcode in Thumb. Add test cases to make sure we handle
the s and condition suffixes correctly, including diagnostics.
llvm-svn: 135513
Add range checking for the immediate operand and handle the "mov" mnemonic
choosing between encodings based on the value of the immediate. Add tests
for fixups, encoding choice and values, and diagnostic for out of range values.
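(Illustrative: "mov r0, #256" fits the rotated-immediate MOV encoding, while
"mov r0, #257" does not and should select MOVW on targets that have it.)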
llvm-svn: 135500
- In EmitAtomicBinaryPartword, mask incr in loopMBB only if atomic.swap is the
instruction being expanded, instead of masking it in thisMBB.
- Remove redundant Or in EmitAtomicCmpSwap.
llvm-svn: 135495
(including compilation, assembly). Move relocation model Reloc::Model from
TargetMachine to MCCodeGenInfo so it's accessible even without TargetMachine.
llvm-svn: 135468
For -disable-iv-rewrite, perform LFTR without generating a new
"canonical" induction variable. Instead find the "best" existing
induction variable for use in the loop exit test and compute the final
value of that IV for use in the new loop exit test. In short,
convert to a simple eq/ne exit test as long as it's cheap to do so.
llvm-svn: 135420
moving them out of the loop. Previously, stores and loads to a stack frame
object were inserted to accomplish this. Remove the code that was needed to do
this. Patch by Sasa Stankovic.
llvm-svn: 135415
When splitting a live range immediately before an LDR_POST instruction
that redefines the address register, make sure to use the correct value
number in leaveIntvBefore.
We need the value number entering the instruction.
<rdar://problem/9793765>
llvm-svn: 135413
1) Make non-legal 256-bit loads be promoted to v4i64. This lets us
canonicalize the loads and handle things the same way we used to handle
them for 128-bit registers. Despite what one of the removed comments
explained, the load promotion would not mess with VPERM; it's only a
matter of doing the appropriate bitcasts when this instruction comes
to be introduced. Also make LOAD v8i32 legal.
2) Doing 1) exposed two bugs:
- v4i64 was being promoted to itself for several opcodes (introduced
in r124447 by David Greene) causing endless recursion and the stack to
explode.
- there was no support for allOnes BUILD_VECTORs and ANDNP would fail to
match because it was generating early target constant pools during
lowering.
3) The testcases are already checked in; doing 1) exposed the
bugs in the current testcases.
4) Tidy up code to be more clear and explicit about AVX.
llvm-svn: 135313
when determining the validity of a matching constraint. Allow i1
types access to the GR8 reg class for x86.
Fixes PR10352 and rdar://9777108
llvm-svn: 135180
Flesh out the options supported for the instruction. Shuffle tests a bit and
add entries for the rest of the options. Add an alias to handle the default
operand of "sy".
llvm-svn: 135109
Keeping the instructions in alphabetical order, just like in the ARM ARM.
Adding FIXMEs for skipped instructions when adding tests out of order.
llvm-svn: 135060
Catch potential cascading errors on a malformed so_reg operand and bail after
the first error.
Add some tests for the diagnostics we do want.
llvm-svn: 135055
Now works for parsing register shifted register and register shifted
immediate arithmetic instructions, including the 'rrx' rotate with extend.
llvm-svn: 135049
if (x != 0) x = 1
if (x == 1) x = 1
Previous codegen looks like this:
mov r1, r0
cmp r1, #1
mov r0, #0
moveq r0, #1
The naive lowering selects between two different values. It should recognize
that the test is an equality test, so it's more a conditional move than a select:
cmp r0, #1
movne r0, #0
rdar://9758317
llvm-svn: 135017
Print shifted immediate values directly rather than as a payload+shifter
value pair. This makes for more readable output assembly code, simplifies
the instruction printer, and is consistent with how Thumb immediates are
displayed.
llvm-svn: 134902
and MCSubtargetInfo.
- Added methods to update subtarget features (used when targets automatically
detect subtarget features or switch modes).
- Teach X86Subtarget to update MCSubtargetInfo feature bits since the
MCSubtargetInfo layer can be shared with other modules.
- This fixes .code 16 / .code 32 support, since the mode switch is updated in
MCSubtargetInfo so the MC code emitter can do the right thing.
llvm-svn: 134884
patch brings numerous advantages to LLVM. One way to look at it
is through diffstat:
109 files changed, 3005 insertions(+), 5906 deletions(-)
Removing almost 3K lines of code is a good thing. Other advantages
include:
1. Value::getType() is a simple load that can be CSE'd, not a mutating
union-find operation.
2. Types are uniqued and never move once created, defining away PATypeHolder.
3. Structs can be "named" now, and their name is part of the identity that
uniques them. This means that the compiler doesn't merge them structurally
which makes the IR much less confusing.
4. Now that there is no way to get a cycle in a type graph without a named
struct type, "upreferences" go away.
5. Type refinement is completely gone, which should make LTO much MUCH faster
in some common cases with C++ code.
6. Types are now generally immutable, so we can use "Type *" instead of
"const Type *" everywhere.
Downsides of this patch are that it removes some functions from the C API,
so people using those will have to upgrade to the (not yet added) new API.
"LLVM 3.0" is the right time to do this.
There are still some cleanups pending after this, this patch is large enough
as-is.
llvm-svn: 134829
With Lit (not bash) running a test, multiple redirects with >%t may each open(%t, "w") and truncate the file. This can be avoided if the latter redirect is >>%t.
It might work even if ">/dev/null" were used.
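For example, with hypothetical RUN lines:
; RUN: foo %s > %t
; RUN: bar %s >> %t
both lines writing with >%t could otherwise clobber each other's output.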
llvm-svn: 134814
Try to move spills as early as possible in their basic block. This can
help eliminate interferences by shortening the live range being
spilled.
This fixes PR10221.
llvm-svn: 134776
The normal tBX instruction is predicable, so there's no reason the
pseudos for using it as a return shouldn't be. Gives us some nice code-gen
improvements as can be seen by the test changes. In particular, several
tests now have to disable if-conversion because it works too well and defeats
the test.
llvm-svn: 134746
RAGreedy::tryAssign will now evict interference from the preferred
register even when another register is free.
To support this, add the EvictionCost struct that counts how many hints
are broken by an eviction. We don't want to break one hint just to
satisfy another.
Rename canEvict to shouldEvict, and add the first bit of eviction policy
that doesn't depend on spill weights: Always make room in the preferred
register as long as the evictees can be split and aren't already
assigned to their preferred register.
Also make the CSR avoidance more accurate. When looking for a cheaper
register it is OK to use a new volatile register. Only CSR aliases that
have never been used before should be avoided.
llvm-svn: 134735
We have to do this in DAGBuilder instead of DAGCombiner, because the exact bit is lost after building.
struct foo { char x[24]; };
long bar(struct foo *a, struct foo *b) { return a-b; }
is now compiled into
movl 4(%esp), %eax
subl 8(%esp), %eax
sarl $3, %eax
imull $-1431655765, %eax, %eax
instead of
movl 4(%esp), %eax
subl 8(%esp), %eax
movl $715827883, %ecx
imull %ecx
movl %edx, %eax
shrl $31, %eax
sarl $2, %edx
addl %eax, %edx
movl %edx, %eax
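(Here -1431655765 is 0xaaaaaaab, the multiplicative inverse of 3 modulo 2^32;
it is usable because the exact pointer-difference division has no remainder:
sizeof(struct foo) is 24 = 8 * 3, so sarl $3 divides by 8 and the imull
divides by 3.)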
llvm-svn: 134695
It was testing a linear scan feature:
Test if linearscan is unfavoring registers for allocation to allow
more reuse of reloads from stack slots.
The greedy register allocator doesn't access any stack slots in this
function, so the linear scan feature was not being tested.
llvm-svn: 134666