for the NLP because the object it's pointing to may be internal to the file.
This seems counter-intuitive, but bear with me. When we place the LSDA into the
TEXT section, the type info pointers need to be indirect and pc-rel. We
accomplish this by using NLPs. However, sometimes the types are local to the
file. GCC gets around this by not using an NLP in this case, but a "regular"
indirection like this:
GCC_except_tbl:
.long Lfoo-.
__ZTIA: @ This is local
...
Lfoo:
.long __ZTIA
LLVM prefers NLPs on Darwin. In fact, they give better load performance.
llvm-svn: 98218
which doesn't support .quad correctly because it is
"really really old". PR6528.
Yet another reason the mc assembler should take over ;-)
llvm-svn: 98205
indicates that an MCSymbol is external or not. (It's true if it's external.)
This will be used to specify the correct information to add to non-lazy
pointers. That will be explained further when this bit is used.
llvm-svn: 98199
directly to the maccu / maccs instructions. We handle this in
ExpandADDSUB since after type legalisation it is messy to
recognise these operations.
llvm-svn: 98150
is preparatory to having PEI's scavenged frame index value reuse logic
properly distinguish types of frame values (e.g., whether the value is
stack-pointer relative or frame-pointer relative).
No functionality change.
llvm-svn: 98086
Make it so. (This patch is in LowerCall_Darwin, which seems
to be used by SVR4 code as well; since that doesn't belong here,
I haven't worried about this case.)
llvm-svn: 98077
register is involved for thumb1. Work around this for the moment by only
re-using SP-relative offsets. This is temporary 'til the code can distinguish
multiple base registers.
llvm-svn: 98071
Place the LSDA into the TEXT section for ARM platforms. This involves making the
encoding indirect, pcrel, and sdata4 instead of an absolute pointer. The
references to the type infos are then non-lazy pointers. Revision 98019 changed
the encoding of non-lazy pointers to add the symbol to the non-lazy pointer
definition if it's a local symbol (otherwise, it's external and set to '0' so
that the loader can adjust it to the real value). This paved the way for this
change to work on ARM.
llvm-svn: 98068
immediate instructions cannot set the condition codes, so they do not have
the extra cc_out operand. We hit an assertion during tail duplication
because the instruction being duplicated had more operands than expected.
llvm-svn: 98001
example, this:
(set DPR:$dst, (fsub (fneg (fmul DPR:$a, DPR:$b)), DPR:$dstin))
is ambiguous because DPR contains both f64 and v2f32. tblgen
currently accidentally picks f64 because it's first in the
regclass.
llvm-svn: 97955
The MicroBlaze backend was generating stack layouts that did not
conform correctly to the ABI. This update generates stack layouts
which are closer to what GCC does.
Variable arguments support was added as well but the stack layout
for varargs has not been finalized.
llvm-svn: 97807
This code:
float floatingPointComparison(float x, float y) {
double product = (double)x * y;
if (product == 0.0)
return product;
return product - 1.0;
}
produces this:
_floatingPointComparison:
0000000000000000 cvtss2sd %xmm1,%xmm1
0000000000000004 cvtss2sd %xmm0,%xmm0
0000000000000008 mulsd %xmm1,%xmm0
000000000000000c pxor %xmm1,%xmm1
0000000000000010 ucomisd %xmm1,%xmm0
0000000000000014 jne 0x00000004
0000000000000016 jp 0x00000002
0000000000000018 jmp 0x00000008
000000000000001a addsd 0x00000006(%rip),%xmm0
0000000000000022 cvtsd2ss %xmm0,%xmm0
0000000000000026 ret
The "jne/jp/jmp" sequence can be reduced to this instead:
_floatingPointComparison:
0000000000000000 cvtss2sd %xmm1,%xmm1
0000000000000004 cvtss2sd %xmm0,%xmm0
0000000000000008 mulsd %xmm1,%xmm0
000000000000000c pxor %xmm1,%xmm1
0000000000000010 ucomisd %xmm1,%xmm0
0000000000000014 jp 0x00000002
0000000000000016 je 0x00000008
0000000000000018 addsd 0x00000006(%rip),%xmm0
0000000000000020 cvtsd2ss %xmm0,%xmm0
0000000000000024 ret
for a savings of 2 bytes.
This xform can happen when we recognize that jne and jp jump to the same "true"
MBB, the unconditional jump would jump to the "false" MBB, and the "true" branch
is the fall-through MBB.
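For reference, a minimal standalone sketch (plain C++, not LLVM code) of the
flag logic both sequences implement; it assumes the usual ucomisd behavior,
where ZF is set on equal-or-unordered and PF only on unordered (NaN):
// FP equality after ucomisd: true only when ZF is set and PF is clear.
// Original:  jne -> not-equal; jp -> not-equal; jmp -> equal.
// Rewritten: jp -> not-equal; je -> equal; fall through -> not-equal.
bool fpEqual(bool ZF, bool PF) {
  return ZF && !PF;
}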
llvm-svn: 97766
an undef value. This is only going to come up for bugpoint-reduced tests --
correct programs will not access memory at undefined addresses -- so it's not
worth the effort of doing anything more aggressive.
llvm-svn: 97745
These instructions technically define AL and AH, but a trick in X86ISelDAGToDAG
reads AX in order to avoid reading AH with a REX instruction.
Fix PR6489.
llvm-svn: 97742
Instruction (PLI) for disassembly only.
According to A8.6.120 PLI (immediate, literal), for example, different
instructions are generated for "pli [pc, #0]" and "pli [pc, #-0]". The
disassembler solves it by mapping -0 (negative zero) to -1, -1 to -2, ..., etc.
llvm-svn: 97731
that they are not destination type specific. This allows
tblgen to factor them and the type check is redundant with
what the isel does anyway.
llvm-svn: 97629
- Eliminate TargetInstrInfo::isIdentical and replace it with produceSameValue. In the default case, produceSameValue just checks whether two machine instructions are identical (except for virtual register defs). But targets may override it to check for unusual cases (e.g. ARM PIC loads from constant pools).
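A rough sketch of what the generic default amounts to (hedged: this is not the
verbatim LLVM source, and it assumes MachineInstr::isIdenticalTo accepts a mode
that ignores virtual register definitions):
#include "llvm/Target/TargetInstrInfo.h"
#include "llvm/CodeGen/MachineInstr.h"
using namespace llvm;

bool TargetInstrInfo::produceSameValue(const MachineInstr *MI0,
                                       const MachineInstr *MI1) const {
  // "Identical except for virtual register defs": same opcode and operands,
  // but the defined virtual registers are not required to match.
  return MI0->isIdenticalTo(MI1, MachineInstr::IgnoreVRegDefs);
}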
llvm-svn: 97628
CopyToReg/CopyFromReg/INLINEASM. These are annoying because
they have the same opcode before and after isel. Fix this by
setting their NodeID to -1 to indicate that they are selected,
just like what automatically happens when selecting things that
end up being machine nodes.
With that done, give IsLegalToFold a new flag that causes it to
ignore chains. This lets the HandleMergeInputChains routine be
the one place that validates chains after a match is successful,
enabling the new hotness in chain processing. This smarter
chain processing eliminates the need for "PreprocessRMW" in the
X86 and MSP430 backends and enables MSP to start matching its
multiple mem operand instructions more aggressively.
I currently #if out the dead code in the X86 backend and MSP
backend; I'll remove it for real in a follow-on patch.
The testcase changes are:
test/CodeGen/X86/sse3.ll: we generate better code
test/CodeGen/X86/store_op_load_fold2.ll: PreprocessRMW was
miscompiling this before; we now generate correct code.
Convert it to filecheck while I'm at it.
test/CodeGen/MSP430/Inst16mm.ll: Add a testcase for mem/mem
folding to make anton happy. :)
llvm-svn: 97596
the opc string passed in, since it's a given from the class inheritance of T2sI.
This fixed the extra 's' in adcss & sbcss when printing disassembly.
llvm-svn: 97582
DoInstructionSelection. Inline "SelectRoot" into it from DAGISelHeader.
Sink some other stuff out of DAGISelHeader into SDISel.
Eliminate the various 'Indent' stuff from various targets, which dates
to when isel was recursive.
17 files changed, 114 insertions(+), 430 deletions(-)
llvm-svn: 97555
Extracting the low element of a vector is now done with EXTRACT_SUBREG,
and the zero-extension performed by load movss is now modeled with
SUBREG_TO_REG, and so on.
Register-to-register movss and movsd are no longer considered copies;
they are two-address instructions which insert a scalar into a vector.
llvm-svn: 97354
but codegen'd differently. This really wanted to use some
sort of subreg to get the low 4 bytes of the G8RC register
or something. However, it's invalid and nothing is testing
it, so I'm just zapping the bogosity.
llvm-svn: 97345
o Parallel addition and subtraction, signed/unsigned
o Miscellaneous operations: QADD, QDADD, QSUB, QDSUB
o Unsigned sum of absolute differences [and accumulate]: USAD8, USADA8
o Signed/Unsigned saturate: SSAT, SSAT16, USAT, USAT16
o Signed multiply accumulate long (halfwords): SMLAL<x><y>
o Signed multiply accumulate/subtract [long] (dual): SMLAD[x], SMLALD[X], SMLSD[X], SMLSLD[X]
o Signed dual multiply add/subtract [long]: SMUAD[X], SMUSD[X]
llvm-svn: 97276
This is possible because F8RC is a subclass of F4RC. We keep FMRSD around so
fextend has a pattern.
Also allow folding of memory operands on FMRSD.
llvm-svn: 97275
The PowerPC floating point registers can represent both f32 and f64 via the
two register classes F4RC and F8RC. F8RC is considered a subclass of F4RC to
allow cross-class coalescing. This coalescing only affects whether registers
are spilled as f32 or f64.
Spill slots must be accessed with load/store instructions corresponding to the
class of the spilled register. PPCInstrInfo::foldMemoryOperandImpl was looking
at the instruction opcode, which is wrong.
X86 has similar floating point register classes, but doesn't try to fold
memory operands, so there is no problem there.
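A hedged sketch of the idea behind the fix, choosing the spill opcode from the
spilled register's class rather than from the instruction being folded (this is
not the actual PPCInstrInfo code; the PPC:: identifiers follow that era's
generated naming and are assumptions here):
#include "PPCInstrInfo.h"   // assumed in-tree PPC backend header
using namespace llvm;

static unsigned getSpillStoreOpcode(const TargetRegisterClass *RC) {
  if (RC == PPC::F8RCRegisterClass)   // register holds an f64 value
    return PPC::STFD;
  if (RC == PPC::F4RCRegisterClass)   // register holds an f32 value
    return PPC::STFS;
  return 0;                           // other classes handled elsewhere
}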
llvm-svn: 97262
object construction. There is no provision to change them when the
code for a function is generated.
So we have to change these names while printing assembly.
llvm-svn: 97213
terms of store and load, which means bitcasting between scalar
integer and vector has endian-specific results, which undermines
this whole approach.
llvm-svn: 97137
(511*16) bytes register displacement (D-form).
NOTE: This is a potential headache, given the SPU's local store limitations,
allowing the software developer to commit stack overrun suicide unknowingly.
Also, large SPU stack frames will cause code size explosion. But, one presumes
that the software developer knows what they're doing...
Contributed by Kalle.Raiskila@nokia.com, edited slightly before commit.
llvm-svn: 97091
- Function uses all scratch registers AND
- Function does not use any callee saved registers AND
- Stack size is too big to address with immediate offsets.
In this case a register must be scavenged to calculate the address of a stack
object, and the scavenger needs a spare register or emergency spill slot.
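A hedged sketch of that decision as a predicate (names are illustrative, not
the actual ARM frame code):
bool needsEmergencySpillSlot(bool UsesAllScratchRegs,
                             bool UsesCalleeSavedRegs,
                             bool OffsetsFitInImmediates) {
  // All three conditions above must hold before a slot is reserved.
  return UsesAllScratchRegs && !UsesCalleeSavedRegs && !OffsetsFitInImmediates;
}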
llvm-svn: 97071
greater-than-or-equal SELECT_CCs to NEON vmin/vmax instructions. This is
only allowed when UnsafeFPMath is set or when at least one of the operands
is known to be nonzero.
llvm-svn: 97065
the number of value bits, not the number of bits of allocation for in-memory
storage.
Make getTypeStoreSize and getTypeAllocSize work consistently for arrays and
vectors.
Fix several places in CodeGen which compute offsets into in-memory vectors
to use TargetData information.
This fixes PR1784.
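A hedged illustration of the three size queries using the TargetData API of
this era (the layout string is only a sample and the numbers follow from it;
this is not code from the patch):
#include "llvm/LLVMContext.h"
#include "llvm/DerivedTypes.h"
#include "llvm/Target/TargetData.h"
using namespace llvm;

void showSizes() {
  LLVMContext Ctx;
  TargetData TD("e-p:64:64:64-i64:64:64");
  const Type *I36 = IntegerType::get(Ctx, 36);
  uint64_t ValueBits  = TD.getTypeSizeInBits(I36); // 36: bits of value
  uint64_t StoreBytes = TD.getTypeStoreSize(I36);  // 5: bytes a store touches
  uint64_t AllocBytes = TD.getTypeAllocSize(I36);  // 8: bytes per array element
  (void)ValueBits; (void)StoreBytes; (void)AllocBytes;
}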
llvm-svn: 97064
Adding the function "lookupGCCName" to the MBlazeIntrinsicInfo
class to support the Clang MicroBlaze target.
Additionally, minor fixes which remove some unused PIC code
(PIC is not supported yet in the MicroBlaze backend) and
remove some unused variables.
llvm-svn: 97054
necessary to swap the operands to handle NaN and negative zero properly.
Also, reintroduce logic for checking for NaN conditions when forming
SSE min and max instructions, fixed to take into consideration NaNs and
negative zeros. This allows forming min and max instructions in more
cases.
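For reference, a small standalone example (plain C++, not LLVM code) of why a
naive "x < y ? x : y" select behaves asymmetrically for NaN and negative zero,
which is exactly what the operand swap has to account for:
#include <cstdio>
#include <limits>

static double selectMin(double x, double y) { return x < y ? x : y; }

int main() {
  double nz = -0.0, pz = 0.0;
  double nan = std::numeric_limits<double>::quiet_NaN();
  // -0.0 and +0.0 compare equal, so the result depends on operand order:
  std::printf("%g %g\n", selectMin(nz, pz), selectMin(pz, nz));     // 0 -0
  // Comparisons with NaN are false, so the second operand is returned:
  std::printf("%g %g\n", selectMin(nan, 1.0), selectMin(1.0, nan)); // 1 nan
  return 0;
}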
llvm-svn: 97025
memory from three or four registers and VST2 (multiple two-element structures)
which stores to memory from two double-spaced registers.
A8.6.391 & A8.6.393
llvm-svn: 97018
three or four registers and VLD2 (multiple two-element structures) which loads
memory into two double-spaced registers.
A8.6.307 & A8.6.310
llvm-svn: 96980
The MicroBlaze is a highly configurable 32-bit soft-microprocessor for
use on Xilinx FPGAs. For more information see:
http://www.xilinx.com/tools/microblaze.htm
http://en.wikipedia.org/wiki/MicroBlaze
The current LLVM MicroBlaze backend generates assembly which can be
compiled using an appropriate binutils assembler.
llvm-svn: 96969
about ownership and update policies. It isn't clear why it is doing all
this lowering at isel time instead of in legalize. This fixes fcmp64.ll
llvm-svn: 96849
126.gcc nightly tests. These failures uncovered a latent bug in which machine
DCE could remove one half of a stack adjust down/up pair, causing PEI to assert.
This update fixes that, and the tests now pass.
llvm-svn: 96822
o signed/unsigned add/subtract
o signed/unsigned halving add/subtract
o unsigned sum of absolute difference [and accumulate]
o signed/unsigned saturate
o signed multiply accumulate/subtract [long] dual
llvm-svn: 96795
during a tail call. A parameter might overwrite this stack slot during the tail
call.
The sequence during a tail call is:
1.) load return address to temp reg
2.) move parameters (might involve storing to return address stack slot)
3.) store return address to new location from temp reg
If the stack location is marked immutable, CodeGen can colocate the load (1)
with the store (3).
This fixes bug 6225.
llvm-svn: 96783
SSE min and max instructions. The real thing this code needs to be
concerned about is negative zero.
Update the sse-minmax.ll test accordingly, and add tests for
-enable-unsafe-fp-math mode as well.
llvm-svn: 96775
create an X86ISD::Cmp node with result type i64 on the
CodeGen/X86/shift-i256.ll testcase and the new isel was asserting on it
downstream.
llvm-svn: 96768
it to follow the mode needed by the new isel. Instead of returning
the input and output chains, it just returns the (currently only one,
which is a silly limitation) node that has input and output chains.
Since we want the old thing to still work, add a new
SelectScalarSSELoad to emulate the old interface. The XXX suffix
and the wrapper will eventually go away.
llvm-svn: 96715
dragonegg self-host build. I reverted 96640 in order to revert
96556 (96640 goes on top of 96556), but it also looks like with
both of them applied the breakage happens even earlier. The
symptom of the 96556 miscompile is the following crash:
llvm[3]: Compiling AlphaISelLowering.cpp for Release build
cc1plus: /home/duncan/tmp/tmp/llvm/lib/CodeGen/SelectionDAG/SelectionDAG.cpp:4982: void llvm::SelectionDAG::ReplaceAllUsesWith(llvm::SDNode*, llvm::SDNode*, llvm::SelectionDAG::DAGUpdateListener*): Assertion `(!From->hasAnyUseOfValue(i) || From->getValueType(i) == To->getValueType(i)) && "Cannot use this version of ReplaceAllUsesWith!"' failed.
Stack dump:
0. Running pass 'X86 DAG->DAG Instruction Selection' on function '@_ZN4llvm19AlphaTargetLowering14LowerOperationENS_7SDValueERNS_12SelectionDAGE'
g++: Internal error: Aborted (program cc1plus)
This occurs when building LLVM using LLVM built by LLVM (via
dragonegg). Probably LLVM has miscompiled itself, though it
may have miscompiled GCC and/or dragonegg itself: at this point
of the self-host build, all of GCC, LLVM and dragonegg were built
using LLVM. Unfortunately this kind of thing is extremely hard
to debug, and while I did rummage around a bit I didn't find any
smoking guns, aka obviously miscompiled code.
Found by bisection.
r96556 | evancheng | 2010-02-18 03:13:50 +0100 (Thu, 18 Feb 2010) | 5 lines
Some dag combiner goodness:
Transform br (xor (x, y)) -> br (x != y)
Transform br (xor (xor (x,y), 1)) -> br (x == y)
Also normalize (and (X, 1)) == / != 1 -> (and (X, 1)) != / == 0 to match to "test on x86" and "tst on arm"
r96640 | evancheng | 2010-02-19 01:34:39 +0100 (Fri, 19 Feb 2010) | 16 lines
Transform (xor (setcc), (setcc)) == / != 1 to
(xor (setcc), (setcc)) != / == 1.
e.g. On x86_64
%0 = icmp eq i32 %x, 0
%1 = icmp eq i32 %y, 0
%2 = xor i1 %1, %0
br i1 %2, label %bb, label %return
=>
testl %edi, %edi
sete %al
testl %esi, %esi
sete %cl
cmpb %al, %cl
je LBB1_2
llvm-svn: 96672
for ARM to just check if a function has a FP to determine if it's safe
to simplify the stack adjustment pseudo ops prior to eliminating frame
indices. Allow targets to override the default behavior, and do so for ARM
and Thumb2.
llvm-svn: 96634
* Use "S" abbreviation for scalar single FP registers in class and pattern
names, instead of keeping the "D" (for "double") abbreviation and tacking on
an "s" elsewhere in the name.
* Move the scalar single FP register classes and patterns to be more
consistent with other definitions in the file.
* Rename "VNEGf32d" definition to "VNEGfd" for consistency.
* Deleted the N2VDIntsPat pattern; N2VSPat is good enough.
llvm-svn: 96521
This pass is supposed to be run on the linked .bc module.
It traverses the module call graph twice. Once starting from the main function
and marking each reached function as "ML". Again, starting from the ISR
and cloning any reachable function that was marked as "ML". After cloning
the function, it remaps all the call sites in IL functions to call the
cloned functions.
Currently only marking is being done.
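A hedged sketch of the first ("ML"-marking) traversal in the C++ API of this
era; the names are illustrative, only direct calls are followed, and the real
pass presumably uses the CallGraph analysis rather than a hand-rolled worklist:
#include "llvm/Module.h"
#include "llvm/Function.h"
#include "llvm/Instructions.h"
#include <set>
#include <vector>
using namespace llvm;

static void markReachable(Function *Root, std::set<Function*> &Marked) {
  std::vector<Function*> Worklist(1, Root);
  while (!Worklist.empty()) {
    Function *F = Worklist.back(); Worklist.pop_back();
    if (!F || F->isDeclaration() || !Marked.insert(F).second)
      continue;  // skip indirect calls, external declarations, already-seen
    for (Function::iterator BB = F->begin(), BE = F->end(); BB != BE; ++BB)
      for (BasicBlock::iterator I = BB->begin(), IE = BB->end(); I != IE; ++I)
        if (CallInst *CI = dyn_cast<CallInst>(&*I))
          Worklist.push_back(CI->getCalledFunction());
  }
}
The second traversal would start from the ISR and clone any function it reaches
that is already in the "ML" set, rewriting the call sites accordingly.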
llvm-svn: 96435
into a roundss intrinsic, producing a cyclic dag. The root cause
of this is badness in the old dagisel's handling of ComplexPattern nodes,
which I noticed through inspection. Eliminate a copy of the
code that handled ComplexPatterns by making EmitChildMatchCode call
into EmitMatchCode.
llvm-svn: 96408
If there exists a use of a build_vector that's the bitwise complement of the mask,
then transform the node to
(and (xor x, (build_vector -1,-1,-1,-1)), (build_vector ~c1,~c2,~c3,~c4)).
Since this transformation is only useful when 1) the given build_vector will
become a load from constpool, and 2) (and (xor x -1), y) matches to a single
instruction, I decided this is appropriate as an x86-specific transformation.
rdar://7323335
llvm-svn: 96389
non-temporal. Fix from r96241 for botched encoding of MOVNTDQ.
Add documentation for !nontemporal metadata.
Add a simpler movnt testcase.
llvm-svn: 96386
to have the predicate on the pattern itself instead. Support for the new
ISel. Remove definitions of CarryDefIsUnused and CarryDefIsUsed since they are
no longer used anywhere.
llvm-svn: 96384
They won't work with the new ISel mechanism, as Requires predicates are no
longer allowed to reference the node being selected. Moving the predicate to
the patterns instead solves the problem.
This patch handles ARM mode. Thumb2 will follow.
llvm-svn: 96381
branch in ARM v4 code, since it gets clobbered by the return address before
it is used. Instead of adding a new register class containing all the GPRs
except LR, just use the existing tGPR class.
llvm-svn: 96360
IsLegalToFold and IsProfitableToFold. The generic version of the latter simply checks whether the folding candidate has a single use.
This allows the target isel routines more flexibility in deciding whether folding makes sense. The specific case we are interested in is folding constant pool loads with multiple uses.
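A hedged sketch of the split (approximate signature, not verbatim LLVM source):
the generic profitability hook only checks for a single use, while legality is
a separate question, so a target can override profitability alone, e.g. to fold
a constant pool load that has several uses.
#include "llvm/CodeGen/SelectionDAGISel.h"
using namespace llvm;

bool SelectionDAGISel::IsProfitableToFold(SDValue N, SDNode *U,
                                          SDNode *Root) const {
  // Folding a node that has other users would duplicate its computation.
  return N.hasOneUse();
}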
llvm-svn: 96255