Move platform-independent code (lowering of possibly overwritten
arguments, check for tail call optimization eligibility) from the
target-specific X86ISelectionLowering.cpp to TargetLowering.h and
SelectionDAGISel.cpp.
Initial PowerPC tail call implementation:
ppc32 support is implemented and tested (passes my tests and the
llvm-test test-suite).
ppc64 support is implemented and half tested (passes my tests).
On ppc, tail call optimization is performed if:
* caller and callee are fastcc
* the call is a tail call (in tail call position: call followed by ret)
* there are no variable argument lists or byval arguments
* the -tailcallopt option is enabled
Supported:
* non-PIC tail calls on linux/darwin
* module-local tail calls on linux(PIC/GOT)/darwin(PIC)
* inter-module tail calls on darwin(PIC)
If constraints are not met a normal call will be emitted.
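As a standalone illustration of the eligibility test (the struct and
names below are hypothetical, not the actual TargetLowering API):

#include <cstdio>

// Hypothetical model of the conditions listed above; the real check
// operates on SelectionDAG call nodes, not a plain struct.
struct CallSite {
  bool CallerIsFastCC;   // caller uses the fastcc calling convention
  bool CalleeIsFastCC;   // callee uses the fastcc calling convention
  bool IsTailPosition;   // call immediately followed by ret
  bool HasVarArgs;       // variable argument list present
  bool HasByValArgs;     // byval arguments present
};

bool mayTailCall(const CallSite &CS, bool TailCallOptEnabled) {
  return TailCallOptEnabled &&                    // -tailcallopt given
         CS.CallerIsFastCC && CS.CalleeIsFastCC &&
         CS.IsTailPosition &&
         !CS.HasVarArgs && !CS.HasByValArgs;
}

int main() {
  CallSite CS = {true, true, true, false, false};
  std::printf("tail call: %s\n", mayTailCall(CS, true) ? "yes" : "no");
}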
A test checking the argument lowering behaviour on x86-64 was added.
llvm-svn: 50477
We now compile test2/test3 to:
_test2:
## InlineAsm Start
set %xmm0, %xmm1
## InlineAsm End
addps %xmm1, %xmm0
ret
_test3:
## InlineAsm Start
set %xmm0, %xmm1
## InlineAsm End
paddd %xmm1, %xmm0
ret
as expected.
llvm-svn: 50389
towards PR2094. It now compiles the attached .ll file to:
_sad16_sse2:
movslq %ecx, %rax
## InlineAsm Start
%ecx %rdx %rax %rax %r8d %rdx %rsi
## InlineAsm End
## InlineAsm Start
set %eax
## InlineAsm End
ret
which is pretty decent for a 3-output, 4-input asm.
llvm-svn: 50386
c1, f1 = CopyToReg
c2, f2 = CopyToReg
c3 = TokenFactor c1, c2
...
= user c3, ..., f2
Now the two CopyToReg's and the user are "flagged" together; they effectively form a single scheduling unit. The TokenFactor is now both an operand and a successor of the flagged nodes.
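As a rough standalone model of that grouping (node and edge
representations here are invented, not the real SDNode/SUnit classes),
nodes connected by flag edges collapse into one scheduling unit:

#include <cstdio>
#include <vector>

// Invented model: each node lists the nodes it is glued to by a flag
// edge; a scheduling unit is a connected group of glued nodes.
struct Node { std::vector<int> FlagSucc; int Unit = -1; };

void assignUnit(std::vector<Node> &G, int N, int Unit) {
  if (G[N].Unit != -1) return;
  G[N].Unit = Unit;
  for (int S : G[N].FlagSucc) assignUnit(G, S, Unit);
}

int main() {
  // 0: CopyToReg (c1,f1), 1: CopyToReg (c2,f2), 2: the user of f2
  std::vector<Node> G(3);
  G[0].FlagSucc = {1};   // f1 glues node 0 to node 1
  G[1].FlagSucc = {2};   // f2 glues node 1 to the user
  int Units = 0;
  for (int N = 0; N < 3; ++N)
    if (G[N].Unit == -1) assignUnit(G, N, Units++);
  std::printf("scheduling units: %d\n", Units); // prints 1
}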
llvm-svn: 50376
conversion open the door for many nasty implicit conversion issues, and
can be easily solved by initializing with (V.begin(), V.end()) when
needed.
This patch includes many small cleanups for sdisel also.
llvm-svn: 50340
When choosing between constraints with multiple options,
like "ir", test to see if we can use the 'i' constraint and
go with that if possible. This produces more optimal ASM in
all cases (sparing a register and an instruction to load it),
and fixes inline asm like this:
void test () {
  asm volatile (" %c0 %1 " : : "imr" (42), "imr"(14));
}
Previously we would dump "42" into a memory location (which
is ok for the 'm' constraint) which would cause a problem
because the 'c' modifier is not valid on memory operands.
Isn't it great how inline asm turns 'missed optimization'
into 'compile failed'??
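The selection rule can be sketched as a standalone helper (names are
made up; the real logic lives in the inline asm handling in sdisel):

#include <cstdio>

// Made-up sketch: given a multi-letter constraint like "imr" and an
// operand that is a compile-time immediate, prefer 'i' over 'r' over
// 'm', since an immediate needs no register and no memory slot.
char pickConstraint(const char *Letters, bool IsImmediate) {
  char Best = 0;
  for (const char *P = Letters; *P; ++P) {
    if (*P == 'i' && IsImmediate) return 'i'; // best: fold the constant
    if (*P == 'r') Best = 'r';                // register beats memory
    if (*P == 'm' && Best == 0) Best = 'm';   // memory is the fallback
  }
  return Best;
}

int main() {
  std::printf("%c\n", pickConstraint("imr", /*IsImmediate=*/true));  // i
  std::printf("%c\n", pickConstraint("imr", /*IsImmediate=*/false)); // r
}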
Incidentally, this was the todo in
PowerPC/2007-04-24-InlineAsm-I-Modifier.ll
Please do NOT pull this into Tak.
llvm-svn: 50315
to the block that defines their operands. This doesn't work in the
case that the operand is an invoke, because invoke is a terminator
and must be the last instruction in a block.
Replace it with support in SelectionDAGISel for copying struct values
into sequences of virtual registers.
llvm-svn: 50279
on any current target and aren't optimized in DAGCombiner. Instead
of using intermediate nodes, expand the operations, choosing between
simple loads/stores, target-specific code, and library calls,
immediately.
Previously, the code to emit optimized code for these operations
was only used at initial SelectionDAG construction time; now it is
used at all times. This fixes some cases where rep;movs was being
used for small copies where simple loads/stores would be better.
This also cleans up code that checks for alignments less than 4;
let the targets make that decision instead of doing it in
target-independent code. This allows x86 to use rep;movs in
low-alignment cases.
Also, this fixes a bug that resulted in the use of rep;stos for
memsets of 0 with non-constant memory size when the alignment was
at least 4. It's better to use the library in this case, which
can be significantly faster when the size is large.
This also preserves more SourceValue information when memory
intrinsics are lowered into simple loads/stores.
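The kind of decision being made can be sketched standalone (the
threshold and names are illustrative; the real cutoffs are per-target):

#include <cstdio>

enum class MemcpyLowering { LoadsStores, TargetSpecific, LibraryCall };

// Illustrative chooser: small constant-size copies become plain loads
// and stores; larger constant copies may use target-specific code
// (e.g. rep;movs on x86, now allowed even at low alignment); anything
// with a non-constant size goes to the library, which wins when the
// size turns out to be large.
MemcpyLowering choose(bool ConstantSize, unsigned long Size,
                      unsigned MaxStoresThreshold) {
  if (!ConstantSize) return MemcpyLowering::LibraryCall;
  if (Size <= MaxStoresThreshold) return MemcpyLowering::LoadsStores;
  return MemcpyLowering::TargetSpecific;
}

int main() {
  std::printf("%d\n", (int)choose(true, 16, 32));  // 0: LoadsStores
  std::printf("%d\n", (int)choose(false, 0, 32));  // 2: LibraryCall
}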
llvm-svn: 49572
before an invoke. Failure to do this causes references in
the landing pad to variables that were not set. Fixes
g++.dg/eh/delayslot1.C
g++.dg/eh/fp-regs.C
g++.old-deja/g++.brendan/eh1.C
llvm-svn: 49243
review feedback.
-enable-eh is still accepted but doesn't do anything.
EH intrinsics use Dwarf EH if the target supports that,
and are handled by LowerInvoke otherwise.
The separation of the EH table and frame move data is,
I think, logically figured out, but either one still
causes full EH info to be generated (not sure how to
split the metadata correctly).
MachineModuleInfo::needsFrameInfo is no longer used and
is removed.
llvm-svn: 49064
not marked nounwind, or for all functions when -enable-eh
is set, provided the target supports Dwarf EH.
llvm-gcc generates nounwind in the right places; other FEs
will need to do so also. Given such a FE, -enable-eh should
no longer be needed.
llvm-svn: 49006
nodes. This doesn't currently have much impact on the generated code, but it
does produce simpler-looking SelectionDAGs, and consequently
simpler-looking ScheduleDAGs, because there are fewer spurious
dependencies.
In particular, CopyValueToVirtualRegister now uses the entry node as the
input chain dependency for new CopyToReg nodes instead of calling getRoot
and depending on the most recent memory reference.
Also, rename UnorderedChains to PendingExports and pull it up from being
a local variable in SelectionDAGISel::BuildSelectionDAG to being a
member variable of SelectionDAGISel, so that it doesn't have to be
passed around to all the places that need it.
llvm-svn: 48893
flags. This is needed by the new legalize types
infrastructure which wants to expand the 64 bit
constants previously used to hold the flags on
32 bit machines. There are two functional changes:
(1) in LowerArguments, if a parameter has the zext
attribute set then that is marked in the flags;
before it was being ignored; (2) PPC had some bogus
code for handling two word arguments when using the
ELF 32 ABI, which was hard to convert because of
the bogusness. As suggested by the original author
(Nicolas Geoffray), I've disabled it for the moment.
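The flags in question can be pictured as a packed 64-bit word of
per-argument bits; the layout below is purely illustrative, not the
actual encoding:

#include <cstdint>
#include <cstdio>

// Illustrative model only: per-argument flags packed into one 64-bit
// constant, which is what LegalizeTypes must expand on 32-bit machines.
struct ArgFlags {
  uint64_t Bits = 0;
  void setZExt()  { Bits |= 1ull << 0; }  // the zext attribute, now kept
  void setSExt()  { Bits |= 1ull << 1; }
  void setByVal() { Bits |= 1ull << 2; }
  bool isZExt() const { return Bits & 1; }
};

int main() {
  ArgFlags F;
  F.setZExt();                      // previously this bit was dropped
  std::printf("%d\n", F.isZExt());  // 1
}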
Tested with "make check" and the Ada ACATS testsuite.
llvm-svn: 48640
getCopyToParts problem was noticed by the new
LegalizeTypes infrastructure. In order to avoid
this kind of thing in the future I've added a
check that EXTRACT_ELEMENT is only used with
integers. Once LegalizeTypes is up and running
most likely BUILD_PAIR and EXTRACT_ELEMENT can
be removed, in favour of using apints instead.
llvm-svn: 48294
field to 32 bits, thus enabling correct handling of ByVal
structs bigger than 0x1ffff. Abstract the interface a bit.
Fixes gcc.c-torture/execute/pr23135.c and
gcc.c-torture/execute/pr28982b.c in gcc testsuite (were ICE'ing
on ppc32, quietly producing wrong code on x86-32.)
llvm-svn: 48122
they are produced by calls (which are known exact) and by cross block copies
which are known to be produced by extends.
This improves:
define double @test2() {
%tmp85 = call double asm sideeffect "fld0", "={st(0)}"()
ret double %tmp85
}
from:
_test2:
subl $20, %esp
# InlineAsm Start
fld0
# InlineAsm End
fstpl 8(%esp)
movsd 8(%esp), %xmm0
movsd %xmm0, (%esp)
fldl (%esp)
addl $20, %esp
#FP_REG_KILL
ret
to:
_test2:
# InlineAsm Start
fld0
# InlineAsm End
#FP_REG_KILL
ret
by avoiding a f64 <-> f80 trip
llvm-svn: 48108
inline asms.
Fix PR2078 by marking aliases of registers used when a register is
marked used. This prevents EAX from being allocated when AX is listed
in the clobber set for the asm.
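A toy model of the fix (the alias table here is made up; the real one
comes from the target's register descriptions):

#include <cstdio>
#include <map>
#include <set>
#include <string>
#include <vector>

// Toy model: marking a register used must also mark everything that
// overlaps it, so clobbering AX keeps EAX (and AL/AH) off limits.
int main() {
  std::map<std::string, std::vector<std::string>> Aliases = {
      {"AX", {"EAX", "AL", "AH"}},
  };
  std::set<std::string> Used;
  auto markUsed = [&](const std::string &R) {
    Used.insert(R);
    for (const auto &A : Aliases[R]) Used.insert(A);
  };
  markUsed("AX");
  std::printf("EAX used: %d\n", (int)Used.count("EAX")); // 1
}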
llvm-svn: 47426
the return value is zero-extended if it isn't
sign-extended. It may also be any-extended.
Also, if a floating point value was returned
in a larger floating point type, pass 1 as the
second operand to FP_ROUND, which tells it
that all the precision is in the original type.
I think this is right but I could be wrong.
Finally, when doing libcalls, set isZExt on
a parameter if it is "unsigned". Currently
isSExt is set when signed, and nothing is
set otherwise. This should be right for all
calls to standard library routines.
llvm-svn: 47122
node as soon as we create it in SDISel. Previously we would lower it in
legalize. The problem with this is that it only exposes the argument
loads implied by FORMAL_ARGUMENTs after legalize, so that only dag combine 2
can hack on them. This causes us to miss some optimizations because
datatype expansion also happens here.
Exposing the loads early allows us to do optimizations on them. For example
we now compile arg-cast.ll to:
_foo:
movl $2147483647, %eax
andl 8(%esp), %eax
ret
where we previously produced:
_foo:
subl $12, %esp
movsd 16(%esp), %xmm0
movsd %xmm0, (%esp)
movl $2147483647, %eax
andl 4(%esp), %eax
addl $12, %esp
ret
It might also make sense to do this for ISD::CALL nodes, which have implicit
stores on many targets.
llvm-svn: 47054
handle arbitrary precision integers and any number
of parts. For example, on a 32 bit machine an i50
corresponds to two i32 parts. getCopyToParts will
extend the i50 to an i64 then write half of the i64
to each part; getCopyFromParts will combine the two
i32 parts into an i64 then truncate the result to
i50.
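In plain arithmetic terms (a standalone illustration of the i50
example, not the actual getCopyToParts code):

#include <cstdint>
#include <cstdio>

// Extend the i50 to the next multiple of the part size (i64), write
// one i32 part per half, then reverse the process and truncate back.
int main() {
  uint64_t Val50 = 0x123456789ABCDull;       // a value that fits in i50
  uint64_t Ext   = Val50;                    // extend i50 -> i64
  uint32_t Lo    = (uint32_t)Ext;            // first i32 part
  uint32_t Hi    = (uint32_t)(Ext >> 32);    // second i32 part

  uint64_t Back  = ((uint64_t)Hi << 32) | Lo;  // recombine to i64
  uint64_t Trunc = Back & ((1ull << 50) - 1);  // truncate i64 -> i50
  std::printf("%d\n", Trunc == Val50);         // 1
}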
llvm-svn: 47024
Added the ISD::DECLARE node type to represent the llvm.dbg.declare intrinsic. Now the intrinsic calls are lowered into an SDNode and live on throughout the codegen passes.
For now, since all the debugging information recording is done at isel time, when an ISD::DECLARE node is selected, it has the side effect of also recording the variable. This is a short term solution that should be fixed in time.
llvm-svn: 46659
and switch various codegen pieces and the X86 backend over
to using it.
* Add some comments to SelectionDAGNodes.h
* Introduce a second argument to FP_ROUND, which indicates
whether the FP_ROUND changes the value of its input. If
not it is safe to xform things like fp_extend(fp_round(x)) -> x.
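Sketching the fold this enables (hypothetical node structs, not the
real SDNode API):

#include <cstdio>
#include <cstring>

// Hypothetical sketch: if FP_ROUND's second operand says the value was
// not changed (all precision already fit the destination type), then
// fp_extend(fp_round(x)) can be folded to plain x; otherwise the round
// may have dropped precision and the pair must be kept.
struct Node { const char *Op; Node *In; bool ValueUnchanged; };

Node *foldExtendOfRound(Node *Ext) {
  Node *Rnd = Ext->In;
  if (Rnd && std::strcmp(Rnd->Op, "fp_round") == 0 && Rnd->ValueUnchanged)
    return Rnd->In;              // fp_extend(fp_round(x)) -> x
  return Ext;                    // otherwise keep both nodes
}

int main() {
  Node X   = {"x", nullptr, false};
  Node Rnd = {"fp_round", &X, true};
  Node Ext = {"fp_extend", &Rnd, false};
  std::printf("%s\n", foldExtendOfRound(&Ext)->Op); // prints "x"
}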
llvm-svn: 46125
up to the various compiler pipelines.
This doesn't actually add support for any GC algorithms, which means it
temporarily breaks a few tests. To be fixed shortly.
llvm-svn: 45669
that "machine" classes are used to represent the current state of
the code being compiled. Given this expanded name, we can start
moving other stuff into it. For now, move the UsedPhysRegs and
LiveIns/LiveOuts vectors from MachineFunction into it.
Update all the clients to match.
This also reduces some needless #includes, such as MachineModuleInfo
from MachineFunction.
llvm-svn: 45467
to know about calls that cannot throw ('nounwind'):
if such a call does throw for some reason then the
personality will terminate the program. The distinction
between an ordinary call and a nounwind call is that
an ordinary call gets an entry in the exception table
but a nounwind call does not. This patch sets up the
exception table appropriately. One oddity is that
I've chosen to bracket nounwind calls with labels (like
invokes) - the other choice would have been to bracket
ordinary calls with labels. While bracketing
ordinary calls is more natural (because bracketing
by labels would then correspond exactly to getting an
entry in the exception table), I didn't do it because
introducing labels impedes some optimizations and I'm
guessing that ordinary calls occur more often than
nounwind calls. This fixes the gcc filter2 eh test,
at least at -O0 (the inliner needs some tweaking at
higher optimization levels).
llvm-svn: 45197
throw exceptions", just mark intrinsics with the nounwind
attribute. Likewise, mark intrinsics as readnone/readonly
and get rid of special aliasing logic (which didn't use
anything more than this anyway).
llvm-svn: 44544
the function type, instead they belong to functions
and function calls. This is an updated and slightly
corrected version of Reid Spencer's original patch.
The only known problem is that auto-upgrading of
bitcode files doesn't seem to work properly (see
test/Bitcode/AutoUpgradeIntrinsics.ll). Hopefully
a bitcode guru (who might that be? :) ) will fix it.
llvm-svn: 44359
1) Change the interface to TargetLowering::ExpandOperationResult to
take and return entire NODES that need a result expanded, not just
the value. This allows us to handle things like READCYCLECOUNTER,
which returns two values.
2) Implement (extremely limited) support in LegalizeDAG::ExpandOp for MERGE_VALUES.
3) Reimplement custom lowering in LegalizeDAGTypes in terms of the new
ExpandOperationResult. This makes the result simpler and fully
general.
4) Implement (fully general) expand support for MERGE_VALUES in LegalizeDAGTypes.
5) Implement ExpandOperationResult support for ARM f64->i64 bitconvert and ARM
i64 shifts, allowing them to work with LegalizeDAGTypes.
6) Implement ExpandOperationResult support for X86 READCYCLECOUNTER and FP_TO_SINT,
allowing them to work with LegalizeDAGTypes.
LegalizeDAGTypes now passes several more X86 codegen tests when enabled and when
type legalization in LegalizeDAG is ifdef'd out.
llvm-svn: 44300
The meaning of getTypeSize was not clear - clarifying it is important
now that we have x86 long double and arbitrary precision integers.
The issue with long double is that it requires 80 bits, and this is
not a multiple of its alignment. This gives a primitive type for
which getTypeSize differed from getABITypeSize. For arbitrary precision
integers it is even worse: there is the minimum number of bits needed to
hold the type (eg: 36 for an i36), the maximum number of bits that will
be overwritten when storing the type (40 bits for i36) and the ABI size
(i.e. the storage size rounded up to a multiple of the alignment; 64 bits
for i36).
This patch removes getTypeSize (not really - it is still there but
deprecated to allow for a gradual transition). Instead there is:
(1) getTypeSizeInBits - a number of bits that suffices to hold all
values of the type. For a primitive type, this is the minimum number
of bits. For an i36 this is 36 bits. For x86 long double it is 80.
This corresponds to gcc's TYPE_PRECISION.
(2) getTypeStoreSizeInBits - the maximum number of bits that is
written when storing the type (or read when reading it). For an
i36 this is 40 bits, for an x86 long double it is 80 bits. This
is the size alias analysis is interested in (getTypeStoreSize
returns the number of bytes). There doesn't seem to be anything
corresponding to this in gcc.
(3) getABITypeSizeInBits - this is getTypeStoreSizeInBits rounded
up to a multiple of the alignment. For an i36 this is 64, for an
x86 long double this is 96 or 128 depending on the OS. This is the
spacing between consecutive elements when you form an array out of
this type (getABITypeSize returns the number of bytes). This is
TYPE_SIZE in gcc.
Since successive elements in a SequentialType (arrays, pointers
and vectors) need to be aligned, the spacing between them will be
given by getABITypeSize. This means that the size of an array
is the length times the getABITypeSize. It also means that GEP
computations need to use getABITypeSize when computing offsets.
Furthermore, if an alloca allocates several elements at once then
these too need to be aligned, so the size of the alloca has to be
the number of elements multiplied by getABITypeSize. Logically
speaking this doesn't have to be the case when allocating just
one element, but it is simpler to also use getABITypeSize in this
case. So alloca's and mallocs should use getABITypeSize. Finally,
since gcc's only notion of size is that given by getABITypeSize, if
you want to output assembler etc the same as gcc then getABITypeSize
is the size you want.
Since a store will overwrite no more than getTypeStoreSize bytes,
and a read will read no more than that many bytes, this is the
notion of size appropriate for alias analysis calculations.
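The three notions reduce to simple arithmetic; here are the examples
above worked as standalone code (the alignments are the ones quoted):

#include <cstdio>

// Sizes in bits throughout.
unsigned storeSize(unsigned SizeInBits) {      // round up to whole bytes
  return 8 * ((SizeInBits + 7) / 8);
}
unsigned abiSize(unsigned SizeInBits, unsigned AlignInBits) {
  unsigned Store = storeSize(SizeInBits);      // then round to alignment
  return ((Store + AlignInBits - 1) / AlignInBits) * AlignInBits;
}

int main() {
  std::printf("i36: store=%u abi=%u\n",
              storeSize(36), abiSize(36, 32));   // 40, 64
  std::printf("x86 long double, 32-bit align:  store=%u abi=%u\n",
              storeSize(80), abiSize(80, 32));   // 80, 96
  std::printf("x86 long double, 128-bit align: store=%u abi=%u\n",
              storeSize(80), abiSize(80, 128));  // 80, 128
}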
In this patch I have corrected all type size uses except some of
those in ScalarReplAggregates, lib/Codegen, lib/Target (the hard
cases). I will get around to auditing these too at some point,
but I could do with some help.
Finally, I made one change which I think wise but others might
consider pointless and suboptimal: in an unpacked struct the
amount of space allocated for a field is now given by the ABI
size rather than getTypeStoreSize. I did this because every
other place that reserves memory for a type (eg: alloca) now
uses getABITypeSize, and I didn't want to make an exception
for unpacked structs, i.e. I did it to make things more uniform.
This only affects structs containing long doubles and arbitrary
precision integers. If someone wants to pack these types more
tightly they can always use a packed struct.
llvm-svn: 43620
FE.
- Explicitly pass in the alignment of the load & store.
- XFAIL 2007-10-23-UnalignedMemcpy.ll because llc has a bug that crashes on
unaligned pointers.
llvm-svn: 43398
have their own custom memcpy lowering code. This code needs to be factored out
into a target-independent lowering method with hooks to the backend. In the
meantime, just call memcpy if we're trying to copy onto a stack.
llvm-svn: 43262
To do this it is necessary to add an "always inline" argument to the
memcpy node. For completeness I have also added this node to memmove
and memset. I have also added getMem* functions, because the extra
argument makes it cumbersome to use getNode and because I get confused
by it :-)
llvm-svn: 43172
take a deleted nodes vector, instead of requiring it.
One more significant change: Implement the start of a legalizer that
just works on types. This legalizer is designed to run before the
operation legalizer and ensure just that the input dag is transformed
into an output dag whose operand and result types are all legal, even
if the operations on those types are not.
This design/impl has the following advantages:
1. When finished, this will *significantly* reduce the amount of code in
LegalizeDAG.cpp. It will remove all the code related to promotion and
expansion as well as splitting and scalarizing vectors.
2. The new code is very simple, idiomatic, and modular: unlike
LegalizeDAG.cpp, it has no 3000 line long functions. :)
3. The implementation is completely iterative instead of recursive, good
for hacking on large dags without blowing out your stack.
4. The implementation updates nodes in place when possible instead of
deallocating and reallocating the entire graph that points to some
mutated node.
5. The code nicely separates out handling of operations with invalid
results from operations with invalid operands, making some cases
simpler and easier to understand.
6. The new -debug-only=legalize-types option is very very handy :),
allowing you to easily understand what legalize types is doing.
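A bare-bones sketch of the iterative shape described in point 3 (the
types and names are invented for illustration):

#include <cstdio>
#include <vector>

// Invented skeleton of a worklist-driven legalizer: no recursion, so
// deep dags cannot blow out the stack. Each node is visited, fixed in
// place, and its users re-queued, since mutating a node can make its
// users' types illegal in turn.
struct Node { bool TypeLegal; std::vector<Node*> Users; };

void legalizeTypes(std::vector<Node*> Worklist) {
  while (!Worklist.empty()) {
    Node *N = Worklist.back();
    Worklist.pop_back();
    if (N->TypeLegal) continue;
    N->TypeLegal = true;          // stand-in for promote/expand/split
    for (Node *U : N->Users)
      Worklist.push_back(U);
  }
}

int main() {
  Node A{false, {}}, B{false, {&A}};
  legalizeTypes({&B, &A});
  std::printf("%d %d\n", A.TypeLegal, B.TypeLegal); // 1 1
}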
This is not yet done. Until the ifdef added to SelectionDAGISel.cpp is
enabled, this does nothing. However, this code is sufficient to legalize
all of the code in 186.crafty, olden and freebench on an x86 machine. The
biggest issues are:
1. Vectors aren't implemented at all yet
2. SoftFP is a mess, I need to talk to Evan about it.
3. No lowering to libcalls is implemented yet.
4. Various operations are missing etc.
5. There are FIXME's for stuff I hax0r'd out, like softfp.
Hey, at least it is a step in the right direction :). If you'd like to help,
just enable the #ifdef in SelectionDAGISel.cpp and compile code with it. If
this explodes it will tell you what needs to be implemented. Help is
certainly appreciated.
Once this goes in, we can do three things:
1. Add a new pass of dag combine between the "type legalizer" and "operation
legalizer" passes. This will let us catch some long-standing isel issues
that we miss because operation legalization often obfuscates the dag with
target-specific nodes.
2. We can rip out all of the type legalization code from LegalizeDAG.cpp,
making it much smaller and simpler. When that happens we can then
reimplement the core functionality left in it in a much more efficient and
non-recursive way.
3. Once the whole legalizer is non-recursive, we can implement whole-function
selectiondags maybe...
llvm-svn: 42981
enabled by passing -tailcallopt to llc. The optimization is
performed if the following conditions are satisfied:
* caller/callee are fastcc
* elf/pic is disabled OR
elf/pic enabled + callee is in module + callee has
visibility protected or hidden
llvm-svn: 42870
double from some of the many places in the optimizers
it appears, and do something reasonable with x86
long double.
Make APInt::dump() public, remove newline, use it to
dump ConstantSDNode's.
Allow APFloats in FoldingSet.
Expand X86 backend handling of long doubles (conversions
to/from int, mostly).
llvm-svn: 41967
2. Lower calls to fabs and friends to FABS nodes etc unless the function has
internal linkage. Before we wouldn't lower if it had a definition, which
is incorrect. This allows us to compile:
define double @fabs(double %f) {
%tmp2 = tail call double @fabs( double %f )
ret double %tmp2
}
into:
_fabs:
fabs f1, f1
blr
llvm-svn: 41805
Use APFloat in UpgradeParser and AsmParser.
Change all references to ConstantFP to use the
APFloat interface rather than double. Remove
the ConstantFP double interfaces.
Use APFloat functions for constant folding arithmetic
and comparisons.
(There are still way too many places APFloat is
just a wrapper around host float/double, but we're
getting there.)
llvm-svn: 41747
labels are generated bracketing each call (not just
invokes). This is used to generate entries in
the exception table required by the C++ personality.
However it gets in the way of tail-merging. This
patch solves the problem by no longer placing labels
around ordinary calls. Instead we generate entries
in the exception table that cover every instruction
in the function that wasn't covered by an invoke
range (the range given by the labels around the invoke).
As an optimization, such entries are only generated for
parts of the function that contain a call, since for
the moment those are the only instructions that can
throw an exception [1]. As a happy consequence, we
now get a smaller exception table, since the same
region can cover many calls. While there, I also
implemented folding of invoke ranges - successive
ranges are merged when safe to do so. Finally, if
a selector contains only a cleanup, there's a special
shorthand for it - place a 0 in the call-site entry.
I implemented this while there. As a result, the
exception table output (excluding filters) is now
optimal - it cannot be made smaller [2]. The
problem with throw filters is that folding them
optimally is hard, and the benefit of folding them is
minimal.
[1] I tested that having trapping instructions (eg
divide by zero) in such a region doesn't cause trouble.
[2] It could be made smaller with the help of higher
layers, eg by having branch folding reorder basic blocks
ending in invokes with the same landing pad so they
follow each other. I don't know if this is worth doing.
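The range folding mentioned above is ordinary interval merging; a
standalone sketch with labels modeled as plain offsets:

#include <cstdio>
#include <vector>

struct Range { unsigned Begin, End; }; // an invoke's label-bracketed span

// Successive invoke ranges that touch or overlap can share a single
// exception table entry. Assumes the ranges arrive sorted by Begin,
// which holds when walking the function in layout order.
std::vector<Range> foldRanges(const std::vector<Range> &In) {
  std::vector<Range> Out;
  for (const Range &R : In) {
    if (!Out.empty() && R.Begin <= Out.back().End)
      Out.back().End = R.End > Out.back().End ? R.End : Out.back().End;
    else
      Out.push_back(R);
  }
  return Out;
}

int main() {
  std::vector<Range> R = {{0, 8}, {8, 20}, {32, 40}};
  std::printf("entries: %zu\n", foldRanges(R).size()); // 2
}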
llvm-svn: 41718
gcc exception handling: if an exception unwinds through
an invoke, then execution must branch to the invoke's
unwind target. We previously tried to enforce this by
appending a cleanup action to every selector, however
this does not always work correctly due to an optimization
in the C++ unwinding runtime: if only cleanups would be
run while unwinding an exception, then the program just
terminates without actually executing the cleanups, as
invoke semantics would require. I was hoping this
wouldn't be a problem, but in fact it turns out to be the
cause of all the remaining failures in the LLVM testsuite
(these also fail with -enable-correct-eh-support, so turning
on -enable-eh didn't make things worse!). Instead we need
to append a full-blown catch-all to the end of each
selector. The correct way of doing this depends on the
personality function, i.e. it is language dependent, so
can only be done by gcc. Thus this patch which generalizes
the eh.selector intrinsic so that it can handle all possible
kinds of action table entries (before it didn't accommodate
cleanups): now 0 indicates a cleanup, and filters have to be
specified using the number of type infos plus one rather than
the number of type infos. Related gcc patches will cause
Ada to pass a cleanup (0) to force the selector to always
fire, while C++ will use a C++ catch-all (null).
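The new encoding can be summarized in code (an illustration of the
convention, not of the intrinsic's implementation):

#include <cstdio>

// Convention described above for eh.selector action entries:
//   0            -> a cleanup
//   N+1 (N >= 0) -> a filter with N type infos
// so a filter with zero type infos (encoded as 1) no longer collides
// with a cleanup.
int encodeCleanup() { return 0; }
int encodeFilter(int NumTypeInfos) { return NumTypeInfos + 1; }

int main() {
  std::printf("cleanup=%d empty filter=%d 2-typeinfo filter=%d\n",
              encodeCleanup(), encodeFilter(0), encodeFilter(2));
}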
llvm-svn: 41484
- *Always* round up the size of the allocation to multiples of stack
alignment to ensure the stack ptr is never left in an invalid state after a dynamic_stackalloc.
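The rounding is the usual align-up computation; a standalone sketch
(the 16-byte alignment below is just an example value):

#include <cstdio>

// Round a dynamic allocation size up to the stack alignment so the
// stack pointer stays aligned after dynamic_stackalloc. The alignment
// must be a power of two for the mask trick to work.
unsigned long alignUp(unsigned long Size, unsigned long Align) {
  return (Size + Align - 1) & ~(Align - 1);
}

int main() {
  std::printf("%lu\n", alignUp(13, 16)); // 16
  std::printf("%lu\n", alignUp(32, 16)); // 32
}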
llvm-svn: 41132
This also changes the syntax for llvm.bswap, llvm.part.set, llvm.part.select, and llvm.ct* intrinsics. They are automatically upgraded by both the LLVM ASM reader and the bitcode reader. The test cases have been updated, with special tests added to ensure the automatic upgrading is supported.
llvm-svn: 40807
This patch fills in the last necessary bits to enable exception
handling in LLVM. Currently only on x86-32/linux.
In fact, this patch adds the necessary intrinsics (and their lowering),
which represent really weird target-specific gcc builtins used inside
the unwinder.
After the corresponding llvm-gcc patch lands (easy), exceptions should
be more or less workable. However, exception handling support should
not be thought of as 'finished': I expect many small and not so small
glitches everywhere.
llvm-svn: 39855
register ordering, for both physical and virtual registers. Update the PPC
target lowering for calls to expect registers for the call result to
already be in target order.
llvm-svn: 38471
so must be lowered to a value, not nothing at all.
Subtle point: I made eh_selector return 0 and
eh_typeid_for return 1. This means that only
cleanups (destructors) will be run as the exception
unwinds [if eh_typeid_for returned 0 then it would
be as if the first catch always matched, and the
corresponding handler would be run], which is
probably what you want in the CBE.
llvm-svn: 37947
endian swapping should be done, and update the code to use it. This fixes
some register ordering issues on big-endian systems, such as PowerPC,
introduced by the recent illegal by-val arguments changes.
llvm-svn: 37921
refactored getCopyFromParts and getCopyToParts, which are more general.
This effectively adds support for lowering illegal by-val vector call
arguments.
llvm-svn: 37843
illegal value type will be transformed to, for code that needs the
register type after all transformations instead of just after the first
transformation.
Factor out the code that uses this information to do copy-from-regs and
copy-to-regs for various purposes into separate functions so that they
are done consistently.
llvm-svn: 37781
extended vector types. Remove the special SDNode opcodes used for pre-legalize
vector operations, and the special MVT::Vector type used with them. Adjust
lowering and legalize to work with the normal SDNode kinds instead, and to
use the normal MVT functions to work with vector types instead of using the
two special operands that the pre-legalize nodes held.
This allows pre-legalize and post-legalize DAGs, and the code that operates
on them, to be more consistent. Pre-legalize vector operators can be handled
more consistently with scalar operators. And, -view-dag-combine1-dags and
-view-legalize-dags now look prettier for vector code.
llvm-svn: 37719
TargetLowering to SelectionDAG so that they have more convenient
access to the current DAG, in preparation for the ValueType routines
being changed from standalone functions to members of SelectionDAG for
the pre-legalize vector type changes.
llvm-svn: 37704
VCONCAT_VECTORS. Use these for CopyToReg and CopyFromReg legalizing in
the case that the full register is to be split into subvectors instead
of scalars. This replaces uses of VBIT_CONVERT to present values as
vector-of-vector types in order to make whole subvectors accessible via
BUILD_VECTOR and EXTRACT_VECTOR_ELT.
This is in preparation for adding extended ValueType values, where
having vector-of-vector types is undesirable.
llvm-svn: 37569
crashing but breaks exception handling. The problem
described in PR1224 is that invoke is a terminator that
can produce a value. The value may be needed in other
blocks. The code that writes values needed in other blocks into
registers runs before terminators are lowered (in this case invoke),
and so asserted because the value was not yet available.
lowering earlier, before writing values to registers.
The problem this causes is that the code to copy values
to registers can be output after the invoke call. If
an exception is raised and control is passed to the
landing pad then this copy-code will never execute. If
the value is needed in some code path reached via the
landing pad then that code will get something bogus.
So revert the original fix and simply skip invoke values
in the general copying to registers code. Instead copy
the invoke value to a register in the invoke lowering code.
llvm-svn: 37567
simplifies the code in DwarfWriter, allows for multiple filters and
makes it trivial to specify filters accompanied by cleanups or catch-all
specifications (see next patch). What a deal! Patch blessed by Anton.
llvm-svn: 37398
register, preallocate all input registers and the early clobbered output.
This fixes PR1357 and CodeGen/PowerPC/2007-04-30-InlineAsmEarlyClobber.ll
llvm-svn: 36599
before the copies into physregs are done. This avoids having flag operands
skip the store, causing cycles in the dag at sched time. This fixes infinite
loops on these tests:
test/CodeGen/Generic/2007-04-08-MultipleFrameIndices.ll for PR1308
test/CodeGen/PowerPC/2007-01-29-lbrx-asm.ll
test/CodeGen/PowerPC/2007-01-31-InlineAsmAddrMode.ll
test/CodeGen/X86/2006-07-12-InlineAsmQConstraint.ll for PR828
llvm-svn: 36547
If the operand is not already an indirect operand, spill it to a constant
pool entry or a stack slot.
This fixes PR1356 and CodeGen/X86/2007-04-27-InlineAsm-IntMemInput.ll
llvm-svn: 36536
class supports. In the case of vectors, this means we often get the wrong
type (e.g. we get v4f32 instead of v8i16). Make sure to convert the vector
result to the right type. This fixes CodeGen/X86/2007-04-11-InlineAsmVectorResult.ll
llvm-svn: 35944
1. Fix some bugs in the jump table lowering threshold
2. Implement much better metric for optimal pivot selection
3. Tune thresholds for different lowering methods
4. Implement the shift-and trick for lowering small (< machine word
length) cases with few destinations (sketched below). A good testcase
will follow.
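The shift-and trick turns a dense switch with only a couple of
distinct destinations into bit tests; a standalone sketch (the case
ranges and masks below are made up for illustration):

#include <cstdint>
#include <cstdio>

// Made-up example: case values span less than a machine word, so each
// destination gets a precomputed bitmask and membership is one
// shift-and per destination instead of a chain of compares.
int dispatch(unsigned X) {
  const unsigned Lo = 0, Hi = 31;
  if (X - Lo > Hi - Lo) return -1;          // out of range: default
  const uint32_t DestA = 0x0000000F;        // cases 0..3  -> block A
  const uint32_t DestB = 0x00000F00;        // cases 8..11 -> block B
  if ((uint32_t(1) << (X - Lo)) & DestA) return 0;
  if ((uint32_t(1) << (X - Lo)) & DestB) return 1;
  return -1;                                // default destination
}

int main() {
  std::printf("%d %d %d\n", dispatch(2), dispatch(9), dispatch(20)); // 0 1 -1
}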
llvm-svn: 35816
1) Take the address scale into consideration, e.g. i32* -> scale 4 (see the sketch below).
2) Examine all the users of GEP.
3) Generalize to inter-block GEP's (no longer uses loopinfo).
4) Don't do xform if GEP has other variable index(es).
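The scale in point 1 is just the element size in bytes; a minimal
standalone illustration:

#include <cstdio>

// A GEP index into an i32* contributes index * 4 bytes to the address;
// the scale is the size of the pointed-to element.
long gepOffset(long Index, long ElemSizeInBytes) {
  return Index * ElemSizeInBytes;
}

int main() {
  std::printf("%ld\n", gepOffset(3, 4)); // i32*: index 3 -> byte offset 12
}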
llvm-svn: 35403