comments (fast isel, X86). This doesn't seem
to break any functionality, but will introduce
cases where -g affects the generated code. I'll
be fixing that.
llvm-svn: 93811
This patch also cleans up code that expected a bitcast in the first argument, and updates testcases that call llvm.dbg.declare.
It also strips old llvm.dbg.declare intrinsics that did not pass metadata as the first argument.
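For consumers of the new form, the declared address can be read straight off the intrinsic with no bitcast to look through. A minimal sketch against current header paths (illustrative only, not code from this patch):
#include "llvm/IR/IntrinsicInst.h"

// With metadata as the first argument, there is no bitcast to peel off:
// the wrapped address (typically the alloca) comes back directly.
llvm::Value *declaredAddress(llvm::DbgDeclareInst *DDI) {
  return DDI->getAddress();
}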
llvm-svn: 93531
I asked Devang to do back on Sep 27. Instead of going through the
MetadataContext class with methods like getMD() and getMDs(), just
ask the instruction directly for its metadata with getMetadata()
and getAllMetadata().
This includes a variety of other fixes and improvements: previously
every Value was bloated because the HasMetadata bit was thrown into
Value itself, adding a 9th bit to a byte. Now the bit is properly sunk
down to the Instruction class (the only place where it makes sense),
and it will be folded away somewhere soon.
This also fixes some confusion in getMDs and its clients about
whether the returned list is indexed by MDID or densely packed.
The list is now returned sorted and densely packed, and the comments
make this clear.
This introduces a number of fixme's which I'll follow up on.
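As a rough illustration of the new query side, a minimal sketch against current header paths (not code from this commit; the metadata kind name "notes" is made up for the example):
#include <utility>
#include "llvm/ADT/SmallVector.h"
#include "llvm/IR/Instruction.h"
#include "llvm/IR/Metadata.h"

void inspectMetadata(const llvm::Instruction &I) {
  // Ask the instruction directly for one metadata kind by name.
  if (llvm::MDNode *MD = I.getMetadata("notes"))
    MD->dump();

  // Or ask for everything it carries; the result is densely packed and
  // sorted by kind ID rather than indexed by MDID.
  llvm::SmallVector<std::pair<unsigned, llvm::MDNode *>, 4> MDs;
  I.getAllMetadata(MDs);
  for (const auto &KV : MDs)
    KV.second->dump();
}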
llvm-svn: 92235
so get rid of eh.selector.i64 and rename eh.selector.i32 to eh.selector.
Likewise for eh.typeid.for. This aligns us with gcc, which always uses a
32-bit value for the selector on all platforms. My understanding is that
the register allocator used to assert if the selector intrinsic size didn't
match the pointer size, and this was the reason for introducing the two
variants. However, my testing shows that this is no longer the case (I
fixed some bugs in selector lowering yesterday, and some more today in the
fastisel path; these might have caused the original problems).
llvm-svn: 84106
While recording the beginning of a function, use scope info from the first location entry instead of just relying on the first location entry itself.
llvm-svn: 83684
from floating-point to integer first, and bitcast the result
back to floating-point. Previously, this test was passing by
falling back to SelectionDAG lowering. The resulting code isn't
as nice, but it's correct and CodeGen now stays on the fast path.
llvm-svn: 81171
This involves temporarily hard wiring some parts to use the global context. This isn't ideal, but it's
the only way I could figure out to make this process vaguely incremental.
llvm-svn: 75445
integer and floating-point opcodes, introducing
FAdd, FSub, and FMul.
For now, the AsmParser, BitcodeReader, and IRBuilder all preserve
backwards compatibility, and the Core LLVM APIs preserve backwards
compatibility for IR producers. Most front-ends won't need to change
immediately.
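For front-ends that do migrate, the IRBuilder side looks roughly like this; a minimal sketch using current API spellings, not code from this commit:
#include "llvm/IR/IRBuilder.h"

llvm::Value *emitAdd(llvm::IRBuilder<> &B, llvm::Value *L, llvm::Value *R) {
  // Integer addition keeps the Add opcode; scalar floating-point addition
  // now has its own FAdd opcode (and likewise FSub and FMul).
  if (L->getType()->isFloatingPointTy())
    return B.CreateFAdd(L, R, "fsum");
  return B.CreateAdd(L, R, "sum");
}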
This implements the first step of the plan outlined here:
http://nondot.org/sabre/LLVMNotes/IntegerOverflow.txt
llvm-svn: 72897
code in preparation for code generation. The main thing it does
is handle the case when eh.exception calls (and, in a future
patch, eh.selector calls) are far away from landing pads. Right
now in practice you only find eh.exception calls close to landing
pads: either in a landing pad (the common case) or in a landing
pad successor, due to loop passes shifting them about. However
future exception handling improvements will result in calls far
from landing pads:
(1) Inlining of rewinds. Consider the following case:
In function @f:
  ...
  invoke @g to label %normal unwind label %unwinds
  ...
unwinds:
  %ex = call i8* @llvm.eh.exception()
  ...
In function @g:
  ...
  invoke @something to label %continue unwind label %handler
  ...
handler:
  %ex = call i8* @llvm.eh.exception()
  ... perform cleanups ...
  "rethrow exception"
Now inline @g into @f. Currently this is turned into:
In function @f:
  ...
  invoke @something to label %continue unwind label %handler
  ...
handler:
  %ex = call i8* @llvm.eh.exception()
  ... perform cleanups ...
  invoke "rethrow exception" to label %normal unwind label %unwinds
unwinds:
  %ex = call i8* @llvm.eh.exception()
  ...
However, we would like to simplify the invoke of "rethrow exception" into
a branch to the %unwinds label. Then %unwinds is no longer a landing
pad, and the eh.exception call there is then far away from any landing
pads.
(2) Using the unwind instruction for cleanups.
It would be nice to have codegen handle the following case:
  invoke @something to label %continue unwind label %run_cleanups
  ...
run_cleanups:
  ... perform cleanups ...
  unwind
This requires turning "unwind" into a library call, which
necessarily takes a pointer to the exception as an argument
(this patch also does this unwind lowering). But that means
you are using eh.exception again far from a landing pad (see the sketch
after this list).
(3) Bugpoint simplifications. When bugpoint is simplifying
exception handling code it often generates eh.exception calls
far from a landing pad, which then causes codegen to assert.
Bugpoint then latches on to this assertion and loses sight
of the original problem.
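To make point (2) above concrete: the rewind lowering amounts to replacing the unwind terminator with a call that hands the exception pointer to the runtime. A hedged sketch follows; the routine name _Unwind_Resume and the current IRBuilder spellings are assumptions for illustration, not this pass's actual code:
#include "llvm/IR/DerivedTypes.h"
#include "llvm/IR/IRBuilder.h"
#include "llvm/IR/Module.h"

// UnwindTerm stands in for the old `unwind` terminator being lowered;
// ExnPtr is the value produced by the matching eh.exception call.
void lowerUnwind(llvm::Instruction *UnwindTerm, llvm::Value *ExnPtr) {
  llvm::Module *M = UnwindTerm->getModule();
  llvm::LLVMContext &Ctx = M->getContext();
  llvm::IRBuilder<> B(UnwindTerm);

  // Declare (or reuse) a rewind entry point taking the exception pointer.
  llvm::FunctionCallee Rewind = M->getOrInsertFunction(
      "_Unwind_Resume", llvm::Type::getVoidTy(Ctx),
      llvm::PointerType::getUnqual(Ctx));

  B.CreateCall(Rewind, {ExnPtr}); // pass the exception to the runtime
  B.CreateUnreachable();          // the call never returns
  UnwindTerm->eraseFromParent();  // drop the original terminator
}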
Note that it is currently rare for this pass to actually do
anything. And in fact it normally shouldn't do anything at
all given the code coming out of llvm-gcc! But it does fire
a few times in the testsuite. As far as I can see this is
almost always due to the LoopStrengthReduce codegen pass
introducing pointless loop preheader blocks which are landing
pads and only contain a branch to another block. This other
block contains an eh.exception call. So probably by tweaking
LoopStrengthReduce a bit this can be avoided.
llvm-svn: 72276
checking for bcopy... no
checking for getc_unlocked... Assertion failed: (0 && "Unknown SCEV kind!"), function operator(), file /Volumes/Sandbox/Buildbot/llvm/full-llvm/build/llvmCore.roots/llvmCore~obj/src/lib/Analysis/ScalarEvolution.cpp, line 511.
/Volumes/Sandbox/Buildbot/llvm/full-llvm/build/llvmgcc42.roots/llvmgcc42~obj/src/libdecnumber/decUtility.c:360: internal compiler error: Abort trap
Please submit a full bug report,
with preprocessed source if appropriate.
See <URL:http://developer.apple.com/bugreporter> for instructions.
make[4]: *** [decUtility.o] Error 1
make[4]: *** Waiting for unfinished jobs....
Assertion failed: (0 && "Unknown SCEV kind!"), function operator(), file /Volumes/Sandbox/Buildbot/llvm/full-llvm/build/llvmCore.roots/llvmCore~obj/src/lib/Analysis/ScalarEvolution.cpp, line 511.
/Volumes/Sandbox/Buildbot/llvm/full-llvm/build/llvmgcc42.roots/llvmgcc42~obj/src/libdecnumber/decNumber.c:5591: internal compiler error: Abort trap
Please submit a full bug report,
with preprocessed source if appropriate.
See <URL:http://developer.apple.com/bugreporter> for instructions.
make[4]: *** [decNumber.o] Error 1
make[3]: *** [all-stage2-libdecnumber] Error 2
make[3]: *** Waiting for unfinished jobs....
llvm-svn: 71165
-Replace DebugLocTuple's Source ID with CompileUnit's GlobalVariable*
-Remove DwarfWriter::getOrCreateSourceID
-Make necessary changes for the above (fix callsites, etc.)
llvm-svn: 70520
Massive check-in. This changes the "-fast" flag to "-O#" in llc. If you want to
use the old behavior, the flag is -O0. This change allows for finer-grained
control over which optimizations are run at different -O levels.
Most of this work was pretty mechanical. The majority of the fixes came from
verifying that a "fast" variable wasn't used anymore. The JIT still uses a
"Fast" flag. I'll change the JIT with a follow-up patch.
llvm-svn: 70343
use the old behavior, the flag is -O0. This change allows for finer-grained
control over which optimizations are run at different -O levels.
Most of this work was pretty mechanical. The majority of the fixes came from
verifying that a "fast" variable wasn't used anymore. The JIT still uses a
"Fast" flag. I'm not 100% sure if it's necessary to change it there...
llvm-svn: 70270
Instead of doing ...
  if (inlined_subroutine && known_location)
    DW_TAG_inlined_subroutine
  else
    DW_TAG_subprogram
do
  if (inlined_subroutine) {
    if (known_location)
      DW_TAG_inlined_subroutine
  } else {
    DW_TAG_subprogram
  }
llvm-svn: 69300
Create the debug_inlined dwarf section using this information. This info is used by gdb, at least on Darwin, to enable a better experience when debugging inlined functions. See DwarfWriter.cpp for more information on the structure of the debug_inlined section.
llvm-svn: 68847
ptrtoint and inttoptr in X86FastISel. These casts aren't always
handled in the generic FastISel code because X86 sometimes needs
custom code to do truncation and zero-extension.
llvm-svn: 66988
by inserting explicit zero extensions where necessary. Included
is a testcase where SelectionDAG produces a virtual register
holding an i1 value which FastISel previously mistakenly assumed
to be zero-extended.
llvm-svn: 66941
them are generic changes.
- Use the "fast" flag that's already being passed into the asm printers instead
of shoving it into the DwarfWriter.
- Instead of calling "MI->getParent()->getParent()" for every MI, set the
machine function when calling "runOnMachineFunction" in the asm printers.
llvm-svn: 65379
a DBG_LABEL or not. We want to fall back to the original way of emitting debug
info when we're in -O0/-fast mode.
- Add plumbing in to pass the "Fast" flag to places that need it.
- XFAIL DebugInfo/deaddebuglabel.ll. This is finding 11 labels instead of 8. I
still need to investigate.
llvm-svn: 65367
and use it in x86 address mode folding. Also, make
getRegForValue return 0 for illegal types even if it has a
ValueMap for them, because Argument values are put in the
ValueMap. This fixes PR3181.
llvm-svn: 60696
- Move the EH landing-pad code and adjust it so that it works
with FastISel as well as with SDISel.
- Add FastISel support for @llvm.eh.exception and
@llvm.eh.selector.
llvm-svn: 57539
HandlePHINodesInSuccessorBlocks that works FastISel-style. This
allows PHI nodes to be updated correctly while using FastISel.
This also involves some code reorganization; ValueMap and
MBBMap are now members of the FastISel class, so they needn't
be passed around explicitly anymore. Also, SelectInstructions
is changed to SelectInstruction, and only does one instruction
at a time.
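A hedged sketch of the per-instruction driver shape this enables, written against a current tree (where the hook has since been renamed selectInstruction); the real caller also handles the SelectionDAG fallback, which is only indicated here:
#include "llvm/CodeGen/FastISel.h"
#include "llvm/IR/BasicBlock.h"

bool fastSelectBlock(llvm::FastISel &FastIS, const llvm::BasicBlock &BB) {
  for (const llvm::Instruction &I : BB) {
    // Select one instruction at a time; bail out on the first one the
    // fast path cannot handle.
    if (!FastIS.selectInstruction(&I))
      return false; // hand the rest of the block to SelectionDAG
  }
  return true;
}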
llvm-svn: 55746
the simple fix, materializing the constant before every use. It might be better either to track domination of uses or
to materialize all constants at the beginning of the function and let rematerialization sort out when to materialize at uses.
llvm-svn: 55703
assignment when selecting the def. This is the naive solution to the problem: insert a copy to the pre-chosen
vreg. Other solutions might be preferable, such as:
1) Passing the dest reg into FastEmit_. However, this would require the higher-level code to know about reg classes, which it doesn't currently.
2) Selecting blocks in reverse postorder. This has some compile time cost for computing the order, and we'd need to measure its impact.
llvm-svn: 55555
was inserted or not. This allows bitcast in fast isel to properly handle the case
where an appropriate reg-to-reg copy is not available.
llvm-svn: 55375
and use it in FastISelEmitter.cpp, and make FastISel
subtarget aware. Among other things, this lets it work
properly on x86 targets that don't have SSE, where it
successfully selects x87 instructions.
llvm-svn: 55156
class hold a MachineRegisterInfo member, and make the
MachineBasicBlock be passed in to SelectInstructions rather
than the FastISel constructor.
llvm-svn: 55076