Commit Graph

157 Commits

Author SHA1 Message Date
Duncan Sands 4c904fa797 Fix PR7272: when inlining through a callsite with byval arguments,
the newly created allocas may be used by inlined calls, so these
need to have their tail call flags cleared.  Fixes PR7272.

llvm-svn: 105255
2010-05-31 21:00:26 +00:00
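
For illustration, a minimal IR sketch of the situation this fixes, with hypothetical functions and 2010-era typed-pointer syntax; the callee takes a byval pointer and passes it along in a tail call:

declare void @use(i32* %p)

define void @takes_byval(i32* byval %p) {
entry:
  ; before inlining the tail marker is fine: %p is the callee's own byval copy
  tail call void @use(i32* %p)
  ret void
}

define void @caller(i32* %q) {
entry:
  call void @takes_byval(i32* byval %q)
  ret void
}

When @takes_byval is inlined into @caller, the byval argument becomes a fresh alloca in @caller (plus a copy of the pointee into it), and the inlined call to @use now passes the address of that alloca. A tail call must not access the caller's stack, so the inliner has to clear the tail flag on such inlined calls.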
Chris Lattner c2432b9d44 rename InlineInfo.DevirtualizedCalls -> InlinedCalls to
reflect that it includes all inlined calls now, not just
devirtualized ones.

llvm-svn: 102824
2010-05-01 01:26:13 +00:00
Chris Lattner fc8d9ee6c3 Implement rdar://6295824 and PR6724 with two tiny changes
that can have a big effect :).  The first is to enable the
iterative SCC passmanager juice that kicks in when the
scc passmgr detects that a function pass has devirtualized
a call.  In this case, it will rerun all the passes it 
manages on the SCC, up to the iteration count limit (4). This
is useful because a function pass may devirtualize a call, and
we want the inliner to inline it, or pruneeh to infer stuff
about it, etc.

The second patch is to add *all* call sites to the 
DevirtualizedCalls list the inliner uses.  This list is
about to get renamed, but the gist of this is that the 
inliner now reconsiders *all* inlined call sites as candidates
for further inlining.  The intuition is that in cases 
like this:

f() { g(1); }     g(int x) { h(x); }

We analyze this bottom up, and may decide that it isn't 
profitable to inline H into G.  Next step, we decide that it is
profitable to inline G into F, and do so, which means that F 
now calls H.  Even though the call from G -> H may not have been
profitable to inline, the call from F -> H may be (in this case
because a constant allows folding etc).

In my spot checks, this doesn't have a big impact on code size.  For
example, the LLC output for 252.eon grew by 0.02% (from
317252 to 317308 bytes) and 176.gcc actually shrank by 0.3% (from 1525612
to 1520964 bytes).  252.eon never iterated in the SCC passmgr;
176.gcc iterated at most once.

llvm-svn: 102823
2010-05-01 01:15:56 +00:00
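
A hypothetical IR rendering of that f/g/h example makes the point concrete:

declare void @h(i32)

define void @g(i32 %x) {
entry:
  call void @h(i32 %x)
  ret void
}

define void @f() {
entry:
  call void @g(i32 1)
  ret void
}

Inlining @g into @f leaves call void @h(i32 1) inside @f. Because the inliner now reconsiders that inlined call site, it can notice that the constant argument makes inlining @h into @f attractive (branches on %x fold away, for instance) even though @h was not worth inlining into @g.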
Chris Lattner c691de3b4e switch InlineInfo.DevirtualizedCalls's list to be of WeakVH.
This fixes a bug where calls inlined into an invoke would get
changed into an invoke but the array would keep pointing to
the (now dead) call.  The improved inliner behavior is still
disabled for now.

llvm-svn: 102196
2010-04-23 18:37:01 +00:00
Chris Lattner 2eee5d3467 The inliner was choosing to not consider call sites
that appear in the SCC as a result of inlining as candidates
for inlining.  Change this so that it *does* consider call 
sites that change from being indirect to being direct as a
result of inlining.  This allows it to completely 
"devirtualize" the testcase.

llvm-svn: 102146
2010-04-22 23:37:35 +00:00
Chris Lattner 4ba01ec869 refactor the interface to InlineFunction so that most of the in/out
arguments are handled with a new InlineFunctionInfo class.  This 
makes it easier to extend InlineFunction to return more info in the
future.

llvm-svn: 102137
2010-04-22 23:07:58 +00:00
Chris Lattner 016c00a311 when inlining something like this:
define void @f3(void (i8*)* %__f) ssp {
entry:
  call void %__f(i8* undef)
  unreachable
}

define void @f4(i8* %this) ssp align 2 {
entry:
  call void @f3(void (i8*)* @f2) ssp
  ret void
}

The inliner is turning the indirect call to %__f into a direct
call to F2.  Make the call graph more precise when this happens.

The inliner doesn't revisit call sites introduced by inlining,
so there isn't an easy way to test for this, but a more precise
callgraph is a good thing.

llvm-svn: 102131
2010-04-22 21:31:00 +00:00
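
For reference, a sketch of @f4 after @f3 is inlined into it; the formerly indirect call through %__f has become a direct call:

define void @f4(i8* %this) ssp align 2 {
entry:
  call void @f2(i8* undef)
  unreachable
}

With this change the call graph records a direct edge from @f4 to @f2 for that call site, rather than the less precise edge an indirect call gets.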
Chris Lattner 0a3b5b4e39 eliminate dead #include.
llvm-svn: 102119
2010-04-22 20:41:10 +00:00
Eric Christopher 7258dcd77f Revert 101465, it broke internal OpenGL testing.
Probably the best way to know that all getOperand() calls have been handled
is to replace that API instead of updating.

llvm-svn: 101579
2010-04-16 23:37:20 +00:00
Gabor Greif f375520f7b reapply r101434
with a fix for self-hosting

rotate CallInst operands, i.e. move callee to the back
of the operand array

the motivation for this patch is laid out in my mail to llvm-commits:
more efficient access to operands and callee, faster callgraph-construction,
smaller compiler binary

llvm-svn: 101465
2010-04-16 15:33:14 +00:00
Gabor Greif 403e9694f9 back out r101423 and r101397, they break llvm-gcc self-host on darwin10
llvm-svn: 101434
2010-04-16 01:16:20 +00:00
Gabor Greif 33ae80bff7 reapply r101364, which has been backed out in r101368
with a fix

rotate CallInst operands, i.e. move callee to the back
of the operand array

the motivation for this patch is laid out in my mail to llvm-commits:
more efficient access to operands and callee, faster callgraph-construction,
smaller compiler binary

llvm-svn: 101397
2010-04-15 20:51:13 +00:00
Gabor Greif 9fd00c7d25 back out r101364, as it trips the linux nightlybot on some clang C++ tests
llvm-svn: 101368
2010-04-15 12:46:56 +00:00
Gabor Greif aafd209632 rotate CallInst operands, i.e. move callee to the back
of the operand array

the motivation for this patch is laid out in my mail to llvm-commits:
more efficient access to operands and callee, faster callgraph-construction,
smaller compiler binary

llvm-svn: 101364
2010-04-15 10:49:53 +00:00
Mon P Wang c576ee9040 Reapply address space patch after fixing an issue in MemCopyOptimizer.
Added support for address spaces and added an isVolatile field to memcpy, memmove, and memset,
e.g., llvm.memcpy.i32(i8*, i8*, i32, i32) -> llvm.memcpy.p0i8.p0i8.i32(i8*, i8*, i32, i32, i1)

llvm-svn: 100304
2010-04-04 03:10:48 +00:00
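
In call form, the change looks roughly like this; the arguments are destination, source, length, alignment, and (new) the isVolatile flag:

  ; old: address space 0 only, no volatile flag
  declare void @llvm.memcpy.i32(i8*, i8*, i32, i32)
  call void @llvm.memcpy.i32(i8* %dst, i8* %src, i32 %len, i32 4)

  ; new: pointer types (and thus address spaces) mangled into the name, trailing i1 isVolatile
  declare void @llvm.memcpy.p0i8.p0i8.i32(i8*, i8*, i32, i32, i1)
  call void @llvm.memcpy.p0i8.p0i8.i32(i8* %dst, i8* %src, i32 %len, i32 4, i1 false)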
Mon P Wang 999c1b927b Revert r100191 since it breaks objc in clang
llvm-svn: 100199
2010-04-02 18:43:02 +00:00
Mon P Wang a972ab8564 Reapply address space patch after fixing an issue in MemCopyOptimizer.
Added support for address spaces and added an isVolatile field to memcpy, memmove, and memset,
e.g., llvm.memcpy.i32(i8*, i8*, i32, i32) -> llvm.memcpy.p0i8.p0i8.i32(i8*, i8*, i32, i32, i1)

llvm-svn: 100191
2010-04-02 18:04:15 +00:00
Bob Wilson 6f7fd28824 Revert Mon Ping's change 99928, since it broke all the llvm-gcc buildbots.
llvm-svn: 99948
2010-03-30 22:27:04 +00:00
Mon P Wang 7460571381 Added support for address spaces and added an isVolatile field to memcpy, memmove, and memset,
e.g., llvm.memcpy.i32(i8*, i8*, i32, i32) -> llvm.memcpy.p0i8.p0i8.i32(i8*, i8*, i32, i32, i1)
An update of LangRef will occur in a subsequent checkin.

llvm-svn: 99928
2010-03-30 20:55:56 +00:00
Eric Christopher 1d38538fb6 Temporarily revert this, it's causing an issue with an internal project.
llvm-svn: 99451
2010-03-24 23:35:21 +00:00
Chris Lattner 00eeac4179 add some accessors to callsite/callinst/invokeinst to check
for the noinline attribute, and make the inliner refuse to
inline a call site when the call site is marked noinline even
if the callee isn't.  This fixes PR6682.

llvm-svn: 99341
2010-03-23 22:59:07 +00:00
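
A hypothetical example of the call-site form this honors; the attribute sits on the call, not on the callee:

define void @small() {
entry:
  ret void
}

define void @caller() {
entry:
  call void @small() noinline
  ret void
}

The inliner now leaves that particular call alone even though @small itself carries no noinline attribute, while other calls to @small remain eligible for inlining.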
Devang Patel be94f23992 Remove dead debug info intrinsics.
 Intrinsic::dbg_stoppoint
 Intrinsic::dbg_region_start
 Intrinsic::dbg_region_end
 Intrinsic::dbg_func_start
AutoUpgrade simply ignores these intrinsics now.

llvm-svn: 92557
2010-01-05 01:10:40 +00:00
Devang Patel f6eeaebd76 Implement support to debug inlined functions.
llvm-svn: 86748
2009-11-10 23:06:00 +00:00
Chris Lattner c6b3b25f94 Fix a pretty serious misfeature of the inliner: if it inlines a function
with multiple return values it inserts a PHI to merge them all together.
However, if the return values are all the same, it ends up with a pointless
PHI, and that pointless PHI happens to block SRoA in at least a silly C++
example written by Doug, and probably others.  This
fixes rdar://7339069.

llvm-svn: 85206
2009-10-27 05:39:41 +00:00
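
A minimal sketch of the pattern, with hypothetical functions; the callee returns the same value along every path:

define internal i32* @get(i1 %c, i32* %p) {
entry:
  br i1 %c, label %a, label %b
a:
  ret i32* %p
b:
  ret i32* %p
}

When @get is inlined, the two returns are merged in a new exit block, and the naive merge builds phi i32* [ %p, %a ], [ %p, %b ]. Every incoming value is identical, so the PHI adds nothing, yet (as the message notes) such a pointless PHI over a pointer was enough to keep SRoA from splitting an alloca in Doug's example; the fix avoids leaving that PHI behind.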
Chris Lattner 88b36f1140 Simplify some code (first hunk) and fix PR5208 (second hunk) by
updating the callgraph when introducing a call.

llvm-svn: 84310
2009-10-17 05:39:39 +00:00
Duncan Sands 9ed7b16bf3 Introduce and use convenience methods for getting pointer types
where the element is of a basic builtin type.  For example, to get
an i8* use getInt8PtrTy.

llvm-svn: 83379
2009-10-06 15:40:36 +00:00
Nick Lewycky 42fb7452df Instruction::clone does not need to take an LLVMContext&. Remove that and
update all the callers.

llvm-svn: 82889
2009-09-27 07:38:41 +00:00
Eric Christopher 66d8555f7e Fix comment.
llvm-svn: 81138
2009-09-06 22:20:54 +00:00
Chris Lattner 8900f3ec57 remove a bunch of explicit code previously needed to update the
callgraph.  This is now dead because RAUW does the job.

llvm-svn: 80703
2009-09-01 18:44:06 +00:00
Chris Lattner 063d06527e Change CallGraphNode to maintain its Function as an AssertingVH
for sanity.  This didn't turn up any bugs.

Change CallGraphNode to maintain its "callsite" information in the 
call edges list as a WeakVH instead of as an instruction*.  This fixes
a broad class of dangling pointer bugs, and makes CallGraph have a number
of useful invariants again.  This fixes the class of problem indicated
by PR4029 and PR3601.

llvm-svn: 80663
2009-09-01 06:31:31 +00:00
Devang Patel 80ae34974b Reapply 79977.
Use MDNodes to encode debug info in llvm IR.

llvm-svn: 80406
2009-08-28 23:24:31 +00:00
Chris Lattner b1cba3f91e enhance InlineFunction to be able to optionally return
the list of static allocas that it inlined.

llvm-svn: 80203
2009-08-27 04:20:52 +00:00
Chris Lattner d84dbb3443 smallvectorize the list of returns built by CloneAndPruneFunctionInto.
llvm-svn: 80202
2009-08-27 04:02:30 +00:00
Chris Lattner 5eef6ad6a9 reduce indentation, factor some stuff out to a static helper function,
and other code cleanups.  No functionality change.

llvm-svn: 80199
2009-08-27 03:51:50 +00:00
Devang Patel f08e35d9dc Revert 79977. It causes llvm-gcc bootstrap failures on some platforms.
llvm-svn: 80073
2009-08-26 05:01:18 +00:00
Devang Patel 02aac922b4 Update DebugInfo interface to use metadata, instead of specially named llvm.dbg.... global variables, to encode debugging information in llvm IR. This is mostly a mechanical change that tests metadata support very well.
This change speeds up llvm-gcc by more than 6% at "-O0 -g" (measured by compiling InstructionCombining.cpp!)

llvm-svn: 79977
2009-08-25 05:24:07 +00:00
Owen Anderson 55f1c09e31 Push LLVMContexts through the IntegerType APIs.
llvm-svn: 78948
2009-08-13 21:58:54 +00:00
Owen Anderson b292b8ce70 Move more code back to 2.5 APIs.
llvm-svn: 77635
2009-07-30 23:03:37 +00:00
Owen Anderson 4056ca9568 Move types back to the 2.5 API.
llvm-svn: 77516
2009-07-29 22:17:13 +00:00
Owen Anderson 487375e9a2 Move ConstantExpr to 2.5 API.
llvm-svn: 77494
2009-07-29 18:55:55 +00:00
Owen Anderson edb4a70325 Revert the ConstantInt constructors back to their 2.5 forms where possible, thanks to contexts-on-types. More to come.
llvm-svn: 77011
2009-07-24 23:12:02 +00:00
Owen Anderson 47db941fd3 Get rid of the Pass+Context magic.
llvm-svn: 76702
2009-07-22 00:24:57 +00:00
Owen Anderson 4fdeba9706 Revert yesterday's change by removing the LLVMContext parameter to AllocaInst and MallocInst.
llvm-svn: 75863
2009-07-15 23:53:25 +00:00
Owen Anderson b6b2530000 Move EVER MORE stuff over to LLVMContext.
llvm-svn: 75703
2009-07-14 23:09:55 +00:00
Owen Anderson 1e5f00e7a7 This started as a small change, I swear. Unfortunately, lots of things call the [I|F]CmpInst constructors. Who knew!?
llvm-svn: 75200
2009-07-09 23:48:35 +00:00
Owen Anderson e70b637033 More LLVMContext-ification.
llvm-svn: 74807
2009-07-05 22:41:43 +00:00
Eli Friedman 36b9026fa7 PR4123: don't crash when inlining a call which uses its own result.
llvm-svn: 71199
2009-05-08 00:22:04 +00:00
Devang Patel 046bf624b9 While inlining, clone llvm.dbg.func.start intrinsic and adjust
llvm.dbg.region.end intrinsic. This nested llvm.dbg.func.start/llvm.dbg.region.end pair now enables DW_TAG_inlined_subroutine support in the code generator.

llvm-svn: 69118
2009-04-15 00:17:06 +00:00
Devang Patel 4ce6e69022 Update call graph after inlining invoke.
Patch by Jay Foad.

llvm-svn: 68120
2009-03-31 17:36:12 +00:00
Dale Johannesen 845e582cbe Revert unintended commit.
llvm-svn: 66001
2009-03-04 02:09:48 +00:00