[PM] Provide an initial, minimal port of the inliner to the new pass manager.
This doesn't implement *every* feature of the existing inliner, but
tries to implement the most important ones for building a functional
optimization pipeline and beginning to sort out bugs, regressions, and
other problems.
Notable but intentional omissions:
- No alloca merging support. Why? Because it isn't clear we want to do
this at all. Active discussion and investigation is going on to remove
it, so for simplicity I omitted it.
- No support for trying to iterate on "internally" devirtualized calls.
Why? Because it adds what I suspect is inappropriate coupling for
little or no benefit. We will have an outer iteration system that
tracks devirtualization including that from function passes and
iterates already. We should improve that rather than approximate it
here.
- Optimization remarks. Why? Purely to make the patch smaller, no other
reason at all.
The last one I'll probably work on almost immediately. But I wanted to
skip it in the initial patch to try to focus the change as much as
possible as there is already a lot of code moving around and both of
these *could* be skipped without really disrupting the core logic.
A summary of the different things happening here:
1) Adding the usual new PM class and rigging.
2) Fixing minor underlying assumptions in the inline cost analysis or
inline logic that don't generally hold in the new PM world.
3) Adding the core pass logic which is in essence a loop over the calls
in the nodes in the call graph. This is a bit duplicated from the old
inliner, but only a handful of lines could realistically be shared.
(I tried at first, and it really didn't help anything.) All told,
this is only about 100 lines of code, and most of that is the
mechanics of wiring up analyses from the new PM world.
4) Updating the LazyCallGraph (in the new PM) based on the *newly
inlined* calls and references. This is very minimal because we cannot
form cycles.
5) When inlining removes the last use of a function, eagerly nuking the
body of the function so that any "one use remaining" inline cost
heuristics are immediately refined, and queuing these functions to be
completely deleted once inlining is complete and the call graph
updated to reflect that they have become dead.
6) After all the inlining for a particular function, updating the
LazyCallGraph and the CGSCC pass manager to reflect the
function-local simplifications that are done immediately and
internally by the inline utilities. These are the exact same
fundamental set of CG updates done by arbitrary function passes.
7) Adding a bunch of test cases to specifically target CGSCC and other
subtle aspects in the new PM world.
Many thanks to the careful review from Easwaran and Sanjoy and others!
Differential Revision: https://reviews.llvm.org/D24226
llvm-svn: 290161
2016-12-20 11:15:32 +08:00
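The "last callsite" behavior that items 3 and 5 above describe (and that the tests below exercise) can be sketched as a small standalone model: an internal function's single remaining call site is inlined regardless of cost, and a function whose last use is removed is eagerly marked dead so later cost queries see refined call counts. This is an illustrative Python sketch only, not LLVM's implementation; the function name, data shapes, and cost numbers are all invented for the example.

```python
def inline_calls(functions, calls, threshold):
    """Toy model of last-callsite inlining.

    functions: {callee_name: (is_internal, cost)}
    calls: ordered list of callee names, one entry per call site
    Returns (inlined, dead): call sites inlined and functions whose
    bodies were eagerly deleted once their last use disappeared.
    """
    # Count remaining call sites per callee; updated as inlining proceeds,
    # which is what makes the "last callsite" test dynamic (item 5).
    remaining = {}
    for callee in calls:
        remaining[callee] = remaining.get(callee, 0) + 1

    inlined, dead = [], []
    for callee in calls:
        is_internal, cost = functions[callee]
        # The special rule: the last call site of an internal function is
        # inlined regardless of cost.
        last_call = is_internal and remaining[callee] == 1
        if cost <= threshold or last_call:
            inlined.append(callee)
            remaining[callee] -= 1  # inlining consumes the call site
            if is_internal and remaining[callee] == 0:
                dead.append(callee)  # eagerly nuke the now-dead body
        # A call we decline to inline keeps its site, so the count stays.
    return inlined, dead
```

With a zero threshold this reproduces the shape of @test1 below: a costly internal function called once is inlined and deleted, while the same body called twice is left alone.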
; RUN: opt < %s -passes='cgscc(inline)' -inline-threshold=0 -S | FileCheck %s

; The 'test1_' prefixed functions test the basic 'last callsite' inline
; threshold adjustment where we specifically inline the last call site of an
; internal function regardless of cost.

define internal void @test1_f() {
entry:
  %p = alloca i32
  store volatile i32 0, i32* %p
  store volatile i32 0, i32* %p
  store volatile i32 0, i32* %p
  store volatile i32 0, i32* %p
  store volatile i32 0, i32* %p
  store volatile i32 0, i32* %p
  store volatile i32 0, i32* %p
  store volatile i32 0, i32* %p
  ret void
}

; Identical to @test1_f but doesn't get inlined because there is more than one
; call. If this *does* get inlined, the body used both here and in @test1_f
; isn't a good test for different threshold based on the last call.
define internal void @test1_g() {
entry:
  %p = alloca i32
  store volatile i32 0, i32* %p
  store volatile i32 0, i32* %p
  store volatile i32 0, i32* %p
  store volatile i32 0, i32* %p
  store volatile i32 0, i32* %p
  store volatile i32 0, i32* %p
  store volatile i32 0, i32* %p
  store volatile i32 0, i32* %p
  ret void
}

define void @test1() {
; CHECK-LABEL: define void @test1()
entry:
  call void @test1_f()
; CHECK-NOT: @test1_f

  call void @test1_g()
  call void @test1_g()
; CHECK: call void @test1_g()
; CHECK: call void @test1_g()

  ret void
}


; The 'test2_' prefixed functions test that we can discover the last callsite
; bonus after having inlined the prior call site. For this to work, we need
; a callsite dependent cost so we have a trivial predicate guarding all the
; cost, and set that in a particular direction.

define internal void @test2_f(i1 %b) {
entry:
  %p = alloca i32
  br i1 %b, label %then, label %exit

then:
  store volatile i32 0, i32* %p
  store volatile i32 0, i32* %p
  store volatile i32 0, i32* %p
  store volatile i32 0, i32* %p
  store volatile i32 0, i32* %p
  store volatile i32 0, i32* %p
  store volatile i32 0, i32* %p
  store volatile i32 0, i32* %p
  br label %exit

exit:
  ret void
}

; Identical to @test2_f but doesn't get inlined because there is more than one
; call. If this *does* get inlined, the body used both here and in @test2_f
; isn't a good test for different threshold based on the last call.
define internal void @test2_g(i1 %b) {
entry:
  %p = alloca i32
  br i1 %b, label %then, label %exit

then:
  store volatile i32 0, i32* %p
  store volatile i32 0, i32* %p
  store volatile i32 0, i32* %p
  store volatile i32 0, i32* %p
  store volatile i32 0, i32* %p
  store volatile i32 0, i32* %p
  store volatile i32 0, i32* %p
  store volatile i32 0, i32* %p
  br label %exit

exit:
  ret void
}

define void @test2() {
; CHECK-LABEL: define void @test2()
entry:
  ; The first call is trivial to inline due to the argument.
  call void @test2_f(i1 false)
; CHECK-NOT: @test2_f

  ; The second call is too expensive to inline unless we update the number of
  ; calls after inlining the first.
  call void @test2_f(i1 true)
; CHECK-NOT: @test2_f

  ; Sanity check that two calls with the hard predicate remain uninlined.
  call void @test2_g(i1 true)
  call void @test2_g(i1 true)
; CHECK: call void @test2_g(i1 true)
; CHECK: call void @test2_g(i1 true)

  ret void
}


; The 'test3_' prefixed functions are similar to the 'test2_' functions but the
; relative order of the trivial and hard to inline callsites is reversed. This
; checks that the order of calls isn't significant to whether we observe the
; "last callsite" threshold difference because the next-to-last gets inlined.
; FIXME: We don't currently catch this case.

define internal void @test3_f(i1 %b) {
entry:
  %p = alloca i32
  br i1 %b, label %then, label %exit

then:
  store volatile i32 0, i32* %p
  store volatile i32 0, i32* %p
  store volatile i32 0, i32* %p
  store volatile i32 0, i32* %p
  store volatile i32 0, i32* %p
  store volatile i32 0, i32* %p
  store volatile i32 0, i32* %p
  store volatile i32 0, i32* %p
  br label %exit

exit:
  ret void
}

; Identical to @test3_f but doesn't get inlined because there is more than one
; call. If this *does* get inlined, the body used both here and in @test3_f
; isn't a good test for different threshold based on the last call.
define internal void @test3_g(i1 %b) {
entry:
  %p = alloca i32
  br i1 %b, label %then, label %exit

then:
  store volatile i32 0, i32* %p
  store volatile i32 0, i32* %p
  store volatile i32 0, i32* %p
  store volatile i32 0, i32* %p
  store volatile i32 0, i32* %p
  store volatile i32 0, i32* %p
  store volatile i32 0, i32* %p
  store volatile i32 0, i32* %p
  br label %exit

exit:
  ret void
}

define void @test3() {
; CHECK-LABEL: define void @test3()
entry:
  ; The first call is too expensive to inline unless we update the number of
  ; calls after inlining the second.
  call void @test3_f(i1 true)
; FIXME: We should inline this call without iteration.
; CHECK: call void @test3_f(i1 true)

  ; But the second call is trivial to inline due to the argument.
  call void @test3_f(i1 false)
; CHECK-NOT: @test3_f

  ; Sanity check that two calls with the hard predicate remain uninlined.
  call void @test3_g(i1 true)
  call void @test3_g(i1 true)
; CHECK: call void @test3_g(i1 true)
; CHECK: call void @test3_g(i1 true)

  ret void
}


; The 'test4_' prefixed functions are similar to the 'test2_' prefixed
; functions but include unusual constant expressions that make discovering that
; a function is dead harder.

define internal void @test4_f(i1 %b) {
entry:
  %p = alloca i32
  br i1 %b, label %then, label %exit

then:
  store volatile i32 0, i32* %p
  store volatile i32 0, i32* %p
  store volatile i32 0, i32* %p
  store volatile i32 0, i32* %p
  store volatile i32 0, i32* %p
  store volatile i32 0, i32* %p
  store volatile i32 0, i32* %p
  store volatile i32 0, i32* %p
  br label %exit

exit:
  ret void
}

; Identical to @test4_f but doesn't get inlined because there is more than one
; call. If this *does* get inlined, the body used both here and in @test4_f
; isn't a good test for different threshold based on the last call.
define internal void @test4_g(i1 %b) {
entry:
  %p = alloca i32
  br i1 %b, label %then, label %exit

then:
  store volatile i32 0, i32* %p
  store volatile i32 0, i32* %p
  store volatile i32 0, i32* %p
  store volatile i32 0, i32* %p
  store volatile i32 0, i32* %p
  store volatile i32 0, i32* %p
  store volatile i32 0, i32* %p
  store volatile i32 0, i32* %p
  br label %exit

exit:
  ret void
}

define void @test4() {
; CHECK-LABEL: define void @test4()
entry:
  ; The first call is trivial to inline due to the argument. However, this
  ; argument also uses the function being called as part of a complex
  ; constant expression. Merely inlining and deleting the call isn't enough to
  ; drop the use count here; we need to GC the dead constant expression as
  ; well.
  call void @test4_f(i1 icmp ne (i64 ptrtoint (void (i1)* @test4_f to i64), i64 ptrtoint (void (i1)* @test4_f to i64)))
; CHECK-NOT: @test4_f

  ; The second call is too expensive to inline unless we update the number of
  ; calls after inlining the first.
  call void @test4_f(i1 true)
; CHECK-NOT: @test4_f

  ; And check that a single call to a function which is used by a complex
  ; constant expression cannot be inlined because the constant expression forms
  ; a second use. If this part starts failing we need to use more complex
  ; constant expressions to reference a particular function with them.
  %sink = alloca i1
  store volatile i1 icmp ne (i64 ptrtoint (void (i1)* @test4_g to i64), i64 ptrtoint (void (i1)* @test4_g to i64)), i1* %sink
  call void @test4_g(i1 true)
; CHECK: store volatile i1 false
; CHECK: call void @test4_g(i1 true)

  ret void
}