; NOTE: Assertions have been autogenerated by utils/update_test_checks.py
; RUN: llc -mtriple=x86_64-unknown-unknown < %s | FileCheck %s
; CodeGenPrepare: Add a transform to turn selects into branches in some cases.
;
; This came up when a change in block placement formed a cmov and slowed down a
; hot loop by 50%:
;     ucomisd (%rdi), %xmm0
;     cmovbel %edx, %esi
;
; cmov is a really bad choice in this context because it doesn't get branch
; prediction. If we emit it as a branch, an out-of-order CPU can do a better
; job (if the branch is predicted right) and avoid waiting for the slow
; load+compare instruction to finish. Of course it won't help if the branch is
; unpredictable, but those are really rare in practice.
;
; This patch uses a dumb, conservative heuristic: it turns all cmovs that have
; one use and a direct memory operand into branches. cmovs usually save some
; code size, so we disable the transform in -Os mode. In-order architectures
; are also unlikely to benefit; those are covered by the
; "predictableSelectIsExpensive" flag.
;
; It would be better to reuse branch probability info here, but BPI doesn't
; currently support select instructions. It would make sense to use the same
; heuristics as the if-converter pass, which performs the opposite direction
; of this transform.
;
; The test suite shows a small improvement here and there on corei7-level
; machines, but the actual results depend a lot on the microarchitecture used.
; The transformation is currently disabled by default and enabled by passing
; the -enable-cgp-select2branch flag to the code generator.
;
; Thanks to Chandler for the initial test case, and to Evan Cheng for
; providing comments and test-suite numbers that were more stable than mine :)
;
; llvm-svn: 156234

; cmp with single-use load, should not form branch.
define i32 @test1(double %a, double* nocapture %b, i32 %x, i32 %y) {
; CHECK-LABEL: test1:
; CHECK: # BB#0:
; CHECK-NEXT: ucomisd (%rdi), %xmm0
; CHECK-NEXT: cmovbel %edx, %esi
; CHECK-NEXT: movl %esi, %eax
; CHECK-NEXT: retq
;
%load = load double, double* %b, align 8
%cmp = fcmp olt double %load, %a
%cond = select i1 %cmp, i32 %x, i32 %y
ret i32 %cond
}

; Sanity check: no load.
define i32 @test2(double %a, double %b, i32 %x, i32 %y) {
; CHECK-LABEL: test2:
; CHECK: # BB#0:
; CHECK-NEXT: ucomisd %xmm1, %xmm0
; CHECK-NEXT: cmovbel %esi, %edi
; CHECK-NEXT: movl %edi, %eax
; CHECK-NEXT: retq
;
%cmp = fcmp ogt double %a, %b
%cond = select i1 %cmp, i32 %x, i32 %y
ret i32 %cond
}

; Multiple uses of the load.
define i32 @test4(i32 %a, i32* nocapture %b, i32 %x, i32 %y) {
; CHECK-LABEL: test4:
; CHECK: # BB#0:
; CHECK-NEXT: movl (%rsi), %eax
; CHECK-NEXT: cmpl %edi, %eax
; CHECK-NEXT: cmovael %ecx, %edx
; CHECK-NEXT: addl %edx, %eax
; CHECK-NEXT: retq
;
%load = load i32, i32* %b, align 4
%cmp = icmp ult i32 %load, %a
%cond = select i1 %cmp, i32 %x, i32 %y
%add = add i32 %cond, %load
ret i32 %add
}

; Multiple uses of the cmp.
define i32 @test5(i32 %a, i32* nocapture %b, i32 %x, i32 %y) {
; CHECK-LABEL: test5:
; CHECK: # BB#0:
; CHECK-NEXT: cmpl %edi, (%rsi)
; CHECK-NEXT: cmoval %edi, %ecx
; CHECK-NEXT: cmovael %edx, %ecx
; CHECK-NEXT: movl %ecx, %eax
; CHECK-NEXT: retq
;
%load = load i32, i32* %b, align 4
%cmp = icmp ult i32 %load, %a
%cmp1 = icmp ugt i32 %load, %a
%cond = select i1 %cmp1, i32 %a, i32 %y
%cond5 = select i1 %cmp, i32 %cond, i32 %x
ret i32 %cond5
}

; If a select is not obviously predictable, don't turn it into a branch.
define i32 @weighted_select1(i32 %a, i32 %b) {
; CHECK-LABEL: weighted_select1:
; CHECK: # BB#0:
; CHECK-NEXT: testl %edi, %edi
; CHECK-NEXT: cmovnel %edi, %esi
; CHECK-NEXT: movl %esi, %eax
; CHECK-NEXT: retq
;
%cmp = icmp ne i32 %a, 0
%sel = select i1 %cmp, i32 %a, i32 %b, !prof !0
ret i32 %sel
}

; If a select is obviously predictable, turn it into a branch.
define i32 @weighted_select2(i32 %a, i32 %b) {
; CHECK-LABEL: weighted_select2:
; CHECK: # BB#0:
; CHECK-NEXT: testl %edi, %edi
; CHECK-NEXT: jne [[LABEL_BB5:.*]]
; CHECK: movl %esi, %edi
; CHECK-NEXT: [[LABEL_BB5]]
; CHECK-NEXT: movl %edi, %eax
; CHECK-NEXT: retq
;
%cmp = icmp ne i32 %a, 0
%sel = select i1 %cmp, i32 %a, i32 %b, !prof !1
ret i32 %sel
}

; Note the reversed profile weights: it doesn't matter if it's
; obviously true or obviously false.
; Either one should become a branch rather than a conditional move.
; TODO: But likely true vs. likely false should affect basic block placement?
define i32 @weighted_select3(i32 %a, i32 %b) {
; CHECK-LABEL: weighted_select3:
; CHECK: # BB#0:
; CHECK-NEXT: testl %edi, %edi
; CHECK-NEXT: je [[LABEL_BB6:.*]]
; CHECK: movl %edi, %eax
; CHECK-NEXT: retq
; CHECK: [[LABEL_BB6]]
; CHECK-NEXT: movl %esi, %edi
; CHECK-NEXT: movl %edi, %eax
; CHECK-NEXT: retq
;
%cmp = icmp ne i32 %a, 0
%sel = select i1 %cmp, i32 %a, i32 %b, !prof !2
ret i32 %sel
}

; Weightlessness is no reason to die.
define i32 @unweighted_select(i32 %a, i32 %b) {
; CHECK-LABEL: unweighted_select:
; CHECK: # BB#0:
; CHECK-NEXT: testl %edi, %edi
; CHECK-NEXT: cmovnel %edi, %esi
; CHECK-NEXT: movl %esi, %eax
; CHECK-NEXT: retq
;
%cmp = icmp ne i32 %a, 0
%sel = select i1 %cmp, i32 %a, i32 %b, !prof !3
ret i32 %sel
}

!0 = !{!"branch_weights", i32 1, i32 99}
!1 = !{!"branch_weights", i32 1, i32 100}
!2 = !{!"branch_weights", i32 100, i32 1}
!3 = !{!"branch_weights", i32 0, i32 0}