; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py
; RUN: llc -mtriple=x86_64-unknown-unknown < %s | FileCheck %s
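; These tests exercise CodeGenPrepare's transform that turns some selects into
; branches: a cmov fed by a compare on a one-use load gets no branch
; prediction, so an out-of-order CPU can often do better with a predicted
; branch than by waiting on the slow load+compare.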

; cmp with single-use load, should not form branch.
define i32 @test1(double %a, double* nocapture %b, i32 %x, i32 %y) {
; CHECK-LABEL: test1:
; CHECK:       # %bb.0:
; CHECK-NEXT:    movl %esi, %eax
; CHECK-NEXT:    ucomisd (%rdi), %xmm0
; CHECK-NEXT:    cmovbel %edx, %eax
; CHECK-NEXT:    retq
  %load = load double, double* %b, align 8
  %cmp = fcmp olt double %load, %a
  %cond = select i1 %cmp, i32 %x, i32 %y
  ret i32 %cond
}

; Sanity check: no load.
define i32 @test2(double %a, double %b, i32 %x, i32 %y) {
; CHECK-LABEL: test2:
; CHECK:       # %bb.0:
; CHECK-NEXT:    movl %edi, %eax
; CHECK-NEXT:    ucomisd %xmm1, %xmm0
; CHECK-NEXT:    cmovbel %esi, %eax
; CHECK-NEXT:    retq
  %cmp = fcmp ogt double %a, %b
  %cond = select i1 %cmp, i32 %x, i32 %y
  ret i32 %cond
}

; Multiple uses of the load.
define i32 @test4(i32 %a, i32* nocapture %b, i32 %x, i32 %y) {
; CHECK-LABEL: test4:
; CHECK:       # %bb.0:
; CHECK-NEXT:    movl (%rsi), %eax
; CHECK-NEXT:    cmpl %edi, %eax
; CHECK-NEXT:    cmovael %ecx, %edx
; CHECK-NEXT:    addl %edx, %eax
; CHECK-NEXT:    retq
  %load = load i32, i32* %b, align 4
  %cmp = icmp ult i32 %load, %a
  %cond = select i1 %cmp, i32 %x, i32 %y
  %add = add i32 %cond, %load
  ret i32 %add
}

; Multiple uses of the cmp.
define i32 @test5(i32 %a, i32* nocapture %b, i32 %x, i32 %y) {
; CHECK-LABEL: test5:
; CHECK:       # %bb.0:
; CHECK-NEXT:    movl %ecx, %eax
; CHECK-NEXT:    cmpl %edi, (%rsi)
; CHECK-NEXT:    cmoval %edi, %eax
; CHECK-NEXT:    cmovael %edx, %eax
; CHECK-NEXT:    retq
  %load = load i32, i32* %b, align 4
  %cmp = icmp ult i32 %load, %a
  %cmp1 = icmp ugt i32 %load, %a
  %cond = select i1 %cmp1, i32 %a, i32 %y
  %cond5 = select i1 %cmp, i32 %cond, i32 %x
  ret i32 %cond5
}

; Zero-extended select.
define void @test6(i32 %a, i32 %x, i32* %y.ptr, i64* %z.ptr) {
; CHECK-LABEL: test6:
; CHECK:       # %bb.0: # %entry
; CHECK-NEXT:    # kill: def $esi killed $esi def $rsi
; CHECK-NEXT:    testl %edi, %edi
; CHECK-NEXT:    cmovnsl (%rdx), %esi
; CHECK-NEXT:    movq %rsi, (%rcx)
; CHECK-NEXT:    retq
entry:
  %y = load i32, i32* %y.ptr
  %cmp = icmp slt i32 %a, 0
  %z = select i1 %cmp, i32 %x, i32 %y
  %z.ext = zext i32 %z to i64
  store i64 %z.ext, i64* %z.ptr
  ret void
}

; If a select is not obviously predictable, don't turn it into a branch.
define i32 @weighted_select1(i32 %a, i32 %b) {
; CHECK-LABEL: weighted_select1:
; CHECK:       # %bb.0:
; CHECK-NEXT:    movl %esi, %eax
; CHECK-NEXT:    testl %edi, %edi
; CHECK-NEXT:    cmovnel %edi, %eax
; CHECK-NEXT:    retq
  %cmp = icmp ne i32 %a, 0
  %sel = select i1 %cmp, i32 %a, i32 %b, !prof !0
  ret i32 %sel
}

; If a select is obviously predictable, turn it into a branch.
define i32 @weighted_select2(i32 %a, i32 %b) {
; CHECK-LABEL: weighted_select2:
; CHECK:       # %bb.0:
; CHECK-NEXT:    movl %edi, %eax
; CHECK-NEXT:    testl %edi, %edi
; CHECK-NEXT:    jne .LBB6_2
; CHECK-NEXT:  # %bb.1: # %select.false
; CHECK-NEXT:    movl %esi, %eax
; CHECK-NEXT:  .LBB6_2: # %select.end
; CHECK-NEXT:    retq
  %cmp = icmp ne i32 %a, 0
  %sel = select i1 %cmp, i32 %a, i32 %b, !prof !1
  ret i32 %sel
}

; Note the reversed profile weights: it doesn't matter if it's
; obviously true or obviously false.
; Either one should become a branch rather than a conditional move.
; TODO: But likely true vs. likely false should affect basic block placement?
define i32 @weighted_select3(i32 %a, i32 %b) {
; CHECK-LABEL: weighted_select3:
; CHECK:       # %bb.0:
; CHECK-NEXT:    movl %edi, %eax
; CHECK-NEXT:    testl %edi, %edi
; CHECK-NEXT:    je .LBB7_1
; CHECK-NEXT:  # %bb.2: # %select.end
; CHECK-NEXT:    retq
; CHECK-NEXT:  .LBB7_1: # %select.false
; CHECK-NEXT:    movl %esi, %eax
; CHECK-NEXT:    retq
  %cmp = icmp ne i32 %a, 0
  %sel = select i1 %cmp, i32 %a, i32 %b, !prof !2
  ret i32 %sel
}

; Weightlessness is no reason to die.
define i32 @unweighted_select(i32 %a, i32 %b) {
; CHECK-LABEL: unweighted_select:
; CHECK:       # %bb.0:
; CHECK-NEXT:    movl %esi, %eax
; CHECK-NEXT:    testl %edi, %edi
; CHECK-NEXT:    cmovnel %edi, %eax
; CHECK-NEXT:    retq
  %cmp = icmp ne i32 %a, 0
  %sel = select i1 %cmp, i32 %a, i32 %b, !prof !3
  ret i32 %sel
}

!0 = !{!"branch_weights", i32 1, i32 99}
!1 = !{!"branch_weights", i32 1, i32 100}
!2 = !{!"branch_weights", i32 100, i32 1}
!3 = !{!"branch_weights", i32 0, i32 0}
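; The two branch_weights operands are the profile counts for the select's true
; and false values, in that order. As the tests above demonstrate, !1 (1:100)
; and the reversed !2 (100:1) are skewed enough to count as predictable,
; !0 (1:99) is not, and the zero weights in !3 carry no usable information.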