llvm-project/llvm/test/CodeGen/X86/flags-copy-lowering.mir

# RUN: llc -run-pass x86-flags-copy-lowering -verify-machineinstrs -o - %s | FileCheck %s
#
# Lower various interesting copy patterns of EFLAGS without using LAHF/SAHF.
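#
# The x86-flags-copy-lowering pass rewrites each COPY of $eflags by saving the
# conditions that are actually consumed into GPRs with SETcc while the flags
# are still live, then recreating the needed state at each use of the reloaded
# copy: a TEST8rr of the saved byte feeds rewritten jCC/cmovCC users, while CF
# and OF consumers (adc, sbb, adcx, adox, rcl, rcr) have the flag recreated
# arithmetically from the saved byte. The tests below exercise these patterns
# with a call clobbering $eflags between the copy and its uses.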
--- |
target triple = "x86_64-unknown-unknown"
declare void @foo()
define i32 @test_branch(i64 %a, i64 %b) {
entry:
call void @foo()
ret i32 0
}
define i32 @test_branch_fallthrough(i64 %a, i64 %b) {
entry:
call void @foo()
ret i32 0
}
define void @test_setcc(i64 %a, i64 %b) {
entry:
call void @foo()
ret void
}
define void @test_cmov(i64 %a, i64 %b) {
entry:
call void @foo()
ret void
}
define void @test_adc(i64 %a, i64 %b) {
entry:
call void @foo()
ret void
}
define void @test_sbb(i64 %a, i64 %b) {
entry:
call void @foo()
ret void
}
define void @test_adcx(i64 %a, i64 %b) {
entry:
call void @foo()
ret void
}
define void @test_adox(i64 %a, i64 %b) {
entry:
call void @foo()
ret void
}
define void @test_rcl(i64 %a, i64 %b) {
entry:
call void @foo()
ret void
}
define void @test_rcr(i64 %a, i64 %b) {
entry:
call void @foo()
ret void
}
define void @test_setb_c(i64 %a, i64 %b) {
entry:
call void @foo()
ret void
}
define i64 @test_branch_with_livein_and_kill(i64 %a, i64 %b) {
entry:
call void @foo()
ret i64 0
}
define i64 @test_branch_with_interleaved_livein_and_kill(i64 %a, i64 %b) {
entry:
call void @foo()
ret i64 0
}
define i64 @test_mid_cycle_copies(i64 %a, i64 %b) {
entry:
call void @foo()
ret i64 0
}
define i32 @test_existing_setcc(i64 %a, i64 %b) {
entry:
call void @foo()
ret i32 0
}
define i32 @test_existing_setcc_memory(i64 %a, i64 %b) {
entry:
call void @foo()
ret i32 0
}
...
---
name: test_branch
# CHECK-LABEL: name: test_branch
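# Two conditional branches consume the reloaded flags. The checks verify that
# each JA/JB is replaced by a TEST8rr of the corresponding saved SETA/SETB byte
# plus a JNE, with a new block (bb.4) inserted to chain the second test.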
liveins:
- { reg: '$rdi', virtual-reg: '%0' }
- { reg: '$rsi', virtual-reg: '%1' }
body: |
bb.0:
successors: %bb.1, %bb.2, %bb.3
liveins: $rdi, $rsi
%0:gr64 = COPY $rdi
%1:gr64 = COPY $rsi
CMP64rr %0, %1, implicit-def $eflags
%2:gr64 = COPY $eflags
; CHECK-NOT: COPY{{( killed)?}} $eflags
; CHECK: %[[A_REG:[^:]*]]:gr8 = SETAr implicit $eflags
; CHECK-NEXT: %[[B_REG:[^:]*]]:gr8 = SETBr implicit $eflags
; CHECK-NOT: COPY{{( killed)?}} $eflags
ADJCALLSTACKDOWN64 0, 0, 0, implicit-def dead $rsp, implicit-def dead $eflags, implicit-def dead $ssp, implicit $rsp, implicit $ssp
CALL64pcrel32 @foo, csr_64, implicit $rsp, implicit $ssp, implicit $rdi, implicit-def $rsp, implicit-def $ssp, implicit-def $eax
ADJCALLSTACKUP64 0, 0, implicit-def dead $rsp, implicit-def dead $eflags, implicit-def dead $ssp, implicit $rsp, implicit $ssp
$eflags = COPY %2
JA_1 %bb.1, implicit $eflags
JB_1 %bb.2, implicit $eflags
JMP_1 %bb.3
; CHECK-NOT: $eflags =
;
; CHECK: TEST8rr %[[A_REG]], %[[A_REG]], implicit-def $eflags
; CHECK-NEXT: JNE_1 %bb.1, implicit killed $eflags
; CHECK-SAME: {{$[[:space:]]}}
; CHECK-NEXT: bb.4:
; CHECK-NEXT: successors: {{.*$}}
; CHECK-SAME: {{$[[:space:]]}}
; CHECK-NEXT: TEST8rr %[[B_REG]], %[[B_REG]], implicit-def $eflags
; CHECK-NEXT: JNE_1 %bb.2, implicit killed $eflags
; CHECK-NEXT: JMP_1 %bb.3
bb.1:
%3:gr32 = MOV32ri 42
$eax = COPY %3
RET 0, $eax
bb.2:
%4:gr32 = MOV32ri 43
$eax = COPY %4
RET 0, $eax
bb.3:
%5:gr32 = MOV32r0 implicit-def dead $eflags
$eax = COPY %5
RET 0, $eax
...
---
name: test_branch_fallthrough
# CHECK-LABEL: name: test_branch_fallthrough
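# Same as test_branch, but the original block relies on a fall-through into
# bb.1 rather than an explicit JMP. The rewritten branch chain must preserve
# that fall-through after the last TEST8rr/JNE pair.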
liveins:
- { reg: '$rdi', virtual-reg: '%0' }
- { reg: '$rsi', virtual-reg: '%1' }
body: |
bb.0:
successors: %bb.1, %bb.2, %bb.3
liveins: $rdi, $rsi
%0:gr64 = COPY $rdi
%1:gr64 = COPY $rsi
CMP64rr %0, %1, implicit-def $eflags
%2:gr64 = COPY $eflags
; CHECK-NOT: COPY{{( killed)?}} $eflags
; CHECK: %[[A_REG:[^:]*]]:gr8 = SETAr implicit $eflags
; CHECK-NEXT: %[[B_REG:[^:]*]]:gr8 = SETBr implicit $eflags
; CHECK-NOT: COPY{{( killed)?}} $eflags
ADJCALLSTACKDOWN64 0, 0, 0, implicit-def dead $rsp, implicit-def dead $eflags, implicit-def dead $ssp, implicit $rsp, implicit $ssp
CALL64pcrel32 @foo, csr_64, implicit $rsp, implicit $ssp, implicit $rdi, implicit-def $rsp, implicit-def $ssp, implicit-def $eax
ADJCALLSTACKUP64 0, 0, implicit-def dead $rsp, implicit-def dead $eflags, implicit-def dead $ssp, implicit $rsp, implicit $ssp
$eflags = COPY %2
JA_1 %bb.2, implicit $eflags
JB_1 %bb.3, implicit $eflags
; CHECK-NOT: $eflags =
;
; CHECK: TEST8rr %[[A_REG]], %[[A_REG]], implicit-def $eflags
; CHECK-NEXT: JNE_1 %bb.2, implicit killed $eflags
; CHECK-SAME: {{$[[:space:]]}}
; CHECK-NEXT: bb.4:
; CHECK-NEXT: successors: {{.*$}}
; CHECK-SAME: {{$[[:space:]]}}
; CHECK-NEXT: TEST8rr %[[B_REG]], %[[B_REG]], implicit-def $eflags
; CHECK-NEXT: JNE_1 %bb.3, implicit killed $eflags
; CHECK-SAME: {{$[[:space:]]}}
; CHECK-NEXT: bb.1:
bb.1:
%5:gr32 = MOV32r0 implicit-def dead $eflags
$eax = COPY %5
RET 0, $eax
bb.2:
%3:gr32 = MOV32ri 42
$eax = COPY %3
RET 0, $eax
bb.3:
%4:gr32 = MOV32ri 43
$eax = COPY %4
RET 0, $eax
...
---
name: test_setcc
# CHECK-LABEL: name: test_setcc
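# The users of the reloaded flags are themselves SETcc instructions. Each one
# is rewritten to use the corresponding byte saved before the call, so no
# SETcc of the reloaded flags remains in the output.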
liveins:
- { reg: '$rdi', virtual-reg: '%0' }
- { reg: '$rsi', virtual-reg: '%1' }
body: |
bb.0:
liveins: $rdi, $rsi
%0:gr64 = COPY $rdi
%1:gr64 = COPY $rsi
CMP64rr %0, %1, implicit-def $eflags
%2:gr64 = COPY $eflags
; CHECK-NOT: COPY{{( killed)?}} $eflags
; CHECK: %[[A_REG:[^:]*]]:gr8 = SETAr implicit $eflags
; CHECK-NEXT: %[[B_REG:[^:]*]]:gr8 = SETBr implicit $eflags
; CHECK-NEXT: %[[E_REG:[^:]*]]:gr8 = SETEr implicit $eflags
; CHECK-NEXT: %[[NE_REG:[^:]*]]:gr8 = SETNEr implicit $eflags
; CHECK-NOT: COPY{{( killed)?}} $eflags
ADJCALLSTACKDOWN64 0, 0, 0, implicit-def dead $rsp, implicit-def dead $eflags, implicit-def dead $ssp, implicit $rsp, implicit $ssp
CALL64pcrel32 @foo, csr_64, implicit $rsp, implicit $ssp, implicit $rdi, implicit-def $rsp, implicit-def $ssp, implicit-def $eax
ADJCALLSTACKUP64 0, 0, implicit-def dead $rsp, implicit-def dead $eflags, implicit-def dead $ssp, implicit $rsp, implicit $ssp
$eflags = COPY %2
%3:gr8 = SETAr implicit $eflags
%4:gr8 = SETBr implicit $eflags
%5:gr8 = SETEr implicit $eflags
SETNEm $rsp, 1, $noreg, -16, $noreg, implicit killed $eflags
MOV8mr $rsp, 1, $noreg, -16, $noreg, killed %3
MOV8mr $rsp, 1, $noreg, -16, $noreg, killed %4
MOV8mr $rsp, 1, $noreg, -16, $noreg, killed %5
; CHECK-NOT: $eflags =
; CHECK-NOT: = SET{{.*}}
; CHECK: MOV8mr {{.*}}, killed %[[A_REG]]
; CHECK-NEXT: MOV8mr {{.*}}, killed %[[B_REG]]
; CHECK-NEXT: MOV8mr {{.*}}, killed %[[E_REG]]
; CHECK-NOT: MOV8mr {{.*}}, killed %[[NE_REG]]
RET 0
...
---
name: test_cmov
# CHECK-LABEL: name: test_cmov
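# CMOVcc users of the reloaded flags become a TEST8rr of the saved byte
# followed by CMOVNE (or CMOVE for the final, inverted use), so the original
# condition codes never need to be restored into $eflags.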
liveins:
- { reg: '$rdi', virtual-reg: '%0' }
- { reg: '$rsi', virtual-reg: '%1' }
body: |
bb.0:
liveins: $rdi, $rsi
%0:gr64 = COPY $rdi
%1:gr64 = COPY $rsi
CMP64rr %0, %1, implicit-def $eflags
%2:gr64 = COPY $eflags
; CHECK-NOT: COPY{{( killed)?}} $eflags
; CHECK: %[[A_REG:[^:]*]]:gr8 = SETAr implicit $eflags
; CHECK-NEXT: %[[B_REG:[^:]*]]:gr8 = SETBr implicit $eflags
; CHECK-NEXT: %[[E_REG:[^:]*]]:gr8 = SETEr implicit $eflags
; CHECK-NOT: COPY{{( killed)?}} $eflags
ADJCALLSTACKDOWN64 0, 0, 0, implicit-def dead $rsp, implicit-def dead $eflags, implicit-def dead $ssp, implicit $rsp, implicit $ssp
CALL64pcrel32 @foo, csr_64, implicit $rsp, implicit $ssp, implicit $rdi, implicit-def $rsp, implicit-def $ssp, implicit-def $eax
ADJCALLSTACKUP64 0, 0, implicit-def dead $rsp, implicit-def dead $eflags, implicit-def dead $ssp, implicit $rsp, implicit $ssp
$eflags = COPY %2
%3:gr64 = CMOVA64rr %0, %1, implicit $eflags
%4:gr64 = CMOVB64rr %0, %1, implicit $eflags
%5:gr64 = CMOVE64rr %0, %1, implicit $eflags
%6:gr64 = CMOVNE64rr %0, %1, implicit killed $eflags
; CHECK-NOT: $eflags =
; CHECK: TEST8rr %[[A_REG]], %[[A_REG]], implicit-def $eflags
; CHECK-NEXT: %3:gr64 = CMOVNE64rr %0, %1, implicit killed $eflags
; CHECK-NEXT: TEST8rr %[[B_REG]], %[[B_REG]], implicit-def $eflags
; CHECK-NEXT: %4:gr64 = CMOVNE64rr %0, %1, implicit killed $eflags
; CHECK-NEXT: TEST8rr %[[E_REG]], %[[E_REG]], implicit-def $eflags
; CHECK-NEXT: %5:gr64 = CMOVNE64rr %0, %1, implicit killed $eflags
; CHECK-NEXT: TEST8rr %[[E_REG]], %[[E_REG]], implicit-def $eflags
; CHECK-NEXT: %6:gr64 = CMOVE64rr %0, %1, implicit killed $eflags
MOV64mr $rsp, 1, $noreg, -16, $noreg, killed %3
MOV64mr $rsp, 1, $noreg, -16, $noreg, killed %4
MOV64mr $rsp, 1, $noreg, -16, $noreg, killed %5
MOV64mr $rsp, 1, $noreg, -16, $noreg, killed %6
RET 0
...
---
name: test_adc
# CHECK-LABEL: name: test_adc
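# ADC consumes CF. The saved SETB byte is turned back into CF with an ADD8ri
# of 255: the add carries out of 8 bits exactly when the byte is nonzero. Only
# the first ADC needs this; the second consumes the flags the first defines.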
liveins:
- { reg: '$rdi', virtual-reg: '%0' }
- { reg: '$rsi', virtual-reg: '%1' }
body: |
bb.0:
liveins: $rdi, $rsi
%0:gr64 = COPY $rdi
%1:gr64 = COPY $rsi
%2:gr64 = ADD64rr %0, %1, implicit-def $eflags
%3:gr64 = COPY $eflags
; CHECK-NOT: COPY{{( killed)?}} $eflags
; CHECK: %[[CF_REG:[^:]*]]:gr8 = SETBr implicit $eflags
; CHECK-NOT: COPY{{( killed)?}} $eflags
ADJCALLSTACKDOWN64 0, 0, 0, implicit-def dead $rsp, implicit-def dead $eflags, implicit-def dead $ssp, implicit $rsp, implicit $ssp
CALL64pcrel32 @foo, csr_64, implicit $rsp, implicit $ssp, implicit $rdi, implicit-def $rsp, implicit-def $ssp, implicit-def $eax
ADJCALLSTACKUP64 0, 0, implicit-def dead $rsp, implicit-def dead $eflags, implicit-def dead $ssp, implicit $rsp, implicit $ssp
$eflags = COPY %3
%4:gr64 = ADC64ri32 %2:gr64, 42, implicit-def $eflags, implicit $eflags
%5:gr64 = ADC64ri32 %4:gr64, 42, implicit-def $eflags, implicit $eflags
; CHECK-NOT: $eflags =
; CHECK: dead %{{[^:]*}}:gr8 = ADD8ri %[[CF_REG]], 255, implicit-def $eflags
; CHECK-NEXT: %4:gr64 = ADC64ri32 %2, 42, implicit-def $eflags, implicit killed $eflags
; CHECK-NEXT: %5:gr64 = ADC64ri32 %4, 42, implicit-def{{( dead)?}} $eflags, implicit{{( killed)?}} $eflags
MOV64mr $rsp, 1, $noreg, -16, $noreg, killed %5
RET 0
...
---
name: test_sbb
# CHECK-LABEL: name: test_sbb
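# SBB also consumes CF, so it is handled like ADC: CF is recreated from the
# saved SETB byte with ADD8ri 255 immediately before the first SBB.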
liveins:
- { reg: '$rdi', virtual-reg: '%0' }
- { reg: '$rsi', virtual-reg: '%1' }
body: |
bb.0:
liveins: $rdi, $rsi
%0:gr64 = COPY $rdi
%1:gr64 = COPY $rsi
%2:gr64 = SUB64rr %0, %1, implicit-def $eflags
%3:gr64 = COPY killed $eflags
; CHECK-NOT: COPY{{( killed)?}} $eflags
; CHECK: %[[CF_REG:[^:]*]]:gr8 = SETBr implicit $eflags
; CHECK-NOT: COPY{{( killed)?}} $eflags
ADJCALLSTACKDOWN64 0, 0, 0, implicit-def dead $rsp, implicit-def dead $eflags, implicit-def dead $ssp, implicit $rsp, implicit $ssp
CALL64pcrel32 @foo, csr_64, implicit $rsp, implicit $ssp, implicit $rdi, implicit-def $rsp, implicit-def $ssp, implicit-def $eax
ADJCALLSTACKUP64 0, 0, implicit-def dead $rsp, implicit-def dead $eflags, implicit-def dead $ssp, implicit $rsp, implicit $ssp
$eflags = COPY %3
%4:gr64 = SBB64ri32 %2:gr64, 42, implicit-def $eflags, implicit killed $eflags
%5:gr64 = SBB64ri32 %4:gr64, 42, implicit-def dead $eflags, implicit killed $eflags
; CHECK-NOT: $eflags =
; CHECK: dead %{{[^:]*}}:gr8 = ADD8ri %[[CF_REG]], 255, implicit-def $eflags
; CHECK-NEXT: %4:gr64 = SBB64ri32 %2, 42, implicit-def $eflags, implicit killed $eflags
; CHECK-NEXT: %5:gr64 = SBB64ri32 %4, 42, implicit-def{{( dead)?}} $eflags, implicit{{( killed)?}} $eflags
MOV64mr $rsp, 1, $noreg, -16, $noreg, killed %5
RET 0
...
---
name: test_adcx
# CHECK-LABEL: name: test_adcx
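# Mixed users: a CMOV needs ZF and an ADCX needs CF. The CMOV is rewritten to
# test the saved SETE byte, and CF is recreated from the saved SETB byte with
# ADD8ri 255 right before the ADCX.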
liveins:
- { reg: '$rdi', virtual-reg: '%0' }
- { reg: '$rsi', virtual-reg: '%1' }
body: |
bb.0:
liveins: $rdi, $rsi
%0:gr64 = COPY $rdi
%1:gr64 = COPY $rsi
%2:gr64 = ADD64rr %0, %1, implicit-def $eflags
%3:gr64 = COPY $eflags
; CHECK-NOT: COPY{{( killed)?}} $eflags
; CHECK: %[[E_REG:[^:]*]]:gr8 = SETEr implicit $eflags
; CHECK-NEXT: %[[CF_REG:[^:]*]]:gr8 = SETBr implicit $eflags
; CHECK-NOT: COPY{{( killed)?}} $eflags
ADJCALLSTACKDOWN64 0, 0, 0, implicit-def dead $rsp, implicit-def dead $eflags, implicit-def dead $ssp, implicit $rsp, implicit $ssp
CALL64pcrel32 @foo, csr_64, implicit $rsp, implicit $ssp, implicit $rdi, implicit-def $rsp, implicit-def $ssp, implicit-def $eax
ADJCALLSTACKUP64 0, 0, implicit-def dead $rsp, implicit-def dead $eflags, implicit-def dead $ssp, implicit $rsp, implicit $ssp
$eflags = COPY %3
%4:gr64 = CMOVE64rr %0, %1, implicit $eflags
%5:gr64 = MOV64ri32 42
%6:gr64 = ADCX64rr %2, %5, implicit-def $eflags, implicit $eflags
; CHECK-NOT: $eflags =
; CHECK: TEST8rr %[[E_REG]], %[[E_REG]], implicit-def $eflags
; CHECK-NEXT: %4:gr64 = CMOVNE64rr %0, %1, implicit killed $eflags
; CHECK-NEXT: %5:gr64 = MOV64ri32 42
; CHECK-NEXT: dead %{{[^:]*}}:gr8 = ADD8ri %[[CF_REG]], 255, implicit-def $eflags
; CHECK-NEXT: %6:gr64 = ADCX64rr %2, %5, implicit-def{{( dead)?}} $eflags, implicit killed $eflags
MOV64mr $rsp, 1, $noreg, -16, $noreg, killed %4
MOV64mr $rsp, 1, $noreg, -16, $noreg, killed %6
RET 0
...
---
name: test_adox
# CHECK-LABEL: name: test_adox
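# ADOX consumes OF rather than CF. OF is recreated from the saved SETO byte
# with an ADD8ri of 127: adding 127 to a byte holding 1 produces signed
# overflow, while adding it to 0 does not.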
liveins:
- { reg: '$rdi', virtual-reg: '%0' }
- { reg: '$rsi', virtual-reg: '%1' }
body: |
bb.0:
liveins: $rdi, $rsi
%0:gr64 = COPY $rdi
%1:gr64 = COPY $rsi
%2:gr64 = ADD64rr %0, %1, implicit-def $eflags
%3:gr64 = COPY $eflags
; CHECK-NOT: COPY{{( killed)?}} $eflags
; CHECK: %[[E_REG:[^:]*]]:gr8 = SETEr implicit $eflags
; CHECK-NEXT: %[[OF_REG:[^:]*]]:gr8 = SETOr implicit $eflags
; CHECK-NOT: COPY{{( killed)?}} $eflags
ADJCALLSTACKDOWN64 0, 0, 0, implicit-def dead $rsp, implicit-def dead $eflags, implicit-def dead $ssp, implicit $rsp, implicit $ssp
CALL64pcrel32 @foo, csr_64, implicit $rsp, implicit $ssp, implicit $rdi, implicit-def $rsp, implicit-def $ssp, implicit-def $eax
ADJCALLSTACKUP64 0, 0, implicit-def dead $rsp, implicit-def dead $eflags, implicit-def dead $ssp, implicit $rsp, implicit $ssp
$eflags = COPY %3
%4:gr64 = CMOVE64rr %0, %1, implicit $eflags
%5:gr64 = MOV64ri32 42
%6:gr64 = ADOX64rr %2, %5, implicit-def $eflags, implicit $eflags
; CHECK-NOT: $eflags =
; CHECK: TEST8rr %[[E_REG]], %[[E_REG]], implicit-def $eflags
; CHECK-NEXT: %4:gr64 = CMOVNE64rr %0, %1, implicit killed $eflags
; CHECK-NEXT: %5:gr64 = MOV64ri32 42
; CHECK-NEXT: dead %{{[^:]*}}:gr8 = ADD8ri %[[OF_REG]], 127, implicit-def $eflags
; CHECK-NEXT: %6:gr64 = ADOX64rr %2, %5, implicit-def{{( dead)?}} $eflags, implicit killed $eflags
MOV64mr $rsp, 1, $noreg, -16, $noreg, killed %4
MOV64mr $rsp, 1, $noreg, -16, $noreg, killed %6
RET 0
...
---
name: test_rcl
# CHECK-LABEL: name: test_rcl
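# RCL rotates through CF, so the reloaded flags are replaced by recreating CF
# from the saved SETB byte with ADD8ri 255 before the first rotate.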
liveins:
- { reg: '$rdi', virtual-reg: '%0' }
- { reg: '$rsi', virtual-reg: '%1' }
body: |
bb.0:
liveins: $rdi, $rsi
%0:gr64 = COPY $rdi
%1:gr64 = COPY $rsi
%2:gr64 = ADD64rr %0, %1, implicit-def $eflags
%3:gr64 = COPY $eflags
; CHECK-NOT: COPY{{( killed)?}} $eflags
; CHECK: %[[CF_REG:[^:]*]]:gr8 = SETBr implicit $eflags
; CHECK-NOT: COPY{{( killed)?}} $eflags
ADJCALLSTACKDOWN64 0, 0, 0, implicit-def dead $rsp, implicit-def dead $eflags, implicit-def dead $ssp, implicit $rsp, implicit $ssp
CALL64pcrel32 @foo, csr_64, implicit $rsp, implicit $ssp, implicit $rdi, implicit-def $rsp, implicit-def $ssp, implicit-def $eax
ADJCALLSTACKUP64 0, 0, implicit-def dead $rsp, implicit-def dead $eflags, implicit-def dead $ssp, implicit $rsp, implicit $ssp
$eflags = COPY %3
%4:gr64 = RCL64r1 %2:gr64, implicit-def $eflags, implicit $eflags
%5:gr64 = RCL64r1 %4:gr64, implicit-def $eflags, implicit $eflags
; CHECK-NOT: $eflags =
; CHECK: dead %{{[^:]*}}:gr8 = ADD8ri %[[CF_REG]], 255, implicit-def $eflags
; CHECK-NEXT: %4:gr64 = RCL64r1 %2, implicit-def $eflags, implicit killed $eflags
; CHECK-NEXT: %5:gr64 = RCL64r1 %4, implicit-def{{( dead)?}} $eflags, implicit{{( killed)?}} $eflags
MOV64mr $rsp, 1, $noreg, -16, $noreg, killed %5
RET 0
...
---
name: test_rcr
# CHECK-LABEL: name: test_rcr
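# RCR mirrors test_rcl: CF is recreated the same way before the first rotate
# through carry.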
liveins:
- { reg: '$rdi', virtual-reg: '%0' }
- { reg: '$rsi', virtual-reg: '%1' }
body: |
bb.0:
liveins: $rdi, $rsi
%0:gr64 = COPY $rdi
%1:gr64 = COPY $rsi
%2:gr64 = ADD64rr %0, %1, implicit-def $eflags
%3:gr64 = COPY $eflags
; CHECK-NOT: COPY{{( killed)?}} $eflags
; CHECK: %[[CF_REG:[^:]*]]:gr8 = SETBr implicit $eflags
; CHECK-NOT: COPY{{( killed)?}} $eflags
ADJCALLSTACKDOWN64 0, 0, 0, implicit-def dead $rsp, implicit-def dead $eflags, implicit-def dead $ssp, implicit $rsp, implicit $ssp
CALL64pcrel32 @foo, csr_64, implicit $rsp, implicit $ssp, implicit $rdi, implicit-def $rsp, implicit-def $ssp, implicit-def $eax
ADJCALLSTACKUP64 0, 0, implicit-def dead $rsp, implicit-def dead $eflags, implicit-def dead $ssp, implicit $rsp, implicit $ssp
$eflags = COPY %3
%4:gr64 = RCR64r1 %2:gr64, implicit-def $eflags, implicit $eflags
%5:gr64 = RCR64r1 %4:gr64, implicit-def $eflags, implicit $eflags
; CHECK-NOT: $eflags =
; CHECK: dead %{{[^:]*}}:gr8 = ADD8ri %[[CF_REG]], 255, implicit-def $eflags
; CHECK-NEXT: %4:gr64 = RCR64r1 %2, implicit-def $eflags, implicit killed $eflags
; CHECK-NEXT: %5:gr64 = RCR64r1 %4, implicit-def{{( dead)?}} $eflags, implicit{{( killed)?}} $eflags
MOV64mr $rsp, 1, $noreg, -16, $noreg, killed %5
RET 0
...
---
name: test_setb_c
# CHECK-LABEL: name: test_setb_c
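# The SETB_C pseudos materialize 0 - CF at various widths. Each reload is
# replaced by zero-extending the saved CF byte to the pseudo's width and
# subtracting it from a freshly materialized zero.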
liveins:
- { reg: '$rdi', virtual-reg: '%0' }
- { reg: '$rsi', virtual-reg: '%1' }
body: |
bb.0:
liveins: $rdi, $rsi
%0:gr64 = COPY $rdi
%1:gr64 = COPY $rsi
%2:gr64 = ADD64rr %0, %1, implicit-def $eflags
%3:gr64 = COPY $eflags
; CHECK-NOT: COPY{{( killed)?}} $eflags
; CHECK: %[[CF_REG:[^:]*]]:gr8 = SETBr implicit $eflags
; CHECK-NOT: COPY{{( killed)?}} $eflags
ADJCALLSTACKDOWN64 0, 0, 0, implicit-def dead $rsp, implicit-def dead $eflags, implicit-def dead $ssp, implicit $rsp, implicit $ssp
CALL64pcrel32 @foo, csr_64, implicit $rsp, implicit $ssp, implicit $rdi, implicit-def $rsp, implicit-def $ssp, implicit-def $eax
ADJCALLSTACKUP64 0, 0, implicit-def dead $rsp, implicit-def dead $eflags, implicit-def dead $ssp, implicit $rsp, implicit $ssp
$eflags = COPY %3
%4:gr8 = SETB_C8r implicit-def $eflags, implicit $eflags
MOV8mr $rsp, 1, $noreg, -16, $noreg, killed %4
; CHECK-NOT: $eflags =
; CHECK: %[[ZERO:[^:]*]]:gr32 = MOV32r0 implicit-def $eflags
; CHECK-NEXT: %[[ZERO_SUBREG:[^:]*]]:gr8 = COPY %[[ZERO]].sub_8bit
; CHECK-NEXT: %[[REPLACEMENT:[^:]*]]:gr8 = SUB8rr %[[ZERO_SUBREG]], %[[CF_REG]]
; CHECK-NEXT: MOV8mr $rsp, 1, $noreg, -16, $noreg, killed %[[REPLACEMENT]]
$eflags = COPY %3
%5:gr16 = SETB_C16r implicit-def $eflags, implicit $eflags
MOV16mr $rsp, 1, $noreg, -16, $noreg, killed %5
; CHECK-NOT: $eflags =
; CHECK: %[[CF_EXT:[^:]*]]:gr32 = MOVZX32rr8 %[[CF_REG]]
; CHECK-NEXT: %[[CF_TRUNC:[^:]*]]:gr16 = COPY %[[CF_EXT]].sub_16bit
; CHECK-NEXT: %[[ZERO:[^:]*]]:gr32 = MOV32r0 implicit-def $eflags
; CHECK-NEXT: %[[ZERO_SUBREG:[^:]*]]:gr16 = COPY %[[ZERO]].sub_16bit
; CHECK-NEXT: %[[REPLACEMENT:[^:]*]]:gr16 = SUB16rr %[[ZERO_SUBREG]], %[[CF_TRUNC]]
; CHECK-NEXT: MOV16mr $rsp, 1, $noreg, -16, $noreg, killed %[[REPLACEMENT]]
$eflags = COPY %3
%6:gr32 = SETB_C32r implicit-def $eflags, implicit $eflags
MOV32mr $rsp, 1, $noreg, -16, $noreg, killed %6
; CHECK-NOT: $eflags =
; CHECK: %[[CF_EXT:[^:]*]]:gr32 = MOVZX32rr8 %[[CF_REG]]
; CHECK-NEXT: %[[ZERO:[^:]*]]:gr32 = MOV32r0 implicit-def $eflags
; CHECK-NEXT: %[[REPLACEMENT:[^:]*]]:gr32 = SUB32rr %[[ZERO]], %[[CF_EXT]]
; CHECK-NEXT: MOV32mr $rsp, 1, $noreg, -16, $noreg, killed %[[REPLACEMENT]]
$eflags = COPY %3
%7:gr64 = SETB_C64r implicit-def $eflags, implicit $eflags
MOV64mr $rsp, 1, $noreg, -16, $noreg, killed %7
; CHECK-NOT: $eflags =
; CHECK: %[[CF_EXT1:[^:]*]]:gr32 = MOVZX32rr8 %[[CF_REG]]
; CHECK-NEXT: %[[CF_EXT2:[^:]*]]:gr64 = SUBREG_TO_REG 0, %[[CF_EXT1]], %subreg.sub_32bit
; CHECK-NEXT: %[[ZERO:[^:]*]]:gr32 = MOV32r0 implicit-def $eflags
; CHECK-NEXT: %[[ZERO_EXT:[^:]*]]:gr64 = SUBREG_TO_REG 0, %[[ZERO]], %subreg.sub_32bit
; CHECK-NEXT: %[[REPLACEMENT:[^:]*]]:gr64 = SUB64rr %[[ZERO_EXT]], %[[CF_EXT2]]
; CHECK-NEXT: MOV64mr $rsp, 1, $noreg, -16, $noreg, killed %[[REPLACEMENT]]
RET 0
...
---
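# The restored flags feed conditional branches here and CMOVs in successor
# blocks that have $eflags live-in and kill it. Check that the branches become
# tests of the materialized SETcc bytes (splitting out a new block for the
# second branch) and that each successor tests the byte for its own condition.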
name: test_branch_with_livein_and_kill
# CHECK-LABEL: name: test_branch_with_livein_and_kill
liveins:
- { reg: '$rdi', virtual-reg: '%0' }
- { reg: '$rsi', virtual-reg: '%1' }
body: |
bb.0:
successors: %bb.1, %bb.2, %bb.3
liveins: $rdi, $rsi
%0:gr64 = COPY $rdi
%1:gr64 = COPY $rsi
CMP64rr %0, %1, implicit-def $eflags
%2:gr64 = COPY $eflags
; CHECK-NOT: COPY{{( killed)?}} $eflags
; CHECK: %[[S_REG:[^:]*]]:gr8 = SETSr implicit $eflags
; CHECK-NEXT: %[[NE_REG:[^:]*]]:gr8 = SETNEr implicit $eflags
; CHECK-NEXT: %[[A_REG:[^:]*]]:gr8 = SETAr implicit $eflags
; CHECK-NEXT: %[[B_REG:[^:]*]]:gr8 = SETBr implicit $eflags
; CHECK-NOT: COPY{{( killed)?}} $eflags
ADJCALLSTACKDOWN64 0, 0, 0, implicit-def dead $rsp, implicit-def dead $eflags, implicit-def dead $ssp, implicit $rsp, implicit $ssp
CALL64pcrel32 @foo, csr_64, implicit $rsp, implicit $ssp, implicit $rdi, implicit-def $rsp, implicit-def $ssp, implicit-def $eax
ADJCALLSTACKUP64 0, 0, implicit-def dead $rsp, implicit-def dead $eflags, implicit-def dead $ssp, implicit $rsp, implicit $ssp
$eflags = COPY %2
JA_1 %bb.1, implicit $eflags
JB_1 %bb.2, implicit $eflags
JMP_1 %bb.3
; CHECK-NOT: $eflags =
;
; CHECK: TEST8rr %[[A_REG]], %[[A_REG]], implicit-def $eflags
; CHECK-NEXT: JNE_1 %bb.1, implicit killed $eflags
; CHECK-SAME: {{$[[:space:]]}}
; CHECK-NEXT: bb.4:
; CHECK-NEXT: successors: {{.*$}}
; CHECK-SAME: {{$[[:space:]]}}
; CHECK-NEXT: TEST8rr %[[B_REG]], %[[B_REG]], implicit-def $eflags
; CHECK-NEXT: JNE_1 %bb.2, implicit killed $eflags
; CHECK-NEXT: JMP_1 %bb.3
bb.1:
liveins: $eflags
%3:gr64 = CMOVE64rr %0, %1, implicit killed $eflags
; CHECK-NOT: $eflags =
; CHECK: TEST8rr %[[NE_REG]], %[[NE_REG]], implicit-def $eflags
; CHECK-NEXT: %3:gr64 = CMOVE64rr %0, %1, implicit killed $eflags
$rax = COPY %3
RET 0, $rax
bb.2:
liveins: $eflags
%4:gr64 = CMOVNE64rr %0, %1, implicit killed $eflags
; CHECK-NOT: $eflags =
; CHECK: TEST8rr %[[NE_REG]], %[[NE_REG]], implicit-def $eflags
; CHECK-NEXT: %4:gr64 = CMOVNE64rr %0, %1, implicit killed $eflags
$rax = COPY %4
RET 0, $rax
bb.3:
liveins: $eflags
%5:gr64 = CMOVS64rr %0, %1, implicit killed $eflags
; CHECK-NOT: $eflags =
; CHECK: TEST8rr %[[S_REG]], %[[S_REG]], implicit-def $eflags
; CHECK-NEXT: %5:gr64 = CMOVNE64rr %0, %1, implicit killed $eflags
$rax = COPY %5
RET 0, $rax
...
---
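# Variant of the previous test with an extra live-in diamond (bb.2 branching
# to bb.3 and bb.4) discovered between two successors that kill $eflags; see
# the comment in bb.2 below.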
name: test_branch_with_interleaved_livein_and_kill
# CHECK-LABEL: name: test_branch_with_interleaved_livein_and_kill
liveins:
- { reg: '$rdi', virtual-reg: '%0' }
- { reg: '$rsi', virtual-reg: '%1' }
body: |
bb.0:
successors: %bb.1, %bb.2, %bb.5
liveins: $rdi, $rsi
%0:gr64 = COPY $rdi
%1:gr64 = COPY $rsi
CMP64rr %0, %1, implicit-def $eflags
%2:gr64 = COPY $eflags
; CHECK-NOT: COPY{{( killed)?}} $eflags
; CHECK: %[[S_REG:[^:]*]]:gr8 = SETSr implicit $eflags
; CHECK-NEXT: %[[P_REG:[^:]*]]:gr8 = SETPr implicit $eflags
; CHECK-NEXT: %[[NE_REG:[^:]*]]:gr8 = SETNEr implicit $eflags
; CHECK-NEXT: %[[A_REG:[^:]*]]:gr8 = SETAr implicit $eflags
; CHECK-NEXT: %[[B_REG:[^:]*]]:gr8 = SETBr implicit $eflags
; CHECK-NEXT: %[[O_REG:[^:]*]]:gr8 = SETOr implicit $eflags
; CHECK-NOT: COPY{{( killed)?}} $eflags
ADJCALLSTACKDOWN64 0, 0, 0, implicit-def dead $rsp, implicit-def dead $eflags, implicit-def dead $ssp, implicit $rsp, implicit $ssp
CALL64pcrel32 @foo, csr_64, implicit $rsp, implicit $ssp, implicit $rdi, implicit-def $rsp, implicit-def $ssp, implicit-def $eax
ADJCALLSTACKUP64 0, 0, implicit-def dead $rsp, implicit-def dead $eflags, implicit-def dead $ssp, implicit $rsp, implicit $ssp
$eflags = COPY %2
JA_1 %bb.1, implicit $eflags
JB_1 %bb.2, implicit $eflags
JMP_1 %bb.5
; CHECK-NOT: $eflags =
;
; CHECK: TEST8rr %[[A_REG]], %[[A_REG]], implicit-def $eflags
; CHECK-NEXT: JNE_1 %bb.1, implicit killed $eflags
; CHECK-SAME: {{$[[:space:]]}}
; CHECK-NEXT: bb.6:
; CHECK-NEXT: successors: {{.*$}}
; CHECK-SAME: {{$[[:space:]]}}
; CHECK-NEXT: TEST8rr %[[B_REG]], %[[B_REG]], implicit-def $eflags
; CHECK-NEXT: JNE_1 %bb.2, implicit killed $eflags
; CHECK-NEXT: JMP_1 %bb.5
bb.1:
liveins: $eflags
%3:gr64 = CMOVE64rr %0, %1, implicit killed $eflags
; CHECK-NOT: $eflags =
; CHECK: TEST8rr %[[NE_REG]], %[[NE_REG]], implicit-def $eflags
; CHECK-NEXT: %3:gr64 = CMOVE64rr %0, %1, implicit killed $eflags
$rax = COPY %3
RET 0, $rax
bb.2:
; The goal is to have another batch of successors discovered in a block
; between two successors which kill $eflags. This ensures that neither of
; the surrounding kills impacts recursing through this block.
successors: %bb.3, %bb.4
liveins: $eflags
JO_1 %bb.3, implicit $eflags
JMP_1 %bb.4
; CHECK-NOT: $eflags =
;
; CHECK: TEST8rr %[[O_REG]], %[[O_REG]], implicit-def $eflags
; CHECK-NEXT: JNE_1 %bb.3, implicit killed $eflags
; CHECK-NEXT: JMP_1 %bb.4
bb.3:
liveins: $eflags
%4:gr64 = CMOVNE64rr %0, %1, implicit $eflags
; CHECK-NOT: $eflags =
; CHECK: TEST8rr %[[NE_REG]], %[[NE_REG]], implicit-def $eflags
; CHECK-NEXT: %4:gr64 = CMOVNE64rr %0, %1, implicit killed $eflags
$rax = COPY %4
RET 0, $rax
bb.4:
liveins: $eflags
%5:gr64 = CMOVP64rr %0, %1, implicit $eflags
; CHECK-NOT: $eflags =
; CHECK: TEST8rr %[[P_REG]], %[[P_REG]], implicit-def $eflags
; CHECK-NEXT: %5:gr64 = CMOVNE64rr %0, %1, implicit killed $eflags
$rax = COPY %5
RET 0, $rax
bb.5:
liveins: $eflags
%6:gr64 = CMOVS64rr %0, %1, implicit killed $eflags
; CHECK-NOT: $eflags =
; CHECK: TEST8rr %[[S_REG]], %[[S_REG]], implicit-def $eflags
; CHECK-NEXT: %6:gr64 = CMOVNE64rr %0, %1, implicit killed $eflags
$rax = COPY %6
RET 0, $rax
...
---
# This test case is designed to exercise a particularly challenging situation:
# when the flags are copied and restored *inside* of a complex and cyclic CFG,
# all of whose blocks have the flags live-in. To handle this case correctly we
# have to walk up the dominator tree and locate a viable reaching definition,
# checking for clobbers along any path. The CFG for this function looks like the
# following diagram, control flowing out the bottom of blocks and in the top:
#
# bb.0
# | __________________
# |/ \
# bb.1 |
# |\_________ |
# | __ \ ____ |
# |/ \ |/ \ |
# bb.2 | bb.4 | |
# |\__/ / \ | |
# | / \ | |
# bb.3 bb.5 bb.6 | |
# | \ / | |
# | \ / | |
# | bb.7 | |
# | ________/ \____/ |
# |/ |
# bb.8 |
# |\__________________/
# |
# bb.9
#
# We set EFLAGS in bb.0, clobber them in bb.3, and copy them in bb.2 and bb.6.
# Because of the cycles, this requires hoisting the `SETcc` instructions that
# capture the flags for the `bb.6` copy up into `bb.1`, and reusing them for
# the copy in `bb.2` as well despite the clobber in `bb.3`. That clobber also
# prevents hoisting the `SETcc`s up to `bb.0`.
#
# Throughout the test we use branch instructions that are totally bogus (as the
# flags are obviously not changing!) but this is just to allow us to send
# a small but complex CFG structure through the backend and force it to choose
# plausible lowering decisions based on the core CFG presented, regardless of
# the futility of the actual branches.
name: test_mid_cycle_copies
# CHECK-LABEL: name: test_mid_cycle_copies
liveins:
- { reg: '$rdi', virtual-reg: '%0' }
- { reg: '$rsi', virtual-reg: '%1' }
body: |
bb.0:
successors: %bb.1
liveins: $rdi, $rsi
%0:gr64 = COPY $rdi
%1:gr64 = COPY $rsi
CMP64rr %0, %1, implicit-def $eflags
; CHECK: bb.0:
; CHECK-NOT: COPY{{( killed)?}} $eflags
; CHECK: CMP64rr %0, %1, implicit-def $eflags
; CHECK-NOT: COPY{{( killed)?}} $eflags
JMP_1 %bb.1
bb.1:
successors: %bb.2, %bb.4
liveins: $eflags
; Outer loop header, target for one set of hoisting.
JE_1 %bb.2, implicit $eflags
JMP_1 %bb.4
; CHECK: bb.1:
; CHECK-NOT: COPY{{( killed)?}} $eflags
; CHECK: %[[A_REG:[^:]*]]:gr8 = SETAr implicit $eflags
; CHECK-NEXT: %[[E_REG:[^:]*]]:gr8 = SETEr implicit $eflags
; CHECK-NEXT: %[[B_REG:[^:]*]]:gr8 = SETBr implicit $eflags
; CHECK-NOT: COPY{{( killed)?}} $eflags
bb.2:
successors: %bb.2, %bb.3
liveins: $eflags
; Inner loop with a local copy. We should eliminate this but can't hoist.
%2:gr64 = COPY $eflags
$eflags = COPY %2
JE_1 %bb.2, implicit $eflags
JMP_1 %bb.3
; CHECK: bb.2:
; CHECK-NOT: COPY{{( killed)?}} $eflags
; CHECK: TEST8rr %[[E_REG]], %[[E_REG]], implicit-def $eflags
; CHECK-NEXT: JNE_1 %bb.2, implicit killed $eflags
; CHECK-NOT: COPY{{( killed)?}} $eflags
bb.3:
successors: %bb.8
liveins: $eflags
; Use and then clobber $eflags. Then hop to the outer loop latch.
%3:gr64 = ADC64ri32 %0, 42, implicit-def dead $eflags, implicit $eflags
; CHECK: bb.3:
; CHECK-NOT: COPY{{( killed)?}} $eflags
; CHECK: dead %{{[^:]*}}:gr8 = ADD8ri %[[B_REG]], 255, implicit-def $eflags
; CHECK-NEXT: %3:gr64 = ADC64ri32 %0, 42, implicit-def{{( dead)?}} $eflags, implicit{{( killed)?}} $eflags
; CHECK-NOT: COPY{{( killed)?}} $eflags
MOV64mr $rsp, 1, $noreg, -16, $noreg, killed %3
JMP_1 %bb.8
bb.4:
successors: %bb.5, %bb.6
liveins: $eflags
; Another inner loop, this one with a diamond.
JE_1 %bb.5, implicit $eflags
JMP_1 %bb.6
; CHECK: bb.4:
; CHECK-NOT: COPY{{( killed)?}} $eflags
; CHECK: TEST8rr %[[E_REG]], %[[E_REG]], implicit-def $eflags
; CHECK-NEXT: JNE_1 %bb.5, implicit killed $eflags
; CHECK-NOT: COPY{{( killed)?}} $eflags
bb.5:
successors: %bb.7
liveins: $eflags
; Just use $eflags on this side of the diamond.
%4:gr64 = CMOVA64rr %0, %1, implicit $eflags
; CHECK: bb.5:
; CHECK-NOT: COPY{{( killed)?}} $eflags
; CHECK: TEST8rr %[[A_REG]], %[[A_REG]], implicit-def $eflags
; CHECK-NEXT: %4:gr64 = CMOVNE64rr %0, %1, implicit killed $eflags
; CHECK-NOT: COPY{{( killed)?}} $eflags
MOV64mr $rsp, 1, $noreg, -16, $noreg, killed %4
JMP_1 %bb.7
bb.6:
successors: %bb.7
liveins: $eflags
; Use, copy, and then use $eflags again.
%5:gr64 = CMOVA64rr %0, %1, implicit $eflags
; CHECK: bb.6:
; CHECK-NOT: COPY{{( killed)?}} $eflags
; CHECK: TEST8rr %[[A_REG]], %[[A_REG]], implicit-def $eflags
; CHECK-NEXT: %5:gr64 = CMOVNE64rr %0, %1, implicit killed $eflags
; CHECK-NOT: COPY{{( killed)?}} $eflags
MOV64mr $rsp, 1, $noreg, -16, $noreg, killed %5
%6:gr64 = COPY $eflags
$eflags = COPY %6:gr64
%7:gr64 = CMOVA64rr %0, %1, implicit $eflags
; CHECK-NOT: COPY{{( killed)?}} $eflags
; CHECK: TEST8rr %[[A_REG]], %[[A_REG]], implicit-def $eflags
; CHECK-NEXT: %7:gr64 = CMOVNE64rr %0, %1, implicit killed $eflags
; CHECK-NOT: COPY{{( killed)?}} $eflags
MOV64mr $rsp, 1, $noreg, -16, $noreg, killed %7
JMP_1 %bb.7
bb.7:
successors: %bb.4, %bb.8
liveins: $eflags
; Inner loop latch.
JE_1 %bb.4, implicit $eflags
JMP_1 %bb.8
; CHECK: bb.7:
; CHECK-NOT: COPY{{( killed)?}} $eflags
; CHECK: TEST8rr %[[E_REG]], %[[E_REG]], implicit-def $eflags
; CHECK-NEXT: JNE_1 %bb.4, implicit killed $eflags
; CHECK-NOT: COPY{{( killed)?}} $eflags
bb.8:
successors: %bb.1, %bb.9
; Outer loop latch. Note that we cannot have EFLAGS live-in here as that
; would immediately require PHIs.
CMP64rr %0, %1, implicit-def $eflags
JE_1 %bb.1, implicit $eflags
JMP_1 %bb.9
; CHECK: bb.8:
; CHECK-NOT: COPY{{( killed)?}} $eflags
; CHECK: CMP64rr %0, %1, implicit-def $eflags
; CHECK-NEXT: JE_1 %bb.1, implicit $eflags
; CHECK-NOT: COPY{{( killed)?}} $eflags
bb.9:
liveins: $eflags
; And we're done.
%8:gr64 = CMOVE64rr %0, %1, implicit killed $eflags
$rax = COPY %8
RET 0, $rax
; CHECK: bb.9:
; CHECK-NOT: $eflags
; CHECK: %8:gr64 = CMOVE64rr %0, %1, implicit killed $eflags
...
---
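# SETA and SETAE already capture the needed conditions into registers before
# the copy. Check that those registers are reused rather than inserting fresh
# SETcc instructions, with the JB lowered as a test of the SETAE result
# followed by JE.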
name: test_existing_setcc
# CHECK-LABEL: name: test_existing_setcc
liveins:
- { reg: '$rdi', virtual-reg: '%0' }
- { reg: '$rsi', virtual-reg: '%1' }
body: |
bb.0:
successors: %bb.1, %bb.2, %bb.3
liveins: $rdi, $rsi
%0:gr64 = COPY $rdi
%1:gr64 = COPY $rsi
CMP64rr %0, %1, implicit-def $eflags
%2:gr8 = SETAr implicit $eflags
%3:gr8 = SETAEr implicit $eflags
%4:gr64 = COPY $eflags
; CHECK: CMP64rr %0, %1, implicit-def $eflags
; CHECK-NEXT: %[[A_REG:[^:]*]]:gr8 = SETAr implicit $eflags
; CHECK-NEXT: %[[AE_REG:[^:]*]]:gr8 = SETAEr implicit $eflags
; CHECK-NOT: COPY{{( killed)?}} $eflags
ADJCALLSTACKDOWN64 0, 0, 0, implicit-def dead $rsp, implicit-def dead $eflags, implicit-def dead $ssp, implicit $rsp, implicit $ssp
CALL64pcrel32 @foo, csr_64, implicit $rsp, implicit $ssp, implicit $rdi, implicit-def $rsp, implicit-def $ssp, implicit-def $eax
ADJCALLSTACKUP64 0, 0, implicit-def dead $rsp, implicit-def dead $eflags, implicit-def dead $ssp, implicit $rsp, implicit $ssp
$eflags = COPY %4
JA_1 %bb.1, implicit $eflags
JB_1 %bb.2, implicit $eflags
JMP_1 %bb.3
; CHECK-NOT: $eflags =
;
; CHECK: TEST8rr %[[A_REG]], %[[A_REG]], implicit-def $eflags
; CHECK-NEXT: JNE_1 %bb.1, implicit killed $eflags
; CHECK-SAME: {{$[[:space:]]}}
; CHECK-NEXT: bb.4:
; CHECK-NEXT: successors: {{.*$}}
; CHECK-SAME: {{$[[:space:]]}}
; CHECK-NEXT: TEST8rr %[[AE_REG]], %[[AE_REG]], implicit-def $eflags
; CHECK-NEXT: JE_1 %bb.2, implicit killed $eflags
; CHECK-NEXT: JMP_1 %bb.3
bb.1:
%5:gr32 = MOV32ri 42
$eax = COPY %5
RET 0, $eax
bb.2:
%6:gr32 = MOV32ri 43
$eax = COPY %6
RET 0, $eax
bb.3:
%7:gr32 = MOV32r0 implicit-def dead $eflags
$eax = COPY %7
RET 0, $eax
...
---
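# Here the pre-existing SETcc stores straight to memory, so its result cannot
# be reused for the copy; see the FIXME in the body below.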
name: test_existing_setcc_memory
# CHECK-LABEL: name: test_existing_setcc_memory
liveins:
- { reg: '$rdi', virtual-reg: '%0' }
- { reg: '$rsi', virtual-reg: '%1' }
body: |
bb.0:
successors: %bb.1, %bb.2
liveins: $rdi, $rsi
%0:gr64 = COPY $rdi
%1:gr64 = COPY $rsi
CMP64rr %0, %1, implicit-def $eflags
SETEm %0, 1, $noreg, -16, $noreg, implicit $eflags
%2:gr64 = COPY $eflags
; CHECK: CMP64rr %0, %1, implicit-def $eflags
; We cannot reuse this SETE because it stores the flag directly to memory,
; so we have two SETEs here. FIXME: It'd be great if something could fold
; these automatically. If not, maybe we want to unfold SETcc instructions
; writing to memory so we can reuse them.
; CHECK-NEXT: SETEm {{.*}} implicit $eflags
; CHECK-NEXT: %[[E_REG:[^:]*]]:gr8 = SETEr implicit $eflags
; CHECK-NOT: COPY{{( killed)?}} $eflags
ADJCALLSTACKDOWN64 0, 0, 0, implicit-def dead $rsp, implicit-def dead $eflags, implicit-def dead $ssp, implicit $rsp, implicit $ssp
CALL64pcrel32 @foo, csr_64, implicit $rsp, implicit $ssp, implicit $rdi, implicit-def $rsp, implicit-def $ssp, implicit-def $eax
ADJCALLSTACKUP64 0, 0, implicit-def dead $rsp, implicit-def dead $eflags, implicit-def dead $ssp, implicit $rsp, implicit $ssp
$eflags = COPY %2
JE_1 %bb.1, implicit $eflags
JMP_1 %bb.2
; CHECK-NOT: $eflags =
;
; CHECK: TEST8rr %[[E_REG]], %[[E_REG]], implicit-def $eflags
; CHECK-NEXT: JNE_1 %bb.1, implicit killed $eflags
; CHECK-NEXT: JMP_1 %bb.2
bb.1:
%3:gr32 = MOV32ri 42
$eax = COPY %3
RET 0, $eax
bb.2:
%4:gr32 = MOV32ri 43
$eax = COPY %4
RET 0, $eax
...