Introduce control flow speculation tracking pass for AArch64
The pass implements tracking of control flow miss-speculation into a "taint"
register. That taint register can then be used to mask off registers with
sensitive data when executing under miss-speculation, a.k.a. "transient
execution".
This pass is aimed at mitigating SpectreV1-style vulnerabilities.
At the moment, it implements the tracking of miss-speculation of control
flow into a taint register, but doesn't implement a mechanism yet to then
use that taint register to mask off vulnerable data in registers (something
for a follow-on improvement). Possible strategies to mask out vulnerable
data that can be implemented on top of this are:
- speculative load hardening to automatically mask off data loaded
in registers.
- using intrinsics to mask off data in registers as indicated by the
programmer (see https://lwn.net/Articles/759423/).
For AArch64, the following implementation choices are made.
Some of these are different from the implementation choices made in
the similar pass implemented in X86SpeculativeLoadHardening.cpp, as
the instruction set characteristics result in different trade-offs.
- The speculation hardening is done after register allocation. With a
relative abundance of registers, one register is reserved (X16) to be
the taint register. X16 is expected to not clash with other register
reservation mechanisms with very high probability because:
. The AArch64 ABI doesn't guarantee X16 to be retained across any call.
. The only way for a programmer to request the use of X16 is through
inline assembly. In the rare case a function explicitly demands to
use X16/W16, this pass falls back to hardening against speculation
by inserting a DSB SYS/ISB barrier pair which will prevent control
flow speculation.
- It is easy to insert mask operations at this late stage as we have
mask operations available that don't set flags.
- The taint variable contains all-ones when no miss-speculation is detected,
and contains all-zeros when miss-speculation is detected. Therefore, when
masking, an AND instruction (which only changes the register to be masked,
no other side effects) can easily be inserted anywhere that's needed.
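For example, a masking step inserted by such a follow-on mitigation could
look like this (the registers used here are illustrative):
    ldr x0, [x1]       // load potentially sensitive data
    and x0, x0, x16    // x0 becomes all-zeros under miss-speculation;
                       // no flags are set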
- The tracking of miss-speculation is done by using a data-flow conditional
select instruction (CSEL) to evaluate the flags that were also used to
make conditional branch direction decisions. Speculation of the CSEL
instruction can be limited with a CSDB instruction - so the combination of
CSEL + a later CSDB gives the guarantee that the flags as used in the CSEL
aren't speculated. When conditional branch direction gets miss-speculated,
the semantics of the inserted CSEL instruction are such that the taint
register will contain all zero bits.
One key requirement for this to work is that the conditional branch is
followed by an execution of the CSEL instruction, where the CSEL
instruction needs to use the same flags status as the conditional branch.
This means that the conditional branches must not be implemented as one
of the AArch64 conditional branches that do not use the flags as input
(CB(N)Z and TB(N)Z). This is implemented by ensuring in the instruction
selectors to not produce these instructions when speculation hardening
is enabled. This pass will assert if it does encounter such an instruction.
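As a sketch, the inserted tracking sequence looks like the following
(condition codes and label are illustrative; the CSEL conditions mirror the
CHECK lines in the test below):
    cmp  w0, w1                // sets the flags used by both the branch
                               // and the CSEL
    b.ge .LBB0_2               // conditional branch
    // fall-through successor: architecturally, "lt" held here
    csel x16, x16, xzr, lt     // x16 := all-zeros if this block was
                               // reached by miss-speculation
    csdb                       // limit speculation of the CSEL itself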
- On function call boundaries, the miss-speculation state is transferred from
the taint register X16 into the SP register, which encodes miss-speculation
as the value 0.
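A sketch of that call-boundary transfer (the temporary register is
illustrative; the re-derivation matches the cmp/csetm CHECK lines in the
test below):
    mov  x17, sp               // SP cannot be masked directly
    and  x17, x17, x16         // SP becomes 0 under miss-speculation
    mov  sp, x17
    bl   callee
    cmp  sp, #0                // on return, rebuild the taint from SP
    csetm x16, ne              // all-ones if SP != 0, all-zeros otherwise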
Future extensions/improvements could be:
- Implement this functionality using full speculation barriers, akin to the
x86-slh-lfence option. This may be more useful for the intrinsics-based
approach than for the SLH approach to masking.
Note that this pass already inserts the full speculation barriers if the
function for some niche reason makes use of X16/W16.
- No indirect branch misprediction gets protected/instrumented yet; this
could be done for some indirect branches, such as switch jump tables.
Differential Revision: https://reviews.llvm.org/D54896
llvm-svn: 349456
2018-12-18 16:50:02 +08:00
# RUN: llc -verify-machineinstrs -mtriple=aarch64-none-linux-gnu \
# RUN:   -start-before aarch64-speculation-hardening -o - %s \
# RUN:   | FileCheck %s
# Check that the speculation hardening pass generates code as expected for
# basic blocks ending with a variety of branch patterns:
# - (1) no branches (fallthrough)
# - (2) one unconditional branch
# - (3) one conditional branch + fall-through
# - (4) one conditional branch + one unconditional branch
# - other direct branches don't seem to be generated by the AArch64 codegen
--- |
  define void @nobranch_fallthrough(i32 %a, i32 %b) speculative_load_hardening {
    ret void
  }

  define void @uncondbranch(i32 %a, i32 %b) speculative_load_hardening {
    ret void
  }

  define void @condbranch_fallthrough(i32 %a, i32 %b) speculative_load_hardening {
    ret void
  }

  define void @condbranch_uncondbranch(i32 %a, i32 %b) speculative_load_hardening {
    ret void
  }

  define void @indirectbranch(i32 %a, i32 %b) speculative_load_hardening {
    ret void
  }
[SLH] AArch64: correctly pick temporary register to mask SP
As part of speculation hardening, the stack pointer gets masked with the
taint register (X16) before a function call or before a function return.
Since there are no instructions that can directly mask writing to the
stack pointer, the stack pointer must first be transferred to another
register, where it can be masked, before that value is transferred back
to the stack pointer.
Before, that temporary register was always picked to be x17, since the
ABI allows clobbering x17 on any function call, resulting in the
following instruction pattern being inserted before function calls and
returns/tail calls:
mov x17, sp
and x17, x17, x16
mov sp, x17
However, x17 can be live in those locations, for example when the call
is an indirect call, using x17 as the target address (blr x17).
To fix this, this patch looks for an available register just before the
call or terminator instruction and uses that.
In the rare case when no register turns out to be available (this
situation is only encountered twice across the whole test-suite), just
insert a full speculation barrier at the start of the basic block where
this occurs.
Differential Revision: https://reviews.llvm.org/D56717
llvm-svn: 351930
2019-01-23 16:18:39 +08:00
  ; Also check that a non-default temporary register gets picked correctly to
  ; transfer the SP to, and AND it with the taint register, when the default
  ; temporary isn't available.

  define void @indirect_call_x17(i32 %a, i32 %b) speculative_load_hardening {
    ret void
  }

  @g = common dso_local local_unnamed_addr global i64 (...)* null, align 8

  define void @indirect_tailcall_x17(i32 %a, i32 %b) speculative_load_hardening {
    ret void
  }

  define void @indirect_call_lr(i32 %a, i32 %b) speculative_load_hardening {
    ret void
  }

  define void @RS_cannot_find_available_regs() speculative_load_hardening {
    ret void
  }
...
---
name: nobranch_fallthrough
tracksRegLiveness: true
body: |
  ; CHECK-LABEL: nobranch_fallthrough
  bb.0:
    successors: %bb.1
    liveins: $w0, $w1
    ; CHECK-NOT: csel
  bb.1:
    liveins: $w0
    RET undef $lr, implicit $w0
...
---
name: uncondbranch
tracksRegLiveness: true
body: |
  ; CHECK-LABEL: uncondbranch
  bb.0:
    successors: %bb.1
    liveins: $w0, $w1
    B %bb.1
    ; CHECK-NOT: csel
  bb.1:
    liveins: $w0
    RET undef $lr, implicit $w0
...
---
name: condbranch_fallthrough
tracksRegLiveness: true
body: |
  ; CHECK-LABEL: condbranch_fallthrough
  bb.0:
    successors: %bb.1, %bb.2
    liveins: $w0, $w1
    $wzr = SUBSWrs renamable $w0, renamable $w1, 0, implicit-def $nzcv
    Bcc 11, %bb.2, implicit $nzcv
    ; CHECK: b.lt [[BB_LT_T:\.LBB[0-9_]+]]

  bb.1:
    liveins: $nzcv, $w0
    ; CHECK: csel x16, x16, xzr, ge
    RET undef $lr, implicit $w0

  bb.2:
    liveins: $nzcv, $w0
    ; CHECK: csel x16, x16, xzr, lt
    RET undef $lr, implicit $w0
...
---
name: condbranch_uncondbranch
tracksRegLiveness: true
body: |
  ; CHECK-LABEL: condbranch_uncondbranch
  bb.0:
    successors: %bb.1, %bb.2
    liveins: $w0, $w1
    $wzr = SUBSWrs renamable $w0, renamable $w1, 0, implicit-def $nzcv
    Bcc 11, %bb.2, implicit $nzcv
    B %bb.1, implicit $nzcv
    ; CHECK: b.lt [[BB_LT_T:\.LBB[0-9_]+]]

  bb.1:
    liveins: $nzcv, $w0
    ; CHECK: csel x16, x16, xzr, ge
    RET undef $lr, implicit $w0

  bb.2:
    liveins: $nzcv, $w0
    ; CHECK: csel x16, x16, xzr, lt
    RET undef $lr, implicit $w0
...
---
name: indirectbranch
tracksRegLiveness: true
body: |
  ; Check that no instrumentation is done on indirect branches (for now).
  ; CHECK-LABEL: indirectbranch
  bb.0:
    successors: %bb.1, %bb.2
    liveins: $x0
    BR $x0
  bb.1:
    liveins: $x0
    ; CHECK-NOT: csel
    RET undef $lr, implicit $x0
  bb.2:
    liveins: $x0
    ; CHECK-NOT: csel
    RET undef $lr, implicit $x0
...
---
name: indirect_call_x17
tracksRegLiveness: true
body: |
  bb.0:
    liveins: $x17
    ; CHECK-LABEL: indirect_call_x17
    ; CHECK: mov x0, sp
    ; CHECK: and x0, x0, x16
    ; CHECK: mov sp, x0
    ; CHECK: blr x17
    BLR killed renamable $x17, implicit-def dead $lr, implicit $sp
    RET undef $lr, implicit undef $w0
...
---
name: indirect_tailcall_x17
tracksRegLiveness: true
body: |
  bb.0:
    liveins: $x0
    ; CHECK-LABEL: indirect_tailcall_x17
    ; CHECK: mov x1, sp
    ; CHECK: and x1, x1, x16
    ; CHECK: mov sp, x1
    ; CHECK: br x17
    $x8 = ADRP target-flags(aarch64-page) @g
    $x17 = LDRXui killed $x8, target-flags(aarch64-pageoff, aarch64-nc) @g
    TCRETURNri killed $x17, 0, implicit $sp, implicit $x0
...
---
name: indirect_call_lr
tracksRegLiveness: true
body: |
  bb.0:
    ; CHECK-LABEL: indirect_call_lr
    ; CHECK: mov x1, sp
    ; CHECK-NEXT: and x1, x1, x16
    ; CHECK-NEXT: mov sp, x1
    ; CHECK-NEXT: blr x30
    liveins: $x0, $lr
    BLR killed renamable $lr, implicit-def dead $lr, implicit $sp, implicit-def $sp, implicit-def $w0
    $w0 = nsw ADDWri killed $w0, 1, 0
    RET undef $lr, implicit $w0
...
---
name: RS_cannot_find_available_regs
tracksRegLiveness: true
body: |
  bb.0:
    ; In the rare case when no free temporary register is available for the
    ; propagate-taint-to-SP operation, just put a full speculation barrier
    ; (dsb sy + isb) at the start of the basic block, and don't put masks on
    ; instructions for the rest of the basic block, since speculation in that
    ; basic block has already been stopped, so no masking is needed.
    ; CHECK-LABEL: RS_cannot_find_available_regs
    ; CHECK: dsb sy
    ; CHECK-NEXT: isb
    ; CHECK-NEXT: ldr x0, [x0]
    ; The following 2 instructions come from propagating the taint encoded in
    ; sp at function entry to x16. It turns out the taint info in x16 is not
    ; used in this function, so those instructions could be optimized away. An
    ; optimization for later if it turns out this situation occurs often enough.
    ; CHECK-NEXT: cmp sp, #0
    ; CHECK-NEXT: csetm x16, ne
    ; CHECK-NEXT: ret
    liveins: $x0, $x1, $x2, $x3, $x4, $x5, $x6, $x7, $x8, $x9, $x10, $x11, $x12, $x13, $x14, $x15, $x17, $x18, $x19, $x20, $x21, $x22, $x23, $x24, $x25, $x26, $x27, $x28, $fp, $lr
    $x0 = LDRXui killed $x0, 0
    RET undef $lr, implicit $x0
...