// RUN: %clang_cc1 -no-opaque-pointers -emit-llvm %s -o - -triple x86_64-linux-gnu | FileCheck %s --check-prefix=CHECK-NOSANITIZE
[clang][CodeGen] Implicit Conversion Sanitizer: handle increment/decrement (PR44054)(take 2)
Summary:
The Implicit Conversion Sanitizer is *almost* feature-complete.
There aren't *that* many unsanitized things left;
the two major ones are increment/decrement (this patch) and bit-fields.
As discussed in
[[ https://bugs.llvm.org/show_bug.cgi?id=39519 | PR39519 ]],
unlike `CompoundAssignOperator` (which is promoted internally),
`BinaryOperator` (for which we always have promotion/demotion in the AST),
or parts of `UnaryOperator` (where we have promotion/demotion, but only
for certain operations), for inc/dec clang omits the promotion/demotion
altogether, under the as-if rule.
This is technically correct: https://rise4fun.com/Alive/zPgD
As can be seen in `canEvaluateTruncated()` in `InstCombineCasts.cpp`,
the `add`/`sub`/`mul`/`and`/`or`/`xor` operators can all be arbitrarily
extended or truncated:
https://github.com/llvm/llvm-project/blob/901cd3b3f62d0c700e5d2c3f97eff97d634bec5e/llvm/lib/Transforms/InstCombine/InstCombineCasts.cpp#L1320-L1334
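To make that concrete, here is a small standalone C sketch of that
equivalence (my illustration; it is not part of the patch or of the
linked code). Computing in the wide type and truncating afterwards
gives the same narrow result as computing in the narrow type directly,
which is exactly why the middle-end may shrink the widened form under
the as-if rule:

```c
#include <assert.h>

/* Wide form: extend to 32 bits, add, then truncate back; this is the
   shape clang would emit with explicit promotion/demotion. */
static unsigned short add_wide_then_trunc(unsigned short x) {
  unsigned wide = (unsigned)x + 1u;
  return (unsigned short)wide;
}

/* Narrow form: a plain 16-bit add with wraparound, which is what clang
   emits for x++ today; modeled here with an explicit mask. */
static unsigned short add_narrow(unsigned short x) {
  return (unsigned short)((x + 1u) & 0xFFFFu);
}

int main(void) {
  /* Exhaustive check over all 2^16 inputs: the two forms agree. */
  for (unsigned v = 0; v <= 0xFFFFu; ++v)
    assert(add_wide_then_trunc((unsigned short)v) ==
           add_narrow((unsigned short)v));
  return 0;
}
```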
But that has serious implications:
1. Since we no longer model the implicit casts, do we pessimize
their AST representation and everything that uses it?
2. There is no demotion, so the lossy-demotion sanitizer does not trigger :]
Now, I'm not going to argue about the first problem here,
but the second one **needs** to be addressed. As stated
in the report, this is done intentionally, so changing
it in all modes would be considered a penalization/regression.
That means the sanitization-less codegen must not be altered.
It was also suggested not to change the sanitized codegen
to the one with demotion, but I quite strongly believe
that would not be the wise choice here:
1. One would need to re-engineer the check that the inc/dec was lossy
in terms of the `@llvm.{u,s}{add,sub}.with.overflow` builtins.
2. We would still need to compute the result we would lossily demote
(i.e. the result of the wide `add`ition/`sub`traction).
3. I suspect it would need to be done right there, in the sanitization
code, which rather defeats the point of
using the `@llvm.{u,s}{add,sub}.with.overflow` builtins:
we'd have two `add`s with basically the same arguments,
one used for the check + error-less codepath and the other
for the error reporting (see the sketch after this list).
That seems worse than a single wide op + check.
4. Or we would need to do that in the compiler-rt handler,
which means we'd need a whole new handler.
But then what about `CompoundAssignOperator`?
The same would apply to it,
so this also doesn't really seem like the right path to me.
5. At least X86 (but likely other targets too) pessimizes all sub-`i32`
operations (due to partial register stalls), so even if we avoid the
promotion+demotion, the computations will //likely// be performed
in `i32` anyway.
So I don't really see much benefit in
not doing the straightforward thing.
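To illustrate point 3 above, here is a C-level sketch of why I believe
the rejected shape duplicates work (my reconstruction, using
`__builtin_add_overflow` as a stand-in for the IR intrinsics;
`report_lossy_truncation` is a hypothetical placeholder for the runtime
handler, not an existing compiler-rt entry point):

```c
#include <stdio.h>

/* Hypothetical placeholder for a new compiler-rt handler (assumption). */
static void report_lossy_truncation(int wide, unsigned short narrow) {
  fprintf(stderr, "lossy inc/dec: %d truncated to %hu\n", wide, narrow);
}

unsigned short inc_checked(unsigned short x) {
  unsigned short narrow;
  /* The overflow intrinsic detects that x + 1 does not fit back into
     'unsigned short'... */
  if (__builtin_add_overflow(x, 1, &narrow)) {
    /* ...but the wide result was never materialized, so reporting it
       takes a second add with basically the same arguments: */
    report_lossy_truncation((int)x + 1, narrow);
  }
  return narrow;
}
```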
While looking into this, I noticed a few more missed LLVM middle-end
canonicalizations, and filed
[[ https://bugs.llvm.org/show_bug.cgi?id=44100 | PR44100 ]] and
[[ https://bugs.llvm.org/show_bug.cgi?id=44102 | PR44102 ]].
Those are not specific to inc/dec; we also have them for
`CompoundAssignOperator`, and they can happen for normal arithmetic, too.
But if we take some other path in this patch, those fixes will not be
applicable here, and we will most likely have played ourselves.
TL;DR: the front-end should emit canonical, easy-to-optimize yet
unoptimized code; it is the middle-end's job to make it optimal.
I'm really hoping the reviewers agree with my assessment
of the path this patch should take.
This originally landed in 9872ea4ed1de4c49300430e4f1f4dfc110a79ab9
but was immediately reverted in cbfa237892e55b7129a1178c9b03f26683d643af
because the assertion was faulty. That fault turned out to be caused
by enums: while there is a promotion, both types are unsigned,
with the same width, so we still don't need to sanitize non-signed cases.
So far, at least; maybe the assert will tell us otherwise.
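For reference, a hedged reconstruction of the kind of input that
tripped the original assertion (the actual regression test is not
reproduced here): with an enumerator that needs all 32 bits, the enum's
compatible type is commonly `unsigned int` (implementation-defined), so
the inc/dec "promotion" goes from one unsigned 32-bit type to another
and there is nothing lossy to check:

```c
/* Hypothetical sketch, not the actual reverted test case. */
enum Big { BIG = 0x80000000u }; /* forces an unsigned compatible type */

enum Big bump(enum Big e) {
  /* The promotion here is unsigned-to-unsigned at the same width, so
     the signed-truncation sanitizer correctly has nothing to check. */
  ++e;
  return e;
}
```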
Fixes [[ https://bugs.llvm.org/show_bug.cgi?id=44054 | PR44054 ]].
Refs. https://github.com/google/sanitizers/issues/940
Reviewers: rjmccall, erichkeane, rsmith, vsk
Reviewed By: erichkeane
Subscribers: mehdi_amini, dexonsmith, cfe-commits, #sanitizers, llvm-commits, aaron.ballman, t.p.northover, efriedma, regehr
Tags: #llvm, #clang, #sanitizers
Differential Revision: https://reviews.llvm.org/D70539
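Putting it together, a minimal standalone demo (mine, not part of this
test file) of the behavior the newly sanitized inc/dec diagnoses at
runtime; the file name and invocation are illustrative:

```c
/* demo.c -- compile with:
     clang -fsanitize=implicit-signed-integer-truncation demo.c && ./a.out */
#include <stdio.h>

int main(void) {
  unsigned short x = 65535; /* USHRT_MAX on this test's x86_64 target */
  /* x++ is computed as (int)x + 1 == 65536 and then demoted back to
     'unsigned short' (yielding 0): a lossy implicit truncation, which
     this patch teaches the sanitizer to report for inc/dec. */
  x++;
  printf("%hu\n", x); /* prints 0; UBSan reports the truncation first */
  return 0;
}
```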
// RUN: %clang_cc1 -no-opaque-pointers -fsanitize=implicit-signed-integer-truncation -fno-sanitize-recover=implicit-signed-integer-truncation -emit-llvm %s -o - -triple x86_64-linux-gnu | FileCheck %s -implicit-check-not="call void @__ubsan_handle_implicit_conversion" --check-prefixes=CHECK-SANITIZE,CHECK-SANITIZE-ANYRECOVER,CHECK-SANITIZE-NORECOVER,CHECK-SANITIZE-UNREACHABLE
// RUN: %clang_cc1 -no-opaque-pointers -fsanitize=implicit-signed-integer-truncation -fsanitize-recover=implicit-signed-integer-truncation -emit-llvm %s -o - -triple x86_64-linux-gnu | FileCheck %s -implicit-check-not="call void @__ubsan_handle_implicit_conversion" --check-prefixes=CHECK-SANITIZE,CHECK-SANITIZE-ANYRECOVER,CHECK-SANITIZE-RECOVER
// RUN: %clang_cc1 -no-opaque-pointers -fsanitize=implicit-signed-integer-truncation -fsanitize-trap=implicit-signed-integer-truncation -emit-llvm %s -o - -triple x86_64-linux-gnu | FileCheck %s -implicit-check-not="call void @__ubsan_handle_implicit_conversion" --check-prefixes=CHECK-SANITIZE,CHECK-SANITIZE-TRAP,CHECK-SANITIZE-UNREACHABLE
// CHECK-SANITIZE-ANYRECOVER-DAG: @[[INT:.*]] = {{.*}} c"'int'\00" }
// CHECK-SANITIZE-ANYRECOVER-DAG: @[[UNSIGNED_SHORT:.*]] = {{.*}} c"'unsigned short'\00" }
// CHECK-SANITIZE-ANYRECOVER-DAG: @[[LINE_100:.*]] = {{.*}}, i32 100, i32 11 }, {{.*}}* @[[INT]], {{.*}}* @[[UNSIGNED_SHORT]], i8 2 }
// CHECK-SANITIZE-ANYRECOVER-DAG: @[[LINE_200:.*]] = {{.*}}, i32 200, i32 11 }, {{.*}}* @[[INT]], {{.*}}* @[[UNSIGNED_SHORT]], i8 2 }
// CHECK-SANITIZE-ANYRECOVER-DAG: @[[LINE_300:.*]] = {{.*}}, i32 300, i32 10 }, {{.*}}* @[[INT]], {{.*}}* @[[UNSIGNED_SHORT]], i8 2 }
// CHECK-SANITIZE-ANYRECOVER-DAG: @[[LINE_400:.*]] = {{.*}}, i32 400, i32 10 }, {{.*}}* @[[INT]], {{.*}}* @[[UNSIGNED_SHORT]], i8 2 }
// CHECK-SANITIZE-ANYRECOVER-DAG: @[[SHORT:.*]] = {{.*}} c"'short'\00" }
// CHECK-SANITIZE-ANYRECOVER-DAG: @[[LINE_500:.*]] = {{.*}}, i32 500, i32 11 }, {{.*}}* @[[INT]], {{.*}}* @[[SHORT]], i8 2 }
// CHECK-SANITIZE-ANYRECOVER-DAG: @[[LINE_600:.*]] = {{.*}}, i32 600, i32 11 }, {{.*}}* @[[INT]], {{.*}}* @[[SHORT]], i8 2 }
// CHECK-SANITIZE-ANYRECOVER-DAG: @[[LINE_700:.*]] = {{.*}}, i32 700, i32 10 }, {{.*}}* @[[INT]], {{.*}}* @[[SHORT]], i8 2 }
// CHECK-SANITIZE-ANYRECOVER-DAG: @[[LINE_800:.*]] = {{.*}}, i32 800, i32 10 }, {{.*}}* @[[INT]], {{.*}}* @[[SHORT]], i8 2 }

unsigned short t0(unsigned short x) {
// CHECK-NOSANITIZE-LABEL: @t0(
// CHECK-NOSANITIZE-NEXT: entry:
// CHECK-NOSANITIZE-NEXT: [[X_ADDR:%.*]] = alloca i16, align 2
// CHECK-NOSANITIZE-NEXT: store i16 [[X:%.*]], i16* [[X_ADDR]], align 2
// CHECK-NOSANITIZE-NEXT: [[X_RELOADED:%.*]] = load i16, i16* [[X_ADDR]], align 2
// CHECK-NOSANITIZE-NEXT: [[INC:%.*]] = add i16 [[X_RELOADED]], 1
// CHECK-NOSANITIZE-NEXT: store i16 [[INC]], i16* [[X_ADDR]], align 2
// CHECK-NOSANITIZE-NEXT: ret i16 [[X_RELOADED]]
//
// CHECK-SANITIZE-LABEL: @t0(
// CHECK-SANITIZE-NEXT: entry:
// CHECK-SANITIZE-NEXT: [[X_ADDR:%.*]] = alloca i16, align 2
// CHECK-SANITIZE-NEXT: store i16 [[X:%.*]], i16* [[X_ADDR]], align 2
// CHECK-SANITIZE-NEXT: [[X_RELOADED:%.*]] = load i16, i16* [[X_ADDR]], align 2
// CHECK-SANITIZE-NEXT: [[X_PROMOTED:%.*]] = zext i16 [[X_RELOADED]] to i32
// CHECK-SANITIZE-NEXT: [[INC:%.*]] = add i32 [[X_PROMOTED]], 1
// CHECK-SANITIZE-NEXT: [[X_PROMOTED_DEMOTED:%.*]] = trunc i32 [[INC]] to i16
// CHECK-SANITIZE-NEXT: [[X_PROMOTED_DEMOTED_PROMOTED:%.*]] = zext i16 [[X_PROMOTED_DEMOTED]] to i32, !nosanitize
// CHECK-SANITIZE-NEXT: [[TRUNCHECK:%.*]] = icmp eq i32 [[X_PROMOTED_DEMOTED_PROMOTED]], [[INC]], !nosanitize
// CHECK-SANITIZE-NEXT: br i1 [[TRUNCHECK]], label %[[CONT:.*]], label %[[HANDLER_IMPLICIT_X_PROMOTEDERSION:[^,]+]],{{.*}} !nosanitize
// CHECK-SANITIZE: [[HANDLER_IMPLICIT_X_PROMOTEDERSION]]:
// CHECK-SANITIZE-TRAP-NEXT: call void @llvm.ubsantrap(i8 7){{.*}}, !nosanitize
// CHECK-SANITIZE-ANYRECOVER-NEXT: [[TMP1:%.*]] = zext i32 [[INC]] to i64, !nosanitize
// CHECK-SANITIZE-ANYRECOVER-NEXT: [[TMP2:%.*]] = zext i16 [[X_PROMOTED_DEMOTED]] to i64, !nosanitize
// CHECK-SANITIZE-NORECOVER-NEXT: call void @__ubsan_handle_implicit_conversion_abort(i8* bitcast ({ {{{.*}}}, {{{.*}}}*, {{{.*}}}*, i8 }* @[[LINE_100]] to i8*), i64 [[TMP1]], i64 [[TMP2]]) #2, !nosanitize
// CHECK-SANITIZE-RECOVER-NEXT: call void @__ubsan_handle_implicit_conversion(i8* bitcast ({ {{{.*}}}, {{{.*}}}*, {{{.*}}}*, i8 }* @[[LINE_100]] to i8*), i64 [[TMP1]], i64 [[TMP2]]) #2, !nosanitize
// CHECK-SANITIZE-UNREACHABLE-NEXT: unreachable, !nosanitize
// CHECK-SANITIZE-RECOVER-NEXT: br label %[[CONT]], !nosanitize
// CHECK-SANITIZE: [[CONT]]:
// CHECK-SANITIZE-NEXT: store i16 [[X_PROMOTED_DEMOTED]], i16* [[X_ADDR]], align 2
// CHECK-SANITIZE-NEXT: ret i16 [[X_RELOADED]]
#line 100
  return x++;
}

unsigned short t1(unsigned short x) {
// CHECK-NOSANITIZE-LABEL: @t1(
// CHECK-NOSANITIZE-NEXT: entry:
// CHECK-NOSANITIZE-NEXT: [[X_ADDR:%.*]] = alloca i16, align 2
// CHECK-NOSANITIZE-NEXT: store i16 [[X:%.*]], i16* [[X_ADDR]], align 2
// CHECK-NOSANITIZE-NEXT: [[X_RELOADED:%.*]] = load i16, i16* [[X_ADDR]], align 2
// CHECK-NOSANITIZE-NEXT: [[INC:%.*]] = add i16 [[X_RELOADED]], -1
// CHECK-NOSANITIZE-NEXT: store i16 [[INC]], i16* [[X_ADDR]], align 2
// CHECK-NOSANITIZE-NEXT: ret i16 [[X_RELOADED]]
//
// CHECK-SANITIZE-LABEL: @t1(
// CHECK-SANITIZE-NEXT: entry:
// CHECK-SANITIZE-NEXT: [[X_ADDR:%.*]] = alloca i16, align 2
// CHECK-SANITIZE-NEXT: store i16 [[X:%.*]], i16* [[X_ADDR]], align 2
// CHECK-SANITIZE-NEXT: [[X_RELOADED:%.*]] = load i16, i16* [[X_ADDR]], align 2
// CHECK-SANITIZE-NEXT: [[X_PROMOTED:%.*]] = zext i16 [[X_RELOADED]] to i32
// CHECK-SANITIZE-NEXT: [[INC:%.*]] = add i32 [[X_PROMOTED]], -1
// CHECK-SANITIZE-NEXT: [[X_PROMOTED_DEMOTED:%.*]] = trunc i32 [[INC]] to i16
// CHECK-SANITIZE-NEXT: [[X_PROMOTED_DEMOTED_PROMOTED:%.*]] = zext i16 [[X_PROMOTED_DEMOTED]] to i32, !nosanitize
// CHECK-SANITIZE-NEXT: [[TRUNCHECK:%.*]] = icmp eq i32 [[X_PROMOTED_DEMOTED_PROMOTED]], [[INC]], !nosanitize
// CHECK-SANITIZE-NEXT: br i1 [[TRUNCHECK]], label %[[CONT:.*]], label %[[HANDLER_IMPLICIT_X_PROMOTEDERSION:[^,]+]],{{.*}} !nosanitize
// CHECK-SANITIZE: [[HANDLER_IMPLICIT_X_PROMOTEDERSION]]:
// CHECK-SANITIZE-TRAP-NEXT: call void @llvm.ubsantrap(i8 7){{.*}}, !nosanitize
// CHECK-SANITIZE-ANYRECOVER-NEXT: [[TMP1:%.*]] = zext i32 [[INC]] to i64, !nosanitize
// CHECK-SANITIZE-ANYRECOVER-NEXT: [[TMP2:%.*]] = zext i16 [[X_PROMOTED_DEMOTED]] to i64, !nosanitize
// CHECK-SANITIZE-NORECOVER-NEXT: call void @__ubsan_handle_implicit_conversion_abort(i8* bitcast ({ {{{.*}}}, {{{.*}}}*, {{{.*}}}*, i8 }* @[[LINE_200]] to i8*), i64 [[TMP1]], i64 [[TMP2]]) #2, !nosanitize
// CHECK-SANITIZE-RECOVER-NEXT: call void @__ubsan_handle_implicit_conversion(i8* bitcast ({ {{{.*}}}, {{{.*}}}*, {{{.*}}}*, i8 }* @[[LINE_200]] to i8*), i64 [[TMP1]], i64 [[TMP2]]) #2, !nosanitize
// CHECK-SANITIZE-UNREACHABLE-NEXT: unreachable, !nosanitize
// CHECK-SANITIZE-RECOVER-NEXT: br label %[[CONT]], !nosanitize
// CHECK-SANITIZE: [[CONT]]:
// CHECK-SANITIZE-NEXT: store i16 [[X_PROMOTED_DEMOTED]], i16* [[X_ADDR]], align 2
// CHECK-SANITIZE-NEXT: ret i16 [[X_RELOADED]]
#line 200
  return x--;
}

unsigned short t2(unsigned short x) {
// CHECK-NOSANITIZE-LABEL: @t2(
// CHECK-NOSANITIZE-NEXT: entry:
// CHECK-NOSANITIZE-NEXT: [[X_ADDR:%.*]] = alloca i16, align 2
// CHECK-NOSANITIZE-NEXT: store i16 [[X:%.*]], i16* [[X_ADDR]], align 2
// CHECK-NOSANITIZE-NEXT: [[X_RELOADED:%.*]] = load i16, i16* [[X_ADDR]], align 2
// CHECK-NOSANITIZE-NEXT: [[INC:%.*]] = add i16 [[X_RELOADED]], 1
// CHECK-NOSANITIZE-NEXT: store i16 [[INC]], i16* [[X_ADDR]], align 2
// CHECK-NOSANITIZE-NEXT: ret i16 [[INC]]
//
// CHECK-SANITIZE-LABEL: @t2(
// CHECK-SANITIZE-NEXT: entry:
// CHECK-SANITIZE-NEXT: [[X_ADDR:%.*]] = alloca i16, align 2
// CHECK-SANITIZE-NEXT: store i16 [[X:%.*]], i16* [[X_ADDR]], align 2
// CHECK-SANITIZE-NEXT: [[X_RELOADED:%.*]] = load i16, i16* [[X_ADDR]], align 2
// CHECK-SANITIZE-NEXT: [[X_PROMOTED:%.*]] = zext i16 [[X_RELOADED]] to i32
// CHECK-SANITIZE-NEXT: [[INC:%.*]] = add i32 [[X_PROMOTED]], 1
// CHECK-SANITIZE-NEXT: [[X_PROMOTED_DEMOTED:%.*]] = trunc i32 [[INC]] to i16
// CHECK-SANITIZE-NEXT: [[X_PROMOTED_DEMOTED_PROMOTED:%.*]] = zext i16 [[X_PROMOTED_DEMOTED]] to i32, !nosanitize
// CHECK-SANITIZE-NEXT: [[TRUNCHECK:%.*]] = icmp eq i32 [[X_PROMOTED_DEMOTED_PROMOTED]], [[INC]], !nosanitize
// CHECK-SANITIZE-NEXT: br i1 [[TRUNCHECK]], label %[[CONT:.*]], label %[[HANDLER_IMPLICIT_X_PROMOTEDERSION:[^,]+]],{{.*}} !nosanitize
// CHECK-SANITIZE: [[HANDLER_IMPLICIT_X_PROMOTEDERSION]]:
// CHECK-SANITIZE-TRAP-NEXT: call void @llvm.ubsantrap(i8 7){{.*}}, !nosanitize
// CHECK-SANITIZE-ANYRECOVER-NEXT: [[TMP1:%.*]] = zext i32 [[INC]] to i64, !nosanitize
// CHECK-SANITIZE-ANYRECOVER-NEXT: [[TMP2:%.*]] = zext i16 [[X_PROMOTED_DEMOTED]] to i64, !nosanitize
// CHECK-SANITIZE-NORECOVER-NEXT: call void @__ubsan_handle_implicit_conversion_abort(i8* bitcast ({ {{{.*}}}, {{{.*}}}*, {{{.*}}}*, i8 }* @[[LINE_300]] to i8*), i64 [[TMP1]], i64 [[TMP2]]) #2, !nosanitize
// CHECK-SANITIZE-RECOVER-NEXT: call void @__ubsan_handle_implicit_conversion(i8* bitcast ({ {{{.*}}}, {{{.*}}}*, {{{.*}}}*, i8 }* @[[LINE_300]] to i8*), i64 [[TMP1]], i64 [[TMP2]]) #2, !nosanitize
// CHECK-SANITIZE-UNREACHABLE-NEXT: unreachable, !nosanitize
// CHECK-SANITIZE-RECOVER-NEXT: br label %[[CONT]], !nosanitize
// CHECK-SANITIZE: [[CONT]]:
// CHECK-SANITIZE-NEXT: store i16 [[X_PROMOTED_DEMOTED]], i16* [[X_ADDR]], align 2
// CHECK-SANITIZE-NEXT: ret i16 [[X_PROMOTED_DEMOTED]]
#line 300
  return ++x;
}

unsigned short t3(unsigned short x) {
// CHECK-NOSANITIZE-LABEL: @t3(
// CHECK-NOSANITIZE-NEXT: entry:
// CHECK-NOSANITIZE-NEXT: [[X_ADDR:%.*]] = alloca i16, align 2
// CHECK-NOSANITIZE-NEXT: store i16 [[X:%.*]], i16* [[X_ADDR]], align 2
// CHECK-NOSANITIZE-NEXT: [[X_RELOADED:%.*]] = load i16, i16* [[X_ADDR]], align 2
// CHECK-NOSANITIZE-NEXT: [[INC:%.*]] = add i16 [[X_RELOADED]], -1
// CHECK-NOSANITIZE-NEXT: store i16 [[INC]], i16* [[X_ADDR]], align 2
// CHECK-NOSANITIZE-NEXT: ret i16 [[INC]]
//
// CHECK-SANITIZE-LABEL: @t3(
// CHECK-SANITIZE-NEXT: entry:
// CHECK-SANITIZE-NEXT: [[X_ADDR:%.*]] = alloca i16, align 2
// CHECK-SANITIZE-NEXT: store i16 [[X:%.*]], i16* [[X_ADDR]], align 2
// CHECK-SANITIZE-NEXT: [[X_RELOADED:%.*]] = load i16, i16* [[X_ADDR]], align 2
// CHECK-SANITIZE-NEXT: [[X_PROMOTED:%.*]] = zext i16 [[X_RELOADED]] to i32
// CHECK-SANITIZE-NEXT: [[INC:%.*]] = add i32 [[X_PROMOTED]], -1
// CHECK-SANITIZE-NEXT: [[X_PROMOTED_DEMOTED:%.*]] = trunc i32 [[INC]] to i16
// CHECK-SANITIZE-NEXT: [[X_PROMOTED_DEMOTED_PROMOTED:%.*]] = zext i16 [[X_PROMOTED_DEMOTED]] to i32, !nosanitize
// CHECK-SANITIZE-NEXT: [[TRUNCHECK:%.*]] = icmp eq i32 [[X_PROMOTED_DEMOTED_PROMOTED]], [[INC]], !nosanitize
// CHECK-SANITIZE-NEXT: br i1 [[TRUNCHECK]], label %[[CONT:.*]], label %[[HANDLER_IMPLICIT_X_PROMOTEDERSION:[^,]+]],{{.*}} !nosanitize
// CHECK-SANITIZE: [[HANDLER_IMPLICIT_X_PROMOTEDERSION]]:
// CHECK-SANITIZE-TRAP-NEXT: call void @llvm.ubsantrap(i8 7){{.*}}, !nosanitize
// CHECK-SANITIZE-ANYRECOVER-NEXT: [[TMP1:%.*]] = zext i32 [[INC]] to i64, !nosanitize
// CHECK-SANITIZE-ANYRECOVER-NEXT: [[TMP2:%.*]] = zext i16 [[X_PROMOTED_DEMOTED]] to i64, !nosanitize
// CHECK-SANITIZE-NORECOVER-NEXT: call void @__ubsan_handle_implicit_conversion_abort(i8* bitcast ({ {{{.*}}}, {{{.*}}}*, {{{.*}}}*, i8 }* @[[LINE_400]] to i8*), i64 [[TMP1]], i64 [[TMP2]]) #2, !nosanitize
// CHECK-SANITIZE-RECOVER-NEXT: call void @__ubsan_handle_implicit_conversion(i8* bitcast ({ {{{.*}}}, {{{.*}}}*, {{{.*}}}*, i8 }* @[[LINE_400]] to i8*), i64 [[TMP1]], i64 [[TMP2]]) #2, !nosanitize
// CHECK-SANITIZE-UNREACHABLE-NEXT: unreachable, !nosanitize
// CHECK-SANITIZE-RECOVER-NEXT: br label %[[CONT]], !nosanitize
// CHECK-SANITIZE: [[CONT]]:
// CHECK-SANITIZE-NEXT: store i16 [[X_PROMOTED_DEMOTED]], i16* [[X_ADDR]], align 2
// CHECK-SANITIZE-NEXT: ret i16 [[X_PROMOTED_DEMOTED]]
#line 400
  return --x;
}

signed short t4(signed short x) {
// CHECK-NOSANITIZE-LABEL: @t4(
// CHECK-NOSANITIZE-NEXT: entry:
// CHECK-NOSANITIZE-NEXT: [[X_ADDR:%.*]] = alloca i16, align 2
// CHECK-NOSANITIZE-NEXT: store i16 [[X:%.*]], i16* [[X_ADDR]], align 2
// CHECK-NOSANITIZE-NEXT: [[X_RELOADED:%.*]] = load i16, i16* [[X_ADDR]], align 2
// CHECK-NOSANITIZE-NEXT: [[INC:%.*]] = add i16 [[X_RELOADED]], 1
// CHECK-NOSANITIZE-NEXT: store i16 [[INC]], i16* [[X_ADDR]], align 2
// CHECK-NOSANITIZE-NEXT: ret i16 [[X_RELOADED]]
//
// CHECK-SANITIZE-LABEL: @t4(
// CHECK-SANITIZE-NEXT: entry:
// CHECK-SANITIZE-NEXT: [[X_ADDR:%.*]] = alloca i16, align 2
// CHECK-SANITIZE-NEXT: store i16 [[X:%.*]], i16* [[X_ADDR]], align 2
// CHECK-SANITIZE-NEXT: [[X_RELOADED:%.*]] = load i16, i16* [[X_ADDR]], align 2
// CHECK-SANITIZE-NEXT: [[X_PROMOTED:%.*]] = sext i16 [[X_RELOADED]] to i32
// CHECK-SANITIZE-NEXT: [[INC:%.*]] = add i32 [[X_PROMOTED]], 1
// CHECK-SANITIZE-NEXT: [[X_PROMOTED_DEMOTED:%.*]] = trunc i32 [[INC]] to i16
// CHECK-SANITIZE-NEXT: [[X_PROMOTED_DEMOTED_PROMOTED:%.*]] = sext i16 [[X_PROMOTED_DEMOTED]] to i32, !nosanitize
// CHECK-SANITIZE-NEXT: [[TRUNCHECK:%.*]] = icmp eq i32 [[X_PROMOTED_DEMOTED_PROMOTED]], [[INC]], !nosanitize
// CHECK-SANITIZE-NEXT: br i1 [[TRUNCHECK]], label %[[CONT:.*]], label %[[HANDLER_IMPLICIT_X_PROMOTEDERSION:[^,]+]],{{.*}} !nosanitize
// CHECK-SANITIZE: [[HANDLER_IMPLICIT_X_PROMOTEDERSION]]:
// CHECK-SANITIZE-TRAP-NEXT: call void @llvm.ubsantrap(i8 7){{.*}}, !nosanitize
// CHECK-SANITIZE-ANYRECOVER-NEXT: [[TMP1:%.*]] = zext i32 [[INC]] to i64, !nosanitize
// CHECK-SANITIZE-ANYRECOVER-NEXT: [[TMP2:%.*]] = zext i16 [[X_PROMOTED_DEMOTED]] to i64, !nosanitize
// CHECK-SANITIZE-NORECOVER-NEXT: call void @__ubsan_handle_implicit_conversion_abort(i8* bitcast ({ {{{.*}}}, {{{.*}}}*, {{{.*}}}*, i8 }* @[[LINE_500]] to i8*), i64 [[TMP1]], i64 [[TMP2]]) #2, !nosanitize
// CHECK-SANITIZE-RECOVER-NEXT: call void @__ubsan_handle_implicit_conversion(i8* bitcast ({ {{{.*}}}, {{{.*}}}*, {{{.*}}}*, i8 }* @[[LINE_500]] to i8*), i64 [[TMP1]], i64 [[TMP2]]) #2, !nosanitize
// CHECK-SANITIZE-UNREACHABLE-NEXT: unreachable, !nosanitize
// CHECK-SANITIZE-RECOVER-NEXT: br label %[[CONT]], !nosanitize
// CHECK-SANITIZE: [[CONT]]:
// CHECK-SANITIZE-NEXT: store i16 [[X_PROMOTED_DEMOTED]], i16* [[X_ADDR]], align 2
// CHECK-SANITIZE-NEXT: ret i16 [[X_RELOADED]]
#line 500
  return x++;
}

signed short t5(signed short x) {
// CHECK-NOSANITIZE-LABEL: @t5(
// CHECK-NOSANITIZE-NEXT: entry:
// CHECK-NOSANITIZE-NEXT: [[X_ADDR:%.*]] = alloca i16, align 2
// CHECK-NOSANITIZE-NEXT: store i16 [[X:%.*]], i16* [[X_ADDR]], align 2
// CHECK-NOSANITIZE-NEXT: [[X_RELOADED:%.*]] = load i16, i16* [[X_ADDR]], align 2
// CHECK-NOSANITIZE-NEXT: [[INC:%.*]] = add i16 [[X_RELOADED]], -1
// CHECK-NOSANITIZE-NEXT: store i16 [[INC]], i16* [[X_ADDR]], align 2
// CHECK-NOSANITIZE-NEXT: ret i16 [[X_RELOADED]]
//
// CHECK-SANITIZE-LABEL: @t5(
// CHECK-SANITIZE-NEXT: entry:
// CHECK-SANITIZE-NEXT: [[X_ADDR:%.*]] = alloca i16, align 2
// CHECK-SANITIZE-NEXT: store i16 [[X:%.*]], i16* [[X_ADDR]], align 2
// CHECK-SANITIZE-NEXT: [[X_RELOADED:%.*]] = load i16, i16* [[X_ADDR]], align 2
// CHECK-SANITIZE-NEXT: [[X_PROMOTED:%.*]] = sext i16 [[X_RELOADED]] to i32
// CHECK-SANITIZE-NEXT: [[INC:%.*]] = add i32 [[X_PROMOTED]], -1
// CHECK-SANITIZE-NEXT: [[X_PROMOTED_DEMOTED:%.*]] = trunc i32 [[INC]] to i16
// CHECK-SANITIZE-NEXT: [[X_PROMOTED_DEMOTED_PROMOTED:%.*]] = sext i16 [[X_PROMOTED_DEMOTED]] to i32, !nosanitize
// CHECK-SANITIZE-NEXT: [[TRUNCHECK:%.*]] = icmp eq i32 [[X_PROMOTED_DEMOTED_PROMOTED]], [[INC]], !nosanitize
// CHECK-SANITIZE-NEXT: br i1 [[TRUNCHECK]], label %[[CONT:.*]], label %[[HANDLER_IMPLICIT_X_PROMOTEDERSION:[^,]+]],{{.*}} !nosanitize
// CHECK-SANITIZE: [[HANDLER_IMPLICIT_X_PROMOTEDERSION]]:
// CHECK-SANITIZE-TRAP-NEXT: call void @llvm.ubsantrap(i8 7){{.*}}, !nosanitize
// CHECK-SANITIZE-ANYRECOVER-NEXT: [[TMP1:%.*]] = zext i32 [[INC]] to i64, !nosanitize
// CHECK-SANITIZE-ANYRECOVER-NEXT: [[TMP2:%.*]] = zext i16 [[X_PROMOTED_DEMOTED]] to i64, !nosanitize
// CHECK-SANITIZE-NORECOVER-NEXT: call void @__ubsan_handle_implicit_conversion_abort(i8* bitcast ({ {{{.*}}}, {{{.*}}}*, {{{.*}}}*, i8 }* @[[LINE_600]] to i8*), i64 [[TMP1]], i64 [[TMP2]]) #2, !nosanitize
// CHECK-SANITIZE-RECOVER-NEXT: call void @__ubsan_handle_implicit_conversion(i8* bitcast ({ {{{.*}}}, {{{.*}}}*, {{{.*}}}*, i8 }* @[[LINE_600]] to i8*), i64 [[TMP1]], i64 [[TMP2]]) #2, !nosanitize
// CHECK-SANITIZE-UNREACHABLE-NEXT: unreachable, !nosanitize
// CHECK-SANITIZE-RECOVER-NEXT: br label %[[CONT]], !nosanitize
// CHECK-SANITIZE: [[CONT]]:
// CHECK-SANITIZE-NEXT: store i16 [[X_PROMOTED_DEMOTED]], i16* [[X_ADDR]], align 2
// CHECK-SANITIZE-NEXT: ret i16 [[X_RELOADED]]
#line 600
  return x--;
}

signed short t6(signed short x) {
|
|
|
|
// CHECK-NOSANITIZE-LABEL: @t6(
|
|
|
|
// CHECK-NOSANITIZE-NEXT: entry:
|
|
|
|
// CHECK-NOSANITIZE-NEXT: [[X_ADDR:%.*]] = alloca i16, align 2
|
|
|
|
// CHECK-NOSANITIZE-NEXT: store i16 [[X:%.*]], i16* [[X_ADDR]], align 2
|
|
|
|
// CHECK-NOSANITIZE-NEXT: [[X_RELOADED:%.*]] = load i16, i16* [[X_ADDR]], align 2
|
|
|
|
// CHECK-NOSANITIZE-NEXT: [[INC:%.*]] = add i16 [[X_RELOADED]], 1
|
|
|
|
// CHECK-NOSANITIZE-NEXT: store i16 [[INC]], i16* [[X_ADDR]], align 2
|
|
|
|
// CHECK-NOSANITIZE-NEXT: ret i16 [[INC]]
|
|
|
|
//
|
|
|
|
// CHECK-SANITIZE-LABEL: @t6(
|
|
|
|
// CHECK-SANITIZE-NEXT: entry:
|
|
|
|
// CHECK-SANITIZE-NEXT: [[X_ADDR:%.*]] = alloca i16, align 2
|
|
|
|
// CHECK-SANITIZE-NEXT: store i16 [[X:%.*]], i16* [[X_ADDR]], align 2
|
|
|
|
// CHECK-SANITIZE-NEXT: [[X_RELOADED:%.*]] = load i16, i16* [[X_ADDR]], align 2
|
|
|
|
// CHECK-SANITIZE-NEXT: [[X_PROMOTED:%.*]] = sext i16 [[X_RELOADED]] to i32
|
|
|
|
// CHECK-SANITIZE-NEXT: [[INC:%.*]] = add i32 [[X_PROMOTED]], 1
|
|
|
|
// CHECK-SANITIZE-NEXT: [[X_PROMOTED_DEMOTED:%.*]] = trunc i32 [[INC]] to i16
|
|
|
|
// CHECK-SANITIZE-NEXT: [[X_PROMOTED_DEMOTED_PROMOTED:%.*]] = sext i16 [[X_PROMOTED_DEMOTED]] to i32, !nosanitize
|
|
|
|
// CHECK-SANITIZE-NEXT: [[TRUNCHECK:%.*]] = icmp eq i32 [[X_PROMOTED_DEMOTED_PROMOTED]], [[INC]], !nosanitize
|
|
|
|
// CHECK-SANITIZE-NEXT: br i1 [[TRUNCHECK]], label %[[CONT:.*]], label %[[HANDLER_IMPLICIT_X_PROMOTEDERSION:[^,]+]],{{.*}} !nosanitize
|
|
|
|
// CHECK-SANITIZE: [[HANDLER_IMPLICIT_X_PROMOTEDERSION]]:
|
2020-10-21 17:11:25 +08:00
|
|
|
// CHECK-SANITIZE-TRAP-NEXT: call void @llvm.ubsantrap(i8 7){{.*}}, !nosanitize
// CHECK-SANITIZE-ANYRECOVER-NEXT: [[TMP1:%.*]] = zext i32 [[INC]] to i64, !nosanitize
// CHECK-SANITIZE-ANYRECOVER-NEXT: [[TMP2:%.*]] = zext i16 [[X_PROMOTED_DEMOTED]] to i64, !nosanitize
// CHECK-SANITIZE-NORECOVER-NEXT: call void @__ubsan_handle_implicit_conversion_abort(i8* bitcast ({ {{{.*}}}, {{{.*}}}*, {{{.*}}}*, i8 }* @[[LINE_700]] to i8*), i64 [[TMP1]], i64 [[TMP2]]) #2, !nosanitize
// CHECK-SANITIZE-RECOVER-NEXT: call void @__ubsan_handle_implicit_conversion(i8* bitcast ({ {{{.*}}}, {{{.*}}}*, {{{.*}}}*, i8 }* @[[LINE_700]] to i8*), i64 [[TMP1]], i64 [[TMP2]]) #2, !nosanitize
// CHECK-SANITIZE-UNREACHABLE-NEXT: unreachable, !nosanitize
// CHECK-SANITIZE-RECOVER-NEXT: br label %[[CONT]], !nosanitize
// CHECK-SANITIZE: [[CONT]]:
// CHECK-SANITIZE-NEXT: store i16 [[X_PROMOTED_DEMOTED]], i16* [[X_ADDR]], align 2
// CHECK-SANITIZE-NEXT: ret i16 [[X_PROMOTED_DEMOTED]]
#line 700
return ++x;
}
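// t7: pre-decrement of a signed short. Identical promote/demote pattern to
// t6, except the wide operation is `add i32 [[X_PROMOTED]], -1`.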
signed short t7(signed short x) {
// CHECK-NOSANITIZE-LABEL: @t7(
// CHECK-NOSANITIZE-NEXT: entry:
// CHECK-NOSANITIZE-NEXT: [[X_ADDR:%.*]] = alloca i16, align 2
// CHECK-NOSANITIZE-NEXT: store i16 [[X:%.*]], i16* [[X_ADDR]], align 2
// CHECK-NOSANITIZE-NEXT: [[X_RELOADED:%.*]] = load i16, i16* [[X_ADDR]], align 2
// CHECK-NOSANITIZE-NEXT: [[INC:%.*]] = add i16 [[X_RELOADED]], -1
// CHECK-NOSANITIZE-NEXT: store i16 [[INC]], i16* [[X_ADDR]], align 2
// CHECK-NOSANITIZE-NEXT: ret i16 [[INC]]
//
// CHECK-SANITIZE-LABEL: @t7(
// CHECK-SANITIZE-NEXT: entry:
// CHECK-SANITIZE-NEXT: [[X_ADDR:%.*]] = alloca i16, align 2
// CHECK-SANITIZE-NEXT: store i16 [[X:%.*]], i16* [[X_ADDR]], align 2
// CHECK-SANITIZE-NEXT: [[X_RELOADED:%.*]] = load i16, i16* [[X_ADDR]], align 2
// CHECK-SANITIZE-NEXT: [[X_PROMOTED:%.*]] = sext i16 [[X_RELOADED]] to i32
// CHECK-SANITIZE-NEXT: [[INC:%.*]] = add i32 [[X_PROMOTED]], -1
// CHECK-SANITIZE-NEXT: [[X_PROMOTED_DEMOTED:%.*]] = trunc i32 [[INC]] to i16
// CHECK-SANITIZE-NEXT: [[X_PROMOTED_DEMOTED_PROMOTED:%.*]] = sext i16 [[X_PROMOTED_DEMOTED]] to i32, !nosanitize
// CHECK-SANITIZE-NEXT: [[TRUNCHECK:%.*]] = icmp eq i32 [[X_PROMOTED_DEMOTED_PROMOTED]], [[INC]], !nosanitize
// CHECK-SANITIZE-NEXT: br i1 [[TRUNCHECK]], label %[[CONT:.*]], label %[[HANDLER_IMPLICIT_CONVERSION:[^,]+]],{{.*}} !nosanitize
// CHECK-SANITIZE: [[HANDLER_IMPLICIT_CONVERSION]]:
// CHECK-SANITIZE-TRAP-NEXT: call void @llvm.ubsantrap(i8 7){{.*}}, !nosanitize
// CHECK-SANITIZE-ANYRECOVER-NEXT: [[TMP1:%.*]] = zext i32 [[INC]] to i64, !nosanitize
// CHECK-SANITIZE-ANYRECOVER-NEXT: [[TMP2:%.*]] = zext i16 [[X_PROMOTED_DEMOTED]] to i64, !nosanitize
// CHECK-SANITIZE-NORECOVER-NEXT: call void @__ubsan_handle_implicit_conversion_abort(i8* bitcast ({ {{{.*}}}, {{{.*}}}*, {{{.*}}}*, i8 }* @[[LINE_800]] to i8*), i64 [[TMP1]], i64 [[TMP2]]) #2, !nosanitize
// CHECK-SANITIZE-RECOVER-NEXT: call void @__ubsan_handle_implicit_conversion(i8* bitcast ({ {{{.*}}}, {{{.*}}}*, {{{.*}}}*, i8 }* @[[LINE_800]] to i8*), i64 [[TMP1]], i64 [[TMP2]]) #2, !nosanitize
// CHECK-SANITIZE-UNREACHABLE-NEXT: unreachable, !nosanitize
// CHECK-SANITIZE-RECOVER-NEXT: br label %[[CONT]], !nosanitize
// CHECK-SANITIZE: [[CONT]]:
// CHECK-SANITIZE-NEXT: store i16 [[X_PROMOTED_DEMOTED]], i16* [[X_ADDR]], align 2
// CHECK-SANITIZE-NEXT: ret i16 [[X_PROMOTED_DEMOTED]]
#line 800
return --x;
}
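// Illustrative sketch only, not part of the FileCheck-verified functions
// above: a hypothetical caller whose increment is guaranteed lossy, so that a
// build with -fsanitize=implicit-conversion reaches
// __ubsan_handle_implicit_conversion (or the trap/abort variant) at runtime.
// The function name and constant are assumptions added for illustration.
signed short t8_lossy_increment_example(void) {
  signed short x = 32767; // SHRT_MAX: ++x promotes to int (32768); the
                          // trunc back to i16 changes the value, so the
                          // truncation check fails and the handler fires.
  return ++x;
}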