llvm-project/clang/test/Analysis/symbol-simplification-fixpo...


[analyzer][solver] Iterate to a fixpoint during symbol simplification with constants

D103314 introduced symbol simplification when a new constant constraint is added. Currently, we simplify existing equivalence classes by iterating over all of their members and trying to simplify each member symbol with `simplifySVal`. At the end of such a simplification round we may end up introducing a new constant constraint. Example:

```
if (a + b + c != d)
  return;
if (c + b != 0)
  return;
// Simplification starts here.
if (b != 0)
  return;
```

The `c == 0` constraint is the result of the first simplification iteration. However, we could do another round of simplification to conclude that `a == d`. In general, we can keep iterating until we reach a fixpoint.

We can reach a fixpoint by recursively calling `State->assume` on the newly simplified symbol. Calling `State->assume` re-ignites the whole assume machinery (along with, e.g., adjustment handling).

Why should we do this? By reaching a fixpoint in simplification we are capable of discovering infeasible states at the moment the **first** constant constraint is introduced. Let's modify the previous example just a bit and consider what happens without the fixpoint iteration:

```
if (a + b + c != d)
  return;
if (c + b != 0)
  return;
// Adding a new constraint.
if (a == d)
  return;
// This brings in a contradiction.
if (b != 0)
  return;
clang_analyzer_warnIfReached(); // This produces a warning.
                                // The path is already infeasible...
if (c == 0) // ...but we realize that only when we evaluate `c == 0`.
  return;
```

What happens currently, without the fixpoint iteration? As the inline comments suggest, we realize that we are on an infeasible path only after we have already stepped onto it. With fixpoint iteration we can detect the infeasibility before taking that step: `clang_analyzer_warnIfReached` does not warn in the above example, because we discover the contradiction already during the evaluation of `b == 0`.

The engine and the checkers rely on the invariant that either `assume(Cond)` or `assume(!Cond)` must be feasible. This is in fact asserted by the so-called expensive checks (LLVM_ENABLE_EXPENSIVE_CHECKS), and StdLibraryFunctionsChecker notably has a very similar assertion.

Before this patch, we simply added the simplified symbol to the equivalence class. With this patch, after adding the simplified symbol, we also remove the old (more complex) symbol from the members of the equivalence class (`ClassMembers`). Removing the old symbol is beneficial because the next simplification iteration does not have to consider it again. Contrary to how we handle `ClassMembers`, we do not remove the old Sym->Class relation from the `ClassMap`. This is important for two reasons: (1) the constraints of the old symbol can still be found via the equivalence class it used to be a member of, and (2) we spare one removal and thus one additional tree in the forest of `ClassMap`.

Performance and complexity: let us assume that a State has N non-trivial equivalence classes and that all constraints and disequality info relate to non-trivial classes. In the worst case, we can simplify only one symbol of one class in each iteration. The number of symbols in one class cannot grow, because we replace the old symbol with the simplified one, and the number of equivalence classes can only decrease, because the algorithm optionally performs merge operations. We therefore need N iterations to reach the fixpoint, so the number of steps in the worst case is proportional to `N*N`. Empirical results (attached) show a barely noticeable run-time and peak-memory discrepancy compared to the baseline; in my opinion, these differences could be the result of measurement error. This worst-case analysis also covers States where trivial classes in the constraints and in the disequality map are turned into non-trivial classes, because the algorithm performs merge operations, and a merge operation on two trivial classes results in one non-trivial class.

Differential Revision: https://reviews.llvm.org/D106823
2021-07-27 04:55:44 +08:00
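
To make the fixpoint mechanism concrete, here is a minimal, self-contained C++ model of the idea, not the analyzer's actual code: `State`, `simplify`, and `assumeZeroFixpoint` are illustrative stand-ins for `ProgramState`, `simplifySVal`, and the recursive `State->assume` call described above. It replays the first example: assuming `b == 0` lets one round derive `c == 0` from `c + b == 0`, and a second round reduces `a + b + c == d` to `a == d`.

```
// Illustrative sketch only; data structures and names are assumptions,
// not the RangeConstraintManager's real API.
#include <iostream>
#include <set>
#include <string>
#include <utility>
#include <vector>

// A "symbol" is modeled as a sum of variables: {a,b,c} stands for a+b+c.
using Sym = std::set<std::string>;

// Equations of the form sum(first) == sum(second); `zeros` holds the
// variables already constrained to the constant 0.
struct State {
  std::vector<std::pair<Sym, Sym>> eqs;
  std::set<std::string> zeros;
};

// Drop variables known to be zero from a sum (stand-in for simplifySVal).
static Sym simplify(const Sym &S, const std::set<std::string> &Zeros) {
  Sym Out;
  for (const auto &V : S)
    if (!Zeros.count(V))
      Out.insert(V);
  return Out;
}

// Assume V == 0, then iterate: every newly learned constant is fed back
// into the loop (the re-"assume"), until a full pass learns nothing new.
static void assumeZeroFixpoint(State &St, const std::string &V) {
  St.zeros.insert(V);
  bool Changed = true;
  while (Changed) { // fixpoint loop
    Changed = false;
    for (auto &[L, R] : St.eqs) {
      L = simplify(L, St.zeros);
      R = simplify(R, St.zeros);
      // If one side vanished and the other is a single variable,
      // that variable must also be zero: a new constant constraint.
      if (L.empty() && R.size() == 1 && !St.zeros.count(*R.begin())) {
        St.zeros.insert(*R.begin());
        Changed = true;
      }
      if (R.empty() && L.size() == 1 && !St.zeros.count(*L.begin())) {
        St.zeros.insert(*L.begin());
        Changed = true;
      }
    }
  }
}

int main() {
  State St;
  St.eqs.push_back({{"a", "b", "c"}, {"d"}}); // a + b + c == d
  St.eqs.push_back({{"b", "c"}, {}});         // c + b == 0
  assumeZeroFixpoint(St, "b");                // new constraint: b == 0
  // Iteration 1 learns c == 0; iteration 2 reduces the first equation
  // to a == d, mirroring the two rounds described above.
  for (const auto &[L, R] : St.eqs) {
    for (const auto &V : L) std::cout << V << ' ';
    std::cout << "== ";
    for (const auto &V : R) std::cout << V << ' ';
    std::cout << '\n';
  }
  std::cout << "zeros:";
  for (const auto &Z : St.zeros) std::cout << ' ' << Z;
  std::cout << '\n';
}
```

The point mirrored here is that the loop terminates only when a complete pass produces no new constant constraint, which is exactly the fixpoint condition the patch establishes via the recursive `State->assume` call.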
// RUN: %clang_analyze_cc1 %s \
// RUN:   -analyzer-checker=core \
// RUN:   -analyzer-checker=debug.ExprInspection \
// RUN:   2>&1 | FileCheck %s

// In this test we check whether the solver's symbol simplification mechanism
// is capable of reaching a fixpoint. This should be done after TWO iterations.

void clang_analyzer_printState();

void test(int a, int b, int c, int d) {
  if (a + b + c != d)
    return;
  if (c + b != 0)
    return;
  clang_analyzer_printState();
// CHECK: "constraints": [
// CHECK-NEXT: { "symbol": "(((reg_$0<int a>) + (reg_$1<int b>)) + (reg_$2<int c>)) != (reg_$3<int d>)", "range": "{ [0, 0] }" },
// CHECK-NEXT: { "symbol": "(reg_$2<int c>) + (reg_$1<int b>)", "range": "{ [0, 0] }" }
// CHECK-NEXT: ],
// CHECK-NEXT: "equivalence_classes": [
// CHECK-NEXT: [ "((reg_$0<int a>) + (reg_$1<int b>)) + (reg_$2<int c>)", "reg_$3<int d>" ]
// CHECK-NEXT: ],
// CHECK-NEXT: "disequality_info": null,
  // Simplification starts here.
  if (b != 0)
    return;
  clang_analyzer_printState();
// CHECK: "constraints": [
// CHECK-NEXT: { "symbol": "(((reg_$0<int a>) + (reg_$1<int b>)) + (reg_$2<int c>)) != (reg_$3<int d>)", "range": "{ [0, 0] }" },
// CHECK-NEXT: { "symbol": "((reg_$0<int a>) + (reg_$2<int c>)) != (reg_$3<int d>)", "range": "{ [0, 0] }" },
// CHECK-NEXT: { "symbol": "(reg_$0<int a>) != (reg_$3<int d>)", "range": "{ [0, 0] }" },
// CHECK-NEXT: { "symbol": "(reg_$2<int c>) + (reg_$1<int b>)", "range": "{ [0, 0] }" },
// CHECK-NEXT: { "symbol": "reg_$1<int b>", "range": "{ [0, 0] }" },
// CHECK-NEXT: { "symbol": "reg_$2<int c>", "range": "{ [0, 0] }" }
// CHECK-NEXT: ],
// CHECK-NEXT: "equivalence_classes": [
// CHECK-NEXT: [ "(((reg_$0<int a>) + (reg_$1<int b>)) + (reg_$2<int c>)) != (reg_$3<int d>)", "((reg_$0<int a>) + (reg_$2<int c>)) != (reg_$3<int d>)", "(reg_$0<int a>) != (reg_$3<int d>)" ],
// CHECK-NEXT: [ "((reg_$0<int a>) + (reg_$1<int b>)) + (reg_$2<int c>)", "(reg_$0<int a>) + (reg_$2<int c>)", "reg_$0<int a>", "reg_$3<int d>" ],
// CHECK-NEXT: [ "(reg_$2<int c>) + (reg_$1<int b>)", "reg_$2<int c>" ]
// CHECK-NEXT: ],
// CHECK-NEXT: "disequality_info": null,
  // Keep the symbols and the constraints alive.
  (void)(a * b * c * d);
  return;
}