Summary:
Dear All,
We have been looking at the following problem: any code after a constant bound loop is not analyzed, because of the limit on how many times the same block is visited, as described in Bugzilla reports #7638 and #23438. This problem is of interest to us because we have identified significant bugs that the checkers are not locating. We have been discussing a solution involving ranges as a longer-term project, but I would like to propose a patch to improve the current implementation.
Example issue:
```
for (int i = 0; i < 1000; ++i) { /* ...something... */ }
int *p = 0;
*p = 0xDEADBEEF;
```
The proposal is to go through the first and last iterations of the loop. The patch creates an exploded node for the approximate last iteration of constant bound loops, before the max loop limit / block visit limit is reached. It does this by identifying the variable in the loop condition and finding the value which is “one away” from the loop condition becoming false. For example, if the condition is (x < 10), then an exploded node is created where the value of x is 9. Evaluating the loop body with x = 9 will then result in the analysis continuing after the loop, provided x is incremented.
The patch passes all the tests, with some modifications to coverage.c to make ‘function_which_gives_up’ continue to give up, since the changes allowed the analysis to progress past the loop.
This patch does introduce possible false positives, as a result of not knowing the state of variables which might be modified in the loop. I believe that, as a user, I would rather have false positives after loops than no analysis at all. I understand this may not be the common opinion and am interested in hearing your views. Break statements are also not yet handled. A more advanced implementation of this approach might be able to consider other conditions in the loop, which would allow paths leading to breaks to be analyzed.
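To make the trade-off concrete, here is a hypothetical example of the kind of false positive this can introduce (made-up code, not taken from the patch or its tests): the assignment to p happens only on a middle iteration, which the first/last-iteration paths never see.
```
int g;

void example(void) {
  int *p = 0;
  for (int i = 0; i < 100; ++i) {
    if (i == 50)
      p = &g;  /* only happens on a middle iteration */
  }
  *p = 1;      /* could now be reported as a null dereference,
                  even though p is always set at run time */
}
```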
Lastly, I have performed a study on large code bases and I think there is little benefit in having “max-loop” default to 4 with the patch. For variable bound loops this tends to result in duplicated analysis after the loop, and it makes little difference to any constant bound loop which will do more than a few iterations. It might be beneficial to lower the default to 2, especially for the shallow analysis setting.
Please let me know your opinions on this approach to processing constant bound loops and the patch itself.
Regards,
Sean Eveson
SN Systems - Sony Computer Entertainment Group
Reviewers: jordan_rose, krememek, xazax.hun, zaks.anna, dcoughlin
Subscribers: krememek, xazax.hun, cfe-commits
Differential Revision: http://reviews.llvm.org/D12358
llvm-svn: 251621
Add an option (-analyzer-config min-blocks-for-inline-large=14) to control the function
size the inliner considers large, in relation to "max-times-inline-large". The option
defaults to the original hard-coded value, which I believe should be adjustable along with
the other inlining settings.
The analyzer-config test has been modified so that the analyzer will reach the
getMinBlocksForInlineLarge() method and store the result in the ConfigTable, to ensure it
is dumped by the debug checker.
A patch by Sean Eveson!
Differential Revision: http://reviews.llvm.org/D12406
llvm-svn: 247463
The analyzer doesn't currently expect CFG blocks with terminators to be
empty, but this can happen when generating conditional destructors for
a complex logical expression, such as (a && (b || Temp{})). Moreover,
the branch conditions for these expressions are not persisted in the
state. More work is needed here even just to handle noreturn destructors.
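For reference, a sketch of the kind of code that produces such a block (Temp stands in for any type with a user-provided destructor that is convertible to bool; the function name is made up):
```
struct Temp {
  ~Temp() {}                              // user-provided (non-trivial) destructor
  operator bool() const { return true; }  // lets Temp{} appear as a condition
};

bool f(bool a, bool b) {
  // The destructor of the Temp{} temporary runs only on the paths where the
  // temporary was actually constructed, which adds extra branches to the CFG.
  return a && (b || Temp{});
}
```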
This reverts r186498.
llvm-svn: 186925
Summary:
This patch enables ExprEngine to reason about temporary object destructors.
However, these destructor calls are never inlined, since this feature is still
broken. Still, this is sufficient to properly handle noreturn temporary
destructors and close bug #15599. I have also enabled the cfg-temporary-dtors
analyzer option by default.
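As a rough illustration of the noreturn temporary destructor pattern this enables (a sketch with made-up names, not one of the patch's test cases):
```
#include <cstdlib>

struct Fatal {
  __attribute__((noreturn)) ~Fatal() { std::abort(); }
};

int deref(int *p) {
  if (!p)
    Fatal{};   // the temporary's destructor never returns,
               // so the p == 0 path ends here
  return *p;   // with temporary destructors modeled, no null-dereference
               // warning should be emitted on this line
}
```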
Reviewers: jordan_rose
CC: cfe-commits
Differential Revision: http://llvm-reviews.chandlerc.com/D1131
llvm-svn: 186498
The analyzer uses LazyCompoundVals to represent rvalues of aggregate types,
most importantly structs and arrays. This allows us to efficiently copy
around an entire struct, rather than doing a memberwise load every time a
struct rvalue is encountered. This can also keep memory usage down by
allowing several structs to "share" the same snapshotted bindings.
However, /lookup/ through LazyCompoundVals can be expensive, especially
since they can end up chaining back to the original value. While we try
to reuse LazyCompoundVals whenever it's safe, and cache information about
this transitivity, the fact is it's sometimes just not a good idea to
perpetuate LazyCompoundVals -- the tradeoffs just aren't worth it.
This commit changes RegionStore so that binding a LazyCompoundVal to a struct
will do a memberwise copy if the struct is simple enough. Today's definition
of "simple enough" is "up to N scalar members" (see below), but that could
easily be changed in the future. This is enough to bring the test case in
PR15697 back down to a manageable analysis time (within 20% of its original
time, in an unfair test where the new analyzer is not compiled with LTO).
The actual value of "N" is controlled by a new -analyzer-config option,
'region-store-small-struct-limit'. It defaults to "2", meaning structs with
zero, one, or two scalar members will be considered "simple enough" for
this code path.
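For illustration, with the default limit of 2 this is roughly what changes (a sketch; the struct and function names are made up):
```
struct Small  { int x, y; };         // two scalar members: "simple enough"
struct Bigger { int a, b, c, d; };   // over the limit: keeps the lazy binding

void copies(struct Small s, struct Bigger b) {
  struct Small  s2 = s;   // now bound as individual member values
  struct Bigger b2 = b;   // still bound as a single LazyCompoundVal snapshot
}
```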
It's worth noting that a more straightforward implementation would do this
on load, not on bind, and make use of the structure we already have for this:
CompoundVal. A long time ago, this was actually how RegionStore modeled
aggregate-to-aggregate copies, but today it's only used for compound literals.
Unfortunately, it seems that we've special-cased LazyCompoundVal in certain
places (such as liveness checks) but failed to similarly special-case
CompoundVal in all of them. Until we're confident that CompoundVal is
handled properly everywhere, this solution is safer, since the entire
optimization is just an implementation detail of RegionStore.
<rdar://problem/13599304>
llvm-svn: 179767
This is an opt-in tweak for leak diagnostics to reference the allocation
site if the diagnostic consumer only wants a pithy amount of information,
and not the entire path.
This is a strawman enhancement that I expect to see some experimentation
with over the next week, and can go away if we don't want it.
Currently it is only used by RetainCountChecker, but could be used
by MallocChecker if and when we decide this should stay in.
llvm-svn: 179634
Redefine the shallow mode to inline all functions for which we have a
definite definition (ipa=inlining). However, only inline functions that
are at most 4 basic blocks large, and cut the maximum number of exploded nodes
generated per top-level function in half.
This makes shallow faster and allows us to keep inlining small
functions. For example, we would keep inlining wrapper functions and
constructors/destructors.
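For instance, wrappers and constructors like the following are straight-line code that falls well within the 4-basic-block limit, so they keep being inlined (an illustrative sketch, not from the patch):
```
#include <cstring>

struct Point {
  int x, y;
  Point(int px, int py) : x(px), y(py) {}   // trivial constructor
};

// A thin wrapper around strlen; shallow mode can still inline calls to it.
static inline std::size_t length_of(const char *s) {
  return std::strlen(s);
}
```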
With the new shallow mode, analyzing sqlite3 takes 104s, whereas
deep mode takes 658s and the previous shallow mode took 209s.
llvm-svn: 173958
The idea is to introduce a higher level "user mode" option for
different use scenarios. For example, if one wants to run the analyzer
for a small project each time the code is built, they would use
the "shallow" mode.
The user mode option will influence the default settings for the
lower-level analyzer options. For now, this just influences the ipa
modes, but we plan to find more optimal settings for them.
llvm-svn: 173386
The idea is to eventually place all analyzer options under
"analyzer-config". In addition, this lays the ground for introduction of
a high-level analyzer mode option, which will influence the
default setting for IPAMode.
llvm-svn: 173385
performance heuristic
After inlining a function with more than 13 basic blocks 32 times, we
stop inlining it. The idea is that inlining large functions can have
drastic performance implications, and since the function has already
been inlined that many times, we know that we've analyzed it in many
contexts.
The following metrics are used (a rough sketch of how they combine follows the list):
- A large function is a function with more than 13 basic blocks (we
should switch to another metric, like cyclomatic complexity)
- We consider that we've inlined a function many times if it's been
inlined 32 times. This number is configurable with -analyzer-config
max-times-inline-large=xx
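Roughly, the two thresholds combine as follows (a simplified sketch with made-up names, not the actual analyzer code):
```
// Simplified sketch: once a "large" callee has already been inlined many
// times, stop inlining it at further call sites.
bool mayInlineLargeCalleeAgain(unsigned CalleeBlocks, unsigned TimesInlined) {
  const unsigned LargeFunctionBlocks = 14;  // "more than 13 basic blocks"
  const unsigned MaxTimesInlineLarge = 32;  // -analyzer-config max-times-inline-large
  bool IsLarge = CalleeBlocks >= LargeFunctionBlocks;
  return !IsLarge || TimesInlined < MaxTimesInlineLarge;
}
```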
This heuristic addresses a performance regression introduced with
inlining on one benchmark. The analyzer on this benchmark became 60
times slower with inlining turned on. The heuristic allows us to analyze
it in 24% of the time. The performance improvements on the other
benchmarks I've tested with are much lower - under 10%, which is
expected.
llvm-svn: 170361
After every 1000 CFGElements processed, the ExplodedGraph trims out nodes
that satisfy a number of criteria for being "boring" (single predecessor,
single successor, and more). Rather than controlling this with a cc1 option,
which can only disable this behavior, we now have an analyzer-config option,
'graph-trim-interval', which can change this interval from 1000 to something
else. Setting the value to 0 disables reclamation.
The next commit relies on this behavior to actually test anything.
llvm-svn: 166528
table, making it printable with the ConfigDump checker. Along the
way, fix a really serious bug where the value was being parsed from
the string inside an assert() call, which meant that in a
Release-Asserts build (where asserts are compiled out) the value would
never be parsed.
llvm-svn: 165041
string in the config table so that it can be dumped as part of the
config dumper's output. Add a test to show that these options are sticking
and can be cross-checked using FileCheck.
llvm-svn: 164954