during checker registration. There are no immediate clients of this,
but this provides a way for checkers to query the options table
at startup instead.
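A minimal standalone sketch of the idea (the names here are illustrative, not the real Clang checker-registration API): the registration hook receives the string-to-string options table and can read a key when the checker is constructed.
  #include <iostream>
  #include <map>
  #include <string>

  // Hypothetical options table: string keys mapped to string values,
  // populated from the command line before checkers are registered.
  using AnalyzerConfig = std::map<std::string, std::string>;

  struct MyChecker {
    bool Aggressive = false;
  };

  // Hypothetical registration hook: queries the table at startup and
  // configures the checker accordingly.
  MyChecker registerMyChecker(const AnalyzerConfig &Config) {
    MyChecker C;
    auto It = Config.find("my-checker:aggressive");
    if (It != Config.end())
      C.Aggressive = (It->second == "true");
    return C;
  }

  int main() {
    AnalyzerConfig Config = {{"my-checker:aggressive", "true"}};
    MyChecker C = registerMyChecker(Config);
    std::cout << "aggressive = " << C.Aggressive << "\n";
  }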
llvm-svn: 179626
Redefine the shallow mode to inline all functions for which we have a
definite definition (ipa=inlining). However, only inline functions that
are at most 4 basic blocks large, and cut the maximum number of exploded
nodes generated per top-level function in half.
This makes shallow faster and allows us to keep inlining small
functions. For example, we would keep inlining wrapper functions and
constructors/destructors.
With the new shallow mode, analyzing sqlite3 takes 104s, whereas the
deep mode takes 658s and the previous shallow mode took 209s.
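Conceptually, the mode just selects a bundle of defaults. A hypothetical sketch with illustrative names; the deep-mode numbers are placeholders, and only the relationships above (inline at most 4 basic blocks, halve the exploded-node budget) come from this change.
  #include <iostream>

  // Illustrative analysis parameters chosen per user mode.
  struct AnalysisParams {
    const char *IPAMode;                   // how calls are evaluated
    unsigned MaxInlinableSize;             // in basic blocks
    unsigned MaxNodesPerTopLevelFunction;  // exploded-graph budget
  };

  enum class UserMode { Shallow, Deep };

  AnalysisParams paramsFor(UserMode M) {
    const AnalysisParams Deep = {"inlining", 50, 150000};  // placeholder values
    if (M == UserMode::Deep)
      return Deep;
    // Shallow: still inline (ipa=inlining), but only small functions, and
    // explore half as many nodes per top-level function.
    return {"inlining", 4, Deep.MaxNodesPerTopLevelFunction / 2};
  }

  int main() {
    AnalysisParams P = paramsFor(UserMode::Shallow);
    std::cout << P.IPAMode << " " << P.MaxInlinableSize << " "
              << P.MaxNodesPerTopLevelFunction << "\n";
  }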
llvm-svn: 173958
deterministic.
Commit message for r170826:
[analyzer] Traverse the Call Graph in topological order.
Modify the call graph by removing the parentless nodes. Instead all
nodes are children of root to ensure they are all reachable. Remove the
tracking of nodes that are "top level" or global. This information is
not used and can be obtained from the Decls stored inside
CallGraphNodes.
Instead of existing ordering hacks, analyze the functions in topological
order over the Call Graph.
Together with the addition of devirtualizable ObjC message sends and
blocks to the call graph, this gives around 6% performance improvement
on several large ObjC benchmarks.
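As a standalone illustration of the traversal (plain STL, not the Clang CallGraph API): a post-order DFS from the synthetic root, reversed, yields a topological order in which every caller is visited before its callees.
  #include <iostream>
  #include <map>
  #include <set>
  #include <string>
  #include <vector>

  // Illustrative call graph: edges point from caller to callee, and a
  // synthetic "root" is a parent of every node so all nodes are reachable.
  using CallGraph = std::map<std::string, std::vector<std::string>>;

  void postOrder(const CallGraph &G, const std::string &N,
                 std::set<std::string> &Visited, std::vector<std::string> &Out) {
    if (!Visited.insert(N).second)
      return;
    auto It = G.find(N);
    if (It != G.end())
      for (const std::string &Callee : It->second)
        postOrder(G, Callee, Visited, Out);
    Out.push_back(N);
  }

  // Reverse post-order from the root is a topological order (ignoring
  // cycles): every caller appears before the functions it calls.
  std::vector<std::string> topologicalOrder(const CallGraph &G) {
    std::set<std::string> Visited;
    std::vector<std::string> Post;
    postOrder(G, "root", Visited, Post);
    return {Post.rbegin(), Post.rend()};
  }

  int main() {
    CallGraph G = {{"root", {"main", "helper"}},
                   {"main", {"helper", "wrapper"}},
                   {"wrapper", {"helper"}}};
    for (const std::string &F : topologicalOrder(G))
      std::cout << F << "\n";   // root, main, wrapper, helper
  }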
llvm-svn: 170906
Modify the call graph by removing the parentless nodes. Instead all
nodes are children of root to ensure they are all reachable. Remove the
tracking of nodes that are "top level" or global. This information is
not used and can be obtained from the Decls stored inside
CallGraphNodes.
Instead of existing ordering hacks, analyze the functions in topological
order over the Call Graph.
Together with the addition of devirtualizable ObjC message sends and
blocks to the call graph, this gives around 6% performance improvement
on several large ObjC benchmarks.
llvm-svn: 170826
This paves the road for constructing a better function dependency graph.
If we analyze a function before the functions it calls and inlines,
there is more opportunity for optimization.
Note, we add call edges to the called methods that correspond to
function definitions (declarations with bodies).
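A hypothetical sketch of the filtering rule (illustrative types, not the Clang AST classes): an edge is added only when the callee is a definition, i.e. a declaration with a body.
  #include <iostream>
  #include <string>
  #include <vector>

  // Illustrative stand-in for a declaration: it may or may not carry a body.
  struct FunctionDecl {
    std::string Name;
    bool HasBody;   // true for a definition (declaration with a body)
  };

  struct CallGraphNode {
    const FunctionDecl *Decl;
    std::vector<CallGraphNode *> Callees;
  };

  // Only add a call edge when the callee is an actual definition; calls to
  // body-less declarations cannot be inlined, so they are not tracked here.
  void addCallEdge(CallGraphNode &Caller, CallGraphNode &Callee) {
    if (Callee.Decl && Callee.Decl->HasBody)
      Caller.Callees.push_back(&Callee);
  }

  int main() {
    FunctionDecl F{"f", true}, G{"g", false};
    CallGraphNode NF{&F, {}}, NG{&G, {}}, Main{nullptr, {}};
    addCallEdge(Main, NF);  // kept: f has a body
    addCallEdge(Main, NG);  // dropped: g is only a declaration
    std::cout << Main.Callees.size() << "\n";  // prints 1
  }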
llvm-svn: 170825
top level.
This heuristic is already turned on for non-ObjC methods
(inlining-mode=noredundancy): if a method has already been analyzed
while being inlined inside another method, do not reanalyze it as top
level.
This commit applies it to ObjC methods as well. The main caveat here is
that, to catch retain/release errors, we are still going to reanalyze
all the ObjC methods, but with inlining turned off.
Gives a 21% performance increase on one heavy ObjC benchmark, which had
suffered large performance regressions due to ObjC inlining.
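A rough sketch of the bookkeeping, with invented names: functions already covered as inlined callees are skipped at top level, or only re-run without inlining.
  #include <iostream>
  #include <set>
  #include <string>
  #include <vector>

  // Illustrative bookkeeping: functions that were already analyzed while
  // being inlined into some caller.
  std::set<std::string> VisitedAsInlined;

  void analyzeWithInlining(const std::string &F,
                           const std::vector<std::string> &Callees) {
    std::cout << "top-level (inlining): " << F << "\n";
    for (const std::string &C : Callees)
      VisitedAsInlined.insert(C);   // covered via inlining
  }

  void analyzeTopLevel(const std::string &F,
                       const std::vector<std::string> &Callees) {
    if (VisitedAsInlined.count(F)) {
      // Already covered as an inlined callee: only re-run the cheaper,
      // non-inlining pass (e.g. retain/release checking of ObjC methods).
      std::cout << "top-level (no inlining): " << F << "\n";
      return;
    }
    analyzeWithInlining(F, Callees);
  }

  int main() {
    analyzeTopLevel("caller", {"callee"});
    analyzeTopLevel("callee", {});
  }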
llvm-svn: 169639
uncovered.
This required manually correcting all of the incorrect main-module
headers I could find, and running the new llvm/utils/sort_includes.py
script over the files.
I also manually added quite a few missing headers that were uncovered by
shuffling the order or moving headers up to be main-module-headers.
llvm-svn: 169237
...but do run them on user headers.
Previously, we were inconsistent here: non-path-sensitive checks on code
/bodies/ were only run in the main source file, but checks on
/declarations/ were run in /all/ headers. Neither of those is the
behavior we want.
Thanks to Sujit for pointing this out!
<rdar://problem/12454226>
llvm-svn: 165635
This is similar to how we divide up the StaticAnalyzer libraries to separate
core functionality from what is clearly associated with Frontend actions.
llvm-svn: 163050
PathDiagnostics are actually profiled and uniqued independently of the
path on which the bug occurred. This is used to merge diagnostics that
refer to the same issue along different paths, as well as by the plist
diagnostics to reference files created by the HTML diagnostics.
However, there are two problems with the current implementation:
1) The bug description is included in the profile, but some
PathDiagnosticConsumers prefer abbreviated descriptions and some
prefer verbose descriptions. Fixed by including both descriptions in
the PathDiagnostic objects and always using the verbose one in the profile.
2) The "minimal" path generation scheme provides extra information about
which events came from macros that the "extensive" scheme does not.
This resulted not only in different locations for the plist and HTML
diagnostics, but also in diagnostics being uniqued in the plist output
but not in the HTML output. Fixed by storing the "end path" location
explicitly in the PathDiagnostic object, rather than trying to find the
last piece of the path when the diagnostic is requested.
This should hopefully finish unsticking our internal buildbot.
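A standalone sketch of the uniquing idea (hypothetical types, not the real PathDiagnostic interfaces): the profile hashes the verbose description and the stored end-path location, so every consumer uniques reports the same way regardless of how it renders the path.
  #include <functional>
  #include <iostream>
  #include <string>

  // Illustrative diagnostic: both a short and a verbose description are
  // stored, plus an explicit end-of-path location.
  struct PathDiagnostic {
    std::string ShortDesc;
    std::string VerboseDesc;
    std::string BugType;
    unsigned EndPathLine;
    unsigned EndPathColumn;
  };

  // Profile used for uniquing: always based on the verbose description and
  // the stored end-path location, never on consumer-specific path pieces,
  // so plist and HTML consumers agree on which reports are the same issue.
  std::size_t profile(const PathDiagnostic &D) {
    std::size_t H = std::hash<std::string>{}(D.VerboseDesc);
    H ^= std::hash<std::string>{}(D.BugType) + 0x9e3779b9 + (H << 6) + (H >> 2);
    H ^= std::hash<unsigned>{}(D.EndPathLine * 1000u + D.EndPathColumn);
    return H;
  }

  int main() {
    PathDiagnostic A{"Leak", "Potential leak of memory pointed to by 'p'",
                     "Memory leak", 42, 3};
    PathDiagnostic B = A;
    B.ShortDesc = "Memory leak";  // a different short text does not split the profile
    std::cout << (profile(A) == profile(B)) << "\n";  // prints 1
  }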
llvm-svn: 162965
reanalyzed.
The policy on what to reanalyze should be in AnalysisConsumer, together
with the rest of the visitation order logic.
There is no reason why ExprEngine needs to pass the Visited set to
CoreEngine; it can populate it itself.
llvm-svn: 162957
a comma-separated collection of key:value pairs (where both keys and values
are strings). This provides a general way to supply analyzer configuration
data from the command line.
No clients yet.
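A minimal sketch of parsing such a table; the option names in the example string are purely illustrative.
  #include <iostream>
  #include <map>
  #include <sstream>
  #include <string>

  // Illustrative parser for a table passed as "key1:value1,key2:value2".
  std::map<std::string, std::string> parseConfig(const std::string &Arg) {
    std::map<std::string, std::string> Table;
    std::stringstream SS(Arg);
    std::string Pair;
    while (std::getline(SS, Pair, ',')) {
      std::string::size_type Colon = Pair.find(':');
      if (Colon == std::string::npos)
        continue;                     // ignore malformed entries in this sketch
      Table[Pair.substr(0, Colon)] = Pair.substr(Colon + 1);
    }
    return Table;
  }

  int main() {
    auto Table = parseConfig("ipa:inlining,max-nodes:75000");
    for (const auto &KV : Table)
      std::cout << KV.first << " = " << KV.second << "\n";
  }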
llvm-svn: 162827
This fixes several issues:
- removes an egregious hack where PlistDiagnosticConsumer would forward to HTMLDiagnosticConsumer,
but diagnostics wouldn't be generated consistently in the same way if PlistDiagnosticConsumer
was used by itself.
- emitting diagnostics to the terminal (using clang's diagnostic machinery) is no longer a special
case, just another PathDiagnosticConsumer. This also magically resolved some duplicate warnings,
as we now use PathDiagnosticConsumer's diagnostic pruning, which has scope for the entire translation
unit, not just the scope of a BugReporter (which is limited to a particular ExprEngine).
As an interesting side-effect, diagnostics emitted to the terminal also have their trailing "." stripped,
just like with diagnostics emitted to plists and HTML. This required some tests to be updated, but now
the tests have higher fidelity with what users will see.
There are some inefficiencies in this patch. We currently generate the report graph (from the ExplodedGraph)
once per PathDiagnosticConsumer, which is a bit wasteful, but that could be pulled up higher in the
logic stack. There is some intended duplication, however, as we now generate different PathDiagnostics (for the same issue)
for different PathDiagnosticConsumers. This is necessary to produce the diagnostics that a particular
consumer expects.
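A rough sketch of the consumer-centric shape described above (invented types, not the actual Clang classes): each consumer is asked what kind of path it wants, and the report is rendered once per consumer.
  #include <iostream>
  #include <memory>
  #include <string>
  #include <vector>

  // Illustrative consumer interface: every output format, including plain
  // terminal output, is just another consumer.
  struct PathDiagnosticConsumer {
    virtual ~PathDiagnosticConsumer() = default;
    virtual bool wantsVerbosePath() const = 0;
    virtual void handle(const std::string &Report) = 0;
  };

  struct TextConsumer : PathDiagnosticConsumer {
    bool wantsVerbosePath() const override { return false; }
    void handle(const std::string &R) override { std::cout << "text: " << R << "\n"; }
  };

  struct HTMLConsumer : PathDiagnosticConsumer {
    bool wantsVerbosePath() const override { return true; }
    void handle(const std::string &R) override { std::cout << "html: " << R << "\n"; }
  };

  // The same issue is rendered once per consumer, in the form that consumer
  // expects (a full path versus a single summary line in this sketch).
  void flushReport(const std::string &Issue, const std::vector<std::string> &Path,
                   std::vector<std::unique_ptr<PathDiagnosticConsumer>> &Consumers) {
    for (auto &C : Consumers) {
      if (C->wantsVerbosePath())
        C->handle(Issue + " (" + std::to_string(Path.size()) + " path events)");
      else
        C->handle(Issue);
    }
  }

  int main() {
    std::vector<std::unique_ptr<PathDiagnosticConsumer>> Consumers;
    Consumers.push_back(std::make_unique<TextConsumer>());
    Consumers.push_back(std::make_unique<HTMLConsumer>());
    flushReport("Division by zero", {"assume x == 0", "divide by x"}, Consumers);
  }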
llvm-svn: 162028
very simple semantic analysis that just builds the AST; minor changes to the
lexer to pick up source locations I hadn't thought about before.
The comments AST is modelled on the ideas of the HTML AST: block and inline
content.
* Block content is a paragraph, a command that has a paragraph as an argument,
or a verbatim command.
* Inline content is placed within some block. Inline content includes plain
text, inline commands, and HTML as tag soup.
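A hypothetical sketch of that block/inline split (illustrative class names, not the actual Comment AST nodes):
  #include <memory>
  #include <string>
  #include <vector>

  // Illustrative shape of the comment AST described above: block content
  // contains inline content, mirroring the block/inline split in HTML.
  struct InlineContent {                     // plain text, inline commands, tag soup
    virtual ~InlineContent() = default;
  };
  struct TextComment : InlineContent { std::string Text; };
  struct InlineCommand : InlineContent { std::string Name; };
  struct HTMLTag : InlineContent { std::string TagName; };  // "tag soup"

  struct BlockContent {                      // paragraphs and block commands
    virtual ~BlockContent() = default;
  };
  struct Paragraph : BlockContent {
    std::vector<std::unique_ptr<InlineContent>> Inlines;
  };
  struct BlockCommand : BlockContent {       // a command taking a paragraph argument
    std::string Name;
    std::unique_ptr<Paragraph> Arg;
  };
  struct VerbatimBlock : BlockContent {      // a verbatim command
    std::vector<std::string> Lines;
  };

  struct FullComment {                       // a doc comment is a list of blocks
    std::vector<std::unique_ptr<BlockContent>> Blocks;
  };

  int main() {
    FullComment C;
    auto P = std::make_unique<Paragraph>();
    auto T = std::make_unique<TextComment>();
    T->Text = "Returns the answer.";
    P->Inlines.push_back(std::move(T));
    C.Blocks.push_back(std::move(P));
    return 0;
  }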
llvm-svn: 159790
we are encountering some scalability issues with memory usage. The
appropriate long-term fix is to make the analysis more scalable, but
this will at least prevent the analyzer from swapping when analyzing
very large functions.
llvm-svn: 159578
in the call graph had been inlined but for whatever reason we did not inline some
of its callees.
Also, fix a related traversal bug where we meant to do a BFS of the call
graph but were instead doing a DFS.
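For illustration, the intended breadth-first walk over a toy call graph; the comment marks the spot where popping from the wrong end would silently turn it into a depth-first walk.
  #include <deque>
  #include <iostream>
  #include <map>
  #include <set>
  #include <string>
  #include <vector>

  using CallGraph = std::map<std::string, std::vector<std::string>>;

  // Breadth-first walk: take work from the front of a deque and append new
  // work at the back. (Popping from the back instead would make this a
  // depth-first walk.)
  std::vector<std::string> bfs(const CallGraph &G, const std::string &Root) {
    std::vector<std::string> Order;
    std::set<std::string> Seen{Root};
    std::deque<std::string> Worklist{Root};
    while (!Worklist.empty()) {
      std::string N = Worklist.front();
      Worklist.pop_front();
      Order.push_back(N);
      auto It = G.find(N);
      if (It == G.end())
        continue;
      for (const std::string &Callee : It->second)
        if (Seen.insert(Callee).second)
          Worklist.push_back(Callee);
    }
    return Order;
  }

  int main() {
    CallGraph G = {{"root", {"a", "b"}}, {"a", {"c"}}, {"b", {"d"}}};
    for (const std::string &F : bfs(G, "root"))
      std::cout << F << " ";   // root a b c d
    std::cout << "\n";
  }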
llvm-svn: 159577
express library-level dependencies within Clang.
This is really no more verbose, and plays nicer with the rest of the
CMake facilities. It should also result in no change in functionality.
llvm-svn: 158888
We should lock the number of elements after the initial parsing is
complete. Recursive AST visitors in AnalysisConsumer and CallGraph can
trigger lazy PCH deserialization, resulting in more calls to
HandleTopLevelDecl and appending to the LocalTUDecls list. We should
ignore those.
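A simplified illustration of the fix, with invented names: snapshot the element count before visiting, so entries appended by lazy deserialization during the walk are ignored.
  #include <cstddef>
  #include <deque>
  #include <iostream>
  #include <string>

  // Illustrative version of the fix: remember how many declarations were
  // present after parsing and only visit those, even if visiting a
  // declaration triggers lazy deserialization that appends more entries.
  std::deque<std::string> LocalTUDecls = {"f", "g", "h"};

  void visit(const std::string &Name) {
    if (Name == "g")
      LocalTUDecls.push_back("lazily-deserialized");  // ignored by the loop below
    std::cout << "analyzing " << Name << "\n";
  }

  int main() {
    const std::size_t InitialCount = LocalTUDecls.size();  // lock the element count
    // Index-based access also avoids holding iterators across the growth.
    for (std::size_t I = 0; I < InitialCount; ++I)
      visit(LocalTUDecls[I]);
  }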
llvm-svn: 157762
of a mutable SmallPtrSet. While iterating over LocalTUDecls, there were cases
where we could modify LocalTUDecls, which could invalidate an iterator and
crash the analyzer. Along the way, switch some uses of std::queue to
std::deque, which should be slightly more efficient.
Unfortunately, this is a difficult case to create a test case for.
llvm-svn: 155680
We should not deserialize unused declarations from the PCH file. Achieve
this by storing the top level declarations during parsing
(HandleTopLevelDecl ASTConsumer callback) and analyzing/building a call
graph only for those.
Tested the patch on a sample ObjC file that uses PCH. With the patch,
the analysis is 17.5% faster and clang consumes 40% less memory.
Got about a 10% decrease in overall build/analysis time on a large
Objective-C project.
A bit of CallGraph refactoring/cleanup as well.
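A minimal sketch of the scheme with invented names (not the real ASTConsumer interface): record the top-level declarations as they are parsed, then analyze only those.
  #include <iostream>
  #include <string>
  #include <vector>

  // Illustrative consumer: record the declarations seen during parsing so the
  // later analysis pass can walk exactly those, instead of asking the AST for
  // "all declarations" (which would pull unused ones out of the PCH).
  struct Decl { std::string Name; };

  class AnalysisConsumer {
    std::vector<const Decl *> LocalTUDecls;

  public:
    // Called by the parser for every top-level declaration in the main file.
    bool handleTopLevelDecl(const Decl &D) {
      LocalTUDecls.push_back(&D);
      return true;
    }

    // Called once parsing is done: build the call graph / analyze only the
    // recorded declarations.
    void handleTranslationUnit() {
      for (const Decl *D : LocalTUDecls)
        std::cout << "analyzing " << D->Name << "\n";
    }
  };

  int main() {
    Decl A{"f"}, B{"g"};
    AnalysisConsumer C;
    C.handleTopLevelDecl(A);
    C.handleTopLevelDecl(B);
    C.handleTranslationUnit();
  }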
llvm-svn: 154625
Store this info inside the function summary generated for all analyzed
functions. This is useful for coverage stats and can be helpful for
analyzer state space search strategies.
llvm-svn: 153923
count.
This is an optimization for the "retry without inlining" option. Here, if we
failed to inline a function due to reaching the basic block max count,
we store this information and do not try to inline it again in the
translation unit. This can be viewed as a function summary.
On sqlite, with this optimization, we are 30% faster than before and
cover 10% more basic blocks (partially because the number of times we
reach the timeout is decreased by 20%).
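A hedged sketch of the function-summary idea used here and in the previous entry, with invented names:
  #include <iostream>
  #include <map>
  #include <string>

  // Illustrative per-function summary kept for the whole translation unit.
  struct FunctionSummary {
    unsigned VisitedBasicBlocks = 0;  // coverage info for stats / search heuristics
    bool MayInline = true;            // cleared after a failed inlining attempt
  };

  std::map<std::string, FunctionSummary> Summaries;

  // Called when inlining of 'Callee' was abandoned because the basic-block
  // visit limit was hit: remember that so we never retry in this TU.
  void noteInliningFailed(const std::string &Callee) {
    Summaries[Callee].MayInline = false;
  }

  bool shouldInline(const std::string &Callee) {
    auto It = Summaries.find(Callee);
    return It == Summaries.end() || It->second.MayInline;
  }

  int main() {
    std::cout << shouldInline("parse") << "\n";  // 1: no summary yet
    noteInliningFailed("parse");
    std::cout << shouldInline("parse") << "\n";  // 0: do not retry
  }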
llvm-svn: 153730
The analyzer gives up path exploration under certain conditions. For
example, when the same basic block has been visited more than 4 times.
With inlining turned on, this could lead to a decrease in code coverage.
Specifically, if we give up inside an inlined function, the rest of the
parent's basic blocks will not get analyzed.
This commit introduces an option to enable a re-run along the failed path,
in which we do not inline the last inlined call site. This is done by
re-enqueueing the node from just before the inlined call site was processed,
with a special policy encoded in the state. The policy tells us not to
inline that call site along the path.
This led to a ~10% increase in the number of paths analyzed, even though
we expected a much greater coverage improvement.
The option is turned off by default for now.
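A rough sketch of the state-encoded policy, using invented names: the re-run carries a "do not inline" mark for the call site that previously caused the analyzer to give up.
  #include <iostream>
  #include <set>
  #include <string>

  // Illustrative state-encoded policy: a set of call sites that must not be
  // inlined along this path.
  struct PathState {
    std::set<std::string> DoNotInline;
  };

  bool shouldInline(const PathState &S, const std::string &CallSite) {
    return !S.DoNotInline.count(CallSite);
  }

  // When exploration gives up inside an inlined callee, re-enqueue the node
  // that preceded the call, with the call site marked "do not inline", so the
  // rest of the caller still gets analyzed.
  PathState retryWithoutInlining(PathState S, const std::string &FailedCallSite) {
    S.DoNotInline.insert(FailedCallSite);
    return S;
  }

  int main() {
    PathState S;
    std::cout << shouldInline(S, "foo@line12") << "\n";   // 1: inline
    S = retryWithoutInlining(S, "foo@line12");
    std::cout << shouldInline(S, "foo@line12") << "\n";   // 0: skip inlining
  }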
llvm-svn: 153534