the active module macros at the point of definition, rather than reconstructing
it from the macro history. No functionality change intended.
llvm-svn: 235941
Previously we'd defer this determination until writing the AST, which doesn't
allow us to use this information when building other submodules of the same
module. This change also allows us to use a uniform mechanism for writing
module macro records, independent of whether they are local or imported.
llvm-svn: 235614
in debugger mode) to accept @import declarations
and pass them to the debugger.
In the preprocessor, accept import declarations
if the debugger is enabled, but don't actually
load the module; just pass the import path on to
the preprocessor callbacks.
In the Objective-C parser, if it sees an import
declaration in statement context (as is usual for LLDB),
ignore it and return a NullStmt.
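As an illustration (not part of this commit), a debugger-side client could
observe these paths through the existing PPCallbacks::moduleImport hook; the
class name and the printing below are hypothetical:

  // Hypothetical PPCallbacks client that records the dotted import path the
  // preprocessor hands to its callbacks; in debugger mode the module is not
  // actually loaded, so Imported may be null.
  #include "clang/Basic/IdentifierTable.h"
  #include "clang/Basic/SourceLocation.h"
  #include "clang/Lex/PPCallbacks.h"
  #include "llvm/Support/raw_ostream.h"

  namespace {
  class DebuggerImportListener : public clang::PPCallbacks {
    void moduleImport(clang::SourceLocation ImportLoc,
                      clang::ModuleIdPath Path,
                      const clang::Module *Imported) override {
      for (unsigned I = 0, N = Path.size(); I != N; ++I) {
        if (I)
          llvm::errs() << '.';
        llvm::errs() << Path[I].first->getName();
      }
      llvm::errs() << '\n';
    }
  };
  } // namespace

Such a listener would be registered with Preprocessor::addPPCallbacks.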
llvm-svn: 223855
rather than trying to extract this information from the FileEntry after the
fact.
This has a number of beneficial effects. For instance, diagnostic messages for
failed module builds give a path relative to the "module root" rather than an
absolute file path, and the contents of the module includes file are no longer
dependent on what files the including TU happened to inspect prior to
triggering the module build.
llvm-svn: 223095
Only those callers who are dynamically passing ownership should need the
3 argument form. Those accepting the default ("do pass ownership")
should do so explicitly with a unique_ptr now.
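As a purely hypothetical sketch of the convention being adopted (these are not
the actual interfaces touched by this change):

  #include <memory>

  struct Cache {};                                // hypothetical payload type

  struct Owner {
    // Callers that always pass ownership use the unique_ptr form.
    void add(std::unique_ptr<Cache> C);

    // Callers that need to decide ownership dynamically keep the longer form.
    void add(Cache *C, bool TransferOwnership, bool AtBeginning);
  };

  void example(Owner &O) {
    O.add(std::unique_ptr<Cache>(new Cache()));   // ownership is explicit here
  }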
llvm-svn: 216614
Currently the analyzer lazily models some functions using 'BodyFarm',
which constructs a fake function implementation that the analyzer
can simulate and that approximates the semantics of the function when
it is called. BodyFarm does this by constructing the AST for
such definitions on the fly. One strength of BodyFarm
is that all symbols and types referenced by synthesized function
bodies are contextually adapted to the containing translation unit.
The downside is that these ASTs are hardcoded in Clang's own
source code.
A more scalable model is to allow these models to be defined as source
code in separate "model" files and have the analyzer use those
definitions lazily when a function body is needed. Among other things,
it will allow more customization of the analyzer for specific APIs
and platforms.
This patch provides the initial infrastructure for this feature.
It extends BodyFarm with an abstract API, 'CodeInjector', that can be
used to synthesize function bodies. That 'CodeInjector' is
implemented using a new 'ModelInjector' in libFrontend, which lazily
parses a model file and injects the ASTs into the current translation
unit.
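Roughly, the abstraction has this shape (a simplified sketch; the exact names
and signatures in the tree may differ):

  #include "clang/AST/Decl.h"
  #include "clang/AST/DeclObjC.h"

  // An injector hands back a synthesized body for a declaration, or nullptr
  // if it has nothing to offer for it. ModelInjector (libFrontend) implements
  // this by lazily parsing model files and injecting their ASTs.
  class CodeInjectorSketch {
  public:
    virtual ~CodeInjectorSketch() = default;
    virtual clang::Stmt *getBody(const clang::FunctionDecl *D) = 0;
    virtual clang::Stmt *getBody(const clang::ObjCMethodDecl *D) = 0;
  };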
Models are currently found by specifying a 'model-path' as an
analyzer option; if no path is specified the CodeInjector is not
used, thus defaulting to the current behavior in the analyzer.
Models currently contain a single function definition, and are
located by looking up the file <function name>.model. This is an
initial starting point for something richer, but it bootstraps
this feature for future evolution.
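For illustration, a model for a hypothetical function 'drainQueue' would live
in a file named drainQueue.model containing an ordinary definition (the name
and body here are invented):

  // drainQueue.model -- a single function definition that the analyzer
  // parses lazily and injects into the current translation unit, standing in
  // for the real implementation it cannot see.
  void drainQueue(int *queue, int count) {
    for (int i = 0; i < count; ++i)
      queue[i] = 0;               // approximate only the observable effect
  }

The analyzer is pointed at the directory holding such files via the
'model-path' analyzer option mentioned above.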
This patch was contributed by Gábor Horváth as part of his
Google Summer of Code project.
Some notes:
- This introduces the notion of a "model file" into
FrontendAction and the Preprocessor. This nomenclature
is specific to the static analyzer, but possibly could be
generalized. Essentially these are sources pulled in
exogenously from the principal translation unit.
Preprocessor gets an 'InitializeForModelFile' and a
'FinalizeForModelFile', which could possibly be hoisted out
of Preprocessor if Preprocessor exposed a new API to
change the PragmaHandlers and some other internal pieces. This
can be revisited.
FrontendAction gets an 'isModelParsingAction()' predicate function
used to allow a new FrontendAction to recycle the Preprocessor
and ASTContext. This name could probably be made more general
(i.e., not tied to 'model files') at the expense of losing the
intent of why it exists. This can be revisited.
- This is a moderate-sized patch; it has gone through some amount of
offline code review. Most of the changes to the non-analyzer
parts are fairly small, and would make little sense without
the analyzer changes.
- Most of the analyzer changes are plumbing, with the interesting
behavior being introduced by ModelInjector.cpp and
ModelConsumer.cpp.
- The new functionality introduced by this change is off-by-default.
It requires an analyzer config option to enable.
llvm-svn: 216550
And in the process, discover that FileManager::removeStatCache had a
double-delete when removing an element from the middle of the list (at
the beginning or the end of the list there was no problem), and add a
unit test to exercise that code path. The test (with modifications to
match the old API) successfully crashed when run without this patch applied.
llvm-svn: 215388
Remove the pointless MICache: it only ever contained at most one object, and was only
non-empty when recovering from an error. There's no performance or memory win
from maintaining this cache.
llvm-svn: 213825
MacroArgs are owned by TokenLexer, and when a TokenLexer is destroyed, it
calls its MacroArgs's destroy() method. destroy() only appends the MacroArgs to
the Preprocessor's MacroArgCache list, and the Preprocessor's destructor then
calls deallocate() on all MacroArgs in that list. That call is what finally
frees the MacroArgs's memory.
In a code completion context, Parser::cutOffParsing() gets called when a code
completion token is hit, which changes the type of the current token to
tok::eof. eof tokens aren't always ConsumeToken()ed, so
Preprocessor::HandleEndOfFile() isn't always called, and that function is
responsible for popping the macro stack.
Due to this, Preprocessor::CurTokenLexer can be non-NULL when
~Preprocessor runs. It's a unique_ptr member, so it was destroyed only after
the body of ~Preprocessor had completed, and its MacroArgs thus got added to
the freelist after the code freeing things on the freelist had already run.
The fix is to explicitly call reset() before the freelist processing happens.
(See the bug for more notes.)
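The underlying C++ rule is that data members are destroyed after the
destructor body has finished. A minimal, self-contained analogue of the bug
and the fix (hypothetical names, not the actual Clang code):

  #include <memory>
  #include <vector>

  struct Args {};                                  // stands in for MacroArgs

  struct Lexer {                                   // stands in for TokenLexer
    Args *A = nullptr;
    std::vector<Args *> *Freelist = nullptr;
    ~Lexer() {
      if (A)
        Freelist->push_back(A);                    // destroy(): push onto the freelist
    }
  };

  struct PP {                                      // stands in for Preprocessor
    std::vector<Args *> Freelist;
    std::unique_ptr<Lexer> CurLexer;

    ~PP() {
      CurLexer.reset();                            // the fix: destroy the lexer in the
                                                   // destructor body, *before* draining
      for (Args *A : Freelist)                     // the freelist; without the reset(),
        delete A;                                  // CurLexer would be destroyed after
      Freelist.clear();                            // this body, pushing onto a freelist
    }                                              // that has already been drained.
  };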
llvm-svn: 208438
The Preprocessor::Initialize() function already offers a clear interface to
achieve this, further reducing the confusing number of states a newly
constructed preprocessor can have.
llvm-svn: 207825
These features are new in VS 2013 and are necessary in order to lay out
std::ostream correctly. Currently we have an ABI incompatibility when
self-hosting with the 2013 stdlib in our convertible_fwd_ostream wrapper
in gtest.
This change adds another implicit attribute, MSVtorDispAttr, because
implicit attributes are currently the best way to make sure the
information stays on class templates through instantiation.
Reviewers: majnemer
Differential Revision: http://llvm-reviews.chandlerc.com/D2746
llvm-svn: 201274
encodes the canonical rules for LLVM's style. I noticed this had drifted
quite a bit when cleaning up LLVM, so wanted to clean up Clang as well.
llvm-svn: 198686
module. Use the marker to diagnose cases where we try to transition between
submodules when not at the top level (most likely because a closing brace was
missing at the end of a header file, but it is also possible if submodule headers
attempt to do something fundamentally non-modular, like our .def files).
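A hypothetical example of the missing-brace case this diagnoses:

  // --- a.h (submodule M.A) ---
  namespace helpers {
  inline int twice(int x) { return 2 * x; }
  // oops: missing '}' for namespace helpers
  //
  // --- b.h (submodule M.B) ---
  // When module M is built, the transition from submodule A to submodule B
  // happens while the parser is still inside 'namespace helpers', i.e. not at
  // the top level; the marker lets us report the missing brace instead of
  // silently producing a broken module.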
llvm-svn: 195543
The preprocessor currently recognizes a module import declaration, and loads
the corresponding module, based on seeing the 'import' keyword followed by an
identifier. This sequence is fairly unlikely in C (one would need a
type named 'import'), but is more common in Objective-C (where a
variable named 'import' can cause problems). Since import declarations
currently require a leading '@', recognize that in the preprocessor as
well. Fixes <rdar://problem/15084587>.
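A hypothetical illustration of the old heuristic misfiring on ordinary code:

  typedef int import;   // a type named 'import' (the unlikely C case above)
  import A;             // identifier 'import' followed by an identifier used
                        // to look like a module import of 'A'
  // An Objective-C variable named 'import' followed by another identifier
  // (e.g. in a message send) hits the same pattern more easily; recognizing
  // only '@import' in the preprocessor avoids both.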
llvm-svn: 194225
Before this patch, Lex() would recurse whenever the current lexer changed (e.g.
upon entry into a macro). This patch turns the recursion into a loop: the
various lex routines now don't return a token when the current lexer changes,
and at the top level Preprocessor::Lex() now loops until it finds a token.
Normally, the recursion wouldn't end up being very deep, but the recursion depth
can explode in edge cases such as a long run of consecutive macros that expand to
nothing (as in the testcase test/Preprocessor/macro_expand_empty.c in this
patch).
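A minimal, self-contained analogue of the restructuring (illustrative only,
not the actual Clang code): each sub-lexer either produces a token or reports
that it is exhausted, and the top level loops instead of recursing:

  #include <cstddef>
  #include <string>
  #include <vector>

  struct Token { std::string Text; };

  struct SubLexer {                  // stands in for a file lexer or TokenLexer
    std::vector<Token> Toks;
    std::size_t Pos = 0;
    bool lex(Token &Result) {        // false == "this lexer ended without
      if (Pos == Toks.size())        // producing a token"
        return false;
      Result = Toks[Pos++];
      return true;
    }
  };

  struct PP {
    std::vector<SubLexer> Stack;     // include / macro-expansion stack
    bool lex(Token &Result) {
      // Loop rather than recurse: popping an exhausted lexer (e.g. a macro
      // that expanded to nothing) just goes around the loop again, so a long
      // run of empty expansions no longer grows the call stack.
      while (!Stack.empty()) {
        if (Stack.back().lex(Result))
          return true;
        Stack.pop_back();
      }
      return false;                  // end of input
    }
  };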
<rdar://problem/14569770>
llvm-svn: 190980
For each macro directive (define, undefine, visibility), have a separate object that gets chained
to the macro directive history (see the sketch after this list). This has several benefits:
- No need to mutate a MacroDirective when there is an undefine/visibility directive. Stuff like
  PPMutationListener becomes unnecessary.
- No need to keep extra source locations for the undef/visibility locations on the define directive
  object (which is the majority of the directives).
- Much easier to hide/unhide a section in the macro directive history.
- Easier to track the effects of the directives across different submodules.
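Rough sketch of the shape this introduces (simplified; not the actual Clang
declarations): each directive kind is an immutable node chained through a
'Previous' pointer, forming the per-name macro directive history.

  class MacroInfo;   // the definition's data (tokens, parameters, ...)

  class MacroDirective {
  public:
    enum Kind { MD_Define, MD_Undefine, MD_Visibility };

  private:
    MacroDirective *Previous = nullptr;   // earlier directive for the same name
    Kind K;

  protected:
    explicit MacroDirective(Kind K) : K(K) {}

  public:
    Kind getKind() const { return K; }
    MacroDirective *getPrevious() const { return Previous; }
    void setPrevious(MacroDirective *MD) { Previous = MD; }
  };

  class DefMacroDirective : public MacroDirective {
    MacroInfo *Info;                      // never mutated after creation
  public:
    explicit DefMacroDirective(MacroInfo *MI)
        : MacroDirective(MD_Define), Info(MI) {}
    MacroInfo *getInfo() const { return Info; }
  };

  class UndefMacroDirective : public MacroDirective {
  public:
    UndefMacroDirective() : MacroDirective(MD_Undefine) {}
  };

  class VisibilityMacroDirective : public MacroDirective {
    bool IsPublic;
  public:
    explicit VisibilityMacroDirective(bool Public)
        : MacroDirective(MD_Visibility), IsPublic(Public) {}
    bool isPublic() const { return IsPublic; }
  };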
llvm-svn: 178037
for the data specific to a macro definition (e.g. what the tokens are), and
MacroDirective class which encapsulates the changes to the "macro namespace"
(e.g. the location where the macro name became active, the location where it was undefined, etc.)
(A MacroDirective always points to a MacroInfo object.)
Usually a macro definition (MacroInfo) is where a macro name becomes active (MacroDirective), but
splitting the concepts allows us to better model the effect of modules on the macro namespace
(as a bonus, it also allows better modeling of the push_macro/pop_macro #pragmas).
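For reference, the push_macro/pop_macro pragmas save and restore a macro
definition, which is exactly the kind of history a chain of directives can
represent; a small example:

  #define FOO 1
  #pragma push_macro("FOO")   // save the current definition of FOO
  #undef FOO
  #define FOO 2
  int a = FOO;                // 2
  #pragma pop_macro("FOO")    // restore the saved definition
  int b = FOO;                // 1 again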
Modules can have their own macro history, separate from the local (current translation unit)
macro history; MacroDirectives will be used to model the macro history (changes to macro namespace).
For example, if "@import A;" imports macro FOO, there will be a new local MacroDirective created
to indicate that "FOO" became active at the import location. Module "A" itself will contain another
MacroDirective in its macro history (at the point of the definition of FOO) and both MacroDirectives
will point to the same MacroInfo object.
Introducing this separation of macro concepts is the first step towards better modeling of module macros.
llvm-svn: 175585
Compilation always sets this explicitly, but creating a preprocessor
manually should still put the 'IsPreprocessedOutput' flag in a valid state.
llvm-svn: 174077
This is a missing piece for C99 conformance.
This patch handles UCNs by adding a '\\' case to LexTokenInternal and
LexIdentifier -- if we see a backslash, we tentatively try to read in a UCN.
If the UCN is not syntactically well-formed, we fall back to the old
treatment: the backslash is lexed on its own, followed by an identifier
beginning with 'u' (or 'U').
Because the spelling of an identifier with UCNs still has the UCN in it, we
need to convert that to UTF-8 in Preprocessor::LookUpIdentifierInfo.
Of course, valid code that does *not* use UCNs will see only a very minimal
performance hit (checks after each identifier for non-ASCII characters,
checks when converting raw_identifiers to identifiers that they do not
contain UCNs, and checks when getting the spelling of an identifier that it
does not contain a UCN).
This patch also adds basic support for actual UTF-8 in the source. This is
treated almost exactly the same as UCNs except that we consider stray
Unicode characters to be mistakes and offer a fixit to remove them.
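For example, after this change an identifier may be spelled with a UCN (a
minimal C99-style snippet):

  double \u03C0 = 3.14159265358979;   // identifier containing U+03C0
  double twoPi  = 2 * \u03C0;         // same identifier; lookup happens after
                                      // conversion to UTF-8 internally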
llvm-svn: 173369