the active module macros at the point of definition, rather than reconstructing
it from the macro history. No functionality change intended.
llvm-svn: 235941
Previously we'd defer this determination until writing the AST, which doesn't
allow us to use this information when building other submodules of the same
module. This change also allows us to use a uniform mechanism for writing
module macro records, independent of whether they are local or imported.
llvm-svn: 235614
in debugger mode) to accept @import declarations
and pass them to the debugger.
In the preprocessor, accept import declarations
if the debugger is enabled, but don't actually
load the module, just pass the import path on to
the preprocessor callbacks.
In the Objective-C parser, if it sees an import
declaration in statement context (usual for LLDB),
ignore it and return a NullStmt.
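For reference, a client could observe these imports via the preprocessor
callbacks roughly as follows (PPCallbacks::moduleImport is the real hook;
the listener class and its body are illustrative only):

  class DebuggerImportListener : public clang::PPCallbacks {
    void moduleImport(clang::SourceLocation ImportLoc,
                      clang::ModuleIdPath Path,
                      const clang::Module *Imported) override {
      // In debugger mode Imported may be null: the module is not
      // actually loaded; Path carries the dotted name components
      // (e.g. {"Foundation"}) to hand to the debugger.
    }
  };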
llvm-svn: 223855
rather than trying to extract this information from the FileEntry after the
fact.
This has a number of beneficial effects. For instance, diagnostic messages for
failed module builds give a path relative to the "module root" rather than an
absolute file path, and the contents of the module includes file are no longer
dependent on what files the including TU happened to inspect prior to
triggering the module build.
llvm-svn: 223095
Only those callers who are dynamically passing ownership should need the
3 argument form. Those accepting the default ("do pass ownership")
should do so explicitly with a unique_ptr now.
llvm-svn: 216614
Currently the analyzer lazily models some functions using 'BodyFarm',
which constructs a fake function implementation that the analyzer
can simulate and that approximates the semantics of the function
when it is called. BodyFarm does this by constructing the AST for
such definitions on-the-fly. One strength of BodyFarm
is that all symbols and types referenced by synthesized function
bodies are contextually adapted to the containing translation unit.
The downside is that these ASTs are hardcoded in Clang's own
source code.
A more scalable model is to allow these models to be defined as source
code in separate "model" files and have the analyzer use those
definitions lazily when a function body is needed. Among other things,
it will allow more customization of the analyzer for specific APIs
and platforms.
This patch provides the initial infrastructure for this feature.
It extends BodyFarm to use an abstract API 'CodeInjector' that can be
used to synthesize function bodies. That 'CodeInjector' is
implemented using a new 'ModelInjector' in libFrontend, which lazily
parses a model file and injects the ASTs into the current translation
unit.
Models are currently found by specifying a 'model-path' as an
analyzer option; if no path is specified the CodeInjector is not
used, thus defaulting to the current behavior in the analyzer.
Models currently contain a single function definition, and are looked
up by file name: the model for a function lives in <function name>.model.
This is an initial starting point for something richer, but it
bootstraps this feature for future evolution.
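A hypothetical illustration of the lookup scheme (the function name,
paths, and body below are invented):

  // models/drop_privileges.model -- consulted when the analyzer needs a
  // body for drop_privileges() and the translation unit provides none:
  int drop_privileges(int uid) {
    return uid == 0 ? -1 : 0;
  }

presumably driven by something along these lines:

  clang --analyze -Xclang -analyzer-config -Xclang model-path=models test.c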
This patch was contributed by Gábor Horváth as part of his
Google Summer of Code project.
Some notes:
- This introduces the notion of a "model file" into
FrontendAction and the Preprocessor. This nomenclature
is specific to the static analyzer, but possibly could be
generalized. Essentially these are sources pulled in
exogenously from the principal translation unit.
Preprocessor gets an 'InitializeForModelFile' and
'FinalizeForModelFile' which could possibly be hoisted out
of Preprocessor if Preprocessor exposed a new API to
change the PragmaHandlers and some other internal pieces. This
can be revisited.
FrontendAction gets an 'isModelParsingAction()' predicate function
used to allow a new FrontendAction to recycle the Preprocessor
and ASTContext. This name could probably be made something
more general (i.e., not tied to 'model files') at the expense
of losing the intent of why it exists. This can be revisited.
- This is a moderate sized patch; it has gone through some amount of
offline code review. Most of the changes to the non-analyzer
parts are fairly small, and would make little sense without
the analyzer changes.
- Most of the analyzer changes are plumbing, with the interesting
behavior being introduced by ModelInjector.cpp and
ModelConsumer.cpp.
- The new functionality introduced by this change is off-by-default.
It requires an analyzer config option to enable.
llvm-svn: 216550
And in the process, discover that FileManager::removeStatCache had a
double-delete when removing an element from the middle of the list
(removing from the beginning or the end of the list was fine), and add
a unit test to exercise the code path; run (with modifications to match
the old API) without this patch applied, the test successfully crashed.
llvm-svn: 215388
Remove pointless MICache: it only ever contained up to 1 object, and was only
non-empty when recovering from an error. There's no performance or memory win
from maintaining this cache.
llvm-svn: 213825
MacroArgs are owned by TokenLexer, and when a TokenLexer is destroyed, it'll
call its MacroArgs's destroy() method. destroy() only appends the MacroArg to
Preprocessor's MacroArgCache list, and Preprocessor's destructor then calls
deallocate() on all MacroArgs in that list. This method then ends up freeing
the MacroArgs's memory.
In a code completion context, Parser::cutOffParsing() gets called when a code
completion token is hit, which changes the type of the current token to
tok::eof. eof tokens aren't always ConsumeToken()ed, so
Preprocessor::HandleEndOfFile() isn't always called, and that function is
responsible for popping the macro stack.
Due to this, Preprocessor::CurTokenLexer can be non-NULL when
~Preprocessor runs. It's a unique_ptr, so it ended up being destructed after
~Preprocessor completed, and its MacroArgs thus got added to the freelist after
the code freeing things on the freelist had already completed. The fix is to
explicitly call reset() before the freelist processing happens. (See the bug
for more notes.)
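The ordering pitfall, reduced to a standalone sketch (all names invented;
this is not the Clang code):

  #include <memory>
  #include <vector>

  struct Owner {
    std::vector<int *> Freelist;
    struct Member {
      Owner &O;
      int *Mem = new int(0);
      ~Member() { O.Freelist.push_back(Mem); }  // like MacroArgs::destroy()
    };
    std::unique_ptr<Member> Cur;  // like Preprocessor::CurTokenLexer
    ~Owner() {
      Cur.reset();                       // the fix: drop the member first
      for (int *P : Freelist) delete P;  // like freeing MacroArgCache
      // Without the reset(), ~Member runs after this destructor body and
      // pushes Mem onto a Freelist that has already been drained.
    }
  };

  int main() {
    Owner O;
    O.Cur.reset(new Owner::Member{O});
  }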
llvm-svn: 208438
The Preprocessor::Initialize() function already offers a clear interface to
achieve this, further reducing the confusing number of states a newly
constructed preprocessor can have.
llvm-svn: 207825
These features are new in VS 2013 and are necessary in order to lay out
std::ostream correctly. Currently we have an ABI incompatibility when
self-hosting with the 2013 stdlib in our convertible_fwd_ostream wrapper
in gtest.
This change adds another implicit attribute, MSVtorDispAttr, because
implicit attributes are currently the best way to make sure the
information stays on class templates through instantiation.
Reviewers: majnemer
Differential Revision: http://llvm-reviews.chandlerc.com/D2746
llvm-svn: 201274
encodes the canonical rules for LLVM's style. I noticed this had drifted
quite a bit when cleaning up LLVM, so wanted to clean up Clang as well.
llvm-svn: 198686
module. Use the marker to diagnose cases where we try to transition between
submodules when not at the top level (most likely because a closing brace was
missing at the end of a header file, but it is also possible if submodule headers
attempt to do something fundamentally non-modular, like our .def files).
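For instance, a (made-up) modular header like this would now be diagnosed
at the submodule transition instead of producing confusing errors later:

  // A.h, a submodule header:
  namespace A {
  inline int f() { return 0; }
  // missing '}': the end of A.h is not at the top level, which the
  // marker catches when transitioning to the next submodule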
llvm-svn: 195543
The preprocessor currently recognizes module declarations to load a
module based on seeing the 'import' keyword followed by an
identifier. This sequence is fairly unlikely in C (one would need a
type named 'import'), but is more common in Objective-C (where a
variable named 'import' can cause problems). Since import declarations
currently require a leading '@', recognize that in the preprocessor as
well. Fixes <rdar://problem/15084587>.
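A small illustration (hypothetical code) of the distinction:

  int import = 0;      // plain identifier: never starts a module import
  @import Foundation;  // only this form is recognized by the preprocessor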
llvm-svn: 194225
Before this patch, Lex() would recurse whenever the current lexer changed (e.g.
upon entry into a macro). This patch turns the recursion into a loop: the
various lex routines now don't return a token when the current lexer changes,
and at the top level Preprocessor::Lex() now loops until it finds a token.
Normally, the recursion wouldn't end up being very deep, but the recursion depth
can explode in edge cases like a bunch of consecutive macros which expand to
nothing (like in the testcase test/Preprocessor/macro_expand_empty.c in this
patch).
<rdar://problem/14569770>
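Schematically, the top level now looks like this (a simplified sketch
with an invented helper name, not the actual code):

  void Preprocessor::Lex(Token &Result) {
    bool ReturnedToken;
    do {
      // The current lexer (file lexer, token lexer, ...) may change
      // here, e.g. on entry into a macro; instead of recursing, it now
      // reports "no token produced yet" and we go around again.
      ReturnedToken = LexFromCurrentLexer(Result);
    } while (!ReturnedToken);
  }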
llvm-svn: 190980
For each macro directive (define, undefine, visibility) have a separate object that gets chained
to the macro directive history. This has several benefits:
- No need to mutate a MacroDirective when there is an undefine/visibility
  directive. Stuff like PPMutationListener becomes unnecessary.
- No need to keep extra undef/visibility source locations on the define
  directive object (which is the majority of the directives).
- Much easier to hide/unhide a section in the macro directive history.
- Easier to track the effects of the directives across different submodules.
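Conceptually, the chain looks like this (rough sketch; the real classes
are richer):

  struct MacroDirective {
    enum Kind { MD_Define, MD_Undefine, MD_Visibility } K;
    SourceLocation Loc;        // where this directive occurred
    MacroDirective *Previous;  // next-older directive for this name
  };
  // '#define FOO 1' / '#undef FOO' / '#define FOO 2' yields a chain of
  // three directives; adding a new one never mutates the older ones.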
llvm-svn: 178037
for the data specific to a macro definition (e.g. what the tokens are), and
MacroDirective class which encapsulates the changes to the "macro namespace"
(e.g. the location where the macro name became active, the location where it was undefined, etc.)
(A MacroDirective always points to a MacroInfo object.)
Usually a macro definition (MacroInfo) is where a macro name becomes active (MacroDirective) but
splitting the concepts allows us to better model the effect of modules on the macro namespace
(also as a bonus it allows better modeling of push_macro/pop_macro #pragmas).
Modules can have their own macro history, separate from the local (current translation unit)
macro history; MacroDirectives will be used to model the macro history (changes to macro namespace).
For example, if "@import A;" imports macro FOO, there will be a new local MacroDirective created
to indicate that "FOO" became active at the import location. Module "A" itself will contain another
MacroDirective in its macro history (at the point of the definition of FOO) and both MacroDirectives
will point to the same MacroInfo object.
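A rough picture of that example:

  local history:       MacroDirective @ '@import A;' site --+
                                                            +--> one shared MacroInfo (FOO's tokens)
  module A's history:  MacroDirective @ '#define FOO'    ---+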
Introducing the separation of macro concepts is the first step towards better modeling of module macros.
llvm-svn: 175585
Compilation always sets this explicitly, but creating a preprocessor
manually should still put the 'IsPreprocessedOutput' flag in a valid state.
llvm-svn: 174077
This is a missing piece for C99 conformance.
This patch handles UCNs by adding a '\' case to LexTokenInternal and
LexIdentifier -- if we see a backslash, we tentatively try to read in a UCN.
If the UCN is not syntactically well-formed, we fall back to the old
treatment: a backslash followed by an identifier beginning with 'u' (or 'U').
Because the spelling of an identifier with UCNs still has the UCN in it, we
need to convert that to UTF-8 in Preprocessor::LookUpIdentifierInfo.
Of course, valid code that does *not* use UCNs will see only a very minimal
performance hit (checks after each identifier for non-ASCII characters,
checks when converting raw_identifiers to identifiers that they do not
contain UCNs, and checks when getting the spelling of an identifier that it
does not contain a UCN).
This patch also adds basic support for actual UTF-8 in the source. This is
treated almost exactly the same as UCNs except that we consider stray
Unicode characters to be mistakes and offer a fixit to remove them.
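For example (C99; a sketch of the behavior described above):

  int \u00E9tat = 0;              // 'état' spelled with a UCN
  int valeur = \u00E9tat + état;  // same identifier either way: the UCN
                                  // spelling is converted to UTF-8 in
                                  // Preprocessor::LookUpIdentifierInfo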
llvm-svn: 173369
uncovered.
This required manually correcting all of the incorrect main-module
headers I could find, and running the new llvm/utils/sort_includes.py
script over the files.
I also manually added quite a few missing headers that were uncovered by
shuffling the order or moving headers up to be main-module-headers.
llvm-svn: 169237
PreprocessingRecord and into its own class, PPConditionalDirectiveRecord.
Decoupling allows a client to use the functionality of PPConditionalDirectiveRecord
without needing a PreprocessingRecord.
llvm-svn: 169229
common LexStringLiteral function. In doing so, some consistency problems have
been ironed out (e.g. where the first token in the string literal was lexed
with macro expansion, but subsequent ones were not) and also an erroneous
diagnostic has been corrected.
LexStringLiteral is complemented by a FinishLexStringLiteral function which
can be used in the situation where the first token of the string literal has
already been lexed.
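A sketch of the intended use (simplified; see the actual declarations in
Preprocessor.h):

  std::string Str;
  // Lex a complete (possibly concatenated) string literal:
  if (!PP.LexStringLiteral(Tok, Str, "pragma",
                           /*AllowMacroExpansion=*/true))
    return;  // diagnostic has already been emitted
  // Or, when the first token has already been lexed:
  if (!PP.FinishLexStringLiteral(Tok, Str, "pragma",
                                 /*AllowMacroExpansion=*/true))
    return;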
llvm-svn: 168266
MacroInfo*. Instead of simply dumping an offset into the current file,
give each macro definition a proper ID with all of the standard
modules-remapping facilities. Additionally, when a macro is modified
in a subsequent AST file (e.g., #undef'ing a macro loaded from another
module or from a precompiled header), provide a macro update record
rather than rewriting the entire macro definition. This gives us
greater consistency with the way we handle declarations, and ties
together macro definitions much more cleanly.
Note that we're still not actually deserializing macro history (we
never were), but it's far easier to do properly now.
llvm-svn: 165560
* Retain comments in the AST
* Serialize/deserialize comments
* Find comments attached to a certain Decl
* Expose raw comment text and SourceRange via libclang (see the sketch below)
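A small, hypothetical client of the new libclang entry points:

  // Cursor is a CXCursor for some declaration:
  CXString Text = clang_Cursor_getRawCommentText(Cursor);
  if (const char *S = clang_getCString(Text))
    printf("attached comment: %s\n", S);
  clang_disposeString(Text);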
llvm-svn: 158771
r158085 added some logic to track predefined declarations. The main reason we
had predefined declarations in the input was because the __builtin_va_list
declarations were injected into the preprocessor input. As of r158592 we
explicitly build the __builtin_va_list declarations. Therefore the predefined
decl tracking is no longer needed.
llvm-svn: 158732
The preprocessor's handling of diagnostic push/pops is stateful, so
encountering pragmas during a re-parse causes problems. HTMLRewrite
already filters out normal # directives including #pragma, so it's
clear it's not expected to be interpreting pragmas in this mode.
This fix adds a flag to Preprocessor to explicitly disable pragmas.
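At the call site this presumably looks something like (illustrative):

  PP.setPragmasEnabled(false);  // re-lex for HTML output; #pragma is inert
  // ... re-parse the file ...
  PP.setPragmasEnabled(true);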
The "right" fix might be to separate pragma lexing from pragma
parsing so that we can throw away pragmas like we do preprocessor
directives, but right now it's important to get the fix in.
Note that this has nothing to do with the "hack" of re-using the
input preprocessor in HTMLRewrite. Even if we someday copy the
preprocessor instead of re-using it, the copy would (and should) include
the diagnostic level tables and have the same problems.
llvm-svn: 158214
In standard C since C89, a 'translation-unit' is syntactically defined to have
at least one "external-declaration", which is either a decl or a function
definition. In Clang the latter gives us a declaration as well.
The tricky bit about this warning is that our predefines can contain external
declarations (__builtin_va_list and the 128-bit integer types). Therefore our
AST parser now makes sure we have at least one declaration that doesn't come
from the predefines buffer.
Also, remove bogus warning about empty source files. This doesn't catch source
files that only contain comments, and never fired anyway because of our
predefines.
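For example (hypothetical file):

  /* empty.c: contains only this comment. A translation-unit needs at
     least one external-declaration, so this now warns -- and the
     predefines buffer no longer masks the problem. */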
PR12665 and <rdar://problem/9165548>
llvm-svn: 158085
so we can destroy it even if it was constructed with "DelayInitialization = true",
and we didn't end up calling Preprocessor::Initialize.
Fixes crashes in rdar://11558355
llvm-svn: 157892
If we are pre-expanding a macro argument, don't actually "activate"
the pragma at that point; instead, activate it whenever we encounter
it again in the token stream.
This ensures that we will activate it in the correct location
or that we will ignore it if it never enters the token stream, e.g.:
#define EMPTY(x)
#define INACTIVE(x) EMPTY(x)
INACTIVE(_Pragma("clang diagnostic ignored \"-Wconversion\""))
This also fixes the crash in rdar://11168596.
llvm-svn: 153959
Enable incremental parsing by the Preprocessor,
where more code can be provided after an EOF.
It mainly prevents the tearing down of the topmost lexer.
To be used like this:
  PP.enableIncrementalProcessing();
  while (getMoreSource()) {
    while (Parser.ParseTopLevelDecl(ADecl)) { ... }
  }
  PP.enableIncrementalProcessing(false);
llvm-svn: 152914
Introduce PreprocessingRecord::rangeIntersectsConditionalDirective() which returns
true if a given range intersects with a conditional directive block.
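Illustrative use, assuming a populated PreprocessingRecord PPRec:

  if (PPRec.rangeIntersectsConditionalDirective(SourceRange(B, E))) {
    // some part of [B, E] overlaps an #if/#elif/#else/#endif block
  }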
llvm-svn: 152018
This seems to negatively affect compile time on some ObjC tests
(which use a lot of partial diagnostics, I assume). I have to come
up with a way to keep them inline without including Diagnostic.h
everywhere. As it stands, adding a new diagnostic requires a full
rebuild of, e.g., the static analyzer, which doesn't even use those
diagnostics.
This reverts commit 6496bd10dc3a6d5e3266348f08b6e35f8184bc99.
This reverts commit 7af19b817ba964ac560b50c1ed6183235f699789.
This reverts commit fdd15602a42bbe26185978ef1e17019f6d969aa7.
This reverts commit 00bd44d5677783527d7517c1ffe45e4d75a0f56f.
This reverts commit ef9b60ffed980864a8db26ad30344be429e58ff5.
llvm-svn: 150006
- Move the offending methods out of line and fix transitive includers.
- This required changing an enum in the PPCallback API into an unsigned.
llvm-svn: 149782
for getting the name of the module file, unifying the code for
searching for a module with a given name (into lookupModule()) and
separating out the mapping to a module file (into
getModuleFileName()). No functionality change.
llvm-svn: 149197
modules. This leaves us without an explicit syntax for importing
modules in C/C++, because such a syntax needs to be discussed
first. In Objective-C/Objective-C++, the @import syntax is used to
import modules.
Note that, under -fmodules, C/C++ programs can import modules via the
#include mechanism when a module map is in place for that header. This
allows us to work with modules in C/C++ without committing to a syntax.
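For example, with a (hypothetical) module map in place:

  // module.map
  module MyLib { header "MyLib.h" }

  // C/C++ translation unit, compiled with -fmodules:
  #include "MyLib.h"   // becomes an import of module MyLib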
llvm-svn: 147467
within module maps, which will (eventually) be used to re-export a
module from another module. There are still some pieces missing,
however.
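The syntax being parsed looks like this (illustrative module map):

  module A {
    header "A.h"
    export B  // re-export module B to A's clients
    export *  // or: re-export everything A imports
  }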
llvm-svn: 145665
(sub)module, all of the names may be hidden, just the macro names may
be exposed (for example, after the preprocessor has seen the import of
the module but the parser has not), or all of the names may be
exposed. Importing a module makes its names, and the names in any of
its non-explicit submodules, visible to name lookup (transitively).
This commit only introduces the notion of name visibility and marks
modules and submodules as visible when they are imported. The actual
name-hiding logic in the AST reader will follow (along with test cases).
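The three states described above, sketched as an enum (close to, but not
necessarily identical to, the actual code):

  enum NameVisibilityKind {
    Hidden,         // no names from the (sub)module are visible
    MacrosVisible,  // the preprocessor has seen the import; the parser has not
    AllVisible      // the import has been fully processed
  };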
llvm-svn: 145586
AST file more lazy, so that we don't eagerly load that information for
all known identifiers each time a new AST file is loaded. The eager
reloading made some sense in the context of precompiled headers, since
very few identifiers were defined before PCH load time. With modules,
however, a huge amount of code can get parsed before we see an
@import, so laziness becomes important here.
The approach taken to make this information lazy is fairly simple:
when we load a new AST file, we mark all of the existing identifiers
as being out-of-date. Whenever we want to access information that may
come from an AST (e.g., whether the identifier has a macro definition,
or what top-level declarations have that name), we check the
out-of-date bit and, if it's set, ask the AST reader to update the
IdentifierInfo from the AST files. The update is a merge, and we now
take care to merge declarations before/after imports with declarations
from multiple imports.
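The check, roughly (simplified sketch):

  // e.g. before asking whether II has a macro definition:
  if (II->isOutOfDate())
    ExternalSource->updateOutOfDateIdentifier(*II);  // merge from AST files
  return II->hasMacroDefinition();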
The results of this optimization are fairly dramatic. On a small
application that brings in 14 non-trivial modules, this takes modules
from being > 3x slower than a "perfect" PCH file down to 30% slower
for a full rebuild. A partial rebuild (where the PCH file or modules
can be re-used) is down to 7% slower. Making the PCH file just a
little imperfect (e.g., adding two smallish modules used by a bunch of
.m files that aren't in the PCH file) tips the scales in favor of the
modules approach, with 24% faster partial rebuilds.
This is just a first step; the lazy scheme could possibly be improved
by adding versioning, so we don't search into modules we already
searched. Moreover, we'll need similar lazy schemes for all of the
other lookup data structures, such as DeclContexts.
llvm-svn: 143100