-code-completion-at=filename:line:column
which performs code completion at the specified location: the file is
truncated at that position and completion is triggered there. This approach
makes it possible to run multiple tests from a single test file, and
gives a more natural command-line interface.
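For example, a hypothetical test file along these lines (the line/column
values are illustrative, not taken from this commit) could drive completion
from the command line:
// RUN: clang-cc -fsyntax-only -code-completion-at=%s:4:5 %s
struct Point { int x; int y; };
void f(struct Point p) {
  p.x = 0;   // completion is requested right after "p." (line 4, column 5)
}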
llvm-svn: 82571
essence, code completion is triggered by a magic "code completion"
token produced by the lexer [*], which the parser recognizes at
certain points in the grammar. The parser then calls into the Action
object with the appropriate CodeCompletionXXX action.
Sema implements the CodeCompletionXXX callbacks by performing minimal
translation, then forwarding them to a CodeCompletionConsumer
subclass, which uses the results of semantic analysis to provide
code-completion results. At present, only a single "printing"
code-completion consumer is available, intended for regression testing and
debugging. However, the design is meant to permit other
code-completion consumers.
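As a rough, self-contained illustration of that consumer design (stand-in
types and names only; these are not the classes added by this commit):
#include <cstdio>

struct CompletionResult { const char *Name; };

// In spirit: Sema hands its completion results to a consumer subclass,
// which decides how to present them.
class CompletionConsumer {
public:
  virtual ~CompletionConsumer() {}
  virtual void ProcessResults(const CompletionResult *Results, unsigned N) = 0;
};

// Roughly analogous to the "printing" consumer used for regression tests.
class PrintingConsumer : public CompletionConsumer {
public:
  void ProcessResults(const CompletionResult *Results, unsigned N) {
    for (unsigned I = 0; I != N; ++I)
      std::printf("COMPLETION: %s\n", Results[I].Name);
  }
};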
This initial commit contains two code-completion actions: one for
member access, e.g., "x." or "p->", and one for
nested-name-specifiers, e.g., "std::". More code-completion actions
will follow, along with improved gathering of code-completion results
for the various contexts.
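For illustration (not one of the commit's test cases), the trigger points
look like this; truncating the file right after the marked tokens fires the
corresponding completion action:
#include <string>
struct Widget { int id; };
void use(Widget w, Widget *p) {
  int a = w.id;      // truncating right after "w."    -> member-access completion
  int b = p->id;     // truncating right after "p->"   -> member-access completion
  std::string s;     // truncating right after "std::" -> nested-name-specifier completion
  (void)a; (void)b; (void)s;
}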
[*] In the current -code-completion-dump testing/debugging mode, the
file is truncated at the completion point and EOF is translated into
"code completion".
llvm-svn: 82166
declaration in the AST.
The new ASTContext::getCommentForDecl function searches for a comment
that is attached to the given declaration, and returns that comment,
which may be composed of several comment blocks.
Comments are always available in an AST. However, to avoid harming
performance, we don't actually parse the comments. Rather, we keep the
source ranges of all of the comments within a large, sorted vector,
then lazily extract comments via a binary search in that vector only
when needed (which never occurs in a "normal" compile).
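A self-contained sketch of that lookup strategy, using raw offsets as a
stand-in for Clang's source locations (not the actual implementation):
#include <algorithm>
#include <vector>

struct CommentRange { unsigned Begin, End; };  // kept sorted by Begin

// Find the comment that starts closest before a declaration's offset;
// a real lookup would additionally check that the comment is adjacent
// to the declaration (see the notes below).
const CommentRange *findCommentBefore(const std::vector<CommentRange> &Comments,
                                      unsigned DeclBegin) {
  CommentRange Key = {DeclBegin, DeclBegin};
  auto It = std::lower_bound(
      Comments.begin(), Comments.end(), Key,
      [](const CommentRange &A, const CommentRange &B) { return A.Begin < B.Begin; });
  if (It == Comments.begin())
    return nullptr;          // no comment precedes the declaration
  return &*(It - 1);
}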
Comments are written to a precompiled header/AST file as a blob of
source ranges. That blob is only lazily loaded when one requests a
comment for a declaration (this never occurs in a "normal" compile).
The indexer testbed now supports comment extraction. When the
-point-at location points to a declaration with a Doxygen-style
comment, the indexer testbed prints the associated comment
block(s). See test/Index/comments.c for an example.
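An input in the same spirit (not the actual test file) would be:
/** \brief Computes the answer.
 *  \param seed Ignored for now.
 */
int answer(int seed);
Pointing -point-at at the declaration of answer would print the comment
block above it.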
Some notes:
- We don't actually attempt to parse the comment blocks themselves,
beyond identifying them as Doxygen comment blocks to associate them
with a declaration.
- We won't find comment blocks that aren't adjacent to the
declaration, because we start our search based on the location of
the declaration.
- We don't jump through the hoops necessary to find, for example,
whether some redeclaration of a declaration has comments when our
current declaration does not. Similarly, we don't attempt to
associate a \param Foo marker in a function body comment with the
parameter named Foo (although that is certainly possible).
- Verification of my "no performance impact" claims is still "to be
done".
llvm-svn: 74704
with DOS-style newlines. I have a trivial test for this:
// RUN: clang-cc %s -verify
#define test(x, y) \
x ## y
but I don't know how to keep svn from changing the newlines, and the test
runner doesn't work with DOS-style newlines either, so "not worth it". :)
rdar://6994000
llvm-svn: 73945
line, and when the pragma is at the end of a file. In this case, the last
token consumed could pop the lexer, invalidating CurPPLexer. Thanks to
Peter Thoman for pointing it out.
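A minimal reproducer looks something like this (hypothetical; the original
test case may differ), with the pragma as the very last line of the file:
void f(void);
#pragma pack(1)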
llvm-svn: 73689
registered when PCH wasn't being used. We should always install (in BuiltinInfo)
information about target-specific builtins, but we shouldn't register any builtin
identifier infos. This fixes the build of apps that use PCH and target-specific
builtins together.
llvm-svn: 73492
builtin preprocessor macro. This appears to work with two caveats:
1) builtins are registered in -E mode, and 2) target-specific builtins
are unconditionally registered even if they aren't supported by the
target (e.g., an SSE4 builtin when only SSE1 is enabled).
llvm-svn: 73289
diagnostic to include the full instantiation location for the
invalid paste. For:
#define foo(a, b) a ## b
#define bar(x) foo(x, ])
bar(a)
bar(zdy)
Instead of:
t.c:3:22: error: pasting formed 'a]', an invalid preprocessing token
#define foo(a, b) a ## b
                     ^
t.c:3:22: error: pasting formed 'zdy]', an invalid preprocessing token
we now produce:
t.c:7:1: error: pasting formed 'a]', an invalid preprocessing token
bar(a)
^
t.c:4:16: note: instantiated from:
#define bar(x) foo(x, ])
               ^
t.c:3:22: note: instantiated from:
#define foo(a, b) a ## b
                     ^
t.c:8:1: error: pasting formed 'zdy]', an invalid preprocessing token
bar(zdy)
^
t.c:4:16: note: instantiated from:
#define bar(x) foo(x, ])
               ^
t.c:3:22: note: instantiated from:
#define foo(a, b) a ## b
                     ^
llvm-svn: 72519
1. When we accept "#garbage" in asm-with-cpp mode, change the token kind
of the # to unknown so that the preprocessor won't try to process it as
a real #. This fixes a crash on the attached example.
2. Fix macro definition extents processing to handle #foo at the end of a
macro to say the definition ends with the foo, not the #.
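For instance (illustrative, not the attached example):
#define STR(foo) #foo
/* the macro definition's extent now ends at "foo", not at the "#" */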
This is a follow-on fix to r72283, and rdar://6916026
llvm-svn: 72388
two empty arguments. Also, add an assert so that this bug
manifests as an assertion failure, not a valgrind problem.
This fixes rdar://6880648 - [cpp] crash in ArgNeedsPreexpansion
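A reproducer in this spirit (hypothetical; the actual case is in the Radar)
is a function-like macro invoked with two empty arguments:
#define PAIR(a, b) a b
PAIR(,)   /* both arguments are empty */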
llvm-svn: 71616
that if we're going to print an extension warning anyway,
there's no point in changing behavior based on NoExtensions: it will
only make error recovery worse.
Note that this doesn't cause any behavior change because NoExtensions
isn't used by the current front-end. I'm still considering what to do about
the remaining use of NoExtensions in IdentifierTable.cpp.
llvm-svn: 70273
PCH file. In the Cocoa-prefixed "Hello, World" benchmark, this takes
us from reading 503 identifiers down to 37 and from 470 macros down to
4. It also results in an 8% performance improvement.
llvm-svn: 70094