The extra data for user-defined literal Tokens is stored in separately
allocated memory, managed by the PreprocessorLexer because there isn't a
better place to put it that ensures it is deallocated, but only after it
has been used. My testing has shown no significant slowdown as a result,
but independent testing would be appreciated.
llvm-svn: 112458
reparsing an ASTUnit. When saving a preamble, create a buffer larger
than the actual file we're working with but fill everything from the
end of the preamble to the end of the file with spaces (so the lexer
will quickly skip them). When we load the file, create a buffer of the
same size, filling it with the file and then spaces. Then, instruct
the lexer to start lexing after the preamble, therefore continuing the
parse from the spot where the preamble left off.
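A minimal sketch of the padding idea; the helper name and exact buffer handling
here are illustrative, not the actual ASTUnit code:

  // Illustrative only: build a buffer the size of the original file, with
  // everything after the preamble replaced by spaces so the lexer skips it.
  #include "llvm/ADT/StringRef.h"
  #include "llvm/Support/MemoryBuffer.h"
  #include <memory>
  #include <string>

  std::unique_ptr<llvm::MemoryBuffer>
  makePaddedPreambleBuffer(llvm::StringRef FileContents, size_t PreambleSize) {
    std::string Padded = FileContents.take_front(PreambleSize).str();
    Padded.resize(FileContents.size(), ' ');  // spaces to the end of the file
    return llvm::MemoryBuffer::getMemBufferCopy(Padded, "<padded-preamble>");
  }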
It's now possible to perform a simple preamble build + parse (+
reparse) with ASTUnit. However, one has to disable a bunch of checking
in the PCH reader to do so. That part isn't committed; it will likely
be handled with some other kind of flag (e.g., -fno-validate-pch).
As part of this, fix some issues with null-termination of the memory
buffers created for the preamble; we were trying to explicitly
null-terminate them even though they were already implicitly
null-terminated, leading to excess warnings about NULL characters in
source files.
llvm-svn: 109445
is present.
Previously, we used clang_getCursorExtent(), which requires us to lex
the token at the ending position to determine its length. We would then
be comparing [a, b) source ranges that cover the characters in the range
rather than following the normal behavior of Clang's source ranges,
which cover the tokens in the range. Relexing also forces us to read the
source file (which may come from a precompiled header), which is
unfortunate and hurts performance.
In the new scheme, we only use Clang-style source ranges that cover
the tokens in the range. At the entry points where this matters
(clang_annotateTokens, clang_getCursor), we make sure to move source
locations to the start of the token.
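Roughly how a location gets normalized to the start of its token; this sketch
uses Lexer::GetBeginningOfToken as found in the current tree, which may
postdate this commit:

  #include "clang/Lex/Lexer.h"

  // Move an arbitrary location onto the first character of the token that
  // contains it, so that token-based ranges compare sensibly.
  clang::SourceLocation startOfToken(clang::SourceLocation Loc,
                                     const clang::SourceManager &SM,
                                     const clang::LangOptions &LangOpts) {
    return clang::Lexer::GetBeginningOfToken(Loc, SM, LangOpts);
  }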
Addresses most of <rdar://problem/8049381>.
llvm-svn: 109134
which is the part of the file that contains all of the initial
comments, includes, and preprocessor directives that occur before any
of the actual code. Added a new -print-preamble cc1 action that is
only used for testing.
llvm-svn: 108913
1) Suppress diagnostics as soon as we form the code-completion
token, so we don't get any error/warning spew from the early
end-of-file.
2) If we consume a code-completion token when we weren't expecting
one, go into a code-completion recovery path that produces the best
results it can based on the context that the parser is in.
llvm-svn: 104585
make it miss (invalid) things like:
<<<<<<<
>>>>>>>
and crash if
<<<<<<<
was at the end of the line. When we find a >>>>>>> that is not at the
end of the line, make sure to reset Pos so we don't crash on something
like:
<<<<<<< >>>>>>>
This isn't worth making testcases for, since each would require a new file.
rdar://7987078 - signal 11 compiling "<<<<<<<<<<"
llvm-svn: 103968
in an input file like this:
# 42
int x;
we were emitting:
# <something>
int x;
(with a space before the int) because we weren't clearing the leading
whitespace flag properly after the \n from the directive was handled.
llvm-svn: 101084
SourceManager's getBuffer() (and similar) operations. This abstraction
can be used to force callers to cope with errors in getBuffer(), such as
missing files and changed files. Fix a bunch of callers to use the new
interface.
Add some very basic checks for file consistency (file size,
modification time) into ContentCache::getBuffer(), although these
checks don't help much until we've updated the main callers (e.g.,
SourceManager::getSpelling()).
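The caller-side pattern looks roughly like this; the sketch uses
SourceManager::getBufferOrNone, the present-day descendant of this interface,
so the exact names differ from this commit:

  #include "clang/Basic/SourceManager.h"
  #include <optional>

  // Callers must now cope with a buffer that may be missing or stale,
  // instead of assuming getBuffer() always succeeds.
  llvm::StringRef getSourceTextOrEmpty(const clang::SourceManager &SM,
                                       clang::FileID FID) {
    if (std::optional<llvm::MemoryBufferRef> Buf = SM.getBufferOrNone(FID))
      return Buf->getBuffer();
    return llvm::StringRef();  // missing/changed file: degrade gracefully
  }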
llvm-svn: 98585
doing so invalidates the file guard optimization and is not
in the spirit of "#if 0" because it is supposed to completely
skip everything, even if it isn't lexically valid. Patch by
Abramo Bagnara!
llvm-svn: 95253
region of interest (if provided). Implement clang_getCursor() in terms
of this traversal rather than using the Index library; the unified
cursor visitor is more complete, and will be The Way Forward.
Minor other tweaks needed to make this work:
- Extend Preprocessor::getLocForEndOfToken() to accept an offset
from the end, making it easy to move to the last character in the
token (rather than just past the end of the token).
- In Lexer::MeasureTokenLength(), the length of whitespace is zero.
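A sketch of the getLocForEndOfToken tweak, using the Preprocessor signature as
it exists today:

  #include "clang/Lex/Preprocessor.h"

  // With Offset == 1, the result points at the last character of the token
  // rather than just past its end.
  clang::SourceLocation lastCharOfToken(clang::Preprocessor &PP,
                                        clang::SourceLocation TokBegin) {
    return PP.getLocForEndOfToken(TokBegin, /*Offset=*/1);
  }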
llvm-svn: 94200
incompatible with user-defined literals, specifically with the following form:
0x1p+1
The preprocessing-number token extends only as far as the 'p'; the '+' is not
included. Previously we could get away with this extension as p was an invalid
suffix, but now with user-defined literals, 'p' might well be a valid suffix
and we are forced to consider it as such.
This patch also adds an on-by-default warning, in non-0x C++ modes,
telling the user that this extension is incompatible with C++0x
(previously, and in other languages, we warned only under a compliance
option such as -pedantic).
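To make the token boundary concrete (illustration only, not from the commit):

  // How the same characters tokenize:
  //   strict C++0x:   pp-number "0x1p", punctuator '+', pp-number "1"
  //   the extension:  a single pp-number "0x1p+1" (a hex float)
  float f = 0x1p+1;  // accepted via the extension; the value is 2.0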
llvm-svn: 93135
1. Don't make a copy of LangOptions every time a lexer is created.
2. Don't make CharInfo global mutable state.
3. Fix the implementation to properly treat ^Z as EOF instead of as
horizontal whitespace, matching the semantics implemented by VC++.
llvm-svn: 91586
files with the contents of an arbitrary memory buffer. Use this new
functionality to drastically clean up the way in which we handle file
truncation for code-completion: all of the truncation/completion logic
is now encapsulated in the preprocessor where it belongs
(<rdar://problem/7434737>).
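A sketch of the remapping, using today's SourceManager::overrideFileContents
entry point (the API at the time of this commit may have been shaped
differently):

  #include "clang/Basic/SourceManager.h"
  #include "llvm/Support/MemoryBuffer.h"
  #include <utility>

  // Replace the on-disk contents of File with a buffer truncated at the
  // code-completion offset.
  void truncateForCompletion(clang::SourceManager &SM, clang::FileEntryRef File,
                             llvm::StringRef Contents, size_t CompletionOffset) {
    auto Truncated = llvm::MemoryBuffer::getMemBufferCopy(
        Contents.take_front(CompletionOffset), File.getName());
    SM.overrideFileContents(File, std::move(Truncated));
  }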
llvm-svn: 90300
stat a file but where mmapping it fails. In this case, we emit an
error like:
t.c:1:10: fatal error: error opening file '../../foo.h'
instead of "cannot find file".
llvm-svn: 90110
-code-completion-at=filename:line:column
which performs code completion at the specified location by truncating
the file at that position and enabling code completion. This approach
makes it possible to run multiple tests from a single test file, and
gives a more natural command-line interface.
llvm-svn: 82571
essence, code completion is triggered by a magic "code completion"
token produced by the lexer [*], which the parser recognizes at
certain points in the grammar. The parser then calls into the Action
object with the appropriate CodeCompletionXXX action.
Sema implements the CodeCompletionXXX callbacks by performing minimal
translation, then forwarding them to a CodeCompletionConsumer
subclass, which uses the results of semantic analysis to provide
code-completion results. At present, only a single, "printing" code
completion consumer is available, for regression testing and
debugging. However, the design is meant to permit other
code-completion consumers.
This initial commit contains two code-completion actions: one for
member access, e.g., "x." or "p->", and one for
nested-name-specifiers, e.g., "std::". More code-completion actions
will follow, along with improved gathering of code-completion results
for the various contexts.
[*] In the current -code-completion-dump testing/debugging mode, the
file is truncated at the completion point and EOF is translated into
"code completion".
llvm-svn: 82166
declaration in the AST.
The new ASTContext::getCommentForDecl function searches for a comment
that is attached to the given declaration, and returns that comment,
which may be composed of several comment blocks.
Comments are always available in an AST. However, to avoid harming
performance, we don't actually parse the comments. Rather, we keep the
source ranges of all of the comments within a large, sorted vector,
then lazily extract comments via a binary search in that vector only
when needed (which never occurs in a "normal" compile).
Comments are written to a precompiled header/AST file as a blob of
source ranges. That blob is only lazily loaded when one requests a
comment for a declaration (this never occurs in a "normal" compile).
The indexer testbed now supports comment extraction. When the
-point-at location points to a declaration with a Doxygen-style
comment, the indexer testbed prints the associated comment
block(s). See test/Index/comments.c for an example.
Some notes:
- We don't actually attempt to parse the comment blocks themselves,
beyond identifying them as Doxygen comment blocks to associate them
with a declaration.
- We won't find comment blocks that aren't adjacent to the
declaration, because we start our search based on the location of
the declaration.
- We don't jump through the necessary hoops to find, for example,
whether some redeclaration of a declaration has comments when our
current declaration does not. Similarly, we don't attempt to
associate a \param Foo marker in a function body comment with the
parameter named Foo (although that is certainly possible).
- Verification of my "no performance impact" claims is still "to be
done".
llvm-svn: 74704
with dos style newlines. I have a trivial test for this:
// RUN: clang-cc %s -verify
#define test(x, y) \
x ## y
but I don't know how to get svn to not change newlines and testrunner
doesn't work with dos style newlines either, so "not worth it". :)
rdar://6994000
llvm-svn: 73945
that if we're going to print an extension warning anyway,
there's no point in changing behavior based on NoExtensions: it will
only make error recovery worse.
Note that this doesn't cause any behavior change because NoExtensions
isn't used by the current front-end. I'm still considering what to do about
the remaining use of NoExtensions in IdentifierTable.cpp.
llvm-svn: 70273
This allows it to accurately measure tokens, so that we get:
t.cpp:8:13: error: unknown type name 'X'
static foo::X P;
       ~~~~~^
instead of the woefully inferior:
t.cpp:8:13: error: unknown type name 'X'
static foo::X P;
       ~~~~ ^
Most of this is just plumbing to push the reference around.
llvm-svn: 69099
Eventually, it would be nice to be able to run these modifications even
when we don't want the warnings or errors for the actual diagnostic.
llvm-svn: 68272
Now instead of just tracking the expansion history, also track the full
range of the macro that got replaced. For object-like macros, this doesn't
change anything. For _Pragma and function-like macros, this means we track
the locations of the ')'.
This is required for PR3579 because apparently GCC uses the line of the ')'
of a function-like macro as the location to expand __LINE__ to.
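An illustration of the behavior being matched (PR3579); the macro name here is
made up:

  #define WHERE(x) __LINE__
  int a = WHERE(     // the invocation starts on this line...
      0);            // ...but __LINE__ expands to this line, where ')' is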
llvm-svn: 64601
.def file for each library. This means that adding a diagnostic
to sema doesn't require all the other libraries to be rebuilt.
Patch by Anders Johnsen!
llvm-svn: 63111
Token now has a class of kinds for "literals", which include
numeric constants, strings, etc. These tokens can optionally have
a pointer to the start of the token in the lexer buffer. This
makes it faster to get spelling and do other gymnastics, because we
don't have to go through source locations.
This change is performance neutral, but will make other changes
more feasible down the road.
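Roughly what the cached pointer buys; this sketch uses
Token::isLiteral()/getLiteralData() as they exist in the current tree:

  #include "clang/Lex/Token.h"
  #include "llvm/ADT/StringRef.h"

  // For literal tokens the spelling can be read straight out of the lexer
  // buffer via the cached pointer, skipping the source-location machinery.
  llvm::StringRef fastLiteralSpelling(const clang::Token &Tok) {
    if (Tok.isLiteral() && Tok.getLiteralData())
      return llvm::StringRef(Tok.getLiteralData(), Tok.getLength());
    return llvm::StringRef();  // fall back to the normal spelling path
  }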
llvm-svn: 63028
groundwork for implementing #line, and fixes the "out of macro ID's"
problem.
There is nothing particularly tricky about the code, other than the
very performance sensitive SourceManager::getFileID() method.
llvm-svn: 62978
Refactor how the preprocessor changes a token from being a tok::identifier to a
keyword (e.g. tok::kw_for). Instead of doing this in HandleIdentifier, hoist this
common case out into the caller, so that every keyword doesn't have to go through
HandleIdentifier. This drops time in HandleIdentifier from 1.25ms to .62ms, and
speeds up clang -Eonly with PTH by about 1%.
llvm-svn: 62855
tells us whether Preprocessor::HandleIdentifier needs to be called.
Because this method is only rarely needed, this saves a call and a
bunch of random checks. This drops the time in HandleIdentifier
from 3.52ms to .98ms on cocoa.h on my machine.
llvm-svn: 62675
the chunk ID not the file ID. This exposes problems in
TextDiagnosticPrinter where it should have been using the canonical
file ID but wasn't. Fix these along the way.
llvm-svn: 62427
method. This lets us clean up the interface and make it more obvious that
this method is *really really* _Pragma specific.
Note that _Pragma handling uglifies the Lexer in the critical path. It would
be very interesting to consider making _Pragma remapping be a new special
lexer class of its own.
llvm-svn: 62425
"FileID" a concept that is now enforced by the compiler's type checker
instead of yet-another-random-unsigned floating around.
This is an important distinction from the "FileID" currently tracked by
SourceLocation. *That* FileID may refer to the start of a file or to a
chunk within it. The new FileID *only* refers to the file (and its
#include stack and eventually #line data), it cannot refer to a chunk.
FileID is a completely opaque datatype to all clients; only SourceManager
is allowed to poke and prod it.
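The shape of the opaque type, much simplified (not the exact class
definition):

  class FileID {
    unsigned ID = 0;                 // raw value is invisible to clients
    friend class SourceManager;      // only SourceManager creates/inspects it
    explicit FileID(unsigned I) : ID(I) {}
  public:
    FileID() = default;              // an "invalid" file ID
    bool isValid() const { return ID != 0; }
    bool operator==(FileID RHS) const { return ID == RHS.ID; }
    bool operator!=(FileID RHS) const { return ID != RHS.ID; }
  };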
llvm-svn: 62407
the "physical" location of tokens, refer to the "spelling" location.
This is more concrete and useful; tokens aren't really physical objects!
llvm-svn: 62309
would not eat the "-1" in "0x0p-1", but LiteralSupport would accept
it when extensions are on. This caused strangeness and failures
when hexfloats were properly treated as an extension (not an error)
in LiteralSupport.
llvm-svn: 59865
one for building up the diagnostic that is in flight (DiagnosticBuilder)
and one for pulling structured information out of the diagnostic when
formatting and presenting it.
There is no functionality change with this patch.
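A toy version of the split, just to show the shape (not Clang's actual
classes): the builder accumulates arguments while the diagnostic is in flight,
and the stored record is what the formatting side later reads back.

  #include <string>
  #include <utility>
  #include <vector>

  struct StoredDiagnostic {
    unsigned ID;
    std::vector<std::string> Args;  // structured info, pulled out when formatting
  };

  class DiagBuilder {
    StoredDiagnostic &D;
  public:
    explicit DiagBuilder(StoredDiagnostic &D) : D(D) {}
    DiagBuilder &operator<<(std::string Arg) {
      D.Args.push_back(std::move(Arg));  // capture an argument in flight
      return *this;
    }
  };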
llvm-svn: 59849
- Add variants of IsNonPragmaNonMacroLexer to accept an IncludeMacroStack entry
(simplifies some uses).
- Use IsNonPragmaNonMacroLexer in Preprocessor::LookupFile.
- Add 'FileID' to PreprocessorLexer, and have Preprocessor query this FileID
when looking up the FileEntry for a file.
Performance testing of -Eonly on Cocoa.h shows no performance regression because
of this patch.
llvm-svn: 59666
even whitespace, as tokens from the file. This is enabled with
L->SetKeepWhitespaceMode(true) on a raw lexer. In this mode, you too
can use clang as a really complex version of 'cat' with code like this:
  Lexer RawLex(SourceLocation::getFileLoc(SM.getMainFileID(), 0),
               PP.getLangOptions(), File.first, File.second);
  RawLex.SetKeepWhitespaceMode(true);
  Token RawTok;
  RawLex.LexFromRawLexer(RawTok);
  while (RawTok.isNot(tok::eof)) {
    std::cout << PP.getSpelling(RawTok);
    RawLex.LexFromRawLexer(RawTok);
  }
This will emit exactly the input file, with no canonicalization or other
translation. Realistic clients actually do something with the tokens of
course :)
llvm-svn: 57401
same way we do an unterminated string or character literal. This makes
it so we can guarantee that the lexer never calls into the
preprocessor (which would be suicide for a raw lexer).
llvm-svn: 57395
using LexRawToken, create one and use LexFromRawLexer. This avoids
twiddling the RawLexer flag around and simplifies some code (even
speeding raw lexing up a tiny bit).
This change also improves the token paster to use a Lexer on the stack
instead of new/deleting it.
llvm-svn: 57393
- Added as private members for each because it is not clear where to
put the common definition. Perhaps the IdentifierInfos for all of these
"pseudo-keywords" should be collected into one place (this would include
KnownFunctionIDs and Objective-C property IDs, for example).
Remove Token::isNamedIdentifier.
- There isn't a good reason to use strcmp when we have interned
strings, and there isn't a good reason to encourage clients to do
so.
llvm-svn: 54794
lib dir and move all the libraries into it. This follows the main
llvm tree, and allows the libraries to be built in parallel. The
top level now enforces that all the libs are built before Driver,
but we don't care what order the libs are built in. This speeds
up parallel builds, particularly incremental ones.
llvm-svn: 48402