This patch ensures built-in functions are rewritten using the proper
parent declaration.
Existing tests are modified to run in C++ mode to ensure the
functionality also works with C++ for OpenCL, without increasing the
testing runtime.
llvm-svn: 365499
Ignore trailing NullStmts in compound expressions when determining the result type and value. This matches GCC's behavior, which ignores trailing semicolons at the end of compound expressions.
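For illustration, a minimal sketch (not from the patch) of the affected construct, a GNU statement expression:
  int f() {
    // Trailing null statements after `x;` previously changed the result
    // type to void; they are now ignored, matching GCC, so `v` gets 42.
    int v = ({ int x = 42; x;;; });
    return v;
  }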
Patch by Dominic Ferreira.
llvm-svn: 365498
In gcc PowerPC, long double has 3 mangling schemes:
-mlong-double-64: `e`
-mlong-double-128 -mabi=ibmlongdouble: `g`
-mlong-double-128 -mabi=ieeelongdouble: `u9__ieee128` (gcc <= 8.1: `U10__float128`)
The current useFloat128ManglingForLongDouble() predicate only makes a
binary choice, which is not suitable once clang supports
-mlong-double-128 (D64277). Replace useFloat128ManglingForLongDouble()
with getLongDoubleMangling() and getFloat128Mangling() to allow all 3
mangling schemes.
I also deleted the `getTriple().isOSBinFormatELF()` check (the Darwin
support is gone: https://reviews.llvm.org/D50988).
For x86, change the mangling code of __float128 from `U10__float128` to `g`; `U10__float128` had been wrongly copied from PowerPC.
The test will be added to `test/CodeGen/x86-long-double.cpp` in D64277.
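For illustration, the Itanium mangling of a simple declaration such as `void f(long double)` under the three schemes (a sketch derived from the codes above):
  // void f(long double);
  //   -mlong-double-64:                        _Z1fe
  //   -mlong-double-128 -mabi=ibmlongdouble:   _Z1fg
  //   -mlong-double-128 -mabi=ieeelongdouble:  _Z1fu9__ieee128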
Reviewed By: erichkeane
Differential Revision: https://reviews.llvm.org/D64276
llvm-svn: 365480
Summary:
A tooling-focused alternative to the AST. This commit focuses on the
memory-management strategy and the structure of the AST.
More to follow later:
- Operations to mutate the syntax trees and corresponding textual
replacements.
- Mapping between clang AST nodes and syntax tree nodes.
- More node types corresponding to the language constructs.
Reviewers: sammccall
Reviewed By: sammccall
Subscribers: llvm-commits, mgorny, cfe-commits
Tags: #clang, #llvm
Differential Revision: https://reviews.llvm.org/D61637
........
Fixes buildbots which were crashing on SyntaxTests.exe
llvm-svn: 365465
For background of BPF CO-RE project, please refer to
http://vger.kernel.org/bpfconf2019.html
In summary, BPF CO-RE intends to compile bpf programs that are
adjustable to struct/union layout changes, so the same program can run
on multiple kernels after load-time adjustment based on the native
kernel's structures.
In order to do this, we need to keep track of the base and result
debuginfo types of GEP (getelementptr) instructions, so we can adjust
them on the host based on kernel BTF info.
Capturing such information as an IR optimization is hard: various
optimizations may have tweaked the GEPs, and since unions are replaced
by structures it is impossible to track the field index for union
member accesses.
Three intrinsic functions, preserve_{array,union,struct}_access_index,
are introduced.
addr = preserve_array_access_index(base, index, dimension)
addr = preserve_union_access_index(base, di_index)
addr = preserve_struct_access_index(base, gep_index, di_index)
here,
base: the base pointer for the array/union/struct access.
index: the last access index for array, the same for IR/DebugInfo layout.
dimension: the array dimension.
gep_index: the access index based on IR layout.
di_index: the access index based on user/debuginfo types.
If these intrinsics were used blindly, i.e., if all GEPs were
transformed into these intrinsics and later reduced back to GEPs, we
have seen up to 7% more instructions generated. To avoid such overhead,
a clang builtin is proposed:
base = __builtin_preserve_access_index(base)
such that the user wraps to-be-relocated GEPs in this builtin and the
preserve_*_access_index intrinsics apply only to those GEPs. Such a
builtin prevents performance degradation for people who do not use
CO-RE, even in programs that use bpf_probe_read().
For example, consider the following program:
$ cat test.c
struct sk_buff {
  int i;
  int b1:1;
  int b2:2;
  union {
    struct {
      int o1;
      int o2;
    } o;
    struct {
      char flags;
      char dev_id;
    } dev;
    int netid;
  } u[10];
};
static int (*bpf_probe_read)(void *dst, int size, const void *unsafe_ptr)
    = (void *) 4;
#define _(x) (__builtin_preserve_access_index(x))
int bpf_prog(struct sk_buff *ctx) {
  char dev_id;
  bpf_probe_read(&dev_id, sizeof(char), _(&ctx->u[5].dev.dev_id));
  return dev_id;
}
$ clang -target bpf -O2 -g -emit-llvm -S -mllvm -print-before-all \
test.c >& log
The generated IR looks like this:
...
define dso_local i32 @bpf_prog(%struct.sk_buff*) #0 !dbg !15 {
  %2 = alloca %struct.sk_buff*, align 8
  %3 = alloca i8, align 1
  store %struct.sk_buff* %0, %struct.sk_buff** %2, align 8, !tbaa !45
  call void @llvm.dbg.declare(metadata %struct.sk_buff** %2, metadata !43, metadata !DIExpression()), !dbg !49
  call void @llvm.lifetime.start.p0i8(i64 1, i8* %3) #4, !dbg !50
  call void @llvm.dbg.declare(metadata i8* %3, metadata !44, metadata !DIExpression()), !dbg !51
  %4 = load i32 (i8*, i32, i8*)*, i32 (i8*, i32, i8*)** @bpf_probe_read, align 8, !dbg !52, !tbaa !45
  %5 = load %struct.sk_buff*, %struct.sk_buff** %2, align 8, !dbg !53, !tbaa !45
  %6 = call [10 x %union.anon]* @llvm.preserve.struct.access.index.p0a10s_union.anons.p0s_struct.sk_buffs(
         %struct.sk_buff* %5, i32 2, i32 3), !dbg !53, !llvm.preserve.access.index !19
  %7 = call %union.anon* @llvm.preserve.array.access.index.p0s_union.anons.p0a10s_union.anons(
         [10 x %union.anon]* %6, i32 1, i32 5), !dbg !53
  %8 = call %union.anon* @llvm.preserve.union.access.index.p0s_union.anons.p0s_union.anons(
         %union.anon* %7, i32 1), !dbg !53, !llvm.preserve.access.index !26
  %9 = bitcast %union.anon* %8 to %struct.anon.0*, !dbg !53
  %10 = call i8* @llvm.preserve.struct.access.index.p0i8.p0s_struct.anon.0s(
          %struct.anon.0* %9, i32 1, i32 1), !dbg !53, !llvm.preserve.access.index !34
  %11 = call i32 %4(i8* %3, i32 1, i8* %10), !dbg !52
  %12 = load i8, i8* %3, align 1, !dbg !54, !tbaa !55
  %13 = sext i8 %12 to i32, !dbg !54
  call void @llvm.lifetime.end.p0i8(i64 1, i8* %3) #4, !dbg !56
  ret i32 %13, !dbg !57
}
!19 = distinct !DICompositeType(tag: DW_TAG_structure_type, name: "sk_buff", file: !3, line: 1, size: 704, elements: !20)
!26 = distinct !DICompositeType(tag: DW_TAG_union_type, scope: !19, file: !3, line: 5, size: 64, elements: !27)
!34 = distinct !DICompositeType(tag: DW_TAG_structure_type, scope: !26, file: !3, line: 10, size: 16, elements: !35)
Note that the @llvm.preserve.{struct,union}.access.index calls have the llvm.preserve.access.index
metadata attached to provide struct/union debuginfo type information.
For &ctx->u[5].dev.dev_id,
. The "%6 = ..." represents struct member "u" with index 2 for IR layout and index 3 for DI layout.
. The "%7 = ..." represents array subscript "5".
. The "%8 = ..." represents union member "dev" with index 1 for DI layout.
. The "%10 = ..." represents struct member "dev_id" with index 1 for both IR and DI layout.
Basically, by traversing the use-def chain recursively for the 3rd argument of bpf_probe_read() and
examining all preserve_*_access_index calls, the debuginfo struct/union/array access indices
can be recovered.
The intrinsics also contain enough information to regenerate code for the IR layout.
For array and structure intrinsics, the proper GEP can be constructed.
For union intrinsics, replacing all uses of "addr" with "base" should be enough.
Signed-off-by: Yonghong Song <yhs@fb.com>
Differential Revision: https://reviews.llvm.org/D61809
llvm-svn: 365438
-mlong-double-64 is supported on some ports of gcc (i386, x86_64, and ppc{32,64}).
On many other targets, there will be an error:
error: unrecognized command line option '-mlong-double-64'
This patch makes the driver option -mlong-double-64 available for x86
and ppc. The CC1 option -mlong-double-64 is available on all targets for
users to test on unsupported targets.
LongDoubleSize is added as a VALUE_LANGOPT so that the option can be
shared with -mlong-double-128 when we support it in clang.
Also, make powerpc*-linux-musl default to use 64-bit long double. It is
currently the only supported ABI on musl and is also how people
configure powerpc*-linux-musl-gcc.
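A quick way to observe the new default (an illustrative invocation, not part of the patch):
  $ cat check.c
  _Static_assert(sizeof(long double) == sizeof(double),
                 "powerpc*-linux-musl defaults to 64-bit long double");
  $ clang -target powerpc64le-linux-musl -fsyntax-only check.c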
Reviewed By: rnk
Differential Revision: https://reviews.llvm.org/D64067
llvm-svn: 365412
On macOS, BOOL is a typedef for signed char, but it should never hold a value
that isn't 1 or 0. Any code that expects a different value in its BOOL should
be fixed.
rdar://51954400
Differential revision: https://reviews.llvm.org/D63856
llvm-svn: 365408
This reverts r365382 (git commit 8b1becf2e3)
Appears to regress this semi-reduced fragment of valid code from the Windows
SDK headers:
#define InterlockedIncrement64 _InterlockedIncrement64
extern "C" __int64 InterlockedIncrement64(__int64 volatile *Addend);
#pragma intrinsic(_InterlockedIncrement64)
unsigned __int64 InterlockedIncrement(unsigned __int64 volatile *Addend) {
  return (unsigned __int64)(InterlockedIncrement64)((volatile __int64 *)Addend);
}
Found on a buildbot here, but no mail was sent due to it already being
red:
http://lab.llvm.org:8011/builders/sanitizer-windows/builds/48067
llvm-svn: 365393
Summary:
During CTU analysis of complex projects, the loaded AST-contents of
imported TUs can grow bigger than available system memory. This option
introduces a threshold on the number of TUs to be imported for a single
TU in order to prevent such cases.
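A sample invocation (the analyzer-config key name, ctu-import-threshold, is an assumption here):
  $ clang --analyze -Xclang -analyzer-config -Xclang ctu-import-threshold=100 test.c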
Differential Revision: https://reviews.llvm.org/D59798
llvm-svn: 365314
Summary:
Change the vuqadd scalar intrinsics to take the second argument as unsigned values, not signed,
according to https://developer.arm.com/architectures/instruction-sets/simd-isas/neon/intrinsics
Now the compiler correctly warns that an undefined negative float conversion is being performed.
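A sketch of the kind of code this now diagnoses (illustrative, not from the patch):
  #include <arm_neon.h>
  int32_t acc(int32_t a) {
    // The second argument is now uint32_t; converting a negative float
    // constant to unsigned is undefined, so clang can warn here:
    return vuqadds_s32(a, -1.0f);
  }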
Reviewers: LukeCheeseman, john.brawn
Reviewed By: john.brawn
Subscribers: john.brawn, javed.absar, kristof.beyls, cfe-commits
Tags: #clang
Differential Revision: https://reviews.llvm.org/D64242
llvm-svn: 365300
Summary:
Change the vsqadd scalar intrinsics to take the second argument as signed values, not unsigned,
according to https://developer.arm.com/architectures/instruction-sets/simd-isas/neon/intrinsics
The existing unsigned argument can cause faulty code, as negative float-to-unsigned conversion is
undefined, which llvm/clang optimizes away.
Reviewers: LukeCheeseman, john.brawn
Reviewed By: john.brawn
Subscribers: john.brawn, javed.absar, kristof.beyls, cfe-commits
Tags: #clang
Differential Revision: https://reviews.llvm.org/D64239
llvm-svn: 365298
Summary:
Prior to r329065, we used [-max, max] as the range of representable
values because LLVM's `fptrunc` did not guarantee defined behavior when
truncating from a larger floating-point type to a smaller one. Now that
has been fixed, we can make clang follow normal IEEE 754 semantics in this
regard and take the larger range [-inf, +inf] as the range of representable
values.
In practice, this affects two parts of the frontend:
* the constant evaluator no longer treats floating-point evaluations
that result in +-inf as being undefined (because they no longer leave
the range of representable values of the type)
* UBSan no longer treats conversions to floating-point type that are
outside the [-max, +max] range as being undefined
In passing, also remove the float-divide-by-zero sanitizer from
-fsanitize=undefined, on the basis that while it's undefined per C++
rules (and we disallow it in constant expressions for that reason), it
is defined by Clang / LLVM / IEEE 754.
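For illustration, a sketch of the user-visible effect:
  // Previously rejected in constant evaluation because the result left
  // [-max, +max]; now accepted, evaluating to +inf per IEEE 754:
  constexpr double d = 1e308 * 10.0;
  // Float division by zero must now be requested explicitly, e.g. with
  // -fsanitize=float-divide-by-zero, instead of via -fsanitize=undefined.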
Reviewers: rnk, BillyONeal
Subscribers: cfe-commits
Tags: #clang
Differential Revision: https://reviews.llvm.org/D63793
llvm-svn: 365272
Some Rewrite functions are already overloaded to accept
CharSourceRange, and this extends others in the same manner. I'm
calling these in code that's not ready to upstream, but I figure they
might be useful to others in the meantime.
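A minimal sketch of the kind of call this enables (assuming the CharSourceRange overloads mirror the existing SourceRange ones):
  #include "clang/Rewrite/Core/Rewriter.h"
  using namespace clang;
  void erase(Rewriter &RW, SourceLocation B, SourceLocation E) {
    // A half-open character range needs no token-length computation:
    RW.RemoveText(CharSourceRange::getCharRange(B, E));
  }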
Reviewed By: jdoerfert
Differential Revision: https://reviews.llvm.org/D61467
llvm-svn: 365258
This patch is a major part of my GSoC project, aimed to improve the bug
reports of the analyzer.
TL;DR: Help the analyzer understand that some conditions are important
and should be explained better. If a CFGBlock is a control dependency
of a block where an expression's value is tracked, explain the condition
expression better by tracking it as well.
if (A) // let's explain why we believe A to be true
  10 / x; // division by zero
This is an experimental feature, and can be enabled by the
off-by-default analyzer configuration "track-conditions".
In detail:
This idea was inspired by the program slicing algorithm. Essentially,
two things are used to produce a program slice (a subset of the program
relevant to a (statement, variable) pair): data and control
dependencies. The bug path (the linear path in the ExplodedGraph that leads
from the beginning of the analysis to the error node) enables the
analyzer to reason about data dependencies with relative ease.
Control dependencies are a different slice of the cake entirely.
Just because we reached a branch during symbolic execution, it
doesn't mean that that particular branch has any effect on whether the
bug would have occurred. This means that we can't simply rely on the bug
path to gather control dependencies.
In previous patches, LLVM's IDFCalculator, which works on a control flow
graph rather than the ExplodedGraph, was generalized to solve this issue.
We use this information to heuristically guess that the value of a tracked
expression depends greatly on its control dependencies, and we start
tracking them as well.
After plenty of evaluations this was seen as a great idea, but one still
lacking refinements (we should have different descriptions of a
condition's value), hence it's off-by-default.
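It can be enabled like any other analyzer config (illustrative invocation):
  $ clang --analyze -Xclang -analyzer-config -Xclang track-conditions=true test.c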
Differential Revision: https://reviews.llvm.org/D62883
llvm-svn: 365207
I intend to improve the analyzer's bug reports by tracking condition
expressions.
bool b = messyComputation();
int i = 0;
if (b) // control dependency of the bug site, let's explain why we assume b
       // to be true
  10 / i; // warn: division by zero
I'll detail this heuristic in the followup patch. This patch contains the
strictly related groundwork:
* Create the new ControlDependencyCalculator class that uses llvm::IDFCalculator
to (lazily) calculate control dependencies for Clang's CFG.
* A new debug checker, debug.DumpControlDependencies, is added for lit tests
  (a sample invocation is sketched after this list)
* Add unittests
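A sample invocation of the new checker (sketch):
  $ clang --analyze -Xclang -analyzer-checker=debug.DumpControlDependencies test.c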
Differential Revision: https://reviews.llvm.org/D62619
llvm-svn: 365197
getTerminatorCondition() returned a condition that may be outside of the
block, while the new function returns the proper one:
if (A && B && C) {}
Return C instead of A && B && C.
Differential Revision: https://reviews.llvm.org/D63538
llvm-svn: 365177
This moves Bitcode/Bitstream*, Bitcode/BitCodes.h to Bitstream/.
This is needed to avoid a circular dependency when using the bitstream
code for parsing optimization remarks.
Since Bitcode uses Core for the IR part:
libLLVMRemarks -> Bitcode -> Core
and Core uses libLLVMRemarks to generate remarks (see
IR/RemarkStreamer.cpp):
Core -> libLLVMRemarks
we need to separate the Bitstream and Bitcode part.
clang-doc doesn't seem to need the whole bitcode layer, so I updated its
CMake to use only the bitstream part.
Differential Revision: https://reviews.llvm.org/D63899
llvm-svn: 365091
For the following terminator statement:
  if (A && B && C)
The built CFG is the following:
 [B5 (ENTRY)]
   Succs (1): B4

 [B1]
   1: 10
   2: j
   3: [B1.2] (ImplicitCastExpr, LValueToRValue, int)
   4: [B1.1] / [B1.3]
   5: int x = 10 / j;
   Preds (1): B2
   Succs (1): B0

 [B2]
   1: C
   2: [B2.1] (ImplicitCastExpr, LValueToRValue, _Bool)
   T: if [B4.4] && [B3.2] && [B2.2]
   Preds (1): B3
   Succs (2): B1 B0

 [B3]
   1: B
   2: [B3.1] (ImplicitCastExpr, LValueToRValue, _Bool)
   T: [B4.4] && [B3.2] && ...
   Preds (1): B4
   Succs (2): B2 B0

 [B4]
   1: 0
   2: int j = 0;
   3: A
   4: [B4.3] (ImplicitCastExpr, LValueToRValue, _Bool)
   T: [B4.4] && ...
   Preds (1): B5
   Succs (2): B3 B0

 [B0 (EXIT)]
   Preds (4): B1 B2 B3 B4
However, even though the path of execution taken at B2 only depends on C's
value, CFGBlock::getCondition() would return the entire condition (A && B && C).
For B3, it would return A && B. I changed this to return the actual condition
(C for B2, B for B3).
Differential Revision: https://reviews.llvm.org/D63538
llvm-svn: 365036
Transform clang::DominatorTree so that it can also calculate post dominators.
* Tidy up the documentation
* Make clang::DominatorTree a template class (similar to how
  llvm::DominatorTreeBase works) and rename it to clang::CFGDominatorTreeImpl
* Clang's dominator tree is now called clang::CFGDomTree
* Clang's brand new post dominator tree is called clang::CFGPostDomTree
* Add a lot of asserts to the dump() function
* Create a new checker to test the functionality
Differential Revision: https://reviews.llvm.org/D62551
llvm-svn: 365028
https://bugs.llvm.org/show_bug.cgi?id=42041
In Clang's CFG, we use null pointers to represent unreachable nodes; for
example, in the included test file, block B0 is unreachable from block
B1, resulting in a null-pointer dereference somewhere in
llvm::DominatorTreeBase<clang::CFGBlock, false>::recalculate.
This patch fixes the issue by specializing
llvm::DomTreeBuilder::SemiNCAInfo::ChildrenGetter::Get for
clang::CFG so that the returned children contain no null successors.
Differential Revision: https://reviews.llvm.org/D62507
llvm-svn: 365026
Currently this check generates the replacement with a trailing newline.
The proper way to export such a replacement to YAML is to emit two newlines
(\n\n) instead of one.
Without this fix, clients would reinterpret the replacement as
"#include <utility> " instead of "#include <utility>\n"
Differential Revision: https://reviews.llvm.org/D63482
llvm-svn: 365017
Summary:
Currently HeaderSearch only looks at the SearchDirs passed into it, but in
addition to those paths, headers can be resolved relative to the including
file's directory. This patch makes sure that is taken into account.
Reviewers: gribozavr
Subscribers: jkorous, arphaman, cfe-commits
Tags: #clang
Differential Revision: https://reviews.llvm.org/D63295
llvm-svn: 365005
This commit adds a new builtin, __builtin_bit_cast(T, v), which performs a
bit_cast from a value v to a type T. This expression can be evaluated at
compile time under specific circumstances.
The compile time evaluation currently doesn't support bit-fields, but I'm
planning on fixing this in a follow up (some of the logic for figuring this out
is in CodeGen). I'm also planning follow-ups for supporting some more esoteric
types that the constexpr evaluator supports, as well as extending
__builtin_memcpy constexpr evaluation to use the same infrastructure.
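For illustration, a minimal sketch of the builtin in a constant expression (assuming a 32-bit unsigned):
  constexpr float f = 1.0f;
  // Reinterpret the bits of the float at compile time:
  constexpr unsigned u = __builtin_bit_cast(unsigned, f);
  static_assert(u == 0x3f800000, "IEEE-754 bit pattern of 1.0f");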
rdar://44987528
Differential revision: https://reviews.llvm.org/D62825
llvm-svn: 364954
This option behaves similarly to AlignConsecutiveDeclarations and
AlignConsecutiveAssignments, aligning the assignment of C/C++
preprocessor macros on consecutive lines.
I've worked in many projects (embedded, mostly) where header files full
of large, well-aligned "#define" blocks are a common pattern. We
normally avoid using clang-format on these files, since it ruins any
existing alignment in said blocks. This style option will align "simple"
PP macros (no parameters) and PP macros with parameter lists on
consecutive lines.
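For example, assuming the option is named AlignConsecutiveMacros in .clang-format, a block like the following keeps its alignment instead of being collapsed:
  // .clang-format: AlignConsecutiveMacros: true
  #define SHORT_NAME       42
  #define LONGER_NAME      0x007f
  #define EVEN_LONGER_NAME (2)
  #define foo(x)           (x * x)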
Related Bugzilla entry (thanks mcuddie):
https://llvm.org/bugs/show_bug.cgi?id=20637
Patch by Nick Renieris (VelocityRa)!
Differential Revision: https://reviews.llvm.org/D28462
llvm-svn: 364938
Summary:
This revision allows users to specify the insertion of an include directive (at
the top of the file being rewritten) as part of a rewrite rule. These
directives are bundled with `RewriteRule` cases, so that different cases can
potentially result in different include actions.
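A hypothetical sketch of the intended usage (the helper name addInclude is illustrative, not necessarily the exact API):
  RewriteRule Rule = /* built with makeRule(...) */;
  // Applying the rule now also inserts the directive at the top of the
  // rewritten file:
  addInclude(Rule, "path/to/helper.h"); // hypothetical helper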
Reviewers: ilya-biryukov, gribozavr
Subscribers: cfe-commits
Tags: #clang
Differential Revision: https://reviews.llvm.org/D63892
llvm-svn: 364917
When matching C standard library functions in the checker, it's easy to forget
that they are often implemented as macros that are expanded to builtins.
Such builtins have a different name, so matching the callee identifier
would fail; they may also have more arguments than expected, so matching the
exact number of arguments would fail. The latter is fine as long as all the
arguments that we need are in their respective places.
This patch adds a set of flags to the CallDescription class to handle
various special matching rules, and adds the first flag in this set,
which enables fuzzier matching for functions that
may be implemented as compiler builtins.
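A sketch of how a checker would opt in (the flag name CDF_MaybeBuiltin is assumed from this change):
  // Also matches expansions to builtins such as __builtin___memset_chk,
  // which carry extra arguments:
  CallDescription MemsetFn(CDF_MaybeBuiltin, "memset", 3);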
Differential Revision: https://reviews.llvm.org/D62556
llvm-svn: 364867
It encapsulates the procedure of figuring out whether a call event
corresponds to a function that's modeled by a checker.
Checker developers no longer need to worry about performance of
lookups into their own custom maps.
Add unittests - which finally test CallDescription itself as well.
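A sketch of the intended usage pattern in a checker:
  // Map modeled functions to an arbitrary payload (e.g. an evaluator id):
  const CallDescriptionMap<int> ModeledFunctions{
      {{"malloc", 1}, 1},
      {{"free", 1}, 2},
  };
  // In a callback, look the call up instead of scanning a custom map:
  // if (const int *Id = ModeledFunctions.lookup(Call)) { ... }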
Differential Revision: https://reviews.llvm.org/D62441
llvm-svn: 364866
Previously, lambda captures were processed inside the function called while
capturing the variables. This led to recursive function calls and could
crash the compiler.
llvm-svn: 364820
Summary:
Now we store the errors for the Decls in the "to" context too. For
that, however, we have to put these errors in a shared state (among all
the ASTImporter objects which handle the same "to" context but different
"from" contexts).
After a series of imports from different "from" TUs we have a "to" context
which may have erroneous nodes in it. (Remember, the AST is immutable, so
there is no way to delete a node once it has been created, even if we
realize the error only later.) All these erroneous nodes are marked in
ASTImporterSharedState::ImportErrors. Clients of the ASTImporter may
use this as an input. E.g. the static analyzer engine may not try to
analyze a function if that is marked as erroneous (it can be queried via
ASTImporterSharedState::getImportDeclErrorIfAny()).
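A sketch of how a client might consult the shared state (the exact return type is elided with auto, as an assumption):
  // Skip analyzing a function whose AST was only partially imported:
  if (auto Err = SharedState->getImportDeclErrorIfAny(ToDecl))
    return; // the node is marked erroneous; don't rely on its body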
Reviewers: a_sidorin, a.sidorin, shafik
Subscribers: rnkovacs, dkrupp, Szelethus, gamesh411, cfe-commits
Tags: #clang
Differential Revision: https://reviews.llvm.org/D62376
llvm-svn: 364785
Summary:
During the import of a specific Decl D, it may happen that some AST nodes
have already been created before we recognize an error. In this case we
signal the error back to the caller, but the "to" context remains
polluted with the nodes that were already created. Ideally, those nodes
should not have been created, but at that time we did not know about the
error; it happened later. Since the AST is immutable (in most cases we
can't remove existing nodes) we choose to mark these nodes as erroneous.
Here are the steps of the algorithm:
1) We keep track of the nodes which we visit during the import of D: See
ImportPathTy.
2) If a Decl is already imported and it is already on the import path
(we have a cycle) then we copy/store the relevant part of the import
path. We store these cycles for each Decl.
3) When we recognize an error during the import of D, we assign this
error to all Decls in the stored cycles for D, and we clear the stored
cycles.
Reviewers: a_sidorin, a.sidorin, shafik
Subscribers: rnkovacs, dkrupp, Szelethus, gamesh411, cfe-commits
Tags: #clang
Differential Revision: https://reviews.llvm.org/D62375
llvm-svn: 364771