This is a more scalable (fixed in mostly one place, rather than many
places that will need constant improvement/maintenance) solution to
several commits I've made recently to increase source fidelity for
subexpressions.
This resetting had to be done at the DebugLoc level (not the
SourceLocation level) to preserve scoping information (if the resetting
was done with CGDebugInfo::EmitLocation, it would've caused the tail end
of an expression's codegen to end up in a potentially different scope
than the start, even though it was at the same source location). The
drawback to this is that it might leave CGDebugInfo out of sync. Ideally
CGDebugInfo shouldn't have a duplicate sense of the current
SourceLocation, but for now it seems it does; I don't think I'm
going to tackle removing that just now.
I expect this'll probably cause some more buildbot fallout & I'll
investigate that as it comes up.
Also, these sorts of improvements might be starting to show a weakness/bug
in LLVM's line table handling: we don't correctly emit is_stmt for
statements; we just put it on every line table entry. This means one
statement split over multiple lines appears as multiple 'statements', and
two statements on one line (without column info) are treated as one
statement.
I don't think we have any IR representation of statements that would
help us distinguish these cases and identify the beginning of each
statement - so that might be something we need to add (possibly to the
lexical scope chain - a scope for each statement). This does cause some
problems for GDB and possibly other DWARF consumers.
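A small C++ illustration of the two cases described above (the function names are made up):
#include <cstdio>
int compute(int a, int b) { return a + b; }
int main() {
  // One statement split over multiple lines: every line-table entry gets
  // is_stmt today, so consumers see several 'statements'.
  int x = compute(1,
                  2);
  // Two statements on one line (without column info): consumers see one.
  ++x; std::printf("%d\n", x);
  return 0;
}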
llvm-svn: 224385
This particularly helps the fidelity of ASan reports, which can occur even
in these examples: if, for example, one uses placement new over a buffer of
insufficient size, ASan will now correctly identify which member's
initialization ran past the end of the buffer.
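A rough sketch of the kind of bug this pinpoints (type and buffer names are made up):
#include <new>
struct Widget {
  int a, b, c;
  Widget() : a(1), b(2), c(3) {}  // with field padding, ASan can report
};                                // exactly which initializer overflowed
int main() {
  alignas(Widget) char buf[2 * sizeof(int)];  // too small for a Widget
  Widget *w = new (buf) Widget();             // initialization runs past buf
  (void)w;
  return 0;
}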
This doesn't cover all types of members - more coming.
llvm-svn: 223726
Summary:
When -fsanitize-address-field-padding=1 is present,
don't emit a memcpy for the copy constructor.
Thanks Nico for the extra test case.
Test Plan: regression tests
Reviewers: thakis, rsmith
Reviewed By: rsmith
Subscribers: rsmith, cfe-commits
Differential Revision: http://reviews.llvm.org/D6515
llvm-svn: 223563
We currently use i32 (...)** as the type of the vptr field in the LLVM
struct type. LLVM's GlobalOpt prefers any bitcasts to be on the side of
the data being stored rather than on the pointer being stored to.
Reviewers: majnemer
Differential Revision: http://reviews.llvm.org/D5916
llvm-svn: 223267
Consider this program:
#include <stdio.h>
struct A {
  virtual void operator-() { printf("base\n"); }
};
struct B final : public A {
  virtual void operator-() override { printf("derived\n"); }
};
int main() {
  B* b = new B;
  -static_cast<A&>(*b);
}
Before this patch, clang saw the virtual call to A::operator-(), figured out
that it can be devirtualized, and then just called A::operator-() directly,
without going through the vtable. Instead, it should've looked up which
operator-() the call devirtualizes to and should've called that.
For regular virtual member calls, clang gets all this right already. So
instead of giving EmitCXXOperatorMemberCallee() all the logic that
EmitCXXMemberCallExpr() already has, cut the latter function into two pieces,
call the second piece EmitCXXMemberOrOperatorMemberCallExpr(), and use it also
to generate code for calls to virtual member operators.
This way, virtual overloaded operators automatically don't get devirtualized
if they have covariant returns (like it was done for regular calls in r218602),
etc.
This also happens to fix (or at least improve) codegen for explicit constructor
calls (`A a; a.A::A()`) in MS mode with -fsanitize-address-field-padding=1.
(This adjustment for virtual operator calls seems still wrong with the MS ABI.)
llvm-svn: 223185
Get rid of ugly SanitizerOptions class thrust into LangOptions:
* Make SanitizeAddressFieldPadding a regular language option,
and rely on default behavior to initialize/reset it.
* Make SanitizerBlacklistFile a regular member of LangOptions.
* Introduce the helper class "SanitizerSet" to represent the
set of enabled sanitizers and make it a member of LangOptions.
It is exactly the entity we want to cache and modify in CodeGenFunction,
for instance. We'd also be able to reuse SanitizerSet in
CodeGenOptions for storing the set of recoverable sanitizers,
and in the Driver to represent the set of sanitizers
turned on/off by the commandline flags.
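A minimal sketch of the idea, with made-up names; the real SanitizerSet and SanitizerKind in clang differ in detail:
#include <cstdint>
enum SanitizerKindSketch : std::uint32_t {  // illustration only
  SK_Address = 1u << 0,
  SK_Vptr    = 1u << 1,
  SK_Null    = 1u << 2,
};
struct SanitizerSetSketch {
  std::uint32_t Mask = 0;  // one bit per enabled sanitizer
  bool has(std::uint32_t K) const { return (Mask & K) != 0; }
  void set(std::uint32_t K, bool On) {
    if (On) Mask |= K; else Mask &= ~K;
  }
};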
No functionality change.
llvm-svn: 221653
Use the bitmask to store the set of enabled sanitizers instead of a
bitfield. On the negative side, it makes syntax for querying the
set of enabled sanitizers a bit more clunky. On the positive side, we
will be able to use SanitizerKind to eventually implement the
new semantics for -fsanitize-recover= flag, that would allow us
to make some sanitizers recoverable, and some non-recoverable.
No functionality change.
llvm-svn: 221558
SanitizerOptions is not even a POD now, so having a global variable of
this type is not nice. Instead, provide a regular constructor and clear()
method, and let each CodeGenFunction have its own copy of the
SanitizerOptions it uses.
llvm-svn: 220920
Summary:
The general approach is to add extra padding after every field
in AST/RecordLayoutBuilder.cpp, then add code to CTORs/DTORs that poisons
the padding (CodeGen/CGClass.cpp).
Everything is done under the flag -fsanitize-address-field-padding.
The blacklist file (-fsanitize-blacklist) makes it possible to avoid the
transformation for given classes or source files.
See also https://code.google.com/p/address-sanitizer/wiki/IntraObjectOverflow
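A hedged illustration of what the instrumentation does (padding sizes and placement are implementation details):
struct S {
  S() : a(0), b(0) {}  // the generated code also poisons the added padding
  ~S() {}              // and the destructor un-poisons it
  int a;               // RecordLayoutBuilder adds extra padding after each
  int b;               // field when the flag is enabled
};
int main() {
  S s;
  int *p = &s.a;
  p[1] = 42;  // intra-object overflow: with the flag, this lands in
  return 0;   // poisoned padding and ASan reports it
}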
Test Plan: run SPEC2006 and some of the Chromium tests with -fsanitize-address-field-padding
Reviewers: samsonov, rnk, rsmith
Reviewed By: rsmith
Subscribers: majnemer, cfe-commits
Differential Revision: http://reviews.llvm.org/D5687
llvm-svn: 219961
This change adds a UBSan check to upcasts. Namely, when we
perform a derived-to-base conversion, we:
1) check that the pointer-to-derived has suitable alignment
and underlying storage, if this pointer is non-null;
2) if the vptr sanitizer is enabled and we perform a conversion to a
virtual base, check that the pointer-to-derived has a matching vptr.
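A hypothetical example of a conversion these checks instrument:
struct Base { virtual ~Base() {} int x; };
struct Derived : virtual Base { int y; };
Base &upcast(Derived *d) {
  // UBSan checks that 'd' (if non-null) is suitably aligned and has enough
  // underlying storage; with the vptr sanitizer enabled, it also checks
  // that 'd' carries a matching vptr before the virtual-base conversion.
  return *d;
}
int main() { Derived d; upcast(&d); return 0; }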
llvm-svn: 219642
It's possible to construct cases where the first field we are trying to
copy is in the middle of an IR field. In some complicated cases, we
would fail to use an appropriate offset inside the object. Earlier
builds of clang seemed to miscompile the code by copying an insufficient
number of bytes. Up until now, we would assert: the copying offset was
insufficiently aligned.
This fixes PR21232.
llvm-svn: 219524
There are situations when clang knows that the C1 and C2 constructors
or the D1 and D2 destructors are identical. We already optimize some
of these cases, but cannot do so when the GlobalValue is
weak_odr.
The problem with weak_odr is that an old TU seeing the same code will
have a C1 and a C2 comdat with the corresponding symbols. We cannot
suddenly start putting the C2 symbol in the C1 comdat as we cannot
guarantee that the linker will not pick a .o with only C1 in it.
The solution implemented by GCC is to expand the ABI to have a comdat
whose name uses a C5/D5 suffix and always has both symbols. That is
what this patch implements.
llvm-svn: 217874
We assumed that the incoming this argument would be the last argument.
However, this is not true under the MS ABI.
This fixes PR20897.
llvm-svn: 217642
Summary:
This patch implements a new UBSan check, which verifies
that function arguments declared to be nonnull with __attribute__((nonnull))
are actually nonnull at runtime.
To implement this check, we pass the FunctionDecl to
CodeGenFunction::EmitCallArgs (where applicable), and if the function
declaration has the nonnull attribute specified for a certain formal
parameter, we compare the corresponding RValue to null as soon as it's
calculated.
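A hedged example of the kind of code this check applies to:
#include <cstddef>
__attribute__((nonnull(1, 2)))
void copy_bytes(char *dst, const char *src, std::size_t n) {
  for (std::size_t i = 0; i != n; ++i) dst[i] = src[i];
}
int main() {
  char buf[4];
  copy_bytes(buf, "abc", 4);      // ok
  copy_bytes(nullptr, "abc", 4);  // the RValue for 'dst' is compared to
  return 0;                       // null at the call site and diagnosed
}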
Test Plan: regression test suite
Reviewers: rsmith
Reviewed By: rsmith
Subscribers: cfe-commits, rnk
Differential Revision: http://reviews.llvm.org/D5082
llvm-svn: 217389
There were code paths duplicated for constructors and destructors just
because we have both CXXCtorType and CXXDtorType.
This patch introduces a unified enum and reduces code duplication a bit.
llvm-svn: 217383
Summary:
This is a first small step towards passing a generic "Expr" instead of an
ArgBeg/ArgEnd pair into the EmitCallArgs() family of methods. Having "Expr" will
allow us to get the corresponding FunctionDecl and its ParmVarDecls,
thus allowing us to alter CodeGen depending on the function/parameter
attributes.
No functionality change.
Test Plan: regression test suite
Reviewers: rnk
Reviewed By: rnk
Subscribers: aemerson, cfe-commits
Differential Revision: http://reviews.llvm.org/D4915
llvm-svn: 216214
they're somehow missing a body. Looks like this was left behind when the loop
was generalized, and it's not been problematic before because without modules,
a used, implicit special member function declaration must be a definition.
This was resulting in us trying to emit a constructor declaration rather than
a definition, and producing a constructor missing its member initializers.
llvm-svn: 214473
This implements the central part of support for dllimport/dllexport on
classes: allowing the attribute on class declarations, inheriting it
to class members, and forcing emission of exported members. It's based
on Nico Rieck's patch from http://reviews.llvm.org/D1099.
This patch doesn't propagate dllexport to bases that are template
specializations, which is an interesting problem. It also doesn't
look at the rules when redeclaring classes with different attributes,
I'd like to do that separately.
Differential Revision: http://reviews.llvm.org/D3877
llvm-svn: 209908
When a non-trivial parameter is present, clang now gathers up all the
parameters that lack inreg and puts them into a packed struct. MSVC
always aligns each parameter to 4 bytes and no more, so this is a pretty
simple struct to lay out.
On win64, non-trivial records are passed indirectly. Prior to this
change, clang was incorrectly using byval on win64.
I'm able to self-host a working clang with this change and additional
LLVM patches.
Reviewers: rsmith
Differential Revision: http://llvm-reviews.chandlerc.com/D2636
llvm-svn: 200597
A return type is the declared or deduced part of the function type specified in
the declaration.
A result type is the (potentially adjusted) type of the value of an expression
that calls the function.
Rule of thumb:
* Declarations have return types and parameters.
* Expressions have result types and arguments.
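A small example of the distinction:
int g = 0;
int &f() { return g; }  // the declaration's return type is 'int&'
int main() {
  auto v = f();  // the call expression f() has result type 'int': the
  (void)v;       // reference is adjusted away on the expression
  return 0;
}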
llvm-svn: 200082
Fix a perennial source of confusion in the clang type system: Declarations and
function prototypes have parameters to which arguments are supplied, so calling
these 'arguments' was a stretch even in C mode, let alone C++ where default
arguments, templates and overloading make the distinction important to get
right.
Readability win across the board, especially in the casting, ADL and
overloading implementations which make a lot more sense at a glance now.
Will keep an eye on the builders and update dependent projects shortly.
No functional change.
llvm-svn: 199686
Fixes PR18435, where we generated a base ctor instead of a complete
ctor, and so failed to construct virtual bases when constructing the
complete object.
llvm-svn: 199160
encodes the canonical rules for LLVM's style. I noticed this had drifted
quite a bit when cleaning up LLVM, so wanted to clean up Clang as well.
llvm-svn: 198686
Unlike Itanium's VTTs, the 'most derived' boolean or bitfield is the
last parameter for non-variadic constructors, rather than the second.
For variadic constructors, the 'most derived' parameter comes after the
'this' parameter. This affects constructor calls and constructor decls
in a variety of places.
Reviewers: timurrrr
Differential Revision: http://llvm-reviews.chandlerc.com/D2405
llvm-svn: 197518
Summary:
MSVC destroys arguments in the callee from left to right. Because C++
objects have to be destroyed in the reverse order of construction, Clang
has to construct arguments from right to left and destroy arguments from
left to right.
This patch fixes the ordering by reversing the order of evaluation of
all call arguments under the MS C++ ABI.
Fixes PR18035.
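A hedged illustration (only meaningful when targeting the MS C++ ABI):
#include <cstdio>
struct Noisy {
  int id;
  explicit Noisy(int i) : id(i) { std::printf("construct %d\n", id); }
  ~Noisy() { std::printf("destroy %d\n", id); }
};
void take(Noisy a, Noisy b) {}
int main() {
  // The callee destroys 'a' then 'b' (left to right), so the arguments must
  // be constructed right to left: Noisy(2) before Noisy(1).
  take(Noisy(1), Noisy(2));
  return 0;
}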
Reviewers: rsmith
Differential Revision: http://llvm-reviews.chandlerc.com/D2275
llvm-svn: 196402
deallocation function (and the corresponding unsized deallocation function has
been declared), emit a weak discardable definition of the function that
forwards to the corresponding unsized deallocation.
This allows a C++ standard library implementation to provide both a sized and
an unsized deallocation function, where the unsized one does not just call the
sized one, for instance by putting both in the same object file within an
archive.
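Conceptually, the emitted definition behaves roughly like this hand-written sketch (not the actual emitted code):
#include <cstddef>
#include <new>
// The program declares (elsewhere) only the unsized form:
void operator delete(void *p) noexcept;
// When a sized delete is used, clang emits a weak, discardable definition
// roughly equivalent to this forwarder:
void operator delete(void *p, std::size_t) noexcept {
  ::operator delete(p);  // forward to the unsized deallocation function
}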
llvm-svn: 194055
CodeGenABITypes is a wrapper built on top of CodeGenModule that exposes
some of the functionality of CodeGenTypes (held by CodeGenModule),
specifically methods that determine the LLVM types appropriate for
function argument and return values.
In addition to CodeGenABITypes.h, CGFunctionInfo.h is introduced, and the
definitions of ABIArgInfo, RequiredArgs, and CGFunctionInfo are moved
into this new header from the private headers ABIInfo.h and CGCall.h.
Exposing this functionality is one part of making it possible for LLDB
to determine the actual ABI locations of function arguments and return
values, making it possible for it to determine this for any supported
target without hard-coding ABI knowledge in the LLDB code.
llvm-svn: 193717
The general strategy is to create template versions of the conversion function and static invoker, and then, during template argument deduction of the conversion function, create the corresponding call-operator and static-invoker specializations; when the conversion function is marked referenced, generate its body using the corresponding static-invoker specialization. CodeGen does something similar: when asked to emit the IR for a specialized static invoker of a generic lambda, it forwards emission to the corresponding call operator.
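A small example of the conversion this enables (requires -std=c++14 for generic lambdas):
auto add_one = [](auto x) { return x + 1; };   // captureless generic lambda
// Each conversion below triggers template argument deduction on the
// conversion function, which creates the matching call-operator and
// static-invoker specializations described above.
int (*fi)(int)       = add_one;
double (*fd)(double) = add_one;
int main() { return fi(41) == 42 ? 0 : 1; }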
This patch has been reviewed in person both by Doug and Richard. Richard gave me the LGTM.
A few minor changes:
- Per Richard's request, I added a simple check to gracefully report that captures (init, explicit or default) have not been added to generic lambdas just yet, instead of hitting an assertion.
- I removed a few lines of code that added the call operator's instantiated parameters to the current instantiation scope. Not only did it not handle parameter packs, it is more relevant to the patch for nested lambdas that will follow this one and fix that problem more comprehensively.
- Doug had commented that the original implementation strategy of using the TypeSourceInfo of the call operator to create the static invoker was flawed and allowed const as a member qualifier to creep into the type of the static invoker. I currently kludge around it, but after my initial discussion with Doug and a follow-up session with Richard, I have added a FIXME so that a more elegant solution, involving a TrivialTypeSourceInfo call followed by correct wiring of the template parameters to the FunctionProtoTypeLoc, is forthcoming.
Thanks!
llvm-svn: 191634
They were mostly copy&paste of each other, so move the code to CodeGenFunction.
Of course the two implementations have diverged over time; the one in CGExprCXX
seems to be the more modern one, so I picked that one and moved it to CGClass,
which feels like a better home for it. No intended functionality change.
llvm-svn: 189203
This field is just IsDefaulted && !IsDeleted; in all places it's used,
a simple check for isDefaulted() is superior anyway, and we were forgetting
to set it in a few cases.
Also eliminate CXXDestructorDecl::IsImplicitlyDefined, for the same reasons.
No intended functionality change.
llvm-svn: 187891
Based on Peter Collingbourne's destructor patches.
Prior to this change, clang was considering ?1 to be both the complete
destructor and the base destructor, which was wrong. This led to
crashes when clang tried to emit two LLVM functions with the same name.
In this ABI, TUs with non-inline dtors might not emit a complete
destructor. They are emitted as inline thunks in TUs that need them,
and they always delegate to the base dtors of the complete class and its
virtual bases. This change uses the DeferredDecls machinery to emit
complete dtors as needed.
Currently in clang, try body destructors can catch exceptions thrown by
virtual base destructors. In the Microsoft C++ ABI, clang may not have
the destructor definition, in which case clang won't wrap the virtual
base destructor calls in a try-catch. Diagnosing this in user
code is TODO.
Finally, for classes that don't use virtual inheritance, MSVC always
calls the base destructor (?1) directly. This is a useful code size
optimization that avoids emitting lots of extra thunks or aliases.
Implementing it also means our existing tests continue to pass, and is
consistent with MSVC's output.
We can do the same for Itanium by tweaking GetAddrOfCXXDestructor, but
it will require further testing.
Reviewers: rjmccall
CC: cfe-commits
Differential Revision: http://llvm-reviews.chandlerc.com/D1066
llvm-svn: 186828
This simplifies the core benefit of -flimit-debug-info by taking a more
systematic approach to avoid emitting debug info definitions for types
that only require declarations. The previous ad-hoc approach (3 cases
removed in this patch) had many holes.
The general approach (adding a bit to TagDecl and callback through
ASTConsumer) has been discussed with Richard Smith - though always open
to revision.
llvm-svn: 186262
This allows clang to use the backend parameter attribute 'returned' when generating 'this'-returning constructors and destructors in ARM and MSVC C++ ABIs.
llvm-svn: 185291
This function only makes sense there. Eventually it should no longer
be part of the CGCXXABI interface, as it is an Itanium-specific detail.
Differential Revision: http://llvm-reviews.chandlerc.com/D821
llvm-svn: 185213
1) Removed the useless return value of CGCXXABI::EmitConstructorCall and CGCXXABI::EmitVirtualDestructorCall and their implementations.
2) Corrected the last portion of CodeGenCXX/constructor-destructor-return-this to correctly test that virtual destructor calls do not return 'this'.
llvm-svn: 184330
In Itanium, dynamic classes have one vtable with several different
address points for dynamic base classes that can't share vtables.
In the MS C++ ABI, each vbtable that can't be shared gets its own
symbol, similar to how ctor vtables work in Itanium. However, instead
of mangling the subobject offset into the symbol, the unique portions of
the inheritance path are mangled into the symbol to make it unique.
This patch implements the MSVC 2012 scheme for forming unique vbtable
symbol names. MSVC 2010 uses the same mangling with a different subset
of the path. Implementing that mangling, and possibly others, is TODO.
Each vbtable is an array of i32 offsets, each measured from the vbptr that
points to the vbtable, to another virtual base subobject. The first entry of
a vbtable always points to the base of the current subobject, implying that
it is the same no matter which parent class contains it.
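A hedged sketch of the class shapes involved:
struct V { int v; };
struct A : virtual V { int a; };
struct B : virtual V { int b; };
struct C : A, B { int c; };
// Under the MS ABI, C contains two vbptrs (one in the A subobject, one in
// the B subobject); each points at a vbtable of i32 offsets whose first
// entry refers back to the start of its own subobject and whose later
// entries give the displacement to the virtual base V.  The A-in-C and
// B-in-C vbtables get distinct symbols mangled from their inheritance paths.
int main() { C c; c.v = 1; return c.v - 1; }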
Reviewers: rjmccall
Differential Revision: http://llvm-reviews.chandlerc.com/D636
llvm-svn: 184309
The backend will now use the generic 'returned' attribute to form tail calls where possible, as well as avoid save-restores of 'this' in some cases (specifically the cases that matter for the ARM C++ ABI).
This patch also reverts a prior front-end only partial implementation of these optimizations, since it's no longer required.
llvm-svn: 184205
Introduce CXXStdInitializerListExpr node, representing the implicit
construction of a std::initializer_list<T> object from its underlying array.
The AST representation of such an expression goes from an InitListExpr with a
flag set, to a CXXStdInitializerListExpr containing a MaterializeTemporaryExpr
containing an InitListExpr (possibly wrapped in a CXXBindTemporaryExpr).
This more detailed representation has several advantages, the most important of
which is that the new MaterializeTemporaryExpr allows us to directly model
lifetime extension of the underlying temporary array. Using that, this patch
*drastically* simplifies the IR generation of this construct, provides IR
generation support for nested global initializer_list objects, fixes several
bugs where the destructors for the underlying array would accidentally not get
invoked, and provides constant expression evaluation support for
std::initializer_list objects.
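For example:
#include <initializer_list>
int sum() {
  // The CXXStdInitializerListExpr wraps a MaterializeTemporaryExpr for the
  // backing array, so the array's lifetime is extended to match 'il'.
  std::initializer_list<int> il = {1, 2, 3};
  int s = 0;
  for (int v : il) s += v;
  return s;
}
int main() { return sum() == 6 ? 0 : 1; }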
llvm-svn: 183872
While we can't yet emit vbtables, this allows us to find virtual bases
of objects constructed in other TUs.
This makes iostream hello world work, since basic_ostream virtually
inherits from basic_ios.
Differential Revision: http://llvm-reviews.chandlerc.com/D795
llvm-svn: 182870
unnamed bitfields.
Unnamed bitfields won't have an explicit copy operation
in the AST, which breaks the strong form of the invariant.
rdar://13816940
llvm-svn: 181289
a lambda.
Bug #1 is that CGF's CurFuncDecl was "stuck" at lambda invocation
functions. Fix that by generally improving getNonClosureContext
to look through lambdas and captured statements but only report
code contexts, which is generally what's wanted. Audit uses of
CurFuncDecl and getNonClosureAncestor for correctness.
Bug #2 is that lambdas weren't specially mapping 'self' when inside
an ObjC method. Fix that by removing the requirement for that
and using the normal EmitDeclRefLValue path in LoadObjCSelf.
rdar://13800041
llvm-svn: 181000
Add a CXXDefaultInitExpr, analogous to CXXDefaultArgExpr, and use it both in
CXXCtorInitializers and in InitListExprs to represent a default initializer.
There's an additional complication here: because the default initializer can
refer to the initialized object via its 'this' pointer, we need to make sure
that 'this' points to the right thing within the evaluation.
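A small example:
struct S {
  int a = 1;
  int b = a + 1;  // default initializer that implicitly uses 'this'
  S() {}          // both members are initialized via CXXDefaultInitExpr
};
int main() { S s; return s.b == 2 ? 0 : 1; }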
llvm-svn: 179958
For constructors/destructors that return 'this', if there exists a callsite
that returns 'this' and is immediately before the return instruction, make
sure we are using the return value from the callsite.
We don't need to keep 'this' alive through the callsite. It also enables
optimizations in the backend, such as tail call optimization.
Updated from r177211.
rdar://12818789
llvm-svn: 177541
For constructors/destructors that return 'this', if there exists a callsite
that returns 'this' and is immediately before the return instruction, make
sure we are using the return value from the callsite.
We don't need to keep 'this' alive through the callsite. It also enables
optimizations in the backend, such as tail call optimization.
rdar://12818789
llvm-svn: 177211
aggregate types in a profoundly wrong way that has to be
worked around in every call site, to getEvaluationKind,
which classifies and distinguishes between all of these
cases.
Also, normalize the API for loading and storing complexes.
I'm working on a larger patch and wanted to pull these
changes out, but it would have been annoying to detangle
them from each other.
llvm-svn: 176656
field to be memcpy'd, rather than ASTContext::getTypeAlign(<Field Type>).
For packed structs the alignment of a field may be less than the alignment of
the field's type.
<rdar://problem/13338585>
llvm-svn: 176512
bitfield. CGBitField::StorageAlignment holds the alignment in chars, but
emitMemcpy had been treating it as if it were held in bits, leading to
underaligned memcpys.
Related to PR15348.
Thanks very much to Chandler for the diagnosis.
llvm-svn: 176163
bitfield related issues.
The original commit broke Takumi's builder. The bug was caused by bitfield sizes
being determined by their underlying type, rather than the field info. A similar
issue with bitfield alignments showed up on closer testing. Both have been fixed
in this patch.
llvm-svn: 175389
move-constructors and move-assignment operators, use memcpy to copy adjacent
POD members.
Previously, classes with one or more Non-POD members would fall back on
element-wise copies for all members, including POD members. This often
generated a lot of IR. Without padding metadata, it wasn't often possible
for the LLVM optimizers to turn the element-wise copies into a memcpy.
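A hedged example of the kind of class this targets:
#include <string>
struct Mixed {
  int a;
  int b;          // a, b and c are adjacent POD members: with this change
  double c;       // the implicit copy constructor can copy them with one
  std::string s;  // memcpy, while 's' still gets an element-wise copy
};
int main() {
  Mixed m1{1, 2, 3.0, "hello"};
  Mixed m2 = m1;  // implicitly-defined copy constructor
  return m2.a == 1 ? 0 : 1;
}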
This code hasn't yet received any serious tuning. I didn't see any serious
regressions on a self-hosted clang build, or any of the nightly tests, but
I think it's important to get this out in the wild to get more testing.
Insights, feedback and comments welcome.
Many thanks to David Blaikie, Richard Smith, and especially John McCall for
their help and feedback on this work.
llvm-svn: 174919
This does limit these typedefs to being sequences, but no current usage
requires them to be contiguous (we could expand this to a more general
iterator pair range concept at some point).
Also, it'd be nice if SmallVector were constructible directly from an ArrayRef
but this is a bit tricky since ArrayRef depends on SmallVectorBaseImpl for the
inverse conversion. (& generalizing over all range-like things, while nice,
would require some nontrivial SFINAE I haven't thought about yet)
llvm-svn: 170482
at whether the *selected* constructor would be trivial rather than considering
whether the array's element type has *any* non-trivial constructors of the
relevant kind.
llvm-svn: 167562
don't explode if the offset we get is zero. This can happen if
you have an empty virtual base class.
While I'm at it, remove an unnecessary block from the IR-generation
of the null-check, mark the eventual GEP as inbounds, and generally
prettify.
llvm-svn: 161100
in the ABI arrangement, and leave a hook behind so that we can easily
tweak CCs on platforms that use different CCs by default for C++
instance methods.
llvm-svn: 159894
In addition, I've made the pointer and reference typedef 'void' rather than T*
just so they can't get misused. I would've omitted them entirely but
std::distance likes them to be there even if it doesn't use them.
This rolls back r155808 and r155869.
Review by Doug Gregor incorporating feedback from Chandler Carruth.
llvm-svn: 158104
filter_decl_iterator had a weird mismatch where both op* and op-> returned T*
making it difficult to generalize this filtering behavior into a reusable
library of any kind.
This change errs on the side of value, making op-> return T* and op* return
T&.
(reviewed by Richard Smith)
llvm-svn: 155808
Note that this transformation has a substantial semantic effect outside of ARC: it gives the converted lambda lifetime semantics similar to a block literal. With ARC, the effect is much less obvious because the lifetime of blocks is already managed.
llvm-svn: 151797
- variant members with nontrivial destructors make the containing class's
destructor deleted
- check for a virtual destructor after checking for overridden methods in the
base class(es)
- check for an inaccessible operator delete for a class with a virtual
destructor.
Do not try to call an anonymous union field's destructor from the destructor of
the containing class.
llvm-svn: 151483
optional argument passed through the variadic ellipsis)
potentially affects how we need to lower it. Propagate
this information down to the various getFunctionInfo(...)
overloads on CodeGenTypes. Furthermore, rename those
overloads to clarify their distinct purposes, and make
sure we're calling the right one in the right place.
This has a nice side-effect of making it easier to construct
a function type, since the 'variadic' bit is no longer
separable.
This shouldn't really change anything for our existing
platforms, with one minor exception --- we should now call
variadic ObjC methods with the ... in the "right place"
(see the test case), which I guess matters for anyone
running GNUStep on MIPS. Mostly it's just a substantial
clean-up.
llvm-svn: 150788
conversion to function pointer. Rather than having IRgen synthesize
the body of this function, we instead introduce a static member
function "__invoke" with the same signature as the lambda's
operator() in the AST. Sema then generates a body for the conversion
to function pointer which simply returns the address of __invoke. This
approach makes it easier to evaluate a call to the conversion function
as a constant, makes the linkage of the __invoke function follow the
normal rules for member functions, and may make life easier down the
road if we ever want to constexpr'ify some of these lambdas.
Note that IR generation is responsible for filling in the body of
__invoke (Sema just adds a dummy body), because the body can't
generally be expressed in C++.
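A small example of the conversion (the __invoke name is internal to the implementation):
int apply(int (*fn)(int), int v) { return fn(v); }
int main() {
  // The conversion below returns the address of the lambda class's static
  // __invoke member, whose body IRGen fills in to forward to operator().
  int (*fp)(int) = [](int x) { return x * 2; };
  return apply(fp, 21) == 42 ? 0 : 1;
}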
Eli, please review!
llvm-svn: 150783
Start handling debug line and scope information better:
Migrate most of the location setting within the larger API in CGDebugInfo and
update a lot of callers.
Remove the existing file/scope change machinery in UpdateLineDirectiveRegion
and replace it with DILexicalBlockFile usage.
Finishes off the rest of rdar://10246360
after fixing a few bugs that were exposed in gdb testsuite testing.
llvm-svn: 141893
Migrate most of the location setting within the larger API in CGDebugInfo and
update a lot of callers.
Remove the existing file/scope change machinery in UpdateLineDirectiveRegion
and replace it with DILexicalBlockFile usage.
Finishes off the rest of rdar://10246360
llvm-svn: 141732
generation when we're dealing with an implicitly-defined copy or move
constructor. And, actually set the implicitly-defined bit for
implicitly-defined constructors and destructors. Should fix self-host.
llvm-svn: 140334
This makes the code duplication of implicit special member handling even worse,
but the cleanup will have to come later. For now, this works.
Follow-up with tests for explicit defaulting and enabling the __has_feature
flag to come.
llvm-svn: 138821
emit call results into potentially aliased slots. This allows us
to properly mark indirect return slots as noalias, at the cost
of requiring an extra memcpy when assigning an aggregate call
result into a l-value. It also brings us into compliance with
the x86-64 ABI.
llvm-svn: 138599
- Emit default-initialization of arrays that were partially initialized
with initializer lists with a loop, rather than emitting the default
initializer N times;
- support destroying VLAs of non-trivial type, although this is not
yet exposed to users; and
- support the partial destruction of arrays initialized with
initializer lists when an initializer throws an exception.
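A hedged example of the cases covered (type name is made up):
struct Elt {
  Elt() {}
  Elt(int) {}
  ~Elt() {}
};
void f() {
  // The first two elements come from the list; the remaining eight are
  // initialized with Elt's default constructor by an emitted loop rather
  // than by repeating the call eight times.  If Elt(2) threw, only the
  // already-constructed first element would be destroyed.
  Elt arr[10] = { Elt(1), Elt(2) };
  (void)arr;
}
int main() { f(); return 0; }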
llvm-svn: 134784
Language-design credit goes to a lot of people, but I particularly want
to single out Blaine Garst and Patrick Beard for their contributions.
Compiler implementation credit goes to Argyrios, Doug, Fariborz, and myself,
in no particular order.
llvm-svn: 133103
optimization. Make sure to require a vtable when trying to get the address
of a VTT, otherwise we would never end up emitting the VTT.
llvm-svn: 131400
that the destructor body is trivial and that all member variables also have either
trivial destructors or trivial destructor bodies, we don't need to initialize the
vtable pointers, since no virtual member functions will be called in the destructor.
Fixes PR9181.
llvm-svn: 131368
the body of a delegating constructor call.
This means that the delegating constructor implementation should be
complete and correct, though there are some rough edges (diagnostic
quality with the cycle detection and using a deleted destructor).
llvm-svn: 130803
As far as I know, this implementation is complete but might be missing a
few optimizations. Exceptions and virtual bases are handled correctly.
Because I'm an optimist, the web page has appropriately been updated. If
I'm wrong, feel free to downgrade its support categories.
llvm-svn: 130642
make sure to mark the destructor. This normally isn't required,
because the destructor should have been marked as part of the
declaration of the local, but it's necessary when the variable
is a parameter because it's the call sites that are responsible
for those destructors.
llvm-svn: 130372
simplify the logic of initializing function parameters so that we don't need
both a variable declaration and a type in FunctionArgList. This also means
that we need to propagate the CGFunctionInfo down in a lot of places rather
than recalculating it from the FAL. There's more we can do to eliminate
redundancy here, and I've left FIXMEs behind to do it.
llvm-svn: 127314
struct X {
  X() : au_i1(123) {}
  union {
    int au_i1;
    float au_f1;
  };
};
clang will now deal with au_i1 explicitly as an IndirectFieldDecl.
llvm-svn: 120900
the bases are completely initialized. This won't work --- base
initializer expressions can rely on the vtables having been set up.
Check for uses of 'this' in the initializers and force a vtable
initialization if found.
This might not be good enough; we might need to extend this to handle
the possibility of arbitrary code finding an external reference to this
(not yet completely-constructed!) object and accessing through it,
in which case we'll probably find ourselves doing a lot more unnecessary
stores.
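A hedged example of the situation being handled:
struct Base {
  Base(int) {}
};
struct Derived : Base {
  // 'this' is used in the base initializer, so the vtable pointers must be
  // set up before the base initializers run, not only after them.
  Derived() : Base(this->seed()) {}
  virtual int seed() { return 1; }
};
int main() { Derived d; return 0; }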
llvm-svn: 114153
slot. The easiest way to do that was to bundle up the information
we care about for aggregate slots into a new structure which demands
that its creators at least consider the question.
I could probably be convinced that the ObjC 'needs GC' bit should
be rolled into this structure.
Implement generalized copy elision. The main obstacle here is that
IR-generation must be much more careful about making sure that exactly
llvm-svn: 113962
This takes some trickery since CastExpr has subclasses (and indeed,
is abstract).
Also, smoosh the CastKind into the bitfield from Expr.
Drops two words of storage from Expr in the common case of expressions
which don't need inheritance paths. Avoids a separate allocation and
another word of overhead in cases needing inheritance paths. Also has
the advantage of not leaking memory, since destructors for AST nodes are
never run.
llvm-svn: 110507
initializer of (). Make sure to use a simple memset() when we can, or
fall back to generating a loop when a simple memset will not
suffice. Fixes <rdar://problem/8212208>, a regression due to my work
in r107857.
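For example:
struct Pod { int x; int y; };
int main() {
  // The () initializer zero-initializes every element; for a trivial
  // element type this can be a single memset, with a loop as the fallback
  // when a memset would not suffice.
  Pod *p = new Pod[16]();
  int r = p[0].x;  // 0
  delete[] p;
  return r;
}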
llvm-svn: 108977
mostly in avoiding unnecessary work at compile time but also in producing more
sensible block orderings.
Move the destructor cleanups for local variables over to use lazy cleanups.
Eventually all cleanups will do this; for now we have some awkward code
duplication.
Tell IR generation just to never produce landing pads in -fno-exceptions.
This is a much more comprehensive solution to a problem which previously was
half-solved by checks in most cleanup-generation spots.
llvm-svn: 108270
self-host. Hopefully these results hold up on different platforms.
I tried to keep the GNU ObjC runtime happy, but it's hard for me to test.
Reimplement how clang generates IR for exceptions. Instead of creating new
invoke destinations which sequentially chain to the previous destination,
push a more semantic representation of *why* we need the cleanup/catch/filter
behavior, then collect that information into a single landing pad upon request.
Also reorganizes how normal cleanups (i.e. cleanups triggered by non-exceptional
control flow) are generated, since it's actually fairly closely tied in with
the former. Remove the need to track which cleanup scope a block is associated
with.
Document a lot of previously poorly-understood (by me, at least) behavior.
The new framework implements the Horrible Hack (tm), which requires every
landing pad to have a catch-all so that inlining will work. Clang no longer
requires the Horrible Hack just to make exceptions flow correctly within
a function, however. The HH is an unfortunate requirement of LLVM's EH IR.
llvm-svn: 107631
implicitly-generated copy constructor. Previously, Sema would perform
some checking and instantiation to determine which copy constructors,
etc., would be called, then CodeGen would attempt to figure out which
copy constructor to call... but would get it wrong, or poke at an
uninstantiated default argument, or fail in other ways.
The new scheme is similar to what we now do for the implicit
copy-assignment operator, where Sema performs all of the semantic
analysis and builds specific ASTs that look similar to the ASTs we'd
get from explicitly writing the copy constructor, so that CodeGen need
only do a direct translation.
However, it's not quite that simple because one cannot explicit write
elementwise copy-construction of an array. So, I've extended
CXXBaseOrMemberInitializer to contain a list of indexing variables
used to copy-construct the elements. For example, if we have:
struct A { A(const A&); };
struct B {
  A array[2][3];
};
then we generate an implicit copy constructor for B that looks
something like this:
  B::B(const B &other) : array[i0][i1](other.array[i0][i1]) { }
CodeGen will loop over the invented variables i0 and i1 to visit all
elements in the array, so that each element in the destination array
will be copy-constructed from the corresponding element in the source
array. Of course, if we're dealing with arrays of scalars or class
types with trivial copy constructors, we just generate a
memcpy rather than a loop.
Fixes PR6928, PR5989, and PR6887. Boost.Regex now compiles and passes
all of its regression tests.
Conspicuously missing from this patch is handling for the exceptional
case, where we need to destruct those objects that we have
constructed. I'll address that case separately.
llvm-svn: 103079
not just the inner expression. This is important if the expression has any
temporaries. Fixes PR 7028.
Basically a symptom of really tragic method names.
llvm-svn: 102998
assignment operators.
Previously, Sema provided type-checking and template instantiation for
copy assignment operators, then CodeGen would synthesize the actual
body of the copy constructor. Unfortunately, the two were not in sync,
and CodeGen might pick a copy-assignment operator that is different
from what Sema chose, leading to strange failures, e.g., link-time
failures when CodeGen called a copy-assignment operator that was not
instantiated, run-time failures when copy-assignment operators were
overloaded for const/non-const references and the wrong one was
picked, and run-time failures when by-value copy-assignment operators
did not have their arguments properly copy-initialized.
This implementation synthesizes the implicitly-defined copy assignment
operator bodies in Sema, so that the resulting ASTs encode exactly
what CodeGen needs to do; there is no longer any special code in
CodeGen to synthesize copy-assignment operators. The synthesis of the
body is relatively simple, and we generate one of four different
kinds of copy statements for each base or member:
- For a class subobject, call the appropriate copy-assignment
operator, after overload resolution has determined what that is.
- For an array of scalar types or an array of class types that have
trivial copy assignment operators, construct a call to
__builtin_memcpy.
- For an array of class types with non-trivial copy assignment
operators, synthesize a (possibly nested!) for loop whose inner
statement calls the copy-assignment operator.
- For a scalar type, use built-in assignment.
This patch fixes at least a few test cases in Boost.Spirit that were
failing because CodeGen picked the wrong copy-assignment operator
(leading to link-time failures), and I suspect a number of undiagnosed
problems will also go away with this change.
Some of the diagnostics we had previously have gotten worse with this
change, since we're going through generic code for our
type-checking. I will improve this in a subsequent patch.
llvm-svn: 102853