Remove custom handling of array copies in lambda by-value array capture and
copy constructors of classes with array members, instead using
ArrayInitLoopExpr to represent the initialization loop.
This exposed a bug in the static analyzer where it was unable to differentiate
between zero-initialized and unknown array values, which has also been fixed
here.
llvm-svn: 289618
Add two new AST nodes to represent initialization of an array in terms of
initialization of each array element:
* ArrayInitLoopExpr is a prvalue of array type with two subexpressions:
a common expression (an OpaqueValueExpr) that represents the up-front
computation of the source of the initialization, and a subexpression
representing a per-element initializer
* ArrayInitIndexExpr is a prvalue of type size_t representing the current
position in the loop
This will be used to replace the creation of explicit index variables in lambda
capture of arrays and copy/move construction of classes with array elements,
and also C++17 structured bindings of arrays by value (which inexplicably allow
copying an array by value, unlike all of C++'s other array declarations).
No uses of these nodes are introduced by this change, however.
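For a hypothetical illustration of where these nodes will appear once the
follow-up changes land, consider a by-value lambda capture of an array:
int arr[3] = {1, 2, 3};
auto f = [arr] { return arr[0]; }; // copying 'arr' into the closure would be
                                   // modeled roughly as an ArrayInitLoopExpr
                                   // whose common expression is
                                   // OpaqueValueExpr(arr) and whose element
                                   // init is arr[ArrayInitIndexExpr]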
llvm-svn: 289413
In the amdgcn target, null pointers in the global, constant, and generic address spaces have the value 0, but null pointers in the private and local address spaces have the value -1. Currently LLVM assumes all null pointers have the value 0, which results in incorrectly translated IR. To work around this issue, instead of emitting null pointers in the local and private address spaces, a null pointer in the generic address space is emitted and cast to the local or private address space.
Tentative definitions of global variables with a non-zero initializer are given weak linkage instead of common linkage, since common linkage requires a zero initializer and does not have an explicit section to hold the non-zero value.
Virtual member functions getNullPointer and performAddrSpaceCast are added to TargetCodeGenInfo; by default they return ConstantPointerNull and emit an addrspacecast instruction, respectively. A virtual member function getNullPointerValue is added to TargetInfo, which by default returns 0. Each target can override these virtual functions to supply the target-specific null pointer and null pointer value for a specific address space, and to perform target-specific translation of addrspacecast.
Wrapper functions are added: getNullPointer to CodeGenModule and getTargetNullPointerValue to ASTContext, to facilitate getting the target-specific null pointers and their values.
This change has no effect on targets other than amdgcn; other targets can add support for non-zero null pointers in a similar way.
This change only provides support for non-zero null pointers in C and OpenCL. Support for other languages will be added incrementally.
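A rough sketch of the lowering described above (illustrative only; the exact
address-space numbers and IR spelling are target details):
// OpenCL C, compiled for amdgcn:
kernel void k(void) {
  local int *p = 0; // null in the local address space is not the 0 bit pattern
  // emitted roughly as a generic null cast to the local address space:
  //   addrspacecast (i32 addrspace(4)* null to i32 addrspace(3)*)
}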
Differential Revision: https://reviews.llvm.org/D26196
llvm-svn: 289252
When an object of class type is initialized from a prvalue of the same type
(ignoring cv qualifications), use the prvalue to initialize the object directly
instead of inserting a redundant elidable call to a copy constructor.
llvm-svn: 288866
Currently Clang uses i32 to represent sampler_t, which has been a source of issues for some backends, because in some backends sampler_t cannot be represented by i32. They have to rely on kernel argument metadata and use IPA to find the sampler arguments and global variables and transform them to the target-specific sampler type.
This patch uses the opaque pointer type opencl.sampler_t* for sampler_t. For each use of a file-scope sampler variable, and for each initialization of a function-scope sampler variable, it generates a call to __translate_sampler_initializer.
Each builtin library can implement its own __translate_sampler_initializer(). Since the real sampler type tends to be architecture-dependent, allowing it to be initialized by a library function simplifies backend design. A typical implementation of __translate_sampler_initializer could be a table lookup of real sampler literal values. Since its argument is always a literal, the returned pointer is known at compile time and is easily optimized down to literal values placed directly into image read instructions.
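A sketch of the resulting code (illustrative; the exact IR spelling may
differ):
// OpenCL C:
constant sampler_t s = CLK_NORMALIZED_COORDS_FALSE | CLK_FILTER_NEAREST;
kernel void k(read_only image2d_t img, global float4 *out) {
  *out = read_imagef(img, s, (int2)(0, 0));
  // each use of 's' becomes roughly:
  //   call %opencl.sampler_t addrspace(2)*
  //     @__translate_sampler_initializer(i32 <literal value of s>)
}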
This patch is partially based on Alexey Sotkin's work in Khronos Clang (3d4eec6162).
Differential Revision: https://reviews.llvm.org/D21567
llvm-svn: 277024
Replace inheriting constructors implementation with new approach, voted into
C++ last year as a DR against C++11.
Instead of synthesizing a set of derived class constructors for each inherited
base class constructor, we make the constructors of the base class visible to
constructor lookup in the derived class, using the normal rules for
using-declarations.
For constructors, UsingShadowDecl now has a ConstructorUsingShadowDecl derived
class that tracks the requisite additional information. We create shadow
constructors (not found by name lookup) in the derived class to model the
actual initialization, and have a new expression node,
CXXInheritedCtorInitExpr, to model the initialization of a base class from such
a constructor. (This initialization is special because it performs real perfect
forwarding of arguments.)
In cases where argument forwarding is not possible (for inalloca calls,
variadic calls, and calls with callee parameter cleanup), the shadow inheriting
constructor is not emitted and instead we directly emit the initialization code
into the caller of the inherited constructor.
Note that this new model is not perfectly compatible with the old model in some
corner cases. In particular:
* if B inherits a private constructor from A, and C uses that constructor to
construct a B, then we previously required that A befriend B and B
befriend C, but the new rules require A to befriend C directly, and
* if a derived class has its own constructors (and so its implicit default
constructor is suppressed), it may still inherit a default constructor from
a base class
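A small example of the second corner case (hypothetical):
struct A { A() = default; A(int); };
struct B : A {
  using A::A;  // base constructors made visible to lookup in B
  B(void *);   // suppresses B's implicit default constructor...
};
B b;     // ...yet B still inherits A::A() under the new rules
B c(42); // uses A::A(int); modeled with CXXInheritedCtorInitExpr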
llvm-svn: 274049
Fixes PR11517 for SPARC.
On most targets, clang lowers va_arg itself, eschewing the use of the
llvm vaarg instruction. This is necessary (at least for now) as the type
argument to the vaarg instruction cannot represent all the ABI
information that is needed to support complex calling conventions.
However, on targets with a simpler varargs ABI, the LLVM instruction
can work just fine, and clang can simply lower to it. Unfortunately,
even on such targets, vaarg with a struct argument would fail, because
the default lowering to vaarg was naive: it didn't take into account the
ABI attribute computed by classifyArgumentType. In particular, for the
DefaultABIInfo, structs are supposed to be passed indirectly and so
llvm's vaarg instruction should be emitted with a pointer argument.
Now, vaarg instruction emission is able to use computed ABIArgInfo for
the provided argument type, which allows the default ABI support to work
for structs too.
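A minimal case that used to be miscompiled on such targets (illustrative):
#include <stdarg.h>
struct S { int a, b, c; };
int first(int n, ...) {
  va_list ap;
  va_start(ap, n);
  struct S s = va_arg(ap, struct S); /* struct passed indirectly: now emitted
                                        as an LLVM va_arg of pointer type and
                                        loaded through the result */
  va_end(ap);
  return s.a;
}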
I haven't touched the EmitVAArg implementation for PPC32_SVR4 or XCore,
although I believe both are now redundant, and could be switched over to
use the default implementation as well.
Differential Revision: http://reviews.llvm.org/D16154
llvm-svn: 261717
In {CG,}ExprConstant.cpp, we weren't treating vector splats properly;
this patch corrects their handling.
Additionally, this patch adds a new cast kind which allows a bool->int
cast to result in -1 or 0, instead of 1 or 0 (for true and false,
respectively), so we can sanely model OpenCL bool->int casts in the AST.
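For instance (an OpenCL C sketch of the semantics involved):
int4 cmp(int4 a, int4 b) {
  return a < b; // each true element is -1, not 1; modeling this in the AST
}               // needs a bool->int cast that can produce -1/0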
Differential Revision: http://reviews.llvm.org/D14877
llvm-svn: 257559
When a struct's size is not a power of 2, the corresponding _Atomic() type is
promoted to the nearest power of 2. We already correctly handled normal C++ expressions of
this form, but direct calls to the __c11_atomic_whatever builtins ended up
performing dodgy operations on the smaller non-atomic types (e.g. memcpy too
much). Later optimisations removed this as undefined behaviour.
This patch converts EmitAtomicExpr to allocate its temporaries at the full
atomic width, sidestepping the issue.
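A case of the kind this fixes (illustrative):
struct S { char c[3]; };    /* size 3; the _Atomic version is padded to 4 */
_Atomic struct S a;
void store(struct S s) {
  __c11_atomic_store(&a, s, __ATOMIC_SEQ_CST); /* temporaries now allocated
                                                  at the full atomic width,
                                                  so no over-read of 's' */
}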
llvm-svn: 252507
Currently, debug info is not emitted for types that are used only in explicit casts. This regression was introduced by a patch for better alignment handling. This patch fixes the bug.
Differential Revision: http://reviews.llvm.org/D13582
llvm-svn: 250795
Summary:
This change adds support for `__builtin_ms_va_list`, a GCC extension for
variadic `ms_abi` functions. The existing `__builtin_va_list` support is
inadequate for this because `va_list` is defined differently in the Win64
ABI vs. the System V/AMD64 ABI.
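For example (a sketch using the builtins this adds):
int __attribute__((ms_abi)) sum(int n, ...) {
  __builtin_ms_va_list ap;
  __builtin_ms_va_start(ap, n);
  int total = 0;
  for (int i = 0; i < n; ++i)
    total += __builtin_va_arg(ap, int); /* reads the Win64-style va_list
                                           even when targeting System V */
  __builtin_ms_va_end(ap);
  return total;
}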
Depends on D1622.
Reviewers: rsmith, rnk, rjmccall
CC: cfe-commits
Differential Revision: http://reviews.llvm.org/D1623
llvm-svn: 247941
Introduce an Address type to bundle a pointer value with an
alignment. Introduce APIs on CGBuilderTy to work with Address
values. Change core APIs on CGF/CGM to traffic in Address where
appropriate. Require alignments to be non-zero. Update a ton
of code to compute and propagate alignment information.
As part of this, I've promoted CGBuiltin's EmitPointerWithAlignment
helper function to CGF and made use of it in a number of places in
the expression emitter.
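A minimal sketch of the new style (assuming a CodeGenFunction &CGF and an
Expr *E in scope):
Address Addr = CGF.EmitPointerWithAlignment(E); // pointer + known alignment
llvm::Value *V = CGF.Builder.CreateLoad(Addr);  // load uses Addr's alignment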
The end result is that we should now be significantly more correct
when performing operations on objects that are locally known to
be under-aligned. Since alignment is not reliably tracked in the
type system, there are inherent limits to this, but at least we
are no longer confused by standard operations like derived-to-base
conversions and array-to-pointer decay. I've also fixed a large
number of bugs where we were applying the complete-object alignment
to a pointer instead of the non-virtual alignment, although most of
these were hidden by the very conservative approach we took with
member alignment.
Also, because IRGen now reliably asserts on zero alignments, we
should no longer be subject to an absurd but frustrating recurring
bug where an incomplete type would report a zero alignment and then
we'd naively do an alignmentAtOffset on it and emit code using an
alignment equal to the largest power-of-two factor of the offset.
We should also now be emitting much more aggressive alignment
attributes in the presence of over-alignment. In particular,
field access now uses alignmentAtOffset instead of min.
Several times in this patch, I had to change the existing
code-generation pattern in order to more effectively use
the Address APIs. For the most part, this seems to be a strict
improvement, like doing pointer arithmetic with GEPs instead of
ptrtoint. That said, I've tried very hard to not change semantics,
but it is likely that I've failed in a few places, for which I
apologize.
ABIArgInfo now always carries the assumed alignment of indirect and
indirect byval arguments. In order to cut down on what was already
a dauntingly large patch, I changed the code to never set align
attributes in the IR on non-byval indirect arguments. That is,
we still generate code which assumes that indirect arguments have
the given alignment, but we don't express this information to the
backend except where it's semantically required (i.e. on byvals).
This is likely a minor regression for those targets that did provide
this information, but it'll be trivial to add it back in a later
patch.
I partially punted on applying this work to CGBuiltin. Please
do not add more uses of the CreateDefaultAligned{Load,Store}
APIs; they will be going away eventually.
llvm-svn: 246985
This fixes an issue raised in D12412, where we generated invalid IR.
Thanks to Vedant Kumar for coming up with the initial workaround.
Differential Revision: http://reviews.llvm.org/D12412
llvm-svn: 246880
Based on previous discussion on the mailing list, clang currently lacks support
for C99 partial re-initialization behavior:
Reference: http://lists.cs.uiuc.edu/pipermail/cfe-dev/2013-April/029188.html
Reference: http://www.open-std.org/jtc1/sc22/wg14/www/docs/dr_253.htm
This patch attempts to fix this problem.
Given the following code snippet,
struct P1 { char x[6]; };
struct LP1 { struct P1 p1; };
struct LP1 l = { .p1 = { "foo" }, .p1.x[2] = 'x' };
// this example is adapted from the example for "struct fred x[]" in DR-253;
// currently clang produces in l: { "\0\0x" },
// whereas gcc 4.8 produces { "fox" };
// with this fix, clang will also produce: { "fox" };
Differential Revision: http://reviews.llvm.org/D5789
llvm-svn: 239446
This patch fixes codegen for aggregate copying of VLAs. Currently, CodeGenFunction::EmitAggregateCopy() does not support copying of VLAs. The patch checks whether the size of the type is 0 and, if so, whether the type is actually a variable-length array; it then calculates the total number of elements in the array and the total size of the array in bytes. If copy assignment is requested, the size is calculated as:
<total number of elements in array> * aligned_sizeof(ElementType)
If simple copying is requested, the size is calculated as:
<total number of elements in array> * aligned_sizeof(ElementType) - aligned_sizeof(ElementType) + sizeof(ElementType).
memcpy() is used with this calculated size of the VLA.
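A worked example with hypothetical numbers: for n elements of a type with
sizeof 12 and alignment 8, aligned_sizeof is 16, so a copy assignment copies
n * 16 bytes, while a simple copy copies n * 16 - 16 + 12 = n * 16 - 4 bytes,
stopping short of the final element's tail padding.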
Differential Revision: http://reviews.llvm.org/D9851
llvm-svn: 237768
The RegionCounter type does a lot of legwork, but most of it is only
meaningful within the implementation of CodeGenPGO. The uses elsewhere
in CodeGen generally just want to increment or read counters, so do
that directly.
llvm-svn: 235664
The /volatile:ms semantics turn volatile loads and stores into atomic
acquire and release operations. This distinction is important because
volatile memory operations do not form a happens-before relationship
with non-atomic memory. This means that a volatile store is not
sufficient for implementing a mutex unlock routine.
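Illustrative of the semantics (with /volatile:ms in effect):
volatile int unlocked;
void release_lock(void) {
  unlocked = 1; // emitted as an atomic store, release ordering; a plain
}               // volatile store would not order prior non-atomic writes
int check(void) {
  return unlocked; // emitted as an atomic load, acquire ordering
}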
Differential Revision: http://reviews.llvm.org/D7580
llvm-svn: 229082
Matches the existing code for scalar default arguments. Complex default
arguments probably need the same handling too (test/fix to that coming
next).
llvm-svn: 228588
This causes things like assignment to refer to the '=' rather than the
LHS when attributing the store instruction, for example.
There were essentially 3 options for this:
* The beginning of an expression (this was the behavior prior to this
commit). This meant that stepping through subexpressions would bounce
around from subexpressions back to the start of the outer expression,
etc. (e.g.: x + y + z would go x, y, x, z, x (the repeated 'x's would be
where the actual addition occurred)).
* The end of an expression. This seems to be what GCC does /mostly/, and
certainly this for function calls. This has the advantage that
progress is always 'forwards' (never jumping backwards - except for
independent subexpressions if they're evaluated in interesting orders,
etc). "x + y + z" would go "x y z" with the additions occurring at y
and z after the respective loads.
The problem with this is that the user would still have to think
fairly hard about precedence to realize which subexpression is being
evaluated or which operator overload is being called in, say, an asan
backtrace.
* The preferred location or 'exprloc'. In this case you get sort of what
you'd expect, though it's a bit confusing in its own way due to going
'backwards'. In this case the locations would be: "x y + z +" in
lovely postfix arithmetic order. But this does mean that if the op+
were an operator overload, say, and in a backtrace, the backtrace will
point to the exact '+' that's being called, not to the end of one of
its operands.
(actually the operator overload case doesn't work yet for other reasons,
but that's being fixed - but this at least gets scalar/complex
assignments and other plain operators right)
llvm-svn: 227027
Support initializing an _Atomic(T) from an object of C++ class type T
or a class derived from T. We already supported this when initializing
_Atomic(T) from T for most (and maybe all) other reasonable values of T.
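A small illustration:
struct T {}; struct U : T {};
_Atomic(T) a = T(); // already worked
_Atomic(T) b = U(); // now also allowed: initialization from a derived class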
llvm-svn: 214390
Previously, we made one traversal of the AST prior to codegen to assign
counters to the ASTs and then propagated the count values during codegen. This
patch now adds a separate AST traversal prior to codegen for the
-fprofile-instr-use option to propagate the count values. The counts are then
saved in a map from which they can be retrieved during codegen.
This new approach has several advantages:
1. It gets rid of a lot of extra PGO-related code that had previously been
added to codegen.
2. It fixes a serious bug. My original implementation (which was mailed to the
list but never committed) used 3 counters for every loop. Justin improved it to
move 2 of those counters into the less-frequently executed breaks and continues,
but that turned out to produce wrong count values in some cases. The solution
requires visiting a loop body before the condition so that the count for the
condition properly includes the break and continue counts. Changing codegen to
visit a loop body first would be a fairly invasive change, but with a separate
AST traversal, it is easy to control the order of traversal. I've added a
testcase (provided by Justin) to make sure this works correctly.
3. It reduces the instrumentation overhead, cutting the number of counters for
a loop from 3 to 1. We no longer need dedicated counters for breaks and
continues, since we can just use the propagated count values when visiting
breaks and continues.
To make this work, I needed to make a change to the way we count case
statements, going back to my original approach of not including the fall-through
in the counter values. This was necessary because there isn't always an AST node
that can be used to record the fall-through count. Now case statements are
handled the same as default statements, with the fall-through paths branching
over the counter increments. While I was at it, I also went back to using this
approach for do-loops -- omitting the fall-through count into the loop body
simplifies some of the calculations and makes them behave the same as other
loops. Whenever we start using this instrumentation for coverage, we'll need
to add the fall-through counts into the counter values.
llvm-svn: 201528
PNaCl and Emscripten can both handle va_arg IR instructions with
struct type.
Also add a test to cover generating a va_arg IR instruction from
va_arg in C on le32 (as already handled by VisitVAArgExpr() in
CGExprScalar.cpp), which was not covered by a test before.
(This fixes https://code.google.com/p/nativeclient/issues/detail?id=2381)
Differential Revision: http://llvm-reviews.chandlerc.com/D2539
llvm-svn: 199830
adjustFallThroughCount isn't a good name, and the documentation was
even worse. This commit attempts to clarify what it's for and when to
use it.
llvm-svn: 199139
With the introduction of explicit address space casts into LLVM, there's
a need to provide a new cast kind the front-end can create for C/OpenCL/CUDA
and code to produce address space casts from those kinds when appropriate.
Patch by Michele Scandale!
llvm-svn: 197036
This is the same way GenericSelectionExpr works, and it's generally a
more consistent approach.
A large part of this patch is devoted to caching the value of the condition
of a ChooseExpr; it's needed to avoid threading an ASTContext into
IgnoreParens().
Fixes <rdar://problem/14438917>.
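For reference, a sketch of the construct in question:
// __builtin_choose_expr picks one arm at translation time; its condition
// value is now cached on the ChooseExpr node
int bits = __builtin_choose_expr(sizeof(void *) == 8, 64, 32);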
llvm-svn: 186738
Introduce CXXStdInitializerListExpr node, representing the implicit
construction of a std::initializer_list<T> object from its underlying array.
The AST representation of such an expression goes from an InitListExpr with a
flag set, to a CXXStdInitializerListExpr containing a MaterializeTemporaryExpr
containing an InitListExpr (possibly wrapped in a CXXBindTemporaryExpr).
This more detailed representation has several advantages, the most important of
which is that the new MaterializeTemporaryExpr allows us to directly model
lifetime extension of the underlying temporary array. Using that, this patch
*drastically* simplifies the IR generation of this construct, provides IR
generation support for nested global initializer_list objects, fixes several
bugs where the destructors for the underlying array would accidentally not get
invoked, and provides constant expression evaluation support for
std::initializer_list objects.
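Roughly, the AST for a simple case now looks like this (sketch):
// std::initializer_list<int> il = {1, 2, 3};
//
// CXXStdInitializerListExpr
// `-MaterializeTemporaryExpr      <- models lifetime extension of the array
//   `-InitListExpr {1, 2, 3}      (possibly under a CXXBindTemporaryExpr)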
llvm-svn: 183872
Add a CXXDefaultInitExpr, analogous to CXXDefaultArgExpr, and use it both in
CXXCtorInitializers and in InitListExprs to represent a default initializer.
There's an additional complication here: because the default initializer can
refer to the initialized object via its 'this' pointer, we need to make sure
that 'this' points to the right thing within the evaluation.
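A small example of where the node appears (sketch):
struct S {
  int x = 42;
  int *p = &x; // default initializer uses 'this' implicitly (&this->x)
  S() {}       // the implicit use of these initializers is represented
};             // by CXXDefaultInitExpr in the CXXCtorInitializers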
llvm-svn: 179958
Change hasAggregateLLVMType, which conflates complex and
aggregate types in a profoundly wrong way that has to be
worked around in every call site, to getEvaluationKind,
which classifies and distinguishes between all of these
cases.
Also, normalize the API for loading and storing complexes.
I'm working on a larger patch and wanted to pull these
changes out, but it would have been annoying to detangle
them from each other.
llvm-svn: 176656
[PR9027] volatile struct bug: member is not loaded at -O.
This is caused by the last flag passed to @llvm.memcpy being false,
not honoring that the aggregate has at least one 'volatile' data member
(even though the aggregate itself has not been qualified as 'volatile').
As a result, the optimizer optimizes away the memcpy altogether.
Patch reviewed by John McCall (I still need to fix up a test though).
llvm-svn: 173535
Sort Clang's #include lines and fix the broken headers this uncovered.
This required manually correcting all of the incorrect main-module
headers I could find, and running the new llvm/utils/sort_includes.py
script over the files.
I also manually added quite a few missing headers that were uncovered by
shuffling the order or moving headers up to be main-module-headers.
llvm-svn: 169237
Separate out the notions of 'has a trivial special member' and 'has a
non-trivial special member', and use them appropriately. These are not
opposites of one another (there might be no special member, or in C++11 there
might be a trivial one and a non-trivial one). The CXXRecordDecl predicates
continue to produce incorrect results, but do so in fewer cases now, and
they document the cases where they might be wrong.
No functionality changes are intended here (they will come when the predicates
start producing the right answers...).
llvm-svn: 168119
This fixes a regression from r162254, the optimizer has problems reasoning
about the smaller memcpy as it's often not safe to widen a store but making it
smaller is.
llvm-svn: 164917
Factor the emission of the trap BB out of the individual checks and into a common function, to prepare
for making this code call into a runtime library. Rename the existing EmitCheck
to EmitTypeCheck to clarify it and to move it out of the way of the new
EmitCheck.
llvm-svn: 163451
(__builtin_* etc.) so that it isn't possible to take their address.
Specifically, introduce a new type to represent a reference to a builtin
function, and a new cast kind to convert it to a function pointer in the
operand of a call. Fixes PR13195.
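Illustrative of the new behavior:
void (*p)(void) = __builtin_trap;  /* rejected: a builtin has no address */
void f(void) { __builtin_trap(); } /* fine: the builtin reference is
                                      converted by the new cast kind only
                                      as the callee of a call */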
llvm-svn: 162962
* when checking that a pointer or reference refers to appropriate storage for a type, also check the alignment and perform a null check
* check that references are bound to appropriate storage
* check that 'this' has appropriate storage in member accesses and member function calls
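For example (illustrative):
struct T { int n; void f(); };
char buf[sizeof(T) + 1];
T *p = reinterpret_cast<T *>(buf + 1); // likely misaligned storage
int k = p->n;  // member access: null check + alignment check on 'p'
p->f();        // member call: the same checks applied to 'this'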
llvm-svn: 162523
to overwrite objects that might have been allocated into the type's
tail padding. This patch is missing some potential optimizations where
the destination is provably a complete object, but it's necessary for
correctness.
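An illustrative case:
struct A { virtual ~A(); int i; char c; }; // tail padding after 'c'
struct B : A { char d; };                  // 'd' may live in A's tail padding
void assign(A &dst, const A &src) {
  dst = src; // the copy must cover only A's data size, not sizeof(A),
}            // or it could clobber B::d when 'dst' is a base subobject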
Patch by Jonathan Sauer.
llvm-svn: 162254
if we want to ignore a result, the Dest will be null. Otherwise,
we must copy into it. This means we need to ensure a slot when
loading from a volatile l-value.
With all that in place, fix a bug with chained assignments into
__block variables of aggregate type where we were losing insight into
the actual source of the value during the second assignment.
llvm-svn: 159630
In addition, I've made the pointer and reference typedef 'void' rather than T*
just so they can't get misused. I would've omitted them entirely but
std::distance likes them to be there even if it doesn't use them.
This rolls back r155808 and r155869.
Review by Doug Gregor incorporating feedback from Chandler Carruth.
llvm-svn: 158104
filter_decl_iterator had a weird mismatch where both op* and op-> returned T*
making it difficult to generalize this filtering behavior into a reusable
library of any kind.
This change errs on the side of value, making op-> return T* and op* return
T&.
(reviewed by Richard Smith)
llvm-svn: 155808
initialize an array of unsigned char. Outside C++11 mode, this bug was benign,
and just resulted in us emitting a constant which was double the required
length, padded with 0s. In C++11, it resulted in us generating an array whose
first element was something like i8 ptrtoint ([n x i8]* @str to i8).
llvm-svn: 154756
Add a bit to DeclRefExpr to track whether the referenced declaration
comes from an enclosing
local context. I'm amenable to suggestions about the exact meaning
of this bit.
llvm-svn: 152491
Make sure that we correctly emit loads of BlockDeclRefExprs even when they
don't qualify as ODR-uses. I think I'm adequately convinced
that BlockDeclRefExpr can die.
llvm-svn: 152479
block pointer that returns a block literal which captures (by copy)
the lambda closure itself. Some aspects of the block literal are left
unspecified, namely the capture variable (which doesn't actually
exist) and the body (which will be filled in by IRgen because it can't
be written as an AST).
Because we're switching to this model, this patch also eliminates
tracking the copy-initialization expression for the block capture of
the conversion function, since that information is now embedded in the
synthesized block literal. -1 side tables FTW.
llvm-svn: 151131
We now generate temporary arrays to back std::initializer_list objects
initialized with braces. The initializer_list is then made to point at
the array. We support both ptr+size and start+end forms, although
the latter is untested.
Array lifetime is correct for temporary std::initializer_lists (e.g.
call arguments) and local variables. It is untested for new expressions
and member initializers.
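A sketch of the ptr+size form (illustrative):
#include <initializer_list>
std::initializer_list<int> il = {1, 2, 3};
// emitted roughly as:
//   int __backing[3] = {1, 2, 3}; // temporary array, lifetime follows 'il'
//   il = { __backing, 3 };        // start+end form: { __backing, __backing + 3 }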
Things left to do:
Massively increase the amount of testing. I need to write tests for
start+end init lists, temporary objects created as a side effect of
initializing init list objects, new expressions, member initialization,
creation of temporary objects (e.g. std::vector) for initializer lists,
and probably more.
Get lifetime "right" for member initializers and new expressions. Not
that either are very useful.
Implement list-initialization of array new expressions.
llvm-svn: 150803
is general goodness because representations of member pointers are
not always equivalent across member pointer types on all ABIs
(even though this isn't really standard-endorsed).
Take advantage of the new information to teach IR-generation how
to do these reinterprets in constant initializers. Make sure this
works when intermingled with hierarchy conversions (although
this is not part of our motivating use case). Doing this in the
constant-evaluator would probably have been better, but that would
require a *lot* of extra structure in the representation of
constant member pointers: you'd really have to track an arbitrary
chain of hierarchy conversions and reinterpretations in order to
get this right. Ultimately, this seems less complex. I also
wasn't quite sure how to extend the constant evaluator to handle
foldings that we don't actually want to treat as extended
constant expressions.
llvm-svn: 150551
- Add atomic-to/from-nonatomic cast types
- Emit atomic operations for arithmetic on atomic types
- Emit non-atomic stores for initialisation of atomic types, but atomic stores and loads for every other store / load
- Add a __atomic_init() intrinsic which does a non-atomic store to an _Atomic() type. This is needed for the corresponding C11 stdatomic.h function.
- Enables the relevant __has_feature() checks. The feature isn't 100% complete yet, but it's done enough that we want people testing it.
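For instance (a sketch of the behavior described above):
_Atomic(int) a;
void setup(void) {
  __atomic_init(&a, 1); /* non-atomic store, matching C11 atomic_init() */
  a = a + 2;            /* atomic load, then atomic store */
}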
Still to do:
- Make the arithmetic operations on atomic types (e.g. _Atomic(int) foo = 1; foo++;) use the correct LLVM intrinsic if one exists, not a loop with a cmpxchg.
- Add a signal fence builtin
- Properly set the fenv state in atomic operations on floating point values
- Correctly handle things like _Atomic(_Complex double) which are too large for an atomic cmpxchg on some platforms (this requires working out what 'correctly' means in this context)
- Fix the many remaining corner cases
llvm-svn: 148242
The test includes a FIXME for a related case involving calls; it's a bit more complicated to fix because the RValue class doesn't keep track of alignment.
<rdar://problem/10463337>
llvm-svn: 145862