This patch is part of https://reviews.llvm.org/D48456, an effort to split
the casting logic up into smaller patches. It contains the code for casting
from fixed point types to boolean types.
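As a minimal sketch of the intended semantics (illustrative only, assuming
the -ffixed-point extension from ISO/IEC TR 18037; not taken from this
patch's tests):
```
/* Conversion to _Bool is false iff the fixed point value is zero, like any
   other arithmetic-to-bool conversion. */
#include <stdbool.h>
void f(void) {
  _Fract x = 0.5r;
  bool nonzero = x;   /* true */
  bool zero = 0.0r;   /* false */
}
```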
Differential Revision: https://reviews.llvm.org/D53308
llvm-svn: 345063
This patch is part of https://reviews.llvm.org/D48456, an effort to split it
up into smaller patches. It contains the code for casting between fixed point
types and other fixed point types.
The method for converting between fixed point types is based on the convert()
method in APFixedPoint.
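For illustration, the kind of conversion this covers (a hypothetical snippet,
assuming -ffixed-point):
```
/* The value is rescaled to the destination type's number of fractional
   bits; short _Fract has fewer fractional bits than long _Accum. */
short _Fract sf = 0.25hr;
long _Accum la = sf;   /* still 0.25, now at long _Accum precision */
```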
Differential Revision: https://reviews.llvm.org/D50616
llvm-svn: 344530
This diff includes the logic for setting the precision bits for each primary fixed point type in the target info, as well as the logic for initializing a fixed point literal.
Fixed point literals are declared using the following suffixes:
```
hr: short _Fract
uhr: unsigned short _Fract
r: _Fract
ur: unsigned _Fract
lr: long _Fract
ulr: unsigned long _Fract
hk: short _Accum
uhk: unsigned short _Accum
k: _Accum
uk: unsigned _Accum
```
Errors are also emitted for illegal literal values:
```
unsigned short _Accum u_short_accum = 256.0uhk; // expected-error{{the integral part of this literal is too large for this unsigned _Accum type}}
```
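For contrast, a couple of well-formed literals under this scheme (illustrative,
not from the patch's tests):
```
short _Fract f = 0.5hr;     /* 'hr' suffix: short _Fract */
unsigned _Accum a = 2.5uk;  /* 'uk' suffix: unsigned _Accum */
```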
Differential Revision: https://reviews.llvm.org/D46915
llvm-svn: 335148
lifetime.start/end expect a pointer argument in the alloca address space.
However, in C++ a temporary variable is in the default address space.
This patch changes the CreateMemTemp and CreateTempAlloca APIs to
return the original alloca instruction so that it can be passed to
lifetime.start/end.
It only affects targets with a non-zero alloca address space.
Differential Revision: https://reviews.llvm.org/D45900
llvm-svn: 332593
This is similar to the LLVM change https://reviews.llvm.org/D46290.
We've been running doxygen with the autobrief option for a couple of
years now. This makes the \brief markers in our comments
redundant. Since they are a visual distraction and we don't want to
encourage more \brief markers in new code either, this patch removes
them all.
Patch produced by
```
for i in $(git grep -l '\@brief'); do perl -pi -e 's/\@brief //g' $i & done
for i in $(git grep -l '\\brief'); do perl -pi -e 's/\\brief //g' $i & done
```
Differential Revision: https://reviews.llvm.org/D46320
llvm-svn: 331834
This patch addresses some mostly trivial post-commit review comments received
on r331677.
Additionally, this patch fixes an assertion in `getNarrowingKind` caused by
the use of an uninitialized value from `checkThreeWayNarrowingConversion`.
llvm-svn: 331707
Summary:
This patch tackles some low-hanging fruit for the builtin operator<=> expressions. It currently needs some cleanup before landing, but I want to get some initial feedback.
The main changes are:
* Lookup, build, and store the required standard library types and expressions in `ASTContext`. By storing them in ASTContext we don't need to store (and duplicate) the required expressions in the BinaryOperator AST nodes.
* Implement [expr.spaceship] checking, including diagnosing narrowing conversions.
* Implement `ExprConstant` for builtin spaceship operators.
* Implement builtin operator<=> support in `CodeGenAgg`. Initially I emitted the required comparisons using `ScalarExprEmitter::VisitBinaryOperator`, but this caused the operand expressions to be emitted once for every required cmp.
* Implement [builtin.over] with modifications to support the intent of P0946R0. See the note on `BuiltinOperatorOverloadBuilder::addThreeWayArithmeticOverloads` for more information about the workaround.
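As a rough sketch of what the [expr.spaceship] checking accepts for builtin
operands (illustrative only, not taken from this patch's tests):
```
#include <compare>
auto cmp(int a, long b) {
  // Usual arithmetic conversions apply; the result type here is
  // std::strong_ordering, one of the library types now cached in ASTContext.
  return a <=> b;
}
```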
Reviewers: rsmith, aaron.ballman, majnemer, rnk, compnerd, rjmccall
Reviewed By: rjmccall
Subscribers: rjmccall, rsmith, aaron.ballman, junbuml, mgorny, cfe-commits
Differential Revision: https://reviews.llvm.org/D45476
llvm-svn: 331677
Found via codespell -q 3 -I ../clang-whitelist.txt
Where the whitelist consists of:
archtype
cas
classs
checkk
compres
definit
frome
iff
inteval
ith
lod
methode
nd
optin
ot
pres
statics
te
thru
Patch by luzpaz! (This is a subset of D44188 that applies cleanly with a few
files that have dubious fixes reverted.)
Differential Revision: https://reviews.llvm.org/D44188
llvm-svn: 329399
the tail padding is not reused.
We track on the AggValueSlot (and through a couple of other
initialization actions) whether we're dealing with an object that might
share its tail padding with some other object, so that we can avoid
emitting stores into the tail padding if that's the case. We still
widen stores into tail padding when we can do so.
Differential Revision: https://reviews.llvm.org/D45306
llvm-svn: 329342
source expressions when iterating over a PseudoObjectExpr's semantic
subexpression list.
Previously the loop in emitPseudoObjectExpr would emit the IR for each
OpaqueValueExpr that was in a PseudoObjectExpr's semantic-form
expression list and use the result when the OpaqueValueExpr later
appeared in other expressions. This caused an assertion failure when
AggExprEmitter tried to copy the result of an OpaqueValueExpr and the
copied type didn't have trivial copy/move constructors or assignment
operators.
This patch adds a flag, IsUnique, to OpaqueValueExpr which indicates that it is a
unique reference to its source expression (it is not used in multiple
places). The loop in emitPseudoObjectExpr ignores OpaqueValueExprs that
are unique and CodeGen visitors simply traverse the source expressions
of such OpaqueValueExprs.
rdar://problem/34363596
Differential Revision: https://reviews.llvm.org/D39562
llvm-svn: 327939
In C, we'll wait until the end of the scope to clean up aggregate
temporaries used for returns from calls. This means in cases like:
```
{
  // Assuming that `Bar` is large enough to warrant indirect returns
  struct Bar b = {};
  b = foo(&b);
  b = foo(&b);
  b = foo(&b);
  b = foo(&b);
}
```
...We'll allocate space for 5 Bars on the stack (`b`, and 4
temporaries). This becomes painful in things like large switch
statements.
If cleaning up sooner is trivial, we should do it.
llvm-svn: 327229
If CodeGenFunction::EmitCall is:
- asked to emit a call with an indirectly returned value,
- given an invalid return value slot, and
- told the return value of the function it's calling is unused
then it'll make its own temporary, and add lifetime markers so that the
temporary's lifetime ends immediately after the call.
The early lifetime.end becomes problematic when we need to run a
destructor on the result of the function.
Instead of unconditionally saying that results of all calls are used
here (which would be correct, but would also cause us to never emit
lifetime markers for these temporaries), we just build our own temporary
to pass in when a dtor has to be run.
llvm-svn: 327192
ARC mode.
Declaring __strong pointer fields in structs was not allowed in
Objective-C ARC until now because that would make the struct non-trivial
to default-initialize, copy/move, and destroy, which is not something C
was designed to do. This patch lifts that restriction.
Special functions for non-trivial C structs are synthesized that are
needed to default-initialize, copy/move, and destroy the structs and
manage the ownership of the objects the __strong pointer fields point
to. Non-trivial structs passed to functions are destructed in the callee
function.
rdar://problem/33599681
Differential Revision: https://reviews.llvm.org/D41228
llvm-svn: 326307
Currently, clang compiles explicit initializers for array
elements into series of store instructions. For large arrays of
built-in types this results in bloated output code and
significant amount of time spent on the instruction selection
phase. This patch fixes the issue by initializing such arrays
with global constants that store the binary image of the
initializer.
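Illustrative example (hypothetical; the exact heuristics are defined by the
patch):
```
/* Before this change, each explicit element below became a separate store;
   with it, the initializer is emitted as a global constant plus a memcpy. */
int big[4096] = {1, 2, 3, 4};   /* remaining elements are zero-initialized */
```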
Differential Revision: https://reviews.llvm.org/D43181
llvm-svn: 325478
expressions
C++ allows us to reference static variables through member expressions. Prior to
this commit, non-integer static variables that were referenced using a member
expression were always emitted using lvalue loads. The old behaviour introduced
an inconsistency between regular uses of static variables and member expressions
uses. For example, the following program compiled and linked successfully:
```
struct Foo {
  constexpr static const char *name = "foo";
};
int main() {
  return Foo::name[0] == 'f';
}
```
but this program failed to link because "Foo::name" wasn't found:
```
struct Foo {
  constexpr static const char *name = "foo";
};
int main() {
  Foo f;
  return f.name[0] == 'f';
}
```
This commit ensures that constant static variables referenced through member
expressions are emitted in the same way as ordinary static variable references.
rdar://33942261
Differential Revision: https://reviews.llvm.org/D36876
llvm-svn: 311772
in list-initialization, run cleanups for the default argument after each
iteration of the initialization loop.
We previously only ran the destructor for any temporary once, at the end of the
complete loop, rather than once per iteration!
Re-commit of r302750, reverted in r302776.
llvm-svn: 302817
Revert "clang/test/CodeGenCXX/array-default-argument.cpp: Satisfy targets that have x86_thiscallcc."
This reverts commit r302750 and its fixup r302757 because the test is
still breaking on some of the ARM bots.
```
array-default-argument.cpp:20:12: error: expected string not found in input
// CHECK: {{call|invoke}}[[THISCALL:( x86_thiscallcc)?]] void @_ZN1AC1Ev([[TEMPORARY:.*]])
           ^
<stdin>:18:1: note: scanning from here
arrayctor.loop: ; preds = %arrayctor.loop, %entry
^
<stdin>:28:2: note: possible intended match here
 call void @_Z1fv()
 ^
--
```
llvm-svn: 302776
in list-initialization, run cleanups for the default argument after each
iteration of the initialization loop.
We previously only ran the destructor for any temporary once, at the end of the
complete loop, rather than once per iteration!
llvm-svn: 302750
Details:
Emit a suspend expression, which roughly looks like:
```
auto && x = CommonExpr();
if (!x.await_ready()) {
  llvm_coro_save();
  x.await_suspend(...);   (*)
  llvm_coro_suspend();    (**)
}
x.await_resume();
```
where the result of the entire expression is the result of x.await_resume()
(*) If x.await_suspend's return type is bool, it can veto the suspend:
```
if (x.await_suspend(...))
  llvm_coro_suspend();
```
(**) llvm_coro_suspend() encodes three possible continuations as a switch instruction:
```
%where-to = call i8 @llvm.coro.suspend(...)
switch i8 %where-to, label %coro.ret [ ; jump to epilogue to suspend
  i8 0, label %yield.ready   ; go here when resumed
  i8 1, label %yield.cleanup ; go here when destroyed
]
```
llvm-svn: 298784
This reverts commit r290171. It triggers a bunch of warnings, because
the new enumerator isn't handled in all switches. We want a warning-free
build.
Replied on the commit with more details.
llvm-svn: 290173
Summary: Enable comparison of CLK_NULL_QUEUE to variables of type queue_t.
Reviewers: Anastasia
Subscribers: cfe-commits, yaxunl, bader
Differential Revision: https://reviews.llvm.org/D27569
llvm-svn: 290171
copy constructors of classes with array members, instead using
ArrayInitLoopExpr to represent the initialization loop.
This exposed a bug in the static analyzer where it was unable to differentiate
between zero-initialized and unknown array values, which has also been fixed
here.
llvm-svn: 289618
initialization of each array element:
* ArrayInitLoopExpr is a prvalue of array type with two subexpressions:
a common expression (an OpaqueValueExpr) that represents the up-front
computation of the source of the initialization, and a subexpression
representing a per-element initializer
* ArrayInitIndexExpr is a prvalue of type size_t representing the current
position in the loop
This will be used to replace the creation of explicit index variables in lambda
capture of arrays and copy/move construction of classes with array elements,
and also C++17 structured bindings of arrays by value (which inexplicably allow
copying an array by value, unlike all of C++'s other array declarations).
No uses of these nodes are introduced by this change, however.
llvm-svn: 289413
In the amdgcn target, null pointers in the global, constant, and generic address spaces take the value 0, but null pointers in the private and local address spaces take the value -1. Currently LLVM assumes all null pointers take the value 0, which results in incorrectly translated IR. To work around this issue, instead of emitting null pointers in the local and private address spaces, a null pointer in the generic address space is emitted and cast to the local and private address spaces.
Tentative definitions of global variables with non-zero initializers will have weak linkage instead of common linkage, since common linkage requires a zero initializer and does not have an explicit section to hold the non-zero value.
Virtual member functions getNullPointer and performAddrSpaceCast are added to TargetCodeGenInfo, which by default return ConstantPointerNull and emit an addrspacecast instruction, respectively. A virtual member function getNullPointerValue is added to TargetInfo, which by default returns 0. Each target can override these virtual functions to supply the target-specific null pointer and null pointer value for a specific address space, and to perform specific translations for addrspacecast.
A wrapper function getNullPointer is added to CodeGenModule, and getTargetNullPointerValue is added to ASTContext, to facilitate getting the target-specific null pointers and their values.
This change has no effect on targets other than amdgcn. Other targets can provide support for non-zero null pointers in a similar way.
This change only provides support for non-zero null pointers in C and OpenCL. Support for other languages will be added incrementally later.
Differential Revision: https://reviews.llvm.org/D26196
llvm-svn: 289252
When an object of class type is initialized from a prvalue of the same type
(ignoring cv qualifications), use the prvalue to initialize the object directly
instead of inserting a redundant elidable call to a copy constructor.
llvm-svn: 288866
Currently Clang uses int32 to represent sampler_t, which has been a source of issues for some backends, because in some backends sampler_t cannot be represented by int32. They have to depend on kernel argument metadata and use IPA to find the sampler arguments and global variables and transform them to a target-specific sampler type.
This patch uses the opaque pointer type opencl.sampler_t* for sampler_t. For each use of a file-scope sampler variable, it generates a call to __translate_sampler_initializer. Likewise, for each initialization of a function-scope sampler variable, it generates a call to __translate_sampler_initializer.
Each builtin library can implement its own __translate_sampler_initializer(). Since the real sampler type tends to be architecture dependent, allowing it to be initialized by a library function simplifies backend design. A typical implementation of __translate_sampler_initializer could be a table lookup of real sampler literal values. Since its argument is always a literal, the returned pointer is known at compile time and easily optimized to finally become some literal values directly put into image read instructions.
This patch is partially based on Alexey Sotkin's work in Khronos Clang (3d4eec6162).
Differential Revision: https://reviews.llvm.org/D21567
llvm-svn: 277024
Replace inheriting constructors implementation with new approach, voted into
C++ last year as a DR against C++11.
Instead of synthesizing a set of derived class constructors for each inherited
base class constructor, we make the constructors of the base class visible to
constructor lookup in the derived class, using the normal rules for
using-declarations.
For constructors, UsingShadowDecl now has a ConstructorUsingShadowDecl derived
class that tracks the requisite additional information. We create shadow
constructors (not found by name lookup) in the derived class to model the
actual initialization, and have a new expression node,
CXXInheritedCtorInitExpr, to model the initialization of a base class from such
a constructor. (This initialization is special because it performs real perfect
forwarding of arguments.)
In cases where argument forwarding is not possible (for inalloca calls,
variadic calls, and calls with callee parameter cleanup), the shadow inheriting
constructor is not emitted and instead we directly emit the initialization code
into the caller of the inherited constructor.
Note that this new model is not perfectly compatible with the old model in some
corner cases. In particular:
* if B inherits a private constructor from A, and C uses that constructor to
construct a B, then we previously required that A befriends B and B
befriends C, but the new rules require A to befriend C directly, and
* if a derived class has its own constructors (and so its implicit default
constructor is suppressed), it may still inherit a default constructor from
a base class
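A minimal example of the new model (illustrative only):
```
struct A {
  A(int) {}
};
struct B : A {
  using A::A;   // A's constructors become visible to constructor lookup in B
};
int main() {
  B b(42);      // initializes the A base via CXXInheritedCtorInitExpr
}
```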
llvm-svn: 274049
Fixes PR11517 for SPARC.
On most targets, clang lowers va_arg itself, eschewing the use of the
llvm vaarg instruction. This is necessary (at least for now) as the type
argument to the vaarg instruction cannot represent all the ABI
information that is needed to support complex calling conventions.
However, on targets with a simpler varargs ABI, the LLVM instruction
can work just fine, and clang can simply lower to it. Unfortunately,
even on such targets, vaarg with a struct argument would fail, because
the default lowering to vaarg was naive: it didn't take into account the
ABI attribute computed by classifyArgumentType. In particular, for the
DefaultABIInfo, structs are supposed to be passed indirectly and so
llvm's vaarg instruction should be emitted with a pointer argument.
Now, vaarg instruction emission is able to use computed ABIArgInfo for
the provided argument type, which allows the default ABI support to work
for structs too.
I haven't touched the EmitVAArg implementation for PPC32_SVR4 or XCore,
although I believe both are now redundant, and could be switched over to
use the default implementation as well.
Differential Revision: http://reviews.llvm.org/D16154
llvm-svn: 261717
In {CG,}ExprConstant.cpp, we weren't treating vector splats properly; this
patch fixes their handling.
Additionally, this patch adds a new cast kind which allows a bool->int
cast to result in -1 or 0, instead of 1 or 0 (for true and false,
respectively), so we can sanely model OpenCL bool->int casts in the AST.
Differential Revision: http://reviews.llvm.org/D14877
llvm-svn: 257559
When a struct's size is not a power of 2, the corresponding _Atomic() type is
promoted to the nearest power of 2. We already correctly handled normal C++ expressions of
this form, but direct calls to the __c11_atomic_whatever builtins ended up
performing dodgy operations on the smaller non-atomic types (e.g. memcpy too
much). Later optimisations removed this as undefined behaviour.
This patch converts EmitAtomicExpr to allocate its temporaries at the full
atomic width, sidestepping the issue.
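For example (a sketch; the exact promoted sizes are target-dependent):
```
#include <stdatomic.h>
struct S { char c[6]; };   /* sizeof == 6, not a power of 2 */
_Atomic(struct S) as;      /* typically promoted/padded to 8 bytes */
/* A call like __c11_atomic_load(&as, memory_order_seq_cst) must now use
   temporaries of the full 8-byte atomic width, not the 6-byte struct size. */
```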
llvm-svn: 252507
Currently, debug info for types used only in explicit casts is not emitted. This regressed after a patch for better alignment handling; this patch fixes the bug.
Differential Revision: http://reviews.llvm.org/D13582
llvm-svn: 250795
Summary:
This change adds support for `__builtin_ms_va_list`, a GCC extension for
variadic `ms_abi` functions. The existing `__builtin_va_list` support is
inadequate for this because `va_list` is defined differently in the Win64
ABI vs. the System V/AMD64 ABI.
Depends on D1622.
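A sketch of the extension in use (assumed usage; __builtin_ms_va_start/end
are the companion builtins):
```
/* A variadic ms_abi function built for a System V target. */
__attribute__((ms_abi)) int sum(int count, ...) {
  __builtin_ms_va_list ap;
  __builtin_ms_va_start(ap, count);
  int total = 0;
  for (int i = 0; i < count; ++i)
    total += __builtin_va_arg(ap, int);
  __builtin_ms_va_end(ap);
  return total;
}
```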
Reviewers: rsmith, rnk, rjmccall
CC: cfe-commits
Differential Revision: http://reviews.llvm.org/D1623
llvm-svn: 247941
Introduce an Address type to bundle a pointer value with an
alignment. Introduce APIs on CGBuilderTy to work with Address
values. Change core APIs on CGF/CGM to traffic in Address where
appropriate. Require alignments to be non-zero. Update a ton
of code to compute and propagate alignment information.
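The core idea, as a standalone sketch (an assumed simplification, not Clang's
exact class):
```
#include <cassert>
#include <cstdint>

// A pointer bundled with its known alignment, so alignment can no longer be
// silently dropped while pointers travel through the IR generator.
struct Address {
  void *Pointer;
  uint64_t Alignment;   // in bytes; required to be a non-zero power of two

  Address(void *P, uint64_t A) : Pointer(P), Alignment(A) {
    assert(A != 0 && (A & (A - 1)) == 0 && "alignment must be non-zero");
  }
};
```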
As part of this, I've promoted CGBuiltin's EmitPointerWithAlignment
helper function to CGF and made use of it in a number of places in
the expression emitter.
The end result is that we should now be significantly more correct
when performing operations on objects that are locally known to
be under-aligned. Since alignment is not reliably tracked in the
type system, there are inherent limits to this, but at least we
are no longer confused by standard operations like derived-to-base
conversions and array-to-pointer decay. I've also fixed a large
number of bugs where we were applying the complete-object alignment
to a pointer instead of the non-virtual alignment, although most of
these were hidden by the very conservative approach we took with
member alignment.
Also, because IRGen now reliably asserts on zero alignments, we
should no longer be subject to an absurd but frustrating recurring
bug where an incomplete type would report a zero alignment and then
we'd naively do an alignmentAtOffset on it and emit code using an
alignment equal to the largest power-of-two factor of the offset.
We should also now be emitting much more aggressive alignment
attributes in the presence of over-alignment. In particular,
field access now uses alignmentAtOffset instead of min.
Several times in this patch, I had to change the existing
code-generation pattern in order to more effectively use
the Address APIs. For the most part, this seems to be a strict
improvement, like doing pointer arithmetic with GEPs instead of
ptrtoint. That said, I've tried very hard to not change semantics,
but it is likely that I've failed in a few places, for which I
apologize.
ABIArgInfo now always carries the assumed alignment of indirect and
indirect byval arguments. In order to cut down on what was already
a dauntingly large patch, I changed the code to never set align
attributes in the IR on non-byval indirect arguments. That is,
we still generate code which assumes that indirect arguments have
the given alignment, but we don't express this information to the
backend except where it's semantically required (i.e. on byvals).
This is likely a minor regression for those targets that did provide
this information, but it'll be trivial to add it back in a later
patch.
I partially punted on applying this work to CGBuiltin. Please
do not add more uses of the CreateDefaultAligned{Load,Store}
APIs; they will be going away eventually.
llvm-svn: 246985
This fixes an issue raised in D12412, where we generated invalid IR.
Thanks to Vedant Kumar for coming up with the initial work around.
Differential Revision: http://reviews.llvm.org/D12412
llvm-svn: 246880
Based on previous discussion on the mailing list, clang currently lacks support
for C99 partial re-initialization behavior:
Reference: http://lists.cs.uiuc.edu/pipermail/cfe-dev/2013-April/029188.html
Reference: http://www.open-std.org/jtc1/sc22/wg14/www/docs/dr_253.htm
This patch attempts to fix this problem.
Given the following code snippet,
```
struct P1 { char x[6]; };
struct LP1 { struct P1 p1; };
struct LP1 l = { .p1 = { "foo" }, .p1.x[2] = 'x' };
// this example is adapted from the example for "struct fred x[]" in DR-253;
// currently clang produces in l: { "\0\0x" },
// whereas gcc 4.8 produces { "fox" };
// with this fix, clang will also produce: { "fox" };
```
Differential Revision: http://reviews.llvm.org/D5789
llvm-svn: 239446
This patch fixes codegen for aggregate copying of VLAs. Currently, CodeGenFunction::EmitAggregateCopy() does not support copying of VLAs. The patch checks whether the size of the type is 0 and, if so, whether the type is actually a variable-length array. It then calculates the total number of elements in the array and computes its total size in bytes:
<total number of elements in array> * aligned_sizeof(ElementType) (if copy assignment is requested).
If simple copying is requested, the size is calculated as:
<total number of elements in array> * aligned_sizeof(ElementType) - aligned_sizeof(ElementType) + sizeof(ElementType).
memcpy() is used with this calculated size of the VLA.
Differential Revision: http://reviews.llvm.org/D9851
llvm-svn: 237768
The RegionCounter type does a lot of legwork, but most of it is only
meaningful within the implementation of CodeGenPGO. The uses elsewhere
in CodeGen generally just want to increment or read counters, so do
that directly.
llvm-svn: 235664
The /volatile:ms semantics turn volatile loads and stores into atomic
acquire and release operations. This distinction is important because
volatile memory operations do not form a happens-before relationship
with non-atomic memory. This means that a volatile store is not
sufficient for implementing a mutex unlock routine.
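For instance (an illustrative reduction, not from the patch's tests):
```
/* Under /volatile:ms, the store below becomes an atomic release store, so
   writes inside the critical section cannot be reordered past it; a plain
   volatile store would not guarantee that. */
volatile long lock;
void unlock(void) { lock = 0; }
```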
Differential Revision: http://reviews.llvm.org/D7580
llvm-svn: 229082
Matches the existing code for scalar default arguments. Complex default
arguments probably need the same handling too (test/fix to that coming
next).
llvm-svn: 228588
This causes things like assignment to refer to the '=' rather than the
LHS when attributing the store instruction, for example.
There were essentially 3 options for this:
* The beginning of an expression (this was the behavior prior to this
commit). This meant that stepping through subexpressions would bounce
around from subexpressions back to the start of the outer expression,
etc. (e.g., x + y + z would go x, y, x, z, x (the repeated 'x's would be
where the actual addition occurred)).
* The end of an expression. This seems to be what GCC does /mostly/, and
certainly this for function calls. This has the advantage that
progress is always 'forwards' (never jumping backwards - except for
independent subexpressions if they're evaluated in interesting orders,
etc). "x + y + z" would go "x y z" with the additions occurring at y
and z after the respective loads.
The problem with this is that the user would still have to think
fairly hard about precedence to realize which subexpression is being
evaluated or which operator overload is being called in, say, an asan
backtrace.
* The preferred location or 'exprloc'. In this case you get sort of what
you'd expect, though it's a bit confusing in its own way due to going
'backwards'. In this case the locations would be: "x y + z +" in
lovely postfix arithmetic order. But this does mean that if the op+
were an operator overload, say, and in a backtrace, the backtrace will
point to the exact '+' that's being called, not to the end of one of
its operands.
(actually the operator overload case doesn't work yet for other reasons,
but that's being fixed - but this at least gets scalar/complex
assignments and other plain operators right)
llvm-svn: 227027
or a class derived from T. We already supported this when initializing
_Atomic(T) from T for most (and maybe all) other reasonable values of T.
llvm-svn: 214390
Previously, we made one traversal of the AST prior to codegen to assign
counters to the ASTs and then propagated the count values during codegen. This
patch now adds a separate AST traversal prior to codegen for the
-fprofile-instr-use option to propagate the count values. The counts are then
saved in a map from which they can be retrieved during codegen.
This new approach has several advantages:
1. It gets rid of a lot of extra PGO-related code that had previously been
added to codegen.
2. It fixes a serious bug. My original implementation (which was mailed to the
list but never committed) used 3 counters for every loop. Justin improved it to
move 2 of those counters into the less-frequently executed breaks and continues,
but that turned out to produce wrong count values in some cases. The solution
requires visiting a loop body before the condition so that the count for the
condition properly includes the break and continue counts. Changing codegen to
visit a loop body first would be a fairly invasive change, but with a separate
AST traversal, it is easy to control the order of traversal. I've added a
testcase (provided by Justin) to make sure this works correctly.
3. It improves the instrumentation overhead, reducing the number of counters for
a loop from 3 to 1. We no longer need dedicated counters for breaks and
continues, since we can just use the propagated count values when visiting
breaks and continues.
To make this work, I needed to make a change to the way we count case
statements, going back to my original approach of not including the fall-through
in the counter values. This was necessary because there isn't always an AST node
that can be used to record the fall-through count. Now case statements are
handled the same as default statements, with the fall-through paths branching
over the counter increments. While I was at it, I also went back to using this
approach for do-loops -- omitting the fall-through count into the loop body
simplifies some of the calculations and make them behave the same as other
loops. Whenever we start using this instrumentation for coverage, we'll need
to add the fall-through counts into the counter values.
llvm-svn: 201528
PNaCl and Emscripten can both handle va_arg IR instructions with
struct type.
Also add a test to cover generating a va_arg IR instruction from
va_arg in C on le32 (as already handled by VisitVAArgExpr() in
CGExprScalar.cpp), which was not covered by a test before.
(This fixes https://code.google.com/p/nativeclient/issues/detail?id=2381)
Differential Revision: http://llvm-reviews.chandlerc.com/D2539
llvm-svn: 199830
adjustFallThroughCount isn't a good name, and the documentation was
even worse. This commit attempts to clarify what it's for and when to
use it.
llvm-svn: 199139
With the introduction of explicit address space casts into LLVM, there's
a need to provide a new cast kind the front-end can create for C/OpenCL/CUDA
and code to produce address space casts from those kinds when appropriate.
Patch by Michele Scandale!
llvm-svn: 197036
This is the same way GenericSelectionExpr works, and it's generally a
more consistent approach.
A large part of this patch is devoted to caching the value of the condition
of a ChooseExpr; it's needed to avoid threading an ASTContext into
IgnoreParens().
Fixes <rdar://problem/14438917>.
llvm-svn: 186738
Introduce CXXStdInitializerListExpr node, representing the implicit
construction of a std::initializer_list<T> object from its underlying array.
The AST representation of such an expression goes from an InitListExpr with a
flag set, to a CXXStdInitializerListExpr containing a MaterializeTemporaryExpr
containing an InitListExpr (possibly wrapped in a CXXBindTemporaryExpr).
This more detailed representation has several advantages, the most important of
which is that the new MaterializeTemporaryExpr allows us to directly model
lifetime extension of the underlying temporary array. Using that, this patch
*drastically* simplifies the IR generation of this construct, provides IR
generation support for nested global initializer_list objects, fixes several
bugs where the destructors for the underlying array would accidentally not get
invoked, and provides constant expression evaluation support for
std::initializer_list objects.
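For example (illustrative):
```
#include <initializer_list>
#include <vector>

// The braced list is now a CXXStdInitializerListExpr wrapping a
// MaterializeTemporaryExpr over the underlying array {1, 2, 3}, which is
// what lets lifetime extension of that array be modeled directly.
std::vector<int> v = {1, 2, 3};

int main() { return v.size() == 3 ? 0 : 1; }
```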
llvm-svn: 183872
Add a CXXDefaultInitExpr, analogous to CXXDefaultArgExpr, and use it both in
CXXCtorInitializers and in InitListExprs to represent a default initializer.
There's an additional complication here: because the default initializer can
refer to the initialized object via its 'this' pointer, we need to make sure
that 'this' points to the right thing within the evaluation.
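A small example of where the new node appears (illustrative only):
```
struct S {
  int x = 42;     // default member initializer
  int y;
  S() : y(x) {}   // 'x' is initialized via a CXXDefaultInitExpr, and the
                  // initializer may refer to the object through 'this'
};
int main() { return S().y == 42 ? 0 : 1; }
```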
llvm-svn: 179958
aggregate types in a profoundly wrong way that has to be
worked around in every call site, to getEvaluationKind,
which classifies and distinguishes between all of these
cases.
Also, normalize the API for loading and storing complexes.
I'm working on a larger patch and wanted to pull these
changes out, but it would have been annoying to detangle
them from each other.
llvm-svn: 176656
Title: [PR9027] volatile struct bug: member is not loaded at -O;
This is caused by the last flag passed to @llvm.memcpy being false,
not honoring that the aggregate has at least one 'volatile' data member
(even though the aggregate itself has not been qualified as 'volatile').
As a result, the optimizer optimizes away the memcpy altogether.
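A reduced illustration of the bug (hypothetical, not the PR's exact testcase):
```
struct S { volatile int v; int x; };
void copy(struct S *a, struct S *b) {
  /* This aggregate copy lowers to @llvm.memcpy; its volatile flag must be
     true because S has a volatile member, or the copy can be optimized away. */
  *a = *b;
}
```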
Patch reviewed by John McCall (I still need to fix up a test though).
llvm-svn: 173535
uncovered.
This required manually correcting all of the incorrect main-module
headers I could find, and running the new llvm/utils/sort_includes.py
script over the files.
I also manually added quite a few missing headers that were uncovered by
shuffling the order or moving headers up to be main-module-headers.
llvm-svn: 169237
Separate out the notions of 'has a trivial special member' and 'has a
non-trivial special member', and use them appropriately. These are not
opposites of one another (there might be no special member, or in C++11 there
might be a trivial one and a non-trivial one). The CXXRecordDecl predicates
continue to produce incorrect results, but do so in fewer cases now, and
they document the cases where they might be wrong.
No functionality changes are intended here (they will come when the predicates
start producing the right answers...).
llvm-svn: 168119
This fixes a regression from r162254; the optimizer has problems reasoning
about the smaller memcpy, as it's often not safe to widen a store but making
it smaller is.
llvm-svn: 164917
the trap BB out of the individual checks and into a common function, to prepare
for making this code call into a runtime library. Rename the existing EmitCheck
to EmitTypeCheck to clarify it and to move it out of the way of the new
EmitCheck.
llvm-svn: 163451
(__builtin_* etc.) so that it isn't possible to take their address.
Specifically, introduce a new type to represent a reference to a builtin
function, and a new cast kind to convert it to a function pointer in the
operand of a call. Fixes PR13195.
llvm-svn: 162962
* when checking that a pointer or reference refers to appropriate storage for a type, also check the alignment and perform a null check
* check that references are bound to appropriate storage
* check that 'this' has appropriate storage in member accesses and member function calls
llvm-svn: 162523