Commit Graph

384 Commits

Yaxun Liu 0bc4b2d337 [OpenCL] Generate opaque type for sampler_t and function call for the initializer
Currently Clang uses int32 to represent sampler_t, which has been a source of issues for some backends, because in some backends sampler_t cannot be represented by int32. They have to depend on kernel argument metadata and use IPA to find the sampler arguments and global variables and transform them to the target-specific sampler type.

This patch uses the opaque pointer type opencl.sampler_t* for sampler_t. For each use of a file-scope sampler variable, it generates a call to __translate_sampler_initializer. For each initialization of a function-scope sampler variable, it likewise generates a call to __translate_sampler_initializer.

Each builtin library can implement its own __translate_sampler_initializer(). Since the real sampler type tends to be architecture dependent, allowing it to be initialized by a library function simplifies backend design. A typical implementation of __translate_sampler_initializer could be a table lookup of real sampler literal values. Since its argument is always a literal, the returned pointer is known at compile time and is easily optimized away, leaving literal values that feed directly into image read instructions.
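
For illustration, a builtin library might implement the lookup roughly as in the following C++ sketch; the names, the table contents, and the TargetSampler layout are hypothetical, not part of this patch:

struct TargetSampler {
  unsigned AddressingMode;
  unsigned FilterMode;
  bool     NormalizedCoords;
};

static const TargetSampler SamplerTable[] = {
  {0, 0, false}, {0, 0, true}, {1, 0, false}, {1, 0, true},
  // ... one entry per sampler literal value the target supports ...
};

extern "C" const TargetSampler *
__translate_sampler_initializer(unsigned Literal) {
  // The argument is always a compile-time literal, so after inlining this
  // lookup constant-folds and the result can feed image read instructions.
  return &SamplerTable[Literal % (sizeof(SamplerTable) / sizeof(SamplerTable[0]))];
}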

This patch is partially based on Alexey Sotkin's work in Khronos Clang (3d4eec6162).

Differential Revision: https://reviews.llvm.org/D21567

llvm-svn: 277024
2016-07-28 19:26:30 +00:00
Richard Smith 5179eb7821 P0136R1, DR1573, DR1645, DR1715, DR1736, DR1903, DR1941, DR1959, DR1991:
Replace inheriting constructors implementation with new approach, voted into
C++ last year as a DR against C++11.

Instead of synthesizing a set of derived class constructors for each inherited
base class constructor, we make the constructors of the base class visible to
constructor lookup in the derived class, using the normal rules for
using-declarations.

For constructors, UsingShadowDecl now has a ConstructorUsingShadowDecl derived
class that tracks the requisite additional information. We create shadow
constructors (not found by name lookup) in the derived class to model the
actual initialization, and have a new expression node,
CXXInheritedCtorInitExpr, to model the initialization of a base class from such
a constructor. (This initialization is special because it performs real perfect
forwarding of arguments.)

In cases where argument forwarding is not possible (for inalloca calls,
variadic calls, and calls with callee parameter cleanup), the shadow inheriting
constructor is not emitted and instead we directly emit the initialization code
into the caller of the inherited constructor.

Note that this new model is not perfectly compatible with the old model in some
corner cases. In particular:
 * if B inherits a private constructor from A, and C uses that constructor to
   construct a B, then we previously required that A befriends B and B
   befriends C, but the new rules require A to befriend C directly, and
 * if a derived class has its own constructors (and so its implicit default
   constructor is suppressed), it may still inherit a default constructor from
   a base class (both corner cases are sketched below)
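
A minimal C++ sketch of both corner cases under the new rules; the class names follow the description above:

struct A {
  A() = default;
private:
  friend struct C;        // new rules: A must befriend C directly
  A(int) {}
};

struct B : A {
  using A::A;             // A's constructors become visible in B
};

struct C {
  C() : b(42) {}          // constructs B via the inherited A(int); OK only
  B b;                    // because C is a friend of A
};

struct D : A {
  using A::A;
  D(double) {}            // D's own constructor suppresses its implicit
};                        // default constructor...
D d;                      // ...but D still inherits A's default constructor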

llvm-svn: 274049
2016-06-28 19:03:57 +00:00
Richard Smith 6365e46459 Fix -Werror build.
llvm-svn: 262965
2016-03-08 23:16:16 +00:00
Richard Smith 872307e2ac P0017R1: In C++1z, an aggregate class can have (public non-virtual) base classes; these are initialized as if they were data members.
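
For example, with illustrative names:

struct Base { int a; };
struct Derived : Base { int b; };

// Derived is still an aggregate; the Base subobject is initialized from the
// first element of the braced list, as if it were a leading data member.
Derived d = {{1}, 2};   // d.a == 1, d.b == 2
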
llvm-svn: 262963
2016-03-08 22:17:41 +00:00
James Y Knight 29b5f086ca Default vaarg lowering should support indirect struct types.
Fixes PR11517 for SPARC.

On most targets, clang lowers va_arg itself, eschewing the use of the
llvm vaarg instruction. This is necessary (at least for now) as the type
argument to the vaarg instruction cannot represent all the ABI
information that is needed to support complex calling conventions.

However, on targets with simpler varargs ABIs, the LLVM instruction
can work just fine, and clang can simply lower to it. Unfortunately,
even on such targets, vaarg with a struct argument would fail, because
the default lowering to vaarg was naive: it didn't take into account the
ABI attribute computed by classifyArgumentType. In particular, for the
DefaultABIInfo, structs are supposed to be passed indirectly and so
llvm's vaarg instruction should be emitted with a pointer argument.

Now, vaarg instruction emission is able to use computed ABIArgInfo for
the provided argument type, which allows the default ABI support to work
for structs too.
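
For example, a sketch of the kind of code that previously failed on targets using the default lowering (such as SPARC); the names here are illustrative:

#include <cstdarg>

struct Pair { long a, b; };

long sum_pairs(int n, ...) {
  va_list ap;
  va_start(ap, n);
  long total = 0;
  for (int i = 0; i < n; ++i) {
    Pair p = va_arg(ap, Pair);   // struct-typed va_arg: the struct is passed
    total += p.a + p.b;          // indirectly, so a pointer must be read from ap
  }
  va_end(ap);
  return total;
}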

I haven't touched the EmitVAArg implementation for PPC32_SVR4 or XCore,
although I believe both are now redundant, and could be switched over to
use the default implementation as well.

Differential Revision: http://reviews.llvm.org/D16154

llvm-svn: 261717
2016-02-24 02:59:33 +00:00
George Burgess IV df1ed0099b [Bugfix] Fix ICE on constexpr vector splat.
In {CG,}ExprConstant.cpp, we weren't treating vector splats properly.
This patch makes us treat splats correctly.

Additionally, this patch adds a new cast kind which allows a bool->int
cast to result in -1 or 0, instead of 1 or 0 (for true and false,
respectively), so we can sanely model OpenCL bool->int casts in the AST.
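
For context, OpenCL-style vectors treat a true lane as all ones, which is why a bool->int conversion in this setting must produce -1. A rough illustration using Clang's ext_vector_type extension (names are illustrative):

typedef int int4 __attribute__((ext_vector_type(4)));

int4 mask(int4 a, int4 b) {
  return a < b;   // each true lane is -1, each false lane is 0
}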

Differential Revision: http://reviews.llvm.org/D14877

llvm-svn: 257559
2016-01-13 01:52:39 +00:00
Tim Northover cc2a6e0608 Atomics: support __c11_* calls on _Atomic struct types.
When a struct's size is not a power of 2, the corresponding _Atomic() type is
promoted to the nearest power of 2. We already correctly handled normal C++ expressions of
this form, but direct calls to the __c11_atomic_whatever builtins ended up
performing dodgy operations on the smaller non-atomic types (e.g. memcpy too
much). Later optimisations removed this as undefined behaviour.

This patch converts EmitAtomicExpr to allocate its temporaries at the full
atomic width, sidestepping the issue.
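
A sketch of the affected pattern (shown in C++ for consistency with the other examples here; _Atomic and the __c11_* builtins are Clang extensions in C++ mode, and the names are illustrative):

struct S3 { char c[3]; };       // size 3; the atomic version is padded to 4

_Atomic(S3) Shared;

S3 load_shared() {
  // The temporary receiving the result must be allocated at the full 4-byte
  // atomic width; previously it was only sizeof(S3) bytes, and the oversized
  // copy was later deleted as undefined behaviour.
  return __c11_atomic_load(&Shared, __ATOMIC_SEQ_CST);
}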

llvm-svn: 252507
2015-11-09 19:56:35 +00:00
Alexey Bataev 2bf9b4c0d1 [DEBUG INFO] Emit debug info for type used in explicit cast only.
Currently, debug info for types used only in explicit casts is not emitted. This regression was introduced by a patch for better alignment handling. This patch fixes that bug.
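
For instance (hypothetical type name):

// The only mention of PtrTag is inside an explicit cast, yet debug info for
// the type must still be emitted.
struct PtrTag { int kind; };

void *strip(void *p) {
  return (void *)(PtrTag *)p;
}
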
Differential Revision: http://reviews.llvm.org/D13582

llvm-svn: 250795
2015-10-20 04:24:12 +00:00
Charles Davis c7d5c94f78 Support __builtin_ms_va_list.
Summary:
This change adds support for `__builtin_ms_va_list`, a GCC extension for
variadic `ms_abi` functions. The existing `__builtin_va_list` support is
inadequate for this because `va_list` is defined differently in the Win64
ABI vs. the System V/AMD64 ABI.
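
Roughly, the extension is meant to be used like this; this is a sketch of my reading of the extension (argument reading goes through the ordinary __builtin_va_arg), not code from this patch:

int __attribute__((ms_abi)) sum_ints(int n, ...) {
  __builtin_ms_va_list ap;         // Win64-layout va_list, even on SysV targets
  __builtin_ms_va_start(ap, n);
  int total = 0;
  for (int i = 0; i < n; ++i)
    total += __builtin_va_arg(ap, int);
  __builtin_ms_va_end(ap);
  return total;
}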

Depends on D1622.

Reviewers: rsmith, rnk, rjmccall

CC: cfe-commits

Differential Revision: http://reviews.llvm.org/D1623

llvm-svn: 247941
2015-09-17 20:55:33 +00:00
John McCall 7f416cc426 Compute and preserve alignment more faithfully in IR-generation.
Introduce an Address type to bundle a pointer value with an
alignment.  Introduce APIs on CGBuilderTy to work with Address
values.  Change core APIs on CGF/CGM to traffic in Address where
appropriate.  Require alignments to be non-zero.  Update a ton
of code to compute and propagate alignment information.
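
The rough shape of the idea, as a sketch rather than the exact class added by this patch:

#include <cassert>
#include "clang/AST/CharUnits.h"
#include "llvm/IR/Value.h"

// A pointer is no longer passed around bare; it always carries the alignment
// known for it, and that alignment must be non-zero.
class Address {
  llvm::Value *Pointer;
  clang::CharUnits Alignment;
public:
  Address(llvm::Value *P, clang::CharUnits A) : Pointer(P), Alignment(A) {
    assert(!A.isZero() && "Address must not have zero alignment");
  }
  llvm::Value *getPointer() const { return Pointer; }
  clang::CharUnits getAlignment() const { return Alignment; }
};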

As part of this, I've promoted CGBuiltin's EmitPointerWithAlignment
helper function to CGF and made use of it in a number of places in
the expression emitter.

The end result is that we should now be significantly more correct
when performing operations on objects that are locally known to
be under-aligned.  Since alignment is not reliably tracked in the
type system, there are inherent limits to this, but at least we
are no longer confused by standard operations like derived-to-base
conversions and array-to-pointer decay.  I've also fixed a large
number of bugs where we were applying the complete-object alignment
to a pointer instead of the non-virtual alignment, although most of
these were hidden by the very conservative approach we took with
member alignment.

Also, because IRGen now reliably asserts on zero alignments, we
should no longer be subject to an absurd but frustrating recurring
bug where an incomplete type would report a zero alignment and then
we'd naively do an alignmentAtOffset on it and emit code using an
alignment equal to the largest power-of-two factor of the offset.

We should also now be emitting much more aggressive alignment
attributes in the presence of over-alignment.  In particular,
field access now uses alignmentAtOffset instead of min.

Several times in this patch, I had to change the existing
code-generation pattern in order to more effectively use
the Address APIs.  For the most part, this seems to be a strict
improvement, like doing pointer arithmetic with GEPs instead of
ptrtoint.  That said, I've tried very hard to not change semantics,
but it is likely that I've failed in a few places, for which I
apologize.

ABIArgInfo now always carries the assumed alignment of indirect and
indirect byval arguments.  In order to cut down on what was already
a dauntingly large patch, I changed the code to never set align
attributes in the IR on non-byval indirect arguments.  That is,
we still generate code which assumes that indirect arguments have
the given alignment, but we don't express this information to the
backend except where it's semantically required (i.e. on byvals).
This is likely a minor regression for those targets that did provide
this information, but it'll be trivial to add it back in a later
patch.

I partially punted on applying this work to CGBuiltin.  Please
do not add more uses of the CreateDefaultAligned{Load,Store}
APIs; they will be going away eventually.

llvm-svn: 246985
2015-09-08 08:05:57 +00:00
Reid Kleckner 5ee4b9a11f Don't use unreachable as a placeholder, it confuses EmitBlock
This fixes an issue raised in D12412, where we generated invalid IR.

Thanks to Vedant Kumar for coming up with the initial workaround.

Differential Revision: http://reviews.llvm.org/D12412

llvm-svn: 246880
2015-09-04 21:39:15 +00:00
Yunzhong Gao cb77930d6b Implementing C99 partial re-initialization behavior (DR-253)
Based on previous discussion on the mailing list, clang currently lacks support
for C99 partial re-initialization behavior:
Reference: http://lists.cs.uiuc.edu/pipermail/cfe-dev/2013-April/029188.html
Reference: http://www.open-std.org/jtc1/sc22/wg14/www/docs/dr_253.htm

This patch attempts to fix this problem.

Given the following code snippet,

struct P1 { char x[6]; };
struct LP1 { struct P1 p1; };

struct LP1 l = { .p1 = { "foo" }, .p1.x[2] = 'x' };
// this example is adapted from the example for "struct fred x[]" in DR-253;
// currently clang produces in l: { "\0\0x" },
//   whereas gcc 4.8 produces { "fox" };
// with this fix, clang will also produce: { "fox" };


Differential Revision: http://reviews.llvm.org/D5789

llvm-svn: 239446
2015-06-10 00:27:52 +00:00
Leny Kholodov 6aab1117e8 [CodeGen] Reuse stack space from unused function results (with more accurate unused result detection)
This patch fixes issues with unused result detection which were found in patch http://reviews.llvm.org/D9743.

Differential Revision: http://reviews.llvm.org/D10042

llvm-svn: 239294
2015-06-08 10:23:49 +00:00
Reid Kleckner 892bb0cace Evaluate union cast subexpressions when the cast value is unused
Fixes PR23597.

llvm-svn: 237839
2015-05-20 21:59:25 +00:00
Alexey Bataev 16dc7b68c4 Fix for aggregate copying of variable length arrays.
This patch fixes codegen for aggregate copying of VLAs. Currently CodeGenFunction::EmitAggregateCopy() does not support copying of VLAs. The patch checks whether the size of the type is 0 and, if so, whether the type is actually a variable-length array. It then computes the total number of elements in the array and, from that, the total size in bytes:

If copy assignment is requested:
<total number of elements in array> * aligned_sizeof(ElementType)

If simple copying is requested:
<total number of elements in array> * aligned_sizeof(ElementType) - aligned_sizeof(ElementType) + sizeof(ElementType)

memcpy() is then called with the calculated size of the VLA.
Differential Revision: http://reviews.llvm.org/D9851

llvm-svn: 237768
2015-05-20 03:46:04 +00:00
Richard Smith 419bd09415 PR23373: A defaulted union copy constructor that is not trivial must still be
emitted as a memcpy.

llvm-svn: 236142
2015-04-29 19:26:57 +00:00
Justin Bogner 66242d6c5e InstrProf: Stop using RegionCounter outside of CodeGenPGO (NFC)
The RegionCounter type does a lot of legwork, but most of it is only
meaningful within the implementation of CodeGenPGO. The uses elsewhere
in CodeGen generally just want to increment or read counters, so do
that directly.

llvm-svn: 235664
2015-04-23 23:06:47 +00:00
David Blaikie 2e80428dc5 clang-format my last commit
(sorry, keep forgetting that)

llvm-svn: 234129
2015-04-05 22:47:07 +00:00
David Blaikie 1ed728c499 [opaque pointer type] More GEP API migrations
Looks like the VTable code in particular will need some work to pass
around the pointee type explicitly.

llvm-svn: 234128
2015-04-05 22:45:47 +00:00
David Majnemer ced8bdf74a Sema: Parenthesized bound destructor member expressions can be called
We would wrongfully reject (a.~A)() in both the destructor and
pseudo-destructor cases.
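
Both now-accepted forms, with illustrative names:

struct A { ~A(); };
typedef int I;

void f(A &a, I &i) {
  (a.~A)();   // parenthesized bound destructor call
  (i.~I)();   // parenthesized pseudo-destructor call
}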

This fixes PR22668.

llvm-svn: 230512
2015-02-25 17:36:15 +00:00
David Majnemer a5b195a1dc Revert "Revert r229082 for a bit, it caused PR22577."
This reverts commit r229123.  It was a red herring; the bug was present
without r229082.

llvm-svn: 229205
2015-02-14 01:35:12 +00:00
Nico Weber 7ce96b853d Revert r229082 for a bit, it caused PR22577.
llvm-svn: 229123
2015-02-13 16:27:00 +00:00
David Majnemer abc482effc MS ABI: Implement /volatile:ms
The /volatile:ms semantics turn volatile loads and stores into atomic
acquire and release operations.  This distinction is important because
volatile memory operations do not form a happens-before relationship
with non-atomic memory.  This means that a volatile store is not
sufficient for implementing a mutex unlock routine.
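
For example, under /volatile:ms the following hand-rolled publish/consume pattern (illustrative names) is ordered with respect to the non-atomic payload, which it is not under the default ISO volatile semantics:

volatile int Ready;
int Payload;

void publish(int v) {
  Payload = v;
  Ready = 1;                  // becomes a store-release under /volatile:ms
}

int consume() {
  while (Ready != 1) { }      // each load becomes a load-acquire
  return Payload;
}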

Differential Revision: http://reviews.llvm.org/D7580

llvm-svn: 229082
2015-02-13 07:55:47 +00:00
David Blaikie 38b2591469 DebugInfo: Refactor default arg handling into a common place (instead of handling it repeatedly for aggregate, complex, and scalar types)
llvm-svn: 228591
2015-02-09 19:13:51 +00:00
David Blaikie 2221aba85b DebugInfo: Suppress the location of instructions in aggregate default arguments.
Matches the existing code for scalar default arguments. Complex default
arguments probably need the same handling too (test/fix to that coming
next).

llvm-svn: 228588
2015-02-09 18:47:14 +00:00
David Blaikie 9b47966615 DebugInfo: Use the preferred location rather than the start location for expression line info
This causes things like assignment to refer to the '=' rather than the
LHS when attributing the store instruction, for example.

There were essentially 3 options for this:

* The beginning of an expression (this was the behavior prior to this
  commit). This meant that stepping through subexpressions would bounce
  around from subexpressions back to the start of the outer expression,
  etc. (eg: x + y + z would go x, y, x, z, x (the repeated 'x's would be
  where the actual addition occurred)).

* The end of an expression. This seems to be what GCC does /mostly/, and
  certainly does for function calls. This has the advantage that
  progress is always 'forwards' (never jumping backwards - except for
  independent subexpressions if they're evaluated in interesting orders,
  etc). "x + y + z" would go "x y z" with the additions occurring at y
  and z after the respective loads.
  The problem with this is that the user would still have to think
  fairly hard about precedence to realize which subexpression is being
  evaluated or which operator overload is being called in, say, an asan
  backtrace.

* The preferred location or 'exprloc'. In this case you get sort of what
  you'd expect, though it's a bit confusing in its own way due to going
  'backwards'. In this case the locations would be: "x y + z +" in
  lovely postfix arithmetic order. But this does mean that if the op+
  were an operator overload, say, and in a backtrace, the backtrace will
  point to the exact '+' that's being called, not to the end of one of
  its operands.

(actually the operator overload case doesn't work yet for other reasons,
but that's being fixed - but this at least gets scalar/complex
assignments and other plain operators right)

llvm-svn: 227027
2015-01-25 01:19:10 +00:00
David Blaikie 01fb5fb128 DebugInfo: Attribute aggregate expressions to the source location of the expression
Just as r225956 did for scalar expressions (CGExprScalar::Visit), do the
same for aggregate expressions.

llvm-svn: 226388
2015-01-18 01:48:19 +00:00
Richard Smith 77be48ac47 PR18097: Support initializing an _Atomic(T) from an object of C++ class type T
or a class derived from T. We already supported this when initializing
_Atomic(T) from T for most (and maybe all) other reasonable values of T.
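
For example (illustrative names; _Atomic on a class type is a Clang extension in C++):

struct T { int n; };
struct D : T {};

void f() {
  T t{1};
  D d{};
  _Atomic(T) a = t;   // initializing _Atomic(T) from a T: already worked
  _Atomic(T) b = d;   // from a class derived from T: now also supported
}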

llvm-svn: 214390
2014-07-31 06:31:19 +00:00
Richard Smith 8edda96296 A non-trivial array-fill expression isn't necessarily a CXXConstructExpr. It
could be an InitListExpr that runs constructors in C++11 onwards. Fixes a
recent regression (introduced in r210091).

llvm-svn: 210954
2014-06-13 23:04:49 +00:00
Richard Smith 9213a6bfa4 Remove incorrect assertion.
llvm-svn: 210092
2014-06-03 08:40:27 +00:00
Craig Topper 8a13c4180e [C++11] Use 'nullptr'. CodeGen edition.
llvm-svn: 209272
2014-05-21 05:09:00 +00:00
Aaron Ballman e8a8baef44 [C++11] Replacing RecordDecl iterators field_begin() and field_end() with iterator_range fields(). Updating all of the usages of the iterators with range-based for loops.
llvm-svn: 203355
2014-03-08 20:12:42 +00:00
Bob Wilson bf854f0f53 Change PGO instrumentation to compute counts in a separate AST traversal.
Previously, we made one traversal of the AST prior to codegen to assign
counters to the ASTs and then propagated the count values during codegen. This
patch now adds a separate AST traversal prior to codegen for the
-fprofile-instr-use option to propagate the count values. The counts are then
saved in a map from which they can be retrieved during codegen.

This new approach has several advantages:

1. It gets rid of a lot of extra PGO-related code that had previously been
added to codegen.

2. It fixes a serious bug. My original implementation (which was mailed to the
list but never committed) used 3 counters for every loop. Justin improved it to
move 2 of those counters into the less-frequently executed breaks and continues,
but that turned out to produce wrong count values in some cases. The solution
requires visiting a loop body before the condition so that the count for the
condition properly includes the break and continue counts. Changing codegen to
visit a loop body first would be a fairly invasive change, but with a separate
AST traversal, it is easy to control the order of traversal. I've added a
testcase (provided by Justin) to make sure this works correctly.

3. It reduces the instrumentation overhead, cutting the number of counters for
a loop from 3 to 1. We no longer need dedicated counters for breaks and
continues, since we can just use the propagated count values when visiting
breaks and continues.

To make this work, I needed to make a change to the way we count case
statements, going back to my original approach of not including the fall-through
in the counter values. This was necessary because there isn't always an AST node
that can be used to record the fall-through count. Now case statements are
handled the same as default statements, with the fall-through paths branching
over the counter increments.  While I was at it, I also went back to using this
approach for do-loops -- omitting the fall-through count into the loop body
simplifies some of the calculations and makes them behave the same as other
loops. Whenever we start using this instrumentation for coverage, we'll need
to add the fall-through counts into the counter values.

llvm-svn: 201528
2014-02-17 19:21:09 +00:00
Mark Seaborn 74020868ee Handle va_arg on struct types for the le32 target (PNaCl and Emscripten)
PNaCl and Emscripten can both handle va_arg IR instructions with
struct type.

Also add a test to cover generating a va_arg IR instruction from
va_arg in C on le32 (as already handled by VisitVAArgExpr() in
CGExprScalar.cpp), which was not covered by a test before.

(This fixes https://code.google.com/p/nativeclient/issues/detail?id=2381)

Differential Revision: http://llvm-reviews.chandlerc.com/D2539

llvm-svn: 199830
2014-01-22 20:11:01 +00:00
Justin Bogner 0718a3a420 CodeGen: Rename adjustFallThroughCount -> adjustForControlFlow
adjustFallThroughCount isn't a good name, and the documentation was
even worse. This commit attempts to clarify what it's for and when to
use it.

llvm-svn: 199139
2014-01-13 21:24:22 +00:00
Justin Bogner ef512b9929 CodeGen: Initial instrumentation based PGO implementation
llvm-svn: 198640
2014-01-06 22:27:43 +00:00
David Tweed e1468322eb Add front-end infrastructure now that address space casts are in LLVM IR.
With the introduction of explicit address space casts into LLVM, there's
a need to provide a new cast kind that the front-end can create for C/OpenCL/CUDA,
and code to produce address space casts from that kind when appropriate.

Patch by Michele Scandale!

llvm-svn: 197036
2013-12-11 13:39:46 +00:00
Nick Lewycky 2d84e84236 Thread a SourceLocation into the EmitCheck for "load_invalid_value". This occurs
when scalars are loaded / undergo lvalue-to-rvalue conversion.

llvm-svn: 191808
2013-10-02 02:29:49 +00:00
Eli Friedman 75807f239e Make IgnoreParens() look through ChooseExprs.
This is the same way GenericSelectionExpr works, and it's generally a
more consistent approach.

A large part of this patch is devoted to caching the value of the condition
of a ChooseExpr; it's needed to avoid threading an ASTContext into
IgnoreParens().

Fixes <rdar://problem/14438917>.

llvm-svn: 186738
2013-07-20 00:40:58 +00:00
Eli Friedman 035b39e3bd Fix build.
Sorry about that.

llvm-svn: 186054
2013-07-11 02:28:36 +00:00
Eli Friedman be4504df26 Simplify atomic load/store IRGen.
Also fixes a couple minor bugs along the way; see testcases.

llvm-svn: 186049
2013-07-11 01:32:21 +00:00
Richard Smith a1c9d4d932 Simplify: we don't need any special-case lifetime extension when initializing
declarations of reference type; they're handled by the general case handling of
MaterializeTemporaryExpr.

llvm-svn: 183875
2013-06-12 23:38:09 +00:00
Richard Smith cc1b96d356 PR12086, PR15117
Introduce CXXStdInitializerListExpr node, representing the implicit
construction of a std::initializer_list<T> object from its underlying array.
The AST representation of such an expression goes from an InitListExpr with a
flag set, to a CXXStdInitializerListExpr containing a MaterializeTemporaryExpr
containing an InitListExpr (possibly wrapped in a CXXBindTemporaryExpr).

This more detailed representation has several advantages, the most important of
which is that the new MaterializeTemporaryExpr allows us to directly model
lifetime extension of the underlying temporary array. Using that, this patch
*drastically* simplifies the IR generation of this construct, provides IR
generation support for nested global initializer_list objects, fixes several
bugs where the destructors for the underlying array would accidentally not get
invoked, and provides constant expression evaluation support for
std::initializer_list objects.
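
For example:

#include <initializer_list>

// The braced list first materializes a temporary array of three ints; the
// std::initializer_list then refers to that array, and the explicit
// MaterializeTemporaryExpr lets its lifetime be extended to match the
// initializer_list (for the whole program, in this case).
std::initializer_list<int> Global = {1, 2, 3};

// Nested global initializer_list objects now work as well.
std::initializer_list<std::initializer_list<int>> Nested = {{1, 2}, {3}};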

llvm-svn: 183872
2013-06-12 22:31:48 +00:00
Richard Smith be93c00ab3 Fix assert on temporary std::initializer_list.
llvm-svn: 182615
2013-05-23 21:54:14 +00:00
Richard Smith 852c9db72b C++1y: Allow aggregates to have default initializers.
Add a CXXDefaultInitExpr, analogous to CXXDefaultArgExpr, and use it both in
CXXCtorInitializers and in InitListExprs to represent a default initializer.

There's an additional complication here: because the default initializer can
refer to the initialized object via its 'this' pointer, we need to make sure
that 'this' points to the right thing within the evaluation.
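
For example, in C++1y (illustrative names):

struct Widget {
  int id = 1;
  int size = 2 * id;   // the default initializer refers to the object being
                       // initialized through its implicit 'this'
};

// Widget remains an aggregate despite its default member initializers, so a
// trailing member may fall back to them during aggregate initialization:
Widget w = {42};       // w.id == 42, w.size == 84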

llvm-svn: 179958
2013-04-20 22:23:05 +00:00
John McCall c8e0170578 Standardize accesses to the TargetInfo in IR-gen.
Patch by Stephen Lin!

llvm-svn: 179638
2013-04-16 22:48:15 +00:00
John McCall a8ec7eb9cf Promote atomic type sizes up to a power of two, capped by
MaxAtomicPromoteWidth.  Fix a ton of terrible bugs with
_Atomic types and (non-intrinsic-mediated) loads and stores
thereto.

llvm-svn: 176658
2013-03-07 21:37:17 +00:00
John McCall 47fb950871 Change hasAggregateLLVMType, which conflates complex and
aggregate types in a profoundly wrong way that has to be
worked around in every call site, to getEvaluationKind,
which classifies and distinguishes between all of these
cases.

Also, normalize the API for loading and storing complexes.

I'm working on a larger patch and wanted to pull these
changes out, but it would have be annoying to detangle
them from each other.

llvm-svn: 176656
2013-03-07 21:37:08 +00:00
John McCall bea4c3d8c4 Evaluate compound literals directly into the result aggregate
when that aggregate isn't potentially aliased.

llvm-svn: 176654
2013-03-07 21:36:54 +00:00
Fariborz Jahanian 7865220da4 patch for PR9027 and rdar://11861085
Title: [PR9027] volatile struct bug: member is not loaded at -O.
This is caused by the last flag passed to @llvm.memcpy being false,
not honoring that the aggregate has at least one 'volatile' data member
(even though the aggregate itself has not been qualified as 'volatile').
As a result, the optimizer removes the memcpy altogether.
Patch reviewed by John McCall (I still need to fix up a test, though).
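
The affected pattern from PR9027, roughly (written in the common C/C++ subset; names are illustrative):

struct S {
  volatile int v;   /* the aggregate itself is not volatile, but a member is */
  int x;
};

struct S a, b;

void f(void) {
  /* This aggregate copy is lowered to @llvm.memcpy; the call must be marked
     volatile so the optimizer cannot drop it and the load of a.v survives -O. */
  b = a;
}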

llvm-svn: 173535
2013-01-25 23:57:05 +00:00