After LLVM r222434, the Variables field of DISubprograms for forward
declarations will always be null. No need to keep code around to
delete them.
llvm-svn: 222437
This is a followup to r222373. A better solution to the problem solved
there is to not create the leaked nodes at all (we know that they will
never be used for forward declared functions anyway). To avoid bot
breakage in the interval between the cfe and llvm commits, add a check
that the MDNode is not null before deleting it. This code can completely
go away after the LLVM part is in.
llvm-svn: 222433
For each "omp flush" directive a call to "void kmpc_flush(ident_t *, ...)" function is generated.
Directive "omp flush" may have an associated list of variables to flush, but currently runtime function ignores them. So the patch generates just "call kmpc_flush(ident_t *<loc>, i32 0)".
Differential Revision: http://reviews.llvm.org/D6292
llvm-svn: 222409
While emitting debug information for function forward declarations, we
create DISubprogram objects that aren't stored in the AllSubprograms
list, and thus won't get finalized by the DIBuilder. During the DIBuilder
finalize(), the temporary MDNode allocated for the DISubprogram
Variables field gets RAUWed with a non-temporary DIArray. For the forward
declarations, simply delete that temporary node before we delete the
parent node, so that it doesn't leak.
llvm-svn: 222373
Summary:
With this patch, passing a va_list to another function and reading 10 ints from
it works correctly on a big-endian target.
Based on a pair of patches by David Chisnall, one of which I've reworked
for the current trunk.
Reviewers: theraven, atanasyan
Reviewed By: theraven, atanasyan
Subscribers: cfe-commits
Differential Revision: http://reviews.llvm.org/D6248
llvm-svn: 222339
Summary:
This distinguishes between -fpic and -fPIC now, with the additions in LLVM for
PIC level support.
Test Plan: No regressions
Reviewers: echristo, rafael
Reviewed By: rafael
Subscribers: rnk, emaste, llvm-commits
Differential Revision: http://reviews.llvm.org/D5400
llvm-svn: 222227
Currently this function would return nothing for functions or globals that
haven't seen a definition yet. Make it return a forward declaration that will
get RAUWed with the definition if one is seen at a later point. The strategy
used to implement this is similar to what's done for types: the forward
declarations are stored in a vector and post-processed upon finalization to
perform the required RAUWs.
For now the only user of getDeclarationOrDefinition() is EmitUsingDecl(), thus
this patch allows emitting correct imported declarations even in the absence of
an actual definition of the imported entity.
(Another user will be the debug info generation for argument default values
that I need to resurrect).
Differential Revision: http://reviews.llvm.org/D6173
llvm-svn: 222220
We include unused functions and methods in -fcoverage-mapping so that
we can differentiate between uninstrumented and unused. This can cause
problems for uninstantiated templates though, since they may involve
an incomplete type that can't be mangled. This shows up in things like
libc++'s <unordered_map> and makes coverage unusable.
Avoid the issue by skipping uninstantiated methods of a templated
class.
llvm-svn: 222204
When targeting Windows Itanium (an MSVC environment), use Itanium-style
exceptions rather than SEH. Existing test cases already test this code path.
Applying this change ensures that tests won't break due to a parallel change in
LLVM (to correctly report isMSVCEnvironment).
llvm-svn: 222179
used inside blocks. It fixes a crash in the naming code
for __func__ etc. when used in a block declared globally.
It also brings back the old naming convention for
predefined expressions, which was broken. rdar://18961148
llvm-svn: 222065
This option was misleading because it looked like it enabled the
language feature of SEH (__try / __except), when this option was really
controlling which EH personality function to use. Mingw only supports
SEH and SjLj EH on x86_64, so we can simply do away with this flag.
llvm-svn: 221963
This patch enables the vec_vsx_ld and vec_vsx_st intrinsics for
PowerPC, which provide programmer access to the lxvd2x, lxvw4x,
stxvd2x, and stxvw4x instructions.
New code in altivec.h defines these in terms of new builtins, which
are themselves defined in BuiltinsPPC.def. The builtins are converted
to LLVM intrinsics in CGBuiltin.cpp. Additional code is added to
builtins-ppc-vsx.c to verify the correct generation of the intrinsics.
Note that I moved the other VSX builtins so all VSX builtins will be
alphabetical in their own section in BuiltinsPPC.def.
There is a companion patch for LLVM.
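For reference, a minimal usage sketch of the new intrinsics (assumes a VSX-enabled PowerPC target, e.g. -mcpu=pwr7):
#include <altivec.h>
vector double copy_vec(double *src, double *dst) {
  vector double v = vec_vsx_ld(0, src);   // expected to map to lxvd2x
  vec_vsx_st(v, 0, dst);                  // expected to map to stxvd2x
  return v;
}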
llvm-svn: 221768
Summary: If we've added poisoned paddings to a type, do not emit memcpy for operator=.
Test Plan: regression tests.
Reviewers: majnemer, rsmith
Reviewed By: rsmith
Subscribers: cfe-commits
Differential Revision: http://reviews.llvm.org/D6160
llvm-svn: 221739
Summary:
This change makes asan-coverage (formerly -mllvm -asan-coverage)
accessible via a clang flag.
Companion patch to LLVM is http://reviews.llvm.org/D6152
Test Plan: regression tests, chromium
Reviewers: samsonov
Reviewed By: samsonov
Subscribers: cfe-commits
Differential Revision: http://reviews.llvm.org/D6153
llvm-svn: 221719
Summary:
This change makes CodeGenFunction::EmitCheck() take several
conditions that need to be checked (all of them need to be true),
together with the sanitizer kinds these checks are for. This will allow
splitting one call into the UBSan runtime into several calls in cases where
different sanitizer kinds have different recoverability
settings.
Tests should be fixed accordingly; I'm working on it.
Test Plan: regression test suite.
Reviewers: rsmith
Reviewed By: rsmith
Subscribers: cfe-commits
Differential Revision: http://reviews.llvm.org/D6219
llvm-svn: 221716
So DWARF5 specs out auto deduced return types as DW_TAG_unspecified_type
with DW_AT_name "auto", and GCC implements this somewhat, but it
presents a few problems to do this with Clang.
GCC's implementation only applies to member functions where the auto
return type isn't deduced immediately (ie: member functions of templates
or member functions defined out of line). In the common case of an
inline deduced return type function, GCC emits the DW_AT_type as the
deduced return type.
Currently GDB doesn't seem to behave too well with this debug info - it
treats the return type as 'void', even though the definition of the
function has the correctly deduced return type (I guess it sees the
return type the declaration has, doesn't understand it, and assumes
void). This means the function's ABI might be broken (non-trivial return
types, etc), etc.
Clang, on the other hand, doesn't track this particular case of a
deducible return type that is deduced immediately versus one that is
deduced 'later'. So if we implement the DWARF5 representation, all
deducible return type functions would get the adverse GDB behavior
(including deduced return type lambda functions, inline deduced return
type functions, etc).
Also, we can't just do this for auto types that are not deduced -
because Clang marks even the declaration's return type as deduced (&
provides the underlying type) once a definition is seen that allows the
deduction. So we have to ignore even deduced types - but we can't do
that for auto variables (because this representation only applies to
function declarations - variables and function definitions need the real
type so the function can be called, etc) so we'd need to add an extra
flag to the type unwrapping/creation code to indicate when we want to
see through deduced types and when we don't. It's also not as simple as
just checking at the top level when building a function type (for one
thing, we reuse the function type building for building function pointer
types which might also have 'auto' in them - but be the type of a
variable instead) because the auto might be arbitrarily deeply nested
("auto &", "auto (*)()", etc...)
So, with all that said, let's do the simple thing that works in existing
debuggers for now and treat these functions the same way we do function
templates and implicit special members: omit them from the member list,
since they can't be correctly called anyway (without knowing the return
type the ABI isn't known and a function call could put the arguments in
the wrong place), so they're not much use to the user.
At some point in the future, when GDB understands the DWARF5
representation better it might be worth plumbing through the extra type
builder handling to avoid looking through AutoType for some callers,
etc...
llvm-svn: 221704
For each threadprivate variable which has a constructor/destructor, emit a call to void __kmpc_threadprivate_register(ident_t * <Current Location>, void *<Original Global Addr>, kmpc_ctor <Constructor>, kmpc_cctor NULL, kmpc_dtor <Destructor>);
In expressions, all references to such variables are replaced by calls to void *__kmpc_threadprivate_cached(ident_t *<Current Location>, kmp_int32 <Current Thread Id>, void *<Original Global Addr>, size_t <Size of Data>, void ***<Pointer to autogenerated cache – array of private copies of threadprivate variable>);
The test test/OpenMP/threadprivate_codegen.cpp checks that the codegen is correct. It also checks that the codegen is correct after serialization/deserialization, and one of its passes verifies debug info.
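As an illustrative sketch (the type and variable names are invented), a declaration like:
struct S { S(); ~S(); };
static S gs;
#pragma omp threadprivate(gs)
is registered once via the __kmpc_threadprivate_register call, and each reference to gs is rewritten into a __kmpc_threadprivate_cached call as described above.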
Differential Revision: http://reviews.llvm.org/D4002
llvm-svn: 221663
Get rid of ugly SanitizerOptions class thrust into LangOptions:
* Make SanitizeAddressFieldPadding a regular language option,
and rely on default behavior to initialize/reset it.
* Make SanitizerBlacklistFile a regular member of LangOptions.
* Introduce the helper class "SanitizerSet" to represent the
set of enabled sanitizers and make it a member of LangOptions.
It is exactly the entity we want to cache and modify in CodeGenFunction,
for instance. We'd also be able to reuse SanitizerSet in
CodeGenOptions for storing the set of recoverable sanitizers,
and in the Driver to represent the set of sanitizers
turned on/off by the commandline flags.
No functionality change.
llvm-svn: 221653
Make sure CodeGenFunction::EmitCheck() knows which sanitizer
it emits checks for. Make the CheckRecoverableKind enum an
implementation detail and move it out of the header.
Currently CheckRecoverableKind is determined by the type of
sanitizer ("unreachable" and "return" are unrecoverable,
"vptr" is always-recoverable, all the rest are recoverable).
This will change in the future if we allow specifying which sanitizers
are recoverable and which are not via the -fsanitize-recover= flag.
No functionality change.
llvm-svn: 221635
Homogeneous aggregates on AAPCS_VFP ARM need to be passed *without* being
flattened (e.g. [2 x float] rather than "float, float") for various weird ABI
reasons. However, this isn't the case for anything else; further, we know at
the ABIArgInfo::getDirect callsites whether this flattening is allowed.
So, we can get more unified ARM code, with a simpler Clang, by just using that
knowledge directly.
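As a rough example of the kind of type this affects (assuming AAPCS_VFP):
struct HFA { float x; float y; };   // a homogeneous aggregate of two floats
HFA pass(HFA a) { return a; }       // passed as [2 x float] rather than "float, float"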
llvm-svn: 221559
Use the bitmask to store the set of enabled sanitizers instead of a
bitfield. On the negative side, it makes syntax for querying the
set of enabled sanitizers a bit more clunky. On the positive side, we
will be able to use SanitizerKind to eventually implement the
new semantics for -fsanitize-recover= flag, that would allow us
to make some sanitizers recoverable, and some non-recoverable.
No functionality change.
llvm-svn: 221558
We would blindly assume that RTTI data should have the same linkage as
the vtable because we didn't think the RTTI data was external. This
oversight stemmed from not taking dllimport into account.
This fixes PR21512.
llvm-svn: 221511
Custom targets in cmake cannot be exported, and this dependency is only
needed in the combined build to ensure that Intrinsics.gen is created
before compiling CodeGen. In the standalone build, all of LLVM is built first.
llvm-svn: 221391
When we are generating the global initializer functions, we call
CGDebugInfo::EmitFunctionStart() with a valid decl which is describing
the initialized global variable. Do not update the DeclCache with this
key, as it will overwrite the cached variable DIGlobalVariable with
the newly created artificial DISubprogram.
One could wonder if we should put artificial subprograms in the DIE tree
at all (there are valid uses for them carrying line information though).
llvm-svn: 221385
mingw64's headers implement fabs by calling __builtin_fabs, so using the
library call results in an infinite loop. If the backend legalizes
@llvm.fabs as a call to fabs later, things should work out, as the crt
provides a definition.
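For context, the mingw-w64 math header contains an inline definition roughly of this shape (a paraphrase, not the exact header text):
static __inline double fabs(double x) { return __builtin_fabs(x); }
so lowering __builtin_fabs back to a libm call to fabs would recurse forever; emitting @llvm.fabs instead breaks the cycle.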
llvm-svn: 221206
Local variables are not initialized, and every target has
been (incorrectly) ignoring the unnecessary request for
zero initialization.
llvm-svn: 221162
It turns out that MinGW never dllimports or exports inline functions.
This means that code compiled with Clang would fail to link with
MinGW-compiled libraries since we might try to import functions that
are not imported.
To fix this, make Clang never dllimport inline functions when targeting
MinGW.
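A hypothetical illustration of the pattern that failed to link:
struct __declspec(dllimport) S {
  int f() { return 42; }   // inline member; MinGW-built import libraries don't export this
};
int call(S *s) { return s->f(); }
With this change, Clang no longer treats S::f as imported when targeting MinGW, so the reference resolves against the inline definition.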
llvm-svn: 221154
It says there is a narrowing conversion when we assign it to an unsigned
3-bit bitfield.
Also, use unsigned instead of size_t for the Size field of the struct in
question. Otherwise they won't run together in MSVC or clang-cl.
llvm-svn: 221019
The most complex aspect of the convention is the handling of homogeneous
vector and floating point aggregates. Reuse the homogeneous aggregate
classification code that we use on PPC64 and ARM for this.
This convention also has a C mangling, and we apparently implement that
in both Clang and LLVM.
Reviewed By: majnemer
Differential Revision: http://reviews.llvm.org/D6063
llvm-svn: 221006
Summary:
The Itanium ABI approach of using offset-to-top isn't possible with the
MS ABI; it doesn't have that kind of information lying around.
Instead, we do the following:
- Call the virtual deleting destructor with the "don't delete the object
flag" set. The virtual deleting destructor will return a pointer to
'this' adjusted to the most derived class.
- Call the global delete using the adjusted 'this' pointer.
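For illustration, a sketch of the kind of code this handles:
struct A { virtual ~A(); };
struct B { virtual ~B(); };
struct C : A, B { };
void destroy(B *b) {
  ::delete b;   // 'b' may not point at the start of the allocation; the virtual
                // deleting dtor (called with "don't delete") returns the adjusted 'this'
}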
Reviewers: rnk
Subscribers: cfe-commits
Differential Revision: http://reviews.llvm.org/D5996
llvm-svn: 220993
Summary:
When we are adding field paddings for asan, even an empty dtor has to remain in the code,
so we ignore -mconstructor-aliases if the paddings are going to be added.
Test Plan: added a test
Reviewers: rsmith, rnk, rafael
Reviewed By: rafael
Subscribers: cfe-commits
Differential Revision: http://reviews.llvm.org/D6038
llvm-svn: 220986
Reuse the PPC64 HVA detection algorithm for ARM and AArch64. This is a
nice code deduplication, since they are roughly identical. A few virtual
method extension points are needed to understand how big an HVA can be
and what element types it can have for a given architecture.
Also make the record expansion code work in the presence of non-virtual
bases.
Reviewed By: uweigand, asl
Differential Revision: http://reviews.llvm.org/D6045
llvm-svn: 220972
SanitizerOptions is not even a POD now, so having a global variable of
this type is not nice. Instead, provide a regular constructor and clear()
method, and let each CodeGenFunction have its own copy of the SanitizerOptions
it uses.
llvm-svn: 220920
The Windows NT SDK uses __readfsdword and declares it as a compiler-provided
builtin (#pragma intrinsic(__readfsdword)). Because intrin.h is not referenced
by winnt.h, it is not possible to provide an out-of-line definition for the
intrinsic. Provide a proper compiler builtin definition.
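A usage sketch (x86 only; 0x18 is the NT_TIB Self slot and is used here purely as an example offset):
unsigned long current_teb_slot(void) {
  return __readfsdword(0x18);   // reads a 32-bit value from fs:[0x18]
}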
llvm-svn: 220859
Following the NVVM IR specifications, arguments of aggregate type should be
passed on the stack without splitting (byval).
http://reviews.llvm.org/D6020
Patch by Jacques Pienaar.
llvm-svn: 220854
As discussed in bug 21398, PowerPC ABI code needs to consider C++ base
classes when classifying a class as homogeneous aggregate (or not) for
ABI purposes.
llvm-svn: 220852
An updated implementation of VLA type capturing, based on the previously committed solution for lambdas.
This version captures the whole VLA type instead of the particular variables which are part of the VLA size expression, and allows using the previously calculated size of the VLA type in captured regions. Required for OpenMP.
Differential Revision: http://reviews.llvm.org/D5099
llvm-svn: 220850
The MS linker cannot do anything interesting with these; it doesn't make
sense to emit them.
This fixes PR21373.
Differential Revision: http://reviews.llvm.org/D5986
llvm-svn: 220595
Avoid an assertion when materializing a lifetime type aggregate temporary. When
performing CodeGen for ObjC++, we could generate a lifetime-only aggregate
temporary by using an initializer list (which is effectively an array). We
would reach through the temporary expression, fishing out the inner expression.
If this expression was a lifetime expression, we would attempt to emit this as a
scalar. This would eventually result in an assertion as the emission would
eventually assert that the expression being emitted has a scalar evaluation
kind.
Add a case to handle the aggregate expressions. Use the EmitAggExpr to emit the
aggregate expression rather than the EmitScalarInit.
Addresses PR21347.
llvm-svn: 220590
This fixes a corner-case where __uuidof as a template argument would
result in us trying to emit a GLValue as an RValue. This would lead to
a crash down the road.
llvm-svn: 220585
Wire it through everywhere we have support for fastcall, essentially.
This allows us to parse the MSVC "14" CTP headers, but we will
miscompile them because LLVM doesn't support __vectorcall yet.
Reviewed By: Aaron Ballman
Differential Revision: http://reviews.llvm.org/D5808
llvm-svn: 220573
Summary:
This allows us to easily identify them in the backend which in turn allows us
to handle them correctly for big-endian targets (where they must be shifted
into the upper bits of the register).
Depends on D5961
Reviewers: atanasyan
Reviewed By: atanasyan
Subscribers: cfe-commits, theraven
Differential Revision: http://reviews.llvm.org/D5962
llvm-svn: 220566
Summary:
Ensure all integral/enumeration types are appropriately annotated with
signext/zeroext. In particular, i32 now has these attributes when using the
N32/N64 ABI. This paves the way for accurately representing the way the
N32/N64 ABIs promote integer arguments to i64.
Reviewers: atanasyan
Reviewed By: atanasyan
Subscribers: cfe-commits, theraven
Differential Revision: http://reviews.llvm.org/D5961
llvm-svn: 220563
Clang would previously assert on the following code when targeting MinGW:
struct __declspec(dllimport) S {
virtual ~S();
};
S::~S() {}
Because ~S is a key function and the class is dllimport, we would try to emit a
strong definition of the vtable, with dllimport - which is a conflict. We
should not emit strong vtable definitions for imported classes.
Differential Revision: http://reviews.llvm.org/D5944
llvm-svn: 220532
The previous IR representation used the non-lexical decl context, which
placed the definitions in the same scope as the declarations (ie: within
the class) - this was hidden by the fact that LLVM currently doesn't
respect the context of global variable definitions at all, and always
puts them at the top level (as direct children of the compile_unit).
Having the correct lexical scope improves source fidelity and simplifies
backend global variable emission (with changes coming shortly).
Doing something similar for non-member global variables would help
simplify/cleanup things further (see FIXME in the commit) and provide
similar source fidelity benefits to the final debug info.
llvm-svn: 220488
This eliminates some i8* GEPs and makes the IR that clang emits a bit
more canonical. More work is needed for vftables, but that isn't a clear
win so I plan to send it for review.
llvm-svn: 220398
This patch generates helper variables which are used as private copies of the corresponding original variables inside an OpenMP 'parallel' directive. These generated variables are initialized by default (with the default constructor, if any). In the outlined function, references to the original variables are replaced by references to these private helper variables. At the end of the initialization of the private variables, an implicit barrier is set by calling the __kmpc_barrier(...) runtime function to make sure that all threads were initialized using the original values of the variables.
Differential Revision: http://reviews.llvm.org/D4752
llvm-svn: 220262
This reverts commit r220169 which reverted r220153. However, it also
contains additional changes:
- We may need to add padding *after* we've packed the struct. This
occurs when the aligned next field offset is greater than the new
field's offset. When this occurs, we make the struct packed.
*However*, once packed the next field offset might be less than the
new field's offset. It is in this case that we might further pad the
struct.
- We would pad structs which were perfectly sized! This behavior is
immensely old. This behavior came from blindly subtracting
NextFieldOffsetInChars from RecordSize. This doesn't take into
account the fact that the struct might have a greater overall
alignment than the last field.
llvm-svn: 220175
This commit caused two tests in LNT to regress. I'm able to reproduce on
any platform and will send reproduction steps to the original commit
log. This should restore the LNT bots that have been failing.
llvm-svn: 220169
a NaN-test prior to the call to the library function.
This should automatically make fastmath (including just non-NaNs) able to avoid
the expensive libcalls and also open the door to more advanced folding in LLVM
based on the rules for complex math.
Two important notes to remember: first is that this isn't yet a proper
limited range mode, it's still just improving the unlimited range mode.
Also, it isn't really perfect w.r.t. what an unlimited range mode
should be doing because it isn't quite handling the flags produced by
all the operations in the way desirable for that mode, but then neither
is compiler-rt's libcall. When the compiler-rt libcall is improved to
carefully manage flags, the code emitted here should be improved
correspondingly. And it is still a long-term desirable thing to add
a limited range mode to Clang that would be able to use direct math
without library calls here.
Special thanks to Steve Canon for the careful review on this patch and
teaching me about these issues. =D
Differential Revision: http://reviews.llvm.org/D5756
llvm-svn: 220167
Before, ConstStructBuilder::AppendBytes would check packed constraints
prior to padding being added before the field's offset. However, adding
this padding might force our struct to be packed. Because we wouldn't
check *after* adding padding, ConstStructBuilder would be in an
inconsistent state leading to a crash.
This fixes PR21300.
llvm-svn: 220153
This commit changes the way we blacklist global variables in ASan.
Now the global is excluded from instrumentation (either regular
bounds checking, or initialization-order checking) if:
1) Global is explicitly blacklisted by its mangled name.
This part is left unchanged.
2) The SourceLocation of a global is in a blacklisted source file.
This changes the old behavior, where instead of looking at the
SourceLocation of a variable we simply considered llvm::Module
identifier. This was wrong, as identifier may not correspond to
the file name, and we incorrectly disabled instrumentation
for globals coming from #include'd files.
3) Global is blacklisted by type.
Now we build the type of a global variable using Clang machinery
(QualType::getAsString()), instead of llvm::StructType::getName().
After this commit, the active users of ASan blacklist files
may have to revisit them (this is a backwards-incompatible change).
llvm-svn: 220097
This commit changes the way we blacklist functions in ASan, TSan,
MSan and UBSan. We used to treat a function as "blacklisted"
and turn off instrumentation in it in two cases:
1) Function is explicitly blacklisted by its mangled name.
This part is not changed.
2) Function is located in an llvm::Module whose identifier is
contained in the list of blacklisted sources. This is completely
wrong, as the llvm::Module may not correspond to the actual source
file the function is defined in. Also, the function can be defined in
a header, in which case the user had to blacklist the .cpp file
this header was #include'd into, not the header itself.
Such functions could cause other problems - for instance, if the
header was included in multiple source files, compiled
separately and linked into a single executable, we could end up
with both instrumented and non-instrumented version of the same
function participating in the same link.
After this change we will make the blacklisting decision based on
the SourceLocation of the function definition. If a function is
not explicitly defined in the source file (for example, the
function is compiler-generated and responsible for
initialization/destruction of a global variable), then it will
be blacklisted if the corresponding global variable is defined
in a blacklisted source file, and will be instrumented otherwise.
After this commit, the active users of blacklist files may have
to revisit them. This is a backwards-incompatible change, but
I don't think it's possible or makes sense to support the
old incorrect behavior.
I plan to make similar change for blacklisting GlobalVariables
(which is ASan-specific).
llvm-svn: 219997
Summary:
The general approach is to add extra paddings after every field
in AST/RecordLayoutBuilder.cpp, then add code to CTORs/DTORs that poisons the paddings
(CodeGen/CGClass.cpp).
Everything is done under the flag -fsanitize-address-field-padding.
The blacklist file (-fsanitize-blacklist) allows avoiding the transformation
for given classes or source files.
See also https://code.google.com/p/address-sanitizer/wiki/IntraObjectOverflow
Test Plan: run SPEC2006 and some of the Chromium tests with -fsanitize-address-field-padding
Reviewers: samsonov, rnk, rsmith
Reviewed By: rsmith
Subscribers: majnemer, cfe-commits
Differential Revision: http://reviews.llvm.org/D5687
llvm-svn: 219961
They cannot be written to, so marking them const makes sense and may improve
optimisation.
As a side-effect, SectionInfos has to be moved from Sema to ASTContext.
It also fixes this problem, which occurs when compiling ATL:
warning LNK4254: section 'ATL' (C0000040) merged into '.rdata' (40000040) with different attributes
The ATL headers are putting variables in a special section that's marked
read-only. However, Clang currently can't model that read-onlyness in the IR.
But, by making the variables const, the section does become read-only, and
the linker warning is avoided.
Differential Revision: http://reviews.llvm.org/D5812
llvm-svn: 219960
Plumb through the full QualType of the TemplateArgument::Declaration, as
it's insufficient to only know whether the type is a reference or
pointer (that was necessary for mangling, but insufficient for debug
info). This shouldn't increase the size of TemplateArgument as
TemplateArgument::Integer is still longer by another 32 bits.
Several bits of code were testing that the reference-ness of the
parameters matched, but this seemed to be insufficient (various other
features of the type could've mismatched and wouldn't've been caught)
and unnecessary, at least insofar as removing those tests didn't cause
anything to fail.
(Richard - perhaps you can hypothesize why any of these checks might
need to test reference-ness of the parameters (& explain why
reference-ness is part of the mangling - I would've figured that for the
reference-ness to be different, a prior template argument would have to
be different). I'd be happy to add them in/beef them up and add test
cases if there's a reason for them)
llvm-svn: 219900
The functionality contained in CodeGenFunction::EmitAlignmentAssumption has
been moved to IRBuilder (so that it can also be used by LLVM-level code).
Remove this now-duplicate implementation in favor of the IRBuilder code.
llvm-svn: 219877
CodeGen wouldn't mark the aliasee as thread_local if the aliasee was a
tentative definition.
Even if the definition was already emitted, it would never mark the
alias as thread_local.
This fixes PR21288.
llvm-svn: 219859
Soon we'll need to have access to blacklist before the CodeGen
phase (see http://reviews.llvm.org/D5687), so parse and construct
the blacklist earlier.
llvm-svn: 219857
After http://reviews.llvm.org/D5687 is submitted, we will need
SanitizerBlacklist before the CodeGen phase, so make it a LangOpt
(as it will actually affect ABI / class layout).
llvm-svn: 219842
This change moves SanitizerBlacklist.h from lib/CodeGen
to public Clang headers in include/clang/Basic. SanitizerBlacklist
is currently only used in CodeGen to decide which functions/modules
should be instrumented, but this will soon change as ASan will
optionally modify class layouts during AST construction
(http://reviews.llvm.org/D5687). We need blacklist machinery
to be available at this point.
llvm-svn: 219840
In particular, if you have two identical templates in different TUs in
anonymous namespaces, we would use the same global_ctors comdat key for
both. As a result, only one would be run.
llvm-svn: 219806
Unions are initialized with the default initialization of their first
named member. If that member is not zero initialized, then we should
prefer that member's type. Otherwise, we might try to make an otherwise
unsuitable type (like an array) which we cannot easily initialize with a
pointer to member.
llvm-svn: 219781
When lazily constructing static member variable declarations (when
the vtable optimization fires and the definition of the type is omitted
(or built later, lazily), but the out of line definition of the static
member is provided and must be described in debug info) ensure we use
the canonical declaration when computing the file, line, etc for that
declaration (rather than the definition, which is also a declaration,
but not the canonical one).
llvm-svn: 219736
CodeGenFunction objects aren't really designed to be reused for more
than one function, and doing so can leak debug info location information
from one function into the prologue of the next.
Add an assertion in to catch reuses of CodeGenFunction, which
surprisingly only caught the ObjC atomic getter/setter cases. Fix those
and add a test to demonstrate the issue.
The test is a bit slim, because we're just testing for the absence of a
debug location on the prologue instructions, which by itself probably
wouldn't be the end of the world - but the particular debug location
that was ending up there was for the previous function's last
instruction. This produced debug info for another function within this
function, which is something I'm trying to remove all cases of, as it's a
substantial source of bugs, especially around inlining (see r219215).
llvm-svn: 219690
This change adds a UBSan check to upcasts. Namely, when we
perform a derived-to-base conversion, we:
1) check that the pointer-to-derived has suitable alignment
and underlying storage, if this pointer is non-null.
2) if the vptr sanitizer is enabled and we perform a conversion to
a virtual base, check that the pointer-to-derived has a matching vptr.
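A sketch of the kind of conversion now checked (hypothetical types):
struct Base { int b; };
struct Derived : virtual Base { int d; };
Base *up(Derived *p) {
  return p;   // with the relevant -fsanitize checks enabled, alignment and storage of *p
              // are verified when p is non-null; the vptr check applies to the virtual base
}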
llvm-svn: 219642
This patch generates call to "kmpc_push_num_threads(ident_t *loc, kmp_int32 global_tid, kmp_int32 num_threads);" library function before calling "kmpc_fork_call" each time there is an associated "num_threads" clause in the "omp parallel" directive.
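For illustration, a hypothetical snippet:
extern void work(void);
void use(int n) {
#pragma omp parallel num_threads(n)
  {
    work();
  }
}
Per the description above, this emits kmpc_push_num_threads(loc, global_tid, n) right before the kmpc_fork_call for the region.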
Differential Revision: http://reviews.llvm.org/D5145
llvm-svn: 219599
Adds codegen for the 'if' clause. Currently only for the 'if' clause used with the 'parallel' directive.
If the condition evaluates to true, the code executes the parallel version of the code by calling __kmpc_fork_call(loc, 1, microtask, captured_struct/*context*/), where loc - debug location, 1 - number of additional parameters after the "microtask" argument, microtask - the outlined function for the code associated with the 'parallel' directive, captured_struct - list of variables captured in this outlined function.
If the condition evaluates to false, the code executes the serial version of the code by executing the following code:
global_thread_id.addr = alloca i32
store i32 global_thread_id, global_thread_id.addr
zero.addr = alloca i32
store i32 0, zero.addr
kmpc_serialized_parallel(loc, global_thread_id);
microtask(global_thread_id.addr, zero.addr, captured_struct/*context*/);
kmpc_end_serialized_parallel(loc, global_thread_id);
Where loc - debug location, global_thread_id - global thread id, returned by the __kmpc_global_thread_num() call or passed as the first parameter in the microtask() call, global_thread_id.addr - address of the variable where the global_thread_id value is stored, zero.addr - implicit bound thread id (should be set to 0 for a serial call), microtask() and captured_struct are the same as in the parallel call.
Also, this patch checks whether the condition is constant; if it is, it evaluates its value and then generates either the parallel version of the code (if the condition evaluates to true) or the serial version of the code (if the condition evaluates to false).
Differential Revision: http://reviews.llvm.org/D4716
llvm-svn: 219597
Previously, loop hints such as #pragma loop vectorize_width(#) required a constant. This patch allows a constant expression to be used as well, such as a non-type template parameter or an expression (2 * c + 1).
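A hypothetical example of what is now accepted (written here with Clang's full pragma spelling):
template <int W>
void scale(float *a, int n) {
#pragma clang loop vectorize_width(W * 2 + 1)   // constant expression, not just a literal
  for (int i = 0; i < n; ++i)
    a[i] *= 2.0f;
}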
Reviewed by Richard Smith
llvm-svn: 219589
While we ran getUnqualifiedType over the catch type,
it isn't enough for array types. Use getUnqualifiedArrayType instead.
This fixes PR21252.
llvm-svn: 219582
and !=) to support mixed complex and real operand types.
This requires removing an assert from SemaChecking, and adding support
both to the constant evaluator and the code generator to synthesize the
imaginary part when needed. This seemed somewhat cleaner than having
just the comparison operators force real-to-complex conversions.
I've added test cases for these operations. I'm really terrified that
there were *no* tests in-tree which exercised this.
This turned up when trying to build R after my change to the complex
type lowering.
llvm-svn: 219570
for complex math.
This should fix the windows build bots that started having trouble here
and generally fix complex libcall emission on targets which use sret for
complex data types. It also makes the code a bit simpler (despite
calling into a much more complex bucket of code).
llvm-svn: 219565
operators where one type is a C complex type, and to emit both the
efficient and correct implementation for complex arithmetic according to
C11 Annex G using this extra information.
For both multiply and divide the old code was writing a long-hand
reduced version of the math without any of the special handling of inf
and NaN recommended by the standard here. Instead of putting more
complexity here, this change does what GCC does which is to emit
a libcall for the fully general case.
However, the old code also failed to do the proper minimization of the
set of operations when there was a mixed complex and real operation. In
those cases, C provides a spec for much more minimal operations that are
valid. Clang now emits the exact suggested operations. This change isn't
*just* about performance though, without minimizing these operations, we
again lose the correct handling of infinities and NaNs. It is critical
that this happen in the frontend based on asymmetric type operands to
complex math operations.
The performance implications of this change aren't trivial either. I've
run a set of benchmarks in Eigen, an open source mathematics library
that makes heavy use of complex. While a few have slowed down due to the
libcall being introduced, most sped up and some by a huge amount: up to
100% and 140%.
In order to make all of this work, also match the algorithm in the
constant evaluator to the one in the runtime library. Currently it is
a broken port of the simplifications from C's Annex G to the long-hand
formulation of the algorithm.
Splitting this patch up is very hard because none of this works without
the AST change to preserve non-complex operands. Sorry for the enormous
change.
Follow-up changes will include support for sinking the libcalls onto
cold paths in common cases and fastmath improvements to allow more
aggressive backend folding.
Differential Revision: http://reviews.llvm.org/D5698
llvm-svn: 219557
It's possible to construct cases where the first field we are trying to
copy is in the middle of an IR field. In some complicated cases, we
would fail to use an appropriate offset inside the object. Earlier
builds of clang seemed to miscompile the code by copying an insufficient
number of bytes. Up until now, we would assert: the copying offset was
insufficiently aligned.
This fixes PR21232.
llvm-svn: 219524
Moved CGOpenMPRegionInfo from CGOpenMPRuntime.h to the CGOpenMPRuntime.cpp file and reworked the code for this change. Also added processing of the ThreadID variable passed as an argument in outlined functions in the parallel and task directives.
llvm-svn: 219490
This patch makes class OMPPrivateScope a common class for all private variables. Reworked processing of firstprivate variables (now it is based on OMPPrivateScope too).
llvm-svn: 219486
Make it possible to pass NULL through variadic functions on 64-bit
Windows targets. The Visual C++ headers define NULL to 0, when they
should define it to 0LL on Win64 so that NULL is a pointer-sized
integer.
Fixes PR20949.
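A sketch of the failure mode this addresses (the functions here are hypothetical):
void find_first(const char *first, ...);   // walks its arguments until a null pointer
void f(void) {
  find_first("a", "b", NULL);   // the Visual C++ headers define NULL as 0, a 32-bit int
}
On Win64 the callee reads each variadic argument as a pointer-sized value, so the 32-bit 0 may leave garbage in the upper bits of its slot; this change makes such calls see a genuine null pointer.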
Reviewers: thakis, rsmith
Differential Revision: http://reviews.llvm.org/D5480
llvm-svn: 219456
Assertion failed: "Computed __func__ length differs from type!"
Reworked PredefinedExpr representation with internal StringLiteral field for function declaration.
Differential Revision: http://reviews.llvm.org/D5365
llvm-svn: 219393
Includes parsing and semantic analysis for 'omp teams' directive support from OpenMP 4.0. Adds additional analysis to 'omp target' directive with 'omp teams' directive.
llvm-svn: 219385
Summary:
The current code uses memset to re-initialize EHCleanupScope objects,
which breaks the assumptions of asan's upcoming intra-object-overflow checker.
If there is no DTOR, the new checker will refuse to work.
Test Plan: bootstrap with asan
Reviewers: rnk
Reviewed By: rnk
Subscribers: cfe-commits
Differential Revision: http://reviews.llvm.org/D5656
llvm-svn: 219331
This patch generates helper variables that are used as private copies of the corresponding original variables inside an OpenMP 'parallel' directive. These generated variables are initialized by copy using the values of the original variables (with the copy constructor, if any). For arrays, the initializer is generated for a single element, and in the codegen procedure this initial value is automatically propagated between all elements of the private copy.
In the outlined function, references to the original variables are replaced by references to these private helper variables. At the end of the initialization of the private variables, an implicit barrier is generated by calling the __kmpc_barrier(...) runtime function to make sure that all threads were initialized using the original values of the variables.
Differential Revision: http://reviews.llvm.org/D5140
llvm-svn: 219306
Bootstrapping LLVM+Clang+LLDB without a threshold on object size for
lifetime markers insertion has shown there was no significant change
in compile time, so let the stack slot colorizer do its optimization
for all slots.
llvm-svn: 219303
This patch generates helper variables that are used as private copies of the corresponding original variables inside an OpenMP 'parallel' directive. These generated variables are initialized by copy using the values of the original variables (with the copy constructor, if any). For arrays, the initializer is generated for a single element, and in the codegen procedure this initial value is automatically propagated between all elements of the private copy.
In the outlined function, references to the original variables are replaced by references to these private helper variables. At the end of the initialization of the private variables, an implicit barrier is generated by calling the __kmpc_barrier(...) runtime function to make sure that all threads were initialized using the original values of the variables.
Differential Revision: http://reviews.llvm.org/D5140
llvm-svn: 219297
This patch generates helper variables that are used as private copies of the corresponding original variables inside an OpenMP 'parallel' directive. These generated variables are initialized by copy using the values of the original variables (with the copy constructor, if any). For arrays, the initializer is generated for a single element, and in the codegen procedure this initial value is automatically propagated between all elements of the private copy.
In the outlined function, references to the original variables are replaced by references to these private helper variables. At the end of the initialization of the private variables, an implicit barrier is generated by calling the __kmpc_barrier(...) runtime function to make sure that all threads were initialized using the original values of the variables.
Differential Revision: http://reviews.llvm.org/D5140
llvm-svn: 219295
Summary:
Previously CodeGen assumed that static locals were emitted before they
could be accessed, which is true for automatic storage duration locals.
However, it is possible to have CodeGen emit a nested function that uses
a static local before emitting the function that defines the static
local, breaking that assumption.
Fix it by creating the static local upon access and ensuring that the
deferred function body gets emitted. We may not be able to emit the
initializer properly from outside the function body, so don't try.
Fixes PR18020. See also previous attempts to fix static locals in
PR6769 and PR7101.
Reviewers: rsmith
Subscribers: cfe-commits
Differential Revision: http://reviews.llvm.org/D4787
llvm-svn: 219265
We used to avoid these, but it looks like we did so just because we were
not handling dllexport aliases correctly.
Dario Domizioli fixed that, so allow these aliases.
Based on a patch by Dario Domizioli!
llvm-svn: 219206
Includes parsing and semantic analysis for 'omp teams' directive support from OpenMP 4.0. Adds additional analysis to 'omp target' directive with 'omp teams' directive.
llvm-svn: 219197
No functional changes intended.
Renamed EmitOMPSimdLoop to EmitOMPInnerLoop, I plan to re-use
it to emit inner loop in the future patches for CodeGen of the
worksharing loop directives (omp for, omp for simd).
llvm-svn: 219195
By leaving these members out of the member list, we avoid them being
emitted into type unit definitions - while still allowing the
definition/declaration to be injected into the compile unit as expected.
llvm-svn: 219101
By leaving these members out of the member list, we avoid them being
emitted into type unit definitions - while still allowing the
definition/declaration to be injected into the compile unit as expected.
llvm-svn: 219100
Summary:
This add support for the C++11 feature, thread_local global variables.
The ABI Clang implements is an improvement of the MSVC ABI. Sadly,
further improvements could be made but not without sacrificing ABI
compatibility.
The feature is implemented as follows:
- All thread_local initialization routines are pointed to from the
.CRT$XDU section.
- All non-weak thread_local variables have their initialization routines
called from a single function instead of getting their own .CRT$XDU
section entry. This is done to open up optimization opportunities to
the compiler.
- All weak thread_local variables have their own .CRT$XDU section entry.
This entry is in a COMDAT with the global variable it is initializing;
this ensures that we will initialize the global exactly once.
- Destructors are registered in the initialization function using
__tlregdtor.
Differential Revision: http://reviews.llvm.org/D5597
llvm-svn: 219074
We already add the align parameter attribute for function parameters that have
the align_value attribute (or those with a typedef type having that attribute),
which is an important special case, but does not handle pointers with value
alignment assumptions that come into scope in any other way. To handle the
general case, emit an @llvm.assume-based alignment assumption whenever we load
the pointer-typed lvalue of an align_value-attributed variable (except for
function parameters, which we already deal with at entry).
I'll also note that this is more general than Intel's described support in:
https://software.intel.com/en-us/articles/data-alignment-to-assist-vectorization
which states that the compiler inserts __assume_aligned directives in response
to align_value-attributed variables only for function parameters and for the
initializers of local variables. I think that we can make the optimizer deal
with this more-general scheme (which could lead to a lot of calls to
@llvm.assume inside of loop bodies, for example), but if not, I'll rework this
to be less aggressive.
llvm-svn: 219052
When the aligned clause of an OpenMP simd pragma is not provided with an
explicit alignment, a target-dependent default must be used. This adds such a
default for PPC targets.
This will become slightly more complicated when BG/Q support is added (because
then it will depend on the type). For now, 16 is a correct value for all
systems, and covers Altivec and VSX vectors.
llvm-svn: 218994
This adds support for the align_value attribute. This attribute is supported by
Intel's compiler (versions 14.0+), and several of my HPC users have requested
support in Clang. It specifies an alignment assumption on the values to which a
pointer points, and is used by numerical libraries to encourage efficient
generation of vector code.
Of course, we already have an aligned attribute that can specify enhanced
alignment for a type, so why is this additional attribute important? The
problem is that if you want to specify that an input array of T is, say,
64-byte aligned, you could try this:
typedef double aligned_double __attribute__((aligned(64)));
void foo(aligned_double *P) {
double x = P[0]; // This is fine.
double y = P[1]; // What alignment did those doubles have again?
}
the access here to P[1] causes problems. P was specified as a pointer to type
aligned_double, and any object of type aligned_double must be 64-byte aligned.
But if P[0] is 64-byte aligned, then P[1] cannot be, and this access causes
undefined behavior. Getting round this problem requires a lot of awkward
casting and hand-unrolling of loops, all of which is bad.
With the align_value attribute, we can accomplish what we'd like in a well
defined way:
typedef double *aligned_double_ptr __attribute__((align_value(64)));
void foo(aligned_double_ptr P) {
double x = P[0]; // This is fine.
double y = P[1]; // This is fine too.
}
This attribute does not create a new type (and so is not part of the type
system), and so will only "propagate" through templates, auto, etc. by
optimizer deduction after inlining. This seems consistent with Intel's
implementation (thanks to Alexey for confirming the various Intel-compiler
behaviors).
As a final note, I would have chosen to call this aligned_value, not
align_value, for better naming consistency with the aligned attribute, but I
think it would be more useful to users to adopt Intel's name.
llvm-svn: 218910
Prior to GCC 4.4, __sync_fetch_and_nand was implemented as:
{ tmp = *ptr; *ptr = ~tmp & value; return tmp; }
but this was changed in GCC 4.4 to be:
{ tmp = *ptr; *ptr = ~(tmp & value); return tmp; }
In response to this change, support for __sync_fetch_and_nand (and
__sync_nand_and_fetch) was removed in r99522 in order to avoid miscompiling code
depending on the old semantics. However, at this point:
1. Many years have passed, and the amount of code relying on the old
semantics is likely smaller.
2. Through the work of many contributors, all LLVM backends have been updated
such that "atomicrmw nand" provides the newer GCC 4.4+ semantics (this process
was complete July of 2014 (added to the release notes in r212635).
3. The lack of this intrinsic is now a needless impediment to porting codes
from GCC to Clang (I've now seen several examples of this).
It is true, however, that we still set __GNUC_MINOR__ to 2 (corresponding to GCC
4.2). To compensate for this, and to address the original concern regarding
code relying on the old semantics, I've added a warning that specifically
details the fact that the semantics have changed and that we provide the newer
semantics.
Fixes PR8842.
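A usage sketch of the re-enabled builtin (GCC 4.4+ semantics):
int fetch_and_nand(int *p, int mask) {
  return __sync_fetch_and_nand(p, mask);   // atomically: old = *p; *p = ~(old & mask); returns old
}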
llvm-svn: 218905
Summary:
Currently, with struct my_struct { int x; method_ptr y; };
a call to foo(my_struct s) may end up dropping the last 4 bytes
of the method pointer for x86_64 NaCl and x32.
When checking Has64BitPointers, also check if the method pointer
straddles an eightbyte boundary and classify Hi as well as Lo if needed.
Test Plan: test/CodeGenCXX/x86_64-arguments-nacl-x32.cpp
Reviewers: dschuff, pavel.v.chupin
Subscribers: jfb
Differential Revision: http://reviews.llvm.org/D5555
llvm-svn: 218889
Complex address expressions are no longer part of DIVariable, but
rather an extra argument to the debug intrinsics.
http://reviews.llvm.org/D4919
rdar://problem/17994491
llvm-svn: 218788
Complex address expressions are no longer part of DIVariable, but
rather an extra argument to the debug intrinsics.
http://reviews.llvm.org/D4919
rdar://problem/17994491
llvm-svn: 218777
This patch implements collapsing of loops (in particular, in
the presence of the 'collapse' clause). It calculates the number of iterations N
and the expressions necessary to calculate the nested loop counters'
values based on the new iteration variable (that goes from 0 to N-1)
in Sema. It also adds Codegen for 'omp simd', which uses
(and tests) this feature.
Differential Revision: http://reviews.llvm.org/D5184
llvm-svn: 218743
When generating coverage regions, we were doing a linear search
through the existing regions in order to try to merge related ones.
Most of the time this would find what it was looking for in a small
number of steps and it wasn't a big deal, but in cases with many
regions and few mergeable ones this leads to an absurd compile time
regression.
This changes the coverage mapping logic to do a single sort and then
merge as we go, which is a bit simpler and about 100 times faster.
I've also added FIXMEs on a couple of behaviours that seem a little
suspect, while keeping them behaving as they were - I'll look into
these soon.
The test changes here are mostly tedious reorganization, because the
ordering of regions we output has become slightly (but not completely)
more consistent from the almost completely arbitrary ordering we got
before.
llvm-svn: 218738
This struct has some members that are accessed directly and others
that need accessors, but it's all just public. This is confusing, so
I've changed it to a class and made more members private.
llvm-svn: 218737
This is the last piece of CGCall code that had implicit assumptions about
the order in which Clang arguments are translated to LLVM ones (positions
of inalloca argument, sret, this, padding arguments etc.) Now all of
this data is encapsulated in ClangToLLVMArgsMapping. If this information
would be required somewhere else, this class can be moved to a separate
header or pulled into CGFunctionInfo.
llvm-svn: 218634
Add a method to calculate the number of arguments a given QualType
expands to. Use this method in the ClangToLLVMArgMapping calculation.
This number may be cached in CodeGenTypes for efficiency, if needed.
llvm-svn: 218623
Hoist the logic which determines the way QualType is expanded
into a separate method. Remove a bunch of copy-paste and simplify
getTypesFromArgs() / ExpandTypeFromArgs() / ExpandTypeToArgs() methods.
llvm-svn: 218615
Fixes incorrect codegen when devirtualization is aborted due to covariant return types.
Differential Revision: http://reviews.llvm.org/D5321
llvm-svn: 218602
Clang uses two types to talk about a C++ class, the
NonVirtualBaseLLVMType and the LLVMType. Previously, we would allow one
of these to be packed and the other not.
This is problematic. If both don't agree on a common subset of fields,
then routines like getLLVMFieldNo will point to the wrong field. Solve
this by copying the 'packed'-ness of the complete type to the
non-virtual subobject. For this to work, we need to take into account
the non-virtual subobject's size and alignment when we are computing the
layout of the complete object.
This fixes PR21089.
llvm-svn: 218577
(clang crashed in CodeGen in llvm::Module::getNamedValue on
thread_local std::unique_ptr<int>).
Differential Revision: http://reviews.llvm.org/D5353
llvm-svn: 218503
In addition to __builtin_assume_aligned, GCC also supports an assume_aligned
attribute which specifies the alignment (and optional offset) of a function's
return value. Here we implement support for the assume_aligned attribute by making
use of the @llvm.assume intrinsic.
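A hypothetical example of the attribute this implements:
void *my_alloc(unsigned long n) __attribute__((assume_aligned(64)));
double sum2(void) {
  double *p = (double *)my_alloc(16 * sizeof(double));   // the compiler may now assume
  return p[0] + p[15];                                    // that p is 64-byte aligned
}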
llvm-svn: 218500
AFAICT the semantics of frem match libm's fmod.
Signed-off-by: Jan Vesely <jan.vesely@rutgers.edu>
Reviewed-by: Tom Stellard <tom@stellard.net>
llvm-svn: 218488
Most of the debug info emission is powered essentially from function
definitions - if we emit the definition of a function, we emit the types
of its parameters, the members of those types, and so on and so forth.
For types that aren't referenced even indirectly due to this - because
they only appear in temporary expressions, not in any named variable, we
need to explicitly emit/add them as is done here. This is not the only
case of such code, and we might want to consider handling "void
func(void*); ... func(new T());" (currently debug info for T is not
emitted) at some point, though GCC doesn't. There's a much broader
solution to these issues, but it's a lot of work for possibly marginal
gain (but might help us improve the default -fno-standalone-debug
behavior to be even more aggressive in some places). See the original
review thread for more details.
Patch by jyoti allur (jyoti.yalamanchili@gmail.com)!
Differential Revision: http://reviews.llvm.org/D2498
llvm-svn: 218390
On further investigation, COMDATs should work with .ctors, and the issue
I was hitting probably reproduces with .init_array.
This reverts commit r218287.
llvm-svn: 218313
In particular, pre-.init_array ELF uses the .ctors section mechanism.
MinGW COFF also uses .ctors, now that I think about it. Therefore,
restrict this optimization to the two platforms that are currently known
to work: ELF with .init_array and COFF with .CRT$XCU.
llvm-svn: 218287
Summary:
Vectors are normally 16-byte aligned; however, the O32 ABI enforces a
maximum alignment of 8 bytes since the base of the stack is 8-byte aligned.
Previously, this was enforced on the caller side, but not on the callee side.
This fixes the output of OpenCL's printf when given vectors.
Reviewers: atanasyan
Reviewed By: atanasyan
Subscribers: llvm-commits, pekka.jaaskelainen
Differential Revision: http://reviews.llvm.org/D5433
llvm-svn: 218248
This patch adds codegen for constructs:
#pragma omp critical [name]
<body>
It generates a global variable ".gomp_critical_user_[name].var" of type int32[8]. Then it generates the library call "kmpc_critical(loc, gtid, .gomp_critical_user_[name].var)", code for the <body> statement, and a final call "kmpc_end_critical(loc, gtid, .gomp_critical_user_[name].var)".
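As a hypothetical source-level example:
int counter;
void bump(void) {
#pragma omp critical(update)
  counter += 1;
}
Per the description above, this would produce ".gomp_critical_user_update.var" plus the kmpc_critical / kmpc_end_critical calls around the increment.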
Differential Revision: http://reviews.llvm.org/D5202
llvm-svn: 218239
This patch makes sure that the dllexport attribute is transferred to the alias when such alias is created. It only affects the Itanium ABI because for the MSVC ABI a workaround is in place to not generate aliases of dllexport ctors/dtors.
A new CodeGenModule function is provided, CodeGenModule::setAliasAttributes, to factor the code for transferring attributes to aliases.
llvm-svn: 218159
Clang can already handle
-------------------------------------------
struct S {
static const int x;
};
template<typename T> struct U {
static const int k;
};
template<typename T> const int U<T>::k = T::x;
const int S::x = 42;
extern const int *f();
const int *g() { return &U<S>::k; }
int main() {
return *f() + U<S>::k;
}
const int *f() { return &U<S>::k; }
-------------------------------------------
since r217264 which puts the .init_array section in the same COMDAT
as the variable.
This patch allows the linker to more easily delete some dead code and data by
putting the guard variable and init function in the same COMDAT.
This is a fixed version of r218089.
llvm-svn: 218141
The field is defined as:
If the third field is present, non-null, and points to a global variable or function, the initializer function will only run if the associated data from the current module is not discarded.
And without COMDATs we can't implement that.
llvm-svn: 218097
Clang can already handle
-------------------------------------------
struct S {
static const int x;
};
template<typename T> struct U {
static const int k;
};
template<typename T> const int U<T>::k = T::x;
const int S::x = 42;
extern const int *f();
const int *g() { return &U<S>::k; }
int main() {
return *f() + U<S>::k;
}
const int *f() { return &U<S>::k; }
-------------------------------------------
since r217264 which puts the .init_array section in the same COMDAT
as the variable.
This patch allows the linker to more easily delete some dead code and data by
putting the guard variable and init function in the same COMDAT.
llvm-svn: 218089
CodeGen would try to come up with an LLVM IR type for a pointer to
member type on the way to forming an LLVM IR type for a pointer to
pointer to member type.
However, if the pointer to member representation has not been locked in yet,
we would not be able to come up with a pointer to member IR type.
In these cases, make the pointer to member type an incomplete type.
This will make the pointer to pointer to member type a pointer to an
incomplete type. If the class eventually obtains an inheritance model,
we will make the pointer to member type represent the actual inheritance
model.
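A sketch of the previously problematic pattern:
struct S;                    // no inheritance model is known for S yet
void (S::**pp)() = nullptr;  // pointer to pointer to member function of S
With this change the inner pointer-to-member is emitted as an incomplete type until S's inheritance model is locked in.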
Differential Revision: http://reviews.llvm.org/D5373
llvm-svn: 218084