This patch makes it possible to produce access tags in a uniform
manner, regardless of whether the resulting tag will be a scalar
or a struct-path one. getAccessTagInfo() now takes care of the
actual translation of access descriptors to tags and can handle
all kinds of accesses. Facilities that are specific to scalar
accesses are eliminated.
Some more details:
* DecorateInstructionWithTBAA() is no longer responsible for
converting types to access tags. Instead, it takes an access
descriptor (TBAAAccessInfo) and generates the corresponding
access tag from it.
* getTBAAInfoForVTablePtr() is reworked into
getTBAAVTablePtrAccessInfo(), which now returns the
virtual-pointer access descriptor and not the virtual-pointer
type metadata.
* Added function getTBAAMayAliasAccessInfo() that returns the
descriptor for may-alias accesses.
* getTBAAStructTagInfo() is renamed to getTBAAAccessTagInfo(), as
it is now the only way to generate an access tag from a given
access descriptor. It is capable of producing both scalar and
struct-path tags, depending on options and the availability of
the base access type. getTBAAScalarTagInfo() and its cache
ScalarTagMetadataCache are eliminated.
* Now that we do not need to care about whether the resulting
access tag should be a scalar or a struct-path one,
getTBAAStructTypeInfo() is renamed to getBaseTypeInfo().
* Added function getTBAAAccessInfo() that constructs an access
descriptor from a given QualType access type.
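A minimal, self-contained sketch of the idea (hypothetical names and
stand-in string "metadata", not the actual Clang declarations): a single
descriptor carries the base type, the access type and the offset, and one
translation point decides whether a scalar or a struct-path tag is produced.

  #include <cstdint>
  #include <iostream>
  #include <string>

  // Stand-in for TBAAAccessInfo: everything needed to build either tag kind.
  struct AccessInfo {
    std::string BaseType;   // base type of a struct-path access, empty if unknown
    std::string AccessType; // final (scalar) access type
    uint64_t Offset = 0;    // offset of the access within the base type
  };

  // Stand-in for getAccessTagInfo(): builds strings instead of MDNodes.
  std::string getAccessTag(const AccessInfo &Info, bool StructPathTBAA) {
    if (Info.BaseType.empty() || !StructPathTBAA)
      return "scalar tag for " + Info.AccessType;
    return "struct-path tag for " + Info.BaseType + " + offset " +
           std::to_string(Info.Offset) + " -> " + Info.AccessType;
  }

  int main() {
    AccessInfo Scalar{"", "int", 0};
    AccessInfo Member{"struct S", "int", 4};
    std::cout << getAccessTag(Scalar, true) << "\n"; // falls back to scalar
    std::cout << getAccessTag(Member, true) << "\n"; // full struct-path tag
  }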
This is part of D37826 reworked to be a separate patch to
simplify review.
Differential Revision: https://reviews.llvm.org/D38503
llvm-svn: 314979
This patch makes it possible to produce access tags in a uniform
manner, regardless of whether the resulting tag will be a scalar
or a struct-path one. getAccessTagInfo() now takes care of the
actual translation of access descriptors to tags and can handle
all kinds of accesses. Facilities that are specific to scalar
accesses are eliminated.
Some more details:
* DecorateInstructionWithTBAA() is no longer responsible for
converting types to access tags. Instead, it takes an access
descriptor (TBAAAccessInfo) and generates the corresponding
access tag from it.
* getTBAAInfoForVTablePtr() is reworked into
getTBAAVTablePtrAccessInfo(), which now returns the
virtual-pointer access descriptor and not the virtual-pointer
type metadata.
* Added function getTBAAMayAliasAccessInfo() that returns the
descriptor for may-alias accesses.
* getTBAAStructTagInfo() is renamed to getTBAAAccessTagInfo(), as
it is now the only way to generate an access tag from a given
access descriptor. It is capable of producing both scalar and
struct-path tags, depending on options and the availability of
the base access type. getTBAAScalarTagInfo() and its cache
ScalarTagMetadataCache are eliminated.
* Now that we do not need to care about whether the resulting
access tag should be a scalar or a struct-path one,
getTBAAStructTypeInfo() is renamed to getBaseTypeInfo().
* Added function getTBAAAccessInfo() that constructs an access
descriptor from a given QualType access type.
This is part of D37826 reworked to be a separate patch to
simplify review.
Differential Revision: https://reviews.llvm.org/D38503
llvm-svn: 314977
With this patch we implement a concept of TBAA access descriptors
that are capable of representing both scalar and struct-path
accesses in a generic way.
This is part of D37826 reworked to be a separate patch to
simplify review.
Differential Revision: https://reviews.llvm.org/D38456
llvm-svn: 314780
This patch fixes misleading names of entities related to getting,
setting and generating TBAA access type descriptors.
This is effectively an attempt to provide a review for D37826 by
breaking it into smaller pieces.
Differential Revision: https://reviews.llvm.org/D38404
llvm-svn: 314657
Summary:
This is the follow-up patch to D37924.
This change refactors clang to use the newly added section headers
in SpecialCaseList to specify which sanitizers blacklist entries
should apply to, like so:
[cfi-vcall]
fun:*bad_vcall*
[cfi-derived-cast|cfi-unrelated-cast]
fun:*bad_cast*
The SanitizerSpecialCaseList class has been added to allow querying by
SanitizerMask, and SanitizerBlacklist and its downstream users have been
updated to provide that information. Old blacklists not using sections
will continue to function identically since the blacklist entries will
be placed into a '[*]' section by default matching against all
sanitizers.
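For illustration (not part of the patch), an existing blacklist that has no
section header, such as:

  fun:*bad_cast*

is treated as if it were written under the default section, so its entries
keep matching every sanitizer:

  [*]
  fun:*bad_cast*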
Reviewers: pcc, kcc, eugenis, vsk
Reviewed By: eugenis
Subscribers: dberris, cfe-commits, mgorny
Differential Revision: https://reviews.llvm.org/D37925
llvm-svn: 314171
Summary:
Microsoft Visual Studio expects debug locations to correspond to
statements. We used to emit locations for expressions nested inside statements.
This would confuse the debugger, causing it to stop multiple times on the
same line and breaking the "step into specific" feature. This change inhibits
the emission of debug locations for nested expressions when emitting CodeView
debug information, unless column information is enabled.
Fixes PR34312.
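As an illustration (a hypothetical example, not taken from the patch or its
tests), consider a single statement whose subexpressions are themselves
calls:

  int g(int v) { return v + 1; }
  int h(int v) { return v * 2; }
  int f(int a, int b) { return a + b; }

  int main() {
    int x = 3;
    // One statement containing nested call expressions. When emitting
    // CodeView without column information, only the statement as a whole
    // now gets a debug location; previously g(x) and h(x) received their
    // own locations, so the debugger stopped on this line several times.
    int r = f(g(x), h(x));
    return r;
  }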
Reviewers: rnk, zturner
Reviewed By: rnk
Subscribers: majnemer, echristo, aprantl, cfe-commits
Differential Revision: https://reviews.llvm.org/D37529
llvm-svn: 312965
the interface.
The ultimate goal here is to make it easier to do some more interesting
things in constant emission, like emit constant initializers that have
ignorable side-effects, or doing the majority of an initialization
in-place and then patching up the last few things with calls. But for
now this is mostly just a refactoring.
llvm-svn: 310964
This patch adds support for the `long_call`, `far`, and `near` attributes
for MIPS targets. The `long_call` and `far` attributes are synonyms. All
these attributes override the `-mlong-calls` / `-mno-long-calls`
command line options for a particular function.
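For example (illustrative usage, not taken from the patch's tests):

  // Forces a long call even under -mno-long-calls; 'far' is a synonym.
  __attribute__((long_call)) void far_callee() {}

  // Forces a short, direct call even under -mlong-calls.
  __attribute__((near)) void near_callee() {}

  int main() {
    far_callee();
    near_callee();
    return 0;
  }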
Differential revision: https://reviews.llvm.org/D35479
llvm-svn: 308667
Certain targets (e.g. amdgcn) require global variables to stay in the global or constant address
space. In C or C++, global variables are emitted in the default (generic) address space.
This patch introduces virtual functions TargetCodeGenInfo::getGlobalVarAddressSpace
and TargetInfo::getConstantAddressSpace to handle this in a general approach.
It only affects IR generated for the amdgcn target.
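A simplified, self-contained sketch of the hook pattern (hypothetical names
and illustrative address-space numbers, not the actual Clang signatures):

  #include <iostream>

  struct TargetCodeGenInfoStub {
    // Default behaviour: globals are emitted in the generic address space.
    virtual unsigned getGlobalVarAddressSpace(bool IsConstant) const {
      (void)IsConstant;
      return 0; // generic
    }
    virtual ~TargetCodeGenInfoStub() = default;
  };

  struct AMDGCNCodeGenInfoStub : TargetCodeGenInfoStub {
    // A target such as amdgcn picks dedicated spaces instead (the numbers
    // here are purely illustrative).
    unsigned getGlobalVarAddressSpace(bool IsConstant) const override {
      return IsConstant ? 2 : 1;
    }
  };

  int main() {
    AMDGCNCodeGenInfoStub AMDGCN;
    std::cout << AMDGCN.getGlobalVarAddressSpace(/*IsConstant=*/false) << "\n";
  }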
Differential Revision: https://reviews.llvm.org/D33842
llvm-svn: 307470
Summary: OpenCL and SPIR version metadata must be generated once per module instead of once per mangled global value.
Reviewers: Anastasia, yaxunl
Reviewed By: Anastasia
Subscribers: ahatanak, cfe-commits
Differential Revision: https://reviews.llvm.org/D34235
llvm-svn: 305796
Summary:
We can emit a vtable definition containing inline functions
if they are all emitted.
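For illustration (a hypothetical class, not taken from the patch), this is
the kind of type the change concerns: every virtual member function is
defined inline, so once they have all been emitted, the vtable definition
can be emitted as well.

  struct S {
    virtual int f() { return 1; } // inline definition
    virtual int g() { return 2; } // inline definition
    virtual ~S() = default;
  };

  int main() {
    S s; // uses S's vtable
    return s.f() + s.g();
  }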
Reviewers: rjmccall, rsmith
Subscribers: cfe-commits
Differential Revision: https://reviews.llvm.org/D33437
llvm-svn: 304394
Summary:
Unless I'm missing something, the DeferredGlobal struct's GV field is
unused; removing it makes the struct itself trivial.
Reviewers: rafael, chandlerc
Subscribers: mcrosier, llvm-commits
Differential Revision: https://reviews.llvm.org/D32691
llvm-svn: 301789
Summary:
The -fxray-always-instrument= and -fxray-never-instrument= flags take
filenames that are used to imbue the XRay instrumentation attributes
using a whitelist mechanism (similar to the sanitizer special case
list). We use the same syntax and semantics as the sanitizer blacklist
files in the implementation.
As implemented, we respect the attributes that are already defined in
the source file (i.e. those that have the
[[clang::xray_{always,never}_instrument]] attributes) before applying
the always/never instrument lists.
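An illustrative pair of list files and their use (hypothetical contents; the
syntax is the sanitizer special case list syntax):

  # xray-always.txt
  fun:*must_trace*

  # xray-never.txt
  fun:*tight_loop*

  clang++ -fxray-instrument \
      -fxray-always-instrument=xray-always.txt \
      -fxray-never-instrument=xray-never.txt main.cc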
Reviewers: rsmith, chandlerc
Subscribers: jfb, mgorny, cfe-commits
Differential Revision: https://reviews.llvm.org/D30388
llvm-svn: 299041
After r297760, __isOSVersionAtLeast in compiler-rt loads the CoreFoundation
symbols at runtime. This means that `@available` will always fail when used in a
binary without a linked CoreFoundation.
This commit forces Clang to emit a reference to a CoreFoundation symbol when
`@available` is used to ensure that linking will fail when CoreFoundation isn't
linked with the build product.
rdar://31039592
Differential Revision: https://reviews.llvm.org/D30977
llvm-svn: 298588
Summary:
Now when you ask clang to link in a bitcode module, you can tell it to
set attributes on that module's functions to match what we would have
set if we'd emitted those functions ourselves.
This is particularly important for fast-math attributes in CUDA
compilations.
Each CUDA compilation links in libdevice, a bitcode library provided by
nvidia as part of the CUDA distribution. Without this patch, if we have
a user function F that is compiled with -ffast-math and calls a
function G from libdevice, F will have the unsafe-fp-math=true (etc.)
attributes, but G will have no attributes.
Since F calls G, the inliner will merge G's attributes into F's. It
considers the lack of an unsafe-fp-math=true attribute on G to be
tantamount to unsafe-fp-math=false, so it "merges" these by setting
unsafe-fp-math=false on F.
This then continues up the call graph, until every function that
(transitively) calls something in libdevice gets unsafe-fp-math=false
set, thus disabling fastmath in almost all CUDA code.
Reviewers: echristo
Subscribers: hfinkel, llvm-commits, mehdi_amini
Differential Revision: https://reviews.llvm.org/D28538
llvm-svn: 293097
Introduced in r181561 - it may've been subsumed by work done to allow
emission of declarations for vtable types while still emitting some of
their member functions correctly for those declarations. Whatever the
reason, the tests pass without this code now.
llvm-svn: 292439
Our newly aggressive constant folding logic makes it possible for
CGExprConstant to see the same CompoundLiteralExpr more than once. So,
emitting a new GlobalVariable every time we see a CompoundLiteral is no
longer correct.
We had a similar issue with BlockExprs that was caught while testing
said aggressive folding, so I applied the same style of fix (see D26410)
here. If we find yet another case where this needs to happen, we should
probably refactor this so we don't have a third DenseMap+getter+setter.
As a design note: getAddrOfConstantCompoundLiteralIfEmitted is really
only intended to be called by ConstExprEmitter::EmitLValue. So,
returning a GlobalVariable* instead of a ConstantAddress costs us
effectively nothing, and saves us either a few bytes per entry in our
map or a bit of code duplication.
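A minimal sketch of the map+getter+setter shape described above (stand-in
types and a plain std::unordered_map instead of the actual CodeGenModule
members and llvm::DenseMap):

  #include <unordered_map>

  struct CompoundLiteralExpr; // stand-in for the AST node
  struct GlobalVariable;      // stand-in for the emitted global

  class ConstantEmissionCache {
    std::unordered_map<const CompoundLiteralExpr *, GlobalVariable *> Emitted;

  public:
    // Returns the previously emitted global, or null if none exists yet.
    GlobalVariable *getAddrOfConstantCompoundLiteralIfEmitted(
        const CompoundLiteralExpr *E) const {
      auto It = Emitted.find(E);
      return It == Emitted.end() ? nullptr : It->second;
    }

    // Records the global emitted for E so that later visits reuse it.
    void setAddrOfConstantCompoundLiteral(const CompoundLiteralExpr *E,
                                          GlobalVariable *GV) {
      Emitted[E] = GV;
    }
  };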
llvm-svn: 290661
This is a recommit of r290149, which was reverted in r290169 due to msan
failures. msan was failing because we were calling
`isMostDerivedAnUnsizedArray` on an invalid designator, which caused us
to read uninitialized memory. To fix this, the logic of the caller of
said function was simplified, and we now have a `!Invalid` assert in
`isMostDerivedAnUnsizedArray`, so we can catch this particular bug more
easily in the future.
Fingers crossed that this patch sticks this time. :)
Original commit message:
This patch does three things:
- Gives us the alloc_size attribute in clang, which lets us infer the
number of bytes handed back to us by malloc/realloc/calloc/any user
functions that act in a similar manner.
- Teaches our constexpr evaluator that evaluating some `const` variables
is OK sometimes. This is why we have a change in
test/SemaCXX/constant-expression-cxx11.cpp and other seemingly
unrelated tests. Richard Smith okay'ed this idea some time ago in
person.
- Uniques some Blocks in CodeGen, which was reviewed separately at
D26410. Lack of uniquing only really shows up as a problem when
combined with our new eagerness in the face of const.
llvm-svn: 290297
This commit fails MSan when running test/CodeGen/object-size.c in
a confusing way. After some discussion with George, it isn't really
clear what is going on here. We can make the MSan failure go away by
testing for the invalid bit, but *why* things are invalid isn't clear.
And yet, other code in the surrounding area is doing precisely this and
testing for invalid.
George is going to take a closer look at this to better understand the
nature of the failure and recommit it, for now backing it out to clean
up MSan builds.
llvm-svn: 290169
This patch does three things:
- Gives us the alloc_size attribute in clang, which lets us infer the
number of bytes handed back to us by malloc/realloc/calloc/any user
functions that act in a similar manner (see the sketch after this list).
- Teaches our constexpr evaluator that evaluating some `const` variables
is OK sometimes. This is why we have a change in
test/SemaCXX/constant-expression-cxx11.cpp and other seemingly
unrelated tests. Richard Smith okay'ed this idea some time ago in
person.
- Uniques some Blocks in CodeGen, which was reviewed separately at
D26410. Lack of uniquing only really shows up as a problem when
combined with our new eagerness in the face of const.
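For the first point, a small illustration (hypothetical allocator, not taken
from the patch's tests):

  #include <cstddef>
  #include <cstdlib>

  // alloc_size(1): the returned pointer addresses a buffer whose size in
  // bytes is given by the first argument.
  __attribute__((alloc_size(1))) void *my_malloc(size_t n) { return malloc(n); }

  int main() {
    void *p = my_malloc(16);
    // With the attribute (and optimization), this can now fold to 16 instead
    // of "unknown".
    size_t size = __builtin_object_size(p, 0);
    free(p);
    return (int)size;
  }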
Differential Revision: https://reviews.llvm.org/D14274
llvm-svn: 290149
Properly attribute DLL storage to runtime functions. When generating the
runtime function, scan for an existing declaration which may provide an explicit
declaration (local storage) or a DLL import or export storage from the user.
Honour that if available. Otherwise, if building with a local visibility of the
public or standard namespaces (-flto-visibility-public-std), give the symbols
local storage (it indicates a /MT[d] link, so static runtime). Otherwise,
assume that the link is dynamic, and give the runtime function dllimport
storage.
This allows for implementations to get the correct storage as long as they are
properly declared, the user to override the import storage, and in case no
explicit storage is given, use of the import storage.
llvm-svn: 289776
On the amdgcn target, null pointers in the global, constant, and generic address spaces take the value 0, but null pointers in the private and local address spaces take the value -1. Currently LLVM assumes all null pointers take the value 0, which results in incorrectly translated IR. To work around this issue, instead of emitting null pointers in the local and private address spaces, a null pointer in the generic address space is emitted and cast to the local and private address spaces.
Tentative definitions of global variables with a non-zero initializer will have weak linkage instead of common linkage, since common linkage requires a zero initializer and does not have an explicit section to hold the non-zero value.
Virtual member functions getNullPointer and performAddrSpaceCast are added to TargetCodeGenInfo; by default they return ConstantPointerNull and emit an addrspacecast instruction, respectively. A virtual member function getNullPointerValue is added to TargetInfo, which by default returns 0. Each target can override these virtual functions to get the target-specific null pointer and null pointer value for a specific address space, and to perform target-specific translations for addrspacecast.
A wrapper function getNullPointer is added to CodeGenModule and getTargetNullPointerValue is added to ASTContext to facilitate getting the target-specific null pointers and their values.
This change has no effect on targets other than amdgcn. Other targets can provide support for a non-zero null pointer in a similar way.
This change only provides support for non-zero null pointers for C and OpenCL. Support for other languages will be added incrementally later.
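A simplified, self-contained sketch of the TargetInfo hook (hypothetical
names and illustrative address-space numbers, not the exact Clang
signatures):

  #include <cstdint>
  #include <iostream>

  struct TargetInfoStub {
    // Default for all targets: a null pointer has the value 0 everywhere.
    virtual uint64_t getNullPointerValue(unsigned AddrSpace) const {
      (void)AddrSpace;
      return 0;
    }
    virtual ~TargetInfoStub() = default;
  };

  struct AMDGCNTargetInfoStub : TargetInfoStub {
    // Illustrative numbering: 3 = local, 5 = private; null is all-ones there.
    uint64_t getNullPointerValue(unsigned AddrSpace) const override {
      return (AddrSpace == 3 || AddrSpace == 5) ? ~0ULL : 0;
    }
  };

  int main() {
    AMDGCNTargetInfoStub AMDGCN;
    std::cout << AMDGCN.getNullPointerValue(1) << " "   // global space: 0
              << AMDGCN.getNullPointerValue(5) << "\n"; // private: all-ones
  }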
Differential Revision: https://reviews.llvm.org/D26196
llvm-svn: 289252
performed at the CodeGenModule level.
Would be NFC except we now also use a different uniquing structure so
that we don't get spurious conflicts if you ask for both an NSString
and a CFString for the same content (which is possible with builtins).
llvm-svn: 288287
Looks like CurFn's name outlives FunctionName, so we can just pass
StringRefs around rather than going from a StringRef to a std::string
to a const char* to a StringRef.
llvm-svn: 285873
This is essential when clang is running in incremental processing mode because
we don't want to reemit the 'tors over and over again.
Patch by Axel Naumann!
Reviewed by Richard Smith and me. (https://reviews.llvm.org/D25605)
llvm-svn: 285277
abstract information about the callee. NFC.
The goal here is to make it easier to recognize indirect calls and
trigger additional logic in certain cases. That logic will come in
a later patch; in the meantime, I felt that this was a significant
improvement to the code.
llvm-svn: 285258
Summary:
Emitting deferred diagnostics during codegen was a hack. It did work,
but usability was poor, both for us as compiler devs and for users. We
don't codegen if there are any sema errors, so for users this meant that
they wouldn't see deferred errors if there were any non-deferred errors.
For devs, this meant that we had to carefully split up our tests so that
when we tested deferred errors, we didn't emit any non-deferred errors.
This change moves checking for deferred errors into Sema. See the big
comment in SemaCUDA.cpp for an overview of the idea.
This checking adds overhead to compilation, because we have to maintain
a partial call graph. As a result, this change makes deferred errors a
CUDA-only concept (whereas before they were a general concept). If
anyone else wants to use this framework for something other than CUDA,
we can generalize at that time.
This patch makes the minimal set of test changes -- after this lands,
I'll go back through and do a cleanup of the tests that we no longer
have to split up.
Reviewers: rnk
Subscribers: cfe-commits, rsmith, tra
Differential Revision: https://reviews.llvm.org/D25541
llvm-svn: 284158
Move the definition of `getTriple()` into the header. It would just call
`getTarget().getTriple()`. Inline the definition to allow the compiler to see
the same amount of the layout as previously. Remove the more verbose
`getTarget().getTriple()` in favour of `getTriple()`.
llvm-svn: 281487
Summary:
This patch lets you create diagnostics that are emitted if and only if a
particular FunctionDecl is codegen'ed.
This is necessary for CUDA, where some constructs -- e.g. calls from
host+device functions to host functions when compiling for device -- are
allowed to appear in semantically-correct programs, but only if they're
never codegen'ed.
Reviewers: rnk
Subscribers: cfe-commits, tra
Differential Revision: https://reviews.llvm.org/D23241
llvm-svn: 278735
Currently Clang uses int32 to represent sampler_t, which has been a source of issues for some backends, because in some backends sampler_t cannot be represented by int32. They have to depend on kernel argument metadata and use IPA to find the sampler arguments and global variables and transform them to a target-specific sampler type.
This patch uses the opaque pointer type opencl.sampler_t* for sampler_t. For each use of a file-scope sampler variable, it generates a function call to __translate_sampler_initializer. For each initialization of a function-scope sampler variable, it also generates a call to __translate_sampler_initializer.
Each builtin library can implement its own __translate_sampler_initializer(). Since the real sampler type tends to be architecture dependent, allowing it to be initialized by a library function simplifies backend design. A typical implementation of __translate_sampler_initializer could be a table lookup of real sampler literal values. Since its argument is always a literal, the returned pointer is known at compile time and easily optimized, eventually becoming literal values put directly into image read instructions.
This patch is partially based on Alexey Sotkin's work in Khronos Clang (3d4eec6162).
Differential Revision: https://reviews.llvm.org/D21567
llvm-svn: 277024
It makes compiler-rt tests fail if the gold plugin is enabled.
Revert "Rework interface for bitset-using features to use a notion of LTO visibility."
Revert "Driver: only produce CFI -fvisibility= error when compiling."
Revert "clang/test/CodeGenCXX/cfi-blacklist.cpp: Exclude ms targets. They would be non-cfi."
llvm-svn: 267871
Bitsets, and the compiler features they rely on (vtable opt, CFI),
only have visibility within the LTO'd part of the linkage unit. Therefore,
only enable these features for classes with hidden LTO visibility. This
notion is based on object file visibility or (on Windows)
dllimport/dllexport attributes.
We provide the [[clang::lto_visibility_public]] attribute to override the
compiler's LTO visibility inference in cases where the class is defined
in the non-LTO'd part of the linkage unit, or where the ABI supports
calling classes derived from abstract base classes with hidden visibility
in other linkage units (e.g. COM on Windows).
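For example (an illustrative declaration, not taken from the patch):

  // A class implemented in the non-LTO'd part of the program (for instance
  // in a separately built DSO); the attribute overrides the inferred hidden
  // LTO visibility for it.
  class [[clang::lto_visibility_public]] ExternalInterface {
  public:
    virtual void run() = 0;
    virtual ~ExternalInterface() {}
  };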
If the cross-DSO CFI mode is enabled, bitset checks are emitted even for
classes with public LTO visibility, as that mode uses a separate mechanism
to cause bitsets to be exported.
This mechanism replaces the whole-program-vtables blacklist, so remove the
-fwhole-program-vtables-blacklist flag.
Because __declspec(uuid()) now implies [[clang::lto_visibility_public]], the
support for the special attr:uuid blacklist entry is removed.
Differential Revision: http://reviews.llvm.org/D18635
llvm-svn: 267784