headers. We previously got this check backwards and treated the wrapper header
as being textual.
This is important because our wrapper headers sometimes inject macros into the
system headers that they #include_next, and sometimes replace them entirely.
llvm-svn: 285152
This has the following ABI impact:
1) Functions whose parameter or return types are non-throwing function pointer
types have different manglings in c++1z mode from prior modes. This is
necessary because c++1z permits overloading on the noexceptness of function
pointer parameter types. A warning is issued for cases that will change
manglings in c++1z mode.
2) Functions whose parameter or return types contain instantiation-dependent
exception specifications change manglings in all modes. This is necessary
to support overloading on / SFINAE in these exception specifications, which
a careful reading of the standard indicates has essentially always been
permitted.
Note that, in order to be affected by these changes, the code in question must
specify an exception specification on a function pointer/reference type that is
written syntactically within the declaration of another function. Such
declarations are very rare, and I have so far been unable to find any code
that would be affected by this. (Note that such things will probably become
more common in C++17, since it's a lot easier to get a noexcept function type
as a function parameter / return type there.)
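For illustration (hypothetical code, not taken from this commit), a declaration of the affected kind looks like:
void call(void (*callback)() noexcept);  // mangling changes in c++1z mode
void call(void (*callback)());           // c++1z treats this as a distinct overload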
This change does not affect the set of symbols produced by a build of clang,
libc++, or libc++abi.
llvm-svn: 285150
Instead of storing a pointer, store the members we need.
The reason for doing this is that it makes it far easier to create
synthetic sections. It also avoids reading data from files multiple
times, which might help with cross-endian linking and host
architectures with slow unaligned access.
There are obvious compacting opportunities, but this already has mixed
results even on native x86_64 linking.
There is also the possibility of better refactoring the code for
handling common symbols, but this already shows that a custom class is
not necessary.
llvm-svn: 285148
Summary:
Fixes PR28281.
MSVC lists indirect virtual base classes in the field list of a class.
This change makes Clang emit the information necessary for LLVM to
emit such records.
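For illustration (hypothetical code, not from the patch), an indirect virtual base looks like this:
struct V { int v; };
struct B : virtual V { };
struct C : virtual V { };
struct D : B, C { };  // V is an indirect virtual base of D and appears in its field list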
Reviewers: rnk, ruiu, zturner
Differential Revision: https://reviews.llvm.org/D25579
llvm-svn: 285132
On SparcV8, it was previously the case that a variable-sized alloca
might overlap the last fixed stack variable by 4 bytes, effectively
because 92 (the number of bytes reserved for the register spill area) !=
96 (the offset added to SP for where to start a DYNAMIC_STACKALLOC).
It's not as simple as changing 96 to 92, because variables that should
be 8-byte aligned would then be misaligned.
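A hypothetical function that could have hit the overlap (an illustration only, not one of the new test cases):
int f(int n) {
  int last_fixed = 42;                      // last fixed stack variable
  char *dyn = (char *)__builtin_alloca(n);  // variable-sized alloca
  for (int i = 0; i < n; ++i)
    dyn[i] = 0;      // before this fix, could clobber last_fixed on SparcV8
  return last_fixed;
}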
For now, simply increase the allocation size by 8 bytes for each dynamic
allocation -- wastes space, but at least doesn't overlap. As the large
comment says, doing this more efficiently will require larger changes in
LLVM.
Also adds some test cases showing that we continue to not support
dynamic stack allocation and over-alignment in the same function.
llvm-svn: 285131
Summary:
Fixes PR28281.
MSVC lists indirect virtual base classes in the field list of a class,
using LF_IVBCLASS records. This change makes LLVM emit such records
when processing DW_TAG_inheritance tags with the DIFlagVirtual and
(newly introduced) DIFlagIndirect tags.
Reviewers: rnk, ruiu, zturner
Differential Revision: https://reviews.llvm.org/D25578
llvm-svn: 285130
Summary:
Select instruction annotation in IR PGO uses the edge count to infer the
branch count. It's currently placed in setInstrumentedCounts(), where
not all of the BB counts have been computed yet. This leads to wrong branch weights.
Move the annotation after all BB counts are populated.
Reviewers: davidxl
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D25961
llvm-svn: 285128
Summary:
This is only forced on if no CPU other than Cortex-A53 is specified.
Android's platform and NDK builds need to assume that the code can
be run on Cortex-A53 devices, so we always enable the fix unless we know
specifically that the code is only running on a different kind of CPU.
Reviewers: cfe-commits
Subscribers: aemerson, rengolin, tberghammer, pirama, danalbert
Differential Revision: https://reviews.llvm.org/D25761
llvm-svn: 285127
This patch updates some of the existing Error examples, expands on the
documentation for handleErrors, and includes new sections that cover
a number of helpful utilities and common error usage idioms.
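As a rough sketch of the handleErrors-style idiom the docs describe (the error producer and names here are hypothetical, not taken from the docs):
#include "llvm/Support/Error.h"
#include "llvm/Support/raw_ostream.h"
using namespace llvm;

Error openThing(bool Fail) {  // hypothetical error producer
  if (Fail)
    return make_error<StringError>("open failed", inconvertibleErrorCode());
  return Error::success();
}

void demo() {
  // handleAllErrors dispatches on the dynamic type of the contained error.
  handleAllErrors(openThing(true), [](const StringError &E) {
    errs() << E.message() << "\n";
  });
}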
llvm-svn: 285122
- Add entries for protocols on categories
- Add relation between categories and class they extend
- Add relation between getters/setters and their corresponding property
- Use category name location as the location of category decls/defs if it has one
llvm-svn: 285120
The original patch of the A->B->A BitCast optimization was reverted by r274094 because it can cause an infinite loop inside the compiler (https://llvm.org/bugs/show_bug.cgi?id=27996).
The problem is with the following code:
xB = load (type B);
xA = load (type A);
yA = (A)xB;         // B -> A
zAn = PHI[yA, xA];  // PHI
zBn = (B)zAn;       // A -> B
store zAn;
store zBn;
optimizeBitCastFromPhi generates
zBn = (B)zAn;       // A -> B
and expects it to be combined with the following store instruction into another
store zAn;
Unfortunately, before combineStoreToValueType is called on the store instruction, optimizeBitCastFromPhi is called on the new BitCast again, and this pattern repeats indefinitely.
optimizeBitCastFromPhi only generates BitCasts for load/store instructions. Only a BitCast before a store can cause optimizeBitCastFromPhi to be re-executed, and a BitCast before a store can easily be handled by InstCombineLoadStoreAlloca.cpp. So the solution to the problem is: if all users of a CI are store instructions, we should not do optimizeBitCastFromPhi on it. Then optimizeBitCastFromPhi will not be called on the new BitCast instructions.
Differential Revision: https://reviews.llvm.org/D23896
llvm-svn: 285116
Summary:
The project has been renamed to Acxxel, so this old directory needs to
be deleted.
Reviewers: jlebar, jprice
Subscribers: beanz, mgorny, parallel_libs-commits, modocache
Differential Revision: https://reviews.llvm.org/D25964
llvm-svn: 285115
Summary:
Acxxel is basically a simplified redesign of StreamExecutor.
Here are the major points where Acxxel differs from the current
StreamExecutor design:
* Acxxel doesn't support the kernel and kernel loader types designed for
emission by the compiler to support type-safe kernel launches. For
CUDA, kernels in Acxxel can be seamlessly launched using the standard
CUDA triple-chevron kernel launch syntax that is available with clang
and nvcc. For CUDA and OpenCL, kernel arguments can be passed in the
old-fashioned way, as one array of pointers to arguments and another
array of argument sizes. Although OpenCL doesn't get a type-safe
kernel launch method, it does still get the benefit of all the memory
management wrappers. In the future, clang may add support for
triple-chevron OpenCL kernel launches, or some other type-safe OpenCL
kernel launch method.
* Acxxel does not depend on any other code in LLVM, so it builds
completely independently from LLVM.
The goal will be to check in Acxxel and remove StreamExecutor, or
perhaps to remove the old StreamExecutor and rename Acxxel to
StreamExecutor, so I think Acxxel should be thought of as a new version
of StreamExecutor, not as a separate project.
Reviewers: jlebar, jprice
Subscribers: beanz, mgorny, modocache, parallel_libs-commits
Differential Revision: https://reviews.llvm.org/D25701
llvm-svn: 285111
This reverts r285093, as it caused unexpected buildbot failures on
clang-ppc64le-linux, clang-ppc64be-linux, clang-ppc64be-linux-multistage
and clang-ppc64be-linux-lnt. Failing test ubsan/TestCases/TypeCheck/vptr.cpp.
llvm-svn: 285110
Summary:
The intention is to make APFloat an interface class, so that later I can add a second implementation class, DoubleAPFloat, to correctly implement the PPCDoubleDouble semantics. The interface of IEEEFloat is not public, and can be simplified (currently it's exactly the same as the old APFloat), but that belongs in a separate patch.
DoubleAPFloat should look like:
class DoubleAPFloat {
const fltSemantics *Semantics;
std::unique_ptr<APFloat> APFloats; // Two heap-allocated APFloats.
};
There is no functional change, nor public interface change.
Reviewers: hfinkel, chandlerc, iteratee, echristo, kbarton
Subscribers: llvm-commits, mehdi_amini
Differential Revision: https://reviews.llvm.org/D25536
llvm-svn: 285105
Add an option to allow easier experimentation by target maintainers with the
minimum number of entries to create jump tables. Also clarify the name of
the other existing option governing the creation of jump tables.
Differential revision: https://reviews.llvm.org/D25883
llvm-svn: 285104