CGBlocks.cpp.
This commit fixes a bug in clang's code-gen where it creates the
following functions but doesn't attach function attributes to them:
__copy_helper_block_
__destroy_helper_block_
__Block_byref_object_copy_
__Block_byref_object_dispose_
rdar://problem/20828324
Differential Revision: http://reviews.llvm.org/D13525
llvm-svn: 249735
Summary:
This change adds support for `__builtin_ms_va_list`, a GCC extension for
variadic `ms_abi` functions. The existing `__builtin_va_list` support is
inadequate for this because `va_list` is defined differently in the Win64
ABI vs. the System V/AMD64 ABI.
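A minimal usage sketch (the ms_abi attribute and the companion
__builtin_ms_va_start/__builtin_ms_va_end builtins are assumed here, not
spelled out in this message):

__attribute__((ms_abi)) int sum_ints(int count, ...) {
  __builtin_ms_va_list ap;
  __builtin_ms_va_start(ap, count);      // Win64-style va_list, even when
                                         // targeting a System V host
  int total = 0;
  for (int i = 0; i < count; ++i)
    total += __builtin_va_arg(ap, int);  // va_arg is generic over the list type
  __builtin_ms_va_end(ap);
  return total;
}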
Depends on D1622.
Reviewers: rsmith, rnk, rjmccall
CC: cfe-commits
Differential Revision: http://reviews.llvm.org/D1623
llvm-svn: 247941
Introduce an Address type to bundle a pointer value with an
alignment. Introduce APIs on CGBuilderTy to work with Address
values. Change core APIs on CGF/CGM to traffic in Address where
appropriate. Require alignments to be non-zero. Update a ton
of code to compute and propagate alignment information.
As part of this, I've promoted CGBuiltin's EmitPointerWithAlignment
helper function to CGF and made use of it in a number of places in
the expression emitter.
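For reference, a minimal standalone sketch of the idea (not the actual clang
class, which traffics in llvm::Value and CharUnits):

#include <cassert>
#include <cstdint>

// The essence of Address: a pointer paired with an alignment that is
// required to be known and non-zero.
class Address {
  void *Pointer;
  uint64_t AlignmentInBytes;
public:
  Address(void *P, uint64_t Align) : Pointer(P), AlignmentInBytes(Align) {
    assert(Align != 0 && "Address must carry a non-zero alignment");
  }
  void *getPointer() const { return Pointer; }
  uint64_t getAlignment() const { return AlignmentInBytes; }
};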
The end result is that we should now be significantly more correct
when performing operations on objects that are locally known to
be under-aligned. Since alignment is not reliably tracked in the
type system, there are inherent limits to this, but at least we
are no longer confused by standard operations like derived-to-base
conversions and array-to-pointer decay. I've also fixed a large
number of bugs where we were applying the complete-object alignment
to a pointer instead of the non-virtual alignment, although most of
these were hidden by the very conservative approach we took with
member alignment.
Also, because IRGen now reliably asserts on zero alignments, we
should no longer be subject to an absurd but frustrating recurring
bug where an incomplete type would report a zero alignment and then
we'd naively do an alignmentAtOffset on it and emit code using an
alignment equal to the largest power-of-two factor of the offset.
We should also now be emitting much more aggressive alignment
attributes in the presence of over-alignment. In particular,
field access now uses alignmentAtOffset instead of min.
Several times in this patch, I had to change the existing
code-generation pattern in order to more effectively use
the Address APIs. For the most part, this seems to be a strict
improvement, like doing pointer arithmetic with GEPs instead of
ptrtoint. That said, I've tried very hard to not change semantics,
but it is likely that I've failed in a few places, for which I
apologize.
ABIArgInfo now always carries the assumed alignment of indirect and
indirect byval arguments. In order to cut down on what was already
a dauntingly large patch, I changed the code to never set align
attributes in the IR on non-byval indirect arguments. That is,
we still generate code which assumes that indirect arguments have
the given alignment, but we don't express this information to the
backend except where it's semantically required (i.e. on byvals).
This is likely a minor regression for those targets that did provide
this information, but it'll be trivial to add it back in a later
patch.
I partially punted on applying this work to CGBuiltin. Please
do not add more uses of the CreateDefaultAligned{Load,Store}
APIs; they will be going away eventually.
llvm-svn: 246985
This implements basic support for compiling (though not yet assembling
or linking) for a WebAssembly target. Note that ABI details are not yet
finalized, and may change.
Differential Revision: http://reviews.llvm.org/D12002
llvm-svn: 246814
The ACLE (ARM C Language Extensions) 2.0 allows the __fp16 type to be
used as a function argument or return type (ACLE 1.1 did not).
The current public release of the AAPCS (2.09) states that __fp16 values
should be converted to single-precision before being passed or returned,
but AAPCS 2.10 (to be released shortly) changes this, so that they are
passed in the least-significant 16 bits of either a GPR (for base AAPCS)
or a single-precision register (for AAPCS-VFP). This does not change how
arguments are passed if they get passed on the stack.
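For illustration, a signature that is now accepted and lowered as described
above (hypothetical function name):

// __fp16 used directly as a parameter and return type (allowed by ACLE 2.0);
// under base AAPCS the values travel in the low 16 bits of GPRs, under
// AAPCS-VFP in single-precision registers.
__fp16 half_product(__fp16 a, __fp16 b) {
  return a * b;
}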
This patch brings clang up to compliance with the latest versions of
both of these specs.
We can now set the __ARM_FP16_ARGS ACLE predefine, and we have always
been able to set the __ARM_FP16_FORMAT_IEEE predefine (we do not support
the alternative format).
llvm-svn: 246764
Original commit message:
[ARM] Allow passing/returning of __fp16 arguments
The ACLE (ARM C Language Extensions) 2.0 allows the __fp16 type to be
used as a function argument or return type (ACLE 1.1 did not).
The current public release of the AAPCS (2.09) states that __fp16 values
should be converted to single-precision before being passed or returned,
but AAPCS 2.10 (to be released shortly) changes this, so that they are
passed in the least-significant 16 bits of either a GPR (for base AAPCS)
or a single-precision register (for AAPCS-VFP). This does not change how
arguments are passed if they get passed on the stack.
This patch brings clang up to compliance with the latest versions of
both of these specs.
We can now set the __ARM_FP16_ARGS ACLE predefine, and we have always
been able to set the __ARM_FP16_FORMAT_IEEE predefine (we do not support
the alternative format).
llvm-svn: 246760
The ACLE (ARM C Language Extensions) 2.0 allows the __fp16 type to be
used as a function argument or return type (ACLE 1.1 did not).
The current public release of the AAPCS (2.09) states that __fp16 values
should be converted to single-precision before being passed or returned,
but AAPCS 2.10 (to be released shortly) changes this, so that they are
passed in the least-significant 16 bits of either a GPR (for base AAPCS)
or a single-precision register (for AAPCS-VFP). This does not change how
arguments are passed if they get passed on the stack.
This patch brings clang up to compliance with the latest versions of
both of these specs.
We can now set the __ARM_FP16_ARGS ACLE predefine, and we have always
been able to set the __ARM_FP16_FORMAT_IEEE predefine (we do not support
the alternative format).
llvm-svn: 246755
These changes are for Android x86_64 targets to be compatible
with the current Android g++ and conform to the AMD64 ABI.
https://llvm.org/bugs/show_bug.cgi?id=23897
* Return type of long double (fp128) should be fp128, not x86_fp80.
* A vararg of long double (fp128) can be passed in registers or overflowed to memory.
https://llvm.org/bugs/show_bug.cgi?id=24111
* Return value of long double (fp128) _Complex should be in memory like a structure of {fp128,fp128}.
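For illustration, signatures affected by the items above (hypothetical
function names, Android x86_64 target):

// Returned as fp128 rather than x86_fp80; varargs of this type may be passed
// in registers or overflow to memory; the _Complex variant is returned in
// memory like a structure of {fp128, fp128}.
long double ld_identity(long double x) { return x; }
long double _Complex cld_identity(long double _Complex z) { return z; }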
Differential Revision: http://reviews.llvm.org/D11437
llvm-svn: 244468
This is the PS4 counterpart to r229376, which quotes the library name if the
name contains a space. It was discovered that if a library name contains both
double-quote and space characters, quoting the name might produce unexpected
results, but we are mostly concerned with a Windows host environment, which
does not allow double-quotes or slashes in file/folder names.
Differential Revision: http://reviews.llvm.org/D11275
llvm-svn: 242689
We shouldn't crash despite the AMD64 ABI not giving clear guidance as to
how to pass around vector types <= 32 bits. Instead, classify such
vectors as INTEGER to be compatible with GCC.
This fixes PR24162.
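An illustrative case (not necessarily the exact PR24162 reproducer):

// A 4-byte (32-bit) vector: previously this tripped the classifier; it is now
// classified as INTEGER, matching how GCC passes it.
typedef short v2i16 __attribute__((vector_size(4)));
v2i16 passthrough(v2i16 v) { return v; }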
llvm-svn: 242508
For Mips direct-to-nacl, the goal is to be close to le32 front-end and
use the Mips32EL backend. This patch defines a new NaClMips32ELTargetInfo and
modifies it slightly to be close to le32. It also adds the necessary parts,
in line with ARM and X86.
Differential Revision: http://reviews.llvm.org/D10739
llvm-svn: 241678
We didn't correctly process the case where a base class is classified as
MEMORY. This would cause us to trip over an assertion.
This fixes PR24020.
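An illustrative case (not necessarily the exact PR24020 reproducer):

// The base is 32 bytes of x87 data, so it classifies as MEMORY; classifying
// Derived must now handle a MEMORY base class without tripping the assertion.
struct Base { long double a, b; };
struct Derived : Base { int i; };
Derived roundtrip(Derived d) { return d; }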
Differential Revision: http://reviews.llvm.org/D10907
llvm-svn: 241667
We forgot to run postMerge after deciding that the union had to be
classified as MEMORY. This left us with Lo == MEMORY and Hi == SSEUp
which is an invalid combination.
This fixes PR24021.
Differential Revision: http://reviews.llvm.org/D10908
llvm-svn: 241666
Summary:
Byval argument pair formation assumes that if a type is less than 8 bytes
it must be an integer and not a pointer, which is not true for x32 and NaCl.
Relax the assertion and add a test for a codegen case that triggered it.
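An illustrative codegen case (hypothetical names, assuming an x32 or NaCl
target where pointers are 4 bytes):

// Under 8 bytes in total, yet made of a pointer rather than a plain integer.
struct Callbacks { void (*on_done)(void); };
void register_callbacks(struct Callbacks cb);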
Reviewers: jvoung
Subscribers: jfb, cfe-commits
Differential Revision: http://reviews.llvm.org/D10701
llvm-svn: 240600
As specified in the SysV AVX512 ABI drafts. It follows the same scheme
as AVX2:
Arguments of type __m512 are split into eight eightbyte chunks.
The least significant one belongs to class SSE and all the others
to class SSEUP.
This also means we change the OpenMP SIMD default alignment on AVX512.
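An illustrative 512-bit vector (spelled with vector_size rather than the
__m512 typedef from <immintrin.h>):

// One eightbyte of class SSE plus seven of class SSEUP: the whole value is
// passed and returned in a single ZMM register.
typedef float v16f32 __attribute__((vector_size(64)));
v16f32 zmm_passthrough(v16f32 v) { return v; }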
Based on r240337.
Differential Revision: http://reviews.llvm.org/D9894
llvm-svn: 240338
The patch is generated using this command:
$ tools/extra/clang-tidy/tool/run-clang-tidy.py -fix \
-checks=-*,llvm-namespace-comment -header-filter='llvm/.*|clang/.*' \
work/llvm/tools/clang
To reduce churn, not touching namespaces spanning less than 10 lines.
llvm-svn: 240270
This patch fixes an assertion failure in method
'X86_64ABIInfo::GetByteVectorType'.
Method 'GetByteVectorType' (in TargetInfo.cpp) is responsible
for mapping a QualType 'Ty' (for an argument or return value) to an LLVM IR
type that, according to the ABI, must be passed in an XMM/YMM vector register.
When selecting the IR vector type, method 'GetByteVectorType' always tries to
choose the "best" IR vector type for the 'Ty' in input. In particular, if Ty
is a wrapper structure, it keeps unwrapping it until it finds a vector type VTy.
That VTy is the "preferred IR type".
However, function 'isSingleElementStruct' (used to unwrap structures) does
not know how to look through union types. So, before this patch, if Ty was in
a nest of wrapper structures with at least two union types, we would have
triggered an assertion failure (added at revision 230971).
With this patch, if method 'GetByteVectorType' fails to find the preferred
vector type, we just return a valid (although potentially 'less friendly')
vector type based on the type size. So, rather than asserting on an 'unexpected'
'Ty' in input, we conservatively return vector type <2 x double> if Ty is 16
bytes, or <4 x double> if Ty is 32 bytes.
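An illustrative input (not the exact test case from this patch): a vector
wrapped in unions that isSingleElementStruct cannot look through.

// 16 bytes total, so GetByteVectorType now falls back to <2 x double> instead
// of asserting when it cannot unwrap the nested unions.
typedef double v2f64 __attribute__((vector_size(16)));
union U1 { v2f64 v; };
union U2 { union U1 u; };
struct Wrapper { union U2 u; };
struct Wrapper pass(struct Wrapper w) { return w; }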
Differential Revision: http://reviews.llvm.org/D10190
llvm-svn: 238861
If the type isn't trivially moveable, emplace can skip a potentially
expensive move. It also saves a couple of characters.
Call sites were found with the ASTMatcher + some semi-automated cleanup.
memberCallExpr(
argumentCountIs(1), callee(methodDecl(hasName("push_back"))),
on(hasType(recordDecl(has(namedDecl(hasName("emplace_back")))))),
hasArgument(0, bindTemporaryExpr(
hasType(recordDecl(hasNonTrivialDestructor())),
has(constructExpr()))),
unless(isInTemplateInstantiation()))
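For illustration (a hypothetical call site, not one from the patch):

#include <string>
#include <vector>

void example(std::vector<std::string> &v) {
  v.push_back(std::string("before"));  // constructs a temporary, then moves it
  v.emplace_back("after");             // constructs the element in place
}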
No functional change intended.
llvm-svn: 238601
Re-land the change r238200, but with modifications in the tests that should
prevent new failures in some environments as reported with the original
change on the mailing list.
llvm-svn: 238253
On MIPS, the unsigned int type should not be zero-extended but sign-extended.
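A minimal illustration (assuming a MIPS64 target, where 32-bit values are
kept sign-extended in 64-bit registers):

// The 32-bit unsigned parameter and return value should be marked 'signext'
// in the IR rather than 'zeroext'.
unsigned int pass_u32(unsigned int x) { return x; }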
Patch by Strahinja Petrovic.
Differential Revision: http://reviews.llvm.org/D9198
llvm-svn: 238200
We already have the ABI; we don't need a "HasAVX" flag.
This will also make it easier to add an AVX512 ABI.
No functional change intended.
llvm-svn: 237989
Also add trivial handling of transparent unions.
PPC32, MSP430, and XCore apparently all rely on DefaultABIInfo. This
should worry you, because DefaultABIInfo is not implementing the rules
of any particular ABI.
Fixes PR23097, patch by Andy Gibbs.
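For illustration, the kind of type now handled (trivially) on these targets:

// A transparent union: callers may pass either member type directly, and the
// union itself is passed like its first member.
typedef union {
  int *iptr;
  const char *cptr;
} any_ptr_t __attribute__((transparent_union));
void take_any(any_ptr_t p);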
llvm-svn: 237630
This patch adds support for the z13 architecture type. For compatibility
with GCC, a pair of options -mvx / -mno-vx can be used to selectively
enable/disable use of the vector facility.
When the vector facility is present, we default to the new vector ABI.
This is characterized by two major differences:
- Vector types are passed/returned in vector registers
(except for unnamed arguments of a variable-argument list function).
- Vector types are at most 8-byte aligned.
The reason for the choice of 8-byte vector alignment is that the hardware
is able to efficiently load vectors at 8-byte alignment, and the ABI only
guarantees 8-byte alignment of the stack pointer, so requiring any higher
alignment for vectors would require dynamic stack re-alignment code.
However, for compatibility with old code that may use vector types, when
*not* using the vector facility, the old alignment rules (vector types
are naturally aligned) remain in use.
These alignment rules are not only implemented at the C language level,
but also at the LLVM IR level. This is done by selecting a different
DataLayout string depending on whether the vector ABI is in effect or not.
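For illustration (assuming a z13 target with the vector facility enabled):

// At most 8-byte aligned under the vector ABI, and passed/returned in a
// vector register rather than by reference.
typedef int v4i32 __attribute__((vector_size(16)));
v4i32 vec_add(v4i32 a, v4i32 b) { return a + b; }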
Based on a patch by Richard Sandiford.
llvm-svn: 236531
- Changed CUDALaunchBounds arguments from integers to Expr* so they can
be saved in AST for instantiation.
- Added support for template instantiation of the launch_bounds attribute.
- Moved evaluation of launch_bounds arguments to NVPTXTargetCodeGenInfo::
SetTargetAttributes() where it can be done after template instantiation.
- Added a warning on negative launch_bounds arguments.
- Amended test cases.
Differential Revision: http://reviews.llvm.org/D8985
llvm-svn: 235452
Something like { void*, void * } would be passed to a function as a [2 x i64], but returned as an i128. This patch unifies the 2 behaviours so that we also return it as a [2 x i64].
This is better for the quality of the IR and the size of the final LLVM binary: we tend to want to insert/extract values from these types, and doing so with insert/extract instructions is less IR than shifting, truncating, and or'ing values.
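For illustration (hypothetical names):

// Previously passed as [2 x i64] but returned as i128; now [2 x i64] both ways.
struct PtrPair { void *first; void *second; };
struct PtrPair identity(struct PtrPair p) { return p; }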
Reviewed by Tim Northover.
llvm-svn: 235231
C structs.
This comes up when we have a function that takes a struct and is defined in a
C++ file and used in a C file.
Before this commit, we would generate byval for C++ and expand the struct
for C, causing a difference at the IR level. We would then use a bitcast of the
function type at the call site, which prevents the inliner from inlining the
function.
This commit changes how we handle small C-like structs at the IR level, but the
backend should generate the same argument passing before and after the
commit.
Note that the condition for expanding is still overly conservative. We should be
able to expand types that are spelled with "class" and types that are not C-like.
But this commit fixes the inconsistent argument passing between C and C++.
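An illustrative setup (hypothetical names):

// Shared header: the function is defined in a C++ file and called from a C
// file; both sides should now lower the struct argument the same way, so the
// call site no longer needs a function-type bitcast that blocks the inliner.
struct Point { int x; int y; };
void move_to(struct Point p);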
Reviewed by John.
rdar://20121030
llvm-svn: 234033
Running GCC's inter-compiler ABI compatibility test suite uncovered
a couple of errors in clang's SystemZ ABI implementation. These all
affect only rare corner cases:
- Short vector types
GCC synthetic vector types defined with __attribute__ ((vector_size ...))
are always passed and returned by reference. (This is not documented in
the official ABI document, but is the de-facto ABI implemented by GCC.)
clang would do that only for vector sizes >= 16 bytes, but not for shorter
vector types.
- Float-like aggregates and empty bitfields
clang would consider any aggregate containing an empty bitfield as its
first element to be a float-like aggregate. That's obviously wrong.
According to the ABI doc, the presence of an empty bitfield makes
an aggregate *not* float-like. However, due to a bug in GCC,
empty bitfields are ignored in C++; this patch changes clang to be
compatible with this "feature" of GCC.
- Float-like aggregates and va_arg
The va_arg implementation would mis-detect some aggregates as float-like
that aren't actually passed as such. This applies to aggregates that
have only a single element of type float or double, but using an aligned
attribute that increases the total struct size to more than 8 bytes.
This error occurred because the va_arg implementation used to have a copy
of the float-like aggregate detection logic (i.e. it would call the
isFPArgumentType routine, but not perform the size check).
To simplify the logic, this patch removes the duplicated logic and
instead simply checks the (possibly coerced) LLVM argument type as
already determined by classifyArgumentType.
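Illustrative examples of the corner cases above (hypothetical names):

// Short vector type: always passed and returned by reference, matching GCC.
typedef int v2i32 __attribute__((vector_size(8)));
v2i32 short_vec(v2i32 v) { return v; }

// A single-float aggregate whose aligned attribute pushes its size past
// 8 bytes: no longer mis-detected as float-like by the va_arg path.
struct PaddedFloat { float f; } __attribute__((aligned(16)));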
llvm-svn: 233543
Support for the QPX vector instruction set, used on the IBM BG/Q supercomputer,
has recently been added to the LLVM PowerPC backend. This vector instruction
set requires some ABI modifications because the ABI on the BG/Q expects
<4 x double> vectors to be provided with 32-byte stack alignment, and to be
handled as native vector types (similar to how Altivec vectors are handled on
mainline PPC systems). I've named this ABI variant elfv1-qpx, have made this
the default ABI when QPX is supported, and have updated the ABI handling code
to provide QPX vectors with the correct stack alignment and associated
register-assignment logic.
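For illustration (assuming a BG/Q target with QPX enabled and the elfv1-qpx
ABI in effect):

// Given 32-byte stack alignment and handled as a native vector type, similar
// to how Altivec vectors are handled on mainline PPC systems.
typedef double v4f64 __attribute__((vector_size(32)));
v4f64 qpx_add(v4f64 a, v4f64 b) { return a + b; }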
llvm-svn: 231960
When passing a type with large alignment byval, we were specifying the type's
alignment rather than the alignment that the backend is actually capable of
producing (ABIAlign).
This would be OK (if odd) assuming the backend dealt with it properly;
unfortunately it doesn't, and trying to pass types with "byval align 16" can
cause it to set fp incorrectly and trash the stack during the prologue. I'll be
fixing that in a separate patch, but Clang should still be emitting IR that's
as close to its intent as possible.
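An illustrative case (hypothetical type, not the one from the radar):

// An over-aligned struct passed byval: the byval alignment emitted now
// reflects what the backend can actually produce (ABIAlign) rather than the
// type's declared alignment.
struct Overaligned { char bytes[32]; } __attribute__((aligned(32)));
void consume(struct Overaligned o);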
rdar://20059039
llvm-svn: 231706
Opt in Win64 to supporting sjlj lowering. We have the backend lowering,
so I think this was just an oversight because WinX86_64TargetCodeGenInfo
doesn't inherit from X86_64TargetCodeGenInfo.
llvm-svn: 231280
isSingleElementStruct was a bit too tight in its definition of a single-element
struct, so we got a mismatch between classify() and the actual code generation.
To make matters worse the code in GetByteVectorType still defaulted to
<2 x double> if it encountered a type it didn't know, making this a
silent miscompilation (PR22753).
Completely remove the "preferred type" stuff from GetByteVectorType and
make it fail an assertion if someone tries to use it with a type not
suitable for a vector register.
llvm-svn: 230971
The backend should now be able to handle all AAPCS rules based on argument
type, which means Clang no longer has to duplicate the register-counting logic
and the CodeGen can be significantly simplified.
llvm-svn: 230349