We create an aligned temporary space and copy the content over from ap.cur to
the temporary space. This is necessary if the natural alignment of the type is
greater than the ABI alignment.
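As a rough sketch of the idea (this is an illustration, not the CodeGen
implementation; the helper name and parameters are made up):

#include <stdlib.h>
#include <string.h>

/* Hypothetical helper: if the type's natural alignment exceeds what the ABI
   guarantees for the va_list slot, copy the bytes into a suitably aligned
   temporary and hand that back instead of ap.cur itself. */
static void *get_aligned_va_slot(void *cur, size_t size,
                                 size_t natural_align, size_t abi_align) {
  if (natural_align <= abi_align)
    return cur;                               /* slot already aligned enough */
  /* round the allocation up to a multiple of the alignment (C11 requires it) */
  size_t alloc_size = (size + natural_align - 1) & ~(natural_align - 1);
  void *tmp = aligned_alloc(natural_align, alloc_size);
  memcpy(tmp, cur, size);                     /* copy the argument bytes over */
  return tmp;
}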
rdar://12439123
llvm-svn: 166040
Convert the uses of the Attributes class over to the new format. The
Attributes::get method call now takes an LLVM context so that the attributes
object can be uniquified and stored.
llvm-svn: 165918
For 64-bit PowerPC SVR4, an aggregate containing only one
floating-point field (float, double, or long double) must be passed in
a register as though just that field were present. This patch
addresses the issue during Clang code generation by specifying in the
ABIArgInfo for the argument that the underlying type is passed
directly in a register. The included test case verifies flat and
nested structs for the three data types.
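For illustration (these declarations are made up, not the patch's test case),
each of these is passed as though only its floating-point member were present:

struct flat   { double d; };
struct nested { struct { long double ld; } inner; };

void use_flat(struct flat f);      /* passed like a lone double */
void use_nested(struct nested n);  /* passed like a lone long double */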
llvm-svn: 165816
Most of the pieces for this were already in place, but a proper EmitVAArg
is needed for aggregates and complex numbers to be handled. Although the
va_list for 64-bit PowerPC SVR4 consists of GPRs 3 through 10 together with
the overflow portion of the parameter save area, we can treat va_list as
pointing to contiguous memory for all parameters, since the back end forces
the parameter GPRs to memory for varargs functions.
There is no need at this time to model parameters and return values beyond
what the DefaultABIInfo provides.
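A made-up example of the kind of code this enables; with a working EmitVAArg,
aggregates and complex values come out of va_arg like any other type:

#include <complex.h>
#include <stdarg.h>

struct pair { double a, b; };

double sum_varargs(int count, ...) {
  va_list ap;
  va_start(ap, count);
  struct pair p = va_arg(ap, struct pair);          /* aggregate */
  double _Complex c = va_arg(ap, double _Complex);  /* complex */
  va_end(ap);
  return p.a + p.b + creal(c) + cimag(c);
}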
llvm-svn: 165143
This patch uses a new ABIInfo implementation specific to the le32
target, rather than falling back to DefaultABIInfo. Its behavior is
basically the same, but it also allows the regparm argument attribute.
It also includes basic tests for argument codegen and attributes.
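For example (illustrative only; assumes a regparm count the target accepts),
the attribute the new ABIInfo allows:

__attribute__((regparm(2))) int add_in_regs(int a, int b) {
  return a + b;
}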
llvm-svn: 163333
Most of the code guarded by ANDROIDEABI is not
ARM-specific and has no relation to arm-eabi.
Thus, it is more natural to call this
environment "Android" instead of "ANDROIDEABI".
Note: We are not using ANDROID because several projects
are using "-DANDROID" as the conditional compilation
flag.
llvm-svn: 163088
attribute. It is a variation of the x86_64 ABI:
* A struct returned indirectly uses the first register argument to pass the
pointer.
* Floats, doubles, and structs containing only one of them are not passed in
registers.
* Other structs are split into registers if they fit in the remaining ones.
Otherwise they are passed in memory.
* When a struct doesn't fit, it still consumes the registers.
llvm-svn: 161022
values:
- Return integer vectors in integer registers.
- Pass vector arguments in integer registers.
- Set an upper bound for argument alignment. The largest alignment is 8 bytes
for O32 and 16 bytes for N32/N64.
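Illustrative declarations (not taken from the patch) of the kinds of values
these rules cover:

typedef int v4i32 __attribute__((vector_size(16)));

v4i32 pass_and_return(v4i32 v);  /* argument and return value both use
                                    integer registers under these rules */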
llvm-svn: 159676
In addition, I've made the pointer and reference typedefs 'void' rather than T*
just so they can't get misused. I would've omitted them entirely, but
std::distance likes them to be there even if it doesn't use them.
This rolls back r155808 and r155869.
Review by Doug Gregor incorporating feedback from Chandler Carruth.
llvm-svn: 158104
A vector should be returned via the hidden pointer argument unless its size
is 16 bytes or smaller and the target ABI is N32 or N64.
llvm-svn: 156642
filter_decl_iterator had a weird mismatch where both op* and op-> returned T*,
making it difficult to generalize this filtering behavior into a reusable
library of any kind.
This change errs on the side of value, making op-> return T* and op* return
T&.
(reviewed by Richard Smith)
llvm-svn: 155808
- We do this when it is easy to determine that the backend will pass them on
the stack properly by itself.
Currently LLVM codegen is really bad in some cases with byval, for example, on
the test case here (which is derived from Sema code, which likes to pass
SourceLocations around)::
struct s47 { unsigned a; };
void f47(int,int,int,int,int,int,struct s47);
void test47(int a, struct s47 b) { f47(a, a, a, a, a, a, b); }
we used to emit code like this::
...
movl %esi, -8(%rbp)
movl -8(%rbp), %ecx
movl %ecx, (%rsp)
...
to handle moving the struct onto the stack, which is just appalling.
Now we generate::
movl %esi, (%rsp)
which seems better, no?
llvm-svn: 152462
optional argument passed through the variadic ellipsis)
potentially affects how we need to lower it. Propagate
this information down to the various getFunctionInfo(...)
overloads on CodeGenTypes. Furthermore, rename those
overloads to clarify their distinct purposes, and make
sure we're calling the right one in the right place.
This has a nice side-effect of making it easier to construct
a function type, since the 'variadic' bit is no longer
separable.
This shouldn't really change anything for our existing
platforms, with one minor exception --- we should now call
variadic ObjC methods with the ... in the "right place"
(see the test case), which I guess matters for anyone
running GNUStep on MIPS. Mostly it's just a substantial
clean-up.
llvm-svn: 150788
for the arm-linux-androideabi triple in particular.
Also use this to do a better job of selecting soft FP settings.
Patch by Evgeniy Stepanov.
llvm-svn: 147872
is inserted before the real argument. Padding is needed to ensure the backend
reads from or writes to the correct argument slots when the original alignment
of a byval structure is unavailable due to flattening.
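A made-up example of the situation: once the struct below is flattened for
byval passing, its 16-byte alignment is no longer visible, so padding inserted
before it keeps it (and any later arguments) in correctly aligned slots:

struct aligned16 {
  double d __attribute__((aligned(16)));
  double e;
};

void callee(int x, struct aligned16 s);  /* s needs a 16-byte-aligned slot */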
llvm-svn: 147699
- Remodel Expr::EvaluateAsInt to behave like the other EvaluateAs* functions,
and add Expr::EvaluateKnownConstInt to capture the current fold-or-assert
behaviour.
- Factor out evaluation of bitfield bit widths.
- Fix a few places which would evaluate an expression twice: once to determine
whether it is a constant expression, then again to get the value.
llvm-svn: 141561
if the definition has a non-variadic prototype with compatible
parameters. Therefore, the default rule for such calls must be to
use a non-variadic convention. Achieve this by casting the callee to
the function type with which it is required to be compatible, unless
the target specifically opts out and insists that unprototyped calls
should use the variadic rules. The only case of that I'm aware of is
the x86-64 convention, which passes arguments the same way in both
cases but also sets a small amount of extra information; here we seek
to maintain compatibility with GCC, which does set this when calling
an unprototyped function.
Addresses PR10810 and PR10713.
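A made-up example of the case in question:

void f();                   /* unprototyped declaration */

void caller(void) {
  f(42);                    /* must be lowered with the non-variadic convention */
}

void f(int x) { (void)x; }  /* compatible, non-variadic definition */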
llvm-svn: 140241
builtin types (when requested). This is another step toward making
ASTUnit build the ASTContext as needed when loading an AST file,
rather than doing so after the fact. No actual functionality change (yet).
llvm-svn: 138985
A homogeneous aggregate is an aggregate data structure where after flattening
any nesting there are 1 to 4 elements of the same base type that is either a
float, double, or Neon vector. All Neon vectors of the same size, either 64
or 128 bits, are treated as equivalent for this purpose. When using the
AAPCS-VFP ABI, check for homogeneous aggregates and pass them as arguments by
expanding them into a sequence of their base types. This requires extending
the existing support for expanded arguments to handle not only structs, but
also constant arrays and complex types.
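Some made-up examples of homogeneous aggregates under this definition:

typedef float vec4f __attribute__((vector_size(16)));     /* Neon-sized vector */

struct two_doubles   { double a, b; };                     /* 2 x double */
struct nested_floats { struct { float x, y; } p[2]; };     /* flattens to 4 x float */
struct two_vectors   { vec4f v[2]; };                      /* 2 x 128-bit vector */

Each is passed by expanding it into a sequence of its base type.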
llvm-svn: 136767
This reverts commit 67d097e1232b7d66f58989c16a45b8a11721f76e.
We found a miscompile with ARM byval, which is still being investigated.
In the meantime, this works around the problem by disabling ARM byval.
Conflicts:
lib/CodeGen/TargetInfo.cpp
llvm-svn: 136662
without bailing out when va_arg is an aggregate expression. However,
alignment checking needs to be added in isSafeToEliminateVarargsCast in
InstCombineCalls.cpp in order to produce correct MIPS code (see the link below).
http://lists.cs.uiuc.edu/pipermail/llvmdev/2011-July/042047.html
llvm-svn: 136647
Note that because we don't usually touch the MMX registers anyway, all -mno-mmx
needs to do is tweak the x86-32 calling convention a little for vectors that
look like MMX vectors, and prevent the definition of __MMX__.
clang doesn't actually stop the user from using MMX inline asm operands or MMX
builtins in -mno-mmx mode; as a QOI issue, it would be nice to diagnose, but I
doubt it really matters much.
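For illustration, a vector type that "looks like" an MMX vector; only how the
x86-32 convention handles such a value changes under -mno-mmx, and __MMX__ is
left undefined:

typedef short v4i16 __attribute__((vector_size(8)));

v4i16 pass_through(v4i16 x);

#ifndef __MMX__
/* compiled with -mno-mmx (or for a target without MMX) */
#endif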
<rdar://problem/9694837>
llvm-svn: 134770
The fixed implementation is compatible with the implementation both gcc and llvm-gcc use.
rdar://9686430. (This is the issue that was reported in the thread
"[LLVMdev] Segfault calling LLVM libs from a clang-compiled executable".)
llvm-svn: 134059
Language-design credit goes to a lot of people, but I particularly want
to single out Blaine Garst and Patrick Beard for their contributions.
Compiler implementation credit goes to Argyrios, Doug, Fariborz, and myself,
in no particular order.
llvm-svn: 133103
generator will give it something sufficient. This is important because
the mid-level optimizer doesn't know what alignment is required otherwise.
llvm-svn: 131879
AAPCS+VFP), similar to fastcall / stdcall / whatevercall seen on x86.
In particular, all library functions should always use AAPCS regardless of the
floating-point ABI in use.
llvm-svn: 129534
clobber with the 'y' constraint. Otherwise, we get the wrong return type and an
assert, because it created a '<1 x i64>' vector type instead of the x86_mmx
type.
llvm-svn: 127185
function parameters weren't converted to use the correct type (x86_mmx). Add a
check, similar to the one in llvm-gcc, to see if the x86_mmx type is needed for
that function parameter. If so, coerce the parameter to that type.
llvm-svn: 116684
- Therefore, we can lower out the NEON wrapper structs and pass the vectors
directly. This makes a huge difference in the cleanliness of the IR after
optimization.
- I will trust, but verify, via future ABITest testing (for APCS-GNU, at
least).
llvm-svn: 114618
with a non-default-stack-ABI-alignment (of 16).
- This fixes ABI conformance, but breaks codegen since we now have
underaligned arguments. Marginal improvement overall, though, and this will be
fixed in the next commit.
llvm-svn: 114113
caused by my ABI work. Passing:
struct outer {
  int x;
  struct epsilon_matcher {} e;
  int f;
};
as {i32,i32} isn't safe, because the offset of the second element
needs to be at 8 when it is interpreted as a memory value.
llvm-svn: 112686
pointers. I find the resulting code to be substantially cleaner, it makes it
very easy to use the same APIs for data member pointers (which I have
conscientiously avoided here), and it avoids a plethora of potential
inefficiencies due to excessive memory copying. But we'll have to see if it
actually works.
llvm-svn: 111776
The X86-64 ABI code didn't handle the case when a struct
would get classified and turn up as "NoClass INTEGER" for
example. This is perfectly possible when the first slot
is all padding (e.g. due to empty base classes). In this
situation, the first 8-byte doesn't take a register at all,
only the second 8-byte does.
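A made-up example (using an unnamed bit-field rather than an empty base class)
where the first 8-byte classifies as NoClass and the second as INTEGER:

struct padded {
  long : 64;  /* unnamed bit-field: the first 8-byte is pure padding */
  long x;     /* occupies the second 8-byte */
};

long take(struct padded s) { return s.x; }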
This fixes this by enhancing the x86-64 abi stuff to allow
and handle this case, reverts the broken fix for PR5831,
and enhances the target independent stuff to be able to
handle an argument value in registers being accessed at an
offset from the memory value.
This is the last x86-64 calling convention related miscompile
that I'm aware of.
llvm-svn: 109848
<2 x float> instead of double. This works but can't be turned
on until I teach codegen to pass <2 x float> as one XMM register
instead of two.
llvm-svn: 109790
return where the struct has a base but no fields. This
was because the x86-64 abi logic was checking the wrong
predicate in one place.
This was introduced in r91874, which was a fix for PR5831,
which lacked a CHECK line, so I verified and added it.
llvm-svn: 109759
have a "coerce to" type which often matches the default lowering of Clang
type to LLVM IR type, but the coerce case can be handled by making them
not be the same.
This simplifies things and fixes issues where X86-64 abi lowering would
return coerce after making preferred types exactly match up. This caused
us to compile:
typedef float v4f32 __attribute__((__vector_size__(16)));
v4f32 foo(v4f32 X) {
  return X+X;
}
into this code at -O0:
define <4 x float> @foo(<4 x float> %X.coerce) nounwind {
entry:
%retval = alloca <4 x float>, align 16 ; <<4 x float>*> [#uses=2]
%coerce = alloca <4 x float>, align 16 ; <<4 x float>*> [#uses=2]
%X.addr = alloca <4 x float>, align 16 ; <<4 x float>*> [#uses=3]
store <4 x float> %X.coerce, <4 x float>* %coerce
%X = load <4 x float>* %coerce ; <<4 x float>> [#uses=1]
store <4 x float> %X, <4 x float>* %X.addr
%tmp = load <4 x float>* %X.addr ; <<4 x float>> [#uses=1]
%tmp1 = load <4 x float>* %X.addr ; <<4 x float>> [#uses=1]
%add = fadd <4 x float> %tmp, %tmp1 ; <<4 x float>> [#uses=1]
store <4 x float> %add, <4 x float>* %retval
%0 = load <4 x float>* %retval ; <<4 x float>> [#uses=1]
ret <4 x float> %0
}
Now we get:
define <4 x float> @foo(<4 x float> %X) nounwind {
entry:
%X.addr = alloca <4 x float>, align 16 ; <<4 x float>*> [#uses=3]
store <4 x float> %X, <4 x float>* %X.addr
%tmp = load <4 x float>* %X.addr ; <<4 x float>> [#uses=1]
%tmp1 = load <4 x float>* %X.addr ; <<4 x float>> [#uses=1]
%add = fadd <4 x float> %tmp, %tmp1 ; <<4 x float>> [#uses=1]
ret <4 x float> %add
}
This implements rdar://8248065
llvm-svn: 109733
Before we'd compile the example into something like:
%coerce.dive2 = getelementptr %struct.v4f32wrapper* %retval, i32 0, i32 0 ; <<4 x float>*> [#uses=1]
%1 = bitcast <4 x float>* %coerce.dive2 to <2 x double>* ; <<2 x double>*> [#uses=1]
%2 = load <2 x double>* %1, align 1 ; <<2 x double>> [#uses=1]
ret <2 x double> %2
Now we produce:
%coerce.dive2 = getelementptr %struct.v4f32wrapper* %retval, i32 0, i32 0 ; <<4 x float>*> [#uses=1]
%0 = load <4 x float>* %coerce.dive2, align 1 ; <<4 x float>> [#uses=1]
ret <4 x float> %0
llvm-svn: 109732