if the definition has a non-variadic prototype with compatible
parameters. Therefore, the default rule for such calls must be to
use a non-variadic convention. We achieve this by casting the callee to
the function type with which it is required to be compatible, unless
the target specifically opts out and insists that unprototyped calls
should use the variadic rules. The only case of that I'm aware of is
the x86-64 convention, which passes arguments the same way in both
cases but also sets a small amount of extra information (the number of
vector registers used, passed in %al); here we seek to maintain
compatibility with GCC, which does set this when calling an
unprototyped function.
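For illustration, a minimal C sketch of the situation (the file split and
names are made up):

/* a.c: call through an unprototyped declaration. */
int add();               /* no prototype */
int caller(void) {
  return add(1, 2);      /* now emitted with the non-variadic convention,
                            as if the callee were cast to int (*)(int, int) */
}

/* b.c: the definition has a compatible non-variadic prototype. */
int add(int a, int b) { return a + b; }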
Addresses PR10810 and PR10713.
llvm-svn: 140241
This model uses the 'landingpad' instruction, which is pinned to the top of the
landing pad. (A landing pad is defined as the destination of the unwind branch
of an invoke instruction.) All of the information needed to generate the correct
exception handling metadata during code generation is encoded into the
landingpad instruction.
The new 'resume' instruction takes the place of the llvm.eh.resume intrinsic
call. It's lowered in much the same way as the intrinsic is.
llvm-svn: 140049
feature akin to the ARC runtime checks. Removes a terrible hack where
IR gen needed to find the declarations of those symbols in the translation
unit.
llvm-svn: 139404
These functions return a second value by writing to a pointer argument,
so they cannot be marked 'readnone', which would imply that they don't access
memory.
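As a hedged illustration of the pattern (the function below is hypothetical,
not one of the affected functions): the second result is written through the
pointer argument, so the function does access memory.

/* Returns the quotient directly and the remainder through 'rem', so it
   cannot be described as never accessing memory. */
int divmod(int a, int b, int *rem) {
  *rem = a % b;
  return a / b;
}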
<rdar://problem/10070234>
llvm-svn: 139319
There are still a few issues which need to be resolved with code generation for atomic load and store, so I'm not converting the places which need those for now.
I'm not entirely sure what to do about __builtin_llvm_memory_barrier: the fence instruction doesn't expose all the possibilities that builtin can express. I would appreciate hearing from anyone who is using this intrinsic.
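For context, a small C sketch of the kind of builtins involved (the comments
describe how current Clang lowers them, not necessarily exactly what changed
in this revision):

int increment(int *counter) {
  return __sync_fetch_and_add(counter, 1);          /* atomicrmw add */
}

int try_claim(int *flag) {
  return __sync_bool_compare_and_swap(flag, 0, 1);  /* cmpxchg */
}

void full_barrier(void) {
  __sync_synchronize();                             /* fence seq_cst */
}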
llvm-svn: 139216
emit call results into potentially aliased slots. This allows us
to properly mark indirect return slots as noalias, at the cost
of requiring an extra memcpy when assigning an aggregate call
result into an l-value. It also brings us into compliance with
the x86-64 ABI.
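A C sketch of the pattern this affects (names are made up): when the
destination might alias memory the callee can see, the call result is
returned into a fresh temporary, which can safely be marked noalias, and
then copied into place.

struct Big { long data[8]; };

struct Big g;                              /* visible to the callee */

struct Big make_big(void) { return g; }    /* reads the global */

void store_result(struct Big *dest) {
  /* 'dest' might point at 'g', so the indirect-return slot passed to
     make_big() is a fresh temporary that can be marked noalias; the extra
     memcpy then copies the temporary into *dest. */
  *dest = make_big();
}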
llvm-svn: 138599
This fixes cases where the anonymous bitfield is followed by a bitfield member.
E.g.,
struct t4
{
  char foo;
  long : 0;
  char bar : 1;
};
rdar://9859156
llvm-svn: 136991
alignment. This fixes cases where the anonymous bitfield is followed by a
non-bitfield member. E.g.,
struct t4
{
  int foo : 1;
  long : 0;
  char bar;
};
Part of rdar://9859156
llvm-svn: 136858
structures. Alignment can be enforced with the use of anonymous bitfields
(e.g., int :0), but this is not currently supported. Add this test case to
document the current state, which will hopefully be fixed shortly.
llvm-svn: 136848
designed to be executed, and its output inspected for correct values,
but we aren't executing it. We're just compiling it, and dumping it to
/dev/null. It also isn't freestanding. If there is a desire to have this
test actually stick around, complain and I'll revert this and try to add
the file checks necessary to make this actually test things.
llvm-svn: 136846
A homogeneous aggregate is an aggregate data structure where, after flattening
any nesting, there are 1 to 4 elements of the same base type, which is either a
float, double, or Neon vector. All Neon vectors of the same size, either 64
or 128 bits, are treated as equivalent for this purpose. When using the
AAPCS-VFP ABI, check for homogeneous aggregates and pass them as arguments by
expanding them into a sequence of their base types. This requires extending
the existing support for expanded arguments to handle not only structs, but
also constant arrays and complex types.
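For illustration, some C types that do and do not qualify under this
definition (the type names are made up):

/* Flattens to four floats of the same base type: a homogeneous aggregate. */
struct Vec4 { float x, y, z, w; };

/* Nesting and constant arrays are flattened too; this is four doubles. */
struct Range { double lo, hi; };
struct TwoRanges { struct Range r[2]; };

/* A complex member contributes two elements of its base type. */
struct Polar { _Complex double value; double angle; };

/* Mixed base types (float and double), so not a homogeneous aggregate. */
struct Mixed { float f; double d; };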
llvm-svn: 136767