- Add atomic-to/from-nonatomic cast types
- Emit atomic operations for arithmetic on atomic types
- Emit non-atomic stores for initialisation of atomic types, but atomic stores and loads for every other store / load
- Add a __atomic_init() intrinsic which does a non-atomic store to an _Atomic() type. This is needed for the corresponding C11 stdatomic.h function (a rough usage sketch follows this list).
- Enable the relevant __has_feature() checks. The feature isn't 100% complete yet, but it's done enough that we want people testing it.
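A rough usage sketch of the behaviour above (the exact __atomic_init() signature here is an assumption, not taken from the commit's tests):
_Atomic(int) counter;
void touch(void) {
  __atomic_init(&counter, 0);   /* non-atomic initialising store */
  counter += 1;                 /* atomic read-modify-write */
  int v = counter;              /* atomic load */
  (void)v;
}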
Still to do:
- Make the arithmetic operations on atomic types (e.g. _Atomic(int) foo = 1; foo++;) use the correct LLVM intrinsic if one exists, rather than a loop with a cmpxchg.
- Add a signal fence builtin
- Properly set the fenv state in atomic operations on floating point values
- Correctly handle things like _Atomic(_Complex double) which are too large for an atomic cmpxchg on some platforms (this requires working out what 'correctly' means in this context)
- Fix the many remaining corner cases
llvm-svn: 148242
APValue::Array and APValue::MemberPointer. All APValue values can now be emitted
as constants.
Add new CGCXXABI entry point for emitting an APValue MemberPointer. The other
entrypoints dealing with constant member pointers are no longer necessary and
will be removed in a later change.
Switch codegen from using EvaluateAsRValue/EvaluateAsLValue to
VarDecl::evaluateValue. This performs caching and deals with the nasty cases in
C++11 where a non-const object's initializer can refer indirectly to
previously-initialized fields within the same object.
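A hedged sketch of the kind of initializer meant (not the actual testcase added here):
struct S { int a; int *p; int b; };
S s = { 10, &s.a, *s.p + 1 };   // s.b reads the previously-initialized s.a
                                // indirectly, through s.p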
Building the intermediate APValue object incurs a measurable performance hit on
pathological testcases with huge initializer lists, so we continue to build IR
directly from the Expr nodes for array and record types outside of C++11.
llvm-svn: 148178
CFStrings writable.
The strings (both Unicode and ASCII) should reside in a read-only section. E.g.,
__TEXT,__cstring instead of __DATA,__data. This is done by making the global
variable created for the strings constant despite the value of that flag.
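As a hedged illustration (the header and variable name are just for the example):
#include <CoreFoundation/CFString.h>
/* The character data backing this constant CFString should now be emitted into
 * the read-only __TEXT,__cstring section rather than __DATA,__data. */
static const CFStringRef kGreeting = CFSTR("hello");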
<rdar://problem/10657500>
llvm-svn: 147845
is inserted before the real argument. Padding is needed to ensure the backend
reads from or writes to the correct argument slots when the original alignment
of a byval structure is unavailable due to flattening.
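A hedged, target-agnostic sketch of the situation (the names and the exact slot assignment are assumptions):
struct S {
  long long d;   /* gives the struct 8-byte alignment */
  int i;
};
void callee(int a, struct S s);   /* 's' is passed byval */
void caller(struct S *p) {
  callee(0, *p);   /* once 's' is flattened into individual argument slots, a
                      padding argument may be inserted after 'a' so the struct
                      still starts at a correctly aligned slot */
}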
llvm-svn: 147699
evaluator into constant initializer handling / IRGen. The practical consequence
of this is that the bitcast now lives in the constant's definition, rather than
in its uses.
The code in the constant expression evaluator was producing vectors of the wrong
type and size (and possibly of the wrong value for a big-endian int-to-vector
bitcast). We were getting away with this only because we don't yet support
constant-folding of any expressions which inspect vector values.
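A hedged example of the kind of constant involved (the typedef is only for illustration):
typedef short v2s __attribute__((vector_size(4)));
/* An int-to-vector bitcast in a constant initializer; the element order seen
 * by a reader depends on the target's endianness. */
v2s v = (v2s)0x00010002;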
llvm-svn: 145981
The test includes a FIXME for a related case involving calls; it's a bit more complicated to fix because the RValue class doesn't keep track of alignment.
<rdar://problem/10463337>
llvm-svn: 145862
itself via an asm label.
available_externally functions are supposed to correspond to an external
function, and that is not the case in the examples in pr9614.
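A rough sketch of the pr9614-style pattern (the names are made up; this is not the actual testcase):
extern int do_open(const char *path) __asm__("do_open_2");
extern __inline __attribute__((__gnu_inline__, __always_inline__))
int do_open(const char *path) {
  /* Through the asm label, this call resolves to the external symbol
   * do_open_2, so an available_externally body emitted for it would not
   * correspond to the real external definition. */
  return do_open(path);
}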
llvm-svn: 143049
changed the return type of a compare of two unsigned vectors to be unsigned. This patch removes the check for hasIntegerRepresentation, since it's not needed, and returns the appropriate signed type.
I added a new test case and updated existing test cases that assumed an unsigned result.
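For example (the typedef names are just for illustration):
typedef unsigned int v4u __attribute__((vector_size(16)));
typedef int v4i __attribute__((vector_size(16)));
/* Comparing two unsigned vectors now yields the corresponding signed vector
 * type, with each element set to -1 (all bits) or 0. */
v4i cmp(v4u a, v4u b) { return a == b; }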
llvm-svn: 142250
Start handling debug line and scope information better:
Migrate most of the location setting within the larger API in CGDebugInfo and
update a lot of callers.
Remove the existing file/scope change machinery in UpdateLineDirectiveRegion
and replace it with DILexicalBlockFile usage.
Finishes off the rest of rdar://10246360
after fixing a few bugs that were exposed in gdb testsuite testing.
llvm-svn: 141893
Migrate most of the location setting within the larger API in CGDebugInfo and
update a lot of callers.
Remove the existing file/scope change machinery in UpdateLineDirectiveRegion
and replace it with DILexicalBlockFile usage.
Finishes off the rest of rdar://10246360
llvm-svn: 141732
if the definition has a non-variadic prototype with compatible
parameters. Therefore, the default rule for such calls must be to
use a non-variadic convention. Achieve this by casting the callee to
the function type with which it is required to be compatible, unless
the target specifically opts out and insists that unprototyped calls
should use the variadic rules. The only case of that I'm aware of is
the x86-64 convention, which passes arguments the same way in both
cases but also sets a small amount of extra information; here we seek
to maintain compatibility with GCC, which does set this when calling
an unprototyped function.
Addresses PR10810 and PR10713.
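A hedged sketch of the situation (not the PR testcases themselves):
/* Seen by the caller: an unprototyped declaration. */
void f();
void call_it(void) { f(42, 2.0); }
/* Elsewhere: a definition with a compatible non-variadic prototype, so the
 * call above must be lowered with the non-variadic convention. */
void f(int a, double b) {}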
llvm-svn: 140241
This model uses the 'landingpad' instruction, which is pinned to the top of the
landing pad. (A landing pad is defined as the destination of the unwind branch
of an invoke instruction.) All of the information needed to generate the correct
exception handling metadata during code generation is encoded into the
landingpad instruction.
The new 'resume' instruction takes the place of the llvm.eh.resume intrinsic
call. It's lowered in much the same way as the intrinsic is.
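In source terms (a plain C++ sketch; the IR details are as described above):
void might_throw();
void f() {
  try {
    might_throw();   // emitted as an 'invoke' whose unwind edge branches to a
                     // landing pad; that block begins with a 'landingpad'
                     // instruction carrying the clause information for the
                     // catch below
  } catch (...) {
  }
}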
llvm-svn: 140049
feature akin to the ARC runtime checks. Removes a terrible hack where
IR gen needed to find the declarations of those symbols in the translation
unit.
llvm-svn: 139404
These functions return a second value by writing to a pointer argument,
so they cannot be marked 'readnone' which implies that they don't access
memory.
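frexp has this shape, for instance (whether it is among the exact functions this change covers is an assumption):
#include <math.h>
/* frexp returns the mantissa and writes the exponent through 'exp'; the call
 * accesses memory, so marking it 'readnone' would allow the store to *exp to
 * be treated as dead. */
double split(double x, int *exp) {
  return frexp(x, exp);
}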
<rdar://problem/10070234>
llvm-svn: 139319
There are still a few issues which need to be resolved with code generation for atomic load and store, so I'm not converting the places which need those for now.
I'm not entirely sure what to do about __builtin_llvm_memory_barrier: the fence instruction doesn't expose all the possibilities which can be expressed by __builtin_llvm_memory_barrier. I would appreciate hearing from anyone who is using this intrinsic.
llvm-svn: 139216
emit call results into potentially aliased slots. This allows us
to properly mark indirect return slots as noalias, at the cost
of requiring an extra memcpy when assigning an aggregate call
result into a l-value. It also brings us into compliance with
the x86-64 ABI.
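A hedged illustration of the trade-off (the names are made up):
struct Big { int data[16]; };
struct Big make(void);
void assign(struct Big *dst) {
  /* 'dst' may alias other memory, so the call's indirect return slot is a
   * fresh temporary (which can be marked noalias) and the result is then
   * copied into *dst with an extra memcpy. */
  *dst = make();
}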
llvm-svn: 138599
This fixes cases where the anonymous bitfield is followed by a bitfield member.
E.g.,
struct t4
{
  char foo;
  long : 0;
  char bar : 1;
};
rdar://9859156
llvm-svn: 136991
alignment. This fixes cases where the anonymous bitfield is followed by a
non-bitfield member. E.g.,
struct t4
{
  int foo : 1;
  long : 0;
  char bar;
};
Part of rdar://9859156
llvm-svn: 136858
structures. Alignment can be enforced with the use of anonymous bitfields
(e.g., int :0), but this is not currently supported. Add this test case to
document the current state, which will hopefully be fixed shortly.
llvm-svn: 136848
designed to be executed, and its output inspected for correct values,
but we aren't executing it. We're just compiling it, and dumping it to
/dev/null. It also isn't freestanding. If there is a desire to have this
test actually stick around, complain and I'll revert this and try to add
the file checks necessary to make this actually test things.
llvm-svn: 136846
A homogeneous aggregate is an aggregate data structure in which, after flattening
any nesting, there are one to four elements of the same base type, and that base
type is either a float, double, or Neon vector. All Neon vectors of the same size, either 64
or 128 bits, are treated as equivalent for this purpose. When using the
AAPCS-VFP ABI, check for homogeneous aggregates and pass them as arguments by
expanding them into a sequence of their base types. This requires extending
the existing support for expanded arguments to handle not only structs, but
also constant arrays and complex types.
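For example (the struct names are only for illustration):
struct Point { float x, y; };
struct Rect  { struct Point origin, size; };
/* After flattening the nesting, Rect contains four elements of the single base
 * type 'float', so under AAPCS-VFP it is a homogeneous aggregate and is passed
 * by expanding it into a sequence of its base type. */
void draw(struct Rect r);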
llvm-svn: 136767