Move to a CodeGen-level implementation: instead of adding an attribute
to clang's FunctionDecl, add the IR attribute directly. This means a
module built with
this flag is now compatible with code built without it and vice versa.
This change also results in the 'noalias' attribute no longer being added to
calls to operator new in the IR; it's now only added to the declaration. It
also fixes a bug where we failed to add the attribute to the 'nothrow' versions
(because we didn't implicitly declare them, there was no good time to inject a
fake attribute).
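For illustration, a rough sketch of the shape of the CodeGen-side
change (the helper name here is made up, and the exact
attribute-setting API differs across LLVM versions):

  // Hedged sketch: put 'noalias' on the return value of a
  // replaceable global allocation function when its IR function is
  // set up, instead of injecting an attribute on the AST decl.
  void addNoAliasToOperatorNew(const clang::FunctionDecl *FD,
                               llvm::Function *Fn,
                               const clang::CodeGenOptions &Opts) {
    if (Opts.AssumeSaneOperatorNew &&
        FD->isReplaceableGlobalAllocationFunction())
      Fn->addRetAttr(llvm::Attribute::NoAlias);
  }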
llvm-svn: 265728
Introduce an Address type to bundle a pointer value with an
alignment. Introduce APIs on CGBuilderTy to work with Address
values. Change core APIs on CGF/CGM to traffic in Address where
appropriate. Require alignments to be non-zero. Update a ton
of code to compute and propagate alignment information.
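As a minimal sketch of the idea (the real Address type carries more
API surface than this):

  // A pointer value bundled with its known alignment; constructing
  // one with zero alignment asserts, which forces callers to actually
  // compute and propagate alignment rather than defaulting it.
  class Address {
    llvm::Value *Pointer;
    clang::CharUnits Alignment;
  public:
    Address(llvm::Value *P, clang::CharUnits A)
        : Pointer(P), Alignment(A) {
      assert(!A.isZero() && "Address must have non-zero alignment");
    }
    llvm::Value *getPointer() const { return Pointer; }
    clang::CharUnits getAlignment() const { return Alignment; }
  };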
As part of this, I've promoted CGBuiltin's EmitPointerWithAlignment
helper function to CGF and made use of it in a number of places in
the expression emitter.
The end result is that we should now be significantly more correct
when performing operations on objects that are locally known to
be under-aligned. Since alignment is not reliably tracked in the
type system, there are inherent limits to this, but at least we
are no longer confused by standard operations like derived-to-base
conversions and array-to-pointer decay. I've also fixed a large
number of bugs where we were applying the complete-object alignment
to a pointer instead of the non-virtual alignment, although most of
these were hidden by the very conservative approach we took with
member alignment.
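As a concrete example of the kind of case this improves: a member of
a packed struct is locally known to be under-aligned, and the emitted
load must not assume the type's natural alignment:

  struct __attribute__((packed)) P {
    char c;
    int i;        // at offset 1, so only 1-byte aligned
  };
  int get(P *p) {
    return p->i;  // must be emitted as a load with align 1, not align 4
  }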
Also, because IRGen now reliably asserts on zero alignments, we
should no longer be subject to an absurd but frustrating recurring
bug where an incomplete type would report a zero alignment and then
we'd naively do an alignmentAtOffset on it and emit code using an
alignment equal to the largest power-of-two factor of the offset.
We should also now be emitting much more aggressive alignment
attributes in the presence of over-alignment. In particular,
field access now uses alignmentAtOffset instead of min.
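A small worked example of that difference: with a 16-byte-aligned
object, a field at offset 8 is guaranteed 8-byte alignment by
alignmentAtOffset (the largest power of two dividing both), whereas
taking the min with the field type's natural alignment only gives 4:

  struct alignas(16) S {
    int a, b, c;  // c sits at offset 8
  };
  int load_c(S *s) {
    return s->c;  // can be emitted with align 8 rather than align 4
  }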
Several times in this patch, I had to change the existing
code-generation pattern in order to more effectively use
the Address APIs. For the most part, this seems to be a strict
improvement, like doing pointer arithmetic with GEPs instead of
ptrtoint. That said, I've tried very hard to not change semantics,
but it is likely that I've failed in a few places, for which I
apologize.
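To illustrate that particular pattern (a sketch, not code from the
patch; Builder, Ptr, Offset, IntPtrTy, and PtrTy are assumed to be in
scope, and the GEP call follows recent IRBuilder signatures):

  // Before: pointer arithmetic through integers, which loses
  // provenance and alignment facts.
  //   V   = Builder.CreatePtrToInt(Ptr, IntPtrTy);
  //   V   = Builder.CreateAdd(V, llvm::ConstantInt::get(IntPtrTy, Offset));
  //   Ptr = Builder.CreateIntToPtr(V, PtrTy);
  // After: the same arithmetic expressed as a GEP.
  llvm::Value *NewPtr =
      Builder.CreateConstInBoundsGEP1_64(Builder.getInt8Ty(), Ptr, Offset);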
ABIArgInfo now always carries the assumed alignment of indirect and
indirect byval arguments. In order to cut down on what was already
a dauntingly large patch, I changed the code to never set align
attributes in the IR on non-byval indirect arguments. That is,
we still generate code which assumes that indirect arguments have
the given alignment, but we don't express this information to the
backend except where it's semantically required (i.e. on byvals).
This is likely a minor regression for those targets that did provide
this information, but it'll be trivial to add it back in a later
patch.
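To illustrate the distinction (a hedged example; how and whether an
aggregate is passed indirectly is entirely target-dependent):

  struct Big { long x[8]; };
  long sum(Big b);
  // On a target that passes Big indirectly byval, this might lower to
  //   declare i64 @sum(ptr byval(%struct.Big) align 8 %b)
  // whereas a non-byval indirect argument is now emitted without the
  // align attribute, even though codegen still assumes that alignment.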
I partially punted on applying this work to CGBuiltin. Please
do not add more uses of the CreateDefaultAligned{Load,Store}
APIs; they will be going away eventually.
llvm-svn: 246985
r154191 switched to atexit() instead of global destructors, so the intent
was probably to check for _GLOBAL__D_a _not_ being in the output. There already
is a line for _ZN3barD1Ev further up, so just remove the CHECK line referring
to that.
The only circumstance in which clang emits _GLOBAL__D_a destructor symbols is
for -fapple-kext, and that is tested by test/CodeGenCXX/cxx-apple-kext.cpp.
llvm-svn: 208222
We previously treated ARM separately from the generic Itanium ABI for
initializing guard variables. This code duplication led to things like
the ARM path missing the memory barrier for threadsafe handling, and a
highly misleading comment about how we were (mis)using the generic ABI
for ARM64 when really it went through the ARM codepath.
This unifies the two code paths. Functionally, this changes the ARM
and ARM64 code paths to use one-byte loads instead of four- and
eight-byte loads, respectively, and adds the missing atomic acquire
to these loads.
Other architectures are unchanged.
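For context, the code pattern this affects is thread-safe
initialization of a function-local static, whose fast path is exactly
the guard load in question:

  struct Widget { Widget(); };
  Widget &instance() {
    static Widget w;  // the fast path is now a one-byte acquire load
                      // of the guard; only if it is unset do we call
                      // into __cxa_guard_acquire/__cxa_guard_release
    return w;
  }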
llvm-svn: 206937
Clang implements the part of the ARM ABI saying that certain functions
(e.g., constructors and destructors) return "this", but Apple's version of
gcc and llvm-gcc did not. The libstdc++ dylib on iOS 5 was built with
llvm-gcc, which means that clang cannot safely assume that code from the C++
runtime will correctly follow the ABI. It is also possible to run into this
problem when linking with other libraries built with gcc or llvm-gcc. Even
though there is no way to reliably detect that situation, it is most likely
to come up when targeting older versions of iOS. Disabling the optimization
for any code targeting iOS 5 solves the libstdc++ problem and has a reasonably
good chance of fixing the issue for other older libraries as well.
<rdar://problem/16377159>
llvm-svn: 205272
This allows clang to use the backend parameter attribute 'returned' when generating 'this'-returning constructors and destructors in ARM and MSVC C++ ABIs.
llvm-svn: 185291
The backend will now use the generic 'returned' attribute to form tail calls where possible, as well as avoid save-restores of 'this' in some cases (specifically the cases that matter for the ARM C++ ABI).
This patch also reverts a prior front-end only partial implementation of these optimizations, since it's no longer required.
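As an example of the kind of code this helps on ARM (a sketch;
because a base-class constructor returns 'this' under the ABI, a
derived constructor with nothing else to do can tail-call it):

  struct B { B(); };
  struct D : B { D(); };
  D::D() : B() {}  // with 'returned' on B's 'this' parameter, this can
                   // compile to a tail call to B's constructor on ARM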
llvm-svn: 184205
For constructors/destructors that return 'this', if there exists a callsite
that returns 'this' and is immediately before the return instruction, make
sure we are using the return value from the callsite.
We don't need to keep 'this' alive through the callsite. It also enables
optimizations in the backend, such as tail call optimization.
Updated from r177211.
rdar://12818789
llvm-svn: 177541
For constructors/destructors that return 'this', if there exists a callsite
that returns 'this' and is immediately before the return instruction, make
sure we are using the return value from the callsite.
We don't need to keep 'this' alive through the callsite. It also enables
optimizations in the backend, such as tail call optimization.
rdar://12818789
llvm-svn: 177211
Switch to using atexit() instead of emitting a global destructor
entry. For some reason this isn't enabled for apple-kexts; it'd be
good to have documentation for that.
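Roughly, the effect is that a global's destructor gets registered at
initialization time instead of contributing to a _GLOBAL__D_a
function; the helper name below is invented for illustration:

  struct Logger { ~Logger(); };
  Logger g_log;
  // Initializing g_log now also emits, in effect:
  //   static void g_log_dtor() { g_log.~Logger(); }  // invented name
  //   /* during initialization */ atexit(g_log_dtor);
  // instead of adding an entry to a _GLOBAL__D_a function.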
Based on a patch by Nakamura Takumi!
llvm-svn: 154191
The new exception-handling model uses the 'landingpad' instruction, which is
pinned to the top of the
landing pad. (A landing pad is defined as the destination of the unwind branch
of an invoke instruction.) All of the information needed to generate the correct
exception handling metadata during code generation is encoded into the
landingpad instruction.
The new 'resume' instruction takes the place of the llvm.eh.resume intrinsic
call. It's lowered in much the same way as the intrinsic is.
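A sketch of emitting the new form through IRBuilder (illustrative
only: ExnStructTy and TypeInfo are assumed values, and at the time
this landed CreateLandingPad also took the personality function as an
argument):

  // The landingpad is pinned to the top of the block an invoke
  // unwinds to; its clauses carry the catch/filter information that
  // EH metadata generation needs.
  llvm::LandingPadInst *LP =
      Builder.CreateLandingPad(ExnStructTy, /*NumClauses=*/1);
  LP->addClause(TypeInfo);   // e.g. a catch clause for one type
  // ... dispatch on the selector value extracted from LP ...
  Builder.CreateResume(LP);  // takes the place of llvm.eh.resume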
llvm-svn: 140049
Be careful to emit landing pads that are always prepared to handle a
cleanup path. This is correct mostly because of the fix to the LLVM
inliner, r132200.
llvm-svn: 132209
Implement ARM array cookies. Also fix a few unfortunate bugs:
- throwing dtors in deletes prevented the allocation from being deleted
- the addition of the cookie to the new[] size was not being checked
for overflow (and, more seriously, was screwing up the earlier checks)
- deleting an array via a pointer to array of class type was not
causing any destructors to be run and was passing the unadjusted
pointer to the deallocator
- lots of address-space problems, in case anyone wants to support
free store in a variant address space :)
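For reference, a sketch of what an array cookie buys us (layout
details are ABI-specific, and ARM's cookies differ from generic
Itanium's):

  struct S { ~S(); };
  S *make(unsigned n) {
    return new S[n];  // allocation size is n*sizeof(S) plus the cookie,
                      // and that addition must be checked for overflow
  }
  void destroy(S *p) {
    delete[] p;       // reads the element count back from the cookie,
                      // runs that many destructors, then passes the
                      // cookie-adjusted pointer to the deallocator
  }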
llvm-svn: 112814