Commit Graph

976 Commits

Author SHA1 Message Date
John McCall dec348f7db Correctly emit certain implicit references to 'self' even within
a lambda.

Bug #1 is that CGF's CurFuncDecl was "stuck" at lambda invocation
functions.  Fix that by generally improving getNonClosureContext
to look through lambdas and captured statements but only report
code contexts, which is generally what's wanted.  Audit uses of
CurFuncDecl and getNonClosureAncestor for correctness.

Bug #2 is that lambdas weren't specially mapping 'self' when inside
an ObjC method.  Fix that by removing the requirement for that
and using the normal EmitDeclRefLValue path in LoadObjCSelf.

rdar://13800041

llvm-svn: 181000
2013-05-03 07:33:41 +00:00
Adrian Prantl 857f92371a Revert "Attempt to un-break the gdb buildbot."
This reverts commit 180982.

llvm-svn: 180990
2013-05-03 01:42:35 +00:00
Adrian Prantl 44f38013e2 Attempt to un-break the gdb buildbot.
- Use the debug location of the return expression for the cleanup code
  if the return expression is trivially evaluatable, regardless of the
  number of stop points in the function.
- Ensure that any EH code in the cleanup still gets the line number of
  the closing } of the lexical scope.
- Added a testcase with EH in the cleanup.

rdar://problem/13442648

llvm-svn: 180982
2013-05-03 00:44:13 +00:00
Adrian Prantl 3be10542af Ensure that the line table for functions with cleanups is sequential.
If there is cleanup code, the cleanup code gets the debug location of
the closing '}'. The subsequent ret IR-instruction does not get a
debug location. The return _expression_ will get the debug location
of the return statement.

If the function contains only a single, simple return statement,
the cleanup code may become the first breakpoint in the function.
In this case we set the debug location for the cleanup code
to the location of the return statement.

rdar://problem/13442648
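
For illustration (not part of the commit), a minimal function with a single, simple return statement and destructor cleanup at the closing brace:

// Assumed example: ~Guard() produces cleanup code at the '}', and with this
// change that cleanup takes the debug location of the return statement, so
// it does not become a stray first breakpoint in the function.
struct Guard {
  ~Guard() {}
};

int answer() {
  Guard g;      // destroyed by the cleanup emitted at the '}'
  return 42;    // trivially evaluatable return expression
}

int main() { return answer(); }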

llvm-svn: 180932
2013-05-02 17:30:20 +00:00
Benjamin Kramer 139cfc2e63 ArrayRefize code. No functionality change.
llvm-svn: 180632
2013-04-26 21:32:52 +00:00
Richard Smith 852c9db72b C++1y: Allow aggregates to have default initializers.
Add a CXXDefaultInitExpr, analogous to CXXDefaultArgExpr, and use it both in
CXXCtorInitializers and in InitListExprs to represent a default initializer.

There's an additional complication here: because the default initializer can
refer to the initialized object via its 'this' pointer, we need to make sure
that 'this' points to the right thing within the evaluation.
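
A small example (mine, not the patch's test) of the C++1y feature being implemented, including a default initializer that refers to the object through its 'this' pointer:

// Assumed example of what CXXDefaultInitExpr models in an aggregate
// initialization: 'flags' is initialized from its default member
// initializer, which reads 'id' via the implicit 'this'.
struct Widget {
  int id = 1;
  int flags = id + 1;
};

int main() {
  Widget a{};    // a.id == 1, a.flags == 2 (both defaults used)
  Widget b{7};   // b.id == 7, b.flags == 8 (default used for flags only)
  return a.flags + b.flags;
}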

llvm-svn: 179958
2013-04-20 22:23:05 +00:00
Richard Smith 2fd1d7aee3 Implement CodeGen for C++11 thread_local, following the Itanium ABI specification as discussed on cxx-abi-dev.
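
For context, a small assumed example of the construct this enables; under the Itanium ABI scheme, a thread_local with a dynamic initializer is typically accessed through a wrapper that runs the initializer on first use in each thread:

// Assumed example: 'counter' needs per-thread dynamic initialization, so
// references to it go through an ABI-specified wrapper rather than a plain
// global load.
static int initial_value() { return 41; }

thread_local int counter = initial_value();

int bump() { return ++counter; }

int main() { return bump(); }
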
llvm-svn: 179858
2013-04-19 16:42:07 +00:00
John McCall c8e0170578 Standardize accesses to the TargetInfo in IR-gen.
Patch by Stephen Lin!

llvm-svn: 179638
2013-04-16 22:48:15 +00:00
Tareq A. Siraj 24110cc733 Implement CapturedStmt AST
CapturedStmt can be used to implement generic function outlining as described in
http://lists.cs.uiuc.edu/pipermail/cfe-dev/2013-January/027540.html.

CapturedStmt is not exposed to the C api.

Serialization and template support are pending.

Author: Wei Pan <wei.pan@intel.com>

Differential Revision: http://llvm-reviews.chandlerc.com/D370

llvm-svn: 179615
2013-04-16 18:53:08 +00:00
Manman Ren c451e5766e Initial support for struct-path aware TBAA.
Added TBAABaseType and TBAAOffset in LValue. These two fields are initialized to
the actual type and 0, and are updated in EmitLValueForField.
Path-aware TBAA tags are enabled for EmitLoadOfScalar and EmitStoreOfScalar.
Added command line option -struct-path-tbaa.

llvm-svn: 178797
2013-04-04 21:53:22 +00:00
Manman Ren 092d9e8f3b revert r178784 since it does not have a commit message
llvm-svn: 178796
2013-04-04 21:51:07 +00:00
Manman Ren 037d2b252d Index: include/clang/Driver/CC1Options.td
===================================================================
--- include/clang/Driver/CC1Options.td	(revision 178718)
+++ include/clang/Driver/CC1Options.td	(working copy)
@@ -161,6 +161,8 @@
   HelpText<"Use register sized accesses to bit-fields, when possible.">;
 def relaxed_aliasing : Flag<["-"], "relaxed-aliasing">,
   HelpText<"Turn off Type Based Alias Analysis">;
+def struct_path_tbaa : Flag<["-"], "struct-path-tbaa">,
+  HelpText<"Turn on struct-path aware Type Based Alias Analysis">;
 def masm_verbose : Flag<["-"], "masm-verbose">,
   HelpText<"Generate verbose assembly output">;
 def mcode_model : Separate<["-"], "mcode-model">,
Index: include/clang/Driver/Options.td
===================================================================
--- include/clang/Driver/Options.td	(revision 178718)
+++ include/clang/Driver/Options.td	(working copy)
@@ -587,6 +587,7 @@
   Flags<[CC1Option]>, HelpText<"Disable spell-checking">;
 def fno_stack_protector : Flag<["-"], "fno-stack-protector">, Group<f_Group>;
 def fno_strict_aliasing : Flag<["-"], "fno-strict-aliasing">, Group<f_Group>;
+def fstruct_path_tbaa : Flag<["-"], "fstruct-path-tbaa">, Group<f_Group>;
 def fno_strict_enums : Flag<["-"], "fno-strict-enums">, Group<f_Group>;
 def fno_strict_overflow : Flag<["-"], "fno-strict-overflow">, Group<f_Group>;
 def fno_threadsafe_statics : Flag<["-"], "fno-threadsafe-statics">, Group<f_Group>,
Index: include/clang/Frontend/CodeGenOptions.def
===================================================================
--- include/clang/Frontend/CodeGenOptions.def	(revision 178718)
+++ include/clang/Frontend/CodeGenOptions.def	(working copy)
@@ -85,6 +85,7 @@
 VALUE_CODEGENOPT(OptimizeSize, 2, 0) ///< If -Os (==1) or -Oz (==2) is specified.
 CODEGENOPT(RelaxAll          , 1, 0) ///< Relax all machine code instructions.
 CODEGENOPT(RelaxedAliasing   , 1, 0) ///< Set when -fno-strict-aliasing is enabled.
+CODEGENOPT(StructPathTBAA    , 1, 0) ///< Whether or not to use struct-path TBAA.
 CODEGENOPT(SaveTempLabels    , 1, 0) ///< Save temporary labels.
 CODEGENOPT(SanitizeAddressZeroBaseShadow , 1, 0) ///< Map shadow memory at zero
                                                  ///< offset in AddressSanitizer.
Index: lib/CodeGen/CGExpr.cpp
===================================================================
--- lib/CodeGen/CGExpr.cpp	(revision 178718)
+++ lib/CodeGen/CGExpr.cpp	(working copy)
@@ -1044,7 +1044,8 @@
 llvm::Value *CodeGenFunction::EmitLoadOfScalar(LValue lvalue) {
   return EmitLoadOfScalar(lvalue.getAddress(), lvalue.isVolatile(),
                           lvalue.getAlignment().getQuantity(),
-                          lvalue.getType(), lvalue.getTBAAInfo());
+                          lvalue.getType(), lvalue.getTBAAInfo(),
+                          lvalue.getTBAABaseType(), lvalue.getTBAAOffset());
 }
 
 static bool hasBooleanRepresentation(QualType Ty) {
@@ -1106,7 +1107,9 @@
 
 llvm::Value *CodeGenFunction::EmitLoadOfScalar(llvm::Value *Addr, bool Volatile,
                                               unsigned Alignment, QualType Ty,
-                                              llvm::MDNode *TBAAInfo) {
+                                              llvm::MDNode *TBAAInfo,
+                                              QualType TBAABaseType,
+                                              uint64_t TBAAOffset) {
   // For better performance, handle vector loads differently.
   if (Ty->isVectorType()) {
     llvm::Value *V;
@@ -1158,8 +1161,11 @@
     Load->setVolatile(true);
   if (Alignment)
     Load->setAlignment(Alignment);
-  if (TBAAInfo)
-    CGM.DecorateInstruction(Load, TBAAInfo);
+  if (TBAAInfo) {
+    llvm::MDNode *TBAAPath = CGM.getTBAAStructTagInfo(TBAABaseType, TBAAInfo,
+                                                      TBAAOffset);
+    CGM.DecorateInstruction(Load, TBAAPath);
+  }
 
   if ((SanOpts->Bool && hasBooleanRepresentation(Ty)) ||
       (SanOpts->Enum && Ty->getAs<EnumType>())) {
@@ -1217,7 +1223,8 @@
                                         bool Volatile, unsigned Alignment,
                                         QualType Ty,
                                         llvm::MDNode *TBAAInfo,
-                                        bool isInit) {
+                                        bool isInit, QualType TBAABaseType,
+                                        uint64_t TBAAOffset) {
   
   // Handle vectors differently to get better performance.
   if (Ty->isVectorType()) {
@@ -1268,15 +1275,19 @@
   llvm::StoreInst *Store = Builder.CreateStore(Value, Addr, Volatile);
   if (Alignment)
     Store->setAlignment(Alignment);
-  if (TBAAInfo)
-    CGM.DecorateInstruction(Store, TBAAInfo);
+  if (TBAAInfo) {
+    llvm::MDNode *TBAAPath = CGM.getTBAAStructTagInfo(TBAABaseType, TBAAInfo,
+                                                      TBAAOffset);
+    CGM.DecorateInstruction(Store, TBAAPath);
+  }
 }
 
 void CodeGenFunction::EmitStoreOfScalar(llvm::Value *value, LValue lvalue,
                                         bool isInit) {
   EmitStoreOfScalar(value, lvalue.getAddress(), lvalue.isVolatile(),
                     lvalue.getAlignment().getQuantity(), lvalue.getType(),
-                    lvalue.getTBAAInfo(), isInit);
+                    lvalue.getTBAAInfo(), isInit, lvalue.getTBAABaseType(),
+                    lvalue.getTBAAOffset());
 }
 
 /// EmitLoadOfLValue - Given an expression that represents a value lvalue, this
@@ -2494,9 +2505,12 @@
 
   llvm::Value *addr = base.getAddress();
   unsigned cvr = base.getVRQualifiers();
+  bool TBAAPath = CGM.getCodeGenOpts().StructPathTBAA;
   if (rec->isUnion()) {
     // For unions, there is no pointer adjustment.
     assert(!type->isReferenceType() && "union has reference member");
+    // TODO: handle path-aware TBAA for union.
+    TBAAPath = false;
   } else {
     // For structs, we GEP to the field that the record layout suggests.
     unsigned idx = CGM.getTypes().getCGRecordLayout(rec).getLLVMFieldNo(field);
@@ -2508,6 +2522,8 @@
       if (cvr & Qualifiers::Volatile) load->setVolatile(true);
       load->setAlignment(alignment.getQuantity());
 
+      // Loading the reference will disable path-aware TBAA.
+      TBAAPath = false;
       if (CGM.shouldUseTBAA()) {
         llvm::MDNode *tbaa;
         if (mayAlias)
@@ -2541,6 +2557,16 @@
 
   LValue LV = MakeAddrLValue(addr, type, alignment);
   LV.getQuals().addCVRQualifiers(cvr);
+  if (TBAAPath) {
+    const ASTRecordLayout &Layout =
+        getContext().getASTRecordLayout(field->getParent());
+    // Set the base type to be the base type of the base LValue and
+    // update offset to be relative to the base type.
+    LV.setTBAABaseType(base.getTBAABaseType());
+    LV.setTBAAOffset(base.getTBAAOffset() +
+                     Layout.getFieldOffset(field->getFieldIndex()) /
+                                           getContext().getCharWidth());
+  }
 
   // __weak attribute on a field is ignored.
   if (LV.getQuals().getObjCGCAttr() == Qualifiers::Weak)
Index: lib/CodeGen/CGValue.h
===================================================================
--- lib/CodeGen/CGValue.h	(revision 178718)
+++ lib/CodeGen/CGValue.h	(working copy)
@@ -157,6 +157,11 @@
 
   Expr *BaseIvarExp;
 
+  /// Used by struct-path-aware TBAA.
+  QualType TBAABaseType;
+  /// Offset relative to the base type.
+  uint64_t TBAAOffset;
+
   /// TBAAInfo - TBAA information to attach to dereferences of this LValue.
   llvm::MDNode *TBAAInfo;
 
@@ -175,6 +180,10 @@
     this->ImpreciseLifetime = false;
     this->ThreadLocalRef = false;
     this->BaseIvarExp = 0;
+
+    // Initialize fields for TBAA.
+    this->TBAABaseType = Type;
+    this->TBAAOffset = 0;
     this->TBAAInfo = TBAAInfo;
   }
 
@@ -232,6 +241,12 @@
   Expr *getBaseIvarExp() const { return BaseIvarExp; }
   void setBaseIvarExp(Expr *V) { BaseIvarExp = V; }
 
+  QualType getTBAABaseType() const { return TBAABaseType; }
+  void setTBAABaseType(QualType T) { TBAABaseType = T; }
+
+  uint64_t getTBAAOffset() const { return TBAAOffset; }
+  void setTBAAOffset(uint64_t O) { TBAAOffset = O; }
+
   llvm::MDNode *getTBAAInfo() const { return TBAAInfo; }
   void setTBAAInfo(llvm::MDNode *N) { TBAAInfo = N; }
 
Index: lib/CodeGen/CodeGenFunction.h
===================================================================
--- lib/CodeGen/CodeGenFunction.h	(revision 178718)
+++ lib/CodeGen/CodeGenFunction.h	(working copy)
@@ -2211,7 +2211,9 @@
   /// the LLVM value representation.
   llvm::Value *EmitLoadOfScalar(llvm::Value *Addr, bool Volatile,
                                 unsigned Alignment, QualType Ty,
-                                llvm::MDNode *TBAAInfo = 0);
+                                llvm::MDNode *TBAAInfo = 0,
+                                QualType TBAABaseTy = QualType(),
+                                uint64_t TBAAOffset = 0);
 
   /// EmitLoadOfScalar - Load a scalar value from an address, taking
   /// care to appropriately convert from the memory representation to
@@ -2224,7 +2226,9 @@
   /// the LLVM value representation.
   void EmitStoreOfScalar(llvm::Value *Value, llvm::Value *Addr,
                          bool Volatile, unsigned Alignment, QualType Ty,
-                         llvm::MDNode *TBAAInfo = 0, bool isInit=false);
+                         llvm::MDNode *TBAAInfo = 0, bool isInit = false,
+                         QualType TBAABaseTy = QualType(),
+                         uint64_t TBAAOffset = 0);
 
   /// EmitStoreOfScalar - Store a scalar value to an address, taking
   /// care to appropriately convert from the memory representation to
Index: lib/CodeGen/CodeGenModule.cpp
===================================================================
--- lib/CodeGen/CodeGenModule.cpp	(revision 178718)
+++ lib/CodeGen/CodeGenModule.cpp	(working copy)
@@ -227,6 +227,20 @@
   return TBAA->getTBAAStructInfo(QTy);
 }
 
+llvm::MDNode *CodeGenModule::getTBAAStructTypeInfo(QualType QTy) {
+  if (!TBAA)
+    return 0;
+  return TBAA->getTBAAStructTypeInfo(QTy);
+}
+
+llvm::MDNode *CodeGenModule::getTBAAStructTagInfo(QualType BaseTy,
+                                                  llvm::MDNode *AccessN,
+                                                  uint64_t O) {
+  if (!TBAA)
+    return 0;
+  return TBAA->getTBAAStructTagInfo(BaseTy, AccessN, O);
+}
+
 void CodeGenModule::DecorateInstruction(llvm::Instruction *Inst,
                                         llvm::MDNode *TBAAInfo) {
   Inst->setMetadata(llvm::LLVMContext::MD_tbaa, TBAAInfo);
Index: lib/CodeGen/CodeGenModule.h
===================================================================
--- lib/CodeGen/CodeGenModule.h	(revision 178718)
+++ lib/CodeGen/CodeGenModule.h	(working copy)
@@ -501,6 +501,11 @@
   llvm::MDNode *getTBAAInfo(QualType QTy);
   llvm::MDNode *getTBAAInfoForVTablePtr();
   llvm::MDNode *getTBAAStructInfo(QualType QTy);
+  /// Return the MDNode in the type DAG for the given struct type.
+  llvm::MDNode *getTBAAStructTypeInfo(QualType QTy);
+  /// Return the path-aware tag for given base type, access node and offset.
+  llvm::MDNode *getTBAAStructTagInfo(QualType BaseTy, llvm::MDNode *AccessN,
+                                     uint64_t O);
 
   bool isTypeConstant(QualType QTy, bool ExcludeCtorDtor);
 
Index: lib/CodeGen/CodeGenTBAA.cpp
===================================================================
--- lib/CodeGen/CodeGenTBAA.cpp	(revision 178718)
+++ lib/CodeGen/CodeGenTBAA.cpp	(working copy)
@@ -21,6 +21,7 @@
 #include "clang/AST/Mangle.h"
 #include "clang/AST/RecordLayout.h"
 #include "clang/Frontend/CodeGenOptions.h"
+#include "llvm/ADT/SmallSet.h"
 #include "llvm/IR/Constants.h"
 #include "llvm/IR/LLVMContext.h"
 #include "llvm/IR/Metadata.h"
@@ -225,3 +226,87 @@
   // For now, handle any other kind of type conservatively.
   return StructMetadataCache[Ty] = NULL;
 }
+
+/// Check if the given type can be handled by path-aware TBAA.
+static bool isTBAAPathStruct(QualType QTy) {
+  if (const RecordType *TTy = QTy->getAs<RecordType>()) {
+    const RecordDecl *RD = TTy->getDecl()->getDefinition();
+    // RD can be struct, union, class, interface or enum.
+    // For now, we only handle struct.
+    if (RD->isStruct() && !RD->hasFlexibleArrayMember())
+      return true;
+  }
+  return false;
+}
+
+llvm::MDNode *
+CodeGenTBAA::getTBAAStructTypeInfo(QualType QTy) {
+  const Type *Ty = Context.getCanonicalType(QTy).getTypePtr();
+  assert(isTBAAPathStruct(QTy));
+
+  if (llvm::MDNode *N = StructTypeMetadataCache[Ty])
+    return N;
+
+  if (const RecordType *TTy = QTy->getAs<RecordType>()) {
+    const RecordDecl *RD = TTy->getDecl()->getDefinition();
+
+    const ASTRecordLayout &Layout = Context.getASTRecordLayout(RD);
+    SmallVector <std::pair<uint64_t, llvm::MDNode*>, 4> Fields;
+    // To reduce the size of MDNode for a given struct type, we only output
+    // once for all the fields with the same scalar types.
+    // Offsets for scalar fields in the type DAG are not used.
+    llvm::SmallSet <llvm::MDNode*, 4> ScalarFieldTypes;
+    unsigned idx = 0;
+    for (RecordDecl::field_iterator i = RD->field_begin(),
+         e = RD->field_end(); i != e; ++i, ++idx) {
+      QualType FieldQTy = i->getType();
+      llvm::MDNode *FieldNode;
+      if (isTBAAPathStruct(FieldQTy))
+        FieldNode = getTBAAStructTypeInfo(FieldQTy);
+      else {
+        FieldNode = getTBAAInfo(FieldQTy);
+        // Ignore this field if the type already exists.
+        if (ScalarFieldTypes.count(FieldNode))
+          continue;
+        ScalarFieldTypes.insert(FieldNode);
+       }
+      if (!FieldNode)
+        return StructTypeMetadataCache[Ty] = NULL;
+      Fields.push_back(std::make_pair(
+          Layout.getFieldOffset(idx) / Context.getCharWidth(), FieldNode));
+    }
+
+    // TODO: This is using the RTTI name. Is there a better way to get
+    // a unique string for a type?
+    SmallString<256> OutName;
+    llvm::raw_svector_ostream Out(OutName);
+    MContext.mangleCXXRTTIName(QualType(Ty, 0), Out);
+    Out.flush();
+    // Create the struct type node with a vector of pairs (offset, type).
+    return StructTypeMetadataCache[Ty] =
+      MDHelper.createTBAAStructTypeNode(OutName, Fields);
+  }
+
+  return StructMetadataCache[Ty] = NULL;
+}
+
+llvm::MDNode *
+CodeGenTBAA::getTBAAStructTagInfo(QualType BaseQTy, llvm::MDNode *AccessNode,
+                                  uint64_t Offset) {
+  if (!CodeGenOpts.StructPathTBAA)
+    return AccessNode;
+
+  const Type *BTy = Context.getCanonicalType(BaseQTy).getTypePtr();
+  TBAAPathTag PathTag = TBAAPathTag(BTy, AccessNode, Offset);
+  if (llvm::MDNode *N = StructTagMetadataCache[PathTag])
+    return N;
+
+  llvm::MDNode *BNode = 0;
+  if (isTBAAPathStruct(BaseQTy))
+    BNode  = getTBAAStructTypeInfo(BaseQTy);
+  if (!BNode)
+    return StructTagMetadataCache[PathTag] = AccessNode;
+
+  return StructTagMetadataCache[PathTag] =
+    MDHelper.createTBAAStructTagNode(BNode, AccessNode, Offset);
+}
Index: lib/CodeGen/CodeGenTBAA.h
===================================================================
--- lib/CodeGen/CodeGenTBAA.h	(revision 178718)
+++ lib/CodeGen/CodeGenTBAA.h	(working copy)
@@ -35,6 +35,14 @@
 namespace CodeGen {
   class CGRecordLayout;
 
+  struct TBAAPathTag {
+    TBAAPathTag(const Type *B, const llvm::MDNode *A, uint64_t O)
+      : BaseT(B), AccessN(A), Offset(O) {}
+    const Type *BaseT;
+    const llvm::MDNode *AccessN;
+    uint64_t Offset;
+  };
+
 /// CodeGenTBAA - This class organizes the cross-module state that is used
 /// while lowering AST types to LLVM types.
 class CodeGenTBAA {
@@ -46,8 +54,13 @@
   // MDHelper - Helper for creating metadata.
   llvm::MDBuilder MDHelper;
 
-  /// MetadataCache - This maps clang::Types to llvm::MDNodes describing them.
+  /// MetadataCache - This maps clang::Types to scalar llvm::MDNodes describing
+  /// them.
   llvm::DenseMap<const Type *, llvm::MDNode *> MetadataCache;
+  /// This maps clang::Types to a struct node in the type DAG.
+  llvm::DenseMap<const Type *, llvm::MDNode *> StructTypeMetadataCache;
+  /// This maps TBAAPathTags to a tag node.
+  llvm::DenseMap<TBAAPathTag, llvm::MDNode *> StructTagMetadataCache;
 
   /// StructMetadataCache - This maps clang::Types to llvm::MDNodes describing
   /// them for struct assignments.
@@ -89,9 +102,49 @@
   /// getTBAAStructInfo - Get the TBAAStruct MDNode to be used for a memcpy of
   /// the given type.
   llvm::MDNode *getTBAAStructInfo(QualType QTy);
+
+  /// Get the MDNode in the type DAG for given struct type QType.
+  llvm::MDNode *getTBAAStructTypeInfo(QualType QType);
+  /// Get the tag MDNode for a given base type, the actual scalar access MDNode

+  /// and offset into the base type.
+  llvm::MDNode *getTBAAStructTagInfo(QualType BaseQType,
+                                     llvm::MDNode *AccessNode, uint64_t Offset);
 };
 
 }  // end namespace CodeGen
 }  // end namespace clang
 
+namespace llvm {
+
+template<> struct DenseMapInfo<clang::CodeGen::TBAAPathTag> {
+  static clang::CodeGen::TBAAPathTag getEmptyKey() {
+    return clang::CodeGen::TBAAPathTag(
+      DenseMapInfo<const clang::Type *>::getEmptyKey(),
+      DenseMapInfo<const MDNode *>::getEmptyKey(),
+      DenseMapInfo<uint64_t>::getEmptyKey());
+  }
+
+  static clang::CodeGen::TBAAPathTag getTombstoneKey() {
+    return clang::CodeGen::TBAAPathTag(
+      DenseMapInfo<const clang::Type *>::getTombstoneKey(),
+      DenseMapInfo<const MDNode *>::getTombstoneKey(),
+      DenseMapInfo<uint64_t>::getTombstoneKey());
+  }
+
+  static unsigned getHashValue(const clang::CodeGen::TBAAPathTag &Val) {
+    return DenseMapInfo<const clang::Type *>::getHashValue(Val.BaseT) ^
+           DenseMapInfo<const MDNode *>::getHashValue(Val.AccessN) ^
+           DenseMapInfo<uint64_t>::getHashValue(Val.Offset);
+  }
+
+  static bool isEqual(const clang::CodeGen::TBAAPathTag &LHS,
+                      const clang::CodeGen::TBAAPathTag &RHS) {
+    return LHS.BaseT == RHS.BaseT &&
+           LHS.AccessN == RHS.AccessN &&
+           LHS.Offset == RHS.Offset;
+  }
+};
+
+}  // end namespace llvm
+
 #endif
Index: lib/Driver/Tools.cpp
===================================================================
--- lib/Driver/Tools.cpp	(revision 178718)
+++ lib/Driver/Tools.cpp	(working copy)
@@ -2105,6 +2105,8 @@
                     options::OPT_fno_strict_aliasing,
                     getToolChain().IsStrictAliasingDefault()))
     CmdArgs.push_back("-relaxed-aliasing");
+  if (Args.hasArg(options::OPT_fstruct_path_tbaa))
+    CmdArgs.push_back("-struct-path-tbaa");
   if (Args.hasFlag(options::OPT_fstrict_enums, options::OPT_fno_strict_enums,
                    false))
     CmdArgs.push_back("-fstrict-enums");
Index: lib/Frontend/CompilerInvocation.cpp
===================================================================
--- lib/Frontend/CompilerInvocation.cpp	(revision 178718)
+++ lib/Frontend/CompilerInvocation.cpp	(working copy)
@@ -324,6 +324,7 @@
   Opts.UseRegisterSizedBitfieldAccess = Args.hasArg(
     OPT_fuse_register_sized_bitfield_access);
   Opts.RelaxedAliasing = Args.hasArg(OPT_relaxed_aliasing);
+  Opts.StructPathTBAA = Args.hasArg(OPT_struct_path_tbaa);
   Opts.DwarfDebugFlags = Args.getLastArgValue(OPT_dwarf_debug_flags);
   Opts.MergeAllConstants = !Args.hasArg(OPT_fno_merge_all_constants);
   Opts.NoCommon = Args.hasArg(OPT_fno_common);
Index: test/CodeGen/tbaa.cpp
===================================================================
--- test/CodeGen/tbaa.cpp	(revision 0)
+++ test/CodeGen/tbaa.cpp	(working copy)
@@ -0,0 +1,217 @@
+// RUN: %clang_cc1 -O1 -disable-llvm-optzns %s -emit-llvm -o - | FileCheck %s
+// RUN: %clang_cc1 -O1 -struct-path-tbaa -disable-llvm-optzns %s -emit-llvm -o - | FileCheck %s -check-prefix=PATH
+// Test TBAA metadata generated by front-end.
+
+#include <stdint.h>
+typedef struct
+{
+   uint16_t f16;
+   uint32_t f32;
+   uint16_t f16_2;
+   uint32_t f32_2;
+} StructA;
+typedef struct
+{
+   uint16_t f16;
+   StructA a;
+   uint32_t f32;
+} StructB;
+typedef struct
+{
+   uint16_t f16;
+   StructB b;
+   uint32_t f32;
+} StructC;
+typedef struct
+{
+   uint16_t f16;
+   StructB b;
+   uint32_t f32;
+   uint8_t f8;
+} StructD;
+
+typedef struct
+{
+   uint16_t f16;
+   uint32_t f32;
+} StructS;
+typedef struct
+{
+   uint16_t f16;
+   uint32_t f32;
+} StructS2;
+
+uint32_t g(uint32_t *s, StructA *A, uint64_t count) {
+// CHECK: define i32 @{{.*}}(
+// CHECK: store i32 1, i32* %{{.*}}, align 4, !tbaa !4
+// CHECK: store i32 4, i32* %{{.*}}, align 4, !tbaa !4
+// PATH: define i32 @{{.*}}(
+// PATH: store i32 1, i32* %{{.*}}, align 4, !tbaa !4
+// PATH: store i32 4, i32* %{{.*}}, align 4, !tbaa !5
+  *s = 1;
+  A->f32 = 4;
+  return *s;
+}
+
+uint32_t g2(uint32_t *s, StructA *A, uint64_t count) {
+// CHECK: define i32 @{{.*}}(
+// CHECK: store i32 1, i32* %{{.*}}, align 4, !tbaa !4
+// CHECK: store i16 4, i16* %{{.*}}, align 2, !tbaa !5
+// PATH: define i32 @{{.*}}(
+// PATH: store i32 1, i32* %{{.*}}, align 4, !tbaa !4
+// PATH: store i16 4, i16* %{{.*}}, align 2, !tbaa !8
+  *s = 1;
+  A->f16 = 4;
+  return *s;
+}
+
+uint32_t g3(StructA *A, StructB *B, uint64_t count) {
+// CHECK: define i32 @{{.*}}(
+// CHECK: store i32 1, i32* %{{.*}}, align 4, !tbaa !4
+// CHECK: store i32 4, i32* %{{.*}}, align 4, !tbaa !4
+// PATH: define i32 @{{.*}}(
+// PATH: store i32 1, i32* %{{.*}}, align 4, !tbaa !5
+// PATH: store i32 4, i32* %{{.*}}, align 4, !tbaa !9
+  A->f32 = 1;
+  B->a.f32 = 4;
+  return A->f32;
+}
+
+uint32_t g4(StructA *A, StructB *B, uint64_t count) {
+// CHECK: define i32 @{{.*}}(
+// CHECK: store i32 1, i32* %{{.*}}, align 4, !tbaa !4
+// CHECK: store i16 4, i16* %{{.*}}, align 2, !tbaa !5
+// PATH: define i32 @{{.*}}(
+// PATH: store i32 1, i32* %{{.*}}, align 4, !tbaa !5
+// PATH: store i16 4, i16* %{{.*}}, align 2, !tbaa !11
+  A->f32 = 1;
+  B->a.f16 = 4;
+  return A->f32;
+}
+
+uint32_t g5(StructA *A, StructB *B, uint64_t count) {
+// CHECK: define i32 @{{.*}}(
+// CHECK: store i32 1, i32* %{{.*}}, align 4, !tbaa !4
+// CHECK: store i32 4, i32* %{{.*}}, align 4, !tbaa !4
+// PATH: define i32 @{{.*}}(
+// PATH: store i32 1, i32* %{{.*}}, align 4, !tbaa !5
+// PATH: store i32 4, i32* %{{.*}}, align 4, !tbaa !12
+  A->f32 = 1;
+  B->f32 = 4;
+  return A->f32;
+}
+
+uint32_t g6(StructA *A, StructB *B, uint64_t count) {
+// CHECK: define i32 @{{.*}}(
+// CHECK: store i32 1, i32* %{{.*}}, align 4, !tbaa !4
+// CHECK: store i32 4, i32* %{{.*}}, align 4, !tbaa !4
+// PATH: define i32 @{{.*}}(
+// PATH: store i32 1, i32* %{{.*}}, align 4, !tbaa !5
+// PATH: store i32 4, i32* %{{.*}}, align 4, !tbaa !13
+  A->f32 = 1;
+  B->a.f32_2 = 4;
+  return A->f32;
+}
+
+uint32_t g7(StructA *A, StructS *S, uint64_t count) {
+// CHECK: define i32 @{{.*}}(
+// CHECK: store i32 1, i32* %{{.*}}, align 4, !tbaa !4
+// CHECK: store i32 4, i32* %{{.*}}, align 4, !tbaa !4
+// PATH: define i32 @{{.*}}(
+// PATH: store i32 1, i32* %{{.*}}, align 4, !tbaa !5
+// PATH: store i32 4, i32* %{{.*}}, align 4, !tbaa !14
+  A->f32 = 1;
+  S->f32 = 4;
+  return A->f32;
+}
+
+uint32_t g8(StructA *A, StructS *S, uint64_t count) {
+// CHECK: define i32 @{{.*}}(
+// CHECK: store i32 1, i32* %{{.*}}, align 4, !tbaa !4
+// CHECK: store i16 4, i16* %{{.*}}, align 2, !tbaa !5
+// PATH: define i32 @{{.*}}(
+// PATH: store i32 1, i32* %{{.*}}, align 4, !tbaa !5
+// PATH: store i16 4, i16* %{{.*}}, align 2, !tbaa !16
+  A->f32 = 1;
+  S->f16 = 4;
+  return A->f32;
+}
+
+uint32_t g9(StructS *S, StructS2 *S2, uint64_t count) {
+// CHECK: define i32 @{{.*}}(
+// CHECK: store i32 1, i32* %{{.*}}, align 4, !tbaa !4
+// CHECK: store i32 4, i32* %{{.*}}, align 4, !tbaa !4
+// PATH: define i32 @{{.*}}(
+// PATH: store i32 1, i32* %{{.*}}, align 4, !tbaa !14
+// PATH: store i32 4, i32* %{{.*}}, align 4, !tbaa !17
+  S->f32 = 1;
+  S2->f32 = 4;
+  return S->f32;
+}
+
+uint32_t g10(StructS *S, StructS2 *S2, uint64_t count) {
+// CHECK: define i32 @{{.*}}(
+// CHECK: store i32 1, i32* %{{.*}}, align 4, !tbaa !4
+// CHECK: store i16 4, i16* %{{.*}}, align 2, !tbaa !5
+// PATH: define i32 @{{.*}}(
+// PATH: store i32 1, i32* %{{.*}}, align 4, !tbaa !14
+// PATH: store i16 4, i16* %{{.*}}, align 2, !tbaa !19
+  S->f32 = 1;
+  S2->f16 = 4;
+  return S->f32;
+}
+
+uint32_t g11(StructC *C, StructD *D, uint64_t count) {
+// CHECK: define i32 @{{.*}}(
+// CHECK: store i32 1, i32* %{{.*}}, align 4, !tbaa !4
+// CHECK: store i32 4, i32* %{{.*}}, align 4, !tbaa !4
+// PATH: define i32 @{{.*}}(
+// PATH: store i32 1, i32* %{{.*}}, align 4, !tbaa !20
+// PATH: store i32 4, i32* %{{.*}}, align 4, !tbaa !22
+  C->b.a.f32 = 1;
+  D->b.a.f32 = 4;
+  return C->b.a.f32;
+}
+
+uint32_t g12(StructC *C, StructD *D, uint64_t count) {
+// CHECK: define i32 @{{.*}}(
+// CHECK: store i32 1, i32* %{{.*}}, align 4, !tbaa !4
+// CHECK: store i32 4, i32* %{{.*}}, align 4, !tbaa !4
+// TODO: differentiate the two accesses.
+// PATH: define i32 @{{.*}}(
+// PATH: store i32 1, i32* %{{.*}}, align 4, !tbaa !9
+// PATH: store i32 4, i32* %{{.*}}, align 4, !tbaa !9
+  StructB *b1 = &(C->b);
+  StructB *b2 = &(D->b);
+  // b1, b2 have different context.
+  b1->a.f32 = 1;
+  b2->a.f32 = 4;
+  return b1->a.f32;
+}
+
+// CHECK: !1 = metadata !{metadata !"omnipotent char", metadata !2}
+// CHECK: !2 = metadata !{metadata !"Simple C/C++ TBAA"}
+// CHECK: !4 = metadata !{metadata !"int", metadata !1}
+// CHECK: !5 = metadata !{metadata !"short", metadata !1}
+
+// PATH: !1 = metadata !{metadata !"omnipotent char", metadata !2}
+// PATH: !4 = metadata !{metadata !"int", metadata !1}
+// PATH: !5 = metadata !{metadata !6, metadata !4, i64 4}
+// PATH: !6 = metadata !{metadata !"_ZTS7StructA", i64 0, metadata !7, i64 4, metadata !4}
+// PATH: !7 = metadata !{metadata !"short", metadata !1}
+// PATH: !8 = metadata !{metadata !6, metadata !7, i64 0}
+// PATH: !9 = metadata !{metadata !10, metadata !4, i64 8}
+// PATH: !10 = metadata !{metadata !"_ZTS7StructB", i64 0, metadata !7, i64 4, metadata !6, i64 20, metadata !4}
+// PATH: !11 = metadata !{metadata !10, metadata !7, i64 4}
+// PATH: !12 = metadata !{metadata !10, metadata !4, i64 20}
+// PATH: !13 = metadata !{metadata !10, metadata !4, i64 16}
+// PATH: !14 = metadata !{metadata !15, metadata !4, i64 4}
+// PATH: !15 = metadata !{metadata !"_ZTS7StructS", i64 0, metadata !7, i64 4, metadata !4}
+// PATH: !16 = metadata !{metadata !15, metadata !7, i64 0}
+// PATH: !17 = metadata !{metadata !18, metadata !4, i64 4}
+// PATH: !18 = metadata !{metadata !"_ZTS8StructS2", i64 0, metadata !7, i64 4, metadata !4}
+// PATH: !19 = metadata !{metadata !18, metadata !7, i64 0}
+// PATH: !20 = metadata !{metadata !21, metadata !4, i64 12}
+// PATH: !21 = metadata !{metadata !"_ZTS7StructC", i64 0, metadata !7, i64 4, metadata !10, i64 28, metadata !4}
+// PATH: !22 = metadata !{metadata !23, metadata !4, i64 12}
+// PATH: !23 = metadata !{metadata !"_ZTS7StructD", i64 0, metadata !7, i64 4, metadata !10, i64 28, metadata !4, i64 32, metadata !1}

llvm-svn: 178784
2013-04-04 20:14:17 +00:00
Adrian Prantl 5d5b67c52c * Attempt to un-break gdb buildbot by emitting a lexical block end only
when we actually end a lexical block.
* Added new test for line table / block cleanup.
* Follow-up to r177819 / rdar://problem/13115369

llvm-svn: 178490
2013-04-01 19:02:06 +00:00
Nadav Rotem 1da30944a6 Make clang to mark static stack allocations with lifetime markers to enable a more aggressive stack coloring.
Patch by John McCall with help by Shuxin Yang.
rdar://13115369

llvm-svn: 177819
2013-03-23 06:43:35 +00:00
John McCall eff1884274 Under ARC, when we're passing the address of a strong variable
to an out-parameter using the indirect-writeback conversion,
and we copied the current value of the variable to the temporary,
make sure that we register an intrinsic use of that value with
the optimizer so that the value won't get released until we have
a chance to retain it.

rdar://13195034

llvm-svn: 177813
2013-03-23 02:35:54 +00:00
Manman Ren 0175461296 Exploit this-return of a callsite in a this-return function.
For constructors/destructors that return 'this', if there exists a callsite
that returns 'this' and is immediately before the return instruction, make
sure we are using the return value from the callsite.

We don't need to keep 'this' alive through the callsite. It also enables
optimizations in the backend, such as tail call optimization.

Updated from r177211.
rdar://12818789

llvm-svn: 177541
2013-03-20 16:59:38 +00:00
Manman Ren c089074aa5 revert r177211 due to its potential issues
llvm-svn: 177222
2013-03-16 04:47:38 +00:00
Manman Ren 58dd990c11 Exploit this-return of a callsite in a this-return function.
For constructors/destructors that return 'this', if there exists a callsite
that returns 'this' and is immediately before the return instruction, make
sure we are using the return value from the callsite.

We don't need to keep 'this' alive through the callsite. It also enables
optimizations in the backend, such as tail call optimization.

rdar://12818789

llvm-svn: 177211
2013-03-16 00:11:09 +00:00
John McCall cdda29c968 Tighten up the rules for precise lifetime and document
the requirements on the ARC optimizer.

rdar://13407451

llvm-svn: 176924
2013-03-13 03:10:54 +00:00
Joey Gouly aba589cceb Add support for the OpenCL attribute 'vec_type_hint'.
Patch by Murat Bolat!

llvm-svn: 176686
2013-03-08 09:42:32 +00:00
John McCall a8ec7eb9cf Promote atomic type sizes up to a power of two, capped by
MaxAtomicPromoteWidth.  Fix a ton of terrible bugs with
_Atomic types and (non-intrinsic-mediated) loads and stores
thereto.

llvm-svn: 176658
2013-03-07 21:37:17 +00:00
John McCall 47fb950871 Change hasAggregateLLVMType, which conflates complex and
aggregate types in a profoundly wrong way that has to be
worked around in every call site, to getEvaluationKind,
which classifies and distinguishes between all of these
cases.

Also, normalize the API for loading and storing complexes.

I'm working on a larger patch and wanted to pull these
changes out, but it would have be annoying to detangle
them from each other.
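
As a rough illustration (not clang source), the three kinds of values the new classification keeps apart:

// Assumed illustration: every type is exactly one of scalar, complex, or
// aggregate under the new query; the old predicate lumped complex values
// in with aggregates.
int scalar_value = 1;                         // scalar
_Complex double complex_value;                // complex: a pair of scalars
struct Pair { int a, b; } aggregate_value;    // aggregate

int main() { return scalar_value; }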

llvm-svn: 176656
2013-03-07 21:37:08 +00:00
John McCall e739a49325 Restore order to placate test. I had no real reason to switch them.
llvm-svn: 176328
2013-03-01 01:38:54 +00:00
John McCall 07e60263dd Re-use bit from superclass and extract stuff into a local
function.  Serves a patch we're kicking around out-of-tree.

llvm-svn: 176327
2013-03-01 01:24:35 +00:00
John McCall 882987f30c Use the actual ABI-determined C calling convention for runtime
calls and declarations.

LLVM has a default CC determined by the target triple.  This is
not always the actual default CC for the ABI we've been asked to
target, and so we sometimes find ourselves annotating all user
functions with an explicit calling convention.  Since these
calling conventions usually agree for the simple set of argument
types passed to most runtime functions, using the LLVM-default CC
in principle has no effect.  However, the LLVM optimizer goes
into histrionics if it sees this kind of formal CC mismatch,
since it has no concept of CC compatibility.  Therefore, if this
module happens to define the "runtime" function, or got LTO'ed
with such a definition, we can miscompile;  so it's quite
important to get this right.

Defining runtime functions locally is quite common in embedded
applications.

llvm-svn: 176286
2013-02-28 19:01:20 +00:00
Timur Iskhodzhanov 57cbe5c790 Better support for constructors with -cxx-abi microsoft, partly fixes PR12784
llvm-svn: 176186
2013-02-27 13:46:31 +00:00
Richard Smith 539e4a77bb ubsan: Emit bounds checks for array indexing, vector indexing, and (in really simple cases) pointer arithmetic. This augments the existing bounds checking with language-level array bounds information.
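
A minimal assumed example of the language-level bounds information now used; with a UBSan bounds check enabled (e.g. -fsanitize=undefined), the out-of-range store is diagnosed at run time:

// Assumed example: the declared type 'int[4]' supplies the bound.
int poke(int i) {
  int a[4] = {0, 1, 2, 3};
  a[i] = 9;        // out of bounds whenever i is not in [0, 3]
  return a[0];
}

int main() { return poke(4); }
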
llvm-svn: 175949
2013-02-23 02:53:19 +00:00
Lang Hames bf122744e5 Re-apply r174919 - smarter copy/move assignment/construction, with fixes for
bitfield related issues.

The original commit broke Takumi's builder. The bug was caused by bitfield sizes
being determined by their underlying type, rather than the field info. A similar
issue with bitfield alignments showed up on closer testing. Both have been fixed
in this patch.

llvm-svn: 175389
2013-02-17 07:22:09 +00:00
Richard Smith 2c5868c334 ubsan: Add checking for invalid downcasts. Per [expr.static.cast]p2 and p11,
base-to-derived casts have undefined behavior if the object is not actually an
instance of the derived type.
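
An assumed reduction of the undefined behaviour the new check targets:

// Assumed example: a base-to-derived cast applied to an object that is not
// actually a Derived; flagged when the corresponding -fsanitize check is
// enabled.
struct Base { virtual ~Base() {} int b = 0; };
struct Derived : Base { int d = 0; };

int main() {
  Base base_only;
  Derived *bogus = static_cast<Derived *>(&base_only);  // UB per [expr.static.cast]
  return bogus->d;
}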

llvm-svn: 175078
2013-02-13 21:18:23 +00:00
Timur Iskhodzhanov ee6bc53365 Emit virtual/deleting destructors properly with -cxx-abi microsoft, PR15058
llvm-svn: 175045
2013-02-13 08:37:51 +00:00
Lang Hames 697b004219 Backing out r174919 while I investigate a self-host bug on Takumi's builder.
llvm-svn: 174925
2013-02-12 00:44:43 +00:00
Lang Hames 5824a4f1b0 When generating IR for default copy-constructors, copy-assignment operators,
move-constructors and move-assignment operators, use memcpy to copy adjacent
POD members.

Previously, classes with one or more Non-POD members would fall back on
element-wise copies for all members, including POD members. This often
generated a lot of IR. Without padding metadata, it wasn't often possible
for the LLVM optimizers to turn the element-wise copies into a memcpy.

This code hasn't yet received any serious tuning. I didn't see any serious
regressions on a self-hosted clang build, or any of the nightly tests, but
I think it's important to get this out in the wild to get more testing.
Insights, feedback and comments welcome.

Many thanks to David Blaikie, Richard Smith, and especially John McCall for
their help and feedback on this work.
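
An assumed illustration of the case this change targets:

#include <string>

// Assumed example: x, y and z are adjacent trivially copyable members, so
// the implicit copy constructor can copy them with a single block copy;
// 'name' is non-POD and is still copied member-wise.
struct Record {
  int x, y, z;
  std::string name;
};

int main() {
  Record a{1, 2, 3, "alpha"};
  Record b = a;          // implicit copy constructor
  return b.x + b.y + b.z;
}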

llvm-svn: 174919
2013-02-11 23:44:11 +00:00
Arnaud A. de Grandmaison 49c04467ea Fix typo in comment
llvm-svn: 174359
2013-02-05 09:06:17 +00:00
David Blaikie 357aafb566 Fix exception handling line table problems introduced by r173593
r173593 made us a little too eager to associate all code at the end of a
function with the user-written 'return' line. This caused problems with
breakpoints as they'd be set in exception handling code preceding the
actual non-exception return handling code, leading to the breakpoint never
being hit in non-exceptional execution.

This change restores the pre-r173593 exception handling line information where
the cleanup code is associated with the '}' not the return line.

llvm-svn: 174206
2013-02-01 19:09:49 +00:00
John McCall 12cc42aa1b Destroy arrays and ARC fields when throwing out of ctors.
Previously we were only handling non-array fields of class type.

Testcases derived from a patch by WenHan Gu.
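
An assumed reduction of the scenario now handled, where a fully constructed array member must be destroyed while unwinding out of a constructor:

// Assumed example: constructing 't' throws after 'parts' is complete, so the
// three Part objects must be destroyed during unwinding (previously only
// non-array class-type fields were cleaned up).
struct Part {
  Part() {}
  ~Part() {}
};

struct Throws {
  Throws() { throw 1; }
};

struct Assembly {
  Part parts[3];
  Throws t;
};

int main() {
  try {
    Assembly a;
  } catch (int) {
  }
  return 0;
}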

llvm-svn: 174146
2013-02-01 05:11:40 +00:00
Douglas Gregor 6153500517 When we're emitting a constructor or destructor call from a delegating
constructor, retrieve our VTT parameter directly. Fixes PR14588 /
<rdar://problem/12867962>.

llvm-svn: 174042
2013-01-31 05:50:40 +00:00
Chad Rosier ae229d599b [ubsan] Implement the -fcatch-undefined-behavior flag using a trapping
implementation; this is much more in line with the original implementation
(i.e., pre-ubsan) and does not require run-time library support.

The trapping implementation can be invoked using either '-fcatch-undefined-behavior'
or '-fsanitize=undefined-trap -fsanitize-undefined-trap-on-error', with the latter
being preferred.  Eventually, the '-fcatch-undefined-behavior' flag will be removed.

llvm-svn: 173848
2013-01-29 23:31:22 +00:00
David Blaikie 0a21d0da17 PR14566: Debug Info: avoid top level lexical blocks in functions
One of the gotchas (see changes to CodeGenFunction) was due to the fix in
r139416 (for PR10829). This only worked previously because the top level
lexical block would set the location to the end of the function, the debug
location would be updated (as per r139416), the location would be set to
the end of the function again (but that would no-op, since it was the same
as the previous location), then the return instruction would be emitted using
the debug location.

Once the top level lexical block was no longer emitted, the end-of-function
location change was causing the debug loc to be updated, regressing that bug.

llvm-svn: 173593
2013-01-26 22:16:26 +00:00
Fariborz Jahanian 7865220da4 Patch for PR9027 and rdar://11861085.
Title: [PR9027] volatile struct bug: member is not loaded at -O.
This is caused by the last flag passed to @llvm.memcpy being false,
not honoring that the aggregate has at least one 'volatile' data member
(even though the aggregate itself has not been qualified as 'volatile').
As a result, the optimizer removes the memcpy altogether.
Patch reviewed by John McCall (I still need to fix up a test though).
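
An assumed reduction of the PR:

// Assumed example: Packet itself is not volatile, but 'status' is, so the
// copy below must be emitted as a volatile memcpy and must survive -O.
struct Packet {
  volatile int status;
  int payload;
};

int main() {
  Packet in = {1, 2};
  Packet out = {0, 0};
  out = in;              // must not be optimized away
  return out.payload;
}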

llvm-svn: 173535
2013-01-25 23:57:05 +00:00
Will Dietz f54319c891 [ubsan] Add support for -fsanitize-blacklist
llvm-svn: 172808
2013-01-18 11:30:38 +00:00
Dmitri Gribenko f857950d39 Remove useless 'llvm::' qualifier from names like StringRef and others that are
brought into 'clang' namespace by clang/Basic/LLVM.h

llvm-svn: 172323
2013-01-12 19:30:44 +00:00
Eli Friedman 33accdf602 Don't assert/crash on reference variables in lambdas bound to a
static local variable from the parent scope.  PR14773.
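
An assumed reduction of the PR; no capture is needed because the referent has static storage duration:

// Assumed example: a reference inside a lambda bound to a static local of
// the enclosing function, the pattern that used to crash IRGen.
int next_id() {
  static int counter = 0;
  auto bump = [] {
    int &r = counter;
    return ++r;
  };
  return bump();
}

int main() { return next_id(); }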

llvm-svn: 171433
2013-01-03 00:39:26 +00:00
Chandler Carruth 3a02247dc9 Sort all of Clang's files under 'lib', and fix up the broken headers
uncovered.

This required manually correcting all of the incorrect main-module
headers I could find, and running the new llvm/utils/sort_includes.py
script over the files.

I also manually added quite a few missing headers that were uncovered by
shuffling the order or moving headers up to be main-module-headers.

llvm-svn: 169237
2012-12-04 09:13:33 +00:00
Will Dietz 88e0233ff4 [ubsan] Add flag to enable recovery from checks when possible.
llvm-svn: 169114
2012-12-02 19:50:33 +00:00
David Chisnall 9a837be2b9 Fix the Objective-C exception rethrow from cleanups (GNU runtimes). Note that
a bug in the inliner still causes the wrong thing to happen at -O2 and above
(PR14116).

llvm-svn: 167534
2012-11-07 16:50:40 +00:00
Richard Smith b1b0ab41e7 Use the individual -fsanitize=<...> arguments to control which of the UBSan
checks to enable. Remove frontend support for -fcatch-undefined-behavior,
-faddress-sanitizer and -fthread-sanitizer now that they don't do anything.

llvm-svn: 167413
2012-11-05 22:21:05 +00:00
Richard Smith de67068fc1 Split emission of -ftrapv checks and -fcatch-undefined-behavior checks into
separate functions, since they share essentially no code.

llvm-svn: 167259
2012-11-01 22:15:34 +00:00
Richard Smith 4d3110af06 -fcatch-undefined-behavior checking for appropriate vptr value: Clang CodeGen side.
llvm-svn: 166661
2012-10-25 02:14:12 +00:00
John McCall e68b8f4dcc At -O0, prefer objc_storeStrong with a null new value to the
combination of a load+objc_release;  this is generally better
for tools that try to track why values are retained and
released.  Also use objc_storeStrong when copying a block
(again, only at -O0), which requires us to do a preliminary
store of null in order to compensate for objc_storeStrong's
assign semantics.

llvm-svn: 166085
2012-10-17 02:28:37 +00:00
Alexey Samsonov 38e2496497 Transform pattern:
if (CGM.getModuleDebugInfo())
    DebugInfo = CGM.getModuleDebugInfo()
into a call:
  maybeInitializeDebugInfo();

This is a simplification for a possible future fix of PR13942.

llvm-svn: 166019
2012-10-16 07:22:28 +00:00
Nico Weber cf4ff586e8 Add codegen support for __uuidof().
llvm-svn: 165710
2012-10-11 10:13:44 +00:00
Richard Smith e30752c93b -fcatch-undefined-behavior: emit calls to the runtime library whenever one of the checks fails.
llvm-svn: 165536
2012-10-09 19:52:38 +00:00
Benjamin Kramer 1ca66919a5 CodeGen: Copy tail padding when we're not dealing with a trivial copy assign or move assign operator.
This fixes a regression from r162254; the optimizer has problems reasoning
about the smaller memcpy as it's often not safe to widen a store but making it
smaller is.

llvm-svn: 164917
2012-09-30 12:43:37 +00:00
Sylvestre Ledru 33b5baf189 Revert 'Fix a typo 'iff' => 'if''. 'iff' is an abbreviation of 'if and only if'. See: http://en.wikipedia.org/wiki/If_and_only_if Commit 164766
llvm-svn: 164769
2012-09-27 10:16:10 +00:00
Sylvestre Ledru a876013dc9 Fix a typo 'iff' => 'if'
llvm-svn: 164766
2012-09-27 09:57:10 +00:00
Dmitri Gribenko a664e5b88f Use LLVM_DELETED_FUNCTION in place of 'DO NOT IMPLEMENT' comments.
llvm-svn: 163983
2012-09-15 20:20:27 +00:00
Richard Smith 4d1458ed38 -fcatch-undefined-behavior: Factor emission of the creation of, and branch to,
the trap BB out of the individual checks and into a common function, to prepare
for making this code call into a runtime library. Rename the existing EmitCheck
to EmitTypeCheck to clarify it and to move it out of the way of the new
EmitCheck.

llvm-svn: 163451
2012-09-08 02:08:36 +00:00
Chad Rosier 649dfc317d [ms-inline asm] Have MSAsmStmts use the generic EmitAsmStmt codegen function.
llvm-svn: 162796
2012-08-28 21:11:24 +00:00
Chad Rosier 6051bb94c0 [ms-inline asm] Rename EmitGCCAsmStmt to EmitAsmStmt and have it accept
AsmStmts.  This function is only used by GCCAsmStmts, however. Constraints need
to be properly computed before MSAsmStmts can use EmitAsmStmt.  No functional
change intended.

llvm-svn: 162776
2012-08-28 18:54:39 +00:00
Chad Rosier de70e0ef45 [ms-inline asm] As part of a larger refactoring, rename AsmStmt to GCCAsmStmt.
No functional change intended.

llvm-svn: 162632
2012-08-25 00:11:56 +00:00
Richard Smith 69d0d2626a New -fcatch-undefined-behavior features:
* when checking that a pointer or reference refers to appropriate storage for a type, also check the alignment and perform a null check
* check that references are bound to appropriate storage
* check that 'this' has appropriate storage in member accesses and member function calls

llvm-svn: 162523
2012-08-24 00:54:33 +00:00
Chad Rosier 59df25b659 [ms-inline asm] Remove an unused argument. This logic can now be reused by the
ms-style inline asms.

llvm-svn: 162463
2012-08-23 20:00:18 +00:00
Dmitri Gribenko adba9be7c5 Fix a bunch of -Wdocumentation warnings.
llvm-svn: 162452
2012-08-23 17:58:28 +00:00
Eli Friedman a5dd5684dc Use the alignment from lvalue emission to more accurately compute the alignment
of a pointer for builtin emission, instead of just depending on the type of the
pointee.  <rdar://problem/11314941>.

llvm-svn: 162425
2012-08-23 03:10:17 +00:00
Eli Friedman f6d2184c83 Fix an assertion failure with a C++ constructor initializing a
member of reference type in an anonymous struct.  PR13154.
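
An assumed reduction of the PR (anonymous structs are a compiler extension in C++):

// Assumed example: the mem-initializer initializes a reference member that
// lives inside an anonymous struct.
struct Holder {
  struct {
    int &ref;
  };
  Holder(int &x) : ref(x) {}
};

int main() {
  int value = 7;
  Holder h(value);
  return h.ref;
}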

llvm-svn: 161473
2012-08-08 03:51:37 +00:00
Richard Trieu c320c745cc Change APInt to APSInt in one instance. Also change a call to operator==() to
APSInt::isSameValue() when comparing different sized APSInt's.
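
An assumed illustration of the difference (requires the LLVM headers and library):

#include "llvm/ADT/APInt.h"
#include "llvm/ADT/APSInt.h"

// Assumed example: operator==() asserts that both operands have the same bit
// width, while APSInt::isSameValue compares the numeric values of
// differently sized APSInts.
int main() {
  llvm::APSInt narrow(llvm::APInt(32, 5), /*isUnsigned=*/false);
  llvm::APSInt wide(llvm::APInt(64, 5), /*isUnsigned=*/false);
  return llvm::APSInt::isSameValue(narrow, wide) ? 0 : 1;
}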

llvm-svn: 160641
2012-07-23 20:21:35 +00:00
Simon Atanasyan 94a6d863a9 Revert commit r160308. We decide to move builtins selection to the backend.
llvm-svn: 160353
2012-07-17 08:15:06 +00:00
Simon Atanasyan a06d06b660 MIPS: Implement __builtin_mips_shll_qb builtin function overloading.
This function has two versions. The first one is used for a register operand.
The second one is used for an immediate number.

llvm-svn: 160308
2012-07-16 18:52:02 +00:00
Eric Christopher f8b9809fab Temporarily revert this to see if it brings the gdb bot back.
llvm-svn: 160049
2012-07-11 15:32:13 +00:00
Eric Christopher 2977378974 The end of a block doesn't necessarily need a line table entry unless
there's something going on there. Remove the unconditional line entry
and only add one if we're emitting cleanups (any other statements
would be handled normally).

Fixes rdar://9199234

llvm-svn: 160033
2012-07-11 01:49:26 +00:00
Tanya Lattner bcffcdfd18 Patch by Anton Lokhmotov to add OpenCL work group size attributes.
llvm-svn: 159965
2012-07-09 22:06:01 +00:00
John McCall 4e8ca4fa14 Significantly simplify CGExprAgg's logic about ignored results:
if we want to ignore a result, the Dest will be null.  Otherwise,
we must copy into it.  This means we need to ensure a slot when
loading from a volatile l-value.

With all that in place, fix a bug with chained assignments into
__block variables of aggregate type where we were losing insight into
the actual source of the value during the second assignment.

llvm-svn: 159630
2012-07-02 23:58:38 +00:00
Benjamin Kramer 46a72fb741 Dead code eliminate the massive hexagon builtin intrinsic supporting code.
The tablegen'd code does the same thing without this egregious duplication.
In my limited testing everything seems to work, however there can be
differences if the clang and llvm builtin definitions don't match.

llvm-svn: 159371
2012-06-28 20:08:55 +00:00
Simon Atanasyan 07ce7d8fb5 Support MIPS DSP Rev1 intrinsics.
This patch was reviewed in the llvm-commits list by Jim Grosbach.

llvm-svn: 159366
2012-06-28 18:23:16 +00:00
Eli Friedman c24e2fb1fb Propagate lvalue alignment into bitfields. Per report on cfe-dev.
llvm-svn: 159295
2012-06-27 21:19:48 +00:00
Fariborz Jahanian 6362803cfe Block literal IRGen: several improvements to the naming of block
literal helper functions. All helper functions (global
and local) use block_invoke as their prefix. Local literal
helper names are prefixed by their enclosing mangled function
names. Blocks in non-local initializers (e.g. a global variable
or a C++11 field) are prefixed by their mangled variable name.
The discriminator number added to the end of the name starts off
blank (for the first block) and is _<N> for the (N+2)-th block.

llvm-svn: 159206
2012-06-26 16:06:38 +00:00
Chad Rosier 32503020a4 Etch out the code path for MS-style inline assembly.
llvm-svn: 158325
2012-06-11 20:47:18 +00:00
Fariborz Jahanian b5dd2cb13c objective-c: fix a sema and IRGen crash when property
getter result type is safe but does not match with property 
type resulting in spurious warning followed by crash in
IRGen. // rdar://11515196

llvm-svn: 157641
2012-05-29 19:56:01 +00:00
Richard Smith bb653bd5f9 Implement IRGen for C++11's "T{1, 2, 3}", where T is an aggregate and the
expression is treated as an lvalue.

llvm-svn: 156781
2012-05-14 21:57:21 +00:00
Nuno Lopes 3d6311d5f7 add -fbounds-checking option.
When enabled, clang generates bounds checks for array and pointer dereferences. Work to follow in LLVM's backend.

OK'ed by Chad; thanks for the review.

llvm-svn: 156431
2012-05-08 22:10:46 +00:00
John McCall c84ed6a336 Abstract the emission of global destructors into ABI-specific code
and only consider using __cxa_atexit in the Itanium logic.  The
default logic is to use atexit().

Emit "guarded" initializers in Microsoft mode unconditionally.
This is definitely not correct, but it's closer to correct than
just not emitting the initializer.

Based on a patch by Timur Iskhodzhanov!
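
An assumed example of the code this affects:

// Assumed example: a namespace-scope object with a non-trivial destructor.
// Itanium-style targets may register ~Closer via __cxa_atexit during
// initialization; the default path described above falls back to atexit().
struct Closer {
  ~Closer() {}
};

Closer global_closer;   // destructor must run at program exit

int main() { return 0; }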

llvm-svn: 155894
2012-05-01 06:13:13 +00:00
Patrick Beard 0caa39474b Implements boxed expressions for Objective-C. <rdar://problem/10194391>
llvm-svn: 155082
2012-04-19 00:25:12 +00:00
Eli Friedman 7f1ff60021 Propagate alignment on lvalues through EmitLValueForField. PR12395.
llvm-svn: 154789
2012-04-16 03:54:45 +00:00
Richard Smith c202b2809a Add an AttributedStmt type to represent a statement with C++11 attributes
attached. Since we do not support any attributes which appertain to a statement
(yet), testing of this is necessarily quite minimal.

Patch by Alexander Kornienko!

llvm-svn: 154723
2012-04-14 00:33:13 +00:00
Anton Korobeynikov 4215ca7564 Step forward in supporting ARM homogeneous aggregates:
- Handle unions
- Handle C++ classes

llvm-svn: 154664
2012-04-13 11:22:00 +00:00
Duncan Sands e81111ca71 Express the number of ULPs in fpaccuracy metadata as a real rather than a
rational number, eg as 2.5 rather than 5, 2.  OK'd by Peter Collingbourne.

llvm-svn: 154388
2012-04-10 08:23:07 +00:00
John McCall ee08c53478 Rename GenerateCXXGlobalDtorFunc to GenerateCXXGlobalDtorsFunc.
llvm-svn: 154190
2012-04-06 18:21:03 +00:00
Chandler Carruth 8453795255 Revert r153723, and its follow-ups r153728 and r153733.
These patches cause us to miscompile and/or reject code with static
function-local variables in an extern-C context. Previously, we were
papering over this as long as the variables are within the same
translation unit, and had not seen any failures in the wild. We still
need a proper fix, which involves mangling static locals inside of an
extern-C block (as GCC already does), but this patch causes pretty
widespread regressions. Firefox, and many other applications no longer
build.

Lots of test cases have been posted to the list in response to this
commit, so there should be no problem reproducing the issues.

llvm-svn: 153768
2012-03-30 19:44:53 +00:00
John McCall 87590e60c0 Do the static-locals thing properly in the face of unions and
other things which might mess with the variable's type.

llvm-svn: 153733
2012-03-30 07:09:50 +00:00
Chad Rosier 615ed1a3a6 Revert r153613 as it's causing large compile-time regressions on the nightly testers.
llvm-svn: 153660
2012-03-29 17:37:10 +00:00
John McCall 1a0877f99d When we can't prove that the target of an aggregate copy is
a complete object, the memcpy needs to use the data size of
the structure instead of its sizeof() value.  Fixes PR12204.
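
An assumed reduction of the PR on a typical Itanium-ABI layout:

// Assumed example: Base is not a POD for layout purposes, so Derived may
// place 'extra' inside Base's tail padding.  A copy through a Base& that
// might refer to a base subobject must therefore copy dsize(Base)
// (5 bytes here), not sizeof(Base) (8 bytes), or it clobbers 'extra'.
struct Base {
  Base() : i(0), c(0) {}
  int i;
  char c;
};

struct Derived : Base {
  char extra;
};

void assign(Base &dst, const Base &src) {
  dst = src;               // target may be a base subobject
}

int main() {
  Derived d;
  d.extra = 42;
  Base src;
  src.i = 1;
  assign(d, src);
  return d.extra == 42 ? 0 : 1;
}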

llvm-svn: 153613
2012-03-28 23:30:44 +00:00
Rafael Espindola 5c0034a7c6 Add back r153360 with a fix for enums that cover all the 32 bit values.
Thanks to NAKAMURA Takumi for finding it!

llvm-svn: 153383
2012-03-24 16:50:34 +00:00
NAKAMURA Takumi 2681efcc95 Revert r153360 (and r153380), "Second part of PR12251. Produce the range metadata in clang for booleans and".
For i686 targets (e.g. cygwin), I saw "Range must not be empty!" in the verifier.

It produces (i32)[0x80000000:0x80000000) from (uint64_t)[0xFFFFFFFF80000000ULL:0x0000000080000000ULL), for signed i32 on MDNode::Range.

llvm-svn: 153382
2012-03-24 14:43:42 +00:00
Rafael Espindola 54355820e8 Second part of PR12251. Produce the range metadata in clang for booleans and
c++ enums.

llvm-svn: 153360
2012-03-24 00:28:06 +00:00
David Blaikie bbafb8a745 Unify naming of LangOptions variable/get function across the Clang stack (Lex to AST).
The member variable is always "LangOpts" and the member function is always "getLangOpts".

Reviewed by Chris Lattner

llvm-svn: 152536
2012-03-11 07:00:24 +00:00
John McCall 113bee0536 Remove BlockDeclRefExpr and introduce a bit on DeclRefExpr to
track whether the referenced declaration comes from an enclosing
local context.  I'm amenable to suggestions about the exact meaning
of this bit.

llvm-svn: 152491
2012-03-10 09:33:50 +00:00
John McCall 7133505936 Unify the BlockDeclRefExpr and DeclRefExpr paths so that
we correctly emit loads of BlockDeclRefExprs even when they
don't qualify as ODR-uses.  I think I'm adequately convinced
that BlockDeclRefExpr can die.

llvm-svn: 152479
2012-03-10 03:05:10 +00:00
Ted Kremenek e65b086e07 Add clang support for new Objective-C literal syntax for NSDictionary, NSArray,
NSNumber, and boolean literals.  This includes both Sema and Codegen support.
Included is also support for new Objective-C container subscripting.

My apologies for the large patch.  It was very difficult to break apart.
The patch introduces changes to the driver as well to cause clang to link
in additional runtime support when needed to support the new language features.

Docs are forthcoming to document the implementation and behavior of these features.

llvm-svn: 152137
2012-03-06 20:05:56 +00:00
Jay Foad b0f3344b10 PR12094: Set the alignment of memory intrinsic instructions based on the
types of the pointer arguments.

llvm-svn: 151927
2012-03-02 18:34:30 +00:00
Eli Friedman 98b01edc8c Implement "optimization" for lambda-to-block conversion which inlines the generated block literal for lambdas which are immediately converted to block pointer type. This simplifies the AST, avoids an unnecessary copy of the lambda and makes it much easier to avoid copying the result onto the heap.
Note that this transformation has a substantial semantic effect outside of ARC: it gives the converted lambda lifetime semantics similar to a block literal.  With ARC, the effect is much less obvious because the lifetime of blocks is already managed.

llvm-svn: 151797
2012-03-01 04:01:32 +00:00
Eli Friedman ec75fec805 Implement IRGen for the retain-autorelease in the lambda conversion-to-block-pointer outside of ARC. Testcases coming up soon.
llvm-svn: 151603
2012-02-28 01:08:45 +00:00
Eli Friedman 2495ab08fc Work-in-progress for lambda conversion-to-block operator. Still need to implement the retain+autorelease outside of ARC, and there's a bug that causes the generated code to crash in ARC (which I think is unrelated to my code, although I'm not completely sure).
llvm-svn: 151428
2012-02-25 02:48:22 +00:00
Bill Wendling f1a3fcac0d Use an ArrayRef when we can instead of passing in a SmallVectorImpl reference.
llvm-svn: 151150
2012-02-22 09:30:11 +00:00
Sebastian Redl d026dc499c Make heap-allocation of std::initializer_list 'work'.
llvm-svn: 150931
2012-02-19 16:03:09 +00:00
Sebastian Redl 8eb351d72e Get recursive initializer lists to work and add a test. Codegen of std::initializer_list is now complete. Onward to array new.
llvm-svn: 150926
2012-02-19 12:28:02 +00:00
Sebastian Redl c83ed8248e Basic code generation support for std::initializer_list.
We now generate temporary arrays to back std::initializer_list objects
initialized with braces. The initializer_list is then made to point at
the array. We support both ptr+size and start+end forms, although
the latter is untested.

Array lifetime is correct for temporary std::initializer_lists (e.g.
call arguments) and local variables. It is untested for new expressions
and member initializers.

Things left to do:
Massively increase the amount of testing. I need to write tests for
start+end init lists, temporary objects created as a side effect of
initializing init list objects, new expressions, member initialization,
creation of temporary objects (e.g. std::vector) for initializer lists,
and probably more.
Get lifetime "right" for member initializers and new expressions. Not
that either are very useful.
Implement list-initialization of array new expressions.

llvm-svn: 150803
2012-02-17 08:42:25 +00:00
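
A small sketch of what this makes work (hypothetical example, standard header only):

  #include <initializer_list>

  int sum(std::initializer_list<int> il) {
    int total = 0;
    for (const int *p = il.begin(); p != il.end(); ++p)
      total += *p;
    return total;
  }

  int test() {
    // The braced list is backed by a temporary const int[3]; the
    // initializer_list object is made to point at that array, either as
    // pointer+size or as begin/end depending on the library's layout.
    return sum({1, 2, 3});
  }
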
Douglas Gregor 355efbb2e0 Rework the Sema/AST/IRgen dance for the lambda closure type's
conversion to function pointer. Rather than having IRgen synthesize
the body of this function, we instead introduce a static member
function "__invoke" with the same signature as the lambda's
operator() in the AST. Sema then generates a body for the conversion
to function pointer which simply returns the address of __invoke. This
approach makes it easier to evaluate a call to the conversion function
as a constant, makes the linkage of the __invoke function follow the
normal rules for member functions, and may make life easier down the
road if we ever want to constexpr'ify some lambdas.

Note that IR generation is responsible for filling in the body of
__invoke (Sema just adds a dummy body), because the body can't
generally be expressed in C++.

Eli, please review!

llvm-svn: 150783
2012-02-17 03:02:34 +00:00
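
Roughly the situation described above, as a sketch:

  int test() {
    // The closure type for this lambda has a conversion function to
    // int (*)(int).  Per the commit, Sema gives that conversion a body that
    // simply returns the address of a static member __invoke with the same
    // signature as operator(); IR generation later fills in __invoke's body.
    int (*fp)(int) = [](int x) { return x + 1; };
    return fp(41);
  }
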
Eli Friedman 5b44688d6b Initial implementation of IRGen for the lambda conversion-to-function-pointer operator.
llvm-svn: 150660
2012-02-16 03:47:28 +00:00
Eli Friedman 5a6d507d1b Start of IRGen for lambda conversion operators.
llvm-svn: 150649
2012-02-16 01:37:33 +00:00
Dan Gohman 515a60daff Teach clang to add metadata tags to calls and invokes in ObjC with
-fno-objc-arc-exceptions. This will allow the optimizer to perform
optimizations which are only safe under that flag.

This is a part of rdar://10803830.

llvm-svn: 150644
2012-02-16 00:57:37 +00:00
Eli Friedman 5f1a04ffd1 Implement IRGen of lambda expressions which capture arrays.
llvm-svn: 150452
2012-02-14 02:31:03 +00:00
Richard Smith 6331c408b5 Deal with a horrible C++11 special case. If a non-literal type has a constexpr
constructor, and that constructor is used to initialize an object of static
storage duration such that all members and bases are initialized by constant
expressions, constant initialization is performed. In this case, the object
can still have a non-trivial destructor, and if it does, we must emit a dynamic
initializer which performs no initialization and instead simply registers that
destructor.

llvm-svn: 150419
2012-02-13 22:16:19 +00:00
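
A minimal sketch of the special case (hypothetical type and names):

  struct Logger {
    int fd;
    constexpr Logger(int f) : fd(f) {}   // constexpr constructor...
    ~Logger() {}                         // ...but a non-trivial, user-provided destructor
  };

  // All members are initialized by constant expressions, so 'g_log' is
  // constant-initialized and no constructor runs at startup.  Per the commit,
  // Clang still emits a dynamic initializer whose only job is to register
  // ~Logger (e.g. through __cxa_atexit).
  Logger g_log(2);
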
Eli Friedman 9fbeba0d8e Basic support for referring to captured variables from lambdas. Some simple examples seem to work. Tests coming up soon.
llvm-svn: 150293
2012-02-11 02:57:39 +00:00
Eli Friedman c370a7eec7 Refactor lambda IRGen so AggExprEmitter::VisitLambdaExpr does the right thing.
llvm-svn: 150146
2012-02-09 03:32:31 +00:00
Eli Friedman 5bc1712940 A little bit of lambda IRGen.
llvm-svn: 150058
2012-02-08 05:34:55 +00:00
Fariborz Jahanian 715fdd53a6 revert r149184
llvm-svn: 149205
2012-01-29 20:27:13 +00:00
Fariborz Jahanian 326efeb95a objc-arc: Perform null check on receiver before sending methods which
consume one or more of their arguments. If not done, this will cause a leak,
as the method will not consume the argument when the receiver is null.
// rdar://10444474

llvm-svn: 149184
2012-01-28 18:46:31 +00:00
Peter Collingbourne 1425b4556a Use function pointers, rather than references, to pass Destroyers
around, in the process cleaning up the various gcc/msvc compiler
workarounds.

llvm-svn: 149036
2012-01-26 03:33:36 +00:00
David Chisnall fa35df628a Some improvements to the handling of C11 atomic types:
- Add atomic-to/from-nonatomic cast types
- Emit atomic operations for arithmetic on atomic types
- Emit non-atomic stores for initialisation of atomic types, but atomic stores and loads for every other store / load
- Add a __atomic_init() intrinsic which does a non-atomic store to an _Atomic() type.  This is needed for the corresponding C11 stdatomic.h function.
- Enables the relevant __has_feature() checks.  The feature isn't 100% complete yet, but it's done enough that we want people testing it.

Still to do:

- Make the arithmetic operations on atomic types (e.g. _Atomic(int) foo = 1; foo++;) use the correct LLVM intrinsic if one exists, not a loop with a cmpxchg.
- Add a signal fence builtin
- Properly set the fenv state in atomic operations on floating point values
- Correctly handle things like _Atomic(_Complex double) which are too large for an atomic cmpxchg on some platforms (this requires working out what 'correctly' means in this context)
- Fix the many remaining corner cases

llvm-svn: 148242
2012-01-16 17:27:18 +00:00
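
A rough sketch of the user-facing side, assuming Clang's C11 _Atomic support (which Clang also accepts outside C mode as an extension):

  _Atomic(int) counter = 0;   // the initialisation is emitted as a plain, non-atomic store

  void bump(void) {
    counter += 1;   // ordinary arithmetic on the atomic type becomes an atomic
                    // read-modify-write (per the notes above, currently a cmpxchg loop)
  }
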
Zhongxing Xu 82846eae0a Remove a redundant word.
llvm-svn: 148179
2012-01-14 09:08:15 +00:00
Fariborz Jahanian a08a74705b objc++: patch for IRgen for atomic properties of
c++ objects with non-trivial assignment/copy functions.
Also, one additional sema check. // rdar://6137845

llvm-svn: 147817
2012-01-10 00:37:01 +00:00
Fariborz Jahanian 4b501a2ed8 objc++: More codegen stuff for atomic properties of c++ objects
with non-trivial copies. // rdar://6137845

llvm-svn: 147735
2012-01-07 18:56:22 +00:00
Fariborz Jahanian 7ff610b62d objc++: more code gen stuff for atomic property api,
currently turned off. // rdar://6137845
Also, fixes a test case which should be nonatomic under
new API.

llvm-svn: 147691
2012-01-06 22:33:54 +00:00
Fariborz Jahanian 17eaf4089d objc++: synthesize a helper function to be used
for copying atomic properties of c++ objects
with non-trivial copy assignment in setters/getters.
Not yet used. // rdar://6137845

llvm-svn: 147636
2012-01-06 00:29:35 +00:00
Richard Smith 5fab0c9e1a Small refactoring and simplification of constant evaluation and some of its
clients. No functionality change.

llvm-svn: 147318
2011-12-28 19:48:30 +00:00
Tony Linthicum 76329bf83f Hexagon backend support
llvm-svn: 146413
2011-12-12 21:14:55 +00:00
Eli Friedman 6d694a38fd Make EmitAggregateCopy take an alignment argument. Make EmitFinalDestCopy pass in the correct alignment when known.
The test includes a FIXME for a related case involving calls; it's a bit more complicated to fix because the RValue class doesn't keep track of alignment.

<rdar://problem/10463337>
  

llvm-svn: 145862
2011-12-05 22:23:28 +00:00
Eli Friedman a0544d6fdf Switch LValue so that it exposes alignment in CharUnits. (No functional change.)
llvm-svn: 145753
2011-12-03 04:14:32 +00:00
Eli Friedman 38cd36dbdb Switch the Alignment argument on AggValueSlot over to CharUnits, per John's review comment.
llvm-svn: 145741
2011-12-03 02:13:40 +00:00
Eli Friedman c1d85b931e Track alignment in AggValueSlot. No functional change in this patch, but I'll be introducing uses of the specified alignment soon.
llvm-svn: 145736
2011-12-03 00:54:26 +00:00
Peter Collingbourne 702b2841a4 When destroying temporaries, instead of a custom cleanup use the
generic pushDestroy function.

This would reduce the number of useful declarations in
CGTemporaries.cpp to one.  Since CodeGenFunction::EmitCXXTemporary
does not deserve its own file, move it to CGCleanup.cpp and delete
CGTemporaries.cpp.

llvm-svn: 145202
2011-11-27 22:09:22 +00:00
Eli Friedman d20adbdce3 Fix a bunch of really nasty bugs in how we compute alignment for reference lvalues. PR11376.
llvm-svn: 144745
2011-11-16 00:42:57 +00:00
John McCall f4beacd059 Whenever explicitly activating or deactivating a cleanup, we
need to provide a 'dominating IP' which is guaranteed to
dominate the (de)activation point but which cannot be avoided
along any execution path from the (de)activation point to
the push-point of the cleanup.  Using the entry block is
bad mojo.

llvm-svn: 144276
2011-11-10 10:43:54 +00:00
John McCall 08ef466048 Enter the cleanups for a block outside the enclosing
full-expression.  Naturally they're inactive before we enter
the block literal expression.  This restores the intended
behavior that blocks belong to their enclosing scope.

There's a useful -O0 / compile-time optimization that we're
missing here with activating cleanups following straight-line
code from their inactive beginnings.

llvm-svn: 144268
2011-11-10 08:15:53 +00:00
John McCall 9a54961e01 Bind function "r-values" as l-values when emitting them as
opaque values.  Silly C type system.

llvm-svn: 144144
2011-11-08 22:54:08 +00:00
John McCall c109a259d2 Rip the ObjCPropertyRef l-value kind out of IR-generation.
llvm-svn: 143908
2011-11-07 03:59:57 +00:00
John McCall fe96e0b6be Change the AST representation of operations on Objective-C
property references to use a new PseudoObjectExpr
expression which pairs a syntactic form of the expression
with a set of semantic expressions implementing it.
This should significantly reduce the complexity required
elsewhere in the compiler to deal with these kinds of
expressions (e.g. IR generation's special l-value kind,
the static analyzer's Message abstraction), at the lower
cost of specifically dealing with the odd AST structure
of these expressions.  It should also greatly simplify
efforts to implement similar language features in the
future, most notably Managed C++'s properties and indexed
properties.

Most of the effort here is in dealing with the various
clients of the AST.  I've gone ahead and simplified the
ObjC rewriter's use of properties;  other clients, like
IR-gen and the static analyzer, have all the old
complexity *and* all the new complexity, at least
temporarily.  Many thanks to Ted for writing and advising
on the necessary changes to the static analyzer.

I've xfailed a small diagnostics regression in the static
analyzer at Ted's request.

llvm-svn: 143867
2011-11-06 09:01:30 +00:00
Peter Collingbourne 95fd2ca69f Annotate imprecise FP division with fpaccuracy metadata
The OpenCL single precision division operation is only required to
be accurate to 2.5ulp.  Annotate the fdiv instruction with metadata
which signals to the backend that an imprecise divide instruction
may be used.

llvm-svn: 143136
2011-10-27 19:19:51 +00:00
Eric Christopher a9d3497b5e Add a new subclass of RunCleanupScopes that also handles creating new
lexical blocks for debug info.

llvm-svn: 142466
2011-10-19 00:43:52 +00:00
Eli Friedman df14b3a837 Initial implementation of __atomic_* (everything except __atomic_is_lock_free).
llvm-svn: 141632
2011-10-11 02:20:01 +00:00
Peter Collingbourne fe88342240 CUDA: IR generation support for kernel call expressions
llvm-svn: 141300
2011-10-06 18:29:37 +00:00
John McCall ff61303bd0 Mark calls to objc_retainBlock that don't result from casts
to id so that we can still optimize them appropriately.

llvm-svn: 141064
2011-10-04 06:23:45 +00:00
John McCall 248512a573 When performing an @throw in ARC, retain + autorelease
the pointer, being sure to do so before running cleanups
associated with that full-expression.  rdar://10042689

llvm-svn: 140945
2011-10-01 10:32:24 +00:00
Bill Wendling f0724e8e06 Throw the switch to convert clang to the new exception handling model!
This model uses the 'landingpad' instruction, which is pinned to the top of the
landing pad. (A landing pad is defined as the destination of the unwind branch
of an invoke instruction.) All of the information needed to generate the correct
exception handling metadata during code generation is encoded into the
landingpad instruction.

The new 'resume' instruction takes the place of the llvm.eh.resume intrinsic
call. It's lowered in much the same way as the intrinsic is.

llvm-svn: 140049
2011-09-19 20:31:14 +00:00
Bill Wendling 79a70e42b0 Refactor the load of the exception pointer and the exception selector from their
storage slot into helper functions.

llvm-svn: 139826
2011-09-15 18:57:19 +00:00
John McCall 99210dc9d1 Rewrite this loop to use partial destruction; I'm not sure it's
possible for that to matter right now, but eventually I think we'll
need to unify this better, and then it might.  Also, use a more
efficient looping structure.

llvm-svn: 139788
2011-09-15 06:49:18 +00:00
John McCall f4528ae063 Unify the decision of how to emit property getters and setters into a
single code path.  Use atomic loads and stores where necessary.  Load and
store anything of the appropriate size and alignment with primitive
operations instead of going through the call.

llvm-svn: 139580
2011-09-13 03:34:09 +00:00
John McCall b923ece770 Privatize the setter/getter call generation methods, plus some minor
modernization.  No functionality change.

llvm-svn: 139555
2011-09-12 23:06:44 +00:00
John McCall 7f16c42b9e Simplify the generation of Objective-C setters, at least a little.
Use a more portable heuristic for deciding when to emit a single
atomic store;  it's possible that I've lost information here, but
I'm not sure how much of the logic before was intentionally arch-specific
and how much was just not quite consistent.

llvm-svn: 139468
2011-09-10 09:17:20 +00:00
Julien Lerouge 5a6b6987dc Bring llvm.annotation* intrinsics support back to where it was in llvm-gcc: it can
annotate global and local variables, struct fields, or arbitrary statements (using
__builtin_annotation). rdar://8037476.

llvm-svn: 139423
2011-09-09 22:41:49 +00:00
John McCall a5efa7386a Track whether an AggValueSlot is potentially aliased, and do not
emit call results into potentially aliased slots.  This allows us
to properly mark indirect return slots as noalias, at the cost
of requiring an extra memcpy when assigning an aggregate call
result into a l-value.  It also brings us into compliance with
the x86-64 ABI.

llvm-svn: 138599
2011-08-25 23:04:34 +00:00
John McCall 8d6fc9583d Use stronger typing for the flags on AggValueSlot and require
creators to tell us whether something needs GC barriers.
No functionality change.

llvm-svn: 138581
2011-08-25 20:40:09 +00:00
John McCall 8e4c74bb7c Simplify EH control flow by observing that EH scopes form a simple
hierarchy of delegation and that EH selector values have the same
meaning everywhere in the function instead of being meaningful only
in the context of a specific selector.

This removes the need for routing edges through EH cleanups,
since a cleanup simply always branches to its enclosing scope.

llvm-svn: 137293
2011-08-11 02:22:43 +00:00
Chris Lattner 54b1677d23 Move ArrayRef to LLVM.h and eliminate now-redundant qualifiers, patch by Jon Mulder!
llvm-svn: 135855
2011-07-23 17:14:25 +00:00
Chris Lattner 62ff6e8b17 add raw_ostream and Twine to LLVM.h, eliminating a ton of llvm:: qualifications.
llvm-svn: 135577
2011-07-20 07:06:53 +00:00
Chris Lattner 01cf8db38b now that we have a centralized place to do so, add some using declarations for
some common llvm types: stringref and smallvector.  This cleans up the codebase
quite a bit.

llvm-svn: 135576
2011-07-20 06:58:45 +00:00
Chris Lattner 2192fe50da de-constify llvm::Type, patch by David Blaikie!
llvm-svn: 135370
2011-07-18 04:24:23 +00:00
Jay Foad 5bd375a6cc Convert CallInst and InvokeInst APIs to use ArrayRef.
llvm-svn: 135265
2011-07-15 08:37:34 +00:00
John McCall 97eab0a271 Okay, that rule about zero-length arrays applies to destroying
them, too.

llvm-svn: 135038
2011-07-13 08:09:46 +00:00
John McCall f47c069162 Aggressive dead code elimination.
llvm-svn: 135029
2011-07-13 03:03:51 +00:00
John McCall 98de3d74d2 Generalize the routine for destroying an object with static
storage duration, then explicitly exempt ownership-qualified
types from it.

llvm-svn: 135028
2011-07-13 03:01:35 +00:00
John McCall 30317fda63 Generalize Cleanup::Emit's "isForEH" parameter into a set
of flags.  No functionality change.

llvm-svn: 134997
2011-07-12 20:27:29 +00:00
John McCall 4bd0fb1f09 Switch field destruction over to use the new destroyer-based API
and kill a lot of redundant code.

llvm-svn: 134988
2011-07-12 16:41:08 +00:00
Chris Lattner d59d867ca5 insert a bitcast in the 'expand' case of argument passing when needed. This
fixes the -m32 build of oggenc.

llvm-svn: 134971
2011-07-12 06:29:11 +00:00
John McCall 5fcf8da33d Do full-expression cleanups in a much more sensible way that still lets
people write useful cleanup classes.

llvm-svn: 134942
2011-07-12 00:15:30 +00:00
John McCall 178360e1cd Fix a lot of problems with the partial destruction of arrays:
- an off-by-one error in emission of irregular array limits for
   InitListExprs
 - use an EH partial-destruction cleanup within the normal
   array-destruction cleanup
 - get the branch destinations right for the empty check
Also some refactoring which unfortunately obscures these changes.

llvm-svn: 134890
2011-07-11 08:38:19 +00:00
Chris Lattner a5f58b05e8 clang side to match the LLVM IR type system rewrite patch.
llvm-svn: 134831
2011-07-09 17:41:47 +00:00
John McCall 82fe67bb82 A number of array-related IR-gen cleanups.
- Emit default-initialization of arrays that were partially initialized
    with initializer lists with a loop, rather than emitting the default
    initializer N times;
  - support destroying VLAs of non-trivial type, although this is not
    yet exposed to users; and
  - support the partial destruction of arrays initialized with
    initializer lists when an initializer throws an exception.

llvm-svn: 134784
2011-07-09 01:37:26 +00:00
John McCall 55e1fbc848 LValue carries a type now, so simplify the main EmitLoad/Store APIs
by removing the redundant type parameter.

llvm-svn: 133860
2011-06-25 02:11:03 +00:00
John McCall 1bd2556ccc Honor objc_precise_lifetime in GC mode by feeding the value
in the variable to an inline asm which gets run when the variable
goes out of scope.

llvm-svn: 133840
2011-06-24 23:21:27 +00:00
John McCall 23c29fea92 Change the IR-generation of VLAs so that we capture bounds,
not sizes;  so that we use well-typed allocas;  and so that we
properly recurse through the full set of variably-modified types.

llvm-svn: 133827
2011-06-24 21:55:10 +00:00
Douglas Gregor 58df509fc0 When binding a reference to an Automatic Reference Counting temporary,
retain/release the temporary object appropriately. Previously, we
would only perform the retain/release operations when the reference
would extend the lifetime of the temporary, but this does the wrong
thing across calls.

llvm-svn: 133620
2011-06-22 16:12:01 +00:00
John McCall 6b0feb7ed6 Emit @finally blocks completely lazily instead of forcing their
existence by always threading an edge from the catchall.  Not doing
this was previously causing a crash in the very extreme case where
neither the normal cleanup nor the EH catchall was actually reachable:
we would delete the catchall entry block, which would cause us to
delete the entry block of the finally cleanup as well because the
cleanup logic would merge the blocks, which in turn triggered an assert
because later blocks in the finally would still be using values from the
entry.  Laziness turns out to be the most elegant solution to the problem.

llvm-svn: 133601
2011-06-22 02:32:12 +00:00
Douglas Gregor fe31481f68 Introduce a new AST node describing reference binding to temporaries.
MaterializeTemporaryExpr captures a reference binding to a temporary
value, making explicit that the temporary value (a prvalue) needs to
be materialized into memory so that its address can be used. The
intended AST invariant here is that a reference will always bind to a
glvalue, and MaterializeTemporaryExpr will be used to convert prvalues
into glvalues for that binding to happen. For example, given

  const int& r = 1.0;

The initializer of "r" will be a MaterializeTemporaryExpr whose
subexpression is an implicit conversion from the double literal "1.0"
to an integer value. 

IR generation benefits most from this new node, since it was
previously guessing (badly) when to materialize temporaries for the
purposes of reference binding. There are likely more refactoring and
cleanups we could perform there, but the introduction of
MaterializeTemporaryExpr fixes PR9565, a case where IR generation
would effectively bind a const reference directly to a bitfield in a
struct. Addresses <rdar://problem/9552231>.

llvm-svn: 133521
2011-06-21 17:03:29 +00:00
John McCall d463132f27 Objective-C fast enumeration loop variables are not retained in ARC, but
they should still be officially __strong for the purposes of errors, 
block capture, etc.  Make a new bit on variables, isARCPseudoStrong(),
and set this for 'self' and these enumeration-loop variables.  Change
the code that was looking for the old patterns to look for this bit,
and change IR generation to find this bit and treat the resulting         
variable as __unsafe_unretained for the purposes of init/destroy in
the two places it can come up.

llvm-svn: 133243
2011-06-17 06:42:21 +00:00
John McCall 1553b19067 Restore correct use of GC barriers.
llvm-svn: 133144
2011-06-16 04:16:24 +00:00
John McCall 31168b077c Automatic Reference Counting.
Language-design credit goes to a lot of people, but I particularly want
to single out Blaine Garst and Patrick Beard for their contributions.

Compiler implementation credit goes to Argyrios, Doug, Fariborz, and myself,
in no particular order.

llvm-svn: 133103
2011-06-15 23:02:42 +00:00
John McCall 9b382dde92 Convert Clang over to resuming from landing pads with llvm.eh.resume.
It's quite likely that this will explode, but I need to know how. :)

llvm-svn: 132269
2011-05-28 21:13:02 +00:00
Eli Friedman 380b8dad6b Back out r132209; it's breaking nightly tests.
llvm-svn: 132219
2011-05-27 21:32:17 +00:00
John McCall 63fb333fa4 Implement a new, much improved version of the cleanup hack. We just need
to be careful to emit landing pads that are always prepared to handle a
cleanup path.  This is correct mostly because of the fix to the LLVM
inliner, r132200.

llvm-svn: 132209
2011-05-27 20:01:14 +00:00
Devang Patel e7ce54074f Fix location of setter/getter synthesized for a property.
llvm-svn: 131701
2011-05-19 23:37:41 +00:00
John McCall 745ae2853c Make CGF.getContext() inlinable, because it's trivial, and optimize
hasAggregateLLVMType.

llvm-svn: 131375
2011-05-15 02:34:36 +00:00
Anders Carlsson c36783e8b9 Move code to emit the callee of an CXXOperatorCallExpr out into a separate function in CGClass.cpp
llvm-svn: 131075
2011-05-08 20:32:23 +00:00
Eli Friedman 49a94b1c7c Add an implementation of thunks for varargs methods. The implementation is a bit messy, but it is correct as long as the method in question doesn't use indirect gotos. A couple of possible alternative implementations are outlined in FIXME's in this patch. rdar://problem/8077308 .
llvm-svn: 130993
2011-05-06 17:27:27 +00:00
Alexis Hunt 61bc173784 Fully implement delegating constructors!
As far as I know, this implementation is complete but might be missing a
few optimizations. Exceptions and virtual bases are handled correctly.

Because I'm an optimist, the web page has appropriately been updated. If
I'm wrong, feel free to downgrade its support categories.

llvm-svn: 130642
2011-05-01 07:04:31 +00:00
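
A minimal example of the feature (hypothetical type):

  struct Socket {
    int fd;
    Socket(int f) : fd(f) {}
    // Delegating constructor: forwards to Socket(int).  The target constructor
    // runs to completion first; if the delegating body then throws, the object
    // counts as constructed, so its destructor (if any) is run.
    Socket() : Socket(-1) {}
  };
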
Chris Lattner bc204c8043 implement rdar://9289524 - case followed immediately by break results in empty IR block,
a -O0 code quality issue.

llvm-svn: 129652
2011-04-17 00:54:30 +00:00
Chris Lattner 57540c5be0 fix a bunch of comment typos found by codespell. Patch by
Luis Felipe Strano Moraes!

llvm-svn: 129559
2011-04-15 05:22:18 +00:00
Richard Smith 02e85f3bc5 Add support for C++0x's range-based for loops, as specified by the C++11 draft standard (N3291).
llvm-svn: 129541
2011-04-14 22:09:26 +00:00
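
For illustration, a minimal range-based for loop (hypothetical example):

  int sum_array(const int (&a)[4]) {
    int total = 0;
    for (int x : a)   // desugars to a loop over compiler-generated begin/end values
      total += x;
    return total;
  }
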
John McCall f9b056b002 After much contemplation, I've decided that we probably shouldn't "unique"
__block object copy/dispose helpers for C++ objects with those for
different variables with completely different semantics simply because
they happen to both be no more aligned than a pointer.

Found by inspection.

Also, internalize most of the helper generation logic within CGBlocks.cpp,
and refactor it to fit my peculiar aesthetic sense.

llvm-svn: 128618
2011-03-31 08:03:29 +00:00
John McCall 7306487077 Move all the significant __block code into CGBlocks.cpp. No functionality
change.

llvm-svn: 128608
2011-03-31 01:59:53 +00:00
Ken Dyck 3fb4c8920d Convert OffsetFromNearestVBast parameter of InitializeVTablePointer(s) to
CharUnits. No change in functionality intended.

llvm-svn: 128129
2011-03-23 01:04:18 +00:00
John McCall 32ea969415 Use a slightly more semantic interface for emitting call arguments.
llvm-svn: 127494
2011-03-11 20:59:21 +00:00
John McCall a738c25f5e Use the "undergoes default argument promotion" bit on parameters to
simplify the logic of initializing function parameters so that we don't need
both a variable declaration and a type in FunctionArgList.  This also means
that we need to propagate the CGFunctionInfo down in a lot of places rather
than recalculating it from the FAL.  There's more we can do to eliminate
redundancy here, and I've left FIXMEs behind to do it.

llvm-svn: 127314
2011-03-09 04:27:21 +00:00
John McCall 91ca10fe64 Extract a function to emit an arbitrary expression as if it were the initializer
for a local variable.

llvm-svn: 127227
2011-03-08 09:11:50 +00:00
Devang Patel d6ffebb077 DebugInfo can be enabled or disabled at function level (e.g. using an attribute). However, at module level it is determined by a command line option, and the state of that option does not change during compilation. Make this layering explicit and fix accidental cases where the code generator was checking whether the module has debug info enabled instead of checking whether debug info is enabled for this particular function.
llvm-svn: 127165
2011-03-07 18:45:56 +00:00
Devang Patel 68a1525290 Encode argument numbering in debug info so that code generator can emit them in order.
This fixes few blocks.exp regressions.

llvm-svn: 126960
2011-03-03 20:13:15 +00:00
Tilmann Scheller 99cc30c371 Revert "Add CC_Win64ThisCall and set it in the necessary places."
This reverts commit 126863.

llvm-svn: 126886
2011-03-02 21:36:49 +00:00
Devang Patel bd6f7f9770 revert r126858.
llvm-svn: 126874
2011-03-02 20:31:22 +00:00
Tilmann Scheller 454464b491 Add CC_Win64ThisCall and set it in the necessary places.
llvm-svn: 126863
2011-03-02 19:36:23 +00:00
Devang Patel 31e5fb52d1 Encode argument numbering in debug info so that code generator can emit them in order.
This fixes few blocks.exp regressions.

Reapply r126795 with a fix (one character change) for gdb testsuite regressions.

llvm-svn: 126858
2011-03-02 19:11:22 +00:00
Devang Patel a54696de8a Revert r126794.
llvm-svn: 126848
2011-03-02 17:54:58 +00:00
Devang Patel 3bc2dedb40 Encode argument numbering in debug info so that code generator can emit them in order.
This fixes few blocks.exp regressions.

llvm-svn: 126795
2011-03-01 22:59:40 +00:00
Chris Lattner 29911cc2f5 Add some helper methods that will be used in my next patch.
llvm-svn: 126596
2011-02-28 00:18:40 +00:00
Chris Lattner 41c6ab538a Change the interface to ConstantFoldsToSimpleInteger to not encode
a bool + success into one tri-state integer, simplifying things.

llvm-svn: 126592
2011-02-27 23:02:32 +00:00
John McCall 9e2e22f5c6 Establish the iteration variable of an ObjC for-in loop before
emitting the collection expression.  Fixes some really, really broken
code.

llvm-svn: 126193
2011-02-22 07:16:58 +00:00
John McCall c533cb7008 Reorganize the emission of local variables.
llvm-svn: 126189
2011-02-22 06:44:22 +00:00
Anders Carlsson 08ce5ed1b1 Add a LangOptions::areExceptionsEnabled and start using it.
llvm-svn: 126062
2011-02-20 00:20:27 +00:00
Fariborz Jahanian 302a3d4e4d Objective-c armv7 API for atomic properties of
scalar types. // rdar://7761305

llvm-svn: 125946
2011-02-18 19:15:13 +00:00
John McCall e6be5e1c0d Remove the "conditional save" hashtables from IR generation.
llvm-svn: 125761
2011-02-17 19:02:56 +00:00
John McCall c07a0c7e48 Change the representation of GNU ?: expressions to use a different expression
class and to bind the shared value using OpaqueValueExpr.  This fixes an
unnoticed problem with deserialization of these expressions where the
deserialized form would lose the vital pointer-equality trait;  or rather,
it fixes it because this patch also does the right thing for deserializing
OVEs.

Change OVEs to not be a "temporary object" in the sense that copy elision is
permitted.

This new representation is not totally unawkward to work with, but I think
that's really part and parcel with the semantics we're modelling here.  In
particular, it's much easier to fix things like the copy elision bug and to
make the CFG look right.

I've tried to update the analyzer to deal with this in at least some          
obvious cases, and I think we get a much better CFG out, but the printing
of OpaqueValueExprs probably needs some work.

llvm-svn: 125744
2011-02-17 10:25:35 +00:00
Chris Lattner c8e630e4db Step #1/N of implementing support for __label__: split labels into
LabelDecl and LabelStmt.  There is a 1-1 correspondence between the
two, but this simplifies a bunch of code by itself.  This is because
labels are the only place where we previously had references to random
other statements, causing grief for AST serialization and other stuff.

This does cause one regression (attr(unused) doesn't silence unused
label warnings) which I'll address next.

This does fix some minor bugs:
1. "The only valid attribute " diagnostic was capitalized.
2. Various diagnostics printed as ''labelname'' instead of 'labelname'
3. This reduces duplication of label checking between functions and blocks.

Review appreciated, particularly for the cindex and template bits.

llvm-svn: 125733
2011-02-17 07:39:24 +00:00
John McCall 1bf5846abf Save a copy expression for non-trivial copy constructions of catch variables.
llvm-svn: 125661
2011-02-16 08:02:54 +00:00
Devang Patel 25468059e5 Simplify test to check an aggregate argument that has a non-trivial constructor or destructor.
This patch rewrites r125142.

llvm-svn: 125632
2011-02-16 01:11:51 +00:00
John McCall e3dc1707b5 Assorted cleanup:
- Have CGM precompute a number of commonly-used types
  - Have CGF copy that during initialization instead of recomputing them
  - Use TBAA info when initializing a parameter variable
  - Refactor the scalar ++/-- code

llvm-svn: 125562
2011-02-15 09:22:45 +00:00
Roman Divacky 178e0160b7 Implement mcount profiling, enabled via -pg.
llvm-svn: 125282
2011-02-10 16:52:03 +00:00
Devang Patel 14524e0f24 If an aggregate argument is passed indirectly because it has a non-trivial
destructor or copy constructor, then let debug info know about it.

Radar 8945514.

llvm-svn: 125142
2011-02-09 00:37:30 +00:00
John McCall ad7c5c1657 Reorganize CodeGen{Function,Module} to eliminate the unfortunate
Block{Function,Module} base class.  Minor other refactorings.

Fixed a few address-space bugs while I was there.

llvm-svn: 125085
2011-02-08 08:22:06 +00:00
John McCall 351762cda2 A few more tweaks to the blocks AST representation:
- BlockDeclRefExprs always store VarDecls
  - BDREs no longer store copy expressions
  - BlockDecls now store a list of captured variables, information about
    how they're captured, and a copy expression if necessary
    
With that in hand, change IR generation to use the captures data in       
blocks instead of walking the block independently.        

Additionally, optimize block layout by emitting fields in descending
alignment order, with a heuristic for filling in words when alignment
of the end of the block header is insufficient for the most aligned
field.

llvm-svn: 125005
2011-02-07 10:33:21 +00:00
Fariborz Jahanian 7f6f81ba9b Clean up of -fapple-kext abi code. No change otherwise.
llvm-svn: 124807
2011-02-03 19:27:17 +00:00
Fariborz Jahanian 265c325ef3 -fapple-kext support for indirect call to virtuals dtors - wip.
llvm-svn: 124701
2011-02-01 23:22:34 +00:00
John McCall cb5f77f046 Reorganize the value-dominance metaprogram and introduce a specialization
for CodeGen's RValue type.

llvm-svn: 124483
2011-01-28 10:53:53 +00:00
John McCall e4df6c8d96 Convert the exception-freeing cleanup over to the conditional cleanups code,
fixing a crash which probably nobody was ever going to see.  In doing so,
fix a horrendous number of problems with the conditional-cleanups code.
Also, make conditional cleanups re-use the cleanup's activation variable,
which avoids some unfortunate repetitiveness.

llvm-svn: 124481
2011-01-28 08:37:24 +00:00
Fariborz Jahanian 2f2fa7284b Fixes an IRgen bug where __block variable is
referenced in the block-literal initializer
of that variable. // rdar://8893785

llvm-svn: 124332
2011-01-26 23:08:27 +00:00
John McCall f256eb54a2 Fix some obvious bugs in the conditional-cleanup code and then make the
dtor cleanup use it.

llvm-svn: 124309
2011-01-26 19:15:39 +00:00
John McCall ce1de6172c Better framework for conditional cleanups; untested as yet.
I'm separately committing this because it incidentally changes some
block orderings and minor IR issues, like using a phi instead of
an unnecessary alloca.

llvm-svn: 124277
2011-01-26 04:00:11 +00:00
Fariborz Jahanian 47609b088c apple kext abi requires all vf calls, including qualified
vf calls, be made indirect. This patch is towards that goal.

llvm-svn: 123922
2011-01-20 17:19:02 +00:00
Peter Collingbourne 0ff0b37627 Move name mangling support from CodeGen to AST. In the
process, perform a number of refactorings:

- Move MiscNameMangler member functions to MangleContext
- Remove GlobalDecl dependency from MangleContext
- Make MangleContext abstract and move Itanium/Microsoft functionality
  to their own classes/files
- Implement ASTContext::createMangleContext and have CodeGen use it

No (intended) functionality change.

llvm-svn: 123386
2011-01-13 18:57:25 +00:00
Bob Wilson 482afae812 Stop using builtins for the "_lane" variants of saturating multiply intrinsics.
Remove the "splat" parameter from the EmitNeonCall function, since it is no
longer needed.

llvm-svn: 121300
2010-12-08 22:37:56 +00:00
Bob Wilson 210f6ddecc Stop using a clang builtin for Neon vdup_lane intrinsics.
llvm-svn: 121191
2010-12-07 22:40:02 +00:00
John McCall 5d41378146 Rename CXXExprWithTemporaries -> ExprWithCleanups; there's no theoretical
reason this is limited to C++, and it's certainly not limited to temporaries.

llvm-svn: 120996
2010-12-06 08:20:24 +00:00
John McCall a2342eb857 Fix a bug in the emission of __real/__imag l-values on scalar operands.
Fix a bug in the emission of complex compound assignment l-values.
Introduce a method to emit an expression whose value isn't relevant.
Make that method evaluate its operand as an l-value if it is one.
Fixes our volatile compliance in C++.

llvm-svn: 120931
2010-12-05 02:00:02 +00:00
Francois Pichet d583da04d0 More anonymous struct/union redesign. This one deals with anonymous field used in a constructor initializer list:
struct X {
  X() : au_i1(123) {}
  union {
    int au_i1;
    float au_f1;
  };
};

clang will now deal with au_i1 explicitly as an IndirectFieldDecl.

llvm-svn: 120900
2010-12-04 09:14:42 +00:00
John McCall 0692a32a3e Test case for the l-value base only being evaluated once.
Also, move the l-value emission code into CGObjC.cpp and teach it, for
completeness, to store away self for a super send.

Also, inline the super cases for property gets and sets and make them
use the correct result type for implicit getter/setter calls.

llvm-svn: 120887
2010-12-04 03:11:00 +00:00
John McCall f3eb96fccf Kill the KVC l-value kind and calculate the base expression when emitting
the l-value.

llvm-svn: 120884
2010-12-04 02:32:38 +00:00
Fariborz Jahanian 50198098b9 IR Gen. part of API support for __block cxx
objects imported into blocks. //rdar://8594790.
Will have a test case coming (as well as one
sent to llvm test suite).

llvm-svn: 120713
2010-12-02 17:02:11 +00:00
Chris Lattner 27a3631bac Improve codegen for initializer lists to use memset more aggressively
when an initializer is variable (I handled the constant case in a previous
patch).  This has three pieces:

1. Enhance AggValueSlot to have a 'isZeroed' bit to tell CGExprAgg that
   the memory being stored into has previously been memset to zero.
2. Teach CGExprAgg to not emit stores of zero to isZeroed memory.
3. Teach CodeGenFunction::EmitAggExpr to scan initializers to determine
   whether it is profitable to emit a memset + individual stores vs.
   stores for everything.

The heuristic used is that a global has to be more than 16 bytes and
has to be 3/4 zero to be a candidate for this xform.  The two testcases
are illustrative of the scenarios this catches.  We now codegen test9 into:

 call void @llvm.memset.p0i8.i64(i8* %0, i8 0, i64 400, i32 4, i1 false)
 %.array = getelementptr inbounds [100 x i32]* %Arr, i32 0, i32 0
 %tmp = load i32* %X.addr, align 4
 store i32 %tmp, i32* %.array

and test10 into:

  call void @llvm.memset.p0i8.i64(i8* %0, i8 0, i64 392, i32 8, i1 false)
  %tmp = getelementptr inbounds %struct.b* %S, i32 0, i32 0
  %tmp1 = getelementptr inbounds %struct.a* %tmp, i32 0, i32 0
  %tmp2 = load i32* %X.addr, align 4
  store i32 %tmp2, i32* %tmp1, align 4
  %tmp5 = getelementptr inbounds %struct.b* %S, i32 0, i32 3
  %tmp10 = getelementptr inbounds %struct.a* %tmp5, i32 0, i32 4
  %tmp11 = load i32* %X.addr, align 4
  store i32 %tmp11, i32* %tmp10, align 4

Previously we produced 99 stores of zero for test9 and also tons for test10.
This xforms should substantially speed up -O0 builds when it kicks in as well
as reducing code size and optimizer heartburn on insane cases.  This resolves
PR279.

llvm-svn: 120692
2010-12-02 07:07:26 +00:00
John McCall b7bd14fa08 Simplify the ASTs by consolidating ObjCImplicitGetterSetterExpr and ObjCPropertyRefExpr
into the latter.

llvm-svn: 120643
2010-12-02 01:19:52 +00:00
Anders Carlsson 3378d870d2 BuildVirtualCall doesn't need to take a reference to a pointer.
llvm-svn: 120252
2010-11-28 17:53:32 +00:00
John McCall 4f29b49de1 Support compound complex operations as l-values in C++. Add a test
case based on CodeGen/volatile-1.c which tests the current C++
semantics, and note the many, many places we fall short of them.

llvm-svn: 119402
2010-11-16 23:07:28 +00:00
Fariborz Jahanian a3e54bd33e Implements __block API for c++ objects. There is still
an issue with the runtime which I am discussing with Blaine.
This is wip (so no test yet).

llvm-svn: 119368
2010-11-16 19:29:39 +00:00
John McCall 07bb19667a Simplify some complex emission and implement correct semantics for
assignment to volatiles in C.  This in effect reverts some of mjs's
work in and around r72572.  Basically, the C++ standard is quite
clear, except that it lies about volatile behavior approximating
C's, whereas the C standard is almost actively misleading.

llvm-svn: 119344
2010-11-16 10:08:07 +00:00
Fariborz Jahanian e988bdac44 Block API patch to do copy ctor of copied-in cxx objects in
copy helper function and dtor of copied cxx objects
in dispose helper functions. __block variables
TBD next.

llvm-svn: 119011
2010-11-13 21:53:34 +00:00
John McCall cdf7ef5437 Simplify the logic for emitting guard variables for template static
data members by delaying the emission of the initializer until after
linkage and visibility have been set on the global.  Also, don't
emit a guard unless the variable actually ends up with vague linkage,
and don't use thread-safe statics in any case.

llvm-svn: 118336
2010-11-06 09:44:32 +00:00
John McCall 3a7f6926d1 Restore r117403 (fixing IR gen for bool atomics), this time being less
aggressive about the form we expect bools to be in.  I don't really have
time to fix all the sources right now.

llvm-svn: 117486
2010-10-27 20:58:56 +00:00
Rafael Espindola 9d798a07e4 Revert r117403 as it caused PR8480.
llvm-svn: 117456
2010-10-27 17:13:49 +00:00
John McCall 6bde954f47 Extract procedures to do scalar-to-memory and memory-to-scalar conversions
in IR gen, and use those to fix a correctness issue with bool atomic
intrinsics.  rdar://problem/8461234

llvm-svn: 117403
2010-10-26 22:09:15 +00:00
Dan Gohman 8fc50c296a Factor out the code for emitting code to load vtable pointer members
so that it's done in one place.

llvm-svn: 117386
2010-10-26 18:44:08 +00:00
Michael J. Spencer f5a1fbcdf3 Fix Whitespace.
llvm-svn: 116798
2010-10-19 06:39:39 +00:00
John McCall 1c9c3fd50a Death to blocks, or at least the word "block" in one particular obnoxiously
ambiguous context.

llvm-svn: 116567
2010-10-15 04:57:14 +00:00
Dan Gohman 947c9af774 Experimental TBAA support.
This enables metadata generation by default, however the TBAA pass
in the optimizer is still disabled for now.

llvm-svn: 116536
2010-10-14 23:06:10 +00:00
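
A sketch of the kind of assumption the metadata enables (hypothetical function):

  // Under type-based alias analysis, a store through 'float *' cannot modify an
  // 'int' object, so the optimizer may reuse the first load of *ip for the
  // second use instead of reloading it.
  int reload(int *ip, float *fp) {
    int a = *ip;
    *fp = 1.0f;
    return a + *ip;
  }
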
Fariborz Jahanian 681c0754d9 Eliminate usage of ObjCSuperExpr used for
'super' as receiver of property or a setter/getter
methods. //rdar: //8525788

llvm-svn: 116483
2010-10-14 16:04:05 +00:00
Bill Wendling 65b2a965fb Add target implementations for the X86 builtins:
__builtin_ia32_vec_init_v8qi
  __builtin_ia32_vec_init_v4hi
  __builtin_ia32_vec_init_v2si

They are lowered to bitcasts. (These are already tested by the gcc testsuite.)
<rdar://problem/8529957>

llvm-svn: 116147
2010-10-09 08:47:25 +00:00
John McCall a2fabff4f6 Permit constant evaluation of const floating-point variables with
constant initializers.

llvm-svn: 116138
2010-10-09 01:34:31 +00:00
Bill Wendling 11191f11b8 Accidentally committed some temporary changes on my branch when reverting patches.
llvm-svn: 114936
2010-09-28 01:28:56 +00:00
Bill Wendling 6d8c442e08 Temporarily revert 114929 114925 114924 114921. It looked like they (or at least
one of them) was causing a series of failures:

http://google1.osuosl.org:8011/builders/clang-x86_64-darwin10-selfhost/builds/4518

svn merge -c -114929 https://llvm.org/svn/llvm-project/cfe/trunk
--- Reverse-merging r114929 into '.':
U    include/clang/Sema/Sema.h
U    include/clang/AST/DeclCXX.h
U    lib/Sema/SemaDeclCXX.cpp
U    lib/Sema/SemaTemplateInstantiateDecl.cpp
U    lib/Sema/SemaDecl.cpp
U    lib/Sema/SemaTemplateInstantiate.cpp
U    lib/AST/DeclCXX.cpp
svn merge -c -114925 https://llvm.org/svn/llvm-project/cfe/trunk
--- Reverse-merging r114925 into '.':
G    include/clang/AST/DeclCXX.h
G    lib/Sema/SemaDeclCXX.cpp
G    lib/AST/DeclCXX.cpp
svn merge -c -114924 https://llvm.org/svn/llvm-project/cfe/trunk
--- Reverse-merging r114924 into '.':
G    include/clang/AST/DeclCXX.h
G    lib/Sema/SemaDeclCXX.cpp
G    lib/Sema/SemaDecl.cpp
G    lib/AST/DeclCXX.cpp
U    lib/AST/ASTContext.cpp
svn merge -c -114921 https://llvm.org/svn/llvm-project/cfe/trunk
--- Reverse-merging r114921 into '.':
G    include/clang/AST/DeclCXX.h
G    lib/Sema/SemaDeclCXX.cpp
G    lib/Sema/SemaDecl.cpp
G    lib/AST/DeclCXX.cpp

llvm-svn: 114933
2010-09-28 01:09:49 +00:00
Fariborz Jahanian 8fb87aec78 Patch implements passing arrays to functions expecting
a VLA. Implements PR7827.

llvm-svn: 114737
2010-09-24 17:30:16 +00:00
Fariborz Jahanian cb75021034 IRgen for gnu extension's conditional lvalue expression
with missing LHS. radar 8453812. Executable test is checked 
into llvm test suite.

llvm-svn: 114457
2010-09-21 18:32:21 +00:00
Fariborz Jahanian 8162d4ad31 Implements in IRgen gnu extensions missing LHS for
complex conditionals. Radar 8453812.

llvm-svn: 114376
2010-09-20 23:50:22 +00:00
John McCall f09d96f76d Adjust a fixup's starting branch if it's being resolved because
it reached the outermost scope and it hasn't yet been forwarded
to a cleanup.  Fixed PR8175.

llvm-svn: 114259
2010-09-18 02:24:39 +00:00
Fariborz Jahanian 5bbd1b0051 Patch to add IRgen support for Gnu's conditional operator
extension when missing LHS. This patch covers scalar
conditionals only. Others are wip.
(pr7726, radar 8353567).

llvm-svn: 114182
2010-09-17 15:51:28 +00:00
John McCall 7f9c92a9a0 When emitting a new-expression inside a conditional expression,
the cleanup might not be dominated by the allocation code.
In this case, we have to store aside all the delete arguments
in case we need them later.  There's room for optimization here
in cases where we end up not actually needing the cleanup in
different branches (or being able to pop it after the
initialization code).

Also make sure we only call this operator delete along the path
where we actually allocated something.

Fixes rdar://problem/8439196.

llvm-svn: 114145
2010-09-17 00:50:28 +00:00
Fariborz Jahanian b60e70f963 Patch to move RequiresGCollection bit to
AggValueSlot slot.

llvm-svn: 114045
2010-09-16 00:20:07 +00:00
John McCall 7a626f63f7 Implement generalized copy elision.  The main obstacle here is that
IR-generation must be much more careful about making sure that exactly
one piece of code is responsible for the lifetime of every aggregate
slot.  The easiest way to do that was to bundle up the information
we care about for aggregate slots into a new structure which demands
that its creators at least consider the question.

I could probably be convinced that the ObjC 'needs GC' bit should
be rolled into this structure.

llvm-svn: 113962
2010-09-15 10:14:12 +00:00
John McCall 824c2f537c Implement the EH cleanup to call 'operator delete' if a new-expression throws
(but not if destructors associated with the full-expression throw).

llvm-svn: 113836
2010-09-14 07:57:04 +00:00
Argyrios Kyrtzidis 9efa1ce145 Fix VLA miscompilation.
llvm.stacksave/llvm.stackrestore wasn't emitted for VLAs in inner scopes.
Fixes rdar://8403108.

llvm-svn: 113822
2010-09-14 00:42:34 +00:00
John McCall 68ff03728a Implement ARM static local initialization guards, which are more compact than
Itanium guards and use a slightly different compiled-in API.

llvm-svn: 113330
2010-09-08 01:44:27 +00:00
John McCall 8ed55a54fd Abstract IR generation of array cookies into the C++ ABI class and
implement ARM array cookies.  Also fix a few unfortunate bugs:
  - throwing dtors in deletes prevented the allocation from being deleted
  - adding the cookie to the new[] size was not being considered for
    overflow (and, more seriously, was screwing up the earlier checks)
  - deleting an array via a pointer to array of class type was not
    causing any destructors to be run and was passing the unadjusted
    pointer to the deallocator
  - lots of address-space problems, in case anyone wants to support
    free store in a variant address space :)

llvm-svn: 112814
2010-09-02 09:58:18 +00:00
John McCall 5d865c3292 Teach IR generation to return 'this' from constructors and destructors
under the ARM ABI.

llvm-svn: 112588
2010-08-31 07:33:07 +00:00
Daniel Dunbar 5c816378f8 IRgen: Set the alignment correctly when creating an LValue for a decl.
- Fixes PR5598.
 - Review appreciated.

llvm-svn: 111726
2010-08-21 04:20:22 +00:00
Daniel Dunbar 49e5d12e59 CodeGenFunction: Eliminate unused MakeQualifiers() function.
llvm-svn: 111725
2010-08-21 04:13:07 +00:00
Daniel Dunbar 226bddaff1 IRgen/CGValue: Give MakeAddrLValue() an alignment argument, and eliminate old form of MakeAddr().
llvm-svn: 111723
2010-08-21 03:58:45 +00:00
Daniel Dunbar b1d94a98dd IRgen: Eliminate EmitPredefinedFunctionName(), it doesn't need to be factored out.
llvm-svn: 111715
2010-08-21 03:01:12 +00:00
Daniel Dunbar 93b00a98a3 IRgen: Add an LValue::MakeAddr variant which takes a type and uses that to build
the qualifiers.

Also, add CodeGenFunction::MakeAddrLValue() helper function which passes in the
ASTContext.

llvm-svn: 111714
2010-08-21 02:53:44 +00:00
Daniel Dunbar 0381634a61 IRgen: Change Emit{Load,Store}OfScalar to take a required Alignment argument and
update callers as best I can.
 - This is a work in progress; our alignment handling is very horrible / sketchy -- I am just aiming for monotonic improvement.
 - Serious review appreciated.

llvm-svn: 111707
2010-08-21 02:24:36 +00:00
John McCall 32427785c0 More cleanup enabling.
llvm-svn: 111070
2010-08-14 07:46:19 +00:00
John McCall 612942d65f Sketch out a framework for delaying the activation of a cleanup.
Not yet complete or used.

llvm-svn: 111044
2010-08-13 21:20:51 +00:00
Devang Patel dc866e119c Simplify code and add comments, in code that generate debug info for constant integer globals, based on Chris's feedback.
llvm-svn: 110694
2010-08-10 17:53:33 +00:00
Devang Patel e03edfd3e7 Even if a constant's evaluated value is used, emit debug info for the constant variable.
llvm-svn: 110660
2010-08-10 07:24:25 +00:00
John McCall cf14216509 Store inheritance paths after CastExprs instead of inside them.
This takes some trickery since CastExpr has subclasses (and indeed,
is abstract).

Also, smoosh the CastKind into the bitfield from Expr.

Drops two words of storage from Expr in the common case of expressions
which don't need inheritance paths.  Avoids a separate allocation and
another word of overhead in cases needing inheritance paths.  Also has
the advantage of not leaking memory, since destructors for AST nodes are
never run.

llvm-svn: 110507
2010-08-07 06:22:56 +00:00
Nate Begeman ad5dd42817 vdup_lane was missing
<rdar://problem/8278732>

llvm-svn: 110420
2010-08-06 01:24:57 +00:00
Fariborz Jahanian afa3c0a875 More objc block variable layout info work.
llvm-svn: 110239
2010-08-04 18:44:59 +00:00
Fariborz Jahanian c05349e53a Some early work for providing block layout info
for objective-c/c++ blocks (NeXt runtime).

llvm-svn: 110213
2010-08-04 16:57:49 +00:00
John McCall ba80390307 When creating a jump destination, its scope should be the scope of the
enclosing normal cleanup, not the top of the EH stack.  I'm *really*
surprised this hasn't been causing more problems.

Fixes rdar://problem/8231514.

llvm-svn: 109569
2010-07-28 01:07:35 +00:00
John McCall ad5d61e227 Revise cleanup IR generation to fix a major bug with cleanups (PR7686)
as well as some significant asymptotic inefficiencies with threading
multiple jumps through deep cleanups.

llvm-svn: 109274
2010-07-23 21:56:41 +00:00
John McCall cda666ccd8 Rename LazyCleanup -> Cleanup. No functionality change for these last three
commits.

llvm-svn: 109000
2010-07-21 07:22:38 +00:00
John McCall 20141f2d8c Rip out EHCleanupScope.
llvm-svn: 108999
2010-07-21 07:11:21 +00:00
John McCall 7535ee0352 Kill the CleanupBlock API.
llvm-svn: 108998
2010-07-21 07:04:01 +00:00
John McCall 8680f87d99 Switch the destructor for a temporary arising from a reference binding over to
using a lazy cleanup.

llvm-svn: 108994
2010-07-21 06:29:51 +00:00
John McCall 906da4bb52 Switch finally cleanups over to being lazy cleanups. We get basically nothing
from the laziness features here except better block ordering, but it removes yet
another CleanupBlock use.

llvm-svn: 108990
2010-07-21 05:47:49 +00:00
John McCall f99a631e4e Implement proper base/member destructor EH chaining.
llvm-svn: 108989
2010-07-21 05:30:47 +00:00
Douglas Gregor 05fc5be32f Implement zero-initialization for array new when there is an
initializer of (). Make sure to use a simple memset() when we can, or
fall back to generating a loop when a simple memset will not
suffice. Fixes <rdar://problem/8212208>, a regression due to my work
in r107857.

llvm-svn: 108977
2010-07-21 01:10:17 +00:00
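
A small sketch of the case being fixed (hypothetical type; caller owns the delete[]):

  struct Point { int x, y; };   // trivial type

  Point *make_points(unsigned n) {
    // The '()' initializer requests value-initialization of every element.
    // Per the commit, Clang emits a single memset when that suffices (as here)
    // and falls back to an element-wise loop otherwise.
    return new Point[n]();
  }
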
John McCall 8f51041184 Add a little helper method which will be useful soon.
llvm-svn: 108972
2010-07-21 00:40:03 +00:00
Chris Lattner f2f3870189 Follow the implementation approach suggested by PR6687,
which generates more efficient and more obviously conformant
code.  We now test for overflow of the multiply then force
the result to -1 if so.  On X86, this generates nice code
like this:

__Z4testl:                              ## @_Z4testl
## BB#0:                                ## %entry
	subl	$12, %esp
	movl	$4, %eax
	mull	16(%esp)
	testl	%edx, %edx
	movl	$-1, %ecx
	cmovel	%eax, %ecx
	movl	%ecx, (%esp)
	call	__Znam
	addl	$12, %esp
	ret

llvm-svn: 108927
2010-07-20 21:07:09 +00:00
Chris Lattner 26008e07de implement rdar://5739832 - operator new should check for overflow in multiply,
causing clang to compile this code into something that correctly throws a
length error, fixing a potential integer overflow security attack:

void *test(long N) {
  return new int[N];
}

int main() {
  test(1L << 62);
}

We do this even when exceptions are disabled, because it is better for the
code to abort than for the attack to succeed.

This is heavily based on a patch that Fariborz wrote.

llvm-svn: 108915
2010-07-20 20:19:24 +00:00
Eli Friedman eca55afea3 Fix for PR3800: make sure not to evaluate the expression for a read-write
asm operand twice.

llvm-svn: 108489
2010-07-16 00:55:21 +00:00
John McCall 11e577b99a Work around an obnoxious GCC warning by changing semantics in a hopefully-
harmless way.

llvm-svn: 108295
2010-07-13 23:19:49 +00:00
John McCall 5c08ab956b Allow for the possibility that __cxa_end_catch might throw for a catch-all block
or a catch of a record type by value or reference.  Also convert this to a
lazy cleanup.

llvm-svn: 108287
2010-07-13 22:12:14 +00:00
John McCall 2b7fc3828e Teach IR generation how to lazily emit cleanups. This has a lot of advantages,
mostly in avoiding unnecessary work at compile time but also in producing more
sensible block orderings.

Move the destructor cleanups for local variables over to use lazy cleanups.
Eventually all cleanups will do this;  for now we have some awkward code
duplication.

Tell IR generation just to never produce landing pads in -fno-exceptions.
This is a much more comprehensive solution to a problem which previously was
half-solved by checks in most cleanup-generation spots.

llvm-svn: 108270
2010-07-13 20:32:21 +00:00
Douglas Gregor 747eb7840a Reinstate the fix for PR7556. A silly use of isTrivial() was
suppressing copies of objects with trivial copy constructors.

llvm-svn: 107857
2010-07-08 06:14:04 +00:00
Douglas Gregor e182370eda Revert r107828 and r107827, the fix for PR7556, which seems to be
breaking bootstrap on Linux.

llvm-svn: 107837
2010-07-07 23:37:33 +00:00