Commit Graph

Richard Smith e78fac5126 PR36992: do not store beyond the dsize of a class object unless we know
the tail padding is not reused.

We track on the AggValueSlot (and through a couple of other
initialization actions) whether we're dealing with an object that might
share its tail padding with some other object, so that we can avoid
emitting stores into the tail padding if that's the case. We still
widen stores into tail padding when we can do so.
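
For illustration, a minimal sketch of the situation (assuming the Itanium
C++ ABI, where a derived class may be laid out inside its base class's
tail padding; the types here are hypothetical):

    struct B {
      int i;   // offset 0
      char c;  // offset 4; dsize(B) = 5, sizeof(B) = 8
    };
    struct D : B {
      char d;  // placed at offset 5, inside B's tail padding
    };
    // Writing a full sizeof(B) = 8 bytes when initializing the B subobject
    // of a D would clobber D::d; stores must stop at dsize(B) = 5.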

Differential Revision: https://reviews.llvm.org/D45306

llvm-svn: 329342
2018-04-05 20:52:58 +00:00
Yaxun Liu d9389827d2 CodeGen: Reduce LValue and CallArgList memory footprint before recommitting r326946
Recent change r326946 (https://reviews.llvm.org/D34367) causes a regression in Eigen due to the
increased memory footprint of CallArg.

This patch reduces the LValue size from 112 to 96 bytes and reduces the inline argument count of
CallArgList from 16 to 8.

It has been verified that this will let the added deep AST tree test pass with r326946.

In the long run, CallArg or LValue memory footprint should be further optimized.

Differential Revision: https://reviews.llvm.org/D44445

llvm-svn: 327515
2018-03-14 15:02:28 +00:00
Ivan A. Kosarev b9c59f36fc [CodeGen] Propagate may-alias'ness of lvalues with TBAA info
This patch fixes various places in clang to propagate may-alias
TBAA access descriptors during construction of lvalues, thus
eliminating the need for the LValueBaseInfo::MayAlias flag.

This is part of D38126 reworked to be a separate patch to
simplify review.

Differential Revision: https://reviews.llvm.org/D39008

llvm-svn: 316988
2017-10-31 11:05:34 +00:00
Ivan A. Kosarev d17f12a35d [CodeGen] Pass TBAA info along with lvalue base info everywhere
This patch addresses the rest of the cases where we pass lvalue
base info, but do not provide corresponding TBAA info.

This patch should not introduce any functional changes.

This is part of D38126 reworked to be a separate patch to make
reviewing easier.

Differential Revision: https://reviews.llvm.org/D38945

llvm-svn: 315986
2017-10-17 10:17:43 +00:00
Alexander Richardson 6d989436d0 Convert clang::LangAS to a strongly typed enum
Summary:
Convert clang::LangAS to a strongly typed enum

Currently both clang AST address spaces and target specific address spaces
are represented as unsigned which can lead to subtle errors if the wrong
type is passed. It is especially confusing in the CodeGen files as it is
not possible to see what kind of address space should be passed to a
function without looking at the implementation.
I originally made this change for our LLVM fork for the CHERI architecture
where we make extensive use of address spaces to differentiate between
capabilities and pointers. When merging the upstream changes I usually
run into some test failures or runtime crashes because the wrong kind of
address space is passed to a function. By converting the LangAS enum to a
C++11 enum class we can catch these errors at compile time. Additionally, it is now
obvious from the function signature which kind of address space it expects.
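
As a sketch of the idea (illustrative declarations, not the exact clang
ones; toTargetAddressSpace() is a hypothetical conversion helper):

    enum class LangAS : unsigned {  // scoped: no implicit conversion
      Default,                      // to or from plain unsigned
      opencl_global,
      opencl_local,
      opencl_generic,
    };

    uint64_t getPointerWidth(unsigned TargetAS);  // expects a *target* AS

    // getPointerWidth(LangAS::opencl_generic);   // error: won't convert
    // getPointerWidth(toTargetAddressSpace(LangAS::opencl_generic)); // OK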

I found the following errors while writing this patch:

- ItaniumRecordLayoutBuilder::LayoutField was passing a clang AST address
  space to  TargetInfo::getPointer{Width,Align}()
- TypePrinter::printAttributedAfter() prints the numeric value of the
  clang AST address space instead of the target address space.
  However, this code is not used so I kept the current behaviour
- initializeForBlockHeader() in CGBlocks.cpp was passing
  LangAS::opencl_generic to TargetInfo::getPointer{Width,Align}()
- CodeGenFunction::EmitBlockLiteral() was passing an AST address space to
  TargetInfo::getPointerWidth()
- CGOpenMPRuntimeNVPTX::translateParameter() passed a target address space
  to Qualifiers::addAddressSpace()
- CGOpenMPRuntimeNVPTX::getParameterAddress() was using
  llvm::Type::getPointerTo() with an AST address space
- clang_getAddressSpace() returns either a LangAS or a target address
  space. As this is exposed to C I have kept the current behaviour and
  added a comment stating that it is probably not correct.

Other than this the patch should not cause any functional changes.

Reviewers: yaxunl, pcc, bader

Reviewed By: yaxunl, bader

Subscribers: jlebar, jholewinski, nhaehnle, Anastasia, cfe-commits

Differential Revision: https://reviews.llvm.org/D38816

llvm-svn: 315871
2017-10-15 18:48:14 +00:00
Ivan A. Kosarev 383890bad4 Refine generation of TBAA information in clang
This patch is an attempt to clarify and simplify generation and
propagation of TBAA information. The idea is to pack all values
that describe a memory access, namely, base type, access type and
offset, into a single structure. This is supposed to make further
changes, such as adding support for unions and array members,
easier to prepare and review.
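
A rough sketch of the packed descriptor (field types are illustrative;
the actual definition lives in clang's CodeGen):

    struct TBAAAccessInfo {
      llvm::MDNode *BaseType;   // type of the enclosing object, if any
      llvm::MDNode *AccessType; // type of the access itself
      uint64_t Offset;          // offset of the access into the base type
    };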

DecorateInstructionWithTBAA() is no longer responsible for
converting types to tags. These implicit conversions not only
complicated reading the code, but also encouraged assigning scalar
access tags where we generally prefer full-size struct-path tags.

TBAAPathTag is replaced with TBAAAccessInfo; the latter is now
the type of the keys of the cache map that translates access
descriptors to metadata nodes.

Fixed a bug with writing to a wrong map in
getTBAABaseTypeMetadata() (former getTBAAStructTypeInfo()).

We now check for valid base access types every time we
dereference a field. The original code only checks the top-level
base type. See isValidBaseType() / isTBAAPathStruct() calls.

Some entities have been renamed to be clearer and less
misleading in the presence of path-aware TBAA information.

getAccessTagInfo() no longer looks up the same cache entry twice.

Refined relevant comments and descriptions.

Differential Revision: https://reviews.llvm.org/D37826

llvm-svn: 315048
2017-10-06 08:17:48 +00:00
Ivan A. Kosarev afc074cc41 Revert r314977 "[CodeGen] Unify generation of scalar and struct-path TBAA tags"
D37826 was mistakenly committed in place of the patch from D38503.

Differential Revision: https://reviews.llvm.org/D38503

llvm-svn: 314978
2017-10-05 11:05:43 +00:00
Ivan A. Kosarev 6fa20cfea3 [CodeGen] Unify generation of scalar and struct-path TBAA tags
This patch makes it possible to produce access tags in a uniform
manner regardless of whether the resulting tag will be a scalar or a
struct-path one. getAccessTagInfo() now takes care of the actual
translation of access descriptors to tags and can handle all
kinds of accesses. Facilities specific to scalar accesses
are eliminated.

Some more details:
* DecorateInstructionWithTBAA() is no longer responsible for
  converting types to access tags. Instead, it takes an access
  descriptor (TBAAAccessInfo) and generates the corresponding access
  tag from it.
* getTBAAInfoForVTablePtr() is reworked into
  getTBAAVTablePtrAccessInfo(), which now returns the
  virtual-pointer access descriptor rather than the virtual-pointer
  type metadata.
* Added function getTBAAMayAliasAccessInfo() that returns the
  descriptor for may-alias accesses.
* getTBAAStructTagInfo() renamed to getTBAAAccessTagInfo(), as it is
  now the only way to generate an access tag from a given access
  descriptor. It is capable of producing both scalar and
  struct-path tags, depending on options and availability of the
  base access type. getTBAAScalarTagInfo() and its cache
  ScalarTagMetadataCache are eliminated.
* Now that we do not need to care about whether the resulting
  access tag should be a scalar or struct-path one,
  getTBAAStructTypeInfo() is renamed to getBaseTypeInfo().
* Added function getTBAAAccessInfo() that constructs an access
  descriptor from a given QualType access type.

This is part of D37826 reworked to be a separate patch to
simplify review.

Differential Revision: https://reviews.llvm.org/D38503

llvm-svn: 314977
2017-10-05 10:47:51 +00:00
Ivan A. Kosarev a511ed7501 [CodeGen] Introduce generic TBAA access descriptors
With this patch we implement a concept of TBAA access descriptors
that are capable of representing both scalar and struct-path
accesses in a generic way.

This is part of D37826 reworked to be a separate patch to
simplify review.

Differential Revision: https://reviews.llvm.org/D38456

llvm-svn: 314780
2017-10-03 10:52:39 +00:00
Ivan A. Kosarev 289574edc0 [CodeGen] Do not refer to complete TBAA info where we actually deal with just TBAA access types
This patch fixes misleading names of entities related to getting,
setting and generation of TBAA access type descriptors.

This is effectively part of an attempt to simplify the review of
D37826 by breaking it into smaller pieces.

Differential Revision: https://reviews.llvm.org/D38404

llvm-svn: 314657
2017-10-02 09:54:47 +00:00
Krzysztof Parzyszek 8f248234fa [CodeGen] Propagate LValueBaseInfo instead of AlignmentSource
The functions creating LValues propagated information about alignment
source. Extend the propagated data to also include information about
possible unrestricted aliasing. A new class LValueBaseInfo will
contain both AlignmentSource and MayAlias info.
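
A minimal sketch of the new class, following the description above
(illustrative, not the verbatim clang definition):

    class LValueBaseInfo {
      AlignmentSource AlignSource;
      bool MayAlias;
    public:
      explicit LValueBaseInfo(AlignmentSource Source, bool Alias = false)
          : AlignSource(Source), MayAlias(Alias) {}
      AlignmentSource getAlignmentSource() const { return AlignSource; }
      bool getMayAlias() const { return MayAlias; }
    };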

This patch should not introduce any functional changes.

Differential Revision: https://reviews.llvm.org/D33284

llvm-svn: 303358
2017-05-18 17:07:11 +00:00
David Majnemer ec4b7341cc [Sema] PR26444 fix crash when alignment value is >= 2**16
Sema allows alignment values up to 2**28; use unsigned instead of unsigned
short to hold values that large.

Differential Revision: http://reviews.llvm.org/D17248

Patch by Don Hinton!

llvm-svn: 262466
2016-03-02 06:48:47 +00:00
Michael Zolotukhin 84df12375c Introduce __builtin_nontemporal_store and __builtin_nontemporal_load.
Summary:
Currently clang provides no general way to generate nontemporal loads/stores.
There are some architecture-specific builtins for doing so (e.g. on x86), but
there is no way to generate a non-temporal store on, e.g., AArch64. This patch adds
generic builtins which are expanded to a simple store with '!nontemporal'
attribute in IR.
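
Usage sketch (the pointee type determines the access width; both builtins
expand to ordinary IR memory operations carrying !nontemporal metadata):

    void stream_copy(float *dst, const float *src, unsigned n) {
      for (unsigned i = 0; i != n; ++i) {
        float v = __builtin_nontemporal_load(&src[i]);
        __builtin_nontemporal_store(v, &dst[i]);
      }
    }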

Differential Revision: http://reviews.llvm.org/D12313

llvm-svn: 247104
2015-09-08 23:52:33 +00:00
John McCall 7f416cc426 Compute and preserve alignment more faithfully in IR-generation.
Introduce an Address type to bundle a pointer value with an
alignment.  Introduce APIs on CGBuilderTy to work with Address
values.  Change core APIs on CGF/CGM to traffic in Address where
appropriate.  Require alignments to be non-zero.  Update a ton
of code to compute and propagate alignment information.
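
A condensed sketch of the new type (illustrative; the real class carries
more API than shown):

    class Address {
      llvm::Value *Pointer;
      CharUnits Alignment;
    public:
      Address(llvm::Value *Pointer, CharUnits Alignment)
          : Pointer(Pointer), Alignment(Alignment) {
        assert(!Alignment.isZero() && "Address must carry a known alignment");
      }
      llvm::Value *getPointer() const { return Pointer; }
      CharUnits getAlignment() const { return Alignment; }
    };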

As part of this, I've promoted CGBuiltin's EmitPointerWithAlignment
helper function to CGF and made use of it in a number of places in
the expression emitter.

The end result is that we should now be significantly more correct
when performing operations on objects that are locally known to
be under-aligned.  Since alignment is not reliably tracked in the
type system, there are inherent limits to this, but at least we
are no longer confused by standard operations like derived-to-base
conversions and array-to-pointer decay.  I've also fixed a large
number of bugs where we were applying the complete-object alignment
to a pointer instead of the non-virtual alignment, although most of
these were hidden by the very conservative approach we took with
member alignment.

Also, because IRGen now reliably asserts on zero alignments, we
should no longer be subject to an absurd but frustrating recurring
bug where an incomplete type would report a zero alignment and then
we'd naively do an alignmentAtOffset on it and emit code using an
alignment equal to the largest power-of-two factor of the offset.

We should also now be emitting much more aggressive alignment
attributes in the presence of over-alignment.  In particular,
field access now uses alignmentAtOffset instead of min.

Several times in this patch, I had to change the existing
code-generation pattern in order to more effectively use
the Address APIs.  For the most part, this seems to be a strict
improvement, like doing pointer arithmetic with GEPs instead of
ptrtoint.  That said, I've tried very hard to not change semantics,
but it is likely that I've failed in a few places, for which I
apologize.

ABIArgInfo now always carries the assumed alignment of indirect and
indirect byval arguments.  In order to cut down on what was already
a dauntingly large patch, I changed the code to never set align
attributes in the IR on non-byval indirect arguments.  That is,
we still generate code which assumes that indirect arguments have
the given alignment, but we don't express this information to the
backend except where it's semantically required (i.e. on byvals).
This is likely a minor regression for those targets that did provide
this information, but it'll be trivial to add it back in a later
patch.

I partially punted on applying this work to CGBuiltin.  Please
do not add more uses of the CreateDefaultAligned{Load,Store}
APIs; they will be going away eventually.

llvm-svn: 246985
2015-09-08 08:05:57 +00:00
David Blaikie e3b172afc3 [opaque pointer type] Update for GEP API changes in LLVM
Now the GEP constant utility functions require the type to be explicitly
passed (since eventually the pointer type will be opaque and not convey
the required type information). For now callers can still pass nullptr
(though none were needed here in Clang, which is nice) if
convenient/necessary, but eventually that will be disallowed as well.
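
Sketch of the change at a call site (assuming the LLVM ConstantExpr API
of this era; the names are illustrative):

    // Before: pointee type inferred from BasePtr's pointer type.
    //   llvm::ConstantExpr::getGetElementPtr(BasePtr, Idxs);
    // After: the pointee type is passed explicitly.
    llvm::Constant *GEP =
        llvm::ConstantExpr::getGetElementPtr(ElemTy, BasePtr, Idxs);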

llvm-svn: 233937
2015-04-02 18:55:21 +00:00
Benjamin Kramer 2f5db8b3db Header guard canonicalization, clang part.
Modifications made by clang-tidy with minor tweaks.

llvm-svn: 215557
2014-08-13 16:25:19 +00:00
Craig Topper 8a13c4180e [C++11] Use 'nullptr'. CodeGen edition.
llvm-svn: 209272
2014-05-21 05:09:00 +00:00
Renato Golin 230c5eb4bd Non-allocatable Global Named Register
This patch implements global named registers in Clang, lowering to the newly
created LLVM intrinsics (@llvm.read_register / @llvm.write_register). A new
kind of LValue (Register) had to be created, which simply carries the metadata
node containing the name of the register. Two new methods emit the loads and
stores, interoperating with a third that emits the named metadata node.

No guarantees are being made and only non-allocatable global variable named
registers are being supported. Local named register support is unchanged.
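
For reference, the source-level form this supports (GNU extension; the
register name is target-specific, "sp" here is just an example):

    // A non-allocatable global named register variable; reads and writes
    // lower to @llvm.read_register / @llvm.write_register.
    register unsigned long stack_ptr asm("sp");

    unsigned long get_sp(void) { return stack_ptr; }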

llvm-svn: 209149
2014-05-19 18:15:42 +00:00
Eli Friedman be4504df26 Simplify atomic load/store IRGen.
Also fixes a couple minor bugs along the way; see testcases.

llvm-svn: 186049
2013-07-11 01:32:21 +00:00
Manman Ren c451e5766e Initial support for struct-path aware TBAA.
Added TBAABaseType and TBAAOffset in LValue. These two fields are initialized to
the actual type and 0, and are updated in EmitLValueForField.
Path-aware TBAA tags are enabled for EmitLoadOfScalar and EmitStoreOfScalar.
Added command line option -struct-path-tbaa.

llvm-svn: 178797
2013-04-04 21:53:22 +00:00
Manman Ren 092d9e8f3b revert r178784 since it does not have a commit message
llvm-svn: 178796
2013-04-04 21:51:07 +00:00
Manman Ren 037d2b252d Index: include/clang/Driver/CC1Options.td
===================================================================
--- include/clang/Driver/CC1Options.td	(revision 178718)
+++ include/clang/Driver/CC1Options.td	(working copy)
@@ -161,6 +161,8 @@
   HelpText<"Use register sized accesses to bit-fields, when possible.">;
 def relaxed_aliasing : Flag<["-"], "relaxed-aliasing">,
   HelpText<"Turn off Type Based Alias Analysis">;
+def struct_path_tbaa : Flag<["-"], "struct-path-tbaa">,
+  HelpText<"Turn on struct-path aware Type Based Alias Analysis">;
 def masm_verbose : Flag<["-"], "masm-verbose">,
   HelpText<"Generate verbose assembly output">;
 def mcode_model : Separate<["-"], "mcode-model">,
Index: include/clang/Driver/Options.td
===================================================================
--- include/clang/Driver/Options.td	(revision 178718)
+++ include/clang/Driver/Options.td	(working copy)
@@ -587,6 +587,7 @@
   Flags<[CC1Option]>, HelpText<"Disable spell-checking">;
 def fno_stack_protector : Flag<["-"], "fno-stack-protector">, Group<f_Group>;
 def fno_strict_aliasing : Flag<["-"], "fno-strict-aliasing">, Group<f_Group>;
+def fstruct_path_tbaa : Flag<["-"], "fstruct-path-tbaa">, Group<f_Group>;
 def fno_strict_enums : Flag<["-"], "fno-strict-enums">, Group<f_Group>;
 def fno_strict_overflow : Flag<["-"], "fno-strict-overflow">, Group<f_Group>;
 def fno_threadsafe_statics : Flag<["-"], "fno-threadsafe-statics">, Group<f_Group>,
Index: include/clang/Frontend/CodeGenOptions.def
===================================================================
--- include/clang/Frontend/CodeGenOptions.def	(revision 178718)
+++ include/clang/Frontend/CodeGenOptions.def	(working copy)
@@ -85,6 +85,7 @@
 VALUE_CODEGENOPT(OptimizeSize, 2, 0) ///< If -Os (==1) or -Oz (==2) is specified.
 CODEGENOPT(RelaxAll          , 1, 0) ///< Relax all machine code instructions.
 CODEGENOPT(RelaxedAliasing   , 1, 0) ///< Set when -fno-strict-aliasing is enabled.
+CODEGENOPT(StructPathTBAA    , 1, 0) ///< Whether or not to use struct-path TBAA.
 CODEGENOPT(SaveTempLabels    , 1, 0) ///< Save temporary labels.
 CODEGENOPT(SanitizeAddressZeroBaseShadow , 1, 0) ///< Map shadow memory at zero
                                                  ///< offset in AddressSanitizer.
Index: lib/CodeGen/CGExpr.cpp
===================================================================
--- lib/CodeGen/CGExpr.cpp	(revision 178718)
+++ lib/CodeGen/CGExpr.cpp	(working copy)
@@ -1044,7 +1044,8 @@
 llvm::Value *CodeGenFunction::EmitLoadOfScalar(LValue lvalue) {
   return EmitLoadOfScalar(lvalue.getAddress(), lvalue.isVolatile(),
                           lvalue.getAlignment().getQuantity(),
-                          lvalue.getType(), lvalue.getTBAAInfo());
+                          lvalue.getType(), lvalue.getTBAAInfo(),
+                          lvalue.getTBAABaseType(), lvalue.getTBAAOffset());
 }
 
 static bool hasBooleanRepresentation(QualType Ty) {
@@ -1106,7 +1107,9 @@
 
 llvm::Value *CodeGenFunction::EmitLoadOfScalar(llvm::Value *Addr, bool Volatile,
                                               unsigned Alignment, QualType Ty,
-                                              llvm::MDNode *TBAAInfo) {
+                                              llvm::MDNode *TBAAInfo,
+                                              QualType TBAABaseType,
+                                              uint64_t TBAAOffset) {
   // For better performance, handle vector loads differently.
   if (Ty->isVectorType()) {
     llvm::Value *V;
@@ -1158,8 +1161,11 @@
     Load->setVolatile(true);
   if (Alignment)
     Load->setAlignment(Alignment);
-  if (TBAAInfo)
-    CGM.DecorateInstruction(Load, TBAAInfo);
+  if (TBAAInfo) {
+    llvm::MDNode *TBAAPath = CGM.getTBAAStructTagInfo(TBAABaseType, TBAAInfo,
+                                                      TBAAOffset);
+    CGM.DecorateInstruction(Load, TBAAPath);
+  }
 
   if ((SanOpts->Bool && hasBooleanRepresentation(Ty)) ||
       (SanOpts->Enum && Ty->getAs<EnumType>())) {
@@ -1217,7 +1223,8 @@
                                         bool Volatile, unsigned Alignment,
                                         QualType Ty,
                                         llvm::MDNode *TBAAInfo,
-                                        bool isInit) {
+                                        bool isInit, QualType TBAABaseType,
+                                        uint64_t TBAAOffset) {
   
   // Handle vectors differently to get better performance.
   if (Ty->isVectorType()) {
@@ -1268,15 +1275,19 @@
   llvm::StoreInst *Store = Builder.CreateStore(Value, Addr, Volatile);
   if (Alignment)
     Store->setAlignment(Alignment);
-  if (TBAAInfo)
-    CGM.DecorateInstruction(Store, TBAAInfo);
+  if (TBAAInfo) {
+    llvm::MDNode *TBAAPath = CGM.getTBAAStructTagInfo(TBAABaseType, TBAAInfo,
+                                                      TBAAOffset);
+    CGM.DecorateInstruction(Store, TBAAPath);
+  }
 }
 
 void CodeGenFunction::EmitStoreOfScalar(llvm::Value *value, LValue lvalue,
                                         bool isInit) {
   EmitStoreOfScalar(value, lvalue.getAddress(), lvalue.isVolatile(),
                     lvalue.getAlignment().getQuantity(), lvalue.getType(),
-                    lvalue.getTBAAInfo(), isInit);
+                    lvalue.getTBAAInfo(), isInit, lvalue.getTBAABaseType(),
+                    lvalue.getTBAAOffset());
 }
 
 /// EmitLoadOfLValue - Given an expression that represents a value lvalue, this
@@ -2494,9 +2505,12 @@
 
   llvm::Value *addr = base.getAddress();
   unsigned cvr = base.getVRQualifiers();
+  bool TBAAPath = CGM.getCodeGenOpts().StructPathTBAA;
   if (rec->isUnion()) {
     // For unions, there is no pointer adjustment.
     assert(!type->isReferenceType() && "union has reference member");
+    // TODO: handle path-aware TBAA for union.
+    TBAAPath = false;
   } else {
     // For structs, we GEP to the field that the record layout suggests.
     unsigned idx = CGM.getTypes().getCGRecordLayout(rec).getLLVMFieldNo(field);
@@ -2508,6 +2522,8 @@
       if (cvr & Qualifiers::Volatile) load->setVolatile(true);
       load->setAlignment(alignment.getQuantity());
 
+      // Loading the reference will disable path-aware TBAA.
+      TBAAPath = false;
       if (CGM.shouldUseTBAA()) {
         llvm::MDNode *tbaa;
         if (mayAlias)
@@ -2541,6 +2557,16 @@
 
   LValue LV = MakeAddrLValue(addr, type, alignment);
   LV.getQuals().addCVRQualifiers(cvr);
+  if (TBAAPath) {
+    const ASTRecordLayout &Layout =
+        getContext().getASTRecordLayout(field->getParent());
+    // Set the base type to be the base type of the base LValue and
+    // update offset to be relative to the base type.
+    LV.setTBAABaseType(base.getTBAABaseType());
+    LV.setTBAAOffset(base.getTBAAOffset() +
+                     Layout.getFieldOffset(field->getFieldIndex()) /
+                                           getContext().getCharWidth());
+  }
 
   // __weak attribute on a field is ignored.
   if (LV.getQuals().getObjCGCAttr() == Qualifiers::Weak)
Index: lib/CodeGen/CGValue.h
===================================================================
--- lib/CodeGen/CGValue.h	(revision 178718)
+++ lib/CodeGen/CGValue.h	(working copy)
@@ -157,6 +157,11 @@
 
   Expr *BaseIvarExp;
 
+  /// Used by struct-path-aware TBAA.
+  QualType TBAABaseType;
+  /// Offset relative to the base type.
+  uint64_t TBAAOffset;
+
   /// TBAAInfo - TBAA information to attach to dereferences of this LValue.
   llvm::MDNode *TBAAInfo;
 
@@ -175,6 +180,10 @@
     this->ImpreciseLifetime = false;
     this->ThreadLocalRef = false;
     this->BaseIvarExp = 0;
+
+    // Initialize fields for TBAA.
+    this->TBAABaseType = Type;
+    this->TBAAOffset = 0;
     this->TBAAInfo = TBAAInfo;
   }
 
@@ -232,6 +241,12 @@
   Expr *getBaseIvarExp() const { return BaseIvarExp; }
   void setBaseIvarExp(Expr *V) { BaseIvarExp = V; }
 
+  QualType getTBAABaseType() const { return TBAABaseType; }
+  void setTBAABaseType(QualType T) { TBAABaseType = T; }
+
+  uint64_t getTBAAOffset() const { return TBAAOffset; }
+  void setTBAAOffset(uint64_t O) { TBAAOffset = O; }
+
   llvm::MDNode *getTBAAInfo() const { return TBAAInfo; }
   void setTBAAInfo(llvm::MDNode *N) { TBAAInfo = N; }
 
Index: lib/CodeGen/CodeGenFunction.h
===================================================================
--- lib/CodeGen/CodeGenFunction.h	(revision 178718)
+++ lib/CodeGen/CodeGenFunction.h	(working copy)
@@ -2211,7 +2211,9 @@
   /// the LLVM value representation.
   llvm::Value *EmitLoadOfScalar(llvm::Value *Addr, bool Volatile,
                                 unsigned Alignment, QualType Ty,
-                                llvm::MDNode *TBAAInfo = 0);
+                                llvm::MDNode *TBAAInfo = 0,
+                                QualType TBAABaseTy = QualType(),
+                                uint64_t TBAAOffset = 0);
 
   /// EmitLoadOfScalar - Load a scalar value from an address, taking
   /// care to appropriately convert from the memory representation to
@@ -2224,7 +2226,9 @@
   /// the LLVM value representation.
   void EmitStoreOfScalar(llvm::Value *Value, llvm::Value *Addr,
                          bool Volatile, unsigned Alignment, QualType Ty,
-                         llvm::MDNode *TBAAInfo = 0, bool isInit=false);
+                         llvm::MDNode *TBAAInfo = 0, bool isInit = false,
+                         QualType TBAABaseTy = QualType(),
+                         uint64_t TBAAOffset = 0);
 
   /// EmitStoreOfScalar - Store a scalar value to an address, taking
   /// care to appropriately convert from the memory representation to
Index: lib/CodeGen/CodeGenModule.cpp
===================================================================
--- lib/CodeGen/CodeGenModule.cpp	(revision 178718)
+++ lib/CodeGen/CodeGenModule.cpp	(working copy)
@@ -227,6 +227,20 @@
   return TBAA->getTBAAStructInfo(QTy);
 }
 
+llvm::MDNode *CodeGenModule::getTBAAStructTypeInfo(QualType QTy) {
+  if (!TBAA)
+    return 0;
+  return TBAA->getTBAAStructTypeInfo(QTy);
+}
+
+llvm::MDNode *CodeGenModule::getTBAAStructTagInfo(QualType BaseTy,
+                                                  llvm::MDNode *AccessN,
+                                                  uint64_t O) {
+  if (!TBAA)
+    return 0;
+  return TBAA->getTBAAStructTagInfo(BaseTy, AccessN, O);
+}
+
 void CodeGenModule::DecorateInstruction(llvm::Instruction *Inst,
                                         llvm::MDNode *TBAAInfo) {
   Inst->setMetadata(llvm::LLVMContext::MD_tbaa, TBAAInfo);
Index: lib/CodeGen/CodeGenModule.h
===================================================================
--- lib/CodeGen/CodeGenModule.h	(revision 178718)
+++ lib/CodeGen/CodeGenModule.h	(working copy)
@@ -501,6 +501,11 @@
   llvm::MDNode *getTBAAInfo(QualType QTy);
   llvm::MDNode *getTBAAInfoForVTablePtr();
   llvm::MDNode *getTBAAStructInfo(QualType QTy);
+  /// Return the MDNode in the type DAG for the given struct type.
+  llvm::MDNode *getTBAAStructTypeInfo(QualType QTy);
+  /// Return the path-aware tag for given base type, access node and offset.
+  llvm::MDNode *getTBAAStructTagInfo(QualType BaseTy, llvm::MDNode *AccessN,
+                                     uint64_t O);
 
   bool isTypeConstant(QualType QTy, bool ExcludeCtorDtor);
 
Index: lib/CodeGen/CodeGenTBAA.cpp
===================================================================
--- lib/CodeGen/CodeGenTBAA.cpp	(revision 178718)
+++ lib/CodeGen/CodeGenTBAA.cpp	(working copy)
@@ -21,6 +21,7 @@
 #include "clang/AST/Mangle.h"
 #include "clang/AST/RecordLayout.h"
 #include "clang/Frontend/CodeGenOptions.h"
+#include "llvm/ADT/SmallSet.h"
 #include "llvm/IR/Constants.h"
 #include "llvm/IR/LLVMContext.h"
 #include "llvm/IR/Metadata.h"
@@ -225,3 +226,87 @@
   // For now, handle any other kind of type conservatively.
   return StructMetadataCache[Ty] = NULL;
 }
+
+/// Check if the given type can be handled by path-aware TBAA.
+static bool isTBAAPathStruct(QualType QTy) {
+  if (const RecordType *TTy = QTy->getAs<RecordType>()) {
+    const RecordDecl *RD = TTy->getDecl()->getDefinition();
+    // RD can be struct, union, class, interface or enum.
+    // For now, we only handle struct.
+    if (RD->isStruct() && !RD->hasFlexibleArrayMember())
+      return true;
+  }
+  return false;
+}
+
+llvm::MDNode *
+CodeGenTBAA::getTBAAStructTypeInfo(QualType QTy) {
+  const Type *Ty = Context.getCanonicalType(QTy).getTypePtr();
+  assert(isTBAAPathStruct(QTy));
+
+  if (llvm::MDNode *N = StructTypeMetadataCache[Ty])
+    return N;
+
+  if (const RecordType *TTy = QTy->getAs<RecordType>()) {
+    const RecordDecl *RD = TTy->getDecl()->getDefinition();
+
+    const ASTRecordLayout &Layout = Context.getASTRecordLayout(RD);
+    SmallVector <std::pair<uint64_t, llvm::MDNode*>, 4> Fields;
+    // To reduce the size of MDNode for a given struct type, we only output
+    // once for all the fields with the same scalar types.
+    // Offsets for scalar fields in the type DAG are not used.
+    llvm::SmallSet <llvm::MDNode*, 4> ScalarFieldTypes;
+    unsigned idx = 0;
+    for (RecordDecl::field_iterator i = RD->field_begin(),
+         e = RD->field_end(); i != e; ++i, ++idx) {
+      QualType FieldQTy = i->getType();
+      llvm::MDNode *FieldNode;
+      if (isTBAAPathStruct(FieldQTy))
+        FieldNode = getTBAAStructTypeInfo(FieldQTy);
+      else {
+        FieldNode = getTBAAInfo(FieldQTy);
+        // Ignore this field if the type already exists.
+        if (ScalarFieldTypes.count(FieldNode))
+          continue;
+        ScalarFieldTypes.insert(FieldNode);
+       }
+      if (!FieldNode)
+        return StructTypeMetadataCache[Ty] = NULL;
+      Fields.push_back(std::make_pair(
+          Layout.getFieldOffset(idx) / Context.getCharWidth(), FieldNode));
+    }
+
+    // TODO: This is using the RTTI name. Is there a better way to get
+    // a unique string for a type?
+    SmallString<256> OutName;
+    llvm::raw_svector_ostream Out(OutName);
+    MContext.mangleCXXRTTIName(QualType(Ty, 0), Out);
+    Out.flush();
+    // Create the struct type node with a vector of pairs (offset, type).
+    return StructTypeMetadataCache[Ty] =
+      MDHelper.createTBAAStructTypeNode(OutName, Fields);
+  }
+
+  return StructMetadataCache[Ty] = NULL;
+}
+
+llvm::MDNode *
+CodeGenTBAA::getTBAAStructTagInfo(QualType BaseQTy, llvm::MDNode *AccessNode,
+                                  uint64_t Offset) {
+  if (!CodeGenOpts.StructPathTBAA)
+    return AccessNode;
+
+  const Type *BTy = Context.getCanonicalType(BaseQTy).getTypePtr();
+  TBAAPathTag PathTag = TBAAPathTag(BTy, AccessNode, Offset);
+  if (llvm::MDNode *N = StructTagMetadataCache[PathTag])
+    return N;
+
+  llvm::MDNode *BNode = 0;
+  if (isTBAAPathStruct(BaseQTy))
+    BNode  = getTBAAStructTypeInfo(BaseQTy);
+  if (!BNode)
+    return StructTagMetadataCache[PathTag] = AccessNode;
+
+  return StructTagMetadataCache[PathTag] =
+    MDHelper.createTBAAStructTagNode(BNode, AccessNode, Offset);
+}
Index: lib/CodeGen/CodeGenTBAA.h
===================================================================
--- lib/CodeGen/CodeGenTBAA.h	(revision 178718)
+++ lib/CodeGen/CodeGenTBAA.h	(working copy)
@@ -35,6 +35,14 @@
 namespace CodeGen {
   class CGRecordLayout;
 
+  struct TBAAPathTag {
+    TBAAPathTag(const Type *B, const llvm::MDNode *A, uint64_t O)
+      : BaseT(B), AccessN(A), Offset(O) {}
+    const Type *BaseT;
+    const llvm::MDNode *AccessN;
+    uint64_t Offset;
+  };
+
 /// CodeGenTBAA - This class organizes the cross-module state that is used
 /// while lowering AST types to LLVM types.
 class CodeGenTBAA {
@@ -46,8 +54,13 @@
   // MDHelper - Helper for creating metadata.
   llvm::MDBuilder MDHelper;
 
-  /// MetadataCache - This maps clang::Types to llvm::MDNodes describing them.
+  /// MetadataCache - This maps clang::Types to scalar llvm::MDNodes describing
+  /// them.
   llvm::DenseMap<const Type *, llvm::MDNode *> MetadataCache;
+  /// This maps clang::Types to a struct node in the type DAG.
+  llvm::DenseMap<const Type *, llvm::MDNode *> StructTypeMetadataCache;
+  /// This maps TBAAPathTags to a tag node.
+  llvm::DenseMap<TBAAPathTag, llvm::MDNode *> StructTagMetadataCache;
 
   /// StructMetadataCache - This maps clang::Types to llvm::MDNodes describing
   /// them for struct assignments.
@@ -89,9 +102,49 @@
   /// getTBAAStructInfo - Get the TBAAStruct MDNode to be used for a memcpy of
   /// the given type.
   llvm::MDNode *getTBAAStructInfo(QualType QTy);
+
+  /// Get the MDNode in the type DAG for given struct type QType.
+  llvm::MDNode *getTBAAStructTypeInfo(QualType QType);
+  /// Get the tag MDNode for a given base type, the actual scalar access MDNode
+  /// and offset into the base type.
+  llvm::MDNode *getTBAAStructTagInfo(QualType BaseQType,
+                                     llvm::MDNode *AccessNode, uint64_t Offset);
 };
 
 }  // end namespace CodeGen
 }  // end namespace clang
 
+namespace llvm {
+
+template<> struct DenseMapInfo<clang::CodeGen::TBAAPathTag> {
+  static clang::CodeGen::TBAAPathTag getEmptyKey() {
+    return clang::CodeGen::TBAAPathTag(
+      DenseMapInfo<const clang::Type *>::getEmptyKey(),
+      DenseMapInfo<const MDNode *>::getEmptyKey(),
+      DenseMapInfo<uint64_t>::getEmptyKey());
+  }
+
+  static clang::CodeGen::TBAAPathTag getTombstoneKey() {
+    return clang::CodeGen::TBAAPathTag(
+      DenseMapInfo<const clang::Type *>::getTombstoneKey(),
+      DenseMapInfo<const MDNode *>::getTombstoneKey(),
+      DenseMapInfo<uint64_t>::getTombstoneKey());
+  }
+
+  static unsigned getHashValue(const clang::CodeGen::TBAAPathTag &Val) {
+    return DenseMapInfo<const clang::Type *>::getHashValue(Val.BaseT) ^
+           DenseMapInfo<const MDNode *>::getHashValue(Val.AccessN) ^
+           DenseMapInfo<uint64_t>::getHashValue(Val.Offset);
+  }
+
+  static bool isEqual(const clang::CodeGen::TBAAPathTag &LHS,
+                      const clang::CodeGen::TBAAPathTag &RHS) {
+    return LHS.BaseT == RHS.BaseT &&
+           LHS.AccessN == RHS.AccessN &&
+           LHS.Offset == RHS.Offset;
+  }
+};
+
+}  // end namespace llvm
+
 #endif
Index: lib/Driver/Tools.cpp
===================================================================
--- lib/Driver/Tools.cpp	(revision 178718)
+++ lib/Driver/Tools.cpp	(working copy)
@@ -2105,6 +2105,8 @@
                     options::OPT_fno_strict_aliasing,
                     getToolChain().IsStrictAliasingDefault()))
     CmdArgs.push_back("-relaxed-aliasing");
+  if (Args.hasArg(options::OPT_fstruct_path_tbaa))
+    CmdArgs.push_back("-struct-path-tbaa");
   if (Args.hasFlag(options::OPT_fstrict_enums, options::OPT_fno_strict_enums,
                    false))
     CmdArgs.push_back("-fstrict-enums");
Index: lib/Frontend/CompilerInvocation.cpp
===================================================================
--- lib/Frontend/CompilerInvocation.cpp	(revision 178718)
+++ lib/Frontend/CompilerInvocation.cpp	(working copy)
@@ -324,6 +324,7 @@
   Opts.UseRegisterSizedBitfieldAccess = Args.hasArg(
     OPT_fuse_register_sized_bitfield_access);
   Opts.RelaxedAliasing = Args.hasArg(OPT_relaxed_aliasing);
+  Opts.StructPathTBAA = Args.hasArg(OPT_struct_path_tbaa);
   Opts.DwarfDebugFlags = Args.getLastArgValue(OPT_dwarf_debug_flags);
   Opts.MergeAllConstants = !Args.hasArg(OPT_fno_merge_all_constants);
   Opts.NoCommon = Args.hasArg(OPT_fno_common);
Index: test/CodeGen/tbaa.cpp
===================================================================
--- test/CodeGen/tbaa.cpp	(revision 0)
+++ test/CodeGen/tbaa.cpp	(working copy)
@@ -0,0 +1,217 @@
+// RUN: %clang_cc1 -O1 -disable-llvm-optzns %s -emit-llvm -o - | FileCheck %s
+// RUN: %clang_cc1 -O1 -struct-path-tbaa -disable-llvm-optzns %s -emit-llvm -o - | FileCheck %s -check-prefix=PATH
+// Test TBAA metadata generated by front-end.
+
+#include <stdint.h>
+typedef struct
+{
+   uint16_t f16;
+   uint32_t f32;
+   uint16_t f16_2;
+   uint32_t f32_2;
+} StructA;
+typedef struct
+{
+   uint16_t f16;
+   StructA a;
+   uint32_t f32;
+} StructB;
+typedef struct
+{
+   uint16_t f16;
+   StructB b;
+   uint32_t f32;
+} StructC;
+typedef struct
+{
+   uint16_t f16;
+   StructB b;
+   uint32_t f32;
+   uint8_t f8;
+} StructD;
+
+typedef struct
+{
+   uint16_t f16;
+   uint32_t f32;
+} StructS;
+typedef struct
+{
+   uint16_t f16;
+   uint32_t f32;
+} StructS2;
+
+uint32_t g(uint32_t *s, StructA *A, uint64_t count) {
+// CHECK: define i32 @{{.*}}(
+// CHECK: store i32 1, i32* %{{.*}}, align 4, !tbaa !4
+// CHECK: store i32 4, i32* %{{.*}}, align 4, !tbaa !4
+// PATH: define i32 @{{.*}}(
+// PATH: store i32 1, i32* %{{.*}}, align 4, !tbaa !4
+// PATH: store i32 4, i32* %{{.*}}, align 4, !tbaa !5
+  *s = 1;
+  A->f32 = 4;
+  return *s;
+}
+
+uint32_t g2(uint32_t *s, StructA *A, uint64_t count) {
+// CHECK: define i32 @{{.*}}(
+// CHECK: store i32 1, i32* %{{.*}}, align 4, !tbaa !4
+// CHECK: store i16 4, i16* %{{.*}}, align 2, !tbaa !5
+// PATH: define i32 @{{.*}}(
+// PATH: store i32 1, i32* %{{.*}}, align 4, !tbaa !4
+// PATH: store i16 4, i16* %{{.*}}, align 2, !tbaa !8
+  *s = 1;
+  A->f16 = 4;
+  return *s;
+}
+
+uint32_t g3(StructA *A, StructB *B, uint64_t count) {
+// CHECK: define i32 @{{.*}}(
+// CHECK: store i32 1, i32* %{{.*}}, align 4, !tbaa !4
+// CHECK: store i32 4, i32* %{{.*}}, align 4, !tbaa !4
+// PATH: define i32 @{{.*}}(
+// PATH: store i32 1, i32* %{{.*}}, align 4, !tbaa !5
+// PATH: store i32 4, i32* %{{.*}}, align 4, !tbaa !9
+  A->f32 = 1;
+  B->a.f32 = 4;
+  return A->f32;
+}
+
+uint32_t g4(StructA *A, StructB *B, uint64_t count) {
+// CHECK: define i32 @{{.*}}(
+// CHECK: store i32 1, i32* %{{.*}}, align 4, !tbaa !4
+// CHECK: store i16 4, i16* %{{.*}}, align 2, !tbaa !5
+// PATH: define i32 @{{.*}}(
+// PATH: store i32 1, i32* %{{.*}}, align 4, !tbaa !5
+// PATH: store i16 4, i16* %{{.*}}, align 2, !tbaa !11
+  A->f32 = 1;
+  B->a.f16 = 4;
+  return A->f32;
+}
+
+uint32_t g5(StructA *A, StructB *B, uint64_t count) {
+// CHECK: define i32 @{{.*}}(
+// CHECK: store i32 1, i32* %{{.*}}, align 4, !tbaa !4
+// CHECK: store i32 4, i32* %{{.*}}, align 4, !tbaa !4
+// PATH: define i32 @{{.*}}(
+// PATH: store i32 1, i32* %{{.*}}, align 4, !tbaa !5
+// PATH: store i32 4, i32* %{{.*}}, align 4, !tbaa !12
+  A->f32 = 1;
+  B->f32 = 4;
+  return A->f32;
+}
+
+uint32_t g6(StructA *A, StructB *B, uint64_t count) {
+// CHECK: define i32 @{{.*}}(
+// CHECK: store i32 1, i32* %{{.*}}, align 4, !tbaa !4
+// CHECK: store i32 4, i32* %{{.*}}, align 4, !tbaa !4
+// PATH: define i32 @{{.*}}(
+// PATH: store i32 1, i32* %{{.*}}, align 4, !tbaa !5
+// PATH: store i32 4, i32* %{{.*}}, align 4, !tbaa !13
+  A->f32 = 1;
+  B->a.f32_2 = 4;
+  return A->f32;
+}
+
+uint32_t g7(StructA *A, StructS *S, uint64_t count) {
+// CHECK: define i32 @{{.*}}(
+// CHECK: store i32 1, i32* %{{.*}}, align 4, !tbaa !4
+// CHECK: store i32 4, i32* %{{.*}}, align 4, !tbaa !4
+// PATH: define i32 @{{.*}}(
+// PATH: store i32 1, i32* %{{.*}}, align 4, !tbaa !5
+// PATH: store i32 4, i32* %{{.*}}, align 4, !tbaa !14
+  A->f32 = 1;
+  S->f32 = 4;
+  return A->f32;
+}
+
+uint32_t g8(StructA *A, StructS *S, uint64_t count) {
+// CHECK: define i32 @{{.*}}(
+// CHECK: store i32 1, i32* %{{.*}}, align 4, !tbaa !4
+// CHECK: store i16 4, i16* %{{.*}}, align 2, !tbaa !5
+// PATH: define i32 @{{.*}}(
+// PATH: store i32 1, i32* %{{.*}}, align 4, !tbaa !5
+// PATH: store i16 4, i16* %{{.*}}, align 2, !tbaa !16
+  A->f32 = 1;
+  S->f16 = 4;
+  return A->f32;
+}
+
+uint32_t g9(StructS *S, StructS2 *S2, uint64_t count) {
+// CHECK: define i32 @{{.*}}(
+// CHECK: store i32 1, i32* %{{.*}}, align 4, !tbaa !4
+// CHECK: store i32 4, i32* %{{.*}}, align 4, !tbaa !4
+// PATH: define i32 @{{.*}}(
+// PATH: store i32 1, i32* %{{.*}}, align 4, !tbaa !14
+// PATH: store i32 4, i32* %{{.*}}, align 4, !tbaa !17
+  S->f32 = 1;
+  S2->f32 = 4;
+  return S->f32;
+}
+
+uint32_t g10(StructS *S, StructS2 *S2, uint64_t count) {
+// CHECK: define i32 @{{.*}}(
+// CHECK: store i32 1, i32* %{{.*}}, align 4, !tbaa !4
+// CHECK: store i16 4, i16* %{{.*}}, align 2, !tbaa !5
+// PATH: define i32 @{{.*}}(
+// PATH: store i32 1, i32* %{{.*}}, align 4, !tbaa !14
+// PATH: store i16 4, i16* %{{.*}}, align 2, !tbaa !19
+  S->f32 = 1;
+  S2->f16 = 4;
+  return S->f32;
+}
+
+uint32_t g11(StructC *C, StructD *D, uint64_t count) {
+// CHECK: define i32 @{{.*}}(
+// CHECK: store i32 1, i32* %{{.*}}, align 4, !tbaa !4
+// CHECK: store i32 4, i32* %{{.*}}, align 4, !tbaa !4
+// PATH: define i32 @{{.*}}(
+// PATH: store i32 1, i32* %{{.*}}, align 4, !tbaa !20
+// PATH: store i32 4, i32* %{{.*}}, align 4, !tbaa !22
+  C->b.a.f32 = 1;
+  D->b.a.f32 = 4;
+  return C->b.a.f32;
+}
+
+uint32_t g12(StructC *C, StructD *D, uint64_t count) {
+// CHECK: define i32 @{{.*}}(
+// CHECK: store i32 1, i32* %{{.*}}, align 4, !tbaa !4
+// CHECK: store i32 4, i32* %{{.*}}, align 4, !tbaa !4
+// TODO: differentiate the two accesses.
+// PATH: define i32 @{{.*}}(
+// PATH: store i32 1, i32* %{{.*}}, align 4, !tbaa !9
+// PATH: store i32 4, i32* %{{.*}}, align 4, !tbaa !9
+  StructB *b1 = &(C->b);
+  StructB *b2 = &(D->b);
+  // b1, b2 have different context.
+  b1->a.f32 = 1;
+  b2->a.f32 = 4;
+  return b1->a.f32;
+}
+
+// CHECK: !1 = metadata !{metadata !"omnipotent char", metadata !2}
+// CHECK: !2 = metadata !{metadata !"Simple C/C++ TBAA"}
+// CHECK: !4 = metadata !{metadata !"int", metadata !1}
+// CHECK: !5 = metadata !{metadata !"short", metadata !1}
+
+// PATH: !1 = metadata !{metadata !"omnipotent char", metadata !2}
+// PATH: !4 = metadata !{metadata !"int", metadata !1}
+// PATH: !5 = metadata !{metadata !6, metadata !4, i64 4}
+// PATH: !6 = metadata !{metadata !"_ZTS7StructA", i64 0, metadata !7, i64 4, metadata !4}
+// PATH: !7 = metadata !{metadata !"short", metadata !1}
+// PATH: !8 = metadata !{metadata !6, metadata !7, i64 0}
+// PATH: !9 = metadata !{metadata !10, metadata !4, i64 8}
+// PATH: !10 = metadata !{metadata !"_ZTS7StructB", i64 0, metadata !7, i64 4, metadata !6, i64 20, metadata !4}
+// PATH: !11 = metadata !{metadata !10, metadata !7, i64 4}
+// PATH: !12 = metadata !{metadata !10, metadata !4, i64 20}
+// PATH: !13 = metadata !{metadata !10, metadata !4, i64 16}
+// PATH: !14 = metadata !{metadata !15, metadata !4, i64 4}
+// PATH: !15 = metadata !{metadata !"_ZTS7StructS", i64 0, metadata !7, i64 4, metadata !4}
+// PATH: !16 = metadata !{metadata !15, metadata !7, i64 0}
+// PATH: !17 = metadata !{metadata !18, metadata !4, i64 4}
+// PATH: !18 = metadata !{metadata !"_ZTS8StructS2", i64 0, metadata !7, i64 4, metadata !4}
+// PATH: !19 = metadata !{metadata !18, metadata !7, i64 0}
+// PATH: !20 = metadata !{metadata !21, metadata !4, i64 12}
+// PATH: !21 = metadata !{metadata !"_ZTS7StructC", i64 0, metadata !7, i64 4, metadata !10, i64 28, metadata !4}
+// PATH: !22 = metadata !{metadata !23, metadata !4, i64 12}
+// PATH: !23 = metadata !{metadata !"_ZTS7StructD", i64 0, metadata !7, i64 4, metadata !10, i64 28, metadata !4, i64 32, metadata !1}

llvm-svn: 178784
2013-04-04 20:14:17 +00:00
John McCall 4d31b812ad Remove trailing comma in enum list.
llvm-svn: 176926
2013-03-13 05:02:21 +00:00
John McCall cdda29c968 Tighten up the rules for precise lifetime and document
the requirements on the ARC optimizer.

rdar://13407451

llvm-svn: 176924
2013-03-13 03:10:54 +00:00
John McCall a8ec7eb9cf Promote atomic type sizes up to a power of two, capped by
MaxAtomicPromoteWidth.  Fix a ton of terrible bugs with
_Atomic types and (non-intrinsic-mediated) loads and stores
thereto.

llvm-svn: 176658
2013-03-07 21:37:17 +00:00
John McCall 47fb950871 Change hasAggregateLLVMType, which conflates complex and
aggregate types in a profoundly wrong way that has to be
worked around in every call site, to getEvaluationKind,
which classifies and distinguishes between all of these
cases.
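
Roughly, the new classification distinguishes three cases (a sketch;
clang spells the enumerators TEK_*):

    enum TypeEvaluationKind {
      TEK_Scalar,    // evaluated to a single llvm::Value
      TEK_Complex,   // evaluated to a (real, imag) pair
      TEK_Aggregate  // evaluated into memory
    };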

Also, normalize the API for loading and storing complexes.

I'm working on a larger patch and wanted to pull these
changes out, but it would have be annoying to detangle
them from each other.

llvm-svn: 176656
2013-03-07 21:37:08 +00:00
Fariborz Jahanian 7865220da4 patch for PR9027 and // rdar://11861085
Title: [PR9027] volatile struct bug: member is not loaded at -O.
This is caused by the last flag passed to @llvm.memcpy being false,
not honoring that the aggregate has at least one 'volatile' data member
(even though the aggregate itself has not been qualified as 'volatile').
As a result, the optimizer removes the memcpy altogether.
Patch reviewed by John McCall (I still need to fix up a test though).

llvm-svn: 173535
2013-01-25 23:57:05 +00:00
Chandler Carruth ffd5551bc7 Rewrite #includes for llvm/Foo.h to llvm/IR/Foo.h as appropriate to
reflect the migration in r171366.

Re-sort the #include lines to reflect the new paths.

llvm-svn: 171369
2013-01-02 11:45:17 +00:00
NAKAMURA Takumi 368d2ee4ef CGValue.h: Update one \param to Addr in MakeBitfield(). [-Wdocumentation]
llvm-svn: 171011
2012-12-24 01:48:48 +00:00
Chandler Carruth ff0e3a1e1c Rework the bitfield access IR generation to address PR13619 and
generally support the C++11 memory model requirements for bitfield
accesses by relying more heavily on LLVM's memory model.

The primary change this introduces is to move from a manually aligned
and strided access pattern across the bits of the bitfield to a much
simpler lump access of all bits in the bitfield followed by math to
extract the bits relevant for the particular field.
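
A sketch of the lump-access pattern (little-endian layout assumed;
the IR shown is illustrative):

    struct S { unsigned a : 3, b : 5; };
    // Old: aligned, strided partial accesses per field.
    // New: load the whole 8-bit storage unit, then shift and mask:
    //   %bits = load i8* %s        ; one lump load
    //   b = (%bits >> 3) & 0x1f    ; extract field b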

This simplifies the code significantly, but relies on LLVM to
lower these integers intelligently.

I have tested LLVM's lowering both synthetically and in benchmarks. The
lowering appears to be functional, and there are no really significant
performance regressions. Different code patterns accessing bitfields
will vary in how this impacts them. The only real regressions I'm seeing
are a few patterns where the LLVM code generation for loads that feed
directly into a mask operation doesn't take advantage of the x86 ability
to do a smaller load and a cheap zero-extension. This doesn't regress
any benchmark in the nightly test suite on my box past the noise
threshold, but my box is quite noisy. I'll be watching the LNT numbers,
and will look into further improvements to the LLVM lowering as needed.

llvm-svn: 169489
2012-12-06 11:14:44 +00:00
Chandler Carruth 3a02247dc9 Sort all of Clang's files under 'lib', and fix up the broken headers
uncovered.

This required manually correcting all of the incorrect main-module
headers I could find, and running the new llvm/utils/sort_includes.py
script over the files.

I also manually added quite a few missing headers that were uncovered by
shuffling the order or moving headers up to be main-module-headers.

llvm-svn: 169237
2012-12-04 09:13:33 +00:00
John Criswell edc84507c7 Fix for PR#13606: http://llvm.org/bugs/show_bug.cgi?id=13606
Changed the alignment of an LValue to be 64 bits so that we can handle
alignment values up to half of a 64-bit address space.

llvm-svn: 161971
2012-08-15 18:40:30 +00:00
John McCall 4e8ca4fa14 Significantly simplify CGExprAgg's logic about ignored results:
if we want to ignore a result, the Dest will be null.  Otherwise,
we must copy into it.  This means we need to ensure a slot when
loading from a volatile l-value.

With all that in place, fix a bug with chained assignments into
__block variables of aggregate type where we were losing insight into
the actual source of the value during the second assignment.

llvm-svn: 159630
2012-07-02 23:58:38 +00:00
Eli Friedman c24e2fb1fb Propagate lvalue alignment into bitfields. Per report on cfe-dev.
llvm-svn: 159295
2012-06-27 21:19:48 +00:00
Chad Rosier 615ed1a3a6 Revert r153613 as it's causing large compile-time regressions on the nightly testers.
llvm-svn: 153660
2012-03-29 17:37:10 +00:00
John McCall 1a0877f99d When we can't prove that the target of an aggregate copy is
a complete object, the memcpy needs to use the data size of
the structure instead of its sizeof() value.  Fixes PR12204.

llvm-svn: 153613
2012-03-28 23:30:44 +00:00
Eli Friedman 610bb87e17 Make sure we correctly set the alignment for vector loads and stores associated with vector element lvalues. Patch by Kevin Schoedel (with some minor modifications by me).
llvm-svn: 153285
2012-03-22 22:36:39 +00:00
Benjamin Kramer 4257ab3ff0 Reuse forAddr to create ignored AggValueSlots.
Silences valgrind warnings about uninitialized alignment values.

llvm-svn: 146342
2011-12-11 16:34:24 +00:00
Eli Friedman a0544d6fdf Switch LValue so that it exposes alignment in CharUnits. (No functional change.)
llvm-svn: 145753
2011-12-03 04:14:32 +00:00
Eli Friedman 2869b5afe3 Add a utility to get a RValue for a given LValue for an aggregate; switch a few places over to it.
llvm-svn: 145747
2011-12-03 03:08:40 +00:00
Eli Friedman 38cd36dbdb Switch the Alignment argument on AggValueSlot over to CharUnits, per John's review comment.
llvm-svn: 145741
2011-12-03 02:13:40 +00:00
Eli Friedman c1d85b931e Track alignment in AggValueSlot. No functional change in this patch, but I'll be introducing uses of the specified alignment soon.
llvm-svn: 145736
2011-12-03 00:54:26 +00:00
John McCall c109a259d2 Rip the ObjCPropertyRef l-value kind out of IR-generation.
llvm-svn: 143908
2011-11-07 03:59:57 +00:00
John McCall fe96e0b6be Change the AST representation of operations on Objective-C
property references to use a new PseudoObjectExpr
expression which pairs a syntactic form of the expression
with a set of semantic expressions implementing it.
This should significantly reduce the complexity required
elsewhere in the compiler to deal with these kinds of
expressions (e.g. IR generation's special l-value kind,
the static analyzer's Message abstraction), at the lower
cost of specifically dealing with the odd AST structure
of these expressions.  It should also greatly simplify
efforts to implement similar language features in the
future, most notably Managed C++'s properties and indexed
properties.

Most of the effort here is in dealing with the various
clients of the AST.  I've gone ahead and simplified the
ObjC rewriter's use of properties;  other clients, like
IR-gen and the static analyzer, have all the old
complexity *and* all the new complexity, at least
temporarily.  Many thanks to Ted for writing and advising
on the necessary changes to the static analyzer.

I've xfailed a small diagnostics regression in the static
analyzer at Ted's request.

llvm-svn: 143867
2011-11-06 09:01:30 +00:00
John McCall cac93853ae What say we document some of these AggValueSlot flags a bit
better.

llvm-svn: 138628
2011-08-26 08:02:37 +00:00
John McCall 46759f4f46 Since the 'is aliased' bit is critical for correctness in C++, it
really shouldn't be optional.  Fix the remaining place where a
temporary was being passed as potentially-aliased memory.

Fixes PR10756.

llvm-svn: 138627
2011-08-26 07:31:35 +00:00
John McCall a5efa7386a Track whether an AggValueSlot is potentially aliased, and do not
emit call results into potentially aliased slots.  This allows us
to properly mark indirect return slots as noalias, at the cost
of requiring an extra memcpy when assigning an aggregate call
result into a l-value.  It also brings us into compliance with
the x86-64 ABI.

llvm-svn: 138599
2011-08-25 23:04:34 +00:00
John McCall 8d6fc9583d Use stronger typing for the flags on AggValueSlot and require
creators to tell us whether something needs GC barriers.
No functionality change.

llvm-svn: 138581
2011-08-25 20:40:09 +00:00
John McCall 1553b19067 Restore correct use of GC barriers.
llvm-svn: 133144
2011-06-16 04:16:24 +00:00
John McCall 31168b077c Automatic Reference Counting.
Language-design credit goes to a lot of people, but I particularly want
to single out Blaine Garst and Patrick Beard for their contributions.

Compiler implementation credit goes to Argyrios, Doug, Fariborz, and myself,
in no particular order.

llvm-svn: 133103
2011-06-15 23:02:42 +00:00