Commit Graph

125 Commits

Author SHA1 Message Date
Nikita Popov ba31cb4d38 [CodeGen] Store element type in RValue
For aggregates, we need to store the element type to be able to
reconstruct the aggregate Address. This increases the size of this
packed structure (as the second value is already used for alignment
in this case), but I did not observe any compile-time or memory
usage regression from this change.
2021-12-17 09:05:59 +01:00
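
A minimal sketch of the change above (editor's illustration with hypothetical, simplified names; not the actual clang classes):

```
// For aggregates the value is a pointer and the second slot already
// carries the alignment, so a third member is needed to remember the
// pointee type and reconstruct an aggregate Address.
namespace llvm { class Value; class Type; }

struct RValueSketch {
  llvm::Value *Val;          // scalar value, or pointer for aggregates
  llvm::Value *ValOrAlign;   // second half of a complex, or the alignment
  llvm::Type  *ElementType;  // pointee type; only meaningful for aggregates

  static RValueSketch getAggregate(llvm::Value *Ptr, llvm::Value *Align,
                                   llvm::Type *ElemTy) {
    return {Ptr, Align, ElemTy};
  }
};
```
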
Nikita Popov 6bca9a428e [CodeGen] Store ElementType in LValue
Store the pointer element type inside LValue so that we can
preserve it when converting it back into an Address. Storing the
pointer element type might not be strictly required here in that
we could probably re-derive it from the QualType (which would
require CGF access though), but storing it seems like the simpler
solution.

The global register case is special and does not store an element
type, as the value does not have a pointer type in that case and it's not
possible to create an Address from it.

This is the main remaining part from D103465.

Differential Revision: https://reviews.llvm.org/D115791
2021-12-16 09:23:33 +01:00
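
A sketch of the reconstruction described above (editor's illustration, simplified names):

```
// With the pointee type stored in the LValue, getAddress() can rebuild
// an Address without querying the pointer's own type, which opaque
// pointers no longer allow.
namespace llvm { class Value; class Type; }

struct AddressSketch {
  llvm::Value *Ptr;
  llvm::Type  *ElementType;
  unsigned     AlignInBytes;
};

struct LValueSketch {
  llvm::Value *Ptr;
  llvm::Type  *ElementType;  // stored explicitly, not re-derived
  unsigned     AlignInBytes;

  AddressSketch getAddress() const {
    return {Ptr, ElementType, AlignInBytes};
  }
};
```
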
Nikita Popov abbc2e997b [CodeGen] Store ElementType in Address
Explicitly track the pointer element type in Address, rather than
deriving it from the pointer type, which will no longer be possible
with opaque pointers. This just adds the basic facility; for now,
everything is still going through the deprecated constructors.

I had to adjust one place in the LValue implementation to satisfy
the new assertions: Global registers are represented as a
MetadataAsValue, which does not have a pointer type. We should
avoid using Address in this case.

This implements a part of D103465.

Differential Revision: https://reviews.llvm.org/D115725
2021-12-15 08:59:44 +01:00
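
A sketch of the basic facility (editor's illustration with hypothetical names, not the exact clang code):

```
#include <cassert>
namespace llvm { class Value; class Type; }

// The pointee type is tracked explicitly instead of being derived from
// Ptr->getType(), which stops working once pointer types are opaque.
class AddressSketch {
  llvm::Value *Ptr;
  llvm::Type  *ElementType;
  unsigned     AlignInBytes;

public:
  AddressSketch(llvm::Value *P, llvm::Type *ElemTy, unsigned Align)
      : Ptr(P), ElementType(ElemTy), AlignInBytes(Align) {
    assert(ElemTy && "element type must be provided explicitly");
  }
  llvm::Type *getElementType() const { return ElementType; }
};
```
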
Nikita Popov b8d121eb1d [CodeGen] Require use of Address::invalid() for invalid address (NFC)
This no longer allows creating an invalid Address through the regular
constructor. There were only two places that did this (AggValueSlot
and EHCleanupScope), both of which converted a potential nullptr
into an Address. I've fixed both by directly storing an
Address instead.

This is intended as a bit of preliminary cleanup for D103465.

Differential Revision: https://reviews.llvm.org/D115630
2021-12-14 12:06:05 +01:00
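
A sketch of the pattern this enforces (editor's illustration, hypothetical names):

```
#include <cassert>
namespace llvm { class Value; }

class AddressSketch {
  llvm::Value *Ptr;
  AddressSketch() : Ptr(nullptr) {}  // reachable only through invalid()

public:
  explicit AddressSketch(llvm::Value *P) : Ptr(P) {
    assert(P && "null pointer: use AddressSketch::invalid() instead");
  }
  static AddressSketch invalid() { return AddressSketch(); }
  bool isValid() const { return Ptr != nullptr; }
};
```
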
Bevin Hansson 101309fe04 [AST] Change return type of getTypeInfoInChars to a proper struct instead of std::pair.
Followup to D85191.

This changes getTypeInfoInChars to return a TypeInfoChars
struct instead of a std::pair of CharUnits. This lets the
interface match getTypeInfo more closely.

Reviewed By: efriedma

Differential Revision: https://reviews.llvm.org/D86447
2020-10-13 13:26:56 +02:00
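
A sketch of the shape of the new return type (editor's illustration; field names assumed from the description above):

```
#include <cstdint>

struct CharUnitsSketch { int64_t Quantity; };  // stand-in for clang::CharUnits

// Named fields instead of std::pair's first/second, mirroring the
// TypeInfo struct returned by getTypeInfo.
struct TypeInfoCharsSketch {
  CharUnitsSketch Width;
  CharUnitsSketch Align;
};
```

Call sites can then read Info.Width and Info.Align rather than the opaque Info.first and Info.second.
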
Florian Hahn 8f3f88d2f5 [Matrix] Implement matrix index expressions ([][]).
This patch implements matrix index expressions
(matrix[RowIdx][ColumnIdx]).

It does so by introducing a new MatrixSubscriptExpr(Base, RowIdx, ColumnIdx).
MatrixSubscriptExprs are built in 2 steps in ActOnMatrixSubscriptExpr. First,
if the base of a subscript is of matrix type, we create an incomplete
MatrixSubscriptExpr(base, idx, nullptr). Second, if the base is an incomplete
MatrixSubscriptExpr, we create a complete
MatrixSubscriptExpr(base->getBase(), base->getRowIdx(), idx).

Similar to vector elements, it is not possible to take the address of
a MatrixSubscriptExpr.
For CodeGen, a new MatrixElt type is added to LValue, which is very
similar to VectorElt. The only difference is that we may need to cast
the type of the base from an array to a vector type when accessing it.

Reviewers: rjmccall, anemet, Bigcheese, rsmith, martong

Reviewed By: rjmccall

Differential Revision: https://reviews.llvm.org/D76791
2020-06-01 20:08:49 +01:00
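
A usage sketch of the new expression, assuming clang's matrix extension is enabled (-fenable-matrix); the type alias is illustrative:

```
typedef float m4x4_t __attribute__((matrix_type(4, 4)));

float get(m4x4_t m, int r, int c) {
  return m[r][c];     // MatrixSubscriptExpr(Base, RowIdx, ColumnIdx)
}

void set_elt(m4x4_t *m) {
  (*m)[0][1] = 1.0f;  // element store; like vector elements,
                      // &(*m)[0][1] is not allowed
}
```
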
Reid Kleckner 01943a59f5 Move verification of Sema::MaximumAlignment to a .cpp file
Saves these transitive includes:
2020-01-30 13:37:52 -08:00
David Zarzycki 0d61cd25a6 Verify that clang's max alignment is <= LLVM's max alignment
Reviewers: lebedev.ri

Reviewed By: lebedev.ri

Subscribers: cfe-commits

Tags: #llvm, #clang

Differential Revision: https://reviews.llvm.org/D73363
2020-01-24 12:37:05 -05:00
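
A plausible shape for such a check (editor's sketch; the real constants live in clang's Sema and in LLVM, and the values below are assumptions):

```
#include <cstdint>

constexpr uint64_t ClangMaxAlignment = 1ull << 28;  // assumed clang limit
constexpr uint64_t LLVMMaxAlignment  = 1ull << 29;  // assumed LLVM limit

static_assert(ClangMaxAlignment <= LLVMMaxAlignment,
              "clang's max alignment must not exceed LLVM's");
```
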
Diogo Sampaio 2147703bde Revert "[ARM] Follow AACPS standard for volatile bit-fields access width"
This reverts commit 6a24339a45.
Submitted using the IDE button by mistake
2020-01-21 15:31:33 +00:00
Diogo Sampaio 6a24339a45 [ARM] Follow AACPS standard for volatile bit-fields access width
Summary:
This patch resumes the work of D16586.
According to the AAPCS, volatile bit-fields should
be accessed using containers of the width of their
declared type. In such a case:
```
struct S1 {
  short a : 1;
}
```
should be accessed using loads and stores of the width of its
declared type (sizeof(short)), whereas the compiler currently loads
only the minimum required width (char in this case).
However, as discussed in D16586, widening the access
could overwrite adjacent non-volatile members, which
conflicts with the C and C++ object models by creating
data races on memory locations that are not part of the bit-field,
e.g.
```
struct S2 {
  short a;
  int  b : 16;
}
```
Accessing `S2.b` would also access `S2.a`.

The AAPCS Release 2019Q1.1
(https://static.docs.arm.com/ihi0042/g/aapcs32.pdf)
section 8.1 Data Types, page 35, "Volatile bit-fields -
preserving number and width of container accesses" has been
updated to avoid conflict with the C++ Memory Model.
Now it reads in the note:
```
This ABI does not place any restrictions on the access widths
of bit-fields where the container overlaps with a non-bit-field member.
This is because the C/C++ memory model defines these as being separate
memory locations, which can be accessed by two threads
simultaneously. For this reason, compilers must be permitted to use a
narrower memory access width (including splitting the access
into multiple instructions) to avoid writing to a different memory location.
```

I've updated the patch D16586 to follow such behavior by verifying that we
only change volatile bit-field access when:
 - it won't overlap with any other non-bit-field member
 - we only access memory inside the bounds of the record

The number of memory accesses should also be preserved; that part
will be implemented by D67399.

Reviewers: rsmith, rjmccall, eli.friedman, ostannard

Subscribers: ostannard, kristof.beyls, cfe-commits, carwil, olista01

Tags: #clang

Differential Revision: https://reviews.llvm.org/D72932
2020-01-21 15:23:38 +00:00
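
An editor's sketch of the two cases the verification above distinguishes:

```
struct Widened {          // container overlaps no non-bit-field member,
  volatile short a : 1;   // so the access may use the full declared
};                        // width (sizeof(short))

struct NotWidened {       // widening 'b' to its declared 32-bit width
  short a;                // would also touch 'a', a separate memory
  volatile int b : 16;    // location, so the narrower access is kept
};
```
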
Akira Hatanaka f139ae3d93 [NFC] Pass a reference to CodeGenFunction to methods of LValue and
AggValueSlot

This reapplies 8a5b7c3570 after fixing a null
dereference bug in CGOpenMPRuntime::emitUserDefinedMapper.

Original commit message:

This is needed for the pointer authentication work we plan to do in the
near future.

a63a81bd99/clang/docs/PointerAuthentication.rst
2019-12-03 15:22:13 -08:00
Akira Hatanaka 9f37c0e703 Revert "[NFC] Pass a reference to CodeGenFunction to methods of LValue and"
This reverts commit 8a5b7c3570. This seems
to have broken UBSan because of a null dereference.
2019-12-03 13:08:01 -08:00
Akira Hatanaka 8a5b7c3570 [NFC] Pass a reference to CodeGenFunction to methods of LValue and
AggValueSlot

This is needed for the pointer authentication work we plan to do in the
near future.

a63a81bd99/clang/docs/PointerAuthentication.rst
2019-12-03 11:30:09 -08:00
Chandler Carruth 2946cd7010 Update the file headers across all of the LLVM projects in the monorepo
to reflect the new license.

We understand that people may be surprised that we're moving the header
entirely to discuss the new license. We checked this carefully with the
Foundation's lawyer and we believe this is the correct approach.

Essentially, all code in the project is now made available by the LLVM
project under our new license, so you will see that the license headers
include that license only. Some of our contributors have contributed
code under our old license, and accordingly, we have retained a copy of
our old license notice in the top-level files in each project and
repository.

llvm-svn: 351636
2019-01-19 08:50:56 +00:00
Mikael Nilsson 9d2872db74 [OpenCL] Add generic AS to 'this' pointer
Address spaces are cast into generic before invoking the constructor.

Added support for a trailing Qualifiers object in FunctionProtoType.

Note: This recommits the previously reverted patch,
      but now it is committed together with a fix for lldb.

Differential Revision: https://reviews.llvm.org/D54862

llvm-svn: 349019
2018-12-13 10:15:27 +00:00
Mikael Nilsson 90646732bf Revert "[OpenCL] Add generic AS to 'this' pointer"
Reverting because the patch broke lldb.

llvm-svn: 348931
2018-12-12 15:06:16 +00:00
Mikael Nilsson 78de84719b [OpenCL] Add generic AS to 'this' pointer
Address spaces are cast into generic before invoking the constructor.

Added support for a trailing Qualifiers object in FunctionProtoType.

Differential Revision: https://reviews.llvm.org/D54862

llvm-svn: 348927
2018-12-12 14:11:59 +00:00
Fangrui Song 6907ce2f8f Remove trailing space
sed -Ei 's/[[:space:]]+$//' include/**/*.{def,h,td} lib/**/*.{cpp,h}

llvm-svn: 338291
2018-07-30 19:24:48 +00:00
Serge Pavlov 376051820d [UBSan] Strengthen pointer checks in 'new' expressions
With this change the compiler generates alignment checks for a wider
range of types. Previously such checks were generated only for record
types with a non-trivial default constructor, so types like:

    struct alignas(32) S2 { int x; };
    typedef __attribute__((ext_vector_type(2), aligned(32))) float float32x2_t;

did not get checks when allocated by 'new' expression.

This change also optimizes the checks generated for arrays created
in 'new' expressions. Previously a check was generated for each
invocation of the element constructor. Now the check is generated only
once for the entire array.

Differential Revision: https://reviews.llvm.org/D49589

llvm-svn: 338199
2018-07-28 15:33:03 +00:00
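
An editor's sketch of what now gets checked (assuming an alignment sanitizer is enabled; this is not the exact emitted IR):

```
struct alignas(32) S2 { int x; };

void f(unsigned n) {
  S2 *p = new S2;     // alignment of the returned pointer is now checked,
                      // even though S2 has a trivial default constructor
  S2 *a = new S2[n];  // the check is emitted once for the whole array,
                      // not once per element constructor invocation
  delete p;
  delete[] a;
}
```
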
Adrian Prantl 9fc8faf9e6 Remove \brief commands from doxygen comments.
This is similar to the LLVM change https://reviews.llvm.org/D46290.

We've been running doxygen with the autobrief option for a couple of
years now. This makes the \brief markers in our comments
redundant.
encourage more \brief markers in new code either, this patch removes
them all.

Patch produced by

for i in $(git grep -l '\@brief'); do perl -pi -e 's/\@brief //g' $i & done
for i in $(git grep -l '\\brief'); do perl -pi -e 's/\\brief //g' $i & done

Differential Revision: https://reviews.llvm.org/D46320

llvm-svn: 331834
2018-05-09 01:00:01 +00:00
Richard Smith e78fac5126 PR36992: do not store beyond the dsize of a class object unless we know
the tail padding is not reused.

We track on the AggValueSlot (and through a couple of other
initialization actions) whether we're dealing with an object that might
share its tail padding with some other object, so that we can avoid
emitting stores into the tail padding if that's the case. We still
widen stores into tail padding when we can do so.

Differential Revision: https://reviews.llvm.org/D45306

llvm-svn: 329342
2018-04-05 20:52:58 +00:00
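
An editor's illustration of the tail-padding hazard (a typical Itanium C++ ABI layout is assumed):

```
struct A {
  A() {}    // non-POD, so a derived class may reuse A's tail padding
  int  i;   // bytes 0-3
  char c;   // byte 4; dsize(A) == 5, sizeof(A) == 8
};
struct B : A {
  char d;   // laid out at offset 5, inside A's tail padding
};

// When initializing the A subobject of a B, storing all 8 bytes of A
// would clobber B::d; the emitted stores must stop at dsize(A) == 5
// unless the tail padding is known not to be reused.
```
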
Yaxun Liu d9389827d2 CodeGen: Reduce LValue and CallArgList memory footprint before recommitting r326946
Recent change r326946 (https://reviews.llvm.org/D34367) causes a regression in Eigen due to the increased
memory footprint of CallArg.

This patch reduces LValue size from 112 to 96 bytes and reduces inline argument count of CallArgList
from 16 to 8.

It has been verified that this will let the added deep AST tree test pass with r326946.

In the long run, CallArg or LValue memory footprint should be further optimized.

Differential Revision: https://reviews.llvm.org/D44445

llvm-svn: 327515
2018-03-14 15:02:28 +00:00
Ivan A. Kosarev b9c59f36fc [CodeGen] Propagate may-alias'ness of lvalues with TBAA info
This patch fixes various places in clang to propagate may-alias
TBAA access descriptors during construction of lvalues, thus
eliminating the need for the LValueBaseInfo::MayAlias flag.

This is part of D38126 reworked to be a separate patch to
simplify review.

Differential Revision: https://reviews.llvm.org/D39008

llvm-svn: 316988
2017-10-31 11:05:34 +00:00
Ivan A. Kosarev d17f12a35d [CodeGen] Pass TBAA info along with lvalue base info everywhere
This patch addresses the rest of the cases where we pass lvalue
base info, but do not provide corresponding TBAA info.

This patch should not bring in any functional changes.

This is part of D38126 reworked to be a separate patch to make
reviewing easier.

Differential Revision: https://reviews.llvm.org/D38945

llvm-svn: 315986
2017-10-17 10:17:43 +00:00
Alexander Richardson 6d989436d0 Convert clang::LangAS to a strongly typed enum
Summary:
Convert clang::LangAS to a strongly typed enum

Currently both clang AST address spaces and target specific address spaces
are represented as unsigned which can lead to subtle errors if the wrong
type is passed. It is especially confusing in the CodeGen files as it is
not possible to see what kind of address space should be passed to a
function without looking at the implementation.
I originally made this change for our LLVM fork for the CHERI architecture
where we make extensive use of address spaces to differentiate between
capabilities and pointers. When merging the upstream changes I usually
run into some test failures or runtime crashes because the wrong kind of
address space is passed to a function. By converting the LangAS enum to a
C++11 enum class we can catch these errors at compile time. Additionally,
it is now obvious from the function signature which kind of address space
it expects.

I found the following errors while writing this patch:

- ItaniumRecordLayoutBuilder::LayoutField was passing a clang AST address
  space to TargetInfo::getPointer{Width,Align}()
- TypePrinter::printAttributedAfter() prints the numeric value of the
  clang AST address space instead of the target address space.
  However, this code is not used so I kept the current behaviour
- initializeForBlockHeader() in CGBlocks.cpp was passing
  LangAS::opencl_generic to TargetInfo::getPointer{Width,Align}()
- CodeGenFunction::EmitBlockLiteral() was passing an AST address space to
  TargetInfo::getPointerWidth()
- CGOpenMPRuntimeNVPTX::translateParameter() passed a target address space
  to Qualifiers::addAddressSpace()
- CGOpenMPRuntimeNVPTX::getParameterAddress() was using
  llvm::Type::getPointerTo() with an AST address space
- clang_getAddressSpace() returns either a LangAS or a target address
  space. As this is exposed to C I have kept the current behaviour and
  added a comment stating that it is probably not correct.

Other than this the patch should not cause any functional changes.

Reviewers: yaxunl, pcc, bader

Reviewed By: yaxunl, bader

Subscribers: jlebar, jholewinski, nhaehnle, Anastasia, cfe-commits

Differential Revision: https://reviews.llvm.org/D38816

llvm-svn: 315871
2017-10-15 18:48:14 +00:00
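
An editor's sketch of the compile-time benefit (illustrative enumerators, not the real ones):

```
enum class LangASSketch : unsigned { Default, opencl_generic /* ... */ };

void takesASTAddressSpace(LangASSketch AS) { (void)AS; }

void caller(unsigned TargetAS) {
  // takesASTAddressSpace(TargetAS);  // error: no implicit conversion from
  //                                  // unsigned; the old API compiled this
  takesASTAddressSpace(LangASSketch::opencl_generic);  // OK, self-documenting
  (void)TargetAS;
}
```
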
Ivan A. Kosarev 383890bad4 Refine generation of TBAA information in clang
This patch is an attempt to clarify and simplify generation and
propagation of TBAA information. The idea is to pack all values
that describe a memory access, namely, base type, access type and
offset, into a single structure. This is supposed to make further
changes, such as adding support for unions and array members,
easier to prepare and review.

DecorateInstructionWithTBAA() is no longer responsible for
converting types to tags. These implicit conversions not only
complicate reading the code, but also suggest assigning scalar
access tags while we generally prefer full-size struct-path tags.

TBAAPathTag is replaced with TBAAAccessInfo; the latter is now
the type of the keys of the cache map that translates access
descriptors to metadata nodes.

Fixed a bug with writing to a wrong map in
getTBAABaseTypeMetadata() (former getTBAAStructTypeInfo()).

We now check for valid base access types every time we
dereference a field. The original code only checks the top-level
base type. See isValidBaseType() / isTBAAPathStruct() calls.

Some entities have been renamed to be clearer and less
confusing/misleading in the presence of path-aware TBAA information.

We also no longer look up the same cache entry twice in
getAccessTagInfo().

Refined relevant comments and descriptions.

Differential Revision: https://reviews.llvm.org/D37826

llvm-svn: 315048
2017-10-06 08:17:48 +00:00
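
A sketch of the single structure described above (editor's simplification):

```
#include <cstdint>
namespace llvm { class MDNode; }

struct TBAAAccessInfoSketch {
  llvm::MDNode *BaseType;    // type node of the enclosing object
  llvm::MDNode *AccessType;  // type node of the member actually accessed
  uint64_t      Offset;      // offset of the access within the base type
};
// One descriptor travels through the TBAA code in place of loose
// (base, access, offset) parameters, and doubles as the key of the
// cache that maps access descriptors to metadata nodes.
```
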
Ivan A. Kosarev afc074cc41 Revert r314977 "[CodeGen] Unify generation of scalar and struct-path TBAA tags"
D37826 was mistakenly committed where the patch from D38503 should have been.

Differential Revision: https://reviews.llvm.org/D38503

llvm-svn: 314978
2017-10-05 11:05:43 +00:00
Ivan A. Kosarev 6fa20cfea3 [CodeGen] Unify generation of scalar and struct-path TBAA tags
This patch makes it possible to produce access tags in a uniform
manner regardless of whether the resulting tag will be a scalar or a
struct-path one. getAccessTagInfo() now takes care of the actual
translation of access descriptors to tags and can handle all
kinds of accesses. Facilities that are specific to scalar accesses
are eliminated.

Some more details:
* DecorateInstructionWithTBAA() is not responsible for conversion
  of types to access tags anymore. Instead, it takes an access
  descriptor (TBAAAccessInfo) and generates corresponding access
  tag from it.
* getTBAAInfoForVTablePtr() reworked into
  getTBAAVTablePtrAccessInfo(), which now returns the
  virtual-pointer access descriptor and not the virtual-pointer
  type metadata.
* Added function getTBAAMayAliasAccessInfo() that returns the
  descriptor for may-alias accesses.
* getTBAAStructTagInfo() renamed to getTBAAAccessTagInfo() as now
  it is the only way to generate an access tag from a given access
  descriptor. It is capable of producing both scalar and
  struct-path tags, depending on options and availability of the
  base access type. getTBAAScalarTagInfo() and its cache
  ScalarTagMetadataCache are eliminated.
* Now that we do not need to care about whether the resulting
  access tag should be a scalar or struct-path one,
  getTBAAStructTypeInfo() is renamed to getBaseTypeInfo().
* Added function getTBAAAccessInfo() that constructs an access
  descriptor from a given QualType access type.

This is part of D37826 reworked to be a separate patch to
simplify review.

Differential Revision: https://reviews.llvm.org/D38503

llvm-svn: 314977
2017-10-05 10:47:51 +00:00
Ivan A. Kosarev a511ed7501 [CodeGen] Introduce generic TBAA access descriptors
With this patch we implement a concept of TBAA access descriptors
that are capable of representing both scalar and struct-path
accesses in a generic way.

This is part of D37826 reworked to be a separate patch to
simplify review.

Differential Revision: https://reviews.llvm.org/D38456

llvm-svn: 314780
2017-10-03 10:52:39 +00:00
Ivan A. Kosarev 289574edc0 [CodeGen] Do not refer to complete TBAA info where we actually deal with just TBAA access types
This patch fixes misleading names of entities related to getting,
setting and generating TBAA access type descriptors.

This is effectively an attempt to provide a review for D37826 by
breaking it into smaller pieces.

Differential Revision: https://reviews.llvm.org/D38404

llvm-svn: 314657
2017-10-02 09:54:47 +00:00
Krzysztof Parzyszek 8f248234fa [CodeGen] Propagate LValueBaseInfo instead of AlignmentSource
The functions creating LValues propagated information about alignment
source. Extend the propagated data to also include information about
possible unrestricted aliasing. A new class LValueBaseInfo will
contain both AlignmentSource and MayAlias info.

This patch should not introduce any functional changes.

Differential Revision: https://reviews.llvm.org/D33284

llvm-svn: 303358
2017-05-18 17:07:11 +00:00
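
A sketch of the new class (editor's illustration; the enumerator names are assumptions):

```
enum class AlignmentSourceSketch { Decl, AttributedType, Type };

class LValueBaseInfoSketch {
  AlignmentSourceSketch AlignSource;
  bool MayAlias;  // whether unrestricted aliasing is possible

public:
  LValueBaseInfoSketch(AlignmentSourceSketch Source, bool MA)
      : AlignSource(Source), MayAlias(MA) {}
  AlignmentSourceSketch getAlignmentSource() const { return AlignSource; }
  bool getMayAlias() const { return MayAlias; }
};
```
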
David Majnemer ec4b7341cc [Sema] PR26444 fix crash when alignment value is >= 2**16
Sema allows max values up to 2**28, so use unsigned instead of unsigned
short to hold values that large.

Differential Revision: http://reviews.llvm.org/D17248

Patch by Don Hinton!

llvm-svn: 262466
2016-03-02 06:48:47 +00:00
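
An editor's illustration of why unsigned short could not hold such values:

```
#include <cstdint>

int main() {
  uint16_t Narrow = static_cast<uint16_t>(1u << 16);  // wraps to 0
  uint32_t Wide   = 1u << 16;                         // 65536; values up
                                                      // to 2**28 fit
  return (Narrow == 0 && Wide == 65536) ? 0 : 1;
}
```
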
Michael Zolotukhin 84df12375c Introduce __builtin_nontemporal_store and __builtin_nontemporal_load.
Summary:
Currently clang provides no general way to generate nontemporal loads/stores.
There are some architecture-specific builtins for doing so (e.g. on x86), but
there is no way to generate a non-temporal store on, e.g., AArch64. This patch adds
generic builtins which are expanded to a simple store with the '!nontemporal'
attribute in IR.

Differential Revision: http://reviews.llvm.org/D12313

llvm-svn: 247104
2015-09-08 23:52:33 +00:00
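
A usage sketch of the new builtins:

```
// Streaming copy whose accesses carry !nontemporal metadata in IR.
void stream_copy(float *dst, float *src, long n) {
  for (long i = 0; i < n; ++i) {
    float v = __builtin_nontemporal_load(&src[i]);
    __builtin_nontemporal_store(v, &dst[i]);
  }
}
```
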
John McCall 7f416cc426 Compute and preserve alignment more faithfully in IR-generation.
Introduce an Address type to bundle a pointer value with an
alignment.  Introduce APIs on CGBuilderTy to work with Address
values.  Change core APIs on CGF/CGM to traffic in Address where
appropriate.  Require alignments to be non-zero.  Update a ton
of code to compute and propagate alignment information.

As part of this, I've promoted CGBuiltin's EmitPointerWithAlignment
helper function to CGF and made use of it in a number of places in
the expression emitter.

The end result is that we should now be significantly more correct
when performing operations on objects that are locally known to
be under-aligned.  Since alignment is not reliably tracked in the
type system, there are inherent limits to this, but at least we
are no longer confused by standard operations like derived-to-base
conversions and array-to-pointer decay.  I've also fixed a large
number of bugs where we were applying the complete-object alignment
to a pointer instead of the non-virtual alignment, although most of
these were hidden by the very conservative approach we took with
member alignment.

Also, because IRGen now reliably asserts on zero alignments, we
should no longer be subject to an absurd but frustrating recurring
bug where an incomplete type would report a zero alignment and then
we'd naively do an alignmentAtOffset on it and emit code using an
alignment equal to the largest power-of-two factor of the offset.

We should also now be emitting much more aggressive alignment
attributes in the presence of over-alignment.  In particular,
field access now uses alignmentAtOffset instead of min.

Several times in this patch, I had to change the existing
code-generation pattern in order to more effectively use
the Address APIs.  For the most part, this seems to be a strict
improvement, like doing pointer arithmetic with GEPs instead of
ptrtoint.  That said, I've tried very hard to not change semantics,
but it is likely that I've failed in a few places, for which I
apologize.

ABIArgInfo now always carries the assumed alignment of indirect and
indirect byval arguments.  In order to cut down on what was already
a dauntingly large patch, I changed the code to never set align
attributes in the IR on non-byval indirect arguments.  That is,
we still generate code which assumes that indirect arguments have
the given alignment, but we don't express this information to the
backend except where it's semantically required (i.e. on byvals).
This is likely a minor regression for those targets that did provide
this information, but it'll be trivial to add it back in a later
patch.

I partially punted on applying this work to CGBuiltin.  Please
do not add more uses of the CreateDefaultAligned{Load,Store}
APIs; they will be going away eventually.

llvm-svn: 246985
2015-09-08 08:05:57 +00:00
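
An editor's sketch of the alignment-at-offset rule the message refers to: the alignment at base + offset is bounded both by the base alignment and by the largest power-of-two factor of the offset.

```
#include <cstdint>

uint64_t alignmentAtOffset(uint64_t BaseAlign, uint64_t Offset) {
  if (Offset == 0)
    return BaseAlign;
  uint64_t OffsetFactor = Offset & -Offset;  // largest power-of-two factor
  return BaseAlign < OffsetFactor ? BaseAlign : OffsetFactor;
}

// e.g. a field at offset 4 of a 16-byte-aligned object is 4-byte aligned.
// An incomplete type reporting alignment 0 would previously poison this
// computation, which the new non-zero assertions now catch.
```
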
David Blaikie e3b172afc3 [opaque pointer type] Update for GEP API changes in LLVM
Now the GEP constant utility functions require the type to be explicitly
passed (since eventually the pointer type will be opaque and not convey
the required type information). For now callers can still pass nullptr
(though none were needed here in Clang, which is nice) if
convenient/necessary, but eventually that will be disallowed as well.

llvm-svn: 233937
2015-04-02 18:55:21 +00:00
Benjamin Kramer 2f5db8b3db Header guard canonicalization, clang part.
Modifications made by clang-tidy with minor tweaks.

llvm-svn: 215557
2014-08-13 16:25:19 +00:00
Craig Topper 8a13c4180e [C++11] Use 'nullptr'. CodeGen edition.
llvm-svn: 209272
2014-05-21 05:09:00 +00:00
Renato Golin 230c5eb4bd Non-allocatable Global Named Register
This patch implements global named registers in Clang, lowering to the newly
created intrinsics in LLVM (@llvm.read/write_register). A new kind of LValue
had to be created (Register), which adds support for carrying the metadata
node containing the name of the register. Two new methods that emit loads and
stores interoperate with a third that emits the named metadata node.

No guarantees are being made and only non-allocatable global variable named
registers are being supported. Local named register support is unchanged.

llvm-svn: 209149
2014-05-19 18:15:42 +00:00
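
A usage sketch of the feature (GNU global register variable syntax; the register name is target-specific):

```
// Reads and writes of 'current_sp' lower to @llvm.read_register /
// @llvm.write_register, passing a metadata node holding the name "sp".
register unsigned long current_sp asm("sp");

unsigned long get_sp(void) { return current_sp; }
void set_sp(unsigned long v) { current_sp = v; }
```
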
Eli Friedman be4504df26 Simplify atomic load/store IRGen.
Also fixes a couple minor bugs along the way; see testcases.

llvm-svn: 186049
2013-07-11 01:32:21 +00:00
Manman Ren c451e5766e Initial support for struct-path aware TBAA.
Added TBAABaseType and TBAAOffset in LValue. These two fields are initialized to
the actual type and 0, and are updated in EmitLValueForField.
Path-aware TBAA tags are enabled for EmitLoadOfScalar and EmitStoreOfScalar.
Added command line option -struct-path-tbaa.

llvm-svn: 178797
2013-04-04 21:53:22 +00:00
Manman Ren 092d9e8f3b revert r178784 since it does not have a commit message
llvm-svn: 178796
2013-04-04 21:51:07 +00:00
Manman Ren 037d2b252d Index: include/clang/Driver/CC1Options.td
===================================================================
--- include/clang/Driver/CC1Options.td	(revision 178718)
+++ include/clang/Driver/CC1Options.td	(working copy)
@@ -161,6 +161,8 @@
   HelpText<"Use register sized accesses to bit-fields, when possible.">;
 def relaxed_aliasing : Flag<["-"], "relaxed-aliasing">,
   HelpText<"Turn off Type Based Alias Analysis">;
+def struct_path_tbaa : Flag<["-"], "struct-path-tbaa">,
+  HelpText<"Turn on struct-path aware Type Based Alias Analysis">;
 def masm_verbose : Flag<["-"], "masm-verbose">,
   HelpText<"Generate verbose assembly output">;
 def mcode_model : Separate<["-"], "mcode-model">,
Index: include/clang/Driver/Options.td
===================================================================
--- include/clang/Driver/Options.td	(revision 178718)
+++ include/clang/Driver/Options.td	(working copy)
@@ -587,6 +587,7 @@
   Flags<[CC1Option]>, HelpText<"Disable spell-checking">;
 def fno_stack_protector : Flag<["-"], "fno-stack-protector">, Group<f_Group>;
 def fno_strict_aliasing : Flag<["-"], "fno-strict-aliasing">, Group<f_Group>;
+def fstruct_path_tbaa : Flag<["-"], "fstruct-path-tbaa">, Group<f_Group>;
 def fno_strict_enums : Flag<["-"], "fno-strict-enums">, Group<f_Group>;
 def fno_strict_overflow : Flag<["-"], "fno-strict-overflow">, Group<f_Group>;
 def fno_threadsafe_statics : Flag<["-"], "fno-threadsafe-statics">, Group<f_Group>,
Index: include/clang/Frontend/CodeGenOptions.def
===================================================================
--- include/clang/Frontend/CodeGenOptions.def	(revision 178718)
+++ include/clang/Frontend/CodeGenOptions.def	(working copy)
@@ -85,6 +85,7 @@
 VALUE_CODEGENOPT(OptimizeSize, 2, 0) ///< If -Os (==1) or -Oz (==2) is specified.
 CODEGENOPT(RelaxAll          , 1, 0) ///< Relax all machine code instructions.
 CODEGENOPT(RelaxedAliasing   , 1, 0) ///< Set when -fno-strict-aliasing is enabled.
+CODEGENOPT(StructPathTBAA    , 1, 0) ///< Whether or not to use struct-path TBAA.
 CODEGENOPT(SaveTempLabels    , 1, 0) ///< Save temporary labels.
 CODEGENOPT(SanitizeAddressZeroBaseShadow , 1, 0) ///< Map shadow memory at zero
                                                  ///< offset in AddressSanitizer.
Index: lib/CodeGen/CGExpr.cpp
===================================================================
--- lib/CodeGen/CGExpr.cpp	(revision 178718)
+++ lib/CodeGen/CGExpr.cpp	(working copy)
@@ -1044,7 +1044,8 @@
 llvm::Value *CodeGenFunction::EmitLoadOfScalar(LValue lvalue) {
   return EmitLoadOfScalar(lvalue.getAddress(), lvalue.isVolatile(),
                           lvalue.getAlignment().getQuantity(),
-                          lvalue.getType(), lvalue.getTBAAInfo());
+                          lvalue.getType(), lvalue.getTBAAInfo(),
+                          lvalue.getTBAABaseType(), lvalue.getTBAAOffset());
 }
 
 static bool hasBooleanRepresentation(QualType Ty) {
@@ -1106,7 +1107,9 @@
 
 llvm::Value *CodeGenFunction::EmitLoadOfScalar(llvm::Value *Addr, bool Volatile,
                                               unsigned Alignment, QualType Ty,
-                                              llvm::MDNode *TBAAInfo) {
+                                              llvm::MDNode *TBAAInfo,
+                                              QualType TBAABaseType,
+                                              uint64_t TBAAOffset) {
   // For better performance, handle vector loads differently.
   if (Ty->isVectorType()) {
     llvm::Value *V;
@@ -1158,8 +1161,11 @@
     Load->setVolatile(true);
   if (Alignment)
     Load->setAlignment(Alignment);
-  if (TBAAInfo)
-    CGM.DecorateInstruction(Load, TBAAInfo);
+  if (TBAAInfo) {
+    llvm::MDNode *TBAAPath = CGM.getTBAAStructTagInfo(TBAABaseType, TBAAInfo,
+                                                      TBAAOffset);
+    CGM.DecorateInstruction(Load, TBAAPath);
+  }
 
   if ((SanOpts->Bool && hasBooleanRepresentation(Ty)) ||
       (SanOpts->Enum && Ty->getAs<EnumType>())) {
@@ -1217,7 +1223,8 @@
                                         bool Volatile, unsigned Alignment,
                                         QualType Ty,
                                         llvm::MDNode *TBAAInfo,
-                                        bool isInit) {
+                                        bool isInit, QualType TBAABaseType,
+                                        uint64_t TBAAOffset) {
   
   // Handle vectors differently to get better performance.
   if (Ty->isVectorType()) {
@@ -1268,15 +1275,19 @@
   llvm::StoreInst *Store = Builder.CreateStore(Value, Addr, Volatile);
   if (Alignment)
     Store->setAlignment(Alignment);
-  if (TBAAInfo)
-    CGM.DecorateInstruction(Store, TBAAInfo);
+  if (TBAAInfo) {
+    llvm::MDNode *TBAAPath = CGM.getTBAAStructTagInfo(TBAABaseType, TBAAInfo,
+                                                      TBAAOffset);
+    CGM.DecorateInstruction(Store, TBAAPath);
+  }
 }
 
 void CodeGenFunction::EmitStoreOfScalar(llvm::Value *value, LValue lvalue,
                                         bool isInit) {
   EmitStoreOfScalar(value, lvalue.getAddress(), lvalue.isVolatile(),
                     lvalue.getAlignment().getQuantity(), lvalue.getType(),
-                    lvalue.getTBAAInfo(), isInit);
+                    lvalue.getTBAAInfo(), isInit, lvalue.getTBAABaseType(),
+                    lvalue.getTBAAOffset());
 }
 
 /// EmitLoadOfLValue - Given an expression that represents a value lvalue, this
@@ -2494,9 +2505,12 @@
 
   llvm::Value *addr = base.getAddress();
   unsigned cvr = base.getVRQualifiers();
+  bool TBAAPath = CGM.getCodeGenOpts().StructPathTBAA;
   if (rec->isUnion()) {
     // For unions, there is no pointer adjustment.
     assert(!type->isReferenceType() && "union has reference member");
+    // TODO: handle path-aware TBAA for union.
+    TBAAPath = false;
   } else {
     // For structs, we GEP to the field that the record layout suggests.
     unsigned idx = CGM.getTypes().getCGRecordLayout(rec).getLLVMFieldNo(field);
@@ -2508,6 +2522,8 @@
       if (cvr & Qualifiers::Volatile) load->setVolatile(true);
       load->setAlignment(alignment.getQuantity());
 
+      // Loading the reference will disable path-aware TBAA.
+      TBAAPath = false;
       if (CGM.shouldUseTBAA()) {
         llvm::MDNode *tbaa;
         if (mayAlias)
@@ -2541,6 +2557,16 @@
 
   LValue LV = MakeAddrLValue(addr, type, alignment);
   LV.getQuals().addCVRQualifiers(cvr);
+  if (TBAAPath) {
+    const ASTRecordLayout &Layout =
+        getContext().getASTRecordLayout(field->getParent());
+    // Set the base type to be the base type of the base LValue and
+    // update offset to be relative to the base type.
+    LV.setTBAABaseType(base.getTBAABaseType());
+    LV.setTBAAOffset(base.getTBAAOffset() +
+                     Layout.getFieldOffset(field->getFieldIndex()) /
+                                           getContext().getCharWidth());
+  }
 
   // __weak attribute on a field is ignored.
   if (LV.getQuals().getObjCGCAttr() == Qualifiers::Weak)
Index: lib/CodeGen/CGValue.h
===================================================================
--- lib/CodeGen/CGValue.h	(revision 178718)
+++ lib/CodeGen/CGValue.h	(working copy)
@@ -157,6 +157,11 @@
 
   Expr *BaseIvarExp;
 
+  /// Used by struct-path-aware TBAA.
+  QualType TBAABaseType;
+  /// Offset relative to the base type.
+  uint64_t TBAAOffset;
+
   /// TBAAInfo - TBAA information to attach to dereferences of this LValue.
   llvm::MDNode *TBAAInfo;
 
@@ -175,6 +180,10 @@
     this->ImpreciseLifetime = false;
     this->ThreadLocalRef = false;
     this->BaseIvarExp = 0;
+
+    // Initialize fields for TBAA.
+    this->TBAABaseType = Type;
+    this->TBAAOffset = 0;
     this->TBAAInfo = TBAAInfo;
   }
 
@@ -232,6 +241,12 @@
   Expr *getBaseIvarExp() const { return BaseIvarExp; }
   void setBaseIvarExp(Expr *V) { BaseIvarExp = V; }
 
+  QualType getTBAABaseType() const { return TBAABaseType; }
+  void setTBAABaseType(QualType T) { TBAABaseType = T; }
+
+  uint64_t getTBAAOffset() const { return TBAAOffset; }
+  void setTBAAOffset(uint64_t O) { TBAAOffset = O; }
+
   llvm::MDNode *getTBAAInfo() const { return TBAAInfo; }
   void setTBAAInfo(llvm::MDNode *N) { TBAAInfo = N; }
 
Index: lib/CodeGen/CodeGenFunction.h
===================================================================
--- lib/CodeGen/CodeGenFunction.h	(revision 178718)
+++ lib/CodeGen/CodeGenFunction.h	(working copy)
@@ -2211,7 +2211,9 @@
   /// the LLVM value representation.
   llvm::Value *EmitLoadOfScalar(llvm::Value *Addr, bool Volatile,
                                 unsigned Alignment, QualType Ty,
-                                llvm::MDNode *TBAAInfo = 0);
+                                llvm::MDNode *TBAAInfo = 0,
+                                QualType TBAABaseTy = QualType(),
+                                uint64_t TBAAOffset = 0);
 
   /// EmitLoadOfScalar - Load a scalar value from an address, taking
   /// care to appropriately convert from the memory representation to
@@ -2224,7 +2226,9 @@
   /// the LLVM value representation.
   void EmitStoreOfScalar(llvm::Value *Value, llvm::Value *Addr,
                          bool Volatile, unsigned Alignment, QualType Ty,
-                         llvm::MDNode *TBAAInfo = 0, bool isInit=false);
+                         llvm::MDNode *TBAAInfo = 0, bool isInit = false,
+                         QualType TBAABaseTy = QualType(),
+                         uint64_t TBAAOffset = 0);
 
   /// EmitStoreOfScalar - Store a scalar value to an address, taking
   /// care to appropriately convert from the memory representation to
Index: lib/CodeGen/CodeGenModule.cpp
===================================================================
--- lib/CodeGen/CodeGenModule.cpp	(revision 178718)
+++ lib/CodeGen/CodeGenModule.cpp	(working copy)
@@ -227,6 +227,20 @@
   return TBAA->getTBAAStructInfo(QTy);
 }
 
+llvm::MDNode *CodeGenModule::getTBAAStructTypeInfo(QualType QTy) {
+  if (!TBAA)
+    return 0;
+  return TBAA->getTBAAStructTypeInfo(QTy);
+}
+
+llvm::MDNode *CodeGenModule::getTBAAStructTagInfo(QualType BaseTy,
+                                                  llvm::MDNode *AccessN,
+                                                  uint64_t O) {
+  if (!TBAA)
+    return 0;
+  return TBAA->getTBAAStructTagInfo(BaseTy, AccessN, O);
+}
+
 void CodeGenModule::DecorateInstruction(llvm::Instruction *Inst,
                                         llvm::MDNode *TBAAInfo) {
   Inst->setMetadata(llvm::LLVMContext::MD_tbaa, TBAAInfo);
Index: lib/CodeGen/CodeGenModule.h
===================================================================
--- lib/CodeGen/CodeGenModule.h	(revision 178718)
+++ lib/CodeGen/CodeGenModule.h	(working copy)
@@ -501,6 +501,11 @@
   llvm::MDNode *getTBAAInfo(QualType QTy);
   llvm::MDNode *getTBAAInfoForVTablePtr();
   llvm::MDNode *getTBAAStructInfo(QualType QTy);
+  /// Return the MDNode in the type DAG for the given struct type.
+  llvm::MDNode *getTBAAStructTypeInfo(QualType QTy);
+  /// Return the path-aware tag for given base type, access node and offset.
+  llvm::MDNode *getTBAAStructTagInfo(QualType BaseTy, llvm::MDNode *AccessN,
+                                     uint64_t O);
 
   bool isTypeConstant(QualType QTy, bool ExcludeCtorDtor);
 
Index: lib/CodeGen/CodeGenTBAA.cpp
===================================================================
--- lib/CodeGen/CodeGenTBAA.cpp	(revision 178718)
+++ lib/CodeGen/CodeGenTBAA.cpp	(working copy)
@@ -21,6 +21,7 @@
 #include "clang/AST/Mangle.h"
 #include "clang/AST/RecordLayout.h"
 #include "clang/Frontend/CodeGenOptions.h"
+#include "llvm/ADT/SmallSet.h"
 #include "llvm/IR/Constants.h"
 #include "llvm/IR/LLVMContext.h"
 #include "llvm/IR/Metadata.h"
@@ -225,3 +226,87 @@
   // For now, handle any other kind of type conservatively.
   return StructMetadataCache[Ty] = NULL;
 }
+
+/// Check if the given type can be handled by path-aware TBAA.
+static bool isTBAAPathStruct(QualType QTy) {
+  if (const RecordType *TTy = QTy->getAs<RecordType>()) {
+    const RecordDecl *RD = TTy->getDecl()->getDefinition();
+    // RD can be struct, union, class, interface or enum.
+    // For now, we only handle struct.
+    if (RD->isStruct() && !RD->hasFlexibleArrayMember())
+      return true;
+  }
+  return false;
+}
+
+llvm::MDNode *
+CodeGenTBAA::getTBAAStructTypeInfo(QualType QTy) {
+  const Type *Ty = Context.getCanonicalType(QTy).getTypePtr();
+  assert(isTBAAPathStruct(QTy));
+
+  if (llvm::MDNode *N = StructTypeMetadataCache[Ty])
+    return N;
+
+  if (const RecordType *TTy = QTy->getAs<RecordType>()) {
+    const RecordDecl *RD = TTy->getDecl()->getDefinition();
+
+    const ASTRecordLayout &Layout = Context.getASTRecordLayout(RD);
+    SmallVector <std::pair<uint64_t, llvm::MDNode*>, 4> Fields;
+    // To reduce the size of MDNode for a given struct type, we only output
+    // once for all the fields with the same scalar types.
+    // Offsets for scalar fields in the type DAG are not used.
+    llvm::SmallSet <llvm::MDNode*, 4> ScalarFieldTypes;
+    unsigned idx = 0;
+    for (RecordDecl::field_iterator i = RD->field_begin(),
+         e = RD->field_end(); i != e; ++i, ++idx) {
+      QualType FieldQTy = i->getType();
+      llvm::MDNode *FieldNode;
+      if (isTBAAPathStruct(FieldQTy))
+        FieldNode = getTBAAStructTypeInfo(FieldQTy);
+      else {
+        FieldNode = getTBAAInfo(FieldQTy);
+        // Ignore this field if the type already exists.
+        if (ScalarFieldTypes.count(FieldNode))
+          continue;
+        ScalarFieldTypes.insert(FieldNode);
+       }
+      if (!FieldNode)
+        return StructTypeMetadataCache[Ty] = NULL;
+      Fields.push_back(std::make_pair(
+          Layout.getFieldOffset(idx) / Context.getCharWidth(), FieldNode));
+    }
+
+    // TODO: This is using the RTTI name. Is there a better way to get
+    // a unique string for a type?
+    SmallString<256> OutName;
+    llvm::raw_svector_ostream Out(OutName);
+    MContext.mangleCXXRTTIName(QualType(Ty, 0), Out);
+    Out.flush();
+    // Create the struct type node with a vector of pairs (offset, type).
+    return StructTypeMetadataCache[Ty] =
+      MDHelper.createTBAAStructTypeNode(OutName, Fields);
+  }
+
+  return StructMetadataCache[Ty] = NULL;
+}
+
+llvm::MDNode *
+CodeGenTBAA::getTBAAStructTagInfo(QualType BaseQTy, llvm::MDNode *AccessNode,
+                                  uint64_t Offset) {
+  if (!CodeGenOpts.StructPathTBAA)
+    return AccessNode;
+
+  const Type *BTy = Context.getCanonicalType(BaseQTy).getTypePtr();
+  TBAAPathTag PathTag = TBAAPathTag(BTy, AccessNode, Offset);
+  if (llvm::MDNode *N = StructTagMetadataCache[PathTag])
+    return N;
+
+  llvm::MDNode *BNode = 0;
+  if (isTBAAPathStruct(BaseQTy))
+    BNode  = getTBAAStructTypeInfo(BaseQTy);
+  if (!BNode)
+    return StructTagMetadataCache[PathTag] = AccessNode;
+
+  return StructTagMetadataCache[PathTag] =
+    MDHelper.createTBAAStructTagNode(BNode, AccessNode, Offset);
+}
Index: lib/CodeGen/CodeGenTBAA.h
===================================================================
--- lib/CodeGen/CodeGenTBAA.h	(revision 178718)
+++ lib/CodeGen/CodeGenTBAA.h	(working copy)
@@ -35,6 +35,14 @@
 namespace CodeGen {
   class CGRecordLayout;
 
+  struct TBAAPathTag {
+    TBAAPathTag(const Type *B, const llvm::MDNode *A, uint64_t O)
+      : BaseT(B), AccessN(A), Offset(O) {}
+    const Type *BaseT;
+    const llvm::MDNode *AccessN;
+    uint64_t Offset;
+  };
+
 /// CodeGenTBAA - This class organizes the cross-module state that is used
 /// while lowering AST types to LLVM types.
 class CodeGenTBAA {
@@ -46,8 +54,13 @@
   // MDHelper - Helper for creating metadata.
   llvm::MDBuilder MDHelper;
 
-  /// MetadataCache - This maps clang::Types to llvm::MDNodes describing them.
+  /// MetadataCache - This maps clang::Types to scalar llvm::MDNodes describing
+  /// them.
   llvm::DenseMap<const Type *, llvm::MDNode *> MetadataCache;
+  /// This maps clang::Types to a struct node in the type DAG.
+  llvm::DenseMap<const Type *, llvm::MDNode *> StructTypeMetadataCache;
+  /// This maps TBAAPathTags to a tag node.
+  llvm::DenseMap<TBAAPathTag, llvm::MDNode *> StructTagMetadataCache;
 
   /// StructMetadataCache - This maps clang::Types to llvm::MDNodes describing
   /// them for struct assignments.
@@ -89,9 +102,49 @@
   /// getTBAAStructInfo - Get the TBAAStruct MDNode to be used for a memcpy of
   /// the given type.
   llvm::MDNode *getTBAAStructInfo(QualType QTy);
+
+  /// Get the MDNode in the type DAG for given struct type QType.
+  llvm::MDNode *getTBAAStructTypeInfo(QualType QType);
+  /// Get the tag MDNode for a given base type, the actual scalar access MDNode
+  /// and offset into the base type.
+  llvm::MDNode *getTBAAStructTagInfo(QualType BaseQType,
+                                     llvm::MDNode *AccessNode, uint64_t Offset);
 };
 
 }  // end namespace CodeGen
 }  // end namespace clang
 
+namespace llvm {
+
+template<> struct DenseMapInfo<clang::CodeGen::TBAAPathTag> {
+  static clang::CodeGen::TBAAPathTag getEmptyKey() {
+    return clang::CodeGen::TBAAPathTag(
+      DenseMapInfo<const clang::Type *>::getEmptyKey(),
+      DenseMapInfo<const MDNode *>::getEmptyKey(),
+      DenseMapInfo<uint64_t>::getEmptyKey());
+  }
+
+  static clang::CodeGen::TBAAPathTag getTombstoneKey() {
+    return clang::CodeGen::TBAAPathTag(
+      DenseMapInfo<const clang::Type *>::getTombstoneKey(),
+      DenseMapInfo<const MDNode *>::getTombstoneKey(),
+      DenseMapInfo<uint64_t>::getTombstoneKey());
+  }
+
+  static unsigned getHashValue(const clang::CodeGen::TBAAPathTag &Val) {
+    return DenseMapInfo<const clang::Type *>::getHashValue(Val.BaseT) ^
+           DenseMapInfo<const MDNode *>::getHashValue(Val.AccessN) ^
+           DenseMapInfo<uint64_t>::getHashValue(Val.Offset);
+  }
+
+  static bool isEqual(const clang::CodeGen::TBAAPathTag &LHS,
+                      const clang::CodeGen::TBAAPathTag &RHS) {
+    return LHS.BaseT == RHS.BaseT &&
+           LHS.AccessN == RHS.AccessN &&
+           LHS.Offset == RHS.Offset;
+  }
+};
+
+}  // end namespace llvm
+
 #endif
Index: lib/Driver/Tools.cpp
===================================================================
--- lib/Driver/Tools.cpp	(revision 178718)
+++ lib/Driver/Tools.cpp	(working copy)
@@ -2105,6 +2105,8 @@
                     options::OPT_fno_strict_aliasing,
                     getToolChain().IsStrictAliasingDefault()))
     CmdArgs.push_back("-relaxed-aliasing");
+  if (Args.hasArg(options::OPT_fstruct_path_tbaa))
+    CmdArgs.push_back("-struct-path-tbaa");
   if (Args.hasFlag(options::OPT_fstrict_enums, options::OPT_fno_strict_enums,
                    false))
     CmdArgs.push_back("-fstrict-enums");
Index: lib/Frontend/CompilerInvocation.cpp
===================================================================
--- lib/Frontend/CompilerInvocation.cpp	(revision 178718)
+++ lib/Frontend/CompilerInvocation.cpp	(working copy)
@@ -324,6 +324,7 @@
   Opts.UseRegisterSizedBitfieldAccess = Args.hasArg(
     OPT_fuse_register_sized_bitfield_access);
   Opts.RelaxedAliasing = Args.hasArg(OPT_relaxed_aliasing);
+  Opts.StructPathTBAA = Args.hasArg(OPT_struct_path_tbaa);
   Opts.DwarfDebugFlags = Args.getLastArgValue(OPT_dwarf_debug_flags);
   Opts.MergeAllConstants = !Args.hasArg(OPT_fno_merge_all_constants);
   Opts.NoCommon = Args.hasArg(OPT_fno_common);
Index: test/CodeGen/tbaa.cpp
===================================================================
--- test/CodeGen/tbaa.cpp	(revision 0)
+++ test/CodeGen/tbaa.cpp	(working copy)
@@ -0,0 +1,217 @@
+// RUN: %clang_cc1 -O1 -disable-llvm-optzns %s -emit-llvm -o - | FileCheck %s
+// RUN: %clang_cc1 -O1 -struct-path-tbaa -disable-llvm-optzns %s -emit-llvm -o - | FileCheck %s -check-prefix=PATH
+// Test TBAA metadata generated by front-end.
+
+#include <stdint.h>
+typedef struct
+{
+   uint16_t f16;
+   uint32_t f32;
+   uint16_t f16_2;
+   uint32_t f32_2;
+} StructA;
+typedef struct
+{
+   uint16_t f16;
+   StructA a;
+   uint32_t f32;
+} StructB;
+typedef struct
+{
+   uint16_t f16;
+   StructB b;
+   uint32_t f32;
+} StructC;
+typedef struct
+{
+   uint16_t f16;
+   StructB b;
+   uint32_t f32;
+   uint8_t f8;
+} StructD;
+
+typedef struct
+{
+   uint16_t f16;
+   uint32_t f32;
+} StructS;
+typedef struct
+{
+   uint16_t f16;
+   uint32_t f32;
+} StructS2;
+
+uint32_t g(uint32_t *s, StructA *A, uint64_t count) {
+// CHECK: define i32 @{{.*}}(
+// CHECK: store i32 1, i32* %{{.*}}, align 4, !tbaa !4
+// CHECK: store i32 4, i32* %{{.*}}, align 4, !tbaa !4
+// PATH: define i32 @{{.*}}(
+// PATH: store i32 1, i32* %{{.*}}, align 4, !tbaa !4
+// PATH: store i32 4, i32* %{{.*}}, align 4, !tbaa !5
+  *s = 1;
+  A->f32 = 4;
+  return *s;
+}
+
+uint32_t g2(uint32_t *s, StructA *A, uint64_t count) {
+// CHECK: define i32 @{{.*}}(
+// CHECK: store i32 1, i32* %{{.*}}, align 4, !tbaa !4
+// CHECK: store i16 4, i16* %{{.*}}, align 2, !tbaa !5
+// PATH: define i32 @{{.*}}(
+// PATH: store i32 1, i32* %{{.*}}, align 4, !tbaa !4
+// PATH: store i16 4, i16* %{{.*}}, align 2, !tbaa !8
+  *s = 1;
+  A->f16 = 4;
+  return *s;
+}
+
+uint32_t g3(StructA *A, StructB *B, uint64_t count) {
+// CHECK: define i32 @{{.*}}(
+// CHECK: store i32 1, i32* %{{.*}}, align 4, !tbaa !4
+// CHECK: store i32 4, i32* %{{.*}}, align 4, !tbaa !4
+// PATH: define i32 @{{.*}}(
+// PATH: store i32 1, i32* %{{.*}}, align 4, !tbaa !5
+// PATH: store i32 4, i32* %{{.*}}, align 4, !tbaa !9
+  A->f32 = 1;
+  B->a.f32 = 4;
+  return A->f32;
+}
+
+uint32_t g4(StructA *A, StructB *B, uint64_t count) {
+// CHECK: define i32 @{{.*}}(
+// CHECK: store i32 1, i32* %{{.*}}, align 4, !tbaa !4
+// CHECK: store i16 4, i16* %{{.*}}, align 2, !tbaa !5
+// PATH: define i32 @{{.*}}(
+// PATH: store i32 1, i32* %{{.*}}, align 4, !tbaa !5
+// PATH: store i16 4, i16* %{{.*}}, align 2, !tbaa !11
+  A->f32 = 1;
+  B->a.f16 = 4;
+  return A->f32;
+}
+
+uint32_t g5(StructA *A, StructB *B, uint64_t count) {
+// CHECK: define i32 @{{.*}}(
+// CHECK: store i32 1, i32* %{{.*}}, align 4, !tbaa !4
+// CHECK: store i32 4, i32* %{{.*}}, align 4, !tbaa !4
+// PATH: define i32 @{{.*}}(
+// PATH: store i32 1, i32* %{{.*}}, align 4, !tbaa !5
+// PATH: store i32 4, i32* %{{.*}}, align 4, !tbaa !12
+  A->f32 = 1;
+  B->f32 = 4;
+  return A->f32;
+}
+
+uint32_t g6(StructA *A, StructB *B, uint64_t count) {
+// CHECK: define i32 @{{.*}}(
+// CHECK: store i32 1, i32* %{{.*}}, align 4, !tbaa !4
+// CHECK: store i32 4, i32* %{{.*}}, align 4, !tbaa !4
+// PATH: define i32 @{{.*}}(
+// PATH: store i32 1, i32* %{{.*}}, align 4, !tbaa !5
+// PATH: store i32 4, i32* %{{.*}}, align 4, !tbaa !13
+  A->f32 = 1;
+  B->a.f32_2 = 4;
+  return A->f32;
+}
+
+uint32_t g7(StructA *A, StructS *S, uint64_t count) {
+// CHECK: define i32 @{{.*}}(
+// CHECK: store i32 1, i32* %{{.*}}, align 4, !tbaa !4
+// CHECK: store i32 4, i32* %{{.*}}, align 4, !tbaa !4
+// PATH: define i32 @{{.*}}(
+// PATH: store i32 1, i32* %{{.*}}, align 4, !tbaa !5
+// PATH: store i32 4, i32* %{{.*}}, align 4, !tbaa !14
+  A->f32 = 1;
+  S->f32 = 4;
+  return A->f32;
+}
+
+uint32_t g8(StructA *A, StructS *S, uint64_t count) {
+// CHECK: define i32 @{{.*}}(
+// CHECK: store i32 1, i32* %{{.*}}, align 4, !tbaa !4
+// CHECK: store i16 4, i16* %{{.*}}, align 2, !tbaa !5
+// PATH: define i32 @{{.*}}(
+// PATH: store i32 1, i32* %{{.*}}, align 4, !tbaa !5
+// PATH: store i16 4, i16* %{{.*}}, align 2, !tbaa !16
+  A->f32 = 1;
+  S->f16 = 4;
+  return A->f32;
+}
+
+uint32_t g9(StructS *S, StructS2 *S2, uint64_t count) {
+// CHECK: define i32 @{{.*}}(
+// CHECK: store i32 1, i32* %{{.*}}, align 4, !tbaa !4
+// CHECK: store i32 4, i32* %{{.*}}, align 4, !tbaa !4
+// PATH: define i32 @{{.*}}(
+// PATH: store i32 1, i32* %{{.*}}, align 4, !tbaa !14
+// PATH: store i32 4, i32* %{{.*}}, align 4, !tbaa !17
+  S->f32 = 1;
+  S2->f32 = 4;
+  return S->f32;
+}
+
+uint32_t g10(StructS *S, StructS2 *S2, uint64_t count) {
+// CHECK: define i32 @{{.*}}(
+// CHECK: store i32 1, i32* %{{.*}}, align 4, !tbaa !4
+// CHECK: store i16 4, i16* %{{.*}}, align 2, !tbaa !5
+// PATH: define i32 @{{.*}}(
+// PATH: store i32 1, i32* %{{.*}}, align 4, !tbaa !14
+// PATH: store i16 4, i16* %{{.*}}, align 2, !tbaa !19
+  S->f32 = 1;
+  S2->f16 = 4;
+  return S->f32;
+}
+
+uint32_t g11(StructC *C, StructD *D, uint64_t count) {
+// CHECK: define i32 @{{.*}}(
+// CHECK: store i32 1, i32* %{{.*}}, align 4, !tbaa !4
+// CHECK: store i32 4, i32* %{{.*}}, align 4, !tbaa !4
+// PATH: define i32 @{{.*}}(
+// PATH: store i32 1, i32* %{{.*}}, align 4, !tbaa !20
+// PATH: store i32 4, i32* %{{.*}}, align 4, !tbaa !22
+  C->b.a.f32 = 1;
+  D->b.a.f32 = 4;
+  return C->b.a.f32;
+}
+
+uint32_t g12(StructC *C, StructD *D, uint64_t count) {
+// CHECK: define i32 @{{.*}}(
+// CHECK: store i32 1, i32* %{{.*}}, align 4, !tbaa !4
+// CHECK: store i32 4, i32* %{{.*}}, align 4, !tbaa !4
+// TODO: differentiate the two accesses.
+// PATH: define i32 @{{.*}}(
+// PATH: store i32 1, i32* %{{.*}}, align 4, !tbaa !9
+// PATH: store i32 4, i32* %{{.*}}, align 4, !tbaa !9
+  StructB *b1 = &(C->b);
+  StructB *b2 = &(D->b);
+  // b1, b2 have different context.
+  b1->a.f32 = 1;
+  b2->a.f32 = 4;
+  return b1->a.f32;
+}
+
+// CHECK: !1 = metadata !{metadata !"omnipotent char", metadata !2}
+// CHECK: !2 = metadata !{metadata !"Simple C/C++ TBAA"}
+// CHECK: !4 = metadata !{metadata !"int", metadata !1}
+// CHECK: !5 = metadata !{metadata !"short", metadata !1}
+
+// PATH: !1 = metadata !{metadata !"omnipotent char", metadata !2}
+// PATH: !4 = metadata !{metadata !"int", metadata !1}
+// PATH: !5 = metadata !{metadata !6, metadata !4, i64 4}
+// PATH: !6 = metadata !{metadata !"_ZTS7StructA", i64 0, metadata !7, i64 4, metadata !4}
+// PATH: !7 = metadata !{metadata !"short", metadata !1}
+// PATH: !8 = metadata !{metadata !6, metadata !7, i64 0}
+// PATH: !9 = metadata !{metadata !10, metadata !4, i64 8}
+// PATH: !10 = metadata !{metadata !"_ZTS7StructB", i64 0, metadata !7, i64 4, metadata !6, i64 20, metadata !4}
+// PATH: !11 = metadata !{metadata !10, metadata !7, i64 4}
+// PATH: !12 = metadata !{metadata !10, metadata !4, i64 20}
+// PATH: !13 = metadata !{metadata !10, metadata !4, i64 16}
+// PATH: !14 = metadata !{metadata !15, metadata !4, i64 4}
+// PATH: !15 = metadata !{metadata !"_ZTS7StructS", i64 0, metadata !7, i64 4, metadata !4}
+// PATH: !16 = metadata !{metadata !15, metadata !7, i64 0}
+// PATH: !17 = metadata !{metadata !18, metadata !4, i64 4}
+// PATH: !18 = metadata !{metadata !"_ZTS8StructS2", i64 0, metadata !7, i64 4, metadata !4}
+// PATH: !19 = metadata !{metadata !18, metadata !7, i64 0}
+// PATH: !20 = metadata !{metadata !21, metadata !4, i64 12}
+// PATH: !21 = metadata !{metadata !"_ZTS7StructC", i64 0, metadata !7, i64 4, metadata !10, i64 28, metadata !4}
+// PATH: !22 = metadata !{metadata !23, metadata !4, i64 12}
+// PATH: !23 = metadata !{metadata !"_ZTS7StructD", i64 0, metadata !7, i64 4, metadata !10, i64 28, metadata !4, i64 32, metadata !1}

llvm-svn: 178784
2013-04-04 20:14:17 +00:00
John McCall 4d31b812ad Remove trailing comma in enum list.
llvm-svn: 176926
2013-03-13 05:02:21 +00:00
John McCall cdda29c968 Tighten up the rules for precise lifetime and document
the requirements on the ARC optimizer.

rdar://13407451

llvm-svn: 176924
2013-03-13 03:10:54 +00:00
John McCall a8ec7eb9cf Promote atomic type sizes up to a power of two, capped by
MaxAtomicPromoteWidth.  Fix a ton of terrible bugs with
_Atomic types and (non-intrinsic-mediated) loads and stores
thereto.

llvm-svn: 176658
2013-03-07 21:37:17 +00:00
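
An editor's illustration of the promotion (sizes assume a typical target; _Atomic is a clang extension in C++):

```
struct S3 { char c[3]; };  // sizeof(S3) == 3

_Atomic struct S3 a;       // padded out to the next power of two (4),
                           // capped by MaxAtomicPromoteWidth, so plain
                           // loads/stores can use one atomic instruction
```
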
John McCall 47fb950871 Change hasAggregateLLVMType, which conflates complex and
aggregate types in a profoundly wrong way that has to be
worked around in every call site, to getEvaluationKind,
which classifies and distinguishes between all of these
cases.

Also, normalize the API for loading and storing complexes.

I'm working on a larger patch and wanted to pull these
changes out, but it would have be annoying to detangle
them from each other.

llvm-svn: 176656
2013-03-07 21:37:08 +00:00
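
A sketch of the three-way classification (the TEK_* names match clang's TypeEvaluationKind; the comments are the editor's):

```
enum TypeEvaluationKindSketch {
  TEK_Scalar,    // ints, pointers, floats, vectors: held in an llvm::Value
  TEK_Complex,   // _Complex T: a (real, imag) pair, not an aggregate
  TEK_Aggregate  // structs, arrays: evaluated into memory
};
// Call sites switch over all three kinds instead of working around the
// old boolean that lumped complex and aggregate types together.
```
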
Fariborz Jahanian 7865220da4 patch for PR9027 and rdar://11861085
Title: [PR9027] volatile struct bug: member is not loaded at -O;
This is caused by the last flag passed to @llvm.memcpy being false,
not honoring that the aggregate has at least one 'volatile' data member
(even though the aggregate itself has not been qualified as 'volatile').
As a result, the optimizer removes the memcpy altogether.
Patch reviewed by John McCall (I still need to fix up a test though).

llvm-svn: 173535
2013-01-25 23:57:05 +00:00
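
An editor's illustration of the bug:

```
struct S { volatile int v; int x; };

void copy(S *a, S *b) {
  *a = *b;  // lowered to @llvm.memcpy; its volatile flag must be true
}           // because a member is volatile, even though S itself is not,
            // or the optimizer may delete the copy entirely at -O
```
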
Chandler Carruth ffd5551bc7 Rewrite #includes for llvm/Foo.h to llvm/IR/Foo.h as appropriate to
reflect the migration in r171366.

Re-sort the #include lines to reflect the new paths.

llvm-svn: 171369
2013-01-02 11:45:17 +00:00
NAKAMURA Takumi 368d2ee4ef CGValue.h: Update one \param to Addr in MakeBitfield(). [-Wdocumentation]
llvm-svn: 171011
2012-12-24 01:48:48 +00:00
Chandler Carruth ff0e3a1e1c Rework the bitfield access IR generation to address PR13619 and
generally support the C++11 memory model requirements for bitfield
accesses by relying more heavily on LLVM's memory model.

The primary change this introduces is to move from a manually aligned
and strided access pattern across the bits of the bitfield to a much
simpler lump access of all bits in the bitfield followed by math to
extract the bits relevant for the particular field.

This simplifies the code significantly, but relies on LLVM
intelligently lowering these integers.

I have tested LLVM's lowering both synthetically and in benchmarks. The
lowering appears to be functional, and there are no really significant
performance regressions. Different code patterns accessing bitfields
will vary in how this impacts them. The only real regressions I'm seeing
are a few patterns where the LLVM code generation for loads that feed
directly into a mask operation doesn't take advantage of the x86 ability
to do a smaller load and a cheap zero-extension. This doesn't regress
any benchmark in the nightly test suite on my box past the noise
threshold, but my box is quite noisy. I'll be watching the LNT numbers,
and will look into further improvements to the LLVM lowering as needed.

llvm-svn: 169489
2012-12-06 11:14:44 +00:00
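
An editor's sketch, in plain C++, of the "lump access plus math" pattern the rework emits:

```
#include <cstdint>

struct BF { uint32_t Storage; };  // stand-in for: unsigned a : 5, b : 11, ...

uint32_t load_b(const BF *P) {
  uint32_t Lump = P->Storage;     // one load of the whole container
  return (Lump >> 5) & 0x7FFu;    // shift and mask extract the 11-bit field
}

void store_b(BF *P, uint32_t V) {
  uint32_t Lump = P->Storage;     // load the container
  Lump &= ~(0x7FFu << 5);         // clear the field's bits
  Lump |= (V & 0x7FFu) << 5;      // insert the new value
  P->Storage = Lump;              // one store of the whole container
}
```
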