Commit Graph

40 Commits

Author SHA1 Message Date
Ties Stuij 208987844f [ARM] Follow AAPCS standard for volatile bit-fields access width
This patch resumes the work of D16586.
According to the AAPCS, volatile bit-fields should
be accessed using containers of the width of their
declared type. In that case:
```
struct S1 {
  short a : 1;
};
```
should be accessed using loads and stores of that width
(sizeof(short)), whereas the compiler currently loads only
the minimum required width (char in this case).
However, as discussed in D16586,
widening the access could overwrite adjacent members, which
conflicts with the C and C++ object models by creating
data races on memory locations that are not part of the bit-field,
e.g.
```
struct S2 {
  short a;
  int  b : 16;
};
```
Accessing `S2.b` would also access `S2.a`.

The AAPCS Release 2020Q2
(https://documentation-service.arm.com/static/5efb7fbedbdee951c1ccf186?token=)
section 8.1 Data Types, page 36, "Volatile bit-fields -
preserving number and width of container accesses" has been
updated to avoid conflict with the C++ Memory Model.
The relevant note now reads:
```
This ABI does not place any restrictions on the access widths of bit-fields where the container
overlaps with a non-bit-field member or where the container overlaps with any zero length bit-field
placed between two other bit-fields. This is because the C/C++ memory model defines these as being
separate memory locations, which can be accessed by two threads simultaneously. For this reason,
compilers must be permitted to use a narrower memory access width (including splitting the access into
multiple instructions) to avoid writing to a different memory location. For example, in
struct S { int a:24; char b; }; a write to a must not also write to the location occupied by b, this requires at least two
memory accesses in all current Arm architectures. In the same way, in struct S { int a:24; int:0; int b:8; };,
writes to a or b must not overwrite each other.
```

I've updated the patch D16586 to follow such behavior by verifying that we
only change a volatile bit-field access when (as sketched below):
 - it does not overlap with any non-bit-field member
 - it stays within the bounds of the record
 - it does not overlap any zero-length bit-field.
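
For illustration only, a minimal C sketch of the kind of layouts these rules
distinguish; the access widths in the comments are assumptions based on the
description above, not guaranteed compiler output:
```
/* Illustrative only; the widths in the comments are assumptions. */
struct W1 {
  volatile short a : 1; /* container (short) overlaps no other member, so the
                           access may be widened to sizeof(short) == 2 bytes */
};

struct W2 {
  short        a;       /* ordinary member sharing the container with b      */
  volatile int b : 16;  /* widening b's access to sizeof(int) would also
                           touch a, so the narrower access width is kept     */
};

short load_w1(struct W1 *p) { return p->a; }
int   load_w2(struct W2 *p) { return p->b; }
```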

Preserving the number of memory accesses is a separate concern and will
be implemented by D67399.

Reviewed By: ostannard

Differential Revision: https://reviews.llvm.org/D72932
2020-10-13 10:31:48 +01:00
Ties Stuij d6f3f61231 Revert "[ARM] Follow AAPCS standard for volatile bit-fields access width"
This reverts commit 514df1b2bb.

Some of the buildbots got llvm-lit errors on CodeGen/volatile.c
2020-09-08 18:46:27 +01:00
Ties Stuij 514df1b2bb [ARM] Follow AAPCS standard for volatile bit-fields access width
This patch resumes the work of D16586.
According to the AAPCS, volatile bit-fields should
be accessed using containers of the width of their
declared type. In that case:
```
struct S1 {
  short a : 1;
};
```
should be accessed using loads and stores of that width
(sizeof(short)), whereas the compiler currently loads only
the minimum required width (char in this case).
However, as discussed in D16586,
widening the access could overwrite adjacent members, which
conflicts with the C and C++ object models by creating
data races on memory locations that are not part of the bit-field,
e.g.
```
struct S2 {
  short a;
  int  b : 16;
};
```
Accessing `S2.b` would also access `S2.a`.

The AAPCS Release 2020Q2
(https://documentation-service.arm.com/static/5efb7fbedbdee951c1ccf186?token=)
section 8.1 Data Types, page 36, "Volatile bit-fields -
preserving number and width of container accesses" has been
updated to avoid conflict with the C++ Memory Model.
The relevant note now reads:
```
This ABI does not place any restrictions on the access widths of bit-fields where the container
overlaps with a non-bit-field member or where the container overlaps with any zero length bit-field
placed between two other bit-fields. This is because the C/C++ memory model defines these as being
separate memory locations, which can be accessed by two threads simultaneously. For this reason,
compilers must be permitted to use a narrower memory access width (including splitting the access into
multiple instructions) to avoid writing to a different memory location. For example, in
struct S { int a:24; char b; }; a write to a must not also write to the location occupied by b, this requires at least two
memory accesses in all current Arm architectures. In the same way, in struct S { int a:24; int:0; int b:8; };,
writes to a or b must not overwrite each other.
```

Patch D16586 was updated to follow such behavior by verifying that we
only change a volatile bit-field access when:
 - it does not overlap with any non-bit-field member
 - it stays within the bounds of the record
 - it does not overlap any zero-length bit-field.

Preserving the number of memory accesses is a separate concern and will
be implemented by D67399.

Differential Revision: https://reviews.llvm.org/D72932

The following people contributed to this patch:
- Diogo Sampaio
- Ties Stuij
2020-09-08 17:49:49 +01:00
Chandler Carruth 2946cd7010 Update the file headers across all of the LLVM projects in the monorepo
to reflect the new license.

We understand that people may be surprised that we're moving the header
entirely to discuss the new license. We checked this carefully with the
Foundation's lawyer and we believe this is the correct approach.

Essentially, all code in the project is now made available by the LLVM
project under our new license, so you will see that the license headers
include that license only. Some of our contributors have contributed
code under our old license, and accordingly, we have retained a copy of
our old license notice in the top-level files in each project and
repository.

llvm-svn: 351636
2019-01-19 08:50:56 +00:00
Adrian Prantl 9fc8faf9e6 Remove \brief commands from doxygen comments.
This is similar to the LLVM change https://reviews.llvm.org/D46290.

We've been running doxygen with the autobrief option for a couple of
years now. This makes the \brief markers in our comments
redundant. Since they are a visual distraction and we don't want to
encourage more \brief markers in new code either, this patch removes
them all.

Patch produced by

```
for i in $(git grep -l '\@brief'); do perl -pi -e 's/\@brief //g' $i & done
for i in $(git grep -l '\\brief'); do perl -pi -e 's/\\brief //g' $i & done
```

Differential Revision: https://reviews.llvm.org/D46320

llvm-svn: 331834
2018-05-09 01:00:01 +00:00
Benjamin Kramer ff2e323a32 Make CodeGen headers self-contained.
llvm-svn: 259518
2016-02-02 16:05:18 +00:00
Ulrich Weigand 03ce2a16bf Respect alignment of nested bitfields
tools/clang/test/CodeGen/packed-nest-unpacked.c contains this test:

```
struct XBitfield {
  unsigned b1 : 10;
  unsigned b2 : 12;
  unsigned b3 : 10;
};
struct YBitfield {
  char x;
  struct XBitfield y;
} __attribute((packed));
struct YBitfield gbitfield;

unsigned test7() {
  // CHECK: @test7
  // CHECK: load i32, i32* getelementptr inbounds (%struct.YBitfield, %struct.YBitfield* @gbitfield, i32 0, i32 1, i32 0), align 4
  return gbitfield.y.b2;
}
```

The "align 4" is actually wrong.  Accessing all of "gbitfield.y" as a single
i32 is of course possible, but that still doesn't make it 4-byte aligned as
it remains packed at offset 1 in the surrounding gbitfield object.

This alignment was changed by commit r169489, which also introduced changes
to bitfield access code in CGExpr.cpp.  Code before that change used to take
into account *both* the alignment of the field to be accessed within the
current struct, *and* the alignment of that outer struct itself; this logic
was removed by the above commit.

Neglecting to consider both values can cause incorrect code to be generated
(I've seen an unaligned access crash on SystemZ due to this bug).

In order to always use the best known alignment value, this patch removes
the CGBitFieldInfo::StorageAlignment member and replaces it with a
StorageOffset member specifying the offset from the start of the surrounding
struct to the bitfield's underlying storage.  This offset can then be combined
with the best-known alignment for a bitfield access lvalue to determine the
alignment to use when accessing the bitfield's storage.
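
For intuition only, a small C sketch (an assumption about the arithmetic, not
the patch's actual code) of how a guaranteed alignment can be derived from the
known alignment of the containing object and the new StorageOffset:
```
#include <stdint.h>

/* Hypothetical helper: the alignment still guaranteed at
   (object base + storage_offset) when the base is known to be
   base_align-aligned.  Alignments are powers of two, in bytes. */
uint64_t storage_align(uint64_t base_align, uint64_t storage_offset) {
  if (storage_offset == 0)
    return base_align;
  /* Largest power of two dividing the offset. */
  uint64_t offset_align = storage_offset & (~storage_offset + 1);
  return offset_align < base_align ? offset_align : base_align;
}

/* For the packed YBitfield example above: even if gbitfield itself were
   4-byte aligned, the bit-field storage sits at offset 1, so
   storage_align(4, 1) == 1 and the access must be emitted as align 1. */
```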

Differential Revision: http://reviews.llvm.org/D11034

llvm-svn: 241916
2015-07-10 17:30:00 +00:00
Aaron Ballman abc1892057 Removing LLVM_DELETED_FUNCTION, as MSVC 2012 was the last reason for requiring the macro. NFC; Clang edition.
llvm-svn: 229339
2015-02-15 22:54:08 +00:00
Benjamin Kramer 2f5db8b3db Header guard canonicalization, clang part.
Modifications made by clang-tidy with minor tweaks.

llvm-svn: 215557
2014-08-13 16:25:19 +00:00
Richard Smith cd45dbc5f2 When a module completes the definition of a class template specialization imported from another module, emit an update record, rather than using the broken decl rewriting mechanism. If multiple modules do this, merge the definitions together, much as we would if they were separate declarations.
llvm-svn: 206680
2014-04-19 03:48:30 +00:00
Alp Toker d473363876 Correct hyphenations in comments and assert messages
This patch tries to avoid unrelated changes other than fixing a few
hyphen-related ambiguities in nearby lines.

llvm-svn: 196466
2013-12-05 04:47:09 +00:00
Chandler Carruth ffd5551bc7 Rewrite #includes for llvm/Foo.h to llvm/IR/Foo.h as appropriate to
reflect the migration in r171366.

Re-sort the #include lines to reflect the new paths.

llvm-svn: 171369
2013-01-02 11:45:17 +00:00
Chandler Carruth ff0e3a1e1c Rework the bitfield access IR generation to address PR13619 and
generally support the C++11 memory model requirements for bitfield
accesses by relying more heavily on LLVM's memory model.

The primary change this introduces is to move from a manually aligned
and strided access pattern across the bits of the bitfield to a much
simpler lump access of all bits in the bitfield followed by math to
extract the bits relevant for the particular field.
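
For intuition only, a C sketch (an assumption, not the generated IR) of the
lump-load-then-extract pattern for a bit-field of `size` bits starting
`offset` bits into a 32-bit storage unit:
```
#include <stdint.h>

/* Assumes 0 < size < 32, offset + size <= 32, two's-complement int32_t,
   and an arithmetic right shift of signed values. */
uint32_t extract_unsigned(const uint32_t *storage, unsigned offset, unsigned size) {
  uint32_t whole = *storage;                       /* one lump load */
  return (whole >> offset) & ((1u << size) - 1u);  /* shift and mask */
}

int32_t extract_signed(const uint32_t *storage, unsigned offset, unsigned size) {
  /* Shift the field to the top of the word, then arithmetic-shift it back
     down so the field's top bit is sign-extended. */
  int32_t shifted = (int32_t)(*storage << (32u - offset - size));
  return shifted >> (32u - size);
}
```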

This simplifies the code significantly, but relies on LLVM to lower
these integer accesses intelligently.

I have tested LLVM's lowering both synthetically and in benchmarks. The
lowering appears to be functional, and there are no really significant
performance regressions. Different code patterns accessing bitfields
will vary in how this impacts them. The only real regressions I'm seeing
are a few patterns where the LLVM code generation for loads that feed
directly into a mask operation doesn't take advantage of the x86 ability
to do a smaller load and a cheap zero-extension. This doesn't regress
any benchmark in the nightly test suite on my box past the noise
threshold, but my box is quite noisy. I'll be watching the LNT numbers,
and will look into further improvements to the LLVM lowering as needed.

llvm-svn: 169489
2012-12-06 11:14:44 +00:00
Dmitri Gribenko a664e5b88f Use LLVM_DELETED_FUNCTION in place of 'DO NOT IMPLEMENT' comments.
llvm-svn: 163983
2012-09-15 20:20:27 +00:00
Eli Friedman c24e2fb1fb Propagate lvalue alignment into bitfields. Per report on cfe-dev.
llvm-svn: 159295
2012-06-27 21:19:48 +00:00
Chris Lattner 95c664b300 clean up forward declarations of raw_ostream to use the new LLVM.h
patch by Jon Mulder!

llvm-svn: 135851
2011-07-23 10:35:09 +00:00
Chris Lattner 62ff6e8b17 add raw_ostream and Twine to LLVM.h, eliminating a ton of llvm:: qualifications.
llvm-svn: 135577
2011-07-20 07:06:53 +00:00
Chris Lattner a5f58b05e8 clang side to match the LLVM IR type system rewrite patch.
llvm-svn: 134831
2011-07-09 17:41:47 +00:00
Ken Dyck 27337a8800 Convert AccessInfo::AccessAlignment to CharUnits. No change in functionality
intended.

llvm-svn: 130087
2011-04-24 10:13:17 +00:00
Ken Dyck f76759c6fa Convert CGBitFieldInfo::FieldByteOffset to CharUnits. No change in
functionality intended.

llvm-svn: 130085
2011-04-24 10:04:59 +00:00
John McCall 0217dfc2ba Perform zero-initialization of virtual base classes when emitting
a zero constant for a complete class.  rdar://problem/8424975

To make this happen, track the field indexes for virtual bases
in the complete object.  I'm curious whether we might be better
off making CGRecordLayoutBuilder *much* more reliant on
ASTRecordLayout;  we're currently duplicating an awful lot of the ABI
layout logic.

llvm-svn: 125555
2011-02-15 06:40:56 +00:00
John McCall 5a39bd2443 A CGRecordLayout object persists. Since its contained types may
refer to opaque types, they must be held via PATypeHolders.  I'm
not sure why this hasn't blown up before.

llvm-svn: 120491
2010-11-30 23:21:46 +00:00
Anders Carlsson 36e2fa8209 CGRecordLayout types are always struct types.
llvm-svn: 120106
2010-11-24 19:37:16 +00:00
Anders Carlsson a7dd96ce77 Rename BaseLLVMType to NonVirtualBaseLLVMType.
llvm-svn: 119956
2010-11-21 23:59:45 +00:00
Anders Carlsson c1351cac17 Introduce the concept of a non-virtual base type to CGRecordLayoutBuilder as a first step towards fixing PR6995.
llvm-svn: 118491
2010-11-09 05:25:47 +00:00
Daniel Dunbar c7f9bbafe4 IRgen: Move CGBitFieldInfo strategy computation helpers to static member
functions.

llvm-svn: 112913
2010-09-02 23:53:28 +00:00
John McCall 614dbdcd55 Go back to asking CodeGenTypes whether a type is zero-initializable.
Make CGT defer to the ABI on all member pointer types.
This requires giving CGT a handle to the ABI.
It's way easier to make that work if we avoid lazily creating the ABI.
Make it so.

llvm-svn: 111786
2010-08-22 21:01:12 +00:00
Anders Carlsson 061ca524b7 Keep track of the LLVM field numbers for non-virtual bases.
llvm-svn: 104013
2010-05-18 05:22:06 +00:00
Daniel Dunbar 19d6355b77 Fix comments.
llvm-svn: 102429
2010-04-27 14:51:07 +00:00
Daniel Dunbar 9c78d63fbc IRgen: Change CGBitFieldInfo to take the AccessInfo as constructor arguments, it is now an immutable object.
Also, add some checking of various invariants that should hold on the CGBitFieldInfo access.

llvm-svn: 101345
2010-04-15 05:09:32 +00:00
Daniel Dunbar bb13845c5f IRgen: Eliminate now unused fields from CGBitFieldInfo.
llvm-svn: 101344
2010-04-15 05:09:28 +00:00
Daniel Dunbar b2b40a479d IRgen: Tweak CGBitFieldInfo doxyments & add an accessor.
llvm-svn: 101221
2010-04-14 04:07:59 +00:00
Daniel Dunbar b935b9370d IRgen: Enhance CGBitFieldInfo with enough information to fully describe the "policy" with which a bit-field should be accessed.
- For now, these policies are computed to match the current IRgen strategy, although the new information isn't being used yet (except in -fdump-record-layouts).

- Design comments appreciated.

llvm-svn: 101178
2010-04-13 20:58:55 +00:00
Daniel Dunbar b97bff9aa9 IRgen: Add CGRecordLayout::dump, and dump (irgen) record layouts as part of -fdump-record-layouts.
llvm-svn: 101051
2010-04-12 18:14:18 +00:00
Daniel Dunbar c75c8bd757 IRgen: Move the bit-field access type into CGBitFieldInfo, and change bit-field LValues to just store the base address of object containing the bit-field.
llvm-svn: 100745
2010-04-08 02:59:45 +00:00
Daniel Dunbar 196ea449ed IRgen: Move BitFieldIsSigned bit into CGBitFieldInfo.
llvm-svn: 100513
2010-04-06 01:07:44 +00:00
Daniel Dunbar cd3d5e76ce IRgen: Lift BitFieldInfo to CGBitFieldInfo at namespace level.
llvm-svn: 100433
2010-04-05 16:20:44 +00:00
Daniel Dunbar 034299ef25 IRGen: Move the auxiliary data structures tracking AST -> LLVM mappings out of CodeGenTypes, to per-record CGRecordLayout structures.
- I did a cursory check that this was perf neutral, FWIW.

llvm-svn: 99978
2010-03-31 01:09:11 +00:00
Daniel Dunbar 23ee4b7710 IRGen: Hide CGRecordLayoutBuilder class, because I can.
llvm-svn: 99967
2010-03-31 00:11:27 +00:00
Daniel Dunbar 072d0bb247 IRgen: Move CGRecordLayout to its own happy little file.
llvm-svn: 99945
2010-03-30 22:26:10 +00:00