This is similar to the LLVM change https://reviews.llvm.org/D46290.
We've been running doxygen with the autobrief option for a couple of
years now. This makes the \brief markers in our comments
redundant. Since they are a visual distraction and we don't want to
encourage more \brief markers in new code either, this patch removes
them all.
Patch produced by
for i in $(git grep -l '\@brief'); do perl -pi -e 's/\@brief //g' $i & done
for i in $(git grep -l '\\brief'); do perl -pi -e 's/\\brief //g' $i & done
Differential Revision: https://reviews.llvm.org/D46320
llvm-svn: 331834
Found via codespell -q 3 -I ../clang-whitelist.txt
Where the whitelist consists of:
archtype
cas
classs
checkk
compres
definit
frome
iff
inteval
ith
lod
methode
nd
optin
ot
pres
statics
te
thru
Patch by luzpaz! (This is a subset of D44188 that applies cleanly with a few
files that have dubious fixes reverted.)
Differential revision: https://reviews.llvm.org/D44188
llvm-svn: 329399
So I wrote a clang-tidy check to lint out redundant `isa`, `cast`, and
`dyn_cast`s for fun. This is a portion of what it found for clang; I
plan to do similar cleanups in LLVM and other subprojects when I find
time.
Because of the volume of changes, I explicitly avoided making any change
that wasn't highly local and obviously correct to me (e.g. we still have
a number of foo(cast<Bar>(baz)) that I didn't touch, since overloading
is a thing and the cast<Bar> did actually change the type -- just up the
class hierarchy).
I also tried to leave the types we were cast<>ing to somewhere nearby,
in cases where it wasn't locally obvious what we were dealing with
before.
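As a rough, hypothetical illustration (not code from this patch), this is the
shape of redundancy the check flags: the operand is already statically known
to be a CallExpr, so both the isa<> and the cast<> are no-ops.

#include "clang/AST/Expr.h"
#include "llvm/Support/Casting.h"

static unsigned numArgs(const clang::CallExpr *CE) {
  if (llvm::isa<clang::CallExpr>(CE))                      // always true: redundant isa<>
    return llvm::cast<clang::CallExpr>(CE)->getNumArgs();  // cast<> to the type CE already has
  return 0;
}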
llvm-svn: 326416
This patch fixes a bug in CGRecordLowering::accumulateBitFields where it
unconditionally starts a new run and emits a storage field when it sees
a zero-sized bitfield, which causes an assertion in insertPadding to
fail when -fno-bitfield-type-align is used.
It shouldn't emit new storage if UseZeroLengthBitfieldAlignment and
UseBitFieldTypeAlignment are both false.
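A hypothetical sketch of the kind of input involved (the actual test case from
the patch is not reproduced here), compiled with -fno-bitfield-type-align:

struct S {
  char a : 4;
  int   : 0;  // zero-sized bitfield: with bitfield type alignment disabled,
              // this should not force a new storage unit for what follows
  char b : 4;
};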
rdar://problem/36762205
llvm-svn: 323943
Currently, all consecutive bitfields are wrapped as one large integer unless
there is an unnamed zero-sized bitfield in between. This patch provides an
alternative manner in which a bitfield is accessed as a separate memory
location if it has a legal integer width and is naturally aligned. Such a
separate bitfield may split the original run of consecutive bitfields into
subgroups of consecutive bitfields, and each subgroup is wrapped as an
integer. This is controlled by the option -ffine-grained-bitfield-accesses.
The alternative access manner can improve the access efficiency of bitfields
with legal width and natural alignment, but may reduce the chance of
combining loads/stores of other bitfields, so whether to use the option
depends on how the bitfields are defined and actually accessed. For now the
option is off by default.
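A hypothetical illustration of the effect: under
-ffine-grained-bitfield-accesses, c below has a legal 32-bit width and is
naturally aligned, so it can become its own access unit, splitting the run
into the subgroups {a, b}, {c}, and {d}:

struct S {
  unsigned a : 16;
  unsigned b : 16;  // a and b fill the first 32 bits
  unsigned c : 32;  // legal width and naturally aligned: may be accessed separately
  unsigned d : 16;
};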
Differential revision: https://reviews.llvm.org/D36562
llvm-svn: 315915
Revert the two changes to thread CodeGenOptions into the TargetInfo allocation
and to fix the layering violation by moving CodeGenOptions into Basic.
Code Generation is arguably not particularly "basic". This addresses Richard's
post-commit review comments. This change purely does the mechanical revert and
will be followed up with an alternate approach to thread the desired information
into TargetInfo.
llvm-svn: 265806
This is a mechanical move of CodeGenOptions from libFrontend to libBasic. This
fixes the layering violation introduced earlier by threading CodeGenOptions into
TargetInfo. It should also fix the modules based self-hosting builds. NFC.
llvm-svn: 265702
We got this right for Itanium but not MSVC because CGRecordLayoutBuilder
was checking if the base's size was zero when it should have been
checking the non-virtual size.
This fixes PR21040.
llvm-svn: 251036
tools/clang/test/CodeGen/packed-nest-unpacked.c contains this test:
struct XBitfield {
  unsigned b1 : 10;
  unsigned b2 : 12;
  unsigned b3 : 10;
};
struct YBitfield {
  char x;
  struct XBitfield y;
} __attribute((packed));
struct YBitfield gbitfield;

unsigned test7() {
  // CHECK: @test7
  // CHECK: load i32, i32* getelementptr inbounds (%struct.YBitfield, %struct.YBitfield* @gbitfield, i32 0, i32 1, i32 0), align 4
  return gbitfield.y.b2;
}
The "align 4" is actually wrong. Accessing all of "gbitfield.y" as a single
i32 is of course possible, but that still doesn't make it 4-byte aligned as
it remains packed at offset 1 in the surrounding gbitfield object.
This alignment was changed by commit r169489, which also introduced changes
to bitfield access code in CGExpr.cpp. Code before that change used to take
into account *both* the alignment of the field to be accessed within the
current struct, *and* the alignment of that outer struct itself; this logic
was removed by the above commit.
Neglecting to consider both values can cause incorrect code to be generated
(I've seen an unaligned access crash on SystemZ due to this bug).
In order to always use the best known alignment value, this patch removes
the CGBitFieldInfo::StorageAlignment member and replaces it with a
StorageOffset member specifying the offset from the start of the surrounding
struct to the bitfield's underlying storage. This offset can then be combined
with the best-known alignment for a bitfield access lvalue to determine the
alignment to use when accessing the bitfield's storage.
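A minimal sketch (not the actual Clang code) of the idea: the usable access
alignment is the largest power of two dividing both the best-known alignment
of the enclosing object and the storage offset of the bitfield within it.

#include <algorithm>
#include <cstdint>

static uint64_t accessAlignment(uint64_t ObjectAlign, uint64_t StorageOffset) {
  // Largest power of two dividing StorageOffset (or ObjectAlign if the offset is 0).
  uint64_t OffsetAlign = StorageOffset ? (StorageOffset & -StorageOffset) : ObjectAlign;
  return std::min(ObjectAlign, OffsetAlign);
}
// For the test above: the lvalue gbitfield.y is only 1-byte aligned because
// gbitfield is packed, so even though the storage unit is an i32, the access
// must use align 1, not align 4.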
Differential Revision: http://reviews.llvm.org/D11034
llvm-svn: 241916
The patch is generated using this command:
$ tools/extra/clang-tidy/tool/run-clang-tidy.py -fix \
-checks=-*,llvm-namespace-comment -header-filter='llvm/.*|clang/.*' \
work/llvm/tools/clang
To reduce churn, namespaces spanning fewer than 10 lines are not touched.
llvm-svn: 240270
The first named data member is the field used to default initialize the
union. An IndirectFieldDecl can introduce the first named data member
of a union.
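A hypothetical example, relying on Clang's anonymous-struct extension in C++:
x below is the first named data member, but it reaches the union through an
anonymous struct, so it is introduced by an IndirectFieldDecl.

union U {
  struct {
    int x;  // first named data member, introduced via an IndirectFieldDecl
  };
  float f;
};
U u = {};   // initialization is expressed in terms of the first named data member (x)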
llvm-svn: 238649
Types can be classified as being zero-initializable or
non-zero-initializable. We used to classify array types by giving them
the classification of their base element type. However, incomplete
array types are never initialized directly and thus are always
zero-initializable.
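A hypothetical illustration, assuming the Itanium ABI (where a null pointer to
data member is all-ones, so A is not zero-initializable) and Clang's flexible
array member extension:

struct A {
  int A::*pm;  // null value is -1, so all-zero memory is not a valid A
};
struct B {
  int n;
  A tail[];    // incomplete (flexible) array member: never initialized directly,
               // so it should not make B non-zero-initializable
};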
llvm-svn: 238256
Fixes rdar://20621065.
A more elegant fix would preclude this case by defining the
rules such that zero-size classes are always formally empty.
I believe the only extensions which create zero-size classes
right now are flexible arrays and zero-length arrays; it's
not abstractly unreasonable to say that those don't count
as members for the purposes of emptiness, just as zero-width
bitfields don't count. But that's an ABI-affecting change
and requires further discussion; in the meantime, let's not
assert / miscompile.
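For reference, a hypothetical example of the kind of zero-size class the
message refers to, using the GNU zero-length array extension (flexible array
members behave similarly):

struct ZeroSize {
  int xs[0];  // zero-length array extension: the class has size zero on typical
              // targets, yet it is not formally an empty class
};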
llvm-svn: 235815
Unions are initialized with the default initialization of their first
named member. If that member is not zero initialized, then we should
prefer that member's type. Otherwise, we might try to make an otherwise
unsuitable type (like an array) which we cannot easily initialize with a
pointer to member.
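A hypothetical example, again assuming the Itanium ABI's all-ones null pointer
to data member:

struct A { int m; };
union U {
  int A::*pm;    // first named member; not zero-initialized, so prefer this type
  int arr[128];  // using the array type here would make the pointer-to-member
                 // initialization pattern awkward to emit
};
U u = {};        // default initialization goes through the first named member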
llvm-svn: 219781
Clang uses two types to talk about a C++ class, the
NonVirtualBaseLLVMType and the LLVMType. Previously, we would allow one
of these to be packed and the other not.
This is problematic. If both don't agree on a common subset of fields,
then routines like getLLVMFieldNo will point to the wrong field. Solve
this by copying the 'packed'-ness of the complete type to the
non-virtual subobject. For this to work, we need to take into account
the non-virtual subobject's size and alignment when we are computing the
layout of the complete object.
This fixes PR21089.
llvm-svn: 218577
It fits better with LLVM's memory model to try to do this in the
backend. Specifically, narrowing wide loads in the backends should be
relatively straightforward and is generally valuable, whereas widening
loads tends to be very constrained.
Discussion here:
http://lists.cs.uiuc.edu/pipermail/cfe-commits/Week-of-Mon-20140811/112581.html
This reverts commit r215614.
llvm-svn: 215648
Currently when laying out bitfields that don't need any padding, we
represent them as a wide enough int to contain all of the bits. This
can be hard on the backend since we'll do things like represent stores
to a few bits as loading an i144, masking it with a large constant,
and storing it back.
This turns up in less pathological cases where we load and mask a 64-bit
word on a 32-bit platform when we actually only need to access 32 bits.
This leads to bad code being generated in most of our 32-bit backends.
In practice, there are often natural breaks in bitfields, and it's a
fairly simple and effective heuristic to split these fields into legal
integer-sized chunks when doing so is equivalent (i.e., it won't force us
to add any extra padding).
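A hypothetical illustration of the heuristic: a and b exactly fill 32 bits, so
on a 32-bit target the run below can be split into two i32 storage units at
the natural break, with no extra padding, instead of one wide i64.

struct Flags {
  unsigned a : 12;
  unsigned b : 20;  // a + b == 32 bits: a natural break point
  unsigned c : 16;
  unsigned d : 16;  // c + d fill the next 32 bits
};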
llvm-svn: 215614
Prior to this patch, CGRecordLowering assumed that virtual bases could not
be placed before the nvsize of an object. This isn't true in Itanium mode:
virtual bases are placed at dsize rather than nvsize, and in the case of
zero-sized non-virtual bases nvsize can be larger than dsize.
This patch fixes CGRecordLowering to avoid an assert and to clip
bitfields properly in this case. A test case is included.
llvm-svn: 207280
When lowering a bitfield, CGRecordLowering would assign the wrong
storage type to a bitfield in some cases and trigger an assertion. In
these cases the layout was still correct, just the bitfield info was
wrong.
llvm-svn: 202562
Clang is using llvm::StructType::isOpaque() as a way of signaling if
we've finished record type conversion in
CodeGenTypes::isRecordLayoutComplete(). However, Clang was setting the
body of the type before it finished laying out the type as a base type.
Laying out the %class.C.base LLVM type attempts to convert more types,
eventually recursively attempting to layout 'C' again, at which point we
would say that layout was complete, even though we were still in the
middle of it.
By not setting the body, we correctly signal that layout is not
complete, and things work as expected.
At some point, it might be worth refactoring this to avoid looking at
the LLVM IR types under construction.
llvm-svn: 202320
Take advantage of CharUnits::alignmentAtOffset instead of calculating it
by hand.
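A brief usage sketch of the idea (the helper name here is made up;
CharUnits::alignmentAtOffset is the real API):

#include "clang/AST/CharUnits.h"

// The alignment known for a member that sits MemberOffset bytes into an
// object with alignment ObjectAlign, e.g. a member at offset 4 in a
// 16-byte-aligned object is known to be 4-byte aligned.
static clang::CharUnits memberAlign(clang::CharUnits ObjectAlign,
                                    clang::CharUnits MemberOffset) {
  return ObjectAlign.alignmentAtOffset(MemberOffset);
}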
Differential Revision: http://llvm-reviews.chandlerc.com/D2862
llvm-svn: 202098
CGRecordLayoutBuilder was aging, complex, multi-pass, and shows signs of
existing before ASTRecordLayoutBuilder. It redundantly performed many
layout operations that are now performed by ASTRecordLayoutBuilder and
asserted that the results were the same. With the addition of support
for the MS-ABI, such as placement of vbptrs, vtordisps, different
bitfield layout and a variety of other features, CGRecordLayoutBuilder
was growing unwieldy in its redundancy.
This patch re-architects CGRecordLayoutBuilder to not perform any
redundant layout but rather, as directly as possible, lower an
ASTRecordLayout to an llvm::Type. The new architecture is significantly
smaller and simpler than the CGRecordLayoutBuilder and contains fewer
ABI-specific code paths. It's also one pass.
The architecture of the new system is described in the comments. For the
most part, the new system simply takes all of the fields and bases from
an ASTRecordLayout, sorts them, inserts padding and dumps a record.
Bitfields, unions and primary virtual bases make this process a bit more
complicated. See the inline comments.
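A rough sketch (assumed structure, not the actual Clang code) of the
single-pass shape described above: gather member descriptors from the
ASTRecordLayout, sort them by offset, and insert explicit i8-array padding
for any gaps while building the llvm::StructType.

#include <cstdint>
#include <vector>
#include "llvm/ADT/STLExtras.h"
#include "llvm/IR/DerivedTypes.h"
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/Type.h"

// Simplified member descriptor; the real lowering builds similar entries for
// fields, bases, bitfield runs, vptrs, etc. out of the ASTRecordLayout.
struct Member {
  uint64_t Offset; // byte offset from the start of the record
  uint64_t Size;   // allocated size in bytes of the chosen storage type
  llvm::Type *Ty;  // storage type for this member
};

static llvm::StructType *lowerRecord(std::vector<Member> Members,
                                     uint64_t RecordSize,
                                     llvm::LLVMContext &Ctx) {
  llvm::sort(Members, [](const Member &A, const Member &B) {
    return A.Offset < B.Offset;
  });
  std::vector<llvm::Type *> Fields;
  uint64_t Offset = 0;
  auto padTo = [&](uint64_t To) {
    if (To > Offset) // fill any gap with an [N x i8] padding field
      Fields.push_back(
          llvm::ArrayType::get(llvm::Type::getInt8Ty(Ctx), To - Offset));
    Offset = To;
  };
  for (const Member &M : Members) {
    padTo(M.Offset);
    Fields.push_back(M.Ty);
    Offset += M.Size;
  }
  padTo(RecordSize); // trailing padding up to the record's full size
  return llvm::StructType::get(Ctx, Fields, /*isPacked=*/false);
}

The real lowering additionally decides whether the struct must be packed and
handles the union, bitfield, and primary virtual base cases the message
mentions.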
In addition, this patch updates a few lit tests due to the fact that the
new system computes more accurate llvm types than CGRecordLayoutBuilder.
Each change is commented individually in the review.
Differential Revision: http://llvm-reviews.chandlerc.com/D2795
llvm-svn: 201907
A base class subobject of a derived class might have a smaller size compared
to the stand-alone type of the base class. This is possible when the derived
class is packed and hence might have a smaller alignment requirement than the
base class.
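A hypothetical shape of such a case: because Derived is packed, the Base
subobject inside it may be laid out with less alignment, and hence a smaller
size, than a stand-alone Base object.

struct Base { int i; char c; };                            // stand-alone: padded to its natural size
struct __attribute__((packed)) Derived : Base { char d; }; // packed: the Base subobject may shrink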
Differential Revision: http://llvm-reviews.chandlerc.com/D2599
llvm-svn: 200031
The MS ABI lays out *all* non-virtual bases with leading vfptrs before
laying out non-virtual bases without vfptrs. This guarantees that the
primary base is laid out first. r198818 fixed RecordLayoutBuilder to
produce compatible layouts. This patch fixes CGRecordLayoutBuilder to
be able to consume those layouts and produce meaningful output without
tripping any asserts about the assumed incoming layout.
A test case is included that shows CGRecordLayoutBuilder in fact
produces output in the compatible order.
llvm-svn: 198900
This patch refactors Microsoft record layout to be more "natural". The
most dominant change is that vbptrs and vfptrs are injected after the
fact. This simplifies the implementation and the math for the offset
of the first base/field after the vbptr.
llvm-svn: 198818