The old implementation of ms_struct in RecordLayoutBuilder was a
complete mess: it depended on complicated conditionals which didn't
really reflect the underlying logic, and placed a burden on users of
the resulting RecordLayout. This commit rips out almost all of the
old code, and replaces it with simple checks in
RecordLayoutBuilder::LayoutBitField.
This commit also fixes <rdar://problem/14252115>, a bug where class
inheritance would cause us to lay out bitfields incorrectly.
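For reference, a minimal sketch of the ms_struct bit-field rule the new
checks implement (the struct names are made up, and the sizes assume a
typical target with 4-byte int):

  #include <cstdio>

  struct __attribute__((ms_struct)) MsS {
    char a : 4;  // opens a char-sized storage unit
    char b : 4;  // same declared type size, so it packs with 'a'
    int  c : 4;  // type size changes: a new, int-aligned unit starts
  };

  struct DefaultS {  // the default layout packs the bits contiguously
    char a : 4;
    char b : 4;
    int  c : 4;
  };

  int main() {
    // On common targets: sizeof(MsS) == 8, sizeof(DefaultS) == 4.
    std::printf("%zu %zu\n", sizeof(MsS), sizeof(DefaultS));
  }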
llvm-svn: 185018
Note that there is no test suite update. This was found by a couple of
tests failing when the test suite was run on a powerpc64 host (thanks
Roman!). The tests don't specify a triple, which might seem surprising
for codegen tests. But in fact, these tests don't even inspect their
output. Not at all. I could add a bunch of triples to these tests so
that we'd get the test coverage for normal builds, but really someone
needs to go through and add actual *tests* to these tests. =[ The ones
in question are:
test/CodeGen/bitfield-init.c
test/CodeGen/union.c
llvm-svn: 169694
This was an egregious bug due to the several iterations of refactoring
that took place. Size no longer meant what it originally did by the time
I finished, but this line of code never got updated. Unfortunately we
had essentially zero tests for this in the regression test suite. =[
I've added a PPC64 run over the bitfield test case I've been primarily
using. I'm still looking at adding more tests and making sure this is
the *correct* bitfield access code on PPC64 linux, but it looks pretty
close to me, and it is *worlds* better than before this patch as it no
longer asserts! =] More commits to follow with at least additional tests
and maybe more fixes.
Sorry for the long breakage due to this....
llvm-svn: 169691
generally support the C++11 memory model requirements for bitfield
accesses by relying more heavily on LLVM's memory model.
The primary change this introduces is to move from a manually aligned
and strided access pattern across the bits of the bitfield to a much
simpler lump access of all bits in the bitfield followed by math to
extract the bits relevant for the particular field.
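As a rough sketch of the new pattern (these helpers are illustrative,
not Clang's actual IR-generation code):

  #include <cstdint>

  // Load the whole 32-bit storage unit once, then use shifts and masks
  // to pull out or update the bits of one field.
  uint32_t loadBitfield(const uint32_t *Unit, unsigned Shift, unsigned Width) {
    uint32_t Bits = *Unit;                         // single lump load
    uint32_t Mask = Width < 32 ? (1u << Width) - 1 : ~0u;
    return (Bits >> Shift) & Mask;                 // math extracts the field
  }

  void storeBitfield(uint32_t *Unit, unsigned Shift, unsigned Width,
                     uint32_t Value) {
    uint32_t Mask = (Width < 32 ? (1u << Width) - 1 : ~0u) << Shift;
    *Unit = (*Unit & ~Mask) | ((Value << Shift) & Mask);  // RMW of the unit
  }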
This simplifies the code significantly, but relies on LLVM to
intelligently lower these integer accesses.
I have tested LLVM's lowering both synthetically and in benchmarks. The
lowering appears to be functional, and there are no really significant
performance regressions. Different code patterns accessing bitfields
will vary in how this impacts them. The only real regressions I'm seeing
are a few patterns where LLVM's code generation for loads that feed
directly into a mask operation doesn't take advantage of the x86 ability
to do a smaller load and a cheap zero-extension. This doesn't regress
any benchmark in the nightly test suite on my box past the noise
threshold, but my box is quite noisy. I'll be watching the LNT numbers,
and will look into further improvements to the LLVM lowering as needed.
llvm-svn: 169489
uncovered.
This required manually correcting all of the incorrect main-module
headers I could find, and running the new llvm/utils/sort_includes.py
script over the files.
I also manually added quite a few missing headers that were uncovered by
shuffling the order or moving headers up to be main-module headers.
llvm-svn: 169237
In addition, I've made the pointer and reference typedefs 'void' rather than
T* just so they can't get misused. I would've omitted them entirely, but
std::distance likes them to be there even if it doesn't use them.
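A stand-in example of the shape this leaves the iterator in (IntIter is
invented; the point is that std::distance still works):

  #include <cstddef>
  #include <iterator>

  struct IntIter {
    typedef std::forward_iterator_tag iterator_category;
    typedef int value_type;
    typedef std::ptrdiff_t difference_type;
    typedef void pointer;    // deliberately unusable
    typedef void reference;  // deliberately unusable

    const int *P;
    int operator*() const { return *P; }  // yields a value, not 'reference'
    IntIter &operator++() { ++P; return *this; }
    bool operator==(const IntIter &RHS) const { return P == RHS.P; }
    bool operator!=(const IntIter &RHS) const { return P != RHS.P; }
  };

  int main() {
    int A[3] = {1, 2, 3};
    IntIter B = {A}, E = {A + 3};
    return std::distance(B, E) == 3 ? 0 : 1;  // the typedefs keep this working
  }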
This rolls back r155808 and r155869.
Review by Doug Gregor incorporating feedback from Chandler Carruth.
llvm-svn: 158104
filter_decl_iterator had a weird mismatch where both op* and op-> returned T*,
making it difficult to generalize this filtering behavior into a reusable
library of any kind.
This change errs on the side of value, making op-> return T* and op* return
T&.
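Roughly, the convention now looks like this (a stand-in that walks an
array of T*, not the real filter_decl_iterator):

  template <typename T>
  class PtrIter {
    T **Cur;
  public:
    explicit PtrIter(T **C) : Cur(C) {}
    T &operator*() const { return **Cur; }  // op* now returns T&
    T *operator->() const { return *Cur; }  // op-> still returns T*
    PtrIter &operator++() { ++Cur; return *this; }
  };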
(reviewed by Richard Smith)
llvm-svn: 155808
- Remodel Expr::EvaluateAsInt to behave like the other EvaluateAs* functions,
and add Expr::EvaluateKnownConstInt to capture the current fold-or-assert
behaviour.
- Factor out evaluation of bitfield bit widths.
- Fix a few places that would evaluate an expression twice: once to determine
whether it is a constant expression, then again to get the value (see the
sketch below).
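The double-evaluation fix, sketched with invented stand-ins rather than the
real Expr API:

  struct FakeExpr { bool IsConst; int Value; };

  // Before: callers paired a constancy query with a second, redundant
  // evaluation of the same expression.
  bool isConstantExpr(const FakeExpr &E) { return E.IsConst; }
  int evaluate(const FakeExpr &E) { return E.Value; }

  // After, in the style of EvaluateAsInt: one pass, success via bool.
  bool evaluateAsInt(const FakeExpr &E, int &Result) {
    if (!E.IsConst)
      return false;
    Result = E.Value;  // the single evaluation also yields the value
    return true;
  }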
llvm-svn: 141561
builtin types (when requested). This is another step toward making
ASTUnit build the ASTContext as needed when loading an AST file,
rather than doing so after the fact. No actual functionality change (yet).
llvm-svn: 138985
- Changes the bit-field access policy to try to use (aligned) register-sized accesses.
The idea here is that by using larger accesses we expose more coalescing
potential to the backend when we have situations like adjacent bit-fields in the
same structure (which is common), and that the backend should be smart enough to
narrow the accesses down when no coalescing is done or when it is shown not to
be profitable.
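(The original t.c isn't shown; one plausible version that produces accesses
of this shape is four 7-bit fields, each paired with a 1-bit field that
forces the top bit of every byte to be preserved:)

  struct s {
    unsigned f0 : 7; unsigned p0 : 1;
    unsigned f1 : 7; unsigned p1 : 1;
    unsigned f2 : 7; unsigned p2 : 1;
    unsigned f3 : 7; unsigned p3 : 1;
  };

  void f0(struct s *a) {
    a->f0 = 1;  /* four byte-wide read-modify-writes under the old policy */
    a->f1 = 1;  /* a single 32-bit read-modify-write under the new one */
    a->f2 = 1;
    a->f3 = 1;
  }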
--
$ clang -m32 -O3 -S -o - t.c
_f0: ## @f0
pushl %ebp
movl %esp, %ebp
movl 8(%ebp), %eax
movb (%eax), %cl
andb $-128, %cl
orb $1, %cl
movb %cl, (%eax)
movb 1(%eax), %cl
andb $-128, %cl
orb $1, %cl
movb %cl, 1(%eax)
movb 2(%eax), %cl
andb $-128, %cl
orb $1, %cl
movb %cl, 2(%eax)
movb 3(%eax), %cl
andb $-128, %cl
orb $1, %cl
movb %cl, 3(%eax)
popl %ebp
ret
$ clang -m32 -O3 -S -o - t.c -Xclang -fuse-register-sized-bitfield-access
_f0: ## @f0
pushl %ebp
movl %esp, %ebp
movl 8(%ebp), %eax
movl $-2139062144, %ecx ## imm = 0xFFFFFFFF80808080
andl (%eax), %ecx
orl $16843009, %ecx ## imm = 0x1010101
movl %ecx, (%eax)
popl %ebp
ret
--
llvm-svn: 133532
Introduce Type::isSignedIntegerOrEnumerationType() and
Type::isUnsignedIntegerOrEnumerationType(), which are like
Type::isSignedIntegerType() and Type::isUnsignedIntegerType() but also
consider the underlying type of a C++0x scoped enumeration type.
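An illustration of why the distinction matters (self-contained, not part
of the patch):

  #include <type_traits>

  enum class E : int { A = -1 };  // C++0x scoped enumeration

  // The enum itself is not a signed integer type...
  static_assert(!std::is_signed<E>::value, "E is not a signed integer type");
  // ...but code dealing with its constant values must honor the signedness
  // of the underlying type, which is what the new predicates capture.
  static_assert(std::is_signed<std::underlying_type<E>::type>::value,
                "the underlying type is signed");

  int main() {}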
Audited all callers to the existing functions, switching those that
need to also handle scoped enumeration types (e.g., those that deal
with constant values) over to the new functions. Fixes PR9923 /
<rdar://problem/9447851>.
llvm-svn: 131735
turns out that a field or base needs to be laid out in the tail padding of
the base, CGRecordLayoutBuilder::ResizeLastBaseFieldIfNecessary will convert
it to an array of i8.
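A classic shape of the problem (illustrative, on Itanium C++ ABI targets):

  struct Base {
    Base() {}  // user-provided ctor: Base's tail padding becomes reusable
    int  I;    // bytes 0-3
    char C;    // byte 4; bytes 5-7 are tail padding
  };
  struct Derived : Base {
    char D;    // may be placed at offset 5, inside Base's tail padding
  };

  int main() {
    // sizeof(Base) == 8 with data size 5, so sizeof(Derived) can stay 8;
    // the LLVM type for the Base subobject must then shrink (to an array
    // of i8) so it doesn't overlap Derived::D.
    return sizeof(Derived) == sizeof(Base) ? 0 : 1;
  }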
I've audited the new test results to make sure that they are still valid. I've
also verified that we pass a self-host with this change.
This (finally) fixes PR5589!
llvm-svn: 129673