Commit Graph

8 Commits

Author SHA1 Message Date
Warren Hunt fb00c88703 Complete Rewrite of CGRecordLayoutBuilder
CGRecordLayoutBuilder was aging, complex, multi-pass, and showed signs of 
predating ASTRecordLayoutBuilder.  It redundantly performed many layout 
operations that are now performed by ASTRecordLayoutBuilder and asserted 
that the results were the same.  With the addition of support for the 
MS ABI, such as placement of vbptrs, vtordisps, different bitfield 
layout, and a variety of other features, CGRecordLayoutBuilder was 
growing unwieldy in its redundancy.

This patch re-architects CGRecordLayoutBuilder to perform no redundant 
layout but rather, as directly as possible, lower an ASTRecordLayout to 
an llvm::Type.  The new architecture is significantly smaller and 
simpler than the old one, contains fewer ABI-specific code paths, and 
runs in a single pass.

The architecture of the new system is described in the comments.  For 
the most part, the new system simply takes all of the fields and bases 
from an ASTRecordLayout, sorts them by offset, inserts padding, and 
emits a record.  Bitfields, unions, and primary virtual bases make this 
process a bit more complicated; see the inline comments.
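
As a minimal sketch of that single-pass idea (hypothetical code, not 
Clang's actual implementation; the Member struct and lowerLayout helper 
are invented for illustration): sort the members by their AST-computed 
offsets, then walk them once, filling every gap with padding.

    #include <algorithm>
    #include <cstdint>
    #include <vector>

    struct Member {
      uint64_t Offset; // byte offset, as computed by the AST layout
      uint64_t Size;   // byte size of the lowered type
    };

    // Hypothetical helper, not Clang's API: produce the final member
    // list for the record, with explicit padding filling every gap.
    std::vector<Member> lowerLayout(std::vector<Member> Members,
                                    uint64_t RecordSize) {
      std::sort(Members.begin(), Members.end(),
                [](const Member &A, const Member &B) {
                  return A.Offset < B.Offset;
                });
      std::vector<Member> Result;
      uint64_t Cursor = 0;
      for (const Member &M : Members) {
        if (M.Offset > Cursor)                  // gap: insert padding
          Result.push_back({Cursor, M.Offset - Cursor});
        Result.push_back(M);
        Cursor = M.Offset + M.Size;
      }
      if (RecordSize > Cursor)                  // tail padding
        Result.push_back({Cursor, RecordSize - Cursor});
      return Result;
    }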

In addition, this patch updates a few lit tests because the new system 
computes more accurate LLVM types than CGRecordLayoutBuilder did. 
Each change is commented individually in the review.

Differential Revision: http://llvm-reviews.chandlerc.com/D2795

llvm-svn: 201907
2014-02-21 23:49:50 +00:00
Eli Friedman 61f615af81 Fix a FIXME about packed structs and calls that
I left in a testcase while fixing a related bug.  The fix here was
simpler than I expected.

Fixes <rdar://problem/10530444>.

llvm-svn: 183718
2013-06-11 01:08:22 +00:00
Chandler Carruth ff0e3a1e1c Rework the bitfield access IR generation to address PR13619 and
generally support the C++11 memory model requirements for bitfield
accesses by relying more heavily on LLVM's memory model.

The primary change this introduces is to move from a manually aligned
and strided access pattern across the bits of the bitfield to a much
simpler lump access of all bits in the bitfield, followed by math to
extract the bits relevant to the particular field.
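
As a hedged illustration of the new pattern (hand-written C++, not 
Clang's generated IR, and assuming a little-endian layout where the 
first field occupies the low bits): load the whole storage unit once, 
then extract or update the field with plain shift-and-mask arithmetic.

    #include <cstdint>

    // Suppose bitfields 'a : 3', 'b : 7', and 'c : 6' share one 16-bit
    // storage unit, with a in bits 0-2, b in bits 3-9, c in bits 10-15.
    uint32_t loadB(const uint16_t *Storage) {
      uint16_t Lump = *Storage;     // one lump load of all the bits
      return (Lump >> 3) & 0x7Fu;   // shift past 'a', mask to b's 7 bits
    }

    void storeB(uint16_t *Storage, uint32_t V) {
      uint16_t Lump = *Storage;     // read-modify-write of the lump
      Lump = static_cast<uint16_t>(
          (Lump & ~(0x7Fu << 3)) | ((V & 0x7Fu) << 3));
      *Storage = Lump;
    }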

This simplifies the code significantly, but relies on LLVM to lower 
these integers intelligently.

I have tested LLVM's lowering both synthetically and in benchmarks. The 
lowering appears to be functional, and there are no really significant 
performance regressions. Different code patterns accessing bitfields 
will vary in how this impacts them. The only real regressions I'm seeing 
are a few patterns where LLVM's code generation for loads that feed 
directly into a mask operation doesn't take advantage of the x86 ability 
to do a smaller load and a cheap zero-extension. This doesn't regress 
any benchmark in the nightly test suite on my box past the noise 
threshold, but my box is quite noisy. I'll be watching the LNT numbers, 
and will look into further improvements to the LLVM lowering as needed.
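
The missed pattern is roughly this (an invented example, not taken from 
the commit): a full-width load feeding a mask, which x86 could instead 
handle with a single smaller zero-extending load.

    #include <cstdint>

    // Ideally lowers to a one-byte zero-extending load (movzbl) rather
    // than a 32-bit load followed by an 'and'.
    uint32_t lowByte(const uint32_t *P) {
      return *P & 0xFFu;
    }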

llvm-svn: 169489
2012-12-06 11:14:44 +00:00
Eli Friedman c24e2fb1fb Propagate lvalue alignment into bitfields. Per report on cfe-dev.
llvm-svn: 159295
2012-06-27 21:19:48 +00:00
Eli Friedman 114341db6d Attempt to fix test.
llvm-svn: 154897
2012-04-17 01:57:28 +00:00
Chad Rosier a750d46c9f Make sure EmitMoveFromReturnSlot passes the correct
alignment to EmitFinalDestCopy (and thus passes EmitAggregateCopy the
correct alignment).
rdar://11220251

llvm-svn: 154883
2012-04-17 00:35:38 +00:00
Eli Friedman 7f1ff60021 Propagate alignment on lvalues through EmitLValueForField. PR12395.
llvm-svn: 154789
2012-04-16 03:54:45 +00:00
Eli Friedman 6d694a38fd Make EmitAggregateCopy take an alignment argument. Make EmitFinalDestCopy pass in the correct alignment when known.
The test includes a FIXME for a related case involving calls; it's a bit more complicated to fix because the RValue class doesn't keep track of alignment.
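
A minimal sketch of why the alignment argument matters (a hypothetical 
example, not code from the patch): when the aggregate is packed, the 
copy must be emitted with the operands' real alignment, which is lower 
than the members' natural alignment.

    #include <cstring>

    struct __attribute__((packed)) Packed {
      char C;
      int I; // only 1-byte aligned inside Packed
    };

    void copyAggregate(Packed *Dst, const Packed *Src) {
      // The aggregate copy lowers to a memcpy; the alignment threaded
      // through to it must be 1 here, not alignof(int).
      std::memcpy(Dst, Src, sizeof(Packed));
    }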

<rdar://problem/10463337>

llvm-svn: 145862
2011-12-05 22:23:28 +00:00