In cooperation with the LLVM patch, this should implement all scalar front-end
parts of the C and C++ ABIs for AArch64.
This patch excludes the NEON support also reviewed due to an outbreak of
batshit insanity in our legal department. That will be committed soon bringing
the changes to precisely what has been approved.
Further reviews would be gratefully received.
llvm-svn: 174055
Title: [PR9027] volatile struct bug: member is not loaded at -O;
This is caused by the last flag passed to @llvm.memcpy being false, which
fails to honor the fact that the aggregate has at least one 'volatile' data
member (even though the aggregate itself is not qualified as 'volatile').
As a result, the optimizer removes the memcpy altogether.
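A minimal sketch of the pattern involved (the names here are illustrative,
not taken from the actual testcase):

  struct Agg {
    volatile int v; // volatile member; Agg itself is not volatile-qualified
    int x;
  };

  void copy(Agg *dst, Agg *src) {
    *dst = *src; // emitted as @llvm.memcpy; the final (volatile) flag must
                 // be true here, or the copy can be deleted at -O
  }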
Patch reviewed by John McCall (I still need to fix up a test though).
llvm-svn: 173535
never key functions. We did not implement that rule for the
iOS ABI, which was driven by what was implemented in gcc-4.2.
However, implement it now for other ARM-based platforms.
llvm-svn: 173515
with -target i686-win32, you will see:
debug-info-static-member.cpp:11:22: error: in-class initializer for static data member of type 'const float' requires 'constexpr' specifier [-Wstatic-float-init]
const static float const_b = 3.14;
^ ~~~~
constexpr
llvm-svn: 173418
Looks like r161368 fixed this for one case but not all. This change generalizes
the solution over all the unwrapping cases. Now that preserving the qualifiers
is done independently of the particular type being unwrapped, I won't bother
adding test cases for each one, but will at least demonstrate that this change
was necessary & sufficient to fix the bug.
llvm-svn: 173002
Add the pseudo first parameter to a member function pointer's function type
and mark it as artificial.
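For context, a small sketch of the kind of declaration affected (names are
illustrative):

  struct S {
    int f(int);
  };

  // The debug info for the type of 'pmf' now describes its function type
  // with an artificial first parameter for the implicit 'this' (of type S*).
  int (S::*pmf)(int) = &S::f;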
Combined with a fix to GDB ( http://sourceware.org/bugzilla/show_bug.cgi?id=14998 )
this fixes gdb.cp/member-ptr.exp with Clang.
llvm-svn: 172911
it apart from [[gnu::noreturn]] / __attribute__((noreturn)), since their
semantics are not equivalent (for instance, we treat [[gnu::noreturn]] as
affecting the function type, whereas [[noreturn]] does not).
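A small illustration of that distinction as handled here (declarations only;
diagnostics are not shown):

  // The GNU spelling participates in the function's type, so it is part of
  // what the declaration's type carries; the C++11 [[noreturn]] attribute
  // appertains to the declaration and leaves the type as plain 'void ()'.
  __attribute__((noreturn)) void f();
  [[noreturn]] void g();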
llvm-svn: 172691
The testcase in pr14929 shows that this is extremely hard to do. If we choose
to apply the attribute, that causes the visibility of some decls to change and
that can happen really late (during codegen).
Current gcc ignores the attribute in this testcase and emits a warning.
This suggests that the correct solution is to find a point in the compilation
where we can compute the visibility and
* assert it was never computed before
* reject any attempts to compute it again in the future (with warnings).
llvm-svn: 172305
storage and thus is implicitly zero-initialized, there is no need to follow
the C++11 memory model. This patch unconditionally detects this condition and
zero-initializes the variable.
Patch has been commented on and OKed by Doug off-line.
// rdar://12897704
llvm-svn: 172144
Using LLVM functionality added in r171698. This works in GDB for member
variable pointers but not member function pointers. See the LLVM commit and
GDB bug 14998 for details.
Un-xfailing cases in the GDB 7.5 test suite will follow.
llvm-svn: 171699
Catch some cases I'd missed in r171605 related to unnamed parameters of record
type. This resolves all remaining cases of PR14573 suppression in the GDB 7.5
test suite. Fix to the test suite to follow.
llvm-svn: 171633
Referring back to the original commit (r115090), which was a frontend-only
test, I adjusted this test to verify the frontend change that was made: to emit
the protected access value in the flags metadata field.
llvm-svn: 171604
the body of a function. The problem was that hasBody looks at the entire chain
and causes problems for -fvisibility-inlines-hidden if the cache was not
invalidated.
Original message:
Cache visibility of decls.
This unifies the linkage and visibility caching. I first implemented this when
working on pr13844, but the previous fixes removed the performance advantage of
this one.
This is still a step in the right direction for making linkage and visibility
cheap to use.
llvm-svn: 171053
of a member function with parenthesized declarator.
Like this test case:
class Foo {
  const char *(baz)() {
    return __PRETTY_FUNCTION__;
  }
};
llvm-svn: 170233
I wasn't sure where to put the test case for this, but this seemed like as good
a place as any. I had to reorder the tests here to make them legible while
still matching the order of metadata output in the IR file (for some reason
making it virtual changed the ordering).
The relevant commit to fix up LLVM to actually respect 'artificial' member
variables is coming once I write up a test case for it.
llvm-svn: 170154
both LE and BE targets.
AFAICT, Clang gets this correct for PPC64. I've compared it to GCC 4.8
output for PPC64 (thanks Roman!) and to my limited ability to read power
assembly, it looks functionally equivalent. It would be really good to
fill in the assertions on this test case for x86-32, PPC32, ARM, etc.,
but I've reached the limit of my time and energy... Hopefully other
folks can chip in as it would be good to have this in place to test any
subsequent changes.
To those who care about PPC64 performance, a side note: there is some
*obnoxiously* bad code generated for these test cases. It would be worth
someone's time to sit down and teach the PPC backend to pattern match
these IR constructs better. It appears that things like '(shr %foo,
<imm>)' turn into 'rldicl R, R, 64-<imm>, <imm>' or some such. They
don't even get combined with other 'rldicl' instructions *immediately
adjacent*. I'll add a couple of these patterns to the README, but
I think it would be better to look at all the patterns produced by this
and other bitfield access code, and systematically build up a collection
of patterns that efficiently reduce them to the minimal code.
llvm-svn: 169693
This was an egregious bug due to the several iterations of refactorings
that took place. Size no longer meant what it originally did by the time
I finished, but this line of code never got updated. Unfortunately we
had essentially zero tests for this in the regression test suite. =[
I've added a PPC64 run over the bitfield test case I've been primarily
using. I'm still looking at adding more tests and making sure this is
the *correct* bitfield access code on PPC64 linux, but it looks pretty
close to me, and it is *worlds* better than before this patch as it no
longer asserts! =] More commits to follow with at least additional tests
and maybe more fixes.
Sorry for the long breakage due to this....
llvm-svn: 169691
generally support the C++11 memory model requirements for bitfield
accesses by relying more heavily on LLVM's memory model.
The primary change this introduces is to move from a manually aligned
and strided access pattern across the bits of the bitfield to a much
simpler lump access of all bits in the bitfield followed by math to
extract the bits relevant for the particular field.
This simplifies the code significantly, but relies on LLVM to intelligently
lower these integers.
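As a rough sketch of the new pattern (the field widths and the IR in the
comments are illustrative, not taken from a real test):

  struct S {
    unsigned a : 3;
    unsigned b : 7;
    unsigned c : 22; // all three fields share one 32-bit storage unit
  };

  unsigned read_b(const S &s) {
    // Conceptually, the access is now one lump load plus shift/mask:
    //   %bits = load of the whole 32-bit storage unit
    //   %shr  = lshr %bits, 3   ; shift 'b' down past 'a'
    //   %b    = and  %shr, 127  ; mask to b's 7 bits
    return s.b;
  }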
I have tested LLVM's lowering both synthetically and in benchmarks. The
lowering appears to be functional, and there are no really significant
performance regressions. Different code patterns accessing bitfields
will vary in how this impacts them. The only real regressions I'm seeing
are a few patterns where the LLVM code generation for loads that feed
directly into a mask operation doesn't take advantage of the x86 ability
to do a smaller load and a cheap zero-extension. This doesn't regress
any benchmark in the nightly test suite on my box past the noise
threshold, but my box is quite noisy. I'll be watching the LNT numbers,
and will look into further improvements to the LLVM lowering as needed.
llvm-svn: 169489