The handler that deals with IR passed/missed/analysis remarks is extended to
also handle the corresponding MIR remarks.
The more thorough testing is done via llc (rL293113, rL293121). Here we just
make sure that the functionality is accessible through clang.
llvm-svn: 293146
Do not emit lifetime markers when a label has been seen
in the current lexical scope.
clang currently emits the lifetime.start marker of a variable when the
variable comes into scope even though a variable's lifetime starts at
the entry of the block with which it is associated, according to the C
standard. This normally doesn't cause any problems, but in the rare case
where a goto jumps backwards past the variable declaration to an earlier
point in the block (see the test case added to lifetime2.c), it can
cause miscompilation.
To prevent such miscompiles, this commit conservatively disables the
emission of lifetime markers when a label has been seen in the current
block.
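A minimal C sketch of the problematic shape (illustrative; the actual
regression test is the one added to lifetime2.c):

  extern int get(void);

  int f(void) {
  l1:;
    int x = get();   /* lifetime.start used to be emitted here, after   */
    if (x == 0)      /* the label, even though x's lifetime in C covers */
      goto l1;       /* the whole block, so the backwards goto could    */
    return x;        /* observe x's storage as already dead             */
  }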
This problem was discussed on cfe-dev here:
http://lists.llvm.org/pipermail/cfe-dev/2016-July/050066.html
rdar://problem/30153946
Differential Revision: https://reviews.llvm.org/D27680
llvm-svn: 293106
Summary:
This patch changes the layout of DoubleAPFloat and adjusts all
operations to do one of:
1) (IEEEdouble, IEEEdouble) -> (uint64_t, uint64_t) -> PPCDoubleDoubleImpl,
then run the old algorithm.
2) Do the right thing directly.
1) includes multiply, divide, remainder, mod, fusedMultiplyAdd, roundToIntegral,
convertFromString, next, convertToInteger, convertFromAPInt,
convertFromSignExtendedInteger, convertFromZeroExtendedInteger,
convertToHexString, toString, getExactInverse.
2) includes makeZero, makeLargest, makeSmallest, makeSmallestNormalized,
compare, bitwiseIsEqual, bitcastToAPInt, isDenormal, isSmallest,
isLargest, isInteger, ilogb, scalbn, frexp, hash_value, Profile.
I could have split this into two patches, e.g. using
1) for all operations first, then incrementally changing some of them to
2). I didn't do that, because 1) involves code that converts data between
PPCDoubleDoubleImpl and (IEEEdouble, IEEEdouble) back and forth, which may
pessimize the compiler. Instead, I found easy functions and used
approach 2) for them directly.
The next step is to move multiply and divide from 1) to 2). I don't
have plans for the other functions in 1).
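As a rough illustration of approach 2), a C sketch of operating on the
pair directly for something like makeZero (the struct and names here are
assumptions, not the actual APFloat code):

  typedef struct {
    double Hi, Lo;   /* the value is the unevaluated sum Hi + Lo */
  } DoubleDouble;

  static void make_zero(DoubleDouble *DD, int Negative) {
    DD->Hi = Negative ? -0.0 : 0.0;   /* the sign lives on the high double */
    DD->Lo = 0.0;                     /* no PPCDoubleDoubleImpl round-trip */
  }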
Differential Revision: https://reviews.llvm.org/D27872
llvm-svn: 292839
For a << b (as the original vec_sl does), if b >= sizeof(a) * 8, the
behavior is undefined. However, Power instructions do define the
behavior, which is equivalent to a << (b % (sizeof(a) * 8)).
This patch changes altivec.h to use a << (b % (sizeof(a) * 8)) to
ensure semantics consistent with the instructions; the multiple
instructions generated for the modulo are then combined back into a
single shift.
This patch handles left shift only. Right shift, on the other hand, is
more complicated, given the distinction between arithmetic and logical
right shifts.
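A scalar C sketch of the per-element semantics the header now guarantees
(illustrative; the real change operates on the vector types in altivec.h):

  unsigned int shl_elem(unsigned int a, unsigned int b) {
    /* b % 32 wraps oversized shift amounts, matching the modulo
       behavior the Power vector shift instructions define. */
    return a << (b % (sizeof(a) * 8));
  }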
Differential Revision: https://reviews.llvm.org/D28037
llvm-svn: 292659
Summary: The LTO backend will not invoke the SampleProfileLoader pass even if -fprofile-sample-use is specified. This patch passes the flag down so that the pass manager can add the SampleProfileLoader pass correctly.
Reviewers: mehdi_amini, tejohnson
Subscribers: cfe-commits
Differential Revision: https://reviews.llvm.org/D28588
llvm-svn: 291870
There is a synchronization point between the reference count of a block dropping to zero and its destruction, which TSan does not observe. Do not report errors in the compiler-emitted block destroy method or anything called from it.
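A hedged C sketch of the pattern involved (assumes -fblocks and the
blocks runtime; the names are illustrative):

  #include <Block.h>

  typedef void (^Callback)(void);

  void hand_off(Callback cb) {
    Callback copy = Block_copy(cb);
    /* ... handed to another thread, invoked, and eventually: */
    Block_release(copy);   /* the final release runs the compiler-emitted
                              destroy helper, which TSan now ignores */
  }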
This is similar to https://reviews.llvm.org/D25857
Differential Revision: https://reviews.llvm.org/D28387
llvm-svn: 291868
clang has generated correct IR for char/short decrement since r126816,
but we didn't have any test coverage for decrement.
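For illustration, the sort of construct now covered (a sketch, not the
actual test file):

  char pre_dec_char(char c) { return --c; }      /* promote to int, subtract,
                                                    truncate back to char */
  short post_dec_short(short s) { return s--; }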
Patch by Andrew Rogers.
llvm-svn: 291805
Summary: The LTO backend will not invoke the SampleProfileLoader pass even if -fprofile-sample-use is specified. This patch passes the flag down so that the pass manager can add the SampleProfileLoader pass correctly.
Reviewers: mehdi_amini, tejohnson
Subscribers: cfe-commits
Differential Revision: https://reviews.llvm.org/D28588
llvm-svn: 291774
Summary:
This simplifies distributed build system integration, where actions
may be scheduled before the Thin Link, which determines the list of
objects selected by the linker. The gold plugin currently emits
0-sized index files for objects not selected by the link, to enable
checking for expected output files by the build system. If the build
system then schedules a backend action for these bitcode files, we want
to be able to fall back to normal compilation instead of failing.
Fallback is enabled under an option in LLVM (D28410), in which case a
nullptr is returned from llvm::getModuleSummaryIndexForFile. Clang can
just proceed with non-ThinLTO compilation in that case.
I am investigating whether this can be addressed in our build system,
but that is a longer term fix and so this enables a workaround in the
meantime.
Reviewers: mehdi_amini
Subscribers: cfe-commits
Differential Revision: https://reviews.llvm.org/D28362
llvm-svn: 291303
Summary:
This patch makes the type_mismatch static data 7 bytes smaller (and it
ends up being 16 bytes smaller due to alignment restrictions, at least
on some x86-64 environments).
It revs up the type_mismatch handler version since we're breaking binary
compatibility. I will soon post a patch for the compiler-rt side.
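A hedged C sketch of the kind of shrink involved, assuming the
pointer-sized alignment field becomes a one-byte log2(alignment); the
field names are illustrative, not necessarily the exact compiler-rt
layout:

  struct SourceLocation { const char *Filename; unsigned Line, Column; };
  struct TypeDescriptor;                  /* opaque for this sketch */

  struct TypeMismatchDataOld {
    struct SourceLocation Loc;
    const struct TypeDescriptor *Type;
    unsigned long Alignment;              /* pointer-sized: 8 bytes on x86-64 */
    unsigned char TypeCheckKind;
  };

  struct TypeMismatchDataNew {
    struct SourceLocation Loc;
    const struct TypeDescriptor *Type;
    unsigned char LogAlignment;           /* 1 byte: 7 bytes of fields saved */
    unsigned char TypeCheckKind;
  };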
Reviewers: rsmith, kcc, vitalybuka, pgousseau, gbedwell
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D28242
llvm-svn: 291236
This test would force the execution of the backend. However, the
backend already has a test for this. Effectively, this was trying to
test that an API call was made properly. We do not have a good way to
really test this. The test itself tested very little.
Addresses post-commit review comments from Eric Christopher.
llvm-svn: 291208
Add builtins for the functions, plus custom codegen mapping the builtins
to their corresponding intrinsics and handling the endian-related swapping.
https://reviews.llvm.org/D26546
llvm-svn: 291179
It seems that the ARM buildbots do not include x86 support; conversely,
x86 hosts may not include the ARM target. Use an x86 triple and
require the registered target.
llvm-svn: 291142
Inline assembly may use the `.include` directive to include other
content into the file. Without the integrated assembler, the `-I` group
gets passed to the external assembler. Emulate this by collecting the
header search paths and passing them to the integrated assembler.
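A minimal C example of the construct in question (the included file
name is made up):

  /* Previously this worked only with -no-integrated-as, because the -I
     group went to the external assembler; the integrated assembler now
     receives the same search paths. */
  __asm__(".include \"macros.inc\"");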
Resolves PR24811!
llvm-svn: 291123
Front end component (back end changes are D27392). The vectorcall
calling convention was broken subtly in two cases. First,
it didn't properly handle homogeneous vector aggregates (HVAs).
Second, the vectorcall specification requires that only the
first 6 parameters be eligible for register assignment.
This patch fixes both issues.
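For illustration, a hedged C sketch of an HVA under vectorcall (assumes
an x86 Windows target; the names are made up):

  typedef float __attribute__((vector_size(16))) v4f32;

  struct HVA2 { v4f32 a, b; };   /* homogeneous vector aggregate */

  /* With the fix, x is classified as an HVA, and only the first six
     parameters are considered for register assignment. */
  void __attribute__((vectorcall)) take_hva(struct HVA2 x);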
Differential Revision: https://reviews.llvm.org/D27529
llvm-svn: 291041
The special case to widen the integer literal zero when passed to
variadic function calls should only apply to variadic functions, not
unprototyped functions. This is consistent with what MSVC does. In this
test case, MSVC uses a 4-byte store to pass the 5th argument to 'kr' and
an 8-byte store to pass the zero to 'v':
  void v(int, ...);
  void kr();
  void f(void) {
    v(1, 2, 3, 4, 0);
    kr(1, 2, 3, 4, 0);
  }
Aaron Ballman discovered this issue in https://reviews.llvm.org/D28166
llvm-svn: 290906
Our newly aggressive constant folding logic makes it possible for
CGExprConstant to see the same CompoundLiteralExpr more than once. So,
emitting a new GlobalVariable every time we see a CompoundLiteral is no
longer correct.
We had a similar issue with BlockExprs that was caught while testing
said aggressive folding, so I applied the same style of fix (see D26410)
here. If we find yet another case where this needs to happen, we should
probably refactor this so we don't have a third DenseMap+getter+setter.
As a design note: getAddrOfConstantCompoundLiteralIfEmitted is really
only intended to be called by ConstExprEmitter::EmitLValue. So,
returning a GlobalVariable* instead of a ConstantAddress costs us
effectively nothing, and saves us either a few bytes per entry in our
map or a bit of code duplication.
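For reference, a C sketch of the kind of expression involved
(illustrative):

  /* A file-scope pointer to a compound literal: the literal has static
     storage duration, and the folder may now visit this same
     CompoundLiteralExpr more than once, so it must hand back one unique
     GlobalVariable rather than emitting a fresh one per visit. */
  int *p = (int[]){1, 2, 3};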
llvm-svn: 290661
This is kind of funny because I specifically did work to make this easy
and then it didn't actually get implemented.
I've also ported a set of tests that rely on this functionality to run
with the new PM as well as the old PM so that we don't mess this up in
the future.
llvm-svn: 290558
Add an option to enable the new, experimental pass
manager, and a code path to use it.
The option is actually a top-level option but does contain
'experimental' in the name. This is the compromise suggested by Richard
in discussions. We expect this option will be around long enough and
have enough users towards the end that it merits not being relegated to
CC1, but it still needs to be clear that this option will go away at
some point.
The backend code is a fresh codepath dedicated to handling the flow with
the new pass manager. This was also Richard's suggested code structuring
to essentially leave a clean path for development rather than carrying
complexity or idiosyncrasies of how we do things just to share code with
the parts of this in common with the legacy pass manager. And it turns
out, not much is really in common even though we use the legacy pass
manager for codegen at this point.
I've switched a couple of tests to run with the new pass manager, and
they appear to work. There are still plenty of bugs that need squashing
(just with basic experiments I've found two already!) but they aren't in
this code, and the whole point is to expose the necessary hooks to start
experimenting with the pass manager in more realistic scenarios.
That said, I want to *strongly caution* anyone itching to play with
this: it is still *very shaky*. Several large components have not yet
been shaken down. For example I have bugs in both the always inliner and
inliner that I have already spotted and will be fixing independently.
Still, this is a fun milestone. =D
One thing not in this patch (but that might be very reasonable to add)
is some level of support for raw textual pass pipelines such as what
Sean had a patch for some time ago. I'm mostly interested in the more
traditional flow of getting the IR out of Clang and then running it
through opt, but I can see other use cases so someone may want to add
it.
And of course, *many* features are not yet supported!
- O1 is currently more like O2
- None of the sanitizers are wired up
- ObjC ARC optimizer isn't wired up
- ...
So plenty of stuff still left to do!
Differential Revision: https://reviews.llvm.org/D28077
llvm-svn: 290450
Summary:
We compile user OpenCL kernel code with the spir triple, but built-ins are written in OpenCL and compiled with an x86_64 triple so that x86 intrinsics can be used. The address spaces need to match in both cases, so we change the fake address space map in OpenCL to match spir.
On the CPU, address spaces are not really important, but we'd like to preserve address space information in order to perform optimizations that rely on it, such as enhanced alias analysis.
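A hedged sketch of a SPIR-style numbering (treat these values as an
assumption; the authoritative map is in the patch itself):

  /* Assumed mapping from OpenCL address spaces to target address
     spaces under the spir triple. */
  enum {
    AS_private  = 0,
    AS_global   = 1,
    AS_constant = 2,
    AS_local    = 3,
    AS_generic  = 4
  };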
Reviewers: pekka.jaaskelainen, Anastasia
Subscribers: pekka.jaaskelainen, yaxunl, bader, cfe-commits
Differential Revision: https://reviews.llvm.org/D28048
llvm-svn: 290436
Untangle the handling of -fno-inline,
-fno-inline-functions, -O0, and optnone.
These were really, really tangled together:
- We used the noinline LLVM attribute for -fno-inline
- But not for -fno-inline-functions (breaking LTO)
- But we did use it for -finline-hint-functions (yay, LTO is happy!)
- But we didn't for -O0 (LTO is sad yet again...)
- We had weird structuring of CodeGenOpts with both an inlining
enumeration and a boolean. They interacted in weird ways and
needlessly.
- A *lot* of set smashing went on with setting these, and it got worse
when we considered optnone and other inlining-affecting attributes.
- A bunch of inline affecting attributes were managed in a completely
different place from -fno-inline.
- Even with -fno-inline we failed to put the LLVM noinline attribute
onto many generated function definitions because they didn't show up
as AST-level functions.
- If you passed -O0 but -finline-functions we would run the normal
inliner pass in LLVM despite it being in the O0 pipeline, which really
doesn't make much sense.
- Lastly, we used things like '-fno-inline' to manipulate the pass
pipeline which forced the pass pipeline to be much more
parameterizable than it really needs to be. Instead we can *just* use
the optimization level to select a pipeline and control the rest via
attributes.
Sadly, this causes a bunch of churn in tests because we don't run the
optimizer in the tests and check the contents of attribute sets. It
would be awesome if attribute sets were a bit more FileCheck friendly,
but oh well.
I think this is a significant improvement and should remove the semantic
need to change what inliner pass we run in order to comply with the
requested inlining semantics by relying completely on attributes. It
also cleans up the optnone and related handling a bit.
One unfortunate aspect of this is that for generating alwaysinline
routines like those in OpenMP we end up removing noinline and then
adding alwaysinline. I tried a bunch of other approaches, but because we
recompute function attributes from scratch and don't have a declaration
here I couldn't find anything substantially cleaner than this.
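A short C sketch of the attribute-driven model this moves toward
(illustrative of the flag-to-attribute mapping described above, not the
actual CodeGen code):

  /* With -fno-inline, function definitions receive the LLVM noinline
     attribute; explicit source-level attributes still win. */
  __attribute__((noinline)) int cold_path(int x) { return x * 3; }

  /* alwaysinline survives even at -O0, where only the always-inliner
     should run rather than the normal inliner pass. */
  static inline __attribute__((always_inline)) int fast(int x) {
    return x + 1;
  }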
Differential Revision: https://reviews.llvm.org/D28053
llvm-svn: 290398
Much to my surprise, '-disable-llvm-optzns', which I thought was the
magical flag to get at the raw LLVM IR coming out of Clang,
doesn't do that. It still runs some passes over the IR. I don't want
that, I really want the *raw* IR coming out of Clang and I strongly
suspect everyone else using it is in the same camp.
There is actually a flag that does what I want that I didn't know about
called '-disable-llvm-passes'. I suspect many others don't know about it
either. It both does what I want and is much simpler.
This removes the confusing version and makes that spelling of the flag
an alias for '-disable-llvm-passes'. I've also moved everything in Clang
to use the 'passes' spelling as it seems both more accurate (*all* LLVM
passes are disabled, not just optimizations) and much easier to remember
and spell correctly.
This is part of simplifying how Clang drives LLVM to make it cleaner to
wire up to the new pass manager.
Differential Revision: https://reviews.llvm.org/D28047
llvm-svn: 290392
This is a recommit of r290149, which was reverted in r290169 due to msan
failures. msan was failing because we were calling
`isMostDerivedAnUnsizedArray` on an invalid designator, which caused us
to read uninitialized memory. To fix this, the logic of the caller of
said function was simplified, and we now have a `!Invalid` assert in
`isMostDerivedAnUnsizedArray`, so we can catch this particular bug more
easily in the future.
Fingers crossed that this patch sticks this time. :)
Original commit message:
This patch does three things:
- Gives us the alloc_size attribute in clang, which lets us infer the
number of bytes handed back to us by malloc/realloc/calloc/any user
functions that act in a similar manner (a short sketch follows below).
- Teaches our constexpr evaluator that evaluating some `const` variables
is OK sometimes. This is why we have a change in
test/SemaCXX/constant-expression-cxx11.cpp and other seemingly
unrelated tests. Richard Smith okayed this idea some time ago in
person.
- Uniques some Blocks in CodeGen, which was reviewed separately at
D26410. Lack of uniquing only really shows up as a problem when
combined with our new eagerness in the face of const.
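A hedged C sketch of what the attribute enables (the function names are
made up; alloc_size itself is the attribute this patch adds):

  void *my_malloc(unsigned long n) __attribute__((alloc_size(1)));
  void *my_calloc(unsigned long n, unsigned long size)
      __attribute__((alloc_size(1, 2)));

  unsigned long known(void) {
    /* alloc_size lets __builtin_object_size see through the call. */
    return __builtin_object_size(my_malloc(16), 0);
  }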
llvm-svn: 290297