Treat volatile accesses as "maybe" instead of "definite" side effects for the purposes of warning on evaluations in an unevaluated context. No longer diagnose on idiomatic code like:
int * volatile v;
(void)sizeof(*v);
llvm-svn: 225116
Most modern PowerPC cores prefer that functions and loops start on
16-byte-aligned boundaries (*), so instruct block placement, etc. to make this
happen. The branch selector has also been adjusted to account for the extra
nops that might now be inserted before loop headers.
(*) Some cores actually prefer other alignments for small loops, but that will
be addressed in a follow-up commit.
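For a rough feel of the bookkeeping the branch selector now needs, here is a
minimal sketch (not the actual code) of padding a block's start address up to
a 16-byte boundary with 4-byte PowerPC nops:

#include <cstdint>

// Number of nops needed so the block starts on a 16-byte boundary; the
// branch selector must count these extra bytes before loop headers.
unsigned nopsNeeded(uint64_t BlockAddr) {
  uint64_t Aligned = (BlockAddr + 15) & ~UINT64_C(15);
  return static_cast<unsigned>((Aligned - BlockAddr) / 4);  // 4-byte nops
}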
llvm-svn: 225115
Make sure they all have llvm_unreachable on the default path out of the switch. Remove unnecessary "default: break". Remove a 'return' after unreachable. Fix some indentation.
llvm-svn: 225114
We would sometimes leave the out-param APInts untouched while going
through computeKnownBits. While I don't know of a way to trigger a bug
involving this in practice, it goes against the overall design of
computeKnownBits.
Found via code inspection.
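A minimal sketch of the out-param contract at issue, with a hypothetical
simplified signature rather than the real ValueTracking API:

#include <cstdint>

// Hypothetical stand-in for computeKnownBits: every path must write both
// out-params, otherwise callers observe stale values.
void computeKnownBitsSketch(uint64_t V, bool IsConstant,
                            uint64_t &KnownZero, uint64_t &KnownOne) {
  if (IsConstant) {
    KnownOne = V;
    KnownZero = ~V;
    return;
  }
  // The fix: fall-through paths must still initialize the out-params to
  // "nothing known" rather than leaving them untouched.
  KnownZero = 0;
  KnownOne = 0;
}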
llvm-svn: 225109
Newer POWER cores, and the A2, support the cmpb instruction. This instruction
compares its operands, treating each of the 8 bytes in the GPRs separately,
returning a 'mask' result of 0 (for false) or -1 (for true) in each byte.
Code generation support is added in the form of a PPCISelDAGToDAG
DAG-preprocessing routine that recognizes patterns close to what the
instruction computes (either exactly, or related by a constant masking
operation) and generates the cmpb instruction (along with any necessary
constant masking operation). This can be expanded if use cases arise.
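For illustration, the byte-wise semantics the new matching code looks for can
be written out in C++ like this (a sketch of the 8-byte GPR case, not the
SelectionDAG code itself):

#include <cstdint>

// Byte-wise compare: each result byte is 0xFF if the corresponding bytes of
// the inputs are equal, 0x00 otherwise -- exactly what cmpb computes.
uint64_t cmpbSemantics(uint64_t A, uint64_t B) {
  uint64_t Result = 0;
  for (unsigned I = 0; I != 8; ++I) {
    uint64_t Mask = UINT64_C(0xFF) << (I * 8);
    if ((A & Mask) == (B & Mask))
      Result |= Mask;
  }
  return Result;
}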
llvm-svn: 225106
ARM NT assumes a THUMB-only environment. As such, any address that is detected
as residing in an executable section is adjusted to have its bottom bit set to
indicate THUMB in case of a mode exchange.
Although the testing here seems insufficient (missing the negative cases), the
existing test cases for the IMAGE_REL_ARM_{ADDR32,MOV32T} relocations are
relevant as they ensure that we do not incorrectly set the bit.
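The adjustment itself amounts to the following trivial sketch (hypothetical
names):

#include <cstdint>

// Set the bottom bit of addresses in executable sections so that a mode
// exchange (e.g. via BLX) stays in THUMB state.
uint32_t adjustForThumb(uint32_t Addr, bool InExecutableSection) {
  return InExecutableSection ? (Addr | 1) : Addr;
}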
llvm-svn: 225104
FunctionPassManager. These never got documented, likely due to the
clutter of this header file. This fixes another problem people noticed
when they started trying to use the new pass manager.
I've also used this to document the aspirational constraints I would
like to hold passes to. I don't really have a better place to document
such things at this point, but eventually will probably create a proper
.rst file and page for the LLVM pass infrastructure that carries such
high-level concerns.
llvm-svn: 225097
concept-based polymorphism in the pass manager to a separate header.
I got feedback from someone reading the code and trying to use it that
this was really making it hard to dive in and start using these APIs,
and that makes a lot of sense.
This only requires a moderate amount of gymnastics to separate in this
way, namely rinsing the PreservedAnalyses object through a template
argument in a few places so that it is dependent and we only examine it
on instantiation.
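The trick looks roughly like this (a minimal sketch with hypothetical names,
assuming the result type is threaded through as a template parameter):

// By taking the analysis-set type as a template parameter rather than naming
// it directly, the body becomes dependent and is only type-checked at
// instantiation, so this header need not see the real type's definition.
template <typename PreservedAnalysesT, typename PassT, typename IRUnitT>
PreservedAnalysesT runPassSketch(PassT &Pass, IRUnitT &IR) {
  PreservedAnalysesT PA = Pass.run(IR);  // dependent; examined on instantiation
  return PA;
}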
llvm-svn: 225094
It is somewhat common for CFLAGS to be used with .s files. We were
already ignoring -flto. This patch just does the same for -fno-lto.
llvm-svn: 225093
Unfortunately, MSVC does not indicate to the driver what target is being used.
This means that we cannot correctly select the target architecture for the
clang_rt component. This breaks down when targeting Windows with the clang
driver as opposed to the clang-cl driver. This should fix the native ARM
buildbot tests.
llvm-svn: 225089
Fix test failures by introducing CommonFlags::CopyFrom() to make sure
the compiler doesn't insert memcpy() calls into runtime code.
Original commit message:
Protect CommonFlags singleton by adding const qualifier to
common_flags() accessor. The only ways to modify the flags are
SetCommonFlagsDefaults(), ParseCommonFlagsFromString() and
OverrideCommonFlags() functions, which are only supposed to be
called during initialization.
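The shape of the fix, as a self-contained sketch (internal_memcpy here is a
stand-in for the sanitizer runtime's own helper, and the fields are
illustrative):

#include <cstddef>

// Stand-in for sanitizer_common's internal_memcpy, which copies bytes without
// going through the intercepted libc memcpy.
static void *internal_memcpy(void *Dst, const void *Src, size_t N) {
  char *D = static_cast<char *>(Dst);
  const char *S = static_cast<const char *>(Src);
  for (size_t I = 0; I != N; ++I)
    D[I] = S[I];
  return Dst;
}

struct CommonFlags {
  bool symbolize;
  int malloc_context_size;
  // ... many more flag fields in the real runtime ...

  // An explicit CopyFrom keeps the compiler from lowering a plain struct
  // assignment into a call to libc's memcpy inside runtime code.
  void CopyFrom(const CommonFlags &Other) {
    internal_memcpy(this, &Other, sizeof(*this));
  }
};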
llvm-svn: 225088
The logic for addSanitizerRTWindows was performing the same logical operation as
getCompilerRT, which was previously fully generalised for Linux and Windows.
This avoids duplicating the logic for building up the name of a clang_rt
component. This change does move the current limitation for Windows
into getArchNameForCompilerRTLib, where it is assumed that the architecture for
Windows is always i386.
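The now-shared logic amounts to assembling a name of roughly this shape (an
illustrative sketch only; the real getCompilerRT handles more variations):

#include <string>

// Build a compiler-rt component name such as "clang_rt.builtins-i386.lib" on
// Windows or "libclang_rt.builtins-arm.a" elsewhere.
std::string compilerRTName(const std::string &Component,
                           const std::string &Arch, bool IsWindows) {
  std::string Base = "clang_rt." + Component + "-" + Arch;
  return IsWindows ? Base + ".lib" : "lib" + Base + ".a";
}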
llvm-svn: 225087
Between this behavior and that fixed by r225083/r225000, I'll take the
latter over the former for now, but I'm immediately working on
understanding/addressing this behavior too.
(the fact that the code change in r225083 caused this change in behavior
is a bit troubling anyway, given that it looks & claims to be just
a performance thing)
llvm-svn: 225086
r225000 generalized debug info line info handling for expressions such
that this code is no longer necessary.
This removes the last use of CGDebugInfo::getLocation, but not all the
uses of CGDebugInfo::CurLoc, which is still used internally in
CGDebugInfo. I'd like to do away with all of that & might succeed after
a few more patches.
llvm-svn: 225085
The optimization to avoid recreating the debugloc if it would be the
same (present since the earliest implementation, r50848, and grown more
complicated over the years) was out of date because ApplyDebugLocation
was not re-updating the CurLoc/PrevLoc. This
optimization doesn't look terribly beneficial/necessary, so I'm removing
it - if it turns up in benchmarks, I'm happy to reconsider/reimplement
this with justification, but for now it just seems to add
complexity/problems.
llvm-svn: 225083
This adds support for IMAGE_REL_ARM_BRANCH24T relocations. Similar to the
IMAGE_REL_ARM_BLX23T relocation, this relocation requires munging an
instruction. The instruction encoding is quite similar, allowing us to reuse
the same munging implementation. This is needed by the entry point stubs for
modules provided by MSVCRT.
llvm-svn: 225082
This adds support for IMAGE_REL_ARM_BLX23T relocations. Similar to the
IMAGE_REL_ARM_MOV32T relocation, this relocation requires munging an
instruction. This inches us closer to supporting a basic hello world
application.
llvm-svn: 225081
We've got some internal users that either aren't compatible with this or
have found a bug with it. Either way, this is an isolated cleanup and so
I'm reverting it to un-block folks while we investigate. Alexey and
I will be working on fixing everything up so this can be re-committed
soon. Sorry for the noise and any inconvenience.
llvm-svn: 225079
WillNotOverflowUnsignedMul's smarts will live in ValueTracking as
computeOverflowForUnsignedMul. It now returns a tri-state result:
never overflows, always overflows, and sometimes overflows.
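As an illustration of the tri-state idea, here is a sketch over value ranges
using the compiler's __uint128_t extension; the real
computeOverflowForUnsignedMul reasons with known bits instead:

#include <cstdint>

enum class OverflowResult { NeverOverflows, MayOverflow, AlwaysOverflows };

// Given [Min, Max] ranges for both operands, decide whether an unsigned
// 64-bit multiply can, must, or cannot overflow.
OverflowResult unsignedMulOverflow(uint64_t LHSMin, uint64_t LHSMax,
                                   uint64_t RHSMin, uint64_t RHSMax) {
  __uint128_t Hi = static_cast<__uint128_t>(LHSMax) * RHSMax;
  __uint128_t Lo = static_cast<__uint128_t>(LHSMin) * RHSMin;
  if (Hi <= UINT64_MAX)
    return OverflowResult::NeverOverflows;   // even the max product fits
  if (Lo > UINT64_MAX)
    return OverflowResult::AlwaysOverflows;  // even the min product overflows
  return OverflowResult::MayOverflow;
}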
llvm-svn: 225076
This is necessary to allow the disassembler to be able to handle AdSize32 instructions in 64-bit mode when address size prefix is used.
Eventually we should probably also support 'addr32' and 'addr16' in the assembler to override the address size on some of these instructions. But for now we'll just use special operand types that will look up the current mode size to select the right instruction.
llvm-svn: 225075
a pre-splitting pass over loads and stores.
Historically, splitting could cause enough problems that I hamstrung the
entire process with a requirement that splittable integer loads and
stores must cover the entire alloca. All smaller loads and stores were
unsplittable to prevent chaos from ensuing. With the new pre-splitting
logic that does load/store pair splitting I introduced in r225061, we
can now very nicely handle arbitrarily splittable loads and stores. In
order to fully benefit from these smarts, we need to mark all of the
integer loads and stores as splittable.
However, we don't actually want to rewrite partitions with all integer
loads and stores marked as splittable. This will fail to extract scalar
integers from aggregates, which is kind of the point of SROA. =] In
order to resolve this, what we really want to do is only do
pre-splitting on the alloca slices with integer loads and stores fully
splittable. This allows us to uncover all non-integer uses of the alloca
that would benefit from a split in an integer load or store (and where
introducing the split is safe because it is just memory transfer from
a load to a store). Once done, we make all the non-whole-alloca integer
loads and stores unsplittable just as they have historically been,
repartition and rewrite.
The result is that when there are integer loads and stores anywhere
within an alloca (such as from a memcpy of a sub-object of a larger
object), we can split them up if there are non-integer components to the
aggregate hiding beneath. I've added challenging test cases to
demonstrate how this is able to promote to scalars even a case where we
have *partially* overlapping loads and stores.
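To make the memcpy-of-a-sub-object case concrete, this hypothetical source
pattern is the kind that now promotes cleanly:

// A copy of the sub-object In is often lowered to wide integer loads/stores
// that straddle the int and float members; pre-splitting them lets SROA
// promote both fields to scalars.
struct Inner { int I; float F; };
struct Outer { Inner In; double D; };

float useInner(const Outer &Src) {
  Inner Tmp = Src.In;  // may become a single i64 load/store pair in IR
  return Tmp.F;        // wants the float component as a scalar
}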
This restores the single-store behavior for small arrays of i8s which is
really nice. I've restored both the little endian testing and big endian
testing for these exactly as they were prior to r225061. It also forced
me to be more aggressive in an alignment test to actually defeat SROA.
=] Without the added volatiles there, we actually split up the weird i16
loads and produce nice double allocas with better alignment.
This also uncovered a number of bugs where we failed to handle
splittable load and store slices which didn't have a beginning offset of
zero. Those fixes are included, and without them the existing test cases
explode in glorious fireworks. =]
I've kept support for leaving whole-alloca integer loads and stores as
splittable even for the purpose of rewriting, but I think that's likely
no longer needed. With the new pre-splitting, we might be able to remove
all the splitting support for loads and stores from the rewriter. I'm
not doing that in this patch so that any performance regressions it
causes stay isolated in an easy-to-find-and-revert chunk.
llvm-svn: 225074
instructions.
I noticed this when working on dialing up how aggressively we can
pre-split loads and stores. My test case wasn't passing because dead
GEPs into the allocas persisted when they were built by this routine.
This isn't terribly harmful; we still rewrote and promoted the alloca
and I can't conceive of how to cause this to happen in a case where we
will keep the exact same alloca but rewrite and promote the uses of it.
If that ever happened, we'd get an assert out of mem2reg.
So I don't have a direct test case yet, but the subsequent commit's test
case wouldn't pass without this. There are other problems fixed by this
patch that I spotted purely by inspection such as the fact that
getAdjustedPtr could have actually deleted dead base pointers. I don't
know how to get a base pointer to go into getAdjustedPtr today, so
I think this bug could never have manifested (and I certainly can't
write a test case for it), but it wasn't the intent of the code. The
code really just wanted to GC the new instructions built. That can be
done more directly by comparing with the base pointer which is the only
non-new instruction that this code can return.
llvm-svn: 225073
This adds support for the IMAGE_REL_ARM_MOV32T relocation. This is one of the
most complicated relocations for the Windows on ARM target. It involves
re-encoding an instruction to contain an immediate value which is the relocation
target.
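For a feel of the re-encoding involved, here is a sketch of scattering a
16-bit immediate into the fields of a Thumb-2 MOVW/MOVT (T3 encoding, viewed
as a single 32-bit word; the object file actually stores two little-endian
halfwords, which the real fixup code has to handle):

#include <cstdint>

// Place a 16-bit value into the imm4:i:imm3:imm8 fields of a Thumb-2
// MOVW/MOVT encoding; a MOV32T fixup does this twice, once for each half of
// the relocation target.
uint32_t scatterImm16(uint32_t Insn, uint16_t Imm) {
  uint32_t Imm4 = (Imm >> 12) & 0xF;
  uint32_t I    = (Imm >> 11) & 0x1;
  uint32_t Imm3 = (Imm >> 8) & 0x7;
  uint32_t Imm8 = Imm & 0xFF;
  Insn &= ~((0xFu << 16) | (1u << 26) | (0x7u << 12) | 0xFFu);  // clear fields
  return Insn | (Imm4 << 16) | (I << 26) | (Imm3 << 12) | Imm8;
}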
llvm-svn: 225072
The FIXME in the test is caused by TemplateDeclInstantiator::VisitCXXRecordDecl
returning a nullptr instead of creating an invalid decl. This is a common
pattern across all of TemplateDeclInstantiator, so I'm not comfortable changing
it. The reason it's not invalid in the class template is due to support for an
MSVC extension; see r137573.
llvm-svn: 225071
array. This prevents it from walking out of bounds on the splits array.
Bug found with the existing tests by ASan and by the MSVC debug build.
llvm-svn: 225069
a +asserts bootstrap, but my bootstrap had asserts off. Oops.
Anyways, in some places it is reasonable to cast (as a sanity check) the
pointer operand of a load or store to an Instruction within SROA --
namely when the pointer operand is expected to be derived from an
alloca, and thus always an instruction. However, the pre-splitting code
also deals with loads and stores to non-alloca pointers and there we
need to just use the Value*. Nothing about the code relied on the
instruction cast, it was only there essentially as an invariant
assertion. Remove the two that don't actually hold.
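The distinction, in sketch form (the cast<> and getPointerOperand APIs are
real; the wrapper function is made up for illustration):

#include "llvm/IR/Instructions.h"
using namespace llvm;

// Pre-splitting may see stores whose pointer operand is not an Instruction
// (e.g. a global), so keep it as a Value* rather than cast<Instruction>-ing,
// which would assert on such inputs.
Value *storePointer(StoreInst &SI) {
  // Wrong in general: cast<Instruction>(SI.getPointerOperand());
  return SI.getPointerOperand();
}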
This should fix the proximate issue in PR22080, but I'm also doing an
asserts bootstrap myself to see if there are other issues lurking.
I'll craft a reduced test case in a moment, but I wanted to get the tree
healthy as quickly as possible.
llvm-svn: 225068
Schedule dimensions that have the same constant value across all statements do
not carry any information, but due to the increased dimensionality of the
schedule they cost compile time. To avoid paying this cost, we remove constant
dimensions where possible. For example, if every statement's schedule has the
constant 0 in its first dimension, that dimension orders nothing and can be
dropped.
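A self-contained sketch of the simplification over an explicit matrix
representation (the real code works on isl schedules):

#include <vector>

// Drop every schedule dimension whose value is the same constant in all
// statements' schedule vectors; such dimensions order nothing.
using Schedule = std::vector<std::vector<int>>;  // one row per statement

Schedule dropConstantDims(const Schedule &S) {
  if (S.empty())
    return S;
  size_t Dims = S.front().size();
  std::vector<bool> IsConstant(Dims, true);
  for (size_t D = 0; D != Dims; ++D)
    for (const auto &Row : S)
      if (Row[D] != S.front()[D]) {
        IsConstant[D] = false;
        break;
      }
  Schedule Out(S.size());
  for (size_t R = 0; R != S.size(); ++R)
    for (size_t D = 0; D != Dims; ++D)
      if (!IsConstant[D])
        Out[R].push_back(S[R][D]);
  return Out;
}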
llvm-svn: 225067
Attempting to fix PR22078 (building on 32-bit systems) by replacing my careless
use of 1ul as a uint64_t constant with UINT64_C(1).
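The underlying pitfall, illustrated:

#include <cstdint>

// On an LP32 target, 1ul is only 32 bits wide, so shifting it by 32 or more
// is undefined; UINT64_C(1) guarantees a 64-bit constant.
uint64_t bitMask(unsigned N) {
  return UINT64_C(1) << N;  // well-defined for N up to 63
}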
llvm-svn: 225066