back up the iterator, as long as it still contains the address.
std::lower_bound will point us to the entry after the one we
are really interested in, leading to problems with backtracing
in corefiles.
<rdar://problem/27823549>
llvm-svn: 278901
This is a quick workaround: in some cases, e.g. when the caller's stack
size > the callee's stack size, we are still able to apply sibling call
optimization even if the callee has byval args.
This patch fixes https://llvm.org/bugs/show_bug.cgi?id=28328
Reviewers: hfinkel, kbarton, nemanjai, amehsan
Subscribers: hans, tjablin
https://reviews.llvm.org/D23441
llvm-svn: 278900
Now that the TargetParser tests are in place
(unittests/Support/TargetParserTest.cpp), the tests in TripleTest.cpp
that actually stress TargetParser's behavior can be removed.
llvm-svn: 278899
Use BB.getNextNode(), which returns nullptr on end(), instead of
&*BB.getIterator(), which is UB on end().
CodeGenFunction::createBasicBlock expects nullptr in this case already.
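A minimal sketch of the safe pattern (the helper name is hypothetical):

  #include "llvm/IR/BasicBlock.h"

  // getNextNode() folds the end() case into nullptr, which is what
  // CodeGenFunction::createBasicBlock expects; forming a pointer by
  // dereferencing an end() iterator instead is UB.
  llvm::BasicBlock *insertPointAfter(llvm::BasicBlock &BB) {
    return BB.getNextNode(); // nullptr when BB is the last block
  }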
llvm-svn: 278898
minimal and boring form than the old pass manager's version.
This pass does the very minimal amount of work necessary to inline
functions declared as always-inline. It doesn't support a wide array of
things that the legacy pass manager did support, but is also ... about
20 lines of code. So it has that going for it. Notably things this
doesn't support:
- Array alloca merging
- To support the above, bottom-up inlining with careful history
tracking and call graph updates
- DCE of the functions that become dead after this inlining.
- Inlining through call instructions with the always_inline attribute.
Instead, it focuses on inlining functions with that attribute.
The first I've omitted because I'm hoping to just turn it off for the
primary pass manager. If that doesn't pan out, I can add it here but it
will be reasonably expensive to do so.
The second should really be handled by running global-dce after the
inliner. I don't want to re-implement the non-trivial logic necessary to
do comdat-correct DCE of functions. This means the -O0 pipeline will
have to be at least 'always-inline,global-dce', but that seems
reasonable to me. If others are seriously worried about this I'd like to
hear about it and understand why. Again, this is all solvable by
factoring that logic into a utility and calling it here, but I'd like to
wait to do that until there is a clear reason why the existing
pass-based factoring won't work.
The final point is a serious one. I can fairly easily add support for
this, but it seems both costly and a confusing construct for the use
case of the always inliner running at -O0. This attribute can of course
still impact the normal inliner easily (although I find that
a questionable re-use of the same attribute). I've started a discussion
to sort out what semantics we want here and based on that can figure out
if it makes sense to have this complexity at -O0 or not.
One other advantage of this design is that it should be quite a bit
faster due to checking for whether the function is a viable candidate
for inlining exactly once per function instead of doing it for each call
site.
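To make that structure concrete, here is a rough sketch (not the
committed pass; names are illustrative and current LLVM APIs are
assumed) of a per-function always inliner:

  #include "llvm/ADT/SmallVector.h"
  #include "llvm/IR/InstrTypes.h"
  #include "llvm/IR/Module.h"
  #include "llvm/Transforms/Utils/Cloning.h"
  using namespace llvm;

  static void inlineAlwaysInlineFunctions(Module &M) {
    for (Function &F : M) {
      // Decide viability exactly once per function, not per call site.
      if (F.isDeclaration() || !F.hasFnAttribute(Attribute::AlwaysInline))
        continue;
      // Collect the call sites first; inlining mutates F's use list.
      SmallVector<CallBase *, 8> Calls;
      for (User *U : F.users())
        if (auto *CB = dyn_cast<CallBase>(U))
          if (CB->getCalledFunction() == &F)
            Calls.push_back(CB);
      for (CallBase *CB : Calls) {
        InlineFunctionInfo IFI;
        InlineFunction(*CB, IFI);
      }
    }
  }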
Anyways, hopefully a reasonable starting point for this pass.
Differential Revision: https://reviews.llvm.org/D23299
llvm-svn: 278896
Also avoid some pointless use of auto! Because that's friendlier to
readers and avoids several types accidentally resolving to unnecessary
references here (MachineInstr *&, unsigned &).
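A small illustration of the hazard (the container and names are
hypothetical):

  #include "llvm/CodeGen/MachineInstr.h"
  #include <vector>

  void example(std::vector<llvm::MachineInstr *> &Worklist) {
    auto &Ref = Worklist.back(); // deduces MachineInstr *&: aliases the slot
    llvm::MachineInstr *MI = Worklist.back(); // explicit type: a plain copy
    (void)Ref;
    (void)MI;
  }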
llvm-svn: 278894
This is off for now while testing can take place to make sure that in
fact we do sufficient stack coloring to fully obviate the manual alloca
array merging.
Some context on why we should be using stack coloring rather than
merging allocas in this way:
LLVM relies very heavily on analyzing pointers as coming from different
allocas in order to make aliasing decisions. These are some of the most
powerful aliasing signals available in LLVM. So merging allocas is an
extremely destructive operation on the LLVM IR -- it takes away highly
valuable and hard to reconstruct information.
As a consequence, inlined functions which happen to have array allocas
that this pattern matches will fail to be properly interleaved unless
SROA manages to hoist everything to an SSA register. Instead, the
inliner will have added an unnecessary dependence that one inlined
function execute after the other because they will have been rewritten
to refer to the same memory.
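A hypothetical source-level illustration of that false dependence (not
taken from the commit):

  // Each inlined copy of sumSquares carries its own 64-int array alloca
  // (assume N <= 64). Left distinct, alias analysis proves the two bodies
  // don't touch each other's memory, so they can be interleaved; merged
  // into one shared array, the second body must appear to run after the
  // first.
  static int sumSquares(const int *In, int N) {
    int Tmp[64];
    for (int I = 0; I != N; ++I)
      Tmp[I] = In[I] * In[I];
    int S = 0;
    for (int I = 0; I != N; ++I)
      S += Tmp[I];
    return S;
  }

  int caller(const int *X, const int *Y, int N) {
    return sumSquares(X, N) + sumSquares(Y, N);
  }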
All that said, folks will reasonably want some time to experiment here
and make sure there are no significant regressions. A flag should give
us an easy knob to test.
For more context, see the thread here:
http://lists.llvm.org/pipermail/llvm-dev/2016-July/103277.html
http://lists.llvm.org/pipermail/llvm-dev/2016-August/103285.html
Differential Revision: https://reviews.llvm.org/D23052
llvm-svn: 278892
These splices are interesting because they involve swapping two nodes in
the same list. There are two ways to do this. Assuming:
A -> B -> [Sentinel]
You can either:
- splice B before A, with: L.splice(A, L, B) or
- splice A before Sentinel, with: L.splice(L.end(), L, A) to create:
B -> A -> [Sentinel]
These two swapping-splices are somewhat interesting corner cases for
maintaining the list invariants. The tests pass even with my new ilist
implementation, but I had some doubts about the latter when I was
looking at weird UB effects. Since I can't find equivalent explicit
test coverage elsewhere, it seems prudent to commit.
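The same two swapping-splices can be demonstrated with std::list, whose
splice() has the same shape as the ilist calls above:

  #include <cassert>
  #include <iterator>
  #include <list>

  int main() {
    std::list<char> L1{'A', 'B'}, L2{'A', 'B'};

    // Option 1: splice B before A.
    auto A = L1.begin();
    L1.splice(A, L1, std::next(A));
    assert((L1 == std::list<char>{'B', 'A'}));

    // Option 2: splice A before the sentinel (end()).
    L2.splice(L2.end(), L2, L2.begin());
    assert((L2 == std::list<char>{'B', 'A'}));
    return 0;
  }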
llvm-svn: 278887
IndVarSimplify::sinkUnusedInvariants calls
BasicBlock::getFirstInsertionPt on the ExitBlock and moves instructions
before it. This can return end(), so it's not safe to dereference. Add
an iterator-based overload to Instruction::moveBefore to avoid the UB.
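A sketch of the new overload in use (the caller name is hypothetical):

  #include "llvm/IR/BasicBlock.h"
  #include "llvm/IR/Instruction.h"

  // The iterator form tolerates getFirstInsertionPt() returning end(),
  // which the pointer form could only reach by dereferencing end().
  void sinkToExit(llvm::Instruction &I, llvm::BasicBlock &ExitBlock) {
    I.moveBefore(ExitBlock, ExitBlock.getFirstInsertionPt());
  }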
llvm-svn: 278886
IsOperandBundleUse conveniently indicates whether
std::next(F->arg_begin(), UseIndex) will get to (or past) end(). Check
it first to avoid dereferencing end().
llvm-svn: 278884
BasicBlock::Create isn't designed to take iterators (which might be
end()), but pointers (which might be nullptr). Fix the UB that was
converting end() to a BasicBlock* by calling BasicBlock::getNextNode()
in the first place.
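A sketch of the corrected pattern (the helper name is hypothetical):

  #include "llvm/IR/BasicBlock.h"
  #include "llvm/IR/Function.h"
  #include "llvm/IR/LLVMContext.h"

  // BasicBlock::Create takes a BasicBlock* insert-before argument, where
  // nullptr means "append"; getNextNode() yields exactly that when BB is
  // the last block, so no iterator is ever converted to a pointer.
  llvm::BasicBlock *createBlockAfter(llvm::LLVMContext &Ctx,
                                     llvm::BasicBlock &BB) {
    return llvm::BasicBlock::Create(Ctx, "", BB.getParent(),
                                    /*InsertBefore=*/BB.getNextNode());
  }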
llvm-svn: 278883
around a Linux kernel bug where the actual amount of available stack may be a
*lot* lower than the rlimit.
GCC also sets a higher stack rlimit on startup, but it goes all the way to
64MiB. We can increase this limit if it proves necessary.
The kernel bug is as follows: Linux kernels prior to version 4.1 may choose to
map the process's heap as little as 128MiB before the process's stack for a PIE
binary, even in a 64-bit virtual address space. This means that allocating more
than 128MiB before you reach the process's stack high water mark can lead to
crashes, even if you don't recurse particularly deeply.
When clang is built as a Linux PIE binary, we work around the kernel bug
by touching a page deep within the stack (after ensuring that we know
how big it is). This preallocates virtual address space for the stack so
that the kernel doesn't allow the brk() area to wander into it.
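A hypothetical sketch of the workaround (not clang's exact code):

  #include <alloca.h>
  #include <sys/resource.h>

  // Raise the stack rlimit if it is low, then touch a byte deep within
  // the stack so the kernel reserves that address range before brk() can
  // be mapped into it. The volatile store keeps the touch from being
  // optimized away.
  static void preallocateStack(rlim_t Desired) {
    struct rlimit RL;
    if (getrlimit(RLIMIT_STACK, &RL) == 0 && RL.rlim_cur < Desired) {
      RL.rlim_cur = Desired < RL.rlim_max ? Desired : RL.rlim_max;
      setrlimit(RLIMIT_STACK, &RL);
    }
    volatile char *P = static_cast<volatile char *>(alloca(512 * 1024));
    P[0] = 0;
  }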
llvm-svn: 278882
When there's only one argument and it doesn't match one of the known
functions, return ARCInstKind::CallOrUser rather than falling through
to the two argument case. The old behaviour both incremented past and
dereferenced end().
llvm-svn: 278881
llvm::tryFoldSPUpdateIntoPushPop assumes its arguments are valid
MachineInstrs. Update ARMFrameLowering::emitPrologue to respect that;
when LastPush==end(), it can't possibly be a push instruction anyway.
llvm-svn: 278880
When comparing a User* to a BasicBlock::iterator in
passingValueIsAlwaysUndefined, don't dereference the iterator in case it
is end().
llvm-svn: 278872
Rather than doing a funny dance that relies on dereferencing end() not
crashing, add some API to MachineInstrBundleIterator to get a non-const
version of the iterator.
llvm-svn: 278870
This allows you to annotate switch case fallthrough in a better way
than a "// FALLTHROUGH" comment. Eventually it would be nice to turn
on -Wimplicit-fallthrough, if we can get the code base clean.
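Assuming this is the LLVM_FALLTHROUGH macro from
llvm/Support/Compiler.h, usage looks roughly like:

  #include "llvm/Support/Compiler.h"

  int classify(int Kind) {
    int Score = 0;
    switch (Kind) {
    case 0:
      Score += 10;
      LLVM_FALLTHROUGH; // intentional: case 0 also gets case 1's handling
    case 1:
      Score += 1;
      break;
    default:
      Score = -1;
    }
    return Score;
  }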
llvm-svn: 278868
If AnalyzeBranch can't analyze a block and it is possible to
fallthrough, then duplicating the block doesn't make sense, as only one
block can be the layout predecessor for the un-analyzable fallthrough.
Submitted with a test case, but NOTE: the test case doesn't currently
fail. However, the test case fails with D20505 and would have saved me
some time debugging.
llvm-svn: 278866
1. Fix variable names
2. Add local variables to reduce code
3. Fix code comments
4. Add early exit to reduce indentation
5. Remove 'else' after if -> return
6. Hoist common predicate
llvm-svn: 278864
This patch adds a few new convenience options used by the PGO CMake cache to set up options on bootstrap stages. The new options are:
PGO_INSTRUMENT_LTO - Builds the instrumented and final builds with LTO
PGO_BUILD_CONFIGURATION - Accepts a CMake cache script that can be used for complex configuration of the stage2-instrumented and stage2 builds.
The patch also includes a fix for bootstrap dependencies so that the instrumented LTO tools don't get used when building the final stage, and it adds distribution targets to the passthrough.
llvm-svn: 278862
With -debug-info-kind=limited, we omit debug info for dynamic classes that live in other TUs, which reduces duplicate type information. When everything is statically linked, that type information comes back together in the final binary. But if your binary has a class derived from a base in a DLL, the base class info is not available to the debugger.
The decision is made in shouldOmitDefinition (CGDebugInfo.cpp). Per a suggestion from rnk, I've tweaked the decision so that we do include definitions for classes marked as DLL imports. This should be a relatively small number of classes, so we don't pay a large price for duplication of the type info, yet it should cover most cases on Windows.
Essentially this makes debug info for DLLs independent, but we still assume that all TUs within the same DLL will be consistently built with (or without) debug info and the debugger will be able to search across the debug info within that scope to resolve any declarations into definitions, etc.
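An illustration with hypothetical types:

  // Base lives in a DLL. Previously, -debug-info-kind=limited left this
  // TU with only a declaration of Base, so a debugger attached to this
  // binary couldn't see the base class layout. Because Base is marked
  // dllimport, its definition is now emitted here as well.
  class __declspec(dllimport) Base {
  public:
    virtual void f();
  };

  class Derived : public Base {
  public:
    void f() override;
  };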
llvm-svn: 278861
The current MachineBasicBlock might be the last block, so FallThru may
be past the end(). Use getNextNode(), which will convert to nullptr,
rather than &*++, which is invalid if we reach the end().
llvm-svn: 278858
This patch handles 64-bit constants which can be encoded as 32-bit immediates.
It extends the functionality added by https://reviews.llvm.org/D11363 for 32-bit constants to 64-bit constants.
Patch by Sunita Marathe!
Differential Revision: https://reviews.llvm.org/D23391
llvm-svn: 278857
Clearing out the AssumptionCache can cause us to rescan the entire
function for assumes. If there are many loops, then we are scanning
over the entire function many times.
Instead of clearing out the AssumptionCache, register all cloned
assumes.
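A sketch of the cheaper approach (the exact parameter type of
registerAssumption has varied across LLVM versions):

  #include "llvm/ADT/ArrayRef.h"
  #include "llvm/Analysis/AssumptionCache.h"
  #include "llvm/IR/IntrinsicInst.h"

  // Rather than AC.clear(), which forces a rescan of the whole function
  // for llvm.assume calls, hand each cloned assume to the cache directly.
  void registerClonedAssumes(llvm::AssumptionCache &AC,
                             llvm::ArrayRef<llvm::AssumeInst *> Cloned) {
    for (llvm::AssumeInst *Assume : Cloned)
      AC.registerAssumption(Assume);
  }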
llvm-svn: 278854
Summary: This will allow the sanitizers to be used when the C++ ABI library is unavailable.
Reviewers: samsonov, beanz, pcc, rnk
Subscribers: llvm-commits, kubabrecka, compnerd, dberris
Differential Revision: https://reviews.llvm.org/D23376
llvm-svn: 278848