The StringRef constructor is unnecessary (since we're converting to
std::string anyway), and having it requires an explicit call to
StringRef's or std::string's constructor.
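A minimal sketch of the ambiguity, using a hypothetical Options class rather
than the code touched here:

  #include "llvm/ADT/StringRef.h"
  #include <string>

  struct Options {
    std::string Name;
    Options(llvm::StringRef N) : Name(N.str()) {}   // the redundant overload
    Options(std::string N) : Name(std::move(N)) {}
  };

  // Options A("foo");               // error: ambiguous user-defined conversions
  Options B(llvm::StringRef("foo")); // callers must spell out one constructor
  Options C(std::string("foo"));
  // With only the std::string constructor, Options("foo") just works.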
llvm-svn: 255000
Summary:
Also add a stricter post-condition for IndVarSimplify.
Fixes PR25578. Test case by Michael Zolotukhin.
Reviewers: hfinkel, atrick, mzolotukhin
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D15059
llvm-svn: 254977
Summary:
(Note: the problematic invocation of hoistIVInc that caused PR24804 came
from IndVarSimplify, not from SCEVExpander itself)
Fixes PR24804. Test case by David Majnemer.
Reviewers: hfinkel, majnemer, atrick, mzolotukhin
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D15058
llvm-svn: 254976
We were using unnecessarily large initial sizes for these SmallVectors. This was wasting around 50kb of memory for the O3 pipeline, even after the uniquing changes. We're still using around 20kb, which is a bit much, but it's definitely better. This is about a 6% improvement in total O3 memory usage.
Note: The raw data on structure size which were used to pick these thresholds can be found in the review thread.
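As a rough illustration of where the memory goes (generic SmallVectors, not the
exact fields involved), the inline capacity is part of every object's footprint
whether it is ever used or not:

  #include "llvm/ADT/SmallVector.h"
  #include <cstdio>

  int main() {
    // Each element of inline capacity costs sizeof(void*) per object, so a
    // large N multiplied across many AnalysisUsage-style objects adds up.
    std::printf("N = 2:  %zu bytes\n", sizeof(llvm::SmallVector<void *, 2>));
    std::printf("N = 32: %zu bytes\n", sizeof(llvm::SmallVector<void *, 32>));
  }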
Differential Revision: http://reviews.llvm.org/D15244
llvm-svn: 254974
This is supposed to force-link the Interpreter by inserting a dead
call to LLVMLinkInInterpreter().
Since it is actually an empty function, there is no reason for the
call to be dead.
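A minimal sketch of the force-linking idiom this simplifies; because
LLVMLinkInInterpreter() is a real (if empty) function, it can simply be called:

  extern "C" void LLVMLinkInInterpreter();

  namespace {
  struct ForceInterpreterLinking {
    // Referencing the symbol pulls the Interpreter object file into the link;
    // no dead-call trickery is needed for an empty function.
    ForceInterpreterLinking() { LLVMLinkInInterpreter(); }
  } ForceLink;
  } // namespace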
From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 254956
Summary:
Add a field on the PassManagerBuilder that clang or gold can use to pass
down a pointer to the function index in memory to use for importing when
the ThinLTO backend is triggered. Add support to supply this to the
function import pass.
Reviewers: joker.eph, dexonsmith
Subscribers: davidxl, llvm-commits, joker.eph
Differential Revision: http://reviews.llvm.org/D15024
llvm-svn: 254926
This is needed to support linking of module-level metadata as a
postpass after function importing, where we will be leaving temporary
metadata on imported instructions until the postpass metadata import.
Also added a unittest. Split from D14838.
llvm-svn: 254914
This removes the code path that generates "synchronous" (only correct at call site) CFA.
We will probably want to re-introduce it once we are capable of emitting different
.eh_frame and .debug_frame sections.
Differential Revision: http://reviews.llvm.org/D14948
llvm-svn: 254874
Summary:
There are `SelectPatternFlavor`s that don't represent min or max idioms,
and we should not be passing those to `getCmpPredicateForMinMax`.
Fixes PR25745.
Reviewers: majnemer
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D15249
llvm-svn: 254869
Different versions of the indexed format may use different
name uniquing schemes for static functions. Pass the
version info to the name interface so that the right
scheme can be picked (for profile lookup).
llvm-svn: 254838
The indexList's nodes are all allocated on a BumpPtrAllocator, so it's
more efficient to let them be freed when it goes away, rather than
deleting them directly. This is a follow up to r254794.
llvm-svn: 254808
When the notion of target-specific memory intrinsics was introduced to EarlyCSE, the commit confused the notions of volatile and simple memory access. Since I'm about to start working on this area, clean up the naming so that patches aren't horribly confusing. Note that the actual implementation was always bailing if the load or store wasn't simple.
Reminder:
- "volatile" - C++ volatile, can't remove any memory operations, but in principal unordered
- "ordered" - imposes ordering constraints on other nearby memory operations
- "atomic" - can't be split or sheared. In LLVM terms, all "ordered" operations are also atomic so the predicate "isAtomic" is often used.
- "simple" - a load which is none of the above. These are normal loads and what most of the optimizer works with.
llvm-svn: 254805
In r254760, I introduced the usage of a BumpPtrAllocator for the AnalysisUsage instances held by the PassManager. This turns out to have been incorrect, since a BumpPtrAllocator does not run the destructors of objects when deallocating memory. Since a few of our SmallVectors had grown beyond their small size, we ended up with some leaked memory. We need to use a SpecificBumpPtrAllocator instead.
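A minimal sketch of the difference, assuming some type with a non-trivial
destructor:

  #include "llvm/ADT/SmallVector.h"
  #include "llvm/Support/Allocator.h"
  #include <new>
  using namespace llvm;

  struct Usage { SmallVector<void *, 4> Vec; }; // heap-allocates once it grows

  void sketch() {
    BumpPtrAllocator BPA;
    new (BPA.Allocate<Usage>()) Usage(); // ~Usage() never runs: anything Vec
                                         // allocated on the heap is leaked

    SpecificBumpPtrAllocator<Usage> SBPA;
    new (SBPA.Allocate()) Usage();       // ~Usage() runs when SBPA is destroyed
  }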
llvm-svn: 254803
The issue appears to have been that the copy constructor of the SmallVector was being invoked, and this was somehow leading to leaked memory. This patch avoids the symptom, but likely doesn't address the underlying problem. I'm still investigating the root cause, but wanted to avoid the memory leak in the meantime. Even with the underlying fix, avoiding the redundant allocation is worthwhile.
llvm-svn: 254795
When a `SlotIndexes` is destroyed, `ileAllocator` will currently be
destructed before `IndexList`, but all of `IndexList`'s storage has
been allocated by `ileAllocator`. This means we'll call destructors on
garbage data, which is very bad. This can be avoided by putting the
BumpPtrAllocator earlier in the class than anything it allocates.
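A schematic sketch of the fix (illustrative names, not the real SlotIndexes
members); members are destroyed in reverse declaration order, so the allocator
that owns the nodes' storage has to be declared before the list that destroys
them:

  #include "llvm/Support/Allocator.h"
  using namespace llvm;

  struct Entry {
    Entry *Next = nullptr;
    ~Entry() { /* per-node cleanup */ }
  };

  struct EntryList {
    Entry *Head = nullptr;
    ~EntryList() {
      for (Entry *E = Head; E;) { // walks storage owned by the allocator
        Entry *N = E->Next;
        E->~Entry();
        E = N;
      }
    }
  };

  struct SlotIndexesLike {
    BumpPtrAllocator Allocator; // declared first  => destroyed last
    EntryList List;             // declared second => destroyed first, while
                                // the slabs backing its entries still exist
  };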
Unfortunately, I don't know how to test this. It depends very much on
memory layout, and the only evidence I have that this is actually
happening in practice are backtraces that might be explained by this.
By inspection though, the code is obviously dangerous/wrong, and this
is the right thing to do.
I'll follow up later with a patch that calls clearAndLeakNodesUnsafely
on the list, since there isn't much point in destructing them when
they're allocated in a BPA anyway, but I figured it makes sense to
commit the correctness fix separately from that optimization.
llvm-svn: 254794
Before this patch the diagnostic handler was optional. If it was not
passed, the one in the LLVMContext was used.
That is probably not a pattern we want to follow. If each area has an
optional callback, there is a sea of callbacks and it is hard to follow
which one is called.
Doing this also surfaced cases where the callback is a nice addition, such as
testing that no errors or warnings are reported.
The other option is to always use the diagnostic handler in the
LLVMContext. That has a few problems:
* To implement the C API we would have to set the diag handler and then
set it back to the original value.
* Code that creates the context might be far away from code that wants
the diagnostics.
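As a sketch of the two shapes being compared (parseThing is a hypothetical
entry point, not one of the functions actually changed):

  #include "llvm/IR/DiagnosticInfo.h"
  #include "llvm/IR/LLVMContext.h"
  #include <functional>

  using DiagHandler = std::function<void(const llvm::DiagnosticInfo &)>;

  // Chosen shape: the handler is a required parameter, so a test can pass a
  // handler that fails on any diagnostic to assert "no errors or warnings".
  bool parseThing(llvm::LLVMContext &Ctx, DiagHandler Handler);

  // Rejected shape: always use the handler stored in Ctx. The C API would
  // have to swap the context's handler and restore it, and the code that
  // created the context may be far from the code that wants the diagnostics.
  bool parseThingUsingContextHandler(llvm::LLVMContext &Ctx);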
I do have a patch that implements the second option and will send that as
an RFC.
llvm-svn: 254777
Currently `OperandBundleUse::operandsHaveAttr` computes its result
without being given a specific operand. This is problematic because it
forces us to say that, e.g., even non-pointer operands in `"deopt"`
operand bundles are `readonly`, which doesn't make sense.
This commit changes `operandsHaveAttr` to work in the context of a
specific operand, so that we can give the operand attributes that make
sense for the operand's `llvm::Type`.
llvm-svn: 254764
The LegacyPassManager was storing an instance of AnalysisUsage for each instance of each pass. In practice, most instances of a single pass class share the same dependencies. We can't rely on this because passes can (and some do) have dynamic dependencies based on instance options.
We can exploit the likely commonality by uniquing the usage information after querying the pass, but before storing it into the pass manager. This greatly reduces memory consumption by the AnalysisUsage objects. For a long pass pipeline, I measured a decrease in memory consumption for this storage of about 50%. I have not measured on the default O3 pipeline, but I suspect it will see some benefit as well, since many passes are repeated (e.g. InstCombine).
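A hedged sketch of the uniquing idea (not the exact LegacyPassManager code): instances that report identical requirements share one AnalysisUsage keyed by the pass's static ID, with a per-instance copy still needed for passes with dynamic dependencies.

  #include "llvm/ADT/DenseMap.h"
  #include "llvm/Pass.h"
  #include "llvm/Support/Allocator.h"
  #include <new>
  using namespace llvm;

  class UsageCache {
    SpecificBumpPtrAllocator<AnalysisUsage> Alloc;
    DenseMap<AnalysisID, AnalysisUsage *> ByPassID;

  public:
    const AnalysisUsage &get(Pass &P) {
      AnalysisUsage *&Slot = ByPassID[P.getPassID()];
      if (!Slot) {
        Slot = new (Alloc.Allocate()) AnalysisUsage();
        P.getAnalysisUsage(*Slot); // query the pass once, then share the result
      }
      return *Slot;
    }
  };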
Differential Revision: http://reviews.llvm.org/D14677
llvm-svn: 254760
This commit adds a new target-independent calling convention for C++ TLS
access functions. It aims to minimize overhead in the caller by preserving as
many registers as possible.
The target-specific implementation for X86-64 is defined as follows:
- Arguments are passed as for the default C calling convention
- The same applies for the return value(s)
- The callee preserves all GPRs except RAX and RDI
The access function makes C-style TLS function calls in the entry and exit
blocks; C-style TLS functions save far more registers than normal calls.
The added calling convention ties into the existing implementation of the
C-style TLS functions, so we can't simply use existing calling conventions
such as preserve_mostcc.
rdar://9001553
llvm-svn: 254737
This is a continuation of r253367.
What these functions return is owned by the caller, so they now return
std::unique_ptr.
The call can fail, so the return is wrapped in ErrorOr.
They have a context in which to report diagnostics, so they don't need to
take a string out parameter.
With this there are no calls to getGlobalContext in lib/LTO.
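A sketch of the resulting call-site pattern (createThing is a hypothetical
factory, not one of the LTO entry points):

  #include "llvm/Support/ErrorOr.h"
  #include <memory>
  #include <string>
  #include <system_error>
  using namespace llvm;

  ErrorOr<std::unique_ptr<std::string>> createThing(bool Ok) {
    if (!Ok) // failure travels in the return type, not a string out parameter
      return std::make_error_code(std::errc::invalid_argument);
    return std::unique_ptr<std::string>(new std::string("thing"));
  }

  void use() {
    auto ThingOrErr = createThing(true);
    if (std::error_code EC = ThingOrErr.getError())
      return; // report EC through whatever context the caller already has
    std::unique_ptr<std::string> Thing = std::move(*ThingOrErr); // caller owns it
  }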
llvm-svn: 254721
This class is turning into a useful interface, rather than an implementation
detail, so I'm dropping the 'Base' suffix.
No functional change.
llvm-svn: 254693
Re-committing with a change that avoids undefined uses getting put into
the VRegUses list.
The new algorithm remembers the uses encountered while walking backwards
until a matching def is found. Contrary to the previous version this:
- Works without LiveIntervals being available
- Allows increasing the precision to subregisters/lanemasks
(not used for now)
The changes in the AMDGPU tests are necessary because the R600 scheduler
is not stable with respect to the order of nodes in the ready queues.
Differential Revision: http://reviews.llvm.org/D9068
llvm-svn: 254683
This is a revised version of r254655 which uses a Printable wrapper
class to avoid ambiguous overload problems.
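A minimal sketch of the resulting idiom (printHex is a hypothetical helper):
returning a dedicated Printable wrapper keeps streaming unambiguous where
returning a raw value could collide with existing operator<< overloads:

  #include "llvm/Support/Printable.h"
  #include "llvm/Support/raw_ostream.h"
  using namespace llvm;

  static Printable printHex(unsigned Value) {
    return Printable([Value](raw_ostream &OS) {
      OS << "0x";
      OS.write_hex(Value);
    });
  }

  // Usage: errs() << printHex(42) << '\n';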
Differential Revision: http://reviews.llvm.org/D14348
llvm-svn: 254681
With the latest refactoring and code sharing patches landed,
it is possible to unify the value profile implementation between
raw and indexed profiles. This is the patch in the raw profile reader
that uses the common interface.
Differential Revision: http://reviews.llvm.org/D15056
llvm-svn: 254677
Currently in LLVM's cost model, a vectorized arithmetic instruction will have
a high cost if its type is split into multiple registers. However, this
penalty is too heavy and unnecessary. The overhead of the split should not
fall on the arithmetic instructions but on the instructions that implement the
split. Note that during vectorization we have already calculated the register
pressure, and we only choose a proper interleaving factor (and vectorization
factor) so that we don't use more registers than the maximum number available.
Here is a very simple example: if a vadd has cost 1, and if we double the VF
so that we need two registers to perform it, then its cost becomes 4 with
the current implementation, which will prevent us from using the larger VF.
Differential revision: http://reviews.llvm.org/D15159
llvm-svn: 254671
This change adds support for an optional weight when merging profile data with the llvm-profdata tool.
Weights are specified by adding an optional ':<weight>' suffix to the input file names.
Adding support for arbitrary weighting of input profile data allows for relative importance to be placed on the
input data from multiple training runs.
Both sampled and instrumented profiles are supported.
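For example (hypothetical file names), to weight one training run three times as heavily as another:

  llvm-profdata merge foo.profdata:3 bar.profdata:1 -o merged.profdata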
Reviewers: dnovillo, bogner, davidxl
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D14547
llvm-svn: 254669