The change in D78624 had a noticeable negative compile-time impact.
It seems that going through a function call for the MaxUsesToExplore
default is fairly expensive, at least if LLVM is not built with LTO.
This patch makes MaxUsesToExplore default to 0 and assigns the actual
default in the implementation instead. This recovers most of the
regression.
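A minimal sketch of the sentinel pattern described above, using hypothetical names (the real declarations live in CaptureTracking.h/.cpp and differ in detail; the default value below is illustrative):

  // Header: default the cap to 0 ("use the implementation default"), so the
  // declaration itself does not need a call into the option's accessor.
  bool pointerMayBeCapturedSketch(const void *V, unsigned MaxUsesToExplore = 0);

  // Implementation: resolve the sentinel once, out of line.
  static const unsigned DefaultMaxUsesToExplore = 20; // illustrative value

  bool pointerMayBeCapturedSketch(const void *V, unsigned MaxUsesToExplore) {
    if (MaxUsesToExplore == 0)
      MaxUsesToExplore = DefaultMaxUsesToExplore;
    // ... walk up to MaxUsesToExplore uses of V ...
    (void)V;
    return false;
  }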
Differential Revision: https://reviews.llvm.org/D78734
The motivation is to be able to experiment with the option and change it if required.
Reviewers: fedor.sergeev, apilipenko, rnk, jdoerfert
Reviewed By: fedor.sergeev
Subscribers: hiraditya, dantrushin, llvm-commits
Differential Revision: https://reviews.llvm.org/D78624
Essentially, fold OrderedBasicBlock into BasicBlock, and make it
auto-invalidate the instruction ordering when new instructions are
added. Notably, we don't need to invalidate it when removing
instructions, which is helpful when a pass mostly deletes dead
instructions rather than transforming them.
The downside is that Instruction grows from 56 bytes to 64 bytes. The
resulting LLVM code is substantially simpler and automatically handles
invalidation, which makes me think that this is the right speed and size
tradeoff.
The important change is in SymbolTableListTraitsImpl.h, where the numbering
is invalidated. Everything else should be straightforward.
We probably want to implement a fancier re-numbering scheme so that
local updates don't invalidate the ordering, but I plan for that to be
future work, maybe for someone else.
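A hedged usage sketch of what the folded-in ordering enables; as I understand this change, Instruction::comesBefore renumbers the block lazily only when instructions have been inserted since the last query:

  #include "llvm/IR/Instruction.h"
  using namespace llvm;

  // Both instructions must be in the same basic block; the block-local
  // numbering is recomputed only if an insertion invalidated it.
  static bool isBefore(const Instruction *A, const Instruction *B) {
    return A->comesBefore(B);
  }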
Reviewed By: lattner, vsk, fhahn, dexonsmith
Differential Revision: https://reviews.llvm.org/D51664
Summary:
CaptureTracker subclasses might have better dereferenceability
information, which allows null pointer checks to be treated as non-capturing.
The first user will be D59922.
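A hedged sketch of how such a subclass might look; the hook name isDereferenceableOrNull reflects my reading of this revision and should be treated as an assumption:

  #include "llvm/Analysis/CaptureTracking.h"
  #include "llvm/IR/DataLayout.h"
  using namespace llvm;

  struct DerefAwareTracker : public CaptureTracker {
    bool Captured = false;
    void tooManyUses() override { Captured = true; }
    bool captured(const Use *U) override {
      Captured = true;
      return true; // stop the traversal
    }
    // Supply better dereferenceability information than the default query,
    // allowing a compare against null to be treated as non-capturing.
    bool isDereferenceableOrNull(Value *O, const DataLayout &DL) override {
      return true; // e.g. a client with its own dereferenceability analysis
    }
  };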
Reviewers: sanjoy, hfinkel, aykevl, sstefan1, uenoku, xbolva00
Subscribers: hiraditya, bollu, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D66371
llvm-svn: 369305
Summary:
Previously we required the comparison against null to be "canonical", i.e.,
null had to be operand #1. This patch allows null to be in either operand,
similar to the handling of loaded globals that follows.
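An illustrative check (not the in-tree code) of what "null in either operand" means here:

  #include "llvm/IR/Constants.h"
  #include "llvm/IR/Instructions.h"
  using namespace llvm;

  // True if ICmp compares Ptr against a null pointer constant, regardless of
  // which side the null ended up on.
  static bool comparesAgainstNull(const ICmpInst *ICmp, const Value *Ptr) {
    const Value *Op0 = ICmp->getOperand(0), *Op1 = ICmp->getOperand(1);
    return (Op0 == Ptr && isa<ConstantPointerNull>(Op1)) ||
           (Op1 == Ptr && isa<ConstantPointerNull>(Op0));
  }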
Reviewers: sanjoy, hfinkel, aykevl, sstefan1, uenoku
Subscribers: hiraditya, bollu, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D66321
llvm-svn: 369158
Some uses of getArgumentAliasingToReturnedPointer and
isIntrinsicReturningPointerAliasingArgumentWithoutCapturing require the
calls/intrinsics to preserve the nullness of the argument.
For alias analysis, the nullness property does not really come into
play.
This patch explicitly sets it to true. In D61669, the alias analysis
uses will be switched to not require preserving nullness.
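A hedged sketch of a caller passing the flag; the exact signature of getArgumentAliasingToReturnedPointer around this revision is my assumption:

  #include "llvm/Analysis/ValueTracking.h"
  #include "llvm/IR/InstrTypes.h"
  using namespace llvm;

  // Capture-tracking-style users must not lose nullness information, so they
  // require it to be preserved; alias-analysis users can pass false later.
  static Value *aliasedReturnedArg(CallBase *Call) {
    return getArgumentAliasingToReturnedPointer(Call,
                                                /*MustPreserveNullness=*/true);
  }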
Reviewers: nlopes, efriedma, hfinkel, sanjoy, aqjune, jdoerfert
Reviewed By: jdoerfert
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D64150
llvm-svn: 368993
Pointers that are in-bounds (either through dereferenceable_or_null or
through a getelementptr inbounds) cannot be captured with a comparison
against null. There is no way to construct a pointer that is still in
bounds but also NULL.
This helps safe languages that insert null checks before load/store
instructions. Without this patch, almost all pointers would be
considered captured even for simple loads. With this patch, an icmp with
null will not be seen as escaping as long as certain conditions are met.
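A source-level illustration of the guarded-access pattern this targets (hypothetical code; the dereferenceability facts would come from attributes or an inbounds GEP in the IR):

  // With this change, the null check itself no longer counts as a capture of
  // p, so p can still be proven non-escaping.
  int readGuarded(int *p) {
    if (p != nullptr)
      return *p;
    return 0;
  }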
There was a lot of discussion about this patch. See the Phabricator
thread for details.
Differential Revision: https://reviews.llvm.org/D60047
llvm-svn: 362900
to reflect the new license.
We understand that people may be surprised that we're moving the header
entirely to discuss the new license. We checked this carefully with the
Foundation's lawyer and we believe this is the correct approach.
Essentially, all code in the project is now made available by the LLVM
project under our new license, so you will see that the license headers
include that license only. Some of our contributors have contributed
code under our old license, and accordingly, we have retained a copy of
our old license notice in the top-level files in each project and
repository.
llvm-svn: 351636
minted `CallBase` class instead of the `CallSite` wrapper.
This moves the largest interwoven collection of APIs that traffic in
`CallSite`s. While a handful of these could have been migrated with
a somewhat shallower migration by converting from a `CallSite` to
a `CallBase`, it hardly seemed worth it. Most of the APIs needed to
migrate together because of the complex interplay of AA APIs and the
fact that converting from a `CallBase` to a `CallSite` isn't free in its
current implementation.
Out of tree users of these APIs can fairly reliably migrate with some
combination of `.getInstruction()` on the `CallSite` instance and
casting the resulting pointer. The most generic form will look like `CS`
-> `cast_or_null<CallBase>(CS.getInstruction())` but in most cases there
is a more elegant migration. Hopefully, this migrates enough APIs for
users to fully move from `CallSite` to the base class. All of the
in-tree users were easily migrated in that fashion.
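A hedged sketch of that generic migration shim, using the legacy `CallSite` wrapper (since removed) that this series phases out:

  #include "llvm/IR/CallSite.h"
  #include "llvm/IR/InstrTypes.h"
  #include "llvm/Support/Casting.h"
  using namespace llvm;

  // Null CallSites map to null CallBase pointers; most call sites only need
  // the cast on the underlying instruction.
  static const CallBase *toCallBase(ImmutableCallSite CS) {
    return cast_or_null<CallBase>(CS.getInstruction());
  }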
Thanks for the review from Saleem!
Differential Revision: https://reviews.llvm.org/D55641
llvm-svn: 350503
This is a follow-up to rL347910. In the original patch I somehow forgot to pass
the limit from the wrappers to the function that actually does the work.
llvm-svn: 349438
Currently CaptureTracker gives up if it encounters a value with more than 20
uses. The motivation for this cap is to keep it relatively cheap for
BasicAliasAnalysis use case, where the results can't be cached. However, other
clients of CaptureTracker might be fine with a higher cost. This patch introduces an
argument for PointerMayBeCaptured functions to specify the max number of uses to
explore. The motivation for this change is a downstream user of CaptureTracker,
but I believe upstream clients of CaptureTracker might also benefit from a more
fine-grained cap.
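A hedged sketch of a client opting into a higher cap; the parameter's exact position and default have shifted across later revisions:

  #include "llvm/Analysis/CaptureTracking.h"
  using namespace llvm;

  static bool mayBeCaptured(const Value *Ptr) {
    // A caching client that can tolerate more work than BasicAA might raise
    // the limit well above the old hard-coded 20.
    return PointerMayBeCaptured(Ptr, /*ReturnCaptures=*/true,
                                /*StoreCaptures=*/true,
                                /*MaxUsesToExplore=*/100);
  }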
Reviewed By: hfinkel
Differential Revision: https://reviews.llvm.org/D55042
llvm-svn: 347910
Summary:
A patch for capture tracking broke bootstrap of clang with
-fstrict-vtable-pointers, which resulted in a debugging nightmare.
It was fixed in https://reviews.llvm.org/D46900, but as it turned
out, there were other affected parts, like the inliner (computation
of noalias metadata), that I found after bootstrapping with
assertions enabled.
Reviewers: hfinkel, rsmith, chandlerc, amharc, kuhar
Subscribers: JDevlieghere, eraman, llvm-commits, hiraditya
Differential Revision: https://reviews.llvm.org/D47088
llvm-svn: 333070
Summary:
A recent patch ([[ https://reviews.llvm.org/rL331587 | rL331587 ]]) to Capture Tracking taught it that the `launder_invariant_group` intrinsic captures its argument only by returning it. Unfortunately, BasicAA still considered every call instruction as a possible escape source and hence concluded that the result of a `launder_invariant_group` call cannot alias any local non-escaping value. This led to [[ https://bugs.llvm.org/show_bug.cgi?id=37458 | bug 37458 ]].
This patch updates the relevant check for escape sources in BasicAA.
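An illustrative sketch only (not the BasicAA code) of the kind of exemption described above:

  #include "llvm/IR/InstrTypes.h"
  #include "llvm/IR/IntrinsicInst.h"
  using namespace llvm;

  // A call that merely launders its pointer argument returns an alias of that
  // argument, so it is not by itself a source of escaped pointers.
  static bool callIsEscapeSource(const CallBase *Call) {
    if (const auto *II = dyn_cast<IntrinsicInst>(Call))
      if (II->getIntrinsicID() == Intrinsic::launder_invariant_group)
        return false;
    return true; // other calls may return pointers that escaped elsewhere
  }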
Reviewers: Prazek, kuhar, rsmith, hfinkel, sanjoy, xbolva00
Reviewed By: hfinkel, xbolva00
Subscribers: JDevlieghere, hiraditya, llvm-commits
Differential Revision: https://reviews.llvm.org/D46900
llvm-svn: 332466
Summary:
launder.invariant.group has the same capturing rules as
bitcast, gep, etc.: the original value is not captured
if the returned pointer is not captured.
With this patch, we mark 40% more functions as noalias when compiling with -fstrict-vtable-pointers;
1078 vs 1778 (39.37%)
Reviewers: sanjoy, davide, nlewycky, majnemer, mehdi_amini
Subscribers: JDevlieghere, llvm-commits
Differential Revision: https://reviews.llvm.org/D32673
llvm-svn: 331587
Summary:
The PointerMayBeCapturedBefore function's DomTree arg should be
const instead of non-const. There are no non-const uses of it
in the function.
llvm-svn: 330769
I did this a long time ago with a janky python script, but now
clang-format has built-in support for this. I fed clang-format every
line with a #include and let it re-sort things according to the precise
LLVM rules for include ordering baked into clang-format these days.
I've reverted a number of files where the results of sorting includes
isn't healthy. Either places where we have legacy code relying on
particular include ordering (where possible, I'll fix these separately)
or where we have particular formatting around #include lines that
I didn't want to disturb in this patch.
This patch is *entirely* mechanical. If you get merge conflicts or
anything, just ignore the changes in this patch and run clang-format
over your #include lines in the files.
Sorry for any noise here, but it is important to keep these things
stable. I was seeing an increasing number of patches with irrelevant
re-ordering of #include lines because clang-format was used. This patch
at least isolates that churn, makes it easy to skip when resolving
conflicts, and gets us to a clean baseline (again).
llvm-svn: 304787
The memory locations that correspond to volatile operations are very
special: they are observed by the machine in ways which we cannot
reason about.
Differential Revision: http://reviews.llvm.org/D20555
llvm-svn: 270879
Summary:
When a non-escaping pointer is compared to a global value, the
comparison can be folded even if the corresponding malloc/allocation
call cannot be elided.
We need to make sure the global value is not null, since comparisons to
null cannot be folded.
In the future, we should also handle cases where the comparison
instruction dominates the pointer escape.
Reviewers: sanjoy
Subscribers: s.egerton, llvm-commits
Differential Revision: http://reviews.llvm.org/D19549
llvm-svn: 268390
These atomic operations are conceptually both a load and a store from the same location. As such, we can treat them as the most conservative of those two components, which in practice means we can treat them like stores. A cmpxchg or atomicrmw captures the values, but not the locations accessed.
Note: We can probably be more aggressive about the comparison value in a cmpxchg, since for it to be in memory it must already be captured, but I figured it was better to avoid that for the moment.
Note 2: It turns out that since we don't actually support cmpxchg of pointer type, writing a negative test is impossible.
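A hedged sketch of how a capture check might classify the uses, treating these instructions like stores as described above:

  #include "llvm/IR/Instructions.h"
  using namespace llvm;

  // For atomicrmw and cmpxchg the pointer operand is operand 0: it only names
  // the accessed location and is not captured. Any value operand is written
  // to memory and must be treated as captured, exactly like a store.
  static bool useMayCapture(const Use &U) {
    const User *I = U.getUser();
    if (isa<AtomicRMWInst>(I) || isa<AtomicCmpXchgInst>(I))
      return U.getOperandNo() != 0;
    return true; // conservative default for other users
  }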
Differential Revision: http://reviews.llvm.org/D17400
llvm-svn: 261245
While we have successfully implemented a funclet-oriented EH scheme on
top of LLVM IR, our scheme has some notable deficiencies:
- catchendpad and cleanupendpad are necessary in the current design
but they are difficult to explain to others, even to seasoned LLVM
experts.
- catchendpad and cleanupendpad are optimization barriers. They cannot
be split and force all potentially throwing call-sites to be invokes.
This has a noticeable effect on the quality of our code generation.
- catchpad, while similar in some aspects to invoke, is fairly awkward.
It is unsplittable, starts a funclet, and has control flow to other
funclets.
- The nesting relationship between funclets is currently a property of
control flow edges. Because of this, we are forced to carefully
analyze the flow graph to see whether illegal nesting among funclets
might exist. While we have logic to clone funclets when
they are illegally nested, it would be nicer if we had a
representation which forbade them upfront.
Let's clean this up a bit by doing the following:
- Instead, make catchpad more like cleanuppad and landingpad: no control
flow, just a bunch of simple operands; catchpad would be splittable.
- Introduce catchswitch, a control flow instruction designed to model
the constraints of funclet oriented EH.
- Make funclet scoping explicit by having funclet instructions consume
the token produced by the funclet which contains them.
- Remove catchendpad and cleanupendpad. Their presence can be inferred
implicitly using coloring information.
N.B. The state numbering code for the CLR has been updated but the
veracity of its output cannot be vouched for. An expert should take a
look to make sure the results are reasonable.
Reviewers: rnk, JosephTremoulet, andrew.w.kaylor
Differential Revision: http://reviews.llvm.org/D15139
llvm-svn: 255422
Summary:
Earlier CaptureTracking would assume all "interesting" operands to a
call or invoke were its arguments. With operand bundles this is no
longer true.
Note: an earlier change got `doesNotCapture` working correctly with
operand bundles.
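A hedged sketch using today's CallBase API (the original change predates CallBase and used CallSite; treat the exact calls as assumptions):

  #include "llvm/IR/InstrTypes.h"
  using namespace llvm;

  // Only real argument operands are covered by the callee's nocapture
  // attributes; a use carried in an operand bundle must be handled
  // conservatively.
  static bool useRespectsNoCapture(const CallBase &Call, const Use &U) {
    if (Call.isBundleOperand(&U))
      return false;
    return Call.isArgOperand(&U) &&
           Call.doesNotCapture(Call.getArgOperandNo(&U));
  }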
This change uses DSE to test the changes to CaptureTracking. DSE is a
vehicle for testing only, and is not directly involved in this change.
Reviewers: reames, majnemer
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D14306
llvm-svn: 252095
Some personality routines require funclet exit points to be clearly
marked; this is done by producing a token at the funclet pad and
consuming it at the corresponding ret instruction. CleanupReturnInst
already had a spot for this operand but CatchReturnInst did not.
Other personality routines don't need to use this which is why it has
been made optional.
llvm-svn: 245149
This patch is a follow-up to r240560 and is a further step toward
mitigating the compile-time performance issues in CaptureTracker.
By providing the CaptureTracker with a "cached ordered basic block"
instead of computing it every time, MemDepAnalysis can use this cache
throughout its calls to AA->callCapturesBefore, avoiding recomputing it
for every scanned instruction. In the same testcase used in r240560,
compile time is reduced from 2min to 30s.
This also fixes PR22348.
rdar://problem/19230319
Differential Revision: http://reviews.llvm.org/D11364
llvm-svn: 243750
CaptureTracking becomes very expensive in large basic blocks while
calling PointerMayBeCaptured. PointerMayBeCaptured scans the BB the
number of times equal to the number of uses of 'BeforeHere', which is
currently capped at 20 and bails out with Tracker->tooManyUses().
The bottleneck here is the number of calls to PointerMayBeCaptured * the
basic block scan. In a testcase with an 82k-instruction BB,
PointerMayBeCaptured is called 130k times, leading to 'shouldExplore'
running 527k times; this currently takes ~12min.
To fix this we locally (within PointerMayBeCaptured) number the
instructions in the basic block using a DenseMap to cache instruction
positions/numbers. We build the cache incrementally every time we need
to scan an unexplored part of the BB, improving compile time to only
take ~2min.
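A minimal, self-contained sketch of that incremental numbering scheme (names are illustrative; the in-tree cache has since been superseded):

  #include "llvm/ADT/DenseMap.h"
  #include "llvm/IR/BasicBlock.h"
  #include "llvm/IR/Instruction.h"
  using namespace llvm;

  class LocalNumbering {
    DenseMap<const Instruction *, unsigned> Numbers;
    BasicBlock::const_iterator Next;
    unsigned NextNumber = 0;

  public:
    explicit LocalNumbering(const BasicBlock &BB) : Next(BB.begin()) {}

    // Extend the numbering on demand; each instruction is visited at most
    // once across all queries instead of once per query. I must be in BB.
    unsigned getNumber(const Instruction *I) {
      auto It = Numbers.find(I);
      if (It != Numbers.end())
        return It->second;
      while (true) {
        const Instruction *Cur = &*Next++;
        Numbers[Cur] = NextNumber++;
        if (Cur == I)
          return NextNumber - 1;
      }
    }
  };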
This triggers in the flow: DeadStoreElimination -> MemDepAnalysis ->
CaptureTracking.
Side note: after multiple runs of the test-suite I've seen no
performance or compile-time regressions, but did note a couple of
compile-time improvements:
Performance Improvements - Compile Time
Benchmark                                                         Delta   Previous  Current  StdDev
SingleSource/Benchmarks/Misc-C++/bigfib                           -4.48%  0.8547    0.8164   0.0022
MultiSource/Benchmarks/TSVC/LoopRerolling-dbl/LoopRerolling-dbl   -1.47%  1.3912    1.3707   0.0056
Differential Revision: http://reviews.llvm.org/D7010
llvm-svn: 240560
The patch is generated using this command:
tools/clang/tools/extra/clang-tidy/tool/run-clang-tidy.py -fix \
-checks=-*,llvm-namespace-comment -header-filter='llvm/.*|clang/.*' \
llvm/lib/
Thanks to Eugene Kosov for the original patch!
llvm-svn: 240137
utils/sort_includes.py.
I clearly haven't done this in a while, so more changed than usual. This
even uncovered a missing include from the InstrProf library that I've
added. No functionality changed here, just mechanical cleanup of the
include order.
llvm-svn: 225974
This is to be consistent with StringSet and ultimately with the standard
library's associative container insert function.
This led to updating SmallSet::insert to return pair<iterator, bool>,
and then to update SmallPtrSet::insert to return pair<iterator, bool>,
and then to update all the existing users of those functions...
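A small usage sketch of the new contract (the helper and element type are just for illustration):

  #include "llvm/ADT/SmallPtrSet.h"
  #include "llvm/IR/Instruction.h"
  using namespace llvm;

  static bool insertOnce(SmallPtrSet<const Instruction *, 8> &Visited,
                         const Instruction *I) {
    // .second is true only if I was not already present.
    return Visited.insert(I).second;
  }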
llvm-svn: 222334
As it turns out, the capture tracker named CapturesBefore used by AA, and now
available via the PointerMayBeCapturedBefore function, would have been
more aptly named CapturedBeforeOrAt, because it considers captures at the
instruction provided. This is not always what one wants, and it is difficult to
get the strictly-before behavior given only the current interface. This adds an
additional parameter which controls whether or not you want to include
captures at the provided instruction. The default is not to include the
instruction provided, so that 'Before' matches its name.
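A hedged call sketch showing the new parameter; the surrounding parameter list has varied across revisions and follows a later form here:

  #include "llvm/Analysis/CaptureTracking.h"
  #include "llvm/IR/Dominators.h"
  using namespace llvm;

  static bool capturedStrictlyBefore(const Value *Ptr, const Instruction *I,
                                     const DominatorTree *DT) {
    // IncludeI=false keeps the strictly-before semantics the name suggests.
    return PointerMayBeCapturedBefore(Ptr, /*ReturnCaptures=*/true,
                                      /*StoreCaptures=*/true, I, DT,
                                      /*IncludeI=*/false);
  }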
No functionality change intended.
llvm-svn: 213582
There were two generally useful CaptureTracker classes defined in LLVM: the
simple tracker defined in CaptureTracking (and made available via the
PointerMayBeCaptured utility function), and the CapturesBefore tracker
available only inside of AA. This change moves the CapturesBefore tracker into
CaptureTracking, generalizes it slightly (by adding a ReturnCaptures
parameter), and makes it generally available via a PointerMayBeCapturedBefore
utility function.
This logic will be needed, for example, to perform noalias function parameter
attribute inference.
No functionality change intended.
llvm-svn: 213519
This requires a number of steps.
1) Move value_use_iterator into the Value class as an implementation
detail
2) Change it to actually be a *Use* iterator rather than a *User*
iterator.
3) Add an adaptor which is a User iterator that always looks through the
Use to the User.
4) Wrap these in Value::use_iterator and Value::user_iterator typedefs.
5) Add the range adaptors as Value::uses() and Value::users().
6) Update *all* of the callers to correctly distinguish between whether
they wanted a use_iterator (and to explicitly dig out the User when
needed), or a user_iterator which makes the Use itself totally
opaque.
Because #6 requires churning essentially everything that walked the
Use-Def chains, I went ahead and added all of the range adaptors and
switched them to range-based loops where appropriate. Also because the
renaming requires at least churning every line of code, it didn't make
any sense to split these up into multiple commits -- all of which would
touch all of the same lines of code.
The result is still not quite optimal. The Value::use_iterator is a nice
regular iterator, but Value::user_iterator is an iterator over User*s
rather than over the User objects themselves. As a consequence, it fits
a bit awkwardly into the range-based world and it has the weird
extra-dereferencing 'operator->' that so many of our iterators have.
I think this could be fixed by providing something which transforms
a range of T&s into a range of T*s, but that *can* be separated into
another patch, and it isn't yet 100% clear whether this is the right
move.
However, this change gets us most of the benefit and cleans up
a substantial amount of code around Use and User. =]
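A brief usage sketch of the two range adaptors discussed above:

  #include "llvm/IR/Use.h"
  #include "llvm/IR/User.h"
  #include "llvm/IR/Value.h"
  using namespace llvm;

  static unsigned countAndVisit(Value *V) {
    unsigned N = 0;
    // uses() iterates over the Use edges themselves...
    for (Use &U : V->uses()) {
      (void)U;
      ++N;
    }
    // ...while users() looks through each Use to the using User.
    for (User *Usr : V->users())
      (void)Usr; // inspect or transform the user here
    return N;
  }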
llvm-svn: 203364
to ensure we don't mess up any of the overrides. Necessary for cleaning
up the Value use iterators and enabling range-based traversing of use
lists.
llvm-svn: 202958
The heuristic was added to avoid spending too much compile time. A specially
crafted test case (PR17461, PR16474) with many uses on a select or bitcast
instruction can still trigger the slow case. Add a check for that case.
This only affects compile time; I don't have a good way to test it.
llvm-svn: 191896
captured. This allows the tracker to look at the specific use, which may be
especially interesting for function calls.
Use this to fix 'nocapture' deduction in FunctionAttrs. The existing one does
not iterate until a fixpoint and does not guarantee that it produces the same
result regardless of iteration order. The new implementation builds up a graph
of how arguments are passed from function to function, and uses a bottom-up walk
on the argument-SCCs to assign nocapture. This gets us nocapture more often, and
does so rather efficiently and independently of iteration order.
llvm-svn: 147327