Commit 3ee826594a:

LazyBlockFrequencyInfoPass, LazyBranchProbabilityInfoPass and LoopAccessLegacyAnalysis all cache pointers to their nested required analysis passes. One needs to use addRequiredTransitive to describe that those nested passes cannot be freed until the analysis passes that cache them are no longer used themselves.

There is still a bit of a mess around the getLazyBPIAnalysisUsage and getLazyBFIAnalysisUsage functions, which are used from Transform, CodeGen and Analysis passes alike. Using addRequiredTransitive is acceptable when called from Transform and CodeGen passes, and it is necessary when called from other Analysis passes, so addRequiredTransitive is the more correct choice here. An alternative solution would be to add a bool option to those functions letting the caller state whether it is an analysis pass or not. Since these lazy passes will be obsolete once the new pass manager has conquered the world, we can leave it like this for now.

The intention of the patch is to fix PR49950. It at least solves the problem for the reproducer in PR49950, although that reproducer needs five passes in a specific order, so there are lots of other "solutions" that could avoid the crash without actually fixing the root cause.

This is a reapply of commit ...
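As a rough sketch of what the commit describes (not the actual upstream diff; the helper name below is hypothetical and the dependency list is only illustrative), the change amounts to declaring the cached analyses with addRequiredTransitive instead of addRequired in the legacy pass manager:

    // Hypothetical illustration of the addRequired -> addRequiredTransitive
    // switch described above; the real pass names and dependency lists differ.
    #include "llvm/Analysis/LazyBlockFrequencyInfo.h"
    #include "llvm/Analysis/LoopInfo.h"
    #include "llvm/Pass.h"

    using namespace llvm;

    // Callers of this helper cache pointers into the result of
    // LazyBlockFrequencyInfoPass, so the pass manager must keep the nested
    // analyses alive for as long as the caller's own result may still be used.
    // addRequiredTransitive (rather than addRequired) expresses exactly that
    // lifetime requirement to the legacy pass manager.
    static void getLazyBFIAnalysisUsageSketch(AnalysisUsage &AU) {
      AU.addRequiredTransitive<LazyBlockFrequencyInfoPass>();
      AU.addRequiredTransitive<LoopInfoWrapperPass>();
    }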
Repository contents: cmake, docs, include/polly, lib, test, tools, unittests, utils, www, .arclint, .gitattributes, .gitignore, CMakeLists.txt, CREDITS.txt, LICENSE.TXT, README
README
Polly - Polyhedral optimizations for LLVM
-----------------------------------------
http://polly.llvm.org/

Polly uses a mathematical representation, the polyhedral model, to represent and transform loops and other control flow structures. Using an abstract representation it is possible to reason about transformations in a more general way and to use highly optimized linear programming libraries to figure out the optimal loop structure. These transformations can be used to do constant propagation through arrays, remove dead loop iterations, optimize loops for cache locality, optimize arrays, apply advanced automatic parallelization, drive vectorization, or they can be used to do software pipelining.
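As an illustration, a classic matrix-multiply kernel is the kind of static-control loop nest the polyhedral model can represent exactly and that Polly can, for example, tile for cache locality. The example and the build line below are only a sketch, assuming a clang built with Polly enabled; the exact flags may vary with your build:

    // matmul.cpp - hypothetical example of a loop nest Polly can optimize.
    // Assumed build line (clang with Polly): clang++ -O3 -mllvm -polly matmul.cpp
    #include <cstdio>

    constexpr int N = 1024;
    static float A[N][N], B[N][N], C[N][N];

    int main() {
      // Affine loop bounds and affine array subscripts: the whole nest can be
      // represented in the polyhedral model, so Polly may tile or interchange
      // the loops for cache locality, or parallelize/vectorize them.
      for (int i = 0; i < N; ++i)
        for (int j = 0; j < N; ++j)
          for (int k = 0; k < N; ++k)
            C[i][j] += A[i][k] * B[k][j];

      std::printf("C[0][0] = %f\n", C[0][0]);
      return 0;
    }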