Polly - Polyhedral optimizations for LLVM
-----------------------------------------
http://polly.llvm.org/

Polly uses a mathematical representation, the polyhedral model, to represent and
transform loops and other control flow structures. Using an abstract
representation, it is possible to reason about transformations in a more general
way and to use highly optimized linear programming libraries to figure out the
optimal loop structure. These transformations can be used to perform constant
propagation through arrays, remove dead loop iterations, optimize loops for
cache locality, optimize arrays, apply advanced automatic parallelization, drive
vectorization, or perform software pipelining.
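
To make this concrete, the sketch below shows the kind of cache-locality
transformation a polyhedral optimizer can derive: a naive matrix-multiply loop
nest next to a tiled version of the same computation. This is an illustrative
example only; the array size N, the tile size TILE, and the function names are
made up for the sketch, and Polly derives such schedules automatically from the
original code rather than requiring them to be written by hand.

  /* Illustrative sketch of loop tiling for cache locality. The names
   * (N, TILE, matmul_naive, matmul_tiled) are hypothetical and are not
   * part of Polly; a polyhedral optimizer can derive an equivalent
   * schedule automatically after proving the reordering is legal. */
  #include <stdio.h>

  #define N 512
  #define TILE 64   /* TILE divides N, so no boundary handling is needed */

  static double A[N][N], B[N][N], C[N][N];

  /* Original code: the innermost loop walks B column-wise (B[k][j] with
   * k varying), striding through memory and missing the cache. */
  static void matmul_naive(void) {
    for (int i = 0; i < N; ++i)
      for (int j = 0; j < N; ++j)
        for (int k = 0; k < N; ++k)
          C[i][j] += A[i][k] * B[k][j];
  }

  /* Tiled schedule: iterate over TILE x TILE blocks so the working set
   * of each block stays resident in cache. */
  static void matmul_tiled(void) {
    for (int ii = 0; ii < N; ii += TILE)
      for (int jj = 0; jj < N; jj += TILE)
        for (int kk = 0; kk < N; kk += TILE)
          for (int i = ii; i < ii + TILE; ++i)
            for (int j = jj; j < jj + TILE; ++j)
              for (int k = kk; k < kk + TILE; ++k)
                C[i][j] += A[i][k] * B[k][j];
  }

  int main(void) {
    for (int i = 0; i < N; ++i)
      for (int j = 0; j < N; ++j) {
        A[i][j] = 1.0;
        B[i][j] = 2.0;
        C[i][j] = 0.0;
      }
    matmul_tiled();   /* computes the same result as matmul_naive() */
    printf("C[0][0] = %f\n", C[0][0]);
    return 0;
  }

In practice, a Clang built with Polly typically enables these optimizations by
passing -mllvm -polly together with -O3; the exact set of supported options may
vary between releases, so consult the Polly documentation.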