Commit Graph

68 Commits

Author SHA1 Message Date
Johannes Doerfert 0f0d209bec Use the SCoP directly for canSynthesize [NFC]
llvm-svn: 270429
2016-05-23 12:47:09 +00:00
Johannes Doerfert ef74443c97 Duplicate part of the Region interface in the Scop class [NFC]
This allows us to use the SCoP directly for various queries,
  thus hiding the underlying region more often.

llvm-svn: 270426
2016-05-23 12:42:38 +00:00
Johannes Doerfert 952b5304bc Add and use Scop::contains(Loop/BasicBlock/Instruction) [NFC]
llvm-svn: 270424
2016-05-23 12:40:48 +00:00
Johannes Doerfert a61eda7698 [FIX] Let ScalarEvolution forget hoisted values
We have to rethink the handling of escaping values in order to make
  this kind of "fix" go away.

llvm-svn: 270409
2016-05-23 09:02:54 +00:00
Johannes Doerfert 404a0f81ea Check overflows in RTCs and bail accordingly
We utilize assumptions on the input to model IR in the polyhedral
  world. To verify these assumptions we version the code and guard it
  with a runtime check (RTC). However, since the RTCs are themselves
  generated from the polyhedral representation, we generate them under
  the same assumptions that they are supposed to verify. In other words,
  the guarantees that we try to provide with the RTCs do not hold for
  the RTCs themselves. It is therefore necessary to employ a different
  check for the RTCs that verifies that the assumptions hold for them,
  too.

Differential Revision: http://reviews.llvm.org/D20165

llvm-svn: 269299
2016-05-12 15:12:43 +00:00
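To illustrate the idea of overflow-aware runtime checks described in the entry above, here is a minimal stand-alone C++ sketch; the function and parameter names are hypothetical and this is not Polly's actual code generation.

#include <cstdint>

// Hypothetical sketch: evaluate an RTC bound of the form N * Stride + Offset
// with explicit overflow checks (GCC/Clang __builtin_*_overflow). If any
// intermediate computation overflows, the RTC is forced to "false" so the
// unoptimized original code is executed.
static bool evaluateRTC(int64_t N, int64_t Stride, int64_t Offset,
                        int64_t UpperBound) {
  int64_t Scaled, End;
  if (__builtin_mul_overflow(N, Stride, &Scaled))
    return false; // Overflow: the check itself is not trustworthy.
  if (__builtin_add_overflow(Scaled, Offset, &End))
    return false;
  return End <= UpperBound; // Only exact computations reach this comparison.
}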
Johannes Doerfert e243753a4d Simplify access relation for invariant loads early [NFC]
llvm-svn: 269046
2016-05-10 11:59:59 +00:00
Johannes Doerfert 5f173d414e Prevent complex access ranges with low number of pieces.
Previously we checked the number of pieces to decide whether or not an
  invariant load was too complex to be generated. However, there are
  cases where, e.g., divisions cause the complexity to spike regardless
  of the number of pieces. We therefore now check the total number of
  involved dimensions, which increases with the number of pieces but
  also with the number of divisions.

llvm-svn: 269045
2016-05-10 11:46:57 +00:00
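A small stand-alone sketch of the new heuristic described above; the data structures and the limit are made up for illustration (Polly works on isl sets instead): count the dimensions involved across all pieces rather than the pieces themselves, so divisions that add dimensions are penalized too.

#include <vector>

// Hypothetical model: each piece of a piecewise access range involves a
// number of dimensions (input dimensions plus existential dimensions that
// divisions introduce).
struct Piece { unsigned InvolvedDims; };

// Old heuristic: bail if there are too many pieces. New heuristic: bail if
// the total number of involved dimensions is too large, which also grows
// when divisions add dimensions without adding pieces.
static bool isTooComplex(const std::vector<Piece> &Pieces, unsigned MaxDims) {
  unsigned Total = 0;
  for (const Piece &P : Pieces)
    Total += P.InvolvedDims;
  return Total > MaxDims;
}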
Michael Kruse bc150127ae Rename Conjuncts -> Disjunctions. NFC.
The check for complexity compares the number of polyhedra in a set,
which are combined by disjunctions (union, "OR"),
not conjunctions (intersection, "AND").

llvm-svn: 268223
2016-05-02 12:25:18 +00:00
Johannes Doerfert 8ab2803b63 [FIX] Propagate execution domain of invariant loads
If the base pointer of an invariant load is itself loaded conditionally, that
  condition needs to hold for the invariant load too. The structure of the
  program implies this for the domain constraints but not for imprecisions in
  the modeling. We therefore propagate the execution context of base
  pointers during code generation and thus ensure the derived pointer does
  not access an invalid base pointer.

llvm-svn: 267707
2016-04-27 12:49:11 +00:00
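The propagation described above can be pictured with plain sets standing in for execution contexts (purely illustrative; Polly uses isl sets): the context of the derived load is intersected with the context of its base pointer, so the derived pointer is only preloaded where the base pointer is valid.

#include <set>

// Hypothetical stand-in for an execution context: the set of parameter
// valuations under which preloading is known to be safe.
using ExecutionContext = std::set<int>;

// Restrict the context of a derived invariant load to the context of its
// (also preloaded) base pointer.
static ExecutionContext propagateContext(const ExecutionContext &DerivedCtx,
                                         const ExecutionContext &BaseCtx) {
  ExecutionContext Result;
  for (int V : DerivedCtx)
    if (BaseCtx.count(V))
      Result.insert(V); // Keep only valuations where the base is valid too.
  return Result;
}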
Johannes Doerfert a9dc529442 Collect and verify generated parallel subfunctions
We have verified the optimized function for a long time now and it has
  helped to track down bugs early. This will now also happen for all
  parallel subfunctions we generate.

llvm-svn: 265823
2016-04-08 18:16:02 +00:00
Johannes Doerfert 7b81103589 [FIX] Look through div & srem instructions in SCEVs
The findValues() function did not look through div & srem instructions
  that were part of the argument SCEV. However, in various other
  places we already look through them. This mismatch caused us to preload
  values in the wrong order.

llvm-svn: 265775
2016-04-08 10:25:58 +00:00
Michael Kruse c7e0d9c216 Fix non-synthesizable loop exit values.
Polly recognizes affine loops that ScalarEvolution does not, in
particular those with loop conditions that depend on hoisted invariant
loads. Check for SCEVAddRec dependencies on such loops and do not
consider their exit values as synthesizable because SCEVExpander would
generate them as expressions that depend on the original induction
variables. These are not available in generated code.

llvm-svn: 262404
2016-03-01 21:44:06 +00:00
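The check described in the entry above can be sketched on a simplified expression tree (hypothetical types; the real implementation walks SCEVs): an exit value is not synthesizable if any add-recurrence in it is bound to a loop whose induction variable only Polly, not SCEVExpander, can reproduce.

#include <set>
#include <vector>

struct Loop;

// Hypothetical, simplified expression node: an add-recurrence carries the
// loop it runs over; other nodes just have operands.
struct Expr {
  const Loop *AddRecLoop = nullptr;
  std::vector<const Expr *> Operands;
};

// An expression is synthesizable only if none of its add-recurrences depend
// on a loop that ScalarEvolution cannot expand (e.g., a loop whose bound is
// a hoisted invariant load).
static bool canSynthesize(const Expr *E,
                          const std::set<const Loop *> &NonExpandableLoops) {
  if (E->AddRecLoop && NonExpandableLoops.count(E->AddRecLoop))
    return false;
  for (const Expr *Op : E->Operands)
    if (!canSynthesize(Op, NonExpandableLoops))
      return false;
  return true;
}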
Johannes Doerfert abadd71da1 [FIX] Prevent compile time problems due to complex invariant loads
This cures the symptoms we see in h264 of SPEC2006 but not the cause.

llvm-svn: 262327
2016-03-01 13:05:14 +00:00
Michael Kruse 6f7721f02b Introduce Scop::getStmtFor. NFC.
Replace Scop::getStmtForBasicBlock and Scop::getStmtForRegionNode, and
add overloads for llvm::Instruction and llvm::RegionNode.

getStmtFor and overloads become the common interface to get the Stmt
that contains something. Named after LoopInfo::getLoopFor and
RegionInfo::getRegionFor.

llvm-svn: 261791
2016-02-24 22:08:19 +00:00
Roman Gareev 11001e1534 Annotation of SIMD loops
Use 'mark' nodes to annotate a SIMD loop during ScheduleTransformation and
skip parallelism checks.

The buildbot shows the following compile/execution time changes:

  Compile time:
    Improvements    Δ     Previous  Current  σ
    …/gesummv      -6.06% 0.2640    0.2480   0.0055
    …/gemver       -4.46% 0.4480    0.4280   0.0044
    …/covariance   -4.31% 0.8360    0.8000   0.0065
    …/adi          -3.23% 0.9920    0.9600   0.0065
    …/doitgen      -2.53% 0.9480    0.9240   0.0090
    …/3mm          -2.33% 1.0320    1.0080   0.0087

  Execution time:
    Regressions     Δ     Previous  Current  σ
    …/viterbi       1.70% 5.1840    5.2720   0.0074
    …/smallpt       1.06% 12.4920   12.6240  0.0040

Reviewed-by: Tobias Grosser <tobias@grosser.es>

Differential Revision: http://reviews.llvm.org/D14491

llvm-svn: 261620
2016-02-23 09:00:13 +00:00
Johannes Doerfert 6a7c3e4bac Set AST Build for all statements [NFC]
llvm-svn: 260956
2016-02-16 12:11:03 +00:00
Johannes Doerfert 96e5471139 Separate invariant equivalence classes by type
We now distinguish invariant loads of the same memory location if they
  have different types. This will cause us to pre-load an invariant
  location once for each type that is used to access it. However, we can
  thereby avoid invalid casting, especially if an array is accessed
  through differently typed/sized invariant loads.

  This basically reverts the changes in r260023 but keeps the test
  cases.

llvm-svn: 260045
2016-02-07 17:30:13 +00:00
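A stand-alone sketch of the grouping introduced above (simplified types, not Polly's ScopInfo): the equivalence-class key is now the pair of pointer and loaded type, so a location accessed as two different types ends up in two classes and is preloaded once per type.

#include <map>
#include <string>
#include <utility>
#include <vector>

// Hypothetical stand-in for an invariant load: the pointer it reads and the
// name of the type it reads it as.
struct InvariantLoad { const void *Pointer; std::string LoadedType; };

using ClassKey = std::pair<const void *, std::string>;

// Group loads by (pointer, type) instead of by pointer alone, avoiding casts
// between unrelated types of the same location.
static std::map<ClassKey, std::vector<const InvariantLoad *>>
buildEquivalenceClasses(const std::vector<InvariantLoad> &Loads) {
  std::map<ClassKey, std::vector<const InvariantLoad *>> Classes;
  for (const InvariantLoad &L : Loads)
    Classes[{L.Pointer, L.LoadedType}].push_back(&L);
  return Classes;
}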
Johannes Doerfert adeab372ca Simplify code [NFC]
llvm-svn: 260030
2016-02-07 13:57:32 +00:00
Tobias Grosser 107cd5f5f6 IslNodeBuilder: Invariant load hoisting of elements with differing sizes
Always use the access-instruction pointer type to load the invariant values.
Otherwise mismatches between the ScopArrayInfo element type and the memory
access element type will result in invalid casts. After r259784 these type
mismatches are a lot more common and also arise with types of different
size, which had not been handled before.

Interestingly, this change actually simplifies the code, as we now have only
one code path that is always taken, rather than a standard code path for the
common case and a "fixup" code path that replaces the standard code path in
case of mismatching types.

llvm-svn: 260009
2016-02-06 21:23:39 +00:00
Tobias Grosser d840fc7277 Support accesses with differently sized types to the same array
This allows code such as:

void multiple_types(char *Short, char *Float, char *Double) {
  for (long i = 0; i < 100; i++) {
    Short[i] = *(short *)&Short[2 * i];
    Float[i] = *(float *)&Float[4 * i];
    Double[i] = *(double *)&Double[8 * i];
  }
}

To model such code we use the smallest element type of all original array
accesses as the canonical element type of the modeled array, if the type
allocation sizes are multiples of each other. Otherwise, we use a newly
created iN type, where N
is the gcd of the allocation size of the types used in the accesses to this
array. Accesses with types larger than the canonical element type are modeled as
multiple accesses with the smaller type.

For example the second load access is modeled as:

  { Stmt_bb2[i0] -> MemRef_Float[o0] : 4i0 <= o0 <= 3 + 4i0 }

To support code-generating these memory accesses, we introduce a new method
getAccessAddressFunction that assigns each statement instance a single memory
location, the address we load from/store to. Currently we obtain this address by
taking the lexmin of the access function. We may consider keeping track of the
memory location more explicitly in the future.

We currently do _not_ handle multi-dimensional arrays and also keep the
restriction of not supporting accesses where the offset expression is not a
multiple of the access element type size. This patch adds tests that ensure
we correctly invalidate a scop in case these accesses are found. Both types of
accesses can be handled using the very same model, but are left to be added in
the future.

We also move the initialization of the scop-context into the constructor to
ensure it is already available when invalidating the scop.

Finally, we add this as a new item to the 2.9 release notes.

Reviewers: jdoerfert, Meinersbur

Differential Revision: http://reviews.llvm.org/D16878

llvm-svn: 259784
2016-02-04 13:18:42 +00:00
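The arithmetic behind the canonical element type and the covered index range from the entry above can be shown in a small stand-alone sketch; the struct and helpers are illustrative only and not Polly's API. With a canonical element size of 1 byte, the float load from the example covers exactly the range 4i0 <= o0 <= 3 + 4i0 given above.

#include <cstdint>
#include <numeric>
#include <vector>

// Hypothetical description of one access: its per-iteration byte offset and
// the allocation size of the accessed type.
struct Access { uint64_t ByteStride; uint64_t AccessSize; };

// Canonical element size of the array: the GCD of all access sizes. If the
// sizes are multiples of each other this is simply the smallest size.
static uint64_t canonicalElementSize(const std::vector<Access> &Accesses) {
  uint64_t Size = 0;
  for (const Access &A : Accesses)
    Size = std::gcd(Size, A.AccessSize);
  return Size;
}

// Canonical elements covered by one access in iteration Iter: the float load
// above (ByteStride 4, AccessSize 4, canonical size 1) yields
// [4 * Iter, 4 * Iter + 3].
static void coveredRange(const Access &A, uint64_t CanonSize, uint64_t Iter,
                         uint64_t &First, uint64_t &Last) {
  First = A.ByteStride * Iter / CanonSize;
  Last = (A.ByteStride * Iter + A.AccessSize - 1) / CanonSize;
}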
Michael Kruse 70131d3416 Introduce MemAccInst helper class; NFC
MemAccInst wraps the common members of LoadInst and StoreInst. We also
make use of this class in:
- ScopInfo::buildMemoryAccess
- BlockGenerator::generateLocationAccessed
- ScopInfo::addArrayAccess
- Scop::buildAliasGroups
- Replace every use of polly::getPointerOperand

Reviewers: jdoerfert, grosser

Differential Revision: http://reviews.llvm.org/D16530

llvm-svn: 258947
2016-01-27 17:09:17 +00:00
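A hedged sketch of the idea behind the entry above (not the actual MemAccInst from this patch): a thin wrapper over an instruction that is known to be a LoadInst or StoreInst, exposing their common operands so callers do not have to branch on the instruction kind themselves.

#include "llvm/IR/Instructions.h"

// Illustrative wrapper, assuming I is either a LoadInst or a StoreInst.
class MemAccInstSketch {
  llvm::Instruction *I;

public:
  explicit MemAccInstSketch(llvm::Instruction *Inst) : I(Inst) {}

  static bool isMemAccInst(const llvm::Instruction *Inst) {
    return llvm::isa<llvm::LoadInst>(Inst) || llvm::isa<llvm::StoreInst>(Inst);
  }

  // Common member of both LoadInst and StoreInst.
  llvm::Value *getPointerOperand() const {
    if (auto *Load = llvm::dyn_cast<llvm::LoadInst>(I))
      return Load->getPointerOperand();
    return llvm::cast<llvm::StoreInst>(I)->getPointerOperand();
  }

  // For a store, the value written; for a load, the loaded value itself.
  llvm::Value *getValueOperand() const {
    if (auto *Store = llvm::dyn_cast<llvm::StoreInst>(I))
      return Store->getValueOperand();
    return I;
  }
};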
Johannes Doerfert 370cf00c9f Make sure we preserve alignment information after hoisting invariant load
In Polly, after hoisting loop-invariant loads out of the loop, the alignment
information for the hoisted loads is missing; this patch restores it.

Contributed-by: Lawrence Hu <lawrence@codeaurora.org>

Differential Revision: http://reviews.llvm.org/D16160

llvm-svn: 258105
2016-01-19 00:17:21 +00:00
Tobias Grosser a535dff471 ScopInfo: Harmonize the different array kinds
Over time different vocabulary has been introduced to describe the different
memory objects in Polly, resulting in different - often inconsistent - naming
schemes in different parts of Polly. We now standardize this to the following
scheme:

  KindArray, KindValue, KindPHI, KindExitPHI
             | ------- isScalar -----------|

In most cases this naming scheme has already been used previously (this
minimizes changes and ensures we remain consistent with previous publications).
The main change is that we remove KindScalar to clarify the difference between
a scalar as a memory object of kind Value, PHI or ExitPHI and a value (the
former KindScalar), which is a memory object modeling an llvm::Value.

We also move all documentation to the Kind* enum in the ScopArrayInfo class,
remove the second enum in the MemoryAccess class and update documentation to be
formulated from the perspective of the memory object, rather than the memory
access. The terms "Implicit"/"Explicit", formerly used to describe memory
accesses, have been dropped. From the perspective of memory accesses they
described the different memory kinds well - especially from the perspective of
code generation - but just from the perspective of a memory object it seems more
straightforward to talk about scalars and arrays, rather than explicit and
implicit arrays. The last comment is clearly subjective, though. A less
subjective reason to go for these terms is the historic use both in mailing list
discussions and publications.

llvm-svn: 255467
2015-12-13 19:59:01 +00:00
Tobias Grosser 2fd89da90d Remove non-debug printing of domain set
Contributed-by: Chris Jenneisch <chrisj@codeaurora.org>

Differential Revision: http://reviews.llvm.org/D15094

llvm-svn: 254343
2015-11-30 22:59:41 +00:00
Johannes Doerfert fdbf201fc9 [FIX] Do not generate code for parameters referencing dead values
Check if a value that is referenced by a parameter is dead and do not
generate code for the parameter in such a case.

llvm-svn: 252813
2015-11-11 22:40:51 +00:00
Johannes Doerfert dcfedf3505 [FIX] Cast pre-loaded values correctly or reload them with adjusted type.
Especially for structs, the SAI object of a base pointer does not
describe all the types that the user might expect when loading from
that base pointer. While we still cast integers and pointers, we
now reload the value with the correct type if floating point and
non-floating point values are involved. However, there are now TODOs
where we use bitcasts instead of a proper conversion or reloading.

This fixes bug 25479.

llvm-svn: 252706
2015-11-11 06:20:25 +00:00
Johannes Doerfert fc4bfc465a [FIX] Create empty invariant equivalence classes
We now create all invariant equivalence classes for required invariant loads
  instead of creating them on demand. This way we can check if a parameter
  references an invariant load that is actually not executed and was therefore
  not materialized. If that happens, the parameter is not materialized either.

This fixes bug 25469.

llvm-svn: 252701
2015-11-11 04:30:07 +00:00
Tobias Grosser 6abc75af4c ScopInfo: Introduce ArrayKind
Since r252422 we no longer distinguish only two ScopArrayInfo kinds, PHI nodes
and others, but work with three kinds of ScopArrayInfo objects: SCALAR, PHI and
ARRAY objects. Instead of keeping two boolean flags, isPHI and isScalar, and
wondering what a ScopArrayInfo object of kind (!isScalar && isPHI) is, we
now explicitly list the three different possible types of memory objects.

This change also allows us to remove the confusing nested pairs that have
been used in ArrayInfoMapTy.

llvm-svn: 252620
2015-11-10 17:31:31 +00:00
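A minimal sketch of the change above (names are illustrative, not Polly's exact enumerators): one explicit kind per ScopArrayInfo object instead of two boolean flags whose combination (!isScalar && isPHI) had no defined meaning.

// Illustrative only; the real enum lives in the ScopArrayInfo class.
enum class ArrayKindSketch {
  Array,  // A "real", possibly multi-dimensional array.
  Scalar, // A zero-dimensional object modeling an llvm::Value.
  PHI     // An object modeling the operand exchange of a PHI node.
};

struct ScopArrayInfoSketch {
  ArrayKindSketch Kind;
  bool isArrayKind() const { return Kind == ArrayKindSketch::Array; }
  bool isPHIKind() const { return Kind == ArrayKindSketch::PHI; }
};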
Johannes Doerfert 7a6e292d86 [FIX] Use same alloca for invariant loads and the scalar users
llvm-svn: 252451
2015-11-09 06:28:45 +00:00
Johannes Doerfert a768624f14 [FIX] Introduce different SAI objects for scalar and memory accesses
Even if a scalar and a memory access have the same base pointer, we cannot use
  one SAI object, as not only the type but also the number of dimensions would
  be wrong. For the attached test case this caused a crash in the invariant
  load hoisting, though it could cause various other problems too.

This fixes bug 25428 and an execution time bug in MallocBench/cfrac.

Reported-by: Jeremy Huddleston Sequoia <jeremyhu@apple.com>
llvm-svn: 252422
2015-11-08 19:12:05 +00:00
Johannes Doerfert c4898504ea [FIX] Bail out if there is a dependence cycle between invariant loads
While the program itself cannot cause a dependence cycle between invariant
  loads, additional constraints (e.g., to ensure finite loops) can
  introduce them. It is hard to detect them in the SCoP description,
  thus we only check for them at code generation time. If such a
  cycle is detected we bail out of code generation and place a
  "false" runtime check to guarantee the original code is used.

  This fixes bug 25443.

llvm-svn: 252412
2015-11-07 19:46:04 +00:00
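A stand-alone sketch of the bail-out described above (hypothetical data structures; the real check happens while preloading invariant equivalence classes): preload depth-first along the "must be preloaded first" edges and abort when a class reappears on the current path.

#include <set>
#include <vector>

// Hypothetical invariant-load class with the classes that must be preloaded
// before it.
struct InvariantClass {
  std::vector<const InvariantClass *> RequiredClasses;
};

static bool preloadClass(const InvariantClass *C,
                         std::set<const InvariantClass *> &OnPath,
                         std::set<const InvariantClass *> &Done) {
  if (Done.count(C))
    return true;
  if (!OnPath.insert(C).second)
    return false; // Dependence cycle: bail out of code generation.
  for (const InvariantClass *Req : C->RequiredClasses)
    if (!preloadClass(Req, OnPath, Done))
      return false;
  OnPath.erase(C);
  Done.insert(C);
  // ... emit the actual preload for C here ...
  return true;
}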
Duncan P. N. Exon Smith b8f58b53dd polly/ADT: Remove implicit ilist iterator conversions, NFC
Remove all the implicit ilist iterator conversions from polly, in
preparation for making them illegal in ADT.  There was one oddity I came
across: at line 95 of lib/CodeGen/LoopGenerators.cpp, there was a
post-increment `Builder.GetInsertPoint()++`.

Since it was a no-op, I removed it, but I admit I wonder if it might be
a bug (both before and after this change)?  Perhaps it should be a
pre-increment?

llvm-svn: 252357
2015-11-06 22:56:54 +00:00
Johannes Doerfert 22892687f7 [FIX] Simplify and correct preloading of base pointer origin
To simplify and correct the preloading of a base pointer origin, e.g.,
  the base pointer for the current indirect invariant load, we now just
  check if there is an invariant access class that involves the base
  pointer of the current class.

llvm-svn: 251962
2015-11-03 19:15:33 +00:00
Johannes Doerfert 475d8e3f42 [FIX] Ensure base pointer origin was preloaded already
If the base pointer of a preloaded value has a base pointer origin, i.e., it is
  an indirect invariant load, we have to make sure the base pointer origin is
  preloaded first.

llvm-svn: 251946
2015-11-03 16:49:02 +00:00
Johannes Doerfert 3181c2ef72 [FIX] Correctly update SAI base pointer
If a base pointer load is preloaded, we have to change the base pointer of
  the derived SAI. However, as the derived SAI relationship is
  coarse grained, we need to check if we actually preloaded the base
  pointer or a different element of the base pointer SAI array.

llvm-svn: 251881
2015-11-03 01:42:59 +00:00
Johannes Doerfert af3e301a67 [FIX] Restructure invariant load equivalence classes
Sorting is replaced by demand-driven code generation that will pre-load a
  value when it is needed or, if it was not needed before, at some point
  determined by the order of invariant accesses in the program. Only in very
  few cases will this demand-driven pre-loading kick in, though it will
  prevent us from generating faulty code. An example where it is needed is
  shown in:
    test/ScopInfo/invariant_loads_complicated_dependences.ll

  Invariant loads that appear in parameters but are not on the top-level (e.g.,
  the parameter is not a SCEVUnknown) will now be treated correctly.

Differential Revision: http://reviews.llvm.org/D13831

llvm-svn: 250655
2015-10-18 12:39:19 +00:00
Johannes Doerfert d8b6ad255f [FIX] Cast preloaded values
Preloaded values have to match the type of their counterpart in the
  original code and not the type of the base array.

llvm-svn: 250654
2015-10-18 12:36:42 +00:00
Johannes Doerfert 697fdf891c Consolidate invariant loads
If an (assumed) invariant location was loaded multiple times we
  generated a parameter for each of these loads. However, this caused compile
  time problems for several benchmarks (e.g., 445_gobmk in SPEC2006 and
  BT in the NAS benchmarks). Additionally, the code we generated was
  suboptimal as we preloaded the same location multiple times and performed
  the same checks on all the parameters that refer to the same value.

  With this patch we consolidate the invariant loads in three steps:
    1) During SCoP initialization required invariant loads are put in
       equivalence classes based on their pointer operand. One
       representative load is used to generate a parameter for the whole
       class, thus we never generate multiple parameters for the same
       location.
    2) During the SCoP simplification we remove invariant memory
       accesses that are in the same equivalence class. While doing so
       we build the union of all execution domains as it is only
       important that the location is at least accessed once.
    3) During code generation we only preload one element of each
       equivalence class with the unified execution domain. All others
       are mapped to that preloaded value.

Differential Revision: http://reviews.llvm.org/D13338

llvm-svn: 249853
2015-10-09 17:12:26 +00:00
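The three steps above can be pictured with simplified data structures (illustrative only; Polly uses isl sets for the execution domains): group by pointer operand, keep one representative per class for the parameter, and union the execution domains.

#include <map>
#include <set>
#include <vector>

// Hypothetical invariant load: its pointer operand and the iterations in
// which it is executed (a stand-in for its execution domain).
struct InvLoad { const void *PointerOperand; std::set<int> ExecutedIterations; };

struct EquivClass {
  const InvLoad *Representative = nullptr; // Generates the one parameter.
  std::set<int> UnifiedDomain;             // Union of all execution domains.
};

static std::map<const void *, EquivClass>
consolidate(const std::vector<InvLoad> &Loads) {
  std::map<const void *, EquivClass> Classes;
  for (const InvLoad &L : Loads) {
    EquivClass &EC = Classes[L.PointerOperand];          // Step 1: group.
    if (!EC.Representative)
      EC.Representative = &L;
    EC.UnifiedDomain.insert(L.ExecutedIterations.begin(), // Step 2: union
                            L.ExecutedIterations.end());  // of domains.
  }
  // Step 3 (code generation): preload only each representative under the
  // unified domain and map all other loads of the class to that value.
  return Classes;
}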
Johannes Doerfert 09e3697f44 Allow invariant loads in the SCoP description
This patch allows invariant loads to be used in the SCoP description,
  e.g., as loop bounds, conditions or in memory access functions.

First we collect "required invariant loads" during SCoP detection that
  would otherwise make an expression we care about non-affine. To this
  end a new level of abstraction was introduced before
  SCEVValidator::isAffineExpr(), namely ScopDetection::isAffine() and
  ScopDetection::onlyValidRequiredInvariantLoads(). Here we can decide
  whether we want a load inside the region to be optimistically assumed
  invariant or not. If we do, it will be marked as required and in the
  SCoP generation we bail if it is actually not invariant. If we don't,
  it will be a non-affine expression as before. At the moment we
  optimistically assume all "hoistable" (namely non-loop-carried) loads
  to be invariant. This causes us to expand some SCoPs and dismiss them
  later, but it also allows us to detect a lot we would dismiss directly
  if we asked, e.g., AliasAnalysis::canBasicBlockModify(). We also
  allow potential aliases between optimistically assumed invariant loads
  and other pointers, as our runtime alias checks are sound in case the
  loads are actually invariant. Together with the invariant checks this
  combination allows us to handle a lot more than LICM can.

  The code generation of the invariant loads had to be extended as we
  can now have dependences between parameters and invariant (hoisted)
  loads as well as the other way around, e.g.,
    test/Isl/CodeGen/invariant_load_parameters_cyclic_dependence.ll
  First, it is important to note that we cannot have real cycles but
  only dependences from a hoisted load to a parameter and from another
  parameter to that hoisted load (and so on). To handle such cases we
  materialize llvm::Values for parameters that are referred by a hoisted
  load on demand and then materialize the remaining parameters. Second,
  there are new kinds of dependences between hoisted loads caused by the
  constraints on their execution. If a hoisted load is conditionally
  executed it might depend on the value of another hoisted load. To deal
  with such situations we sort them already in the ScopInfo such that
  they can be generated in the order they are listed in the
  Scop::InvariantAccesses list (see compareInvariantAccesses). The
  dependences between hoisted loads caused by indirect accesses are
  handled the same way as before.

llvm-svn: 249607
2015-10-07 20:17:36 +00:00
Johannes Doerfert 521dd5842f Move the ValueMapT declaration out of BlockGenerator
Value maps are created and used in many places and it is not always
  possible to include CodeGen/BlockGenerators.h. To this end, ValueMapT
  now lives in ScopHelper.h, which does not have any dependences itself.

  This patch also replaces uses of various other value map types with
  ValueMapT.

llvm-svn: 249606
2015-10-07 20:15:56 +00:00
Tobias Grosser f4bb7a6a4d Consolidate the different ValueMapTypes we are using
There have been various places where llvm::DenseMap<const llvm::Value *,
llvm::Value *> types have been defined, but all types have been expected to be
identical. We make this clearer by consolidating the different types and using
BlockGenerator::ValueMapT wherever there is a need for types to match
BlockGenerator::ValueMapT.

llvm-svn: 249264
2015-10-04 10:18:32 +00:00
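Concretely, the consolidated alias boils down to a single map type as named in the message above; the exact definition in Polly may differ, and the declaration later moved to ScopHelper.h.

#include "llvm/ADT/DenseMap.h"
#include "llvm/IR/Value.h"

// Sketch of the consolidated value map type.
using ValueMapT = llvm::DenseMap<const llvm::Value *, llvm::Value *>;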
Tobias Grosser 1d45c6dadd IslExprBuilder: Use AssertingVH for IdToValueTy
llvm-svn: 249239
2015-10-03 17:20:00 +00:00
Johannes Doerfert 911951f4f8 Hand down referenced & globally mapped values to the subfunction
If a value is globally mapped (IslNodeBuilder::ValueMap) and
  referenced in the code that will be put into a subfunction, we hand
  down the new value to the subfunction.

  This patch also removes code that handed down all invariant loads to
  the subfunction. Instead, only needed invariant loads are given to the
  subfunction. There are two possible reasons for an invariant load to
  be handed down:
    1) The invariant load is used in a block that is placed in the
       subfunction but which is not the parent of the load. In this
       case, the scalar access that will read the loaded value, will
       cause its base pointer (the preloaded value) to be handed down to
       the subfunction.
    2) The invariant load is defined and used in a block that is placed
       in the subfunction. With this patch we will hand down the
       preloaded value to the subfunction as the invariant load is
       globally mapped to that value.

llvm-svn: 249126
2015-10-02 13:11:27 +00:00
Johannes Doerfert 850d346302 [FIX] Parallel codegen for invariant loads
Hand down all preloaded values to the parallel subfunction.

llvm-svn: 249010
2015-10-01 13:40:36 +00:00
Johannes Doerfert ebfd72493c [NFC] Extract materialization of parameters
llvm-svn: 248882
2015-09-30 09:52:08 +00:00
Johannes Doerfert ef19ead20e [FIX] Use escape logic for invariant loads
Previously we unconditionally forced all users outside the SCoP to use
  the preloaded value. However, if the SCoP is not executed due to the
  runtime checks, we need to use the original value because it might not
  be invariant in the first place.

llvm-svn: 248881
2015-09-30 09:43:20 +00:00
Johannes Doerfert c1db67e218 Identify and hoist definitively invariant loads
As a first step in the direction of assumed invariant loads (loads
  that are not written in some context) we now detect and hoist
  definitively invariant loads. These invariant loads will be preloaded
  in the code generation and used in the optimized version of the SCoP.
  If the load is only conditionally executed, the preloaded version will
  also only be executed under the same condition; hence we will never
  access memory that wouldn't have been accessed otherwise. This is also
  the feature that most distinguishes it from LICM.

  As hoisting can make statements empty we will simplify the SCoP and
  remove empty statements that would otherwise cause artifacts in the
  code generation.

Differential Revision: http://reviews.llvm.org/D13194

llvm-svn: 248861
2015-09-29 23:47:21 +00:00
Johannes Doerfert fb19dd694c Create parallel code in a separate block
This commit basically reverts r246427 but still solves the issue
  tackled by that commit. Instead of emitting initialization code at the
  beginning of the start block, we now generate parallel code in its own
  block and thereby guarantee separation. This is necessary as we cannot
  generate code for hoisted loads prior to the start block but it still
  needs to be placed prior to everything else.

llvm-svn: 248674
2015-09-26 20:57:59 +00:00
Michael Kruse 8d0b734e71 Let MemoryAccess remember its purpose
There are three possible reasons to add a memory access: for explicit loads and stores, for llvm::Value defs/uses, and to emulate PHI nodes (the latter two being called implicit accesses). Previously, MemoryAccess only stored IsPHI. Register accesses could be identified through the isScalar() method if IsPHI was not set. isScalar() determined the number of dimensions of the underlying array, with scalars represented by zero dimensions.

For the work on de-LICM, implicit accesses can have more than zero dimensions, making the isScalar() distinction useless; the purpose is hence now stored explicitly in the MemoryAccess. We replace isScalar() by isImplicit() and avoid the term "scalar" for zero-dimensional arrays, as it might be confused with llvm::Values, which are also often referred to as scalars (or, alternatively, as registers).

No behavioral change intended, under the condition that it was impossible to create explicit accesses to zero-dimensional "arrays".

llvm-svn: 248616
2015-09-25 21:21:00 +00:00
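A hedged sketch of what the entry above describes as now being stored (illustrative names, not Polly's actual members): the origin of the access is kept explicitly, and isImplicit() replaces the old isScalar()-plus-IsPHI reasoning.

// Illustrative only; Polly's actual MemoryAccess stores more state.
enum class AccessOrigin {
  Explicit, // A load or store that exists in the IR.
  Value,    // Implicit: models the def/uses of an llvm::Value.
  PHI       // Implicit: models the operand exchange of a PHI node.
};

struct MemoryAccessSketch {
  AccessOrigin Origin;
  bool isImplicit() const { return Origin != AccessOrigin::Explicit; }
};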
Michael Kruse 7bf3944d23 Merge TempScopInfo.{cpp|h} into ScopInfo.{cpp|h}
This prepares for a series of patches that merge TempScopInfo into ScopInfo to
reduce Polly's code complexity. Only ScopInfo.{cpp|h} will be left thereafter.
Moving the code of TempScopInfo in one commit makes the main diffs simpler to
understand.

In detail, merging the following classes is planned:
TempScopInfo into ScopInfo
TempScop into Scop
IRAccess into MemoryAccess

Only moving code, no functional changes intended.

Differential Revision: http://reviews.llvm.org/D12693

llvm-svn: 247274
2015-09-10 12:46:52 +00:00