//===- StraightLineStrengthReduce.cpp - -----------------------------------===//
Add straight-line strength reduction to LLVM
Summary:
Straight-line strength reduction (SLSR) is implemented in GCC but not yet in
LLVM. It has proven to effectively simplify statements derived from an unrolled
loop, and can potentially benefit many other cases too. For example,
LLVM unrolls
#pragma unroll
for (int i = 0; i < 3; ++i) {
  sum += foo((b + i) * s);
}
into
sum += foo(b * s);
sum += foo((b + 1) * s);
sum += foo((b + 2) * s);
However, no optimizations yet reduce the internal redundancy of the three
expressions:
b * s
(b + 1) * s
(b + 2) * s
With SLSR, LLVM can optimize these three expressions into:
t1 = b * s
t2 = t1 + s
t3 = t2 + s
This commit is only an initial step towards implementing a series of such
optimizations. I will implement more (see TODO in the file commentary) in the
near future. This optimization is enabled for the NVPTX backend for now.
However, I am more than happy to push it to the standard optimization pipeline
after more thorough performance tests.
Test Plan: test/StraightLineStrengthReduce/slsr.ll
Reviewers: eliben, HaoLiu, meheff, hfinkel, jholewinski, atrick
Reviewed By: jholewinski, atrick
Subscribers: karthikthecool, jholewinski, llvm-commits
Differential Revision: http://reviews.llvm.org/D7310
llvm-svn: 228016
//
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
//
//===----------------------------------------------------------------------===//
//
// This file implements straight-line strength reduction (SLSR). Unlike loop
// strength reduction, this algorithm is designed to reduce arithmetic
// redundancy in straight-line code instead of loops. It has proven to be
// effective in simplifying arithmetic statements derived from an unrolled loop.
// It can also simplify the logic of SeparateConstOffsetFromGEP.
//
// There are many optimizations we can perform in the domain of SLSR. This file
// for now contains only an initial step. Specifically, we look for strength
// reduction candidates in the following forms:
//
// Form 1: B + i * S
// Form 2: (B + i) * S
// Form 3: &B[i * S]
//
// where S is an integer variable, and i is a constant integer. If we find two
// candidates S1 and S2 in the same form and S1 dominates S2, we may rewrite S2
// in a simpler way with respect to S1. For example,
//
// S1: X = B + i * S
// S2: Y = B + i' * S => X + (i' - i) * S
//
// S1: X = (B + i) * S
// S2: Y = (B + i') * S => X + (i' - i) * S
//
// S1: X = &B[i * S]
// S2: Y = &B[i' * S] => &X[(i' - i) * S]
|
Add straight-line strength reduction to LLVM
Summary:
Straight-line strength reduction (SLSR) is implemented in GCC but not yet in
LLVM. It has proven to effectively simplify statements derived from an unrolled
loop, and can potentially benefit many other cases too. For example,
LLVM unrolls
#pragma unroll
foo (int i = 0; i < 3; ++i) {
sum += foo((b + i) * s);
}
into
sum += foo(b * s);
sum += foo((b + 1) * s);
sum += foo((b + 2) * s);
However, no optimizations yet reduce the internal redundancy of the three
expressions:
b * s
(b + 1) * s
(b + 2) * s
With SLSR, LLVM can optimize these three expressions into:
t1 = b * s
t2 = t1 + s
t3 = t2 + s
This commit is only an initial step towards implementing a series of such
optimizations. I will implement more (see TODO in the file commentary) in the
near future. This optimization is enabled for the NVPTX backend for now.
However, I am more than happy to push it to the standard optimization pipeline
after more thorough performance tests.
Test Plan: test/StraightLineStrengthReduce/slsr.ll
Reviewers: eliben, HaoLiu, meheff, hfinkel, jholewinski, atrick
Reviewed By: jholewinski, atrick
Subscribers: karthikthecool, jholewinski, llvm-commits
Differential Revision: http://reviews.llvm.org/D7310
llvm-svn: 228016
2015-02-04 03:37:06 +08:00
|
|
|
//
|
2015-04-16 00:46:13 +08:00
|
|
|
// Note: (i' - i) * S is folded to the extent possible.
//
// This rewriting is in general a good idea. The code patterns we focus on
// usually come from loop unrolling, so (i' - i) * S is likely the same
// across iterations and can be reused. When that happens, the optimized form
// takes only one add starting from the second iteration.
//
// When such rewriting is possible, we call S1 a "basis" of S2. When S2 has
// multiple bases, we choose to rewrite S2 with respect to its "immediate"
// basis, the basis that is the closest ancestor in the dominator tree.
//
// TODO:
//
// - Floating point arithmetic when fast math is enabled.
//
// - SLSR may decrease ILP at the architecture level. Targets that are very
//   sensitive to ILP may want to disable it. Having SLSR consider ILP is
//   left as future work.
//
// - When (i' - i) is constant but i and i' are not, we could still perform
//   SLSR.
//
//===----------------------------------------------------------------------===//

#include "llvm/ADT/APInt.h"
#include "llvm/ADT/DepthFirstIterator.h"
#include "llvm/ADT/SmallVector.h"
#include "llvm/Analysis/ScalarEvolution.h"
#include "llvm/Analysis/TargetTransformInfo.h"
#include "llvm/Transforms/Utils/Local.h"
#include "llvm/Analysis/ValueTracking.h"
#include "llvm/IR/Constants.h"
#include "llvm/IR/DataLayout.h"
#include "llvm/IR/DerivedTypes.h"
#include "llvm/IR/Dominators.h"
#include "llvm/IR/GetElementPtrTypeIterator.h"
#include "llvm/IR/IRBuilder.h"
#include "llvm/IR/InstrTypes.h"
#include "llvm/IR/Instruction.h"
#include "llvm/IR/Instructions.h"
#include "llvm/IR/Module.h"
#include "llvm/IR/Operator.h"
#include "llvm/IR/PatternMatch.h"
#include "llvm/IR/Type.h"
#include "llvm/IR/Value.h"
#include "llvm/Pass.h"
#include "llvm/Support/Casting.h"
#include "llvm/Support/ErrorHandling.h"
#include "llvm/Transforms/Scalar.h"
#include <cassert>
#include <cstdint>
#include <limits>
#include <list>
#include <vector>

using namespace llvm;
using namespace PatternMatch;

static const unsigned UnknownAddressSpace =
    std::numeric_limits<unsigned>::max();

namespace {

class StraightLineStrengthReduce : public FunctionPass {
public:
  // SLSR candidate. Such a candidate must be in one of the forms described in
  // the header comments.
  struct Candidate {
    enum Kind {
      Invalid, // reserved for the default constructor
      Add,     // B + i * S
      Mul,     // (B + i) * S
      GEP,     // &B[..][i * S][..]
    };

    Candidate() = default;
    Candidate(Kind CT, const SCEV *B, ConstantInt *Idx, Value *S,
              Instruction *I)
        : CandidateKind(CT), Base(B), Index(Idx), Stride(S), Ins(I) {}

    Kind CandidateKind = Invalid;

    const SCEV *Base = nullptr;

    // Note that Index and Stride of a GEP candidate do not necessarily have the
    // same integer type. In that case, during rewriting, Stride will be
    // sign-extended or truncated to Index's type.
    ConstantInt *Index = nullptr;

    Value *Stride = nullptr;

    // The instruction this candidate corresponds to. It helps us to rewrite a
    // candidate with respect to its immediate basis. Note that one instruction
    // can correspond to multiple candidates depending on how you associate the
    // expression. For instance,
    //
    // (a + 1) * (b + 2)
    //
    // can be treated as
    //
    // <Base: a, Index: 1, Stride: b + 2>
    //
    // or
    //
    // <Base: b, Index: 2, Stride: a + 1>
    Instruction *Ins = nullptr;

    // Points to the immediate basis of this candidate, or nullptr if we cannot
    // find any basis for this candidate.
    Candidate *Basis = nullptr;
  };

  static char ID;

  StraightLineStrengthReduce() : FunctionPass(ID) {
    initializeStraightLineStrengthReducePass(*PassRegistry::getPassRegistry());
  }

  void getAnalysisUsage(AnalysisUsage &AU) const override {
    AU.addRequired<DominatorTreeWrapperPass>();
    AU.addRequired<ScalarEvolutionWrapperPass>();
    AU.addRequired<TargetTransformInfoWrapperPass>();
    // We do not modify the shape of the CFG.
    AU.setPreservesCFG();
  }

  bool doInitialization(Module &M) override {
    DL = &M.getDataLayout();
    return false;
  }

  bool runOnFunction(Function &F) override;

private:
  // Returns true if Basis is a basis for C, i.e., Basis dominates C and they
  // share the same base and stride.
  bool isBasisFor(const Candidate &Basis, const Candidate &C);

  // Returns whether the candidate can be folded into an addressing mode.
  bool isFoldable(const Candidate &C, TargetTransformInfo *TTI,
                  const DataLayout *DL);

  // Returns true if C is already in its simplest form and not worth being
  // rewritten.
  bool isSimplestForm(const Candidate &C);

  // Checks whether I is in a candidate form. If so, adds all the matching forms
  // to Candidates, and tries to find the immediate basis for each of them.
  void allocateCandidatesAndFindBasis(Instruction *I);

  // Allocate candidates and find bases for Add instructions.
  void allocateCandidatesAndFindBasisForAdd(Instruction *I);

  // Given I = LHS + RHS, factors RHS into i * S and makes (LHS + i * S) a
  // candidate.
  void allocateCandidatesAndFindBasisForAdd(Value *LHS, Value *RHS,
                                            Instruction *I);

  // Allocate candidates and find bases for Mul instructions.
  void allocateCandidatesAndFindBasisForMul(Instruction *I);

  // Splits LHS into Base + Index and, if that succeeds, calls
  // allocateCandidatesAndFindBasis.
  void allocateCandidatesAndFindBasisForMul(Value *LHS, Value *RHS,
                                            Instruction *I);

  // Allocate candidates and find bases for GetElementPtr instructions.
  void allocateCandidatesAndFindBasisForGEP(GetElementPtrInst *GEP);

  // A helper function that scales Idx with ElementSize before invoking
  // allocateCandidatesAndFindBasis.
  void allocateCandidatesAndFindBasisForGEP(const SCEV *B, ConstantInt *Idx,
                                            Value *S, uint64_t ElementSize,
                                            Instruction *I);

  // Adds the given form <CT, B, Idx, S> to Candidates, and finds its immediate
  // basis.
  void allocateCandidatesAndFindBasis(Candidate::Kind CT, const SCEV *B,
                                      ConstantInt *Idx, Value *S,
                                      Instruction *I);

  // Rewrites candidate C with respect to Basis.
  void rewriteCandidateWithBasis(const Candidate &C, const Candidate &Basis);

  // A helper function that factors ArrayIdx into a product of a stride and a
  // constant index, and invokes allocateCandidatesAndFindBasis with the
  // factorings.
  void factorArrayIndex(Value *ArrayIdx, const SCEV *Base, uint64_t ElementSize,
                        GetElementPtrInst *GEP);

  // Emits code that computes the "bump" from Basis to C. If the candidate is a
  // GEP and the bump is not divisible by the element size of the GEP, this
  // function sets the BumpWithUglyGEP flag to notify its caller to bump the
  // basis using an ugly GEP.
  static Value *emitBump(const Candidate &Basis, const Candidate &C,
                         IRBuilder<> &Builder, const DataLayout *DL,
                         bool &BumpWithUglyGEP);

  const DataLayout *DL = nullptr;
  DominatorTree *DT = nullptr;
  ScalarEvolution *SE;
  TargetTransformInfo *TTI = nullptr;
  std::list<Candidate> Candidates;

  // Temporarily holds all instructions that are unlinked (but not deleted) by
  // rewriteCandidateWithBasis. These instructions will actually be removed
  // after all rewriting finishes.
  std::vector<Instruction *> UnlinkedInstructions;
};

} // end anonymous namespace

char StraightLineStrengthReduce::ID = 0;

INITIALIZE_PASS_BEGIN(StraightLineStrengthReduce, "slsr",
                      "Straight line strength reduction", false, false)
INITIALIZE_PASS_DEPENDENCY(DominatorTreeWrapperPass)
INITIALIZE_PASS_DEPENDENCY(ScalarEvolutionWrapperPass)
INITIALIZE_PASS_DEPENDENCY(TargetTransformInfoWrapperPass)
INITIALIZE_PASS_END(StraightLineStrengthReduce, "slsr",
                    "Straight line strength reduction", false, false)

FunctionPass *llvm::createStraightLineStrengthReducePass() {
  return new StraightLineStrengthReduce();
}

bool StraightLineStrengthReduce::isBasisFor(const Candidate &Basis,
                                            const Candidate &C) {
  return (Basis.Ins != C.Ins && // skip the same instruction
          // They must have the same type too. Basis.Base == C.Base doesn't
          // guarantee their types are the same (PR23975).
          Basis.Ins->getType() == C.Ins->getType() &&
          // Basis must dominate C in order to rewrite C with respect to Basis.
          DT->dominates(Basis.Ins->getParent(), C.Ins->getParent()) &&
          // They share the same base, stride, and candidate kind.
          Basis.Base == C.Base && Basis.Stride == C.Stride &&
          Basis.CandidateKind == C.CandidateKind);
}

static bool isGEPFoldable(GetElementPtrInst *GEP,
                          const TargetTransformInfo *TTI) {
  SmallVector<const Value*, 4> Indices;
  for (auto I = GEP->idx_begin(); I != GEP->idx_end(); ++I)
    Indices.push_back(*I);
  return TTI->getGEPCost(GEP->getSourceElementType(), GEP->getPointerOperand(),
                         Indices) == TargetTransformInfo::TCC_Free;
}
// Returns whether (Base + Index * Stride) can be folded to an addressing mode.
static bool isAddFoldable(const SCEV *Base, ConstantInt *Index, Value *Stride,
                          TargetTransformInfo *TTI) {
  // Index->getSExtValue() may crash if Index is wider than 64-bit.
  return Index->getBitWidth() <= 64 &&
         TTI->isLegalAddressingMode(Base->getType(), nullptr, 0, true,
                                    Index->getSExtValue(), UnknownAddressSpace);
}

bool StraightLineStrengthReduce::isFoldable(const Candidate &C,
                                            TargetTransformInfo *TTI,
                                            const DataLayout *DL) {
  if (C.CandidateKind == Candidate::Add)
    return isAddFoldable(C.Base, C.Index, C.Stride, TTI);
  if (C.CandidateKind == Candidate::GEP)
    return isGEPFoldable(cast<GetElementPtrInst>(C.Ins), TTI);
  return false;
}

// Returns true if GEP has zero or one non-zero index.
static bool hasOnlyOneNonZeroIndex(GetElementPtrInst *GEP) {
  unsigned NumNonZeroIndices = 0;
  for (auto I = GEP->idx_begin(); I != GEP->idx_end(); ++I) {
    ConstantInt *ConstIdx = dyn_cast<ConstantInt>(*I);
    if (ConstIdx == nullptr || !ConstIdx->isZero())
      ++NumNonZeroIndices;
  }
  return NumNonZeroIndices <= 1;
}

bool StraightLineStrengthReduce::isSimplestForm(const Candidate &C) {
  if (C.CandidateKind == Candidate::Add) {
    // B + 1 * S or B + (-1) * S
    return C.Index->isOne() || C.Index->isMinusOne();
  }
  if (C.CandidateKind == Candidate::Mul) {
    // (B + 0) * S
    return C.Index->isZero();
  }
  if (C.CandidateKind == Candidate::GEP) {
    // (char*)B + S or (char*)B - S
    return ((C.Index->isOne() || C.Index->isMinusOne()) &&
            hasOnlyOneNonZeroIndex(cast<GetElementPtrInst>(C.Ins)));
  }
  return false;
}

// TODO: We currently implement an algorithm whose time complexity is linear in
// the number of existing candidates. However, we could do better by using
// ScopedHashTable. Specifically, while traversing the dominator tree, we could
// maintain all the candidates that dominate the basic block being traversed in
// a ScopedHashTable. This hash table is indexed by the base and the stride of
// a candidate. Therefore, finding the immediate basis of a candidate boils down
// to one hash-table lookup.
void StraightLineStrengthReduce::allocateCandidatesAndFindBasis(
    Candidate::Kind CT, const SCEV *B, ConstantInt *Idx, Value *S,
    Instruction *I) {
  Candidate C(CT, B, Idx, S, I);
  // SLSR can complicate an instruction in two cases:
  //
  // 1. If we can fold I into an addressing mode, computing I is likely free or
  //    takes only one instruction.
  //
  // 2. I is already in a simplest form. For example, when
  //      X = B + 8 * S
  //      Y = B + S,
  //    rewriting Y to X - 7 * S is probably a bad idea.
  //
  // In the above cases, we still add I to the candidate list so that I can be
  // the basis of other candidates, but we leave I's basis blank so that I
  // won't be rewritten.
  if (!isFoldable(C, TTI, DL) && !isSimplestForm(C)) {
    // Try to compute the immediate basis of C.
    unsigned NumIterations = 0;
    // Limit the scan radius to avoid running in quadratic time.
    static const unsigned MaxNumIterations = 50;
    for (auto Basis = Candidates.rbegin();
         Basis != Candidates.rend() && NumIterations < MaxNumIterations;
         ++Basis, ++NumIterations) {
      if (isBasisFor(*Basis, C)) {
        C.Basis = &(*Basis);
        break;
      }
    }
  }
  // Regardless of whether we find a basis for C, we need to push C to the
  // candidate list so that it can be the basis of other candidates.
  Candidates.push_back(C);
}

void StraightLineStrengthReduce::allocateCandidatesAndFindBasis(
    Instruction *I) {
  switch (I->getOpcode()) {
  case Instruction::Add:
    allocateCandidatesAndFindBasisForAdd(I);
    break;
  case Instruction::Mul:
    allocateCandidatesAndFindBasisForMul(I);
    break;
  case Instruction::GetElementPtr:
    allocateCandidatesAndFindBasisForGEP(cast<GetElementPtrInst>(I));
    break;
  }
}

void StraightLineStrengthReduce::allocateCandidatesAndFindBasisForAdd(
    Instruction *I) {
  // Try matching B + i * S.
  if (!isa<IntegerType>(I->getType()))
    return;

  assert(I->getNumOperands() == 2 && "isn't I an add?");
  Value *LHS = I->getOperand(0), *RHS = I->getOperand(1);
  allocateCandidatesAndFindBasisForAdd(LHS, RHS, I);
  if (LHS != RHS)
    allocateCandidatesAndFindBasisForAdd(RHS, LHS, I);
}

void StraightLineStrengthReduce::allocateCandidatesAndFindBasisForAdd(
    Value *LHS, Value *RHS, Instruction *I) {
  Value *S = nullptr;
  ConstantInt *Idx = nullptr;
  if (match(RHS, m_Mul(m_Value(S), m_ConstantInt(Idx)))) {
    // I = LHS + RHS = LHS + Idx * S
    allocateCandidatesAndFindBasis(Candidate::Add, SE->getSCEV(LHS), Idx, S, I);
  } else if (match(RHS, m_Shl(m_Value(S), m_ConstantInt(Idx)))) {
    // I = LHS + RHS = LHS + (S << Idx) = LHS + S * (1 << Idx)
    APInt One(Idx->getBitWidth(), 1);
    Idx = ConstantInt::get(Idx->getContext(), One << Idx->getValue());
    allocateCandidatesAndFindBasis(Candidate::Add, SE->getSCEV(LHS), Idx, S, I);
  } else {
    // At least, I = LHS + 1 * RHS
    ConstantInt *One = ConstantInt::get(cast<IntegerType>(I->getType()), 1);
    allocateCandidatesAndFindBasis(Candidate::Add, SE->getSCEV(LHS), One, RHS,
                                   I);
  }
}

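// Illustrative sketch (names are hypothetical, not from a test case): for an
// IR sequence like
//   %rhs = shl i32 %s, 2
//   %i   = add i32 %lhs, %rhs
// the shl arm above records the candidate "%lhs + 4 * %s", because
// (S << 2) == S * (1 << 2) == S * 4.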
// Returns true if A matches B + C where C is constant.
static bool matchesAdd(Value *A, Value *&B, ConstantInt *&C) {
  return (match(A, m_Add(m_Value(B), m_ConstantInt(C))) ||
          match(A, m_Add(m_ConstantInt(C), m_Value(B))));
}

// Returns true if A matches B | C where C is constant.
static bool matchesOr(Value *A, Value *&B, ConstantInt *&C) {
  return (match(A, m_Or(m_Value(B), m_ConstantInt(C))) ||
          match(A, m_Or(m_ConstantInt(C), m_Value(B))));
}

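// Hedged example of why matchesOr matters (illustrative values): if B is known
// to be a multiple of 4, then "B | 3" sets only bits that are zero in B, so
// haveNoCommonBitsSet succeeds and "B | 3" is equivalent to "B + 3", letting
// the multiplication match the "(Base + Index) * RHS" form below.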
void StraightLineStrengthReduce::allocateCandidatesAndFindBasisForMul(
    Value *LHS, Value *RHS, Instruction *I) {
  Value *B = nullptr;
  ConstantInt *Idx = nullptr;
  if (matchesAdd(LHS, B, Idx)) {
    // If LHS is in the form of "Base + Index", then I is in the form of
    // "(Base + Index) * RHS".
    allocateCandidatesAndFindBasis(Candidate::Mul, SE->getSCEV(B), Idx, RHS, I);
  } else if (matchesOr(LHS, B, Idx) && haveNoCommonBitsSet(B, Idx, *DL)) {
    // If LHS is in the form of "Base | Index" and Base and Index have no common
    // bits set, then
    //   Base | Index = Base + Index
    // and I is thus in the form of "(Base + Index) * RHS".
    allocateCandidatesAndFindBasis(Candidate::Mul, SE->getSCEV(B), Idx, RHS, I);
  } else {
    // Otherwise, at least try the form (LHS + 0) * RHS.
    ConstantInt *Zero = ConstantInt::get(cast<IntegerType>(I->getType()), 0);
    allocateCandidatesAndFindBasis(Candidate::Mul, SE->getSCEV(LHS), Zero, RHS,
                                   I);
  }
}

void StraightLineStrengthReduce::allocateCandidatesAndFindBasisForMul(
    Instruction *I) {
  // Try matching (B + i) * S.
  // TODO: we could extend SLSR to float and vector types.
  if (!isa<IntegerType>(I->getType()))
    return;

  assert(I->getNumOperands() == 2 && "isn't I a mul?");
  Value *LHS = I->getOperand(0), *RHS = I->getOperand(1);
  allocateCandidatesAndFindBasisForMul(LHS, RHS, I);
  if (LHS != RHS) {
    // Symmetrically, try to split RHS to Base + Index.
    allocateCandidatesAndFindBasisForMul(RHS, LHS, I);
  }
}

void StraightLineStrengthReduce::allocateCandidatesAndFindBasisForGEP(
    const SCEV *B, ConstantInt *Idx, Value *S, uint64_t ElementSize,
    Instruction *I) {
  // I = B + sext(Idx *nsw S) * ElementSize
  //   = B + (sext(Idx) * sext(S)) * ElementSize
  //   = B + (sext(Idx) * ElementSize) * sext(S)
  // Casting to IntegerType is safe because we skipped vector GEPs.
  IntegerType *IntPtrTy = cast<IntegerType>(DL->getIntPtrType(I->getType()));
  ConstantInt *ScaledIdx = ConstantInt::get(
      IntPtrTy, Idx->getSExtValue() * (int64_t)ElementSize, true);
  allocateCandidatesAndFindBasis(Candidate::GEP, B, ScaledIdx, S, I);
}

void StraightLineStrengthReduce::factorArrayIndex(Value *ArrayIdx,
                                                  const SCEV *Base,
                                                  uint64_t ElementSize,
                                                  GetElementPtrInst *GEP) {
  // At least, ArrayIdx = ArrayIdx *nsw 1.
  allocateCandidatesAndFindBasisForGEP(
      Base, ConstantInt::get(cast<IntegerType>(ArrayIdx->getType()), 1),
      ArrayIdx, ElementSize, GEP);
  Value *LHS = nullptr;
  ConstantInt *RHS = nullptr;
  // One alternative is matching the SCEV of ArrayIdx instead of ArrayIdx
  // itself. This would allow us to handle the shl case for free. However,
  // matching SCEVs has two issues:
  //
  // 1. this would complicate rewriting because the rewriting procedure
  //    would have to translate SCEVs back to IR instructions. This translation
  //    is difficult when LHS is further evaluated to a composite SCEV.
  //
  // 2. ScalarEvolution is designed to be control-flow oblivious. It tends
  //    to strip nsw/nuw flags which are critical for SLSR to trace into
  //    sext'ed multiplication.
  if (match(ArrayIdx, m_NSWMul(m_Value(LHS), m_ConstantInt(RHS)))) {
    // SLSR is currently unsafe if i * S may overflow.
    // GEP = Base + sext(LHS *nsw RHS) * ElementSize
    allocateCandidatesAndFindBasisForGEP(Base, RHS, LHS, ElementSize, GEP);
  } else if (match(ArrayIdx, m_NSWShl(m_Value(LHS), m_ConstantInt(RHS)))) {
    // GEP = Base + sext(LHS <<nsw RHS) * ElementSize
    //     = Base + sext(LHS *nsw (1 << RHS)) * ElementSize
    APInt One(RHS->getBitWidth(), 1);
    ConstantInt *PowerOf2 =
        ConstantInt::get(RHS->getContext(), One << RHS->getValue());
    allocateCandidatesAndFindBasisForGEP(Base, PowerOf2, LHS, ElementSize, GEP);
  }
}

void StraightLineStrengthReduce::allocateCandidatesAndFindBasisForGEP(
    GetElementPtrInst *GEP) {
  // TODO: handle vector GEPs
  if (GEP->getType()->isVectorTy())
    return;

  SmallVector<const SCEV *, 4> IndexExprs;
  for (auto I = GEP->idx_begin(); I != GEP->idx_end(); ++I)
    IndexExprs.push_back(SE->getSCEV(*I));

  gep_type_iterator GTI = gep_type_begin(GEP);
  for (unsigned I = 1, E = GEP->getNumOperands(); I != E; ++I, ++GTI) {
    if (GTI.isStruct())
      continue;

    const SCEV *OrigIndexExpr = IndexExprs[I - 1];
    IndexExprs[I - 1] = SE->getZero(OrigIndexExpr->getType());

    // The base of this candidate is GEP's base plus the offsets of all
    // indices except this current one.
    const SCEV *BaseExpr = SE->getGEPExpr(cast<GEPOperator>(GEP), IndexExprs);
    Value *ArrayIdx = GEP->getOperand(I);
    uint64_t ElementSize = DL->getTypeAllocSize(GTI.getIndexedType());
    if (ArrayIdx->getType()->getIntegerBitWidth() <=
        DL->getPointerSizeInBits(GEP->getAddressSpace())) {
      // Skip factoring if ArrayIdx is wider than the pointer size, because
      // ArrayIdx is implicitly truncated to the pointer size.
      factorArrayIndex(ArrayIdx, BaseExpr, ElementSize, GEP);
    }
    // When ArrayIdx is the sext of a value, we try to factor that value as
    // well.  Handling this case is important because array indices are
    // typically sign-extended to the pointer size.
    Value *TruncatedArrayIdx = nullptr;
    if (match(ArrayIdx, m_SExt(m_Value(TruncatedArrayIdx))) &&
        TruncatedArrayIdx->getType()->getIntegerBitWidth() <=
            DL->getPointerSizeInBits(GEP->getAddressSpace())) {
      // Skip factoring if TruncatedArrayIdx is wider than the pointer size,
      // because TruncatedArrayIdx is implicitly truncated to the pointer size.
      factorArrayIndex(TruncatedArrayIdx, BaseExpr, ElementSize, GEP);
    }

    IndexExprs[I - 1] = OrigIndexExpr;
  }
}

// A helper function that unifies the bitwidth of A and B.
static void unifyBitWidth(APInt &A, APInt &B) {
  if (A.getBitWidth() < B.getBitWidth())
    A = A.sext(B.getBitWidth());
  else if (A.getBitWidth() > B.getBitWidth())
    B = B.sext(A.getBitWidth());
}

Value *StraightLineStrengthReduce::emitBump(const Candidate &Basis,
                                            const Candidate &C,
                                            IRBuilder<> &Builder,
                                            const DataLayout *DL,
                                            bool &BumpWithUglyGEP) {
  APInt Idx = C.Index->getValue(), BasisIdx = Basis.Index->getValue();
  unifyBitWidth(Idx, BasisIdx);
  APInt IndexOffset = Idx - BasisIdx;

  BumpWithUglyGEP = false;
  if (Basis.CandidateKind == Candidate::GEP) {
    APInt ElementSize(
        IndexOffset.getBitWidth(),
        DL->getTypeAllocSize(
            cast<GetElementPtrInst>(Basis.Ins)->getResultElementType()));
    APInt Q, R;
    APInt::sdivrem(IndexOffset, ElementSize, Q, R);
    if (R == 0)
      IndexOffset = Q;
    else
      BumpWithUglyGEP = true;
  }

  // Compute Bump = C - Basis = (i' - i) * S.
  // Common case 1: if (i' - i) is 1, Bump = S.
  if (IndexOffset == 1)
    return C.Stride;
  // Common case 2: if (i' - i) is -1, Bump = -S.
  if (IndexOffset.isAllOnesValue())
    return Builder.CreateNeg(C.Stride);

  // Otherwise, Bump = (i' - i) * sext/trunc(S). Note that (i' - i) and S may
  // have different bit widths.
  IntegerType *DeltaType =
      IntegerType::get(Basis.Ins->getContext(), IndexOffset.getBitWidth());
  Value *ExtendedStride = Builder.CreateSExtOrTrunc(C.Stride, DeltaType);
  if (IndexOffset.isPowerOf2()) {
    // If (i' - i) is a power of 2, Bump = sext/trunc(S) << log(i' - i).
    ConstantInt *Exponent = ConstantInt::get(DeltaType, IndexOffset.logBase2());
    return Builder.CreateShl(ExtendedStride, Exponent);
  }
  if ((-IndexOffset).isPowerOf2()) {
    // If (i - i') is a power of 2, Bump = -sext/trunc(S) << log(i - i').
    ConstantInt *Exponent =
        ConstantInt::get(DeltaType, (-IndexOffset).logBase2());
    return Builder.CreateNeg(Builder.CreateShl(ExtendedStride, Exponent));
  }
  Constant *Delta = ConstantInt::get(DeltaType, IndexOffset);
  return Builder.CreateMul(ExtendedStride, Delta);
}

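// Worked example (illustrative, assuming same-width indices): if Basis is
//   X = B + 2 * S
// and C is
//   Y = B + 6 * S,
// then IndexOffset = 6 - 2 = 4, which is a power of 2, so emitBump returns
// "S << 2" and Y can later be rewritten as "X + (S << 2)".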
|
|
|
|
|
Add straight-line strength reduction to LLVM
Summary:
Straight-line strength reduction (SLSR) is implemented in GCC but not yet in
LLVM. It has proven to effectively simplify statements derived from an unrolled
loop, and can potentially benefit many other cases too. For example,
LLVM unrolls
#pragma unroll
foo (int i = 0; i < 3; ++i) {
sum += foo((b + i) * s);
}
into
sum += foo(b * s);
sum += foo((b + 1) * s);
sum += foo((b + 2) * s);
However, no optimizations yet reduce the internal redundancy of the three
expressions:
b * s
(b + 1) * s
(b + 2) * s
With SLSR, LLVM can optimize these three expressions into:
t1 = b * s
t2 = t1 + s
t3 = t2 + s
This commit is only an initial step towards implementing a series of such
optimizations. I will implement more (see TODO in the file commentary) in the
near future. This optimization is enabled for the NVPTX backend for now.
However, I am more than happy to push it to the standard optimization pipeline
after more thorough performance tests.
Test Plan: test/StraightLineStrengthReduce/slsr.ll
Reviewers: eliben, HaoLiu, meheff, hfinkel, jholewinski, atrick
Reviewed By: jholewinski, atrick
Subscribers: karthikthecool, jholewinski, llvm-commits
Differential Revision: http://reviews.llvm.org/D7310
llvm-svn: 228016
2015-02-04 03:37:06 +08:00
|
|
|
void StraightLineStrengthReduce::rewriteCandidateWithBasis(
|
|
|
|
const Candidate &C, const Candidate &Basis) {
|
2015-03-27 00:49:24 +08:00
|
|
|
assert(C.CandidateKind == Basis.CandidateKind && C.Base == Basis.Base &&
|
|
|
|
C.Stride == Basis.Stride);
|
2015-04-16 00:46:13 +08:00
|
|
|
// We run rewriteCandidateWithBasis on all candidates in a post-order, so the
|
|
|
|
// basis of a candidate cannot be unlinked before the candidate.
|
|
|
|
assert(Basis.Ins->getParent() != nullptr && "the basis is unlinked");
|
2015-03-27 00:49:24 +08:00
|
|
|
|
Add straight-line strength reduction to LLVM
Summary:
Straight-line strength reduction (SLSR) is implemented in GCC but not yet in
LLVM. It has proven to effectively simplify statements derived from an unrolled
loop, and can potentially benefit many other cases too. For example,
LLVM unrolls
#pragma unroll
foo (int i = 0; i < 3; ++i) {
sum += foo((b + i) * s);
}
into
sum += foo(b * s);
sum += foo((b + 1) * s);
sum += foo((b + 2) * s);
However, no optimizations yet reduce the internal redundancy of the three
expressions:
b * s
(b + 1) * s
(b + 2) * s
With SLSR, LLVM can optimize these three expressions into:
t1 = b * s
t2 = t1 + s
t3 = t2 + s
This commit is only an initial step towards implementing a series of such
optimizations. I will implement more (see TODO in the file commentary) in the
near future. This optimization is enabled for the NVPTX backend for now.
However, I am more than happy to push it to the standard optimization pipeline
after more thorough performance tests.
Test Plan: test/StraightLineStrengthReduce/slsr.ll
Reviewers: eliben, HaoLiu, meheff, hfinkel, jholewinski, atrick
Reviewed By: jholewinski, atrick
Subscribers: karthikthecool, jholewinski, llvm-commits
Differential Revision: http://reviews.llvm.org/D7310
llvm-svn: 228016
2015-02-04 03:37:06 +08:00
|
|
|
// An instruction can correspond to multiple candidates. Therefore, instead of
|
|
|
|
// simply deleting an instruction when we rewrite it, we mark its parent as
|
|
|
|
// nullptr (i.e. unlink it) so that we can skip the candidates whose
|
|
|
|
// instruction is already rewritten.
|
|
|
|
if (!C.Ins->getParent())
|
|
|
|
return;
|
2015-03-27 00:49:24 +08:00
|
|
|
|
Add straight-line strength reduction to LLVM
Summary:
Straight-line strength reduction (SLSR) is implemented in GCC but not yet in
LLVM. It has proven to effectively simplify statements derived from an unrolled
loop, and can potentially benefit many other cases too. For example,
LLVM unrolls
#pragma unroll
foo (int i = 0; i < 3; ++i) {
sum += foo((b + i) * s);
}
into
sum += foo(b * s);
sum += foo((b + 1) * s);
sum += foo((b + 2) * s);
However, no optimizations yet reduce the internal redundancy of the three
expressions:
b * s
(b + 1) * s
(b + 2) * s
With SLSR, LLVM can optimize these three expressions into:
t1 = b * s
t2 = t1 + s
t3 = t2 + s
This commit is only an initial step towards implementing a series of such
optimizations. I will implement more (see TODO in the file commentary) in the
near future. This optimization is enabled for the NVPTX backend for now.
However, I am more than happy to push it to the standard optimization pipeline
after more thorough performance tests.
Test Plan: test/StraightLineStrengthReduce/slsr.ll
Reviewers: eliben, HaoLiu, meheff, hfinkel, jholewinski, atrick
Reviewed By: jholewinski, atrick
Subscribers: karthikthecool, jholewinski, llvm-commits
Differential Revision: http://reviews.llvm.org/D7310
llvm-svn: 228016
2015-02-04 03:37:06 +08:00
|
|
|
IRBuilder<> Builder(C.Ins);
|
2015-03-27 00:49:24 +08:00
|
|
|
bool BumpWithUglyGEP;
|
|
|
|
Value *Bump = emitBump(Basis, C, Builder, DL, BumpWithUglyGEP);
|
|
|
|
Value *Reduced = nullptr; // equivalent to but weaker than C.Ins
|
|
|
|
switch (C.CandidateKind) {
|
2015-04-16 00:46:13 +08:00
|
|
|
case Candidate::Add:
|
2018-10-23 22:07:39 +08:00
|
|
|
case Candidate::Mul: {
|
2015-04-22 03:56:18 +08:00
|
|
|
// C = Basis + Bump
|
2018-10-23 22:07:39 +08:00
|
|
|
Value *NegBump;
|
|
|
|
if (match(Bump, m_Neg(m_Value(NegBump)))) {
|
2015-04-22 03:56:18 +08:00
|
|
|
// If Bump is a neg instruction, emit C = Basis - (-Bump).
|
2018-10-23 22:07:39 +08:00
|
|
|
Reduced = Builder.CreateSub(Basis.Ins, NegBump);
|
2015-04-22 03:56:18 +08:00
|
|
|
// We only use the negative argument of Bump, and Bump itself may be
|
|
|
|
// trivially dead.
|
|
|
|
RecursivelyDeleteTriviallyDeadInstructions(Bump);
|
2015-04-16 00:46:13 +08:00
|
|
|
} else {
|
2015-06-18 11:35:57 +08:00
|
|
|
// It's tempting to preserve nsw on Bump and/or Reduced. However, it's
|
|
|
|
// usually unsound, e.g.,
|
|
|
|
//
|
|
|
|
// X = (-2 +nsw 1) *nsw INT_MAX
|
|
|
|
// Y = (-2 +nsw 3) *nsw INT_MAX
|
|
|
|
// =>
|
|
|
|
// Y = X + 2 * INT_MAX
|
|
|
|
//
|
|
|
|
// Neither + and * in the resultant expression are nsw.
|
2015-04-16 00:46:13 +08:00
|
|
|
Reduced = Builder.CreateAdd(Basis.Ins, Bump);
|
|
|
|
}
|
2015-03-27 00:49:24 +08:00
|
|
|
break;
|
2018-10-23 22:07:39 +08:00
|
|
|
}
|
2015-03-27 00:49:24 +08:00
|
|
|
case Candidate::GEP:
|
|
|
|
{
|
|
|
|
Type *IntPtrTy = DL->getIntPtrType(C.Ins->getType());
|
2015-04-03 05:18:32 +08:00
|
|
|
bool InBounds = cast<GetElementPtrInst>(C.Ins)->isInBounds();
|
2015-03-27 00:49:24 +08:00
|
|
|
if (BumpWithUglyGEP) {
|
|
|
|
// C = (char *)Basis + Bump
|
|
|
|
unsigned AS = Basis.Ins->getType()->getPointerAddressSpace();
|
|
|
|
Type *CharTy = Type::getInt8PtrTy(Basis.Ins->getContext(), AS);
|
|
|
|
Reduced = Builder.CreateBitCast(Basis.Ins, CharTy);
|
2015-04-03 05:18:32 +08:00
|
|
|
if (InBounds)
|
2015-04-04 05:33:42 +08:00
|
|
|
Reduced =
|
|
|
|
Builder.CreateInBoundsGEP(Builder.getInt8Ty(), Reduced, Bump);
|
2015-04-03 05:18:32 +08:00
|
|
|
else
|
2015-04-04 03:41:44 +08:00
|
|
|
Reduced = Builder.CreateGEP(Builder.getInt8Ty(), Reduced, Bump);
|
2015-03-27 00:49:24 +08:00
|
|
|
Reduced = Builder.CreateBitCast(Reduced, C.Ins->getType());
|
|
|
|
} else {
|
|
|
|
// C = gep Basis, Bump
|
|
|
|
// Canonicalize bump to pointer size.
|
|
|
|
Bump = Builder.CreateSExtOrTrunc(Bump, IntPtrTy);
|
2015-04-03 05:18:32 +08:00
|
|
|
if (InBounds)
|
2019-02-02 04:44:47 +08:00
|
|
|
Reduced = Builder.CreateInBoundsGEP(
|
|
|
|
cast<GetElementPtrInst>(Basis.Ins)->getResultElementType(),
|
|
|
|
Basis.Ins, Bump);
|
2015-04-03 05:18:32 +08:00
|
|
|
else
|
2019-02-02 04:44:47 +08:00
|
|
|
Reduced = Builder.CreateGEP(
|
|
|
|
cast<GetElementPtrInst>(Basis.Ins)->getResultElementType(),
|
|
|
|
Basis.Ins, Bump);
|
2015-03-27 00:49:24 +08:00
|
|
|
}
|
2017-10-27 09:09:08 +08:00
|
|
|
break;
|
2015-03-27 00:49:24 +08:00
|
|
|
}
|
|
|
|
default:
|
|
|
|
llvm_unreachable("C.CandidateKind is invalid");
|
|
|
|
};
  Reduced->takeName(C.Ins);
  C.Ins->replaceAllUsesWith(Reduced);
  // Unlink C.Ins so that we can skip other candidates also corresponding to
  // C.Ins. The actual deletion is postponed to the end of runOnFunction.
  C.Ins->removeFromParent();
  UnlinkedInstructions.push_back(C.Ins);
}
bool StraightLineStrengthReduce::runOnFunction(Function &F) {
  if (skipFunction(F))
    return false;

  TTI = &getAnalysis<TargetTransformInfoWrapperPass>().getTTI(F);
  DT = &getAnalysis<DominatorTreeWrapperPass>().getDomTree();
  SE = &getAnalysis<ScalarEvolutionWrapperPass>().getSE();
  // Traverse the dominator tree in the depth-first order. This order makes sure
  // all bases of a candidate are in Candidates when we process it.
  for (const auto Node : depth_first(DT))
    for (auto &I : *(Node->getBlock()))
      allocateCandidatesAndFindBasis(&I);

  // Rewrite candidates in the reverse depth-first order. This order makes sure
  // a candidate being rewritten is not a basis for any other candidate.
  while (!Candidates.empty()) {
    const Candidate &C = Candidates.back();
    if (C.Basis != nullptr) {
      rewriteCandidateWithBasis(C, *C.Basis);
    }
    Candidates.pop_back();
  }

  // Delete all unlinked instructions.
  for (auto *UnlinkedInst : UnlinkedInstructions) {
    for (unsigned I = 0, E = UnlinkedInst->getNumOperands(); I != E; ++I) {
      Value *Op = UnlinkedInst->getOperand(I);
      UnlinkedInst->setOperand(I, nullptr);
      RecursivelyDeleteTriviallyDeadInstructions(Op);
    }
    UnlinkedInst->deleteValue();
  }
  bool Ret = !UnlinkedInstructions.empty();
  UnlinkedInstructions.clear();
  return Ret;
}