; RUN: llc -mtriple=i686-linux -pre-RA-sched=source < %s | FileCheck %s
declare void @error(i32 %i, i32 %a, i32 %b)
define i32 @test_ifchains(i32 %i, i32* %a, i32 %b) {
; Test a chain of ifs, where the block guarded by the if is error handling code
; that is not expected to run.
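; Roughly, the source shape being tested is a chain of unlikely error checks
; (illustrative C sketch only, not part of the test input):
;
;   if (a[1] > 1) error(i, 1, b);
;   if (a[2] > 2) error(i, 1, b);
;   if (a[3] > 3) error(i, 1, b);
;   if (a[4] > 4) error(i, 1, b);
;   if (a[3] > 3) error(i, 1, b);
;   return b;
;
; With the !prof !0 weights marking each guarded call as cold, all of the
; %then blocks should be outlined after %exit.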
; CHECK-LABEL: test_ifchains:
; CHECK: %entry
; CHECK-NOT: .p2align
; CHECK: %else1
; CHECK-NOT: .p2align
; CHECK: %else2
; CHECK-NOT: .p2align
; CHECK: %else3
; CHECK-NOT: .p2align
; CHECK: %else4
; CHECK-NOT: .p2align
; CHECK: %exit
; CHECK: %then1
; CHECK: %then2
; CHECK: %then3
; CHECK: %then4
; CHECK: %then5
entry:
%gep1 = getelementptr i32, i32* %a, i32 1
%val1 = load i32, i32* %gep1
%cond1 = icmp ugt i32 %val1, 1
br i1 %cond1, label %then1, label %else1, !prof !0
then1:
call void @error(i32 %i, i32 1, i32 %b)
br label %else1
else1:
%gep2 = getelementptr i32, i32* %a, i32 2
%val2 = load i32, i32* %gep2
%cond2 = icmp ugt i32 %val2, 2
br i1 %cond2, label %then2, label %else2, !prof !0
then2:
call void @error(i32 %i, i32 1, i32 %b)
br label %else2
else2:
%gep3 = getelementptr i32, i32* %a, i32 3
%val3 = load i32, i32* %gep3
%cond3 = icmp ugt i32 %val3, 3
br i1 %cond3, label %then3, label %else3, !prof !0
then3:
call void @error(i32 %i, i32 1, i32 %b)
br label %else3
else3:
%gep4 = getelementptr i32, i32* %a, i32 4
%val4 = load i32, i32* %gep4
%cond4 = icmp ugt i32 %val4, 4
br i1 %cond4, label %then4, label %else4, !prof !0
then4:
call void @error(i32 %i, i32 1, i32 %b)
br label %else4
else4:
%gep5 = getelementptr i32, i32* %a, i32 3
%val5 = load i32, i32* %gep5
%cond5 = icmp ugt i32 %val5, 3
br i1 %cond5, label %then5, label %exit, !prof !0
then5:
call void @error(i32 %i, i32 1, i32 %b)
br label %exit
exit:
ret i32 %b
}
define i32 @test_loop_cold_blocks(i32 %i, i32* %a) {
; Check that we sink cold loop blocks after the hot loop body.
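; Roughly, the loop being tested looks like this (illustrative C sketch only):
;
;   for (iv = 0, sum = 0; ; ) {
;     if (sum < 42) error(i, 1, sum);   // cold per !prof !0
;     if (sum > 21) error(i, 2, sum);   // cold per !prof !0
;     sum += a[iv];
;     if (++iv == i) break;
;   }
;   return sum;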
; CHECK-LABEL: test_loop_cold_blocks:
; CHECK: %entry
; CHECK-NOT: .p2align
; CHECK: %unlikely1
; CHECK-NOT: .p2align
; CHECK: %unlikely2
; CHECK: .p2align
; CHECK: %body1
; CHECK: %body2
; CHECK: %body3
; CHECK: %exit
entry:
br label %body1
body1:
%iv = phi i32 [ 0, %entry ], [ %next, %body3 ]
%base = phi i32 [ 0, %entry ], [ %sum, %body3 ]
%unlikelycond1 = icmp slt i32 %base, 42
br i1 %unlikelycond1, label %unlikely1, label %body2, !prof !0
unlikely1:
call void @error(i32 %i, i32 1, i32 %base)
br label %body2
body2:
%unlikelycond2 = icmp sgt i32 %base, 21
br i1 %unlikelycond2, label %unlikely2, label %body3, !prof !0
unlikely2:
call void @error(i32 %i, i32 2, i32 %base)
br label %body3
body3:
%arrayidx = getelementptr inbounds i32, i32* %a, i32 %iv
%0 = load i32, i32* %arrayidx
%sum = add nsw i32 %0, %base
%next = add i32 %iv, 1
%exitcond = icmp eq i32 %next, %i
br i1 %exitcond, label %exit, label %body1
exit:
ret i32 %sum
}
!0 = !{!"branch_weights", i32 4, i32 64}
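; With these weights the first (true) successor of each annotated branch is
; given weight 4 out of a total of 68, so the guarded blocks are treated as
; cold relative to the fall-through path.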
define i32 @test_loop_early_exits(i32 %i, i32* %a) {
; Check that we sink early exit blocks out of loop bodies.
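; Roughly (illustrative C sketch only):
;
;   for (iv = 0, sum = 0; ; ) {
;     if (sum == 42) return -1;
;     if (sum == 43) return -2;
;     if (sum == 44) return -3;
;     sum += a[iv];
;     if (++iv == i) break;
;   }
;   return sum;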
; CHECK-LABEL: test_loop_early_exits:
; CHECK: %entry
; CHECK: %body1
; CHECK: %body2
; CHECK: %body3
; CHECK: %body4
; CHECK: %exit
; CHECK: %bail1
; CHECK: %bail2
; CHECK: %bail3
entry:
br label %body1
body1:
%iv = phi i32 [ 0, %entry ], [ %next, %body4 ]
%base = phi i32 [ 0, %entry ], [ %sum, %body4 ]
%bailcond1 = icmp eq i32 %base, 42
br i1 %bailcond1, label %bail1, label %body2
bail1:
ret i32 -1
body2:
%bailcond2 = icmp eq i32 %base, 43
br i1 %bailcond2, label %bail2, label %body3
bail2:
ret i32 -2
body3:
%bailcond3 = icmp eq i32 %base, 44
br i1 %bailcond3, label %bail3, label %body4
bail3:
ret i32 -3
body4:
%arrayidx = getelementptr inbounds i32, i32* %a, i32 %iv
%0 = load i32, i32* %arrayidx
%sum = add nsw i32 %0, %base
%next = add i32 %iv, 1
%exitcond = icmp eq i32 %next, %i
br i1 %exitcond, label %exit, label %body1
exit:
ret i32 %sum
}
; Tail duplication during layout can entirely remove body0 by duplicating it
; into the entry block and into body1. This is a good thing but it isn't what
; this test is looking for. So, to keep the blocks long enough that they don't
; get duplicated, we add some calls to @dummy.
declare void @dummy()
define i32 @test_loop_rotate(i32 %i, i32* %a) {
; Check that we rotate conditional exits from the loop to the bottom of the
; loop, eliminating unconditional branches to the top.
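; Here %body1 ends in an unconditional branch back to %body0, while %body0
; holds the conditional exit. Placing %body1 before %body0 turns that
; unconditional branch into a fallthrough and leaves only the conditional
; backedge at the bottom of the loop.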
; CHECK-LABEL: test_loop_rotate:
; CHECK: %entry
; CHECK: %body1
; CHECK: %body0
; CHECK: %exit
entry:
br label %body0
body0:
%iv = phi i32 [ 0, %entry ], [ %next, %body1 ]
%base = phi i32 [ 0, %entry ], [ %sum, %body1 ]
%next = add i32 %iv, 1
%exitcond = icmp eq i32 %next, %i
call void @dummy()
call void @dummy()
br i1 %exitcond, label %exit, label %body1
body1:
%arrayidx = getelementptr inbounds i32, i32* %a, i32 %iv
%0 = load i32, i32* %arrayidx
%sum = add nsw i32 %0, %base
%bailcond1 = icmp eq i32 %sum, 42
br label %body0
exit:
ret i32 %base
}
define i32 @test_no_loop_rotate(i32 %i, i32* %a) {
; Check that we don't try to rotate a loop which is already laid out with
; fallthrough opportunities into the top and out of the bottom.
; CHECK-LABEL: test_no_loop_rotate:
; CHECK: %entry
; CHECK: %body0
; CHECK: %body1
; CHECK: %exit
entry:
br label %body0
body0:
%iv = phi i32 [ 0, %entry ], [ %next, %body1 ]
%base = phi i32 [ 0, %entry ], [ %sum, %body1 ]
%arrayidx = getelementptr inbounds i32, i32* %a, i32 %iv
%0 = load i32, i32* %arrayidx
%sum = add nsw i32 %0, %base
%bailcond1 = icmp eq i32 %sum, 42
br i1 %bailcond1, label %exit, label %body1
body1:
%next = add i32 %iv, 1
%exitcond = icmp eq i32 %next, %i
br i1 %exitcond, label %exit, label %body0
exit:
ret i32 %base
}
define i32 @test_loop_align(i32 %i, i32* %a) {
; Check that we provide basic loop body alignment with the block placement
; pass.
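; A ".p2align N" directive aligns to a 2^N byte boundary; the exact exponent
; is target-dependent, so the test captures it as [[ALIGN]] and reuses the
; same value in the nested-loop test below.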
; CHECK-LABEL: test_loop_align:
; CHECK: %entry
; CHECK: .p2align [[ALIGN:[0-9]+]],
; CHECK-NEXT: %body
; CHECK: %exit
entry:
br label %body
body:
%iv = phi i32 [ 0, %entry ], [ %next, %body ]
%base = phi i32 [ 0, %entry ], [ %sum, %body ]
%arrayidx = getelementptr inbounds i32, i32* %a, i32 %iv
%0 = load i32, i32* %arrayidx
%sum = add nsw i32 %0, %base
%next = add i32 %iv, 1
%exitcond = icmp eq i32 %next, %i
|
|
|
|
br i1 %exitcond, label %exit, label %body
|
|
|
|
|
|
|
|
exit:
|
|
|
|
ret i32 %sum
|
|
|
|
}
|
|
|
|
|
|
|
|
define i32 @test_nested_loop_align(i32 %i, i32* %a, i32* %b) {
; Check that we provide nested loop body alignment.
; CHECK-LABEL: test_nested_loop_align:
; CHECK: %entry
; CHECK: .p2align [[ALIGN]],
; CHECK-NEXT: %loop.body.1
; CHECK: .p2align [[ALIGN]],
; CHECK-NEXT: %inner.loop.body
; CHECK-NOT: .p2align
; CHECK: %exit
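; The [[ALIGN]] references above are FileCheck variable uses, so the actual
; alignment value is presumably captured by an earlier .p2align check in this
; file; here we only care that both loop headers get it and %exit does not.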
entry:
  br label %loop.body.1

loop.body.1:
  %iv = phi i32 [ 0, %entry ], [ %next, %loop.body.2 ]
  %arrayidx = getelementptr inbounds i32, i32* %a, i32 %iv
  %bidx = load i32, i32* %arrayidx
  br label %inner.loop.body

inner.loop.body:
  %inner.iv = phi i32 [ 0, %loop.body.1 ], [ %inner.next, %inner.loop.body ]
  %base = phi i32 [ 0, %loop.body.1 ], [ %sum, %inner.loop.body ]
  %scaled_idx = mul i32 %bidx, %iv
  %inner.arrayidx = getelementptr inbounds i32, i32* %b, i32 %scaled_idx
  %0 = load i32, i32* %inner.arrayidx
  %sum = add nsw i32 %0, %base
  %inner.next = add i32 %iv, 1
  %inner.exitcond = icmp eq i32 %inner.next, %i
  br i1 %inner.exitcond, label %loop.body.2, label %inner.loop.body

loop.body.2:
  %next = add i32 %iv, 1
  %exitcond = icmp eq i32 %next, %i
  br i1 %exitcond, label %exit, label %loop.body.1

exit:
  ret i32 %sum
}

define void @unnatural_cfg1() {
; Test that we can handle a loop with an inner unnatural loop at the end of
; a function. This is a gross CFG reduced out of the single source GCC.
; CHECK-LABEL: unnatural_cfg1
; CHECK: %entry
; CHECK: %loop.body1
; CHECK: %loop.body2
; CHECK: %loop.body3
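; Roughly, %loop.header and %loop.body1 form the outer loop, while
; %loop.body3 and %loop.body5 form an inner cycle that can be entered at
; %loop.body3 (from %loop.body1 or %loop.body2) and at %loop.body5 (from
; %loop.body4), so it has two entry points and is not a natural loop.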
entry:
  br label %loop.header

loop.header:
  br label %loop.body1

loop.body1:
  br i1 undef, label %loop.body3, label %loop.body2

loop.body2:
  %ptr = load i32*, i32** undef, align 4
  br label %loop.body3

loop.body3:
  %myptr = phi i32* [ %ptr2, %loop.body5 ], [ %ptr, %loop.body2 ], [ undef, %loop.body1 ]
  %bcmyptr = bitcast i32* %myptr to i32*
  %val = load i32, i32* %bcmyptr, align 4
  %comp = icmp eq i32 %val, 48
  br i1 %comp, label %loop.body4, label %loop.body5

loop.body4:
  br i1 undef, label %loop.header, label %loop.body5

loop.body5:
  %ptr2 = load i32*, i32** undef, align 4
  br label %loop.body3
}

define void @unnatural_cfg2() {
; Test that we can handle a loop with a nested natural loop *and* an unnatural
; loop. This was reduced from a crash on block placement when run over
; single-source GCC.
; CHECK-LABEL: unnatural_cfg2
; CHECK: %entry
; CHECK: %loop.header
; CHECK: %loop.body1
; CHECK: %loop.body2
; CHECK: %loop.body4
; CHECK: %loop.inner2.begin
; CHECK: %loop.inner2.begin
; CHECK: %loop.body3
; CHECK: %loop.inner1.begin
; CHECK: %bail
entry:
  br label %loop.header

loop.header:
  %comp0 = icmp eq i32* undef, null
  br i1 %comp0, label %bail, label %loop.body1

loop.body1:
  %val0 = load i32*, i32** undef, align 4
  br i1 undef, label %loop.body2, label %loop.inner1.begin

loop.body2:
  br i1 undef, label %loop.body4, label %loop.body3

loop.body3:
  %ptr1 = getelementptr inbounds i32, i32* %val0, i32 0
  %castptr1 = bitcast i32* %ptr1 to i32**
  %val1 = load i32*, i32** %castptr1, align 4
  br label %loop.inner1.begin

loop.inner1.begin:
  %valphi = phi i32* [ %val2, %loop.inner1.end ], [ %val1, %loop.body3 ], [ %val0, %loop.body1 ]
  %castval = bitcast i32* %valphi to i32*
  %comp1 = icmp eq i32 undef, 48
  br i1 %comp1, label %loop.inner1.end, label %loop.body4

loop.inner1.end:
  %ptr2 = getelementptr inbounds i32, i32* %valphi, i32 0
  %castptr2 = bitcast i32* %ptr2 to i32**
  %val2 = load i32*, i32** %castptr2, align 4
  br label %loop.inner1.begin

loop.body4.dead:
  br label %loop.body4

loop.body4:
  %comp2 = icmp ult i32 undef, 3
  br i1 %comp2, label %loop.inner2.begin, label %loop.end

loop.inner2.begin:
  br i1 false, label %loop.end, label %loop.inner2.end

loop.inner2.end:
  %comp3 = icmp eq i32 undef, 1769472
  br i1 %comp3, label %loop.end, label %loop.inner2.begin

loop.end:
  br label %loop.header

bail:
  unreachable
}

define i32 @problematic_switch() {
; This function's CFG caused an overflow in the machine branch probability
; calculation, triggering asserts. Make sure we don't crash on it.
; CHECK: problematic_switch
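; The original crash came from the sum of the machine CFG edge weights
; growing past what a uint32 accumulator can hold; this reduced switch is
; apparently just large enough to trigger it.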
entry:
  switch i32 undef, label %exit [
    i32 879, label %bogus
    i32 877, label %step
    i32 876, label %step
    i32 875, label %step
    i32 874, label %step
    i32 873, label %step
    i32 872, label %step
    i32 868, label %step
    i32 867, label %step
    i32 866, label %step
    i32 861, label %step
    i32 860, label %step
    i32 856, label %step
    i32 855, label %step
    i32 854, label %step
    i32 831, label %step
    i32 830, label %step
    i32 829, label %step
    i32 828, label %step
    i32 815, label %step
    i32 814, label %step
    i32 811, label %step
    i32 806, label %step
    i32 805, label %step
    i32 804, label %step
    i32 803, label %step
    i32 802, label %step
    i32 801, label %step
    i32 800, label %step
    i32 799, label %step
    i32 798, label %step
    i32 797, label %step
    i32 796, label %step
    i32 795, label %step
  ]

bogus:
  unreachable

step:
  br label %exit

exit:
  %merge = phi i32 [ 3, %step ], [ 6, %entry ]
  ret i32 %merge
}

define void @fpcmp_unanalyzable_branch(i1 %cond) {
; This function's CFG contains a once-unanalyzable branch (une on floating
; point). Now that it is analyzable, we should get the best layout, in which
; each edge in 'entry' -> 'entry.if.then_crit_edge' -> 'if.then' -> 'if.end'
; is a fall-through.
; CHECK-LABEL: fpcmp_unanalyzable_branch:
; CHECK: # BB#0: # %entry
; CHECK: # BB#1: # %entry.if.then_crit_edge
; CHECK: .LBB10_5: # %if.then
; CHECK: .LBB10_6: # %if.end
; CHECK: # BB#3: # %exit
; CHECK: jne .LBB10_4
; CHECK-NEXT: jnp .LBB10_6
; CHECK: jmp .LBB10_5
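; On x86 a 'une' comparison cannot be tested with a single flag check:
; ucomisd reports the unordered case through PF, so the one IR branch in
; %exit lowers to the jne/jnp pair checked above, which is roughly what
; made this branch hard to analyze in the first place.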
entry:
; Note that this branch must be strongly biased toward
; 'entry.if.then_crit_edge' to ensure that we would try to form a chain for
; 'entry' -> 'entry.if.then_crit_edge' -> 'if.then' -> 'if.end'.
  br i1 %cond, label %entry.if.then_crit_edge, label %lor.lhs.false, !prof !1

entry.if.then_crit_edge:
  %.pre14 = load i8, i8* undef, align 1
  br label %if.then

lor.lhs.false:
  br i1 undef, label %if.end, label %exit

exit:
  %cmp.i = fcmp une double 0.000000e+00, undef
  br i1 %cmp.i, label %if.then, label %if.end, !prof !3

if.then:
  %0 = phi i8 [ %.pre14, %entry.if.then_crit_edge ], [ undef, %exit ]
  %1 = and i8 %0, 1
  store i8 %1, i8* undef, align 4
  br label %if.end

if.end:
  ret void
}

!1 = !{!"branch_weights", i32 1000, i32 1}
|
2016-03-24 05:45:37 +08:00
|
|
|
!3 = !{!"branch_weights", i32 1, i32 1000}
|
2011-11-20 19:22:06 +08:00
|
|
|
|
|
|
|
declare i32 @f()
|
|
|
|
declare i32 @g()
|
|
|
|
declare i32 @h(i32 %x)
|
|
|
|
|
|
|
|
define i32 @test_global_cfg_break_profitability() {
; Check that our metrics for the profitability of a CFG break are global rather
; than local. A successor may be very hot, but if the current block isn't, it
; doesn't matter. Within this test the 'then' block is slightly warmer than the
; 'else' block, but not nearly enough to merit merging it with the exit block
; even though the probability of 'then' branching to the 'exit' block is very
; high.
; CHECK: test_global_cfg_break_profitability
; CHECK: calll {{_?}}f
; CHECK: calll {{_?}}g
; CHECK: calll {{_?}}h
; CHECK: ret
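; The !prof weights on the entry branch (!2, defined below) make 'then' only
; about three times as likely as 'else', which is warm but not nearly hot
; enough to justify giving up the fallthrough into %exit.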
entry:
  br i1 undef, label %then, label %else, !prof !2

then:
  %then.result = call i32 @f()
  br label %exit

else:
  %else.result = call i32 @g()
  br label %exit

exit:
  %result = phi i32 [ %then.result, %then ], [ %else.result, %else ]
  %result2 = call i32 @h(i32 %result)
  ret i32 %result
}

!2 = !{!"branch_weights", i32 3, i32 1}
|
2011-11-22 21:13:16 +08:00
|
|
|
|
|
|
|
declare i32 @__gxx_personality_v0(...)
|
|
|
|
|
2015-06-18 04:52:32 +08:00
|
|
|
define void @test_eh_lpad_successor() personality i8* bitcast (i32 (...)* @__gxx_personality_v0 to i8*) {
; Sometimes the landing pad ends up as the first successor of an invoke block.
; When this happens, a strange result used to fall out of updateTerminators: we
; didn't correctly locate the fallthrough successor, assuming blindly that the
; first one was the fallthrough successor. As a result, we would add an
; erroneous jump to the landing pad thinking *that* was the default successor.
; CHECK-LABEL: test_eh_lpad_successor
; CHECK: %entry
; CHECK-NOT: jmp
; CHECK: %loop
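; The CHECK-NOT above is the heart of this test: between %entry and %loop no
; jump may be emitted, in particular no bogus jump to the landing pad just
; because it is listed as the invoke's first successor.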
entry:
  invoke i32 @f() to label %preheader unwind label %lpad

preheader:
  br label %loop

lpad:
  %lpad.val = landingpad { i8*, i32 }
          cleanup
  resume { i8*, i32 } %lpad.val

loop:
  br label %loop
}

declare void @fake_throw() noreturn

define void @test_eh_throw() personality i8* bitcast (i32 (...)* @__gxx_personality_v0 to i8*) {
; For blocks containing a 'throw' (or similar functionality), we have
; a no-return invoke. In this case, only EH successors will exist, and
; fallthrough simply won't occur. Make sure we don't crash trying to update
; terminators for such constructs.
;
; CHECK-LABEL: test_eh_throw
; CHECK: %entry
; CHECK: %cleanup
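; Because @fake_throw is noreturn, %continue is unreachable and the invoke
; in %entry has no real fallthrough; only the unwind edge into %cleanup
; matters here, which is all the checks above require.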
entry:
  invoke void @fake_throw() to label %continue unwind label %cleanup

continue:
  unreachable

cleanup:
  %0 = landingpad { i8*, i32 }
          cleanup
  unreachable
}

define void @test_unnatural_cfg_backwards_inner_loop() {
; Test that when we encounter an unnatural CFG structure after having formed
; a chain for an inner loop which happened to be laid out backwards, we don't
; attempt to merge onto the wrong end of the inner loop just because we find it
; first. This was reduced from a crasher in GCC's single source.
;
; CHECK-LABEL: test_unnatural_cfg_backwards_inner_loop
|
2011-11-23 11:03:21 +08:00
|
|
|
; CHECK: %entry
|
|
|
|
; CHECK: %loop2b
|
2011-11-27 17:22:53 +08:00
|
|
|
; CHECK: %loop1
|
2011-11-23 11:03:21 +08:00
|
|
|
|
|
|
|
entry:
|
|
|
|
br i1 undef, label %loop2a, label %body
|
|
|
|
|
|
|
|
body:
|
|
|
|
br label %loop2a
|
|
|
|
|
|
|
|
loop1:
|
2015-02-28 05:17:42 +08:00
|
|
|
%next.load = load i32*, i32** undef
|
2011-11-23 11:03:21 +08:00
|
|
|
br i1 %comp.a, label %loop2a, label %loop2b
|
|
|
|
|
|
|
|
loop2a:
|
|
|
|
%var = phi i32* [ null, %entry ], [ null, %body ], [ %next.phi, %loop1 ]
|
|
|
|
%next.var = phi i32* [ null, %entry ], [ undef, %body ], [ %next.load, %loop1 ]
|
|
|
|
%comp.a = icmp eq i32* %var, null
|
|
|
|
br label %loop3
|
|
|
|
|
|
|
|
loop2b:
|
[opaque pointer type] Add textual IR support for explicit type parameter to getelementptr instruction
One of several parallel first steps to remove the target type of pointers,
replacing them with a single opaque pointer type.
This adds an explicit type parameter to the gep instruction so that when the
first parameter becomes an opaque pointer type, the type to gep through is
still available to the instructions.
* This doesn't modify gep operators, only instructions (operators will be
handled separately)
* Textual IR changes only. Bitcode (including upgrade) and changing the
in-memory representation will be in separate changes.
* geps of vectors are transformed as:
getelementptr <4 x float*> %x, ...
->getelementptr float, <4 x float*> %x, ...
Then, once the opaque pointer type is introduced, this will ultimately look
like:
getelementptr float, <4 x ptr> %x
with the unambiguous interpretation that it is a vector of pointers to float.
* address spaces remain on the pointer, not the type:
getelementptr float addrspace(1)* %x
->getelementptr float, float addrspace(1)* %x
Then, eventually:
getelementptr float, ptr addrspace(1) %x
Importantly, the massive amount of test case churn has been automated by
same crappy python code. I had to manually update a few test cases that
wouldn't fit the script's model (r228970,r229196,r229197,r229198). The
python script just massages stdin and writes the result to stdout, I
then wrapped that in a shell script to handle replacing files, then
using the usual find+xargs to migrate all the files.
update.py:
import fileinput
import sys
import re
ibrep = re.compile(r"(^.*?[^%\w]getelementptr inbounds )(((?:<\d* x )?)(.*?)(| addrspace\(\d\)) *\*(|>)(?:$| *(?:%|@|null|undef|blockaddress|getelementptr|addrspacecast|bitcast|inttoptr|\[\[[a-zA-Z]|\{\{).*$))")
normrep = re.compile( r"(^.*?[^%\w]getelementptr )(((?:<\d* x )?)(.*?)(| addrspace\(\d\)) *\*(|>)(?:$| *(?:%|@|null|undef|blockaddress|getelementptr|addrspacecast|bitcast|inttoptr|\[\[[a-zA-Z]|\{\{).*$))")
def conv(match, line):
if not match:
return line
line = match.groups()[0]
if len(match.groups()[5]) == 0:
line += match.groups()[2]
line += match.groups()[3]
line += ", "
line += match.groups()[1]
line += "\n"
return line
for line in sys.stdin:
if line.find("getelementptr ") == line.find("getelementptr inbounds"):
if line.find("getelementptr inbounds") != line.find("getelementptr inbounds ("):
line = conv(re.match(ibrep, line), line)
elif line.find("getelementptr ") != line.find("getelementptr ("):
line = conv(re.match(normrep, line), line)
sys.stdout.write(line)
apply.sh:
for name in "$@"
do
python3 `dirname "$0"`/update.py < "$name" > "$name.tmp" && mv "$name.tmp" "$name"
rm -f "$name.tmp"
done
The actual commands:
From llvm/src:
find test/ -name *.ll | xargs ./apply.sh
From llvm/src/tools/clang:
find test/ -name *.mm -o -name *.m -o -name *.cpp -o -name *.c | xargs -I '{}' ../../apply.sh "{}"
From llvm/src/tools/polly:
find test/ -name *.ll | xargs ./apply.sh
After that, check-all (with llvm, clang, clang-tools-extra, lld,
compiler-rt, and polly all checked out).
The extra 'rm' in the apply.sh script is due to a few files in clang's test
suite using interesting unicode stuff that my python script was throwing
exceptions on. None of those files needed to be migrated, so it seemed
sufficient to ignore those cases.
Reviewers: rafael, dexonsmith, grosser
Differential Revision: http://reviews.llvm.org/D7636
llvm-svn: 230786
2015-02-28 03:29:02 +08:00
|
|
|
%gep = getelementptr inbounds i32, i32* %var.phi, i32 0
|
2011-11-23 11:03:21 +08:00
|
|
|
%next.ptr = bitcast i32* %gep to i32**
|
|
|
|
store i32* %next.phi, i32** %next.ptr
|
|
|
|
br label %loop3
|
|
|
|
|
|
|
|
loop3:
|
|
|
|
%var.phi = phi i32* [ %next.phi, %loop2b ], [ %var, %loop2a ]
|
|
|
|
%next.phi = phi i32* [ %next.load, %loop2b ], [ %next.var, %loop2a ]
|
|
|
|
br label %loop1
|
|
|
|
}
|
2011-11-23 18:35:36 +08:00
|
|
|
|
|
|
|
define void @unanalyzable_branch_to_loop_header() {
; Ensure that we can handle unanalyzable branches into loop headers. We
; pre-form chains for unanalyzable branches, and will find the tail end of that
; at the start of the loop. This function uses floating point comparison
; fallthrough because that happens to always produce unanalyzable branches on
; x86.
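; For reference (a reading aid only, not a check pattern), the CFG below is
; simply entry -> {loop, exit} and loop -> {exit, loop}, so the fcmp-based
; branch in %entry is the unanalyzable branch into the loop header %loop.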
;
; CHECK-LABEL: unanalyzable_branch_to_loop_header
; CHECK: %entry
; CHECK: %loop
; CHECK: %exit

entry:
  %cmp = fcmp une double 0.000000e+00, undef
  br i1 %cmp, label %loop, label %exit

loop:
  %cond = icmp eq i8 undef, 42
  br i1 %cond, label %exit, label %loop

exit:
  ret void
}

define void @unanalyzable_branch_to_best_succ(i1 %cond) {
; Ensure that we can handle unanalyzable branches where the destination block
; gets selected as the optimal successor to merge.
;
; This branch is now analyzable and hence the destination block becomes the
; hotter one. The right order is entry->bar->exit->foo.
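; For reference (a reading aid only, not a check pattern): entry branches to
; %bar or %foo with profile metadata biasing it toward %bar, foo -> {bar, exit},
; and bar -> exit, which is consistent with the expected order noted above.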
;
; CHECK-LABEL: unanalyzable_branch_to_best_succ
; CHECK: %entry
; CHECK: %bar
; CHECK: %exit
; CHECK: %foo

entry:
  ; Bias this branch toward bar to ensure we form that chain.
  br i1 %cond, label %bar, label %foo, !prof !1

foo:
  %cmp = fcmp une double 0.000000e+00, undef
  br i1 %cmp, label %bar, label %exit

bar:
  call i32 @f()
  br label %exit

exit:
  ret void
}

define void @unanalyzable_branch_to_free_block(float %x) {
; Ensure that we can handle unanalyzable branches where the destination block
; gets selected as the best free block in the CFG.
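; For reference (a reading aid only, not a check pattern): entry -> {a, b},
; a -> c, b -> {c, exit}, c -> exit; %b ends in the unanalyzable fcmp-based
; branch, and %c appears to be the block picked as the best free block.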
;
; CHECK-LABEL: unanalyzable_branch_to_free_block
; CHECK: %entry
; CHECK: %a
; CHECK: %b
; CHECK: %c
; CHECK: %exit

entry:
  br i1 undef, label %a, label %b

a:
  call i32 @f()
  br label %c

b:
  %cmp = fcmp une float %x, undef
  br i1 %cmp, label %c, label %exit

c:
  call i32 @g()
  br label %exit

exit:
  ret void
}

define void @many_unanalyzable_branches() {
; Ensure that we don't crash as we're building up many unanalyzable branches,
; blocks, and loops.
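; For reference (a reading aid only, not a check pattern): the body below is 65
; copies of the same three-instruction pattern in unnamed blocks %0 through %64,
; each loading a volatile float, comparing it with an fcmp, and branching either
; forward to the next unnamed block or back to the current block, so every block
; is a tiny self-loop entered through an unanalyzable branch.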
;
; CHECK-LABEL: many_unanalyzable_branches
; CHECK: %entry
; CHECK: %exit

entry:
  br label %0

  %val0 = load volatile float, float* undef
  %cmp0 = fcmp une float %val0, undef
  br i1 %cmp0, label %1, label %0

  %val1 = load volatile float, float* undef
  %cmp1 = fcmp une float %val1, undef
  br i1 %cmp1, label %2, label %1

  %val2 = load volatile float, float* undef
  %cmp2 = fcmp une float %val2, undef
  br i1 %cmp2, label %3, label %2

  %val3 = load volatile float, float* undef
  %cmp3 = fcmp une float %val3, undef
  br i1 %cmp3, label %4, label %3

  %val4 = load volatile float, float* undef
  %cmp4 = fcmp une float %val4, undef
  br i1 %cmp4, label %5, label %4

  %val5 = load volatile float, float* undef
  %cmp5 = fcmp une float %val5, undef
  br i1 %cmp5, label %6, label %5

  %val6 = load volatile float, float* undef
  %cmp6 = fcmp une float %val6, undef
  br i1 %cmp6, label %7, label %6

  %val7 = load volatile float, float* undef
  %cmp7 = fcmp une float %val7, undef
  br i1 %cmp7, label %8, label %7

  %val8 = load volatile float, float* undef
  %cmp8 = fcmp une float %val8, undef
  br i1 %cmp8, label %9, label %8

  %val9 = load volatile float, float* undef
  %cmp9 = fcmp une float %val9, undef
  br i1 %cmp9, label %10, label %9

  %val10 = load volatile float, float* undef
  %cmp10 = fcmp une float %val10, undef
  br i1 %cmp10, label %11, label %10

  %val11 = load volatile float, float* undef
  %cmp11 = fcmp une float %val11, undef
  br i1 %cmp11, label %12, label %11

  %val12 = load volatile float, float* undef
  %cmp12 = fcmp une float %val12, undef
  br i1 %cmp12, label %13, label %12

  %val13 = load volatile float, float* undef
  %cmp13 = fcmp une float %val13, undef
  br i1 %cmp13, label %14, label %13

  %val14 = load volatile float, float* undef
  %cmp14 = fcmp une float %val14, undef
  br i1 %cmp14, label %15, label %14

  %val15 = load volatile float, float* undef
  %cmp15 = fcmp une float %val15, undef
  br i1 %cmp15, label %16, label %15

  %val16 = load volatile float, float* undef
  %cmp16 = fcmp une float %val16, undef
  br i1 %cmp16, label %17, label %16

  %val17 = load volatile float, float* undef
  %cmp17 = fcmp une float %val17, undef
  br i1 %cmp17, label %18, label %17

  %val18 = load volatile float, float* undef
  %cmp18 = fcmp une float %val18, undef
  br i1 %cmp18, label %19, label %18

  %val19 = load volatile float, float* undef
  %cmp19 = fcmp une float %val19, undef
  br i1 %cmp19, label %20, label %19

  %val20 = load volatile float, float* undef
  %cmp20 = fcmp une float %val20, undef
  br i1 %cmp20, label %21, label %20

  %val21 = load volatile float, float* undef
  %cmp21 = fcmp une float %val21, undef
  br i1 %cmp21, label %22, label %21

  %val22 = load volatile float, float* undef
  %cmp22 = fcmp une float %val22, undef
  br i1 %cmp22, label %23, label %22

  %val23 = load volatile float, float* undef
  %cmp23 = fcmp une float %val23, undef
  br i1 %cmp23, label %24, label %23

  %val24 = load volatile float, float* undef
  %cmp24 = fcmp une float %val24, undef
  br i1 %cmp24, label %25, label %24

  %val25 = load volatile float, float* undef
  %cmp25 = fcmp une float %val25, undef
  br i1 %cmp25, label %26, label %25

  %val26 = load volatile float, float* undef
  %cmp26 = fcmp une float %val26, undef
  br i1 %cmp26, label %27, label %26

  %val27 = load volatile float, float* undef
  %cmp27 = fcmp une float %val27, undef
  br i1 %cmp27, label %28, label %27

  %val28 = load volatile float, float* undef
  %cmp28 = fcmp une float %val28, undef
  br i1 %cmp28, label %29, label %28

  %val29 = load volatile float, float* undef
  %cmp29 = fcmp une float %val29, undef
  br i1 %cmp29, label %30, label %29

  %val30 = load volatile float, float* undef
  %cmp30 = fcmp une float %val30, undef
  br i1 %cmp30, label %31, label %30

  %val31 = load volatile float, float* undef
  %cmp31 = fcmp une float %val31, undef
  br i1 %cmp31, label %32, label %31

  %val32 = load volatile float, float* undef
  %cmp32 = fcmp une float %val32, undef
  br i1 %cmp32, label %33, label %32

  %val33 = load volatile float, float* undef
  %cmp33 = fcmp une float %val33, undef
  br i1 %cmp33, label %34, label %33

  %val34 = load volatile float, float* undef
  %cmp34 = fcmp une float %val34, undef
  br i1 %cmp34, label %35, label %34

  %val35 = load volatile float, float* undef
  %cmp35 = fcmp une float %val35, undef
  br i1 %cmp35, label %36, label %35

  %val36 = load volatile float, float* undef
  %cmp36 = fcmp une float %val36, undef
  br i1 %cmp36, label %37, label %36

  %val37 = load volatile float, float* undef
  %cmp37 = fcmp une float %val37, undef
  br i1 %cmp37, label %38, label %37

  %val38 = load volatile float, float* undef
  %cmp38 = fcmp une float %val38, undef
  br i1 %cmp38, label %39, label %38

  %val39 = load volatile float, float* undef
  %cmp39 = fcmp une float %val39, undef
  br i1 %cmp39, label %40, label %39

  %val40 = load volatile float, float* undef
  %cmp40 = fcmp une float %val40, undef
  br i1 %cmp40, label %41, label %40

  %val41 = load volatile float, float* undef
  %cmp41 = fcmp une float %val41, undef
  br i1 %cmp41, label %42, label %41

  %val42 = load volatile float, float* undef
  %cmp42 = fcmp une float %val42, undef
  br i1 %cmp42, label %43, label %42

  %val43 = load volatile float, float* undef
  %cmp43 = fcmp une float %val43, undef
  br i1 %cmp43, label %44, label %43

  %val44 = load volatile float, float* undef
  %cmp44 = fcmp une float %val44, undef
  br i1 %cmp44, label %45, label %44

  %val45 = load volatile float, float* undef
  %cmp45 = fcmp une float %val45, undef
  br i1 %cmp45, label %46, label %45

  %val46 = load volatile float, float* undef
  %cmp46 = fcmp une float %val46, undef
  br i1 %cmp46, label %47, label %46

  %val47 = load volatile float, float* undef
  %cmp47 = fcmp une float %val47, undef
  br i1 %cmp47, label %48, label %47

  %val48 = load volatile float, float* undef
  %cmp48 = fcmp une float %val48, undef
  br i1 %cmp48, label %49, label %48

  %val49 = load volatile float, float* undef
  %cmp49 = fcmp une float %val49, undef
  br i1 %cmp49, label %50, label %49

  %val50 = load volatile float, float* undef
  %cmp50 = fcmp une float %val50, undef
  br i1 %cmp50, label %51, label %50

  %val51 = load volatile float, float* undef
  %cmp51 = fcmp une float %val51, undef
  br i1 %cmp51, label %52, label %51

  %val52 = load volatile float, float* undef
  %cmp52 = fcmp une float %val52, undef
  br i1 %cmp52, label %53, label %52

  %val53 = load volatile float, float* undef
  %cmp53 = fcmp une float %val53, undef
  br i1 %cmp53, label %54, label %53

  %val54 = load volatile float, float* undef
  %cmp54 = fcmp une float %val54, undef
  br i1 %cmp54, label %55, label %54

  %val55 = load volatile float, float* undef
  %cmp55 = fcmp une float %val55, undef
  br i1 %cmp55, label %56, label %55

  %val56 = load volatile float, float* undef
  %cmp56 = fcmp une float %val56, undef
  br i1 %cmp56, label %57, label %56

  %val57 = load volatile float, float* undef
  %cmp57 = fcmp une float %val57, undef
  br i1 %cmp57, label %58, label %57

  %val58 = load volatile float, float* undef
  %cmp58 = fcmp une float %val58, undef
  br i1 %cmp58, label %59, label %58

  %val59 = load volatile float, float* undef
  %cmp59 = fcmp une float %val59, undef
  br i1 %cmp59, label %60, label %59

  %val60 = load volatile float, float* undef
  %cmp60 = fcmp une float %val60, undef
  br i1 %cmp60, label %61, label %60

  %val61 = load volatile float, float* undef
  %cmp61 = fcmp une float %val61, undef
  br i1 %cmp61, label %62, label %61

  %val62 = load volatile float, float* undef
  %cmp62 = fcmp une float %val62, undef
  br i1 %cmp62, label %63, label %62

  %val63 = load volatile float, float* undef
  %cmp63 = fcmp une float %val63, undef
  br i1 %cmp63, label %64, label %63

  %val64 = load volatile float, float* undef
  %cmp64 = fcmp une float %val64, undef
  br i1 %cmp64, label %65, label %64

  br label %exit

exit:
  ret void
}

define void @benchmark_heapsort(i32 %n, double* nocapture %ra) {
; This test case comes from the heapsort benchmark, and exemplifies several
; important aspects to block placement in the presence of loops:
; 1) Loop rotation needs to *ensure* that the desired exiting edge can be
; a fallthrough.
; 2) The exiting edge from the loop which is rotated to be laid out at the
; bottom of the loop needs to be exiting into the nearest enclosing loop (to
; which there is an exit). Otherwise, we force that enclosing loop into
; strange layouts that are significantly less efficient, often times making
; it discontiguous.
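; As a rough illustration of 1) (an explanatory note, not a check pattern): if
; a loop chain is laid out header -> body -> exiting block, rotating it so the
; exiting block comes last lets the exit edge fall through to the code after
; the loop instead of requiring an extra jump. The three ".p2align" checks
; below mark the three rotated loop tops in this function.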
|
|
|
|
;
|
Codegen: Make chains from trellis-shaped CFGs
Lay out trellis-shaped CFGs optimally.
A trellis of the shape below:
A B
|\ /|
| \ / |
| X |
| / \ |
|/ \|
C D
would be laid out A; B->C ; D by the current layout algorithm. Now we identify
trellises and lay them out either A->C; B->D or A->D; B->C. This scales with an
increasing number of predecessors. A trellis is a a group of 2 or more
predecessor blocks that all have the same successors.
because of this we can tail duplicate to extend existing trellises.
As an example consider the following CFG:
B D F H
/ \ / \ / \ / \
A---C---E---G---Ret
Where A,C,E,G are all small (Currently 2 instructions).
The CFG preserving layout is then A,B,C,D,E,F,G,H,Ret.
The current code will copy C into B, E into D and G into F and yield the layout
A,C,B(C),E,D(E),F(G),G,H,ret
define void @straight_test(i32 %tag) {
entry:
br label %test1
test1: ; A
%tagbit1 = and i32 %tag, 1
%tagbit1eq0 = icmp eq i32 %tagbit1, 0
br i1 %tagbit1eq0, label %test2, label %optional1
optional1: ; B
call void @a()
br label %test2
test2: ; C
%tagbit2 = and i32 %tag, 2
%tagbit2eq0 = icmp eq i32 %tagbit2, 0
br i1 %tagbit2eq0, label %test3, label %optional2
optional2: ; D
call void @b()
br label %test3
test3: ; E
%tagbit3 = and i32 %tag, 4
%tagbit3eq0 = icmp eq i32 %tagbit3, 0
br i1 %tagbit3eq0, label %test4, label %optional3
optional3: ; F
call void @c()
br label %test4
test4: ; G
%tagbit4 = and i32 %tag, 8
%tagbit4eq0 = icmp eq i32 %tagbit4, 0
br i1 %tagbit4eq0, label %exit, label %optional4
optional4: ; H
call void @d()
br label %exit
exit:
ret void
}
here is the layout after D27742:
straight_test: # @straight_test
; ... Prologue elided
; BB#0: # %entry ; A (merged with test1)
; ... More prologue elided
mr 30, 3
andi. 3, 30, 1
bc 12, 1, .LBB0_2
; BB#1: # %test2 ; C
rlwinm. 3, 30, 0, 30, 30
beq 0, .LBB0_3
b .LBB0_4
.LBB0_2: # %optional1 ; B (copy of C)
bl a
nop
rlwinm. 3, 30, 0, 30, 30
bne 0, .LBB0_4
.LBB0_3: # %test3 ; E
rlwinm. 3, 30, 0, 29, 29
beq 0, .LBB0_5
b .LBB0_6
.LBB0_4: # %optional2 ; D (copy of E)
bl b
nop
rlwinm. 3, 30, 0, 29, 29
bne 0, .LBB0_6
.LBB0_5: # %test4 ; G
rlwinm. 3, 30, 0, 28, 28
beq 0, .LBB0_8
b .LBB0_7
.LBB0_6: # %optional3 ; F (copy of G)
bl c
nop
rlwinm. 3, 30, 0, 28, 28
beq 0, .LBB0_8
.LBB0_7: # %optional4 ; H
bl d
nop
.LBB0_8: # %exit ; Ret
ld 30, 96(1) # 8-byte Folded Reload
addi 1, 1, 112
ld 0, 16(1)
mtlr 0
blr
The tail-duplication has produced some benefit, but it has also produced a
trellis which is not laid out optimally. With this patch, we improve the layouts
of such trellises, and decrease the cost calculation for tail-duplication
accordingly.
This patch produces the layout A,C,E,G,B,D,F,H,Ret. This layout does have
back edges, which is a negative, but it has a bigger compensating
positive, which is that it handles the case where there are long strings
of skipped blocks much better than the original layout. Both layouts
handle runs of executed blocks equally well. Branch prediction also
improves if there is any correlation between subsequent optional blocks.
Here is the resulting concrete layout:
straight_test: # @straight_test
; BB#0: # %entry ; A (merged with test1)
mr 30, 3
andi. 3, 30, 1
bc 12, 1, .LBB0_4
; BB#1: # %test2 ; C
rlwinm. 3, 30, 0, 30, 30
bne 0, .LBB0_5
.LBB0_2: # %test3 ; E
rlwinm. 3, 30, 0, 29, 29
bne 0, .LBB0_6
.LBB0_3: # %test4 ; G
rlwinm. 3, 30, 0, 28, 28
bne 0, .LBB0_7
b .LBB0_8
.LBB0_4: # %optional1 ; B (Copy of C)
bl a
nop
rlwinm. 3, 30, 0, 30, 30
beq 0, .LBB0_2
.LBB0_5: # %optional2 ; D (Copy of E)
bl b
nop
rlwinm. 3, 30, 0, 29, 29
beq 0, .LBB0_3
.LBB0_6: # %optional3 ; F (Copy of G)
bl c
nop
rlwinm. 3, 30, 0, 28, 28
beq 0, .LBB0_8
.LBB0_7: # %optional4 ; H
bl d
nop
.LBB0_8: # %exit
Differential Revision: https://reviews.llvm.org/D28522
llvm-svn: 295223
2017-02-16 03:49:14 +08:00
|
|
|
; CHECK-LABEL: @benchmark_heapsort
|
Rewrite how machine block placement handles loop rotation.
This is a complex change that resulted from a great deal of
experimentation with several different benchmarks. The one which proved
the most useful is included as a test case, but I don't know that it
captures all of the relevant changes, as I didn't have specific
regression tests for each, they were more the result of reasoning about
what the old algorithm would possibly do wrong. I'm also failing at the
moment to craft more targeted regression tests for these changes, if
anyone has ideas, it would be welcome.
The first big thing broken with the old algorithm is the idea that we
can take a basic block which has a loop-exiting successor and a looping
successor and use the looping successor as the layout top in order to
get that particular block to be the bottom of the loop after layout.
This happens to work in many cases, but not in all.
The second big thing broken was that we didn't try to select the exit
which fell into the nearest enclosing loop (to which we exit at all). As
a consequence, even if the rotation worked perfectly, it would result in
one of two bad layouts. Either the bottom of the loop would get
fallthrough, skipping across a nearer enclosing loop and thereby making
it discontiguous, or it would be forced to take an explicit jump over
the nearest enclosing loop to earch its successor. The point of the
rotation is to get fallthrough, so we need it to fallthrough to the
nearest loop it can.
The fix to the first issue is to actually layout the loop from the loop
header, and then rotate the loop such that the correct exiting edge can
be a fallthrough edge. This is actually much easier than I anticipated
because we can handle all the hard parts of finding a viable rotation
before we do the layout. We just store that, and then rotate after
layout is finished. No inner loops get split across the post-rotation
backedge because we check for them when selecting the rotation.
That fix exposed a latent problem with our exitting block selection --
we should allow the backedge to point into the middle of some inner-loop
chain as there is no real penalty to it, the whole point is that it
*won't* be a fallthrough edge. This may have blocked the rotation at all
in some cases, I have no idea and no test case as I've never seen it in
practice, it was just noticed by inspection.
Finally, all of these fixes, and studying the loops they produce,
highlighted another problem: in rotating loops like this, we sometimes
fail to align the destination of these backwards jumping edges. Fix this
by actually walking the backwards edges rather than relying on loopinfo.
This fixes regressions on heapsort if block placement is enabled as well
as lots of other cases where the previous logic would introduce an
abundance of unnecessary branches into the execution.
llvm-svn: 154783
2012-04-16 09:12:56 +08:00
|
|
|
; CHECK: %entry
|
|
|
|
; First rotated loop top.
|
2016-01-26 08:03:25 +08:00
|
|
|
; CHECK: .p2align
|
2012-04-16 21:33:36 +08:00
|
|
|
; CHECK: %while.end
|
Codegen: Tail-duplicate during placement.
The tail duplication pass uses an assumed layout when making duplication
decisions. This is fine, but passes up duplication opportunities that
may arise when blocks are outlined. Because we want the updated CFG to
affect subsequent placement decisions, this change must occur during
placement.
In order to achieve this goal, TailDuplicationPass is split into a
utility class, TailDuplicator, and the pass itself. The pass delegates
nearly everything to the TailDuplicator object, except for looping over
the blocks in a function. This allows the same code to be used for tail
duplication in both places.
This change, in concert with outlining optional branches, allows
triangle shaped code to perform much better, esepecially when the
taken/untaken branches are correlated, as it creates a second spine when
the tests are small enough.
Issue from previous rollback fixed, and a new test was added for that
case as well. Issue was worklist/scheduling/taildup issue in layout.
Issue from 2nd rollback fixed, with 2 additional tests. Issue was
tail merging/loop info/tail-duplication causing issue with loops that share
a header block.
Issue with early tail-duplication of blocks that branch to a fallthrough
predecessor fixed with test case: tail-dup-branch-to-fallthrough.ll
Differential revision: https://reviews.llvm.org/D18226
llvm-svn: 283934
2016-10-12 04:36:43 +08:00
|
|
|
; %for.cond gets completely tail-duplicated away.
|
2012-04-16 21:33:36 +08:00
|
|
|
; CHECK: %if.then
|
|
|
|
; CHECK: %if.else
|
Rewrite how machine block placement handles loop rotation.
This is a complex change that resulted from a great deal of
experimentation with several different benchmarks. The one which proved
the most useful is included as a test case, but I don't know that it
captures all of the relevant changes, as I didn't have specific
regression tests for each, they were more the result of reasoning about
what the old algorithm would possibly do wrong. I'm also failing at the
moment to craft more targeted regression tests for these changes, if
anyone has ideas, it would be welcome.
The first big thing broken with the old algorithm is the idea that we
can take a basic block which has a loop-exiting successor and a looping
successor and use the looping successor as the layout top in order to
get that particular block to be the bottom of the loop after layout.
This happens to work in many cases, but not in all.
The second big thing broken was that we didn't try to select the exit
which fell into the nearest enclosing loop (to which we exit at all). As
a consequence, even if the rotation worked perfectly, it would result in
one of two bad layouts. Either the bottom of the loop would get
fallthrough, skipping across a nearer enclosing loop and thereby making
it discontiguous, or it would be forced to take an explicit jump over
the nearest enclosing loop to earch its successor. The point of the
rotation is to get fallthrough, so we need it to fallthrough to the
nearest loop it can.
The fix to the first issue is to actually layout the loop from the loop
header, and then rotate the loop such that the correct exiting edge can
be a fallthrough edge. This is actually much easier than I anticipated
because we can handle all the hard parts of finding a viable rotation
before we do the layout. We just store that, and then rotate after
layout is finished. No inner loops get split across the post-rotation
backedge because we check for them when selecting the rotation.
That fix exposed a latent problem with our exitting block selection --
we should allow the backedge to point into the middle of some inner-loop
chain as there is no real penalty to it, the whole point is that it
*won't* be a fallthrough edge. This may have blocked the rotation at all
in some cases, I have no idea and no test case as I've never seen it in
practice, it was just noticed by inspection.
Finally, all of these fixes, and studying the loops they produce,
highlighted another problem: in rotating loops like this, we sometimes
fail to align the destination of these backwards jumping edges. Fix this
by actually walking the backwards edges rather than relying on loopinfo.
This fixes regressions on heapsort if block placement is enabled as well
as lots of other cases where the previous logic would introduce an
abundance of unnecessary branches into the execution.
llvm-svn: 154783
2012-04-16 09:12:56 +08:00
|
|
|
; CHECK: %if.end10
|
|
|
|
; Second rotated loop top
|
2016-01-26 08:03:25 +08:00
|
|
|
; CHECK: .p2align
|
2012-04-16 21:33:36 +08:00
|
|
|
; CHECK: %if.then24
|
Rewrite how machine block placement handles loop rotation.
This is a complex change that resulted from a great deal of
experimentation with several different benchmarks. The one which proved
the most useful is included as a test case, but I don't know that it
captures all of the relevant changes, as I didn't have specific
regression tests for each, they were more the result of reasoning about
what the old algorithm would possibly do wrong. I'm also failing at the
moment to craft more targeted regression tests for these changes, if
anyone has ideas, it would be welcome.
The first big thing broken with the old algorithm is the idea that we
can take a basic block which has a loop-exiting successor and a looping
successor and use the looping successor as the layout top in order to
get that particular block to be the bottom of the loop after layout.
This happens to work in many cases, but not in all.
The second big thing broken was that we didn't try to select the exit
which fell into the nearest enclosing loop (to which we exit at all). As
a consequence, even if the rotation worked perfectly, it would result in
one of two bad layouts. Either the bottom of the loop would get
fallthrough, skipping across a nearer enclosing loop and thereby making
it discontiguous, or it would be forced to take an explicit jump over
the nearest enclosing loop to earch its successor. The point of the
rotation is to get fallthrough, so we need it to fallthrough to the
nearest loop it can.
The fix to the first issue is to actually layout the loop from the loop
header, and then rotate the loop such that the correct exiting edge can
be a fallthrough edge. This is actually much easier than I anticipated
because we can handle all the hard parts of finding a viable rotation
before we do the layout. We just store that, and then rotate after
layout is finished. No inner loops get split across the post-rotation
backedge because we check for them when selecting the rotation.
That fix exposed a latent problem with our exitting block selection --
we should allow the backedge to point into the middle of some inner-loop
chain as there is no real penalty to it, the whole point is that it
*won't* be a fallthrough edge. This may have blocked the rotation at all
in some cases, I have no idea and no test case as I've never seen it in
practice, it was just noticed by inspection.
Finally, all of these fixes, and studying the loops they produce,
highlighted another problem: in rotating loops like this, we sometimes
fail to align the destination of these backwards jumping edges. Fix this
by actually walking the backwards edges rather than relying on loopinfo.
This fixes regressions on heapsort if block placement is enabled as well
as lots of other cases where the previous logic would introduce an
abundance of unnecessary branches into the execution.
llvm-svn: 154783
2012-04-16 09:12:56 +08:00
|
|
|
; CHECK: %while.cond.outer
|
|
|
|
; Third rotated loop top
|
2016-01-26 08:03:25 +08:00
|
|
|
; CHECK: .p2align
|
2012-04-16 09:12:56 +08:00
|
|
|
; CHECK: %while.cond
|
|
|
|
; CHECK: %while.body
|
|
|
|
; CHECK: %land.lhs.true
|
|
|
|
; CHECK: %if.then19
|
2013-06-24 09:55:01 +08:00
|
|
|
; CHECK: %if.end20
|
2012-04-16 09:12:56 +08:00
|
|
|
; CHECK: %if.then8
|
|
|
|
; CHECK: ret
|
|
|
|
|
|
|
|
entry:
|
|
|
|
%shr = ashr i32 %n, 1
|
|
|
|
%add = add nsw i32 %shr, 1
|
[opaque pointer type] Add textual IR support for explicit type parameter to getelementptr instruction
One of several parallel first steps to remove the target type of pointers,
replacing them with a single opaque pointer type.
This adds an explicit type parameter to the gep instruction so that when the
first parameter becomes an opaque pointer type, the type to gep through is
still available to the instructions.
* This doesn't modify gep operators, only instructions (operators will be
handled separately)
* Textual IR changes only. Bitcode (including upgrade) and changing the
in-memory representation will be in separate changes.
* geps of vectors are transformed as:
getelementptr <4 x float*> %x, ...
->getelementptr float, <4 x float*> %x, ...
Then, once the opaque pointer type is introduced, this will ultimately look
like:
getelementptr float, <4 x ptr> %x
with the unambiguous interpretation that it is a vector of pointers to float.
* address spaces remain on the pointer, not the type:
getelementptr float addrspace(1)* %x
->getelementptr float, float addrspace(1)* %x
Then, eventually:
getelementptr float, ptr addrspace(1) %x
Importantly, the massive amount of test case churn has been automated by
some crappy python code. I had to manually update a few test cases that
wouldn't fit the script's model (r228970, r229196, r229197, r229198). The
python script just massages stdin and writes the result to stdout; I
then wrapped that in a shell script to handle replacing files, and then
used the usual find+xargs to migrate all the files.
update.py:
import fileinput
import sys
import re
ibrep = re.compile(r"(^.*?[^%\w]getelementptr inbounds )(((?:<\d* x )?)(.*?)(| addrspace\(\d\)) *\*(|>)(?:$| *(?:%|@|null|undef|blockaddress|getelementptr|addrspacecast|bitcast|inttoptr|\[\[[a-zA-Z]|\{\{).*$))")
normrep = re.compile( r"(^.*?[^%\w]getelementptr )(((?:<\d* x )?)(.*?)(| addrspace\(\d\)) *\*(|>)(?:$| *(?:%|@|null|undef|blockaddress|getelementptr|addrspacecast|bitcast|inttoptr|\[\[[a-zA-Z]|\{\{).*$))")
def conv(match, line):
  if not match:
    return line
  line = match.groups()[0]
  if len(match.groups()[5]) == 0:
    line += match.groups()[2]
  line += match.groups()[3]
  line += ", "
  line += match.groups()[1]
  line += "\n"
  return line
for line in sys.stdin:
  if line.find("getelementptr ") == line.find("getelementptr inbounds"):
    if line.find("getelementptr inbounds") != line.find("getelementptr inbounds ("):
      line = conv(re.match(ibrep, line), line)
  elif line.find("getelementptr ") != line.find("getelementptr ("):
    line = conv(re.match(normrep, line), line)
  sys.stdout.write(line)
apply.sh:
for name in "$@"
do
  python3 `dirname "$0"`/update.py < "$name" > "$name.tmp" && mv "$name.tmp" "$name"
  rm -f "$name.tmp"
done
The actual commands:
From llvm/src:
find test/ -name *.ll | xargs ./apply.sh
From llvm/src/tools/clang:
find test/ -name *.mm -o -name *.m -o -name *.cpp -o -name *.c | xargs -I '{}' ../../apply.sh "{}"
From llvm/src/tools/polly:
find test/ -name *.ll | xargs ./apply.sh
After that, check-all (with llvm, clang, clang-tools-extra, lld,
compiler-rt, and polly all checked out).
The extra 'rm' in the apply.sh script is due to a few files in clang's test
suite using interesting unicode stuff that my python script was throwing
exceptions on. None of those files needed to be migrated, so it seemed
sufficient to ignore those cases.
Reviewers: rafael, dexonsmith, grosser
Differential Revision: http://reviews.llvm.org/D7636
llvm-svn: 230786
2015-02-28 03:29:02 +08:00
|
|
|
%arrayidx3 = getelementptr inbounds double, double* %ra, i64 1
|
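For illustration, here is a minimal, self-contained sketch of what the update.py
script quoted above does to a single textual gep (the input line is hypothetical;
the regex and conv() logic are copied from the script so the snippet can run on
its own):
import re
# Same ibrep pattern as in update.py above (the inbounds form).
ibrep = re.compile(r"(^.*?[^%\w]getelementptr inbounds )(((?:<\d* x )?)(.*?)(| addrspace\(\d\)) *\*(|>)(?:$| *(?:%|@|null|undef|blockaddress|getelementptr|addrspacecast|bitcast|inttoptr|\[\[[a-zA-Z]|\{\{).*$))")
def conv(match, line):
  # Splice the pointee type in as an explicit first argument of the gep.
  if not match:
    return line
  line = match.groups()[0]
  if len(match.groups()[5]) == 0:
    line += match.groups()[2]
  line += match.groups()[3]
  line += ", "
  line += match.groups()[1]
  line += "\n"
  return line
old = "  %p = getelementptr inbounds double* %ra, i64 1\n"  # hypothetical pre-migration line
print(conv(re.match(ibrep, old), old), end="")
# Prints:   %p = getelementptr inbounds double, double* %ra, i64 1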
2012-04-16 09:12:56 +08:00
|
|
|
br label %for.cond
|
|
|
|
|
|
|
|
for.cond:
|
|
|
|
%ir.0 = phi i32 [ %n, %entry ], [ %ir.1, %while.end ]
|
|
|
|
%l.0 = phi i32 [ %add, %entry ], [ %l.1, %while.end ]
|
|
|
|
%cmp = icmp sgt i32 %l.0, 1
|
|
|
|
br i1 %cmp, label %if.then, label %if.else
|
|
|
|
|
|
|
|
if.then:
|
|
|
|
%dec = add nsw i32 %l.0, -1
|
|
|
|
%idxprom = sext i32 %dec to i64
|
2015-02-28 03:29:02 +08:00
|
|
|
%arrayidx = getelementptr inbounds double, double* %ra, i64 %idxprom
|
2015-02-28 05:17:42 +08:00
|
|
|
%0 = load double, double* %arrayidx, align 8
|
2012-04-16 09:12:56 +08:00
|
|
|
br label %if.end10
|
|
|
|
|
|
|
|
if.else:
|
|
|
|
%idxprom1 = sext i32 %ir.0 to i64
|
2015-02-28 03:29:02 +08:00
|
|
|
%arrayidx2 = getelementptr inbounds double, double* %ra, i64 %idxprom1
|
2015-02-28 05:17:42 +08:00
|
|
|
%1 = load double, double* %arrayidx2, align 8
|
|
|
|
%2 = load double, double* %arrayidx3, align 8
|
2012-04-16 09:12:56 +08:00
|
|
|
store double %2, double* %arrayidx2, align 8
|
|
|
|
%dec6 = add nsw i32 %ir.0, -1
|
|
|
|
%cmp7 = icmp eq i32 %dec6, 1
|
|
|
|
br i1 %cmp7, label %if.then8, label %if.end10
|
|
|
|
|
|
|
|
if.then8:
|
|
|
|
store double %1, double* %arrayidx3, align 8
|
|
|
|
ret void
|
|
|
|
|
|
|
|
if.end10:
|
|
|
|
%ir.1 = phi i32 [ %ir.0, %if.then ], [ %dec6, %if.else ]
|
|
|
|
%l.1 = phi i32 [ %dec, %if.then ], [ %l.0, %if.else ]
|
|
|
|
%rra.0 = phi double [ %0, %if.then ], [ %1, %if.else ]
|
|
|
|
%add31 = add nsw i32 %ir.1, 1
|
|
|
|
br label %while.cond.outer
|
|
|
|
|
|
|
|
while.cond.outer:
|
|
|
|
%j.0.ph.in = phi i32 [ %l.1, %if.end10 ], [ %j.1, %if.then24 ]
|
|
|
|
%j.0.ph = shl i32 %j.0.ph.in, 1
|
|
|
|
br label %while.cond
|
|
|
|
|
|
|
|
while.cond:
|
|
|
|
%j.0 = phi i32 [ %add31, %if.end20 ], [ %j.0.ph, %while.cond.outer ]
|
|
|
|
%cmp11 = icmp sgt i32 %j.0, %ir.1
|
|
|
|
br i1 %cmp11, label %while.end, label %while.body
|
|
|
|
|
|
|
|
while.body:
|
|
|
|
%cmp12 = icmp slt i32 %j.0, %ir.1
|
|
|
|
br i1 %cmp12, label %land.lhs.true, label %if.end20
|
|
|
|
|
|
|
|
land.lhs.true:
|
|
|
|
%idxprom13 = sext i32 %j.0 to i64
|
2015-02-28 03:29:02 +08:00
|
|
|
%arrayidx14 = getelementptr inbounds double, double* %ra, i64 %idxprom13
|
2015-02-28 05:17:42 +08:00
|
|
|
%3 = load double, double* %arrayidx14, align 8
|
2012-04-16 09:12:56 +08:00
|
|
|
%add15 = add nsw i32 %j.0, 1
|
|
|
|
%idxprom16 = sext i32 %add15 to i64
|
2015-02-28 03:29:02 +08:00
|
|
|
%arrayidx17 = getelementptr inbounds double, double* %ra, i64 %idxprom16
|
2015-02-28 05:17:42 +08:00
|
|
|
%4 = load double, double* %arrayidx17, align 8
|
2012-04-16 09:12:56 +08:00
|
|
|
%cmp18 = fcmp olt double %3, %4
|
|
|
|
br i1 %cmp18, label %if.then19, label %if.end20
|
|
|
|
|
|
|
|
if.then19:
|
|
|
|
br label %if.end20
|
|
|
|
|
|
|
|
if.end20:
|
|
|
|
%j.1 = phi i32 [ %add15, %if.then19 ], [ %j.0, %land.lhs.true ], [ %j.0, %while.body ]
|
|
|
|
%idxprom21 = sext i32 %j.1 to i64
|
2015-02-28 03:29:02 +08:00
|
|
|
%arrayidx22 = getelementptr inbounds double, double* %ra, i64 %idxprom21
|
2015-02-28 05:17:42 +08:00
|
|
|
%5 = load double, double* %arrayidx22, align 8
|
2012-04-16 09:12:56 +08:00
|
|
|
%cmp23 = fcmp olt double %rra.0, %5
|
|
|
|
br i1 %cmp23, label %if.then24, label %while.cond
|
|
|
|
|
|
|
|
if.then24:
|
|
|
|
%idxprom27 = sext i32 %j.0.ph.in to i64
|
2015-02-28 03:29:02 +08:00
|
|
|
%arrayidx28 = getelementptr inbounds double, double* %ra, i64 %idxprom27
|
2012-04-16 09:12:56 +08:00
|
|
|
store double %5, double* %arrayidx28, align 8
|
|
|
|
br label %while.cond.outer
|
|
|
|
|
|
|
|
while.end:
|
|
|
|
%idxprom33 = sext i32 %j.0.ph.in to i64
|
2015-02-28 03:29:02 +08:00
|
|
|
%arrayidx34 = getelementptr inbounds double, double* %ra, i64 %idxprom33
|
2012-04-16 09:12:56 +08:00
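To make the rotation step concrete: lay the loop out header-first, decide on a viable exiting block before layout, and afterwards rotate the finished chain so that block comes last, turning its loop-exit edge into the fallthrough. A toy sketch of just that rotation (plain Python, not the MachineBlockPlacement code; block names are made up):

def rotate_for_fallthrough(chain, exiting_block):
  # chain: the loop's blocks in layout order, starting at the header.
  # exiting_block: the block whose loop-exit edge should become fallthrough.
  if exiting_block not in chain:
    return chain
  i = chain.index(exiting_block)
  # Put the chosen exiting block last; whatever is laid out after the loop
  # is then reached by falling through instead of by an extra jump.
  return chain[i + 1:] + chain[:i + 1]

print(rotate_for_fallthrough(["header", "a", "exiting", "b"], "exiting"))
# ['b', 'header', 'a', 'exiting']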
store double %rra.0, double* %arrayidx34, align 8
br label %for.cond
}

declare void @cold_function() cold

define i32 @test_cold_calls(i32* %a) {
; Test that edges to blocks post-dominated by cold calls are
; marked as not expected to be taken. They should be laid out
; at the bottom.
; CHECK-LABEL: test_cold_calls:
; CHECK: %entry
; CHECK: %else
; CHECK: %exit
; CHECK: %then

entry:
%gep1 = getelementptr i32, i32* %a, i32 1
%val1 = load i32, i32* %gep1
%cond1 = icmp ugt i32 %val1, 1
br i1 %cond1, label %then, label %else

then:
call void @cold_function()
br label %exit

else:
%gep2 = getelementptr i32, i32* %a, i32 2
%val2 = load i32, i32* %gep2
br label %exit

exit:
%ret = phi i32 [ %val1, %then ], [ %val2, %else ]
ret i32 %ret
}

; Make sure we put landingpads out of the way.
declare i32 @pers(...)

declare i32 @foo();

declare i32 @bar();

define i32 @test_lp(i32 %a) personality i32 (...)* @pers {
; CHECK-LABEL: test_lp:
; CHECK: %entry
; CHECK: %hot
; CHECK: %then
; CHECK: %cold
; CHECK: %coldlp
; CHECK: %hotlp
; CHECK: %lpret

entry:
%0 = icmp sgt i32 %a, 1
br i1 %0, label %hot, label %cold, !prof !4

hot:
%1 = invoke i32 @foo()
to label %then unwind label %hotlp

cold:
%2 = invoke i32 @bar()
to label %then unwind label %coldlp

then:
%3 = phi i32 [ %1, %hot ], [ %2, %cold ]
ret i32 %3

hotlp:
%4 = landingpad { i8*, i32 }
cleanup
br label %lpret

coldlp:
%5 = landingpad { i8*, i32 }
cleanup
br label %lpret

lpret:
%6 = phi i32 [-1, %hotlp], [-2, %coldlp]
%7 = add i32 %6, 42
ret i32 %7
}

!4 = !{!"branch_weights", i32 65536, i32 0}

; Make sure that EH pads are scheduled from the least probable one
; to the most probable one. See selectBestCandidateBlock as to why.
declare void @clean();

define void @test_flow_unwind() personality i32 (...)* @pers {
; CHECK-LABEL: test_flow_unwind:
; CHECK: %entry
; CHECK: %then
; CHECK: %exit
; CHECK: %innerlp
; CHECK: %outerlp
; CHECK: %outercleanup

entry:
%0 = invoke i32 @foo()
to label %then unwind label %outerlp

then:
%1 = invoke i32 @bar()
to label %exit unwind label %innerlp

exit:
ret void

innerlp:
%2 = landingpad { i8*, i32 }
cleanup
br label %innercleanup

outerlp:
%3 = landingpad { i8*, i32 }
cleanup
br label %outercleanup

outercleanup:
%4 = phi { i8*, i32 } [%2, %innercleanup], [%3, %outerlp]
call void @clean()
resume { i8*, i32 } %4

innercleanup:
call void @clean()
br label %outercleanup
}

Revive http://reviews.llvm.org/D12778 to handle forward-hot-prob and backward-hot-prob consistently.
Summary:
Consider the following diamond CFG:
    A
   / \
  B   C
   \ /
    D
Suppose A->B and A->C have probabilities 81% and 19%. In block-placement, A->B is called a hot edge and the final placement should be ABDC. However, the current implementation outputs ABCD. This is because when choosing the next block of B, it checks if Freq(C->D) > Freq(B->D) * 20%, which is true (if Freq(A) = 100, then Freq(B->D) = 81, Freq(C->D) = 19, and 19 > 81*20%=16.2). Actually, we should use 25% instead of 20% as the probability here, so that we have 19 < 81*25%=20.25, and the desired ABDC layout will be generated.
Reviewers: djasper, davidxl
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D20989
llvm-svn: 272203
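The arithmetic in the summary is easy to reproduce (a small sketch in plain Python, using the frequencies quoted above):

freq_b_to_d = 81.0  # A->B is the hot edge, so Freq(B->D) = 81 when Freq(A) = 100
freq_c_to_d = 19.0  # A->C is the cold edge, so Freq(C->D) = 19

# Old check: lay C out after B if Freq(C->D) > Freq(B->D) * 20%.
print(freq_c_to_d > freq_b_to_d * 0.20)  # True  -> ABCD, the undesired layout
# With the 25% threshold the hot B->D fallthrough is kept.
print(freq_c_to_d > freq_b_to_d * 0.25)  # False -> ABDC, the desired layout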

declare void @hot_function()

define void @test_hot_branch(i32* %a) {
; Test that a hot branch that has a probability a little larger than 80% will
; break CFG constraints when doing block placement.
; CHECK-LABEL: test_hot_branch:
; CHECK: %entry
; CHECK: %then
; CHECK: %exit
; CHECK: %else

entry:
%gep1 = getelementptr i32, i32* %a, i32 1
%val1 = load i32, i32* %gep1
%cond1 = icmp ugt i32 %val1, 1
br i1 %cond1, label %then, label %else, !prof !5

then:
call void @hot_function()
br label %exit

else:
call void @cold_function()
br label %exit

exit:
call void @hot_function()
ret void
}

define void @test_hot_branch_profile(i32* %a) !prof !6 {
; Test that a hot branch that has a probability a little larger than 50% will
; break CFG constraints when doing block placement when profile is available.
; CHECK-LABEL: test_hot_branch_profile:
; CHECK: %entry
; CHECK: %then
; CHECK: %exit
; CHECK: %else

entry:
%gep1 = getelementptr i32, i32* %a, i32 1
%val1 = load i32, i32* %gep1
%cond1 = icmp ugt i32 %val1, 1
br i1 %cond1, label %then, label %else, !prof !7

then:
call void @hot_function()
br label %exit

else:
call void @cold_function()
br label %exit

exit:
call void @hot_function()
ret void
}

define void @test_hot_branch_triangle_profile(i32* %a) !prof !6 {
; Test that a hot branch that has a probability a little larger than 80% will
; break triangle shaped CFG constraints when doing block placement if profile
; is present.
; CHECK-LABEL: test_hot_branch_triangle_profile:
; CHECK: %entry
; CHECK: %exit
; CHECK: %then

entry:
%gep1 = getelementptr i32, i32* %a, i32 1
%val1 = load i32, i32* %gep1
%cond1 = icmp ugt i32 %val1, 1
br i1 %cond1, label %exit, label %then, !prof !5

then:
call void @hot_function()
br label %exit

exit:
call void @hot_function()
ret void
}

define void @test_hot_branch_triangle_profile_topology(i32* %a) !prof !6 {
; Test that a hot branch that has a probability between 50% and 66% will not
; break triangle shaped CFG constraints when doing block placement if profile
; is present.
; CHECK-LABEL: test_hot_branch_triangle_profile_topology:
; CHECK: %entry
; CHECK: %then
; CHECK: %exit

entry:
%gep1 = getelementptr i32, i32* %a, i32 1
%val1 = load i32, i32* %gep1
%cond1 = icmp ugt i32 %val1, 1
br i1 %cond1, label %exit, label %then, !prof !7

then:
call void @hot_function()
br label %exit

exit:
call void @hot_function()
ret void
}
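The two triangle tests above come down to where the taken probability sits relative to the layout thresholds; a quick check of the numbers (plain Python, using the !5 and !7 weights defined at the end of the file):

p_static = 84 / (84 + 16)    # !5 -> 0.84, a little larger than 80%
p_profile = 60 / (60 + 40)   # !7 -> 0.60, between 50% and 66%

print(p_static > 0.80)            # True: placement may break the triangle's topological order
print(0.50 < p_profile < 0.66)    # True: the 50%-66% case keeps the topological order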

declare void @a()
declare void @b()

define void @test_forked_hot_diamond(i32* %a) {
; Test that a hot-branch with probability > 80% followed by a 50/50 branch
; will not place the cold predecessor if the probability for the fallthrough
; remains above 80%
; CHECK-LABEL: test_forked_hot_diamond
; CHECK: %entry
; CHECK: %then
; CHECK: %fork1
; CHECK: %else
; CHECK: %fork2
; CHECK: %exit

entry:
%gep1 = getelementptr i32, i32* %a, i32 1
%val1 = load i32, i32* %gep1
%cond1 = icmp ugt i32 %val1, 1
br i1 %cond1, label %then, label %else, !prof !5

then:
call void @hot_function()
%gep2 = getelementptr i32, i32* %a, i32 2
%val2 = load i32, i32* %gep2
%cond2 = icmp ugt i32 %val2, 2
br i1 %cond2, label %fork1, label %fork2, !prof !8

else:
call void @cold_function()
%gep3 = getelementptr i32, i32* %a, i32 3
%val3 = load i32, i32* %gep3
%cond3 = icmp ugt i32 %val3, 3
br i1 %cond3, label %fork1, label %fork2, !prof !8

fork1:
call void @a()
br label %exit

fork2:
call void @b()
br label %exit

exit:
call void @hot_function()
ret void
}

define void @test_forked_hot_diamond_gets_cold(i32* %a) {
; Test that a hot-branch with probability > 80% followed by a 50/50 branch
; will place the cold predecessor if the probability for the fallthrough
; falls below 80%
; The probability for both branches is 85%. For then2 vs else1
; this results in a compounded probability of 83%.
; Neither then2->fork1 nor then2->fork2 has a large enough relative
; probability to break the CFG.
; Relative probs:
; then2 -> fork1 vs else1 -> fork1 = 71%
; then2 -> fork2 vs else2 -> fork2 = 74%
; CHECK-LABEL: test_forked_hot_diamond_gets_cold
; CHECK: %entry
; CHECK: %then1
; CHECK: %then2
; CHECK: %else1
; CHECK: %fork1
; CHECK: %else2
; CHECK: %fork2
; CHECK: %exit

entry:
%gep1 = getelementptr i32, i32* %a, i32 1
%val1 = load i32, i32* %gep1
%cond1 = icmp ugt i32 %val1, 1
br i1 %cond1, label %then1, label %else1, !prof !9

then1:
call void @hot_function()
%gep2 = getelementptr i32, i32* %a, i32 2
%val2 = load i32, i32* %gep2
%cond2 = icmp ugt i32 %val2, 2
br i1 %cond2, label %then2, label %else2, !prof !9

else1:
call void @cold_function()
br label %fork1

then2:
call void @hot_function()
%gep3 = getelementptr i32, i32* %a, i32 3
%val3 = load i32, i32* %gep2
%cond3 = icmp ugt i32 %val2, 3
br i1 %cond3, label %fork1, label %fork2, !prof !8

else2:
call void @cold_function()
br label %fork2

fork1:
call void @a()
br label %exit

fork2:
call void @b()
br label %exit

exit:
call void @hot_function()
ret void
}
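The percentages in the comments above can be checked directly (plain Python; 0.85 comes from the 85:15 weights in !9 and the 50/50 split from !8):

p_then1 = 0.85               # entry -> then1
p_then2 = p_then1 * 0.85     # then1 -> then2, compounded: 0.7225
p_else1 = 0.15               # entry -> else1
p_else2 = p_then1 * 0.15     # then1 -> else2: 0.1275

print(p_then2 / (p_then2 + p_else1))               # ~0.83, then2 vs else1
print(p_then2 * 0.5 / (p_then2 * 0.5 + p_else1))   # ~0.71, then2->fork1 vs else1->fork1
print(p_then2 * 0.5 / (p_then2 * 0.5 + p_else2))   # ~0.74, then2->fork2 vs else2->fork2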

define void @test_forked_hot_diamond_stays_hot(i32* %a) {
; Test that a hot-branch with probability > 88.88% (1:8) followed by a 50/50
; branch will not place the cold predecessor as the probability for the
; fallthrough stays above 80%
; (1:8) followed by (1:1) is still (1:4)
; Here we use 90% probability because two in a row
; have an 89% probability vs the original branch.
; CHECK-LABEL: test_forked_hot_diamond_stays_hot
; CHECK: %entry
; CHECK: %then1
; CHECK: %then2
; CHECK: %fork1
; CHECK: %else1
; CHECK: %else2
; CHECK: %fork2
; CHECK: %exit

entry:
%gep1 = getelementptr i32, i32* %a, i32 1
%val1 = load i32, i32* %gep1
%cond1 = icmp ugt i32 %val1, 1
br i1 %cond1, label %then1, label %else1, !prof !10

then1:
call void @hot_function()
%gep2 = getelementptr i32, i32* %a, i32 2
%val2 = load i32, i32* %gep2
%cond2 = icmp ugt i32 %val2, 2
br i1 %cond2, label %then2, label %else2, !prof !10

else1:
call void @cold_function()
br label %fork1

then2:
call void @hot_function()
%gep3 = getelementptr i32, i32* %a, i32 3
%val3 = load i32, i32* %gep2
%cond3 = icmp ugt i32 %val2, 3
br i1 %cond3, label %fork1, label %fork2, !prof !8

else2:
call void @cold_function()
br label %fork2

fork1:
call void @a()
br label %exit

fork2:
call void @b()
br label %exit

exit:
call void @hot_function()
ret void
}
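Again the numbers in the comments are easy to verify (plain Python, using the 90:10 weights from !10):

p_then2 = 0.90 * 0.90    # two 90% branches in a row
print(p_then2)                         # ~0.81, still above the 80% threshold
print(p_then2 / (p_then2 + 0.10))      # ~0.89, then2 vs else1, the "89% vs the original branch"
# The general claim "(1:8) followed by (1:1) is still (1:4)":
print((8 / 9) * 0.5 / ((8 / 9) * 0.5 + 1 / 9))   # ~0.8, i.e. right at the 1:4 boundary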

; Because %endif has a higher frequency than %if, the calculations show we
; shouldn't tail-duplicate %endif so that we can place it after %if. We were
; previously undercounting the cost by ignoring execution frequency that didn't
; come from the %if->%endif path.
; CHECK-LABEL: higher_frequency_succ_tail_dup
; CHECK: %entry
; CHECK: %elseif
; CHECK: %else
; CHECK: %endif
; CHECK: %then
; CHECK: %ret
define void @higher_frequency_succ_tail_dup(i1 %a, i1 %b, i1 %c) {
entry:
br label %if

if: ; preds = %entry
call void @effect(i32 0)
br i1 %a, label %elseif, label %endif, !prof !11 ; even

elseif: ; preds = %if
call void @effect(i32 1)
br i1 %b, label %else, label %endif, !prof !11 ; even

else: ; preds = %elseif
call void @effect(i32 2)
br label %endif

endif: ; preds = %if, %elseif, %else
br i1 %c, label %then, label %ret, !prof !12 ; 5 to 3

then: ; preds = %endif
call void @effect(i32 3)
br label %ret

ret: ; preds = %endif, %then
ret void
}

Revert Revert [MBP] do not rotate loop if it creates extra branch
This is a second attempt to land this patch.
The first one resulted in a crash of the clang sanitizer buildbot.
The fix is here and a regression test is added.
This is a last fix for the corner case of PR32214. Actually this is not really a corner case in general.
We should not do a loop rotation if we create an additional branch due to it.
Consider the case where we have a loop chain H, M, B, C, where
H - header, with a viable fallthrough from the pre-header and an exit from the loop
M - some middle block
B - block with the backedge to the header, but also with an exit from the loop
C - some cold block of the loop
Suppose H is determined to be the best exit. If we do a loop rotation to M, B, C, H we can introduce an extra branch.
Let's compute the change in the number of branches:
+1 branch from pre-header to header
-1 branch from header to exit
+1 branch from header to middle block if there is such
-1 branch from cold block to header if there is one
So if C is not a predecessor of H then we introduce an extra branch.
This change prohibits rotation of the loop if both of the following are true:
the best exit has the next element in the chain as its successor, and
the last element in the chain is not a predecessor of the first element of the chain.
Reviewers: iteratee, xur, sammccall, chandlerc
Reviewed By: iteratee
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D34745
llvm-svn: 307631
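The bookkeeping from the message can be written out directly; a small sketch (plain Python, a hypothetical helper rather than the MachineBlockPlacement code) of the branch-count delta for rotating the chain [H, M, B, C] into [M, B, C, H]:

def rotation_branch_delta(chain, preds_of_header, header_has_middle_succ):
  delta = 0
  delta += 1                        # +1: pre-header -> header is no longer a fallthrough
  delta -= 1                        # -1: header -> exit becomes a fallthrough
  if header_has_middle_succ:
    delta += 1                      # +1: header -> middle block now needs a branch
  if chain[-1] in preds_of_header:
    delta -= 1                      # -1: last block -> header becomes a fallthrough
  return delta

# The PR32214 shape: the cold block C ends the chain and is not a predecessor
# of H, so rotation would add a branch and should be skipped.
print(rotation_branch_delta(["H", "M", "B", "C"], preds_of_header={"B"},
                            header_has_middle_succ=True))   # 1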

define i32 @not_rotate_if_extra_branch(i32 %count) {
; Test checks that there is no loop rotation
; if it introduces an extra branch.
; Specifically in this case because the best exit is .header,
; but it has a fallthrough to the .middle block and the last block in
; the loop chain, .slow, does not have a fallthrough to .header.
; CHECK-LABEL: not_rotate_if_extra_branch
; CHECK: %.entry
; CHECK: %.header
; CHECK: %.middle
; CHECK: %.backedge
; CHECK: %.slow
; CHECK: %.bailout
; CHECK: %.stop

.entry:
%sum.0 = shl nsw i32 %count, 1
br label %.header

.header:
%i = phi i32 [ %i.1, %.backedge ], [ 0, %.entry ]
%sum = phi i32 [ %sum.1, %.backedge ], [ %sum.0, %.entry ]
%is_exc = icmp sgt i32 %i, 9000000
br i1 %is_exc, label %.bailout, label %.middle, !prof !13

.bailout:
%sum.2 = add nsw i32 %count, 1
br label %.stop

.middle:
%pr.1 = and i32 %i, 1023
%pr.2 = icmp eq i32 %pr.1, 0
br i1 %pr.2, label %.slow, label %.backedge, !prof !14

.slow:
tail call void @effect(i32 %sum)
br label %.backedge

.backedge:
%sum.1 = add nsw i32 %i, %sum
%i.1 = add nsw i32 %i, 1
%end = icmp slt i32 %i.1, %count
br i1 %end, label %.header, label %.stop, !prof !15

.stop:
%sum.phi = phi i32 [ %sum.1, %.backedge ], [ %sum.2, %.bailout ]
ret i32 %sum.phi
}

define i32 @not_rotate_if_extra_branch_regression(i32 %count, i32 %init) {
; This is a regression test for the patch that avoids loop rotation if
; it introduces an extra branch.
; CHECK-LABEL: not_rotate_if_extra_branch_regression
; CHECK: %.entry
; CHECK: %.first_backedge
; CHECK: %.slow
; CHECK: %.second_header

.entry:
%sum.0 = shl nsw i32 %count, 1
br label %.first_header

.first_header:
%i = phi i32 [ %i.1, %.first_backedge ], [ 0, %.entry ]
%is_bo1 = icmp sgt i32 %i, 9000000
br i1 %is_bo1, label %.bailout, label %.first_backedge, !prof !14

.first_backedge:
%i.1 = add nsw i32 %i, 1
%end = icmp slt i32 %i.1, %count
br i1 %end, label %.first_header, label %.second_header, !prof !13

.second_header:
%j = phi i32 [ %j.1, %.second_backedge ], [ %init, %.first_backedge ]
%end.2 = icmp sgt i32 %j, %count
br i1 %end.2, label %.stop, label %.second_middle, !prof !14

.second_middle:
%is_slow = icmp sgt i32 %j, 9000000
br i1 %is_slow, label %.slow, label %.second_backedge, !prof !14

.slow:
tail call void @effect(i32 %j)
br label %.second_backedge

.second_backedge:
%j.1 = add nsw i32 %j, 1
%end.3 = icmp slt i32 %j, 10000000
br i1 %end.3, label %.second_header, label %.stop, !prof !13

.stop:
%res = add nsw i32 %j, %i.1
ret i32 %res

.bailout:
ret i32 0
}

declare void @effect(i32)

!5 = !{!"branch_weights", i32 84, i32 16}
!6 = !{!"function_entry_count", i32 10}
!7 = !{!"branch_weights", i32 60, i32 40}
!8 = !{!"branch_weights", i32 5001, i32 4999}
!9 = !{!"branch_weights", i32 85, i32 15}
!10 = !{!"branch_weights", i32 90, i32 10}
!11 = !{!"branch_weights", i32 1, i32 1}
!12 = !{!"branch_weights", i32 5, i32 3}
!13 = !{!"branch_weights", i32 1, i32 1}
!14 = !{!"branch_weights", i32 1, i32 1023}
!15 = !{!"branch_weights", i32 4095, i32 1}
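For reference, branch_weights map to edge probabilities as each weight over the sum of the weights; a small helper (plain Python, not LLVM's BranchProbabilityInfo) to see the numbers the tests above rely on:

def branch_probs(*weights):
  total = float(sum(weights))
  return [w / total for w in weights]

print(branch_probs(84, 16))      # !5  -> [0.84, 0.16], the "a little larger than 80%" branch
print(branch_probs(60, 40))      # !7  -> [0.6, 0.4], the "a little larger than 50%" branch
print(branch_probs(5001, 4999))  # !8  -> roughly 50/50
print(branch_probs(1, 1023))     # !14 -> about [0.001, 0.999]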