One of several parallel first steps to remove the pointee type from pointer
types, replacing them all with a single opaque pointer type.
This adds an explicit type parameter to the gep instruction so that when the
first parameter becomes an opaque pointer type, the type to gep through is
still available to the instructions.
* This doesn't modify gep operators, only instructions (operators will be
handled separately)
* Textual IR changes only. Bitcode (including upgrade) and changing the
in-memory representation will be in separate changes.
* geps of vectors are transformed as:
    getelementptr <4 x float*> %x, ...
    -> getelementptr float, <4 x float*> %x, ...
  Then, once the opaque pointer type is introduced, this will ultimately look
  like:
    getelementptr float, <4 x ptr> %x
  with the unambiguous interpretation that it is a vector of pointers to float.
* address spaces remain on the pointer, not the type:
    getelementptr float addrspace(1)* %x
    -> getelementptr float, float addrspace(1)* %x
  Then, eventually:
    getelementptr float, ptr addrspace(1) %x
Importantly, the massive amount of test case churn was automated with some
crappy Python code. I had to manually update a few test cases that wouldn't
fit the script's model (r228970, r229196, r229197, r229198). The Python
script just massages stdin and writes the result to stdout; I then wrapped
that in a shell script to handle replacing files, and used the usual
find+xargs to migrate all the files.
update.py:

import re
import sys

# Match the old-syntax first operand of a 'getelementptr inbounds': an
# optional '<N x ' vector wrapper, the pointee type, an optional address
# space qualifier, then '*' (plus '>' for vectors) and the pointer operand.
ibrep = re.compile(r"(^.*?[^%\w]getelementptr inbounds )(((?:<\d* x )?)(.*?)(| addrspace\(\d\)) *\*(|>)(?:$| *(?:%|@|null|undef|blockaddress|getelementptr|addrspacecast|bitcast|inttoptr|\[\[[a-zA-Z]|\{\{).*$))")
# The same pattern for a plain (non-inbounds) getelementptr.
normrep = re.compile(r"(^.*?[^%\w]getelementptr )(((?:<\d* x )?)(.*?)(| addrspace\(\d\)) *\*(|>)(?:$| *(?:%|@|null|undef|blockaddress|getelementptr|addrspacecast|bitcast|inttoptr|\[\[[a-zA-Z]|\{\{).*$))")

def conv(match, line):
    if not match:
        return line
    # Rebuild the line as: prefix up to and including 'getelementptr ',
    # the new explicit pointee type, ', ', then the original
    # type-and-operand text unchanged.
    line = match.groups()[0]
    if len(match.groups()[5]) == 0:
        line += match.groups()[2]
    line += match.groups()[3]
    line += ", "
    line += match.groups()[1]
    line += "\n"
    return line

for line in sys.stdin:
    if line.find("getelementptr ") == line.find("getelementptr inbounds"):
        # Skip constant-expression geps ('getelementptr inbounds (...');
        # only instructions are migrated here.
        if line.find("getelementptr inbounds") != line.find("getelementptr inbounds ("):
            line = conv(re.match(ibrep, line), line)
    elif line.find("getelementptr ") != line.find("getelementptr ("):
        line = conv(re.match(normrep, line), line)
    sys.stdout.write(line)
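For illustration, here is a quick sanity check of the rewrite on two made-up
old-syntax lines (this little harness is my own, not part of the change, and
assumes update.py sits in the current directory):

import subprocess

# Two hypothetical old-syntax gep lines, mirroring the examples above.
samples = ("  %a = getelementptr i32* %p, i64 1\n"
           "  %b = getelementptr inbounds float addrspace(1)* %q, i64 2\n")
out = subprocess.run(["python3", "update.py"], input=samples,
                     capture_output=True, text=True).stdout
print(out, end="")
# Prints the migrated lines:
#   %a = getelementptr i32, i32* %p, i64 1
#   %b = getelementptr inbounds float, float addrspace(1)* %q, i64 2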
apply.sh:

for name in "$@"
do
    python3 `dirname "$0"`/update.py < "$name" > "$name.tmp" && mv "$name.tmp" "$name"
    rm -f "$name.tmp"
done
The actual commands:
From llvm/src:
  find test/ -name '*.ll' | xargs ./apply.sh
From llvm/src/tools/clang:
  find test/ -name '*.mm' -o -name '*.m' -o -name '*.cpp' -o -name '*.c' | xargs -I '{}' ../../apply.sh "{}"
From llvm/src/tools/polly:
  find test/ -name '*.ll' | xargs ./apply.sh
After that, check-all (with llvm, clang, clang-tools-extra, lld,
compiler-rt, and polly all checked out).
The extra 'rm' in the apply.sh script is there because a few files in clang's
test suite use interesting unicode that made my Python script throw
exceptions. None of those files needed to be migrated, so it seemed
sufficient to ignore those cases.
Reviewers: rafael, dexonsmith, grosser
Differential Revision: http://reviews.llvm.org/D7636
llvm-svn: 230786
The expansion code does the same thing. Since
the operands were not defined with the correct
types, this has the side effect of fixing operand
folding: the expanded pseudo would never use
SGPRs or inline immediates.
llvm-svn: 230072
This enables a few useful combines that previously only
applied to fma.
Also, since v_mad_f32 apparently does not support denormals,
disable the existing custom-handled cases when denormals are
requested.
llvm-svn: 230071
This should allow finally fixing the f64 fdiv implementation.
The test is disabled for VI, since there seems to be a problem with one
of the buffer load instructions there.
llvm-svn: 229236
llc would hang trying to write output to a full pipe that FileCheck
wasn't reading. FileCheck wasn't reading from stdin because it needs a
file as a positional argument.
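For reference, the hang is the classic full-pipe deadlock; a minimal Python
sketch of the same failure mode (the 'yes' stand-in for llc is my own
illustration):

import subprocess

# The child writes without bound while the parent never reads; once the
# OS pipe buffer fills (commonly 64 KiB on Linux), the child's write()
# blocks forever -- just like llc feeding a FileCheck invocation that is
# reading a file argument instead of stdin.
child = subprocess.Popen(["yes"], stdout=subprocess.PIPE)
# Calling child.wait() here would deadlock; kill the child to clean up.
child.kill()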
llvm-svn: 229157
SimplifyCFG now knows how to speculate calls to intrinsic cttz/ctlz that are
'cheap' for the target. Therefore, some of the logic in CodeGenPrepare
that was originally added at revision 224899 can now be removed.
This patch is essentially a non-functional change. It removes the duplicated
logic in CodeGenPrepare and converts all the existing target-specific tests
for cttz/ctlz into SimplifyCFG tests.
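For a rough picture of the pattern being speculated, a Python sketch of my
own (not the pass's actual code):

# Frontends guard cttz because its result may be undefined at zero.
# SimplifyCFG can now drop the branch and call the intrinsic
# unconditionally when the target reports it as cheap.
def count_trailing_zeros_guarded(x, bitwidth=32):
    if x == 0:                          # the branch being speculated away
        return bitwidth
    return (x & -x).bit_length() - 1    # cttz for nonzero x

assert count_trailing_zeros_guarded(0b1100) == 2
assert count_trailing_zeros_guarded(0) == 32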
Differential Revision: http://reviews.llvm.org/D7608
llvm-svn: 229105
This is a union of these commits:
* R600/SI: Enable more tests for VI which need no changes
* R600/SI: Enable V_BCNT tests for VI
Differences:
- v_bcnt_..._e32 -> _e64
- s_load_dword* inline offset is in bytes instead of dwords
* R600/SI: Enable all tests for VI which use S_LOAD_DWORD
The inline offset is changed from dwords to bytes.
* R600/SI: Enable LDS tests for VI
Differences:
- the s_load_dword inline offset changed from dwords to bytes
- the tests checked very little on CI, so they have been fixed to check all
instructions that "SI" checked
* R600/SI: Enable lshr tests for VI
* R600/SI: Fix divrem64 tests
- "v_lshl_64" was missing "b" before "64"
- added VI-NOT checks
* R600/SI: Enable the SI.tid test for VI
* R600/SI: Enable the frem test for VI
Also, the frem_f64 checking is added for CI-VI.
* R600/SI: Add VI tests for rsq.clamped
llvm-svn: 228830
Doesn't seem necessary anymore. I think this was mostly compensating for
not enabling WQM for texture sampling instructions.
v2: Add test coverage
Reviewed-by: Tom Stellard <tom@stellard.net>
llvm-svn: 228373
If whole quad mode isn't enabled for these, the level of detail is
calculated incorrectly for pixels along diagonal triangle edges, causing
artifacts.
v2: Use a TSFlag instead of lots of switch cases
v3: Add test coverage
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=88642
Reviewed-by: Tom Stellard <tom@stellard.net>
llvm-svn: 228372
We should be setting UnrollingPreferences::MaxCount to MAX_UINT instead
of UnrollingPreferences::Count.
Count is a 'forced unrolling factor', while MaxCount sets an upper
limit on the unrolling factor.
Setting Count to MAX_UINT was causing the loop in the testcase to be
unrolled 15 times, when it only had a maximum of 4 iterations.
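A sketch of the distinction (the names and cost-model stand-in are
illustrative, not LLVM's actual interface):

# 'count' forces an exact unroll factor; 'max_count' merely caps the
# factor the cost model would otherwise choose.
def choose_unroll_factor(trip_count, count=0, max_count=0):
    if count:                         # forced factor, applied blindly
        return count
    factor = trip_count               # stand-in for the cost model's pick
    if max_count:
        factor = min(factor, max_count)
    return factor

MAX_UINT = 2**32 - 1
assert choose_unroll_factor(4, count=MAX_UINT) == MAX_UINT   # the bug
assert choose_unroll_factor(4, max_count=MAX_UINT) == 4      # the fix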
llvm-svn: 228303
The llvm.SI.end.cf intrinsic is used to mark the end of if-then blocks,
if-then-else blocks, and loops. It is responsible for updating the
exec mask to re-enable threads that had been masked during the preceding
control flow block. For example:
s_mov_b64 exec, 0x3 ; Initial exec mask
s_mov_b64 s[0:1], exec ; Saved exec mask
v_cmpx_gt_u32 exec, s[2:3], v0, 0 ; llvm.SI.if
do_stuff()
s_or_b64 exec, exec, s[0:1] ; llvm.SI.end.cf
The bug fixed by this patch was one where the llvm.SI.end.cf intrinsic
was being inserted into the header of loops. This would happen when
an if block terminated in a loop header and we would end up with
code like this:
s_mov_b64 exec, 0x3 ; Initial exec mask
s_mov_b64 s[0:1], exec ; Saved exec mask
v_cmpx_gt_u32 exec, s[2:3], v0, 0 ; llvm.SI.if
do_stuff()
LOOP: ; Start of loop header
s_or_b64 exec, exec, s[0:1]   ; llvm.SI.end.cf <- BUG: the exec mask has
                              ; the same value at the beginning of each
                              ; loop iteration.
do_stuff();
s_cbranch_execnz LOOP
The fix is to create a new basic block before the loop and insert the
llvm.SI.end.cf there. This way the exec mask is restored before the
start of the loop instead of at the beginning of each iteration.
llvm-svn: 228302
v2i32, i32, trunc i32 to i16, and trunc i32 to i8 stores are legal for
all address spaces. We had marked them as custom in order to lower
them for the private address space, but this is no longer necessary.
This enables lowering of misaligned stores of these types in the
DAGLegalizer.
llvm-svn: 228189
This can happen when a REV instruction is commuted.
The trick is not to define the _vi versions of instructions, which has these
consequences:
- code generation will always fail if a pseudo cannot be lowered
(very useful to catch bugs where an unsupported instruction somehow makes
it to the printer)
- ability to query if a pseudo can be lowered, which is done in commuteOpcode
to prevent REV from commuting to non-REV on VI
Tested-by: Michel Dänzer <michel.daenzer@amd.com>
llvm-svn: 227990
This fixes a hang when using an empty geometry shader.
v2: - don't add s_nop when followed by s_waitcnt
- cosmetic changes
Tested-by: Michel Dänzer <michel.daenzer@amd.com>
llvm-svn: 227986
This is true for SI only. CI+ supports unaligned memory accesses,
but this requires driver support, so for now we disallow unaligned
accesses for all GCN targets.
llvm-svn: 227822
Add tests for the various combines. This should
always be at least cycle neutral on all subtargets for f64,
and faster on some. For f32 we should prefer selecting
v_mad_f32 over v_fma_f32.
llvm-svn: 227484
We used to do this promotion during DAG legalization, but this
caused an infinite loop in ExpandUnalignedLoad() because it assumed
that i64 loads were legal if i64 was a legal type.
It also seems better to report i64 loads as legal, since they actually
are and we were just promoting them to simplify our tablegen files.
llvm-svn: 226945
v2: add and enable tests for SI
Signed-off-by: Jan Vesely <jan.vesely@rutgers.edu>
Reviewed-by: Matt Arsenault <Matthew.Arsenault@amd.com>
llvm-svn: 226881