Commit Graph

3464 Commits

Author SHA1 Message Date
Sanjay Patel 0cf0be993c [InstCombine] fix operands of shouldChangeType() for casted phi transform
This is a bug noted in the recent D72733 and seen
in the similar transform just above the changed source code.

I added tests with illegal types and zexts to show the bug -
we could transform legal phi ops to illegal, etc. I did not add
tests with trunc because we won't see any diffs on those patterns.
That is because InstCombiner::SliceUpIllegalIntegerPHI() appears to
do those transforms independently of datalayout. It can also create
more casts than are present in existing code.

There are some existing regression tests that do not include a
datalayout that would be altered by this fix. I assumed that the
lack of a datalayout in those regression files is an oversight, so
I added the minimal layout (make i32 legal) necessary to preserve
behavior on those tests.

Differential Revision: https://reviews.llvm.org/D73907
2020-02-04 07:45:48 -05:00
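A minimal sketch of the bug (hypothetical IR, not taken from the patch's tests): with a datalayout where only i32 is legal (e.g. "n32"), hoisting the zexts above the phi would replace the legal i32 phi with an illegal i8 phi, which the corrected operand order passed to shouldChangeType() should now reject.

  define i32 @casted_phi(i1 %c, i8 %a, i8 %b) {
  entry:
    %za = zext i8 %a to i32
    br i1 %c, label %then, label %merge
  then:
    %zb = zext i8 %b to i32
    br label %merge
  merge:
    ; before the fix this could become a "phi i8" feeding a single zext
    %p = phi i32 [ %za, %entry ], [ %zb, %then ]
    ret i32 %p
  }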
Jay Foad 2252cac694 [AMDGPU] getMemOperandsWithOffset: support BUF non-stack-access instructions with resource but no vaddr
Summary:
This enables clustering for many more BUF instructions.

Reviewers: rampitec, arsenm, nhaehnle

Subscribers: jvesely, wdng, hiraditya, kerbowa, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D73868
2020-02-03 22:49:30 +00:00
Matt Arsenault 7d3aace3f5 AMDGPU: Add flag to control mem intrinsic expansion
GlobalISel doesn't implement the expansion for these yet, so add a
flag to force expanding these so it's possible to avoid these for a
while.
2020-02-03 14:26:01 -08:00
Matt Arsenault cb7b661d3d AMDGPU: Analyze divergence of inline asm 2020-02-03 12:42:16 -08:00
Matt Arsenault 2758ae41ae AMDGPU/GlobalISel: Allow selecting s128 load/stores 2020-02-03 12:28:08 -08:00
Matt Arsenault 726446a009 AMDGPU: Fix splitting wide f32 s.buffer.load intrinsics
This would switch f32 to i32, and produce an invalid concat_vectors from
i32 pieces to an f32 vector.
2020-02-03 12:28:08 -08:00
Matt Arsenault cd7650c186 GlobalISel: Implement fewerElementsVector for G_SEXT_INREG
Start using a new strategy with a combination of merge and unmerges.

This allows scalarizing before lowering, which in cases like
<2 x s128> avoids producing giant illegal shifts.
2020-02-03 11:47:33 -08:00
Jay Foad 05297b7cbe [AMDGPU] getMemOperandsWithOffset: add resource operand for BUF instructions
Summary:
This prevents unwanted clustering of BUF instructions with the same
vaddr but different resource descriptors.

Reviewers: rampitec, arsenm, nhaehnle

Subscribers: kzhuravl, jvesely, wdng, yaxunl, dstuttard, tpr, t-tye, hiraditya, kerbowa, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D73867
2020-02-03 17:06:09 +00:00
Matt Arsenault 00b22df71d AMDGPU: Fix extra type mangling on llvm.amdgcn.if.break
These have to be the same mask type.
2020-02-03 07:02:05 -08:00
Matt Arsenault 95a9b828f3 AMDGPU/GlobalISel: Fix mem size in test
This wasn't intended to test an extload.
2020-02-03 05:41:14 -08:00
Jay Foad 97d9a76afc [AMDGPU] Don't remove short branches over kills
Summary:
D68092 introduced a new SIRemoveShortExecBranches optimization pass and
broke some graphics shaders. The problem is that it was removing
branches over KILL pseudo instructions, and the fix is to explicitly
check for that in mustRetainExeczBranch.

Reviewers: critson, arsenm, nhaehnle, cdevadas, hakzsam

Subscribers: kzhuravl, jvesely, wdng, yaxunl, dstuttard, tpr, t-tye, hiraditya, kerbowa, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D73771
2020-02-03 09:26:52 +00:00
Simon Pilgrim 547a94ffa1 Regenerate bitcast test for upcoming patch. 2020-02-02 18:27:44 +00:00
Nicolai Hähnle ba8110161d AMDGPU/GFX10: Fix NSA reassign pass when operands are undef
Summary:
Virtual registers that are undef have an empty LiveInterval at this
point, which means beginIndex() and endIndex() cannot be used. We
only need those indices to determine the range in which to scan for
affected other NSA instructions, and undef operands cannot contribute
to that range.

Reviewers: arsenm, rampitec, mareko

Subscribers: kzhuravl, jvesely, wdng, yaxunl, dstuttard, tpr, t-tye, hiraditya, kerbowa, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D73831
2020-02-01 22:41:40 +01:00
Matt Arsenault c0b12916a7 AMDGPU/GlobalISel: Use more wide vector load/stores
This improves the type breakdown for some large vectors. For example,
we now get a <4 x s32> and s32 store instead of 5 s32 stores for
<5 x s32>.
2020-02-01 10:47:21 -05:00
Matt Arsenault e3117e5c30 AMDGPU/GlobalISel: Improve legalization of wide stores
This fixes legalizations of global stores > 128-bits. It seems work is
needed on how this split actually occurs. For example, we get the
right code for s160, with an s128 and s32 load, but get 5 s32 loads
for <5 x s32>.
2020-02-01 10:47:03 -05:00
Matt Arsenault bc101ffd77 GlobalISel: Support widening unmerge results with pointer source 2020-02-01 10:47:03 -05:00
Matt Arsenault 98aaed2980 AMDGPU/GlobalISel: Fix forming G_TRUNC with vcc result
This somehow got lost when I fixed the boolean handling.
2020-01-31 20:29:41 -05:00
Matt Arsenault c28f1faaff AMDGPU: Switch some tests to use generated checks
Control flow tests are particularly annoying, and it's probably better
to have comprehensive check lines for them.
2020-01-31 20:29:41 -05:00
alex-t 5df1ac7846 [AMDGPU] fixed divergence driven shift operations selection
Differential Revision: https://reviews.llvm.org/D73483

Reviewers: rampitec
2020-01-31 20:49:56 +03:00
Matt Arsenault 6fb544d1d2 AMDGPU/GlobalISel: Combine FMIN_LEGACY/FMAX_LEGACY
Try out using combine definition rules.

This really should be a post-legalizer combine, but the combiner pass
is currently pre-legalize. Most of the target combines are really
post-legalize, so we should probably move the pass.
2020-01-31 06:58:04 -08:00
Matt Arsenault 49e424e08e AMDGPU/GlobalISel: Select global MUBUF atomicrmw 2020-01-31 06:05:41 -08:00
Matt Arsenault 0426c2d07d Reapply "AMDGPU: Cleanup and fix SMRD offset handling"
This reverts commit 6a4acb9d80.
2020-01-31 06:01:28 -08:00
Matt Arsenault 6a4acb9d80 Revert "AMDGPU: Cleanup and fix SMRD offset handling"
This reverts commit 17dbc6611d.

A test is failing on some bots
2020-01-30 15:39:51 -08:00
Matt Arsenault 17dbc6611d AMDGPU: Cleanup and fix SMRD offset handling
I believe this also fixes bugs with CI 32-bit handling, which was
incorrectly skipping offsets that look like signed 32-bit values. Also
validate the offsets are dword aligned before folding.
2020-01-30 15:04:21 -08:00
David Tenty 809c872aae [NFC] Fix check prefix add in fcanonicalize-elimination.ll
The test fix added by "D39306: Fix
CodeGen/AMDGPU/fcanonicalize-elimination.ll on FreeBSD 11.0" uses a test
prefix which is not actually used in the FileCheck stanza. Thus the
problem originally encountered still exists and the test fails for host
triples that contain "1.0", including AIX 7.1.0.
2020-01-30 17:19:49 -05:00
Matt Arsenault ea956685a1 GlobalISel: Implement s32->s64 G_FPTOSI lowering
Port directly from DAG version.

The lowering for G_FPTOUI used to fail on AMDGPU because it uses
G_FPTOSI.
2020-01-30 08:47:07 -05:00
Matt Arsenault b21571f4d5 AMDGPU/GlobalISel: Handle s64->s64 G_FPTOSI/G_FPTOUI 2020-01-30 08:46:37 -05:00
Matt Arsenault 8184176efd AMDGPU/GlobalISel: Custom lower G_LOG/G_LOG10
I'm pretty sure this is wrong and we should expand these in a correct
way, but this matches the existing behavior.
2020-01-30 08:38:50 -05:00
Matt Arsenault 872e899b75 AMDGPU/GlobalISel: Legalize unpacked d16 image operations
On targets that don't have the normal packed f16 layout, handle these
during legalization. Directly modify the register types. We can infer
this was a d16 load based on the mem operand size during selection.

A16 operands should possibly be handled here as well, but don't worry
about that yet.
2020-01-30 08:36:11 -05:00
Matt Arsenault d21182d692 AMDGPU/GlobalISel: Only map VOP operands to VGPRs
This trivially avoids violating the constant bus restriction.

Previously this was allowing one SGPR in the first source
operand, which technically also avoided violating this for most
operations (but not for special cases reading vcc).

We do need to write some new, smarter operand folds to pick the
optimal SGPR to use in some kind of post-isel fold, but that's purely
an optimization.

I was originally thinking we would pick which operands should be SGPRs
in RegBankSelect, but I think this isn't really manageable. There
would be additional complexity to handle every G_* instruction, and
then any nontrivial instruction patterns would need to know when to
avoid violating it, which is likely to be very error prone.

I think having all inputs canonically be copies to VGPRs will
simplify the operand folding logic. The current folding we do is
backwards, and only considers one operand at a time, relative to
operands it already has. It therefore poorly handles the case where
there is already a constant bus operand user. If all operands are
copies, it's somewhat simpler to consider all input operands at once
to choose the optimal constant bus user.

Since the failure mode for constant bus violations is now a verifier
error and not a selection failure, this moves towards a place where
we can turn on the fallback mode. The SGPR copy folding optimizations
can be left for later.
2020-01-30 08:32:35 -05:00
Matt Arsenault b4a0766c8d AMDGPU/GlobalISel: Select llvm.amdgcn.buffer.atomic.cmpswap 2020-01-30 08:22:43 -05:00
Connor Abbott ce06d50756 AMDGPU: Fix AMDGPUUnifyDivergentExitNodes with no normal returns
Summary:
The code was assuming in a few places that if there was only one exit
from the function that it was a normal return, which is invalid. It
could be an infinite loop, in which case we still need to insert the
usual fake edge so that the null export happens. This fixes shaders that
end with an infinite loop that discards.

Reviewers: arsenm, nhaehnle, critson

Subscribers: kzhuravl, jvesely, wdng, yaxunl, dstuttard, tpr, t-tye, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D71192
2020-01-30 10:55:02 +01:00
Matt Arsenault 7f3280ecdd AMDGPU/GlobalISel: Select permlane16/permlanex16 2020-01-29 17:55:31 -05:00
Amara Emerson c12f046eb9 [GlobalISel] Add new combine to convert scalar G_MUL to G_SHL.
For pow2 constants we should use G_SHL for pattern matching (and perf)
purposes later.

Vector support not yet implemented.

Differential Revision: https://reviews.llvm.org/D73659
2020-01-29 13:39:00 -08:00
Matt Arsenault d3cea95475 AMDGPU/GlobalISel: Fix tests in release build
Irritatingly, the failure output is different in release vs. debug
because the legality check is removed without asserts, so a register
ends up constrained only in release builds.
2020-01-29 12:27:16 -08:00
Amara Emerson 0da937bb5c [GlobalISel][IRTranslator] Follow convention and put constant offset of getelementptr arithmetic on RHS.
We were needlessly putting known constant values on the LHS of a G_MUL, which
is suboptimal.

Differential Revision: https://reviews.llvm.org/D73650
2020-01-29 11:37:19 -08:00
Austin Kerbow 2605adb69c [AMDGPU][GlobalISel] Select 8-byte LDS Ops with 4-byte alignment
Reviewers: arsenm

Subscribers: kzhuravl, jvesely, wdng, nhaehnle, yaxunl, rovka, dstuttard, tpr, t-tye, hiraditya, Petar.Avramovic, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D73585
2020-01-29 10:42:12 -08:00
Jay Foad d07a789579 [AMDGPU] Cluster FLAT instructions with both vaddr and saddr
Reviewers: rampitec, arsenm

Subscribers: kzhuravl, jvesely, wdng, nhaehnle, yaxunl, dstuttard, tpr, t-tye, hiraditya, kerbowa, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D73634
2020-01-29 17:01:35 +00:00
Matt Arsenault 62129878a6 AMDGPU/GlobalISel: Fix tablegen selection for scalar bin ops
Fixes selection for scalar G_SMULH/G_UMULH. Also switches to using
tablegen-selected add/sub, which switch to the signed version of the
opcode. This matches the current DAG behavior. We can't drop the
manual selection for add/sub yet, because it's still used both for VALU
add/sub and for G_PTR_ADD.
2020-01-29 08:55:54 -08:00
Matt Arsenault b63629a58d GlobalISel: Fix mask computation in lowerInsert
This is supposed to be the high bit index, not the width. Use the
wrapping form of getBitsSet and avoid the bitflip.
2020-01-29 08:25:36 -08:00
Jay Foad 0d7bd34312 [MachineScheduler] Ignore artificial edges when forming store chains
Summary:
BaseMemOpClusterMutation::apply forms store chains by looking for
control (i.e. non-data) dependencies from one mem op to another.

In the test case, clusterNeighboringMemOps successfully clusters the
loads, and then adds artificial edges to the loads' successors as
described in the comment:
      // Copy successor edges from SUa to SUb. Interleaving computation
      // dependent on SUa can prevent load combining due to register reuse.
The effect of this is that *data* dependencies from one load to a store
are copied as *artificial* dependencies from a different load to the
same store.

Then when BaseMemOpClusterMutation::apply looks at the stores, it finds
that some of them have a control dependency on a previous load, which
breaks the chains and means that the stores are not all considered part
of the same chain and won't all be clustered.

The fix is to only consider non-artificial control dependencies when
forming chains.

Subscribers: MatzeB, jvesely, nhaehnle, hiraditya, javed.absar, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D71717
2020-01-29 16:23:01 +00:00
Matt Arsenault 96352e0a1b AMDGPU/GlobalISel: Handle LDS with relocations case 2020-01-29 08:18:55 -08:00
Connor Abbott 87d98c1495 AMDGPU: Fix handling of infinite loops in fragment shaders
Summary:
Due to the fact that kill is just a normal intrinsic, even though it's
supposed to terminate the thread, we can end up with provably infinite
loops that are actually supposed to end successfully. The
AMDGPUUnifyDivergentExitNodes pass breaks up these loops, but because
there's no obvious place to make the loop branch to, it just makes it
return immediately, which skips the exports that are supposed to happen
at the end and hangs the GPU if all the threads end up being killed.

While it would be nice if the fact that kill terminates the thread were
modeled in the IR, I think that the structurizer as-is would make a mess if we
did that when the kill is inside control flow. For now, we just add a null
export at the end to make sure that it always exports something, which fixes
the immediate problem without penalizing the more common case. This means that
we sometimes do two "done" exports when only some of the threads enter the
discard loop, but from tests the hardware seems ok with that.

This fixes dEQP-VK.graphicsfuzz.while-inside-switch with radv.

Reviewers: arsenm, nhaehnle

Subscribers: kzhuravl, jvesely, wdng, yaxunl, dstuttard, tpr, t-tye, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D70781
2020-01-29 17:13:25 +01:00
Matt Arsenault 94e8ef4d4c AMDGPU/GlobalISel: Look through copies for source modifiers
When all VOP instructions are legalized to VGPRs, any SGPR source
modifiers will have a copy in the way.
2020-01-29 08:08:13 -08:00
Matt Arsenault 752e2e245a AMDGPU/GlobalISel: Rewrite fadd select tests
Convert to the style most others use with one test instruction per
function, and use an implicit use to ensure the result register class
is constrained.

Change-Id: I6109148b0e3c80aa5535796a37abca583c19a936
2020-01-29 07:49:38 -08:00
Connor Abbott 08b205bb48 Revert "AMDGPU: Fix handling of infinite loops in fragment shaders"
This reverts commit 0994c485e6.
2020-01-29 16:14:52 +01:00
Connor Abbott 13ab22ab22 Revert "AMDGPU: Fix AMDGPUUnifyDivergentExitNodes with no normal returns"
This reverts commit 323bfde20c.
2020-01-29 16:14:49 +01:00
Matt Arsenault 02adfb5155 AMDGPU/GlobalISel: Manually select scalar f64 G_FNEG
This should be no problem to support with a pattern, but it turns out
there are just too many yaks to shave. The main problem is in the DAG
emitter, which I have no desire to sink effort into fixing.

If we had a bit to disable patterns in the DAG importer, fixing the
GlobalISelEmitter would be more manageable.
2020-01-29 06:49:16 -08:00
Matt Arsenault c5c1bb3374 GlobalISel: Lower G_WRITE_REGISTER 2020-01-29 06:48:24 -08:00
Connor Abbott 323bfde20c AMDGPU: Fix AMDGPUUnifyDivergentExitNodes with no normal returns
Summary:
The code was assuming in a few places that if there was only one exit
from the function that it was a normal return, which is invalid. It
could be an infinite loop, in which case we still need to insert the
usual fake edge so that the null export happens. This fixes shaders that
end with an infinite loop that discards.

Reviewers: arsenm, nhaehnle, critson

Subscribers: kzhuravl, jvesely, wdng, yaxunl, dstuttard, tpr, t-tye, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D71192
2020-01-29 15:08:46 +01:00
Connor Abbott 0994c485e6 AMDGPU: Fix handling of infinite loops in fragment shaders
Summary:
Due to the fact that kill is just a normal intrinsic, even though it's
supposed to terminate the thread, we can end up with provably infinite
loops that are actually supposed to end successfully. The
AMDGPUUnifyDivergentExitNodes pass breaks up these loops, but because
there's no obvious place to make the loop branch to, it just makes it
return immediately, which skips the exports that are supposed to happen
at the end and hangs the GPU if all the threads end up being killed.

While it would be nice if the fact that kill terminates the thread were
modeled in the IR, I think that the structurizer as-is would make a mess if we
did that when the kill is inside control flow. For now, we just add a null
export at the end to make sure that it always exports something, which fixes
the immediate problem without penalizing the more common case. This means that
we sometimes do two "done" exports when only some of the threads enter the
discard loop, but from tests the hardware seems ok with that.

This fixes dEQP-VK.graphicsfuzz.while-inside-switch with radv.

Reviewers: arsenm, nhaehnle

Subscribers: kzhuravl, jvesely, wdng, yaxunl, dstuttard, tpr, t-tye, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D70781
2020-01-29 15:08:46 +01:00
Jay Foad 4a331beadc [AMDGPU] Fix vccz after v_readlane/v_readfirstlane to vcc_lo/hi
Summary:
Up to gfx9, writes to vcc_lo and vcc_hi by instructions like
v_readlane and v_readfirstlane do not update vccz to reflect the new
value of vcc. Fix it by reusing part of the existing vccz bug handling
code, which inserts an "s_mov_b64 vcc, vcc" instruction to restore vccz
just before an instruction that needs the correct value.

Subscribers: arsenm, kzhuravl, jvesely, wdng, nhaehnle, yaxunl, dstuttard, tpr, t-tye, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D69661
2020-01-28 10:52:17 +00:00
Guillaume Chatelet d9bff3be99 Update tests for @llvm.memcpy.inline intrinsics 2020-01-28 10:32:43 +01:00
Matt Arsenault d2a9739274 AMDGPU/GlobalISel: Eliminate SelectVOP3Mods_f32
Trivial type predicates should be moved into the tablegen pattern
itself, and not checked inside complex patterns. This eliminates a
redundant complex pattern, and fixes select source modifiers for
GlobalISel.

I have further patches which fully handle select in tablegen and
remove all of the C++ selection, although it requires the ugliness to
support the entire range of legal register types.
2020-01-27 17:53:54 -05:00
Matt Arsenault c3075e6171 AMDGPU/GlobalISel: Select buffer atomics
The cmpswap handling is incomplete and fails to select.
2020-01-27 15:16:44 -05:00
Matt Arsenault a69c26a927 AMDGPU/GlobalISel: Select llvm.amdgcn.struct.buffer.store[.format] 2020-01-27 15:00:21 -05:00
Matt Arsenault 533d650e94 AMDGPU/GlobalISel: Move llvm.amdgcn.raw.buffer.store handling
Treat this the same way as loads. There's less value to the
intermediate nodes, but it's good to be consistent.
2020-01-27 14:59:30 -05:00
Matt Arsenault 75d66f8434 AMDGPU/GlobalISel: Select llvm.amdcn.struct.tbuffer.load 2020-01-27 14:42:04 -05:00
Matt Arsenault 09ed0e44d9 AMDGPU/GlobalISel: Select llvm.amdgcn.raw.tbuffer.load 2020-01-27 13:40:37 -05:00
Stanislav Mekhanoshin 53eb0f8c07 [AMDGPU] Attempt to reschedule without clustering
We want more load/store clustering but we also want
to maintain low register pressure, which are opposing goals.
Allow the scheduler to reschedule regions without mutations
applied if we hit a register limit.

Differential Revision: https://reviews.llvm.org/D73386
2020-01-27 10:27:16 -08:00
Matt Arsenault 97711228fd AMDGPU/GlobalISel: Select llvm.amdgcn.struct.buffer.load.format 2020-01-27 13:23:35 -05:00
Matt Arsenault ce7ca2caf2 AMDGPU/GlobalISel: Select llvm.amdgcn.struct.buffer.load 2020-01-27 13:05:55 -05:00
Matt Arsenault 198624c39d AMDGPU/GlobalISel: Select llvm.amdgcn.raw.buffer.load.format 2020-01-27 13:02:19 -05:00
Matt Arsenault fc90222a91 AMDGPU/GlobalISel: Select llvm.amdgcn.raw.buffer.load
Use intermediate instructions, unlike with buffer stores. This is
necessary because of the need to have an internal way to distinguish
between signed and unsigned extloads. This introduces some duplication
and near duplication with the buffer store selection path. The store
handling should maybe be moved into legalization to match and
eliminate the duplication.
2020-01-27 12:49:23 -05:00
Matt Arsenault e60d658260 AMDGPU/GlobalISel: Handle VOP3NoMods 2020-01-27 09:03:44 -08:00
Matt Arsenault d309b4ebe4 AMDGPU/GlobalISel: Add baseline tests for fma/fmad selection 2020-01-27 09:02:13 -08:00
Matt Arsenault bef27175c7 AMDGPU: Fix not using f16 fsin/fcos
I noticed this because this accidentally started working for
GlobalISel.
2020-01-27 08:59:59 -08:00
Jay Foad e37997cc0d [AMDGPU] Simplify test and extend to gfx9 and gfx10
Summary:
This is in preparation for adding more test cases for D69661 and other
bug fixes in the same area.

Reviewers: tpr, dstuttard, critson, nhaehnle, arsenm

Subscribers: kzhuravl, jvesely, wdng, yaxunl, t-tye, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D70708
2020-01-27 16:56:40 +00:00
Matt Arsenault a1d33ce73a AMDGPU/GlobalISel: Custom legalize v2s16 G_SHUFFLE_VECTOR
Try to keep simple v2s16 cases as-is. This will more naturally map to
how the VOP3P op_sel modifiers work compared to the expansion
involving bitcasts and bitshifts.

This could maybe try harder with wider source vector types, although
that could be handled with a pre-legalize combine.
2020-01-27 08:28:05 -08:00
Matt Arsenault 4e69df091d Revert "AMDGPU: Temporary drop s_mul_hi_i/u32 patterns"
This reverts commit fe23ed2c68.

It was never really clear this was responsible for the performance
regressions that caused this to be reverted. It's been a long time,
and we need to have scalar patterns for this to get GlobalISel
working.
2020-01-27 08:07:21 -08:00
Matt Arsenault bc3d900fa5 AMDGPU/GlobalISel: Fix not using global atomics on gfx9+
For some reason the flat/global atomics end up in the generated
matcher table in a different order from SelectionDAG. Use
AddedComplexity to prefer checking for global atomics first.
2020-01-27 07:42:42 -08:00
Matt Arsenault ac0b9b4ccf AMDGPU/GlobalISel: Select more MUBUF global addressing modes
The handling of the high bits of the resource descriptor seems weird to
me, where the 3rd dword changes based on the instruction.
2020-01-27 07:28:36 -08:00
Matt Arsenault fdaad485e6 AMDGPU/GlobalISel: Initial selection of MUBUF addr64 load/store
Fixes the main reason for compile failures on SI, but doesn't really
try to use the addressing modes yet.
2020-01-27 07:13:56 -08:00
Matt Arsenault 2214bc81d0 AMDGPU: Allow i16 shader arguments
Not allowing this just creates unnecessary complications when writing
simple tests.
2020-01-27 06:55:32 -08:00
Matt Arsenault 2a160ba5b0 GlobalISel: Reimplement widenScalar for G_UNMERGE_VALUES results
Only use shifts if the requested type exactly matches the source type,
and create sub-unmerges otherwise.
2020-01-27 06:18:26 -08:00
Matt Arsenault 06d9230fef GlobalISel: Translate vector GEPs 2020-01-27 05:35:05 -08:00
Simon Pilgrim c8de7c8f50 [TargetLowering] SimplifyDemandedBits - Remove ashr if all our demandedbits already match the sign bit
Differential Revision: https://reviews.llvm.org/D73412
2020-01-25 17:36:46 +00:00
Matt Arsenault fe9765762c AMDGPU: Generate test checks 2020-01-24 23:25:57 -05:00
Tom Stellard 86c944d790 AMDGPU/SILoadStoreOptimizer: Improve merging of out of order offsets
Summary:
This improves merging of sequences like:

store a, ptr + 4
store b, ptr + 8
store c, ptr + 12
store d, ptr + 16
store e, ptr + 20
store f, ptr

Prior to this patch the basic block was scanned in order to find instructions
to merge and the above sequence would be transformed to:

store4 <a, b, c, d>, ptr + 4
store e, ptr + 20
store f, ptr

With this change, we now sort all the candidate merge instructions by their offset,
so instructions are visited in offset order rather than in the order they appear
in the basic block. We now transform this sequence into:

store4 <f, a, b, c>, ptr
store2 <d, e>, ptr + 16

Another benefit of this change is that since we have sorted the mergeable lists
by offset, we can easily check if an instruction is mergeable by checking the
offset of the instruction that comes before or after it in the sorted list.
Once we determine an instruction is not mergeable we can remove it from the list
and avoid having to do the more expensive mergeability checks.

Reviewers: arsenm, pendingchaos, rampitec, nhaehnle, vpykhtin

Reviewed By: arsenm, nhaehnle

Subscribers: kerbowa, merge_guards_bot, kzhuravl, jvesely, wdng, yaxunl, dstuttard, tpr, t-tye, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D65966
2020-01-24 19:45:56 -08:00
Matt Arsenault 3b93945587 AMDGPU/GlobalISel: Select wqm, softwqm and wwm intrinsics 2020-01-24 13:06:44 -08:00
Matt Arsenault 87c46a3129 AMDGPU: Don't error on ds.ordered intrinsic in function
These should be assumed to be called from a compute context. Also
don't use a 2 entry switch over constants.
2020-01-24 13:06:44 -08:00
Matt Arsenault 4fdae24733 AMDGPU/GlobalISel: Add selection tests for G_ATOMICRMW_ADD 2020-01-24 12:15:09 -08:00
Stanislav Mekhanoshin 555d8f4ef5 [AMDGPU] Bundle loads before post-RA scheduler
We are relying on artificial DAG edges inserted by the
MemOpClusterMutation to keep loads and stores together in the
post-RA scheduler. This does not work all the time since it
allows a completely independent instruction to be scheduled in the
middle of the cluster.

Removed the DAG mutation and added a pass to bundle already
clustered instructions. These bundles are unpacked before the
memory legalizer because it does not work with bundles, but also
because that allows inserting waitcounts in the middle of a store
cluster.

Removing artificial edges also allows more relaxed scheduling.

Differential Revision: https://reviews.llvm.org/D72737
2020-01-24 11:33:38 -08:00
Stanislav Mekhanoshin 44b865fa7f [AMDGPU] Allow narrowing multi-dword loads
Currently the BE allows only a little load narrowing, for fear
it will produce sub-dword ext loads. However,
we can always allow narrowing if we are shrinking one
multi-dword load to another multi-dword load.

In particular we were unable to reduce s_load_dwordx8 into
s_load_dwordx4 if an identity shuffle was used to extract the
low 4 dwords.

Differential Revision: https://reviews.llvm.org/D73133
2020-01-24 11:03:41 -08:00
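A hypothetical example of the pattern described above (illustrative only, not taken from the patch's tests): an identity shuffle extracts the low dwords of a wider constant-address-space load, which the backend can now shrink.

  ; Only the low 4 dwords of the <8 x i32> load are used, so the backend can
  ; shrink the s_load_dwordx8 to an s_load_dwordx4.
  define <4 x i32> @narrow_load(<8 x i32> addrspace(4)* %p) {
    %v = load <8 x i32>, <8 x i32> addrspace(4)* %p
    %lo = shufflevector <8 x i32> %v, <8 x i32> undef,
                        <4 x i32> <i32 0, i32 1, i32 2, i32 3>
    ret <4 x i32> %lo
  }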
Stanislav Mekhanoshin 7a94d4f4ee Allow combining of extract_subvector to extract element
Differential Revision: https://reviews.llvm.org/D73132
2020-01-24 10:50:26 -08:00
Changpeng Fang 2531535984 AMDGPU: Implement FDIV optimizations in AMDGPUCodeGenPrepare
Summary:
RCP has an accuracy limit. If the FDIV fpmath requires high accuracy, rcp may
not meet the requirement. However, in DAG lowering the fpmath information gets
lost, and thus we may generate either inaccurate rcp-related computation or
slow code for fdiv.

This patch implements the fdiv optimizations in AMDGPUCodeGenPrepare, which
knows the exact !fpmath.

FastUnsafeRcpLegal: We determine whether it is legal to use rcp based on
                    unsafe-fp-math, fast math flags, denormals and the fpmath
                    accuracy request.

RCP optimizations:
  1/x -> rcp(x)   when fast unsafe rcp is legal or fpmath >= 2.5ULP with
                  denormals flushed.
  a/b -> a*rcp(b) when fast unsafe rcp is legal.

Use fdiv.fast:
  a/b -> fdiv.fast(a, b) when RCP optimization is not performed and
                         fpmath >= 2.5ULP with denormals flushed.
  1/x -> fdiv.fast(1, x) when RCP optimization is not performed and
                         fpmath >= 2.5ULP with denormals.

Reviewers: arsenm

Differential Revision: https://reviews.llvm.org/D71293
2020-01-23 16:57:43 -08:00
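A small sketch of the IR this operates on (hypothetical, not from the patch's tests): with fast unsafe rcp legal, or with the !fpmath below and denormals flushed, AMDGPUCodeGenPrepare may rewrite the reciprocal to the llvm.amdgcn.rcp intrinsic per the rules above.

  define float @recip(float %x) {
    ; 2.5 ulp accuracy is acceptable for this division
    %r = fdiv float 1.000000e+00, %x, !fpmath !0
    ret float %r
  }
  !0 = !{float 2.500000e+00}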
Matt Arsenault 86e5b56a7c AMDGPU/GlobalISel: Fix RegBanKSelect for llvm.amdgcn.exp.compr
This wasn't updated for the immarg handling change. We really need a
verifier for this.
2020-01-23 13:30:46 -08:00
Matt Arsenault 618fa77ae4 AMDGPU/GlobalISel: Select V_ADD3_U32/V_XOR3_B32
The other 3-op patterns should also be theoretically handled, but
currently there's a bug in the inferred pattern complexity.

I'm not sure what the error handling strategy should be for potential
constant bus violations. I think the correct strategy is to never
produce mixed SGPR and VGPR operands in a typical VOP instruction,
which will trivially avoid them. However, it's possible to still have
hand written MIR (or erroneously transformed code) with these
operands. When these fold, the restriction will be violated. We
currently don't have any verifiers for reg bank legality. For now,
just ignore the restriction.

It might be worth triggering a DAG fallback on verifier error.
2020-01-23 12:04:20 -05:00
Matt Arsenault dfec702290 AMDGPU: Check for other uses when looking through casted select
Fixes mesa regression on ext_transform_feedback-max-varyings
2020-01-23 11:31:24 -05:00
Matt Arsenault e050256377 AMDGPU/GlobalISel: Fix generated wave64 checks
This inexplicably managed to pass locally without the updated wave64
checks.
2020-01-22 22:05:54 -05:00
Matt Arsenault 4d14772f5c AMDGPU/GlobalISel: Remove redundant or patterns
These ended up with higher priority than the or3 patterns in a future
patch. This also fixes using the VOP2 forms.
2020-01-22 21:45:51 -05:00
Matt Arsenault 7dc49f77ee R600: Fix failing testcase 2020-01-22 16:01:35 -05:00
Jan Vesely 1b8eab179d AMDGPU/R600: Emit rodata in text segment
R600 relies on this behaviour.
Fixes: 6e18266aa4 ('Partially revert D61491 "AMDGPU: Be explicit about whether the high-word in SI_PC_ADD_REL_OFFSET is 0"')
Fixes ~100 piglit regressions since 6e18266

Differential Revision: https://reviews.llvm.org/D72991
2020-01-22 14:31:51 -05:00
Matt Arsenault 1192d7b254 AMDGPU/GlobalISel: Handle 16-bank LDS llvm.amdgcn.interp.p1.f16
The pattern is also mishandled by the generated matcher, so work around
this as in the DAG path.

The existing DAG tests aren't particularly targeted to just this one
intrinsic. These also end up differing in scheduling from SGPR->VGPR
operand constraint copies.
2020-01-22 12:10:59 -05:00
Matt Arsenault c05f23e409 AMDGPU/GlobalISel: Select llvm.amdgcn.mov.dpp
This is deprecated, but easy to support.
2020-01-22 11:43:53 -05:00
Matt Arsenault dd09ec1208 AMDGPU/GlobalISel: Select llvm.amdgcn.mov.dpp8 2020-01-22 11:43:40 -05:00
Matt Arsenault bb562d1af0 AMDGPU/GlobalISel: Keep G_BITCAST out of waterfall loop
The waterfall utility function blindly inserts a phi for every def in
the loop. We don't need this one to be preserved for every
iteration. Saves an extra phi and copy inside the loop body.
2020-01-22 11:16:19 -05:00
Matt Arsenault 52ec7379ad AMDGPU/GlobalISel: Fold add of constant into G_INSERT_VECTOR_ELT
Move the subregister base like in the extract case.
2020-01-22 11:09:15 -05:00
Matt Arsenault d1dbb5e471 AMDGPU/GlobalISel: Select G_INSERT_VECTOR_ELT 2020-01-22 11:00:49 -05:00
Matt Arsenault 3524d4412c AMDGPU/GlobalISel: Fix RegBankSelect for G_INSERT_VECTOR_ELT
The result and source vector are going to be tied, so these need to be
the same bank.

The inserted value also needs to be broken down based on the result
bank, not the inserted value itself.
2020-01-22 10:57:50 -05:00
Matt Arsenault e3d352c541 AMDGPU/GlobalISel: Fold constant offset vector extract indexes
Handle dynamic vector extracts that use an index that's an add of a
constant offset into moving the base subregister of the indexing
operation.

Force the add into the loop in regbankselect, which will be recognized
when selected.
2020-01-22 10:50:59 -05:00
Matt Arsenault 2fe500ab5b AMDGPU: Look through casted selects to constant fold bin ops
The promotion of the uniform select to i32 interfered with this fold.
2020-01-22 10:16:39 -05:00
Matt Arsenault bcd91778fe AMDGPU: Do binop of select of constant fold in AMDGPUCodeGenPrepare
DAGCombiner does this, but since 67aa18f165 divisions have been
expanded here and missed out on this optimization. Avoids test
regressions in a future patch.
2020-01-22 10:16:39 -05:00
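A hedged illustration of the fold (hypothetical IR): a binop whose operand is a select of two constants can be rewritten as a select of two binops with constant operands, so each division gets the cheaper constant-divisor expansion.

  define i32 @binop_of_select(i1 %c, i32 %x) {
    %den = select i1 %c, i32 1000, i32 1001
    ; foldable to: select i1 %c, (udiv %x, 1000), (udiv %x, 1001)
    %r = udiv i32 %x, %den
    ret i32 %r
  }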
Matt Arsenault a174f0da62 AMDGPU/GlobalISel: Add pre-legalize combiner pass
Just copy the AArch64 pass as-is for now, except for removing the
memcpy handling.
2020-01-22 10:16:39 -05:00
Matt Arsenault 70096ca111 AMDGPU/GlobalISel: Fix RegbankSelect for llvm.amdgcn.fmul.legacy 2020-01-22 09:26:17 -05:00
Matt Arsenault a722cbf77c AMDGPU/GlobalISel: Handle atomic_inc/atomic_dec
The intermediate instruction drops the extra volatile argument. We are
missing an atomic ordering on these.
2020-01-22 09:26:17 -05:00
Matt Arsenault 9c928649a0 AMDGPU: Fix interaction of tfe and d16
This using the wrong result register, and dropping the result entirely
for v2f16. This would fail to select on the scalar case. I believe it
was also mishandling packed/unpacked subtargets.
2020-01-22 09:26:17 -05:00
Matt Arsenault b94d3b9b77 AMDGPU/GlobalISel: RegBankSelect interp intrinsics
Note this assumes the future use of immediates for immarg, not the
current G_CONSTANT which will be emitted.
2020-01-22 09:01:34 -05:00
Carl Ritson 6b4b3e2856 [AMDGPU] SIRemoveShortExecBranches should not remove branches exiting loops
Summary:
Check that an s_cbranch_execz is not a loop exit before removing it,
as the pass was generating infinite loops.

Reviewers: cdevadas, arsenm, nhaehnle

Reviewed By: nhaehnle

Subscribers: kzhuravl, jvesely, wdng, yaxunl, tpr, t-tye, hiraditya, kerbowa, llvm-commits, dstuttard, foad

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D72997
2020-01-22 13:18:40 +09:00
cdevadas e53a9d96e6 Resubmit: [AMDGPU] Invert the handling of skip insertion.
The current implementation of skip insertion (SIInsertSkip) makes it a
mandatory pass required for correctness. Initially, the idea was to
have an optional pass. This patch inserts the s_cbranch_execz upfront
during SILowerControlFlow to skip over the sections of code when no
lanes are active. Later, SIRemoveShortExecBranches removes the skips
for short branches, unless there is a sideeffect and the skip branch is
really necessary.

This new pass will replace the handling of skip insertion in the
existing SIInsertSkip Pass.

Differential revision: https://reviews.llvm.org/D68092
2020-01-22 13:18:32 +09:00
Matt Arsenault fd109308a7 AMDGPU/GlobalISel: Legalize G_PTR_ADD for arbitrary pointers
Pointers of unrecognized address spaces should be treated as
global-like pointers. Even if loads and stores of them aren't handled,
dumb operations that just operate on the bits should work.
2020-01-21 16:35:36 -05:00
Matt Arsenault 5181c67feb AMDGPU/GlobalISel: Add some baseline tests for unmerge legalization 2020-01-21 08:31:10 -05:00
Simon Pilgrim 5f5f478564 [DAG] Fold extract_vector_elt (scalar_to_vector), K to undef (K != 0)
This was unconditionally folding this to the source operand, even if the access was out of bounds. Use undef instead if the extract is not the first element.

This helps with some cases where 3-vectors are legalized and avoids processing the 4th component.

Original Patch by: arsenm (Matt Arsenault)

Differential Revision: https://reviews.llvm.org/D51589
2020-01-21 10:58:30 +00:00
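An IR-level analogue of the DAG pattern (hypothetical example): insertelement into undef at lane 0 typically lowers to scalar_to_vector, which defines only lane 0, so extracting any other lane can be folded to undef rather than to the source operand.

  define i32 @extract_undefined_lane(i32 %x) {
    %v = insertelement <4 x i32> undef, i32 %x, i32 0
    %e = extractelement <4 x i32> %v, i32 2   ; K != 0 -> undef
    ret i32 %e
  }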
Nicolai Hähnle a80291ce10 Revert "[AMDGPU] Invert the handling of skip insertion."
This reverts commit 0dc6c249bf.

The commit is reported to cause a regression in piglit/bin/glsl-vs-loop for
Mesa.
2020-01-21 09:17:25 +01:00
Matt Arsenault c72aa27f91 AMDDGPU/GlobalISel: Fix RegBankSelect for llvm.amdgcn.ps.live 2020-01-20 23:21:53 -05:00
Matt Arsenault 385fb337de AMDGPU: Generate test checks
These weren't much different than copied output anyway.
2020-01-20 20:03:45 -05:00
Matt Arsenault e5823bf806 AMDGPU: Don't create weird sized integers
There's no reason to introduce a new, unnaturally sized value
here. This has a chance to produce worse code with
legalization. Avoids regression in a future patch.
2020-01-20 20:02:54 -05:00
Matt Arsenault 317fdcd09a AMDGPU: Cleanup and generate 64-bit div tests
Split out r600 tests, and try to be more consistent with
coverage. Cover a few more cases for 24-bit optimization and
constants.
2020-01-20 17:19:39 -05:00
Matt Arsenault 9b13b4a0e3 AMDGPU: Prepare to use scalar register indexing
Define pseudos mirroring the VGPR indexing ones, and adjust the
operands in the s_movrel* instructions to avoid the result def.
2020-01-20 17:19:16 -05:00
Fangrui Song 886d2c2ca7 [BranchRelaxation] Simplify offset computation and fix a bug in adjustBlockOffsets()
If Start!=0, adjustBlockOffsets() may unnecessarily adjust the offset of
Start. There is no correctness issue, but it can create more block
splits.
2020-01-19 16:02:16 -08:00
Matt Arsenault 592de0009f AMDGPU/GlobalISel: Select llvm.amdgcn.update.dpp
The existing test is overly reliant on -mattr=-flat-for-global, and
some missing optimizations to re-use.
2020-01-17 20:09:53 -05:00
Matt Arsenault ec9628318d AMDGPU/GlobalISel: Select DS append/consume 2020-01-17 20:09:53 -05:00
Stanislav Mekhanoshin eebdd85e7d [AMDGPU] allow multi-dword flat scratch access since GFX9
This is supported starting with GFX9.

Differential Revision: https://reviews.llvm.org/D72865
2020-01-17 10:47:03 -08:00
Matt Arsenault 886f9071c6 AMDGPU: Don't assert on a16 images on targets without FeatureR128A16
Currently the lowering for i16 image coordinates asserts on gfx10. I'm
somewhat confused by this though. The feature is missing from the
gfx10 feature lists, but the a16 bit appears to be present in the
manual for MIMG instructions.
2020-01-17 11:07:00 -05:00
Matt Arsenault 8945b23af5 AMDGPU: Update more tests to use modern buffer intrinsics 2020-01-16 14:29:38 -05:00
Matt Arsenault e12b840abf AMDGPU/GlobalISel: Improve lowering of G_SEXT_INREG
Clamping the scalar is much better than lowering with superwide shifts
for types > s64.
2020-01-16 14:29:37 -05:00
Matt Arsenault a66d2817ca GlobalISel: Don't ignore requested ext narrowing type
This was assuming the narrow target was the source type. Respect the
requested type when these don't match by using intermediate
merges. This avoids producing very wide, illegal shift expansions.
2020-01-16 14:29:37 -05:00
Matt Arsenault de4f88df97 AMDGPU: Remove IR section from MIR test
Also generate check lines so this isn't just testing the meaningless
block name.
2020-01-16 13:49:44 -05:00
Matt Arsenault 0d0fce42b0 GlobalISel: Preserve load/store metadata in IRTranslator
This was dropping the invariant metadata on dead argument loads, so
they weren't deleted.

Atomics still need to be fixed the same way. Also, apparently store
was never preserving dereferenceable, which should also be fixed.
2020-01-16 13:49:43 -05:00
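A hedged illustration (hypothetical IR, not from the patch's tests): the !invariant.load metadata on an argument load like this is now carried through the IRTranslator, so a dead load can be deleted later instead of being kept.

  define void @dead_invariant_load(i32* %p) {
    ; metadata is preserved when translating to a G_LOAD
    %v = load i32, i32* %p, !invariant.load !0
    ret void
  }
  !0 = !{}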
Matt Arsenault 20ca49b646 AMDGPU: Update tests to use modern buffer intrinsics 2020-01-16 13:49:43 -05:00
Matt Arsenault 4ca1ad85b7 AMDGPU/GlobalISel: Don't handle legacy buffer intrinsic 2020-01-16 11:31:12 -05:00
Matt Arsenault 9b2f3532c7 AMDGPU/GlobalISel: Select DS GWS intrinsics 2020-01-16 11:25:10 -05:00
Yuanfang Chen 6e24c6037f Revert "[Support] make report_fatal_error `abort` instead of `exit`"
This reverts commit 647c3f4e47.

Got bots failure from sanitizer-windows and maybe others.
2020-01-15 17:52:25 -08:00
Yuanfang Chen 647c3f4e47 [Support] make report_fatal_error `abort` instead of `exit`
Summary:
This patch could be treated as a rebase of D33960. It also fixes PR35547.
A fix for `llvm/test/Other/close-stderr.ll` is proposed in D68164. Seems
the consensus is that the test is passing by chance and I'm not
sure how important it is for us. So it is removed like in D33960 for now.
The rest of the test fixes are just adding `--crash` flag to `not` tool.

** The reason it fixes PR35547 is

`exit` does cleanup, including calling class destructors, whereas `abort`
does not do any cleanup. In a multithreaded environment such as ThinLTO or JIT,
threads may share state, most of which is ManagedStatic<>. If the faulting
thread tears down a class while another thread is using it, there is a chance
of memory corruption. This is bad because 1. it will stop error reporting like
the pretty stack printer; and 2. the memory corruption is distracting and
nondeterministic in terms of error message and corruption type (depending on
the timing, it could be a double free, heap use after free, etc.).

Reviewers: rnk, chandlerc, zturner, sepavloff, MaskRay, espindola

Reviewed By: rnk, MaskRay

Subscribers: wuzish, jholewinski, qcolombet, dschuff, jyknight, emaste, sdardis, nemanjai, jvesely, nhaehnle, sbc100, arichardson, jgravelle-google, aheejin, kbarton, fedor.sergeev, asb, rbar, johnrusso, simoncook, apazos, sabuasal, niosHD, jrtc27, zzheng, edward-jones, atanasyan, rogfer01, MartinMosbeck, brucehoult, the_o, PkmX, jocewei, jsji, lenary, s.egerton, pzheng, cfe-commits, MaskRay, filcab, davide, MatzeB, mehdi_amini, hiraditya, steven_wu, dexonsmith, rupprecht, seiya, llvm-commits

Tags: #llvm, #clang

Differential Revision: https://reviews.llvm.org/D67847
2020-01-15 17:05:13 -08:00
Stanislav Mekhanoshin 8b417dd3d6 Process BUNDLE in tail duplication
When tail duplication estimates the size of a tail it uses the instruction
count. Account for the number of instructions in a bundle too.

Differential Revision: https://reviews.llvm.org/D72783
2020-01-15 15:46:57 -08:00
Matt Arsenault 711a17afaf AMDGPU/GlobalISel: Select exp with patterns
This does produce slightly different code. Now a unique IMPLICIT_DEF
is emitted for each of the implicit_def operands, rather than reusing
the same one.
2020-01-15 18:33:15 -05:00
Matt Arsenault 25e9938a45 GlobalISel: Handle more cases of G_SEXT narrowing
This now develops the same problem G_ZEXT/G_ANYEXT have where the
requested type is assumed to be the source type. This will be fixed
separately by creating intermediate merges.
2020-01-15 18:33:15 -05:00
Matt Arsenault 936483fb7d GlobalISel: Implement lower for G_BITCAST
Bitcast only really applies between scalars and vectors. Implement as
an unmerge and remerge. The test needs to tolerate failure since one
of the unmerges currently fails to legalize.
2020-01-15 08:58:58 -05:00
Matt Arsenault 91715617ad GlobalISel: Fix narrowScalar for G_ANYEXT results
This is nearly the same as G_ZEXT.
2020-01-15 08:58:57 -05:00
cdevadas 0dc6c249bf [AMDGPU] Invert the handling of skip insertion.
The current implementation of skip insertion (SIInsertSkip) makes it a
mandatory pass required for correctness. Initially, the idea was to
have an optional pass. This patch inserts the s_cbranch_execz upfront
during SILowerControlFlow to skip over the sections of code when no
lanes are active. Later, SIRemoveShortExecBranches removes the skips
for short branches, unless there is a sideeffect and the skip branch is
really necessary.

This new pass will replace the handling of skip insertion in the
existing SIInsertSkip Pass.

Differential revision: https://reviews.llvm.org/D68092
2020-01-15 15:18:16 +05:30
Michael Liao 65c8abb14e [amdgpu] Fix typos in a test case.
- There are typos introduced due to a merge.
2020-01-14 20:08:39 -05:00
Michael Liao 01a4b83154 [codegen,amdgpu] Enhance MIR DIE and re-arrange it for AMDGPU.
Summary:
- `dead-mi-elimination` assumes MIR in SSA form and cannot be
  arranged after phi elimination or DeSSA. It's enhanced to handle
  dead register definitions by skipping the use check on them. Once a register
  def is `dead`, all its uses, if any, should be `undef`.
- Re-arrange the DIE in RA phase for AMDGPU by placing it directly after
  `detect-dead-lanes`.
- Many relevant tests are refined due to different register assignment.

Reviewers: rampitec, qcolombet, sunfish

Subscribers: arsenm, kzhuravl, jvesely, wdng, nhaehnle, yaxunl, dstuttard, tpr, t-tye, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D72709
2020-01-14 19:26:15 -05:00
Michael Liao 8d07f8d98c [DAGCombine] Replace `getIntPtrConstant()` with `getVectorIdxTy()`.
- Prefer `getVectorIdxTy()` as the index operand type for
  `EXTRACT_SUBVECTOR` as targets expect different types by overloading
  `getVectorIdxTy()`.
2020-01-14 17:03:05 -05:00
Jay Foad b777e551f0 [MachineScheduler] Reduce reordering due to mem op clustering
Summary:
Mem op clustering adds a weak edge in the DAG between two loads or
stores that should be clustered, but the direction of this edge is
pretty arbitrary (it depends on the sort order of MemOpInfo, which
represents the operands of a load or store). This often means that two
loads or stores will get reordered even if they would naturally have
been scheduled together anyway, which leads to test case churn and goes
against the scheduler's "do no harm" philosophy.

The fix makes sure that the direction of the edge always matches the
original code order of the instructions.

Reviewers: atrick, MatzeB, arsenm, rampitec, t.p.northover

Subscribers: jvesely, wdng, nhaehnle, kristof.beyls, hiraditya, javed.absar, arphaman, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D72706
2020-01-14 19:19:02 +00:00
Stanislav Mekhanoshin ad741853c3 [AMDGPU] Model distance to instruction in bundle
This change allows modeling the height of an instruction
within a bundle for latency adjustment purposes.

Differential Revision: https://reviews.llvm.org/D72669
2020-01-14 01:18:59 -08:00
Stanislav Mekhanoshin eca4474587 [AMDGPU] Fix getInstrLatency() always returning 1
We do not have an InstrItinerary, so the generic getInstrLatency() was always
defaulting to returning 1 cycle. We need to use TargetSchedModel instead
to compute an instruction's latency.

Differential Revision: https://reviews.llvm.org/D72655
2020-01-14 01:08:30 -08:00
Matt Arsenault 203801425d AMDGPU/GlobalISel: Select llvm.amdgcn.ds.ordered.{add|swap} 2020-01-13 13:09:38 -05:00
Matt Arsenault 2f090cc8f1 AMDGPU/GlobalISel: Add some baseline tests for vector extract
A future change will try to fold constant offsets into the loop which
these will stress.
2020-01-13 12:51:05 -05:00
Matt Arsenault ca19d7a399 AMDGPU/GlobalISel: Fix branch targets when emitting SI_IF
The branch target needs to be changed depending on whether there is an
unconditional branch or not.

Loops also need to be similarly fixed, but compiling a simple testcase
end to end requires another set of patches that aren't upstream yet.
2020-01-13 12:51:05 -05:00
Matt Arsenault d7d88b9d8b GlobalISel: Fix assertion on wide G_ZEXT sources
It's possible to have a type that needs a mask greater than 64 bits.
2020-01-13 08:29:45 -05:00
Simon Pilgrim 8f49204f26 [SelectionDAG] ComputeKnownBits - minimum leading/trailing zero bits in LSHR/SHL (PR44526)
As detailed in https://blog.regehr.org/archives/1709 we don't make use of the known leading/trailing zeros for shifted values in cases where we don't know the shift amount value.

This patch adds support to SelectionDAG::ComputeKnownBits to use KnownBits::countMinTrailingZeros and countMinLeadingZeros to set the minimum guaranteed leading/trailing known zero bits.

Differential Revision: https://reviews.llvm.org/D72573
2020-01-13 11:08:12 +00:00
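An IR-level sketch of the bit reasoning (hypothetical example): a left shift by an unknown amount cannot clear existing trailing zero bits, so ComputeKnownBits can now report at least four trailing zeros for %s below; the analogous rule applies to leading zeros for lshr.

  define i32 @shifted_known_zeros(i32 %a, i32 %amt) {
    %x = shl i32 %a, 4        ; at least 4 trailing zero bits
    %s = shl i32 %x, %amt     ; shift amount unknown, trailing zeros preserved
    %m = and i32 %s, 15       ; low 4 bits are known zero
    ret i32 %m
  }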
Matt Arsenault 3c868cbbda AMDGPU: Split test function
This avoids slightly different scheduling/regalloc behavior, and
avoids a test diff between GlobalISel and SelectionDAG.
2020-01-12 22:44:51 -05:00
Matt Arsenault 555e7ee04c AMDGPU/GlobalISel: Don't use XEXEC class for SGPRs
We don't use the xexec register classes for arbitrary values
anymore. Avoids a test variance between GlobalISel and SelectionDAG.
2020-01-12 22:44:51 -05:00
Matt Arsenault a10527cd37 AMDGPU/GlobalISel: Copy type when inserting readfirstlane
getDefIgnoringCopies will fail to find any def if no type is set when we
try to use it on the use's operand, so propagate the type.
2020-01-12 22:44:51 -05:00
Simon Pilgrim 065eefcfe9 [AMDGPU] Regenerate shl shift tests 2020-01-12 14:34:36 +00:00
Michael Bedy 4a32cd11ac [AMDGPU] Remove unnecessary v_mov from a register to itself in WQM lowering.
Summary:
- The SI Whole Quad Mode phase replaces WQM pseudo instructions with v_mov instructions.
While this is necessary for the special handling of moving results out of WWM live ranges,
it is not necessary for WQM live ranges. The result is a v_mov from a register to itself after every
WQM operation. This change uses a COPY pseudo in these cases, which allows the register
allocator to coalesce the moves away.

Reviewers: tpr, dstuttard, foad, nhaehnle

Reviewed By: nhaehnle

Subscribers: arsenm, kzhuravl, jvesely, wdng, nhaehnle, yaxunl, dstuttard, tpr, t-tye, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D71386
2020-01-10 23:01:19 -05:00
Matt Arsenault bac995d978 AMDGPU/GlobalISel: Clamp G_ZEXT source sizes
Also clamps G_SEXT/G_ANYEXT, but the implementation is more limited so
fewer cases actually work.
2020-01-10 09:42:49 -05:00
Matt Arsenault 35c3d101ae AMDGPU/GlobalISel: Select G_EXTRACT_VECTOR_ELT
Doesn't try to do the fold into the base register of an add of a
constant in the index like the DAG path does.
2020-01-09 19:52:24 -05:00
Matt Arsenault 5cabb8357a AMDGPU/GlobalISel: Fix G_EXTRACT_VECTOR_ELT mapping for s-v case
If an SGPR vector is indexed with a VGPR, the actual indexing will be
done on the SGPR and produce an SGPR. A copy needs to be inserted
inside the waterfall loop to the VGPR result.
2020-01-09 19:46:54 -05:00
Stanislav Mekhanoshin cd69e4c74c [AMDGPU] Fix bundle scheduling
Bundles coming to the scheduler were considered free, i.e. zero latency.
Fixed.

Differential Revision: https://reviews.llvm.org/D72487
2020-01-09 15:56:36 -08:00
Matt Arsenault b4a647449f TableGen/GlobalISel: Add way for SDNodeXForm to work on timm
The current implementation assumes there is an instruction associated
with the transform, but this is not the case for
timm/TargetConstant/immarg values. These transforms should directly
operate on a specific MachineOperand in the source
instruction. TableGen would assert if you attempted to define an
equivalent GISDNodeXFormEquiv using timm when it failed to find the
instruction matcher.

Specially recognize SDNodeXForms on timm, and pass the operand index
to the render function.

Ideally this would be a separate render function type that looks like
void renderFoo(MachineInstrBuilder, const MachineOperand&), but this
proved to be somewhat mechanically painful. Add an optional operand
index which will only be passed if the transform should only look at
the one source operand.

Theoretically it would also be possible to only ever pass the
MachineOperand, and the existing renderers would check the parent. I
think that would be somewhat ugly for the standard usage which may
want to inspect other operands, and I also think MachineOperand should
eventually not carry a pointer to the parent instruction.

Use it in one sample pattern. This isn't a great example, since the
transform exists to satisfy DAG type constraints. This could also be
avoided by just changing the MachineInstr's arbitrary choice of
operand type from i16 to i32. Other patterns have nontrivial uses, but
this serves as the simplest example.

One flaw this still has is if you try to use an SDNodeXForm defined
for imm, but the source pattern uses timm, you still see the "Failed
to lookup instruction" assert. However, there is now a way to avoid
it.
2020-01-09 17:37:52 -05:00
Matt Arsenault 0ea3c7291f GlobalISel: Handle llvm.read_register
Compared to the attempt in bdcc6d3d26,
this uses intermediate generic instructions.
2020-01-09 17:37:52 -05:00
Matt Arsenault 767aa507a4 AMDGPU/GlobalISel: Fix argument lowering for vectors of pointers
When these arguments are broken down by the EVT based callbacks, the
pointer information is lost. Hack around this by coercing the register
types to be the expected pointer element type when building the
remerge operations.
2020-01-09 16:29:44 -05:00
Matt Arsenault 35ad66fae8 AMDGPU/GlobalISel: Widen 16-bit shift amount sources
This should be legal, but will require future selection work. 16-bit
shift amounts were already removed from being legal, but this didn't
adjust the transformation rules.
2020-01-09 16:29:44 -05:00
Matt Arsenault 9ffd0ed838 AMDGPU/GlobalISel: Fix import of integer med3
This isn't too useful now, since nothing is currently trying to form
min/max from cmp+select.
2020-01-09 10:29:32 -05:00
Matt Arsenault 7d67742160 AMDGPU/GlobalISel: Fix import of zext of s16 op patterns 2020-01-09 10:29:32 -05:00
Matt Arsenault 3952748ffd AMDGPU/GlobalISel: Fix add of neg inline constant pattern 2020-01-09 10:29:31 -05:00
Daniel Sanders de3d0ee023 Revert "Revert "[MIR] Target specific MIR formating and parsing""
There was an unguarded dereference of MF in a function that permitted
nullptr. Fixed

This reverts commit 71d64f72f9.
2020-01-08 20:03:29 -08:00
Nico Weber 71d64f72f9 Revert "[MIR] Target specific MIR formating and parsing"
This reverts commit 3ef05d85be.
It broke check-llvm on many bots, see comments on D69836.
2020-01-08 22:50:49 -05:00
Peng Guo 3ef05d85be [MIR] Target specific MIR formating and parsing
Summary:
Added MIRFormatter for target-specific MIR formatting and parsing with
immediate and custom pseudo source values. A target machine can subclass
MIRFormatter and implement custom logic for printing and parsing
immediate and custom pseudo source values for better readability.

* A target-specific immediate mnemonic needs to start with "." followed by
  an identifier string. When the MIR parser sees such an immediate it will
  call the target-specific parsing function.

* A custom pseudo source value needs to start with custom followed by a
  double-quoted string. The MIR parser will pass the quoted string to the
  target-specific PSV parsing function.

* MIRFormatter has 2 helper functions to facilitate LLVM value printing
  and parsing for custom PSVs if they refer to LLVM values.

Patch by Peng Guo

Reviewers: dsanders, arsenm

Reviewed By: dsanders

Subscribers: wdng, jvesely, nhaehnle, hiraditya, jfb, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D69836
2020-01-08 18:48:02 -08:00
Daniel Sanders 5ab6fa7b70 Revert "[MIR] Target specific MIR formating and parsing"
Forgot to credit Peng in the commit message.

This reverts commit be841f89d0.
2020-01-08 18:48:02 -08:00
Peng Guo be841f89d0 [MIR] Target specific MIR formating and parsing
Summary:
Added MIRFormatter for target-specific MIR formatting and parsing with
immediate and custom pseudo source values. A target machine can subclass
MIRFormatter and implement custom logic for printing and parsing
immediate and custom pseudo source values for better readability.

* A target-specific immediate mnemonic needs to start with "." followed by
  an identifier string. When the MIR parser sees such an immediate it will
  call the target-specific parsing function.

* A custom pseudo source value needs to start with custom followed by a
  double-quoted string. The MIR parser will pass the quoted string to the
  target-specific PSV parsing function.

* MIRFormatter has 2 helper functions to facilitate LLVM value printing
  and parsing for custom PSVs if they refer to LLVM values.

Reviewers: dsanders, arsenm

Reviewed By: dsanders

Subscribers: wdng, jvesely, nhaehnle, hiraditya, jfb, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D69836
2020-01-08 18:34:21 -08:00
Matt Arsenault 6652cc0cf7 AMDGPU/GlobalISel: Fix scalar G_SELECT for arbitrary pointers
4e85ca9562 missed updating the legal
condition type set for pointers with any unrecognized address space.
2020-01-07 16:36:31 -05:00
Matt Arsenault a2d54fc534 AMDGPU/GlobalISel: Add some missing G_SELECT testcases 2020-01-07 16:36:31 -05:00
Matt Arsenault 577b0b5f54 AMDGPU/GlobalISel: Fix missing test for s16 icmp 2020-01-07 16:36:31 -05:00
Matt Arsenault 4844bf0fe2 AMDGPU: Apply i16 add->sub pattern with zext to i32
This was only applying the deeper nested zext pattern, and missing the
special case code size fold.
2020-01-07 16:36:31 -05:00
Matt Arsenault 449ab10509 AMDGPU: Add baseline test for missing pattern
The optimization to turn an add into a sub isn't triggering when the
pattern to use the zeroed high bits is used.
2020-01-07 15:10:08 -05:00
Matt Arsenault 68e70fb098 AMDGPU: Fix not using v_cvt_f16_[iu]16
We weren't treating i16->f16 casts as legal on targets with these
instructions, and always using a pair of casts through i32.
2020-01-07 15:10:07 -05:00
Matt Arsenault 78b30a54c9 AMDGPU/GlobalISel: Fix readfirstlane pattern import
The imm folding optimization pattern failed to import. The instruction
pattern was already working, but was failing to reject SGPR inputs.
2020-01-07 11:07:08 -05:00
Matt Arsenault e699c03c9b AMDGPU/GlobalISel: Fix import of s_abs_i32 pattern 2020-01-07 10:32:07 -05:00
Matt Arsenault 9150d6bd73 AMDGPU/GlobalISel: Select llvm.amdgcn.wqm.vote 2020-01-07 10:15:29 -05:00
Matt Arsenault f26ed6e47c llc: Change behavior of -mcpu with existing attribute
Don't overwrite existing target-cpu attributes.

I've often found the replacement behavior annoying, and this is
inconsistent with how the fast math command line flags interact with
the function attributes.

Does not yet change target-features, since I think that should behave
as a concatenation.
2020-01-07 10:10:25 -05:00
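(A minimal sketch of the behavior described above, assuming the usual llvm::Function attribute API; applyCPUAttr is a made-up helper name, not the actual llc code.)

  #include "llvm/IR/Function.h"

  // Only apply the -mcpu value when the function does not already carry a
  // target-cpu attribute; an existing attribute is left untouched.
  static void applyCPUAttr(llvm::Function &F, llvm::StringRef CPU) {
    if (CPU.empty() || F.hasFnAttribute("target-cpu"))
      return;
    F.addFnAttr("target-cpu", CPU);
  }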
Matt Arsenault e8d9d202bc AMDGPU: Add run line to int_to_fp tests
This wasn't catching a regression, triggered by a future commit, on
targets with legal i16.
2020-01-06 21:38:50 -05:00
Matt Arsenault 52afc93c38 AMDGPU/GlobalISel: Legalize G_READCYCLECOUNTER 2020-01-06 19:16:32 -05:00
Matt Arsenault d4c9e13324 AMDGPU/GlobalISel: Select G_UADDE/G_USUBE 2020-01-06 18:27:52 -05:00
Matt Arsenault 4e85ca9562 AMDGPU/GlobalISel: Replace handling of boolean values
This solves selection failures with generated selection patterns,
which would fail due to inferring the SGPR reg bank for virtual
registers with a set register class instead of the VCC bank. Selection
of a use instruction would constrain the virtual register to a specific
class, so when the def was selected later the bank was no longer seen
as VCC.

Remove the SCC reg bank. SCC isn't directly addressable, so it
requires copying from SCC to an allocatable 32-bit register during
selection, so these might as well be treated as 32-bit SGPR values.

Now any scalar boolean value that will produce an output in SCC should
be widened during RegBankSelect to s32. Any s1 value should be a
vector boolean during selection. This makes the vcc register bank
unambiguous with a normal SGPR during selection.

Summary of how this should now work:

- G_TRUNC is always a no-op, and never should use a vcc bank result.

- SALU boolean operations should be promoted to s32 in RegBankSelect
  apply mapping

- An s1 value means vcc bank at selection. The exception is for
  legalization artifacts that use s1, which are never VCC. All other
  contexts should infer the VCC register classes for s1 typed
  registers. The LLT for the register is now needed to infer the
  correct register class. Extensions with vcc sources should be
  legalized to a select of constants during RegBankSelect.

- Copy from non-vcc to vcc ensures high bits of the input value are
  cleared during selection.

- SALU boolean inputs should ensure the inputs are 0/1. This includes
  select, conditional branches, and carry-ins.

There are a few somewhat dirty details. One is that G_TRUNC/G_*EXT
selection ignores the usual register-bank from register class
functions, and can't handle truncates with VCC result banks. I think
this is OK, since the artifacts are specially treated anyway. This
does require some care to avoid producing cases with vcc. There will
also be no 100% reliable way to verify this rule is followed in
selection in the case of register classes, and violations manifest
themselves as invalid copy instructions much later.

Standard phi handling also only considers the bank of the result
register, and doesn't insert copies to make the source banks
match. This doesn't work for vcc, so we have to manually correct phi
inputs in this case. We should add a verifier check to make sure there
are no phis with mixed vcc and non-vcc register bank inputs.

There's also some duplication with the LegalizerHelper, and some code
which should live in the helper. I don't see a good way to share
special knowledge about what types to use for intermediate operations
depending on the bank for example. Using the helper to replace
extensions with selects also seems somewhat awkward to me.

Another issue is there are some contexts calling
getRegBankFromRegClass that apparently don't have the LLT type for the
register, but I haven't yet run into a real issue from this.

This also introduces new unnecessary instructions in most cases, since
we don't yet try to optimize out the zext when the source is known to
come from a compare.
2020-01-06 18:26:42 -05:00
Matt Arsenault f3de8ab5cc GlobalISel: Implement lower for G_INTRINSIC_ROUND
Mostly copied from AMDGPU lowering implementation, except used
G_SITOFP instead of directly creating a select on -1.0, 0.0.
2020-01-06 18:26:42 -05:00
Matt Arsenault ee6b8722ff GlobalISel: Fix unsupported legalize action
This would complain about invalid legalizer rules otherwise.

Mark some operations as unsupported for AMDGPU. This currently seems
to produce the same legalize error as when no rules are defined, but
eventually this should produce a proper user facing error.
2020-01-06 17:21:51 -05:00
Matt Arsenault 7f2db2917d AMDGPU: Fix legalizing f16 fpow
The existing test only covered one case for r600. The use of
mul_legacy also looks suspicious to me, but leave it for now. The
patterns are also not making use of source modifiers.
2020-01-06 17:21:51 -05:00
Matt Arsenault e4464bf3d4 AMDGPU/GlobalISel: Select scalar v2s16 G_BUILD_VECTOR 2020-01-06 11:19:33 -05:00
Matt Arsenault f1c85ecdfc AMDGPU/GlobalISel: Select more G_EXTRACTs correctly
This assumed a 32-bit extract size, which would produce invalid copies
with 64-bit extracts. Handle the easy case. Ideally we would have a
way to get the proper subreg index for any 32-bit offset, but there
should probably be a tablegenerated way of getting the subreg index
for any size and offset.
2020-01-06 11:10:13 -05:00
Matt Arsenault d12f2a2998 GlobalISel: Scalarize all division operations
This only handled G_SDIV, but they all are trivially scalarizable.

Also define placeholder AMDGPU division legalizer rules.
2020-01-04 13:47:10 -05:00
Matt Arsenault 4e972224c4 AMDGPU/GlobalISel: Refine SMRD selection rules
Fix selecting these for volatile global loads, and ensure the loads
are constant enough.
2020-01-04 12:40:35 -05:00
Matt Arsenault d9b5063b25 AMDGPU/GlobalISel: Legalize more odd sized loads
The attempt to widen sufficiently aligned, odd sized loads wasn't
consistently applied.
2020-01-04 12:38:39 -05:00
Matt Arsenault 5fb59f16e2 AMDGPU/GlobalISel: Assume vcc phis for any vcc input
This produces more intelligible-looking results, more comparable to
the DAG output in the simplest cases. This is probably wrong in
complex control flow, but RegBankSelect doesn't attempt analyzing if
this is on a masked path for selecting the bank yet.
2020-01-04 12:38:11 -05:00
alex-t ca8b20ca3b [AMDGPU] Insert a wait between a scalar load and a vector store to the same address to avoid a WAR conflict.
Reviewers: rampitec, vpykhtin, nhaehnle

Reviewed By: rampitec

Differential Revision: https://reviews.llvm.org/D71934
2020-01-04 18:23:14 +03:00
Matt Arsenault 089e1ee172 AMDGPU: Add gfx9 run lines to a testcase 2020-01-03 15:25:50 -05:00
Matt Arsenault 53fc484067 AMDGPU/GlobalISel: Fix off by one in operand index
This should be looking at the RHS of the add for a constant.
2020-01-03 10:30:30 -05:00
Matt Arsenault 086ac7e75c AMDGPU/GlobalISel: Correct MMO sizes in some tests
These were intended to test non-extloads, but the memory size did not
match the result size.
2020-01-02 16:00:46 -05:00
Matt Arsenault 203182b7b6 AMDGPU/GlobalISel: Regenerate check lines
This avoids diff noise in a future commit from the check name change
from the G_GEP->G_PTR_ADD rename.
2020-01-02 16:00:45 -05:00
Matt Arsenault 4d7201e7b9 DAG: Stop trying to fold FP -(x-y) -> y-x in getNode with nsz
This was increasing the number of instructions when fsub was legalized
on AMDGPU with no signed zeros enabled. This fold should be guarded by
hasOneUse, and I don't think getNode should be doing that. The same
fold is already done as a regular combine through isNegatibleForFree.

This does require duplicating the fold to avoid one PPC regression,
even though isNegatibleForFree already does this combine (and properly
checks hasOneUse). In the regression, the outer fneg has nsz but the fsub
operand does not. isNegatibleForFree only sees the operand, and
doesn't see it's used from a nsz context. A nsz parameter needs to be
added and threaded through isNegatibleForFree to avoid this.
2019-12-31 22:49:51 -05:00
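(For context, a small standalone illustration of why the fold needs nsz: when x == y, -(x - y) yields -0.0 while y - x yields +0.0, so the rewrite is only sound when signed zeros may be ignored.)

  #include <cstdio>

  int main() {
    double x = 1.0, y = 1.0;
    double a = -(x - y);  // -(+0.0) == -0.0
    double b = y - x;     // +0.0
    // a and b compare equal but are distinguishable, e.g. through division.
    std::printf("-(x-y) = %g, y-x = %g\n", a, b);
    std::printf("1/a = %g, 1/b = %g\n", 1.0 / a, 1.0 / b);  // -inf vs +inf
    return 0;
  }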
Matt Arsenault 64cf26548a AMDGPU: Precommit test showing extra instructions are introduced 2019-12-31 14:54:57 -05:00
Michael Liao 79d401905f [amdgpu] Fix scoreboard updating on `s_waitcnt_vscnt`.
Summary: - Other counters are accidentally cleared.

Subscribers: arsenm, kzhuravl, jvesely, wdng, nhaehnle, yaxunl, dstuttard, tpr, t-tye, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D71866
2019-12-31 14:20:30 -05:00
Matt Arsenault 7fa0bfe7d5 AMDGPU/GlobalISel: Select mul24 intrinsics 2019-12-30 14:24:25 -05:00
Petar Avramovic 98f72a5107 [MIPS GlobalISel] Select bitreverse. Recommit
G_BITREVERSE is generated from llvm.bitreverse.<type> intrinsics;
clang generates these intrinsics from __builtin_bitreverse32 and
__builtin_bitreverse64.
Add lower and narrowScalar for G_BITREVERSE.
Lower G_BITREVERSE on MIPS32.

Recommit notes:
Introduce temporary variables in order to make sure
instructions get inserted into MachineFunction in same order
regardless of compiler used to build llvm.

Differential Revision: https://reviews.llvm.org/D71363
2019-12-30 18:06:29 +01:00
Matt Arsenault 1247865fe0 AMDGPU/GlobalISel: Select llvm.amdgcn.fmad.ftz 2019-12-30 11:12:35 -05:00
Matt Arsenault 18240c3cd6 AMDGPU/GlobalISel: Add select test for fexp2 2019-12-30 10:56:37 -05:00
Matt Arsenault 9fd31fdbd3 GlobalISel: moreElementsVector for FP min/max 2019-12-30 10:39:53 -05:00
Matt Arsenault 9e1a2a668b AMDGPU: Improve llvm.round.f64 lowering for CI+
The path already used for f16/f32 works a lot better when v_trunc_f64
is available.
2019-12-30 09:55:46 -05:00
Matt Arsenault 58bcf51107 AMDGPU: Generate check lines 2019-12-30 09:55:40 -05:00
Matt Arsenault 491cfa4250 AMDGPU/GlobalISel: Account for G_PHI result bank
Sometimes the result bank of the phi is already assigned to something,
and should not be ignored. This is in preparation for additional
boolean phi handling changes.

Also refine the logic to fix some cases that were incorrectly deciding
to use SGPRs.
2019-12-30 09:55:21 -05:00
Dmitri Gribenko 32cc14100e Revert "[MIPS GlobalISel] Select bitreverse"
This reverts commit dbc136e0fe.
It broke buildbots:
http://lab.llvm.org:8011/builders/clang-x86_64-debian-fast/builds/21066
2019-12-30 14:29:47 +01:00
Petar Avramovic dbc136e0fe [MIPS GlobalISel] Select bitreverse
G_BITREVERSE is generated from llvm.bitreverse.<type> intrinsics;
clang generates these intrinsics from __builtin_bitreverse32 and
__builtin_bitreverse64.
Add lower and narrowScalar for G_BITREVERSE.
Lower G_BITREVERSE on MIPS32.

Differential Revision: https://reviews.llvm.org/D71363
2019-12-30 11:26:45 +01:00
Matt Arsenault a33cab0f06 AMDGPU: Adjust test so it will work with GlobalISel
This is mostly a workaround for not handling the mubuf store path yet.
2019-12-27 19:37:39 -05:00
Matt Arsenault 5ce2ca524e AMDGPU/GlobalISel: Use SReg_32 for readfirstlane constraining
This matches the DAG behavior where we don't use SReg_32_XM0
everywhere anymore, and fixes not coalescing the copies into m0.
2019-12-27 17:52:12 -05:00
Matt Arsenault 3213ce966b TailDuplication: Clear NoPHIs property
The early tail duplicator pass introduces new PHIs, so a MIR test that
infers the NoPHIs property (since there were none in the input) would
fail the verifier after running.
2019-12-27 14:06:31 -05:00
Matt Arsenault e088846712 AMDGPU/GlobalISel: Fix extra result register in fdiv64 lowering
There ended up being two result registers, which would fail on
select. It was really defining a new temp register in the correct def
position, instead of the correct result register.
2019-12-27 08:49:43 -05:00
Matt Arsenault ed9a56b0f2 AMDGPU/GlobalISel: Select some 128-bit load/stores 2019-12-27 08:49:43 -05:00
Fangrui Song 502a77f125 Migrate function attribute "no-frame-pointer-elim" to "frame-pointer"="all" as cleanups after D56351 2019-12-24 15:57:33 -08:00
Matt Arsenault df5c2159d0 AMDGPU/GlobalISel: Legalize some 16-bit round instructions 2019-12-24 09:53:01 -05:00
Matt Arsenault e351256c0d GlobalISel: Define equivalent node for G_INTRINSIC_TRUNC 2019-12-24 09:53:01 -05:00
Matt Arsenault 9035fa6b54 AMDGPU/GlobalISel: Lower llvm.amdgcn.else 2019-12-24 09:53:01 -05:00
Jay Foad c7c05b0c8a [AMDGPU] Don't create MachinePointerInfos with an UndefValue pointer
Summary:
The only useful information the UndefValue conveys is the address space,
which MachinePointerInfo can represent directly without referring to an
IR value.

Reviewers: arsenm, rampitec

Subscribers: kzhuravl, jvesely, wdng, nhaehnle, yaxunl, dstuttard, tpr, t-tye, hiraditya, Petar.Avramovic, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D71838
2019-12-23 15:58:19 +00:00
Carl Ritson 2791667d2e [DAGCombiner] Check term use before applying aggressive FSUB optimisations
Summary:
Without this check unnecessary FMA instructions are generated when the FSUB terms are reused.
This also has the side-effect that the same value is computed to different levels of precision, which can create undesirable effects if the results are used together in subsequent computation.

Reviewers: arsenm, nhaehnle, foad, tpr, dstuttard, spatel

Reviewed By: arsenm

Subscribers: jvesely, wdng, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D71656
2019-12-23 09:37:58 +09:00
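(A small standalone illustration of the precision concern mentioned above: fusing a*b - c into an FMA keeps the full-precision product, so a reused, separately rounded product and the fused result can disagree.)

  #include <cmath>
  #include <cstdio>

  int main() {
    double a = 1.0 + 0x1p-30, b = 1.0 - 0x1p-30;
    double prod = a * b;                  // rounded product, reused elsewhere
    double unfused = prod - 1.0;          // uses the rounded product: 0x0p+0
    double fused = std::fma(a, b, -1.0);  // uses the exact product: -0x1p-60
    std::printf("unfused = %a\nfused   = %a\n", unfused, fused);
    return 0;
  }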
Matt Arsenault 42a26445f9 AMDGPU/GlobalISel: Fix misuse of div_scale intrinsics
Confusingly, the intrinsic operands do not match the
instruction/custom node. The order is shuffled, and the 3rd operand is
an immediate to select operands.

I'm not 100% sure I did this right, but fdiv still doesn't select end
to end and it will be easier to tell when it does. This at least
avoids an assertion in RegBankSelect and allows hitting the fallback
on selection.
2019-12-21 04:55:36 -05:00
Matt Arsenault dff3f8d742 AMDGPU/GlobalISel: Fix missing scc imp-def on scalar and/or/xor 2019-12-21 04:55:36 -05:00
Jay Foad 0412f518dc [AMDGPU] Fix typo in SIInstrInfo::memOpsHaveSameBasePtr
Summary:
The typo has been present since memOpsHaveSameBasePtr was introduced in
r313208.

It caused SIInstrInfo::shouldClusterMemOps to cluster more mem ops than
it was supposed to.

Subscribers: arsenm, kzhuravl, jvesely, wdng, nhaehnle, yaxunl, dstuttard, tpr, t-tye, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D71616
2019-12-17 18:54:27 +00:00
Jay Foad ad622af079 [AMDGPU] Update autogenerated checks 2019-12-17 16:48:02 +00:00
alex-t e7f585ed61 PostRA Machine Sink should take care of COPY defining register that is a sub-register by another COPY source operand
Differential Revision: https://reviews.llvm.org/D71132
2019-12-17 15:20:43 +03:00
Jay Foad 4f17b1784e Fix for AMDGPU MUL_I24 known bits calculation
Summary:
At present, the code calculating known bits of AMDGPU MUL_I24 confuses the concepts of "non-negative number" and "positive number".

In some situations, it results in incorrect code. I have a case where the optimizer replaces the result of calculating MUL_I24(-5, 0) with -8.

Reviewers: foad, arsenm

Reviewed By: arsenm

Subscribers: foad, arsenm, kzhuravl, jvesely, wdng, nhaehnle, yaxunl, dstuttard, tpr, t-tye, hiraditya, llvm-commits

Tags: #llvm

Patch by Eugene Kuznetsov.

Differential Revision: https://reviews.llvm.org/D70367
2019-12-16 10:25:57 +00:00
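(A standalone toy model of the arithmetic: MUL_I24 multiplies the operands sign-extended from their low 24 bits, and a "non-negative" operand may be zero, so negative * non-negative is not necessarily negative. The helper names below are only for illustration.)

  #include <cstdint>
  #include <cstdio>

  // Sign-extend a 24-bit value held in the low bits of a 32-bit word.
  static int64_t sext24(uint32_t v) {
    v &= 0xffffffu;
    return (v >= 0x800000u) ? int64_t(v) - 0x1000000 : int64_t(v);
  }

  // Toy model of MUL_I24: product of the sign-extended 24-bit operands.
  static int64_t mul_i24(uint32_t a, uint32_t b) {
    return sext24(a) * sext24(b);
  }

  int main() {
    // Negative times zero is 0, so the result's sign bit cannot be assumed set.
    std::printf("mul_i24(-5, 0) = %lld\n", (long long)mul_i24(uint32_t(-5), 0u));
    return 0;
  }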
Sanjay Patel 2afe864118 [DAG] Add SimplifyDemandedBits support for BSWAP
This exposes a shortcoming for AArch64, and that is tracked by PR40881:
https://bugs.llvm.org/show_bug.cgi?id=40881

Patch by: @RKSimon (Simon Pilgrim)

Differential Revision: https://reviews.llvm.org/D58017
2019-12-15 08:52:34 -05:00
Roman Tereshin 8731799fc6 [Legalizer] Making artifact combining order-independent
Legalization algorithm is complicated by two facts:
1) While regular instructions should be possible to legalize in
   an isolated, per-instruction, context-free manner, legalization
   artifacts can only be eliminated in pairs, which could be deeply, and
   ultimately arbitrarily, nested: { [ () ] }, where each parenthesis kind
   depicts an artifact kind, like extend, unmerge, etc. Such structure
   can only be fully eliminated by simple local combines if they are
   attempted in a particular order (inside out), or alternatively by
   repeated scans each eliminating only one innermost pair, resulting in
   O(n^2) complexity.
2) Some artifacts might in fact be regular instructions that could (and
   sometimes should) be legalized by the target-specific rules. Which
   means failure to eliminate all artifacts on the first iteration is
   not a failure, they need to be tried as instructions, which may
   produce more artifacts, including the ones that are in fact regular
   instructions, resulting in a non-constant number of iterations
   required to finish the process.

I trust the recently introduced termination condition (no new artifacts
were created during as-a-regular-instruction retrial of artifacts not
eliminated on the previous iteration) to be effective in providing
termination, but it only performs the legalization in full if, and only
if, at each step such chains of artifacts are successfully eliminated in
full as well.

Which is currently not guaranteed, as the artifact combines are applied
only once and in an arbitrary order that has to do with the order of
creation or insertion of artifacts into their worklist, which is no
particular order.

In this patch I make a small change to the artifact combiner, making it
re-insert into the worklist the immediate artifact users (modulo looking
through copies) of each vreg that changes its definition due to an
artifact combine.

Here the first scan through the artifacts worklist, while not
being done in any guaranteed order, only needs to find the innermost
pair(s) of artifacts that could be immediately combined out. After that
the process follows def-use chains, making them shorter at each step, thus
combining everything that can be combined in O(n) time.

Reviewers: volkan, aditya_nandakumar, qcolombet, paquette, aemerson, dsanders

Reviewed By: aditya_nandakumar, paquette

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D71448
2019-12-13 15:45:18 -08:00
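(A standalone toy model of the worklist idea, reusing the bracket analogy from the summary: after an innermost pair is eliminated, immediately revisit the enclosing position, the "user", instead of rescanning from the start, so a fully nested chain is eliminated in one linear pass. This is only an analogy, not the ArtifactCombiner code.)

  #include <cstdio>
  #include <string>

  static bool isPair(char open, char close) {
    return (open == '(' && close == ')') || (open == '[' && close == ']') ||
           (open == '{' && close == '}');
  }

  // Eliminate nested pairs in one pass: when the pair at position i is
  // removed, step back to re-examine the enclosing pair rather than restart.
  static std::string combine(std::string s) {
    size_t i = 0;
    while (i + 1 < s.size()) {
      if (isPair(s[i], s[i + 1])) {
        s.erase(i, 2);
        if (i > 0)
          --i;
      } else {
        ++i;
      }
    }
    return s;
  }

  int main() {
    std::printf("'%s'\n", combine("{[()]}").c_str());  // prints '' (all combined)
    return 0;
  }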
Tim Renouf fce1a6f584 Revert "AMDGPU: Try to commute sub of boolean ext"
This reverts commit 69fcfb7d35.

As shown in the test I attached to this commit, the change I reverted
causes a problem with "zext(cc1) - zext(cc2)". It commuted
the operands to the sub and used different logic to select the addc/subc
instruction:
   sub zext (setcc), x => addcarry 0, x, setcc
   sub sext (setcc), x => subcarry 0, x, setcc

... but that is bogus. I believe it is not possible to fold those commuted
patterns into any form of addcarry or subcarry. It may have worked as
intended before "AMDGPU: Change boolean content type to 0 or 1" because
the setcc was considered to be -1 rather than 1.

Differential Revision: https://reviews.llvm.org/D70978

Change-Id: If2139421aa6c935cbd1d925af58fe4a4aa9e8f43
2019-12-13 12:49:06 +00:00
Guozhi Wei 72942459d0 [MBP] Avoid tail duplication if it can't bring benefit
Current tail duplication integrated in bb layout is designed to increase the fallthrough from a BB's predecessor to its successor, but we have observed cases where duplication doesn't increase fallthrough, or brings too much size overhead.

To overcome these two issues, in function canTailDuplicateUnplacedPreds I add two checks:

* make sure there is at least one duplication in the current work set.
* the number of duplications should not exceed the number of successors.

The modification in hasBetterLayoutPredecessor fixes a bug: a potential predecessor must be at the bottom of a chain.

Differential Revision: https://reviews.llvm.org/D64376
2019-12-06 09:53:53 -08:00
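(A minimal condensation of the two checks described above; the name and signature are illustrative, not the actual canTailDuplicateUnplacedPreds code.)

  #include <cstdio>

  // Duplication is considered only when it happens at least once in the
  // current work set and does not exceed the number of successors.
  static bool tailDupLooksProfitable(unsigned NumDuplications,
                                     unsigned NumSuccessors) {
    return NumDuplications >= 1 && NumDuplications <= NumSuccessors;
  }

  int main() {
    std::printf("%d %d\n", tailDupLooksProfitable(0, 2),
                tailDupLooksProfitable(3, 2));  // 0 0: both cases rejected
    return 0;
  }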
David Stuttard 46db606834 AMDGPU: Avoid folding 2 constant operands into an SALU operation
Summary:
Catch the (admittedly unusual) case where SIFoldOperands attempts to fold 2
constant operands into the same SALU operation, with neither operand able to be
encoded as an inline constant.

Change-Id: Ibc48d662c9ffd8bbacd154976b0b1c257ace0927

Subscribers: arsenm, kzhuravl, jvesely, wdng, nhaehnle, yaxunl, tpr, t-tye, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D70896
2019-12-04 10:25:34 +00:00
Austin Kerbow cfbbdc83b4 AMDGPU/GlobalISel: Add AGPR bank and RegBankSelect mfma intrinsics
Differential Revision: https://reviews.llvm.org/D70871
2019-12-01 22:15:48 -08:00
Austin Kerbow 256ad954a9 AMDGPU: Reuse carry out register during FI elimination
Summary:
Pre-gfx9 we need to scavenge a 64-bit SGPR to use as the carry out for an Add.
If only one SGPR was available this crashed when trying to scavenge another
32-bit SGPR to materialize the offset.

Instead, reuse a 32-bit SGPR from the carry out as the offset register.

Also prefer to use vcc for the unused carry out when it is available.

Reviewers: arsenm, rampitec

Subscribers: kzhuravl, jvesely, wdng, nhaehnle, yaxunl, dstuttard, tpr, t-tye, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D70614
2019-11-28 10:13:48 -08:00
David Stuttard 943d8326dd AMDGPU: Fix lit test checks with dag option
Summary:
I was seeing some failures on a test with slightly different instruction
ordering. Adding in some DAG directives solved the issue.

Change-Id: If5a3d3969055fb19279943bd45161bb70a3dabce

Subscribers: kzhuravl, jvesely, wdng, nhaehnle, yaxunl, tpr, t-tye, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D70531
2019-11-28 10:01:06 +00:00
Eric Christopher fd39b1bb20 Revert "Revert "As a follow-up to my initial mail to llvm-dev here's a first pass at the O1 described there.""
This reapplies: 8ff85ed905

Original commit message:

As a follow-up to my initial mail to llvm-dev here's a first pass at the O1 described there.

This change doesn't include any change to move from selection dag to fast isel
and that will come with other numbers that should help inform that decision.
There also haven't been any real debuggability studies with this pipeline yet,
this is just the initial start done so that people could see it and we could start
tweaking after.

Test updates: Outside of the newpm tests most of the updates are coming from
optimization passes that are not run anymore (and without a compelling argument
at the moment) and that were largely used for canonicalization in clang.

Original post:

http://lists.llvm.org/pipermail/llvm-dev/2019-April/131494.html

Tags: #llvm
Differential Revision: https://reviews.llvm.org/D65410

This reverts commit c9ddb02659.
2019-11-26 20:28:52 -08:00
vpykhtin 008e65a7bf [AMDGPU] Fix emitIfBreak CF lowering: use a temp reg to make the register coalescer's life easier.
Differential revision: https://reviews.llvm.org/D70405
2019-11-26 18:59:37 +03:00
Muhammad Omair Javaid c9ddb02659 Revert "As a follow-up to my initial mail to llvm-dev here's a first pass at the O1 described there."
This reverts commit 8ff85ed905.

This commit introduced 9 new failures on lldb buildbot host at http://lab.llvm.org:8014/builders/lldb-aarch64-ubuntu

Following tests were failing:
    lldb-api :: functionalities/tail_call_frames/ambiguous_tail_call_seq1/TestAmbiguousTailCallSeq1.py
    lldb-api :: functionalities/tail_call_frames/ambiguous_tail_call_seq2/TestAmbiguousTailCallSeq2.py
    lldb-api :: functionalities/tail_call_frames/disambiguate_call_site/TestDisambiguateCallSite.py
    lldb-api :: functionalities/tail_call_frames/disambiguate_paths_to_common_sink/TestDisambiguatePathsToCommonSink.py
    lldb-api :: functionalities/tail_call_frames/disambiguate_tail_call_seq/TestDisambiguateTailCallSeq.py
    lldb-api :: functionalities/tail_call_frames/inlining_and_tail_calls/TestInliningAndTailCalls.py
    lldb-api :: functionalities/tail_call_frames/sbapi_support/TestTailCallFrameSBAPI.py
    lldb-api :: functionalities/tail_call_frames/thread_step_out_message/TestArtificialFrameStepOutMessage.py
    lldb-api :: functionalities/tail_call_frames/thread_step_out_or_return/TestSteppingOutWithArtificialFrames.py
    lldb-api :: functionalities/tail_call_frames/unambiguous_sequence/TestUnambiguousTailCalls.py

Tags: #llvm
Differential Revision: https://reviews.llvm.org/D65410
2019-11-26 09:32:13 +05:00
Eric Christopher 8ff85ed905 As a follow-up to my initial mail to llvm-dev here's a first pass at the O1 described there.
This change doesn't include any change to move from selection dag to fast isel
and that will come with other numbers that should help inform that decision.
There also haven't been any real debuggability studies with this pipeline yet,
this is just the initial start done so that people could see it and we could start
tweaking after.

Test updates: Outside of the newpm tests most of the updates are coming from
optimization passes that are not run anymore (and without a compelling argument
at the moment) and that were largely used for canonicalization in clang.

Original post:

http://lists.llvm.org/pipermail/llvm-dev/2019-April/131494.html

Tags: #llvm
Differential Revision: https://reviews.llvm.org/D65410
2019-11-25 17:16:46 -08:00
Austin Kerbow fef69706dc AMDGPU: Handle waitcnt overflow
Summary:
The waitcnt pass can overflow the counters when the number of outstanding events
for a type exceeds the capacity of the counter. This can lead to inefficient
insertion of waitcnts, or to waitcnt instructions with max values for each type.
The latter situation can cause an instruction which, when disassembled, appears to
be an illegal waitcnt without an operand.

In these cases we should add a wait for the 'counter maximum' - 1, and update the
waitcnt brackets accordingly.

Reviewers: rampitec, arsenm

Reviewed By: rampitec

Subscribers: kzhuravl, jvesely, wdng, nhaehnle, yaxunl, dstuttard, tpr, t-tye, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D70418
2019-11-23 09:34:23 -08:00
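(A rough sketch of the clamping idea from the summary; the names and the example capacity are placeholders, not the pass's actual fields.)

  #include <algorithm>
  #include <cstdio>

  // When the outstanding-event count exceeds what the counter field can
  // encode, wait on "counter maximum - 1" instead of emitting an out-of-range
  // waitcnt operand.
  static unsigned clampWaitCount(unsigned Outstanding, unsigned CounterMax) {
    return std::min(Outstanding, CounterMax - 1);
  }

  int main() {
    std::printf("%u\n", clampWaitCount(70, 64));  // prints 63
    return 0;
  }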
Tim Corringham 6821a3ccd6 [AMDGPU] Add attribute for target loop unroll threshold default
Summary:
Add a function attribute to allow the target specific default loop unroll threshold
to be specified on a per-function basis. This allows a front-end to give guidance
where it has insight that is not available to the back-end, while still allowing the
target specific heuristics to also have an effect.

Subscribers: arsenm, kzhuravl, jvesely, wdng, nhaehnle, yaxunl, dstuttard, tpr, t-tye, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D68873
2019-11-21 09:47:28 +00:00
Piotr Sobczak 4a801170f3 [AMDGPU][SILoadStoreOptimizer] Merge TBUFFER loads/stores
Summary: Extend SILoadStoreOptimizer to merge tbuffer loads and stores.

Reviewers: nhaehnle

Reviewed By: nhaehnle

Subscribers: arsenm, kzhuravl, jvesely, wdng, nhaehnle, yaxunl, dstuttard, tpr, t-tye, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D69794
2019-11-20 22:59:30 +01:00
Stanislav Mekhanoshin 899cdf95d9 [AMDGPU] Fixed mfma test check. NFC. 2019-11-20 12:33:12 -08:00
Michael Liao 4a308d302c [AMDGPU] Keep consistent check of legal addressing mode.
Summary:
- Add test cases for GFX10, which has narrower offset range compared to
  GFX9.

Reviewers: rampitec, arsenm

Subscribers: kzhuravl, jvesely, wdng, nhaehnle, yaxunl, dstuttard, tpr, t-tye, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D70473
2019-11-20 15:08:17 -05:00
Sameer Sahasrabuddhe 52c5014da0 [AMDGPU] add support for hostcall buffer pointer as hidden kernel argument
Hostcall is a service that allows a kernel to submit requests to the
host using shared buffers, and block until a response is
received. This will eventually replace the shared buffer currently
used for printf, and repurposes the same hidden kernel argument. This
change introduces a new ValueKind in the HSA metadata to represent the
hostcall buffer.

Differential Revision: https://reviews.llvm.org/D70038
2019-11-20 15:53:55 +05:30
Austin Kerbow f3225f2abe AMDGPU/GlobalISel: Legalize FDIV64
Reviewers: arsenm

Reviewed By: arsenm

Subscribers: kzhuravl, jvesely, wdng, nhaehnle, yaxunl, rovka, dstuttard, tpr, t-tye, hiraditya, Petar.Avramovic, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D70403
2019-11-19 21:02:27 -08:00
Matt Arsenault db0ed3e429 AMDGPU: Refactor treatment of denormal mode
Start moving towards treating this as a property of the calling
convention, and not the subtarget. The default denormal mode should
not be part of the subtarget, and be moved into a separate function
attribute.

This patch is still NFC. The denormal mode remains as a subtarget
feature for now, but make the necessary changes to switch to using an
attribute.
2019-11-19 19:55:43 +05:30
Matt Arsenault ea23b6428b AMDGPU: Be explicit about denormal mode in MIR tests
Start checking the machine function in GlobalISel instead of the
target directly.

This temporarily breaks fcanonicalize selection in GlobalISel.
2019-11-19 19:55:43 +05:30
dfukalov 6fd11b14f6 [AMDGPU] Tune inlining parameters for AMDGPU target (part 2)
Summary:
Most IR instructions got better code size estimations after commit 47a5c36b,
so default parameter values should be updated to improve inlining and
unrolling for the target.

Reviewers: rampitec, arsenm

Reviewed By: rampitec

Subscribers: kzhuravl, jvesely, wdng, nhaehnle, yaxunl, dstuttard, tpr, t-tye, hiraditya, zzheng, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D70391
2019-11-19 16:33:16 +03:00
Piotr Sobczak 02419ab5c7 [AMDGPU] Lower llvm.amdgcn.s.buffer.load.v3[i|f]32
Summary: Add lowering support for 32-bit vec3 variant of s.buffer.load intrinsic.

Subscribers: arsenm, kzhuravl, jvesely, wdng, nhaehnle, yaxunl, dstuttard, tpr, t-tye, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D70118
2019-11-15 15:01:15 +01:00
Matt Arsenault 31479d868e AMDGPU: Change boolean content type to 0 or 1
The usage of target boolean checks is overly inflexible, since sext
and zext of a compare are equally cheap. The choice is arbitrary, but
using 0/1 to some degree is the path of least resistance since
that's what most targets use. This enables a few combines that don't
bother to support ZeroOrNegativeOneBooleanContent.
2019-11-15 13:43:47 +05:30
Matt Arsenault 69fcfb7d35 AMDGPU: Try to commute sub of boolean ext
Avoids another regression in a future patch.
2019-11-15 13:43:42 +05:30
Matt Arsenault bc276c6379 GlobalISel: Lower s1 source G_SITOFP/G_UITOFP 2019-11-15 13:37:20 +05:30
Stanislav Mekhanoshin 4fa44f989e [AMDGPU] Fixed dpp test. NFC. 2019-11-13 16:38:54 -08:00
Stanislav Mekhanoshin af7d4022c7 [AMDGPU] Fixed mfma-loop test. NFC. 2019-11-13 16:03:54 -08:00
Matt Arsenault 9d7bccab66 AMDGPU: Extend add x, (ext setcc) combine to sub
This is the same as the add case, but inverts the operation type.

This avoids regressions in a future patch.
2019-11-13 07:13:58 +05:30
Matt Arsenault 4b47213951 AMDGPU: Switch backend default max workgroup size to 1024
Previously this would default to 256, not the maximum supported size
of 1024. Using a maximum lower than the hardware maximum requires
language runtimes to enforce this limit for correctness, which no
language has correctly done. Switch the default to the conservatively
correct maximum, and force frontends to opt in to the more optimal 256
default maximum.

I don't really understand why the changes in occupancy-levels.ll
increased the computed occupancy, which I expected to decrease. I'm
not sure if these tests should be forcing the old maximum.
2019-11-13 07:11:02 +05:30
Matt Arsenault 25c5da5a42 AMDGPU: Reduce reported maximum group size to 1024
While some targets allow encoding 2048, this was never tested or
supported.
2019-11-13 06:34:28 +05:30
Tim Renouf 07ebd74154 MCP: Fixed bug with dest overlapping copy source
In MachineCopyPropagation, when propagating the source of a copy into
the operand of a later instruction, bail if a destination overlaps
(partly defines) the copy source. If the instruction where the
substitution is happening is also a copy, allowing the propagation
confuses the tracking mechanism.

Differential Revision: https://reviews.llvm.org/D69953

Change-Id: Ic570754f878f2d91a4a50a9bdcf96fbaa240726d
2019-11-12 08:18:11 +00:00
Matt Arsenault e16a71382d AMDGPU: Select global atomicrmw fadd
This only works if there is no use of the return value.
2019-11-06 16:06:38 -08:00
Stanislav Mekhanoshin d17bcf2bb9 [AMDGPU] Add handling of 160 bit registers in analyzeResourceUsage
This was omitted. Also SReg_96Reg missed the IsSGPR assignment.

Differential Revision: https://reviews.llvm.org/D69919
2019-11-06 15:47:32 -08:00
Quentin Colombet 52af7aedfe [GISel][ArtifactCombiner] Relax the constraint to combine unmerge with concat_vectors
The combine G_UNMERGE_VALUES with G_CONCAT_VECTORS used to only be performed
when the result type of the G_UNMERGE_VALUES was a vector type.
In other words, we were expecting that the G_UNMERGE_VALUES was effectively
the exact opposite of the G_CONCAT_VECTORS.

Lift that constraint by allowing any G_UNMERGE_VALUES to be combined
with any G_CONCAT_VECTORS (as long as the size of the different pieces
that we merge/unmerge match).

Differential Revision: https://reviews.llvm.org/D69288
2019-11-06 11:27:50 -08:00
Daniel Sanders e74c5b9661 [globalisel] Rename G_GEP to G_PTR_ADD
Summary:
G_GEP is rather poorly named. It's a simple pointer+scalar addition and
doesn't support any of the complexities of getelementptr. I therefore
propose that we rename it. There's a G_PTR_MASK so let's follow that
convention and go with G_PTR_ADD.

Reviewers: volkan, aditya_nandakumar, bogner, rovka, arsenm

Subscribers: sdardis, jvesely, wdng, nhaehnle, hiraditya, jrtc27, atanasyan, arphaman, Petar.Avramovic, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D69734
2019-11-05 10:31:17 -08:00
Michael Liao 4531aee2ac [amdgpu] Fix known bits computation on `MUL_I24`/`MUL_U24`.
Reviewers: arsenm, rampitec

Subscribers: kzhuravl, jvesely, wdng, nhaehnle, dstuttard, tpr, t-tye, hiraditya, llvm-commits, yaxunl

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D69735
2019-11-01 17:06:17 -04:00
Matt Arsenault d9e0a2942a AMDGPU: Disallow spill folding with m0 copies
readlane and writelane instructions are not allowed to use m0 as the
data operand, so spilling them is tricky and would require an
intermediate SGPR to spill it. Constrain the virtual register class in
this caes to disallow the inline spiller from folding the m0 operand
directly into the spill instruction.

I copied this hack from AArch64 which has the same problem for $sp.
2019-10-30 14:56:33 -07:00
Matt Arsenault edca9ac0de AMDGPU: Don't fold S_NOPs with implicit operands 2019-10-30 14:40:56 -07:00
Jay Foad e5972f2a04 [AMDGPU] Simplify VCCZ bug handling
Summary:
VCCZBugHandledSet was used to make sure we don't apply the same
workaround more than once to a single cbranch instruction, but it's not
necessary because the workaround involves inserting an s_waitcnt
instruction, which is enough for subsequent iterations to detect that no
further workaround is necessary.

Also beef up the test case to check that the workaround was only applied
once. I have also manually verified that the test still passes even if I
hack the big do-while loop in runOnMachineFunction to run a minimum of
five iterations.

Subscribers: arsenm, kzhuravl, jvesely, wdng, nhaehnle, yaxunl, dstuttard, tpr, t-tye, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D69621
2019-10-30 17:09:07 +00:00
Jay Foad 86549c7528 [SelectionDAG] Add support for FP_ROUND in WidenVectorOperand.
Summary:
This is used on AMDGPU for rounding from v3f64 (which is illegal) to
v3f32 (which is legal).

Subscribers: jvesely, nhaehnle, tpr, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D69339
2019-10-30 15:18:21 +00:00
Austin Kerbow 2b88b344f2 AMDGPU/GlobalISel: Legalize FDIV32
Reviewers: arsenm

Subscribers: kzhuravl, jvesely, wdng, nhaehnle, yaxunl, rovka, dstuttard, tpr, t-tye, hiraditya, Petar.Avramovic, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D69581
2019-10-29 17:18:06 -07:00
Matt Arsenault 21bc8e5a13 AMDGPU: Make VReg_1 only include 1 artificial register
When TableGen is inferring register classes from contexts, it uses a
sorting function based on the number of registers in the class. Since
this was being treated as an alias of VGPR_32, they had exactly the
same size. The sort used wasn't a stable sort, and even if it were, I
believe the tie breaker would effectively end up being the
alphabetical ordering of the class name. There appear to be issues
trying to use an empty set of registers, so add only one so this will
always sort to the end.

Also add a comment explaining how VReg_1 is a dirty hack for
SelectionDAG.

This does end up changing the behavior of i1 with inline asm and VGPR
constraints, but the existing behavior was already nonsensical and
inconsistent. It should probably be disallowed anyway.

Fixes bug 43699
2019-10-28 20:51:51 -07:00
Austin Kerbow d11b93ec6a AMDGPU: Avoid overwriting saved PC
Summary:
An outstanding load with the same destination SGPR as a call could cause the PC
to be updated with a junk value on return.

Reviewers: arsenm, rampitec

Reviewed By: arsenm

Subscribers: kzhuravl, jvesely, wdng, nhaehnle, yaxunl, dstuttard, tpr, t-tye, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D69474
2019-10-28 10:02:22 -07:00
cdevadas e921ede540 [AMDGPU] Fix Vreg_1 PHI lowering in SILowerI1Copies.
There is a minor flaw in the implementation of function lowerPhis.
This function replaces values of regclass Vreg_1 (boolean values)
involved in PHIs with an SGPR. Currently it iterates over the MBBs,
performs an in-place lowering of PHIs, and fails to lower any
incoming value that is itself another PHI of the Vreg_1 regclass.
The failure occurs only when the MBB to which the incoming PHI value
belongs has not been visited/lowered yet.

To fix this problem, collect all Vreg_1 PHIs upfront and then
perform the lowering.

Differential Revision: https://reviews.llvm.org/D69182
2019-10-26 14:37:45 +05:30
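(A rough fragment of the collect-then-lower shape described above, assuming an enclosing MachineFunction MF; isVreg1 and lowerPhi are stand-ins for the pass's actual helpers, not real APIs.)

  // Collect all Vreg_1 PHIs across the function first, then lower them, so an
  // incoming value that is itself a not-yet-visited PHI is still handled.
  llvm::SmallVector<llvm::MachineInstr *, 8> Vreg1Phis;
  for (llvm::MachineBasicBlock &MBB : MF)
    for (llvm::MachineInstr &MI : MBB.phis())
      if (isVreg1(MI))
        Vreg1Phis.push_back(&MI);
  for (llvm::MachineInstr *MI : Vreg1Phis)
    lowerPhi(*MI);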
Sanjay Patel 4c47617627 [SDAG] fold extract_vector_elt with undef index
This makes the DAG behavior consistent with IR's extractelement after:
rGb32e4664a715

https://bugs.llvm.org/show_bug.cgi?id=42689

I've tried to maintain test intent for WebAssembly.
The AMDGPU test is trying to test for crashing or other bad behavior,
but I'm not sure if that's possible after this change.
2019-10-25 19:27:26 -04:00
Stanislav Mekhanoshin 4c0251da14 [AMDGPU] Enable SGPR copy folding
That used to fail in the last testcase function because after
%0:sreg_64.sub0 was folded into %3:sreg_32_xm0_xexec COPY, it
was further folded into S_STORE_DWORD_IMM. Its legal effective
subreg class is SReg_32 while instruction expects more restricted
SReg_32_XM0_EXEC. However, SIInstrInfo::isLegalRegOperand()
passed the legality check and it was caught in the verifier.

Borrowed code from the verifier to check for RC legality.

Differential Revision: https://reviews.llvm.org/D69445
2019-10-25 15:08:30 -07:00
Matt Arsenault 1a276d1e8c GlobalISel: Implement widenScalar for G_INSERT_VECTOR_ELT 2019-10-25 13:55:07 -07:00
Matt Arsenault 171cf5302f AMDGPU/GlobalISel: Handle flat/global G_ATOMIC_CMPXCHG
Custom lower this to a target instruction with the merge operands. I
think it might be better to directly select this and emit a
REG_SEQUENCE, but this would be more work since it would require
splitting the tablegen patterns for these cases from the other
atomics.
2019-10-25 13:11:09 -07:00
Changpeng Fang 1ce552f3ef AMDGPU: Fix the broken dominator tree when creating waterfall loop for resource descriptor
Summary:
  In loadSRsrcFromVGPR, if MBB is the same as Succ, Remainder is not the immediate dominator of Succ.

Reviewer:
  arsenm

Differential Revision:
  https://reviews.llvm.org/D69358
2019-10-25 13:08:04 -07:00
Stanislav Mekhanoshin d4303b3861 [AMDGPU] Fold AGPR reg_sequence initializers
Differential Revision: https://reviews.llvm.org/D69413
2019-10-25 11:39:02 -07:00
vpykhtin c9c18e5a31 [AMDGPU] Disallow dpp combining for dpp instructions without Src2 operand (when Src2 is required)
Differential revision: https://reviews.llvm.org/D69430
2019-10-25 21:30:37 +03:00
Austin Kerbow c35b358b74 AMDGPU/GlobalISel: Legalize FDIV16
Reviewers: arsenm

Reviewed By: arsenm

Subscribers: kzhuravl, jvesely, wdng, nhaehnle, yaxunl, rovka, dstuttard, tpr, t-tye, hiraditya, volkan, Petar.Avramovic, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D69347
2019-10-25 11:07:17 -07:00
Scott Linder 7ad3636c30 [AMDGPU] Remove update_llc_test_checks for a test
The test split-arg-dbg-value.ll has a host-specific path in the
full output captured by update_llc_test_checks.

Fix for test failures introduced in https://reviews.llvm.org/D69402

Tags: #llvm
2019-10-25 11:47:33 -04:00
Scott Linder 2c37833931 [AMDGPU] Clean up update_llc_test_checks CodeGen tests
Summary:
Some tests have been hand edited without removing the
update_llc_test_checks header, some have slightly outdated CHECK lines
which still pass, and some have additional comments which
update_llc_test_checks pushes towards the function body.

Subscribers: kzhuravl, jvesely, wdng, nhaehnle, yaxunl, dstuttard, tpr, t-tye, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D69402
2019-10-24 17:35:33 -04:00
Craig Topper a5376f6322 [GlobalISel][AArch64][AMDGPU][X86] Teach LegalizationArtifactCombiner to combine trunc(g_constant).
This allows X86 to properly form shift by immediate instructions
since we require an 8-bit constant to match the imported
SelectionDAG patterns.
2019-10-24 12:59:26 -07:00
Stanislav Mekhanoshin 3c8e055187 [AMDGPU] Fix mfma scheduling crash
An SUnit can be neither an instruction nor an SDNode; it is all
null if it represents a nop. Fixed a crash on using SU->getInstr().

Differential Revision: https://reviews.llvm.org/D69395
2019-10-24 11:01:52 -07:00
Michael Liao b2a65f0d70 [AMDGPU] Skip additional folding on the same operand.
Reviewers: rampitec, arsenm

Subscribers: kzhuravl, jvesely, wdng, nhaehnle, yaxunl, dstuttard, tpr, t-tye, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D69355
2019-10-24 11:30:22 -04:00
Stanislav Mekhanoshin 61e7a61bdc [AMDGPU] Allow folding of sgpr to vgpr copy
Potentially sgpr to sgpr copy should also be possible.
That is however trickier because we may end up with a
wrong register class at use because of xm0/xexec permutations.

Differential Revision: https://reviews.llvm.org/D69280
2019-10-23 18:42:48 -07:00
Stanislav Mekhanoshin f9b1dc5553 [AMDGPU] Updated fold-vgpr-copy.mir test. NFC. 2019-10-22 12:43:23 -07:00
Stanislav Mekhanoshin 48f57138be [AMDGPU] Allow tied operand subreg folding
Turns out it makes sense, contrary to what the comment said.

Differential Revision: https://reviews.llvm.org/D69287
2019-10-22 11:27:36 -07:00
Austin Kerbow 97263fa2dd AMDGPU/GlobalISel: Legalize fast unsafe FDIV
Reviewers: arsenm

Reviewed By: arsenm

Subscribers: kzhuravl, jvesely, wdng, nhaehnle, yaxunl, rovka, dstuttard, tpr, t-tye, hiraditya, Petar.Avramovic, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D69231

llvm-svn: 375460
2019-10-21 22:18:26 +00:00
Matt Arsenault 38038f116f AMDGPU: Use CopyToReg for interp intrinsic lowering
This doesn't use the default value, so doesn't benefit from the hack
to help optimize it.

llvm-svn: 375450
2019-10-21 19:53:49 +00:00
Matt Arsenault 8ebbf25cb1 AMDGPU: Erase redundant redefs of m0 in SIFoldOperands
Only handle simple inter-block redefs of m0 to the same value. This
avoids interference from redefs of m0 in SILoadStoreOptimizer. I was
initially teaching that pass to ignore redefs of m0, but having them
not exist beforehand is much simpler.

This is in preparation for deleting the current special m0 handling in
SIFixSGPRCopies to allow the register coalescer to handle the
difficult cases.

llvm-svn: 375449
2019-10-21 19:53:46 +00:00
Matt Arsenault dd6cf159ba AMDGPU: Stop adding m0 implicit def to SGPR spills
r375293 removed the SGPR spilling with scalar stores path, so this is
no longer necessary. This also always had the defect of adding the def
even when this path wasn't in use.

llvm-svn: 375448
2019-10-21 19:42:29 +00:00
Stanislav Mekhanoshin 33092194f2 [AMDGPU] Select AGPR in PHI operand legalization
If a PHI defines an AGPR, legalize its operands to AGPR.
At the moment we can get an AGPR PHI with VGPR operands.
I am not aware of any problems as it seems to be handled
gracefully in RA, but this is not right anyway.

It also slightly decreases VGPR pressure in some cases
because we do not have to copy via a VGPR.

Differential Revision: https://reviews.llvm.org/D69206

llvm-svn: 375446
2019-10-21 19:25:27 +00:00
Yevgeny Rouban 5e5af533ab [IR] Fix mayReadFromMemory() for writeonly calls
Current implementation of Instruction::mayReadFromMemory()
returns !doesNotAccessMemory() which is !ReadNone. This
does not take into account that the writeonly attribute
also indicates that the call does not read from memory.

The patch changes the predicate to !doesNotReadMemory(), which
reflects the intended behavior.

Differential Revision: https://reviews.llvm.org/D69086

llvm-svn: 375389
2019-10-21 06:52:08 +00:00
Matt Arsenault e5be543a55 AMDGPU: Increase vcc liveness scan threshold
Avoids a test regression in a future patch. Also add debug printing on
this case, so I waste less time debugging folds in the future.

llvm-svn: 375367
2019-10-20 17:44:17 +00:00
Matt Arsenault 7cd57dcd5b AMDGPU: Split flat offsets that don't fit in DAG
We handle it this way for some other address spaces.

Since r349196, SILoadStoreOptimizer has been trying to do this. This
is after SIFoldOperands runs, which can change the addressing
patterns. It's simpler to just split this earlier.

llvm-svn: 375366
2019-10-20 17:34:44 +00:00
Matt Arsenault bba8fd7132 AMDGPU: Add baseline tests for flat offset splitting
llvm-svn: 375364
2019-10-20 16:33:21 +00:00
Matt Arsenault 8a8b317460 AMDGPU: Don't error on calls to null or undef
Calls to constants should probably be generally handled.

llvm-svn: 375356
2019-10-20 07:46:04 +00:00
Matt Arsenault 1aae510893 AMDGPU: Remove optnone from a test
It's not clear why the test had this. I'm unable to break the original
case with the original patch reverted with or without optnone.

This avoids a failure in a future commit.

llvm-svn: 375321
2019-10-19 01:34:59 +00:00
Matt Arsenault d4274f0174 LiveIntervals: Fix handleMoveUp with subreg def moving across a def
If a subregister def was moved across another subregister def and
another use, the main range was not correctly updated. The end point
of the moved interval ended too early and missed the use from the
other lanes in the subreg def.

llvm-svn: 375300
2019-10-18 23:24:25 +00:00
Stanislav Mekhanoshin 0fab220eb5 [AMDGPU] move PHI nodes to AGPR class
If all uses of a PHI are in AGPR register class we should
avoid unneeded copies via VGPRs.

Differential Revision: https://reviews.llvm.org/D69200

llvm-svn: 375297
2019-10-18 22:48:45 +00:00
Jay Foad a9aa4ec6a3 [AMDGPU] Remove -amdgpu-spill-sgpr-to-smem.
Summary: The implementation was never completed and never used except in tests.

Reviewers: arsenm, mareko

Subscribers: qcolombet, kzhuravl, jvesely, wdng, nhaehnle, yaxunl, dstuttard, tpr, t-tye, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D69163

llvm-svn: 375293
2019-10-18 21:48:22 +00:00
Matt Arsenault f9a42ed0a7 AMDGPU: Relax 32-bit SGPR register class
Mostly use SReg_32 instead of SReg_32_XM0 for arbitrary values. This
will allow the register coalescer to do a better job eliminating
copies to m0.

For GlobalISel, as a terrible hack, use SGPR_32 for things that should
use SCC until booleans are solved.

llvm-svn: 375267
2019-10-18 18:26:37 +00:00
Austin Kerbow 2f41a023af AMDGPU: Fix SMEM WAR hazard for gfx10 readlane
Summary: Hazard recognizer fails to see hazard with V_READLANE_B32_gfx10.

Reviewers: rampitec

Reviewed By: rampitec

Subscribers: arsenm, kzhuravl, jvesely, wdng, nhaehnle, yaxunl, dstuttard, tpr, t-tye, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D69172

llvm-svn: 375265
2019-10-18 18:20:30 +00:00
Matt Arsenault 34ed76e180 GlobalISel: Implement lower for G_SADDO/G_SSUBO
Port directly from SelectionDAG, minus the path using
ISD::SADDSAT/ISD::SSUBSAT.

llvm-svn: 375042
2019-10-16 20:46:32 +00:00
Stanislav Mekhanoshin edcd5815ce [AMDGPU] Do not combine dpp mov reading physregs
We cannot be sure physregs will stay unchanged.

Differential Revision: https://reviews.llvm.org/D69065

llvm-svn: 375033
2019-10-16 19:28:25 +00:00
Stanislav Mekhanoshin 3d99310c15 [AMDGPU] Do not combine dpp with physreg def
We will remove dpp mov along with the physreg def otherwise.

Differential Revision: https://reviews.llvm.org/D69063

llvm-svn: 375030
2019-10-16 18:48:54 +00:00
David Stuttard 2d6a2303f8 [AMDGPU] Fix-up cases where writelane has 2 SGPR operands
Summary:
Even though writelane doesn't have the same constraints as other valu
instructions, it still can't violate the >1 SGPR operand constraint

Due to later register propagation (e.g. fixing up vgpr operands via
readfirstlane) changing writelane to only have a single SGPR is tricky.

This implementation puts a new check after SIFixSGPRCopies that prevents
multiple SGPRs being used in any writelane instructions.

The algorithm used is to check for trivial copy prop of suitable constants into
one of the SGPR operands and perform that if possible. If this isn't possible,
put an explicit copy of Src1 SGPR into M0 and use that instead (this is
allowable for writelane as the constraint is for SGPR read-port and not
constant-bus access).

Reviewers: rampitec, tpr, arsenm, nhaehnle

Reviewed By: rampitec, arsenm, nhaehnle

Subscribers: arsenm, kzhuravl, jvesely, wdng, nhaehnle, mgorny, yaxunl, tpr, t-tye, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D51932

Change-Id: Ic7553fa57440f208d4dbc4794fc24345d7e0e9ea
llvm-svn: 375004
2019-10-16 14:37:39 +00:00
Piotr Sobczak 02baaca742 [AMDGPU] Extend the SI Load/Store optimizer
Summary:
Extend the SI Load/Store optimizer to merge MIMG load instructions. Handle
different flavours of image_load and image_sample instructions.

When the instructions of the same subclass differ only in dmask, merge
them and update dmask accordingly.

Reviewers: nhaehnle

Reviewed By: nhaehnle

Subscribers: arsenm, kzhuravl, jvesely, wdng, nhaehnle, yaxunl, dstuttard, tpr, t-tye, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D64911

llvm-svn: 374984
2019-10-16 10:17:02 +00:00
Austin Kerbow 527e9f9a3f AMDGPU: Fix infinite searches in SIFixSGPRCopies
Summary:
Two conditions could lead to infinite loops when processing PHI nodes in
SIFixSGPRCopies.

The first condition involves a REG_SEQUENCE that uses registers defined by both
a PHI and a COPY.

The second condition arises when a physical register is copied to a virtual
register which is then used in a PHI node. If the same virtual register is
copied to the same physical register, the result is an endless loop.

%0:sgpr_64 = COPY $sgpr0_sgpr1
%2 = PHI %0, %bb.0, %1, %bb.1
$sgpr0_sgpr1 = COPY %0

Reviewers: alex-t, rampitec, arsenm

Reviewed By: rampitec

Subscribers: kzhuravl, jvesely, wdng, nhaehnle, yaxunl, dstuttard, tpr, t-tye, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D68970

llvm-svn: 374944
2019-10-15 19:59:45 +00:00
Stanislav Mekhanoshin 1184c27fa5 [AMDGPU] Support mov dpp with 64 bit operands
We define mov/update dpp intrinsics as overloaded but do not
support i64, which is a practically useful type. Fix the
selection and lowering.

Differential Revision: https://reviews.llvm.org/D68673

llvm-svn: 374910
2019-10-15 16:41:15 +00:00
Stanislav Mekhanoshin 6e8599d939 [AMDGPU] Allow DPP combiner to work with REG_SEQUENCE
Differential Revision: https://reviews.llvm.org/D68828

llvm-svn: 374908
2019-10-15 16:17:50 +00:00
Roman Tereshin 044297ccbf [update_mir_test_checks] Handle MI flags properly
Previously we would generate literal check lines with no reg-exps for
vregs, as MI flags (nsw, ninf, etc.) won't be recognized as part of the MI.

Fixing that. Includes updating the MIR tests that suffered from the
problem.

Reviewed By: bogner

Differential Revision: https://reviews.llvm.org/D68905

llvm-svn: 374829
2019-10-14 22:01:58 +00:00
Matt Arsenault e8f1ad2ad8 AMDGPU: Remove unnecessary IR from test
llvm-svn: 374800
2019-10-14 18:30:29 +00:00
Cameron McInally 20b8ed2c2b [IRBuilder] Update IRBuilder::CreateFNeg(...) to return a UnaryOperator
Reapply r374240 with fix for Ocaml test, namely Bindings/OCaml/core.ml.

Differential Revision: https://reviews.llvm.org/D61675

llvm-svn: 374782
2019-10-14 15:35:01 +00:00
Alexander Timofeev c4d256a590 [AMDGPU] Come back patch for the 'Assign register class for cross block values according to the divergence.'
Detailed description:

    After https://reviews.llvm.org/D59990 was submitted, several issues were discovered.
    Changes in common code were preserved but the AMDGPU-specific part was reverted to keep the backend working correctly.

    Discovered issues were addressed in the following commits:

    https://reviews.llvm.org/D67662
    https://reviews.llvm.org/D67101
    https://reviews.llvm.org/D63953
    https://reviews.llvm.org/D63731

    This change brings back AMDGPU specific changes.

  Reviewed by: rampitec, arsenm

  Differential Revision: https://reviews.llvm.org/D68635

llvm-svn: 374767
2019-10-14 12:01:10 +00:00
Stanislav Mekhanoshin f87fe45d5c [AMDGPU] Use GCN prefix in dpp_combine.mir. NFC.
llvm-svn: 374607
2019-10-11 22:28:04 +00:00
Stanislav Mekhanoshin e2d104f64c [AMDGPU] link dpp pseudos and real instructions on gfx10
This defaults to a zero fi operand, but we do not expose it
anyway. Should we expose it later, it needs to be added to
the pseudo.

This enables dpp combining on gfx10.

Differential Revision: https://reviews.llvm.org/D68888

llvm-svn: 374604
2019-10-11 22:03:36 +00:00
Marcello Maggioni 0112123eea [GISel] Allow getConstantVRegVal() to return G_FCONSTANT values.
In GISel we have both G_CONSTANT and G_FCONSTANT, but because
in GISel we don't really have a concept of Float vs Int value
the only difference between the two is where the data originates
from.

What both G_CONSTANT and G_FCONSTANT return is just a bag of bits
with the constant representation in it.

By making getConstantVRegVal() return a G_FCONSTANT's bit representation
as well we allow ConstantFold and other things to operate with
G_FCONSTANT.

Adding tests that show ConstantFolding to work on mixed G_CONSTANT
and G_FCONSTANT sources.

Differential Revision: https://reviews.llvm.org/D68739

llvm-svn: 374458
2019-10-10 21:46:26 +00:00
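(A hedged fragment illustrating the "bag of bits" point: a G_FCONSTANT's payload reduces to an APInt just like an integer constant's would. FPConst is assumed here to be the ConstantFP attached to the G_FCONSTANT; this is not the actual getConstantVRegVal() change.)

  const llvm::APFloat &Val = FPConst->getValueAPF();
  llvm::APInt Bits = Val.bitcastToAPInt();  // same bit pattern constant folding sees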
Stanislav Mekhanoshin 19a1a739b1 [AMDGPU] Handle undef old operand in DPP combine
It was missing an undef flag.

Differential Revision: https://reviews.llvm.org/D68813

llvm-svn: 374455
2019-10-10 21:32:41 +00:00
Stanislav Mekhanoshin cbe55c7caf [AMDGPU] Fixed dpp_combine.mir with expensive checks. NFC.
llvm-svn: 374365
2019-10-10 15:28:52 +00:00
Dmitri Gribenko eaf6dd482b Revert "[IRBuilder] Update IRBuilder::CreateFNeg(...) to return a UnaryOperator"
This reverts commit r374240. It broke OCaml tests:
http://lab.llvm.org:8011/builders/clang-x86_64-debian-fast/builds/19014

llvm-svn: 374354
2019-10-10 14:13:54 +00:00
Matt Arsenault 12994a70cf AMDGPU: Use SGPR_128 instead of SReg_128 for vregs
SGPR_128 only includes the real allocatable SGPRs, and SReg_128 adds
the additional non-allocatable TTMP registers. There's no point in
allocating SReg_128 vregs. This shrinks the size of the classes
regalloc needs to consider, which is usually good.

llvm-svn: 374284
2019-10-10 07:11:33 +00:00
Matt Arsenault f8bf7d7f42 AMDGPU: Don't fold copies to physregs
In a future patch, this will help cleanup m0 handling.

The register coalescer handles copies from a register that
materializes an immediate, but doesn't handle move immediates
itself. The virtual register uses will often be allocated to the same
register, so there ends up being no real copy.

llvm-svn: 374257
2019-10-09 22:51:42 +00:00
Matt Arsenault 85dfa82302 AMDGPU/GlobalISel: Fix crash on wide constant load with VGPR pointer
This was ignoring the register bank of the input pointer, and
isUniformMMO seems overly aggressive.

This will now conservatively assume a VGPR in cases where the incoming
bank hasn't been determined yet (i.e. is from a loop phi).

llvm-svn: 374255
2019-10-09 22:44:49 +00:00
Matt Arsenault 3cd3959fe2 GlobalISel: Implement fewerElementsVector for G_BUILD_VECTOR
Turn it into a G_CONCAT_VECTORS of G_BUILD_VECTOR.

llvm-svn: 374252
2019-10-09 22:44:43 +00:00
Stanislav Mekhanoshin c6dec1d828 [AMDGPU] Fixed dpp combine of VOP1
If the original instruction did not have source modifiers, they were
not added to the new DPP instruction either, even when needed.

Differential Revision: https://reviews.llvm.org/D68729

llvm-svn: 374241
2019-10-09 22:02:58 +00:00
Cameron McInally 47363a148f [IRBuilder] Update IRBuilder::CreateFNeg(...) to return a UnaryOperator
Also update Clang to call Builder.CreateFNeg(...) for UnaryMinus.
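
A minimal usage sketch, assuming a live IRBuilder (nothing below is taken from
the patch itself): the call now emits a true 'fneg' instead of 'fsub -0.0, x'.

    #include "llvm/IR/IRBuilder.h"
    using namespace llvm;

    // CreateFNeg now produces a genuine unary 'fneg' instruction.
    Value *emitNegateSketch(IRBuilder<> &Builder, Value *X) {
      Value *Neg = Builder.CreateFNeg(X, "neg");
      // Unless the operand folds to a constant, the result is a UnaryOperator.
      return Neg;
    }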

Differential Revision: https://reviews.llvm.org/D61675

llvm-svn: 374240
2019-10-09 21:52:15 +00:00
Matt Arsenault 190a17bbd1 AMDGPU: Fix i16 arithmetic pattern redundancy
There were 2 problems here. First, these patterns were duplicated to
handle the inverted shift operands instead of using the commuted
PatFrags.

Second, the zext folding patterns don't apply to subtargets that do not
zero the high bits. They should be skipped instead of inserting the
extension; the zeroing-high code would be emitted when necessary
anyway. This was also emitting unnecessary zexts in cases where the
high bits were undefined.

llvm-svn: 374092
2019-10-08 17:36:38 +00:00
Tom Stellard 3a8d80944b AMDGPU: Add offsets to MMO when lowering buffer intrinsics
Summary:
Without offsets on the MachineMemOperands (MMOs),
MachineInstr::mayAlias() will return true for all reads and writes to the
same resource descriptor.  This leads to O(N^2) complexity in the MachineScheduler
when analyzing dependencies of buffer loads and stores. It also prevents
the SILoadStoreOptimizer from merging more instructions.

This patch reduces the compile time of one pathological compute shader
from 12 seconds to 1 second.
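
Roughly, the mechanism looks like the following C++ sketch; the helper and its
arguments are illustrative, not code from the patch:

    #include "llvm/CodeGen/MachineFunction.h"
    #include "llvm/CodeGen/MachineMemOperand.h"
    using namespace llvm;

    // Clone a memory operand with an explicit offset so MachineInstr::mayAlias()
    // can cheaply prove that two accesses to the same descriptor do not overlap.
    MachineMemOperand *mmoWithOffsetSketch(MachineFunction &MF,
                                           MachineMemOperand *BaseMMO,
                                           int64_t Offset, uint64_t Size) {
      return MF.getMachineMemOperand(BaseMMO, Offset, Size);
    }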

Reviewers: arsenm, nhaehnle

Reviewed By: arsenm

Subscribers: kzhuravl, jvesely, wdng, yaxunl, dstuttard, tpr, t-tye, hiraditya, jfb, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D65097

llvm-svn: 374087
2019-10-08 17:04:51 +00:00
Amaury Sechet 7df5b2f79f (Re)generate various tests. NFC
llvm-svn: 374074
2019-10-08 16:16:26 +00:00
Nicolai Haehnle df6e67697b AMDGPU: Propagate undef flag during pre-RA exec mask optimizations
Summary: Issue: https://github.com/GPUOpen-Drivers/llpc/issues/204

Reviewers: arsenm, rampitec

Subscribers: kzhuravl, jvesely, wdng, yaxunl, dstuttard, tpr, t-tye, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D68184

llvm-svn: 374041
2019-10-08 12:46:32 +00:00
Nicolai Haehnle 7febdb7f27 MachineSSAUpdater: insert IMPLICIT_DEF at top of basic block
Summary:
When getValueInMiddleOfBlock happens to be called for a basic block
that has no incoming value at all, an IMPLICIT_DEF is inserted in that
block via GetValueAtEndOfBlockInternal. This IMPLICIT_DEF must be at
the top of its basic block or it will likely not reach the use that
the caller intends to insert.
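
A rough usage sketch of the affected path (the surrounding registers and blocks
are assumed context, not code from this change):

    #include "llvm/CodeGen/MachineBasicBlock.h"
    #include "llvm/CodeGen/MachineFunction.h"
    #include "llvm/CodeGen/MachineSSAUpdater.h"
    using namespace llvm;

    // If QueryBB has no available incoming value, GetValueInMiddleOfBlock falls
    // back to inserting an IMPLICIT_DEF in QueryBB; after this change that
    // IMPLICIT_DEF is placed at the top of the block so it reaches later uses.
    Register valueInBlockSketch(MachineFunction &MF, Register OrigReg,
                                MachineBasicBlock &DefBB, Register DefReg,
                                MachineBasicBlock &QueryBB) {
      MachineSSAUpdater Updater(MF);
      Updater.Initialize(OrigReg);
      Updater.AddAvailableValue(&DefBB, DefReg);
      return Updater.GetValueInMiddleOfBlock(&QueryBB);
    }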

Issue: https://github.com/GPUOpen-Drivers/llpc/issues/204

Reviewers: arsenm, rampitec

Subscribers: jvesely, wdng, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D68183

llvm-svn: 374040
2019-10-08 12:46:20 +00:00
Matt Arsenault c8a6df7130 AMDGPU/GlobalISel: Clamp G_SITOFP/G_UITOFP sources
llvm-svn: 373989
2019-10-07 23:33:08 +00:00
Matt Arsenault 538b73b797 AMDGPU/GlobalISel: Handle more G_INSERT cases
Start manually writing a table to get the subreg index. TableGen
should probably generate this, but I'm not sure what it looks like in
the arbitrary case where subregisters are allowed to not fully cover
the super-registers.

llvm-svn: 373947
2019-10-07 19:16:26 +00:00
Matt Arsenault 4bcdcad91b GlobalISel: Partially implement lower for G_INSERT
llvm-svn: 373946
2019-10-07 19:13:27 +00:00
Matt Arsenault 1237aa2996 AMDGPU/GlobalISel: Fix selection of 16-bit shifts
llvm-svn: 373945
2019-10-07 19:10:44 +00:00
Matt Arsenault 09ec6918bc AMDGPU/GlobalISel: Select VALU G_AMDGPU_FFBH_U32
llvm-svn: 373944
2019-10-07 19:10:43 +00:00
Matt Arsenault 0b2ea91d6d AMDGPU/GlobalISel: Use S_MOV_B64 for inline constants
This hides some defects in SIFoldOperands when the immediates are
split.

llvm-svn: 373943
2019-10-07 19:07:19 +00:00
Matt Arsenault 578fa2819f AMDGPU/GlobalISel: Widen 16-bit G_MERGE_VALUEs sources
Continue making a mess of merge/unmerge legality.

llvm-svn: 373942
2019-10-07 19:05:58 +00:00
Matt Arsenault b4cbf9862c AMDGPU/GlobalISel: Select more G_INSERT cases
At minimum handle the s64 insert type, which is emitted in real cases
during legalization.

We really need TableGen to emit something like the inverse of
composeSubRegIndices to determine the subreg index to use.

llvm-svn: 373938
2019-10-07 18:43:31 +00:00
Matt Arsenault 27269054d2 GlobalISel: Add target pre-isel instructions
Allows targets to introduce regbankselectable
pseudo-instructions. Currently the closest feature to this is an
intrinsic. However, this requires creating a public intrinsic
declaration. This litters the public intrinsic namespace with
operations we don't necessarily want to expose to IR producers, and
would rather leave as private to the backend.

Use a new instruction bit. A previous attempt tried to keep using enum
value ranges, but it turned into a mess.

llvm-svn: 373937
2019-10-07 18:43:29 +00:00
Jay Foad 301decd93d [AMDGPU] Fix test checks
The GFX10-DENORM-STRICT checks were only passing by accident. Fix them
to make the test more robust in the face of scheduling or register
allocation changes.

llvm-svn: 373893
2019-10-07 10:57:41 +00:00
Matt Arsenault c0ec72d4f8 AMDGPU/GlobalISel: RegBankSelect DS GWS intrinsics
llvm-svn: 373840
2019-10-06 01:37:38 +00:00
Matt Arsenault bcd6b1d209 AMDGPU/GlobalISel: Lower G_ATOMIC_CMPXCHG_WITH_SUCCESS
llvm-svn: 373839
2019-10-06 01:37:37 +00:00
Matt Arsenault a5b9c75674 GlobalISel: Partially implement lower for G_EXTRACT
Turn into shift and truncate. Doesn't yet handle pointers.
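
Conceptually the lowering is the following MachineIRBuilder sketch (types and
context are illustrative assumptions, not the actual lowering code):

    #include "llvm/CodeGen/GlobalISel/MachineIRBuilder.h"
    using namespace llvm;

    // Lower G_EXTRACT of a scalar at bit offset Off from Src into
    // (G_TRUNC (G_LSHR Src, Off)): shift the field down, then truncate.
    void lowerExtractSketch(MachineIRBuilder &B, Register Dst, Register Src,
                            uint64_t Off, LLT SrcTy) {
      auto ShAmt = B.buildConstant(SrcTy, Off);
      auto Shifted = B.buildLShr(SrcTy, Src, ShAmt);
      B.buildTrunc(Dst, Shifted);
    }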

llvm-svn: 373838
2019-10-06 01:37:35 +00:00
Matt Arsenault 69c65a8609 AMDGPU/GlobalISel: Fix RegBankSelect for sendmsg intrinsics
This wasn't updated for the immarg handling change.

llvm-svn: 373837
2019-10-06 01:37:34 +00:00
Matt Arsenault d7cad4fb41 AMDGPU/GlobalISel: Fix using wrong addrspace for aperture
This was always passing the destination flat address space, when it
should be picking between the two valid source options.

llvm-svn: 373716
2019-10-04 08:35:38 +00:00
Matt Arsenault 412e0bf8f3 AMDGPU/GlobalISel: Select G_PTRTOINT
llvm-svn: 373715
2019-10-04 08:35:37 +00:00
Matt Arsenault be9521acaa AMDGPU/GlobalISel: Support wave32 waterfall loops
llvm-svn: 373714
2019-10-04 08:35:35 +00:00
Matt Arsenault ed77b27441 AMDGPU/GlobalISel: Handle RegBankSelect of G_INSERT_VECTOR_ELT
llvm-svn: 373639
2019-10-03 17:59:03 +00:00
Matt Arsenault 233ff982c7 AMDGPU/GlobalISel: Split 64-bit vector extracts during RegBankSelect
Register indexing 64-bit elements is possible on the SALU, but not the
VALU. Handle splitting this into two 32-bit indexes. Extend waterfall
loop handling to allow moving a range of instructions.
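
As a loose illustration of the splitting (not the actual RegBankSelect code;
names are assumptions and the waterfall machinery is omitted):

    #include "llvm/CodeGen/GlobalISel/MachineIRBuilder.h"
    using namespace llvm;

    // Extract a 64-bit element as two 32-bit extracts with a scaled index,
    // since the VALU can only register-index 32-bit elements.
    void split64BitExtractSketch(MachineIRBuilder &B, Register Dst,
                                 Register Vec32, Register Idx) {
      const LLT S32 = LLT::scalar(32);
      auto IdxLo = B.buildMul(S32, Idx, B.buildConstant(S32, 2));
      auto IdxHi = B.buildAdd(S32, IdxLo, B.buildConstant(S32, 1));
      auto Lo = B.buildExtractVectorElement(S32, Vec32, IdxLo);
      auto Hi = B.buildExtractVectorElement(S32, Vec32, IdxHi);
      // Rebuild the 64-bit value from the two 32-bit halves.
      B.buildMerge(Dst, {Lo->getOperand(0).getReg(), Hi->getOperand(0).getReg()});
    }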

llvm-svn: 373638
2019-10-03 17:55:27 +00:00
Matt Arsenault 56271fe180 AMDGPU/GlobalISel: Allow VGPR to index SGPR register
We can still do a waterfall loop over the index if using a VGPR to
index an SGPR. The result will still be a VGPR, but we can avoid the
wide copy of the source register to a VGPR.

llvm-svn: 373637
2019-10-03 17:50:32 +00:00
Matt Arsenault 9256183994 AMDGPU/GlobalISel: Add some more tests for G_INSERT legalization
llvm-svn: 373636
2019-10-03 17:50:31 +00:00
Matt Arsenault 3d23e58dbe AMDGPU/GlobalISel: Fix mutationIsSane assert v8s8 and
This would try to do FewerElements to v9s8

llvm-svn: 373635
2019-10-03 17:50:29 +00:00
Matt Arsenault 1c135a39aa AMDGPU/GlobalISel: Expand G_BITCAST legality
llvm-svn: 373567
2019-10-03 05:46:08 +00:00
Stanislav Mekhanoshin 1384c3a5b8 [AMDGPU] Fix illegal agpr use by VALU
When SIFixSGPRCopies attempts to fix an illegal copy from a vector to a
scalar register, it calls moveToVALU(). A copy from an agpr to an sgpr
becomes a copy from agpr to agpr, which may result in an illegal
register class at a use of this copy.

The solution is to always copy it into a vgpr. This may result in a
subsequent copy into an agpr if that is what is really needed; however,
this should not happen too often and will likely be folded later.

The opposite situation may not happen because an sgpr is always
illegal where an agpr is legal, so such user instructions should not
exist.

Differential Revision: https://reviews.llvm.org/D68358

llvm-svn: 373544
2019-10-02 23:23:46 +00:00
Piotr Sobczak 265e94e657 [AMDGPU] Extend buffer intrinsics with swizzling
Summary:
Extend the cachepolicy operand in the new VMEM buffer intrinsics
to supply information on whether the buffer data is swizzled.
Also, propagate this information to MIR.

Intrinsics updated:
int_amdgcn_raw_buffer_load
int_amdgcn_raw_buffer_load_format
int_amdgcn_raw_buffer_store
int_amdgcn_raw_buffer_store_format
int_amdgcn_raw_tbuffer_load
int_amdgcn_raw_tbuffer_store
int_amdgcn_struct_buffer_load
int_amdgcn_struct_buffer_load_format
int_amdgcn_struct_buffer_store
int_amdgcn_struct_buffer_store_format
int_amdgcn_struct_tbuffer_load
int_amdgcn_struct_tbuffer_store

Furthermore, disable merging of VMEM buffer instructions
in the SI Load/Store optimizer if the "swizzled" bit on the instruction
is set.

The default value of the bit is 0, meaning that the data in the buffer
is linear and buffer instructions can be merged.

There is no difference in the generated code with this commit.
However, in the future it will be expected that front-ends
use buffer intrinsics with the correct "swizzled" bit set.

Reviewers: arsenm, nhaehnle, tpr

Reviewed By: nhaehnle

Subscribers: arsenm, kzhuravl, jvesely, wdng, nhaehnle, yaxunl, dstuttard, tpr, t-tye, arphaman, jfb, Petar.Avramovic, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D68200

llvm-svn: 373491
2019-10-02 17:22:36 +00:00
Matt Arsenault cdfe5efe9b AMDGPU/GlobalISel: Assume VGPR for G_FRAME_INDEX
In principle this should behave like any other constant. However,
eliminateFrameIndex currently assumes a VALU use and uses a vector
shift. Work around this by selecting to a VGPR for now, until
eliminateFrameIndex is fixed.

llvm-svn: 373415
2019-10-02 01:02:24 +00:00
Matt Arsenault bfce0c2664 AMDGPU/GlobalISel: Private loads always use VGPRs
llvm-svn: 373414
2019-10-02 01:02:21 +00:00
Matt Arsenault 05aa8a733e AMDGPU/GlobalISel: Legalize 1024-bit G_BUILD_VECTOR
This will be needed to support AGPR operations.

llvm-svn: 373413
2019-10-02 01:02:18 +00:00
Matt Arsenault 3a657afb3a AMDGPU/GlobalISel: Fix RegBankSelect for 1024-bit values
llvm-svn: 373412
2019-10-02 01:02:14 +00:00
Stanislav Mekhanoshin 075bc48a7f [AMDGPU] separate accounting for agprs
Account and report agprs separately on gfx908. Other targets
do not change the reporting.

Differential Revision: https://reviews.llvm.org/D68307

llvm-svn: 373411
2019-10-02 00:26:58 +00:00
Changpeng Fang e4ee28d14c AMDGPU: Fix an out of date assert in addressing FrameIndex
Reviewers: arsenm

Differential Revision: https://reviews.llvm.org/D67574

llvm-svn: 373404
2019-10-01 23:07:14 +00:00
Matt Arsenault 9dba603748 AMDGPU/GlobalISel: Increase max legal size to 1024
There are 1024-bit register classes defined for AGPRs. Additionally,
OpenCL defines vectors up to 16 x i64, and this helps those tests
legalize.

llvm-svn: 373350
2019-10-01 16:35:06 +00:00
Dmitri Gribenko 827a7fab78 Revert "GlobalISel: Handle llvm.read_register"
This reverts commit r373294. It broke Clang's
CodeGen/arm64-microsoft-status-reg.cpp:
http://lab.llvm.org:8011/builders/clang-x86_64-debian-fast/builds/18483

llvm-svn: 373310
2019-10-01 08:24:01 +00:00
Matt Arsenault fdea5e02ce AMDGPU/GlobalISel: Select s1 src G_SITOFP/G_UITOFP
llvm-svn: 373298
2019-10-01 02:23:20 +00:00
Matt Arsenault 59b91aa93e AMDGPU/GlobalISel: Add support for init.exec intrinsics
The existing wave32 behavior seems broken and incomplete, but this
reproduces it.

llvm-svn: 373296
2019-10-01 02:07:25 +00:00
Matt Arsenault bdcc6d3d26 GlobalISel: Handle llvm.read_register
SelectionDAG has a bunch of machinery to defer this to selection time
for some reason. Just directly emit a copy during IRTranslator. The
x86 usage does somewhat questionably check hasFP, which could depend
on the whole function being at minimum translated.
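
For illustration, a hedged sketch of the direct-copy approach, assuming the
named physical register has already been looked up (names are illustrative):

    #include "llvm/CodeGen/GlobalISel/MachineIRBuilder.h"
    #include "llvm/CodeGen/MachineRegisterInfo.h"
    using namespace llvm;

    // Translate llvm.read_register by emitting a plain COPY from the physical
    // register into a fresh generic vreg, with no deferral to selection time.
    Register readRegisterSketch(MachineIRBuilder &B, MachineRegisterInfo &MRI,
                                Register PhysReg, LLT Ty) {
      Register Dst = MRI.createGenericVirtualRegister(Ty);
      B.buildCopy(Dst, PhysReg);
      return Dst;
    }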

This does lose the convergent bit if the callsite had it, which may be
a problem. We also lose that in general for intrinsics, which may also
be a problem.

llvm-svn: 373294
2019-10-01 02:07:16 +00:00
Matt Arsenault 8f6bdb7668 AMDGPU/GlobalISel: Avoid creating shift of 0 in arg lowering
This is sort of papering over the fact that we don't run a combiner
anywhere, but avoiding creating 2 instructions in the first place is
easy.

llvm-svn: 373293
2019-10-01 01:44:46 +00:00
Matt Arsenault 54167ea316 AMDGPU/GlobalISel: Select G_UADDO/G_USUBO
llvm-svn: 373288
2019-10-01 01:23:13 +00:00
Matt Arsenault ed85b0cee6 GlobalISel: Implement widenScalar for G_SITOFP/G_UITOFP sources
Legalize 16-bit G_SITOFP/G_UITOFP for AMDGPU.

llvm-svn: 373287
2019-10-01 01:06:48 +00:00
Matt Arsenault 77ac400117 AMDGPU/GlobalISel: Legalize G_GLOBAL_VALUE
Handle other cases besides LDS. Mostly a straight port of the existing
handling, without the intermediate custom nodes.

llvm-svn: 373286
2019-10-01 01:06:43 +00:00
Alexander Timofeev 565b1d3d46 [AMDGPU] SIFoldOperands should not fold a register across the EXEC definition
Reviewers: rampitec

      Differential Revision: https://reviews.llvm.org/D67662

llvm-svn: 373221
2019-09-30 15:31:17 +00:00
Roger Ferrer Ibanez 5a2a14db0b [TargetLowering] Simplify expansion of S{ADD,SUB}O
ISD::SADDO uses the suggested sequence described in section 2.4 of
the RISC-V Spec v2.2. ISD::SSUBO uses the dual approach, but checks
against a (non-zero) positive right-hand operand.
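
As a plain C++ sketch of the checks being emitted (wrapping integer arithmetic
stands in for the ISD node sequence; this is an illustration, not the lowering
code itself):

    #include <cstdint>

    // Signed add overflow: the sign of rhs must agree with whether the sum
    // dropped below lhs.
    bool saddoSketch(int32_t lhs, int32_t rhs, int32_t &sum) {
      sum = (int32_t)((uint32_t)lhs + (uint32_t)rhs); // wrapping add
      return (rhs < 0) != (sum < lhs);
    }

    // Dual form for signed subtraction: check against a (non-zero) positive rhs.
    bool ssuboSketch(int32_t lhs, int32_t rhs, int32_t &diff) {
      diff = (int32_t)((uint32_t)lhs - (uint32_t)rhs); // wrapping sub
      return (rhs > 0) != (diff < lhs);
    }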

Differential Revision: https://reviews.llvm.org/D47927

llvm-svn: 373187
2019-09-30 07:58:50 +00:00
Matt Arsenault 317d991fa5 AMDGPU/GlobalISel: Fix select for v2s16 and/or/xor
llvm-svn: 373180
2019-09-30 06:31:30 +00:00
Stanislav Mekhanoshin 374c04e257 [AMDGPU] Improve fma.f64 test. NFC.
llvm-svn: 372908
2019-09-25 18:50:34 +00:00
Stanislav Mekhanoshin d3b2b97195 [AMDGPU] gfx10 v_fmac_f16 operand folding
Fold immediates into v_fmac_f16.

Differential Revision: https://reviews.llvm.org/D68037

llvm-svn: 372906
2019-09-25 18:40:20 +00:00
Matt Arsenault eb6eb694e4 AMDGPU/GlobalISel: Allow selection of scalar min/max
I believe all of the uniform/divergent pattern predicates are
redundant and can be removed. The uniformity bit already influences
the register class, and nothing has broken when I've removed this and
others.

llvm-svn: 372450
2019-09-21 02:37:33 +00:00
Stanislav Mekhanoshin af77ca7e6e Remove assert from MachineLoop::getLoopPredecessor()
According to the documentation, the method returns the predecessor
if the given loop's header has exactly one unique predecessor
outside the loop, and otherwise returns null.

In reality it asserts if there is no predecessor outside of
the loop.
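
For reference, a conceptual C++ sketch of the documented contract (an
illustration only, not the actual implementation):

    #include "llvm/CodeGen/MachineBasicBlock.h"
    #include "llvm/CodeGen/MachineLoopInfo.h"
    using namespace llvm;

    // Return the unique predecessor of the loop header outside the loop, or
    // nullptr if there is none or more than one (e.g. a truly dead loop).
    MachineBasicBlock *loopPredecessorSketch(const MachineLoop &L) {
      MachineBasicBlock *Out = nullptr;
      for (MachineBasicBlock *Pred : L.getHeader()->predecessors()) {
        if (L.contains(Pred))
          continue;            // ignore the latch / in-loop predecessors
        if (Out && Out != Pred)
          return nullptr;      // more than one outside predecessor
        Out = Pred;
      }
      return Out;              // may be nullptr instead of asserting
    }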

The testcase has a loop where predecessors outside of the
loop were not identified, because analyzeBranch() was unable to
process the mask branch and returned true. It is also not
correct to assert for truly dead loops.

Differential Revision: https://reviews.llvm.org/D67634

llvm-svn: 372405
2019-09-20 15:26:10 +00:00
Nico Weber 03475adcf7 Revert r372366 "Use getTargetConstant for BLENDI, and add a test to catch it."
This reverts commit 52621307bc.

Tests have been failing all night with

    [0/2] ACTION //llvm/test:check-llvm(//llvm/utils/gn/build/toolchain:unix)
    -- Testing: 33647 tests, 64 threads --
    Testing: 0 .. 10..
    UNRESOLVED: LLVM :: CodeGen/AMDGPU/GlobalISel/isel-blendi-gettargetconstant.ll (6943 of 33647)
    ******************** TEST 'LLVM :: CodeGen/AMDGPU/GlobalISel/isel-blendi-gettargetconstant.ll' FAILED ********************
    Test has no run line!
    ********************

Since there were other concerns on https://reviews.llvm.org/D67785,
I'm just reverting for now.

llvm-svn: 372383
2019-09-20 12:05:29 +00:00
Sterling Augustine 52621307bc Use getTargetConstant for BLENDI, and add a test to catch it.
Summary: This fixes a crasher introduced by r372338.

Reviewers: echristo, arsenm

Subscribers: jvesely, wdng, nhaehnle, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D67785

Tighten up the test case.

llvm-svn: 372366
2019-09-20 02:29:16 +00:00
Matt Arsenault dd74f4839b MachineScheduler: Fix missing dependency with multiple subreg defs
If an instruction had multiple subregister defs, and one of them was
undef, this would improperly conclude all other lanes are
killed. There could still be other defs of those read-undef lanes in
other operands. This would improperly remove register uses from
CurrentVRegUses, so the visitation of later operands would not find
the necessary register dependency. This also meant that whether it
failed depended on how the different subregister def operands were
ordered.

On an undef subregister def, scan the instruction for other
subregister defs and avoid killing those.

This possibly should be deferring removing anything from
CurrentVRegUses until the entire instruction has been processed
instead.

llvm-svn: 372362
2019-09-20 00:09:15 +00:00
Alexander Timofeev e2f9bc3b11 [AMDGPU] Unnecessary -amdgpu-scalarize-global-loads=false flag removed from min/max lit tests.
Reviewers: arsenm

Differential Revision: https://reviews.llvm.org/D67712

llvm-svn: 372340
2019-09-19 16:44:38 +00:00
Matt Arsenault 3ecab8e455 Reapply r372285 "GlobalISel: Don't materialize immarg arguments to intrinsics"
This reverts r372314, reapplying r372285 and the commits which depend
on it (r372286-r372293, and r372296-r372297)

This was missing one switch to getTargetConstant in an untested case.

llvm-svn: 372338
2019-09-19 16:26:14 +00:00
Hans Wennborg 13bdae8541 Revert r372285 "GlobalISel: Don't materialize immarg arguments to intrinsics"
This broke the Chromium build, causing it to fail with e.g.

  fatal error: error in backend: Cannot select: t362: v4i32 = X86ISD::VSHLI t392, Constant:i8<15>

See llvm-commits thread of r372285 for details.

This also reverts r372286, r372287, r372288, r372289, r372290, r372291,
r372292, r372293, r372296, and r372297, which seemed to depend on the
main commit.

> Encode them directly as an imm argument to G_INTRINSIC*.
>
> Since intrinsics can now define which parameters are required to be
> immediates, avoid using registers for them. Intrinsics could
> potentially want a constant that isn't a legal register type. Also,
> since G_CONSTANT is subject to CSE and legalization, transforms could
> potentially obscure the value (and create extra work for the
> selector). The register bank of a G_CONSTANT is also meaningful, so
> this could throw off future folding and legalization logic for AMDGPU.
>
> This will be much more convenient to work with than needing to call
> getConstantVRegVal and checking if it may have failed for every
> constant intrinsic parameter. AMDGPU has quite a lot of intrinsics with
> immarg operands, many of which need inspection during lowering. Having
> to find the value in a register is going to add a lot of boilerplate
> and waste compile time.
>
> SelectionDAG has always provided TargetConstant for constants which
> should not be legalized or materialized in a register. The distinction
> between Constant and TargetConstant was somewhat fuzzy, and there was
> no automatic way to force usage of TargetConstant for certain
> intrinsic parameters. They were both ultimately ConstantSDNode, and it
> was inconsistently used. It was quite easy to mis-select an
> instruction requiring an immediate. For SelectionDAG, start emitting
> TargetConstant for these arguments, and using timm to match them.
>
> Most of the work here is to cleanup target handling of constants. Some
> targets process intrinsics through intermediate custom nodes, which
> need to preserve TargetConstant usage to match the intrinsic
> expectation. Pattern inputs now need to distinguish whether a constant
> is merely compatible with an operand or whether it is mandatory.
>
> The GlobalISelEmitter needs to treat timm as a special case of a leaf
> node, similar to MachineBasicBlock operands. This should also enable
> handling of patterns for some G_* instructions with immediates, like
> G_FENCE or G_EXTRACT.
>
> This does include a workaround for a crash in GlobalISelEmitter when
> ARM tries to use "imm" in an output with a "timm" pattern source.

llvm-svn: 372314
2019-09-19 12:33:07 +00:00
Matt Arsenault bffbeecb44 AMDGPU/GlobalISel: RegBankSelect llvm.amdgcn.ds.swizzle
llvm-svn: 372297
2019-09-19 04:11:17 +00:00
Matt Arsenault 494243597b AMDGPU/GlobalISel: Select llvm.amdgcn.raw.buffer.store.format
This needs special handling due to some subtargets that have a
nonstandard register layout for f16 vectors.

Also reject some illegal types on other targets.

llvm-svn: 372293
2019-09-19 02:35:08 +00:00
Matt Arsenault 67f1f6ff8c AMDGPU/GlobalISel: Select llvm.amdgcn.raw.buffer.store
llvm-svn: 372292
2019-09-19 02:30:27 +00:00
Matt Arsenault 838ff36553 AMDGPU/GlobalISel: RegBankSelect struct buffer load/store
llvm-svn: 372291
2019-09-19 02:26:53 +00:00
Matt Arsenault a62ef58346 AMDGPU/GlobalISel: RegBankSelect llvm.amdgcn.raw.buffer.{load|store}
llvm-svn: 372290
2019-09-19 02:25:09 +00:00
Matt Arsenault a30d022db6 AMDGPU/GlobalISel: Attempt to RegBankSelect image intrinsics
Images should always have 2 consecutive, mandatory SGPR arguments.

llvm-svn: 372289
2019-09-19 02:23:06 +00:00
Matt Arsenault 01213407c4 Fix typo
llvm-svn: 372288
2019-09-19 02:15:29 +00:00
Matt Arsenault c189f023ac MachineScheduler: Fix assert from not checking subregs
The assert would fail if there was a dead def of a subregister when
there was a previous use of a different subregister.

llvm-svn: 372287
2019-09-19 02:14:12 +00:00
Matt Arsenault 22e2c09515 AMDGPU/GlobalISel: Fix RegBankSelect G_SMULH/G_UMULH pre-gfx9
The scalar versions were only introduced in gfx9.

llvm-svn: 372286
2019-09-19 01:42:34 +00:00
Matt Arsenault d8399d12cd GlobalISel: Don't materialize immarg arguments to intrinsics
Encode them directly as an imm argument to G_INTRINSIC*.

Since intrinsics can now define which parameters are required to be
immediates, avoid using registers for them. Intrinsics could
potentially want a constant that isn't a legal register type. Also,
since G_CONSTANT is subject to CSE and legalization, transforms could
potentially obscure the value (and create extra work for the
selector). The register bank of a G_CONSTANT is also meaningful, so
this could throw off future folding and legalization logic for AMDGPU.

This will be much more convenient to work with than needing to call
getConstantVRegVal and checking if it may have failed for every
constant intrinsic parameter. AMDGPU has quite a lot of intrinsics with
immarg operands, many of which need inspection during lowering. Having
to find the value in a register is going to add a lot of boilerplate
and waste compile time.
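
As a hypothetical C++ sketch of what lowering code gets to do instead (the
operand index and helper name are illustrative assumptions):

    #include "llvm/CodeGen/MachineInstr.h"
    #include <cassert>
    using namespace llvm;

    // With immarg operands encoded directly on G_INTRINSIC*, a backend can read
    // them without chasing a vreg back to its defining G_CONSTANT.
    int64_t readImmArgSketch(const MachineInstr &MI, unsigned OpIdx) {
      const MachineOperand &MO = MI.getOperand(OpIdx);
      assert(MO.isImm() && "immarg operands are plain immediates now");
      return MO.getImm();
    }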

SelectionDAG has always provided TargetConstant for constants which
should not be legalized or materialized in a register. The distinction
between Constant and TargetConstant was somewhat fuzzy, and there was
no automatic way to force usage of TargetConstant for certain
intrinsic parameters. They were both ultimately ConstantSDNode, and it
was inconsistently used. It was quite easy to mis-select an
instruction requiring an immediate. For SelectionDAG, start emitting
TargetConstant for these arguments, and using timm to match them.

Most of the work here is to cleanup target handling of constants. Some
targets process intrinsics through intermediate custom nodes, which
need to preserve TargetConstant usage to match the intrinsic
expectation. Pattern inputs now need to distinguish whether a constant
is merely compatible with an operand or whether it is mandatory.

The GlobalISelEmitter needs to treat timm as a special case of a leaf
node, similar to MachineBasicBlock operands. This should also enable
handling of patterns for some G_* instructions with immediates, like
G_FENCE or G_EXTRACT.

This does include a workaround for a crash in GlobalISelEmitter when
ARM tries to use "imm" in an output with a "timm" pattern source.

llvm-svn: 372285
2019-09-19 01:33:14 +00:00
Tim Renouf 1786117111 [AMDGPU] Allow FP inline constant in v_madak_f16 and v_fmaak_f16
Differential Revision: https://reviews.llvm.org/D67680

Change-Id: Ic38f47cb2079c2c1070a441b5943854844d80a7c
llvm-svn: 372208
2019-09-18 09:32:06 +00:00