Commit Graph

126531 Commits

Author SHA1 Message Date
Martin Storsjo 5d26959039 [Object] Implement relocation resolver for COFF ARM/ARM64
Adding testcases for this via llvm-dwarfdump.

Also add testcases for the existing resolver support for X86.

Differential Revision: https://reviews.llvm.org/D67340

llvm-svn: 371515
2019-09-10 12:31:40 +00:00
Guillaume Chatelet 3729b17cff [Alignment][NFC] Use llvm::Align for TargetLowering::getPrefLoopAlignment
Summary:
This patch is part of a series to introduce an Alignment type.
See this thread for context: http://lists.llvm.org/pipermail/llvm-dev/2019-July/133851.html
See this patch for the introduction of the type: https://reviews.llvm.org/D64790

Reviewers: courbet

Reviewed By: courbet

Subscribers: wuzish, arsenm, nemanjai, jvesely, nhaehnle, hiraditya, kbarton, MaskRay, jsji, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D67386

llvm-svn: 371511
2019-09-10 12:00:43 +00:00
Alexander Timofeev c2d292f839 [AMDGPU]: PHI Elimination hooks added for custom COPY insertion.
Reviewers: rampitec, vpykhtin

Differential Revision: https://reviews.llvm.org/D67101

llvm-svn: 371508
2019-09-10 10:58:57 +00:00
Dmitri Gribenko 2bf8d77453 Revert "Reland "r364412 [ExpandMemCmp][MergeICmps] Move passes out of CodeGen into opt pipeline.""
This reverts commit r371502; it broke tests
(clang/test/CodeGenCXX/auto-var-init.cpp).

llvm-svn: 371507
2019-09-10 10:39:09 +00:00
Clement Courbet 612c260ec3 Reland "r364412 [ExpandMemCmp][MergeICmps] Move passes out of CodeGen into opt pipeline."
With a fix for sanitizer breakage (see explanation in D60318).

llvm-svn: 371502
2019-09-10 09:18:00 +00:00
Fangrui Song 1da4f47195 [yaml2obj] Set p_align to the maximum sh_addralign of contained sections
The address difference between two sections in a PT_LOAD is a constant.
Consider a hypothetical case (pagesize can be very small, say, 4).

```
.text     sh_addralign=4
.text.hot sh_addralign=16
```

If we set p_align to 4, the PT_LOAD will be loaded at an address which
is a multiple of 4. The address of .text.hot is guaranteed to be a
multiple of 4, but not necessarily a multiple of 16.
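A rough sketch of that rule (illustrative only; the helper name is made up and this is not the actual yaml2obj code):

```
#include <algorithm>
#include <cstdint>
#include <vector>

// A PT_LOAD must be aligned at least as strictly as the most strictly
// aligned section it contains, so p_align is the maximum of the contained
// sections' sh_addralign values (with a floor of 1).
uint64_t computeSegmentAlign(const std::vector<uint64_t> &SectionAligns) {
  uint64_t PAlign = 1;
  for (uint64_t A : SectionAligns)
    PAlign = std::max<uint64_t>(PAlign, A);
  return PAlign;
}

// For the example above: computeSegmentAlign({4, 16}) == 16, so .text.hot
// keeps its 16-byte alignment wherever the segment is loaded.
```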

This patch deletes the constraint

  if (SHeader->sh_offset == PHeader.p_offset)

Reviewed By: grimar, jhenderson

Differential Revision: https://reviews.llvm.org/D67260

llvm-svn: 371501
2019-09-10 09:16:34 +00:00
Guillaume Chatelet b6722af068 [Alignment] Use Align for TargetLowering::MinStackArgumentAlignment
Summary:
This patch is part of a series to introduce an Alignment type.
See this thread for context: http://lists.llvm.org/pipermail/llvm-dev/2019-July/133851.html
See this patch for the introduction of the type: https://reviews.llvm.org/D64790

Reviewers: courbet

Subscribers: sdardis, nemanjai, hiraditya, kbarton, jrtc27, MaskRay, atanasyan, jsji, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D67288

llvm-svn: 371498
2019-09-10 09:01:18 +00:00
Craig Topper e8b432fa0e [LegalizeTypes] Teach SoftenFloatOp_SELECT_CC to handle operand 2 or 3 being softened.
This can only happen on X86 when fp128 is a legal type, but we
go through softening to generate libcalls. This causes fp128 to
be softened to fp128 instead of an integer type. This can be
removed if D67128 lands.

llvm-svn: 371493
2019-09-10 07:56:02 +00:00
Petr Hosek 7d1757aba8 Revert "clang-misexpect: Profile Guided Validation of Performance Annotations in LLVM"
This reverts commit r371484: this broke sanitizer-x86_64-linux-fast bot.

llvm-svn: 371488
2019-09-10 06:25:13 +00:00
Craig Topper 0e533ca4bb [X86] Add broadcast load unfolding support for VCMPPS/PD.
llvm-svn: 371487
2019-09-10 05:49:53 +00:00
Petr Hosek a10802fd73 clang-misexpect: Profile Guided Validation of Performance Annotations in LLVM
This patch contains the basic functionality for reporting potentially
incorrect usage of __builtin_expect() by comparing the developer's
annotation against a collected PGO profile. A more detailed proposal and
discussion appears on the CFE-dev mailing list
(http://lists.llvm.org/pipermail/cfe-dev/2019-July/062971.html) and a
prototype of the initial frontend changes appears in D65300.

We revised the work in D65300 by moving the misexpect check into the
LLVM backend, and adding support for IR and sampling based profiles, in
addition to frontend instrumentation.

We add new misexpect metadata tags to those instructions directly
influenced by the llvm.expect intrinsic (branch, switch, and select)
when lowering the intrinsics. The misexpect metadata contains
information about the expected target of the intrinsic so that we can
check against the correct PGO counter when emitting diagnostics, and the
compiler's values for the LikelyBranchWeight and UnlikelyBranchWeight.
We use these branch weight values to determine when to emit the
diagnostic to the user.
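The core comparison could be sketched roughly like this (illustrative only; the function name, default weights, and threshold logic are assumptions, not the actual MisExpect implementation):

```
#include <cstdint>
#include <vector>

// Given the PGO counters for each successor of an annotated
// branch/switch/select and the successor index that llvm.expect claims is
// likely, flag the annotation when the profile's hottest successor is a
// different one by at least the LikelyBranchWeight:UnlikelyBranchWeight ratio.
bool annotationLooksWrong(const std::vector<uint64_t> &Counts,
                          unsigned ExpectedIdx,
                          uint64_t LikelyWeight = 2000,
                          uint64_t UnlikelyWeight = 1) {
  if (Counts.size() < 2 || ExpectedIdx >= Counts.size())
    return false; // nothing meaningful to compare
  unsigned Hottest = 0;
  for (unsigned I = 1; I < Counts.size(); ++I)
    if (Counts[I] > Counts[Hottest])
      Hottest = I;
  if (Hottest == ExpectedIdx)
    return false; // the profile agrees with the annotation
  return Counts[Hottest] * UnlikelyWeight >= Counts[ExpectedIdx] * LikelyWeight;
}
```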

A future patch should address the comment at the top of
LowerExpectIntrinsic.cpp to hoist the LikelyBranchWeight and
UnlikelyBranchWeight values into a shared space that can be accessed
outside of the LowerExpectIntrinsic pass. Once that is done, the
misexpect metadata can be updated to be smaller.

In the long term, it is possible to reconstruct portions of the
misexpect metadata from the existing profile data. However, we have
avoided this to keep the code simple, and because some kind of metadata
tag will be required to identify which branch/switch/select instructions
are influenced by the use of llvm.expect.

Patch By: paulkirth
Differential Revision: https://reviews.llvm.org/D66324

llvm-svn: 371484
2019-09-10 03:11:39 +00:00
Matt Arsenault a91f017ae3 AMDGPU/GlobalISel: Fix insert point when lowering fminnum/fmaxnum
llvm-svn: 371471
2019-09-09 23:30:11 +00:00
Austin Kerbow 06c8cb03ca AMDGPU/GlobalISel: Rename MIRBuilder to B. NFC
Reviewers: arsenm

Reviewed By: arsenm

Subscribers: kzhuravl, jvesely, wdng, nhaehnle, yaxunl, rovka, dstuttard, tpr, t-tye, hiraditya, Petar.Avramovic, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D67374

llvm-svn: 371467
2019-09-09 23:06:13 +00:00
Reid Kleckner bf02399a85 [Windows] Replace TrapUnreachable with an int3 insertion pass
This is an alternative to D66980, which was reverted. Instead of
inserting a pseudo instruction that optionally expands to nothing, add a
pass that inserts int3 when appropriate after basic block layout.

Reviewers: hans

Differential Revision: https://reviews.llvm.org/D67201

llvm-svn: 371466
2019-09-09 23:04:25 +00:00
Aditya Nandakumar 5112b71126 [GlobalISel]: Fix a bug where we could dereference None
getConstantVRegVal returns None when dealing with constants > 64 bits.
Don't assume we always have a value in GISelKnownBits.
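The pattern being fixed is the usual optional-dereference hazard; a standalone sketch (using std::optional as a stand-in for llvm::Optional, with made-up names):

```
#include <cstdint>
#include <optional>

// Stand-in for getConstantVRegVal: returns nullopt when the constant does
// not fit in 64 bits.
std::optional<int64_t> tryGetConstant(bool FitsIn64Bits, int64_t Value) {
  if (!FitsIn64Bits)
    return std::nullopt;
  return Value;
}

int64_t useConstant(bool FitsIn64Bits, int64_t Value) {
  // The fix: check for "no value" before dereferencing instead of assuming
  // a constant is always available.
  if (std::optional<int64_t> C = tryGetConstant(FitsIn64Bits, Value))
    return *C;
  return 0; // conservative fallback when the value is unknown
}
```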

llvm-svn: 371465
2019-09-09 22:51:41 +00:00
Simon Pilgrim 7f37d9a714 Fix MSVC "not all control paths return a value" warning. NFCI.
llvm-svn: 371454
2019-09-09 21:30:11 +00:00
Philip Reames 7403569be7 [LoopVectorize] Leverage speculation safety to avoid masked.loads
If we're vectorizing a load in a predicated block, check to see if the load can be speculated rather than predicated.  This allows us to generate a normal vector load instead of a masked.load.

To do so, we must prove that all bytes accessed on any iteration of the original loop are dereferenceable, and that all loads (across all iterations) are properly aligned.  This is equivalent to proving that hoisting the load into the loop header in the original scalar loop is safe.
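A source-level illustration of the situation (assuming the vectorizer can prove the whole range of `a` is dereferenceable and suitably aligned; this is not taken from the patch's tests):

```
// The load of a[i] sits in a predicated block, but if every byte of a[0..n)
// is known dereferenceable and the loads are aligned, the load may be
// speculated, so the vectorizer can emit a plain vector load and select the
// values afterwards instead of emitting a masked.load.
int sumSelected(const int *a, const bool *pick, int n) {
  int s = 0;
  for (int i = 0; i < n; ++i)
    if (pick[i])
      s += a[i];
  return s;
}
```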

Note: There are a couple of code motion todos in the code.  My intention is to wait about a day - to be sure this sticks - and then perform the NFC motion without further review.

Differential Revision: https://reviews.llvm.org/D66688

llvm-svn: 371452
2019-09-09 20:54:13 +00:00
Francis Visoiu Mistrih 3d85013b63 [Remarks] Fix warning for uint8_t < 0 comparison
http://lab.llvm.org:8011/builders/clang-with-thin-lto-ubuntu/builds/19109/steps/build-stage1-compiler/logs/stdio

llvm-svn: 371443
2019-09-09 19:47:25 +00:00
Philip Reames 20aafa3156 Introduce infrastructure for an incremental port of SelectionDAG atomic load/store handling
This is the first patch in a large sequence. The eventual goal is to have unordered atomic loads and stores - and possibly ordered atomics as well - handled through the normal ISEL codepaths for loads and stores. Today, they are handled with instances of AtomicSDNode. The result of this is that all transforms need to be duplicated to work for unordered atomics. The benefit of the current design is that it's harder to introduce a silent miscompile by adding a transform which forgets about atomicity.  See the thread on llvm-dev titled "FYI: proposed changes to atomic load/store in SelectionDAG" for further context.

Note that this patch is NFC unless the experimental flag is set.

The basic strategy I plan on taking is:

    1. Introduce infrastructure and a flag for testing (this patch).
    2. Audit uses of isVolatile, and apply isAtomic conservatively*.
    3. Piecemeal, conservative* updates to generic code and x86 backend code in individual reviews w/tests, for cases which didn't check volatile but can be found with inspection.
    4. Flip the flag at the end (with minimal diffs).
    5. Work through the todo list identified in (2) and (3), exposing performance opportunities.

(*) The "conservative" bit here is aimed at minimizing the number of diffs involved in (4). Ideally, there'd be none. In practice, getting it down to something reviewable by a human is the actual goal. Note that there are (currently) no paths which produce LoadSDNode or StoreSDNode with atomic MMOs, so we don't need to worry about preserving any behaviour there.

We've taken a very similar strategy twice before with success - once at IR level, and once at the MI level (post ISEL). 

Differential Revision: https://reviews.llvm.org/D66309

llvm-svn: 371441
2019-09-09 19:23:22 +00:00
Matt Arsenault a0933e6df7 AMDGPU/GlobalISel: Legalize G_BUILD_VECTOR v2s16
Handle it the same way as G_BUILD_VECTOR_TRUNC. Arguably only
G_BUILD_VECTOR_TRUNC should be legal for this, but G_BUILD_VECTOR will
probably be more convenient in most cases.

llvm-svn: 371440
2019-09-09 18:57:51 +00:00
Matt Arsenault 8bc05d7d60 AMDGPU: Make VReg_1 size be 1
This was getting chosen as the preferred 32-bit register class based
on how TableGen selects subregister classes.

llvm-svn: 371438
2019-09-09 18:43:29 +00:00
Matt Arsenault 77e3e9cafd AMDGPU/GlobalISel: Select llvm.amdgcn.class
Also fixes missing SubtargetPredicate on f16 class instructions.

llvm-svn: 371436
2019-09-09 18:29:45 +00:00
Matt Arsenault d6c1f5bb15 AMDGPU/GlobalISel: Select fmed3
llvm-svn: 371435
2019-09-09 18:29:37 +00:00
Eli Friedman 79f0d3a6e5 [IfConversion] Correctly handle cases where analyzeBranch fails.
If analyzeBranch fails, on some targets, the out parameters point to
some blocks in the function. But we can't use that information, so make
sure to clear it out.  (In some places in IfConversion, we assume that
any block with a TrueBB is analyzable.)

The change to the testcase makes it trigger a bug on builds without this
fix: IfConvertDiamond tries to perform a followup "merge" operation,
which isn't legal, and we somehow end up with a branch to a deleted MBB.
I'm not sure how this doesn't crash the compiler.

Differential Revision: https://reviews.llvm.org/D67306

llvm-svn: 371434
2019-09-09 18:29:27 +00:00
Matt Arsenault 6ebf605851 AMDGPU: Use PatFrags to allow selecting custom nodes or intrinsics
This enables GlobalISel to handle various intrinsics. The custom node
pattern will be ignored, and the intrinsic will work. This will also
allow SelectionDAG to directly select the intrinsics, but as they are
all custom lowered to the nodes, this ends up leaving dead code in the
table.

Eventually either GlobalISel should add an equivalent of custom nodes, or
the intrinsics should be used directly. These each have
different tradeoffs.

There are a few more to handle, but these are the easy ones. Some
others fail for other reasons.

llvm-svn: 371432
2019-09-09 18:10:31 +00:00
Craig Topper 5ebd0a6e88 [SelectionDAG] Remove ISD::FP_ROUND_INREG
I don't think anything in tree creates this node. So all of this
code appears to be dead.

Code coverage agrees
http://lab.llvm.org:8080/coverage/coverage-reports/llvm/coverage/Users/buildslave/jenkins/workspace/clang-stage2-coverage-R/llvm/lib/CodeGen/SelectionDAG/DAGCombiner.cpp.html

Differential Revision: https://reviews.llvm.org/D67312

llvm-svn: 371431
2019-09-09 17:54:44 +00:00
Craig Topper ce2cb0f09e [X86] Allow _MM_FROUND_CUR_DIRECTION and _MM_FROUND_NO_EXC to be used together on instructions that only support SAE and not embedded rounding.
Currently, for SAE instructions we only allow _MM_FROUND_CUR_DIRECTION (bit 2) or _MM_FROUND_NO_EXC (bit 3) to be used as the immediate passed to the intrinsics. But these instructions don't perform rounding, so _MM_FROUND_CUR_DIRECTION is just a default placeholder when you don't want to suppress exceptions. Using _MM_FROUND_NO_EXC by itself is bit-equivalent to (_MM_FROUND_NO_EXC | _MM_FROUND_TO_NEAREST_INT), since _MM_FROUND_TO_NEAREST_INT is 0. Since we aren't rounding on these instructions, we should also accept (_MM_FROUND_CUR_DIRECTION | _MM_FROUND_NO_EXC) as equivalent to (_MM_FROUND_NO_EXC). icc allows this, but gcc does not.
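For illustration, a use of an SAE-only intrinsic that this change makes acceptable (assumes AVX-512F; the wrapper name is made up):

```
#include <immintrin.h>

// After this change, combining _MM_FROUND_CUR_DIRECTION with
// _MM_FROUND_NO_EXC is accepted for SAE-only operations and means the same
// thing as _MM_FROUND_NO_EXC alone, since these instructions do not round.
__mmask16 cmpEqSuppressExceptions(__m512 a, __m512 b) {
  return _mm512_cmp_round_ps_mask(
      a, b, _CMP_EQ_OQ, _MM_FROUND_CUR_DIRECTION | _MM_FROUND_NO_EXC);
}
```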

Differential Revision: https://reviews.llvm.org/D67289

llvm-svn: 371430
2019-09-09 17:48:05 +00:00
Francis Visoiu Mistrih a85d9ef11a [Remarks] Add parser for bitstream remarks
The bitstream remark serializer landed in r367372.

This adds a bitstream remark parser that parses bitstream remark files
to llvm::remarks::Remark objects through the RemarkParser interface.

A few interesting things to point out:

* There are parsing helpers to parse the different types of blocks
* The main parsing helper allows us to parse remark metadata and open an
external file containing the encoded remarks
* This adds a dependency from the Remarks library to the BitstreamReader
library
* The testing strategy is to create a remark entry through YAML, parse
it, serialize it to bitstream, parse that back and compare the objects.
* There are close to no tests for malformed bitstream remarks, due to
the lack of a textual format for the bitstream format.
* This adds a new C API for parsing bitstream remarks:
LLVMRemarkParserCreateBitstream.
* This bumps the REMARKS_API_VERSION to 1.

Differential Revision: https://reviews.llvm.org/D67134

llvm-svn: 371429
2019-09-09 17:43:50 +00:00
Simon Atanasyan 56e4ea2bff [mips] Fix decoding of microMIPS JALX instruction
The microMIPS jump and link exchange instruction stores its target in a
26-bit field. Unlike other microMIPS JAL instructions, these bits are the
target address shifted right by 2 bits [1]. The patch fixes the JALX
instruction decoding to use a 2-bit shift.

[1] MIPS Architecture for Programmers Volume II-B: The microMIPS32 Instruction Set
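A minimal sketch of the decoding described above (illustrative only, not the actual MC disassembler code):

```
#include <cstdint>

// JALX stores (target >> 2) in its 26-bit immediate field, so the
// disassembler must shift the field left by 2 to recover the target,
// rather than the smaller shift used by the other microMIPS JAL encodings.
uint32_t decodeMicroMipsJalxTarget(uint32_t Field) {
  return (Field & 0x03FFFFFFu) << 2;
}
```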

Differential Revision: https://reviews.llvm.org/D67320

llvm-svn: 371428
2019-09-09 17:28:45 +00:00
Matt Arsenault d2a9516a6d AMDGPU: Move MnemonicAlias out of instruction def hierarchy
Unfortunately MnemonicAlias defines a "Predicates" field just like an
instruction or pattern, with a somewhat different interpretation.

This ends up overriding the intended Predicates set by
PredicateControl on the pseudoinstruction definitions with an empty
list. This allowed incorrectly selecting instructions that should have
been rejected due to the SubtargetPredicate from patterns on the
instruction definition.

This does remove the divergent predicate from the 64-bit shift
patterns, which were already not used for the 32-bit shift, so I'm not
sure what the point was. This also removes a second, redundant copy of
the 64-bit divergent patterns.

llvm-svn: 371427
2019-09-09 17:25:35 +00:00
Jessica Paquette bfb00e3d53 [GlobalISel][AArch64] Handle tail calls with non-void return types
Just return once you emit the call, which is exactly what SelectionDAG does in
this situation.

Update call-translator-tail-call.ll.

Also update dllimport.ll to show that we tail call here in GISel again. Add
-verify-machineinstrs to the GISel line too, to defend against verifier
failures.

Differential revision: https://reviews.llvm.org/D67282

llvm-svn: 371425
2019-09-09 17:15:56 +00:00
Matt Arsenault 64ecca90d4 AMDGPU/GlobalISel: Implement LDS G_GLOBAL_VALUE
Handle the simple case that lowers to a constant.

llvm-svn: 371424
2019-09-09 17:13:44 +00:00
Matt Arsenault 182f9248e8 AMDGPU/GlobalISel: Legalize G_BUILD_VECTOR_TRUNC
Treat this as legal on gfx9 since it can use S_PACK_* instructions for
this.

This isn't used by anything yet. The same will probably apply to
16-bit G_BUILD_VECTOR without the trunc.

llvm-svn: 371423
2019-09-09 17:04:18 +00:00
Dmitri Gribenko d9c4060bd5 Revert "[MachineCopyPropagation] Remove redundant copies after TailDup via machine-cp"
This reverts commit r371359. I suspect a miscompile; I posted a
reproducer to https://reviews.llvm.org/D65267.

llvm-svn: 371421
2019-09-09 16:46:45 +00:00
Fangrui Song c28f3e6e2c [yaml2obj] Simplify p_filesz/p_memsz computing
This fixes a bug as well. When "FileSize:" (p_filesz) is specified and
different from the actual value, the following code probably should not
use PHeader.p_filesz:

  if (SHeader->sh_offset == PHeader.p_offset + PHeader.p_filesz)
    PHeader.p_memsz += SHeader->sh_size;

Reviewed By: jhenderson

Differential Revision: https://reviews.llvm.org/D67256

llvm-svn: 371420
2019-09-09 16:45:17 +00:00
David Green 2b7089949e [ARM] Fix loads and stores for predicate vectors
These predicate vectors can usually be loaded and stored with a single
instruction, a VSTR_P0. However this instruction will store the entire P0
predicate, 16 bits, zero-extended to 32 bits. Each lane of the
v4i1/v8i1/v16i1 represents 4/2/1 bits.

As far as I understand, when llvm says "store this v4i1", it really does need
to store 4 bits (or 8, that being the size of a byte, with the bottom 4 as the
interesting bits). For example a bitcast from a v8i1 to a i8 is defined as a
store followed by a load, which is how the code is expanded.

So this instead lowers the v4i1/v8i1 load/store through some shuffles to get
the bits into the correct positions. This, as you might imagine, is not as
efficient as a single instruction. But I believe it is needed for correctness.
Equally, v16i1 should not load/store 32 bits, only the 16 bits of data.
Stack loads/stores are still using the VSTR_P0 (as can be seen by the test not
changing). This is fine as they are self-consistent; it is only "externally
observable loads/stores" (from our point of view) that need to be corrected.

Differential revision: https://reviews.llvm.org/D67085

llvm-svn: 371419
2019-09-09 16:35:49 +00:00
Matt Arsenault 63e6d8db1c AMDGPU/GlobalISel: Select atomic loads
A new check for an explicitly atomic MMO is needed to avoid
incorrectly matching patterns for non-atomic loads.

llvm-svn: 371418
2019-09-09 16:18:07 +00:00
Matt Arsenault d8409b178e AMDGPU/GlobalISel: Fix RegBankSelect for unaligned, uniform constant loads
llvm-svn: 371416
2019-09-09 16:06:37 +00:00
Matt Arsenault 02eb308387 AMDGPU/GlobalISel: Fix regbankselect for uniform extloads
There are no scalar extloads.

llvm-svn: 371414
2019-09-09 16:03:45 +00:00
Matt Arsenault ebbd6e4976 AMDGPU: Remove code address space predicates
Fixes 8-byte, 8-byte aligned LDS loads. The 16-byte case is still broken
due to not being reported as legal.

llvm-svn: 371413
2019-09-09 16:02:07 +00:00
Matt Arsenault c34b4036ff AMDGPU/GlobalISel: Select G_PTR_MASK
llvm-svn: 371412
2019-09-09 15:46:13 +00:00
Matt Arsenault fdb7030117 AMDGPU/GlobalISel: Fix reg bank for uniform LDS loads
The pointer is always a VGPR. Also fix hardcoding the pointer size to
64.

llvm-svn: 371411
2019-09-09 15:44:16 +00:00
Matt Arsenault 2dd088ec7d AMDGPU/GlobalISel: Use known bits for selection
llvm-svn: 371409
2019-09-09 15:39:32 +00:00
Matt Arsenault 8e3bc9b572 AMDGPU/GlobalISel: Legalize wavefrontsize intrinsic
llvm-svn: 371407
2019-09-09 15:20:49 +00:00
Matt Arsenault d50f937378 AMDGPU/GlobalISel: Try generated matcher before add/sub code
This will allow optimization patterns which fold adds away to work.

llvm-svn: 371406
2019-09-09 15:20:44 +00:00
Simon Tatham 0e48bd24e2 [ARM] Remove some spurious MVE reduction instructions.
The family of 'dual-accumulating' vector multiply-add instructions
(VMLADAV, VMLALDAV and VRMLALDAVH) can all operate on both signed and
unsigned integer types, and they all have an 'exchange' variant (with
an X in the name) that modifies which pairs of vector lanes in the two
inputs are multiplied together. But there's a clause in the spec that
says that the X variants //don't// operate on unsigned integer types,
only signed. You can have X, or unsigned, or neither, but not both.

We didn't notice that clause when we implemented the MC support for
these instructions, so LLVM believes that things like VMLADAVX.U8 do
exist, contradicting the spec. Here I fix that by conditioning them
out in Tablegen.

In order to do that, I've reversed the nesting order of the Tablegen
multiclasses for those instructions. Previously, the innermost
multiclass generated the X and not-X variants, and the one outside
that generated the A and not-A variants. Now X is done by the outer
multiclass, which allows me to bypass that one when I only want the
two not-X variants.

Changing the multiclass nesting order also changes the names of the
instruction ids unless I make a special effort not to. I decided that
while I was changing them anyway I'd make them look nicer; so now the
instructions have names like MVE_VMLADAVs32 or MVE_VMLADAVaxs32,
instead of cumbersome _noacc_noexch suffixes.

The corresponding multiply-subtract instructions are unaffected. Those
don't accept unsigned types at all, either in the spec or in LLVM.

Reviewers: ostannard, dmgreen

Subscribers: javed.absar, kristof.beyls, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D67214

llvm-svn: 371405
2019-09-09 15:17:26 +00:00
Matt Arsenault 508dff2ce1 AMDGPU/GlobalISel: Remove dead patterns
llvm-svn: 371404
2019-09-09 15:06:06 +00:00
James Molloy b6c7fce67a [DFAPacketizer] Reapply: Track resources for packetized instructions
Reapply with fix to reduce resources required by the compiler - use
unsigned[2] instead of std::pair. This causes clang and gcc to compile
the generated file multiple times faster, and hopefully will reduce
the resource requirements on Visual Studio also. This fix is a little
ugly but it's clearly the same issue the previous author of
DFAPacketizer faced (the previous tables use unsigned[2] rather uglily
too).

This patch allows the DFAPacketizer to be queried after a packet is formed to work out which
resources were allocated to the packetized instructions.

This is particularly important for targets that do their own bundle packing - it's not
sufficient to know simply that instructions can share a packet; which slots are used is
also required for encoding.

This extends the emitter to emit a side-table containing resource usage diffs for each
state transition. The packetizer maintains a set of all possible resource states in its
current state. After packetization is complete, all remaining resource states are
possible packetization strategies.
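Roughly, the tracking works like the following sketch (illustrative only; the real implementation stores per-transition resource diffs in generated tables rather than recomputing anything):

```
#include <cstdint>
#include <set>
#include <vector>

// One bit per functional-unit slot in the packet.
using ResourceState = uint64_t;

// Advance the set of still-possible resource assignments by one packetized
// instruction. The instruction may be encodable in any of several alternative
// slot combinations; every surviving (state, alternative) pair that does not
// conflict yields a new possible state. After the packet is closed, each
// remaining state is one valid assignment of instructions to slots.
std::set<ResourceState>
addToPacket(const std::set<ResourceState> &States,
            const std::vector<ResourceState> &Alternatives) {
  std::set<ResourceState> Next;
  for (ResourceState S : States)
    for (ResourceState Alt : Alternatives)
      if ((S & Alt) == 0)          // these slots are still free
        Next.insert(S | Alt);      // occupy them in this hypothesis
  return Next;
}
```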

The sidetable is only ~500K for Hexagon, but the extra tracking is disabled by default
(most uses of the packetizer like MachinePipeliner don't care and don't need the extra
maintained state).

Differential Revision: https://reviews.llvm.org/D66936

llvm-svn: 371399
2019-09-09 13:17:55 +00:00
Sam Parker 1ad508e8e2 [ARM][MVE] VCTP instruction selection
Add codegen support for vctp{8,16,32}.

Differential Revision: https://reviews.llvm.org/D67344

llvm-svn: 371395
2019-09-09 12:54:47 +00:00
Simon Pilgrim 462e3d8050 Revert rL371198 from llvm/trunk: [DFAPacketizer] Track resources for packetized instructions
This patch allows the DFAPacketizer to be queried after a packet is formed to work out which
resources were allocated to the packetized instructions.

This is particularly important for targets that do their own bundle packing - it's not
sufficient to know simply that instructions can share a packet; which slots are used is
also required for encoding.

This extends the emitter to emit a side-table containing resource usage diffs for each
state transition. The packetizer maintains a set of all possible resource states in its
current state. After packetization is complete, all remaining resource states are
possible packetization strategies.

The sidetable is only ~500K for Hexagon, but the extra tracking is disabled by default
(most uses of the packetizer like MachinePipeliner don't care and don't need the extra
maintained state).

Differential Revision: https://reviews.llvm.org/D66936
........
Reverted as this is causing "compiler out of heap space" errors on MSVC 2017/19 NDEBUG builds

llvm-svn: 371393
2019-09-09 12:33:22 +00:00