Commit Graph

131965 Commits

Author SHA1 Message Date
Jim Lin ea6eb813c7 [AVR][NFC] Use Register instead of unsigned
Summary: Use Register type for variables instead of unsigned type.

Reviewers: dylanmckay

Reviewed By: dylanmckay

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D75595
2020-03-05 11:38:24 +08:00
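As a rough illustration of the change described in the AVR commit above (a hedged standalone sketch, not the actual AVR backend code and not LLVM's real llvm::Register class): replacing a raw unsigned register id with a thin wrapper type makes signatures self-documenting while still accepting plain numbers.

```
#include <iostream>

// Hypothetical stand-in for llvm::Register: a thin wrapper around the raw
// unsigned register id that target code used to pass around directly.
class MiniRegister {
  unsigned Id;

public:
  constexpr MiniRegister(unsigned Id = 0) : Id(Id) {}
  constexpr unsigned id() const { return Id; }
};

// Before: void emitAdd(unsigned DstReg, unsigned SrcReg);
// After: the signature states that these parameters are registers.
void emitAdd(MiniRegister DstReg, MiniRegister SrcReg) {
  std::cout << "add r" << DstReg.id() << ", r" << SrcReg.id() << "\n";
}

int main() {
  emitAdd(MiniRegister(24), MiniRegister(22)); // raw ids still convert
}
```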
Greg Clayton ffe6695acf Fix buildbots with merge that didn't happen for 4050b01ba9. 2020-03-04 19:28:24 -08:00
Greg Clayton 4050b01ba9 Fix GSYM tests to run the yaml files and fix test failures on some machines.
YAML files were not being run during lit testing as there was no lit.local.cfg file. Once this was fixed, some buildbots would fail due to a StringRef that pointed to a std::string inside of a temporary llvm::Triple object. These issues are fixed here by making a local triple object that stays around long enough so the StringRef points to valid data. Fixed memory sanitizer bot bugs as well.

Differential Revision: https://reviews.llvm.org/D75390
2020-03-04 19:14:08 -08:00
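The StringRef problem fixed in the commit above is the classic dangling-view-into-a-temporary pattern. A minimal standalone sketch of the same hazard and its fix, using std::string_view in place of llvm::StringRef and a hypothetical makeTriple() in place of constructing an llvm::Triple:

```
#include <iostream>
#include <string>
#include <string_view>

// Hypothetical stand-in for llvm::Triple: owns its string representation.
struct FakeTriple {
  std::string Str;
  const std::string &str() const { return Str; }
};

FakeTriple makeTriple() { return FakeTriple{"x86_64-unknown-linux-gnu"}; }

int main() {
  // Bug: the view points into the string of a temporary FakeTriple, which is
  // destroyed at the end of the full expression. Sanitizers flag later reads.
  std::string_view Dangling = makeTriple().str();
  (void)Dangling; // must not be read

  // Fix (the shape of the commit's fix): keep a local object alive so the
  // view stays valid for as long as it is used.
  FakeTriple T = makeTriple();
  std::string_view Ok = T.str();
  std::cout << Ok << "\n";
}
```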
hsmahesha 3fda1fde8f AMDGPU/GlobalISel: Support llvm.trap and llvm.debugtrap intrinsics
Summary: Lower trap and debugtrap intrinsics to AMDGPU machine instruction(s).

Reviewers: arsenm, nhaehnle, kerbowa, cdevadas, t-tye, kzhuravl

Reviewed By: arsenm

Subscribers: kzhuravl, jvesely, wdng, yaxunl, rovka, dstuttard, tpr, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D74688
2020-03-05 08:16:57 +05:30
Shengchen Kan b3722dea3b [X86] Add a private member function determinePaddingPrefix for X86AsmBackend
Summary: X86 can reduce the number of NOP bytes needed by padding instructions with prefixes to get better performance in some cases. So a private member function `determinePaddingPrefix` is added to determine which prefix is the most suitable.

Reviewers: annita.zhang, reames, MaskRay, craig.topper, LuoYuanke, jyknight

Reviewed By: reames

Subscribers: llvm-commits, dexonsmith, hiraditya

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D75357
2020-03-05 09:26:33 +08:00
Philip Reames f708c823f0 [X86] Relax existing instructions to reduce the number of nops needed for alignment purposes
If we have an explicit align directive, we currently default to emitting nops to fill the space. As discussed in the context of the prefix padding work for branch alignment (D72225), we're allowed to play other tricks such as extending the size of previous instructions instead.

This patch will convert near jumps to far jumps if doing so decreases the number of bytes of nops needed for a following align. It does so as a post-pass after relaxation is complete. It intentionally works without moving any labels or doing anything which might require another round of relaxation.

The point of this patch is mainly to mock out the approach. The optimization implemented is real, and possibly useful, but the main point is to demonstrate an approach for implementing such "pad previous instruction" approaches. The key notion in this patch is to treat padding previous instructions as an optional optimization, not as a core part of relaxation. The benefit to this is that we avoid the potential concern about increasing the distance between two labels and thus causing further potentially non-local code growth due to relaxation. The downside is that we may miss some opportunities to avoid nops.

For the moment, this patch only implements a small set of existing relaxations. Assuming the approach is satisfactory, I plan to extend this to a broader set of instructions where there are obvious "relaxations" which are roughly performance equivalent.

Note that this patch *doesn't* change which instructions are relaxable. We may wish to explore that separately to increase optimization opportunity, but I figured that deserved its own separate discussion.

There are possible downsides to this optimization (and all "pad previous instruction" variants). The major two are potentially increasing instruction fetch and perturbing uop caching (i.e. the usual alignment risks). Specifically:
 * If we pad an instruction such that it crosses a fetch window (16 bytes on modern X86-64), we may cause the decoder to have to trigger a fetch it wouldn't have otherwise. This can affect both decode speed and icache pressure.
 * Intel's uop cache has particular restrictions on which instruction combinations can fit, and in what way. By moving instructions around, we can both cause new misses and change misses into hits. Many of the most painful cases are around branch density, so I don't expect this to be too bad on the whole.

On the whole, I expect to see small swings (i.e. the typical alignment change problem), but nothing major or systematic in either direction.

Differential Revision: https://reviews.llvm.org/D75203
2020-03-04 16:52:35 -08:00
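For a back-of-the-envelope feel for the trade-off described above (a hedged sketch with made-up offsets, not the MC layer implementation): growing a small x86-64 jump encoding (2-byte `EB rel8`) into a larger one (5-byte `E9 rel32`) adds 3 bytes, which can remove 3 bytes of nop padding before a following aligned label.

```
#include <cstdio>

// Padding needed to align `Offset` up to `Align` (Align is a power of two).
unsigned paddingTo(unsigned Offset, unsigned Align) {
  return (Align - (Offset % Align)) % Align;
}

int main() {
  const unsigned ShortJmpSize = 2; // EB rel8
  const unsigned NearJmpSize = 5;  // E9 rel32
  const unsigned Align = 16;

  unsigned EndOfShortJmp = 27; // hypothetical layout: small jump ends at offset 27
  unsigned NopsKeepingShort = paddingTo(EndOfShortJmp, Align);
  unsigned NopsAfterRelax =
      paddingTo(EndOfShortJmp + (NearJmpSize - ShortJmpSize), Align);

  // Prints 5 nops for the small form vs. 2 after relaxing the jump.
  std::printf("nops: small form %u, relaxed form %u\n", NopsKeepingShort,
              NopsAfterRelax);
}
```

The patch only performs such a relaxation when it decreases the nop count, and runs it as a post-pass so no labels move.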
Stefan Gränitz 76c59a63bc [ORC] Decompose LazyCallThroughManager::callThroughToSymbol()
Summary: Decompose callThroughToSymbol() into findReexport(), resolveSymbol(), notifyResolved() and reportCallThroughError(). This allows derived classes to reuse the functionality while adding their own code in between.

Reviewers: lhames

Reviewed By: lhames

Subscribers: hiraditya, steven_wu, dexonsmith, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D75084
2020-03-05 00:24:23 +01:00
Craig Topper eadea7868f [X86] Convert vXi1 vectors to xmm/ymm/zmm types via getRegisterTypeForCallingConv rather than using CCPromoteToType in the td file
Previously we tried to promote these to xmm/ymm/zmm by promoting
in the X86CallingConv.td file. But this breaks when we run out
of xmm/ymm/zmm registers and need to fall back to memory. We end
up trying to create a nonsensical scalar-to-vector conversion. This led
to an assertion. The new tests in avx512-calling-conv.ll all
trigger this assertion.

Since we really want to treat these types like we do on avx2,
it seems better to promote them before the calling convention
code gets involved. Except when the calling convention is one
that passes the vXi1 type in a k register.

The changes in avx512-regcall-Mask.ll are because we indicated
that xmm/ymm/zmm types should be passed indirectly for the
Win64 ABI before we go to the common lines that promoted the
vXi1 types. This caused the promoted types to be picked up by
the default calling convention code. Now we promote them earlier
so they get passed indirectly as though they were xmm/ymm/zmm.

Differential Revision: https://reviews.llvm.org/D75154
2020-03-04 15:02:32 -08:00
shafik 37549464c1 [dsymutil] Fix template stripping in getDIENames(...) to account for overloaded operators
Currently, when generating accelerator tables, dsymutil will attempt to strip the template parameters from subroutine names.
For some overloaded operators which contain < in their names, e.g. operator<, the current method ends up stripping the operator name as well,
so we just end up with the name "operator" in the table in each case.

Differential Revision: https://reviews.llvm.org/D75545
2020-03-04 14:54:31 -08:00
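A minimal standalone sketch of the failure mode described above and of one way to avoid it (a hypothetical helper, not dsymutil's actual getDIENames(...) logic): naively cutting at the first '<' turns operator< into plain operator, so the stripping has to skip the operator's own punctuation first.

```
#include <cctype>
#include <iostream>
#include <string>

// Naive stripping: cut at the first '<'. This mangles operator<, operator<=, ...
std::string stripTemplatesNaive(const std::string &Name) {
  return Name.substr(0, Name.find('<'));
}

// Sketch of a fix: if the name starts with "operator", skip the punctuation
// spelling the operator itself before searching for the template '<'.
std::string stripTemplatesFixed(const std::string &Name) {
  const std::string Op = "operator";
  size_t From = 0;
  if (Name.compare(0, Op.size(), Op) == 0) {
    From = Op.size();
    while (From < Name.size() && std::ispunct((unsigned char)Name[From]))
      ++From;
  }
  return Name.substr(0, Name.find('<', From));
}

int main() {
  for (const std::string N : {"Foo<int>", "operator<", "operator<="}) {
    std::cout << N << " -> naive: " << stripTemplatesNaive(N)
              << ", fixed: " << stripTemplatesFixed(N) << "\n";
  }
}
```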
Craig Topper 6ca96765c7 [X86] Disable commuting for the first source operand of zero masked scalar fma intrinsic instructions.
I believe this is the correct fix for D75506 rather than disabling all commuting. We can still commute the remaining two sources.

Differential Revision: https://reviews.llvm.org/D75526
2020-03-04 14:35:53 -08:00
Matt Arsenault 15bf916b54 AMDGPU: Remove VOP3OpSelMods0 complex pattern
Use default operand of 0 instead.
2020-03-04 17:18:22 -05:00
Nikita Popov c6ff3c9bad [InstSimplify] Constant fold icmp of gep
InstSimplify can fold icmps of gep where the base pointers are the
same and the offsets are constant. It does so by constructing a
constant expression icmp and assumes that it gets folded -- but
this doesn't actually happen, because GEP expressions can usually
only be folded by the target-dependent constant folding layer.
As such, we need to explicitly invoke it here.

Differential Revision: https://reviews.llvm.org/D75407
2020-03-04 23:16:52 +01:00
Muhammad Omair Javaid 5583c2f2fb Revert "[GlobalISel][Localizer] Enable intra-block localization of already-local uses."
This reverts commit e91e1df6ab.
2020-03-05 03:12:28 +05:00
Matt Arsenault 9e1d2afc13 AMDGPU/GlobalISel: Don't use vector G_EXTRACT in arg lowering
Create a wider source vector, and unmerge with dead defs like the
legalizer. The legalization handling for G_EXTRACT is incomplete, and
it's preferable to keep everything in 32-bit pieces.

We should probably start moving these functions into utils, since we
have a growing number of places that do almost the same thing.
2020-03-04 16:49:01 -05:00
Matt Arsenault b71203a751 GlobalISel: Move some legalizer functions to utils 2020-03-04 16:40:00 -05:00
Matt Arsenault fb0c35fa34 GlobalISel: Set alignment on function argument stack load/store 2020-03-04 16:38:46 -05:00
Zola Bridges aa3f791fa9 [x86][SLH] Rm liveness check from data invariance check
SLH had two functions named isDataInvariant and isDataInvariantLoad that
checked whether the passed instruction was data invariant. For some instructions,
if the EFLAGS were dead then they were considered data invariant, otherwise
they were not considered data invariant.

In this patch, I extracted that EFLAGS liveness check and made it
explicit at every call to isDataInvariant and isDataInvariantLoad.
This makes the isDataInvariant function behave more generally
and preserves the liveness check behavior that SLH would like to have.

Tested via llvm-lit llvm/test/CodeGen/X86/speculative-load-hardening*

This is the first step in making these two data invariance checks
available for non-SLH passes. The second step is to move these checks from
SLH to X86InstrInfo.cpp. I'll follow up with a patch that does that.

Differential Revision: https://reviews.llvm.org/D70283
2020-03-04 21:49:49 +01:00
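A hedged sketch of the refactoring shape described above (hypothetical types and helper names, not the actual X86SpeculativeLoadHardening code): the EFLAGS liveness test moves out of the invariance predicate and into each caller, so the predicate itself stays reusable.

```
#include <iostream>

struct Instr {
  bool OpcodeIsDataInvariant;
  bool EFLAGSDead;
};

// After the change: answers only "is this opcode data invariant?".
bool isDataInvariant(const Instr &MI) { return MI.OpcodeIsDataInvariant; }

// Hypothetical liveness query, now spelled out explicitly by callers.
bool isEFLAGSDead(const Instr &MI) { return MI.EFLAGSDead; }

int main() {
  Instr MI{/*OpcodeIsDataInvariant=*/true, /*EFLAGSDead=*/false};

  // Before, the liveness check was folded into isDataInvariant(); now SLH-style
  // callers combine both conditions, while other passes can use the predicate
  // alone without inheriting SLH's EFLAGS requirement.
  bool UsableForSLH = isDataInvariant(MI) && isEFLAGSDead(MI);
  std::cout << (UsableForSLH ? "usable for SLH\n" : "not usable for SLH\n");
}
```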
Lang Hames 8363ff04af [ORC] Add some debugging output for initializers.
This output can be useful in tracking down initialization failures in the JIT.
2020-03-04 12:38:25 -08:00
Wei Mi 3c96d01d2e Generate Callee Saved Register (CSR) related cfi directives like .cfi_restore.
https://reviews.llvm.org/D42848 only handled CFA related cfi directives but
didn't handle CSR related cfi. The patch adds the CSR part. Basically it reuses
the framework created in D42848. For each basic block, the patch tracks which
CSR set has been saved at its CFG predecessors' exits, and compares that CSR
set with the set at its previous basic block's exit (the previous block is the
block laid out before the current block). If the saved CSR set at its previous
basic block's exit is larger, .cfi_restore will be inserted.

The patch also generates proper .cfi_restore in epilogue to make sure the
saved CSR set is consistent for the incoming edges of each block.

Differential Revision: https://reviews.llvm.org/D74303
2020-03-04 11:18:37 -08:00
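A small sketch of the set comparison described above (hypothetical register names and plain std::set, not the actual CFI instruction inserter): registers saved at the previous layout block's exit but not at the current block's CFG predecessors' exits get a .cfi_restore.

```
#include <algorithm>
#include <iostream>
#include <iterator>
#include <set>
#include <string>
#include <vector>

int main() {
  // CSRs saved at the exit of the block laid out just before the current one,
  // and CSRs saved at the exits of the current block's CFG predecessors.
  std::set<std::string> SavedAtPrevLayoutExit = {"rbx", "r12", "r13"};
  std::set<std::string> SavedAtPredExits = {"rbx"};

  // Anything saved in the former but not the latter needs to be restored at
  // the top of the current block so the CFI state is consistent on entry.
  std::vector<std::string> NeedRestore;
  std::set_difference(SavedAtPrevLayoutExit.begin(), SavedAtPrevLayoutExit.end(),
                      SavedAtPredExits.begin(), SavedAtPredExits.end(),
                      std::back_inserter(NeedRestore));

  for (const std::string &Reg : NeedRestore)
    std::cout << ".cfi_restore " << Reg << "\n"; // prints r12 and r13
}
```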
Guozhi Wei ee9a3eba76 [CodeGenPrepare] Handle ExtractValueInst in dupRetToEnableTailCallOpts
As the test case shows, if there is an ExtractValueInst in the Ret block, the function dupRetToEnableTailCallOpts can't duplicate it into the block containing the call, so no tail call is later generated in CodeGen.

This patch adds ExtractValueInst handling code to dupRetToEnableTailCallOpts and FoldReturnIntoUncondBranch, so a tail call can later be generated for this case.

Differential Revision: https://reviews.llvm.org/D74242
2020-03-04 11:10:32 -08:00
David Green 38e532278e [LSR] Add masked load and store handling
This teaches Loop Strength Reduction the details about masked load and
store address operands, so that it can have a better time optimising
them as it would for normal loads and stores.

Differential Revision: https://reviews.llvm.org/D75371
2020-03-04 18:36:10 +00:00
Mitch Phillips 58079aa91b Revert "Fix GSYM tests to run the yaml files and fix test failures on some machines."
This reverts commit 8d41f1a023.

This change broke the MSan buildbots - see comments in
https://reviews.llvm.org/D75390 for more information.
2020-03-04 10:21:54 -08:00
Nikita Popov 293d813020 [InstCombine] Don't explicitly invoke const folding in shift combine
InstCombine uses an IRBuilder that automatically performs
target-dependent constant folding, so explicitly invoking it here
is not necessary.
2020-03-04 18:33:00 +01:00
Nikita Popov 9b5de84e27 [InstCombine] Use IRBuilder to create bitcast
This makes sure that the constant expression bitcast goes through
target-dependent constant folding, and thus avoids an additional
iteration of InstCombine.
2020-03-04 18:28:38 +01:00
Nikita Popov 0e890cd4d4 [ConstantFolding] Always return something from ConstantFoldConstant
Spin-off from D75407. As described there, ConstantFoldConstant()
currently returns null for non-ConstantExpr/ConstantVector inputs,
but otherwise always returns non-null, independently of whether
any folding has happened or not.

This is confusing and makes consumer code more complicated.
I would expect either that ConstantFoldConstant() returns non-null only if
it actually folded something, or that it always returns non-null.
I'm going with the latter possibility here, which appears to be more
useful considering existing usage.

Differential Revision: https://reviews.llvm.org/D75543
2020-03-04 18:24:47 +01:00
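To illustrate why the always-return-non-null contract chosen above simplifies callers (a toy model with a hypothetical foldMaybeNull/foldAlways pair standing in for ConstantFoldConstant, and a plain struct standing in for Constant):

```
#include <iostream>

struct Value {
  int V;
};

Value FoldedStorage{0};

// Old-style contract: return nullptr when nothing was folded.
Value *foldMaybeNull(Value *C) {
  if (C->V % 2 != 0)
    return nullptr;           // nothing folded
  FoldedStorage.V = C->V / 2; // toy "folded" result
  return &FoldedStorage;
}

// New-style contract: always return a usable value (the input if unchanged).
Value *foldAlways(Value *C) {
  Value *Folded = foldMaybeNull(C);
  return Folded ? Folded : C;
}

int main() {
  Value A{7};

  // Old-style call site: every consumer repeats the null check.
  Value *C = &A;
  if (Value *Folded = foldMaybeNull(C))
    C = Folded;

  // New-style call site: unconditional assignment, no special case.
  Value *D = foldAlways(&A);

  std::cout << C->V << " " << D->V << "\n"; // both print 7: nothing folded
}
```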
Craig Topper 06de426426 [X86] Directly form VBROADCAST_LOAD in lowerShuffleAsBroadcast on AVX targets.
If we would emit a VBROADCAST node, we can instead directly emit
a VBROADCAST_LOAD. This allows us to get rid of the special case
to use an f64 load on 32-bit targets for vXi64.

I believe there is more cleanup we can do later in this function,
but I'll do that in follow ups.
2020-03-04 09:11:57 -08:00
Sanjay Patel 71a316883d [PassManager] adjust VectorCombine placement
The initial placement of vector-combine in the opt pipeline revealed phase ordering bugs:
https://bugs.llvm.org/show_bug.cgi?id=45015
https://bugs.llvm.org/show_bug.cgi?id=42022

This patch contains a few independent changes:

1. Move the pass up in the pipeline, so it happens just after loop-vectorization.
   This is only to keep vectorization passes together in the pipeline at the moment.
   I don't have evidence of interaction between these yet.
2. Add an -early-cse pass after -vector-combine to clean up redundant ops. This was
   partly proposed as far back as rL219644 (which is why it's effectively being moved
   in the old PM code). This is important because the subsequent -instcombine doesn't
   work as well without EarlyCSE. With the CSE, -instcombine is able to squash
   shuffles together in 1 of the tests (because those are simple "select" shuffles).
3. Remove the -vector-combine pass that was running after SLP. We may want to do that
   eventually, but I don't have a test case to support it yet.

Differential Revision: https://reviews.llvm.org/D75145
2020-03-04 11:10:49 -05:00
Sanjay Patel 29a2b20ab3 [SDAG] simplify FP binops to undef
As discussed in the commit thread for rGa253a2a and D73978, we can do more undef folding for FP ops.
The nnan and ninf fast-math-flags specify that if an operand is the disallowed value, the result is
poison, so we can produce an undef result.

But this doesn't work as expected (the undef operand cases remain) because of a Flags propagation
problem in SelectionDAGBuilder.

I've added DAGCombiner calls to enable these for the other cases because we've shown in other
patches that (because of the limited way that SDAG iterates), it is possible to miss simplifications
like this if they are done only at node creation time.

Several potential follow-ups to expand on this patch are possible.

Differential Revision: https://reviews.llvm.org/D75576
2020-03-04 10:42:16 -05:00
Pavel Labath bddab92858 Use new DWARFDataExtractor::getInitialLength in DWARFDebugFrame 2020-03-04 13:01:35 +01:00
Pavel Labath 2458492a9a Use new DWARFDataExtractor::getInitialLength in DWARFDebugPubTable 2020-03-04 13:01:35 +01:00
Pavel Labath c9579271b3 Use new DWARFDataExtractor::getInitialLength in DWARFUnit 2020-03-04 13:01:35 +01:00
Pavel Labath a8bc9c3f0f Use new DWARFDataExtractor::getInitialLength in DWARFVerifier 2020-03-04 13:01:34 +01:00
Pavel Labath eb2b17eea7 Use DWARFDataExtractor::getInitialLength in debug_aranges
Summary:
getInitialLength is a *DWARF*DataExtractor method so I had to "upgrade"
some DataExtractors to be able to make use of it.

Reviewers: ikudrin, jhenderson, probinson

Subscribers: aprantl, hiraditya, llvm-commits, dblaikie

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D75535
2020-03-04 13:01:07 +01:00
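For context on what getInitialLength centralizes: a DWARF initial length is a 4-byte field, except that the escape value 0xffffffff means the real length follows in the next 8 bytes (64-bit DWARF). Below is a standalone little-endian sketch of that decode, not the DWARFDataExtractor implementation, and without the bounds checking real code needs.

```
#include <cstdint>
#include <cstring>
#include <iostream>
#include <vector>

// Reads a DWARF initial length at Offset and advances Offset past it.
uint64_t readInitialLength(const uint8_t *Data, size_t &Offset, bool &IsDwarf64) {
  uint32_t Len32;
  std::memcpy(&Len32, Data + Offset, 4);
  Offset += 4;
  if (Len32 != 0xffffffffu) {
    IsDwarf64 = false;
    return Len32; // 32-bit DWARF
  }
  uint64_t Len64;
  std::memcpy(&Len64, Data + Offset, 8);
  Offset += 8;
  IsDwarf64 = true;
  return Len64; // 64-bit DWARF
}

int main() {
  // A 32-bit DWARF length of 0x2a followed by a 64-bit DWARF length of 0x100.
  std::vector<uint8_t> Data = {0x2a, 0x00, 0x00, 0x00,
                               0xff, 0xff, 0xff, 0xff,
                               0x00, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00};
  size_t Offset = 0;
  bool IsDwarf64 = false;
  for (int I = 0; I < 2; ++I)
    std::cout << std::hex << readInitialLength(Data.data(), Offset, IsDwarf64)
              << " (dwarf64=" << IsDwarf64 << ")\n";
}
```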
Pavel Labath 38385630ad Use DWARFDataExtractor::getInitialLength in DWARFDebugAddr
Reviewers: ikudrin, jhenderson, probinson

Subscribers: hiraditya, dblaikie, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D75532
2020-03-04 13:00:15 +01:00
Kerry McLaughlin f5502c7035 [AArch64][SVE] Add SVE2 intrinsic for xar
Summary: Implements the @llvm.aarch64.sve.xar intrinsic

Reviewers: andwar, c-rhodes, dancgr, efriedma, rengolin

Reviewed By: andwar

Subscribers: tschuett, kristof.beyls, hiraditya, rkruppe, psnobl, cfe-commits, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D75160
2020-03-04 11:44:32 +00:00
Evgeniy Brevnov 5a63813dc7 [DependenceAnalysis] Dependencies for loads marked with "invariant.load" should not be shared with general accesses (PR42151).
Summary:
This is a second attempt to fix the problem with incorrect dependencies being reported in the presence of invariant loads. The initial fix (https://reviews.llvm.org/D64405) was reverted due to a regression reported in https://reviews.llvm.org/D70516.

The original fix changed caching behavior for invariant loads. Namely, such loads are not put into the second level cache (NonLocalDepInfo). The problem with that fix is that the first level cache (CachedNonLocalPointerInfo) still works as if invariant loads were in the second level cache. The solution is, in addition to not putting dependence results into the second level cache, to also avoid putting info about invariant loads into the first level cache.

Reviewers: jdoerfert, reames, hfinkel, efriedma

Reviewed By: jdoerfert

Subscribers: DaniilSuchkov, hiraditya, bmahjour, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D73027
2020-03-04 18:40:02 +07:00
Simon Pilgrim e2f0093800 [AMDGPU] performCvtF32UByteNCombine - revisit node after src operand simplification.
If SimplifyDemandedBits succeeds in simplifying the byte src, add the CVT_F32_UBYTE node back to the worklist as we might be able to simplify further.

Yet another step towards removing SelectionDAG::GetDemandedBits.
2020-03-04 11:25:50 +00:00
Simon Tatham 068b2f313c [ARM,MVE] Add the `vshlcq` intrinsics.
Summary:
The VSHLC instruction performs a left shift of a whole vector register
by an immediate shift count up to 32, shifting in new bits at the low
end from a GPR and delivering the shifted-out bits from the high end
back into the same GPR.

Since the instruction produces two outputs (the shifted vector
register and the output GPR of shifted-out bits), it has to be
instruction-selected in C++ rather than Tablegen.

Reviewers: MarkMurrayARM, dmgreen, miyuki, ostannard

Reviewed By: miyuki

Subscribers: kristof.beyls, hiraditya, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D75445
2020-03-04 08:49:27 +00:00
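A scalar model of the VSHLC behaviour as described above (a hedged sketch over four 32-bit lanes, based only on the summary's wording rather than the instruction's pseudocode): each lane is shifted left by n with carry flowing from lane to lane, the low lane is filled from the GPR's low n bits, and the bits shifted out of the top lane are handed back for the GPR.

```
#include <array>
#include <cstdint>
#include <cstdio>

// Lanes[0] is the low end of the vector; N is the shift count in 1..32.
uint32_t vshlcModel(std::array<uint32_t, 4> &Lanes, uint32_t Gpr, unsigned N) {
  uint64_t Carry = Gpr & (N == 32 ? 0xffffffffu : ((1u << N) - 1));
  for (uint32_t &Lane : Lanes) {
    uint64_t Wide = (uint64_t(Lane) << N) | Carry; // shift lane, add carry-in
    Lane = uint32_t(Wide);                         // low 32 bits stay in place
    Carry = Wide >> 32;                            // the rest moves up a lane
  }
  return uint32_t(Carry); // bits shifted out of the high end
}

int main() {
  std::array<uint32_t, 4> V = {0x80000001u, 0, 0, 0xf0000000u};
  uint32_t Out = vshlcModel(V, /*Gpr=*/0x3, /*N=*/4);
  std::printf("lanes: %08x %08x %08x %08x  gpr out: %08x\n",
              V[0], V[1], V[2], V[3], Out);
}
```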
Simon Tatham 810127f6ab [ARM,MVE] Add the `vsbciq` intrinsics.
Summary:
These are exactly parallel to the existing `vadciq` intrinsics, which
we implemented last year as part of the original MVE intrinsics
framework setup.

Just like VADC/VADCI, the MVE VSBC/VSBCI instructions deliver two
outputs, both of which the intrinsic exposes: a modified vector
register and a carry flag. So they have to be instruction-selected in
C++ rather than Tablegen. However, in this case, that's trivial: the
same C++ isel routine we already have for VADC works unchanged, and
all we have to do is to pass it a different instruction id.

Reviewers: MarkMurrayARM, dmgreen, miyuki, ostannard

Reviewed By: miyuki

Subscribers: kristof.beyls, hiraditya, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D75444
2020-03-04 08:49:27 +00:00
Craig Topper 9284abd004 [X86] Directly form VBROADCAST_LOAD for BUILD_VECTOR of splat loads in lowerBuildVectorAsBroadcast. 2020-03-03 22:27:34 -08:00
Juneyoung Lee 952ad4701c [ValueTracking] Let isGuaranteedNotToBeUndefOrPoison look into branch conditions of dominating blocks' terminators
Summary:
```
  br i1 c, BB1, BB2
BB1:
  use1(c)
BB2:
  use2(c)
```

In BB1 and BB2, c is never undef or poison because otherwise the branch would have triggered UB.

Checked with Alive2

Reviewers: xbolva00, spatel, lebedev.ri, reames, jdoerfert, nlopes, sanjoy

Reviewed By: reames

Subscribers: jdoerfert, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D75401
2020-03-04 11:43:31 +09:00
Amara Emerson e91e1df6ab [GlobalISel][Localizer] Enable intra-block localization of already-local uses.
This changes the localizer to attempt intra-block localization of instructions
that have local uses. This is useful because sometimes the entry block itself
has many uses of constant-like instructions, which would benefit from shortening
live ranges. Previously if an inst had no non-local uses, we wouldn't add it to
the list of instructions to attempt further intra-block localization.

This gives a 0.7% geomean code size improvement on CTMark.

Differential Revision: https://reviews.llvm.org/D75555
2020-03-03 18:14:57 -08:00
Fangrui Song 90acc505ed [MCDwarf] Change emitListsTableHeaderStart to use a reference and fold Start/End symbols generation into it
Apply @dblaikie's suggestions in a post-commit review for D75375

Reviewed By: dblaikie

Differential Revision: https://reviews.llvm.org/D75568
2020-03-03 16:20:40 -08:00
Lang Hames 31e0331763 [ORC] Skip ST_File symbols in MaterializationUnit interfaces / resolution.
ST_File symbols aren't relevant for linking purposes, but can end up shadowing
real symbols if they're not filtered.

No test case yet: The ideal testcase for this would be an ELF llvm-jitlink test,
but llvm-jitlink support for ELF is still under development. We should add a
testcase for this once support lands in tree.
2020-03-03 16:15:44 -08:00
Matt Arsenault f9047ede58 LICM: Reorder condition checks
Check the fast math flag before the more expensive loop check.
2020-03-03 17:15:57 -05:00
Matt Arsenault 88aced1e45 AMDGPU: Fix computation for getOccupancyWithLocalMemSize
The computation here didn't really make sense to me, and reported
wildly different results depending on the flat work group size
attribute.

I think this should really report a range derived from the possible
work group size bounds, and only allow an occupancy that is a multiple
of the group size.
2020-03-03 17:15:57 -05:00
Brian Gesiak aa85b437a9 [Coroutines] Use dbg.declare for frame variables
Summary:
https://gist.github.com/modocache/ed7c62f6e570766c0f39b35dad675c2f
is an example of a small C++ program that uses C++20 coroutines that
is difficult to debug, due to the loss of debug info for variables that
"spill" across coroutine suspension boundaries. This patch addresses
that issue by inserting 'llvm.dbg.declare' intrinsics that point the
debugger to the variables' location at an offset to the coroutine frame.

With this patch, I confirmed that running the 'frame variable' commands in
https://gist.github.com/modocache/ed7c62f6e570766c0f39b35dad675c2f at
the specified breakpoints results in the correct values being printed
for coroutine frame variables 'i' and 'j' when using an lldb built from
trunk, as well as with gdb 8.3 (lldb 9.0.1, however, could not print the
values). The added test case also verifies this improved behavior.

The existing coro-debug.ll test case is also modified to reflect the
locations at which Clang actually places calls to 'dbg.declare', and
additional checks are added to ensure this patch works as intended in that
example as well.

Reviewers: vsk, jmorse, GorNishanov, lewissbaker, wenlei

Subscribers: EricWF, aprantl, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D75338
2020-03-03 17:13:46 -05:00
Amy Huang 5b3b21f025 [DebugInfo] Fix for adding "returns cxx udt" option to functions in CodeView.
Summary:
This change checks for the return type in the frontend and adds a flag
to the DISubroutineType to indicate that the option should be added in
CodeViewDebug.

Previously function types sometimes appeared twice in the PDB: once with
"returns cxx udt" and once without.
See https://bugs.llvm.org/show_bug.cgi?id=44785.

Reviewers: rnk, asmith

Subscribers: hiraditya, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D75215
2020-03-03 14:00:08 -08:00
Lang Hames ab16ef17e8 [JITLink] Fix a pointer-to-integer cast in jitlink::InProcessMemoryManager.
reinterpret_cast'ing the block base address directly to a uint64_t leaves the
high bits in an implementation-defined state, but JITLink expects them to be
zero. Switching to pointerToJITTargetAddress for the cast should fix this.

This should fix the jitlink test failures that we have seen on some of the
32-bit testers.
2020-03-03 13:53:00 -08:00
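A minimal sketch of the pattern being fixed above (generic C++, not JITLink's actual pointerToJITTargetAddress implementation): widening a pointer to uint64_t should go through uintptr_t so the value zero-extends on 32-bit hosts, rather than relying on whatever the direct reinterpret_cast happens to produce for the high bits.

```
#include <cstdint>
#include <cstdio>

int main() {
  int Block = 42;
  void *Base = &Block;

  // Risky on 32-bit hosts: the pointer-to-wider-integer mapping is
  // implementation-defined, so the high 32 bits are not guaranteed to be zero.
  uint64_t Risky = reinterpret_cast<uint64_t>(Base);

  // Safer: convert to uintptr_t first, then zero-extend the unsigned value.
  uint64_t Addr = static_cast<uint64_t>(reinterpret_cast<uintptr_t>(Base));

  std::printf("risky=0x%016llx addr=0x%016llx\n",
              (unsigned long long)Risky, (unsigned long long)Addr);
}
```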
Vedant Kumar f002ee55c7 [MachineVerifier] Remove placement rule exception for debug entry values
There should not be an exception allowing debug entry values to be
placed after a terminator.

Differential Revision: https://reviews.llvm.org/D75559
2020-03-03 13:02:18 -08:00