Commit Graph

30903 Commits

Matt Arsenault a2d7ace3e3 GlobalISel: Surface offsets parameter from ComputeValueVTs 2021-07-15 19:11:40 -04:00
Matt Arsenault e91da668d0 GlobalISel: Track argument pointeriness with arg flags
Since we're still building on top of the MVT based infrastructure, we
need to track the pointer type/address space on the side so we can end
up with the correct pointer LLTs when interpreting CCValAssigns.
2021-07-15 19:11:40 -04:00
Amara Emerson 4e3dc6b8dd GlobalISel: Introduce GenericMachineInstr classes and derivatives for idiomatic LLVM RTTI.
This adds some level of type safety, allows helper functions to be added for
specific opcodes for free, and also allows us to succinctly check for class
membership with the usual dyn_cast/isa/cast functions.

To start off with, add variants for the different load/store operations, and use
them in some places.

Differential Revision: https://reviews.llvm.org/D105751
2021-07-15 15:21:57 -07:00
Jessica Paquette 5da0f9ab61 [GlobalISel] Fix infinite loop in reassociationCanBreakAddressingModePattern
It didn't update the opcode while walking through G_INTTOPTR/G_PTRTOINT.

Differential Revision: https://reviews.llvm.org/D106080
2021-07-15 10:09:07 -07:00
Simon Pilgrim 0aece73aba [DAG] Fold select(cond,binop(x,y),binop(x,z)) -> binop(x,select(cond,y,z))
Similar to the folds performed in InstCombinerImpl::foldSelectOpOp, this attempts to push a select further up to help merge a pair of binops.

I'm primarily interested in select(cond,add(x,y),add(x,z)) folds to help expose pointer math (see https://bugs.llvm.org/show_bug.cgi?id=51069 etc.) but I've tried to use the more generic isBinOp().

Differential Revision: https://reviews.llvm.org/D106058
2021-07-15 16:08:30 +01:00
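
To illustrate the fold described above at the source level (this sketch is not from the patch; the function and variable names are made up for the example), a select of two binops sharing an operand becomes a single binop of a select:

#include <cstdint>

// Sketch of select(cond, add(x,y), add(x,z)) -> add(x, select(cond, y, z))
// on plain integers. Both forms compute the same value; the second needs
// only one add and exposes the shared operand x (e.g. a pointer base).
uint64_t before(bool cond, uint64_t x, uint64_t y, uint64_t z) {
  return cond ? (x + y) : (x + z); // two adds, selected afterwards
}

uint64_t after(bool cond, uint64_t x, uint64_t y, uint64_t z) {
  return x + (cond ? y : z);       // one add of a selected operand
}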
Tim Northover 5d7632ee72 MachO: don't emit L... private symbols in do_not_dead_strip sections.
The linker can sometimes drop the do_not_dead_strip if it can't associate the
atom with a symbol (the other place to specify no dead-stripping in MachO
files).
2021-07-15 14:40:43 +01:00
Djordje Todorovic fa2daaeff8 [2/2][RemoveRedundantDebugValues] Add a Pass that removes redundant DBG_VALUEs
This patch adds the forward scan for finding redundant DBG_VALUEs.

This analysis aims to remove redundant DBG_VALUEs by walking forward
through the basic block, treating the first DBG_VALUE for a variable
as valid until its first (location) operand is clobbered/modified.
For example:

(1) DBG_VALUE $edi, !"var1", ...
(2) <block of code that doesn't affect $edi>
(3) DBG_VALUE $edi, !"var1", ...
 ...
in this case, we can remove (3).

Differential Revision: https://reviews.llvm.org/D105280
2021-07-15 00:08:31 -07:00
Kai Luo b9c3941cd6 [PowerPC] Generate inlined quadword lock free atomic operations via AtomicExpand
This patch uses AtomicExpandPass to implement quadword lock free atomic operations. It adopts the method introduced in https://reviews.llvm.org/D47882, which expands atomic operations post-RA to avoid spilling that might prevent LL/SC progress.

Reviewed By: jsji

Differential Revision: https://reviews.llvm.org/D103614
2021-07-15 01:12:09 +00:00
Stanislav Mekhanoshin 76b7d3432e [AMDGPU] Add TII::isIgnorableUse() to allow VOP rematerialization
Any def of EXEC prevents rematerialization of any VOP instruction
because of the physreg use. Create a callback to check if the
physreg use can be ignored to allow rematerialization.

Differential Revision: https://reviews.llvm.org/D105836
2021-07-14 13:03:58 -07:00
Eli Friedman 1e30bf8621 [SelectionDAG] Add an overload of getStepVector that assumes step 1.
This is mostly a minor convenience, but the pattern seems frequent
enough to be worthwhile (and we'll probably add more uses in the
future).

Differential Revision: https://reviews.llvm.org/D105850
2021-07-14 11:37:01 -07:00
Matt Arsenault 47269da5d8 GlobalISel: Handle lowering non-power-of-2 extloads 2021-07-14 11:54:11 -04:00
Djordje Todorovic df686842bc [RemoveRedundantDebugValues] Add a Pass that removes redundant DBG_VALUEs
This new MIR pass removes redundant DBG_VALUEs.

After the register allocator is done, more precisely, after
the Virtual Register Rewriter, we end up with duplicated
DBG_VALUEs, since some virtual registers get rewritten into
the same physical register as those used by existing DBG_VALUEs.
Each DBG_VALUE should indicate a variable assignment (at least
before LiveDebugValues), but this is violated for function
parameters, since SelectionDAG generates new DBG_VALUEs after
COPY instructions even though the parameter has no new assignment.
For example, if we had a DBG_VALUE $regX as an entry debug value
representing the parameter, then a COPY followed by a
DBG_VALUE $virt_reg, and the virtregrewrite rewrites $virt_reg
into $regX, we'd end up with a redundant DBG_VALUE.

This breaks the definition of the DBG_VALUE, since some analysis passes
might be built on top of that premise, and this patch tries to fix
the MIR with respect to that.

This first patch performs a backward scan: it tries to detect a sequence of
consecutive DBG_VALUEs and to remove all DBG_VALUEs describing one
variable except the last one:

For example:

(1) DBG_VALUE $edi, !"var1", ...
(2) DBG_VALUE $esi, !"var2", ...
(3) DBG_VALUE $edi, !"var1", ...
 ...

in this case, we can remove (1).

Combined with the forward scan that will be introduced in the next patch
(from this stack), the statistics show that RemoveRedundantDebugValues
removes 15032 instructions when using gdb-7.11 as a testbed.

Differential Revision: https://reviews.llvm.org/D105279
2021-07-14 04:29:42 -07:00
Ruiling Song 40e3df2a1b [RegisterCoalescer] Resolve conflict based on liveness of subregister
Currently we resolve lane/subregister conflicts by visiting
instructions sequentially in the current block to see whether there is any
use of the tainted lanes. To save compile time, we do not do further
checks in successor blocks. This sounds reasonable without subregister liveness.

But since we have added subregister liveness tracking capability to the
register coalescer, we can easily determine whether we have a subregister
liveness conflict by checking subranges. This helps coalesce more
COPYs for targets that enable subregister liveness tracking.

Reviewed by: arsenm, qcolombet

Differential Revision: https://reviews.llvm.org/D104509
2021-07-14 14:43:22 +08:00
Hongtao Yu 74b99b5c2e [CSSPGO] Do not import pseudo probe desc in thinLTO
Previously we relied on pseudo probe descriptors to look up precomputed GUIDs during probe emission for inlined probes. Since we are moving to always using unique linkage names, GUIDs for functions can be computed in place from dwarf names. This eliminates the need to import pseudo probe descs in thinLTO, since those descs should be emitted by the original modules.

This significantly reduces thinlto memory footprint in some extreme case where the number of imported modules for a single module is massive.

Test Plan:

Reviewed By: wenlei

Differential Revision: https://reviews.llvm.org/D105248
2021-07-13 18:26:36 -07:00
Matt Arsenault eebe841a47 RegAlloc: Allow targets to split register allocation
AMDGPU normally spills SGPRs to VGPRs. Previously, since all register
classes were handled at the same time, this was problematic. We don't
know ahead of time how many registers will need to be reserved to
handle the spilling. If no VGPRs were left for spilling, we would have
to try to spill to memory. If the spilled SGPRs were required for exec
mask manipulation, it is highly problematic because the lanes active
at the point of spill are not necessarily the same as at the restore
point.

Avoid this problem by fully allocating SGPRs in a separate regalloc
run from VGPRs. This way we know the exact number of VGPRs needed, and
can reserve them for a second run.  This fixes the most serious
issues, but it is still possible using inline asm to make all VGPRs
unavailable. Start erroring in the case where we ever would require
memory for an SGPR spill.

This is implemented by giving each regalloc pass a callback which
reports if a register class should be handled or not. A few passes
need some small changes to deal with leftover virtual registers.

In the AMDGPU implementation, a new pass is introduced to take the
place of PrologEpilogInserter for SGPR spills emitted during the first
run.

One disadvantage of this is that StackSlotColoring is currently no longer
used for SGPR spills. It would need to be run again, which will
require more work.

Error if the standard -regalloc option is used. Introduce new separate
-sgpr-regalloc and -vgpr-regalloc flags, so the two runs can be
controlled individually. PBQP is not currently supported, so this also
prevents using the unhandled allocator.
2021-07-13 18:49:29 -04:00
Guillaume Chatelet 2c47b8847e Revert "[llvm] Add enum iteration to Sequence"
This reverts commit a006af5d6e.
2021-07-13 16:44:42 +00:00
Guillaume Chatelet a006af5d6e [llvm] Add enum iteration to Sequence
This patch allows iterating typed enum via the ADT/Sequence utility.

Differential Revision: https://reviews.llvm.org/D103900
2021-07-13 16:22:19 +00:00
Matt Arsenault 222fde1eec GlobalISel: Use extension instead of merge with undef in common case
This fixes not respecting signext/zeroext in these cases. In the
anyext case, this avoids a larger merge with undef and should be a
better canonical form.

This should also handle the case where a merge is needed, but I'm not aware
of a case where that can happen. In a future change this will also
allow AMDGPU to drop some custom code without introducing regressions.
2021-07-13 11:04:47 -04:00
Matt Arsenault 77a608d9de GlobalISel: Remove getIntrinsicID utility function
This is redundant with a method directly on MachineInstr
2021-07-13 11:04:10 -04:00
Qiu Chaofan 954a15d639 [SelectionDAG] Check use before combining into USUBSAT
Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D105789
2021-07-13 14:50:26 +08:00
Jessica Paquette 47d0780f45 [GlobalISel] Handle more types in narrowScalar for eq/ne G_ICMP
Generalize the existing eq/ne case using `extractParts`. The original code only
handled narrowings for types of width 2n->n. This generalization allows for any
type that can be broken down by `extractParts`.

General overview is:

- Loop over each narrow-sized part and do exactly what the 2-register case did.
- Loop over the leftover-sized parts and do the same thing
- Widen the leftover-sized XOR results to the desired narrow size
- OR that all together and then do the comparison against 0 (just like the old
  code)

This shows up a lot when building clang for AArch64 using GlobalISel, so it's
worth fixing. For the sake of simplicity, this doesn't handle the non-eq/ne
case yet.

Also remove the code in this case that notifies the observer; we're just going
to delete MI anyway so talking to the observer shouldn't be necessary.

Differential Revision: https://reviews.llvm.org/D105161
2021-07-12 22:18:50 -07:00
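
As a rough analogue of the narrowing strategy described above (this is not the GlobalISel code; it specialises the idea to a 128-bit equality compare split into 64-bit parts, with made-up names):

#include <cstdint>

// XOR each narrow part, OR the results together, then compare against 0,
// mirroring the scheme listed in the commit message.
bool eq128(uint64_t a_lo, uint64_t a_hi, uint64_t b_lo, uint64_t b_hi) {
  uint64_t diff_lo = a_lo ^ b_lo; // non-zero iff the low parts differ
  uint64_t diff_hi = a_hi ^ b_hi; // non-zero iff the high parts differ
  return (diff_lo | diff_hi) == 0;
}

A leftover-sized part would additionally be widened to the narrow size before being ORed in, as the list above notes.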
Arthur Eubanks 7987c46273 [OpaquePtr][ISel] Use ArgListEntry::IndirectType more 2021-07-12 21:14:35 -07:00
Eli Friedman ec1cdee6aa [SelectionDAG][RISCV] Support @llvm.vscale.i64() on 32-bit targets.
Not really useful on its own, but D105673 depends on it.

Differential Revision: https://reviews.llvm.org/D105840
2021-07-12 14:53:42 -07:00
Jinsong Ji 28fb69e00a [AIX] Emit version string in .file directive
AIX .file directive support including compiler version string.
https://www.ibm.com/docs/en/aix/7.2?topic=ops-file-pseudo-op

This patch adds this support so that it will be easier to identify the build
compiler in object files.

Reviewed By: #powerpc, shchenz

Differential Revision: https://reviews.llvm.org/D105743
2021-07-12 17:03:52 +00:00
Bradley Smith 112c09039b [SelectionDAG] Simplify PromoteIntRes_INSERT_SUBVECTOR to only handle result
Let other parts of legalization handle the rest of the node; this allows
re-use of existing optimizations elsewhere.

Differential Revision: https://reviews.llvm.org/D105624
2021-07-12 15:20:44 +00:00
David Truby c305557acd [llvm][sve] Lowering for VLS truncating stores
This adds custom lowering for truncating stores when operating on
fixed length vectors in SVE. It also includes a DAG combine to
fold extends followed by truncating stores into non-truncating
stores in order to prevent this pattern appearing once truncating
stores are supported.

Currently truncating stores are not used in certain cases where
the size of the vector is larger than the target vector width.

Differential Revision: https://reviews.llvm.org/D104471
2021-07-12 11:14:17 +01:00
David Green dc0bbc9d89 [IfCvt] Don't use pristine register for counting liveins for predicated instructions.
The test case here hits machine verifier problems. There are volatile
long loads whose results are not used, loading into two dead
registers. IfCvt will predicate them and, as it does so, will add implicit
uses of the predicating registers because it thinks they are live-in. As
nothing has used the registers, the machine verifier disagrees that they
are really live and we end up with a failure.

The registers come from Pristine regs that LivePhysRegs counts as live.
This patch adds an addLiveInsNoPristines method to be used instead in
IfCvt, so that only really live in regs need to be added as implicit
operands.

Differential Revision: https://reviews.llvm.org/D90965
2021-07-11 14:45:54 +01:00
Amara Emerson 97c426394a [AArch64][GlobalISel] Implement moreElements legalization for G_SHUFFLE_VECTOR.
Differential Revision: https://reviews.llvm.org/D103301
2021-07-10 00:25:26 -07:00
Amara Emerson 58a2cb5143 [GlobalISel] Add a new artifact combiner for unmerge which looks through general artifact expressions.
The original motivation for this was to implement moreElementsVector of shuffles
on AArch64, which resulted in complex sequences of artifacts like unmerge(unmerge(concat...))
which the combiner couldn't handle. It seemed here that the better option,
instead of writing ever-more-complex combines, was to have a way to find
the original "non-artifact" source registers for a given definition, walking
through arbitrary expressions of unmerge/concat/insert. As long as the bits
aren't extended or truncated, this is a pretty simple algorithm that avoids
the need for lots of combines and instead jumps straight to the final result
we want.

I've only used this new technique in 2 places within tryCombineUnmerge; using it
in more general situations resulted in infinite loops in AMDGPU. So for now
it's used when we would otherwise fail to combine, and that seems to work.

In order to support looking through G_INSERTs, I also had to add it as an
artifact in isArtifact(), which caused a whole lot of issues in tests. AMDGPU
started infinite looping since full legalization of G_INSERT doesn't seem to
be there. To work around this, I've temporarily added a CLI option to use the
old behaviour so that the MIR tests will still run and terminate.

Other minor changes include no longer making >128b G_MERGE/UNMERGE legal.
We never had isel support for that anyway and it was a remnant of the legacy
legalizer rules. However, being legal prevented the combiner from checking if they
were dead and deleting them.

Differential Revision: https://reviews.llvm.org/D104355
2021-07-09 22:35:00 -07:00
Jessica Paquette 47aeeffc8f [GlobalISel] Use GCDTy when extracting GCD ty from leftover regs in insertParts
`LegalizerHelper::insertParts` uses `extractGCDType` on registers split into
a desired type and a smaller leftover type. This is used to populate a list
of registers. Each register in the list will have the same type as returned by
`extractGCDType`.

If we have

- `ResultTy` = s792
- `PartTy` = s64
- `LeftoverTy` = s24

When we call `extractGCDType`, we'll end up with two different types appended
to the list:

Part: gcd(792, 64, 24) => s8
Leftover: gcd(792, 24, 24) => s24

When this happens, we'll hit an assert while trying to build a G_MERGE_VALUES.

This patch changes the code for the leftover type so that we reuse the GCD from
the desired type.

e.g.

Leftover: gcd(792, 8, 24) => s8

https://llvm.godbolt.org/z/137Kqxj6j

Differential Revision: https://reviews.llvm.org/D105674
2021-07-09 14:15:44 -07:00
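
The GCD arithmetic quoted above can be checked directly; this standalone snippet is an illustration, not code from the patch:

#include <cassert>
#include <numeric>

int main() {
  // Old behaviour: part and leftover pieces get different GCD types.
  assert(std::gcd(std::gcd(792, 64), 24) == 8);  // Part: s8
  assert(std::gcd(std::gcd(792, 24), 24) == 24); // Leftover: s24 (mismatch)
  // Fix: reuse the GCD from the desired part type for the leftover.
  assert(std::gcd(std::gcd(792, 8), 24) == 8);   // Leftover: s8 (consistent)
  return 0;
}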
Wouter van Oortmerssen 9647a6f719 [WebAssembly] Added initial type checker to MC Assembler
This is to protect against nonsensical instruction sequences being assembled,
which would either cause asserts/crashes further down, or a Wasm module being output that doesn't validate.

Unlike a validator, this type checker is able to give type errors as part of the parsing process, which makes the assembler much friendlier for humans writing manual input.

Because the MC system is single pass (instructions aren't even stored in MC format, they are directly output) the type checker has to be single pass as well, which means that from now on .globaltype and .functype decls must come before their use. An extra pass is added to Codegen to collect information for this purpose, since AsmPrinter is normally single pass / streaming as well, and would otherwise generate this information on the fly.

A `-no-type-check` flag was added to llvm-mc (and any other tools that take asm input) that suppresses type errors, as a quick escape hatch for tests that were not intended to be type correct.

This is a first version of the type checker that ignores control flow, i.e. it checks that types are correct along the linear path, but not the branch path. This will still catch most errors. Branch checking could be added in the future.

Differential Revision: https://reviews.llvm.org/D104945
2021-07-09 14:07:25 -07:00
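
A toy sketch of the single-pass idea follows: checking operand types against a value stack while parsing, rather than validating a finished module. The handler and types are invented for the example; the actual MC type checker is considerably richer.

#include <optional>
#include <string>
#include <vector>

enum class ValType { I32, I64 };

struct TypeChecker {
  std::vector<ValType> stack;
  std::optional<std::string> error; // reported as soon as a mismatch is seen

  void push(ValType t) { stack.push_back(t); }

  bool pop(ValType expected) {
    if (stack.empty() || stack.back() != expected) {
      error = "type mismatch";
      return false;
    }
    stack.pop_back();
    return true;
  }

  // e.g. i32.add pops two i32 values and pushes one i32 result.
  void onI32Add() {
    if (pop(ValType::I32) && pop(ValType::I32))
      push(ValType::I32);
  }
};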
Jeremy Morse 30cce54dad [X86] Return src/dest register from stack spill/restore recogniser
LLVM provides target hooks to recognise stack spill and restore
instructions, such as isLoadFromStackSlot, and it also provides post frame
elimination versions such as isLoadFromStackSlotPostFE. These are supposed
to return the store-source and load-destination registers; unfortunately on
X86, the PostFE recognisers just return "1", apparently to signify "yes
it's a spill/load". This patch alters the hooks to correctly return the
store-source and load-destination registers:

This is really useful for debug-info as it helps follow variable values
as they move on/off the stack. There should be no codegen changes: the only
other users of these PostFE target hooks are MachineInstr::getRestoreSize
and MachineInstr::getSpillSize, which don't attempt to interpret the
returned register location.

While we're here, delete the (InstrRef) LiveDebugValues heuristic that
tries to find the spill source register by looking for a killed reg -- we
should be able to rely on the target hooks for that. This involves
temporarily turning off an InstrRef LiveDebugValues test on aarch64
(patch to re-enable it is in D104521).

Differential Revision: https://reviews.llvm.org/D105428
2021-07-09 18:12:30 +01:00
Jeremy Morse f551fb96c7 [Debug-info][InstrRef] Avoid an unnecessary map ordering
We keep a record of substitutions between debug value numbers post-isel,
however we never actually look them up until the end of compilation. As a
result, there's nothing gained by the collection being a std::map. This
patch downgrades it to being a vector, that's then sorted at the end of
compilation in LiveDebugValues.

Differential Revision: https://reviews.llvm.org/D105029
2021-07-09 15:43:13 +01:00
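
The general pattern being applied is "build an unsorted vector cheaply, sort once when lookups begin"; a generic sketch under assumed names (this is not the LiveDebugValues data structure):

#include <algorithm>
#include <cstdint>
#include <optional>
#include <utility>
#include <vector>

struct SubstitutionTable {
  std::vector<std::pair<uint64_t, uint64_t>> subs; // (old number, new number)

  void record(uint64_t oldNum, uint64_t newNum) {
    subs.emplace_back(oldNum, newNum); // cheap append during compilation
  }

  void finalize() { std::sort(subs.begin(), subs.end()); } // one sort at the end

  std::optional<uint64_t> lookup(uint64_t oldNum) const {
    auto it = std::lower_bound(subs.begin(), subs.end(),
                               std::make_pair(oldNum, uint64_t{0}));
    if (it != subs.end() && it->first == oldNum)
      return it->second;
    return std::nullopt;
  }
};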
Muhammad Omair Javaid 932e3d9960 Revert "GlobalISel/AArch64: don't optimize away redundant branches at -O0"
This reverts commit 458c230b5e.

This broke an LLDB buildbot test case where a breakpoint set at the start of a loop
failed to be hit. https://lab.llvm.org/buildbot/#/builders/96/builds/9404

https://github.com/llvm/llvm-project/blob/main/lldb/test/API/commands/process/attach/main.cpp#L15

Differential Revision: https://reviews.llvm.org/D105238
2021-07-09 08:23:36 +05:00
David Blaikie 1def2579e1 PR51018: Remove explicit conversions from SmallString to StringRef to future-proof against C++23
C++23 will make these conversions ambiguous - so fix them to make the
codebase forward-compatible with C++23 (& a follow-up change I've made
will make this ambiguous/invalid even in <C++23 so we don't regress
this & it generally improves the code anyway)
2021-07-08 13:37:57 -07:00
Matt Arsenault 9b057f647d GlobalISel: Track original argument index in ArgInfo
SelectionDAG's equivalents in ISD::InputArg/OutputArg track the
original argument index. Mips relies on this, and it's currently
reinventing its own parallel CallLowering infrastructure which tracks
these indexes on the side. Add this to help move towards deleting the
custom mips handling.
2021-07-08 13:39:02 -04:00
Jeremy Morse 63cc251eb9 [DebugInfo][InstrRef][4/4] Support DBG_INSTR_REF through all backend passes
This is a cleanup patch -- we're now able to support all flavours of
variable location in instruction referencing mode. This patch updates
various tests for debug instructions to be broader: numerous code paths
try to ignore debug instructions, and they now have to ignore the
additional DBG_PHI and DBG_INSTR_REFs that we can generate.

A small amount of rework happens for LiveDebugVariables: as we don't need
to track live intervals through regalloc any more, we can get away with
unlinking debug instructions before regalloc, then re-inserting them after.
Note that this isn't (yet) true of DBG_VALUE_LISTs; they still have to go
through live interval tracking.

In SelectionDAG, add a helper lambda that emits half-formed DBG_INSTR_REFs
for arguments in instr-ref mode, DBG_VALUE otherwise. This is one of the
final locations where DBG_VALUEs are emitted for vreg arguments.

X86InstrInfo now un-sets the debug instr number on SUB instructions that
get mutated into CMP instructions. As the instruction no longer computes a
subtraction, we can't use it for variable locations.

Differential Revision: https://reviews.llvm.org/D88898
2021-07-08 16:42:24 +01:00
Stanislav Mekhanoshin 0fdb25cd95 [AMDGPU] Disable garbage collection passes
Differential Revision: https://reviews.llvm.org/D105593
2021-07-07 15:47:57 -07:00
Arthur Eubanks aad41e2299 [OpaquePtr] Use ArgListEntry::IndirectType for lowering ABI attributes
Consolidate PreallocatedType and ByValType into IndirectType, and use that for inalloca.
2021-07-07 14:58:38 -07:00
Adrian Prantl 458c230b5e GlobalISel/AArch64: don't optimize away redundant branches at -O0
This patch prevents GlobalISel from optimizing out redundant branch
instructions when compiling without optimizations.

The motivating example is code like the following common pattern in
Swift, where users expect to be able to set a breakpoint on the early
exit:

public func f(b: Bool) {
  guard b else {
    return // I would like to set a breakpoint here.
  }
  ...
}

The patch modifies two places in GlobalISel: the first one is in
IRTranslator.cpp where the removal of redundant branches is made
conditional on the optimization level. The second one is in
AArch64InstructionSelector.cpp where an -O0 *only* optimization is
being removed.

Disabling these optimizations increases code size at -O0 by
~8%. However, doing so improves debuggability, and debug builds are
the primary reason why developers compile without optimizations. We
thus concluded that this is the right trade-off.

rdar://79515454

Differential Revision: https://reviews.llvm.org/D105238
2021-07-07 12:51:55 -07:00
Dylan Fleming 8ae9ab43dd [SVE] Fixed cast<FixedVectorType> on scalable vector in SelectionDAGBuilder::getUniformBase
Reviewed By: sdesmalen

Differential Revision: https://reviews.llvm.org/D105350
2021-07-07 10:48:17 +01:00
David Green 4ce26deac2 [DAG] Reassociate Add with Or
We already have reassociation code for Adds and Ors separately in the DAG
combiner; this adds it for the combination of the two where Ors act like
Adds. It reassociates (add (or (x, c), y)) -> (add (add (x, y), c)) where
we know that the Or's operands have no common bits set, and the Or has
one use.

Differential Revision: https://reviews.llvm.org/D104765
2021-07-07 10:21:07 +01:00
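
The "no common bits" condition is what lets the Or behave like an Add; a small standalone check (values chosen arbitrarily, not taken from the patch):

#include <cassert>
#include <cstdint>

int main() {
  uint32_t x = 0xABCD0000, c = 0x00000010, y = 42;
  assert((x & c) == 0);                  // the Or's operands share no set bits
  // With disjoint bits, (x | c) == (x + c), so the Or reassociates like an Add:
  assert(((x | c) + y) == ((x + y) + c));
  return 0;
}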
Jeremy Morse 2b2ffb7bdc [DebugInfo][InstrRef][3/4] Produce DBG_INSTR_REFs for all variable locations
This patch emits DBG_INSTR_REFs for two remaining flavours of variable
locations that weren't supported: copies, and inter-block VRegs. There are
still some locations that must be represented by DBG_VALUE such as
constants, but they're mostly independent of optimisations.

For variable locations that refer to values defined in different blocks,
vregs are allocated before isel begins, but the defining instruction
might not exist until late in isel. To get around this, emit
DBG_INSTR_REFs in a "half done" state, where the first operand refers to a
VReg. Then at the end of isel, patch these back up to refer to
instructions, using the finalizeDebugInstrRefs method.

Copies are something that I complained about in the original RFC, and I
really don't want to have to put instruction numbers on copies. They don't
define a value: they move one. To address this, during isel salvageCopySSA
interprets:
 * COPYs,
 * SUBREG_TO_REG,
 * Anything that isCopyInstr thinks is a copy.
And follows chains of copies back to the defining instruction that they
read from. This relies on any physical registers that COPYs read being
defined in the same block, or being entry-block arguments. For the former
we can put an instruction number on the defining instruction; for the
latter we can drop a DBG_PHI that reads the incoming value.

Differential Revision: https://reviews.llvm.org/D88896
2021-07-06 18:31:38 +01:00
David Stuttard 83cb9632a1 [DAGCombiner] Add support for mulhi const folding in DAGCombiner
Differential Revision: https://reviews.llvm.org/D103323

Change-Id: I4ffaaa32301795ba8a339567a68e77fe0862b869
2021-07-05 12:01:26 +01:00
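
Constant folding a mulhi amounts to computing the high half of the double-width product at compile time; a plain-C++ sketch (not the DAGCombiner code):

#include <cstdint>

// High half of an unsigned 32x32 multiply, done in 64 bits.
constexpr uint32_t mulhu32(uint32_t a, uint32_t b) {
  return static_cast<uint32_t>((static_cast<uint64_t>(a) * b) >> 32);
}

static_assert(mulhu32(0x80000000u, 2u) == 1u,
              "0x80000000 * 2 = 0x100000000, whose high half is 1");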
Stephen Tozer 14b62f7e2f [DebugInfo] CGP+HWasan: Handle dbg.values with duplicate location ops
This patch fixes an issue which occurred in CodeGenPrepare and
HWAddressSanitizer, which both at some point create a map of Old->New
instructions and update dbg.value uses of these. They did this by
iterating over the dbg.value's location operands, and if an instance of
the old instruction was found, replaceVariableLocationOp would be
called on that dbg.value. This would cause an error if the same operand
appeared multiple times as a location operand, as the first call to
replaceVariableLocationOp would update all uses of the old instruction,
invalidating the old iterator and eventually hitting an assertion.

This has been fixed by no longer iterating over the dbg.value's location
operands directly, but by first collecting them into a set and then
iterating over that, ensuring that we never attempt to replace a
duplicated operand multiple times.

Differential Revision: https://reviews.llvm.org/D105129
2021-07-05 10:35:19 +01:00
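
The fix pattern described above (collect the operands into a set before replacing, so a duplicated operand is only processed once) looks roughly like this; it is a simplified stand-in, not the actual CodeGenPrepare/HWASan code, and replaceAll plays the role of replaceVariableLocationOp, which rewrites every use in one call:

#include <set>
#include <vector>

struct Value {};

void replaceAll(std::vector<Value*>& locationOps, Value* oldV, Value* newV) {
  for (auto& op : locationOps)
    if (op == oldV)
      op = newV; // every matching operand is rewritten at once
}

void updateLocations(std::vector<Value*>& locationOps, Value* oldV, Value* newV) {
  // Iterating locationOps directly while replacing would invalidate later
  // matches of the same operand; deduplicate first instead.
  std::set<Value*> uniqueOps(locationOps.begin(), locationOps.end());
  for (Value* op : uniqueOps)
    if (op == oldV)
      replaceAll(locationOps, oldV, newV);
}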
Paul Walker 287d39dd5a [NFC] Fix a few whitespace issues and typos. 2021-07-04 11:49:58 +01:00
Simon Pilgrim 80dd591610 [SelectionDAG] Replace APInt.lshr().trunc() with APInt.extractBits() where possible. NFC.
This also allows us to use KnownBits::extractBits in one case.
2021-07-03 16:33:00 +01:00
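
The equivalence being exploited is that extracting N bits at position P is the same as a logical shift right by P followed by truncation to N bits; shown here on plain integers as an illustration (not the APInt code):

#include <cassert>
#include <cstdint>

int main() {
  uint64_t value = 0xDEADBEEFCAFEF00Dull;
  unsigned pos = 16, numBits = 8;
  // lshr(pos) then trunc(numBits) ...
  uint64_t viaShiftTrunc = (value >> pos) & ((1ull << numBits) - 1);
  // ... equals extracting numBits starting at bit pos (bits 16..23 here).
  assert(viaShiftTrunc == 0xFE);
  return 0;
}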
Simon Pilgrim e2e44c3da9 [SelectionDAG] Use KnownBits::insertBits instead of separate APInt::insertBits calls. NFC. 2021-07-03 16:32:59 +01:00
Craig Topper af331e8284 [SelectionDAG] Rename memory VT argument for getMaskedGather/getMaskedScatter from VT to MemVT.
Use getMemoryVT() in MGATHER/MSCATTER DAG combines instead of
using the passthru or store value VT for this argument.
2021-07-02 17:37:40 -07:00
Jonas Devlieghere 52b5491a21 Revert "[DebugInfo] Enforce implicit constraints on `distinct` MDNodes"
This reverts commit 8cd35ad854.

It breaks `TestMembersAndLocalsWithSameName.py` on GreenDragon and
Mikael Holmén points out in D104827 that bitcode files created with the
patch cannot be parsed with binaries built before it.
2021-07-02 15:57:07 -07:00