Commit Graph

146902 Commits

Author SHA1 Message Date
Florian Hahn 860b37526a [Passes] Run GlobalsAA before LICM during LTO in new PM.
This patch adjusts the LTO pipeline in the new PM to run GlobalsAA
before LICM to match the legacy PM.

This fixes a regression where the new PM failed to vectorize loops that
require LICM to hoist or sink instructions based on GlobalsAA information.
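
For illustration, a minimal sketch (not from the patch's tests) of the kind of loop this affects: once GlobalsAA proves the call cannot modify `scale`, LICM can hoist the load out of the loop, which in turn lets the vectorizer handle it.

  static int scale;
  static long stats;

  // GlobalsAA can prove this function never writes `scale`.
  static void update_stats(int v) { stats += v; }

  void apply(int *a, int n) {
    for (int i = 0; i < n; ++i) {
      a[i] *= scale;       // load of `scale` is hoistable by LICM
      update_stats(a[i]);  // ...once the call is known not to clobber it
    }
  }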

Reviewed By: aeubanks

Differential Revision: https://reviews.llvm.org/D102345
2021-05-13 13:11:18 +01:00
Florian Hahn 3eaf235855 [Passes] Use MemorySSA for LICM during LTO.
Split off from D102345 to commit this separately from other changes in
the patch. This aligns the behavior of the new PM with the legacy PM
for LTO, with respect to running LICM.

Together with the remaining changes in D102345, this fixes new PM
regressions where we fail to vectorize loops that are vectorized with
the legacy PM.
2021-05-13 12:16:41 +01:00
Nemanja Ivanovic 39e4676ca7 [PowerPC] Provide doubleword vector predicate form comparisons on Power7
There are two reasons this shouldn't be restricted to Power8 and up:
1. For XL compatibility
2. Because clang will expand comparison operators to these intrinsics*

*Without this patch, the following causes a selection error:

int test(vector signed long a, vector signed long b) {
  return a < b;
}

This patch provides the handling for the intrinsics in the back
end and removes the Power8 guards from the predicate functions
(vec_{all|any}_{eq|ne|gt|ge|lt|le}).
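
As a hedged usage sketch (assuming altivec.h and a VSX-enabled build, not taken from the patch's tests), the doubleword predicate forms look like:

  #include <altivec.h>

  int all_greater(vector signed long long a, vector signed long long b) {
    return vec_all_gt(a, b);   // doubleword vector compare, predicate form
  }

  int any_equal(vector signed long long a, vector signed long long b) {
    return vec_any_eq(a, b);
  }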
2021-05-13 04:56:56 -05:00
Florian Hahn e2759f110b
[SCEV] Apply guards to max with non-unitary steps.
We already apply loop-guards when computing the maximum with unitary
steps. This extends the code to also do so when dealing with non-unitary
steps.

This allows us to infer a tighter maximum in some cases.
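
A hedged example (not from the patch's tests) of a case this helps: the guard bounds `n`, so even with a step of 2 SCEV can now infer a tighter maximum backedge-taken count.

  void clear_even(int *a, unsigned n) {
    if (n > 64)
      return;                              // loop guard: n <= 64 inside the loop
    for (unsigned i = 0; i < n; i += 2)    // non-unitary step
      a[i] = 0;
  }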

Reviewed By: nikic

Differential Revision: https://reviews.llvm.org/D102267
2021-05-13 09:47:29 +01:00
Jingu Kang 107d19eb01 Revert "[SimpleLoopUnswitch] Port partially invariant unswitch from LoopUnswitch to SimpleLoopUnswitch"
This reverts commit 88b259c014.

It needs to fix the following bugs:

https://bugs.llvm.org/show_bug.cgi?id=50279
https://bugs.llvm.org/show_bug.cgi?id=50302
2021-05-13 08:40:49 +01:00
Serge Pavlov 12537ab772 [FPEnv][X86] Implement lowering of llvm.set.rounding
Differential Revision: https://reviews.llvm.org/D74730
2021-05-13 14:30:38 +07:00
Max Kazantsev d8b37de8a4 [GC][NFC] Move GCStrategy from CodeGen to IR
We want it to be available in analyses so that we can use this
CodeGen notion in middle-end passes (for example, to check whether
a GC may free some particular pointer).

This is a preparatory patch that simply moves the files around.

Note: if this causes any build issues, this patch can simply be reverted.

Differential Revision: https://reviews.llvm.org/D100557
Reviewed By: reames
2021-05-13 12:31:59 +07:00
Chuanqi Xu c1359ef07e [Coroutines] Salvage dbg.values
Summary: The previous implementation of coro-split didn't collect values
used by dbg instructions into the spills, which made a lot of debug info
unavailable with optimization on.
This patch tries to collect the values used by dbg.values. In this way,
the debuggability of coroutines can be as good as that of normal
functions with optimization on.

To avoid enlarging the coroutine frame, this patch only collects
`dbg.value`s whose values are already in the coroutine frame. This decision
may leave some debug info unavailable, but with optimization on,
performance should be the first consideration. This patch therefore
improves the debuggability of coroutines without changing the layout
of the frame.

Test-plan: check-llvm

Reviewed By: aprantl, lxfind

Differential Revision: https://reviews.llvm.org/D97673
2021-05-13 13:06:33 +08:00
Chuanqi Xu 6e5b8f489a [Coroutines] Enable printing coroutine frame when dbg info is available
Summary: This patch tries to build debug info for the coroutine frame in the
middle end. Although the coroutine frame is constructed and maintained by
the compiler, and by the design of C++20 coroutines the programmer shouldn't
need to care about it,
a lot of programmers have told me that they strongly want to see the layout
of the coroutine frame. Although C++ is designed as an abstraction so that
programmers shouldn't have to care about the actual memory in bits,
many experienced C++ programmers use an assembler and debugger to inspect
the memory layout in practice. After being told about three times that
people want to see the coroutine frame, I think it is a real and desired
demand.

However, debug information is constructed in the front end while the
coroutine frame is constructed in the middle end, which leaves a natural and
clear gap. So the only option is to construct the debug information in the
middle end after the coroutine frame has been built. It is unusual, but we
agree that this approach is the best one.

One hard part is that we need to construct names for the variables, since
there isn't a map from LLVM variables to DIVariables. Here is the strategy
this patch uses (illustrated by the sketch after the list):
- The names `__resume_fn`, `__destroy_fn` and `__coro_index` are
  constructed by the patch.
- The name `__promise` comes from the dbg.variable of the corresponding
  dbg.declare of the PromiseAlloca, which has the highest priority when
  constructing debug information for members of the coroutine frame.
- If the member is a struct, we try to use the name of the LLVM
  struct directly, replacing ':' and '.' with '_' to make it
  printable for the debugger.
- If the member is a basic type like an integer or double, we emit
  the corresponding type name.
- If the member is a pointer type, we append `Ptr` to the name of the
  corresponding pointee type.
- Otherwise, we name it 'UnknownType'.
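
As an illustration (hypothetical, not from the patch), a minimal C++20 coroutine and the kind of frame-member names the strategy above would produce for it:

  #include <coroutine>

  struct task {
    struct promise_type {
      task get_return_object() { return {}; }
      std::suspend_always initial_suspend() { return {}; }
      std::suspend_always final_suspend() noexcept { return {}; }
      void return_void() {}
      void unhandled_exception() {}
    };
  };

  task count(int limit) {
    for (int i = 0; i < limit; ++i)
      co_await std::suspend_always{};
  }

  // Hypothetical debugger view of the synthesized frame type for count():
  //   __resume_fn   - resume function pointer
  //   __destroy_fn  - destroy function pointer
  //   __promise     - task::promise_type, named via its dbg.declare
  //   __coro_index  - suspension-point index
  //   plus entries for spilled values such as `limit` and `i`, named after
  //   their basic types per the rules above.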

Reviewed By: lxfind, aprantl, rjmcall, dblaikie

Differential Revision: https://reviews.llvm.org/D99179
2021-05-13 12:43:08 +08:00
Anton Afanasyev ab2c499d3a [SLP] Add insertelement instructions to vectorizable tree
Add a new type of tree node for `InsertElementInst` chains forming a vector.
These instructions can either be removed or replaced by shuffles during
vectorization, and adding this node to the cost model lets us estimate
their cost naturally, getting rid of the `CompensateCost` tricks and reducing
further work for InstCombine. This fixes PR40522 and PR35732 in a natural way.
This patch is also the first step towards revectorizing partially vectorized
code (to fix PR42022 completely). After adding inserts to the tree, the next
step is to add vector instructions there (for instance, to merge
`store <2 x float>` and `store <2 x float>` into `store <4 x float>`).
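
For illustration, a hedged example (not from the patch's tests) of source whose IR forms an insertelement chain that SLP can now model directly in its tree:

  typedef float float4 __attribute__((vector_size(16)));

  float4 add4(const float *a, const float *b) {
    float4 r;
    r[0] = a[0] + b[0];   // each element write becomes an insertelement in IR
    r[1] = a[1] + b[1];
    r[2] = a[2] + b[2];
    r[3] = a[3] + b[3];
    return r;             // the scalar adds + inserts can become one <4 x float> add
  }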

Fixes PR40522 and PR35732.

Differential Revision: https://reviews.llvm.org/D98714
2021-05-13 07:41:45 +03:00
Chen Zheng a0ca4c46ca [Debug-Info] add -gstrict-dwarf support in backend
Reviewed By: dblaikie, probinson

Differential Revision: https://reviews.llvm.org/D100826
2021-05-12 23:00:52 -04:00
Stanislav Mekhanoshin bd00106d1e [AMDGPU] Refactor shouldExpandAtomicRMWInIR(). NFC.
This is logic simplification for better readability.

Differential Revision: https://reviews.llvm.org/D102371
2021-05-12 16:39:03 -07:00
Justin Bogner e7d26aceca Change the context instruction for computeKnownBits in LoadStoreVectorizer pass
This change enables cases for which the index value for the first
load/store instruction in a pair could be a function argument. This
allows using llvm.assume to provide known bits information in such
cases.
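
A hedged sketch (illustrative only) of the kind of case this enables: the index is a function argument, and the llvm.assume generated from `__builtin_assume` supplies the known bits needed when reasoning about the pair of accesses.

  void store_pair(float *p, long i) {
    __builtin_assume((i & 1) == 0);  // known bits for the argument index
    p[i]     = 1.0f;                 // first store of the pair
    p[i + 1] = 2.0f;                 // candidate for merging into a vector store
  }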

Patch by Viacheslav Nikolaev. Thanks!

Differential Revision: https://reviews.llvm.org/D101680
2021-05-12 15:29:29 -07:00
Greg Clayton e5bdacba2e Optimize GSymCreator::finalize.
The algorithm removing duplicates from the Funcs list used to have
amortized quadratic time complexity because it potentially
removed each entry individually using std::vector::erase. This
patch now uses an erase-remove idiom with an adapted
removeIfBinary algorithm.
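
For illustration, a generic sketch of the complexity difference (not the actual GSYM code): erasing duplicates one at a time shifts the vector tail on every erase, while a single remove-then-erase pass touches each element once.

  #include <algorithm>
  #include <vector>

  // O(n^2): each erase shifts all following elements.
  void dedupSlow(std::vector<int> &v) {
    for (auto it = v.begin(); it != v.end();) {
      if (it != v.begin() && *it == *(it - 1))
        it = v.erase(it);
      else
        ++it;
    }
  }

  // O(n): the erase-remove idiom drops all adjacent duplicates in one pass.
  void dedupFast(std::vector<int> &v) {
    v.erase(std::unique(v.begin(), v.end()), v.end());
  }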

This was probably done under the assumption that such removals are
rare, but there are cases where duplicate entries occur
frequently. In those cases, the actual runtime was very
poor, taking hours to process a single binary of around 1 GiB in size
including debug info. Another contributing factor was the
frequent output of the warning, which has now been removed.

It seems this is particularly an issue with GCC-compiled binaries,
rather than clang-built binaries.

Reviewed By: clayborg

Differential Revision: https://reviews.llvm.org/D102219
2021-05-12 15:18:07 -07:00
Sam Clegg cd01430ff1 [lld][WebAssembly] Allow data symbols to extend past end of segment
This fixes a bug in string merging for string symbols that contain
NULLs, as is the case in the `merge-string.s` test.

The bug only showed up when running with `--relocatable` and then reading
the resulting object back in.  In this case we would end up with string
symbols that extend past the end of the segment in which they live.

The problem comes from the fact that sections which are flagged as
string mergeable assume that all strings are NULL terminated.  The
merging algorithm will drop trailing chars that follow a NULL since they
are essentially unreachable.  However, the "size" attribute (in the
symbol table) of such a truncated symbol is not updated, resulting in a
symbol size that can overlap the end of the segment.

I verified that this can happen in ELF too given the right conditions,
and that it is harmless enough.  In practice, strings that contain
embedded NULLs should not be part of a mergeable section.

Differential Revision: https://reviews.llvm.org/D102281
2021-05-12 13:43:37 -07:00
Sam Clegg 3041b16f73 [WebAssembly] Add TLS data segment flag: WASM_SEG_FLAG_TLS
Previously the linker was relying solely on the name of the segment
to imply TLS.

Differential Revision: https://reviews.llvm.org/D102202
2021-05-12 13:31:02 -07:00
Heejin Ahn ba38b72ec2 [WebAssembly] Allow Wasm EH with Emscripten SjLj
We explicitly made this combination error out in D101403, with the good
intention that the error message would make things less confusing for
people. It turns out we weren't failing on all cases of Wasm EH + SjLj;
only a few cases were failing, and our client was able to work around them
by fixing the source code. But the change made all cases fail, even the
cases that previously succeeded, which we didn't intend. This reverts that
change.

Reviewed By: tlively

Differential Revision: https://reviews.llvm.org/D102364
2021-05-12 13:27:04 -07:00
Craig Topper 9c345407b4 [RISCV] Remove RISCVII::VSEW enum. Make encodeVTYPE operate directly on SEW.
The VSEW encoding isn't a useful value to pass around. It's better
to use SEW or log2(SEW) directly. The only real ugliness is that
the vsetvli IR intrinsics use the VSEW encoding, but it's easy
enough to decode that when the intrinsic is processed.
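
For reference, a hedged sketch (per the RVV spec, not the exact helpers in this patch) of the relationship between SEW and its vtype encoding:

  #include <bit>
  #include <cassert>

  // SEW = 8 << VSEW:  0 -> 8, 1 -> 16, 2 -> 32, 3 -> 64
  unsigned vsewToSEW(unsigned VSEW) { return 8u << VSEW; }

  // VSEW = log2(SEW) - 3:  8 -> 0, 16 -> 1, 32 -> 2, 64 -> 3
  unsigned sewToVSEW(unsigned SEW) {
    assert(SEW >= 8 && std::has_single_bit(SEW) && "SEW must be a power of two >= 8");
    return std::countr_zero(SEW) - 3;
  }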
2021-05-12 13:19:08 -07:00
Nikita Popov a8f7dee1df [InstCombine] Support one-hot merge for logical and/or
If a logical and/or is used, we need to be careful not to propagate
a potential poison value from the RHS by inserting a freeze
instruction. Otherwise it works the same way as bitwise and/or.
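
For illustration, a hedged source-level view of the one-hot merge (the poison/freeze subtlety only arises for the select form of `&&`/`||` at the IR level):

  // When b and c are each known to be one-hot (a single set bit), this is
  // equivalent to (a & (b | c)) == 0, which is the merged form.
  bool bothBitsClear(unsigned a, unsigned b, unsigned c) {
    return (a & b) == 0 && (a & c) == 0;
  }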

This is intended to address the regression reported at
https://reviews.llvm.org/D101191#2751002.

Differential Revision: https://reviews.llvm.org/D102279
2021-05-12 21:01:18 +02:00
Stelios Ioannou 1124ad2f5d [LoopFlatten] Simplify loops so that the pass can operate on unsimplified loops.
The loop flattening pass requires loops to be in simplified form. If the
loops are not in simplified form, the pass cannot operate. This patch
simplifies all loops before flattening. As a result, all loops will be
simplified regardless of whether anything ends up being flattened.

This change was inspired by observing a certain loop that was not flattened
because the loops were not in simplified form. This loop is added as a
test to verify that it is now flattened.

Differential Revision: https://reviews.llvm.org/D102249

Change-Id: I45bcabe70fb99b0d89f0effafc82eb9e0585ec30
2021-05-12 19:22:01 +01:00
Fangrui Song 0fe6649bc5 [X86] Fix -Wunused-lambda-capture 2021-05-12 10:34:32 -07:00
Simon Pilgrim fb1d61b725 [X86][AVX] Fold concat(ps*lq(x,32),ps*lq(y,32)) -> shuffle(concat(x,y),zero) (PR46621)
On AVX1 targets we can handle v4i64 logical shifts by 32 bits as a pair of v8f32 shuffles with zero.

I was hoping to put this in LowerScalarImmediateShift, but performing it that early caused regressions where other instructions re-split the subvectors.
2021-05-12 18:04:40 +01:00
Amara Emerson dc8d16c03f [AArch64][GlobalISel] Add MMOs to constant pool loads to allow LICM hoisting.
Their absence caused performance regressions vs SDAG on SingleSource/Benchmarks/Adobe-C++
2021-05-12 09:47:09 -07:00
Victor Huang cf4610d27b [PowerPC] Fix definitions of CMPRB8, CMPEQB, CMPRB, SETB in PPCInstr64Bit.td and PPCInstrInfo.td 2021-05-12 10:59:33 -05:00
Baptiste Saleil 5885f1a4cb [AMDGPU] Disable the SIFormMemoryClauses pass at -O1
This patch disables the SIFormMemoryClauses pass at -O1. This pass has a
significant impact on compilation time, so we only want it to be enabled
starting from -O2.

Differential Revision: https://reviews.llvm.org/D101939
2021-05-12 11:51:59 -04:00
Simon Pilgrim 7bff9bdd34 [X86][AVX] combineConcatVectorOps - add ConcatSubOperand helper. NFCI.
Pull out repeated code to create a concat_vectors of the same operand from all subvecs.
2021-05-12 16:42:18 +01:00
Fraser Cormack c5ec00e62b [TargetLowering] Improve legalization of scalable vector types
This patch extends the vector type-conversion and legalization capabilities of
scalable vector types.

Firstly, `vscale x 1` types now behave more like the corresponding `vscale x
2+` types. This enables the integer promotion legalization of extended scalable
types, such as the promotion of `<vscale x 1 x i5>` to `<vscale x 1 x i8>`.

These `vscale x 1` types are also now better handled by
`getVectorTypeBreakdown`, where what looks like older handling for 1-element
fixed-length vector types was spuriously updated to include scalable types.

Widening of scalable types is now better supported, by using `INSERT_SUBVECTOR`
to insert the smaller scalable vector "value" type into the wider scalable
vector "part" type. This allows AArch64 to pass and return `vscale x 1` types
by value by widening.

There are still cases where we are unable to legalize `vscale x 1` types, such
as where expansion would require splitting the vector in two.

Reviewed By: sdesmalen

Differential Revision: https://reviews.llvm.org/D102073
2021-05-12 16:33:07 +01:00
Craig Topper 44e0e91db0 [ValueTypes] Rename MVT::getVectorNumElements() to MVT::getVectorMinNumElements(). Fix some misuses of getVectorNumElements()
getVectorNumElements() returns a value for scalable vectors
without any warning so it is effectively getVectorMinNumElements().
By renaming it and making getVectorNumElements() forward to
it, we can insert a check for scalable vectors into getVectorNumElements()
similar to EVT. I didn't do that in this patch because there are still more
fixes needed, but I was able to temporarily do it and passed the RISCV
lit tests with these changes.

The changes to isPow2VectorType and getPow2VectorType are copied from EVT.

The change to TypeInfer::EnforceSameNumElts reduces the size of AArch64's isel table.
We're now considering SameNumElts to require the scalable property to match which
removes some unneeded type checks.

This was motivated by the bug I fixed yesterday in 80b9510806

Reviewed By: frasercrmck, sdesmalen

Differential Revision: https://reviews.llvm.org/D102262
2021-05-12 07:46:45 -07:00
Stefan Pintilie 8d37411e48 Revert "[SelectionDAG][Mips][PowerPC][RISCV][WebAssembly] Teach computeKnownBits/ComputeNumSignBits about atomics"
This reverts commit 6c80361b84.
Breaks PowerPC Big Endian buildbots.
2021-05-12 09:46:18 -05:00
Hendrik Greving 762ac725bf [DAGCombiner] Fix DAG combine store elimination, different address space.
Fixes a bug in the DAG combiner's store elimination, which failed to
inspect the address space of the pointers.

%v = load %ptr_as1
// no chain side effect
store %v, %ptr_as2

As well as

store %v, %ptr_as1
store %v, %ptr_as2

Fixes a test for above in X86.

Differential Revision: https://reviews.llvm.org/D102096
2021-05-12 07:14:22 -07:00
Peter Waller 3fa6510f6e [CodeGen][AArch64][SVE] Fold [rdffr, ptest] => rdffrs; bugfix for optimizePTestInstr
When a ptest is used to set flags from the output of rdffr, the ptest
can be eliminated, using a flags-setting rdffrs instead.

Additionally, check that nothing consumes flags between rdffr and ptest;
this case appears to have been missed previously.

* There is no unpredicated RDFFRS instruction.
* If substituting RDFFR_PP, require that the mask argument of the
  PTEST matches that of the RDFFR_PP.
* Move some precondition code up inside optimizePTestInstr, so that it
  covers the new code paths for RDFFR which return earlier.
  * Only consider RDFFR, PTEST in same basic block.
  * Check for other flag setting instructions between the two, abort if
    found.
  * Drop an old TODO comment about removing dead PTEST instructions.

RDFFR_P to follow in later patch.

Differential Revision: https://reviews.llvm.org/D101357
2021-05-12 15:06:22 +01:00
Julien Pagès 46adccc5cc [AMDGPU] Improve Codegen for build_vector
Improve the code generation of build_vector.
Use the v_pack_b32_f16 instruction instead of
v_and_b32 + v_lshl_or_b32

Differential Revision: https://reviews.llvm.org/D98081

Patch by Julien Pagès!
2021-05-12 14:17:44 +01:00
Roman Lebedev 554b1bced3
[InstCombine] ~(C + X) --> ~C - X (PR50308)
We cannot rely on (C+X) --> (X+C) already having happened,
because we might not have visited that `add` yet.
The added testcase would get stuck in an endless combine loop.
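
For completeness, a quick check of the identity in two's complement (illustrative): ~(C + X) = -(C + X) - 1 = (-C - 1) - X = ~C - X.

  #include <cassert>

  int main() {
    for (int c = -4; c <= 4; ++c)
      for (int x = -4; x <= 4; ++x)
        assert(~(c + x) == ~c - x);   // holds whenever the additions don't overflow
    return 0;
  }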
2021-05-12 16:10:55 +03:00
Jay Foad a383d325f6 [TargetRegisterInfo] Speed up getAllocatableSet. NFCI.
MachineRegisterInfo caches the reserved register set that is computed by
by TargetRegisterInfo::getReservedRegs, so call into MRI to get the
reserved regs to avoid recomputing them.

In particular this speeds up AMDGPU's SIFormMemoryClauses pass because
AMDGPU has a particularly complicated reserved set that is expensive to
compute.

Differential Revision: https://reviews.llvm.org/D102318
2021-05-12 14:09:05 +01:00
Piotr Sobczak a4db7025a9 [AMDGPU] Remove assert
Remove assert introduced in D101177, following post-commit feedback.
2021-05-12 14:52:37 +02:00
Sanjay Patel f58e0513dd [x86] try harder to lower to PCMPGT instead of not-of-PCMPEQ
This is motivated by the example in https://llvm.org/PR50055 ,
but it doesn't do anything for that bug currently because we
don't actually have a zero-extended setcc there.

Proof for the generic transform (inverse of what we would
try to do in combining):
https://alive2.llvm.org/ce/z/aBL-Mg

Differential Revision: https://reviews.llvm.org/D102275
2021-05-12 08:25:29 -04:00
Simon Pilgrim 72e242a286 [X86][AVX] canonicalizeShuffleMaskWithHorizOp - improve support for 256/512-bit vectors
Extend the HOP(HOP(X,Y),HOP(Z,W)) and SHUFFLE(HOP(X,Y),HOP(Z,W)) folds to handle repeating 256/512-bit vector cases.

This allows us to drop the UNPACK(HOP(),HOP()) custom fold in combineTargetShuffle.

This required isRepeatedTargetShuffleMask to be tweaked to support target shuffle masks taking more than 2 inputs.
2021-05-12 12:13:24 +01:00
David Sherwood b7a11274f9 [LoopVectorize] Fix scalarisation crash in widenPHIInstruction for scalable vectors
In InnerLoopVectorizer::widenPHIInstruction there are cases where we have
to scalarise a pointer induction variable after vectorisation. For scalable
vectors we already deal with the case where the pointer induction variable
is uniform, but we currently crash if not uniform. For fixed width vectors
we calculate every lane of the scalarised pointer induction variable for a
given VF, however this cannot work for scalable vectors. In this case I
have added support for caching the whole vector value for each unrolled
part so that we can always extract an arbitrary element. Additionally, we
still continue to cache the known minimum number of lanes too in order
to improve code quality by avoiding an extractelement operation.

I have adapted an existing test `pointer_iv_mixed` from the file:

  Transforms/LoopVectorize/consecutive-ptr-uniforms.ll

and added it here for scalable vectors instead:

  Transforms/LoopVectorize/AArch64/sve-widen-phi.ll

Differential Revision: https://reviews.llvm.org/D101294
2021-05-12 11:02:11 +01:00
Peter Waller 6e6f9a636b [AArch64][SVE] Improve sve.convert.to.svbool lowering
The sve.convert.to.svbool lowering has the effect of widening a logical
<M x i1> vector representing lanes into a physical <16 x i1> vector
representing bits in a predicate register.

In general, if converting to svbool, the contents of lanes in the
physical register might not be known. For sve.convert.to.svbool the new
lanes are specified to be zeroed, requiring 'and' instructions to mask
off the new lanes. For lanes coming from a ptrue or a comparison,
however, they are known to be zero.

CodeGen Before:
  ptrue p0.s, vl16
  ptrue p1.s
  ptrue p2.b
  and   p0.b, p2/z, p0.b, p1.b
  ret

After:
  ptrue p0.s, vl16
  ret

Differential Revision: https://reviews.llvm.org/D101544
2021-05-12 10:57:25 +01:00
Stephen Tozer fdb055f4f1 Reapply "[DebugInfo] Fix updateDbgUsersToReg to support DBG_VALUE_LIST"
Previous crashes caused by this patch were the result of machine
subregisters being incorrectly handled in updateDbgUsersToReg; this has
been fixed by using RegUnits to determine overlapping registers, instead
of using the register values directly.

Differential Revision: https://reviews.llvm.org/D101523

This reverts commit 7ca26c5fa2.
2021-05-12 10:19:57 +01:00
Piotr Sobczak 68137ef568 [AMDGPU] Skip invariant loads when avoiding WAR conflicts
No need to handle invariant loads when avoiding WAR conflicts, as
there cannot be a vector store to the same memory location.

Reviewed By: foad

Differential Revision: https://reviews.llvm.org/D101177
2021-05-12 10:57:05 +02:00
Tomas Matheson 34c098b780 [ARM] Prevent spilling between ldrex/strex pairs
Based on the same for AArch64: 4751cadcca

At -O0, the fast register allocator may insert spills between the ldrex and
strex instructions inserted by AtomicExpandPass when expanding atomicrmw
instructions in LL/SC loops. To avoid this, expand to cmpxchg loops and
therefore expand the cmpxchg pseudos after register allocation.
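
For illustration, a hedged example (not from the patch) of source that hits this path: at -O0 the RMW below expands to an LL/SC loop, and a register spill (a store) between the ldrex and strex can clear the exclusive monitor so the strex keeps failing.

  #include <atomic>

  int bump(std::atomic<int> &counter) {
    // Now expanded via a cmpxchg pseudo that is only split into ldrex/strex
    // after register allocation, so no spill can land inside the LL/SC loop.
    return counter.fetch_add(1, std::memory_order_relaxed);
  }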

Required a tweak to ARMExpandPseudo::ExpandCMP_SWAP to use the 4-byte encoding
of UXT, since the pseudo instruction can be allocated a high register (R8-R15)
which the 2-byte encoding doesn't support. However, the 4-byte encodings
are not present for ARM v8-M Baseline. To enable this, two new pseudos are
added for Thumb which are only valid for v8mbase, tCMP_SWAP_8 and
tCMP_SWAP_16.

The previously committed attempt in D101164 had to be reverted due to runtime
failures in the test suites. Rather than spending time fixing that
implementation (adding another implementation of atomic operations and more
divergence between backends) I have chosen to follow the approach taken in
D101163.

Differential Revision: https://reviews.llvm.org/D101898

Depends on D101912
2021-05-12 09:43:21 +01:00
Martin Storsjö 382c505d9c [COFF] Fix ARM and ARM64 REL32 relocations to be relative to the end of the relocation
This matches how they are defined on X86.

This should fix the relative lookup tables pass for COFF, allowing
it to be reenabled.

Differential Revision: https://reviews.llvm.org/D102217
2021-05-12 09:53:43 +03:00
Vitaly Buka 85a96d82ca [symbolizer] Fix leak after D96883 2021-05-11 22:51:36 -07:00
Qiu Chaofan 6d2df18163 [VectorCombine] Restrict single-element-store index to inbounds constant
The vector single-element update optimization landed in 2db4979, but its
scope needs restriction. This patch restricts the index to an inbounds
constant and requires the vector type to be fixed-size. In the future, we
may use value tracking to relax the constant restriction.

Reviewed By: fhahn

Differential Revision: https://reviews.llvm.org/D102146
2021-05-12 13:18:20 +08:00
Congzhe Cao 3f8be15f29 [LoopInterchange] Handle lcssa PHIs with multiple predecessors
This is a bugfix in the transformation phase.

If the original outer loop header branches to both the inner loop
(header) and the outer loop latch, and if there is an lcssa PHI
node outside the loop nest, then after interchange the new outer latch
will have an lcssa PHI node inserted which has two predecessors, i.e.,
the original outer header and the original outer latch. Currently
the transformation assumes it has only one predecessor (the original
outer latch) and crashes, since the inserted lcssa PHI node does
not take both predecessors as incoming BBs.

Reviewed By: Whitney

Differential Revision: https://reviews.llvm.org/D100792
2021-05-11 21:30:54 -04:00
Matt Arsenault cc79aaced0 AMDGPU: Fix SILoadStoreOptimizer for gfx90a
This was hardcoding the register class to use for the newly created
pointer registers, violating the aligned VGPR requirement.
2021-05-11 21:26:43 -04:00
Matt Arsenault 6f5ddf6731 GlobalISel: Don't hardcode varargs=false in resultsCompatible 2021-05-11 20:22:06 -04:00
Matt Arsenault a15ed701ab AMDGPU: Fix assert on constant load from addrspacecasted pointer
This was trying to create a bitcast between different address spaces.
2021-05-11 20:12:20 -04:00
Matt Arsenault 24e2e5df0e GlobalISel: Split ValueHandler into assignment and emission classes
Currently the ValueHandler handles both selecting the type and
location for arguments, as well as inserting instructions needed to
handle them. Split this so that the determination of the argument
handling is independent of the function state. Currently the checks
for tail call compatibility do not follow the full assignment logic,
so it misses cases where arguments require nontrivial legalization.

This should help avoid targets ending up in a buggy state where the
argument evaluation may change in different contexts.
2021-05-11 19:50:12 -04:00