I copied the nearly identical function from AArch64 into AMDGPU, so
fix this duplication.
Mips and X86 have their own more exotic versions which should be
removed. However replacing those is better left for a separate patch
since it requires other changes to avoid regressions.
This was reusing the parent function's calling convention instead of the
callee's. I'm not sure if there's a case where there's an observable
difference.
I previously missed this in b72a23650f
Moved some of the `sve-getIntrinsicCost-<..>` tests into a single
sve-intrinsics.ll file, and simplified the tests a bit by bundling all the
intrinsics in one function (instead of testing one intrinsic per function).
That makes it easier to see the cost of the intrinsics.
For ops that produce tensor types and implement the shaped type component
interface, the type inference interface can be used. Create a grouping of
these together to make them easier to specify (the grouping cannot be added
into a list of traits, but must rather be appended/concatenated to one, as it
isn't a trait but a list of traits).
Differential Revision: https://reviews.llvm.org/D97636
Given a zero input for a udot, an add can be folded in to take the place
of the input, using the addition that the instruction naturally
performs.
Differential Revision: https://reviews.llvm.org/D97188
See original comment in 560ce2c70f
Basically, the default seed value results in fewer collisions, but changes
the iteration order, which matters for a few test cases.
Differential Revision: https://reviews.llvm.org/D97396
As with EXTRACT_SUBVECTOR, INSERT_SUBVECTOR poses a problem for vector
masks as RVV isn't able to slide mask types around. We choose instead to
bitcast to equivalently-sized i8 types where we can; otherwise we
zero-extend, perform the operation, and truncate back down.
One test was left disabled due to a crash in the legalizer.
Reviewed By: craig.topper
Differential Revision: https://reviews.llvm.org/D97559
This patch fixes a bug where the lowering for INSERT_SUBVECTOR and
EXTRACT_SUBVECTOR would insist on first extracting a register-aligned
LMUL1 vector type before performing the slide up/down. It did so even if
the vector was a fractional LMUL type, in which case the aligned
EXTRACT_SUBVECTOR was invalid.
This issue only occurred for scalable vector types, but a variety of
tests for both scalable and fixed-length vectors have been added to
ensure this does not regress in the future.
Reviewed By: craig.topper
Differential Revision: https://reviews.llvm.org/D97556
This patch unifies the two disparate paths for lowering INSERT_SUBVECTOR
operations under one roof. Consequently, with this patch it is possible to
support any fixed-length subvector insertion, not just "cast-like" ones.
As before, support for the insertion of mask vectors will come in a
separate patch.
Reviewed By: craig.topper
Differential Revision: https://reviews.llvm.org/D97543
This patch adds support for extracting subvectors from vector masks.
This can be either extracting a scalable vector from another, or a fixed-length
vector from a fixed-length or scalable vector.
Since RVV lacks a way to slide vector masks down on an element-wise
basis and we don't know the true length of the vector registers, in many
cases we must resort to using equivalently-sized i8 vectors to perform
the operation. When this is not possible we fall back and extend to a
suitable i8 vector.
Support was also added for fixed-length truncation to mask types.
Reviewed By: craig.topper
Differential Revision: https://reviews.llvm.org/D97475
Simply make sure that CodeGenFunction::CXXThisValue and CXXABIThisValue
are correctly initialized to the recovered value.
For lambda captures, we also need to make sure to fill the
LambdaCaptureFields.
Differential Revision: https://reviews.llvm.org/D97534
This patch updates LV to generate the runtime checks just after cost
modeling, to allow a more precise estimate of the actual cost of the
checks. This information will be used in future patches to generate
larger runtime checks in cases where the checks only make up a small
fraction of the expected scalar loop execution time.
The runtime checks are created up-front in a temporary block, unlinked from
the existing IR, to allow better estimation of their cost. After deciding to
vectorize, the checks are moved back. If we decide not to vectorize, the
temporary block is removed completely.
This patch is similar in spirit to D71053, but explores a different
direction: instead of delaying the decision on whether to vectorize in
the presence of runtime checks, it optimistically creates the runtime
checks early and discards them later if we decide not to vectorize. This
has the advantage that the cost-modeling decisions can be kept together
and done up-front, thus preserving the general code structure. I think
delaying (part of) the decision to vectorize would also make the VPlan
migration a bit harder.
One potential drawback of this patch is that we speculatively
generate IR which we might have to clean up later. However it seems like
the code required to do so is quite manageable.
Reviewed By: lebedev.ri, ebrevnov
Differential Revision: https://reviews.llvm.org/D75980
This reverts commit 07de0846a5.
The original patch has caused 6 out of 8 of Flang's public buildbots to
fail. As I'm not sure what the fix should be, I'm reverting this for
now. Please see https://reviews.llvm.org/D97201 for more context and
discussion.
This patch addresses issues arising from the fact that the index type
used for subvector insertion/extraction is inconsistent between the
intrinsics and SDNodes. The intrinsic forms require i64 whereas the
SDNodes use the type returned by SelectionDAG::getVectorIdxTy.
Rather than update the intrinsic definitions to use an overloaded index
type, this patch fixes the issue by transforming the index to the
correct type as required. Any loss of index bits going from i64 to a
smaller type is unexpected, and will be caught by an assertion in
SelectionDAG::getVectorIdxConstant.
The patch also updates the documentation for INSERT_SUBVECTOR and adds
an assertion to its creation to bring it in line with EXTRACT_SUBVECTOR.
This necessitated changes to AArch64 which was using i64 for
EXTRACT_SUBVECTOR but i32 for INSERT_SUBVECTOR. Only one test changed
its codegen after updating the backend accordingly.
Reviewed By: sdesmalen
Differential Revision: https://reviews.llvm.org/D97459
Currently, dead gc values mentioned in the deopt section are not listed in
the gc section and so are processed separately.
With this CL, all deopt gc values are considered base pointers and processed
in the same way as other gc values.
The fact that a deopt gc pointer is a base pointer has always been relied
upon, but it is now made explicit by putting the value in SI.Base.
The idea of the patch comes from Philip Reames.
Reviewers: reames, dantrushin
Reviewed By: reames
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D97554
Use the fact that the number of line breaks is lower than the number of
printable characters to guide the optimization process. Also use a fuzzy
test that catches both \n and \r in a single check to speed up the
computation.
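A minimal sketch of the kind of single-range check meant here (illustrative
only, not the code from this patch; it assumes ASCII input):

```cpp
#include <cstdint>

// '\n' is 0x0A and '\r' is 0x0D, so one unsigned range comparison covers
// both. It also accepts '\v' and '\f' (hence "fuzzy"); only these rare hits
// need the exact follow-up test.
static inline bool maybeLineBreak(uint8_t C) {
  return static_cast<uint8_t>(C - '\n') <= static_cast<uint8_t>('\r' - '\n');
}

static inline bool isLineBreak(uint8_t C) {
  return C == '\n' || C == '\r';
}
```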
Differential Revision: https://reviews.llvm.org/D97320
Currently our strategy for getting header compile flags is something like:
A) look for flags for the header in compile_commands.json
   This basically never works; build systems don't generate this info.
B) try to match to an impl file in compile_commands.json and use its flags
   This only (mostly) works if the headers are in the same project.
C) give up and use fallback flags
   This kind of works for stdlib in the default configuration, and
   otherwise doesn't.
Obviously there are big gaps here.
This patch inserts a new attempt between A and B: if the header is
transitively included by any open file (whether same project or not),
then we use its compile command.
This doesn't make any attempt to solve some related problems:
- parsing non-self-contained header files in context (importing PP state)
- using the compile flags of non-opened candidate files found in the index
Fixes https://github.com/clangd/clangd/issues/123
Fixes https://github.com/clangd/clangd/issues/695
See https://github.com/clangd/clangd/issues/519
Differential Revision: https://reviews.llvm.org/D97351
They were added so that if no metadata section is present,
`__start_llvm_prf_*` references would not cause "undefined symbol"
errors. After switching to undefined weak symbols in D96936, the dummy
sections are no longer needed.
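As an illustration of the weak-symbol mechanism (a sketch only; the symbol
name follows the usual `__start_<section>` pattern and is not taken from this
patch):

```cpp
// With a weak undefined reference, the program still links when the section
// (and therefore the linker-synthesized __start_ symbol) does not exist;
// the address then simply compares equal to null at run time.
extern char __start___llvm_prf_names __attribute__((weak));

static bool hasProfileNames() {
  return &__start___llvm_prf_names != nullptr;
}
```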
This patch is also needed to work around
https://sourceware.org/bugzilla/show_bug.cgi?id=27490
Differential Revision: https://reviews.llvm.org/D97648
If the deopt operand has an illegal type and we want to use a register for
it, then it needs to be legalized.
This is currently not supported by the legalizer, and it is not actually
clear how to legalize this kind of value.
Instead we just spill such values and use the spill slot location in the
statepoint.
Originally tests were created by Philip Reames.
Reviewers: reames, dantrushin
Reviewed By: reames
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D97541
Also a couple of minor cleanups in merge-string.s:
- fix inconsistent use of tabs
- use `.p2align` rather than `.align`, since `.p2align` works the
  same on all platforms (the meaning of `.align` seems to differ
  between platforms according to `AlignmentIsInBytes`: for example,
  `.p2align 4` always means 16-byte alignment, while `.align 4` can
  mean either 4 or 16 bytes).
I noticed these potential cleanups while porting SHF_STRINGS support to
wasm-ld.
Differential Revision: https://reviews.llvm.org/D97647
Peeking through the AND is only valid if the input to both shifts is
the same. If the inputs are different, then the original pattern
ORs the two values when the masked shift amount is 0. This is OK
if the values are the same, since then the OR is a no-op, which is
why it's fine for rotates.
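An illustrative sketch of why this matters (hypothetical helper, not code
from the patch):

```cpp
#include <cstdint>

// The pattern being matched, with both shift amounts masked to the bit width.
uint32_t orOfMaskedShifts(uint32_t X, uint32_t Y, unsigned Amt) {
  return (X << (Amt & 31)) | (Y >> ((32 - Amt) & 31));
}

// For Amt % 32 == 0 this evaluates to X | Y. With X == Y (a rotate) that is
// just X, so folding to a rotate by 0 is correct. A funnel shift
// fshl(X, Y, 0) must produce exactly X, so with X != Y the fold would be
// wrong.
```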
Fixes PR49365 and reverts PR34641
Differential Revision: https://reviews.llvm.org/D97637
Even if the first computeKnownBits call doesn't find any zero bits, it is
possible the other operand has bitwidth-1 leading zeros (i.e. it is known
to be 0 or 1). In that case overflow is still impossible. So always call
computeKnownBits for both operands.
Some implementations of the DeepCopy function called the copy constructor,
which copied the m_parent member instead of setting a new parent. Others just
left the base class's members (m_parent, m_callback, m_was_set) empty.
One more problem is that not all classes override this function, e.g.
OptionValueArgs::DeepCopy produces an OptionValueArray instance, and
Target[Process/Thread]ValueProperty::DeepCopy produces an OptionValueProperty.
This makes downcasting via static_cast invalid.
The patch implements the "virtual constructor" idiom to fix these issues.
Add a test that checks DeepCopy for correct copying/setting of all data
members of the base class.
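A minimal sketch of the "virtual constructor" idiom, with illustrative names
rather than the actual LLDB interfaces:

```cpp
#include <memory>

// Each derived class overrides Clone() to construct an object of its own
// dynamic type, so DeepCopy() on a base pointer can never yield an instance
// of the wrong class, and the parent is always set rather than copied.
struct OptionValueBase {
  virtual ~OptionValueBase() = default;

  std::unique_ptr<OptionValueBase> DeepCopy(OptionValueBase *NewParent) const {
    std::unique_ptr<OptionValueBase> Copy = Clone(); // correct dynamic type
    Copy->Parent = NewParent;  // set the new parent instead of copying the old
    return Copy;
  }

protected:
  virtual std::unique_ptr<OptionValueBase> Clone() const = 0;
  OptionValueBase *Parent = nullptr;
};

struct OptionValueDerived : OptionValueBase {
protected:
  std::unique_ptr<OptionValueBase> Clone() const override {
    return std::make_unique<OptionValueDerived>(*this);
  }
};
```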
Differential Revision: https://reviews.llvm.org/D96952
SelectionDAG forces us to have a weird ABI for 16-bit values without
legal 16-bit operations, but currently GlobalISel bypasses this and
ends up using the gfx8+ ABI in some contexts. Make sure we're testing
the normal ABI to avoid a test change in a future patch.
If we insert undef using a VMOVN, we can just use the original value in
three out of the four possible combinations. Using VMOVT into an undef
vector will still require the lanes to be moved, but otherwise the
non-undef value can be used.
Only one of the two callers used the lastBinding parameter, so
do that work at that one call site. Extract an ordinalForDylibSymbol()
helper to make this tidy.
No behavior change.
Differential Revision: https://reviews.llvm.org/D97597