This is an enhancement to load vectorization that is motivated by
a pattern in https://llvm.org/PR16739.
Unfortunately, it's still not enough to make a difference there.
We will have to handle multi-use cases in some better way to avoid
creating multiple overlapping loads.
Differential Revision: https://reviews.llvm.org/D92858
The non-strict variants are already handled because they are canonicalized
to strict variants by swapping the operands of both the select and the icmp,
and the fold simply treats strictness as irrelevant here.
But that isn't actually true for the last pattern, as PR48390 reports.
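To illustrate (a made-up example, not taken from the patch): the non-strict
spelling of an integer minimum below becomes, after swapping the operands of
both the compare and the select, a strict spelling of the same pattern, which
is why strictness is normally irrelevant to the fold:
  // Illustrative only; function names are invented for this sketch.
  // `a <= b ? a : b` canonicalizes to `b < a ? b : a`, i.e. the same
  // min pattern with a strict compare.
  int min_strict(int a, int b) { return a < b ? a : b; }
  int min_nonstrict(int a, int b) { return a <= b ? a : b; }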
Add vfmk intrinsic instructions, a few pseudo instructions to expand
vfmk intrinsic using VM512 correctly, and regression tests.
Reviewed By: simoll
Differential Revision: https://reviews.llvm.org/D92758
TestLldbGdbServer.py testcases are timing out on the LLDB/AArch64 Linux
buildbot since recent changes. I am temporarily increasing
DEFAULT_TIMEOUT to 20 seconds to see the impact.
For store chain vectorization we choose the size of the vector elements
to ensure we fit within the minimum and maximum vector register sizes
for the given number of elements. This patch corrects the vector element
size by choosing the width of the value truncated just before storing
instead of the width of the value being stored.
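As a hedged illustration (made-up source, not taken from the patch), a store
chain of the following shape stores 8-bit values truncated from 32-bit
values, so the vector element width must come from the truncated type:
  // Illustrative only. Each store writes an i8 truncated from an i32;
  // the element size should be 8 bits (the stored type), not 32.
  void store_truncated(int *src, unsigned char *dst) {
    dst[0] = (unsigned char)src[0];
    dst[1] = (unsigned char)src[1];
    dst[2] = (unsigned char)src[2];
    dst[3] = (unsigned char)src[3];
  }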
Fixes PR46983
Differential Revision: https://reviews.llvm.org/D92824
If a function parameter is marked as "undef", prevent creation
of CallSiteInfo for that parameter.
Without this patch, the parameter's call_site_value would be incorrect.
The incorrect call_site_value case was reported in PR39716
and addressed in D85111.
Patch by Nikola Tesic
Differential revision: https://reviews.llvm.org/D92471
This patch adds the following DAGCombines, which apply if isVectorLoadExtDesirable() returns true:
- fold (and (masked_gather x)) -> (zext_masked_gather x)
- fold (sext_inreg (masked_gather x)) -> (sext_masked_gather x)
LowerMGATHER has also been updated to fetch the LoadExtType associated with the
gather and to use this value to determine the correct masked gather opcode to use.
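As a hedged illustration (made-up source, not from the patch), code like the
following can be vectorized to a masked gather of i16 elements that are then
extended to i32; with the combines above, the extension can be folded into an
extending gather:
  // Illustrative only. The i16 gather loads are sign-extended to i32,
  // which the new combines fold into a sext_masked_gather.
  void gather_extend(int *dst, const short *src, const int *idx, int n) {
    for (int i = 0; i < n; i++)
      dst[i] = src[idx[i]];
  }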
Reviewed By: sdesmalen
Differential Revision: https://reviews.llvm.org/D92230
* Steps are scaled by `vscale`, a runtime value.
* Changes to circumvent the cost-model for now (temporary)
so that the cost-model can be implemented separately.
This can vectorize the following loop [1]:
  void loop(int N, double *a, double *b) {
    #pragma clang loop vectorize_width(4, scalable)
    for (int i = 0; i < N; i++) {
      a[i] = b[i] + 1.0;
    }
  }
[1] This source-level example is based on the pragma proposed
separately in D89031. This patch only implements the LLVM part.
Reviewed By: dmgreen
Differential Revision: https://reviews.llvm.org/D91077
This patch removes a number of asserts that VF is not scalable, even though
the code where these asserts live does nothing that prevents VF from being scalable.
Reviewed By: dmgreen
Differential Revision: https://reviews.llvm.org/D91060
Adds the ExtensionType flag, which reflects the LoadExtType of a MaskedGatherSDNode.
Also updates SelectionDAGDumper::print_details so that details of the gather
load (is signed, is scaled & extension type) are printed.
Reviewed By: sdesmalen
Differential Revision: https://reviews.llvm.org/D91084
This commit adds two new intrinsics.
- llvm.experimental.vector.insert: used to insert a vector into another
vector starting at a given index.
- llvm.experimental.vector.extract: used to extract a subvector from a
larger vector starting from a given index.
The codegen work for these intrinsics has already been completed; this
commit is simply exposing the existing ISD nodes to LLVM IR.
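As a hedged sketch of how the intrinsics can be emitted from C++ (the overload
type lists and argument order here are my assumptions about the intrinsic
definitions, not verbatim from this commit):
  #include "llvm/IR/IRBuilder.h"
  #include "llvm/IR/Intrinsics.h"
  using namespace llvm;

  // Sketch: insert a subvector into a larger vector at index 0, then
  // extract it back. Overload lists are assumed to be {vector, subvector}
  // for insert and {subvector, vector} for extract.
  Value *roundTrip(IRBuilder<> &B, Value *Vec, Value *SubVec) {
    Value *Idx = B.getInt64(0);
    Value *Ins = B.CreateIntrinsic(Intrinsic::experimental_vector_insert,
                                   {Vec->getType(), SubVec->getType()},
                                   {Vec, SubVec, Idx});
    return B.CreateIntrinsic(Intrinsic::experimental_vector_extract,
                             {SubVec->getType(), Vec->getType()},
                             {Ins, Idx});
  }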
Reviewed By: cameron.mcinally
Differential Revision: https://reviews.llvm.org/D91362
The original code was inserting the barrier at the location given by the
caller. Make sure it is always inserted at the end of the loop exit block
instead.
Reviewed By: Meinersbur
Differential Revision: https://reviews.llvm.org/D92849
The register operand was not being marked as a def when it should be. No tests
for this in the main branch as there are not yet any pseudos without a
non-negative VLIndex.
Also change the type of a virtual register operand from unsigned to Register
and adjust formatting.
Reviewed By: craig.topper
Differential Revision: https://reviews.llvm.org/D92823
This changes the `printNotesHelper` to report warnings on its side when
there are errors while dumping notes.
With that we can provide more context when reporting warnings about broken notes.
Differential revision: https://reviews.llvm.org/D92636
It is allowed to have multiple `SHT_SYMTAB_SHNDX` sections, though
we currently don't support this.
The current implementation assumes that there is a maximum of one SHT_SYMTAB_SHNDX
section and that it is always linked with the .symtab section.
This patch drops these limitations.
Differential revision: https://reviews.llvm.org/D92644
RH66 does not support 'PTRACE_GETREGSET'. This change makes this part of compiler-rt build again on older OSes.
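A minimal sketch of the general technique for such build fixes, assuming the
mechanism is a compile-time guard (the actual change may differ; names here
are invented):
  #include <elf.h>        // NT_PRSTATUS
  #include <sys/ptrace.h>
  #include <sys/types.h>
  #include <sys/uio.h>    // struct iovec

  // Illustrative only: prefer the regset API when the headers provide it,
  // and fall back to the legacy request on older systems such as RH66.
  static long ReadGPRegisters(pid_t pid, void *buf, size_t size) {
  #ifdef PTRACE_GETREGSET
    struct iovec iov = {buf, size};
    return ptrace(PTRACE_GETREGSET, pid, (void *)NT_PRSTATUS, &iov);
  #else
    return ptrace(PTRACE_GETREGS, pid, 0, buf);
  #endif
  }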
Reviewed By: vitalybuka
Differential Revision: https://reviews.llvm.org/D91686
Add more tests of the command line marshalling infrastructure.
The new tests now make a "round-trip": from arguments, to a CompilerInvocation instance, and back to arguments again in a single test case.
The TODOs are resolved in a follow-up patch.
Depends on D92830.
Reviewed By: dexonsmith
Differential Revision: https://reviews.llvm.org/D92774
This scans through blocks looking for constants used as predicates in
MVE instructions. When two constants are found which are the inverse of
one another, the second can be replaced by a VPNOT of the first,
potentially allowing that VPNOT to be folded away into an else predicate
of a VPT block.
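A minimal standalone sketch of the core check, assuming 16-lane predicate
constants are modeled as 16-bit masks (the pass itself works on machine IR,
so this only shows the idea):
  #include <cstddef>
  #include <cstdint>
  #include <utility>
  #include <vector>

  // Two predicate constants are inverses when every lane bit differs.
  static bool isInverse(uint16_t A, uint16_t B) {
    return static_cast<uint16_t>(~A) == B;
  }

  // Scan the constants seen in a block in order; when one is the inverse
  // of an earlier one, the later could be rewritten as VPNOT(earlier).
  static std::pair<int, int> findVPNOTCandidate(const std::vector<uint16_t> &P) {
    for (size_t I = 0; I < P.size(); ++I)
      for (size_t J = I + 1; J < P.size(); ++J)
        if (isInverse(P[I], P[J]))
          return {(int)I, (int)J};
    return {-1, -1}; // no candidate found
  }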
Differential Revision: https://reviews.llvm.org/D92470
We defined SubRegIndex for 256/512 regs,
but we did not set the offset for the high part,
so the offsets of the low and high parts are the same.
This may cause problems when assessing SubReg ranges.
It is fortunate that this hasn't affected any testcases,
but I think we should fix it to avoid hidden bugs in the future.
Reviewed By: bsaleil, #powerpc
Differential Revision: https://reviews.llvm.org/D92864
The main thing this change does is add the `IsNotPIC` predicate to
all the atomic instruction patterns that directly refer to
`tglobaladdr`.
This is because in PIC mode we need to generate a separate instruction
sequence (either a direct global.get, or __memory_base + offset) for
accessing global addresses.
As part of this change I noticed that many of the `Requires` attributes
added to the instructions in `WebAssemblyInstrAtomics.td` were not being
honored. This is because they were wrapped in a `let Predicates =
[HasAtomics]` block, and it seems that the outer wrapping overrides any
`Requires` on defs within it. As a workaround I removed the outer
`let` and added `HasAtomics` to all the inner `Requires`. I believe
that all the instructions that don't have an explicit `Requires` bottom out
in `ATOMIC_I` and `ATOMIC_NRI`, which have `HasAtomics`, so this should
not remove this predicate from any patterns (at least that is the idea).
An alternative to this approach would be to implement something
like the `PredicateControl` in `Mips.td`, where we can split the predicates
into groups so they don't clobber each other.
Differential Revision: https://reviews.llvm.org/D92744