This patch extends CodeGenPrepare to lower zext v16i8 -> v16i32 in loops
using a wide shuffle that creates a v64i8 vector, selecting groups of 3
zero elements and one element from the input.
This is profitable on AArch64 where such shuffles can be lowered to tbl
instructions, but only in loops, because it requires materializing 4
masks, which can be done in the loop preheader.
This is the only reason the transform is part of CGP. If there's a
better alternative I missed, please let me know. The same goes for the
shouldReplaceZExtWithShuffle hook which guards this. I am not sure if
this transform will be beneficial on other targets, but it seems like
there is no other convenient way.
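For illustration, here is a minimal standalone sketch of the shuffle mask the transform builds (this is only a sketch, not the actual CGP code, and it assumes a little-endian layout): each i32 lane of the result is one input byte followed by three zero bytes taken from an all-zeroes second shuffle operand.
```
#include <vector>

// Minimal sketch only (not the actual CGP code), assuming a little-endian
// layout: indices 0-15 select bytes of the v16i8 input, index 16 selects a
// byte of an all-zeroes second shuffle operand. The resulting v64i8 vector
// is then bitcast to v16i32.
std::vector<int> buildZExtV16I8ToV16I32Mask() {
  std::vector<int> Mask;
  for (int Lane = 0; Lane < 16; ++Lane) {
    Mask.push_back(Lane); // the source byte for this i32 lane
    Mask.push_back(16);   // three zero bytes complete the lane
    Mask.push_back(16);
    Mask.push_back(16);
  }
  return Mask; // 64 indices in total
}
```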
This improves the generated code for loops like the one below in
combination with D96522.
  #include <stdint.h>

  int foo(uint8_t *p, int N) {
    unsigned long long sum = 0;
    for (int i = 0; i < N; i++, p++) {
      unsigned int v = *p;
      sum += (v < 127) ? v : 256 - v;
    }
    return sum;
  }
https://clang.godbolt.org/z/Wco866MjY
Reviewed By: t.p.northover
Differential Revision: https://reviews.llvm.org/D120571
All in-tree targets pass pointer-sized ConstantSDNodes to the
method. This overload reduces the amount of boilerplate code a bit. This
also makes getCALLSEQ_END consistent with getCALLSEQ_START, which
already takes uint64_ts.
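A hedged sketch of what such a convenience wrapper could look like (the helper name, parameter names, and body below are assumptions for illustration, not quoted from the patch); it simply forwards to the existing SDValue-based getCALLSEQ_END:
```
#include "llvm/CodeGen/SelectionDAG.h"
using namespace llvm;

// Sketch only: forward to the SDValue-based overload by materializing the
// pointer-sized target constants that callers previously had to build by hand.
static SDValue getCallSeqEnd(SelectionDAG &DAG, SDValue Chain, uint64_t Size1,
                             uint64_t Size2, SDValue Glue, const SDLoc &DL) {
  return DAG.getCALLSEQ_END(
      Chain, DAG.getIntPtrConstant(Size1, DL, /*isTarget=*/true),
      DAG.getIntPtrConstant(Size2, DL, /*isTarget=*/true), Glue, DL);
}
```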
This is similar to the existing signed instruction folds.
We get the obvious minimal patterns in other passes, but
this avoids potential missed folds when the multi-block
tests are converted to selects.
There are two ctlz intrinsics here with the zero_is_poison flag
set. There are also two comparisons that check whether either of the
inputs to the ctlzs is zero. We need to use a logical or to block
the poison from the ctlz if either of the inputs is zero.
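As a general illustration (not the exact pattern from this patch), IRBuilder's CreateLogicalOr emits the select form of or, which is what prevents a poison operand from propagating when the other operand is already true:
```
#include "llvm/IR/IRBuilder.h"
using namespace llvm;

// Illustrative only: combine two "input is zero" checks with a logical
// (select-based) or. If XIsZero is true, YIsZero is never observed, so a
// poison YIsZero cannot leak into the result.
static Value *combineIsZeroChecks(IRBuilder<> &B, Value *XIsZero,
                                  Value *YIsZero) {
  // Emitted as: select i1 %XIsZero, i1 true, i1 %YIsZero
  return B.CreateLogicalOr(XIsZero, YIsZero);
}
```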
Reviewed By: arsenm, aqjune
Differential Revision: https://reviews.llvm.org/D130680
This is safe when the mul does not overflow:
https://alive2.llvm.org/ce/z/LedVVP
This could be extended to handle non-zero compare constants
and non-squared multiplies.
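For intuition, a small wrapping counterexample (a hypothetical standalone check, not part of the patch) shows why the no-overflow condition is required:
```
#include <cassert>
#include <cstdint>

int main() {
  // If the multiply may wrap, (X * X) == 0 does not imply X == 0:
  // 2^16 squared wraps to 0 in 32-bit arithmetic even though X is non-zero.
  uint32_t X = 1u << 16;
  uint32_t Squared = X * X; // wraps modulo 2^32
  assert(X != 0 && Squared == 0);
  return 0;
}
```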
These are the worst-case generic vector shift costs, where nothing is known about the shift amounts. In particular, this should stop us using the default sizelatency cost of 1 for so many pre-AVX2 vector shifts that can often expand during lowering to 20+ uops, just for 128-bit vectors, resulting in some horrible inline/unroll decisions.
This was achieved with an updated version of the 'cost-tables vs llvm-mca' script D103695 (I'll update the patch soon for reference)
As the comment suggests, `_LIBCPP_HAS_NO_PLATFORM_WAIT` is not documented or
defined anywhere internally in the build system; it is defined directly in terms
of `_LIBCPP_HAS_NO_THREADS`. So, remove `_LIBCPP_HAS_NO_PLATFORM_WAIT` and use
`_LIBCPP_HAS_NO_THREADS` instead to control the desired behavior.
Differential Revision: https://reviews.llvm.org/D132715
Only do this for 16- and 32-register tuples, although we might want to
extend it to 8-register tuples.
It's incredibly expensive to spill these, and doing so majorly
interferes with the ability to allocate anything else in the function.
The lit tests show mostly sizeable improvements with a handful of tiny
regressions with large vectors.
Add support for `arith.extsi`, `arith.extui`, and `arith.trunci` ops.
Tested by checking the results for all 16-bit inputs when emulating i16 with i8.
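For intuition, here is a scalar C++ model of the emulation semantics (this is only a sketch of the arithmetic, not the MLIR rewrite patterns; it assumes the emulated i16 is stored as a little-endian (low, high) pair of i8 halves):
```
#include <cstdint>
#include <utility>

// Emulated i16 stored as a (low, high) pair of i8 halves.
using EmulatedI16 = std::pair<uint8_t, uint8_t>;

// arith.extui: zero-extend i8 -> i16; the high half is zero.
EmulatedI16 extui(uint8_t X) { return {X, 0}; }

// arith.extsi: sign-extend i8 -> i16; the high half replicates the sign bit.
EmulatedI16 extsi(int8_t X) {
  return {static_cast<uint8_t>(X), static_cast<uint8_t>(X < 0 ? 0xFF : 0x00)};
}

// arith.trunci: truncate i16 -> i8 by keeping only the low half.
uint8_t trunci(EmulatedI16 V) { return V.first; }
```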
Reviewed By: antiagainst, Mogball
Differential Revision: https://reviews.llvm.org/D133612
A thread may not have access to SME or TPIDR2_EL0, so in order to
safely query PSTATE.SM in a streaming-compatible function, the
code should call `__arm_sme_state()`, as described in the ABI:
c2bb09c4d4
This means that the value of pstate.sm is:
* 0 if the function is non-streaming.
* 1 if the function has `arm_streaming` or `arm_locally_streaming`.
* evaluated at runtime by a call to __arm_sme_state() otherwise.
This patch also adds a calling convention for calls to SME support routines.
At some point we can remove the need for the llvm.aarch64.get.pstatesm() intrinsic
and use function calls (with the corresponding cc) directly instead.
Reviewed By: aemerson
Differential Revision: https://reviews.llvm.org/D131571
The class priority is expected to be at most 5 bits before it starts
clobbering bits used for other fields. Also clamp the instruction
distance in case we have millions of instructions.
AMDGPU was accidentally overflowing into the global priority bit in
some cases. I think in principle we would have wanted this, but in the
cases I've looked at, it had the counterintuitive effect of
de-prioritizing the large register tuple.
Avoid the weird bit hack PPC uses for global priority. The
AllocationPriority field is really 5 bits, and PPC was relying on
overflowing this to 6-bits to forcibly set the global priority
bit. Split this out as a separate flag to avoid having magic behavior
for values above 31.
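Roughly, the intended layout looks like the sketch below (field and helper names are illustrative only, not quoted from the patch): a 5-bit priority that is clamped to 31, plus an explicit flag instead of overflowing into a sixth bit.
```
#include <algorithm>
#include <cstdint>

// Illustrative only: a 5-bit priority can hold at most 31, so larger requests
// must be clamped rather than overflowing into the bit used for global
// priority, which is a separate, explicit flag.
struct RegClassPriority {
  uint8_t AllocationPriority : 5; // 0..31
  uint8_t GlobalPriority : 1;     // separate flag, no magic overflow
};

uint8_t clampPriority(unsigned Priority) {
  return static_cast<uint8_t>(std::min(Priority, 31u));
}
```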
Currently, DWARFLinker determines the target DWARF version internally.
It examines incoming object files, detects the maximal
DWARF version, and uses that version for the output file.
This patch allows the consumer of DWARFLinker to explicitly set the output
DWARF version, so that DWARFLinker uses the specified version instead
of the autodetected one. This allows consumers to use different logic for
setting the target DWARF version. For example, instead of the maximally used
version, someone could set a higher version to convert from DWARFv4 to DWARFv5
(this possibility is not supported yet, but it would be good if
the interface supported it). Another option is to set the target
version through the command line. In this patch, the autodetection is moved
into the consumers (DwarfLinkerForBinary.cpp, DebugInfoLinker.cpp).
Differential Revision: https://reviews.llvm.org/D132755
Vector shift by a constant uniform amount is the cheapest shift instruction we have; non-constant uniform amounts have a marginally higher cost. Some targets 'splat' the amount internally to use the shift-per-element instruction, while others see a higher cost for the explicit zeroing of the upper bits of the (64-bit) shift amount.
This was achieved with an updated version of the 'cost-tables vs llvm-mca' script D103695 (I'll update the patch soon for reference)
A complete implementation of `applyFixup` for D132323.
Makes `LoongArchAsmBackend::shouldForceRelocation` determine
whether the relocation types must be forced.
This patch also adds range and alignment checks for `b*` instructions'
operands, at which point the offset to a label is known.
Differential Revision: https://reviews.llvm.org/D132818
This patch fixes a crash caused by calling getTypeAlignInChars() with a dependent type.
Reviewed By: hokein
Differential Revision: https://reviews.llvm.org/D133886
This is similar to the `-alias` CLI option, but it gives finer-grained
control in that it allows the aliased symbols to be treated as private
externs.
While working on this, I realized that our `-alias` handling did not
cover the cases where the aliased symbol is a common or dylib symbol,
nor the case where we have an undefined that gets treated specially and
converted to a defined later on. My N_INDR handling neglects this too
for now; I've added checks and TODO messages for these.
`N_INDR` symbols cropped up as part of our attempt to link swift-stdlib.
Reviewed By: #lld-macho, thakis, thevinster
Differential Revision: https://reviews.llvm.org/D133825
The patch adds a regularization pass that prepares LLVM IR for
the IR translation. It also contains the following changes:
- reduce indentation, make getNonParametrizedType, getSamplerType,
getPipeType, getImageType, getSampledImageType static in SPIRVBuiltins,
- rename mayBeOclOrSpirvBuiltin to getOclOrSpirvBuiltinDemangledName,
- move isOpenCLBuiltinType, isSPIRVBuiltinType, isSpecialType from
SPIRVGlobalRegistry.cpp to SPIRVUtils.cpp, renaming isSpecialType to
isSpecialOpaqueType,
- implement getTgtMemIntrinsic() in SPIRVISelLowering,
- add hasSideEffects = 0 in Pseudo (SPIRVInstrFormats.td),
- add legalization rule for G_MEMSET, correct G_BRCOND rule,
- add capability processing for OpBuildNDRange in SPIRVModuleAnalysis,
- don't correct types of registers holding constants and used in
G_ADDRSPACE_CAST (SPIRVPreLegalizer.cpp),
- lower memset/bswap intrinsics to functions in SPIRVPrepareFunctions,
- change TargetLoweringObjectFileELF to SPIRVTargetObjectFile
in SPIRVTargetMachine.cpp,
- correct comments.
5 LIT tests are added to show the improvement.
Differential Revision: https://reviews.llvm.org/D133253
Co-authored-by: Aleksandr Bezzubikov <zuban32s@gmail.com>
Co-authored-by: Michal Paszkowski <michal.paszkowski@outlook.com>
Co-authored-by: Andrey Tretyakov <andrey1.tretyakov@intel.com>
Co-authored-by: Konrad Trifunovic <konrad.trifunovic@intel.com>
Sometimes we make changes to the compiler that we expect may cause
disruption for users. For example, we may strengthen a warning to
default to an error, or fix an accepts-invalid bug that's been
around for a long time, etc., which may cause previously accepted code to
now be rejected. Rather than hope users discover that information by
reading all of the release notes, it's better that we call these out in
one location at the top of the release notes.
Based on feedback collected in the discussion at:
https://discourse.llvm.org/t/configure-script-breakage-with-the-new-werror-implicit-function-declaration/65213/
Differential Revision: https://reviews.llvm.org/D133771
This patch adds better functions for parsing MultiAffineFunctions and
PWMAFunctions in Presburger unittests.
A PWMAFunction can now be parsed as:
```
PWMAFunction result = parsePWMAF({
{"(x, y) : (x >= 10, x <= 20, y >= 1)", "(x, y) -> (x + y)"},
{"(x, y) : (x >= 21)", "(x, y) -> (x + y)"},
{"(x, y) : (x <= 9)", "(x, y) -> (x - y)"},
{"(x, y) : (x >= 10, x <= 20, y <= 0)", "(x, y) -> (x - y)"},
});
```
which is much more readable than the old format since the output can be
described as an AffineMap, instead of coefficients.
This patch also adds support for parsing divisions in MultiAffineFunctions
and PWMAFunctions, which was previously not possible.
Reviewed By: arjunp
Differential Revision: https://reviews.llvm.org/D133654