Instead of looking up the vector loop using the header, keep track of
the current vector loop in VPTransformState. This removes the
requirement that the vector header block be part of the loop up front.
A follow-up patch will move the code to generate the Loop object for the
vector loop to VPRegionBlock.
Depends on D121619.
Reviewed By: Ayal
Differential Revision: https://reviews.llvm.org/D121621
If the `ExternalFS` has already remapped a path then the
`RedirectingFileSystem` should not change it to the originally provided
path. This fixes the original path always being used if multiple VFS
overlays were provided and the path wasn't found in the highest
overlay (i.e. the first in the chain).
This also renames `IsVFSMapped` to `ExposesExternalVFSPath` and only
sets it if `UseExternalName` is true. This flag then represents that the
`Status` has an external path that's different from its virtual path.
Right now the contained path is still the external path, but further PRs
will change this to *always* be the virtual path. Clients that need the
external path can then request it specifically.
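As a rough client-side sketch of the semantics described above (the
function and includes here are illustrative assumptions, not code from
this patch):

  #include "llvm/Support/VirtualFileSystem.h"
  #include "llvm/Support/raw_ostream.h"

  // Illustrative only: for a redirected file, getName() currently still
  // returns the external path whenever ExposesExternalVFSPath is set.
  void reportPath(llvm::vfs::FileSystem &FS, llvm::StringRef VirtualPath) {
    llvm::ErrorOr<llvm::vfs::Status> S = FS.status(VirtualPath);
    if (S && S->ExposesExternalVFSPath)
      llvm::errs() << "external path: " << S->getName() << "\n";
  }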
Note that even though `ExposesExternalVFSPath` isn't set for all
VFS-mapped paths, `IsVFSMapped` was only being used by a hack in
`FileManager` that was specific to module searching. In that case
`UseExternalNames` is always `true` and so that hack still applies.
Resolves rdar://90578880 and llvm-project#53306.
Differential Revision: https://reviews.llvm.org/D122549
Previously, these isel optimizations were disabled if the AND could
be selected as an ANDI instruction. This patch disables the optimizations
only if the immediate is valid for C.ANDI. If we can't use C.ANDI,
we might be able to compress the shift instructions instead.
I'm not checking the C extension since we have relatively poor test
coverage of the C extension. Without the C extension the code size
should be equal. My only concern would be if the shift+andi had
better latency/throughput on a particular CPU.
I did have to add a peephole to match SRLIW if the input is zexti32
to prevent a regression in rv64zbp.ll.
Reviewed By: luismarques
Differential Revision: https://reviews.llvm.org/D122701
The splat_vector will be legalized to build_vector eventually
anyway. This patch makes it take fewer steps.
Unfortunately, this results in some codegen changes. It looks
like it comes down to how the nodes were ordered in the topological
sort for isel. Because the build_vector is created earlier we end up
with a different ordering of nodes.
Reviewed By: frasercrmck
Differential Revision: https://reviews.llvm.org/D122185
Avoids merge errors when opaque pointers are loaded into different types.
Reviewed by: jcranmer-intel, hiraditya
Differential Revision: https://reviews.llvm.org/D122521
D122053 set the ExtendType for ConstantSDNodes in getCopyToRegs to
ZERO_EXTEND to match assumptions in ComputePHILiveOutRegInfo. PHIs
are probably not the only way ConstantSDNodes can get to
getCopyToRegs.
This patch adds an ExtendType parameter to CopyValueToVirtualRegister and
has HandlePHINodesInSuccessorBlocks pass ISD::ZERO_EXTEND for ConstantInts.
This way we only affect ConstantSDNodes for PHIs.
Reviewed By: RKSimon
Differential Revision: https://reviews.llvm.org/D122171
Now that all dependencies on creating the latch block up-front have been
removed, there is no need to create it early.
Depends on D121618.
Reviewed By: Ayal
Differential Revision: https://reviews.llvm.org/D121619
This patch mostly follows up on D121292, which introduced the vp.fcmp
intrinsic.
Reviewed By: craig.topper
Differential Revision: https://reviews.llvm.org/D122729
We only want to do the upgrade from named to anonymous struct
return if the intrinsic is declared to return a struct, but not
if it has an overloaded return type that just happens to be a
struct. In that case the struct type will be mangled into the
intrinsic name and there is no problem.
This should address the problem reported in
https://reviews.llvm.org/D122471#3416598.
This is an extension of D70965 to avoid creating a mathlib
call where it did not exist in the original source. Also see
D70852 for discussion about an alternative proposal that was
abandoned.
In the motivating bug report:
https://github.com/llvm/llvm-project/issues/54554
...we also have a more general issue about handling "no-builtin" options.
Differential Revision: https://reviews.llvm.org/D122610
This patch adds the first support for vector-predicated comparison
intrinsics, starting with vp.fcmp. It uses metadata to encode its
condition code, like the llvm.experimental.constrained.fcmp intrinsic.
Reviewed By: craig.topper
Differential Revision: https://reviews.llvm.org/D121292
When shifting by a byte-multiple:
bswap (shl X, C) --> lshr (bswap X), C
bswap (lshr X, C) --> shl (bswap X), C
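As a quick sanity check of the identities (a standalone C++ snippet for
illustration, not code from the patch), with 32-bit X and C = 8:

  #include <cassert>
  #include <cstdint>

  int main() {
    uint32_t X = 0x12345678;
    // bswap (shl X, 8) --> lshr (bswap X), 8
    assert(__builtin_bswap32(X << 8) == (__builtin_bswap32(X) >> 8));
    // bswap (lshr X, 8) --> shl (bswap X), 8
    assert(__builtin_bswap32(X >> 8) == (__builtin_bswap32(X) << 8));
    return 0;
  }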
This is the backend version of D122010 and an alternative
suggested in D120648.
There's an extra check to make sure the shift amount is
valid that was not in the rough draft.
I'm not sure if there is a larger motivating case for RISCV (bug report?),
but the ARM diffs show a benefit from having a late version of the
transform (because we do not combine the loads in IR).
Differential Revision: https://reviews.llvm.org/D122655
This patch fixes a (seemingly very rare) crash during vector constant
folding introduced in D113300.
Normally, during legalization, if we create an illegally-typed node during
a failed attempt at constant folding, it's cleaned up before being
visited, due to it having no uses.
If, however, an illegally-typed node is created during one round of
legalization and isn't cleaned up, it's possible for a second round of
legalization to create new illegally-typed nodes which add extra uses to
the old illegal nodes. This means that we can end up visiting the old
nodes before they're known to be dead, at which point we crash.
I'm not happy about this fix. Creating illegal types at all seems like a
bad idea, but we all-too-often rely on illegal constants being
successfully folded and being fixed up afterwards. However, we can't
rely on constant folding actually happening, and we don't have a
foolproof way of peering into the future.
Perhaps the correct fix is to revisit the node-iteration order during
legalization, ensuring we visit all uses of nodes before the nodes
themselves. Or alternatively we could try and clean up dead nodes
immediately after failing constant folding.
Reviewed By: RKSimon
Differential Revision: https://reviews.llvm.org/D122382
This reverts commit 115b3ace36.
With this commit, the buildbot sanitizer-x86_64-linux-bootstrap-msan
started failing (build 10071). Reverted for investigation.
If we're not relying on the flag result, we can fold the constants together into the RHS immediate operand and set the LHS operand to zero, simplifying it for further folds.
We could do something similar if the flag result is in use and the constant fold doesn't affect it, but I don't have any real test cases for this yet.
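For illustration, a small standalone C++ model of the value computation
(the adc helper below is an assumption for exposition, not the X86
lowering itself): ignoring the flag result, adc computes LHS + RHS +
carry, so the two constants can be combined into the RHS immediate.

  #include <cassert>
  #include <cstdint>

  // Models only the value result of adc: LHS + RHS + carry-in.
  static uint64_t adc(uint64_t LHS, uint64_t RHS, bool Carry) {
    return LHS + RHS + (Carry ? 1 : 0);
  }

  int main() {
    for (bool Carry : {false, true})
      // adc(C1, C2, Carry) --> adc(0, C1 + C2, Carry)
      assert(adc(5, 7, Carry) == adc(0, 5 + 7, Carry));
    return 0;
  }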
As suggested by @davezarzycki on Issue #35256
Differential Revision: https://reviews.llvm.org/D122482
Based off the script from D103695, we were exaggerating the cost of the v2i64 comparison expansion by using instruction count instead of effective throughput.
In D122512, several masked patterns were added to support lowering of
vector-predicated float-to-int and int-to-float conversions. With the
introduction of these patterns, all of the old "unmasked" patterns are
matchable via the DAG post-process introduced in D118810, once the relevant
opcode entries are set up in the helper table.
Locally this reduces the generated isel table by 4%.
Reviewed By: arcbbb
Differential Revision: https://reviews.llvm.org/D122637
This is an alternative to D122376. Rather than working around the
problem, this patch requires that struct return types in intrinsics
are anonymous/literal and adds auto-upgrade code to convert
existing uses of intrinsics with named struct types.
This ensures that the mapping between intrinsic name and
intrinsic function type is actually bijective, as it is supposed
to be.
This also fixes https://github.com/llvm/llvm-project/issues/37891.
Differential Revision: https://reviews.llvm.org/D122471
A new function 'getConstrainedIntrinsic' is added, which for any given
instruction returns the ID of the corresponding constrained intrinsic.
If there is no constrained counterpart for the instruction, or the
instruction is already a constrained intrinsic, the function returns zero.
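A possible usage sketch based on the description above (the exact
signature, namespace, and header are assumptions, not verified against
the patch):

  #include "llvm/IR/Instruction.h"
  #include "llvm/IR/Intrinsics.h"

  void inspect(const llvm::Instruction &I) {
    // Assumed signature; per the description, zero
    // (Intrinsic::not_intrinsic) means there is no constrained
    // counterpart or I is already a constrained intrinsic.
    llvm::Intrinsic::ID ID = llvm::getConstrainedIntrinsic(I);
    if (ID != llvm::Intrinsic::not_intrinsic) {
      // e.g. an fadd instruction maps to
      // llvm.experimental.constrained.fadd.
    }
  }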
Differential Revision: https://reviews.llvm.org/D69562
Masked compare and vmsbf/vmsif/vmsof are always tail agnostic, so we can
check the maskedoff value to decide the mask policy rather than adding
an additional policy operand.
Reviewed By: craig.topper, arcbbb
Differential Revision: https://reviews.llvm.org/D122456
This reverts commit 10fd2822b7.
I have a better implementation for those operations without the
additional policy operand.
Masked compare and vmsbf/vmsif/vmsof are always tail agnostic, so we can
assume an undef maskedoff is mask agnostic.
Differential Revision: https://reviews.llvm.org/D122455
Delyan Kratunov reported an issue where __builtin_memcmp is
not inlined into simple load/compare instructions.
This is a known issue. In the current state, __builtin_memcmp
will be converted to a memcmp call, which won't work for
bpf programs.
This patch adds support for expanding __builtin_memcmp with
actual loads and compares, up to the current maximum of 128 total loads.
The implementation is identical to PowerPC.
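For example, a bpf-side snippet like this sketch (names are
illustrative) can now be compiled to inline loads and compares instead
of an unsupported memcmp libcall:

  // Illustrative only: compares 8 bytes; with this patch the builtin is
  // expanded inline rather than lowered to a memcmp call.
  static const unsigned char Magic[8] = {1, 2, 3, 4, 5, 6, 7, 8};

  int matches_magic(const unsigned char *Buf) {
    return __builtin_memcmp(Buf, Magic, sizeof(Magic)) == 0;
  }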
Differential Revision: https://reviews.llvm.org/D122676
In some cases, like in the added test case, we can reach
selectInterleaveCount with loops that actually have a cost of 0.
Unfortunately a loop cost of 0 is also used to communicate that the cost
has not been computed yet. To resolve the crash, bail out if the cost
remains zero after computing it.
This seems like the best option, as there are multiple code paths that
return a cost of 0 to force a computation in selectInterleaveCount.
Computing the cost at multiple places up front there would unnecessarily
complicate the logic.
Fixes #54413.
On targets which don't allow "@" in unquoted identifiers, make sure we
don't emit them; otherwise, we can't parse our own output.
Differential Revision: https://reviews.llvm.org/D122516
DXIL is wrapped in a container format defined by the DirectX 11
specification. Codebases differ in calling this format either DXBC or
DXILContainer.
Since eventually we want to add support for DXBC as a target
architecture and the format is used by DXBC and DXIL, I've termed it
DXContainer here.
Most of the changes in this patch are just adding cases to switch
statements to address warnings.
Reviewed By: pete
Differential Revision: https://reviews.llvm.org/D122062
This patch moves the code to set the correct incoming block for the
backedge value to VPlan::execute.
When generating the phi node, the backedge value is temporarily added
using the pre-header as incoming block. The invalid phi node will be
fixed up during VPlan::execute after main VPlan code generation.
At the same time, the backedge value is also moved to the latch.
This change removes the requirement to create the latch block up-front
for VPWidenInductionPHIRecipe::execute, which in turn will enable
modeling the pre-header in VPlan.
Depends on D121617.
Reviewed By: Ayal
Differential Revision: https://reviews.llvm.org/D121618
Fleshing this out now allows me to rely on enum math to translate
values rather than having to translate the off cases.
I should have added this in the first pass, but wasn't thinking about
it.
D113200 introduced an error where it was converting FP_TO_SI_SAT with a
multiply to a fixed-point floating point convert. The saturation
bitwidth needs to be equal to the floating point width, or else the
routine would truncate the result as opposed to saturating it.
Fixes #54601
In the generic cpu model, there are only the low 16 registers and few
32-bit instructions. CK801 is the cpu family with the fewest basic
features, like the generic model.
Add a test run and check for the generic cpu model in the original test
case to cover basic LLVM IR functionality.
According to the definition of canonical form, a formula is canonical
if, whenever the scale reg does not contain an addrec for loop L, none
of the bases contains an addrec for this loop.
The critical word here is "contains".
The current checker of canonical form checks not the "containing"
property but the "is" property; that is, it checks whether a register is
an addrec, not whether it contains one.
Fix the checker and the canonicalizing utility to follow the definition.
Without this fix, in the attached test the base formula looking as
reg((-1 * {0,+,8}<nuw><nsw><%bb2>)<nsw>) + 1*reg((8 * (%arg /u 8))<nuw>)
is considered canonical even though its base contains an addrec, and the
modified formula we want to insert,
reg({0,+,8}<nuw><nsw><%bb2>) + 1*reg((-8 * (%arg /u 8))),
is considered not canonical.
Reviewed By: mkazantsev
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D122457
This function now takes a uint64_t instead of an APInt. The caller
is responsible for masking the shift amount, extracting and inserting
into the KnownBits APInts, and inverting to compute zeros.
This is less code and a cleaner division of responsibilities.
We have the same code repeated in both callers, so sink it into the callee.
The motivation here isn't just code style; we can also defer the relatively expensive aliasing checks until the cheap structural preconditions have been validated. (e.g. Don't bother with aliasing checks if the source is not an alloca.) This helps compile time significantly.
If we vectorize e.g. a store, we leave around a bunch of getelementptrs for the individual scalar stores which we removed. We can go ahead and delete them as well.
This is purely for test output quality and readability. It should have no effect in any sane pipeline.
Differential Revision: https://reviews.llvm.org/D122493