Add a test case to verify that the expected code is generated for a
vector float gather, based on the patterns in tablegen, for the big-
and little-endian cases.
Patch by: Kamau Bridgeman
Differential Revision: https://reviews.llvm.org/D69443
This reapplies c0f6ad7d1f with an
additional fix in test/DebugInfo/X86/constant-loclist.ll, which had a
slightly different output on Windows targets. The test now accounts for
this difference.
The original commit message follows.
Summary:
As discussed in D70081, this adds the ability to dump section
names/indices to the location list dumper. It does this by moving the
range specific logic from DWARFDie.cpp:dumpRanges into the
DWARFAddressRange class.
The trickiest part of this patch is the backflip in the meanings of the
two dump flags for the location list sections.
The dumping of "raw" location list data is now controlled by the
"DisplayRawContents" flag. This frees up the "Verbose" flag to be used
to control whether we print the section index. Additionally, the
DisplayRawContents flag is set for section-based dumps whenever the
--verbose option is passed, but this is not done for the "inline" dumps.
Also note that the index dumping currently does not work for the DWARF
v5 location lists, as the parser does not fill out the appropriate
fields. This will be done in a separate patch.
Reviewers: dblaikie, probinson, JDevlieghere, SouraVX
Subscribers: sdardis, hiraditya, jrtc27, atanasyan, arphaman, aprantl, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D70227
If you're writing C code using the ACLE MVE intrinsics that passes the
result of a vcmp as input to a predicated intrinsic, e.g.
mve_pred16_t pred = vcmpeqq(v1, v2);
v_out = vaddq_m(v_inactive, v3, v4, pred);
then clang's codegen for the compare intrinsic will create calls to
`@llvm.arm.mve.pred.v2i` to convert the output of `icmp` into an
`mve_pred16_t` integer representation, and then the next intrinsic
will call `@llvm.arm.mve.pred.i2v` to convert it straight back again.
This will be visible in the generated code as a `vmrs`/`vmsr` pair
that pointlessly moves the predicate value out of `p0` and back into it.
To prevent that, I've added InstCombine rules to remove round trips of
the form `v2i(i2v(x))` and `i2v(v2i(x))`. Also I've taught InstCombine
about the known and demanded bits of those intrinsics. As a result,
you now get just the generated code you wanted:
vpt.u16 eq, q1, q2
vaddt.u16 q0, q3, q4
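As a rough sketch of one direction of that fold (illustrative code with
an invented helper name, not the exact LLVM implementation):

  #include "llvm/IR/IntrinsicInst.h"
  using namespace llvm;

  // Fold @llvm.arm.mve.pred.v2i(@llvm.arm.mve.pred.i2v(x)) -> x:
  // the conversion round trip cancels out.
  static Value *foldPredRoundTrip(IntrinsicInst &II) {
    if (II.getIntrinsicID() != Intrinsic::arm_mve_pred_v2i)
      return nullptr;
    if (auto *Inner = dyn_cast<IntrinsicInst>(II.getArgOperand(0)))
      if (Inner->getIntrinsicID() == Intrinsic::arm_mve_pred_i2v)
        return Inner->getArgOperand(0);
    return nullptr;
  }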
Reviewers: ostannard, MarkMurrayARM, dmgreen
Reviewed By: dmgreen
Subscribers: kristof.beyls, hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D70313
Provides support for using r6-r11 as globally scoped
register variables. This requires a -ffixed-rN flag
in order to reserve rN against general allocation.
If, for a given global register variable (GRV) declaration, the
corresponding flag is not found, or the register in question is the
target's frame pointer, we fail with a diagnostic.
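A hypothetical usage sketch (file and variable names invented): with
-ffixed-r6 on the command line, e.g.
clang --target=arm-none-eabi -ffixed-r6 -c grv.c, r6 is reserved and the
declaration below is accepted; without the flag it is diagnosed:

  // grv.c: bind a global to r6, which -ffixed-r6 keeps out of
  // general register allocation.
  register unsigned int saved_state asm("r6");

  unsigned int read_state(void) { return saved_state; }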
Differential Revision: https://reviews.llvm.org/D68862
Summary:
As discussed in D70081, this adds the ability to dump section
names/indices to the location list dumper. It does this by moving the
range specific logic from DWARFDie.cpp:dumpRanges into the
DWARFAddressRange class.
The trickiest part of this patch is the backflip in the meanings of the
two dump flags for the location list sections.
The dumping of "raw" location list data is now controlled by the
"DisplayRawContents" flag. This frees up the "Verbose" flag to be used
to control whether we print the section index. Additionally, the
DisplayRawContents flag is set for section-based dumps whenever the
--verbose option is passed, but this is not done for the "inline" dumps.
Also note that the index dumping currently does not work for the DWARF
v5 location lists, as the parser does not fill out the appropriate
fields. This will be done in a separate patch.
Reviewers: dblaikie, probinson, JDevlieghere, SouraVX
Subscribers: sdardis, hiraditya, jrtc27, atanasyan, arphaman, aprantl, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D70227
Summary:
This also adds testing of 32-bit V9 atomic lowering, splitting the
64-bit-only tests out into their own file.
Reviewers: venkatra, jyknight
Reviewed By: jyknight
Subscribers: hiraditya, fedor.sergeev, jfb, llvm-commits, glaubitz
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D69352
This doesn't handle softening the input type, but we don't handle
softening any of the strict nodes yet. Skipping that made it easy
to reuse an existing function for creating a libcall from a node
with a chain.
Currently, the PPCPreIncPrep pass changes a loop to update form and
updates all loads/stores with the same base accordingly. We can do more
for loads/stores with the same base, for example, converting them to
ds/dq form.
Reviewed by: jsji
Differential Revision: https://reviews.llvm.org/D67088
This only implements the non-dwo part, but loclistx is necessary to use
location lists in DWARF v5, so it's a precursor to that work, and it
generally reduces relocations (only using one reloc, then
indexes/relative offsets for all location list references) in non-split
DWARF.
In GlobalISel, LLVM IR 1-element vectors get lowered into scalars. As a
result, a shuffle vector may also produce a scalar.
This patch teaches the shuffle combiner how to deal with scalars when
they are in the destination type of a shuffle vector.
For now, we just support the easy case where this can be lowered to
a plain copy. For other cases, we leave the shuffle vector as is.
This type of IR is seen in O0 pipelines, e.g., as produced with
SingleSource/UnitTests/Vector/AArch64/aarch64_neon_intrinsics.c.
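As a tiny standalone model of that easy case (hypothetical helper, not
the combiner's actual code): with a scalar destination the shuffle mask
has a single entry, and when that entry denotes one of the (scalar)
sources the shuffle is just a copy:

  #include <optional>

  // Mask0: the single mask entry of a shuffle with a scalar result.
  // SrcNumElts: element count of the first source (1 once a 1-element
  // vector has been lowered to a scalar).
  // Returns which source to copy (0 or 1), or nullopt to keep the
  // shuffle vector as is.
  std::optional<int> shuffleScalarAsCopy(int Mask0, int SrcNumElts) {
    if (Mask0 == 0)
      return 0;           // plain copy of the first source
    if (Mask0 == SrcNumElts)
      return 1;           // plain copy of the second source
    return std::nullopt;  // leave the shuffle vector as is
  }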
rdar://problem/57198904
https://reviews.llvm.org/D70210
Previously:
* Due to the algorithm's sensitivity to gaps and extra instructions,
naming when diffing was often off by a few, making the diff unreadable
even for tests with 7 and 8 instructions respectively.
* Naming could change depending on the candidates (and the order of
picking candidates). If there was suddenly one extra instruction
somewhere, the entire subtree would be named completely differently.
* There was no consistent naming of similar instructions occurring in
different functions. If we try to do something like count the frequency
distribution of various differences across a suite, the above
sensitivity issues are going to produce poor results.

Instead:
Name each instruction based on the semantics of the instruction (a hash
of the opcode and operands). Essentially, a given instruction that
occurs in any module/function is named similarly (i.e., semantically).
This has some nice properties:
* It's easy to look at many instructions and just check the hash: if
they're named similarly, it's the same instruction. This makes it very
easy to spot the same instruction both multiple times and across many
functions (useful for frequency distributions).
* Naming is independent of traversal/candidates/depth of the graph.
There is no need to keep track of the last index/gaps/skip count, etc.
* No off-by-a-few issues in diffs. I've tried the old vs. new
implementation on files ranging from 30 to 700 instructions. In both
cases the old algorithm's diffs are a sea of red, whereas the semantic
version's diffs line up beautifully.
* The implementation of the main loop is simplified (a simple
iteration); there is no need to keep track of what has been visited.
* Collisions are handled just by incrementing a counter, giving names
roughly of the form bb[N]_hash_[CollisionCount].

Additionally, with the new implementation we can probably avoid hoisting
instructions to various places, as they'll likely be named the same,
with differences based only on the collision count (i.e., regardless of
whether an instruction is hoisted or not, or is close to its use or not,
it gets the same hash, so uses of the instruction stay identical except
for the collision count), which is very easy to spot visually.
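A minimal sketch of the naming scheme (invented helper, not the pass's
actual code; assumes operands are summarized as strings):

  #include <cstdint>
  #include <functional>
  #include <map>
  #include <string>
  #include <vector>

  // Name an instruction from a hash of its opcode and operand kinds;
  // identical hashes are disambiguated with a collision counter,
  // giving names roughly of the form bb[N]_hash_[CollisionCount].
  std::string semanticName(const std::string &Opcode,
                           const std::vector<std::string> &Operands,
                           std::map<uint64_t, unsigned> &Collisions) {
    uint64_t Hash = std::hash<std::string>{}(Opcode);
    for (const std::string &Op : Operands)
      Hash = Hash * 1099511628211ULL ^ std::hash<std::string>{}(Op);
    unsigned Count = Collisions[Hash]++; // same hash -> bump the counter
    return "bb" + std::to_string(Hash) + "_hash_" + std::to_string(Count);
  }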
Summary:
As well as vector/vector compare instructions, MVE also has a family
of comparisons taking a vector and a scalar, which compare every lane
of the vector against the same value. We generate those at isel time
using isel patterns that match `(ARMvcmp vector, (ARMvdup scalar))`.
This commit adds corresponding patterns for the operand-reversed form
`(ARMvcmp (ARMvdup scalar), vector)`, with condition codes swapped as
necessary. That way, we can still generate the vector/scalar compare
instruction if the IR happens to have been rearranged to put the
operands the other way round, which can happen in some optimization
phases. Previously, a vcmp the other way round was handled by emitting
a `vdup` instruction to explicitly replicate the scalar input into a
vector, and then doing a vector/vector comparison.
I haven't added a new test, because it turned out that several
existing tests were already exhibiting that failure mode. So just
updating the expected output in the existing MVE codegen tests
demonstrates what's been improved.
Reviewers: ostannard, MarkMurrayARM, dmgreen
Reviewed By: dmgreen
Subscribers: kristof.beyls, hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D70296
Summary:
This adds a visitLocationList function to the DWARF v4 location lists,
similar to what already exists for DWARF v5. It follows the approach
outlined in previous patches (D69672), where the parsed form is always
stored in the DWARF v5 format, which makes it easier for generic code to
be built on top of that. v4 location lists are "upgraded" during
parsing, and then this upgrade is undone while dumping.
Both "inline" and section-based dumping is rewritten to reuse the
existing "generic" location list dumper. This means that the output
format is consistent for all location lists (the only thing one needs to
implement is the function which prints the "raw" form of a location
list), and that debug_loc dumping correctly processes base address
selection entries, etc.
The previously existing debug_loc functionality (e.g.,
parseOneLocationList) is rewritten on top of the new API, but it is not
removed, as there is still code which uses it. This will be done in
follow-up patches, after I build the API to access the "interpreted"
location lists in a generic way (as that is what those users really
want).
Reviewers: dblaikie, probinson, JDevlieghere, aprantl, SouraVX
Subscribers: hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D69847
Introduce an IntImmLeaf version of the PatLeaf immZExt16 for 32-bit
immediates, and replace immZExt16 with imm32ZExt16 for andi, ori and
xori.
This keeps the same behavior for SDAG and allows GlobalISel's selectImpl
to select 'G_CONSTANT imm' + G_AND, G_OR, G_XOR into ANDi, ORi, XORi,
respectively, when the 32-bit imm satisfies the imm32ZExt16 predicate:
zero extending the 16 low bits of imm is equal to imm.
The large number of test changes comes from the zero extension of small
types, which is transformed into an 'and' with a bitmask in the
legalizer.
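In plain C++ terms, the predicate amounts to something like this
(illustrative only; the invented helper mirrors the tablegen predicate,
not its actual definition):

  #include <cstdint>

  // Zero-extending the low 16 bits of Imm reproduces Imm, i.e. Imm
  // fits in 16 unsigned bits.
  bool isImm32ZExt16(uint32_t Imm) { return Imm == (Imm & 0xFFFFu); }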
Differential Revision: https://reviews.llvm.org/D70185
Introduce an IntImmLeaf version of the PatLeaf immSExt16 for 32-bit
immediates, and replace immSExt16 with imm32SExt16 for addiu.
This keeps the same behavior for SDAG and allows GlobalISel's selectImpl
to select 'G_CONSTANT imm' + G_ADD into ADDIu when the 32-bit imm
satisfies the imm32SExt16 predicate: sign extending the 16 low bits of
imm is equal to imm.
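The sign-extending analogue, again as an illustrative C++ helper with an
invented name:

  #include <cstdint>

  // Sign-extending the low 16 bits of Imm reproduces Imm.
  bool isImm32SExt16(int32_t Imm) {
    return Imm == static_cast<int32_t>(static_cast<int16_t>(Imm));
  }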
Differential Revision: https://reviews.llvm.org/D70184
The usage of target boolean checks is overly inflexible, since sext
and zext of a compare are equally cheap. The choice is arbitrary, but
using 0/1 is to some degree the path of least resistance, since
that's what most targets use. This enables a few combines that don't
bother to support ZeroOrNegativeOneBooleanContent.
Summary: This is a bug fix for further issues in PR43585.
Reviewers: rnk, RKSimon, craig.topper, andrew.w.kaylor
Subscribers: hiraditya, llvm-commits, annita.zhang
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D70224
The conditional instructions that are translated to mux instructions
were deleted, but the iterators to these deleted instructions were still
being used later. This patch fixes that issue.
Summary:
The RISC-V backend used to generate `add <reg>, x0, <reg>` in a few
instances. It seems most places no longer generate this sequence.
This is semantically equivalent to `addi <reg>, <reg>, 0`, but the
latter has the advantage of being noted to be the canonical instruction
to be used for moves (which microarchitectures can and should recognise
as such).
The changed testcases use instruction aliases - `mv <reg>, <reg>` is an
alias for `addi <reg>, <reg>, 0`.
Reviewers: luismarques
Reviewed By: luismarques
Subscribers: hiraditya, asb, rbar, johnrusso, simoncook, apazos, sabuasal, niosHD, kito-cheng, shiva0217, jrtc27, MaskRay, zzheng, edward-jones, rogfer01, MartinMosbeck, brucehoult, the_o, rkruppe, PkmX, jocewei, psnobl, benna, Jim, s.egerton, pzheng, sameer.abuasal, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D70124
Summary: Removes CFI CFA directives that could incorrectly propagate
beyond the basic block they were intended for. Specifically, it removes
the epilogue CFI directives. See the branch_and_tail_call test for an
example of the issue. Should fix the stack unwinding issues caused by
the incorrect directives.
Reviewers: asb, lenary, shiva0217
Reviewed By: lenary
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D69723
During register coalescing, we update the live-intervals on-the-fly.
To do that we are in this strange mode where the live-intervals can
be slightly out-of-sync (more precisely they are forward looking)
compared to what the IR actually represents.
This happens because the register coalescer only updates the IR when
it is done with updating the live-intervals and it has to do it this
way because updating the IR on-the-fly would actually clobber some
information on what the live-ranges that are being updated look like.
This is problematic for updates that rely on the IR to accurately
represent the state of the live-ranges. Right now, we have only
one of those: stripValuesNotDefiningMask.
To reconcile this need for out-of-sync IR, this patch introduces a
new argument to LiveInterval::refineSubRanges that allows the code
doing the live range updates to reason about what the code will
look like after the coalescer has rewritten the registers.
Essentially this captures how a subregister index will be offset
to match its position in a new register class.
E.g., let's say we want to merge:
V1.sub1:<2 x s32> = COPY V2.sub3:<4 x s32>
We do that by choosing a class where sub1:<2 x s32> and sub3:<4 x s32>
overlap, i.e., by choosing a class where we can find "offset + 1 == 3".
Put differently we align V2's sub3 with V1's sub1:
V2: sub0 sub1 sub2 sub3
V1: <offset> sub0 sub1
This offset will look like a composed subregidx in the class:
V1.sub1:<2 x s32> = COPY V2.sub3:<4 x s32>
=> V1.(composed sub2 with sub1):<4 x s32> = COPY V2.sub3:<4 x s32>
Now if we didn't rewrite the uses and def of V1, all the checks for V1
need to account for this offset to match what the live intervals intend
to capture.
Prior to this patch, we would fail to recognize the uses and def of V1
and would end up with machine verifier errors: No live segment at def.
This could lead to miscompile as we would drop some live-ranges and
thus, miss some interferences.
For this problem to trigger, we need to reach stripValuesNotDefiningMask
while having a mismatch between the IR and the live-ranges (i.e.,
we have to apply a subreg offset to the IR.)
This requires the following three conditions:
1. An update of overlapping subreg lanes: e.g., dsub0 == <ssub0, ssub1>
2. An update with Tuple registers with a possibility to coalesce the
subreg index: e.g., v1.dsub_1 == v2.dsub_3
3. Subreg liveness enabled.
Condition 1 is what requires looking at the IR to decide what is alive
and what is not, i.e., calling stripValuesNotDefiningMask. Condition 2
is what creates the mismatch between the IR and the information the
coalescer maintains for the live-ranges.
None of the targets that currently use subreg liveness (i.e., the targets
that fulfill #3: Hexagon, AMDGPU, PowerPC, and SystemZ, IIRC) expose #1
and #2, so this patch also artificially enables subreg liveness for ARM
so that a nice test case can be attached.
RETA always implicitly uses LR, unlike RET which merely has an
alias that defaults it to LR.
Additionally, RETA implicitly uses SP as well, which it uses as
a discriminator to authenticate LR.
This isn't usually noticeable, because RET_ReallyLR is used in most
of the backend. However, the post-RA scheduler, if enabled, will
cause miscompiles if the imp-uses are missing.
While there, fix a typo in the lone affected testcase.
Summary: Removes CFI CFA directives that could incorrectly propagate
beyond the basic block they were intended for. Specifically, it removes
the epilogue CFI directives. See the branch_and_tail_call test for an
example of the issue. Should fix the stack unwinding issues caused by
the incorrect directives.
Reviewers: asb, lenary, shiva0217
Reviewed By: lenary
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D69723
This patch adds the ACLE intrinsics for all the MVE load and store
instructions not already handled by D69791. These ones don't need new
IR intrinsics, because they can be implemented in terms of standard
LLVM IR constructions.
Some of the load and store instructions access less than 128 bits of
memory, sign/zero extending each value to a wider vector lane on load
or truncating it on store. These are represented in IR by a load of a
shorter vector followed by a zext/sext, and conversely, a trunc
followed by a short store. Existing ISel patterns already recognize
those combinations and turn them into the right MVE instructions.
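For example (a minimal sketch; the function name is invented, but
vldrbq_s16 is one of the ACLE intrinsics in this family):

  #include <arm_mve.h>

  // Widening load: reads eight bytes and sign-extends each one into a
  // 16-bit lane. In IR this is a load of <8 x i8> followed by a sext
  // to <8 x i16>, which ISel matches to a single vldrb.s16.
  int16x8_t load_widen(const int8_t *p) { return vldrbq_s16(p); }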
The predicated forms of all these instructions are represented in the
same way, except that the ordinary load/store operation is replaced
with the existing intrinsics @llvm.masked.{load,store}. These are
currently only code-generated as predicated MVE load/store
instructions if you give LLVM the `-enable-arm-maskedldst` option; so
I've done that in the LLVM codegen test. When we make that the
default, that option can be removed.
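A corresponding predicated sketch (again with an invented function name;
vldrbq_z_s16 is the zero-predicated ACLE form):

  #include <arm_mve.h>

  // Inactive lanes become zero (the _z suffix). In IR this becomes,
  // roughly, @llvm.masked.load on the narrow vector type followed by
  // the same sign-extension as the unpredicated form.
  int16x8_t load_widen_z(const int8_t *p, mve_pred16_t pred) {
    return vldrbq_z_s16(p, pred);
  }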
In the Tablegen backend, I've had to add a handful of extra support
features:
* We need to be able to make clang::Address objects out of a
pointer and an alignment (previously we only needed these when the
user passed us an existing one).
* We can now specify vector types that aren't 128 bits wide (for use
in those intermediate values in IR); the parametrized type system
can make one starting from two existing vector types (using the lane
count of one and the element type of the other).
* I've added support for code generation of pointer casts, and for
specifying LLVM types as operands to IRBuilder operations (for zext
and sext, though I think they'll come in useful again).
* Now not all IR construction operations need to be specified as
Builder.CreateFoo; some don't involve a Builder at all, and one
passes it as a parameter to a tiny static helper function in
CGBuiltin.cpp.
Reviewers: ostannard, MarkMurrayARM, dmgreen
Subscribers: kristof.beyls, cfe-commits, llvm-commits
Tags: #clang, #llvm
Differential Revision: https://reviews.llvm.org/D70088
The ldi.fmt instruction can be considered cheap enough to avoid the
spill and restore of the value it produces, since it's loaded from an
immediate.
Differential Revision: https://reviews.llvm.org/D69898
When a 64-bit triple is used, emit an error if the CPU only supports
32-bit code.
Patch by Miloš Stojanović.
Differential Revision: https://reviews.llvm.org/D70018