This reapplies c0f6ad7d1f with an
additional fix in test/DebugInfo/X86/constant-loclist.ll, which had a
slightly different output on Windows targets. The test now accounts for
this difference.
The original commit message follows.
Summary:
As discussed in D70081, this adds the ability to dump section
names/indices to the location list dumper. It does this by moving the
range specific logic from DWARFDie.cpp:dumpRanges into the
DWARFAddressRange class.
The trickiest part of this patch is the backflip in the meanings of the
two dump flags for the location list sections.
The dumping of "raw" location list data is now controlled by the
"DisplayRawContents" flag. This frees up the "Verbose" flag to be used
to control whether we print the section index. Additionally, the
DisplayRawContents flag is set for section-based dumps whenever the
--verbose option is passed, but this is not done for the "inline" dumps.
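To make the interaction concrete, here is a minimal sketch of how the two
flags are meant to combine (illustrative only; the struct and helper names
are invented, not the actual dumper API):

  struct DumpOpts {
    bool DisplayRawContents = false; // dump the "raw" location list data
    bool Verbose = false;            // print the section name/index
  };

  // Section-based dump: --verbose also turns on the raw contents.
  DumpOpts makeSectionDumpOpts(bool CLVerbose) {
    return {/*DisplayRawContents=*/CLVerbose, /*Verbose=*/CLVerbose};
  }

  // Inline dump (inside a DIE): raw contents stay off even under --verbose.
  DumpOpts makeInlineDumpOpts(bool CLVerbose) {
    return {/*DisplayRawContents=*/false, /*Verbose=*/CLVerbose};
  }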
Also note that the index dumping currently does not work for the DWARF
v5 location lists, as the parser does not fill out the appropriate
fields. This will be done in a separate patch.
Reviewers: dblaikie, probinson, JDevlieghere, SouraVX
Subscribers: sdardis, hiraditya, jrtc27, atanasyan, arphaman, aprantl, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D70227
* Implements scalable size queries for MVTs, split out from D53137.
* Contains a fix for FindMemType to avoid using a scalable vector type
to contain non-scalable types.
* Adds explicit casts in several places where implicit integer sign
changes or promotion from 32 to 64 bits caused problems.
* CodeGenDAGPatterns will treat scalable and non-scalable vector types
as different.
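As a rough toy model of the size-query semantics the points above rely on
(illustrative only; not the actual MVT API), a scalable size is a known
minimum plus a flag saying the real size is that minimum times a runtime
factor (vscale):

  #include <cassert>
  #include <cstdint>

  struct ScalableSize {
    uint64_t MinBits; // known-minimum size in bits
    bool Scalable;    // true if the real size is MinBits * vscale
  };

  bool sameSize(ScalableSize A, ScalableSize B) {
    return A.MinBits == B.MinBits && A.Scalable == B.Scalable;
  }

  int main() {
    ScalableSize V4i32{128, false};  // <4 x i32>
    ScalableSize NxV4i32{128, true}; // <vscale x 4 x i32>
    // Scalable and non-scalable types are treated as different even when
    // their minimum sizes match -- e.g. FindMemType must not return a
    // scalable type to hold a non-scalable one.
    assert(!sameSize(V4i32, NxV4i32));
    return 0;
  }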
Reviewers: greened, cameron.mcinally, sdesmalen, rovka
Reviewed By: rovka
Differential Revision: https://reviews.llvm.org/D66871
If you're writing C code using the ACLE MVE intrinsics that passes the
result of a vcmp as input to a predicated intrinsic, e.g.
mve_pred16_t pred = vcmpeqq(v1, v2);
v_out = vaddq_m(v_inactive, v3, v4, pred);
then clang's codegen for the compare intrinsic will create calls to
`@llvm.arm.mve.pred.v2i` to convert the output of `icmp` into an
`mve_pred16_t` integer representation, and then the next intrinsic
will call `@llvm.arm.mve.pred.i2v` to convert it straight back again.
This will be visible in the generated code as a `vmrs`/`vmsr` pair
that moves the predicate value pointlessly out of `p0` and back into it again.
To prevent that, I've added InstCombine rules to remove round trips of
the form `v2i(i2v(x))` and `i2v(v2i(x))`. Also I've taught InstCombine
about the known and demanded bits of those intrinsics. As a result,
you now get just the generated code you wanted:
vpt.u16 eq, q1, q2
vaddt.u16 q0, q3, q4
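The round-trip rule itself is conceptually just cancellation. Here is a
hedged sketch of one direction, written against InstCombine-style LLVM APIs
(not the exact code from the patch):

  #include "llvm/IR/IntrinsicInst.h"
  using namespace llvm;

  // If II is @llvm.arm.mve.pred.v2i applied to the result of
  // @llvm.arm.mve.pred.i2v, the pair cancels: return the original i16
  // predicate value. The i2v(v2i(x)) direction is symmetric.
  static Value *foldPredRoundTrip(IntrinsicInst &II) {
    if (II.getIntrinsicID() != Intrinsic::arm_mve_pred_v2i)
      return nullptr;
    if (auto *Inner = dyn_cast<IntrinsicInst>(II.getArgOperand(0)))
      if (Inner->getIntrinsicID() == Intrinsic::arm_mve_pred_i2v)
        return Inner->getArgOperand(0);
    return nullptr;
  }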
Reviewers: ostannard, MarkMurrayARM, dmgreen
Reviewed By: dmgreen
Subscribers: kristof.beyls, hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D70313
Provides support for using r6-r11 as globally scoped
register variables. This requires a -ffixed-rN flag
in order to reserve rN against general allocation.
If for a given GRV declaration the corresponding flag
is not found, or the register in question is the
target's frame pointer, we fail with a diagnostic.
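For illustration, a global register variable declaration of the sort this
enables (the flag shown is just one instance of -ffixed-rN):

  // Compile with, e.g.: clang --target=arm-none-eabi -ffixed-r7 -c foo.c
  // Without -ffixed-r7 (or if r7 is the target's frame pointer), the GRV
  // declaration below is diagnosed.
  register unsigned int RegCounter asm("r7");

  unsigned int next(void) { return ++RegCounter; }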
Differential Revision: https://reviews.llvm.org/D68862
Summary:
This also adds testing of 32-bit V9 atomic lowering, splitting the
64-bit-only tests out into their own file.
Reviewers: venkatra, jyknight
Reviewed By: jyknight
Subscribers: hiraditya, fedor.sergeev, jfb, llvm-commits, glaubitz
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D69352
These were both recently added. While the call to GetSoftenedFloat
is a little more optimal, we don't do it in the expand for
FP_TO_SINT/UINT so there's no real reason to do it here. This
avoids a FIXME for strict fp.
Split out a helper function for the individual call optimizations and
skip useless calls to it (where the instruction is not an ARC
intrinsic). Besides reducing indentation (and possibly speeding up
compile time in some small way), this prepares for an upcoming patch that
will add additional calls and expand out the `switch`.
This doesn't handle softening the input type, but we don't handle
softening any of the strict nodes yet. Skipping that made it easy
to reuse an existing function for creating a libcall from a node
with a chain.
Currently, the PPCPreIncPrep pass changes a loop to update form and updates all
loads/stores with the same base accordingly. We can do more for loads/stores
with the same base, for example, convert them to ds/dq form.
Reviewed By: jsji
Differential Revision: https://reviews.llvm.org/D67088
Before this we were emitting a bitcast to integer from the lowering
code that itself will need to be legalized. By calling
GetSoftenedFloat we get the integer conversion in one step without
needing to relegalize a bitcast.
This code isn't exercised, and was in the wrong place. If we need
this, we would need to promote the type before figuring out which
libcall to use.
I'm choosing to remove it rather than fix it, since we don't
support PromoteFloat for LRINT/LROUND/LLRINT/LLROUND when the
result type is legal, so I don't see much reason to support it
for the case where the result type isn't legal.
These two functions were the same except for which libcall gets
emitted. Just merge them into one.
This is prep work for some other work including strict fp support.
Currently we miss folds with undef and identity values for binary ops
that do not fold to undef in general.
We can generalize the identity simplifications and do them before
checking for undef in particular.
Alive checks:
* OR - https://rise4fun.com/Alive/8OsK
* AND - https://rise4fun.com/Alive/e3tE
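A toy scalar model of the reordering (illustrative only, not the actual
code): run the identity folds before the undef folds, so e.g. `or X, 0`
simplifies to X even when X is undef:

  #include <cstdint>
  #include <optional>

  struct Val {
    bool IsUndef = false;
    std::optional<uint64_t> Const; // set if the value is a known constant
  };

  std::optional<Val> simplifyOr(Val X, Val Y) {
    // Identity first: or X, 0 -> X. This now fires even if X is undef.
    if (Y.Const && *Y.Const == 0)
      return X;
    // Undef-specific fold: or X, undef -> -1 (undef may be chosen all-ones).
    if (X.IsUndef || Y.IsUndef)
      return Val{false, ~uint64_t(0)};
    return std::nullopt;
  }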
This will also allow us to remove some now-redundant cases throughout
the function, but I would like to do this as a follow-up. That should make
tracking down potential issues easier.
Reviewers: spatel, RKSimon, lebedev.ri
Reviewed By: spatel
Differential Revision: https://reviews.llvm.org/D70169
Similar to/extension of D70208 (rGee0882bdf866), but this one
may finally allow closing the motivating bugs.
This is another step towards having FMF apply only to FP values
rather than those + fcmp. See PR38086 for one of the original
discussions/motivations:
https://bugs.llvm.org/show_bug.cgi?id=38086
And the test here is derived from PR39535:
https://bugs.llvm.org/show_bug.cgi?id=39535
Currently, we lose FMF when converting any phi to a select in
SimplifyCFG. There are a small number of similar changes needed
to fix this within SimplifyCFG, so it should be quick to patch
this pass up.
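A hedged sketch of the kind of fix involved (not the exact SimplifyCFG
code; it assumes an FP-typed phi PN being replaced by a select):

  #include "llvm/IR/Instructions.h"
  #include "llvm/IR/Operator.h"
  using namespace llvm;

  static Value *phiToSelect(PHINode *PN, Value *Cond, Value *TVal,
                            Value *FVal, Instruction *InsertPt) {
    SelectInst *Sel =
        SelectInst::Create(Cond, TVal, FVal, PN->getName(), InsertPt);
    // The fix: carry the phi's fast-math flags onto the new select
    // instead of dropping them.
    if (isa<FPMathOperator>(Sel))
      Sel->setFastMathFlags(PN->getFastMathFlags());
    return Sel;
  }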
FMF was extended to select and phi with:
D61917
D67564
Working on top of D69252, this adds canonicalisation patterns for ssub.with.overflow to ssub.sat.
Differential Revision: https://reviews.llvm.org/D69753
This adds to D69245, adding extra signed patterns for folding from a
sadd_with_overflow to a sadd_sat. These are more complex than the
unsigned patterns, as the overflow can occur in either direction.
For the add case, positive overflow can only occur if both of the
values are positive (and likewise negative overflow only if both are
negative), so there is an extra select on whether to use the positive or
negative overflow limit.
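A scalar model of the semantics being matched (illustrative, using the
GCC/Clang __builtin_add_overflow builtin): on overflow, the clamp direction
follows the shared sign of the operands, hence the select:

  #include <cstdint>
  #include <limits>

  int32_t sadd_sat(int32_t X, int32_t Y) {
    int32_t Sum;
    if (!__builtin_add_overflow(X, Y, &Sum))
      return Sum;
    // Positive overflow requires both operands positive; negative overflow
    // requires both negative. So either operand's sign selects the limit.
    return X < 0 ? std::numeric_limits<int32_t>::min()
                 : std::numeric_limits<int32_t>::max();
  }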
Differential Revision: https://reviews.llvm.org/D69252
This patch adds support for the DW_AT_alignment [DWARF5] attribute, emitted
on a typedef DIE when explicit alignment is specified.
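For example (illustrative), a typedef whose DIE should now carry the
attribute:

  // The typedef DIE for AlignedInt gets a DW_AT_alignment attribute
  // reflecting the explicit alignment below.
  typedef int AlignedInt __attribute__((aligned(16)));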
Patch by Awanish Pandey <Awanish.Pandey@amd.com>
Reviewers: aprantl, dblaikie, jini.susan.george, SouraVX, alok,
deadalinx
Differential Revision: https://reviews.llvm.org/D70111
In MCObjectStreamer, when there is no current fragment, initially
symbols are created in a "pending" state and assigned to a dummy
empty fragment.
Previously, they were not being assigned an offset, and thus
evaluateAbsolute would fail if trying to evaluate an expression 'a -
b', where both 'a' and 'b' were in this pending state.
Also slightly refactored the EmitLabel overload which takes an
MCFragment for clarity.
Fixes: https://llvm.org/PR41825
Differential Revision: https://reviews.llvm.org/D70062
It was failing with
PerfJITEventListener.cpp:489:7: error: 'ManagedStatic' in namespace 'llvm' does not name a template type
llvm::ManagedStatic<PerfJITEventListener> PerfListener;
Summary:
We should check for the same instruction class before checking whether they
have the same base address, else we might iterate out of bounds of a
MachineInstr's operand list. The InstClass check is also cheaper.
This was introduced in SVN r373630.
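A self-contained toy illustrating the fixed ordering (not the actual
SILoadStoreOptimizer code):

  #include <vector>

  enum class InstClass { Load, Store, Other };

  struct Inst {
    InstClass Class;
    std::vector<int> Operands; // operand layout depends on Class
  };

  // BaseOpIdx is only meaningful when both instructions have the same
  // class, so the (cheaper) class check must come first; otherwise
  // Operands[BaseOpIdx] can read past the end of the shorter operand list.
  bool sameBase(const Inst &A, const Inst &B, unsigned BaseOpIdx) {
    if (A.Class != B.Class)
      return false;
    return A.Operands[BaseOpIdx] == B.Operands[BaseOpIdx];
  }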
Reviewers: tstellar
Subscribers: arsenm, kzhuravl, jvesely, wdng, yaxunl, dstuttard, tpr, t-tye, hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D68690
This only implements the non-dwo part, but loclistx is necessary to use
location lists in DWARFv5, so it's a precursor to that work - and
generally reduces relocations (only using one reloc, then
indexes/relative offsets for all location list references) in non-split
DWARF.
In GISel, LLVM IR 1-element vectors get lowered into scalars. As a
result, a shuffle vector may also produce a scalar.
This patch teaches the shuffle combiner how to deal with scalars when
they are the destination type of a shuffle vector.
For now, we just support the easy case, where this can be lowered to
a plain copy. For other cases, we leave the shuffle vector as is.
This type of IR is seen in O0 pipelines, e.g. as produced with
SingleSource/UnitTests/Vector/AArch64/aarch64_neon_intrinsics.c.
rdar://problem/57198904
It was added in 2014 in 732e0aa9fb with one use in Scalarizer.cpp.
That one use was then removed when porting to the new pass manager in
2018 in b6f76002d9.
While the RFC and the desire to get off of static initializers for
cl::opt all still stand, this code is now dead, and I think we should
delete this code until someone is ready to do the migration.
There were many clients of CommandLine.h that were getting it transitively
through LLVMContext.h, so I cleaned that up in 4c1a1d3cf9.
Reviewers: beanz
Differential Revision: https://reviews.llvm.org/D70280
This is another step towards having FMF apply only to FP values
rather than those + fcmp. See PR38086 for one of the original
discussions/motivations:
https://bugs.llvm.org/show_bug.cgi?id=38086
And the test here is derived from PR39535:
https://bugs.llvm.org/show_bug.cgi?id=39535
Currently, we lose FMF when converting any phi to a select in
SimplifyCFG. There are a small number of similar changes needed
to fix this within SimplifyCFG, so it should be quick to patch
this pass up.
FMF was extended to select and phi with:
D61917
D67564
Differential Revision: https://reviews.llvm.org/D70208
This patch makes LLVM compatible with GAS. It accepts the `la` pseudo
instruction on architectures with 64-bit pointers and just shows a warning.
Differential Revision: https://reviews.llvm.org/D70202
The O32 ABI uses relocations in REL format, so a relocation's addend is
written in place. The R_MIPS_JALR relocation points to the `jalr`
instruction, which has no field in which to store the relocation addend,
so it is impossible to save a non-zero "offset". This patch blocks
emission of `R_MIPS_JALR` relocations in such cases.
Differential Revision: https://reviews.llvm.org/D70201
Ensure the stride and trip count have the same type before multiplying them during reference cost calculation.
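A hedged sketch of the fix (illustrative; written against ScalarEvolution's
API, not necessarily the exact patch):

  #include "llvm/Analysis/ScalarEvolution.h"
  using namespace llvm;

  static const SCEV *refCost(ScalarEvolution &SE, const SCEV *Stride,
                             const SCEV *TripCount) {
    // Unify the types before forming the product; multiplying SCEVs of
    // different integer widths asserts.
    if (Stride->getType() != TripCount->getType())
      Stride = SE.getNoopOrAnyExtend(Stride, TripCount->getType());
    return SE.getMulExpr(Stride, TripCount);
  }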
Reviewed By: jdoerfert
Differential Revision: https://reviews.llvm.org/D70192
This is a patch to support D66328, which was reverted until this lands.
It enables a compiler-rt test that previously failed with D66328.
Differential Revision: https://reviews.llvm.org/D67283
This patch introduces a function pass to inject the scalar-to-vector
mappings stored in the TargetLibraryInfo (TLI) into the Vector
Function ABI (VFABI) variants attribute.
The test exercises the injection for three vector libraries supported
by the TLI (Accelerate, SVML, MASSV).
The pass does not invalidate any of the analyses associated with the
function.
Differential Revision: https://reviews.llvm.org/D70107
https://reviews.llvm.org/D70210
Previously:
* Due to the sensitivity of the algorithm to gaps and extra instructions,
naming when diffing was often off by a few, making the diff unreadable
even for tests with only 7 and 8 instructions respectively.
* Naming could change depending on the candidates (and the order of picking
candidates). One extra instruction somewhere could cause the entire
subtree to be named completely differently.
* There was no consistent naming of similar instructions occurring in
different functions. If we try to do something like count the frequency
distribution of various differences across a suite, the above sensitivity
issues produce poor results.
Instead:
Name instructions based on the semantics of the instruction (a hash of the
opcode and operands). A given instruction occurring in any module/function
is then named similarly (i.e. semantically). This has some nice properties:
* It is easy to look at many instructions and just check the hash: if
they're named similarly, it's the same instruction. This makes it very
easy to spot the same instruction, both multiple times and across many
functions (useful for frequency distributions).
* Naming is independent of traversal/candidates/depth of the graph. There
is no need to keep track of the last index/gaps/skip count, etc.
* No off-by-a-few issues with diffs. I've tried the old vs. new
implementation on files ranging from 30 to 700 instructions. In both
cases the diffs are a sea of red with the old algorithm, whereas with
the semantic version they line up beautifully.
* The implementation of the main loop is simplified (simple iteration);
there is no need to keep track of what has and hasn't been visited.
* Collisions are handled by just incrementing a counter, giving names of
roughly the form bb[N]_hash_[CollisionCount].
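A toy sketch of the naming scheme (illustrative; the hash and formatting
details are made up, beyond the bb[N]_hash_[CollisionCount] shape noted
above):

  #include <cstdint>
  #include <functional>
  #include <map>
  #include <string>
  #include <vector>

  std::string nameInst(unsigned BBNum, const std::string &Opcode,
                       const std::vector<std::string> &Operands,
                       std::map<uint64_t, unsigned> &CollisionCounts) {
    // Hash the semantics: opcode plus operands. The same instruction hashes
    // the same in any function/module, independent of traversal order.
    uint64_t Hash = std::hash<std::string>{}(Opcode);
    for (const std::string &Op : Operands)
      Hash = Hash * 1000003ULL ^ std::hash<std::string>{}(Op);
    // Collisions just bump a counter.
    unsigned N = CollisionCounts[Hash]++;
    return "bb" + std::to_string(BBNum) + "_" + std::to_string(Hash) + "_" +
           std::to_string(N);
  }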
Additionally, with the new implementation we can probably avoid hoisting
instructions to various places, as they'll likely be named the same,
differing only in the collision count (i.e. regardless of whether the
instruction is hoisted or not, or close to its use or not, it gets the
same hash, so uses of the instruction are identical up to the collision
count), which is very easy to spot visually.