It was failing with
PerfJITEventListener.cpp:489:7: error: 'ManagedStatic' in namespace 'llvm' does not name a template type
llvm::ManagedStatic<PerfJITEventListener> PerfListener;
Summary:
We should check for the same instruction class before checking whether the
instructions have the same base address; otherwise we might iterate out of
bounds of a MachineInstr's operand list. The InstClass check is also cheaper.
This was introduced in SVN r373630.
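As a rough sketch of the ordering being described (the helper names below are hypothetical stand-ins, not the actual SILoadStoreOptimizer code), the cheap class check guards the operand access:
```cpp
#include "llvm/CodeGen/MachineInstr.h"
using namespace llvm;

// Hypothetical helpers, standing in for the pass's own classification and
// base-operand accessors.
enum class InstClass { Load, Store, Other };
InstClass getInstClass(const MachineInstr &MI);
const MachineOperand &getBaseOperand(const MachineInstr &MI);

bool sameBaseCandidates(const MachineInstr &A, const MachineInstr &B) {
  // Cheap check first: instructions of different classes may have different
  // operand layouts, so do not touch their operands yet.
  if (getInstClass(A) != getInstClass(B))
    return false;
  // Both instructions now have the layout the base-operand accessor expects,
  // so this comparison cannot index past the end of an operand list.
  return getBaseOperand(A).isIdenticalTo(getBaseOperand(B));
}
```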
Reviewers: tstellar
Subscribers: arsenm, kzhuravl, jvesely, wdng, yaxunl, dstuttard, tpr, t-tye, hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D68690
This only implements the non-dwo part, but loclistx is necessary to use
location lists in DWARFv5, so it's a precursor to that work - and
generally reduces relocations (only using one reloc, then
indexes/relative offsets for all location list references) in non-split
DWARF.
Summary:
Counters are stored as uint64_t in the coverage mapping, but
exporting in JSON requires signed integers. Clamp the values to the
smaller range to make the conversion safe.
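A minimal standalone sketch of the clamping described (not the exact coverage exporter code):
```cpp
#include <algorithm>
#include <cstdint>
#include <limits>

// Clamp an unsigned 64-bit counter into the range representable by a signed
// 64-bit JSON integer, so the conversion can never overflow.
int64_t clampCounter(uint64_t Count) {
  return static_cast<int64_t>(
      std::min<uint64_t>(Count, std::numeric_limits<int64_t>::max()));
}
```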
Reviewers: Dor1s, vsk
Reviewed By: Dor1s
Subscribers: llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D70200
In GISel, LLVM IR 1-element vectors get lowered into scalars. As a
result, a shuffle vector may also produce a scalar.
This patch teaches the shuffle combiner how to deal with scalars when
they are in the destination type of a shuffle vector.
For now, we just support the easy case where this can be lowered to
a plain copy. For other cases, we leave the shuffle vector as is.
This type of IR is seen in O0 pipelines, e.g., as produced with
SingleSource/UnitTests/Vector/AArch64/aarch64_neon_intrinsics.c.
rdar://problem/57198904
It was added in 2014 in 732e0aa9fb with one use in Scalarizer.cpp.
That one use was then removed when porting to the new pass manager in
2018 in b6f76002d9.
While the RFC and the desire to get off of static initializers for
cl::opt all still stand, this code is now dead, and I think we should
delete this code until someone is ready to do the migration.
There were many clients of CommandLine.h that were only including it
transitively through LLVMContext.h, so I cleaned that up in 4c1a1d3cf9.
Reviewers: beanz
Differential Revision: https://reviews.llvm.org/D70280
This is another step towards having FMF apply only to FP values
rather than those + fcmp. See PR38086 for one of the original
discussions/motivations:
https://bugs.llvm.org/show_bug.cgi?id=38086
And the test here is derived from PR39535:
https://bugs.llvm.org/show_bug.cgi?id=39535
Currently, we lose FMF when converting any phi to a select in
SimplifyCFG. There are a small number of similar changes needed to
correct this within SimplifyCFG, so it should be quick to patch this
pass up.
FMF was extended to select and phi with:
D61917
D67564
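Since FMF is now valid on selects and phis (per the patches above), a fix of this kind roughly amounts to copying the flags over when the select replaces the phi. A minimal sketch with a helper name of my own, not the actual SimplifyCFG change:
```cpp
#include "llvm/IR/Instructions.h"
#include "llvm/IR/Operator.h"
using namespace llvm;

// When a phi of floating-point values is rewritten into a select, carry the
// phi's fast-math flags over instead of dropping them.
void carryOverFMF(PHINode *PN, SelectInst *Sel) {
  if (isa<FPMathOperator>(PN) && isa<FPMathOperator>(Sel))
    Sel->copyFastMathFlags(PN);
}
```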
Differential Revision: https://reviews.llvm.org/D70208
This patch makes LLVM compatible with GAS: it accepts the `la` pseudo
instruction on architectures with 64-bit pointers and just shows a warning.
Differential Revision: https://reviews.llvm.org/D70202
The O32 ABI uses relocations in REL format, so a relocation's addend is
written in place. The R_MIPS_JALR relocation points to the `jalr` instruction,
which has no field to store the relocation addend, so a non-zero "offset"
cannot be saved. This patch blocks emission of `R_MIPS_JALR`
relocations in such cases.
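A standalone sketch of the rule (illustrative names, not the actual MipsELFObjectWriter code):
```cpp
#include <cstdint>

// With the REL format the addend is stored in the relocated field itself, and
// the jalr encoding has no room for one, so the R_MIPS_JALR hint can only be
// emitted when the offset is zero.
bool canEmitMipsJalrReloc(bool UsesRelFormat, int64_t Offset) {
  if (!UsesRelFormat)
    return true;      // RELA keeps the addend in the relocation entry
  return Offset == 0; // REL: nowhere to store a non-zero addend
}
```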
Differential Revision: https://reviews.llvm.org/D70201
Ensure the stride and trip count have the same type before multiplying them during reference cost calculation
Reviewed By: jdoefert
Differential Revision: https://reviews.llvm.org/D70192
This is a patch to support D66328, which was reverted until this lands.
Enable a compiler-rt test that previously failed with D66328.
Differential Revision: https://reviews.llvm.org/D67283
This patch introduces a function pass to inject the scalar-to-vector
mappings stored in the TargetLibraryInfo (TLI) into the Vector
Function ABI (VFABI) variants attribute.
The test is testing the injection for three vector libraries supported
by the TLI (Accelerate, SVML, MASSV).
The pass does not change any of the analyses associated with the
function.
Differential Revision: https://reviews.llvm.org/D70107
Similar to D46029 (ELF) and D70036 (COFF), but for MachO.
Note, when --strip-symbol (not implemented for MachO) is also specified,
--redefine-sym executes before --strip-symbol.
Reviewed By: jhenderson, seiya
Differential Revision: https://reviews.llvm.org/D70212
Allow call site parameter descriptions to reference spill slots. Spill
slots are not visible to high-level LLVM IR, so they can safely be
referenced during entry value evaluation (as they cannot be clobbered by
some other function).
This gives a 5% increase in the number of call site parameter DIEs in an
LTO x86_64 build of the xnu kernel.
This reverts commit eb4c98ca3d (
[DebugInfo] Exclude memory location values as parameter entry values),
effectively reintroducing the portion of D60716 which dealt with memory
locations (authored by Djordje, Nikola, Ananth, and Ivan).
This partially addresses llvm.org/PR43343. However, not all memory
operands forwarded to callees live in spill slots. In the xnu build, it
may be possible to use an escape analysis to increase the number of call
site parameters by another 15% (more details in PR43343).
Differential Revision: https://reviews.llvm.org/D70254
https://reviews.llvm.org/D70210
Previously:
- Due to the algorithm's sensitivity to gaps and extra instructions, naming
  when diffing is often off by a few, which makes the diff unreadable even
  for tests with only 7 and 8 instructions respectively.
- Naming can change depending on the candidates (and the order in which
  candidates are picked). If there is one extra instruction somewhere,
  suddenly the entire subtree is named completely differently.
- There is no consistent naming of similar instructions which occur in
  different functions. If we try to do something like count the frequency
  distribution of various differences across the suite, the above
  sensitivity issues produce poor results.
Instead:
Name instructions based on the semantics of the instruction (a hash of the
opcode and operands). Essentially, a given instruction that occurs in any
module/function will be named similarly (i.e., semantically). This has
some nice properties:
- It is easy to look at many instructions and just check the hash: if they
  are named similarly, then it is the same instruction. This makes it very
  easy to spot the same instruction both multiple times and across many
  functions (useful for frequency distributions).
- Naming is independent of traversal/candidates/depth of the graph. There is
  no need to keep track of the last index/gaps/skip counts, etc.
- No off-by-a-few issues in diffs. I have tried the old vs. new
  implementation on files ranging from 30 to 700 instructions. With the old
  algorithm the diffs are a sea of red in both cases, whereas with the
  semantic version the diffs line up beautifully in both cases.
- The main loop has a simplified implementation (simple iteration), with no
  need to keep track of what has and has not been visited.
- Collisions are handled just by incrementing a counter, giving names of
  roughly the form bb[N]_hash_[CollisionCount] (sketched below).
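A rough standalone sketch of this naming scheme (simplified; the real implementation hashes and formats differently and handles more cases):
```cpp
#include <cstddef>
#include <functional>
#include <string>
#include <unordered_map>
#include <vector>

// Build a semantic name from a hash of the opcode and operand names, and
// disambiguate collisions with a counter: roughly bb[N]_hash_[CollisionCount].
std::string semanticName(unsigned BBIndex, const std::string &Opcode,
                         const std::vector<std::string> &Operands,
                         std::unordered_map<std::size_t, unsigned> &Seen) {
  std::string Key = Opcode;
  for (const std::string &Op : Operands)
    Key += "," + Op;
  std::size_t Hash = std::hash<std::string>{}(Key);
  unsigned Collisions = Seen[Hash]++;
  std::string Name =
      "bb" + std::to_string(BBIndex) + "_" + std::to_string(Hash);
  if (Collisions)
    Name += "_" + std::to_string(Collisions);
  return Name;
}
```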
Additionally, with the new implementation we can probably avoid hoisting
instructions to various places, as they will likely be named the same, so the
only difference is the collision count (i.e., regardless of whether the
instruction is hoisted or not, or is close to its use or not, it gets the same
hash, and uses of the instruction should be identical apart from the collision
count), which is very easy to spot visually.
Summary:
As well as vector/vector compare instructions, MVE also has a family
of comparisons taking a vector and a scalar, which compare every lane
of the vector against the same value. We generate those at isel time
using isel patterns that match `(ARMvcmp vector, (ARMvdup scalar))`.
This commit adds corresponding patterns for the operand-reversed form
`(ARMvcmp (ARMvdup scalar), vector)`, with condition codes swapped as
necessary. That way, we can still generate the vector/scalar compare
instruction if the IR happens to have been rearranged to put the
operands the other way round, which can happen in some optimization
phases. Previously, a vcmp the other way round was handled by emitting
a `vdup` instruction to //explicitly// replicate the scalar input into
a vector, and then doing a vector/vector comparison.
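The condition-code swap for reversed operands is the standard mirroring; a small standalone illustration (not the target's actual predicate enum):
```cpp
// Swapping the operands of a compare requires mirroring the condition:
// "a > b" is "b < a", "a >= b" is "b <= a"; equality tests are unaffected.
enum class Cond { EQ, NE, GT, GE, LT, LE };

Cond swapOperandsCond(Cond C) {
  switch (C) {
  case Cond::GT: return Cond::LT;
  case Cond::GE: return Cond::LE;
  case Cond::LT: return Cond::GT;
  case Cond::LE: return Cond::GE;
  default:       return C; // EQ and NE are symmetric
  }
}
```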
I haven't added a new test, because it turned out that several
existing tests were already exhibiting that failure mode. So just
updating the expected output in the existing MVE codegen tests
demonstrates what's been improved.
Reviewers: ostannard, MarkMurrayARM, dmgreen
Reviewed By: dmgreen
Subscribers: kristof.beyls, hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D70296
ValueInfo has a user-defined 'operator bool' which allows an incorrect implicit
conversion to GlobalValue::GUID (which is unsigned long). This causes bugs which
are hard to track down, so the operator should be removed in the future.
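A self-contained illustration of the hazard (with a stand-in type, not the real ThinLTO ValueInfo): a non-explicit operator bool participates in ordinary integer conversions.
```cpp
#include <cstdint>

// Stand-in for ValueInfo: the non-explicit conversion operator is the problem.
struct FakeValueInfo {
  operator bool() const { return true; }
};

int main() {
  FakeValueInfo VI;
  // Compiles silently: user-defined conversion to bool, then promotion to an
  // integer, so "GUID" ends up as 1 rather than a real GUID.
  std::uint64_t GUID = VI;
  return GUID == 1 ? 0 : 1;
}
```
Marking the conversion operator explicit (or removing it altogether) turns the assignment above into a compile-time error.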
Enumerations that describe rounding mode and exception behavior were
defined inside ConstrainedFPIntrinsic. It makes sense to use the same
definitions to represent the same properties in other cases, not only
in constrained intrinsics. However, this was inconvenient, as it required
including the constrained intrinsic definitions even when they were not
needed. The long scope prefix also reduced readability.
This change moves these definitions into the namespace llvm::fp.
No functional changes.
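A hedged sketch of the shape of the moved definitions (the enumerator names here are illustrative guesses; see the patch for the exact ones):
```cpp
namespace llvm {
namespace fp {

// Rounding behaviour an FP operation may assume or require.
enum RoundingMode { rmDynamic, rmToNearest, rmDownward, rmUpward, rmTowardZero };

// How strictly FP exceptions must be honoured around an operation.
enum ExceptionBehavior { ebIgnore, ebMayTrap, ebStrict };

} // namespace fp
} // namespace llvm
```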
Differential Revision: https://reviews.llvm.org/D69552
Summary:
Using c-index-test is fragile since it does not parse all the clang
arguments that are used in the RUN: line. This can result in incorrect
mangled names that do not match any of the generated IR.
For example, macOS triples include a leading underscore (which was handled
with a hack in the current script). For the CHERI target we have added
new qualifiers which affect C++ name mangling, but these are not handled
correctly by update_cc_test_checks since it parses the source file with the
host triple and ignores the -triple= argument passed to clang -cc1.
Using the new feature of including the mangled name in the JSON AST dump
(see D69564), we can parse the output of the RUN: command with
"-fsyntax-only -ast-dump=json" appended.
This should make the script less fragile and also fork one fewer process.
Reviewers: MaskRay, xbolva00
Reviewed By: MaskRay
Subscribers: llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D69565
Summary:
This adds a visitLocationList function to the DWARF v4 location lists,
similar to what already exists for DWARF v5. It follows the approach
outlined in previous patches (D69672), where the parsed form is always
stored in the DWARF v5 format, which makes it easier for generic code to
be built on top of that. v4 location lists are "upgraded" during
parsing, and then this upgrade is undone while dumping.
Both "inline" and section-based dumping are rewritten to reuse the
existing "generic" location list dumper. This means that the output
format is consistent for all location lists (the only thing one needs to
implement is the function which prints the "raw" form of a location
list), and that debug_loc dumping correctly processes base address
selection entries, etc.
The previously existing debug_loc functionality (e.g.,
parseOneLocationList) is rewritten on top of the new API, but it is not
removed, as there is still code which uses it. This will be done in
follow-up patches, after I build the API to access the "interpreted"
location lists in a generic way (as that is what those users really
want).
Reviewers: dblaikie, probinson, JDevlieghere, aprantl, SouraVX
Subscribers: hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D69847
The SmallVector reserve() call in
MachineInstrExpressionTrait::getHashValue accounted for over 3% of all
calls to malloc() when I compiled a bunch of graphics shaders for the
AMDGPU target. Its initial size was only enough for machine instructions
with up to 7 operands, but for AMDGPU 8 and 10 operands are very common.
Here's a histogram of number of operands for each call to getHashValue,
gathered from the same collection of shaders:
  Operands    Calls
         1    13503
         2   254273
         3   135781
         4   422508
         5   614997
         6   194953
         7   287248
         8  1517255
         9    31218
        10  1191269
        11    70731
        12       24
        13       77
        15       84
        17     4692
        27       16
        33      705
        49        6
Typical instructions with 8 and 10 operands are floating point
arithmetic and multiply-accumulate instructions like:
%83:vgpr_32 = V_MUL_F32_e64 0, killed %82:vgpr_32, 0, killed %81:vgpr_32, 0, 0, implicit $exec
%330:vgpr_32 = V_MAC_F32_e64 0, killed %327:vgpr_32, 0, killed %329:sgpr_32, 0, %328:vgpr_32(tied-def 0), 0, 0, implicit $exec
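A hedged sketch of the kind of change this implies (the exact container type and inline capacity used in the patch may differ):
```cpp
#include "llvm/ADT/SmallVector.h"
#include <cstddef>

// Give the hashing scratch buffer enough inline capacity for the 8- and
// 10-operand instructions that dominate the histogram above, so reserve()
// stays within inline storage and malloc() is avoided in the common case.
void hashOperandsExample(std::size_t NumOperands) {
  llvm::SmallVector<std::size_t, 16> HashComponents; // previously sized for <= 7
  HashComponents.reserve(NumOperands + 1);           // no allocation for <= 15
  // ... push one hash component per operand, then combine them ...
}
```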
Differential Revision: https://reviews.llvm.org/D70301
This is a follow-up to d90804d, to also flag fmcp instructions as instructions
that we do not support in tail-predicated vector loops.
Differential Revision: https://reviews.llvm.org/D70295
Introduce an IntImmLeaf version of the PatLeaf immZExt16 for 32-bit immediates.
Replace immZExt16 with imm32ZExt16 for andi, ori and xori.
This keeps the same behavior for SDAG and allows GlobalISel's selectImpl
to select 'G_CONSTANT imm' + G_AND, G_OR, G_XOR into ANDi, ORi, XORi,
respectively, when the 32-bit imm satisfies the imm32ZExt16 predicate: zero
extending the 16 low bits of imm is equal to imm.
The large number of test changes comes from zero extension of small types,
which is transformed into an 'and' with a bitmask in the legalizer.
Differential Revision: https://reviews.llvm.org/D70185
Introduce an IntImmLeaf version of the PatLeaf immSExt16 for 32-bit immediates.
Replace immSExt16 with imm32SExt16 for addiu.
This keeps the same behavior for SDAG and allows GlobalISel's selectImpl
to select 'G_CONSTANT imm' + G_ADD into ADDIu when the 32-bit imm satisfies
the imm32SExt16 predicate: sign extending the 16 low bits of imm is equal to imm.
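A standalone illustration of this predicate and the analogous zero-extension one from the previous commit (not the actual TableGen IntImmLeaf definitions):
```cpp
#include <cstdint>

// imm32ZExt16: zero-extending the low 16 bits reproduces the 32-bit value.
bool fitsZExt16(uint32_t Imm) { return Imm <= 0xFFFFu; }

// imm32SExt16: sign-extending the low 16 bits reproduces the 32-bit value,
// i.e. the immediate fits in a signed 16-bit range.
bool fitsSExt16(int32_t Imm) { return Imm >= -32768 && Imm <= 32767; }
```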
Differential Revision: https://reviews.llvm.org/D70184
Summary:
When scalarizing PHI nodes we might try to examine/rewrite
InsertElement nodes in predecessors. If those predecessors
are unreachable from entry, then the IR in those blocks could
have unexpected properties resulting in infinite loops in
Scatterer::operator[].
By simply treating values originating from instructions in
unreachable blocks as undef we do not need to analyse them
further.
This fixes PR41723.
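A minimal sketch of the guard described (a hypothetical helper using DominatorTree reachability; the actual Scalarizer change is wired into Scatterer differently):
```cpp
#include "llvm/IR/Constants.h"
#include "llvm/IR/Dominators.h"
#include "llvm/IR/Instruction.h"
#include "llvm/Support/Casting.h"
using namespace llvm;

// Treat values defined in blocks unreachable from the entry block as undef,
// so the pass never tries to analyse IR with unexpected properties there.
Value *foldUnreachableDef(Value *V, const DominatorTree &DT) {
  if (auto *I = dyn_cast<Instruction>(V))
    if (!DT.isReachableFromEntry(I->getParent()))
      return UndefValue::get(V->getType());
  return V;
}
```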
Reviewers: bjope
Reviewed By: bjope
Subscribers: bjope, hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D70171
It was failing with
llvm/lib/ExecutionEngine/Orc/DebugUtils.cpp:56:10:
error: could not convert ‘Obj’ from ‘std::unique_ptr<llvm::MemoryBuffer>’
to ‘llvm::Expected<std::unique_ptr<llvm::MemoryBuffer> >’
return Obj;
^
The usage of target boolean checks is overly inflexible, since sext
and zext of a compare are equally cheap. The choice is arbitrary, but
using 0/1 is, to some degree, the path of least resistance, since
that's what most targets use. This enables a few combines that don't
bother to support ZeroOrNegativeOneBooleanContent.
Adds a DumpObjects utility that can be used to dump JIT'd objects to disk.
Instances of DumpObjects may be used by ObjectTransformLayer as no-op
transforms.
This patch also adds an ObjectTransformLayer to LLJIT and an example of how
to use this utility to dump JIT'd objects in LLJIT.
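A hedged usage sketch based on this description (the header locations and the exact accessor/constructor spellings are assumptions and may differ in tree):
```cpp
#include "llvm/ExecutionEngine/Orc/DebugUtils.h"
#include "llvm/ExecutionEngine/Orc/LLJIT.h"
#include <string>
#include <utility>
using namespace llvm;
using namespace llvm::orc;

// Install DumpObjects as the transform on LLJIT's ObjectTransformLayer: every
// object passes through unmodified, but a copy is written to DumpDir first.
void enableObjectDumping(LLJIT &J, std::string DumpDir) {
  J.getObjTransformLayer().setTransform(
      DumpObjects(std::move(DumpDir), /*IdentifierOverride=*/""));
}
```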
getFirstNonPHI iterates over all the instructions in a block until it
finds a non-PHI.
Then, the loop starts from the beginning of the block and goes through
all the instructions until it reaches the instruction found by
getFirstNonPHI.
Instead of doing that, just stop when a non-PHI is found.
This reduces the compile-time of a test case discussed in
https://reviews.llvm.org/D47023 by 13x.
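The shape of the change, as a minimal sketch (not the exact pass code):
```cpp
#include "llvm/IR/BasicBlock.h"
#include "llvm/IR/Instructions.h"
using namespace llvm;

// Walk the block once and stop at the first non-PHI instruction, instead of
// calling getFirstNonPHI() and then re-walking the block up to that point.
void forEachPHI(BasicBlock &BB) {
  for (Instruction &I : BB) {
    auto *Phi = dyn_cast<PHINode>(&I);
    if (!Phi)
      break; // first non-PHI reached; PHIs only appear at the top of a block
    // ... process Phi ...
    (void)Phi;
  }
}
```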
Not entirely sure how to come up with a test case for this since it's a
compile time issue that would significantly slow down running the tests.
Differential Revision: https://reviews.llvm.org/D70016
Summary: This is a bug fix for further issues in PR43585.
Reviewers: rnk, RKSimon, craig.topper, andrew.w.kaylor
Subscribers: hiraditya, llvm-commits, annita.zhang
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D70224