For MASM syntax, the prefixes are not enclosed in braces.
The assembly code should look like:
"evex vcvtps2pd xmm0, xmm1"
Differential Revision: https://reviews.llvm.org/D90441
This allows reusing the RelocationResolver from code
that doesn't want to deal with the `RelocationRef` class.
I am going to use it in llvm-readobj. See the description
of D91530 for more details.
Differential revision: https://reviews.llvm.org/D91533
Clang generates a '%' prefix for some registers in CFI directives. E.g.
".cfi_register lr, r12" becomes ".cfi_register lr, %r12" after
processing.
Differential Revision: https://reviews.llvm.org/D91735
This reuses the existing lower-matrix-intrinsics pass rather than going
the legacy pass route of creating a new pass.
Use this new variant in the NPM -O0 pipeline.
Reviewed By: asbirlea
Differential Revision: https://reviews.llvm.org/D91811
Unused since https://reviews.llvm.org/D91762 and triggering
-Wunused-private-field:
```
llvm/lib/Transforms/Instrumentation/DataFlowSanitizer.cpp:365:13: error: private field 'GetArgTLS' is not used [-Werror,-Wunused-private-field]
Constant *GetArgTLS;
^
llvm/lib/Transforms/Instrumentation/DataFlowSanitizer.cpp:366:13: error: private field 'GetRetvalTLS' is not used [-Werror,-Wunused-private-field]
Constant *GetRetvalTLS;
```
Reviewed By: stephan.yichao.zhao
Differential Revision: https://reviews.llvm.org/D91820
One less instruction, and it reduces the use count of the zext.
As alive2 confirms, we're fine with all the weird combinations of
undef elements in the constants, but unless the shift amount was undef
for a lane, we must sanitize an undef mask element to zero, since the sign bits
are no longer zeros.
https://rise4fun.com/Alive/d7r
```
----------------------------------------
Optimization: zz
Precondition: ((C1 == (width(%r) - width(%x))) && isSignBit(C2))
%o0 = zext %x
%o1 = shl %o0, C1
%r = and %o1, C2
=>
%n0 = sext %x
%r = and %n0, C2
Done: 2016
Optimization is correct!
```
When constructing a MemoryLocation by hand, require that a
LocationSize is explicitly specified. D91649 will split up
LocationSize::unknown() into two different states, and callers
should make an explicit choice regarding the kind of MemoryLocation
they want to have.
Similarly to assumes and guards, deoptimize intrinsics are
marked as writing to ensure proper control dependencies,
but they never modify any particular memory location.
Differential Revision: https://reviews.llvm.org/D91658
Instead of separately passing pointer and size, make use of
MemoryLocation. This allows us to also reuse all the existing
logic for determining the MemoryLocation corresponding to an
instruction or call argument.
Not quite NFC because used locations may be more precise in some
cases.
This moves handling of alwaysinline, coroutines, matrix lowering, PGO,
and LTO-required passes into PassBuilder. Much of this is replicated
between Clang and opt. Other out-of-tree users also replicate some of
this, such as Rust [1] replicating the alwaysinline, LTO, and PGO
passes.
The LTO passes are also now run in
build(Thin)LTOPreLinkDefaultPipeline() since they are semantically
required for (Thin)LTO.
[1]: f5230fbf76/compiler/rustc_llvm/llvm-wrapper/PassWrapper.cpp (L896)
Reviewed By: tejohnson
Differential Revision: https://reviews.llvm.org/D91585
In the current state, if getFromHash(0) is called and there's no CU with
dwo_id=0, the lookup will stop at an empty slot. The check
`Rows[H].getSignature() != S` then won't cause the lookup to fail and
return a nullptr (as it should), because the empty slot has a 0 in its
signature field, so a pointer to the empty slot is incorrectly
returned.
This patch fixes this by using the index field in the hash entry to
check for empty slots: a signature of 0 can match a valid hash, but
according to the spec the index for an occupied slot is always
non-zero.
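For illustration, a minimal sketch of the fixed lookup with simplified types and probing; the names below are stand-ins, not the exact DWARFUnitIndex code:
```
#include <cstdint>
#include <vector>

// Simplified stand-ins for the real hash table entries.
struct Entry {
  uint64_t Signature = 0; // dwo_id; 0 is a legal signature value
  uint32_t Index = 0;     // per the spec, 0 only for empty slots
};

const Entry *getFromHash(const std::vector<Entry> &Rows, uint64_t S) {
  uint64_t Mask = Rows.size() - 1; // table size is a power of two
  uint64_t H = S & Mask;
  // Detect empty slots via the index field: an occupied slot always has a
  // non-zero index, whereas a signature of 0 can also belong to a valid CU.
  while (Rows[H].Index != 0) {
    if (Rows[H].Signature == S)
      return &Rows[H];      // found the CU for this dwo_id
    H = (H + 1) & Mask;     // probing simplified for illustration
  }
  return nullptr;           // empty slot reached: no matching CU
}
```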
Differential Revision: https://reviews.llvm.org/D91670
The `dso_local_equivalent` constant is a wrapper for functions that represents a
value which is functionally equivalent to the global passed to this. That is, if
this accepts a function, calling this constant should have the same effects as
calling the function directly. This could be a direct reference to the function,
the `@plt` modifier on X86/AArch64, a thunk, or anything that's equivalent to the
resolved function as a call target.
When lowered, the returned address must have a constant offset at link time from
some other symbol defined within the same binary. The address of this value is
also insignificant. The name is leveraged from `dso_local` where use of a function
or variable is resolved to a symbol in the same linkage unit.
In this patch:
- Addition of `dso_local_equivalent` and handling it
- Update Constant::needsRelocation() to strip constant inbound GEPs and take
advantage of `dso_local_equivalent` for relative references
This is useful for the [Relative VTables C++ ABI](https://reviews.llvm.org/D72959)
which makes vtables readonly. This works by replacing the dynamic relocations for
function pointers in them with static relocations that represent the offset between
the vtable and virtual functions. If a function is externally defined,
`dso_local_equivalent` can be used as a generic wrapper for the function to still
allow for this static offset calculation to be done.
See [RFC](http://lists.llvm.org/pipermail/llvm-dev/2020-August/144469.html) for more details.
Differential Revision: https://reviews.llvm.org/D77248
This moves the recognition of GREVI and GORCI from TableGen patterns
into a DAGCombine. This is done primarily to match "deeper" patterns in
the future, like (grevi (grevi x, 1) 2) -> (grevi x, 3).
TableGen is not best suited to matching patterns such as these as the compile
time of the DAG matchers quickly gets out of hand due to the expansion of
commutative permutations.
Reviewed By: craig.topper
Differential Revision: https://reviews.llvm.org/D91259
rGf571fe6df585127d8b045f8e8f5b4e59da9bbb73 led to a warning of an unused
variable for MaxSafeDepDist (written but not used). It seems this
variable and assignment can be safely removed.
This converts the intermediate VPR use assertion to a condition in the if-statement to protect against assertion failures in case the behaviour is changed.
This is a follow-up to https://reviews.llvm.org/D90935 and implements the post-approval comments.
Reviewed By: dmgreen
Differential Revision: https://reviews.llvm.org/D91790
Summary:
Add support for passing source locations to libomptarget runtime functions using the ident_t struct present in the rest of the libomp API. This will allow the runtime system to give much more insightful error messages and debugging values.
Reviewers: jdoerfert, grokos
Differential Revision: https://reviews.llvm.org/D87946
SUMMARY:
1. Decode the Vector extension if has_vec is set.
2. Decode the long table fields if longtbtable is set.
There is a conflict in the bit order of HasVectorInfoMask and HasExtensionTableMask between the AIX OS header and the IBM AIX compiler XLC.
/usr/include/sys/debug.h defines
```
static constexpr uint32_t HasVectorInfoMask = 0x0040'0000;
static constexpr uint32_t HasExtensionTableMask = 0x0080'0000;
```
but XLC defines them as
```
static constexpr uint32_t HasVectorInfoMask = 0x0080'0000;
static constexpr uint32_t HasExtensionTableMask = 0x0040'0000;
```
We follow the definition of the IBM AIX compiler XLC here.
Reviewer: Jason Liu
Differential Revision: https://reviews.llvm.org/D86461
These are all lightweight to compute and help avoid issues with Known being used to hold both the shift amount and then the shifted result.
Minor cleanup for D90479.
This was already something that was handled by one of the "else"
branches in maybeLoweredToCall, so this patch is an NFC but makes it
explicit and adds a test. We may in the future want to support this
under certain situations but for the moment just don't try and create
low overhead loops with inline asm in them.
Differential Revision: https://reviews.llvm.org/D91257
D57663 allowed us to reuse broadcasts of the same scalar value by extracting low subvectors from the widest type.
Unfortunately we weren't ensuring the broadcasts were from the same SDValue, just the same SDNode - which failed on multiple-value nodes like ISD::SDIVREM.
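Roughly the distinction that mattered, as a hedged sketch (hypothetical helper, not the actual X86 lowering code):
```
#include "llvm/CodeGen/SelectionDAGNodes.h"

// Comparing only the SDNode would treat different results of a multi-result
// node (e.g. ISD::SDIVREM's quotient and remainder) as the same scalar.
static bool isSameBroadcastSource(llvm::SDValue A, llvm::SDValue B) {
  // SDValue equality compares both the node and the result number,
  // which is the check the fix relies on.
  return A == B;
}
```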
FYI: I intend to request this be merged into the 11.x release branch.
Differential Revision: https://reviews.llvm.org/D91709
The assertion that vector widths are <= 256 elements was hard-wired in the LV code. E.g., VE allows for vectors of up to 512 elements. Test against the TTI vector register bit width instead - this is an NFC for non-asserting builds.
Reviewed By: fhahn
Differential Revision: https://reviews.llvm.org/D91518
In some cases, the values passed to `asm sideeffect` calls cannot be
mapped directly to simple MVTs. Currently, we crash in the backend if
that happens. An example can be found in the @test_vector_too_large_r_m
test case, where we pass <9 x float> vectors. In practice, this can
happen in cases like the simple C example below.
```
using vec = float __attribute__((ext_vector_type(9)));
void f1(vec m) {
  asm volatile("" : "+r,m"(m) : : "memory");
}
```
One case that uses "+r,m" constraints for arbitrary data types in
practice is google-benchmark's DoNotOptimize.
This patch updates visitInlineAsm so that it uses MVT::Other for
constraints with complex VTs. It looks like the rest of the backend
correctly deals with that and properly legalizes the type.
And we still report an error if there are no registers to satisfy the
constraint.
Reviewed By: arsenm
Differential Revision: https://reviews.llvm.org/D91710
This defines the vec_broadcast SDNode along with lowering and isel code.
We also remove unused type mappings for the vector register classes (all vector MVTs that are not used in the ISA are dropped).
We will implement support for short vectors later by intercepting nodes with illegal vector EVTs before LLVM has had a chance to widen them.
Reviewed By: kaz7
Differential Revision: https://reviews.llvm.org/D91646
Some nested loops may share the same ExitingBB, so after we finish FoldExit,
we need to notify OuterLoop and SCEV to drop any stored trip count.
Patched by: guopeilin
Reviewed By: mkazantsev
Differential Revision: https://reviews.llvm.org/D91325
The lookup logic is also reusable.
Also refactored the API to return the loaded vector - this makes it
clearer what state it is in in the case of an error (as it won't be
returned).
Differential Revision: https://reviews.llvm.org/D91759
2c196bbc6b asserted that
`SmallVector::push_back` doesn't invalidate the parameter when it needs
to grow. Do the same for `resize`, `append`, `assign`, `insert`, and
`emplace_back`.
Differential Revision: https://reviews.llvm.org/D91744
In some places the parser guards against dereferencing `End`, while in
others it relies on the presence of a trailing `'\0'` to elide checks.
Add the remaining guards needed to ensure the parser never attempts to
dereference `End`, making it safe to not require a null-terminated input
buffer.
Update the parser fuzzer harness so that it tests with buffers that are
guaranteed to be non-null-terminated, null-terminated, and 1-terminated,
additionally ensuring the result of the parse is the same in each case.
Some of the regression tests were written by inspection, and some are
cases caught by the fuzzer which required additional fixes in the
parser.
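A rough sketch of that fuzzing strategy, assuming a hypothetical parseYAML entry point (the in-tree harness differs in detail):
```
#include <cstdint>
#include <cstring>
#include <memory>
#include <string>

bool parseYAML(const char *Begin, const char *End); // assumed entry point

extern "C" int LLVMFuzzerTestOneInput(const uint8_t *Data, size_t Size) {
  // Exactly Size bytes, no terminator, so sanitizers catch reads past End.
  std::unique_ptr<char[]> Raw(new char[Size]);
  std::memcpy(Raw.get(), Data, Size);
  // The same bytes with a trailing '\0' and with a trailing '\1'.
  std::string NulTerm(reinterpret_cast<const char *>(Data), Size);
  std::string OneTerm = NulTerm + '\x01';

  bool A = parseYAML(Raw.get(), Raw.get() + Size);
  bool B = parseYAML(NulTerm.data(), NulTerm.data() + Size);
  bool C = parseYAML(OneTerm.data(), OneTerm.data() + Size);
  if (A != B || A != C)
    __builtin_trap(); // the parse result must not depend on termination
  return 0;
}
```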
Differential Revision: https://reviews.llvm.org/D84050
@tangxingxin1008 found a bug where vadd.vv v1, v3, a0 was accepted as a valid V
instruction. We should remove the VRegAsmOperand operand class and use
the VR register class directly.
Patched by: tangxingxin1008, Hsiangkai
Differential Revision: https://reviews.llvm.org/D91712
https://sourceware.org/git/gitweb.cgi?p=binutils-gdb.git;h=4acf8c78e659833be8be047ba2f8561386a11d4b
(1994) introduced this behavior:
if a fixup symbol is equated to an expression with an undefined symbol, convert
the fixup to be against the target symbol. glibc relies on this behavior to perform
assembly level indirection
```
asm("memcpy = __GI_memcpy"); // from sysdeps/generic/symbol-hacks.h
...
// call memcpy@PLT
// The relocation references __GI_memcpy in GNU as, but memcpy in MC (without the patch)
memcpy (...);
```
(1) It complements `extern __typeof(memcpy) memcpy asm("__GI_memcpy");` The frontend asm label does not redirect synthesized memcpy in the middle-end. (See D88712 for details)
(2) `asm("memcpy = __GI_memcpy");` is in every translation unit, but the memcpy declaration may not be visible in the translation unit where memcpy is synthesized.
MC already redirects `memcpy = __GI_memcpy; call memcpy` but not `memcpy = __GI_memcpy; call memcpy@plt`.
This patch fixes the latter by allowing MCExpr::evaluateAsRelocatableImpl to
evaluate a non-VK_None MCSymbolRefExpr, which is only done after the layout is available.
GNU as allows `memcpy = __GI_memcpy+1; call memcpy@PLT` which seems nonsensical, so we don't allow it.
`MC/PowerPC/pr38945.s` `NUMBER = 0x6ffffff9; cmpwi 8,NUMBER@l` requires the
`symbol@l` form in AsmMatcher, so evaluation needs to be deferred. This is the
place where future simplification may be possible.
Note, if we suppress the VK_None evaluation when MCAsmLayout is nullptr, we may
lose the `invalid reassignment of non-absolute variable` diagnostic
(`ARM/thumb_set-diagnostics.s` and `MC/AsmParser/variables-invalid.s`).
We know that this diagnostic is troublesome in some cases
(https://github.com/ClangBuiltLinux/linux/issues/1008), so we can consider
making simplification in the future.
Reviewed By: jyknight
Differential Revision: https://reviews.llvm.org/D88625
In some situations, the compiler may insert an accumulator prime instruction and
an accumulator unprime instruction with no use of that accumulator between the two.
That's for example the case when we store an accumulator after assembling it or
restoring it. This patch adds a peephole to remove these prime and unprime instructions.
Differential Revision: https://reviews.llvm.org/D91386
The GEP aliasing implementation currently has two pieces of code
that solve two different subsets of the same basic problem: If you
have GEPs with offsets 4*x + 0 and 4*y + 1 (assuming access size 1),
then they do not alias regardless of whether x and y are the same.
One implementation is in aliasSameBasePointerGEPs(), which looks at
this in a limited structural way. It requires both GEP base pointers
to be exactly the same, then (optionally) a number of equal indexes,
then an unknown index, then a non-equal index into a struct. This
set of limitations works, but it's overly restrictive and hides the
core property we're trying to exploit.
The second implementation is part of aliasGEP() itself and tries to
find a common modulus in the scales, so it can then check that the
constant offset doesn't overlap under modular arithmetic. The second
implementation has the right idea of what the general problem is,
but effectively only considers power of two factors in the scales
(while aliasSameBasePointerGEPs also works with non-pow2 struct sizes.)
What this patch does is to adjust the aliasGEP() implementation to
instead find the largest common factor in all the scales (i.e. the GCD)
and use that as the modulus.
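A self-contained sketch of the idea (not the actual BasicAA code): take the GCD of all variable-index scales as the modulus and check that the constant offset keeps the two accesses apart within it.
```
#include <cstdint>
#include <cstdlib>
#include <numeric>
#include <vector>

// Scales: multipliers of all variable GEP indices (from both GEPs).
// ConstOffset: constant byte offset of GEP1 relative to GEP2.
// Size1/Size2: access sizes in bytes.
static bool noAliasByCommonFactor(const std::vector<int64_t> &Scales,
                                  int64_t ConstOffset,
                                  uint64_t Size1, uint64_t Size2) {
  int64_t Modulus = 0;
  for (int64_t S : Scales)
    Modulus = std::gcd(Modulus, std::llabs(S)); // largest common factor
  if (Modulus == 0)
    return false; // no variable indices; handled elsewhere
  // Every variable contribution is a multiple of Modulus, so only the
  // constant offset modulo Modulus matters for overlap.
  uint64_t Off = uint64_t((ConstOffset % Modulus + Modulus) % Modulus);
  return Off >= Size2 && uint64_t(Modulus) - Off >= Size1;
}
```
For the example above (offsets 4*x + 0 and 4*y + 1 with access size 1), the modulus is 4 and the reduced constant offset is 1 or 3 depending on query order, so the accesses cannot overlap regardless of x and y.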
Differential Revision: https://reviews.llvm.org/D91027
This reverts commit 562addba65.
The change was reverted too quickly; the failing test cases passed on the next build.
So revert the revert (to include the changes).
Summary:
This patch adds support for passing in the original declaration name in the source file to the libomptarget runtime. This will allow the runtime to provide more intelligent debugging messages. This patch takes the original expression parsed from the OpenMP map / update clause and provides a textual representation if it was explicitly mapped, otherwise it takes the name of the variable declaration as a fallback. The information is passed to the runtime in a global array of strings that matches the existing ident_t source location strings using ";name;filename;column;row;;"
Reviewers: jdoerfert
Differential Revision: https://reviews.llvm.org/D89802
This is the same fix as 23aeadb89d,
just for CloneScopedAliasMetadata rather than PropagateCallSiteMetadata.
In this case the previous outcome was incorrectly dropped metadata,
as it was not part of the computed metadata map.
The real change in the test is that the first load now retains
metadata, the rest of the changes are due to changes in metadata
numbering.
The VMap also contains a mapping from Argument => Instruction,
where the instruction is part of the original function, not the
inlined one. The code was assuming that all the instructions in
the VMap were inlined.
This was a pre-existing problem for the loop access metadata, but
was extended to the more common noalias metadata by
27f647d117, thus causing miscompiles.
There is a similar assumption inside CloneAliasScopeMetadata(), so
that one likely needs to be fixed as well.
Summary:
Expand existing loopsink testing to also test loopsinking using new pass
manager. Enable memoryssa for loopsink with new pass manager. This
combination exposed a bug that was previously fixed for loopsink
without memoryssa. When sinking an instruction into a loop, the source
block may not be part of the loop but still needs to be checked for
pointer invalidation. This is the fix for bugzilla #39695 (PR 54659)
expanded to also work with memoryssa.
Respond to review comments. Enable Memory SSA in legacy Loop Sink pass
under EnableMSSALoopDependency option control. Update tests accordingly.
Respond to review comments. Add options controlling whether memoryssa is
used for loop sink, defaulting to off. Expand testing based on these
options.
Respond to review comments. Properly indicated preserved analyses.
Author: Jamie Schmeiser <schmeise@ca.ibm.com>
Reviewed By: asbirlea (Alina Sbirlea)
Differential Revision: https://reviews.llvm.org/D90249
This reverts commit c6ef6e1690.
Basically, publicly linked libraries have a different semantic than components,
which link libraries privately.
Differential Revision: https://reviews.llvm.org/D91461
We have workarounds for two different cases where vccz can get out of
sync with the value in vcc. This fixes them in two ways:
1. Fix the case where the def of vcc was in a previous basic block, by
pessimistically assuming that vccz might be incorrect at a basic block
boundary.
2. Fix the handling of pre-existing waitcnt instructions by calling
generateWaitcntInstBefore before examining ScoreBrackets to determine
whether there's an outstanding smem read operation.
Differential Revision: https://reviews.llvm.org/D91636
As Wei Mi is reporting in post-commit review
https://lists.llvm.org/pipermail/llvm-commits/Week-of-Mon-20201116/853479.html
teaching -reassociate about add-like-or's (70472f3) results in breaking apart
load widening patterns, and reassociating them.
For now, simply exclude any such `or` that appears to be a root of a
load widening idiom from the or->add transformation.
Note that the heuristic is greedy, it doesn't ensure that loads
can *actually* be widened into a single load.
This patch generalizes the extraction of a constraint for a given
condition. It allows decompose to return a vector of c * X pairs, which
allows de-composing multiple instructions in the future.
It also adds more clarifying comments.
The SystemZISD::IABS node is no longer needed since ISD::ABS can be used
instead.
Review: Ulrich Weigand
Differential Revision: https://reviews.llvm.org/D91697
In some cases we can handle an IV and iteration count of different types. This is a typical situation
after the IV has been widened. This patch adds support for such cases, when legal.
Differential Revision: https://reviews.llvm.org/D88528
Reviewed By: skatkov
Support more instructions in SpeculativeExecution pass:
- ExtractElement
- InsertElement
- ShuffleVector
Differential Revision: https://reviews.llvm.org/D91633
When we produce YAML output, we currently also print leading zeroes.
An output might look like this:
```
- Name: .dynsym
Type: SHT_DYNSYM
Address: 0x0000000000001000
EntSize: 0x0000000000000018
```
There is probably no reason to print leading zeroes.
It just makes values harder to read. This patch stops printing them.
The output becomes like:
```
- Name: .dynsym
Type: SHT_DYNSYM
Address: 0x1000
EntSize: 0x18
```
This affects obj2yaml mostly, but also dsymutil and llvm-xray tools output.
Differential revision: https://reviews.llvm.org/D90930
We can use GF2P8AFFINEQB to reverse bits in a byte. Shuffles are needed to reverse the bytes in elements larger than i8. LegalizeVectorOps takes care of inserting the shuffle for the larger element size.
We already have Custom lowering for v16i8 with SSSE3, v32i8 with AVX, and v64i8 with AVX512BW.
I think we might be able to use this for scalars too by moving into a vector and back. But I'll save that for a follow up as it's a little more involved.
Reviewed By: RKSimon, pengfei
Differential Revision: https://reviews.llvm.org/D91515
I don't see any reason not to unconditionally retrieve TLI, it's fairly
cheap.
Fixes calls-errno.ll under NPM.
Reviewed By: asbirlea
Differential Revision: https://reviews.llvm.org/D91476
The design of the PreservedCFG Checker (landed with the commit
28012e00d8) has a fundamental flaw which makes it incorrect.
The checker is based on the PreservedAnalyses result returned
by functional passes: if CFGAnalyses is in the returned
PreservedAnalyses set, then the checker asserts that the CFG
snapshot saved before the pass is equal to the CFG snapshot
taken after the pass. The problem is in passes that change the
CFG and invalidate CFGAnalyses on their own. Such passes do not
return CFGAnalyses in the returned PreservedAnalyses. So the
checker mistakenly expects CFG unchanged. As an example see the
class TestSimplifyCFGInvalidatingAnalysisPass in the new tests.
It is interesting that the bug was not found in LLVM. That is
because the CFG checker ran only if CFGAnalyses was checked
incorrectly:
```
if (!PassPA.allAnalysesInSetPreserved<CFGAnalyses>())
  return;
```
but it must be checked as follows:
```
auto PAC = PA.getChecker<PreservedCFGCheckerAnalysis>();
if (!(PAC.preserved() ||
      PAC.preservedSet<AllAnalysesOn<Function>>() ||
      PAC.preservedSet<CFGAnalyses>()))
  return;
```
A fully redesigned checker will be sent as a separate follow-up
patch.
Reviewed By: Serguei Katkov, Jakub Kuderski
Differential Revision: https://reviews.llvm.org/D91324
This patch adds support for creating Guard Address-Taken IAT Entry Tables (.giats$y sections) in object files, matching the behavior of MSVC. These contain lists of address-taken imported functions, which are used by the linker to create the final GIATS table.
Additionally, if any DLLs are delay-loaded, the linker must look through the .giats tables and add the respective load thunks of address-taken imports to the GFIDS table, as these are also valid call targets.
Reviewed By: rnk
Differential Revision: https://reviews.llvm.org/D87544
This should be a perfectly reasonable operation for scalable vectors.
Currently, it only works for zeroinitializer values of
ScalableVectorType, but the fundamental operation is sound and it should
be possible to make it work for other splats.
Reviewed By: david-arm
Differential Revision: https://reviews.llvm.org/D77442
m_SpecificInt has the same 'no undef element' behaviour as m_APInt so no change there, and anyway we have test coverage for undef elements in the fold.
Noticed while fixing a Wshadow warning about shadow Value *X, *Y variables.
Constant hoisting may hide the constant value behind a bitcast for And's
operand. Track down the constant to make the BFI result consistent
regardless of hoisting.
Differential Revision: https://reviews.llvm.org/D91450
aliasGEP() currently implements some special handling for the case
where all variable offsets are positive, in which case the constant
offset can be taken as the minimal offset. However, it does not
perform the same handling for the all-negative case. This means that
the alias-analysis result between two GEPs is asymmetric:
If GEP1 - GEP2 is all-positive, then GEP2 - GEP1 is all-negative,
and the first will result in NoAlias, while the second will result
in MayAlias.
Apart from producing sub-optimal results for one order, this also
violates our caching assumption. In particular, if BatchAA is used,
the cached result depends on the order of the GEPs in the first query.
This results in an inconsistency in BatchAA and AA results, which
is how I noticed this issue in the first place.
Differential Revision: https://reviews.llvm.org/D91383
This patch introduces a new VPDef class, which can be used to
manage VPValues defined by recipes/VPInstructions.
The idea here is to mirror VPUser for values defined by a recipe. A
VPDef can produce either zero (e.g. a store recipe), one (most recipes)
or multiple (VPInterleaveRecipe) result VPValues.
To traverse the def-use chain from a VPDef to its users, one has to
traverse the users of all values defined by a VPDef.
VPValues now contain a pointer to their corresponding VPDef, if one
exists. To traverse the def-use chain upwards from a VPValue, we first
need to check if the VPValue is defined by a VPDef. If it does not have
a VPDef, this means we have a VPValue that is not directly defined
inside the plan and we are done.
If we have a VPDef, it is defined inside the region by a recipe, which
is a VPUser, and the upwards def-use chain traversal continues by
traversing all its operands.
Note that we need to add an additional field to VPValue to link them
to their defs. The space increase is going to be offset by being able to
remove the SubclassID field in future patches.
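A self-contained sketch of the resulting structure and the upwards traversal; the classes here are simplified stand-ins rather than the real VPlan types:
```
#include <vector>

struct VPDef;

struct VPValue {
  VPDef *Def = nullptr; // defining VPDef, or null for values defined outside the plan
};

struct VPUser {
  std::vector<VPValue *> Operands; // values this user consumes
};

// A recipe is both a VPUser and a VPDef: it may define zero, one, or many values.
struct VPDef : VPUser {
  std::vector<VPValue *> DefinedValues;
};

// Upwards def-use traversal: stop at live-ins (no VPDef), otherwise continue
// through the operands of the defining recipe (assuming an acyclic graph here).
void visitUpwards(VPValue *V) {
  if (!V->Def)
    return;
  for (VPValue *Op : V->Def->Operands)
    visitUpwards(Op);
}
```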
Reviewed By: Ayal
Differential Revision: https://reviews.llvm.org/D90558
This wasn't properly remapping the type like with the other
attributes, so this would end up hitting a verifier error after
linking different modules using byref.
For the scattered operands of load instructions it makes sense
to use a gathering load intrinsic, which can lower to a native instruction
for X86/AVX512 and ARM/SVE. This also enables building
vectorization tree with entries containing scattered operands.
The next step is to add scattered store.
Fixes PR47629 and PR47623
Differential Revision: https://reviews.llvm.org/D90445
When processing conditional branches, if the condition is an AND of 2 compares
and the true successor only has the current block as predecessor, queue both
conditions for the true successor.
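A hypothetical source-level example of the pattern described above (assuming the branch ends up as a single conditional branch on the and of the two compares):
```
int f(int x, int y) {
  // If this lowers to one branch on (x <= 10) & (y <= 20), and the true block
  // has this block as its only predecessor, both facts can be queued for it.
  if (x <= 10 && y <= 20)
    return x + y; // here the solver may use both x <= 10 and y <= 20
  return 0;
}
```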
Implement JumpTable to make BRIND work on VE. Update an existing
br_jt regression test also.
Reviewed By: simoll
Differential Revision: https://reviews.llvm.org/D91582
A common routine when the compiler crashes is to attempt to rerun the
cc1 command line by copying and pasting the arguments printed by
`llvm::Support::PrettyStackProgram::print`. However, these arguments are
not quoted or escaped which means they must be manually edited before
working correctly. This patch ensures that shell-unfriendly characters
are C-escaped, and arguments with spaces are double-quoted reducing the
frustration of running cc1 inside a debugger.
As the quoting is C, this is "best effort for most shells", but should
be fine for at least bash, zsh, csh, and cmd.exe.
Reviewed by: jhenderson
Differential Revision: https://reviews.llvm.org/D90759
This patch uses the new `getMnemonic` helper from D90039
to display mnemonics instead of the internal opcodes.
The main motivation behind using the mnemonics is that they
are more user-friendly and more directly related to the assembly
the users will be presented with.
Reviewed By: paquette
Differential Revision: https://reviews.llvm.org/D90040
This patch factors out the part of printInstruction that gets the
mnemonic string for a given MCInst. This is intended to be used
subsequently for the instruction-mix remarks to display the final
mnemonic (D90040).
Unfortunately making `getMnemonic` available to the AsmPrinter
seems to require making it virtual. Not sure if there's a way around
that with the current layering of the AsmPrinters.
Reviewed By: Paul-C-Anagnostopoulos
Differential Revision: https://reviews.llvm.org/D90039
Use LINK_COMPONENTS instead of explicit target_link_libraries for components.
This avoids redundancy and potential inconsistencies.
Differential Revision: https://reviews.llvm.org/D91461
When instructions are cloned from block BB to PredBB in the method
DuplicateCondBranchOnPHIIntoPred() number of successors of PredBB
changes from 1 to number of successors of BB. So we have to copy
branch probabilities from BB to PredBB.
Reviewed By: Kazu Hirata
Differential Revision: https://reviews.llvm.org/D90841
With a function pass manager, it would insert debuginfo metadata before
getting to the function passes while processing the pass manager, causing
debugify to be skipped while running the function passes.
Skip special passes + verifier + printing passes. Compared to the legacy
implementation of -debugify-each, this additionally skips verifier
passes. Probably no need to update the legacy version since it will be
obsolete soon.
This fixes 2 instcombine tests using -debugify-each under NPM.
Reviewed By: MaskRay
Differential Revision: https://reviews.llvm.org/D91558
When we see
```
xor = G_XOR xor_lhs, -1
select = G_SELECT cc, tval, xor
```
Fold this into
```
select = CSINV tval, xor_lhs, cc
```
Update select-select.mir to reflect the changes.
For now, only handle the case where the G_XOR is the false-value for the
G_SELECT. It may make more sense to handle the true-value case in post-legalizer
lowering.
Differential Revision: https://reviews.llvm.org/D90774
- In certain cases, a generic pointer could be assumed as a pointer to
the global memory space or other spaces. With a dedicated target hook
to query that address space from a given value, the infer-address-space
pass could infer and propagate it to all its users.
Differential Revision: https://reviews.llvm.org/D91121
Add lvm/svm intrinsic instructions and a regression test. Change
RegisterInfo to specify that VM0/VMP0 are constant and reserved
registers. This modifies a vst regression test, so update it.
Also add pseudo instructions for VM512 register classes
and mechanism to expand them after register allocation.
Reviewed By: simoll
Differential Revision: https://reviews.llvm.org/D91541
When processing conditional branches, if the condition is an OR of 2 compares
and the false successor only has the current block as predecessor, queue both
negated conditions for the false successor.
This is a cut down version of 1ec6e1 which was reverted due to a compile time issue. The key changes made from that patch: 1) only infer the flags needed along each path, 2) be careful to preserve order of checks, and 3) avoid computing NW flags at all since we need to prove the stronger property (does not cross 0) in the caller anyways.
Assuming this doesn't trip regressions, I'm going to try weakening (1). My end objective is to move flag inference into addrec construction. If I can't weaken (1) without compile time impact, I'll have a problem.
The G_ZEXT in these cases seems to actually come from a combine that we do but
SelectionDAG doesn't. Looking through it allows us to match "uxtw #2" addressing
modes.
Differential Revision: https://reviews.llvm.org/D91475
The `Range` of an alias/anchor token includes the leading `&` or `*`,
but it is skipped while parsing the name. The check for an empty name
fails to account for the skipped leading character and so the error is
never hit.
Fix the off-by-one and add a couple regression tests.
Reviewed By: dexonsmith
Differential Revision: https://reviews.llvm.org/D91462
We need to make sure the upper 32 bits are all ones to ensure the result is properly sign extended. Previously we only checked the lower 32 bits of the mask. I've also added a check that the shift amount is less than 32. Without that the original code asserts inside maskLeadingOnes if the SROI check is removed or the SROIW pattern is checked first. I've refactored the code to use early outs to reduce nesting.
I've also updated SLOIW matching with the same changes, but I couldn't find a broken test case with the existing code.
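An illustrative sketch of the tightened SROIW conditions described above (hypothetical helper, not the actual RISC-V ISel code):
```
#include <cstdint>

// Mask is the OR'ed constant; ShAmt is the candidate shift amount.
static bool isValidSROIWMask(uint64_t Mask, unsigned ShAmt) {
  if (ShAmt == 0 || ShAmt >= 32)
    return false;                      // new: shift amount must be < 32
  if ((Mask >> 32) != 0xffffffffULL)
    return false;                      // new: upper 32 bits must be all ones
  // Lower 32 bits must be ShAmt leading ones, i.e. maskLeadingOnes<uint32_t>(ShAmt).
  return uint32_t(Mask) == (~0u << (32 - ShAmt));
}
```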
Differential Revision: https://reviews.llvm.org/D90961
In the existing logic, for a given alloca, as long as its pointer value is stored into another location, it's considered escaped.
This is a bit too conservative. Specifically, in non-optimized builds, it's common to have patterns of code that first store an alloca's address somewhere and then load it right away.
These uses should be handled without conservatively marking the alloca as escaped.
This patch tracks how the memory location where an alloca pointer is stored into is being used. As long as we only try to load from that location and nothing else, we can still
consider the original alloca not escaping and keep it on the stack instead of putting it on the frame.
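A hypothetical -O0-style fragment showing the pattern being tracked (the kind of store/load pair an unoptimized coroutine body tends to contain):
```
void consume(int);

void coroutine_body_fragment() {
  int data = 42;      // the alloca in question
  int *slot = &data;  // its address is stored into another location...
  consume(*slot);     // ...but that location is only ever loaded from,
  consume(*slot);     // so `data` can stay on the stack, off the frame
}
```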
Differential Revision: https://reviews.llvm.org/D91305
This patch is added to remove references to unreachable MBBs in the jump table.
Differential Revision: https://reviews.llvm.org/D90498
Reviewed by: amyk, bsaleil
This defines a 'fastcc' for the VE target and implements vreg-to-vreg
copy for parameter passing. The 'fastcc' extends the standard CC for
SX-Aurora with register passing of vector-typed parameters and return
values.
Reviewed By: kaz7
Differential Revision: https://reviews.llvm.org/D90842
This patch adds a new pass to add !annotation metadata for entries in
@llvm.global.annotations, which is generated using
__attribute__((annotate("_name"))) on functions in Clang.
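For reference, a minimal example of the Clang-side input the pass consumes ("my_annotation" is an arbitrary string, not something defined by this patch):
```
__attribute__((annotate("my_annotation")))
int annotated_function(int x) {
  // Clang records this function in @llvm.global.annotations; the new pass
  // turns that entry into !annotation metadata usable by remarks.
  return x * 2;
}
```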
This has been discussed on llvm-dev as part of
RFC: Combining Annotation Metadata and Remarks
http://lists.llvm.org/pipermail/llvm-dev/2020-November/146393.html
Reviewed By: thegameg
Differential Revision: https://reviews.llvm.org/D91195
This patch fixes the function isWideningInstruction for scalable vectors.
Now the cost model can check the widening pattern for SVE.
Differential Revision: https://reviews.llvm.org/D91260
This patch updates Clang's IRGen to add !annotation nodes with an
"auto-init" annotation to all stores for auto-initialization.
As discussed in 'RFC: Combining Annotation Metadata and Remarks'
(http://lists.llvm.org/pipermail/llvm-dev/2020-November/146393.html)
this allows using optimization remarks to track down where auto-init
code was inserted (and not removed by optimizations).
There are a few cases in the tests where !annotation gets dropped by
optimizations. Those optimizations will be updated in subsequent
patches.
This patch is based on a patch by Francis Visoiu Mistrih.
Reviewed By: thegameg, paquette
Differential Revision: https://reviews.llvm.org/D91417
Widen the IV to the widest available and legal integer type, which makes this
transformations always safe so that we can skip overflow checks.
Motivation is to let this pass trigger on 64-bit targets too, and this is the
last patch in a series to achieve this: D90402 moves pass LoopFlatten to just
before IndVarSimplify so that IVs are not already widened, D90421 factors out
widening from IndVarSimplify into Utils/SimplifyIndVar so that we can also use
it in LoopFlatten.
Differential Revision: https://reviews.llvm.org/D90640
This patch adds the SchedMachineModel for Cortex-M7. It
also adds test cases for the scheduling information.
Details of the pipeline and descriptions are in comments
in file ARMScheduleM7.td included in this patch.
Differential Revision: https://reviews.llvm.org/D91355
Similar to the X86 and AMDGPU targets, this uses a macro to cut down on
repetitive and error-prone code when converting RISCVISD node names to
strings in getTargetNodeName.
Reviewed By: asb
Differential Revision: https://reviews.llvm.org/D91414
Patch by Elena Kovanova. Thanks Elena!
Problem:
LLVM already has a feature to profile the JIT-compiled code with VTune. This is
done using Intel JIT Profiling API (https://github.com/intel/ittapi). Function
information is captured by VTune as soon as the function is JIT-compiled. We
tried to use the same approach to report the function information generated by
the MCJIT engine – read parsing the debug information for in-memory ELF module
and report it using JIT API. As the results, we figured out that it did not work
properly for the following cases: inline functions, the functions located in
multiple source files, the functions having several bodies (address ranges).
Solution:
To overcome limitations described above, we have introduced new APIs as a part
of Intel ITT APIs to report the entire in-memory ELF module to be further
processed as regular ELF binaries with debug information.
This patch
1. Switches LLVM to open source version of Intel ITT/JIT APIs
(https://github.com/intel/ittapi) to keep it always up to date.
2. Adds support of profiling the code generated by MCJIT engine using Intel
VTune profiler
Another separate patch will get rid of obsolete Intel ITT APIs stuff, having
LLVM already switched to https://github.com/intel/ittapi.
Differential Revision: https://reviews.llvm.org/D86435
The VE backend represents vector instructions with an explicit 'i32'
vector length operand. In the VE ISA, the vector length is always read
from the VL hardware register. The LVLGen pass inserts 'lvl'
instructions as necessary to set VL to the right value before each
vector instruction.
Reviewed By: kaz7
Differential Revision: https://reviews.llvm.org/D91416
This patch teaches the jump threading pass to call BPI->eraseBlock
when it folds a conditional branch.
Without this patch, BranchProbabilityInfo could end up with stale edge
probabilities for the basic block containing the conditional branch --
one edge probability with less than 1.0 and the other for a removed
edge.
This patch is one of the steps before we can safely re-apply D91017.
Differential Revision: https://reviews.llvm.org/D91511
We unconditionally marked i64 as Custom, but did not install a
handler in ReplaceNodeResults when i64 isn't legal type. This
leads to ReplaceNodeResults asserting.
We have two options to fix this. Only mark i64 as Custom on
64-bit targets and let it expand to two i32 bitreverses which
each need a VPPERM. Or the other option is to add the Custom
handling to ReplaceNodeResults. This is what I went with.
In the last change to IRCE the BPI is ignored if BFI is present; however,
BFI and BPI have different thresholds. Specifically, the BPI approach checks only
the latch exit probability, so it is expected that if the loop has only one exit block (the latch)
the behavior with BFI and BPI should be the same.
The BPI approach by default uses a threshold of 10, so a loop with an estimated
number of iterations of less than 10 is not considered for IRCE optimization.
The BFI approach uses the default value 3, which is inconsistent.
The CL modifies the code to use the same threshold for both approaches.
The test is updated because it has two side-exits (besides the latch), each with a
probability of 1/16, so BFI estimates the number of runtime iterations at about 7
(1/16 + 1/16 + some for the latch) and the test fails.
Reviewers: mkazantsev, ebrevnov
Reviewed By: mkazantsev
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D91230
There are 1-2 potential follow-up NFC commits to reduce
this further on the way to generalizing this for vectors.
The operand replacing path should be dead code because demanded
bits handles that more generally (D91415).
I noticed an add example like the one from D91343, so here's a similar patch.
The logic is based on existing code for the single-use demanded bits fold.
But I only matched a constant instead of computing known bits on the
operands because that was the motivating pattern that I noticed.
I think this will allow removing a special-case (but incomplete) dedicated
fold within visitAnd(), but I need to untangle the existing code to be sure.
https://rise4fun.com/Alive/V6fP
```
Name: add with low mask
Pre: (C1 & (-1 u>> countLeadingZeros(C2))) == 0
%a = add i8 %x, C1
%r = and i8 %a, C2
=>
%r = and i8 %x, C2
```
Differential Revision: https://reviews.llvm.org/D91415
This patch turns VPWidenGEPRecipe into a VPValue and uses it
during VPlan construction and codegeneration instead of the plain IR
reference where possible.
Reviewed By: dmgreen
Differential Revision: https://reviews.llvm.org/D84683
In an effort to make code around flag determination more readable, and (possibly) prepare for a follow up change, factor out some of the flag detection logic. In the process, reduce the number of locations we mutate wrap flags by a couple.
Note that this isn't NFC. The old code tried for NSW xor (NUW || NW). That is, two different paths computed different sets of wrap flags. The new code will try for all three. The result is that some expressions end up with a few extra flags set.
This is used to test RemoveRedundantDbgInstrs(), which is used by other
passes.
Reviewed By: ychen
Differential Revision: https://reviews.llvm.org/D91477
Describe in the BackEnd Developer's Guide. Instrument a few backends.
Remove an old unused timing facility. Add a null backend for timing
the parser.
Differential Revision: https://reviews.llvm.org/D91388
See discussion in https://bugs.llvm.org/show_bug.cgi?id=45073 / https://reviews.llvm.org/D66324#2334485
the implementation is known-broken for certain inputs,
the bug report was open for a significant amount of time,
and there has been no activity to address it.
Therefore, just completely rip out all of misexpect handling.
I suspect, fixing it requires redesigning the internals of MD_misexpect.
Should anyone commit to fixing the implementation problem,
starting from clean slate may be better anyways.
This reverts commit 7bdad08429,
and some of its follow-ups that don't stand on their own.
This library is only used in Flang at the moment and not tested within LLVM.
Having it as a component is breaking llvm-config:
```
$ bin/llvm-config --shared-mode
llvm-config: error: component libraries and shared library
llvm-config: error: missing: [...]/lib/libLLVMFrontendOpenACC.a
```
This will be reverted when unit tests are provided for it.
Reviewed By: clementval
Differential Revision: https://reviews.llvm.org/D91470
When the load value is folded into the sin/cos operation, the
AMDGPU library calls simplifier could still mark the function
as unmodified. Instead, ensure that if there is an early return,
it returns whether the load was folded into the sin/cos call.
Authored by MJDSys
Differential Revision: https://reviews.llvm.org/D91401
I'm not sure why it was added to DAGToDAG originally, but it seems
to make sense alongside the non-TLS version: LowerGlobalAddress
Differential Revision: https://reviews.llvm.org/D91432
dependency. NFC
Use findSingleDependency in place of FindDependencies and stop passing a
set of Instructions around. Modify FindDependencies to return a boolean
flag which indicates whether the dependencies it has found are all
valid.
ValueTracking was using a more powerful abs() implementation. Roll
it into KnownBits::abs(). Also add an exhaustive test for abs(),
in both the poisoning and non-poisoning variants.
As in inlineCallIfPossible and InlinerPass, mergeAttributesForInlining
should be called after inlining to merge the callee's attributes into the caller. But it is not called in
AlwaysInliner, which causes the caller's attributes to be inconsistent with the inlined code.
Attached test case demonstrates that attribute "min-legal-vector-width"="512" is
not merged into caller without this patch, and it causes failure in SelectionDAG
when lowering the inlined AVX512 intrinsic.
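A rough sketch of the fix (assumed signatures from Attributes.h and Cloning.h; the actual AlwaysInliner change may differ):
```
#include "llvm/IR/Attributes.h"
#include "llvm/Transforms/Utils/Cloning.h"

using namespace llvm;

// After successfully inlining CB's callee into Caller, merge the callee's
// attributes (e.g. "min-legal-vector-width"="512") into the caller, as the
// other inliners already do.
static void inlineAndMergeAttributes(CallBase &CB, Function &Caller,
                                     Function &Callee) {
  InlineFunctionInfo IFI;
  if (InlineFunction(CB, IFI).isSuccess())
    AttributeFuncs::mergeAttributesForInlining(Caller, Callee);
}
```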
Differential Revision: https://reviews.llvm.org/D91446
When we see
```
%sub = G_SUB 0, %x
%select = G_SELECT %cc, %t, %sub
```
Fold away the G_SUB by producing
```
%select = CSNEG %t, %x, cc
```
Simple IR example: https://godbolt.org/z/K8TEnh
This is valid on both sides of the select, but for now, just handle one side.
It may make more sense to handle swapping sides during post-legalizer lowering.
Differential Revision: https://reviews.llvm.org/D90723
Reducing some code duplication.
We had a helper for checking if a predicate is unsigned. Remove that and use
the existing function in Instructions.cpp.
Differential Revision: https://reviews.llvm.org/D91288