This patch adds support for creating Guard Address-Taken IAT Entry Tables (.giats$y sections) in object files, matching the behavior of MSVC. These contain lists of address-taken imported functions, which are used by the linker to create the final GIATS table.
Additionally, if any DLLs are delay-loaded, the linker must look through the .giats tables and add the respective load thunks of address-taken imports to the GFIDS table, as these are also valid call targets.
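As a rough illustration (hypothetical C++ source, not taken from the patch or its tests), an address-taken dllimport function is the kind of thing that gets an entry in this table when control-flow guard is enabled:
```
// Hypothetical example: 'imported_fn' comes from a DLL and its address is
// taken, so under control-flow guard its IAT entry is recorded in the
// .giats$y section of this object file.
extern "C" __declspec(dllimport) int imported_fn(const char *);

int (*indirect_call)(const char *) = &imported_fn;  // address-taken import

int call_it() { return indirect_call("hello"); }
```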
Reviewed By: rnk
Differential Revision: https://reviews.llvm.org/D87544
This should be a perfectly reasonable operation for scalable vectors.
Currently, it only works for zeroinitializer values of
ScalableVectorType, but the fundamental operation is sound and it should
be possible to make it work for other splats.

Reviewed By: david-arm
Differential Revision: https://reviews.llvm.org/D77442
m_SpecificInt has the same 'no undef element' behaviour as m_APInt so no change there, and anyway we have test coverage for undef elements in the fold.
Noticed while fixing a -Wshadow warning about shadowed Value *X, *Y variables.
Constant hoisting may hide the constant value behind a bitcast for And's
operand. Track down the constant to make the BFI result consistent
regardless of hoisting.
Differential Revision: https://reviews.llvm.org/D91450
aliasGEP() currently implements some special handling for the case
where all variable offsets are positive, in which case the constant
offset can be taken as the minimal offset. However, it does not
perform the same handling for the all-negative case. This means that
the alias-analysis result between two GEPs is asymmetric:
If GEP1 - GEP2 is all-positive, then GEP2 - GEP1 is all-negative,
and the first will result in NoAlias, while the second will result
in MayAlias.
Apart from producing sub-optimal results for one order, this also
violates our caching assumption. In particular, if BatchAA is used,
the cached result depends on the order of the GEPs in the first query.
This results in an inconsistency in BatchAA and AA results, which
is how I noticed this issue in the first place.
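For illustration, here is a hypothetical C++ pattern (not from the patch) of the kind affected: the two loads differ by a constant plus a variable term that is non-negative after zero-extension, so the alias result should not depend on which GEP the query lists first.
```
#include <cstddef>

// Hypothetical pattern: &p[(size_t)i + 2] minus &p[0] is a constant (+8)
// plus a variable part (4 * i) that is known non-negative, so both query
// orders should give the same alias result.
int sum(const int *p, unsigned i) {
  int a = p[0];
  int b = p[(size_t)i + 2];
  return a + b;
}
```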
Differential Revision: https://reviews.llvm.org/D91383
This patch introduces a new VPDef class, which can be used to
manage VPValues defined by recipes/VPInstructions.
The idea here is to mirror VPUser for values defined by a recipe. A
VPDef can produce either zero (e.g. a store recipe), one (most recipes)
or multiple (VPInterleaveRecipe) result VPValues.
To traverse the def-use chain from a VPDef to its users, one has to
traverse the users of all values defined by a VPDef.
VPValues now contain a pointer to their corresponding VPDef, if one
exists. To traverse the def-use chain upwards from a VPValue, we first
need to check if the VPValue is defined by a VPDef. If it does not have
a VPDef, this means we have a VPValue that is not directly defined
inside the plan and we are done.
If we have a VPDef, it is defined inside the region by a recipe, which
is a VPUser, and the upwards def-use chain traversal continues by
traversing all its operands.
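The following toy C++ model (not the actual VPlan classes; all names here are stand-ins) sketches the relationships described above: a value points back to its def, a def produces zero or more values, a recipe is both a def and a user, and the upward traversal stops at values with no def.
```
#include <vector>

struct Def;                           // stand-in for VPDef

struct Value {                        // stand-in for VPValue
  Def *Definition = nullptr;          // null => not defined inside the plan
};

struct Def {
  std::vector<Value *> DefinedValues; // zero, one, or many results
};

struct User {                         // stand-in for VPUser
  std::vector<Value *> Operands;
};

struct Recipe : Def, User {};         // defines values and uses operands

// Upward def-use traversal: stop at live-ins, otherwise continue through
// the defining recipe's operands.
void visitUpwards(Value *V) {
  auto *R = static_cast<Recipe *>(V->Definition);
  if (!R)
    return;
  for (Value *Op : R->Operands)
    visitUpwards(Op);
}
```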
Note that we need to add an additional field to VPValue to link them
to their defs. The space increase is going to be offset by being able to
remove the SubclassID field in future patches.
Reviewed By: Ayal
Differential Revision: https://reviews.llvm.org/D90558
This wasn't properly remapping the type like with the other
attributes, so this would end up hitting a verifier error after
linking different modules using byref.
For the scattered operands of load instructions it makes sense to use a
gathering load intrinsic, which can lower to a native instruction on
X86/AVX512 and ARM/SVE. This also enables building the vectorization
tree with entries containing scattered operands.
The next step is to add scattered stores.
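A hypothetical C++ source pattern (not from the patch or its tests) with such scattered loads: the four sources are unrelated pointers, so a consecutive vector load is impossible, but a single gathering load could feed the vectorized adds.
```
// Hypothetical example: loads from non-consecutive, unrelated addresses
// that a gathering load can cover in one vector operation.
void add4(double *out, const double *a, const double *b,
          const double *c, const double *d) {
  out[0] = *a + 1.0;
  out[1] = *b + 1.0;
  out[2] = *c + 1.0;
  out[3] = *d + 1.0;
}
```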
Fixes PR47629 and PR47623
Differential Revision: https://reviews.llvm.org/D90445
When processing conditional branches, if the condition is an AND of 2 compares
and the true successor only has the current block as predecessor, queue both
conditions for the true successor.
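For example (hypothetical C++ source, not from the patch):
```
// Hypothetical example: '(x > 0) & (y > 0)' can become an AND of two
// compares, and the 'then' block has the branching block as its only
// predecessor, so both conditions can be queued as known there.
int clamp_product(int x, int y) {
  if ((x > 0) & (y > 0))
    return x * y;   // both 'x > 0' and 'y > 0' hold here
  return 0;
}
```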
Implement JumpTable to make BRIND work on VE, and update an existing
br_jt regression test.
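For reference, a dense switch such as the following hypothetical example (not the test case) is typically lowered through a jump table and therefore an indirect branch:
```
// Hypothetical example: dense cases are lowered via a jump table, ending
// in an indirect branch (BRIND) to the selected case block.
int classify(int x) {
  switch (x) {
  case 0: return 10;
  case 1: return 21;
  case 2: return 32;
  case 3: return 43;
  case 4: return 54;
  default: return -1;
  }
}
```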
Reviewed By: simoll
Differential Revision: https://reviews.llvm.org/D91582
A common routine when the compiler crashes is to attempt to rerun the
cc1 command line by copying and pasting the arguments printed by
`llvm::PrettyStackTraceProgram::print`. However, these arguments are
not quoted or escaped, which means they must be manually edited before
they work correctly. This patch ensures that shell-unfriendly characters
are C-escaped and arguments with spaces are double-quoted, reducing the
frustration of running cc1 inside a debugger.
As the quoting is C, this is "best effort for most shells", but should
be fine for at least bash, zsh, csh, and cmd.exe.
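A minimal C++ sketch of this kind of quoting (illustrative only, not the code from the patch; it only handles spaces, quotes, and backslashes, whereas the patch covers shell-unfriendly characters in general):
```
#include <string>

// Illustrative sketch: double-quote arguments containing spaces and
// C-escape quotes/backslashes so the printed cc1 line can be pasted
// into a shell.
static std::string quoteForPaste(const std::string &Arg) {
  bool NeedQuotes = Arg.empty() || Arg.find(' ') != std::string::npos;
  std::string Out;
  if (NeedQuotes)
    Out += '"';
  for (char C : Arg) {
    if (C == '"' || C == '\\')
      Out += '\\';            // C-style escape
    Out += C;
  }
  if (NeedQuotes)
    Out += '"';
  return Out;
}
```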
Reviewed by: jhenderson
Differential Revision: https://reviews.llvm.org/D90759
This patch uses the new `getMnemonic` helper from D90039
to display mnemonics instead of the internal opcodes.
The main motivation behind using the mnemonics is that they
are more user-friendly and more directly related to the assembly
the users will be presented with.
Reviewed By: paquette
Differential Revision: https://reviews.llvm.org/D90040
This patch factors out the part of printInstruction that gets the
mnemonic string for a given MCInst. This is intended to be used
subsequently for the instruction-mix remarks to display the final
mnemonic (D90040).
Unfortunately making `getMnemonic` available to the AsmPrinter
seems to require making it virtual. Not sure if there's a way around
that with the current layering of the AsmPrinters.
Reviewed By: Paul-C-Anagnostopoulos
Differential Revision: https://reviews.llvm.org/D90039
Use LINK_COMPONENTS instead of explicit target_link_libraries for components.
This avoids redundancy and potential inconsistencies.
Differential Revision: https://reviews.llvm.org/D91461
When instructions are cloned from block BB to PredBB in the method
DuplicateCondBranchOnPHIIntoPred(), the number of successors of PredBB
changes from 1 to the number of successors of BB, so we have to copy
the branch probabilities from BB to PredBB.
Reviewed By: Kazu Hirata
Differential Revision: https://reviews.llvm.org/D90841
With a function pass manager, debugify would insert debug info metadata
before getting to the function passes while processing the pass manager,
causing debugify to be skipped while running the function passes.
Skip special passes + verifier + printing passes. Compared to the legacy
implementation of -debugify-each, this additionally skips verifier
passes. Probably no need to update the legacy version since it will be
obsolete soon.
This fixes 2 instcombine tests using -debugify-each under NPM.
Reviewed By: MaskRay
Differential Revision: https://reviews.llvm.org/D91558
When we see
```
xor = G_XOR xor_lhs, -1
select = G_SELECT cc, tval, xor
```
Fold this into
```
select = CSINV tval, xor_lhs, cc
```
Update select-select.mir to reflect the changes.
For now, only handle the case where the G_XOR is the false-value for the
G_SELECT. It may make more sense to handle the true-value case in post-legalizer
lowering.
Differential Revision: https://reviews.llvm.org/D90774
- In certain cases, a generic pointer could be assumed to be a pointer to
the global memory space or other spaces. With a dedicated target hook
to query that address space from a given value, the infer-address-space
pass can infer and propagate that to all its users.
Differential Revision: https://reviews.llvm.org/D91121
Add lvm/svm intrinsic instructions and a regression test. Change
RegisterInfo to specify that VM0/VMP0 are constant and reserved
registers. This modifies a vst regression test, so update it.
Also add pseudo instructions for VM512 register classes
and a mechanism to expand them after register allocation.
Reviewed By: simoll
Differential Revision: https://reviews.llvm.org/D91541
When processing conditional branches, if the condition is an OR of 2 compares
and the false successor only has the current block as predecessor, queue both
negated conditions for the false successor.
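For example (hypothetical C++ source, not from the patch):
```
// Hypothetical example: '(x == 0) | (y == 0)' can become an OR of two
// compares, and the false successor has the branching block as its only
// predecessor, so both negated conditions can be queued there.
int safe_div(int x, int y) {
  if ((x == 0) | (y == 0))
    return 0;
  return 100 / x + 100 / y;   // both x and y are known non-zero here
}
```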
This is a cut-down version of 1ec6e1 which was reverted due to a compile-time issue. The key changes made from that patch: 1) only infer the flags needed along each path, 2) be careful to preserve the order of checks, and 3) avoid computing NW flags at all since we need to prove the stronger property (does not cross 0) in the caller anyway.
Assuming this doesn't trip regressions, I'm going to try weakening (1). My end objective is to move flag inference into addrec construction. If I can't weaken (1) without compile-time impact, I'll have a problem.