If we happen to extract a non-dword subreg, the logic of the function
breaks and it may shrink the dmask because it does not recognize the
use of a lane (or lanes).
This bug is next to impossible to trigger with the current
lowering in the BE, but it breaks in one of my future patches.
Differential Revision: https://reviews.llvm.org/D93782
D93491 implemented `--version` for the MachO LLD port, but asserts that the string contains something like "LLD N.N". However, this is just the cmake default for `LLD_VERSION_STRING`, and downstream users may choose a different value, e.g. a rolling distro may print "LLD trunk".
Some predicates can be considered the same as long as the operands are
flipped. For example, a > b gives the same result as b < a. This maps
instructions in greater-than form to their appropriate less-than form,
swapping the operands in the IRInstructionData only, allowing for
more flexible matching.
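As a rough illustration (the helper below is made up; the real logic lives in the IRSimilarity analysis), the canonicalization amounts to something like:
```
// A minimal sketch, not the actual IRSimilarity code: a greater-than
// predicate is mapped to its swapped less-than form, and the caller records
// the operands in swapped order in the IRInstructionData without touching
// the instruction itself.
#include "llvm/IR/InstrTypes.h"
#include <utility>

using namespace llvm;

static std::pair<CmpInst::Predicate, bool>
canonicalizePredicate(CmpInst::Predicate P) {
  switch (P) {
  case CmpInst::ICMP_SGT: // a > b  ==  b < a
  case CmpInst::ICMP_UGT:
  case CmpInst::ICMP_SGE: // a >= b ==  b <= a
  case CmpInst::ICMP_UGE:
    return {CmpInst::getSwappedPredicate(P), /*SwapOperands=*/true};
  default:
    return {P, /*SwapOperands=*/false};
  }
}
```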
Tests:
llvm/test/Transforms/IROutliner/outlining-isomorphic-predicates.ll
llvm/unittests/Analysis/IRSimilarityIdentifierTest.cpp
Reviewers: jroelofs, paquette
Differential Revision: https://reviews.llvm.org/D87310
Certain instructions, such as adds and multiplies, can have their operands
flipped and still be considered the same. When we are analyzing
structure, this gives slightly more flexibility when creating a mapping from
one region to another. For a commutative instruction we add both of its
operands as candidates for a corresponding operand, rather than just the
exact positional match. We then eliminate items from the set until there is
only one valid mapping between the regions of code.
We do this for adds, multiplies, and equality checking. However, this is
not done for floating-point instructions, since the order can still
matter in some cases.
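A minimal sketch of the idea (the helper name is invented, not the IROutliner's actual interface):
```
// Sketch only: which opcodes are treated as commutative for the purposes of
// the operand mapping. Integer adds, multiplies, and equality compares admit
// both operand orders; floating-point variants are excluded because their
// order can still matter.
#include "llvm/IR/Instructions.h"

using namespace llvm;

static bool treatAsCommutative(const Instruction &I) {
  switch (I.getOpcode()) {
  case Instruction::Add:
  case Instruction::Mul:
    return true;
  case Instruction::ICmp:
    // Equality/inequality compares ignore operand order; relational ones
    // are handled by the predicate-swapping logic described earlier.
    return cast<ICmpInst>(I).isEquality();
  default:
    return false; // FAdd/FMul etc. are deliberately excluded, see above.
  }
}
```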
Tests:
llvm/test/Transforms/IROutliner/outlining-commutative-fp.ll
llvm/test/Transforms/IROutliner/outlining-commutative.ll
llvm/unittests/Analysis/IRSimilarityIdentifierTest.cpp
Reviewers: jroelofs, paquette
Differential Revision: https://reviews.llvm.org/D87311
The source pointer type is not necessarily the same as the result
pointer type, so we can't simply return the original null pointer;
the result may need to be a null pointer of a different type.
Effectively, this is what we were previously already doing when
the GEP was used in conjunction with a load or store, but this
fold can also be applied more generally:
> The only in bounds address for a null pointer in the default
> address-space is the null pointer itself.
This removes a lot of unnecessary padding, trimming the size of the struct from 728 to 608 bytes on 64-bit platforms.
Reviewed By: MyDeveloperDay
Differential Revision: https://reviews.llvm.org/D93758
This patch extends the SDNode ISel support for RVV from only the
vector/vector instructions to include the vector/scalar and
vector/immediate forms.
It uses splat_vector to carry the scalar in each case, except when
XLEN < SEW (RV32 with SEW=64), in which case a custom node `SPLAT_VECTOR_I64`
is used for type legalization and to encode the fact that the value is
sign-extended to SEW. When the scalar is a full 64-bit value, we use a
sequence to materialize the constant into the vector register.
The non-intrinsic ISel patterns have also been split into their own
file.
Authored-by: Roger Ferrer Ibanez <rofirrim@gmail.com>
Co-Authored-by: Fraser Cormack <fraser@codeplay.com>
Reviewed By: craig.topper
Differential Revision: https://reviews.llvm.org/D93312
If the GEP isn't inbounds, then accessing a GEP of a null location
is generally not UB.
While this is a minimal fix, the GEP of null handling should
probably be its own fold.
Previously we generated a separate serialization method for each op.
Those serialization methods duplicated the logic of handling
operands/results/attributes and such.
This commit creates a generic method and lets suitable op-specific
serialization methods call into it.
wc -l SPIRVSerialization.inc: before 8304; after: 5597 (So -2707)
Reviewed By: hanchung, ThomasRaoux
Differential Revision: https://reviews.llvm.org/D93535
Previously we generated a separate deserialization method for each op.
Those deserialization methods duplicated the logic of parsing
operands/results/attributes and such.
This commit creates a generic method and lets suitable op-specific
deserialization methods call into it.
wc -l SPIRVSerialization.inc: before 13290; after: 8304 (So -4986)
Reviewed By: hanchung, ThomasRaoux
Differential Revision: https://reviews.llvm.org/D93504
This commit renames various SPIR-V related conversion files for
consistency. It drops the "Convert" prefix from various files and
fixes various comment headers.
Reviewed By: hanchung, ThomasRaoux
Differential Revision: https://reviews.llvm.org/D93489
Every basic block section symbol created by -fbasic-block-sections will contain
".__part." so that the symbol is recognizable as corresponding to a basic block
fragment of its function.
This patch solves two problems:
a) Like D89617, we want function symbols with suffixes to be properly qualified
so that external tools like profile aggregators know exactly what this
symbol corresponds to.
b) The current basic block naming just adds a ".N" to the symbol name where N is
some integer. This collides with how clang creates __cxx_global_var_init.N.
clang creates these symbol names to call constructor functions and basic
block symbol naming should not use the same style.
Fixed all the test cases and added an extra test for __cxx_global_var_init
breakage.
Differential Revision: https://reviews.llvm.org/D93082
Previously all SCF to SPIR-V conversion patterns were tested as part of
the -convert-gpu-to-spirv pass. That obscured the structure we
want. This commit fixes it.
Reviewed By: ThomasRaoux, hanchung
Differential Revision: https://reviews.llvm.org/D93488
The current state of the transform is still not enough to support
my motivational pattern, because it has one more "induction variable".
I had delayed posting this patch because originally even just rewriting
the loop as countable wasn't enough to nicely transform my motivational pattern:
I expected that extra IV to be rewritten afterwards,
but that wasn't happening until I fixed it in D91800.
So, this patch allows the 'left-shift until bittest' loop idiom
as long as the inserted ops are cheap,
and lifts any and all extra use checks on the instructions.
Reviewed By: craig.topper
Differential Revision: https://reviews.llvm.org/D92754
If the bitmask is for the sign bit, instcombine will have canonicalized
the pattern into a proper sign-bit check. Supporting that is still
simple, but requires a bit of a roundtrip: we first have to go through
`decomposeBitTestICmp()`, and then the rest just works.
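For illustration (hand-written, not taken from the tests), the two equivalent spellings look like this:
```
// Two equivalent spellings of "shift left until the sign bit is set".
// instcombine canonicalizes the explicit mask test (f1) into the signed
// compare against zero (f2), which is why the recognizer goes through
// decomposeBitTestICmp() to recover the underlying bit test.
static unsigned f1(unsigned x) {
  while (!(x & 0x80000000u)) // explicit test of bit 31
    x <<= 1;
  return x;
}

static unsigned f2(unsigned x) {
  while ((int)x >= 0) // canonical sign-bit check of the same value
    x <<= 1;
  return x;
}
```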
Reviewed By: craig.topper
Differential Revision: https://reviews.llvm.org/D91726
The handling of the case where the mask is a constant is trivial:
if said constant is a power of two, the bit in question is log2(mask),
and the rest just works.
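A minimal sketch of that constant-mask handling (the helper name is invented):
```
// Sketch only: if the mask is a power of two, the tested bit position is
// simply log2(mask), after which the existing variable-bit-position
// handling applies unchanged.
#include "llvm/ADT/APInt.h"
#include <optional>

static std::optional<unsigned> getTestedBit(const llvm::APInt &Mask) {
  if (!Mask.isPowerOf2())
    return std::nullopt; // not a single-bit mask; the idiom does not apply
  return Mask.logBase2(); // e.g. mask 0x00800000 -> bit 23
}
```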
Reviewed By: craig.topper
Differential Revision: https://reviews.llvm.org/D91725
The motivation here is the following inner loop in the fp16/fp24 -> fp32 expander
that runs as part of floating-point DNG decompression in the RawSpeed library:
cd380bb9a2/src/librawspeed/decompressors/DeflateDecompressor.cpp (L112-L115)
```
while (!(fp32_fraction & (1 << 23))) {
fp32_exponent -= 1;
fp32_fraction <<= 1;
}
```
(https://godbolt.org/z/r13YMh)
As one might notice, that loop is currently uncountable, and that whole code stays scalar.
Yet, it is rather trivial to make that loop countable:
https://godbolt.org/z/do8WMz
and we can prove that via alive2:
https://alive2.llvm.org/ce/z/7vQnji (ha nice, isn't it?)
... and that allows the whole fp16->fp32 code to vectorize:
https://godbolt.org/z/7hYr13
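For reference, a hand-written sketch of the equivalent closed form that makes the trip count computable (assuming the fraction is nonzero and only bits below bit 23 are set; this is not the transform's literal output):
```
#include <cstdint>

// Equivalent closed form of the loop above: the trip count is the distance
// from the fraction's highest set bit up to bit 23, so the whole loop body
// collapses to a single shift amount.
static void normalize(uint32_t &fp32_fraction, int32_t &fp32_exponent) {
  unsigned shift = __builtin_clz(fp32_fraction) - (31 - 23);
  fp32_exponent -= shift;
  fp32_fraction <<= shift;
}
```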
Now, while I'd love to get there, I feel like I should take it in steps.
For now, this introduces support for the most basic case,
where the bit position is given by a variable,
and the loop *will* go away (it has no live-outs other than the recurrence
and no extra instructions in the loop).
I have added sufficient (I believe) test coverage,
and alive2 is happy with those transforms.
Reviewed By: craig.topper
Differential Revision: https://reviews.llvm.org/D91038
This should've been in 7ad666798f but wasn't.
Squashes these two commits:
Revert "[clang][cli] Let denormalizer decide how to render the option based on the option class"
This reverts commit 70410a2649.
Revert "[clang][cli] Implement `getAllArgValues` marshalling"
This reverts commit 63a24816f5.
When there are constants that occupy the same structural location, but do not
have the same value, between different regions, we cannot simply outline the
region. Instead, we find the constants that are not the same in each
location and promote them to arguments to be passed into the respective
functions. At each call site, we pass the constant in as an argument
regardless of type.
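A hand-written C-level illustration of the idea (not actual outliner output):
```
// The two regions below are structurally identical but use the constants 5
// and 6 in the same position, so the differing constant is promoted to a
// parameter of the outlined function and each call site passes its own value.
static int outlined(int v, int k) { return (v + 1) * k; }

int caller1(int x) { return outlined(x, 5); } // region originally computed (x + 1) * 5
int caller2(int y) { return outlined(y, 6); } // region originally computed (y + 1) * 6
```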
Added/Edited Tests:
llvm/test/Transforms/IROutliner/outlining-constants-vs-registers.ll
llvm/test/Transforms/IROutliner/outlining-different-constants.ll
llvm/test/Transforms/IROutliner/outlining-different-globals.ll
Reviewers: paquette, jroelofs
Differential Revision: https://reviews.llvm.org/D87294
- Updated the link to join the meeting to reflect the new WebEx information
- Added a note about the new Google Doc for keeping track of notes, and who
to contact if you experience access issues with the document
- Left a reference to the minutes from previous meetings being available
through a search of the flang-dev mailing list
Reviewed By: jdoerfert
Differential Revision: https://reviews.llvm.org/D93770
Also include a special case pattern to use vmv.v.x vd, zero when
the argument is 0.0.
Reviewed By: khchen
Differential Revision: https://reviews.llvm.org/D93672
GetCommandSPExact is called exactly once with include_aliases set to
true, so make it a default argument. Use early returns to simplify the
implementation.
741978d727 made clang produce output that's 2x as large at least in
sanitizer builds. https://reviews.llvm.org/D83892#2470185 has a
standalone repro.
This reverts the following commits:
Revert "[clang][cli] Port CodeGenOpts simple string flags to new option parsing system"
This reverts commit 95d3cc67ca.
Revert "[clang][cli] Port LangOpts simple string based options to new option parsing system"
This reverts commit aec2991d08.
Revert "[clang][cli] Streamline MarshallingInfoFlag description"
This reverts commit 27b7d64688.
Revert "[clang][cli] Port LangOpts option flags to new option parsing system"
This reverts commit 383778e217.
Revert "[clang][cli] Port CodeGen option flags to new option parsing system"
This reverts commit 741978d727.
We didn't have support for parsing DriverKit in our `-platform`
flag, so add that too. Also remove a bunch of unnecessary namespace
prefixes.
Reviewed By: #lld-macho, thakis
Differential Revision: https://reviews.llvm.org/D93741
Update the documentation and add a test.
Also change SIZE_MAX to std::numeric_limits<int64_t>::max() to fix a build failure.
Differential Revision: https://reviews.llvm.org/D93419
The current approach doesn't work well in cases when multiple paths are predicted to be "cold". By "cold" paths I mean those containing an "unreachable" instruction, a call marked with the 'cold' attribute, or the 'unwind' handler of an 'invoke' instruction. The issue is that heuristics are applied one by one until the first match, which essentially ignores the relative hotness/coldness of other paths.
The new approach unifies processing of "cold" paths by assigning a predefined absolute weight to each block estimated to be "cold". Then we propagate these weights up/down the IR similarly to the existing approach. Finally, we set edge probabilities based on the estimated block weights.
One important difference is how we propagate weights up. The existing approach propagates the same weight to all blocks that are post-dominated by a block with some "known" weight. This is useless, at least because it always gives a 50/50 distribution, which is assumed by default anyway. Worse, it causes the algorithm to skip further heuristics and can miss setting a more accurate probability. The new algorithm propagates the weight up only to blocks that dominate, and are post-dominated by, a block with some "known" weight. In other words, to blocks that execute if and only if the block with the "known" weight executes.
In addition, the new approach processes loops in a uniform way as well. Essentially, loop exit edges are estimated as "cold" paths relative to back edges and are considered uniformly with other coldness/hotness markers.
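For illustration, a hand-written example of the kind of control flow this targets:
```
// Both early-exit paths below are "cold" -- one ends in a call to a function
// marked 'cold', the other in a trap/unreachable -- and their weights should
// be combined consistently instead of letting the first matching heuristic
// decide alone.
__attribute__((cold)) void report_error(int code);

int process(int v) {
  if (v < 0) {          // cold path #1: calls a function marked 'cold'
    report_error(v);
    return -1;
  }
  if (v > 1000)         // cold path #2: ends in a trap (unreachable)
    __builtin_trap();
  return v * 2;         // the remaining path is the hot one
}
```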
Reviewed By: yrouban
Differential Revision: https://reviews.llvm.org/D79485
This is a follow-up to D93393.
Without this patch, clangd takes the symbol definition from the static index even if that definition was removed from the dynamic index.
Reviewed By: sammccall
Differential Revision: https://reviews.llvm.org/D93683
Adds rewrite patterns to convert select+cmp instructions into clamp
instructions whenever possible. Support is added to convert:
- FOrdLessThan, FOrdLessThanEqual to GLSLFClampOp.
- SLessThan, SLessThanEqual to GLSLSClampOp.
- ULessThan, ULessThanEqual to GLSLUClampOp.
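For illustration, the C-level shape of the kind of pattern that becomes a clamp (hand-written, not taken from the tests):
```
// A max/min pair built from ordered less-than compares and selects is
// equivalent to a single clamp and can be emitted as a GLSL clamp op.
float clamp_like(float x, float lo, float hi) {
  float t = x < lo ? lo : x; // FOrdLessThan + select == max(x, lo)
  return t < hi ? t : hi;    // FOrdLessThan + select == min(t, hi)
}
```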
Reviewed By: mravishankar
Differential Revision: https://reviews.llvm.org/D93618
https://bugs.llvm.org/show_bug.cgi?id=48539
Add support for reflowing Qt translator comments.
When a comment is reflown and part of it is moved onto a new line, the extra prefix characters should be repeated as part of the comment token on that line.
Reviewed By: curdeius, HazardyKnusperkeks
Differential Revision: https://reviews.llvm.org/D93490
https://bugs.llvm.org/show_bug.cgi?id=48535
Using `SpaceAfterCStyleCast: true`, the code
```
size_t idx = (size_t) a;
size_t idx = (size_t) (a - 1);
```
is formatted as:
```
size_t idx = (size_t) a;
size_t idx = (size_t)(a - 1);
```
This revision aims to improve that by refining the function that tries to identify a CastRParen.
Reviewed By: curdeius
Differential Revision: https://reviews.llvm.org/D93626
Adds ARMBankConflictHazardRecognizer. This hazard recognizer
looks for a few situations where the same base pointer is used and
then checks whether the offsets lead to a bank conflict. Two
parameters are also added to permit overriding of the target
assumptions:
arm-data-bank-mask=<int> - Mask of bits which are to be checked for
conflicts. If all these bits are equal in the offsets, there is a
conflict.
arm-assume-itcm-bankconflict=<bool> - Assume that there will be bank
conflicts on any loads to a constant pool.
This hazard recognizer is enabled for Cortex-M7, where the Technical
Reference Manual states that there are two DTCM banks banked using bit
2 and one ITCM bank.
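A minimal sketch of the conflict test described above (the helper is invented, not the recognizer's actual code):
```
// Two accesses off the same base conflict when all bits selected by the bank
// mask are equal in their offsets, i.e. both accesses fall into the same bank.
static bool offsetsConflict(unsigned OffA, unsigned OffB, unsigned BankMask) {
  return (OffA & BankMask) == (OffB & BankMask);
}
// With the Cortex-M7 DTCM mask of bit 2 (0x4): offsets 0 and 8 land in the
// same bank (conflict), while offsets 0 and 4 do not.
```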
Differential Revision: https://reviews.llvm.org/D93054