Test Plan: using kernel ASAN and MSAN implementations in FreeBSD
Reviewed By: emaste, dim, arichardson
Differential Revision: https://reviews.llvm.org/D98286
Power10 introduces four new PowerPC instructions: hashst,
hashchk, hashstp, and hashchkp. These instructions will be
used for ROP protection.
This patch adds all four instructions.
Reviewed By: nemanjai, amyk, #powerpc
Differential Revision: https://reviews.llvm.org/D99375
This patch changes the isLegalUse check to ensure that
LSRInstance::GenerateConstantOffsetsImpl generates an
offset that results in a legal addressing mode and
formula. The check is changed to look similar to the
assert used to detect illegal formulas.
Differential Revision: https://reviews.llvm.org/D100383
Change-Id: Iffb9e32d59df96b8f072c00f6c339108159a009a
Value::replaceUsesOutsideBlock doesn't replace debug uses, which leads to an
unnecessary reduction in variable location coverage. Fix this, add a unittest for
it, and add a regression test demonstrating the change through instcombine's
replacedSelectWithOperand.
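A minimal sketch of the API in question (assuming the LLVM C++ headers; the
wrapper function is purely illustrative):
```
#include "llvm/IR/BasicBlock.h"
#include "llvm/IR/Instruction.h"
#include "llvm/IR/Value.h"

// Illustrative helper: replace every use of Old that lives outside BB with
// New. With this fix, debug uses of Old outside BB are updated as well, so
// variable locations are not silently dropped.
static void replaceOutsideBlock(llvm::Instruction &Old, llvm::Value *New,
                                llvm::BasicBlock *BB) {
  Old.replaceUsesOutsideBlock(New, BB);
}
```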
Reviewed By: djtodoro
Differential Revision: https://reviews.llvm.org/D99169
I'm working on the implementation of the OpenMP 5.1 feature `atomic compare`.
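For reference, a minimal sketch of the construct being implemented, following
the conditional-update form described in the OpenMP 5.1 specification:
```
// OpenMP 5.1 'atomic compare' (conditional-update form): atomically set x
// to d if x currently equals e.
void atomic_compare_example(int &x, int e, int d) {
#pragma omp atomic compare
  if (x == e) {
    x = d;
  }
}
```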
Reviewed By: jdoerfert
Differential Revision: https://reviews.llvm.org/D100507
Double square bracket attribute arguments can be arbitrarily complex,
and the attribute argument parsing logic recovers by skipping tokens.
As a fallback recovery mechanism, parse recovery stops before reading a
semicolon. This could lead to an infinite loop in the attribute list
parsing logic.
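For reference, a hedged illustration of the kind of input involved (the
specific attribute and arguments are only examples): the argument list of a
[[...]] attribute can contain arbitrarily complex token sequences that the
parser must be able to skip without getting stuck on the enclosing attribute
list.
```
// An attribute whose argument list contains nested parentheses, commas and
// string literals; error recovery must skip past such arguments without
// looping forever on the surrounding attribute list.
[[clang::annotate("example", (1 + 2) * 3, "nested, punctuated, args")]]
int annotated_variable = 0;
```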
This also fixes a CHECK line in @fadd_strict_unroll, which ensures that the
changes made to fixReduction() to support in-order reductions with
unrolling are being tested correctly.
The `e_flags` field contains a mixture of bitfields and regular flags; ensure that all of them can be serialized and deserialized.
Reviewed By: MaskRay
Differential Revision: https://reviews.llvm.org/D100250
Returning in memory is not supported, so fall back to sret.
Also, extend i1 and i16 to i32. Otherwise, they would be passed through
memory.
Differential Revision: https://reviews.llvm.org/D100543
If we are truncating from an i32 source before comparing the result against zero, then see if we can directly compare the source value against zero.
If the upper (truncated) bits are known to be zero, then we can compare against the source instead, hopefully increasing the chances of folding the compare into an EFLAGS result of the source operation.
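For illustration, a hedged C-level sketch of the pattern (the function is made
up; the actual fold operates on x86 SelectionDAG nodes):
```
// The low 8 bits are masked, so the bits discarded by the i16 truncation
// are known to be zero. Comparing the truncated value against zero is then
// equivalent to comparing 'src' against zero, which lets the compare reuse
// the EFLAGS produced by the AND instead of emitting a separate test.
bool low_byte_is_zero(unsigned x) {
  unsigned src = x & 0xFFu;               // i32 source operation
  unsigned short t = (unsigned short)src; // truncate i32 -> i16
  return t == 0;                          // compare the truncated result
}
```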
Fixes PR49028.
Differential Revision: https://reviews.llvm.org/D100491
With this patch vbslq_f32(vnegq_s32(a), b, c) lowers to a BIT instruction.
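A minimal sketch of the pattern (compiles only for an ARM/AArch64 NEON target;
the reinterpret cast is spelled out since vbslq_f32 takes an unsigned mask):
```
#include <arm_neon.h>

// Bitwise select whose mask comes from negating an integer vector, as in
// the expression above; with this patch the select lowers to a single BIT
// instruction.
float32x4_t select_by_mask(int32x4_t a, float32x4_t b, float32x4_t c) {
  return vbslq_f32(vreinterpretq_u32_s32(vnegq_s32(a)), b, c);
}
```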
Co-authored-by: Paul Walker <paul.walker@arm.com>
Differential Revision: https://reviews.llvm.org/D100304
Add an initial version of a helper to determine whether a recipe may
have side-effects.
Reviewed By: a.elovikov
Differential Revision: https://reviews.llvm.org/D100259
There were a few places in widenPHIInstruction where calculations of
offsets were failing to take the runtime calculation of VF into
account for scalable vectors. I've fixed those cases in this patch
as well as adding an assert that we should not be scalarising for
scalable vectors.
Tests are added here:
Transforms/LoopVectorize/AArch64/sve-widen-phi.ll
Differential Revision: https://reviews.llvm.org/D99254
At the moment, getMemoryOpCost returns 1 for all inputs if CostKind is
CodeSize or SizeAndLatency. This fools LoopUnroll into thinking memory
operations on large vectors have a cost of one, even if they will get
expanded to a large number of memory operations in the backend.
This patch updates getMemoryOpCost to return the cost for the type
legalization for both CodeSize and SizeAndLatency. This should more
accurately reflect the number of memory operations required.
From the description, I am not sure how latency should properly be
included in SizeAndLatency, but returning the size cost should clearly
be more accurate.
This does not cause any binary changes when building
MultiSource/SPEC2000/SPEC2006 with -O3 -flto for AArch64, likely because
large vector memops are not really formed by code emitted from Clang.
But using the C/C++ matrix extension can easily result in code with very
large vector operations directly from Clang, e.g.
https://clang.godbolt.org/z/6xzxcTGvb
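For illustration, a small example (assuming Clang's -fenable-matrix extension)
that produces such large vector operations directly from Clang, similar in
spirit to the godbolt link above:
```
// Requires clang's -fenable-matrix extension. Each matrix value is lowered
// to a flat 256 x float vector, so loads and stores of these values become
// very wide vector memory operations.
typedef float m16x16_t __attribute__((matrix_type(16, 16)));

m16x16_t add_matrices(m16x16_t a, m16x16_t b) {
  return a + b; // element-wise add over 256 elements
}
```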
Reviewed By: samparker
Differential Revision: https://reviews.llvm.org/D100291
The armv6m entry in cores_match() got separated from its
friends armv7m and armv7em. Reunite them to make it easier
to keep them all updated at the same time.
This patch updates most of the remaining regression tests (~400) to use
`flang-new` rather than `f18` when `FLANG_BUILD_NEW_DRIVER` is set.
This allows us to share more Flang regression tests between `f18` and
`flang-new`. A handful of tests have not been ported yet - these are
currently either failing or not supported by the new driver.
Summary of changes:
* RUN lines in tests are updated to use `%flang_fc1` instead of `%f18`
* option spellings in tests are updated to forms accepted by both `f18` and
`flang-new`
* variables in Bash scripts are renamed (e.g. F18 --> FLANG_FC1)
The updated tests will now be run with the new driver, `flang-new`,
whenever it is enabled (i.e when `FLANG_BUILD_NEW_DRIVER` is set).
Although this patch touches many files, the vast majority of the changes
are automatic:
```
grep -IEZlr "%f18" flang/test/ | xargs -0 -l sed -i 's/%f18/%flang_fc1/g'
```
Differential Revision: https://reviews.llvm.org/D100309
There are a few places in LoopVectorize.cpp where we have been too
cautious in adding VF.isScalable() asserts and it can be confusing.
It also makes it more difficult to see the genuine places where
work needs doing to improve scalable vectorization support.
This patch changes getMemInstScalarizationCost to return an
invalid cost instead of firing an assert for scalable vectors. Also,
vectorizeInterleaveGroup had multiple asserts all for the same
thing. I have removed all but one assert near the start of the
function, and added a new assert that we aren't dealing with masks
for scalable vectors.
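A rough sketch of the pattern (this is not the actual LoopVectorize code; it
only illustrates the InstructionCost API being used, with an illustrative
function and placeholder cost):
```
#include "llvm/Support/InstructionCost.h"

// Instead of asserting when asked about scalable vectors, return an invalid
// cost and let callers check validity before relying on the value.
static llvm::InstructionCost
getScalarizationCost(bool IsScalableVector, unsigned NumElements) {
  if (IsScalableVector)
    return llvm::InstructionCost::getInvalid();
  return NumElements; // placeholder per-element cost
}
```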
Differential Revision: https://reviews.llvm.org/D99727
This change adds convenient composed constants to be used for tsan_read_try_lock annotations, reducing the boilerplate at the instrumentation site.
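For context, a hedged sketch of the annotation boilerplate this refers to,
using the existing flags from <sanitizer/tsan_interface.h> (the function is
illustrative, and the exact names of the new composed constants are not shown
here):
```
#include <sanitizer/tsan_interface.h>

// Annotating a try-read-lock today means OR-ing the individual flags at
// every call site; the patch adds pre-composed constants for combinations
// like this one.
void annotate_try_read_lock(void *mu, bool acquired) {
  const unsigned flags = __tsan_mutex_read_lock | __tsan_mutex_try_lock;
  __tsan_mutex_pre_lock(mu, flags);
  __tsan_mutex_post_lock(
      mu, acquired ? flags : (flags | __tsan_mutex_try_lock_failed), 0);
}
```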
Reviewed By: dvyukov
Differential Revision: https://reviews.llvm.org/D99595
On Windows, float arguments are normally passed in float registers
in the calling convention for regular functions. For variable
argument functions, floats are passed in integer registers. This
already was done correctly since many years.
However, the surprising bit was that floats among the fixed arguments
are also supposed to be passed in integer registers, contrary to regular
functions. (This also seems to be the behaviour on ARM, both on Windows
and on e.g. hardfloat Linux.)
In the calling convention, don't promote shorter floats to f64, but
convert them to integers of the same length. (Floats passed as part of
the actual variable arguments are promoted to double already on the
C/Clang level; the LLVM vararg calling convention doesn't do any
extra promotion of f32 to f64 - this matches how it works on X86 too.)
Technically, this is an ABI break compared to older LLVM versions,
but it fixes compatibility with the official platform ABI. (In practice,
floats among the fixed arguments of a variable argument function are
a pretty rare construct.)
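To make the affected construct concrete, a small hedged example (the function
name is illustrative): a and b below are fixed float arguments of a variadic
function, and it is these that get passed in integer registers under this
convention.
```
#include <stdarg.h>

// 'a' and 'b' are fixed (named) float parameters of a variadic function.
// Under the calling convention described above they are passed in integer
// registers, unlike in a regular non-variadic function. The float passed as
// part of the actual variable arguments is promoted to double by C rules.
double sum3(float a, float b, ...) {
  va_list ap;
  va_start(ap, b);
  double c = va_arg(ap, double);
  va_end(ap);
  return a + b + c;
}
```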
Differential Revision: https://reviews.llvm.org/D100365
Keep running "not --crash" via the external "not" executable, but
for plain negations, and for cases that use the shell "!" operator,
just skip that argument and invert the return code.
The libcxx tests only use the shell operator "!" for negations,
never the "not" executable, because libcxx tests can be run without
having a fully built llvm tree available providing the "not"
executable.
This allows using the internal shell for libcxx tests.
It should be possible to reland this now that D99938 fixed the
one test failure in clang-tidy that broke when "not" was handled
internally, letting lit/python execute grep.exe directly instead
of via not.exe. (See D99330 and D99406 for more commentary on the
exact issue that broke and other potential ways of fixing it.)
Differential Revision: https://reviews.llvm.org/D98859
If the PHI-of-ops simplifies to an existing value, no real PHI is
created, which means the dependencies between the
PHI-of-ops and its operands are not materialized in IR. At the
moment, we fail to create a real PHI node for the PHI-of-ops,
because the PHI-of-ops root instruction is not re-visited if
one of the PHI-of-ops operands changes. We need to add the
operands as additional users in this case.
Even with this patch, there are still some dependencies
missing. I will continue tackling the outstanding
reported crashes in this area.
Fixes PR36501, PR42422, PR42557.
Reviewed By: asbirlea
Differential Revision: https://reviews.llvm.org/D66924
Now that LDS uses within non-kernel functions are handled by the
LowerModuleLDS pass, we no longer need to forcefully inline non-kernel
functions just because they use LDS. Do forceful inlining only when the
LowerModuleLDS pass is not enabled; it is enabled by default.
Reviewed By: JonChesterfield
Differential Revision: https://reviews.llvm.org/D100481
This is useful for expressing specific table-gen options, like selecting
a particular dialect to print.
Use it to fix the documentation for the `pdl_interp` dialect, which was
previously generated from the first dialect found in its input, i.e. `pdl`.
Differential Revision: https://reviews.llvm.org/D100517