Summary:
One option for the LIBOMPTARGET_INFO environment variable is to print the current status of the device's data mappings. The mappings are a resource shared among threads, so access to them must be protected when multiple streams are in use.
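As a rough sketch (the type and member names below are illustrative, not libomptarget's actual ones), the dump has to take the same lock that guards updates to the mapping table:

#include <cstdio>
#include <map>
#include <mutex>

// Illustrative sketch only; libomptarget's real table and locking differ.
struct DeviceInfoDump {
  std::mutex MapMtx;                        // guards HostToDeviceMap
  std::map<void *, void *> HostToDeviceMap; // host ptr -> device ptr

  void dumpMappings(int DeviceId) {
    // Other threads/streams may be adding or removing mappings concurrently,
    // so hold the lock for the whole dump.
    std::lock_guard<std::mutex> Guard(MapMtx);
    std::printf("Device %d host-to-device mappings:\n", DeviceId);
    for (const auto &Entry : HostToDeviceMap)
      std::printf("  host %p -> device %p\n", Entry.first, Entry.second);
  }
};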
Reviewed By: jdoerfert
Differential Revision: https://reviews.llvm.org/D95786
This requires a second index query for refs to overrides, as the refs
call doesn't tell you which ref points at which symbol.
Differential Revision: https://reviews.llvm.org/D95451
Under the softfp calling convention, we are often left with
VMOVRRD(extract(bitcast(build_vector(a, b, c, d)))) for the return value
of the function. These can be simplified to a,b or c,d directly,
depending on the value of the extract.
Big endian is a little different because the bitcast switches the lanes
around, meaning we end up with b,a or d,c.
Differential Revision: https://reviews.llvm.org/D94989
Unfortunately this treats override declarations as declarations, not as
references. I don't plan to land this until I have a fix for that issue.
Differential Revision: https://reviews.llvm.org/D95450
If we instantiate self-referenced anonymous records in foreach and
multiclass, the NAME value will point to an incorrect record. This happens
because the anonymous name is resolved too early.
This patch adds AnonymousNameInit to represent an anonymous record name.
When instantiating an anonymous record, it will update the referred name.
Differential Revision: https://reviews.llvm.org/D95309
Add a deleted volatile copy-assignment operator in the most derived atomic
to fix Bug 41784. The root cause: there is an `operator=(T) volatile`
that is a better match than the deleted copy-assignment operator of the base
class when `this` is `volatile`. The compiler sees that the right operand of
the assignment can be converted to `T` and chooses that path without taking
the deleted copy-assignment operator of the base class into account.
The current behavior of libstdc++ is different from what we have in libc++:
the same test fails to compile with libstdc++. Proof: https://godbolt.org/z/nebPYd
(everything is the same except the -stdlib option).
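A reduced illustration of the problem (not the exact test from the bug report):

#include <atomic>

// Assigning a non-volatile atomic to a volatile one must not compile, since
// atomics are not copy-assignable. Before this fix libc++ accepted it,
// because overload resolution fell through to `operator=(int) volatile`
// after converting `a` to int, while libstdc++ rejected it.
std::atomic<int> a{1};
volatile std::atomic<int> va{0};

void copy() {
  va = a;  // should be ill-formed; this is what the added deleted
           // volatile copy-assignment operator now enforces
}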
I chose to explicitly define the copy-assignment operator for atomic in the
most derived class, but we could probably also fix this by moving the
`operator=(T)` overloads from both specializations into the base class.
At first glance, that shouldn't break anything.
Differential Revision: https://reviews.llvm.org/D90968
The AArch64 DAG combine added by D90945 & D91433 extends the index
of a scalable masked gather or scatter to i32 if necessary.
This patch removes the combine and instead adds shouldExtendGSIndex, which
is used by visitMaskedGather/Scatter in SelectionDAGBuilder to query whether
the index should be extended before calling getMaskedGather/Scatter.
Reviewed By: david-arm
Differential Revision: https://reviews.llvm.org/D94525
This patch skips the test for the SBTarget::IsLoaded method on Windows,
since the logic there is different.
Differential Revision: https://reviews.llvm.org/D95686
Signed-off-by: Med Ismail Bennani <medismail.bennani@gmail.com>
D90687 introduced a crash:
llvm::LoopVectorizationCostModel::computeMaxVF(llvm::ElementCount, unsigned int):
Assertion `WideningDecisions.empty() && Uniforms.empty() && Scalars.empty() &&
"No decisions should have been taken at this point"' failed.
when compiling the following C code:
typedef struct {
  char a;
} b;
b *c;
int d, e;
int f() {
  int g = 0;
  for (; d; d++) {
    e = 0;
    for (; e < c[d].a; e++)
      g++;
  }
  return g;
}
with:
clang -Os -target hexagon -mhvx -fvectorize -mv67 testcase.c -S -o -
This occurred because, prior to D90687, computeFeasibleMaxVF was only called
from computeMaxVF when a scalar epilogue was allowed, but now it is always
called. This triggers the assert above, since computeFeasibleMaxVF collects
all viable VFs larger than the default MaxVF and calculates the register
usage for each of them, which performs exactly the analysis the assert
guards against. This only happens in computeFeasibleMaxVF when
TTI.shouldMaximizeVectorBandwidth returns true, and the Hexagon backend
implements this target hook to always return true.
Reported by @iajbar.
Reviewed By: fhahn
Differential Revision: https://reviews.llvm.org/D94869
This reverts commit 9ad94c12
It turns out that to correctly generate command line flags for LangOptions::OpenMP and LangOptions::OpenMPSimd, we need the flexibility of C++.
This patch adds an `SBTarget::IsLoaded(const SBModule&) const` endpoint
to lldb's Scripting Bridge API. As the name suggests, it will allow the
user to know if the module is loaded in a specific target.
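A rough usage sketch from the C++ side of the SB API (the helper and the
module lookup are illustrative, not part of the patch):

#include "lldb/API/LLDB.h"

// Look up a module by path in the target and ask whether it is loaded there.
bool IsModuleLoaded(lldb::SBTarget &target, const char *path) {
  lldb::SBFileSpec spec(path);
  lldb::SBModule module = target.FindModule(spec);
  return module.IsValid() && target.IsLoaded(module);
}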
rdar://37957625
Differential Revision: https://reviews.llvm.org/D95686
Signed-off-by: Med Ismail Bennani <medismail.bennani@gmail.com>
This adds a DAG combine for converting sext_inreg of VGetLaneu into
VGetLanes, provided the types match correctly.
Differential Revision: https://reviews.llvm.org/D95073
We can only legally extract from the lowest 128-bit subvector, so extract the correct subvector to allow us to handle 256/512-bit vector element extracts.
Under SoftFP calling conventions, we can be left with
extract(bitcast(BUILD_VECTOR(VMOVDRR(a, b), ..))) patterns that can
simplify to a or b, depending on the extract lane.
Differential Revision: https://reviews.llvm.org/D94990
Correct integer constants like `1UL << 63` to `UINT64_C(1) << 63` in
order to make them work on 32-bit machines. Tested on both i386 and
x86_64 machines.
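For illustration (a made-up snippet, not from the patch):

#include <cstdint>

// On an ILP32 target `unsigned long` is 32 bits wide, so `1UL << 63` is
// undefined behaviour and cannot form the intended bit mask, while
// UINT64_C(1) is a 64-bit constant on every platform.
//   uint64_t bad  = 1UL << 63;           // breaks on 32-bit machines
const uint64_t good = UINT64_C(1) << 63;  // 0x8000000000000000 everywhere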
Reviewed By: mgorny
Differential Revision: https://reviews.llvm.org/D95724
If we determine that the invariant path through the loop has no effects,
we can branch directly to the exit block instead of unswitching first.
Besides avoiding some extra work (unswitching first, then deleting the
loop again), this allows us to be more aggressive than regular unswitching
with respect to cost modeling. This approach should always be desirable.
This is similar in spirit to D93734, just that it uses the previously
added checks for loop-unswitching.
I tried to add the required no-op checks from scratch, as we only check
a subset of the loop. There is potential to unify the checks with
LoopDeletion, at the cost of adding a predicate whether a block should
be considered.
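A minimal illustration (not taken from the patch) of the kind of loop this
targets:

// `cond` is loop-invariant and, when it is false, no iteration has any
// observable effect. Instead of unswitching the loop into two copies and
// later deleting the empty one, we can branch straight to the exit when
// `cond` is false.
void store_if(int n, bool cond, int *p) {
  for (int i = 0; i < n; ++i)
    if (cond)
      p[i] = i;
}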
Reviewed By: jdoerfert
Differential Revision: https://reviews.llvm.org/D95468
The reduction of a sanitizer build failure when enabling the dominance check (D95335) showed that loop peeling also needs to take care of scope duplication, just like loop unrolling (D92887).
Reviewed By: nikic
Differential Revision: https://reviews.llvm.org/D95544
For unknown reasons, the alabaster theme on the docs server always generates
a GitHub link in the sidebar. Besides the privacy problems of having an iframe
to a third-party service, we never configured any GitHub integration, so
this button just links to the GitHub main site.
Button generation should be disabled by default, but as that is apparently
not the case for the alabaster theme on the server, this patch works around
the issue by explicitly turning off the GitHub integration.
This reverts commit d9b953d84b.
This commit resulted in build bot failures and the author is away from a
computer, so I am reverting on their behalf until they have a chance to
look into this.
This is the last revision in the migration from SimplePadOp to PadTensorOp;
SimplePadOp is removed in this patch. SliceAnalysis is updated slightly
because PadTensorOp, unlike SimplePadOp, takes a region, and it is not
covered by LinalgOp because it is not a structured op.
Also, remove a duplicated comment from the cpp file that is already present
in a header file, and update the pseudo-MLIR in the comment.
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D95671
This primarily occurs with isel patterns using vnot. This reduces
the number of variants in the isel tables.
We generally canonicalize build_vectors of constants to the RHS. I think
we might fail if there is a bitcast on the build_vector, but that
should be easy to fix if we can find a case. Usually the
bitcast is introduced by type legalization or lowering. It's
likely canonicalization would have already occurred.
To set a non-default rounding mode, the user usually calls the function
'fesetround' from the standard C library. This approach has some
disadvantages:
* It creates an unnecessary dependency on libc, even though setting the
rounding mode requires only a few instructions and could be done by the
compiler directly. Sometimes the standard C library is not even available,
as in the case of GPUs or AI cores that execute small kernels.
* The compiler could generate more efficient code if it knows that a
particular call only sets the rounding mode.
This change introduces a new IR intrinsic, 'llvm.set.rounding', which sets
the current rounding mode, similar to 'fesetround'. It differs from the
latter, however, because it is a lower-level facility:
* 'llvm.set.rounding' does not return any value, whereas 'fesetround'
returns a non-zero value in the case of failure. In glibc, 'fesetround'
reports failure if its argument is invalid or unsupported or if floating
point operations are unavailable on the hardware. The compiler usually knows
what core it generates code for and can validate arguments in many cases.
* The rounding mode passed to 'fesetround' is specified using constants like
'FE_TONEAREST', which are target-dependent. It is inconvenient to work
with such constants at the IR level.
The C standard provides a target-independent way to specify rounding modes
(it is used by FLT_ROUNDS), but it does not define a standard way to set
the rounding mode using this encoding.
This change implements only the IR intrinsic. Lowering it to machine code
is target-specific and will be implemented later. Mapping 'fesetround' to
'llvm.set.rounding' is also not implemented here.
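For reference, this is how the mode is set today through libc; the helper
below is just an illustration, not part of the patch.

#include <cfenv>

// 'llvm.set.rounding' is a lower-level analogue of the fesetround calls
// below. (A real use also needs FENV_ACCESS enabled so the compiler
// respects the mode change.)
double add_rounding_down(double a, double b) {
  std::fesetround(FE_DOWNWARD);    // returns non-zero on failure
  double r = a + b;                // now rounds toward negative infinity
  std::fesetround(FE_TONEAREST);   // restore the default rounding mode
  return r;
}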
Differential Revision: https://reviews.llvm.org/D74729
A couple of patterns used bitconvert on immAllOnesV, but isel matching
uses ISD::isBuildVectorAllOnes, which is able to look through bitcasts,
so the isel patterns don't need to do it explicitly.
This reverts commit 6e58539659.
This failed in http://lab.llvm.org:8011/#/builders/123/builds/2676. I guess
we're still missing some symbols, but unfortunately the specific error is
masked by a bug in python/lit that hides stderr. This test will have to remain
disabled on Windows until I can get help to debug it further.
We need to add a mask to the shift amount for these operations
to use the FSR/FSL instructions. We were previously doing this
in isel patterns, but custom lowering will make the mask
visible to optimizations earlier.
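As a generic illustration of why the mask matters (a hypothetical 32-bit
funnel-shift helper, not the actual lowering code):

#include <cstdint>

// A 32-bit funnel shift only uses the low 5 bits of the shift amount, so
// masking it up front lets later optimizations see that the amount is
// already in range for the hardware instruction.
uint32_t fshl32(uint32_t hi, uint32_t lo, uint32_t amt) {
  amt &= 31;                        // the mask made explicit by this change
  if (amt == 0)
    return hi;                      // avoid an undefined 32-bit shift of lo
  return (hi << amt) | (lo >> (32 - amt));
}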
This testcase was failing on Windows due to missing definitions. This commit
adds definitions of the missing symbols (as absolute symbols) to eliminate the
errors.