This patch implements the VSCode DAP logpoints feature (called tracepoints
in other debuggers, e.g. the Visual Studio debugger).
This provides a convenient way for users to do printf-style logging
debugging without pausing the debuggee.
Differential Revision: https://reviews.llvm.org/D127702
The error used to look like this:
ld64.lld: error: undefined symbol: _foo
>>> referenced by /path/to/bar.o:(symbol _baz+0x4)
If DWARF line information is available, we now show where in the source
the references are coming from:
ld64.lld: error: undefined symbol: _foo
>>> referenced by: bar.cpp:42 (/path/to/bar.cpp:42)
>>> /path/to/bar.o:(symbol _baz+0x4)
Differential Revision: https://reviews.llvm.org/D128184
Add missing intrinsics and tests for them. Macros expanding
_vel_pack_f32p to __builtin_ve_vl_pack_f32p (and others) are
already defined in clang/lib/Headers/velintrin.h.
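For reference, the mapping in the header amounts to the following
(illustrative; the real definitions live in velintrin.h):

  #define _vel_pack_f32p __builtin_ve_vl_pack_f32p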
Reviewed By: efocht
Differential Revision: https://reviews.llvm.org/D128120
The instructions that generate the source of a dual source blend export
should run in strict-wqm. That is, if any lane in a quad is active,
we need to enable all four lanes of that quad so that the shuffling
operation performed before exporting to the dual source blend target
works correctly.
Differential Revision: https://reviews.llvm.org/D127981
LDS_PARAM_LOAD and LDS_DIRECT_LOAD use EXEC per quad
(if any pixel is enabled in the quad, data is written
to all 4 pixels/threads in the quad).
Tag LDS_PARAM_LOAD and LDS_DIRECT_LOAD as using strict_wqm
to enforce this and avoid lane clobbering issues.
Note that only the instruction itself is tagged.
The implicit uses of these do not need to be set WQM.
This reduces unnecessary WQM calculation of M0.
Differential Revision: https://reviews.llvm.org/D127977
Detect LDS direct WAR/WAW hazards and compute values for
wait_vdst (va_vdst) parameter. Where appropriate this
raises wait_vdst from the default 0 to allow concurrent
issue of LDS direct with VALU execution.
Also detect LDS direct versus VMEM source VGPR hazards
and insert vm_vsrc=0 waits using s_waitcnt_depctr.
Differential Revision: https://reviews.llvm.org/D127963
If we would scalarize a fixed vector, we know we can't do so for a scalable one. However, there's no need to crash; we can instead simply return an invalid cost, which will work its way through the computation (since invalid is sticky), and the client should bail out.
Sorry for the lack of a test here. The particular code path I saw this reached on was the result of another bug.
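For illustration, the sticky-invalid behavior looks roughly like this (a
minimal C++ sketch; the helper name is hypothetical, InstructionCost is
the existing LLVM type):

  #include "llvm/Support/InstructionCost.h"
  using namespace llvm;

  // Hypothetical helper: for scalable vectors we cannot enumerate lanes to
  // scalarize, so return an invalid cost rather than crashing.
  static InstructionCost getScalarizationCost(bool IsScalableVector) {
    if (IsScalableVector)
      return InstructionCost::getInvalid();
    return 4; // placeholder cost for the fixed-vector case
  }

  // Invalid is sticky: arithmetic involving an invalid cost stays invalid,
  // so the client needs only a single isValid() check at the end.
  //   InstructionCost Total = getScalarizationCost(true) + 10;
  //   assert(!Total.isValid());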
Make Offsets and OpcodeOperandTypes tables human-readable by printing the
instruction name before the operand list.
In effect, this makes debugging generated `getOperandType` possible.
Reviewed By: craig.topper
Differential Revision: https://reviews.llvm.org/D127931
This was a bug introduced in d764aa. A pointer type is not a primitive type, and thus we ended up dividing by zero when computing VLMax.
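The failure mode is roughly the following (an illustrative fragment with
assumed variable names, not the exact code):

  // Type::getScalarSizeInBits() returns 0 for non-primitive types such as
  // pointers, so using it as a divisor unconditionally divides by zero.
  unsigned ElemBits = Ty->getScalarSizeInBits(); // 0 when Ty is a pointer
  unsigned VLMax = VectorBitsMax / ElemBits;     // crash when ElemBits == 0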
Differential Revision: https://reviews.llvm.org/D128219
There are instances where using paired vector stores leads to significant
performance degradation due to issues with store forwarding. To avoid falling
into this trap with compiler-generated code, we will not emit these
instructions unless the user requests them explicitly (with a builtin or by
specifying the option).
Reviewed By: lei, amyk, saghir
Differential Revision: https://reviews.llvm.org/D127218
Add trace load functionality to SBDebugger via the `LoadTraceFromFile` method.
Update the intelpt test case class to have a `testTraceLoad` method so we can take advantage of
the testApiAndSB decorator to test both the CLI and the SB API without duplicating code.
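From the SB API, usage looks roughly like this (a C++ sketch; the trace
file path is made up and the exact signature should be checked against
SBDebugger.h):

  #include "lldb/API/LLDB.h"
  #include <cstdio>
  using namespace lldb;

  // Given an existing SBDebugger 'debugger', load a previously recorded
  // trace from its description file.
  SBError error;
  SBTrace trace = debugger.LoadTraceFromFile(
      error, SBFileSpec("/path/to/trace-description.json"));
  if (error.Fail())
    fprintf(stderr, "%s\n", error.GetCString());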
Differential Revision: https://reviews.llvm.org/D128107
An AArch64ISD::DUP is just a splat, where the known bits for each lane
are the same as those of the input. This teaches that to
computeKnownBitsForTargetNode.
Problems arise for constants, though, as a constant BUILD_VECTOR can be
lowered to an AArch64ISD::DUP, which SimplifyDemandedBits would then
turn back into a constant BUILD_VECTOR, leading to an infinite cycle.
This is prevented by adding an isTargetCanonicalConstantNode callback
that blocks the conversion back into a BUILD_VECTOR.
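The handling is essentially the following fragment inside the switch in
computeKnownBitsForTargetNode (a sketch of the idea, not the verbatim
patch):

  case AArch64ISD::DUP: {
    // A DUP splats a scalar into every lane, so each lane's known bits are
    // those of the scalar source, truncated if the source is wider than
    // the vector element type.
    SDValue SrcOp = Op.getOperand(0);
    Known = DAG.computeKnownBits(SrcOp, Depth + 1);
    if (SrcOp.getScalarValueSizeInBits() > Op.getScalarValueSizeInBits())
      Known = Known.trunc(Op.getScalarValueSizeInBits());
    break;
  }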
Differential Revision: https://reviews.llvm.org/D128144
Fix test_platform_file_fstat to correctly truncate/max out the expected
value when the GDB Remote Serial Protocol specifies a value as an unsigned
integer but the underlying platform type uses a signed integer.
Sponsored by: The FreeBSD Foundation
Differential Revision: https://reviews.llvm.org/D128042
Make the AVX/MPX register tests more robust by checking for the presence
of actual registers rather than register sets. Account for the possibility
that the respective registers are defined but not available, as is
the case on FreeBSD and NetBSD. This fixes test regressions on these
platforms.
Sponsored by: The FreeBSD Foundation
Differential Revision: https://reviews.llvm.org/D128041
Refactor GDBRemoteCommunicationServerLLGS::SendStopReasonForState()
to accept process as an argument rather than hardcoding
m_current_process, in order to make it work correctly for multiprocess
scenarios.
Sponsored by: The FreeBSD Foundation
Differential Revision: https://reviews.llvm.org/D127497
Refactor SendStopReplyPacketForThread() to accept a process instance
as a parameter rather than using m_current_process. This future-proofs
it for multiprocess support.
Sponsored by: The FreeBSD Foundation
Differential Revision: https://reviews.llvm.org/D127289
This change removes an explicit scalable vector bailout for fshl and fshr. This bailout was added in 60e4698b9a, when sinking an unconditional bailout for all intrinsics into selected cases. It's not clear whether the bailout was originally unneeded, or whether our cost model infrastructure has simply matured in the meantime. Either way, the generic code appears to handle scalable vectors without issue.
Note that the RISC-V cost model changes here aren't particularly interesting. They do probably better match the current lowering, but the main point is to have coverage of the BasicTTI path and simply show lack of crashing.
AArch64 costing was changed to preserve legacy behavior. There will most likely be an upcoming change to use the generic costs there too, but I didn't want to make that change myself, not being particularly familiar with the target.
Differential Revision: https://reviews.llvm.org/D127680
The code being removed is technically correct; if we end up with two VL=0 instructions next to each other, we can avoid a state transition if the second is a scalar move. However, since both ops are also nops, we should simply delete them instead. As such, this compatibility rule simply complicates the code for no purpose.
Depending on the environment, a floating point instruction should
treat denormal inputs as zero, and/or flush a denormal output to zero.
Denormals are not currently accounted for when an instruction gets
folded to a constant, which can lead to differences in output between
a folded and an unfolded instruction when running on the target. The
denormal handling mode can be set by the function-level attribute
denormal-fp-math, which this patch uses to determine whether any
denormal inputs to or outputs from folding should be treated as zero,
with the sign set appropriately.
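The flushing itself amounts to something like this (a minimal C++ sketch;
flushDenormal is a hypothetical name, while DenormalMode and the APFloat
calls are existing LLVM APIs):

  #include "llvm/ADT/APFloat.h"
  #include "llvm/ADT/FloatingPointMode.h"
  using namespace llvm;

  // Flush a denormal value to zero according to one component (input or
  // output) of the function's denormal-fp-math mode.
  static APFloat flushDenormal(APFloat V, DenormalMode::DenormalModeKind Kind) {
    if (!V.isDenormal())
      return V;
    switch (Kind) {
    case DenormalMode::PreserveSign: // +/-denormal -> +/-0.0
      return APFloat::getZero(V.getSemantics(), V.isNegative());
    case DenormalMode::PositiveZero: // +/-denormal -> +0.0
      return APFloat::getZero(V.getSemantics(), /*Negative=*/false);
    default: // IEEE: keep denormals as-is
      return V;
    }
  }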
Reviewed By: spatel
Differential Revision: https://reviews.llvm.org/D116952
The example wouldn't compile, and used an invalid case style for a
function.
Reviewed By: MatzeB
Differential Revision: https://reviews.llvm.org/D128176
In order to support newer hardware, define wrappers around MFMA
intrinsics that have not previously been exposed in the ROCDL dialect.
An `amdgpu.mfma` wrapper around these instructions is in development
and will provide a more user-friendly interface to them.
Reviewed By: ThomasRaoux
Differential Revision: https://reviews.llvm.org/D128079
When working through correctness issues in this pass, I moved a number of transforms which were phrased as mutating prior vsetvli instructions out of the main data flow because mutating prior instructions can invalidate the running dataflow results in subtle ways. We ended up creating both a prepass and a post-pass.
After consideration, I believe the prepass to be redundant, and this change removes it by folding it back into the data flow via a key conceptual change. Instead of phrasing the mutations on instructions, we can phrase them on abstract states. This avoids the dataflow inconsistency problem mentioned above by simply propagating the potential change forward, and thus reflecting its results in the dataflow. Critically, we do so without modifying existing VSETVLI instructions; some of the data flow steps include non-local IR analysis.
Compile time wise, this removes a linear pass, but it has the potential to increase the number of iterations needed for the data flow to converge. That's not an algorithmic complexity change; the needVSETVLI mechanism has the same effect. In practice, I don't see this triggering more iterations, so I think it's likely to be a net win overall. (I didn't do any careful analysis here; just an impression from glancing at a couple of tests.)
This has the potential to produce better results, so this isn't strictly speaking NFC.
Differential Revision: https://reviews.llvm.org/D127870
This is to fix the following error on https://green.lab.llvm.org/green/job/clang-stage2-Rthinlto:
BranchProbability.h:236:34: error: declaration of 'distance' must be imported from module 'std.iterator.__iterator.distance' before it is required
In D127983, I had flipped from using the computed EEW to using the SEW value pulled from the VSETVLI when checking compatibility. This wasn't intentional, though thankfully it appears to be a non-functional difference. The new code does make an unchecked assumption that the initial SEW operand on the load/store is the EEW. This patch clarifies the assumption, and adds an assert to make sure this remains true.
Differential Revision: https://reviews.llvm.org/D128085
These tests demonstrate cases where the constant produced by folding
a floating point instruction should differ based on the denormal
handling mode set in function attributes.
Reviewed By: spatel
Differential Revision: https://reviews.llvm.org/D125807