The plan was to use the `wasm.catch.exn` intrinsic to catch exceptions,
and to later add a `wasm.catch.longjmp` intrinsic, returning two values
(the setjmp buffer and the return value), to catch longjmps. But
because we decided not to use multivalue support at the moment, we are
going to use a single intrinsic that returns one value for both
exceptions and longjmps. And even apart from that, I now think the
naming of `wasm.catch.exn` is a little weird, because the intrinsic
still takes a tag immediate, which means it can be used for anything,
not only exceptions, as long as the tag's payload is a single value.
This partially reverts D107405.
Reviewed By: tlively
Differential Revision: https://reviews.llvm.org/D108683
Breaks realpath(, nullptr) for all sanitizers.
Somehow INTERCEPT_FUNCTION and INTERCEPT_FUNCTION_VER return
false even though everything is seemingly right.
This is the issue for COMMON_INTERCEPT_FUNCTION_GLIBC_VER_MIN.
There is a check in every sanitizer:

  if (!INTERCEPT_FUNCTION_VER(name, ver) && !INTERCEPT_FUNCTION(name))

For non-versioned interceptors, when INTERCEPT_FUNCTION returns false
it's not considered fatal, and it just prints a warning.
However, INTERCEPT_FUNCTION_VER in this case will fall back to
INTERCEPT_FUNCTION, replacing realpath with the wrong version.
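Spelled out for realpath, the failure looks roughly like this (a
sketch; the glibc version string and the report call are illustrative):

  // INTERCEPT_FUNCTION_VER fails, then silently falls back to
  // INTERCEPT_FUNCTION, so realpath gets bound to the default (wrong)
  // symbol version and the warning below is never printed.
  if (!INTERCEPT_FUNCTION_VER(realpath, "GLIBC_2.3") &&
      !INTERCEPT_FUNCTION(realpath))
    Report("WARNING: failed to intercept realpath\n");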
We need to investigate that before relanding the patch.
This reverts commit faef0d042f.
This patch adds a new type of reusable UI component. Searcher windows
contain a text field for entering a search keyword and present a
scrollable list of matches. The target match can be selected and
executed, which invokes a user callback to do something with the match.
This patch also adds one searcher delegate, which wraps the common
command completion searchers for simple use cases.
Reviewed By: clayborg
Differential Revision: https://reviews.llvm.org/D108545
This is another bug exposed by https://llvm.org/PR51612 (and the one
that triggered the initial assertion in the report).
That example was suppressed with 985b48f183, but these would still
crash because we created nodes like UADDO without the expected 2 output
values.
The --set-section-flags option was being ignored when adding a new
section. Take it into account if present.
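For example, the following invocation now applies the requested flags
to the newly added section (section and file names are illustrative):

  llvm-objcopy --add-section=.foo=payload.bin \
               --set-section-flags=.foo=alloc,readonly in.o out.o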
Fixes https://llvm.org/PR51244
Reviewed By: jhenderson, MaskRay
Differential Revision: https://reviews.llvm.org/D106942
Moved View.h and View.cpp from /tools/llvm-mca/Views/ to /lib/MCA/ and
/include/llvm/MCA/. This is so that targets can define their own Views within
the /lib/Target/ directory (so that the View can use backend functionality).
To enable these Views within mca, targets will need to add them to the vector of
Views returned by their target's CustomBehaviour::getViews() methods.
Differential Revision: https://reviews.llvm.org/D108520
This adds extra MVE vector fptosi.sat and fptoui.sat tests, along with
adding or adjusting the existing scalar tests to cover more
architectures and instruction combinations.
If multiple /manifestdependency: flags are passed, they are
naively deduped, but after that each of them should have an
effect, instead of just the last one.
Also, /manifestdependency: flags are allowed in .drectve sections
(from `#pragma comment(linker, ...)`). To make the interaction between
/manifestdependency: flags enabling manifest by default and
/manifest:no overriding this work, add an explicit ManifestKind::Default
state to represent no explicit /manifest flag being passed.
To make /manifestdependency: flags from input file .drectve sections
work with /manifest:embed, delay embedded manifest emission until
after input files have been read.
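For reference, a .drectve /manifestdependency: flag typically
originates from source code like the classic Common Controls
declaration:

  #pragma comment(linker, "\"/manifestdependency:type='win32' name='Microsoft.Windows.Common-Controls' version='6.0.0.0' processorArchitecture='*' publicKeyToken='6595b64144ccf1df' language='*'\"")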
Differential Revision: https://reviews.llvm.org/D108628
There are 2 bugs here:
1. We were not checking uses of operand 2 (the false value of the select).
2. We were not checking for multiple uses of nodes that produce >1 result.
Correcting those is enough to avoid the crash in the reduced test based on:
https://llvm.org/PR51612
The additional use check on operand 0 (the condition value of the select)
should not strictly be necessary because we are only replacing one use
with another (whether it makes performance sense to do the transform with
that pattern is not clear). But as noted in the TODO, changing that
uncovers another bug.
Note: there's at least one more bug here - we aren't propagating EVTs
correctly, but I plan to fix that in another patch.
In -P mode, PrintPPOutputPPCallbacks::MoveToLine started at least one
newline if the current and target line numbers mismatched. The method is
also called when entering a new file, be it the main file or an include
file. In this situation the line numbers almost always mismatch,
resulting in a newline for each occurrence even if no tokens have been
printed in-between.
Empty lines at the beginning of the output must be trimmed because the
output may be parsed by scripts expecting the result to appear on the
first output line, as done by LibreOffice's configure script.
Fix by only emitting a newline if tokens have been printed so far, using
the EmittedTokensOnThisLine flag. Also add a test case of FileChanged
callbacks occurring with empty include files.
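A minimal reproducer (file names are illustrative), with empty.h being
an empty file:

  #include "empty.h"
  int x;

With -E -P, `int x;` now appears on the first output line instead of
after a stray blank line.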
This fixes llvm.org/PR51616
Add support for the GNU C style __attribute__((error(""))) and
__attribute__((warning(""))). These attributes are meant to be put on
declarations of functions that should not be called.
They are frequently used to provide compile time diagnostics similar to
_Static_assert, but which may rely on non-ICE conditions (i.e. relying
on compiler optimizations). This is also similar to the diagnose_if
function attribute, but can diagnose after optimizations have been run.
While users may instead simply call undefined functions in such cases
to get a linkage failure from the linker, these attributes provide a
much more ergonomic and actionable diagnostic, and do so at compile
time rather than at link time. Users might alternatively use inline asm
.err directives.
These are used throughout the Linux kernel in its implementation of the
BUILD_BUG and BUILD_BUG_ON macros. These macros generally cannot be
converted to use _Static_assert because many of the parameters are not
ICEs. The Linux kernel still needs to be modified to make use of these
when building with Clang; I have a patch that does so, which I will
send once this feature has landed.
To do so, we create a new IR level Function attribute, "dontcall" (both
error and warning boil down to one IR Fn Attr). Then, similar to calls
to inline asm, we attach a !srcloc Metadata node to call sites of such
attributed callees.
The backend diagnoses these during instruction selection, while we
still know that a call is a call (vs. say a JMP that's a tail call) in
an arch-agnostic manner.
The frontend then reconstructs the SourceLocation from that Metadata,
and determines whether to emit an error or warning based on the callee's
attribute.
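A sketch of the intended BUILD_BUG_ON-style usage (function and macro
names are illustrative):

  __attribute__((error("sizeof check failed")))
  void size_check_failed(void);

  #define CHECK_SIZE(t, n) \
    do { if (sizeof(t) != (n)) size_check_failed(); } while (0)

  struct S { int a; };
  void f(void) {
    CHECK_SIZE(struct S, sizeof(int)); // folded away: no diagnostic
    // CHECK_SIZE(struct S, 8);  // would emit the call and trigger the
    //                           // "sizeof check failed" error at compile time
  }

For conditions that only become constant after optimization, the
diagnostic depends on the optimizer deleting the dead call, which is
exactly what _Static_assert cannot express.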
Link: https://bugs.llvm.org/show_bug.cgi?id=16428
Link: https://github.com/ClangBuiltLinux/linux/issues/1173
Reviewed By: aaron.ballman
Differential Revision: https://reviews.llvm.org/D106030
pointers to structs
clang was just being conservative and trying to prevent users from
messing up the qualifier on the inner pointer type. Lifting this
restriction enables using some of the libc++ templates with ObjC pointer
arguments, which clang currently rejects.
rdar://79018677
Differential Revision: https://reviews.llvm.org/D107021
In-register structure returns are not special, and are handled by
lowering to multiple-value tuples. We can tail-call from non-sret
functions to structure-returning functions, except on i686 where the
sret pointer is callee-pop.
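A minimal sketch of a newly allowed case (assuming Pair fits in the
return registers, so no sret pointer is involved):

  struct Pair { long a, b; };        // returned in registers, not via sret
  Pair make();
  Pair forward() { return make(); }  // can now be lowered as a tail call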
Differential Revision: https://reviews.llvm.org/D105807
With spilling into AGPRs enabled we cannot reliably predict whether we
need to save the FP or not. We may end up spilling everything into
AGPRs and never touch the stack, yet still save the FP. This is a
deficiency but not an error, so avoid the assert.
Differential Revision: https://reviews.llvm.org/D107404
The extractelement/extractvalue instructions are not required to be
scheduled, since they only depend on the source vector/aggregate (with
constant indices); the same applies to the parent basic block checks.
Improves compile time and saves scheduling budget.
Differential Revision: https://reviews.llvm.org/D108703
We have "-profile-isfs" internal option for text, binary, and
compactbinary format (mostly for debug and test purpose). We
need to set the related flag in FunctionSamples so that ProfileIsFS
is written to the header in extbinary format.
Differential Revision: https://reviews.llvm.org/D108707
This is a follow-up diff for BinarySizeContextTracker to track a zero
size for fully optimized-away inlinees. When an inlinee is fully
optimized away, we won't be able to get its size through symbolizing
instructions, hence we would treat the corresponding context size as
unknown. However, by traversing the inlined probe forest, we know what
the original inlinees were regardless of optimization. If a context
shows up in the inlined probes, but not during symbolization, we know
that it was fully optimized away, hence its size is zero instead of
unknown. This should provide more accurate size cost estimation for the
pre-inliner to make better inline decisions in llvm-profgen.
Differential Revision: https://reviews.llvm.org/D108350
The implementation uses the int_asan_check_memaccess intrinsic to
instrument the code. The intrinsic is replaced by a call to a function
which performs the access check. The generated function names encode
the input register name as a number using the formula
Reg - X86::NoRegister.
Reviewed By: vitalybuka
Differential Revision: https://reviews.llvm.org/D107850
This change enables compare operations to be selected to SALU/VALU
form depending on the SDNode divergence flag.
Reviewed By: rampitec
Differential Revision: https://reviews.llvm.org/D106079
This patch changes the SpecialRegisters field to a unique_ptr instead
of a raw pointer. This is better practice, and allows us to remove the
definition of the dtor for the SystemZSubtarget class.
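Roughly (a sketch, assuming the existing pointee type of the field):

  class SystemZSubtarget : public SystemZGenSubtargetInfo {
    // owned: the implicit destructor now suffices, so the explicitly
    // defined ~SystemZSubtarget() can be dropped
    std::unique_ptr<SystemZCallingConventionRegisters> SpecialRegisters;
    ...
  };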
Reviewed By: uweigand, Kai
Differential Revision: https://reviews.llvm.org/D108639
This reverts commit df1f4e0cc6.
Now the test case explicitly specifies the target triple.
I decided to use x86_64, to have a fixed bitwidth for `size_t`.
Aside from that, relanding the original changes of:
https://reviews.llvm.org/D105184
Before this patch, WriteIMulH reported a latency value which is correct for the
RR variant of MULX, but not for the RM variant.
This patch fixes the issue by introducing a new WriteIMulHLd, which is meant to
be used only by the RM variant of MULX.
Differential Revision: https://reviews.llvm.org/D108701
A single smov instruction can move from a vector register and perform
the sign-extend as part of that move, rather than performing each step
with a separate instruction.
Differential Revision: https://reviews.llvm.org/D108633
Currently only `ConstantArrayType` is considered for flexible array
members (FAMs) in `getStaticSize()`.
However, `IncompleteArrayType` also shows up in practice as FAMs.
This patch will ignore the `IncompleteArrayType` and return Unknown
for that case as well. This way it will be at least consistent with
the current behavior until we start modeling them accurately.
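For illustration, both of these declarations are used as FAMs in
practice:

  struct A { int len; int data[0]; };  // ConstantArrayType (GNU zero-length array)
  struct B { int len; int data[]; };   // IncompleteArrayType (C99 flexible array member)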
I'm expecting that this will resolve a bunch of false positives
internally, caused by `ArrayBoundV2`.
Reviewed By: ASDenysPetrov
Differential Revision: https://reviews.llvm.org/D105184
InstrRefBasedLDV is marginally slower than VarLocBasedLDV when
analysing optimised code; however, it's much slower when analysing code
compiled at -O0.
To avoid this, don't use instruction referencing for -O0 functions. In
the "pure" case of unoptimised code, this won't really harm the
debugging experience because most variables won't have been promoted
off the stack, so they can't go missing. It becomes more complicated
when optimised code is inlined into functions marked optnone; however,
these are rare, and as -O0 doesn't run many optimisations, there should
be little damage to the debug experience as a result.
I've taken the opportunity to refactor the test for whether
instruction referencing should be used into a MachineFunction method,
which seems the most appropriate place for it.
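A sketch of that method (the name and the exact conditions are
assumptions based on the description above):

  bool MachineFunction::useDebugInstrRef() const {
    // Don't use instruction referencing for unoptimised functions.
    if (F.hasFnAttribute(Attribute::OptimizeNone))
      return false;
    // Otherwise defer to the existing target/flag decision.
    return getTarget().Options.ValueTrackingVariableLocations;
  }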
Differential Revision: https://reviews.llvm.org/D108585
- Move most debug printing in rtl.cpp behind the DP() macro
- Adjust the print output for gpu arch mismatch when the architectures
  match
- Convert an assert into a graceful failure
Reviewed By: jdoerfert
Differential Revision: https://reviews.llvm.org/D108562
Makes patterns added for gfx90a usable with the gfx10 versions of the
instructions.
Reviewed By: rampitec
Differential Revision: https://reviews.llvm.org/D108654
We no longer support any compiler that lacks C++14 extended constexpr
when compiling in C++14 mode, so we can simply assume that it is
available in that mode. This allows us to remove some workarounds for
older compilers.
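The removed workarounds were of this general shape (the feature probe
shown is illustrative; _LIBCPP_CONSTEXPR_AFTER_CXX11 is the real
marker):

  // before: distrust C++14 mode and probe for extended constexpr
  #if _LIBCPP_STD_VER > 11 && defined(__cpp_constexpr) && __cpp_constexpr >= 201304L
  #  define _LIBCPP_CONSTEXPR_AFTER_CXX11 constexpr
  #else
  #  define _LIBCPP_CONSTEXPR_AFTER_CXX11
  #endif

  // after: C++14 mode implies extended constexpr
  #if _LIBCPP_STD_VER > 11
  #  define _LIBCPP_CONSTEXPR_AFTER_CXX11 constexpr
  #else
  #  define _LIBCPP_CONSTEXPR_AFTER_CXX11
  #endif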
Differential Revision: https://reviews.llvm.org/D108638
Support for peeling with multiple exit blocks was added in D63921/77bb3a486fa6.
So far it has only been enabled for loops where all non-latch exits are
'de-optimizing' exits (D63923). But peeling of multi-exit loops can be
highly beneficial in other cases too, like if all non-latch exiting
blocks are unreachable.
The motivating case is loops with runtime checks, like the C++ example
below. The main issue preventing vectorization is that the invariant
accesses to load the bounds of B are conditionally executed in the loop
and cannot be hoisted out. If we peel off the first iteration, they
become dereferenceable in the loop, because they must execute before
the loop is executed, as all non-latch exits are terminated with
unreachable. This subsequently allows hoisting the loads and runtime
checks out of the loop, allowing the loop to be vectorized.
  int sum(std::vector<int> *A, std::vector<int> *B, int N) {
    int cost = 0;
    for (int i = 0; i < N; ++i)
      cost += A->at(i) + B->at(i);
    return cost;
  }
This gives a ~20-30% increase in score for Geekbench5/HDR on AArch64.
Note that this requires a follow-up improvement to the peeling cost
model to actually peel iterations off loops like the one above. I will
share that shortly.
Also, peeling of multi-exit loops might be beneficial for exit blocks
with other terminators, but I would like to keep the scope limited to
known high-reward cases for now.
I removed the option to disable peeling for multi-deopt exits because
the code is more general now. Alternatively, the option could be
generalized, but I am not sure there's much value in doing so.
Reviewed By: reames
Differential Revision: https://reviews.llvm.org/D108108
This change fixes an issue found by Markus:
https://reviews.llvm.org/rG11338e998df1
Before this patch, the following code was transformed into a memmove:
  for (int i = 15; i >= 1; i--) {
    p[i] = p[i-1];
    sum += p[i-1];
  }
However, the load from p[i-1] is used not only by the store to p[i]
but also by the sum computation. Therefore we cannot emit a memmove in
the loop header.
Differential Revision: https://reviews.llvm.org/D107964