This caused severe compile-time regressions, see PR43455.
> Modern processors predict the targets of an indirect branch regardless of
> the size of any jump table used to glean its target address. Moreover,
> branch predictors typically use resources limited by the number of actual
> targets that occur at run time.
>
> This patch changes the semantics of the option `-max-jump-table-size` to limit
> the number of different targets instead of the number of entries in a jump
> table. Thus, it is now renamed to `-max-jump-table-targets`.
>
> Before, when `-max-jump-table-size` was specified, a cluster's jump tables
> could use the same target repeatedly; each occurrence was counted toward the
> limit, which typically resulted in tables with the same number of entries.
> With this patch, when specifying `-max-jump-table-targets`, tables may have
> different lengths, since only unique targets are counted towards the limit;
> each table holds the same number of unique targets, except for the last one,
> which contains the balance of targets.
>
> Differential revision: https://reviews.llvm.org/D60295
llvm-svn: 373060
We can't use short granules with stack instrumentation when targeting older
API levels because the rest of the system won't understand the short granule
tags stored in shadow memory.
Moreover, we need to be able to let old binaries (which won't understand
short granule tags) run on a new system that supports short granule
tags. Such binaries will call the __hwasan_tag_mismatch function when their
outlined checks fail. We can compensate for the binary's lack of support
for short granules by implementing the short granule part of the check in
the __hwasan_tag_mismatch function. Unfortunately we can't do anything about
inline checks, but I don't believe that we can generate these by default on
aarch64, nor did we do so when the ABI was fixed.
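For illustration, here is a minimal sketch of what the short-granule compensation in __hwasan_tag_mismatch amounts to, assuming the documented HWASan shadow encoding (16-byte granules; a shadow byte below 16 marks a short granule whose real tag sits in the granule's last byte) and simplified to a single-byte access; the helper name is hypothetical:
```
#include <cstdint>

// Hypothetical helper: returns true if a reported tag mismatch is genuine
// once short granules are taken into account (single-byte access assumed).
static bool tagMismatchIsReal(uintptr_t Addr, uint8_t PtrTag, uint8_t MemTag) {
  if (MemTag >= 16)
    return true; // ordinary granule: the tags really differ
  if ((Addr & 15) >= MemTag)
    return true; // access beyond the short granule's valid bytes
  // For a short granule, the real tag is stored in the granule's last byte.
  uint8_t *GranuleEnd = reinterpret_cast<uint8_t *>(Addr | 15);
  return *GranuleEnd != PtrTag;
}
```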
A new function, __hwasan_tag_mismatch_v2, is introduced that lets code
targeting the new runtime avoid redoing the short granule check. Because tag
mismatches are rare this isn't important from a performance perspective; the
main benefit is that it introduces a symbol dependency that prevents binaries
targeting the new runtime from running on older (i.e. incompatible) runtimes.
Differential Revision: https://reviews.llvm.org/D68059
llvm-svn: 373035
We have isel patterns that can put an IMPLICIT_DEF on one of
the sources for these instructions. So we should make sure
we break any dependencies there. This should be done by
just using one of the other sources.
llvm-svn: 373025
Similarly for f64 and for the case of a non-zero passthru value.
We were previously not trying to fold the load at all. Using
a CodeGenOnly instruction allows us to use FR32X/FR64X as the
register class to avoid a bunch of COPY_TO_REGCLASS.
llvm-svn: 373021
This patch emits the function descriptor csect for functions with definitions
under both 32-bit and 64-bit modes on AIX.
Differential Revision: https://reviews.llvm.org/D66724
llvm-svn: 373009
The static analyzer is warning about potential null dereferences, but we should be able to use cast<> directly; if not, the assert will fire for us.
llvm-svn: 372992
Summary:
This was found during review of https://reviews.llvm.org/D66050.
In a simple fdiv test, we fail to fold
```
fneg 2, 2
xsmaddasp 3, 2, 0
```
to
```
xsnmsubasp 3, 2, 0
```
We already have the patterns for double precision and for vectors; only
single precision is missing, and this patch adds it.
Reviewers: #powerpc, hfinkel, nemanjai, steven.zhang
Reviewed By: #powerpc, steven.zhang
Subscribers: wuzish, hiraditya, kbarton, MaskRay, shchenz, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D67595
llvm-svn: 372985
Implement splitting of aggregate structures into simpler types in splitToValueTypes.
splitToValueTypes is used for return values.
According to MipsABIInfo from clang/lib/CodeGen/TargetInfo.cpp,
aggregate structure arguments for O32 always get simplified and thus
will remain unsupported by the MIPS GlobalISel for the time being.
For O32, aggregate structures can be encountered only for complex number
returns, e.g. 'complex float' or 'complex double' from <complex.h>.
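As an illustration (a sketch, compiled as C, or as C++ with Clang's _Complex extension), this is the kind of return value splitToValueTypes now splits:
```
#include <complex.h>

// Under O32, a complex return such as this is split into two f32 values
// by splitToValueTypes (illustrative only).
float _Complex cadd(float _Complex a, float _Complex b) {
  return a + b;
}
```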
Differential Revision: https://reviews.llvm.org/D67963
llvm-svn: 372957
The static analyzer is warning about a potential null dereference, but we should be able to use cast<MCConstantExpr> directly; if not, the assert will fire for us.
llvm-svn: 372956
SLM is 2x slower for <2 x i64> comparison ops than for other vector types, so
we should account for this as we do for the SLM <2 x i64> add/sub/mul costs.
This should remove some of the SLM codegen diffs in D43582.
llvm-svn: 372954
With -pg -mfentry -mnop-mcount, a nop is emitted instead of the call to
fentry.
Review: Ulrich Weigand
https://reviews.llvm.org/D67765
llvm-svn: 372950
This matches what's done for VRNDSCALE and most other instructions.
This mainly determines which instruction will be preferred by
disassembler and assembly parser. The printing and encoding
information is the same.
We prefer the _Int form since it uses the VR128 class due to the intrinsic
interface. For some EVEX features, like embedded rounding, we only select
from intrinsics today, so there is only a VR128 version. Making the VR128
version the preferred one is therefore consistent overall.
llvm-svn: 372947
Rename the old function to make explicit that it cares only about alignment.
The new allowsMemoryAccess calls the alignment-related function by default,
and a target can override it to indicate whether the memory access is legal
or not.
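A minimal sketch of the intended override pattern; the parameter list is assumed from the TargetLowering interface of this era, and the target class and address-space restriction are hypothetical:
```
#include "llvm/CodeGen/TargetLowering.h"
using namespace llvm;

// Hypothetical target hook: reject an access the target cannot perform,
// and defer everything else to the default alignment-based check.
bool MyTargetLowering::allowsMemoryAccess(LLVMContext &Ctx,
                                          const DataLayout &DL, EVT VT,
                                          unsigned AddrSpace, unsigned Align,
                                          MachineMemOperand::Flags Flags,
                                          bool *Fast) const {
  if (AddrSpace == 3 && VT.isVector())
    return false; // hypothetical restriction on this address space
  return TargetLowering::allowsMemoryAccess(Ctx, DL, VT, AddrSpace, Align,
                                            Flags, Fast);
}
```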
Differential Revision: https://reviews.llvm.org/D67121
llvm-svn: 372935
This pass is only concerned with ZMM0-15 and YMM0-15. For YMM
we use VR256 which only contains YMM0-15, but for ZMM we were
using VR512 which contains ZMM0-31. Using VR512_0_15 is more
correct.
Given that the ABI and the register allocator will use registers in order,
it's unlikely that registers 16-31 would be used without also using 0-15,
so this probably doesn't matter functionally.
llvm-svn: 372933
Summary:
Useful in case you want control over interrupt vector generation. For
example, in the Rust language we have an arrangement where all unhandled
ISR vectors get mapped to a single default handler function, which is hard
to implement when LLVM tries to generate the vectors on its own.
Reviewers: asl, krisb
Subscribers: hiraditya, JDevlieghere, awygle, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D67313
llvm-svn: 372910
When checking for tail call eligibility, we should use the correct CCAssignFn
for each argument, rather than just checking if the caller/callee is varargs or
not.
This is important for tail call lowering with varargs. If we don't check it,
then basically any varargs callee with parameters cannot be tail called on
Darwin, for one thing. If the parameters are all guaranteed to be in registers,
this should be entirely safe.
On top of that, not checking for this could potentially make it so that we have
the wrong stack offsets when checking for tail call eligibility.
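For instance (a hypothetical C-level illustration; sum_va is made up), a call like the following has all of its arguments in registers and so becomes eligible for a tail call on Darwin:
```
extern "C" int sum_va(int N, ...); // hypothetical varargs callee

extern "C" int forward(int N) {
  // All fixed and variadic arguments fit in registers, so with the correct
  // per-argument CCAssignFn this can now be lowered as a tail call.
  return sum_va(N, 1, 2);
}
```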
Also refactor some of the CCAssignFnForCall logic and pull it out into a
helper function.
Update call-translator-tail-call.ll to show that we can now correctly tail call
on Darwin. Also add two extra tail call checks. The first verifies that we still
respect the caller's stack size, and the second verifies that we still don't
tail call when a varargs function has a memory argument.
Differential Revision: https://reviews.llvm.org/D67939
llvm-svn: 372897
Modern processors predict the targets of an indirect branch regardless of
the size of any jump table used to glean its target address. Moreover,
branch predictors typically use resources limited by the number of actual
targets that occur at run time.
This patch changes the semantics of the option `-max-jump-table-size` to limit
the number of different targets instead of the number of entries in a jump
table. Thus, it is now renamed to `-max-jump-table-targets`.
Before, when `-max-jump-table-size` was specified, a cluster's jump tables
could use the same target repeatedly; each occurrence was counted toward the
limit, which typically resulted in tables with the same number of entries.
With this patch, when specifying `-max-jump-table-targets`, tables may have
different lengths, since only unique targets are counted towards the limit;
each table holds the same number of unique targets, except for the last one,
which contains the balance of targets.
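For example (an illustrative sketch): the switch below has eight case labels but only three distinct targets, so with `-max-jump-table-targets=3` it can still use a single jump table, whereas the old entry-based limit would have counted eight entries.
```
// Eight jump-table entries, but only three unique targets (plus default).
int classify(int v) {
  switch (v) {
  case 0: case 2: case 4: case 6: return 0; // one target, four entries
  case 1: case 3: case 5:         return 1;
  case 7:                         return 2;
  default:                        return -1;
  }
}
```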
Differential revision: https://reviews.llvm.org/D60295
llvm-svn: 372893
Neither the base implementation of findCommutedOpIndices nor any in-tree target modifies the instruction passed in and there is no reason why they would in the future.
Committed on behalf of @hvdijk (Harald van Dijk)
Differential Revision: https://reviews.llvm.org/D66138
llvm-svn: 372882
Summary:
This patch fixes a bug that originated from passing a virtual exit block (nullptr) to `MachinePostDominatorTree::findNearestCommonDominator` and resulted in assertion failures inside its callee. It also applies a small cleanup to the class.
The patch introduces a new function in PDT that, given a list of `MachineBasicBlock`s, finds their NCD. The new overload of `findNearestCommonDominator` handles the virtual root correctly.
Note that similar handling of virtual root nodes is not necessary in (forward) `DominatorTree`s, as right now they don't use virtual roots.
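A minimal usage sketch of the new overload (the wrapper name is illustrative):
```
#include "llvm/ADT/ArrayRef.h"
#include "llvm/CodeGen/MachinePostDominators.h"
using namespace llvm;

// Finds the nearest common post-dominator of a list of blocks; the overload
// itself takes care of the virtual root, so callers need no special casing.
MachineBasicBlock *commonPostDom(MachinePostDominatorTree &PDT,
                                 ArrayRef<MachineBasicBlock *> Blocks) {
  return PDT.findNearestCommonDominator(Blocks);
}
```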
Reviewers: tstellar, tpr, nhaehnle, arsenm, NutshellySima, grosser, hliao
Reviewed By: hliao
Subscribers: hliao, kzhuravl, jvesely, wdng, yaxunl, dstuttard, t-tye, hiraditya, llvm-commits
Tags: #amdgpu, #llvm
Differential Revision: https://reviews.llvm.org/D67974
llvm-svn: 372874
Merge more Select pseudo instructions in emitSelect() by allowing other
instructions between them as long as they do not clobber CC.
Debug value instructions are now moved down to below the new PHIs instead of
being erased.
Review: Ulrich Weigand
https://reviews.llvm.org/D67619
llvm-svn: 372873
During legalisation we can end up with some pretty strange nodes, like shifts
of 0. We need to make sure we don't try to make long shifts of these, ending up
with invalid assembly instructions. A long shift with a zero immediate actually
encodes a shift by 32.
Differential Revision: https://reviews.llvm.org/D67664
llvm-svn: 372839
I think we should be able to use shl instead of sshl and ushl for
positive constant shift values, unless I am missing something.
We already have the machinery in place to ensure we only replace
nodes if the shift value is positive and <= the element width.
This is a generalization of an earlier patch rL372565.
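As an illustration (a sketch using the public NEON intrinsics), a sshl whose shift amounts are a positive splat constant can now be selected as a plain shl:
```
#include <arm_neon.h>

// vshlq_s32 lowers to sshl; with an all-positive constant shift vector the
// combine can now use shl (e.g. "shl v0.4s, v0.4s, #2") instead.
int32x4_t shiftLeftByTwo(int32x4_t V) {
  return vshlq_s32(V, vdupq_n_s32(2));
}
```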
Reviewers: t.p.northover, samparker, dmgreen, anemet
Reviewed By: anemet
Differential Revision: https://reviews.llvm.org/D67955
llvm-svn: 372824
Summary:
Instead of having different v128.load and v128.store instructions for
each MVT, just have one of each that is reused in all the
patterns. Also removes the HasSIMD128 predicate where accompanied by
HasUnimplementedSIMD128, since the latter implies the former.
Reviewers: aheejin, dschuff
Subscribers: sbc100, jgravelle-google, hiraditya, sunfish, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D67930
llvm-svn: 372792
Currently, if an array element type size is 0, the number of array elements
is set to 0, regardless of what the user specified. This behavior dates back
to the beginning, when BTF was mostly used to calculate member offsets.
For example,
  struct s {};
  struct s1 {
    int b;
    struct s a[2];
  };
  struct s1 s1;
The BTF will have struct "s1" member "a" with element count 0.
Now BTF types are used for compile-once-run-everywhere relocations, and we
need a more precise type representation for type comparison. Andrii reported
this issue, as there are differences between the original structure and the
BTF-generated structure.
This patch makes the change to correctly assign "2" as the number of
elements of member "a".
Some dead code related to ElemSize computation is also removed.
Differential Revision: https://reviews.llvm.org/D67979
llvm-svn: 372785
Summary:
The Regex "match" and "sub" member functions were previously not "const"
because they wrote to the "error" member variable. This commit removes
those assignments, and instead assumes that the validity of the regex
is already known after the initial compilation of the regular
expression. As a result, it was possible to make these member functions
"const". This makes it easier to do things like pre-compile Regexes
up-front, and makes "match" and "sub" thread-safe. The error status is
now returned as an optional output, which also makes the API of "match"
and "sub" more consistent with each other.
Also, some uses of Regex that could be refactored to be const were made const.
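A small usage sketch of the constified API (names illustrative):
```
#include "llvm/ADT/StringRef.h"
#include "llvm/Support/Regex.h"

// The Regex can now be compiled once up-front and shared: match() is const
// and no longer writes an error back into the object.
static const llvm::Regex HexRE("0x[0-9a-fA-F]+");

bool hasHexLiteral(llvm::StringRef Line) {
  return HexRE.match(Line);
}
```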
Patch by Nicolas Guillemot
Reviewers: jankratochvil, thopre
Subscribers: llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D67241
llvm-svn: 372764
Similar to rL372717, we can force the splitting of extends of vector loads in
MVE, in order to use the better widening loads as opposed to going through
expensive extends. This adds a combine that detects extends of loads early
on and splits the load in two, from where normal legalisation will kick in
and we get a series of widening loads.
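For example (an illustrative C-level sketch), a loop like this produces a sext of a vector load after vectorization, which can now be split into a pair of widening loads (e.g. vldrh.s32):
```
#include <stdint.h>

// After vectorization this becomes sext(load <8 x i16>), which the combine
// splits so each half can use an MVE widening load.
void extendLoads(const int16_t *Src, int32_t *Dst) {
  for (int i = 0; i < 8; ++i)
    Dst[i] = Src[i];
}
```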
Differential Revision: https://reviews.llvm.org/D67909
llvm-svn: 372721
MVE does not have a simple sign extend instruction that can move elements
across lanes. We currently often end up moving each lane into and out of a GPR,
in order to get elements into the correct places. When we have a store of a
trunc (or an extend of a load), we can instead just split the store/load in two,
using the narrowing/widening load/store instructions from each half of the
vector.
This does that for stores. It happens very early in a store combine, so as to
easily detect the truncates. (It would be possible to do this later, but that
would involve looking through a buildvector of extract elements. Not impossible,
but this way seemed simpler.)
By enabling store combines we also get a vmovdrr combine for free, helping some
other tests.
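For example (an illustrative C-level sketch), a loop like this produces a truncating vector store after vectorization, which the combine now splits:
```
#include <stdint.h>

// After vectorization this becomes a truncating store of <8 x i32> as i16,
// which the combine splits so each half can use an MVE narrowing store
// (e.g. vstrh.32).
void truncStores(const int32_t *Src, int16_t *Dst) {
  for (int i = 0; i < 8; ++i)
    Dst[i] = (int16_t)Src[i];
}
```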
Differential Revision: https://reviews.llvm.org/D67828
llvm-svn: 372717
Summary:
The functions differed in two ways:
- getLLVMRegNum could return both "eh" and "other" dwarf register
numbers, while getLLVMRegNumFromEH only returned the "eh" number.
- getLLVMRegNum asserted if the register was not found, while the second
function returned -1.
The second distinction was pretty important, but it was very hard to
infer that from the function name. Additionally, for the use case of
dumping dwarf expressions, we needed a function which can work with both
kinds of number, but does not assert.
This patch solves both of these issues by merging the two functions into
one, returning an Optional<unsigned> value. While the same thing could
be achieved by adding an "IsEH" argument to the (renamed)
getLLVMRegNumFromEH function, it seemed better to avoid the confusion of
two functions and put the choice of asserting into the hands of the
caller -- if he checks the Optional value, he can safely process
"untrusted" input, and if he blindly dereferences the Optional, he gets
the assertion.
I've updated all call sites to the new API, choosing between the two
options according to the function they were calling originally, except
that I've updated the usage in DWARFExpression.cpp to use the "safe"
method instead, and added a test case which would have previously
triggered an assertion failure when processing (incorrect?) dwarf
expressions.
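A minimal sketch of a call site using the merged API (the helper name is illustrative):
```
#include "llvm/ADT/Optional.h"
#include "llvm/MC/MCRegisterInfo.h"
using namespace llvm;

// The Optional forces the caller to choose: handle the failure explicitly
// (safe for untrusted input), or dereference blindly and get the assertion.
unsigned translateOrZero(const MCRegisterInfo &MRI, unsigned DwarfReg) {
  if (Optional<unsigned> R = MRI.getLLVMRegNum(DwarfReg, /*isEH=*/false))
    return *R;
  return 0; // invalid dwarf register number in the input
}
```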
Reviewers: dsanders, arsenm, JDevlieghere
Subscribers: wdng, aprantl, javed.absar, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D67154
llvm-svn: 372710
Summary:
Adds the new load_splat instructions as specified at
https://github.com/WebAssembly/simd/blob/master/proposals/simd/SIMD.md#load-and-splat.
DAGISel does not allow matching multiple copies of the same load in a
single pattern, so we use a new node in WebAssemblyISD to wrap loads
that should be splatted.
Depends on D67783.
Reviewers: aheejin
Subscribers: dschuff, sbc100, jgravelle-google, hiraditya, sunfish, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D67784
llvm-svn: 372655