This is due to PR26161 needing to be resolved before we can fix
big endian bugs like PR35359. The work to split aggregates into smaller LLTs
instead of using one large scalar will take some time, so in the meantime
we'll fall back to SDAG.
Some ARM BE tests have been XFAILed for now as a result.
Differential Revision: https://reviews.llvm.org/D40789
llvm-svn: 320388
At first, I tried to thread the x86 needle and use a target hook (isVectorShiftByScalarCheap())
to disable the transform only for non-splat pow-of-2 constants, but not on AVX2, and only for some
element types, but... it's difficult.
Here we just avoid the loop between the conflicting x86 vector transform and the general DAG
combine, and otherwise preserve all of the existing behavior AFAICT.
Some tests that will probably fail if someone does try to restrict this in a more targeted,
x86-only way may be found in:
test/CodeGen/X86/combine-mul.ll
test/CodeGen/X86/vector-mul.ll
test/CodeGen/X86/widen_arith-5.ll
This should prevent the infinite looping seen with:
https://bugs.llvm.org/show_bug.cgi?id=35579
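For intuition only, here is a tiny standalone C++ model of the hazard (not LLVM code; the enum
and function names are invented): a generic rule canonicalizes a multiply by a power of two into
a shift, a target-side rule prefers the multiply form, and with both enabled unconditionally the
node never reaches a fixed point.

  #include <cstdio>

  enum class Form { MulByPow2, ShlByConst };

  // Generic combine: mul X, (1 << C) -> shl X, C.
  static Form genericCombine(Form F) {
    return F == Form::MulByPow2 ? Form::ShlByConst : F;
  }

  // Hypothetical target preference: shl X, C -> mul X, (1 << C).
  static Form targetCombine(Form F) {
    return F == Form::ShlByConst ? Form::MulByPow2 : F;
  }

  int main() {
    Form F = Form::MulByPow2;
    // A real combiner would keep re-adding the node to its worklist; the
    // iteration cap here only serves to show that no fixed point is reached.
    for (int I = 0; I < 4; ++I) {
      F = targetCombine(genericCombine(F));
      std::printf("iteration %d: back to the multiply form\n", I);
    }
    return 0;
  }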
Differential Revision: https://reviews.llvm.org/D41040
llvm-svn: 320374
This patch introduces getShadowOriginPtr(), a method that obtains both the shadow and origin pointers for an address as a Value pair.
The existing callers of getShadowPtr() and getOriginPtr() are updated to use getShadowOriginPtr().
The rationale for this change is to simplify KMSAN instrumentation implementation.
In KMSAN origins tracking is always enabled, and there's no direct mapping between the app memory and the shadow/origin pages.
Both the shadow and the origin pointer for a given address are obtained by calling a single runtime hook from the instrumentation,
so it's easier to work with those pointers together.
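As a rough sketch of the API shape (standalone C++ with simplified stand-in types; the actual
MemorySanitizer method takes more parameters and emits IR), one query now hands back both
pointers as a pair:

  #include <utility>

  struct Value {};  // stand-in for llvm::Value

  // Hypothetical helper mirroring the new interface: one call yields the shadow
  // and origin pointers together instead of two separate lookups.
  static std::pair<Value *, Value *> getShadowOriginPtr(Value *Addr) {
    // The real instrumentation would compute (or, under KMSAN, call a single
    // runtime hook to obtain) both pointers for Addr.
    (void)Addr;
    return {nullptr, nullptr};
  }

  static void instrumentAccess(Value *Addr) {
    std::pair<Value *, Value *> ShadowOrigin = getShadowOriginPtr(Addr);
    Value *Shadow = ShadowOrigin.first;   // previously getShadowPtr(Addr)
    Value *Origin = ShadowOrigin.second;  // previously getOriginPtr(Addr)
    (void)Shadow;
    (void)Origin;
  }

  int main() {
    Value Addr;
    instrumentAccess(&Addr);
    return 0;
  }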
Reviewed at https://reviews.llvm.org/D40835.
llvm-svn: 320373
A wrong type was passed to insertVector, causing an out-of-bounds value
to be added as an operand to HexagonISD::INSERT. This later failed in
instruction selection.
llvm-svn: 320369
Second part of https://reviews.llvm.org/D40348.
Revision r318436 has extended all constants feeding a store to 64 bits
to allow for CSE on the SDAG. However, negative constants were zero-extended,
which made the constant being loaded appear to be a positive value that does not
fit in 16 bits. This resulted in long sequences to materialize such constants
rather than simply a "load immediate". This patch just sign-extends those
updated constants so that they remain 16-bit signed immediates if they started
out that way.
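A quick arithmetic illustration (plain C++, not backend code): a negative 16-bit constant widened
with a zero-extension no longer fits a signed 16-bit immediate, while the sign-extended form
still does.

  #include <cstdint>
  #include <cstdio>

  int main() {
    int16_t C = -7;                            // the original 16-bit store constant
    uint64_t ZExt = (uint16_t)C;               // zero-extended: 0xFFF9 = 65529
    int64_t SExt = C;                          // sign-extended: still -7
    bool ZExtFits = ZExt <= 32767;             // false: needs a long sequence
    bool SExtFits = SExt >= -32768 && SExt <= 32767;  // true: one "load immediate"
    std::printf("zext=%llu fits=%d, sext=%lld fits=%d\n",
                (unsigned long long)ZExt, ZExtFits, (long long)SExt, SExtFits);
    return 0;
  }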
llvm-svn: 320368
This commit is the first part of https://reviews.llvm.org/D40348.
In order to allow target combines to be performed on newly combined
indexed loads, add them back to the worklist. The remainder of the
above patch will be committed in subsequent revisions and will use
this. Test cases will be included with those follow-up commits.
llvm-svn: 320365
The TableGen-based calling convention definitions are inflexible, while
writing a function to implement the calling convention is very
straightforward, and allows difficult cases to be handled more easily. This
patch adds support for:
* Passing large scalars according to the RV32I calling convention
* Byval arguments
* Passing values on the stack when the argument registers are exhausted
The custom CC_RISCV calling convention is also used for returns.
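To give a flavour of why a hand-written convention is more flexible, here is a minimal standalone
sketch (the register count and slot size are assumptions about RV32I, and the real CC_RISCV
additionally handles split arguments, byval and returns): each argument is assigned with ordinary
imperative code, registers first and then stack slots.

  #include <cstdint>
  #include <vector>

  // Simplified argument location: an argument-register index or a stack offset.
  struct ArgLoc {
    int Reg = -1;             // 0..7 would correspond to a0..a7 (assumption)
    int64_t StackOffset = -1;
  };

  static constexpr int NumArgGPRs = 8;    // a0-a7 (assumed RV32I parameters)
  static constexpr int64_t SlotSize = 4;  // one XLEN-sized stack slot

  // Assign one 32-bit argument: use a register while any remain, otherwise the
  // next stack slot. Difficult cases (scalars split across registers, values
  // straddling registers and stack, byval) become plain extra branches here,
  // which is awkward to express in TableGen.
  static ArgLoc assignArg(int &NextReg, int64_t &NextStackOffset) {
    ArgLoc Loc;
    if (NextReg < NumArgGPRs) {
      Loc.Reg = NextReg++;
    } else {
      Loc.StackOffset = NextStackOffset;
      NextStackOffset += SlotSize;
    }
    return Loc;
  }

  int main() {
    int NextReg = 0;
    int64_t NextStackOffset = 0;
    std::vector<ArgLoc> Locs;
    for (int I = 0; I < 10; ++I)  // ten i32 arguments: eight in registers, two on the stack
      Locs.push_back(assignArg(NextReg, NextStackOffset));
    return Locs.size() == 10 ? 0 : 1;
  }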
This patch also documents the ABI lowering that a language frontend is
expected to perform. I would like to work to simplify these requirements over
time, but this will require further discussion within the LLVM community.
We add PendingArgFlags to CCState, as a companion to PendingLocs.
The PendingLocs vector is used by a number of backends to handle arguments
that are split during legalisation. However, CCValAssign doesn't keep track of
the original argument alignment. Therefore, add a PendingArgFlags vector which
can be used to keep track of the ISD::ArgFlagsTy for every value added to
PendingLocs.
Differential Revision: https://reviews.llvm.org/D39898
llvm-svn: 320359
As frame pointer elimination isn't implemented until a later patch and we make
extensive use of update_llc_test_checks.py, this change touches a lot of the
RISC-V tests.
Differential Revision: https://reviews.llvm.org/D39849
llvm-svn: 320357
This is a preparatory step for D34515.
This change:
- makes nodes ISD::ADDCARRY and ISD::SUBCARRY legal for i32
- lowering is done by first converting the boolean value into the carry flag
using (_, C) ← (ARMISD::ADDC R, -1) and then converting it back to an integer value
using (R, _) ← (ARMISD::ADDE 0, 0, C). An ARMISD::ADDE between the two
operations does the actual addition.
- for subtraction, given that ISD::SUBCARRY's second result is actually a
borrow, we need to invert the value of the second operand and result before
and after using ARMISD::SUBE. We need to invert the carry result of
ARMISD::SUBE to preserve the semantics.
- given that the generic combiner may lower ISD::ADDCARRY and
ISD::SUBCARRY into ISD::UADDO and ISD::USUBO, we need to update their lowering
as well, otherwise i64 operations would now require branches (see the sketch
after this list). This implies updating the corresponding test for unsigned.
- add a new combiner to remove the redundant conversions from/to carry flags
to/from boolean values (ARMISD::ADDC (ARMISD::ADDE 0, 0, C), -1) → C
- fixes PR34045
- fixes PR34564
- fixes PR35103
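As a sketch of why this matters for i64 (plain C++ modelling the ADDC/ADDE data flow rather than
actual ARM lowering code), a 64-bit add on a 32-bit target becomes two 32-bit adds chained
through a carry bit, with no branches:

  #include <cstdint>
  #include <cstdio>

  // (Sum, CarryOut) = A + B + CarryIn, all 32-bit; CarryOut is the flag that
  // the high-part add-with-carry would consume.
  static uint32_t addCarry(uint32_t A, uint32_t B, uint32_t CarryIn,
                           uint32_t &CarryOut) {
    uint64_t Wide = (uint64_t)A + B + CarryIn;
    CarryOut = (uint32_t)(Wide >> 32);
    return (uint32_t)Wide;
  }

  int main() {
    uint64_t X = 0x00000001FFFFFFFFull, Y = 1;
    uint32_t Carry = 0;
    uint32_t Lo = addCarry((uint32_t)X, (uint32_t)Y, 0, Carry);       // like ADDC
    uint32_t Hi = addCarry((uint32_t)(X >> 32), (uint32_t)(Y >> 32),  // like ADDE
                           Carry, Carry);
    std::printf("0x%08x%08x\n", Hi, Lo);  // prints 0x0000000200000000
    return 0;
  }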
Differential Revision: https://reviews.llvm.org/D35192
llvm-svn: 320355
Introduces the AddrFI "addressing mode", which is necessary simply because
it's not possible to write a pattern that directly matches a frameindex.
Ensure callee-saved registers are accessed relative to the stack pointer. This
is necessary as callee-saved register spills are performed before the frame
pointer is set.
Move HexagonDAGToDAGISel::isOrEquivalentToAdd to SelectionDAGISel, so we can
make use of it in the RISC-V backend.
Differential Revision: https://reviews.llvm.org/D39848
llvm-svn: 320353
All files and parts of files related to microMIPS4R6 are removed.
When the target is microMIPS4R6, errors are printed.
This is the LLVM part of the patch.
Differential Revision: https://reviews.llvm.org/D35625
llvm-svn: 320350
This matches the AVX512 version, is more consistent overall, and improves our scheduler models.
In some cases this adds _Int to instructions that didn't have any Int_ before. It's a side effect of the adjustments made to some of the multiclasses.
llvm-svn: 320325
This moves the CPU-specific scheduler model includes to occur after the instruction includes. The instruction includes now sit between the basic scheduler information and the CPU-specific scheduler models.
llvm-svn: 320313
CreateAddRecFromPHIWithCastsImpl() adds an IncrementNUSW overflow predicate
which allows the PSCEV rewriter to rewrite this SCEV expression:
(zext i8 {0,+,(trunc i32 step to i8)} to i32)
into
{0,+,(sext i8 (trunc i32 step to i8) to i32)}
But then it adds the wrong Equal predicate:
%step == (zext i8 (trunc i32 %step to i8) to i32).
instead of:
%step == (sext i8 (trunc i32 %step to i8) to i32)
This is fixed here.
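A quick numeric check (plain C++, not SCEV code) of why the sext form is the right predicate: for
a negative step that does fit in i8, only the sext round-trip reproduces the original value,
while the zext round-trip does not.

  #include <cstdint>
  #include <cstdio>

  int main() {
    int32_t Step = -3;
    int8_t Trunc = (int8_t)Step;             // trunc i32 %step to i8  -> 0xFD
    int32_t SExt = (int32_t)Trunc;           // sext i8 ... to i32     -> -3
    int32_t ZExt = (int32_t)(uint8_t)Trunc;  // zext i8 ... to i32     -> 253
    std::printf("step=%d sext=%d zext=%d\n", Step, SExt, ZExt);
    // %step == sext(trunc(%step)) holds; %step == zext(trunc(%step)) does not.
    return 0;
  }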
Differential Revision: https://reviews.llvm.org/D40641
llvm-svn: 320298