This revision allows representing a reduction at the level of linalg on tensors for named ops. When a structured op has a reduction and returns tensor(s), new conventions are added and documented.
As an illustration, the syntax for a `linalg.matmul` writing into a buffer is:
```
linalg.matmul ins(%a, %b : memref<?x?xf32>, tensor<?x?xf32>)
outs(%c : memref<?x?xf32>)
```
whereas the syntax for a `linalg.matmul` returning a new tensor is:
```
%d = linalg.matmul ins(%a, %b : memref<?x?xf32>, tensor<?x?xf32>)
init(%c : tensor<?x?xf32>)
-> tensor<?x?xf32>
```
Other parts of linalg will be extended accordingly to allow mixed buffer/tensor semantics in the presence of reductions.
This switches to using DSE + MemorySSA by default again, after
fixing the issues reported after the first commit.
Notable fixes: fc82006331, a0017c2bc2.
This reverts commit 3a59628f3c.
This patch extends SCEVParameterRewriter to support rewriting unknown
expressions to arbitrary SCEV expressions. It will be used by further
patches.
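A minimal usage sketch (the `ValueToSCEVMapTy` map type and the `rewrite` signature are taken to be what this patch introduces; worth double-checking against ScalarEvolutionExpressions.h):
```
// Replace the SCEVUnknown for %n inside Expr with (%m + 1).
#include "llvm/Analysis/ScalarEvolution.h"
#include "llvm/Analysis/ScalarEvolutionExpressions.h"
using namespace llvm;

const SCEV *rewriteExample(ScalarEvolution &SE, const SCEV *Expr,
                           Value *N, Value *M) {
  ValueToSCEVMapTy Map;
  Map[N] = SE.getAddExpr(SE.getSCEV(M), SE.getOne(M->getType()));
  return SCEVParameterRewriter::rewrite(Expr, SE, Map);
}
```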
Reviewed By: reames
Differential Revision: https://reviews.llvm.org/D67176
When generating matching tables for GlobalISel, TableGen would output
"::zero_reg" whenever it encountered zero_reg, which in turn would
result in a compilation error. This patch fixes that by instead outputting
NoRegister (== 0), which is the same result that TableGen produces when
generating matching tables for ISelDAG.
Reviewed By: arsenm
Differential Revision: https://reviews.llvm.org/D86215
This turns all jump table entries into deltas within the target
function because in the small memory model all code & static data must
be in a 4GB block somewhere in memory.
When the entries were a delta between the table location and a basic
block, 32-bit signed entries were not enough to guarantee reachability:
within a 4GB region, a table-to-block delta can span almost the full
±4GB range, whereas a delta from the function start is bounded by the
function's own size.
Differential Revision: https://reviews.llvm.org/D87286
Split out of D87120 (memory profiler). Added unit testing of the new
printing facility.
Reviewed By: vitalybuka
Differential Revision: https://reviews.llvm.org/D87792
The test started to consistently fail after unrelated
2ffaa9a173.
Even before the patch it was possible to make the test fail,
e.g. with -seed=1660180256 on my workstation.
Also, these checks do not look related to strcmp.
Previously methods `FPOptions::get*` returned an unsigned value even if the
corresponding property was represented by a specific enumeration type. With
this change such methods return the actual type of the property. This also
allows printing the value of a property as text rather than as an integer code.
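A simplified sketch of the pattern (illustrative only, not the actual clang code):
```
// Before: the getter returned the packed bits as unsigned.
// After: it casts back to the property's own enumeration type.
enum class RoundingMode : unsigned { Dynamic, ToNearest, Downward };

class FPOptionsSketch {
  unsigned Value = 0; // properties packed into an integer
public:
  RoundingMode getRoundingMode() const {
    return static_cast<RoundingMode>(Value & 0x7); // low 3 bits, say
  }
};
```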
Differential Revision: https://reviews.llvm.org/D87812
The number of ones in the mask for the PDEP determines how many
bits of the other operand are used. If the mask is constant we
can use this to build a mask for SimplifyDemandedBits. This can
be used to replace the extends in the test with an anyextend.
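A sketch of the computation (simplified; the real change presumably lives in X86's SimplifyDemandedBits handling):
```
// A PDEP with a constant mask reads only the low popcount(mask) bits of
// its source operand, so only those bits are demanded.
#include "llvm/ADT/APInt.h"
using namespace llvm;

APInt demandedSourceBits(const APInt &Mask) {
  return APInt::getLowBitsSet(Mask.getBitWidth(), Mask.countPopulation());
}
```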
When the source of the zext is an AssertZext or AssertSext, it is hard to know
anything about the upper 32 bits, so we should insert a zext move before
emitting SUBREG_TO_REG to define the lower 32 bits.
Reviewed By: efriedma
Differential Revision: https://reviews.llvm.org/D87771
This op is a catch-all for creating witnesses from various kinds of
constraints. In particular, when dealing with extents directly, which are
of `index` type, one can use std ops directly to calculate the predicates,
and then use cstr_require for the final conversion to a witness.
Differential Revision: https://reviews.llvm.org/D87871
Previously, there was a little ambiguity about whether IsInlined should
return true for an inlined lexical block, since technically the lexical
block would not represent an inlined function (it'd just be contained
within one).
Edit suggested by Jim Ingham.
This patch adds the instruction definitions and assembly/disassembly tests for
the set boolean condition instructions. This also includes the negative and
reverse variants of the instruction.
Differential Revision: https://reviews.llvm.org/D86252
This patch implements the vec_cntm function prototypes in altivec.h in order to
utilize the vector count mask bits instructions introduced in Power10.
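A hypothetical usage sketch (assumes a Power10 target built with -mcpu=pwr10 and the prototype this patch adds to altivec.h; check the header for the exact signature):
```
#include <altivec.h>

// For byte elements this lowers to the Power10 vcntmbb instruction; the
// second operand selects which mask-bit value to count.
unsigned long long count_byte_mask(vector unsigned char v) {
  return vec_cntm(v, 1);
}
```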
Differential Revision: https://reviews.llvm.org/D82726
- Change OpClass new method addition to find and eliminate any existing methods that
are made redundant by the newly added method, as well as detect if the newly added
method will be redundant and return nullptr in that case.
- To facilitate that, add the notion of resolved and unresolved parameters, where resolved
parameters have each parameter type known, so that redundancy checks on methods
with same name but different parameter types can be done.
- Eliminate the existing ad-hoc code that avoided adding conflicting/redundant build
methods, and rely on this new mechanism to eliminate conflicting build methods
(see the sketch below).
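A simplified illustration of the redundancy check (hypothetical code, not the actual ODS implementation):
```
#include <optional>
#include <string>
#include <vector>

// An unresolved parameter has no concrete type yet, so it is treated as
// matching any type at the same position.
struct Param {
  std::optional<std::string> Type; // nullopt => unresolved
};

// Two same-name methods conflict when no parameter position can tell
// them apart via concrete, differing types.
bool conflicts(const std::vector<Param> &A, const std::vector<Param> &B) {
  if (A.size() != B.size())
    return false;
  for (size_t I = 0; I < A.size(); ++I)
    if (A[I].Type && B[I].Type && *A[I].Type != *B[I].Type)
      return false;
  return true;
}
```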
Fixes https://bugs.llvm.org/show_bug.cgi?id=47095
Differential Revision: https://reviews.llvm.org/D87059
Currently we assume x18 is used as the pointer to the shadow call stack. Users
shall pass the flags:
"-fsanitize=shadow-call-stack -ffixed-x18"
Runtime support is needed to set up x18.
If SCS is desired, all parts of the program should be built with -ffixed-x18 to
maintain interoperability.
There's no particular reason that we must use x18 as the SCS pointer. Any
register may be used, as long as it does not already have a designated purpose,
like RA or passing call arguments.
Differential Revision: https://reviews.llvm.org/D84414
This change enables the generic implicit null transformation for the AArch64 target. As background for those unfamiliar with our implicit null check support:
An implicit null check uses a signal handler to catch a null-pointer fault and redirect control to a handler. Specifically, it replaces an explicit conditional branch with such a redirect. This is only done for very cold branches under frontend control w/appropriate metadata.
FAULTING_OP is used to wrap the faulting instruction. It is modelled as being a conditional branch to reflect the fact it can transfer control in the CFG.
FAULTING_OP does not need to be an analyzable branch to achieve its purpose. (Or at least, that's the x86 model. I find this slightly questionable.)
When lowering to MC, we convert the FAULTING_OP back into the actual instruction, record the labels, and lower the original instruction.
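A conceptual illustration of the kind of source the transform targets (hand-written, not compiler output):
```
// With implicit null checks enabled, the explicit `p == nullptr` branch
// is dropped and the load itself becomes the null test: if p is null,
// the faulting load traps and the signal handler redirects control to
// the cold path.
int load_field(int *p) {
  if (p == nullptr) // very cold, per profile metadata
    return -1;      // the handler the fault gets redirected to
  return *p;        // becomes the FAULTING_OP-wrapped load
}
```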
As can be seen in the test changes, the AArch64 backend currently does not eliminate the unconditional branch to the fallthrough block. I've tried two approaches, neither of which worked. I plan to return to this in a separate change set once I've wrapped my head around the interactions a bit better. (X86 handles this via AllowModify on analyzeBranch, but adding the obvious code causes BranchFolding to crash. I haven't yet figured out whether it's a latent bug in BranchFolding or something I'm doing wrong.)
Differential Revision: https://reviews.llvm.org/D87851
I believe the intention of this test added in
https://reviews.llvm.org/D71687 was to test LoopFullUnrollPass with
clang's -fno-unroll-loops, not its interaction with optnone. Loop
unrolling passes don't run under optnone/-O0.
Also added back the unintentionally removed -disable-loop-unrolling from
https://reviews.llvm.org/D85578.
Reviewed By: echristo
Differential Revision: https://reviews.llvm.org/D86485
Before this patch, the last chance recoloring and deferred spilling
techniques were solely controlled by command line options.
This patch adds target hooks for these two techniques so that it
is easier for backend writers to override the default behavior.
The default behavior of the hooks preserves the default values of
the related command line options.
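A sketch of what a backend override might look like (the hook name and signature are assumed here; the surrounding class is hypothetical):
```
// Hypothetical policy: only use last chance recoloring when optimizing
// aggressively.
bool MyRegisterInfo::shouldUseLastChanceRecoloringForVirtReg(
    const MachineFunction &MF, const LiveInterval &VirtReg) const {
  return MF.getTarget().getOptLevel() == CodeGenOpt::Aggressive;
}
```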
NFC
Initial support for dwarf fission sections (-gsplit-dwarf) on wasm.
The most interesting change is support for writing 2 files (.o and .dwo) in the
wasm object writer. My approach moves object-writing logic into its own function
and calls it twice, swapping out the endian::Writer (W) in between calls.
It also splits the import-preparation step into its own function (and skips it when writing a dwo).
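The shape of that approach, heavily simplified (the method and member names here are assumptions, not the actual WasmObjectWriter API):
```
// Write the .o, then retarget the endian::Writer W at the .dwo stream
// and run the same object-writing logic a second time.
uint64_t writeObject() {
  uint64_t Size = writeOneObject(/*Mode=*/NonDwoOnly); // regular .o
  if (DwoOS) {
    W = support::endian::Writer(*DwoOS, support::little); // swap W
    Size += writeOneObject(/*Mode=*/DwoOnly);             // the .dwo
  }
  return Size;
}
```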
Differential Revision: https://reviews.llvm.org/D85685
I think we need to be even more conservative when traversing memory
phis, to make sure we catch any loop carried dependences.
This approach updates fillInCurrentPair to use unknown sizes for
locations when we walk over a phi, unless the location is guaranteed to
be loop-invariant for any possible loop. Using an unknown size for
locations should ensure we catch all memory accesses to locations after
the given memory location, which includes loop-carried dependences.
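Conceptually, the widening looks like this (a sketch; the helper and its callers are simplifications of the actual fillInCurrentPair change):
```
#include "llvm/Analysis/MemoryLocation.h"
using namespace llvm;

// After the walk crosses a MemoryPhi, keep a precise size only for
// pointers that are invariant in any possible loop.
MemoryLocation widenAfterPhi(const MemoryLocation &Loc, bool CrossedPhi,
                             bool IsGuaranteedLoopInvariant) {
  if (CrossedPhi && !IsGuaranteedLoopInvariant)
    return Loc.getWithNewSize(LocationSize::unknown());
  return Loc;
}
```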
Reviewed By: asbirlea
Differential Revision: https://reviews.llvm.org/D87778
Enable canonicalization of SPF_ABS and SPF_NABS to the abs intrinsic.
To be conservative, the one-use check on the comparison is retained,
this may be relaxed if all goes well.
It's pretty likely that this will uncover places that are missing
handling for the abs() intrinsic. Please report any performance
regressions you see.
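For reference, the C-level shapes that correspond to SPF_ABS and SPF_NABS (illustrative; the canonicalization itself happens on the IR select/icmp pattern):
```
// select(icmp slt x, 0), sub(0, x), x  ->  call to the abs intrinsic
int my_abs(int x) { return x < 0 ? -x : x; }  // SPF_ABS shape
int my_nabs(int x) { return x < 0 ? x : -x; } // SPF_NABS shape
```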
Differential Revision: https://reviews.llvm.org/D87188
Summary: Allow unroll and jam of loops forced by the user.
LoopUnrollAndJamPass is still disabled by default in the NPM pipeline,
and can be controlled by -enable-npm-unroll-and-jam.
Reviewed By: Meinersbur, dmgreen
Differential Revision: https://reviews.llvm.org/D87786
The other assume tests seem to be dealing with equalities in
particular. Test implication for the condition itself, especially
the negated case from PR47496.
This adds test cases for PR40961 and PR47247. They illustrate cases in
which the max backedge-taken count can be improved by information from
the loop guards.
X86 can use xmm registers for pointer operations, e.g. for std::swap.
I don't know yet if it's possible on other platforms.
NT_X86_XSTATE includes all registers from NT_FPREGSET, so
the latter is used only if the former is not available. I am not sure how
reasonable it is to expect that, but LLDB has such a fallback in
NativeRegisterContextLinux_x86_64::ReadFPR.
Reviewed By: morehouse
Differential Revision: https://reviews.llvm.org/D87754