Summary: This will put a space in `lock (process)` when spaces are required after keywords.
Reviewers: krasimir
Reviewed By: krasimir
Subscribers: cfe-commits
Tags: #clang-format, #clang
Differential Revision: https://reviews.llvm.org/D81255
Summary:
To avoid excessive extra stat()s, only check the possible locations of
headers that weren't found at all (leading to a compile error).
For headers that *were* found, we don't check for files earlier on the
search path that could override them.
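A minimal sketch of this policy, with hypothetical names (`Includes`, `checkPossibleHeaderLocations`) rather than the actual clangd code:

```cpp
// Hypothetical sketch of the described policy; names are illustrative.
for (const Inclusion &Inc : Includes) {
  if (Inc.Resolved.empty()) {
    // Header was not found at all (a compile error): worth the extra
    // stat()s to probe its possible locations on the search path.
    checkPossibleHeaderLocations(Inc);
  }
  // Headers that *were* found are trusted as-is; we do not stat() for
  // files earlier on the search path that could override them.
}
```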
Reviewers: kadircet
Subscribers: javed.absar, jkorous, arphaman, usaxena95, cfe-commits
Tags: #clang
Differential Revision: https://reviews.llvm.org/D77942
Replace getNumElements() with getElementCount() when asserting that
two types have the same element counts.
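The resulting assertion pattern looks roughly like this (illustrative, not the exact in-tree diff):

```cpp
// Illustrative only: compare element counts instead of fixed element
// numbers, so the assert also holds for scalable vectors.
assert(cast<VectorType>(SrcTy)->getElementCount() ==
           cast<VectorType>(DstTy)->getElementCount() &&
       "Source and destination vectors must have the same element count");
```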
Differential Revision: https://reviews.llvm.org/D81371
Summary:
This patch adds the following intrinsics for creating two-tuple,
three-tuple and four-tuple scalable vectors:
* llvm.aarch64.sve.tuple.create2
* llvm.aarch64.sve.tuple.create3
* llvm.aarch64.sve.tuple.create4
As well as:
* llvm.aarch64.sve.tuple.get
* llvm.aarch64.sve.tuple.set
for extracting scalable vectors from, and inserting them into, vector
tuples. These intrinsics are intended to be used by the ACLE functions
svcreate<n>, svget and svset.
This patch also includes calling convention support for passing and
returning tuples of scalable vectors to/from functions.
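As a hedged illustration (using the overloaded ACLE forms; not code from this patch), the C-level usage these intrinsics back might look like:

```cpp
// Hedged sketch of ACLE usage backed by these intrinsics; the exact
// spellings are governed by the ACLE spec, not this patch.
#include <arm_sve.h>

svfloat64_t second_of_pair(svfloat64_t A, svfloat64_t B) {
  svfloat64x2_t Tuple = svcreate2(A, B); // llvm.aarch64.sve.tuple.create2
  return svget2(Tuple, 1);               // llvm.aarch64.sve.tuple.get
}
```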
Reviewed By: efriedma
Differential Revision: https://reviews.llvm.org/D75674
Summary: Note to downstream target maintainers: this might silently change the semantics of your code if you override `TargetLowering::HandleByVal` without marking it `override`.
This patch is part of a series to introduce an Alignment type.
See this thread for context: http://lists.llvm.org/pipermail/llvm-dev/2019-July/133851.html
See this patch for the introduction of the type: https://reviews.llvm.org/D64790
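As a hedged sketch, assuming the post-patch signature takes an `Align`, the safe downstream pattern is:

```cpp
// Illustrative: with `override`, a changed base signature (e.g. unsigned ->
// Align for the alignment parameter) is a hard compile error rather than a
// silently unused shadow overload.
class MyTargetLowering : public TargetLowering {
  void HandleByVal(CCState *CCInfo, unsigned &Size,
                   Align Alignment) const override;
};
```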
Reviewers: courbet
Subscribers: sdardis, hiraditya, jrtc27, atanasyan, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D81365
The SSA values created with `shape.const_size` are now named depending on the
value.
A constant size of 3, e.g., is now automatically named `%c3`.
Differential Revision: https://reviews.llvm.org/D81249
Summary:
Add regression tests of asmparser, mccodeemitter, and disassembler for
control instructions. Newly define the previously missing LPM/SPM/LFR/
SFR/SMIR/NOP/LCR/SCR/TSCR/FIDCR control instructions. Define the MISC
registers which the SMIR instruction reads and the IC register which the
SIC instruction reads. Change the asmparser to support Zero, UImm3, and
UImm6 operands and MISC registers, and change the instprinter to support
MISC registers as well. Also change code to use `auto` for the results
of `dyn_cast`.
Differential Revision: https://reviews.llvm.org/D81370
Scalable vectors cannot use 'BUILD_VECTOR', so it is necessary to
properly split and widen scalable vectors when passing them
to CopyToReg/CopyFromReg.
This functionality is added to TargetLoweringBase::getVectorTypeBreakdown().
This patch only adds support for 'splitting' scalable vectors that
are a multiple of some legal type, e.g.
<vscale x 6 x i64> -> 3 x <vscale x 2 x i64>
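A hedged sketch of a call site (illustrative; assumes `TLI` is a `TargetLoweringBase` and `Ctx` an `LLVMContext`):

```cpp
// Illustrative query: <vscale x 6 x i64> splits into 3 parts of
// <vscale x 2 x i64> on a target where the latter is legal.
EVT VT = EVT::getVectorVT(Ctx, MVT::i64, ElementCount::getScalable(6));
EVT IntermediateVT;
MVT RegisterVT;
unsigned NumIntermediates;
unsigned NumRegs = TLI.getVectorTypeBreakdown(Ctx, VT, IntermediateVT,
                                              NumIntermediates, RegisterVT);
// Expect NumIntermediates == 3 and IntermediateVT == <vscale x 2 x i64>.
```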
Reviewers: efriedma, c-rhodes
Reviewed By: efriedma
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D80139
The operations `to_extent_tensor` and `from_extent_tensor` become no-ops when
lowered to the standard dialect.
This is possible because `shape.shape` is lowered to `tensor<?xindex>`.
Differential Revision: https://reviews.llvm.org/D81162
Summary:
Add regression tests of asmparser, mccodeemitter, and disassembler for
shift operation instructions. Also change the asmparser to support the
UImm7 operand, and add the new SLD/SRD/SLA instructions.
Differential Revision: https://reviews.llvm.org/D81324
Attempt to handle unsupported types, such as structs, in
getMemoryOpCost. The backend now checks for a supported type and
calls into BasicTTI as a fallback. BasicTTI will now also perform
the same check and will default to an expensive cost of 4 for 'Other'
MVTs.
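A minimal, self-contained sketch of this check-then-fallback shape (stand-in types, not the actual TTI interface):

```cpp
// Stand-in types; this illustrates the check-then-fallback shape only.
enum class VT { Simple, Other };           // 'Other' ~ unsupported MVT

struct BasicCost {
  virtual unsigned getMemoryOpCost(VT Ty) const {
    return Ty == VT::Other ? 4 : 1;        // expensive default for 'Other'
  }
  virtual ~BasicCost() = default;
};

struct BackendCost : BasicCost {
  unsigned getMemoryOpCost(VT Ty) const override {
    if (Ty == VT::Other)                     // unsupported type (e.g. a struct)
      return BasicCost::getMemoryOpCost(Ty); // fall back to the base model
    return 2;                                // backend-specific cost otherwise
  }
};
```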
Differential Revision: https://reviews.llvm.org/D80984
This parameter gives developers the freedom to choose their desired
function signature conversion when preparing their functions for buffer
placement. It is introduced for BufferAssignmentFuncOpConverter, as well
as for BufferAssignmentReturnOpConverter and BufferAssignmentCallOpConverter,
to adapt the return and call operations to the selected function signature
conversion. If the parameter is set, buffer placement also does not
deallocate the returned buffers.
Differential Revision: https://reviews.llvm.org/D81137
Summary:
With -mbig-endian -mexecute-only and targeting an FPU, an incorrect
sequence of movw/movt was generated to construct a double literal, and
the test suite was hardwired to check these wrong values. The fault was
caused by the explicit word swap in LowerConstantFP().
With -mbig-endian -mexecute-only -mfpu=none, a correct sequence of
movw/movt is generated to construct a double literal, but the test suite
did not cover this no-FPU case.
The expected test values have been corrected, and the test file has been
updated to also cover the fpu=none case.
Reviewers: christof, llvm-commits, dmgreen
Reviewed By: dmgreen
Subscribers: dmgreen, kristof.beyls, hiraditya, danielkiss
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D81259
Summary: This is a follow up on D81196, fixing one more call site.
Reviewers: courbet
Subscribers: llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D81268
After pseudo-expansion, we may end up with ADDI (add immediate)
instructions where the operand is not an immediate but a relocation.
For such instructions, attempting to get the immediate triggers an
assertion failure, for obvious reasons.
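A hedged sketch of the resulting defensive pattern (illustrative, not the exact fix):

```cpp
// Illustrative: check the operand kind before reading an immediate; after
// pseudo-expansion the operand may be a relocation/expression instead.
const MachineOperand &MO = MI.getOperand(2);
if (MO.isImm()) {
  int64_t Imm = MO.getImm(); // safe only for true immediates
  // ... use Imm ...
}
// Otherwise (e.g. MO.isGlobal() or MO.isSymbol()), skip the immediate path.
```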
Fixes: https://bugs.llvm.org/show_bug.cgi?id=45432
This combine tries to shrink a vzmovl if its input is an
insert_subvector. This patch improves it to turn
(vzmovl (bitcast (insert_subvector))) into
(insert_subvector (vzmovl (bitcast))), potentially allowing the
bitcast to be folded with a load.
The addi instruction is usually used to post-increment the loop induction
variable, in a loop that looks like this:

label_X:
  load x, base(i)
  ...
  y = op x
  ...
  i = addi i, 1
  goto label_X

However, for PowerPC, if there are too many VSX instructions between
`y = op x` and `i = addi i, 1`, they use up all the hardware resources and
block the execution of `i = addi i, 1`, which stalls the load instruction
in the next iteration. So a heuristic is added to move the addi as early
as possible, so that the load can hide the latency of the VSX instructions,
provided no other heuristic already applies, to avoid the starvation.
Reviewed By: jji
Differential Revision: https://reviews.llvm.org/D80269
We can pad the v2f32 with 0s up to v8f32 and use a v8f32->v8i64
operation. This is what we end up with for non-strict nodes, except that
there we don't pad with 0s, since we don't care about exceptions.
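An intrinsics-level illustration of that widening (assumes AVX512DQ; it mirrors the idea, not the DAG legalization code):

```cpp
#include <immintrin.h>

// Pad the two payload floats with zeros up to v8f32, then use the
// v8f32 -> v8i64 conversion; zero lanes cannot raise FP exceptions,
// which is what the strict path needs.
__m512i cvt_two_floats(__m128 V) {
  __m128 Two  = _mm_movelh_ps(V, _mm_setzero_ps());                // [v0, v1, 0, 0]
  __m256 Wide = _mm256_insertf128_ps(_mm256_setzero_ps(), Two, 0); // v8f32, upper half 0
  return _mm512_cvttps_epi64(Wide);                                // v8f32 -> v8i64
}
```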
This includes a change to use
`CGM.getCodeGenOpts().getDebugInfo() != codegenoptions::NoDebugInfo`
instead of `getDebugInfo()`,
to fix `Profile-<arch> :: instrprof-gcov-multithread_fork.test`.
See CodeGenModule::CodeGenModule: `EmitGcovArcs || EmitGcovNotes` can
set `clang::CodeGen::CodeGenModule::DebugInfo`.
Clang marks calls to operator new as heap allocation sites, but the
operator declared at global scope returns a void pointer. There is no
explicit cast in the code, so the compiler has to write down the
allocated type itself.
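For illustration, a hypothetical source pattern (not from the patch):

```cpp
// No explicit cast from void* appears in the source: the result of the
// global operator new is implicitly the allocated type, so the compiler
// itself must record `Widget` for the heap allocation site.
struct Widget { int X; };
Widget *W = new Widget(); // global operator new returns void*
```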
Also generalize a cast to use CallBase, so that we mark heap alloc sites
when exceptions are enabled.
Differential Revision: https://reviews.llvm.org/D80966
In the sign splat case, we can fold PMOVMSKB(PACKSSBW(LO(X), HI(X))) -> PMOVMSKB(BITCAST_v32i8(X)) without introducing a signmask + comparison (which unlike for any_of won't fold into a single TEST).
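A hedged intrinsics illustration of why the fold is safe for all_of-style uses (assumes AVX2; this shows the predicate equivalence, not the combine itself):

```cpp
#include <immintrin.h>

// When every 16-bit lane of X is a sign splat (all-ones or all-zero), the
// all_of test on the packed-bytes movemask matches the same test on the
// raw v32i8 movemask, so no extra signmask + comparison is needed.
bool all_of_via_pack(__m256i X) {
  __m128i P = _mm_packs_epi16(_mm256_castsi256_si128(X),
                              _mm256_extracti128_si256(X, 1)); // PACKSS(LO, HI)
  return _mm_movemask_epi8(P) == 0xFFFF;                       // PMOVMSKB
}

bool all_of_via_bitcast(__m256i X) {
  return _mm256_movemask_epi8(X) == -1; // PMOVMSKB(BITCAST_v32i8(X))
}
```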