RISCVMachineFunctionInfo has some fields, such as VarArgsFrameIndex and
VarArgsSaveSize, that are calculated at the ISel lowering stage. That
information is not contained in MIR files, so test cases that rely on
those fields cannot be reproduced correctly from MIR dumps.
This patch adds MIR read/write support for those fields.
Reviewed By: frasercrmck
Differential Revision: https://reviews.llvm.org/D123178
Currently, the utility supports lowering of non-atomic memory transfer
routines only. This patch adds support for the atomic version of memcpy,
which may be useful for targets that do not support atomic memcpy natively.
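As a rough illustration (not code from this patch), a caller could create
the element-wise atomic form through IRBuilder and rely on such a lowering
utility to expand it into a loop on targets without native support:

  #include "llvm/IR/IRBuilder.h"

  // Minimal sketch, assuming Builder, Dst, Src, and Len already exist.
  // Emits llvm.memcpy.element.unordered.atomic, the form a target without
  // native atomic memcpy would want expanded into a load/store loop.
  void emitAtomicMemCpy(llvm::IRBuilder<> &Builder, llvm::Value *Dst,
                        llvm::Value *Src, llvm::Value *Len) {
    Builder.CreateElementUnorderedAtomicMemCpy(
        Dst, llvm::Align(4), Src, llvm::Align(4), Len,
        /*ElementSize=*/4); // each 4-byte element is copied atomically
  }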
Reviewed By: arsenm
Differential Revision: https://reviews.llvm.org/D118443
Similar to the problem in 0bb25b4603, bitcasts that are inserted must
dominate all uses. When rewriting "values" with "new values" that have
the updated address space, we may replace the "new value" with a bitcast
if one of the original users is an addresspace cast. This bitcast must
be inserted before ALL users, not only before the addresspace cast.
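A minimal sketch of the idea (names assumed, not the actual patch): insert
the cast right after the definition of the new value so it dominates every
user:

  #include "llvm/IR/Instructions.h"

  // Minimal sketch, assuming NewV is the rewritten instruction with the
  // updated address space and V is the original value being replaced.
  // Inserting the bitcast right after NewV's definition guarantees it
  // dominates every user of V, not just the addrspacecast that made the
  // cast necessary.
  void insertDominatingBitCast(llvm::Instruction *NewV, llvm::Value *V) {
    auto *Cast = new llvm::BitCastInst(NewV, V->getType(), V->getName());
    Cast->insertAfter(NewV);
    V->replaceAllUsesWith(Cast);
  }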
Reviewed By: arsenm
Differential Revision: https://reviews.llvm.org/D122964
By adding a parameter to the function FoldOpIntoSelect, we can fold more
Ops into a Select.
For the example sketched below, we want to fold the division instruction,
so we no longer care whether the SelectInst has only one use.
This patch solves a TODO left in InstCombine/div.ll.
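As a rough illustration (not a test from the patch), this is the kind of
code that benefits once the division is folded into the select:

  // C++ analogue of the pattern the fold targets; the real test is IR in
  // InstCombine/div.ll.
  unsigned div_of_select(bool c, unsigned x) {
    unsigned d = c ? 16 : 32; // the select feeding the division
    return x / d;             // foldable to c ? x / 16 : x / 32, i.e. a
                              // select between two shifts
  }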
Reviewed By: RKSimon
Differential Revision: https://reviews.llvm.org/D122967
This diff contains:
- Parameterization of bit enum attributes in OpBase.td by bit width (e.g. 32
and 64). Previously, all enums were 32 bits wide. This brings enum
functionality in line with other integer attributes and allows for bit enums
wider than 32 bits.
- The SPIRV and Vector dialects were updated to use bit enum attributes with
an explicit bit width.
Reviewed By: mehdi_amini
Differential Revision: https://reviews.llvm.org/D123095
Removes a bogus dyn_cast_or_null that was breaking cast-expression handling when
parsing llvm.global_ctors.
The intent of this code was to identify Functions nested within cast
expressions, but the offending dyn_cast_or_null was actually blocking that:
Since a function is not a cast expression, we would set FuncC to null and break
the loop without finding the Function. The cast was not necessary either:
Functions are already Constants, and we didn't need to do anything
ConstantExpr-specific with FuncC, so we could just drop the cast.
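A hedged reconstruction of the fixed pattern, simplified from the
description above rather than copied from the source:

  #include "llvm/IR/Constants.h"
  #include "llvm/IR/Function.h"

  // Strip cast expressions to reach the Function. Keeping the iteration
  // variable a plain Constant is the point: the old code re-assigned it
  // through dyn_cast_or_null<ConstantExpr>, so reaching the Function
  // (which is not a ConstantExpr) nulled it out and broke the loop early.
  static llvm::Function *findCtorFunction(llvm::Constant *C) {
    while (auto *CE = llvm::dyn_cast<llvm::ConstantExpr>(C))
      C = llvm::cast<llvm::Constant>(CE->getOperand(0));
    return llvm::dyn_cast<llvm::Function>(C);
  }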
Thanks to Jonas Hahnfeld for tracking this down.
http://llvm.org/PR54797
Since the NTTP may need to be cast to the type when rebuilding the name,
check that the type can be rebuilt when determining whether a template
name can be simplified.
Some parts of the code have to distinguish between live and postmortem threads
to figure out how to get some data, e.g. thread trace buffers. This makes the
code less generic and more error prone. An example of that is that we have
two different decoders: LiveThreadDecoder and PostMortemThreadDecoder. They
exist because getting the trace buffer is different for each case.
The problem doesn't stop there. Soon we'll have even more kinds of data, like
the context switch trace, whose fetching will be different for live and
postmortem processes.
As a way to fix this, I'm creating a common API for accessing thread data,
which is able to figure out how to handle the postmortem and live cases on
behalf of the caller. As a result, I was able to eliminate the two decoders
and unify them into a simpler one. Not only that, our TraceSave functionality
used to work only for live threads, but now it also works for postmortem
processes, which might not be very useful now, but it could be in the future.
This common API is OnThreadBinaryDataRead. More information is in the inline
documentation.
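A hedged usage sketch, assuming a `trace` object and a thread in scope; the
data-kind string and the exact signature are assumptions, not the real LLDB
API surface:

  llvm::Error Err = trace.OnThreadBinaryDataRead(
      thread.GetID(), /*kind=*/"threadTraceBuffer",
      [](llvm::ArrayRef<uint8_t> Data) -> llvm::Error {
        // Decode Data here; the API fetched it from the live process or
        // from the trace bundle on disk, whichever applies.
        return llvm::Error::success();
      });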
Differential Revision: https://reviews.llvm.org/D123281
As we soon will need to decode multiple raw traces for the same thread,
having a class that encapsulates the decoding of a single raw trace is
a stepping stone that will make the coming features easier to implement.
So, I'm creating a LibiptDecoder class with that purpose. I refactored
the code and it's now much more readable. Besides that, more comments
were added. With this new structure, it's also easier to implement unit
tests.
Differential Revision: https://reviews.llvm.org/D123106
Using a portable format specifier avoids a "format specifies type
'unsigned long long' but the argument has type 'uint64_t' (aka 'unsigned
long') [-Werror,-Wformat]" error depending on the exact definition of
`uint64_t`.
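For instance, a minimal illustration of the portable idiom (not the exact
call site from the patch):

  #include <cinttypes>
  #include <cstdint>
  #include <cstdio>

  int main() {
    std::uint64_t Value = 42;
    // std::printf("%llu\n", Value);  // -Wformat error on targets where
    //                                // uint64_t is 'unsigned long'
    std::printf("%" PRIu64 "\n", Value); // matches uint64_t everywhere
    return 0;
  }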
This fixes an issue where the current line offset actually belongs to the
next range. Maintain the current code range with its current line offset and
cache the next file/line offset; update the file/line offset after finishing
the current range.
Differential Revision: https://reviews.llvm.org/D123151
This diff is motivated by my work to add proper DWARF unwind support. As
detailed in PR50956 functions that need DWARF unwind need to have
compact unwind entries synthesized for them. These CU entries encode an
offset within `__eh_frame` that points to the corresponding DWARF FDE.
In order to encode this offset during
`UnwindInfoSectionImpl::finalize()`, we need to first assign values to
`InputSection::outSecOff` for each `__eh_frame` subsection. But
`__eh_frame` is ordered after `__unwind_info` (according to ld64 at
least), which puts us in a bit of a bind: `outSecOff` gets assigned
during finalization, but `__eh_frame` is being finalized after
`__unwind_info`.
But it occurred to me that there's no real need for most
ConcatOutputSections to be finalized sequentially. It's only necessary
for text-containing ConcatOutputSections that may contain branch relocs
which may need thunks. ConcatOutputSections containing other types of
data can be finalized in any order.
This diff moves the finalization logic for non-text sections into a
separate `finalizeContents()` method. This method is called before
section address assignment & unwind info finalization takes place. In
theory we could call these `finalizeContents()` methods in parallel, but
in practice it seems to be faster to do it all on the main thread.
Reviewed By: #lld-macho, oontvoo
Differential Revision: https://reviews.llvm.org/D123279
It was initially handled only for FLAT because we did not have lowering for
unaligned DS instructions. That lowering is now implemented, but the bug is
still not handled there.
Differential Revision: https://reviews.llvm.org/D123338
Compiler-rt cross-compilation for ARMv5 fails because D99282 made it an error
if DMB is used for any pre-ARMv6 target. More specifically, the "#error only
supported on ARMv6+" added in D99282 causes compilation to fail when any
source file that includes assembly.h is compiled for a pre-ARMv6 target.
Since the only place where DMB is used is sync-ops.h (which is only included
by arm/sync_fetch_and_* and these files are excluded from being built for
older targets), this patch moves the definition there to avoid the issues
described above.
Reviewed By: efriedma
Differential Revision: https://reviews.llvm.org/D123105
This reverts commit 764cd491b1, which I incorrectly assumed to be NFC,
partly because there was no test coverage for the non-relocatable,
non-emit-relocs case before 9d6d936243fe343abe89323a27c7241b395af541.
The interaction of {,-r,--emit-relocs} {,--discard-locals} {,--gc-sections}
is complex, but without -r/--emit-relocs, --gc-sections does need to discard
.L symbols just like --no-gc-sections does. The behavior matches GNU ld.
There is no need to fully scalarize an unaligned operation in some cases;
it is enough to split it according to the alignment.
Differential Revision: https://reviews.llvm.org/D123330
This patch changes `EmitPPCBuiltinExpr` in `CGBuiltin.cpp` to remove
the loop at the beginning of the function that emits the arguments and
to delay emitting the arguments until inside the switch statement. These
changes will put `EmitPPCBuiltinExpr` in line with the strategy of the
target independent function `EmitBuiltinExpr`. Also, this patch
ensures that arguments are only emitted once.
Tests that included builtins affected by these changes have been
modified to match expected behaviour.
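A hedged sketch of the resulting shape; the builtin, intrinsic, and
surrounding names are placeholders assuming clang's CodeGenFunction context:

  // Arguments are now emitted per-case inside the switch, exactly once,
  // instead of eagerly in a loop at the top of EmitPPCBuiltinExpr.
  switch (BuiltinID) {
  case PPC::BI__builtin_ppc_example: { // hypothetical builtin
    llvm::Value *Op0 = EmitScalarExpr(E->getArg(0));
    llvm::Value *Op1 = EmitScalarExpr(E->getArg(1));
    llvm::Function *F =
        CGM.getIntrinsic(llvm::Intrinsic::ppc_example); // hypothetical
    return Builder.CreateCall(F, {Op0, Op1});
  }
  // ...
  }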
Reviewed By: #powerpc, nemanjai, amyk
Differential Revision: https://reviews.llvm.org/D121637
If the variables view shows a variable that is a struct for which
MightHaveChildren() returns true, the expand diamond is shown; but if
expanding it turns out not to be possible (e.g. an incomplete type), remove
the expand diamond to make it clear that the item cannot in fact be expanded.
Otherwise it feels kind of weird that a tree item cannot be expanded even
though it looks like it should.
I guess the MightHaveChildren() check means that GetChildren() may
be expensive, so also do not call it for rows that are not expanded.
Differential Revision: https://reviews.llvm.org/D123008
I assume we meant to return the result of the call to
BaseT::isLoweredToCall(F).
This might not be a functional change in practice because it would
still hit the default case in the switch and call
BaseT::isLoweredToCall(F) at the end.
Reviewed By: efriedma
Differential Revision: https://reviews.llvm.org/D123333
I was wondering if SymtabSection::emitStabs() should check
defined->includeInSymtab. Add asserts and comments explaining why that's not
necessary.
No behavior change.
Differential Revision: https://reviews.llvm.org/D123302
We aim at improving the readability and maintainability of Options.td,
and in particular its handling of 'Flags', by
- limiting the extent of 'let Flags = [...] in {'s,
- adding closing comments to matching '}'s, and
- being more consistent about empty lines around 'let Flags' and '}'.
More concretely,
- we do not let a 'let Flags' span across several headline comments.
When all 'def's under two consecutive headlines share the same flags,
we still close and start a new 'let Flags' at the intermediate
headline.
- when a 'let Flags' spans just one or two 'def's, we set 'Flags' within
the 'def's instead.
- we remove nested 'let Flags'.
Note that nested 'let Flags' can be quite confusing, especially when
the outer was started long before the inner. Moving a 'def' out of the
inner 'let Flags' and setting 'Flags' within the 'def' will not have the
intended effect, as those flags will be overridden by the outer
'let Flags'.
Reviewed By: awarzynski, jansvoboda11, hans
Differential Revision: https://reviews.llvm.org/D123070
Reland Note: Adds a fix to properly mark a commutative operation as folded if we change the order
of its operands. This was uncovered by the fact that we no longer re-process constants.
This avoids accidentally reversing the order of constants during successive
application, e.g. when running the canonicalizer. This helps reduce the number
of iterations, and also avoids unnecessary changes to input IR.
Fixes #51892
Differential Revision: https://reviews.llvm.org/D122692
{D118797} means that we can now check the name/segname of a given
section directly, instead of having to look those properties up on one
of its subsections. This allows us to simplify our code.
Reviewed By: #lld-macho, oontvoo
Differential Revision: https://reviews.llvm.org/D123275
An insert subvector under aarch64 can often be done as a single lane mov
operation. For example, a v4i8 inserted into a v16i8 is an s-reg mov, so
long as the index is a multiple of 4. This patch teaches the cost model
that, using code copied over from the X86 backend.
Some of the costs (v16i16_4_0) are still high because they get matched
as an SK_Select, not an SK_InsertSubvector. D120879 has some codegen
tests for inserting subvectors, which were added as
llvm/test/CodeGen/AArch64/insert-subvector.ll.
Differential Revision: https://reviews.llvm.org/D120880
This patch adds the `llvm_omp_target_dynamic_shared_alloc` function to
the `omp.h` header file so users can access it by default. Also changed
the name to keep it consistent with the other target allocators. Added
some documentation so users know how to use it. Didn't add the interface
for Fortran since there's no way to test it right now.
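A hedged usage sketch; the environment variable for reserving the dynamic
shared memory is an assumption here, not taken from the patch:

  #include <cstdio>
  #include <omp.h>

  int main() {
    // Assumes the runtime was asked to reserve dynamic shared memory up
    // front, e.g. via LIBOMPTARGET_SHARED_MEMORY_SIZE (an assumption).
  #pragma omp target
    {
      // Pointer into the device's dynamic shared-memory buffer.
      int *Buf = static_cast<int *>(llvm_omp_target_dynamic_shared_alloc());
      if (Buf)
        Buf[0] = 42;
    }
    std::printf("done\n");
    return 0;
  }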
Reviewed By: jdoerfert
Differential Revision: https://reviews.llvm.org/D123246
The target allocators have been supported for NVPTX offloading for
a while. The tests should use the allocators instead of calling the
functions manually. Also, the comments indicating that these are a preview
should be removed.
Reviewed By: jdoerfert
Differential Revision: https://reviews.llvm.org/D123242
As noticed in D87637, when LLDB crashes, we only print stack traces if
LLDB is directly executed, not when used via Python bindings. Enabling
this by default may be undesirable (libraries shouldn't be messing with
signal handlers), so make this an explicit opt-in.
I "commandeered" this patch from Jordan Rupprecht who put this up for
review originally.
Differential revision: https://reviews.llvm.org/D91835
In addition, this fixes a small bug where padding incorrectly inferred the
output shape for dynamic inputs in convolution.
Reviewed By: jpienaar
Differential Revision: https://reviews.llvm.org/D121872
The code to check if the regular LTO summary should be emitted and to
add the corresponding module flags was duplicated in the
'EmitAssemblyHelper::EmitAssemblyWithLegacyPassManager' and
'EmitAssemblyHelper::RunOptimizationPipeline' methods.
In order to eliminate these code duplications, the
'EmitAssemblyHelper::shouldEmitRegularLTOSummary' method has been
extracted. The method returns a bool value, the value is 'true' if the
module summary should be emitted. The patch keeps the setting of the
module flags inline.
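A minimal sketch of what the extracted predicate and its use might look
like; the exact conditions and flag values are in the patch and are an
assumption here:

  bool EmitAssemblyHelper::shouldEmitRegularLTOSummary() const {
    return CodeGenOpts.PrepareForLTO && !CodeGenOpts.DisableLLVMPasses &&
           llvm::Triple(TheModule->getTargetTriple()).getVendor() !=
               llvm::Triple::Apple;
  }

  // Callers keep setting the module flags inline (flag values assumed):
  if (shouldEmitRegularLTOSummary()) {
    TheModule->addModuleFlag(llvm::Module::Error, "ThinLTO", uint32_t(0));
    TheModule->addModuleFlag(llvm::Module::Error, "EnableSplitLTOUnit",
                             uint32_t(1));
  }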
Reviewed By: tejohnson
Differential Revision: https://reviews.llvm.org/D123026