Before, COMPILER_RT_TEST_COMPILER was used, which points to a C compiler. While
it is incorrect to assume either of these is the default compiler, using the
C++ one allows C++ tests to be linked.
Differential Revision: https://reviews.llvm.org/D109207
D104143 introduced canonical value numbering between regions, which allows items to be easily identified across regions. This eliminates the need for the outliner to create parallel lists of instructions for each region and lets it replace output values in a less convoluted way.
Additionally, in a future commit, the output values will not necessarily be values recorded from the region itself; they could be combination values where the actual value being output is a PHINode instead. This new method allows us to handle replacing an output value with the corresponding stored item in the same place for both normal output values and PHINode outputs, instead of handling the different kinds of outputs in different locations.
Reviewers: paquette, roelofs
Differential Revision: https://reviews.llvm.org/D108656
This patch changes SPMDization to not trigger for regions with no
parallelism. Otherwise, this will introduce unnecessary barriers that
will slow the single-threaded region down.
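For illustration, a minimal sketch of the kind of region this applies to (illustrative user code, not taken from the patch or its tests):
```
// Illustrative only: a target region with no nested parallelism. Converting
// it to SPMD mode would add barriers around inherently single-threaded work.
void init_on_device(int *data, int n) {
#pragma omp target map(tofrom : data[0:n])
  {
    // No '#pragma omp parallel' anywhere inside: a single thread runs this,
    // so SPMDization should leave the region in generic mode.
    for (int i = 0; i < n; ++i)
      data[i] = i;
  }
}
```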
Reviewed By: jdoerfert
Differential Revision: https://reviews.llvm.org/D109438
IRExecutionUnit::SearchSpec is a struct that encapsulates information
needed to look for a symbol. Specifically, it is composed of a name
represented as a ConstString and a FunctionNameType mask.
Because the mask is unused (effectively always set to
eFunctionNameTypeFull), we can remove the mask and replace all uses with
eFunctionNameTypeFull. After doing that, SearchSpec is effectively a
wrapper around a ConstString.
As an aside, SearchSpec is similar in purpose to Module::LookupInfo. I
briefly considered replacing uses of SearchSpec with LookupInfo, but
the current code only cares about symbol names (treating them as
eFunctionNameTypeFull). This code does care about language type, so
LookupInfo may be appropriate for IRExecutionUnit in the future.
Differential Revision: https://reviews.llvm.org/D109384
Otherwise we end up with an extra conditional jump, followed by an
unconditional jump off the end of a function, i.e.:
bb.0:
BT32rr ..
JCC_1 %bb.4 ...
bb.1:
BT32rr ..
JCC_1 %bb.2 ...
JMP_1 %bb.3
bb.2:
...
bb.3.unreachable:
bb.4:
...
Should be equivalent to:
bb.0:
BT32rr ..
JCC_1 %bb.4 ...
JMP_1 %bb.2
bb.1:
bb.2:
...
bb.3.unreachable:
bb.4:
...
This can occur because, at the higher-level IR (Instruction), SwitchInsts
are required to have BBs for their default destinations, even when it can be
deduced that such BBs are unreachable.
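As a hedged illustration (not the kernel code in question), source like the following gives the IR switch a default destination block that is trivially dead:
```
// Illustrative only: every value of 'v & 0x3' has a case, so the IR switch's
// mandatory default destination is an unreachable block (asserted here with
// __builtin_unreachable; in the kernel case CVP deduces the same thing).
int classify(unsigned v) {
  switch (v & 0x3) {
  case 0: return 10;
  case 1: return 11;
  case 2: return 12;
  case 3: return 13;
  }
  __builtin_unreachable(); // default destination exists in IR, never taken
}
```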
For most programs, this isn't an issue, just wasted instructions since the
unreachable has been statically proven.
The x86_64 Linux kernel, when built with CONFIG_LTO_CLANG_THIN=y, fails to
boot, though, once D106056 is re-applied. D106056 makes it more likely
that correlated-value-propagation (CVP) can deduce that the default case of
a SwitchInst is unreachable. The x86_64 kernel uses a binary post-processor
called objtool, which emits this warning:
vmlinux.o: warning: objtool: cfg80211_edmg_chandef_valid()+0x169: can't
find jump dest instruction at .text.cfg80211_edmg_chandef_valid+0x17b
I haven't debugged precisely why this causes a failure at boot time, but
fixing this very obvious jump off the end of the function fixes the
warning and boot problem.
Link: https://bugs.llvm.org/show_bug.cgi?id=50080
Fixes: https://github.com/ClangBuiltLinux/linux/issues/679
Fixes: https://github.com/ClangBuiltLinux/linux/issues/1440
Reviewed By: hans
Differential Revision: https://reviews.llvm.org/D109103
This should have been the 4-byte version in the first place. Unfortunately, there is no easy way to add a test, as both the 1-byte and the 4-byte versions are printed as 'jmp' in the assembly code.
Reviewed By: kda
Differential Revision: https://reviews.llvm.org/D109453
HIP currently diagnoses capturing the this pointer in a device lambda
inside a host member function. If the this pointer points to managed
memory, it can be used in both device and host functions; in that
situation, capturing the this pointer in a device lambda inside a host
member function is valid usage. Change the diagnostic about capturing
the this pointer to a warning.
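A hedged sketch of the pattern being downgraded (illustrative code; the managed allocation and member names are assumptions, not from the patch):
```
// Illustrative only: 'this' points to managed memory (e.g. the object was
// allocated with hipMallocManaged), so dereferencing it on the device is valid.
struct Counter {
  int value = 0;

  void bump_on_device() {            // host member function
    auto work = [=] __device__ () {  // device lambda capturing 'this'
      return this->value + 1;        // previously an error, now a warning
    };
    (void)work;                      // placeholder for an actual kernel launch
  }
};
```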
Reviewed by: Artem Belevich
Differential Revision: https://reviews.llvm.org/D108493
Detected by evil-izing the widely used `MoveOnly` testing type.
I had to patch some tests that were themselves using its comma operator,
but I think that's a worthwhile cost in order to catch more places
in our headers that needed comma-proofing.
The trick here is that even `++ptr, SomeClass()` can find a comma operator
by ADL, if `ptr` is of type `Evil*`. (A comma between two operands
of non-class-or-enum type is always treated as the built-in
comma, without ADL. But if either operand is class-or-enum, then
ADL happens for _both_ operands' types.)
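A minimal sketch of the trap (the `Evil` type below is illustrative, not the actual change to `MoveOnly`):
```
// Illustrative only.
struct Evil {
  // Hidden friend: only reachable through ADL.
  friend void operator,(Evil *, const Evil &) = delete;
};

void step(Evil *ptr) {
  // ++ptr, Evil{};     // ill-formed: ADL on the right operand's class type
                        // finds the deleted comma operator, even though the
                        // left operand is a plain pointer
  (void)++ptr, Evil{};  // comma-proofed: a void operand can never bind to an
                        // overloaded comma, so the built-in comma is used
}
```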
Differential Revision: https://reviews.llvm.org/D109414
Provide the `extern template` declaration prior to use. This is helpful
as it saves the compiler from having to instantiate the template, since
an explicit instantiation will be provided elsewhere. This is particularly
important on Windows, where `__declspec(dllexport)` traverses inheritance
clauses, resulting in an incorrect application of the dll interface to
declarations.
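A hedged sketch of the general pattern (class and member names are illustrative):
```
// Illustrative only.
template <typename T>
class Registry {
public:
  void add(const T &item);
};

// Declared before any use: promises that an explicit instantiation of
// Registry<int> exists in another translation unit, so the compiler does not
// instantiate the member definitions here (or re-apply a dll interface to them).
extern template class Registry<int>;

inline void registerDefault(Registry<int> &r) {
  r.add(0); // uses Registry<int> without instantiating Registry<int>::add
}
```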
Similar to `DAGCombiner::visitRotate`.
This makes `rotl_bitwidth_cst` in postlegalizercombiner-rotate.mir reduce down
to a COPY. Modify the checkline to make sure that only rotate_out_of_range
runs there.
Differential Revision: https://reviews.llvm.org/D109264
Previously the test was failing on platforms where `long` is less than
64 bits wide (e.g. older WatchOS simulators and arm64_32) because the
`padding` field was too small.
The test currently relies on `my_object->isa` being either scribbled or
left unmodified after `my_object` is freed. However, this was not the
case because the `isa` pointer intersected with
`ChunkHeader::free_context_id`. `free_context_id` starts at the
beginning of user memory but is only initialized once the memory is
freed. This caused the `isa` pointer to change after the object was
freed, leading to the test crashing.
To fix this, the `padding` field has been made explicitly 64 bits wide
(the same size as `ChunkHeader::free_context_id`).
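A hedged sketch of the resulting layout in the test (the struct and field names other than `padding` and `isa` are assumptions based on the description above):
```
// Illustrative only.
#include <cstdint>

struct FakeObjCObject {
  uint64_t padding; // explicitly 64 bits: covers ChunkHeader::free_context_id,
                    // which overlays the first bytes of freed user memory
  void *isa;        // now sits past free_context_id, so it is either scribbled
                    // or left untouched after the object is freed
};
```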
rdar://75806757
Differential Revision: https://reviews.llvm.org/D109409
This is consistent with the RVV intrinsic patterns. This has been
shown to prevent some "ran out of registers" errors in our internal
testing.
Unfortunately, there are some regressions on LMUL=8 tests in here.
I think the lack of registers with LMUL=8 just makes it very hard
to schedule correctly.
Reviewed By: frasercrmck
Differential Revision: https://reviews.llvm.org/D109245
Currently, we only deal with the case where we can match
the number of low bits to be kept, i.e.:
```
x & ((1 << y) - 1)
```
will extract low `y` bits of `x`.
But what will
```
x & (-1 >> y)
```
do?
Logically, it will extract `bitwidth(x) - y` low bits, i.e.:
```
x & ~(-1 << (bitwidth(x)-y))
```
... except we can't do such a transformation in IR in general,
because if we wanted to extract all the bits, `(-1 >> 0)` is fine,
but `-1 << bitwidth(x)` would be `poison`: https://alive2.llvm.org/ce/z/BKJZfw.
Yet here, with BMI's BEXTR and BMI2's BZHI, we don't have any such problems with edge cases.
So what we can do is: https://alive2.llvm.org/ce/z/gm5M2B
As briefly discussed with @craig.topper, this appears to be not worse than what we'd end up with currently (a pair of shifts):
* https://godbolt.org/z/nsPb8bejs (direct data dependency, sequential execution)
* https://godbolt.org/z/7bj3zeh1d (no direct data dependency, parallel execution)
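For reference, a hedged C-level sketch of the two patterns discussed above (function names are illustrative):
```
// Illustrative only.
#include <cstdint>

// Keeps the low (32 - y) bits of x, assuming y < 32 so the shift is in range;
// this is the 'x & (-1 >> y)' form the patch teaches the X86 backend to match
// to BEXTR/BZHI instead of a pair of shifts.
uint32_t keep_all_but_high_y(uint32_t x, unsigned y) {
  return x & (UINT32_MAX >> y);
}

// The already-handled form: keeps the low y bits of x (again assuming y < 32;
// y == bitwidth is exactly the poison/UB edge case mentioned above).
uint32_t keep_low_y(uint32_t x, unsigned y) {
  return x & ((1u << y) - 1);
}
```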
Reviewed By: craig.topper
Differential Revision: https://reviews.llvm.org/D107923
The expansion of these pseudos creates ADD instructions. Those
ADDs modify a GPR so that it no longer contains the same value
as the input base pointer. Therefore, I believe we should have a
GPR as a Def on these instructions and expansion should get the
destination register for the ADDs from that operand.
At least in our tests here this works out so that register
scavenging picks the same register as the base pointer.
Reviewed By: frasercrmck
Differential Revision: https://reviews.llvm.org/D109405
I found that fork mode's initial corpus allocation has certain shortcomings,
so I designed a new initial corpus allocation strategy based on size grouping.
This method gives more energy to the small seeds in the corpus and
increases fuzzing throughput.
Fuzzbench data (glibfuzzer is -fork_corpus_groups=1):
https://www.fuzzbench.com/reports/experimental/2021-08-05-parallel/index.html
Reviewed By: morehouse
Differential Revision: https://reviews.llvm.org/D105084
This moves 2 variable declarations from `llvm/Support/Debug.h` to a more
appropriate home in the headers for `LLVMAnalysis`. These variables are
defined in `LLVMAnalysis` rather than in `LLVMSupport`, and although they
control debugging behavior, colocating the declarations in the same
library's headers makes them easier to locate and helps correctly describe
the library's interfaces.
Reviewed By: rnk, mehdi_amini, aeubanks
Differential Revision: https://reviews.llvm.org/D109396
This patch continues the refactoring done in D99055. It puts format-specific
options into the corresponding CopyConfig structures.
Differential Revision: https://reviews.llvm.org/D102277
When we start outlining across branches, we may have two different blocks with different output locations, or a single branch that goes to two blocks outside of the region being outlined. The CodeExtractor provides most of the mechanism by using the return value of the extracted function as the input to a switch statement that branches to the correct location, but we need special handling for the different output schemas for each location.
This is done by repeating the existing storing scheme for each exit block. We keep a map from each return value used to the basic block within the outlined function that stores the outputs for that exit block. Then, if needed, we create a switch statement for each return block to branch to the correct set of stored outputs.
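A hedged, purely conceptual sketch of that dispatch scheme in hand-written C++ (the outliner emits IR, not source; names and shapes are illustrative):
```
// Illustrative only: the extracted function stores the outputs for whichever
// exit it takes, then returns an index that the caller switches on.
static int outlined(int in, int *out_a, int *out_b) {
  if (in > 0) {
    *out_a = in * 2; // store block for the outputs of exit #0
    return 0;        // tells the caller to branch to exit block #0
  }
  *out_b = in - 1;   // store block for the outputs of exit #1
  return 1;          // tells the caller to branch to exit block #1
}

void caller(int in) {
  int a = 0, b = 0;
  switch (outlined(in, &a, &b)) { // CodeExtractor-style return-value switch
  case 0: /* exit block #0 consumes a */ break;
  case 1: /* exit block #1 consumes b */ break;
  }
}
```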
Reviewers: paquette
Differential Revision: https://reviews.llvm.org/D106993
This moves the declaration of `VerifyDomInfo` into
`llvm/IR/Dominators.h` from `llvm/Support/Debug.h`. Although this is a
debugging utility, the definition of the symbol is in LLVMIR, not in
LLVMSupport, so the declaration belongs in the containing module's
header.
Reviewed By: rnk, mehdi_amini
Differential Revision: https://reviews.llvm.org/D109395
This patch refactors the existing implementation of computing an explicit
representation of an identifier as a floordiv in terms of other identifiers and
exposes this computation as a public function.
The computation of this representation is required to support local identifiers
in PresburgerSet subtract, complement and isEqual.
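For context, a brief sketch of the standard characterization such an explicit representation relies on (notation is illustrative, not the patch's variable names):
```
% Sketch: for a positive divisor d, an identifier q is the floor division of
% the affine expression a^T x + c by d exactly when the two inequalities below
% hold; this is the form the representation is recovered from.
\[
  q \;=\; \left\lfloor \frac{a^{\top} x + c}{d} \right\rfloor
  \quad\Longleftrightarrow\quad
  0 \;\le\; a^{\top} x + c - d\,q \;\le\; d - 1 .
\]
```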
Reviewed By: bondhugula, arjunp
Differential Revision: https://reviews.llvm.org/D106662
A new omp_all_memory task dependence type is implemented.
The library recognizes the new type via either
(dependence_address == NULL && dependence_flag == 0x80)
or
(dependence_address == SIZE_MAX).
A task with the new dependence type depends on each preceding task
with any dependence type (a kind of dependence barrier).
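A hedged usage sketch from the user's side (OpenMP 5.1 syntax; the encodings above are what the runtime sees internally):
```
// Illustrative only.
#include <cstdio>

int main() {
  int a = 0, b = 0;
#pragma omp parallel
#pragma omp single
  {
#pragma omp task depend(out : a)
    { a = 1; }
#pragma omp task depend(out : b)
    { b = 2; }
    // Depends on every preceding task regardless of its dependence type,
    // acting as a dependence barrier.
#pragma omp task depend(inout : omp_all_memory)
    { std::printf("%d %d\n", a, b); }
  }
  return 0;
}
```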
Differential Revision: https://reviews.llvm.org/D108574
Remove File::eOpenOptionAppend from the mode used by the 'platform file
open' command. According to POSIX, O_APPEND causes all successive
writes to be placed at the end of the file. This effectively makes
the offset argument to 'platform file write' meaningless.
Furthermore, apparently O_APPEND is not implemented reliably everywhere.
The Linux manpage for pwrite(2) suggests that Linux does respect
O_APPEND there while according to POSIX it should not, so the actual
behavior would be dependent on how the vFile:pwrite packet is
implemented on the server.
Ideally, the flags used for opening the file would be provided via options.
However, changing the default mode seems to be a reasonable intermediate
solution.
Differential Revision: https://reviews.llvm.org/D107664
I just ran into a compiler error involving __bind_back and some overloads
that were being disabled with _EnableIf. I noticed that the error message
was quite bad and did not mention the reason for the overload being
excluded. Specifically, the error looked like this:
candidate template ignored: substitution failure [with _Args =
<ContiguousView>]: no member named '_EnableIfImpl' in 'std::_MetaBase<false>'
Instead, when using enable_if or enable_if_t, the compiler is clever and
can produce better diagnostics, like so:
candidate template ignored: requirement 'is_invocable_v<
std::__bind_back_op<1, std::integer_sequence<unsigned long, 0>>,
std::ranges::views::__transform::__fn &, std::tuple<PlusOne> &,
ContiguousView>' was not satisfied [with _Args = <ContiguousView>]
Basically, it tries to do a poor man's implementation of concepts, which
is already a lot better than simply complaining about substitution failure.
Hence, this commit uses enable_if_t instead of _EnableIf whenever
possible. That is both more straightforward than using the internal
helper, and also leads to better error messages in those cases.
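A hedged sketch of the shape of the change (not libc++'s actual declarations):
```
// Illustrative only.
#include <type_traits>
#include <utility>

// Constraining an overload with the standard alias, which compilers recognize
// and report as "requirement ... was not satisfied" rather than as a bare
// substitution failure inside an internal helper like _EnableIf.
template <class _Fn, class... _Args,
          std::enable_if_t<std::is_invocable_v<_Fn, _Args...>, int> = 0>
decltype(auto) __call(_Fn&& __f, _Args&&... __args) {
  return std::forward<_Fn>(__f)(std::forward<_Args>(__args)...);
}
```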
I understand the motivation for _EnableIf's implementation was to improve
compile-time performance; however, I believe striving to improve error
messages is even more important for our QOI, hence this patch. Furthermore,
it is unclear that _EnableIf actually improved compile-time performance
in any noticeable way (see discussion in the review for details).
Differential Revision: https://reviews.llvm.org/D108216
Add a loop coalescing utility for affine.for. This expects loops to have
been normalized a priori. It works for both constant and non-constant
upper bounds, with single- or multiple-result upper bound affine maps.
With contributions from Arnab Dutta and Uday Bondhugula.
Reviewed By: bondhugula, ayzhuang
Differential Revision: https://reviews.llvm.org/D108126
The call to emitCodeAlignment was missing an STI, which is required
after D45962.
emitCodeAlignment has a default value of 0 for its MaxBytesToEmit parameter.
Explicitly passing 0 here was interpreted as a nullptr for the STI.
This could possibly be avoided by taking STI as a const reference in
emitCodeAlignment.
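A hedged illustration of the pitfall with simplified signatures (not MCStreamer's real declarations):
```
// Illustrative only.
struct SubtargetInfo {};

void emitCodeAlignment(unsigned ByteAlignment, const SubtargetInfo *STI,
                       unsigned MaxBytesToEmit = 0) {
  (void)ByteAlignment; (void)STI; (void)MaxBytesToEmit;
}

void caller(const SubtargetInfo &STI) {
  emitCodeAlignment(16, 0);       // looks like MaxBytesToEmit = 0, but the
                                  // literal 0 becomes a null STI pointer and
                                  // MaxBytesToEmit silently takes its default
  emitCodeAlignment(16, &STI, 0); // unambiguous
}
```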
Differential Revision: https://reviews.llvm.org/D109425
When handling a register spill for an indirect debug value, the LiveDebugValues
pass doesn't add a DW_OP_deref operator. This may in some cases cause the
debugger to return the value's address instead of the value while the machine
register holding that address is spilled.
Differential revision: https://reviews.llvm.org/D109142
Earlier BundleEntryID used to be <OffloadKind>-<Triple>-<GPUArch>.
This used to work because the clang-offload-bundler didn't need
GPUArch explicitly for any bundling/unbundling action. With
unbundleArchive it needs GPUArch to ensure compatibility between
device-specific code objects. D93525 enforced that triples have
separators for all 4 components irrespective of the number of
components, like "amdgcn-amd-amdhsa--". This was required to
correctly parse a possible 4th environment component or a GPU.
However, this condition breaks backward compatibility with
archive libraries compiled with compilers older than D93525.
This patch allows triples to have any number of components, with or
without the extra separator for an empty environment field. Thus,
both of the following bundle entry IDs are the same:
openmp-amdgcn-amd-amdhsa--gfx906
openmp-amdgcn-amd-amdhsa-gfx906
Reviewed By: yaxunl, grokos
Differential Revision: https://reviews.llvm.org/D106809
This patch fixes a variety of crashes resulting from the `MemCpyOptPass`
casting `TypeSize` to a constant integer, whether implicitly or
explicitly.
Since `MemsetRanges` requires a constant size to work, all but one
of the fixes in this patch simply involve skipping the various
optimizations for scalable types as cleanly as possible.
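A hedged sketch of the bail-out pattern used for those cases (not the exact MemCpyOpt code):
```
// Illustrative only.
#include "llvm/Support/TypeSize.h"
#include <cstdint>

// MemsetRanges and friends need a fixed, compile-time-known byte count, so
// scalable sizes simply opt out of the transformation.
static bool getUsableFixedSize(llvm::TypeSize Size, uint64_t &FixedSize) {
  if (Size.isScalable())
    return false; // scalable vector type: skip the optimization
  FixedSize = Size.getFixedValue();
  return true;
}
```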
The optimization of `byval` parameters, however, has been updated to
work on scalable types in theory. In practice, this optimization is only
valid when the length of the `memcpy` is known to be larger than the
scalable type size, which is currently never the case. This could
perhaps be done in the future using the `vscale_range` attribute.
Some implicit casts have been left as they were, under the knowledge
they are only called on aggregate types. These should never be
scalably-sized.
Reviewed By: nikic, tra
Differential Revision: https://reviews.llvm.org/D109329
Enhance the generic register fallback code to support "eflags" register
name in addition to "rflags", as the former is used by gdbserver. This
permits lldb-server to recognize the generic flags register when
interfacing with gdbserver-style target.xml (i.e. without generic=""
attributes), and therefore aligns ABI plugins' AugmentRegisterInfo()
between lldb-server and gdbserver.
Differential Revision: https://reviews.llvm.org/D108548