InstCombine tries to rewrite
%prod = mul nsw i64 %X, Scale
%acc = add nsw i64 %prod, Offset
%0 = alloca i8, i64 %acc, align 4
%1 = bitcast i8* %0 to i32*
Use ( %1 )
into
%prod = mul nsw i64 %X, Scale/4
%acc = add nsw i64 %prod, Offset/4
%0 = alloca i32, i64 %acc, align 4
Use (%0)
But it assumes Scale is unsigned and performs an unsigned division, so we
should bail out if Scale cannot safely be interpreted as an unsigned value.
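A minimal sketch of the kind of guard this implies, with hypothetical names (not
the actual InstCombine code):
```
#include "llvm/ADT/APInt.h"
#include "llvm/IR/Constants.h"

using namespace llvm;

// The fold divides Scale by the new element size with an unsigned division,
// so refuse any constant whose sign bit is set -- it cannot safely be treated
// as an unsigned value -- or that is not an exact multiple of the element size.
static bool canDivideScaleAsUnsigned(const Value *Scale, uint64_t NewEltSize) {
  const auto *C = dyn_cast<ConstantInt>(Scale);
  if (!C)
    return false;
  const APInt &V = C->getValue();
  return !V.isNegative() && V.urem(NewEltSize) == 0;
}
```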
Reviewed By: nikic
Differential Revision: https://reviews.llvm.org/D126546
Implement the R_AARCH64_ABS64 relocation entry. This relocation type is generated
when creating a static function pointer to a symbol.
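As a hedged illustration (not taken from the patch), compiling the following for
AArch64 ELF typically produces an R_AARCH64_ABS64 data relocation for the
initializer of `handler`, since it needs the absolute 64-bit address of
`callback`:
```
void callback() {}
void (*handler)() = &callback;  // static initializer: absolute address of callback

int main() { handler(); return 0; }
```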
Reviewed By: lhames, sgraenitz
Differential Revision: https://reviews.llvm.org/D126658
Implement R_AARCH64_LDST*_ABS_LO12_NC relocation entries by reusing the generic
PageOffset12 relocation edge. The difference from the MachO backend is that in
ELF the shift value is explicitly given by the relocation type. lld generates
the relocation type that matches the instruction bitwidth, so deriving the shift
value implicitly from the instruction bytes should be fine in typical use cases.
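A hedged sketch of how the shift can be derived from the instruction bytes
(names illustrative, not the exact JITLink helper):
```
#include <cstdint>

// For AArch64 LDR/STR (unsigned immediate), bits [31:30] hold log2 of the
// access size; a 128-bit SIMD access is encoded as size 00 with V=1 and
// opc[1]=1, so it gets shift 4.
static unsigned pageOffset12Shift(uint32_t Instr) {
  unsigned Size = Instr >> 30;
  unsigned V = (Instr >> 26) & 0x1;
  unsigned OpcHi = (Instr >> 23) & 0x1;
  if (Size == 0 && V && OpcHi)
    return 4;   // 128-bit (Q register) access
  return Size;  // 0 = byte, 1 = half, 2 = word, 3 = doubleword
}
```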
This updates existing asan allocator settings to use the same allocator settings that lsan uses, on platforms where they already match.
Differential Revision: https://reviews.llvm.org/D126927
This way downstream tools that read sanitizer output can differentiate OOM errors
reported by sanitizers from other sanitizer errors.
Changes:
- Introduce ErrorIsOOM for checking whether a platform-specific error code from an "mmap" is an OOM error.
- Add ReportOOMError, which just prepends this error message to the start of a Report call (a standalone sketch of the idea follows this list).
- Replace some Reports for OOMs with calls to ReportOOMError.
- Update necessary tests.
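A standalone, hedged sketch of the idea (not the actual sanitizer_common
implementation; the helper name and tool string are illustrative):
```
#include <cstdarg>
#include <cstdio>

// Prepend a fixed, greppable marker so downstream tools can tell OOM failures
// apart from other sanitizer errors.
static void ReportOOMErrorSketch(const char *tool, const char *fmt, ...) {
  std::fprintf(stderr, "ERROR: %s: out of memory: ", tool);
  va_list args;
  va_start(args, fmt);
  std::vfprintf(stderr, fmt, args);
  va_end(args);
}

// Example: ReportOOMErrorSketch("AddressSanitizer",
//                               "failed to mmap 0x%zx bytes\n", size);
```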
Differential Revision: https://reviews.llvm.org/D127161
If we don't demand low bits and it is valid to pre-shift a constant:
(C2 >> X) << C1 --> (C2 << C1) >> X
https://alive2.llvm.org/ce/z/_UzTMP
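A standalone spot-check of the equivalence for one sample constant (illustration
only; the Alive2 link above is the general proof):
```
#include <cassert>
#include <cstdint>

int main() {
  const uint64_t C2 = 0xF0, C1 = 4;
  // No demanded low bits are lost for these shift amounts, so both forms agree.
  for (uint64_t X = 0; X <= 4; ++X)
    assert(((C2 >> X) << C1) == ((C2 << C1) >> X));
  return 0;
}
```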
This is the reverse-order shift sibling to 82040d414b (D127122).
It seems likely that we would want to add this to the SDAG version of
the code too to keep it on par with IR.
* The yaml input is inlined into the test file.
* Unnecessary members and fields are removed.
Reviewed By: thieta
Differential Revision: https://reviews.llvm.org/D127235
Replace the fixed buffer in SymbolizerProcess with InternalScopedString,
and simply append to it when reading data.
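A standalone, hedged analogue of the change (InternalScopedString itself is a
sanitizer-internal type, so std::string stands in for it here):
```
#include <string>
#include <unistd.h>

// Read symbolizer output by appending each chunk to a growable buffer instead
// of filling a fixed-size array that can truncate long responses.
static bool ReadAll(int fd, std::string &out) {
  char chunk[4096];
  for (;;) {
    ssize_t n = read(fd, chunk, sizeof(chunk));
    if (n < 0)
      return false;  // read error
    if (n == 0)
      return true;   // EOF
    out.append(chunk, static_cast<size_t>(n));
  }
}
```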
Fixes #55460
Reviewed By: vitalybuka, leonardchan
Differential Revision: https://reviews.llvm.org/D126580
A change to the allocation characteristics of MDNodes, introducing the ability
to add operands one at a time. This functionality is restricted to MDTuples.
Reviewed By: dexonsmith
Differential Revision: https://reviews.llvm.org/D125998
This test depends on multiple threads with one of them
hitting a watchpoint at the same time as a breakpoint, and
can fail because of the way arm64 watchpoints are handled.
I added skips to most of these via
```
commit bef4da4a6a
Author: Jason Molenda <jason@molenda.com>
Date: Wed May 25 16:05:16 2022 -0700
Skip testing of watchpoint hit-count/ignore-count on multithreaded
```
but missed that this test is susceptible to the same issue.
The `init` and `tensor` ops are renamed (and one moved to another dialect).
Reviewed By: springerm
Differential Revision: https://reviews.llvm.org/D127169
Reverts commit 1469ebf838 (original commit)
Reverts commit a392a39f75 (build fix for above commit)
The commit broke tests in out-of-tree projects, indicating that some logical
error was made in the previous change that is not covered by the current tests.
On macOS, a process will be launched with /usr/lib/dyld (the
dynamic linker) and the main binary by the kernel. The
first thing the standalone dyld will do is call into the dyld
in the shared cache image. This patch tracks the transition
between the two dylds at the very beginning of process startup.
In DynamicLoaderMacOS::NotifyBreakpointHit() there are two new
cases handled:
`dyld_image_dyld_moved`, which is the launched /usr/lib/dyld indicating
that it is about to call into the shared-cache dyld and evict itself.
lldb will remove the notification breakpoint it set, clear the binary
image list entirely, get the notification function pointer value out
of the dyld_all_image_infos struct (which is the notification fptr
in the to-be-run shared-cache dyld) and put an address breakpoint
there.
`dyld_notify_adding` is then called by the shared-cache dyld, and we
detect this case by noticing that we have an empty binary image list,
normally impossible, and treating this as if we'd just started a
process attach/launch.
Differential Revision: https://reviews.llvm.org/D127247
rdar://84222158
The debug mode has been broken pretty much ever since it was shipped
because it was possible to enable the debug mode in user code without
actually enabling it in the dylib, leading to ODR violations that
caused various kinds of failures.
This commit makes the debug mode a knob that is configured when
building the library and which can't be changed afterwards. This is
less flexible for users; however, it will actually work as intended
and it will allow us, in the future, to add various kinds of checks
that do not assume the same ABI as the normal library. Furthermore,
this will make the debug mode more robust, which means that vendors
might be more tempted to support it properly, which hasn't been the
case with the current debug mode.
This patch shouldn't break any user code, except folks who are building
against a library that doesn't have the debug mode enabled and who try
to enable the debug mode in their code. Such users will get a compile-time
error explaining that this configuration isn't supported anymore.
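As a hedged illustration of that compile-time error (the macro names below are
hypothetical placeholders, not the actual libc++ configuration macros):
```
// The library bakes in whether the debug mode was enabled when it was built;
// user code that requests the debug mode against a non-debug build hits a hard
// error instead of silently creating an ODR mismatch.
#if defined(USER_REQUESTED_DEBUG_MODE) && !defined(LIBRARY_BUILT_WITH_DEBUG_MODE)
#  error "The debug mode is not supported by this build of the library; rebuild the library with the debug mode enabled."
#endif
```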
In the future, we should further increase the granularity of the debug
mode checks so that we can cherry-pick which checks to enable, like we
do for unspecified behavior randomization.
Differential Revision: https://reviews.llvm.org/D122941
c2eccc6 introduced a call to setHasNoUnsignedWrap which implicitly assumes that Inst is an OverflowingBinaryOperator. This is frequently untrue, but was not caught because cast<Ty>(X) has been broken, see https://discourse.llvm.org/t/cast-x-is-broken-implications-and-proposal-to-address/63033 for context.
I considered reverting this, but since doing so re-introduces a nasty miscompile of its own, I decided to fix forward instead.
I'll note that this is a particularly nasty form of the cast<Ty>(X) issue. Because the cast was succeeding unexpectedly, we were writing data to instructions which weren't OBOs. This could result in near-arbitrary data or memory corruption. I'm a bit shocked that the sanitizers didn't find this TBH.
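A minimal sketch of the kind of guard the fix-forward needs (not necessarily the
exact committed code; `HasNUW` stands in for whatever flag value the caller
computed):
```
#include "llvm/IR/Instruction.h"
#include "llvm/IR/Operator.h"

using namespace llvm;

// Only write wrap flags onto instructions that really are
// OverflowingBinaryOperators; everything else is left untouched.
static void setNUWIfApplicable(Instruction *Inst, bool HasNUW) {
  if (isa<OverflowingBinaryOperator>(Inst))
    Inst->setHasNoUnsignedWrap(HasNUW);
}
```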
The separate isLoadStoreImm12 predicate will be used for validating ELF/aarch64
ldst relocation types.
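A hedged sketch of what such a predicate looks like (the mask values follow the
AArch64 LDR/STR unsigned-immediate encoding; the real helper lives in the
JITLink aarch64 support code):
```
#include <cstdint>

// Match AArch64 LDR/STR (immediate, unsigned offset) encodings -- the class of
// instructions that the ELF LDST*_ABS_LO12_NC relocations patch.
static bool isLoadStoreImm12(uint32_t Instr) {
  constexpr uint32_t LoadStoreImm12Mask = 0x3b000000;
  return (Instr & LoadStoreImm12Mask) == 0x39000000;
}
```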
Reviewed By: lhames, sgraenitz
Differential Revision: https://reviews.llvm.org/D126628
Summary:
The OffloadBinary uses a convenience struct to help manage the memory
that will be serialized using the binary format. This currently uses a
reference to an existing buffer, but this should own the memory instead
so it is easier to work with, seeing as its only current use requires
saving the buffer anyway.
Summary:
The OffloadBinary wraps around an embedded device image, commonly an ELF
or LLVM BC file. These file formats have alignment requirements for
parsing, so if the image is stored at an unaligned offset from the
beginning of the file we will be unable to parse the embedded image
without copying the image buffer. This patch adds alignment padding
before the binary image is appended to ensure we can parse the symbolic
file it contains in-place without copying memory.
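A hedged, standalone sketch of the padding idea (not the actual OffloadBinary
writer code):
```
#include <string>

// Pad the output to the image's required alignment before appending it, so the
// embedded ELF or bitcode starts at an aligned offset and can be parsed
// in place.
static void appendAligned(std::string &Out, const std::string &Image,
                          std::size_t Alignment) {
  std::size_t Misalign = Out.size() % Alignment;
  if (Misalign != 0)
    Out.append(Alignment - Misalign, '\0');  // zero padding up to the boundary
  Out += Image;
}
```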
This reverts commit 0809f63826. The patch appears not to have included corresponding isa<Ty> support.
This was revealed when reintroducing the required isa<Ty> asserts in cast<Ty>. See https://discourse.llvm.org/t/cast-x-is-broken-implications-and-proposal-to-address/63033 for context.
Here's the template instantiation error:
In file included from /home/preames/llvm-repo/llvm-project/llvm/unittests/Support/Casting.cpp:9:
/home/preames/llvm-repo/llvm-project/llvm/include/llvm/Support/Casting.h: In instantiation of ‘static bool llvm::isa_impl<To, From, Enabler>::doit(const From&) [with To = llvm::bar*; From = llvm::bar; Enabler = void]’:
/home/preames/llvm-repo/llvm-project/llvm/include/llvm/Support/Casting.h:110:36: required from ‘static bool llvm::isa_impl_cl<To, const From*>::doit(const From*) [with To = llvm::bar*; From = llvm::bar]’
/home/preames/llvm-repo/llvm-project/llvm/include/llvm/Support/Casting.h:137:41: required from ‘static bool llvm::isa_impl_wrap<To, FromTy, FromTy>::doit(const FromTy&) [with To = llvm::bar*; FromTy = const llvm::bar*]’
/home/preames/llvm-repo/llvm-project/llvm/include/llvm/Support/Casting.h:129:13: required from ‘static bool llvm::isa_impl_wrap<To, From, SimpleFrom>::doit(const From&) [with To = llvm::bar*; From = const llvm::bar* const; SimpleFrom = const llvm::bar*]’
/home/preames/llvm-repo/llvm-project/llvm/include/llvm/Support/Casting.h:263:62: required from ‘static bool llvm::CastIsPossible<To, From, Enable>::isPossible(const From&) [with To = llvm::bar*; From = const llvm::bar*; Enable = void]’
/home/preames/llvm-repo/llvm-project/llvm/include/llvm/Support/Casting.h:517:38: required from ‘static bool llvm::CastInfo<To, From, typename std::enable_if<(! llvm::is_simple_type<From>::value), void>::type>::isPossible(From&) [with To = llvm::bar*; From = llvm::bar* const]’
/home/preames/llvm-repo/llvm-project/llvm/include/llvm/Support/Casting.h:556:46: required from ‘bool llvm::isa(const From&) [with To = llvm::bar*; From = llvm::bar*]’
/home/preames/llvm-repo/llvm-project/llvm/include/llvm/Support/Casting.h:585:3: required from ‘decltype(auto) llvm::cast(From*) [with To = llvm::bar*; From = llvm::bar]’
/home/preames/llvm-repo/llvm-project/llvm/unittests/Support/Casting.cpp:181:27: required from here
/home/preames/llvm-repo/llvm-project/llvm/include/llvm/Support/Casting.h:64:64: error: ‘classof’ is not a member of ‘llvm::bar*’
64 | static inline bool doit(const From &Val) { return To::classof(&Val); }
Enhance memchr libcall folder to handle constant arrays consisting
of one or two sequences of consecutive equal characters.
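For illustration only (not from the patch), this is the kind of call the folder
can now evaluate: a constant array made of one or two runs of equal characters:
```
#include <cassert>
#include <cstring>

int main() {
  static const char a[] = "aaaabbb";  // two runs: 'a' x 4, then 'b' x 3
  assert(memchr(a, 'b', sizeof(a) - 1) == a + 4);
  assert(memchr(a, 'c', sizeof(a) - 1) == nullptr);
  return 0;
}
```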
Reviewed By: nikic
Differential Revision: https://reviews.llvm.org/D126515
There was an issue with encoding wide (>64 bit) instructions on
BigEndian hosts, which is fixed in D127195. Therefore reland this.
gfx11 adds the ability to use dpp modifiers on vop3 instructions.
This patch adds machine code layer support for that. The MCCodeEmitter
is changed to use APInt instead of uint64_t to support these wider
instructions.
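A hedged sketch of the wide-encoding emission (not the actual AMDGPU
MCCodeEmitter code), which also shows why the result is independent of host
endianness, the issue addressed in D127195:
```
#include "llvm/ADT/APInt.h"
#include "llvm/Support/raw_ostream.h"

// Emit an encoding that may be wider than 64 bits byte-by-byte from an APInt,
// always in little-endian order regardless of the host.
static void emitLittleEndian(const llvm::APInt &Encoding, unsigned NumBytes,
                             llvm::raw_ostream &OS) {
  for (unsigned I = 0; I != NumBytes; ++I)
    OS << static_cast<char>(Encoding.extractBitsAsZExtValue(8, 8 * I));
}
```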
Patch 16/N for upstreaming of AMDGPU gfx11 architecture
Differential Revision: https://reviews.llvm.org/D126483
There are 3 places where we were using WASM_SEC_TAG as the "last" known
section type, which requires updating (or leaves a bug) when a new known
section type is added. Instead add a "last type" to the enum for this
purpose.
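A hedged sketch of the pattern (enumerator names illustrative, not the exact
wasm definitions):
```
// Keep an explicit "last known" member so range checks don't silently go stale
// when a new section type is appended to the enum.
enum SectionType : unsigned {
  SEC_CUSTOM = 0,
  SEC_TYPE,
  // ... other known section types ...
  SEC_TAG,
  SEC_LAST_KNOWN = SEC_TAG,  // update here instead of in every range check
};

inline bool isKnownSection(unsigned Type) { return Type <= SEC_LAST_KNOWN; }
```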
Differential Revision: https://reviews.llvm.org/D127164
This patch moves the aarch64 fixup logic from the MachO/arm64 backend to
aarch64.h header so that it can be re-used in the ELF/aarch64 backend. This
significantly expands relocation support in the ELF/aarch64 backend.
Reviewed By: lhames, sgraenitz
Differential Revision: https://reviews.llvm.org/D126286
This has been superseded by the llvm/Support/VCSRevision.h header. So
far as I can tell, nothing in the CMake build sets LLVM_VERSION_INFO. It
was always undefined, and the ifdefs using it were dead. However, CMake
is very flexible, so it's possible that I missed some ways to set this
variable. One could, for example, probably pass -DLLVM_VERSION_INFO=x on
the command line and get that through to configure_file, or set the
variable in an obscure way (`set(${proj}_VERSION_INFO "x")`). I'm
reasonably confident that isn't happening, but I'd like a second
opinion.
Update the Bazel and gn builds accordingly.
Differential Revision: https://reviews.llvm.org/D126977