This commit adds two tests about template class instantiation in a
transitively imported module. They are used as pre-commit tests for
subsequent patches.
Depends on D115940 for the `Binary_rv_vr_vv` pattern class op isel
fragment used for divisions.
Reviewed By: kaz7
Differential Revision: https://reviews.llvm.org/D116035
This fixes the assertion failure reported at
https://reviews.llvm.org/D114889#3198921 with a straightforward
check, until the cleaner fix in D115924 can be reapplied.
- Improve extension description.
- Rename "What is DWARF?" section to better reflect what it is
describing.
Reviewed By: kzhuravl
Differential Revision: https://reviews.llvm.org/D116077
Matteo Croce reported a bpf backend fatal error in
https://github.com/llvm/llvm-project/issues/52779
A simplified case looks like:
$ cat bug.c
extern int do_smth(int);
int test() {
return __builtin_btf_type_id(*(typeof(do_smth) *)do_smth, 1);
}
$ clang -target bpf -O2 -g -c bug.c
fatal error: error in backend: Empty type name for BTF_TYPE_ID_REMOTE reloc
...
The reason for the fatal error is that the relocation is against
a DISubroutineType like type 13 below:
!10 = !DIBasicType(name: "int", size: 32, encoding: DW_ATE_signed)
!11 = !{}
!12 = !DILocation(line: 3, column: 10, scope: !7)
!13 = !DISubroutineType(types: !14)
!14 = !{!10, !10}
The DISubroutineType doesn't have a name, so there is no way for the
downstream bpfloader/kernel to do a proper relocation for it.
But we can improve the error message to be more specific for this case.
The patch improves the error message to:
fatal error: error in backend: SubroutineType not supported for BTF_TYPE_ID_REMOTE reloc
Differential Revision: https://reviews.llvm.org/D116063
Summary: When disassembling, symbolize a branch target operand
to print a label instead of a real address.
Reviewed By: shchenz
Differential Revision: https://reviews.llvm.org/D114492
This also makes the sanitizer_stoptheworld_test cross-platform by using the STL, rather than pthread.
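For illustration only (this is not the actual test code), the portability change amounts to replacing a pthread_create/pthread_join pair and its void* trampoline with std::thread, roughly:
```
#include <thread>

// Portable sketch: std::thread replaces the POSIX-only
// pthread_create/pthread_join pair.
static void SpawnAndJoinWorker() {
  std::thread Worker([] {
    // ... body that the stop-the-world machinery interrupts ...
  });
  Worker.join();
}
```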
Reviewed By: vitalybuka
Differential Revision: https://reviews.llvm.org/D115204
In 20a895c4be, we introduced `finalizeOptimizationRemarks()` to make sure we flush the diagnostic remarks file in case the linker doesn't call the global destructors before exiting.
In https://reviews.llvm.org/D73597, we added optimization remarks for removed functions for debugging or for detecting dead code.
However, if PreOptModuleHook or PostInternalizeModuleHook is defined (e.g. `--plugin-opt=emit-llvm` is passed to the linker), we do not call `finalizeOptimizationRemarks()` and therefore end up with an incomplete optimization remarks file.
This patch makes sure we flush the diagnostic remarks file when PreOptModuleHook or PostInternalizeModuleHook is defined.
Reviewed By: tejohnson, MaskRay
Differential Revision: https://reviews.llvm.org/D115417
This is a reapply of a8a51fe5, which was reverted in 1ba99e due to a failing compiler-rt test. That test was a false positive because it checked for asan failures without accounting for the fact that the call could be validly optimized out. I hopefully managed to stabilize that test in 9b955f. (That is a speculative fix, since the disk space needed to build the compiler-rt tests locally is absurd.)
Original commit message follows:
The majority of this change is sinking logic from instcombine into MemoryLocation so that it can be generically reused. If we have a call with a single analyzable write to an argument, we can treat it as if it were a store of unknown size.
Merging the code in this way unblocks DSE in the store-to-dead-memory code paths. In theory, it should also enable classic DSE of such calls, but the code does not yet know how to use object sizes to refine unknown access bounds.
In addition, this makes the isAllocRemovable path slightly stronger by reusing the libfunc and additional-intrinsic handling that is already in getForDest.
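As a rough sketch of the resulting interface (hedged: the exact overload and return type live in MemoryLocation.h and may differ slightly), a pass can ask for the written-to location of a call and receive a location whose size is unknown:
```
#include "llvm/ADT/Optional.h"
#include "llvm/Analysis/MemoryLocation.h"
#include "llvm/Analysis/TargetLibraryInfo.h"
#include "llvm/IR/InstrTypes.h"

using namespace llvm;

// If CB has a single analyzable write to one of its arguments (e.g. the
// destination of strcpy), return that location. Its size is typically
// "everything from the pointer onward within the underlying object",
// i.e. a store of unknown size.
static Optional<MemoryLocation> writtenLocation(const CallBase *CB,
                                                const TargetLibraryInfo &TLI) {
  return MemoryLocation::getForDest(CB, TLI);
}
```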
Differential Revision: https://reviews.llvm.org/D115904
This attempts to adjust the test to still exercise the expected codepath after D115904. This test is fundamentally rather fragile.
Unfortunately, I have not been able to confirm whether this workaround works or not. Running check-all with compiler-rt blows through an additional 30GB of disk space with my build config, which exceeds my local disk space.
One of the uses of `LTOCodeGenerator` is as a combined middle end and back end. Sometimes
it is very helpful to access the optimized module, especially to get information out of it.
If that information can be changed by optimization, it cannot be obtained before the
module is added to `LTOCodeGenerator`. This patch adds a function,
`LTOCodeGenerator::getMergedModule`, to access the `MergedModule`.
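A minimal usage sketch (hedged: the exact signature of the new accessor may differ, and the helper name below is made up):
```
#include "llvm/IR/Module.h"
#include "llvm/LTO/legacy/LTOCodeGenerator.h"
#include "llvm/Support/raw_ostream.h"

using namespace llvm;

// After the inputs have been added and optimized, inspect the merged module,
// e.g. to list the functions that survived internalization and optimization.
static void listSurvivingFunctions(LTOCodeGenerator &CG) {
  const Module &M = CG.getMergedModule(); // accessor added by this patch
  for (const Function &F : M)
    if (!F.isDeclaration())
      errs() << F.getName() << "\n";
}
```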
Reviewed By: steven_wu
Differential Revision: https://reviews.llvm.org/D114201
These actions should only be used for adjusting the register types
(and the memory type as needed to satisfy the register
type). Unaligned accesses should be split as a type of lowering.
This has the effect of improving the code in many cases since now we
produce zextloads instead of separate loads with ands. The load/store
legality rules still seem far more complicated than necessary though.
Control-Flow Integrity (CFI) replaces references to address-taken
functions with pointers to the CFI jump table. This is a problem
for low-level code, such as operating system kernels, which may
need the address of an actual function body without the jump table
indirection.
This change adds the __builtin_function_start() builtin, which
accepts an argument that can be constant-evaluated to a function,
and returns the address of the function body.
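For illustration (the handler name here is made up):
```
extern "C" void irq_handler() { /* ... */ }

// With CFI enabled, &irq_handler may point at a CFI jump table entry,
// while __builtin_function_start(irq_handler) evaluates to the address
// of the actual function body.
void *via_jump_table = reinterpret_cast<void *>(&irq_handler);
void *function_body  = __builtin_function_start(irq_handler);
```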
Link: https://github.com/ClangBuiltLinux/linux/issues/1353
Depends on D108478
Reviewed By: pcc, rjmccall
Differential Revision: https://reviews.llvm.org/D108479
With Control-Flow Integrity (CFI), the LowerTypeTests pass replaces
function references with CFI jump table references, which is a problem
for low-level code that needs the address of the actual function body.
For example, in the Linux kernel, the code that sets up interrupt
handlers needs to take the address of the interrupt handler function
instead of the CFI jump table, as the jump table may not even be mapped
into memory when an interrupt is triggered.
This change adds the no_cfi constant type, which wraps function
references in a value that LowerTypeTestsModule::replaceCfiUses does not
replace.
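A minimal sketch of the producer side (hedged: assuming the C++ API exposes the constant via a NoCFIValue helper; in textual IR it prints as `no_cfi @fn`):
```
#include "llvm/IR/Constants.h"
#include "llvm/IR/Function.h"

using namespace llvm;

// Reference F's real body. LowerTypeTestsModule::replaceCfiUses leaves this
// wrapped reference alone instead of redirecting it to the CFI jump table.
static Constant *referenceFunctionBody(Function &F) {
  return NoCFIValue::get(&F);
}
```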
Link: https://github.com/ClangBuiltLinux/linux/issues/1353
Reviewed By: nickdesaulniers, pcc
Differential Revision: https://reviews.llvm.org/D108478
Currently the behavior with relative paths is pretty broken. It differs
between external shell and internal shell because the path resolution
is done with a different working directory. With the internal shell,
it's resolved relative to the directory from which lit is executed,
whereas with the external shell it's resolved relative to where the
test case is executed. To make matters worse, with the internal shell
the file path of a binary looked up with `which` is returned relative
to the directory from which lit is executed, but the binary is then run
from the test execution directory. That means that relative paths with
the internal shell give a `[Errno 2] No such file or directory` error
instead of the expected `command not found`.
To address these issues, this patch makes lit interpret relative paths
as relative to the directory from which lit was invoked and modifies
`which` to return absolute paths, matching the behavior of its
Unix namesake.
See https://groups.google.com/g/llvm-dev/c/KzMWlOXR98Y/m/QJoqn0U5HAAJ
Reviewed By: yln
Differential Revision: https://reviews.llvm.org/D115486
This patch allows the user to request all resources of a particular
layer (or core-attribute). The syntax of KMP_HW_SUBSET is modified
so the number of units requested is optional or can be replaced with an
'*' character.
e.g., KMP_HW_SUBSET=c:intel_atom@3 will use all the cores after offset 3
e.g., KMP_HW_SUBSET=*c:intel_core will use all the big cores
e.g., KMP_HW_SUBSET=*s,*c,1t will use all the sockets, all the cores in
each socket, and 1 thread per core.
Differential Revision: https://reviews.llvm.org/D115826
Test a range of acceptable forms of co_max calls, including
combinations of keyword and non-keyword actual arguments of
numeric types. Also test that several invalid forms of
co_max call generate the correct error messages.
Reviewed By: ktras
Differential Revision: https://reviews.llvm.org/D113083
This reverts 3816c53f04 and removes follow-up
fixups.
The original intention was to report errors earlier (at posix_fallocate time) rather than
later for ld.lld, but it appears to cause some problems which make it not free.
* FreeBSD ZFS: EINVAL, not too bad.
* FreeBSD UFS: according to khng "devastatingly slow on freebsd because UFS on freebsd does not have preallocation support like illumos. It zero-fills."
* NetBSD: maybe EOPNOTSUPP
* Linux tmpfs: unless tmpfs is set up to use huge pages (requires CONFIG_TRANSPARENT_HUGE_PAGECACHE=y), I can consistently demonstrate ~300ms delay for a 1.4GiB output.
* Linux ext4: I don't measure any benefit, either backed by a hard disk or by a file in tmpfs.
* The current code organization of `defined(HAVE_POSIX_FALLOCATE)` costs us a macro dispatch for AIX.
I think we should just remove it. If posix_fallocate ever shows a demonstrable benefit,
it is likely Linux-specific, will not need HAVE_POSIX_FALLOCATE, and could possibly be opt-in for some specific programs.
In a filesystem with CoW and compression, the ENOSPC benefit may be lost as well.
Reviewed By: khng300
Differential Revision: https://reviews.llvm.org/D115957
Test various acceptable forms of co_min calls, including
combinations of keyword and non-keyword actual arguments of
integer, real, and character types. Also test that several
invalid forms of co_min call generate the correct error messages.
Reviewed By: ktras
Differential Revision: https://reviews.llvm.org/D113077
writeSections is typically a bottleneck.
This was used to track down the following bottlenecks:
* Output section .rela.dyn (9115d75117)
* Output section .debug_str (3aae04c744)
* posix_fallocate is slow for Linux tmpfs: D115957
Reviewed By: ikudrin
Differential Revision: https://reviews.llvm.org/D115984
Test a range of acceptable forms of co_reduce calls, including
combinations of keyword and non-keyword actual arguments of
numeric types. Also test that several invalid forms of
co_reduce call generate the correct error messages.
Reviewed By: kiranchandramohan, ktras, ekieri
Differential Revision: https://reviews.llvm.org/D113086
When P0883R2 was initially implemented in D103769, `#pragma clang deprecated` didn't exist yet.
We also forgot to clean up usages in libc++ itself.
This takes care of both.
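As a hedged illustration of the mechanism (the macro and message below are placeholders, not the actual libc++ code):
```
#define LEGACY_INIT(x) (x)
#pragma clang deprecated(LEGACY_INIT, "use ordinary initialization instead")

// Any later expansion of LEGACY_INIT now emits a deprecation warning.
int value = LEGACY_INIT(42);
```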
Differential Revision: https://reviews.llvm.org/D115995
These conversions are better suited to being applied at the whole-tensor
level. Applying them as canonicalizations ends up triggering such
canonicalizations at all levels of the stack, which might be
undesirable. For example, some of the resulting code patterns won't
bufferize in-place and need additional stack buffers. It is best to be
more deliberate about when these canonicalizations apply.
Differential Revision: https://reviews.llvm.org/D115912
GCC's powerpc32 port predefines `PPC` as a macro in GNU C++ mode in some configurations (Linux,
FreeBSD, and some others; see `builtin_define_std ("PPC");` in gcc/config/rs6000).
```
% powerpc-linux-gnu-g++ -E -dM -xc++ /dev/null -o - | grep -w PPC
#define PPC 1
```
Fixes https://bugs.gentoo.org/829599
Reviewed By: thesamesam
Differential Revision: https://reviews.llvm.org/D116017
There is a small chance that the slot may not be queued in TraceSwitchPart.
This can happen if the slot has kEpochLast epoch and another thread
in FindSlotAndLock discovered that it's exhausted and removed it from
the slot queue. kEpochLast can happen in 2 cases: (1) if TraceSwitchPart
was called with the slot locked and epoch already at kEpochLast,
or (2) if we've acquired a new slot in SlotLock at the beginning
of the function and the slot was at kEpochLast - 1, so after the increment
in SlotAttachAndLock it becomes kEpochLast.
If this happens, we crash in ctx->slot_queue.Remove(thr->slot).
Skip the requeueing if the slot is not queued.
The slot is exhausted, so it must not be in ctx->slot_queue.
The existing stress test triggers this with very small probability.
I am not sure how to make this condition more likely to be triggered;
it has evaded lots of testing.
Depends on D116040.
Reviewed By: melver
Differential Revision: https://reviews.llvm.org/D116041
SlotPairLocker calls SlotLock under ctx->multi_slot_mtx.
SlotLock can invoke the global reset DoReset if we are out of slots/epochs.
But DoReset locks ctx->multi_slot_mtx as well, which leads to a deadlock.
Resolve the deadlock by removing SlotPairLocker/multi_slot_mtx
and locking only the one slot for which we will do RestoreStack.
We need to lock that slot because RestoreStack accesses the slot journal.
But it's unclear why we need to lock the current slot.
Initially I did it just to be on the safe side (but at that time
we did not lock the second slot, so it was easy to just lock the current slot).
Reviewed By: melver
Differential Revision: https://reviews.llvm.org/D116040