When converting a store into a memset, we currently insert the new
MemoryDef after the store MemoryDef, which requires all uses to be
renamed to the new def using a whole block scan. Instead, we can
insert the new MemoryDef before the store and not rename uses,
because we know that the location is immediately overwritten, so
all uses should still refer to the old MemoryDef. Those uses will
get renamed when the old MemoryDef is actually dropped, which is
efficient.
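A rough sketch of the resulting MemorySSA update (names such as MSSA, MSSAU,
SI and NewMemSet are illustrative, not the exact MemCpyOpt code):
```
#include "llvm/Analysis/MemorySSA.h"
#include "llvm/Analysis/MemorySSAUpdater.h"
#include "llvm/IR/Instructions.h"

using namespace llvm;

// Place the new memset's MemoryDef *before* the store's MemoryDef, defined by
// the store's old defining access. Existing uses keep pointing at the store's
// def, so no block-wide renaming is needed.
static void updateMSSAForStoreToMemSet(MemorySSA &MSSA, MemorySSAUpdater &MSSAU,
                                       StoreInst *SI, Instruction *NewMemSet) {
  auto *StoreDef = cast<MemoryDef>(MSSA.getMemoryAccess(SI));
  auto *NewAccess = MSSAU.createMemoryAccessBefore(
      NewMemSet, StoreDef->getDefiningAccess(), StoreDef);
  MSSAU.insertDef(cast<MemoryDef>(NewAccess), /*RenameUses=*/false);
}
```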
I expect something similar can be done for some of the other MSSA
updates in MemCpyOpt. This is an alternative to D107513, at least
for this particular case.
Differential Revision: https://reviews.llvm.org/D107702
Clang diagnostics refer to identifier names in quotes.
This patch makes inline remarks conform to the convention.
New behavior:
```
% clang -O2 -Rpass=inline -Rpass-missed=inline -S a.c
a.c:4:25: remark: 'foo' inlined into 'bar' with (cost=-30, threshold=337) at callsite bar:0:25; [-Rpass=inline]
int bar(int a) { return foo(a); }
^
```
Reviewed By: hoy
Differential Revision: https://reviews.llvm.org/D107791
This enables querying shapes and values-as-shapes without mutating the IR
directly (e.g., towards enabling inference in separate analysis and
application steps, or inferring a function's shape from a constant at a
callsite). Add a new ShapeAdaptor that abstracts over whether the shape
comes from a Type, a ShapedTypeComponents or a DenseIntElementsAttribute.
This adds new accessors to ValueShapeRange to get the shape and the value
as a shape, but for now it doesn't restrict or remove the previous way of
accessing the Type via the Value; that does mean a less refined shape
could be accidentally queried, which will be restricted in a follow-up.
The Value query is currently restricted to what can be represented as a
shape, so it only supports cases where the output of constant subgraph
evaluation is a shape. I had considered making it more general, but
without a TBD extern attribute concept or some such, a user cannot today
uniformly avoid the overhead.
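As a rough illustration (the op, operand roles, and result construction below
are hypothetical, not part of the patch), an InferShapedTypeOpInterface
implementation can now query operands like this:
```
// Sketch of using the new ValueShapeRange accessors inside
// inferReturnTypeComponents (op and operand roles are made up).
LogicalResult MyReshapeOp::inferReturnTypeComponents(
    MLIRContext *context, Optional<Location> location,
    ValueShapeRange operands, DictionaryAttr attributes, RegionRange regions,
    SmallVectorImpl<ShapedTypeComponents> &inferredReturnShapes) {
  // Shape of operand 0, whether it came from a Type, ShapedTypeComponents
  // or a constant attribute.
  ShapeAdaptor inputShape = operands.getShape(0);
  // Operand 1 interpreted *as a shape*, if constant evaluation of its value
  // yields one (e.g. the target shape of a reshape).
  ShapeAdaptor targetShape = operands.getValueAsShape(1);
  if (!inputShape.hasRank() || !targetShape.hasRank())
    return failure();
  SmallVector<int64_t> dims;
  targetShape.getDims(dims);
  inferredReturnShapes.push_back(ShapedTypeComponents(dims));
  return success();
}
```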
Update TOSA ops and also the shape inference pass.
Differential Revision: https://reviews.llvm.org/D107768
A mismatch between skeleton and DWO units was fixed in D106270. Since both
have type DWARFUnit, such a mismatch is difficult to debug, so it is better
to guard against it in future changes.
Reviewed By: kimanh, clayborg
Differential Revision: https://reviews.llvm.org/D107659
The intrinsics have an extra chunk of known bits logic
compared to the normal cmp+select idiom. That allows
folding the icmp in each case to something better, but
that then opposes the canonical form of min/max that
we try to form for a select.
I'm carving out a narrow exception to preserve all
existing regression tests while avoiding the inf-loop.
It seems unlikely that this is the only bug like this
left, but this should fix:
https://llvm.org/PR51419
This patch adds the `AlwaysInline` attribute to the `__kmpc_parallel_51`
device runtime call. This improves the inlining heuristics, encouraging
the indirect function pointer argument to also be inlined. This greatly
improves performance for a few applications whose outlined regions were
not inlined otherwise.
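A purely illustrative sketch of the effect (the actual patch annotates the
__kmpc_parallel_51 declaration in the device runtime; the helper and
module-level approach below are assumptions):
```
#include "llvm/IR/Function.h"
#include "llvm/IR/Module.h"

// Mark the parallel runtime entry always_inline so that, once it is inlined,
// the outlined-region function pointer passed to it can be inlined as well.
static void markParallel51AlwaysInline(llvm::Module &M) {
  if (llvm::Function *Fn = M.getFunction("__kmpc_parallel_51"))
    Fn->addFnAttr(llvm::Attribute::AlwaysInline);
}
```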
Reviewed By: jdoerfert
Differential Revision: https://reviews.llvm.org/D107839
The new tsan runtime will need to compress addresses/PCs to fewer bits.
Add CompressAddr/RestoreAddr functions that compress addresses to 44 bits
and restore them back.
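A minimal sketch of the idea (the constants and the way the upper bits are
restored are illustrative, not the actual tsan code):
```
#include <cstdint>

using uptr = uint64_t;

constexpr uptr kCompressedAddrBits = 44;
constexpr uptr kCompressedAddrMask = (uptr(1) << kCompressedAddrBits) - 1;

// Keep only the low 44 bits of an address/PC.
inline uptr CompressAddr(uptr addr) { return addr & kCompressedAddrMask; }

// Restore a full address by re-attaching known-constant upper bits
// (region_base is a hypothetical stand-in for those bits).
inline uptr RestoreAddr(uptr compressed, uptr region_base) {
  return compressed | (region_base & ~kCompressedAddrMask);
}
```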
Depends on D107744.
Reviewed By: melver
Differential Revision: https://reviews.llvm.org/D107745
Rename mapping field selectors according to the code style.
Reuse the actual field names; there is no need to invent
a second set of names.
Depends on D107743.
Reviewed By: melver
Differential Revision: https://reviews.llvm.org/D107744
Currently there are 2 levels when selecting the active mapping:
the branchy ifdef tree + another ifdef tree in SelectMapping.
Moreover, there is an additional indirection for some platforms
via the HAS_48_BIT_ADDRESS_SPACE define. This makes the already complex
logic even more complex and almost impossible to read.
Remove one level of indirection and define the active mapping
in SelectMapping.
Depends on D107742.
Reviewed By: melver
Differential Revision: https://reviews.llvm.org/D107743
Remove direct uses of Mapping in preparation for removing the Mapping type
(which we already don't have for all platforms).
Remove dependence on HAS_48_BIT_ADDRESS_SPACE in preparation for removing it.
As far as I can see, for Apple/Mac platforms !HAS_48_BIT_ADDRESS_SPACE
simply means SANITIZER_IOS.
Depends on D107741.
Reviewed By: melver
Differential Revision: https://reviews.llvm.org/D107742
Move the mapping checking logic from startup to unit tests
and test all mappings instead of just the active one.
This makes it much more feasible to make any global changes
to the mappings since we have 17 of them.
Depends on D107740.
Reviewed By: vitalybuka, melver
Differential Revision: https://reviews.llvm.org/D107741
Currently we have ifdef's for Go/C++ and Windows/non-Windows
in MemToShadow, MemToMeta, ShadowToMem. This makes it impossible
to test all mappings on a single platform.
Make all these functions support a superset of mappings for
all platforms by defining missing mapping consts to 0.
E.g. we always do ^A+B, but if A and B are defined to 0,
then these operations become no-ops.
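For example (constant names are made up), the transformation can be written
unconditionally and collapses to the identity when both constants are 0:
```
#include <cstdint>

using uptr = uint64_t;

// A platform that doesn't use these steps simply defines both to 0.
constexpr uptr kShadowXor = 0;
constexpr uptr kShadowAdd = 0;

inline uptr MemToShadowSketch(uptr x) {
  return (x ^ kShadowXor) + kShadowAdd;  // no-op when both constants are 0
}
```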
Depends on D107739.
Reviewed By: melver
Differential Revision: https://reviews.llvm.org/D107740
Define all fields to 0 for all mappings.
This allows writing portable code and tests.
For all existing cases 0 values work out of the box
because we check if an address belongs to the range
and nothing belongs to the [0, 0] range.
Depends on D107738.
Reviewed By: melver
Differential Revision: https://reviews.llvm.org/D107739
Unify Go mapping naming with C++ naming to allow
writing portable code/tests that can work for both C++ and Go.
No functional changes.
Depends on D107737.
Reviewed By: melver
Differential Revision: https://reviews.llvm.org/D107738
First, the define conflicts with definition/testing of all mappings,
since it's not a global property anymore. Second, macros/ifdefs are bad.
Instead, define kMidAppMemBeg/End to 0 to denote that there is no "mid" range.
Depends on D107736.
Reviewed By: melver
Differential Revision: https://reviews.llvm.org/D107737
Add a FALLTHROUGH macro, portably defined to [[clang::fallthrough]].
We already have -Wimplicit-fallthrough enabled, and currently
there is no portable way to silence the warning.
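A sketch of the shape of the macro and its use (the exact guards in the real
change may differ):
```
// Guarded definition: clang understands [[clang::fallthrough]]; elsewhere
// the macro expands to nothing.
#if defined(__clang__)
#define FALLTHROUGH [[clang::fallthrough]]
#else
#define FALLTHROUGH
#endif

static int classify(int v) {
  switch (v) {
    case 0:
    case 1:
      v += 1;
      FALLTHROUGH;  // intentional: silences -Wimplicit-fallthrough
    case 2:
      return v;
    default:
      return -1;
  }
}
```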
Depends on D107735.
Reviewed By: melver
Differential Revision: https://reviews.llvm.org/D107736
Currently we have mapping selection duplicated 9 times.
Deduplicate it. No functional changes.
Depends on D107734.
Reviewed By: melver
Differential Revision: https://reviews.llvm.org/D107735
Currently we define/compile the mapping for a platform
only on that platform. This makes it impossible to unit-test
the mappings on a single platform, or even to build-test them.
We have 17 of them, and the Go mappings would be tested
only during a manual, episodic update of the Go runtime.
Define all mappings always, with unique names.
This will allow unit-testing them.
No functional changes.
Reviewed By: melver
Differential Revision: https://reviews.llvm.org/D107734
There were 2 issues reported on https://reviews.llvm.org/D105716:
1. FreeBSD pthread.h is annotated with thread-safety attributes,
   and this causes errors in gtest headers.
2. If sanitizers are compiled with older versions of clang
   (which support the annotations but have some false positives
   in the analysis that are not present in later versions of clang),
   compilation fails with errors.
Switch the errors to warnings by default.
Some CI bots enable COMPILER_RT_ENABLE_WERROR, which should
turn these warnings back into errors.
Reviewed By: nikic
Differential Revision: https://reviews.llvm.org/D107826
After switching tsan from the old mutex to the new sanitizer_common mutex,
we've observed a significant degradation of performance on a test.
The test effectively stresses a lock-free stack with 4 threads
with a mix of atomic_compare_exchange and atomic_load operations.
The former takes write lock, while the latter takes read lock.
It turned out the new mutex performs worse because readers don't
use active spinning, which results in a significant amount of thread
blocking/unblocking. The old tsan mutex used active spinning
for both writers and readers.
Add active spinning for readers.
Don't hand off the mutex to readers; instead, make them
compete for the mutex again after wake-up.
This makes readers and writers almost symmetric.
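A simplified sketch of the resulting behavior (this is not the
sanitizer_common implementation; the spin/block thresholds are made up):
```
#include <atomic>
#include <chrono>
#include <thread>

// Readers spin actively for a bounded number of iterations before blocking,
// and after waking up they compete for the lock again instead of having it
// handed to them, mirroring what writers already do.
class RWMutexSketch {
  // state_ == -1: held by a writer; state_ >= 0: number of active readers.
  std::atomic<int> state_{0};
  static constexpr int kActiveSpinIters = 100;

 public:
  void ReadLock() {
    for (int iter = 0;; ++iter) {
      int s = state_.load(std::memory_order_relaxed);
      if (s >= 0 &&
          state_.compare_exchange_weak(s, s + 1, std::memory_order_acquire))
        return;  // read lock acquired
      if (iter < kActiveSpinIters)
        std::this_thread::yield();  // active spinning
      else
        // Stand-in for futex-style blocking; on wake-up the reader loops
        // and competes for the lock again (no hand-off).
        std::this_thread::sleep_for(std::chrono::microseconds(100));
    }
  }
  void ReadUnlock() { state_.fetch_sub(1, std::memory_order_release); }

  void WriteLock() {
    for (int iter = 0;; ++iter) {
      int expected = 0;
      if (state_.compare_exchange_weak(expected, -1,
                                       std::memory_order_acquire))
        return;  // write lock acquired
      if (iter < kActiveSpinIters)
        std::this_thread::yield();
      else
        std::this_thread::sleep_for(std::chrono::microseconds(100));
    }
  }
  void WriteUnlock() { state_.store(0, std::memory_order_release); }
};
```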
Reviewed By: melver
Differential Revision: https://reviews.llvm.org/D107824
We may call lowerRelativeReference in MC to determine whether the target
supports this lowering. We should return nullptr instead of crashing
when we haven't implemented the real lowering.
Reviewed By: hubert.reinterpretcast
Differential Revision: https://reviews.llvm.org/D107830
When the upper bound is less than the lower bound, the extent is zero. This is
specified in section 8.5.8.2, paragraph 3.
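A small sketch of the rule (illustrative C++, not the front end's evaluation
code):
```
#include <algorithm>
#include <cstdint>

// Per the cited section: if the upper bound is less than the lower bound,
// the extent is zero rather than a negative number.
int64_t extent(int64_t lower, int64_t upper) {
  return std::max<int64_t>(0, upper - lower + 1);
}
// extent(1, 5) == 5, extent(5, 3) == 0
```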
Note that similar problems exist in the lowering code. This change only fixes
the problem for the front end.
I also added a test.
Differential Revision: https://reviews.llvm.org/D107832
The requested register class priorities weren't respected
globally. I'm not sure why this is a target option rather than just the
expected behavior (recently added in
1a6dc92be7). This avoids an allocation
failure when many wide tuple spills are introduced. I think this is a
workaround, since I would expect the allocation priority to be only a
performance hint, not a requirement. The allocator should be smarter
about when only a subregister needs to be spilled and restored.
This does regress a couple of degenerate store stress lit tests which
shouldn't be too important.
This patch defines the macro __HOS_AIX__ when the target is AIX, without
any dependency on the actual host. The macro indicates that the host is
AIX; defining it based on the target helps minimize porting pain for
existing code compiled with xlc/xlC. xlC never shipped cross-compiling
support, so the difference is not observable anyway.
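For illustration (the function below is hypothetical), existing code can keep
guarding host-specific paths with the macro:
```
#include <cstdio>

// Hypothetical example: code that used to rely on xlC defining __HOS_AIX__
// keeps working when compiled with clang targeting AIX.
void reportHostOS() {
#ifdef __HOS_AIX__
  std::puts("host OS: AIX");
#else
  std::puts("host OS: not AIX");
#endif
}
```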
This is a follow-up to the discussion in https://reviews.llvm.org/D107242.
Reviewed By: cebowleratibm, joerg
Differential Revision: https://reviews.llvm.org/D107825
All supported compilers now support __builtin_is_constant_evaluated, so
_LIBCPP_HAS_NO_BUILTIN_IS_CONSTANT_EVALUATED can be removed.
Reviewed By: ldionne, #libc, Quuxplusone
Differential Revision: https://reviews.llvm.org/D107239
Removed a redundant assignment from a condition, which caused gcc to emit the following error:
error: operation on ‘MoveData’ may be undefined [-Werror=sequence-point]
std::string, std::string_view, and absl::string_view all have a three-parameter version of find()
which has a "count" (or "n") parameter limiting the size of the substring to search. We don't want
to propose changing to absl::StrContains in those cases. This change fixes that and adds unit tests
to confirm.
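For example (the helper name is made up), the three-parameter form is not a
plain containment check and must not be rewritten to absl::StrContains:
```
#include <string>

// Only the first three characters of `prefix` participate in the search, so
// this is not equivalent to absl::StrContains(haystack, prefix) and the check
// should leave it alone.
bool containsFirstThreeChars(const std::string &haystack, const char *prefix) {
  return haystack.find(prefix, /*pos=*/0, /*count=*/3) != std::string::npos;
}
```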
Reviewed By: ymandel
Differential Revision: https://reviews.llvm.org/D107837
targetDataEnd and targetDataBegin compute CopyMember/copy differently,
and I don't see why they should. This patch eliminates one of those
differences by making a simplifying NFC change to targetDataEnd.
The change is NFC as follows. The change only affects the case when
`!UNIFIED_SHARED_MEMORY || HasCloseModifier`. In that case, the
following points are always true:
* The value of CopyMember is relevant later only if DelEntry = false.
* DelEntry = false only if one of the following is true:
* IsLast = false. In this case, it's always true that CopyMember
= false = IsLast.
* `MEMBER_OF && !PTR_AND_OBJ` is true. In this case, CopyMember =
IsLast.
* Thus, if CopyMember is relevant, CopyMember = IsLast.
Reviewed By: grokos
Differential Revision: https://reviews.llvm.org/D105990
Summary: Move the test case to an existing test file and remove the
duplicated file. The file was mistakenly added due to concerns about a
hidden bug (see https://reviews.llvm.org/D104381). Since it turned out
that the bug was already fixed by another revision
(https://reviews.llvm.org/D85817) and a corresponding test was added as
well, we can remove this file.
Differential Revision: https://reviews.llvm.org/D106152
Similar for sub, except sub isn't commutative.
Modify the existing and/or/xor folds to also work on ISD::SELECT
and not just RISCVISD::SELECT_CC. This is needed to make sure
we do this transform before type legalization turns i32 add/sub
into add/sub+sign_extend_inreg on RV64. If we don't do this before
that, the sign_extend_inreg will still be after the select.
Reviewed By: frasercrmck
Differential Revision: https://reviews.llvm.org/D107603
This patch implements paper P0692R1 from the C++20 standard: disable the
usual access checking rules for template argument names in a declaration
of a partial specialization, explicit instantiation, or explicit
specialization (C++20 13.7.5/10, 13.9.1/6).
Fixes: https://llvm.org/PR37424
This patch implements option *A* from P0692R1 and follows @rsmith's
suggestion from D78404.
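A small sketch of the kind of code this accepts (class and trait names are
illustrative):
```
template <class T> struct Trait {};

class Widget {
  class Impl {};  // private member type
};

// Accepted in C++20 per P0692R1: access checking is not performed on the
// template argument names of the explicit specialization.
template <> struct Trait<Widget::Impl> {};
```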
Reviewed By: krisb
Differential Revision: https://reviews.llvm.org/D92024
This is already done within InstCombine:
https://alive2.llvm.org/ce/z/MiGE22
...but leaving it out of analysis makes it
harder to avoid infinite loops there.
tsan in some cases (e.g. after fork from a multithreaded program, which arguably is problematic) increments ignore_interceptors and in that case runs just the intercepted functions and not their wrappers.
For realpath the interceptor handles the resolved_path == nullptr case though, so when ignore_interceptors is non-zero, realpath(".", nullptr) will fail instead of succeeding.
This patch instead uses the COMMON_INTERCEPT_FUNCTION_GLIBC_VER_MIN macro to bind to realpath@@GLIBC_2.3 whenever possible, a version which never supported NULL as the second argument.
(If that is not possible, it is likely either a glibc architecture whose oldest symbol version is more recent than 2.3, for which any realpath in glibc will DTRT, or an unsupported glibc older than 2.3.)
Reviewed By: dvyukov
Differential Revision: https://reviews.llvm.org/D107819