move _WIN64 and _WIN32 defines to lib/Basic/Targets/OSTargets.h
move WIN32, WIN64 and __MINGW64__ to addMinGWDefines
fixes __MINGW64__ not being defined for aarch64
adds WIN32 definition for x64
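As a rough sketch of the intent (addMinGWDefines is named in the commit above;
Builder.defineMacro is the usual Clang MacroBuilder API; the signature and the
Is64Bit condition are placeholders, not the verbatim upstream code):

  static void addMinGWDefines(MacroBuilder &Builder) {
    // Common MinGW macros, now emitted in one place for x86_64 and aarch64.
    Builder.defineMacro("WIN32");
    Builder.defineMacro("__MINGW32__");
    if (Is64Bit) {
      Builder.defineMacro("WIN64");
      Builder.defineMacro("__MINGW64__");
    }
  }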
Reviewers: mstorsjo
Differential Revision: https://reviews.llvm.org/D40285
llvm-svn: 318755
The CFG was built in a non-deterministic order because indirect
goto labels' declarations (LabelDecl's) were stored in an llvm::SmallSet
container. LabelDecl's are pointers, whose order is not deterministic,
and llvm::SmallSet sorts them by their non-deterministic addresses once
the "small" in-place capacity is exceeded. This leads to non-deterministic
processing of the elements of the container.
The fix is to use llvm::SmallSetVector, which was designed to have a
deterministic iteration order.
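A minimal sketch of the difference (the container usage is illustrative;
processLabel stands in for whatever the CFG builder does with each label):

  #include "llvm/ADT/SetVector.h"

  // SmallSetVector keeps elements unique *and* iterates in insertion order,
  // so the visitation order no longer depends on pointer values.
  llvm::SmallSetVector<LabelDecl *, 8> AddressTakenLabels;
  for (LabelDecl *LD : Labels)
    AddressTakenLabels.insert(LD);   // duplicate inserts are ignored
  for (LabelDecl *LD : AddressTakenLabels)
    processLabel(LD);                // visited in insertion order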
Patch by Ilya Palachev!
Differential Revision: https://reviews.llvm.org/D40073
llvm-svn: 318754
This patch introduces a couple of helper functions that make it
possible to handle the caching logic in a single place.
Differential Revision: https://reviews.llvm.org/D39953
llvm-svn: 318752
The CFG was built in a non-deterministic order because indirect
goto labels' declarations (LabelDecl's) were stored in an llvm::SmallSet
container. LabelDecl's are pointers, whose order is not deterministic,
and llvm::SmallSet sorts them by their non-deterministic addresses once
the "small" in-place capacity is exceeded. This leads to non-deterministic
processing of the elements of the container.
The fix is to use llvm::SmallSetVector, which was designed to have a
deterministic iteration order.
Patch by Ilya Palachev!
Differential Revision: https://reviews.llvm.org/D40073
llvm-svn: 318750
This implements [dcl.modules.export] from the C++ Modules TS, which lets a module re-export another module with the "export import" syntax.
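A minimal sketch of the syntax (module names invented for illustration):

  // In the interface unit of module 'speech':
  export module speech;
  export import greetings;   // re-export: importers of 'speech' also see 'greetings'

  // In a client translation unit:
  import speech;             // names exported by 'greetings' are visible here too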
Differential Revision: https://reviews.llvm.org/D40270
llvm-svn: 318744
properlyDominates() shouldn't be used as a sort key: it is only a partial
order, not a strict weak ordering, so sort results differ between libstdc++
and libc++.
Instead, I introduced a reverse post-order traversal (RPOT); in most cases
it works for CSE.
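A sketch of iterating in RPOT (assuming IR-level blocks; visit() stands in
for the actual CSE processing):

  #include "llvm/ADT/PostOrderIterator.h"

  // Reverse post-order gives a stable block order that does not depend on
  // the standard library's sort implementation.
  llvm::ReversePostOrderTraversal<llvm::Function *> RPOT(&F);
  for (llvm::BasicBlock *BB : RPOT)
    visit(BB);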
llvm-svn: 318743
Summary:
The NetBSD pthread_once(3) type is built with the following structure:
struct __pthread_once_st {
  pthread_mutex_t pto_mutex;
  int pto_done;
};
Locate the pto_done field at an offset of __sanitizer::pthread_mutex_t_sz
bytes from the beginning of the pthread_once struct.
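In other words (a sketch of the offset computation, not the exact
interceptor code; once_control is the pthread_once_t argument):

  int *done = reinterpret_cast<int *>(
      reinterpret_cast<char *>(once_control) + __sanitizer::pthread_mutex_t_sz);
  // 'done' now points at pto_done, just past the embedded pto_mutex.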
This corrects deadlocks when the pthread_once(3) function
is used.
Sponsored by <The NetBSD Foundation>
Reviewers: joerg, dvyukov, vitalybuka
Reviewed By: dvyukov
Subscribers: llvm-commits, kubamracek, #sanitizers
Tags: #sanitizers
Differential Revision: https://reviews.llvm.org/D40262
llvm-svn: 318742
The obvious approach of defining a pattern like the one below actually doesn't
work:
`def : Pat<(i32 0), (i32 X0)>;`
As was noted when Lanai made this change (https://reviews.llvm.org/rL288215),
attempting to handle the constant 0 in tablegen leads to assertions due to a
physical register being used where a virtual register is expected.
llvm-svn: 318738
Previous patches primarily ensured that codegen was possible for the standard
RISC-V instructions. However, there are a number of IR inputs that wouldn't be
appropriately lowered. This patch both adds test cases and supports lowering
for a number of these cases:
* Improved sext/zext/trunc support
* Support for setcc variants that don't map directly to RISC-V instructions
* Lowering mul, and hence support for external symbols
* addc, adde, subc, sube
* mulhs, srem, mulhu, urem, udiv, sdiv
* {srl,sra,shl}_parts
* brind
* br_jt
* bswap, ctlz, cttz, ctpop
* rotl, rotr
* BlockAddress operands
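Many of these boil down to telling SelectionDAG to expand the corresponding
nodes in the target's lowering constructor; a hedged sketch (the exact RISCV
code and the Expand/LibCall choices may differ; XLenVT follows the backend's
name for the native integer type):

  setOperationAction(ISD::MULHS, XLenVT, Expand);
  setOperationAction(ISD::SREM,  XLenVT, Expand);
  setOperationAction(ISD::ROTL,  XLenVT, Expand);
  setOperationAction(ISD::BSWAP, XLenVT, Expand);
  setOperationAction(ISD::BR_JT, MVT::Other, Expand);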
Differential Revision: https://reviews.llvm.org/D29938
llvm-svn: 318737
Although ISD::SELECT_CC is a more natural match for RISCVISD::SELECT_CC (and
ultimately the integer RISC-V conditional branch instructions), we choose to
expand ISD::SELECT_CC and lower ISD::SELECT. The appropriate compare+branch
will be created in the case where an ISD::SELECT condition value is created by
an ISD::SETCC node, which operates on XLen types. Other datatypes such as
floating point don't have conditional branch instructions, and lowering
ISD::SELECT allows more flexibility for handling these cases.
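A sketch of that setup in the lowering constructor (exact upstream code may
differ):

  // Expand SELECT_CC back into separate SETCC + SELECT nodes, and
  // custom-lower SELECT so it can become RISCVISD::SELECT_CC.
  setOperationAction(ISD::SELECT_CC, XLenVT, Expand);
  setOperationAction(ISD::SELECT,    XLenVT, Custom);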
Differential Revision: https://reviews.llvm.org/D29937
llvm-svn: 318735
Summary:
Before this patch, XRay's basic (naive mode) logging would be
initialised and installed in an ad hoc manner. This patch ports the
basic (naive mode) logging implementation to use
the common XRay framework.
We also make the following changes to reduce the differences between the
usage model of basic mode and that of FDR (flight data recorder) mode:
- Allow programmatic control of the size of the buffers dedicated to
per-thread records. This removes some hard-coded constants and turns
them into runtime-controllable flags and fields in an Options
structure.
- Default the `xray_naive_log` option to false. For now, the only way
to start basic mode is to set the environment variable or to change the
default through build-time compiler options. Because of this change we've
had to update a couple of tests that relied on basic mode always being
on.
- Removed the reliance on a non-trivially destructible per-thread
resource manager. We use a trick similar to the one in D39526, using
pthread_key_create() and pthread_setspecific() to ensure that the
per-thread cleanup handling is performed at thread-exit time.
We also radically simplify the code structure for basic mode, moving
most of the implementation into the `__xray` namespace.
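A sketch of the pthread_key-based cleanup mentioned above (the POSIX calls
are real; the per-thread state and the cleanup helper are placeholders):

  static pthread_key_t Key;

  static void threadExitCleanup(void *P) {
    // Runs automatically at thread exit because the value below is non-null.
    releaseThreadLocalResources(P);
  }

  void setUpThread(void *PerThreadState) {
    pthread_key_create(&Key, threadExitCleanup);   // done once per process in practice
    pthread_setspecific(Key, PerThreadState);      // must be non-null for cleanup to fire
  }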
Reviewers: pelikan, eizan, kpw
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D40164
llvm-svn: 318734
Summary:
Before this change, the FDR mode implementation relied on thread-exit
handling to return buffers back to the (global) buffer queue. This
introduces issues with the initialisation of the thread_local objects
which, even with the use of pthread_setspecific(...), may eventually
call into an allocation function. Similar to previous changes in this
line, we're finding that there is a huge potential for deadlocks when
initialising these thread-locals when the memory allocation
implementation is also xray-instrumented.
In this change, we limit the call to pthread_setspecific(...) to providing
a non-null value to associate with the key created with
pthread_key_create(...). While this doesn't completely eliminate the
potential for the deadlock(s), it does allow us to still clean up at
thread exit when we need to. The upshot is that we no longer need to do
extra work when starting and ending a thread's lifetime. We also have a test
to make sure that we actually can safely recycle the buffers in case we
end up re-using the buffer(s) available from the queue on multiple
thread entry/exits.
This change cuts across both LLVM and compiler-rt to allow us to update
both the XRay runtime implementation as well as the library support for
loading these new versions of the FDR mode logging. Version 2 of the FDR
logging implementation makes the following changes:
* Introduction of a new 'BufferExtents' metadata record that lives outside
of the buffer's contents but is written before the actual buffer.
This data is associated with the Buffer handed out by the BufferQueue
rather than a record that occupies bytes in the actual buffer.
* Removal of the "end of buffer" records. This is in line with the
changes we described above, to allow for optimistic logging without
explicit record writing at thread exit.
The optimistic logging model operates under the following assumptions:
* Threads writing to the buffers will potentially race with the thread
attempting to flush the log. To prevent this situation from occurring,
we make sure that when we've finalized the logging implementation,
that threads will see this finalization state on the next write, and
either choose not to write records the thread would have written, or
write the record(s) in two phases -- first write the record(s), then
update the extents metadata (see the sketch after this list).
* We change the buffer queue implementation so that once it has handed
out a buffer to a thread, that buffer is assumed to be marked
"used", so that partial writes can be captured. None of this will be
safe to handle if threads are racing to write the extents records
and the reader thread is attempting to flush the log. The optimism
comes from the finalization routine being required to complete
before we attempt to flush the log.
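A sketch of the two-phase write referenced above (helper and field names are
placeholders, not the runtime's actual API):

  // Phase 1: append the new records into this thread's buffer.
  writeRecords(Buffer, Records);
  // Phase 2: only then publish the new size through the out-of-band
  // BufferExtents, so a concurrent flush never observes partially
  // written bytes.
  Buffer.Extents->Size.store(NewSize, std::memory_order_release);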
This is a fairly significant semantics change for the FDR
implementation. This is why we've decided to update the version number
for FDR mode logs. The tools, however, still need to be able to support
older versions of the log until we finally deprecate those earlier
versions.
Reviewers: dblaikie, pelikan, kpw
Subscribers: llvm-commits, hiraditya
Differential Revision: https://reviews.llvm.org/D39526
llvm-svn: 318733
We don't need separate 32 and 64 node types. We can use SDTCisInt and SDTCisSameSizeAs to ensure the mask is the same size as the result type and is an integer.
llvm-svn: 318732
DAGTypeLegalizer::SplitInteger uses the default pointer size as the shift amount constant type,
which results in less performant code for the amdgcn---amdgiz target, since the default pointer
type is i64 whereas the desired shift amount type is i32.
This patch fixes that by using TLI.getScalarShiftAmountTy in DAGTypeLegalizer::SplitInteger.
The X86 change is necessary because splitting i512 requires a shift amount of 256, which
cannot be represented in i8.
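A sketch of the relevant change (variable names are illustrative):

  // Use the target's preferred shift-amount type instead of the default
  // pointer-sized integer when building the constant shift amount.
  EVT ShiftAmtVT = TLI.getScalarShiftAmountTy(DAG.getDataLayout(), Op.getValueType());
  SDValue ShiftAmt = DAG.getConstant(LoVT.getSizeInBits(), dl, ShiftAmtVT);
  Hi = DAG.getNode(ISD::SRL, dl, Op.getValueType(), Op, ShiftAmt);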
Differential Revision: https://reviews.llvm.org/D40148
llvm-svn: 318727
Initialize IsVis2 and IsVis3 in SparcSubtarget::initializeSubtargetDependencies.
MSan detected an uninitialized read of IsVis3 after r318704. Initializing the
variables to false prevents undefined behavior.
llvm-svn: 318724
Summary:
This raises __STDCPP_DEFAULT_NEW_ALIGNMENT__ from 8 to 16 on Win64.
This matches platforms that follow the usual `2 * sizeof(void*)`
alignment requirement for malloc. We might want to consider making that
the default rather than relying on long double alignment.
Fixes PR35356
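For illustration: on Win64, sizeof(void*) is 8, so the `2 * sizeof(void*)`
rule gives 16, which is what the macro now reports (a hypothetical C++17
check, not part of the patch):

  static_assert(__STDCPP_DEFAULT_NEW_ALIGNMENT__ == 2 * sizeof(void *),
                "operator new now guarantees 16-byte alignment on Win64");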
Reviewers: STL_MSFT, rsmith
Subscribers: cfe-commits
Differential Revision: https://reviews.llvm.org/D40277
llvm-svn: 318723
This is still breaking greendragon.
At this point I give up until someone can fix the greendragon
bots, and I will probably abandon this effort in favor of using
a private github repository.
llvm-svn: 318722
This effectively reverts r318548 and r318635 while keeping the
functionality behind the flag and preserving the bug fix from r318548.
Differential Revision: https://reviews.llvm.org/D40264
llvm-svn: 318721
ASan requires that the minimum alignment be at least the shadow
granularity, so add an init function that raises the minimum alignment accordingly.
Differential Revision: https://reviews.llvm.org/D39473
llvm-svn: 318717
There's a generic partial specialization for all std::vector<T> that
does what's desired, so no need for this full specialization that's
causing an ODR violation anyway.
llvm-svn: 318712
Normally this would be cleaned up by promoting the condition operand next. But in the attached case we promoted the result from v2i48 to v2i64 and the condition from v2i1 to v2i48. Then we tried to "promote" the v2i48 condition back to v2i1 because that's what the SetCC result type for v2i64 is on X86 with VLX. But promote is either a NOP or SIGN_EXTEND and this would need a truncation.
With the change here we now get the SetCC type of v2i1 when we're handling the result promotion and the operand no longer needs to be promoted itself.
Fixes PR35272.
llvm-svn: 318706
This diff extends StackAddrEscapeChecker
to catch stack address leaks via block captures
if the block is executed asynchronously or
returned from a function.
Differential Revision: https://reviews.llvm.org/D39438
llvm-svn: 318705