Enhance synchscope representation
OpenCL 2.0 introduces the notion of memory scopes in atomic operations to global and local memory. These scopes restrict how synchronization is achieved, which can result in improved performance.

This change extends the existing notion of synchronization scopes in LLVM to support arbitrary scopes expressed as target-specific strings, in addition to the already defined scopes (single thread, system).

The LLVM IR and MIR syntax for expressing synchronization scopes has changed to use *syncscope("<scope>")*, where <scope> can be "singlethread" (this replaces the *singlethread* keyword) or a target-specific name. As before, if the scope is not specified, it defaults to CrossThread/System scope.

Implementation details:
- Mapping from synchronization scope name/string to synchronization scope ID is stored in the LLVM context;
- CrossThread/System and SingleThread scopes are pre-defined to allow efficient checks for known scopes without comparing strings;
- Synchronization scope names are stored in the SYNC_SCOPE_NAMES_BLOCK in the bitcode.

Differential Revision: https://reviews.llvm.org/D21723

llvm-svn: 307722
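For illustration, a frontend can register a scope name with the context and pass the resulting ID anywhere a SynchronizationScope was accepted before. A minimal C++ sketch against the headers in this patch; the helper name and the "agent" scope are hypothetical:

    #include "llvm/IR/IRBuilder.h"
    #include "llvm/IR/LLVMContext.h"

    using namespace llvm;

    // Hypothetical helper: emit a fence restricted to a target-specific scope.
    void emitAgentFence(IRBuilder<> &Builder) {
      LLVMContext &Ctx = Builder.getContext();
      // Unknown names are registered on first use and map to a fresh ID;
      // "singlethread" keeps the pre-defined SyncScope::SingleThread ID.
      SyncScope::ID AgentSSID = Ctx.getOrInsertSyncScopeID("agent");
      // Prints as: fence syncscope("agent") seq_cst
      Builder.CreateFence(AtomicOrdering::SequentiallyConsistent, AgentSSID);
    }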
parent 1d06f44f0f
commit bb80d3e1d3
@@ -2209,12 +2209,21 @@ For a simpler introduction to the ordering constraints, see the
 same address in this global order. This corresponds to the C++0x/C1x
 ``memory_order_seq_cst`` and Java volatile.
 
-.. _singlethread:
+.. _syncscope:
 
-If an atomic operation is marked ``singlethread``, it only *synchronizes
-with* or participates in modification and seq\_cst total orderings with
-other operations running in the same thread (for example, in signal
-handlers).
+If an atomic operation is marked ``syncscope("singlethread")``, it only
+*synchronizes with* and only participates in the seq\_cst total orderings of
+other operations running in the same thread (for example, in signal handlers).
+
+If an atomic operation is marked ``syncscope("<target-scope>")``, where
+``<target-scope>`` is a target specific synchronization scope, then it is target
+dependent if it *synchronizes with* and participates in the seq\_cst total
+orderings of other operations.
+
+Otherwise, an atomic operation that is not marked ``syncscope("singlethread")``
+or ``syncscope("<target-scope>")`` *synchronizes with* and participates in the
+seq\_cst total orderings of other operations that are not marked
+``syncscope("singlethread")`` or ``syncscope("<target-scope>")``.
 
 .. _fastmath:
 
@@ -7380,7 +7389,7 @@ Syntax:
 ::
 
       <result> = load [volatile] <ty>, <ty>* <pointer>[, align <alignment>][, !nontemporal !<index>][, !invariant.load !<index>][, !invariant.group !<index>][, !nonnull !<index>][, !dereferenceable !<deref_bytes_node>][, !dereferenceable_or_null !<deref_bytes_node>][, !align !<align_node>]
-      <result> = load atomic [volatile] <ty>, <ty>* <pointer> [singlethread] <ordering>, align <alignment> [, !invariant.group !<index>]
+      <result> = load atomic [volatile] <ty>, <ty>* <pointer> [syncscope("<target-scope>")] <ordering>, align <alignment> [, !invariant.group !<index>]
      !<index> = !{ i32 1 }
      !<deref_bytes_node> = !{i64 <dereferenceable_bytes>}
      !<align_node> = !{ i64 <value_alignment> }
 
@@ -7401,14 +7410,14 @@ modify the number or order of execution of this ``load`` with other
 :ref:`volatile operations <volatile>`.
 
 If the ``load`` is marked as ``atomic``, it takes an extra :ref:`ordering
-<ordering>` and optional ``singlethread`` argument. The ``release`` and
-``acq_rel`` orderings are not valid on ``load`` instructions. Atomic loads
-produce :ref:`defined <memmodel>` results when they may see multiple atomic
-stores. The type of the pointee must be an integer, pointer, or floating-point
-type whose bit width is a power of two greater than or equal to eight and less
-than or equal to a target-specific size limit. ``align`` must be explicitly
-specified on atomic loads, and the load has undefined behavior if the alignment
-is not set to a value which is at least the size in bytes of the
+<ordering>` and optional ``syncscope("<target-scope>")`` argument. The
+``release`` and ``acq_rel`` orderings are not valid on ``load`` instructions.
+Atomic loads produce :ref:`defined <memmodel>` results when they may see
+multiple atomic stores. The type of the pointee must be an integer, pointer, or
+floating-point type whose bit width is a power of two greater than or equal to
+eight and less than or equal to a target-specific size limit. ``align`` must be
+explicitly specified on atomic loads, and the load has undefined behavior if the
+alignment is not set to a value which is at least the size in bytes of the
 pointee. ``!nontemporal`` does not have any defined semantics for atomic loads.
 
 The optional constant ``align`` argument specifies the alignment of the
 
@@ -7509,7 +7518,7 @@ Syntax:
 ::
 
       store [volatile] <ty> <value>, <ty>* <pointer>[, align <alignment>][, !nontemporal !<index>][, !invariant.group !<index>]        ; yields void
-      store atomic [volatile] <ty> <value>, <ty>* <pointer> [singlethread] <ordering>, align <alignment> [, !invariant.group !<index>] ; yields void
+      store atomic [volatile] <ty> <value>, <ty>* <pointer> [syncscope("<target-scope>")] <ordering>, align <alignment> [, !invariant.group !<index>] ; yields void
 
 Overview:
 """""""""
 
@@ -7529,14 +7538,14 @@ allowed to modify the number or order of execution of this ``store`` with other
 structural type <t_opaque>`) can be stored.
 
 If the ``store`` is marked as ``atomic``, it takes an extra :ref:`ordering
-<ordering>` and optional ``singlethread`` argument. The ``acquire`` and
-``acq_rel`` orderings aren't valid on ``store`` instructions. Atomic loads
-produce :ref:`defined <memmodel>` results when they may see multiple atomic
-stores. The type of the pointee must be an integer, pointer, or floating-point
-type whose bit width is a power of two greater than or equal to eight and less
-than or equal to a target-specific size limit. ``align`` must be explicitly
-specified on atomic stores, and the store has undefined behavior if the
-alignment is not set to a value which is at least the size in bytes of the
+<ordering>` and optional ``syncscope("<target-scope>")`` argument. The
+``acquire`` and ``acq_rel`` orderings aren't valid on ``store`` instructions.
+Atomic loads produce :ref:`defined <memmodel>` results when they may see
+multiple atomic stores. The type of the pointee must be an integer, pointer, or
+floating-point type whose bit width is a power of two greater than or equal to
+eight and less than or equal to a target-specific size limit. ``align`` must be
+explicitly specified on atomic stores, and the store has undefined behavior if
+the alignment is not set to a value which is at least the size in bytes of the
 pointee. ``!nontemporal`` does not have any defined semantics for atomic stores.
 
 The optional constant ``align`` argument specifies the alignment of the
 
@@ -7597,7 +7606,7 @@ Syntax:
 
 ::
 
-      fence [singlethread] <ordering>                   ; yields void
+      fence [syncscope("<target-scope>")] <ordering>    ; yields void
 
 Overview:
 """""""""
 
@@ -7631,17 +7640,17 @@ A ``fence`` which has ``seq_cst`` ordering, in addition to having both
 ``acquire`` and ``release`` semantics specified above, participates in
 the global program order of other ``seq_cst`` operations and/or fences.
 
-The optional ":ref:`singlethread <singlethread>`" argument specifies
-that the fence only synchronizes with other fences in the same thread.
-(This is useful for interacting with signal handlers.)
+A ``fence`` instruction can also take an optional
+":ref:`syncscope <syncscope>`" argument.
 
 Example:
 """"""""
 
 .. code-block:: llvm
 
-  fence acquire                                        ; yields void
-  fence singlethread seq_cst                           ; yields void
+  fence acquire                                        ; yields void
+  fence syncscope("singlethread") seq_cst              ; yields void
+  fence syncscope("agent") seq_cst                     ; yields void
 
 .. _i_cmpxchg:
 
@@ -7653,7 +7662,7 @@ Syntax:
 
 ::
 
-      cmpxchg [weak] [volatile] <ty>* <pointer>, <ty> <cmp>, <ty> <new> [singlethread] <success ordering> <failure ordering> ; yields  { ty, i1 }
+      cmpxchg [weak] [volatile] <ty>* <pointer>, <ty> <cmp>, <ty> <new> [syncscope("<target-scope>")] <success ordering> <failure ordering> ; yields  { ty, i1 }
 
 Overview:
 """""""""
 
@@ -7682,10 +7691,8 @@ must be at least ``monotonic``, the ordering constraint on failure must be no
 stronger than that on success, and the failure ordering cannot be either
 ``release`` or ``acq_rel``.
 
-The optional "``singlethread``" argument declares that the ``cmpxchg``
-is only atomic with respect to code (usually signal handlers) running in
-the same thread as the ``cmpxchg``. Otherwise the cmpxchg is atomic with
-respect to all other code in the system.
+A ``cmpxchg`` instruction can also take an optional
+":ref:`syncscope <syncscope>`" argument.
 
 The pointer passed into cmpxchg must have alignment greater than or
 equal to the size in memory of the operand.
 
@@ -7739,7 +7746,7 @@ Syntax:
 
 ::
 
-      atomicrmw [volatile] <operation> <ty>* <pointer>, <ty> <value> [singlethread] <ordering>                   ; yields ty
+      atomicrmw [volatile] <operation> <ty>* <pointer>, <ty> <value> [syncscope("<target-scope>")] <ordering>    ; yields ty
 
 Overview:
 """""""""
 
@@ -7773,6 +7780,9 @@ be a pointer to that type. If the ``atomicrmw`` is marked as
 order of execution of this ``atomicrmw`` with other :ref:`volatile
 operations <volatile>`.
 
+A ``atomicrmw`` instruction can also take an optional
+":ref:`syncscope <syncscope>`" argument.
+
 Semantics:
 """"""""""
 
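The same IDs thread through the C++ instruction constructors updated below. A sketch, assuming this patch; makeScopedLoad and the "workgroup" scope name are illustrative only:

    #include "llvm/IR/Instructions.h"
    #include "llvm/IR/LLVMContext.h"

    using namespace llvm;

    // Hypothetical helper: build an atomic acquire load in a target scope.
    LoadInst *makeScopedLoad(Type *Ty, Value *PtrVal, LLVMContext &Ctx,
                             Instruction *InsertPt) {
      return new LoadInst(Ty, PtrVal, "val", /*isVolatile=*/false, /*Align=*/4,
                          AtomicOrdering::Acquire,
                          Ctx.getOrInsertSyncScopeID("workgroup"), InsertPt);
    }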
@@ -59,6 +59,8 @@ enum BlockIDs {
   FULL_LTO_GLOBALVAL_SUMMARY_BLOCK_ID,
 
   SYMTAB_BLOCK_ID,
+
+  SYNC_SCOPE_NAMES_BLOCK_ID,
 };
 
 /// Identification block contains a string that describes the producer details,
 
@@ -172,6 +174,10 @@ enum OperandBundleTagCode {
   OPERAND_BUNDLE_TAG = 1, // TAG: [strchr x N]
 };
 
+enum SyncScopeNameCode {
+  SYNC_SCOPE_NAME = 1,
+};
+
 // Value symbol table codes.
 enum ValueSymtabCodes {
   VST_CODE_ENTRY = 1, // VST_ENTRY: [valueid, namechar x N]
 
@@ -404,12 +410,6 @@ enum AtomicOrderingCodes {
   ORDERING_SEQCST = 6
 };
 
-/// Encoded SynchronizationScope values.
-enum AtomicSynchScopeCodes {
-  SYNCHSCOPE_SINGLETHREAD = 0,
-  SYNCHSCOPE_CROSSTHREAD = 1
-};
-
 /// Markers and flags for call instruction.
 enum CallMarkersFlags {
   CALL_TAIL = 0,
@@ -650,7 +650,7 @@ public:
       MachinePointerInfo PtrInfo, MachineMemOperand::Flags f, uint64_t s,
       unsigned base_alignment, const AAMDNodes &AAInfo = AAMDNodes(),
       const MDNode *Ranges = nullptr,
-      SynchronizationScope SynchScope = CrossThread,
+      SyncScope::ID SSID = SyncScope::System,
       AtomicOrdering Ordering = AtomicOrdering::NotAtomic,
       AtomicOrdering FailureOrdering = AtomicOrdering::NotAtomic);
@@ -124,8 +124,8 @@ public:
 private:
   /// Atomic information for this memory operation.
   struct MachineAtomicInfo {
-    /// Synchronization scope for this memory operation.
-    unsigned SynchScope : 1;      // enum SynchronizationScope
+    /// Synchronization scope ID for this memory operation.
+    unsigned SSID : 8;            // SyncScope::ID
     /// Atomic ordering requirements for this memory operation. For cmpxchg
     /// atomic operations, atomic ordering requirements when store occurs.
     unsigned Ordering : 4;        // enum AtomicOrdering
 
@@ -152,7 +152,7 @@ public:
                     unsigned base_alignment,
                     const AAMDNodes &AAInfo = AAMDNodes(),
                     const MDNode *Ranges = nullptr,
-                    SynchronizationScope SynchScope = CrossThread,
+                    SyncScope::ID SSID = SyncScope::System,
                     AtomicOrdering Ordering = AtomicOrdering::NotAtomic,
                     AtomicOrdering FailureOrdering = AtomicOrdering::NotAtomic);
 
@@ -202,9 +202,9 @@ public:
   /// Return the range tag for the memory reference.
   const MDNode *getRanges() const { return Ranges; }
 
-  /// Return the synchronization scope for this memory operation.
-  SynchronizationScope getSynchScope() const {
-    return static_cast<SynchronizationScope>(AtomicInfo.SynchScope);
+  /// Returns the synchronization scope ID for this memory operation.
+  SyncScope::ID getSyncScopeID() const {
+    return static_cast<SyncScope::ID>(AtomicInfo.SSID);
   }
 
   /// Return the atomic ordering requirements for this memory operation. For
@@ -927,7 +927,7 @@ public:
                            SDValue Cmp, SDValue Swp, MachinePointerInfo PtrInfo,
                            unsigned Alignment, AtomicOrdering SuccessOrdering,
                            AtomicOrdering FailureOrdering,
-                           SynchronizationScope SynchScope);
+                           SyncScope::ID SSID);
   SDValue getAtomicCmpSwap(unsigned Opcode, const SDLoc &dl, EVT MemVT,
                            SDVTList VTs, SDValue Chain, SDValue Ptr,
                            SDValue Cmp, SDValue Swp, MachineMemOperand *MMO);
 
@@ -937,7 +937,7 @@ public:
   SDValue getAtomic(unsigned Opcode, const SDLoc &dl, EVT MemVT, SDValue Chain,
                     SDValue Ptr, SDValue Val, const Value *PtrVal,
                     unsigned Alignment, AtomicOrdering Ordering,
-                    SynchronizationScope SynchScope);
+                    SyncScope::ID SSID);
   SDValue getAtomic(unsigned Opcode, const SDLoc &dl, EVT MemVT, SDValue Chain,
                     SDValue Ptr, SDValue Val, MachineMemOperand *MMO);
@@ -1213,8 +1213,8 @@ public:
   /// Returns the Ranges that describes the dereference.
   const MDNode *getRanges() const { return MMO->getRanges(); }
 
-  /// Return the synchronization scope for this memory operation.
-  SynchronizationScope getSynchScope() const { return MMO->getSynchScope(); }
+  /// Returns the synchronization scope ID for this memory operation.
+  SyncScope::ID getSyncScopeID() const { return MMO->getSyncScopeID(); }
 
   /// Return the atomic ordering requirements for this memory operation. For
   /// cmpxchg atomic operations, return the atomic ordering requirements when
@@ -1203,22 +1203,22 @@ public:
     return SI;
   }
   FenceInst *CreateFence(AtomicOrdering Ordering,
-                         SynchronizationScope SynchScope = CrossThread,
+                         SyncScope::ID SSID = SyncScope::System,
                          const Twine &Name = "") {
-    return Insert(new FenceInst(Context, Ordering, SynchScope), Name);
+    return Insert(new FenceInst(Context, Ordering, SSID), Name);
   }
   AtomicCmpXchgInst *
   CreateAtomicCmpXchg(Value *Ptr, Value *Cmp, Value *New,
                       AtomicOrdering SuccessOrdering,
                       AtomicOrdering FailureOrdering,
-                      SynchronizationScope SynchScope = CrossThread) {
+                      SyncScope::ID SSID = SyncScope::System) {
     return Insert(new AtomicCmpXchgInst(Ptr, Cmp, New, SuccessOrdering,
-                                        FailureOrdering, SynchScope));
+                                        FailureOrdering, SSID));
   }
   AtomicRMWInst *CreateAtomicRMW(AtomicRMWInst::BinOp Op, Value *Ptr, Value *Val,
                                  AtomicOrdering Ordering,
-                                 SynchronizationScope SynchScope = CrossThread) {
-    return Insert(new AtomicRMWInst(Op, Ptr, Val, Ordering, SynchScope));
+                                 SyncScope::ID SSID = SyncScope::System) {
+    return Insert(new AtomicRMWInst(Op, Ptr, Val, Ordering, SSID));
   }
   Value *CreateGEP(Value *Ptr, ArrayRef<Value *> IdxList,
                    const Twine &Name = "") {
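Since only the scope parameter's type and default change here (CrossThread becomes SyncScope::System), IRBuilder callers that omit the argument should keep their meaning. A sketch under that assumption; rmwExamples, Ptr, and Val are placeholders:

    #include "llvm/IR/IRBuilder.h"

    using namespace llvm;

    void rmwExamples(IRBuilder<> &Builder, Value *Ptr, Value *Val) {
      // Default scope: system (formerly CrossThread).
      Builder.CreateAtomicRMW(AtomicRMWInst::Add, Ptr, Val,
                              AtomicOrdering::Monotonic);
      // Explicit pre-defined scope: single thread.
      Builder.CreateAtomicRMW(AtomicRMWInst::Add, Ptr, Val,
                              AtomicOrdering::Monotonic,
                              SyncScope::SingleThread);
    }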
@@ -52,11 +52,6 @@ class ConstantInt;
 class DataLayout;
 class LLVMContext;
 
-enum SynchronizationScope {
-  SingleThread = 0,
-  CrossThread = 1
-};
-
 //===----------------------------------------------------------------------===//
 //                                AllocaInst Class
 //===----------------------------------------------------------------------===//
 
@@ -195,17 +190,16 @@ public:
   LoadInst(Value *Ptr, const Twine &NameStr, bool isVolatile,
            unsigned Align, BasicBlock *InsertAtEnd);
   LoadInst(Value *Ptr, const Twine &NameStr, bool isVolatile, unsigned Align,
-           AtomicOrdering Order, SynchronizationScope SynchScope = CrossThread,
+           AtomicOrdering Order, SyncScope::ID SSID = SyncScope::System,
            Instruction *InsertBefore = nullptr)
       : LoadInst(cast<PointerType>(Ptr->getType())->getElementType(), Ptr,
-                 NameStr, isVolatile, Align, Order, SynchScope, InsertBefore) {}
+                 NameStr, isVolatile, Align, Order, SSID, InsertBefore) {}
   LoadInst(Type *Ty, Value *Ptr, const Twine &NameStr, bool isVolatile,
            unsigned Align, AtomicOrdering Order,
-           SynchronizationScope SynchScope = CrossThread,
+           SyncScope::ID SSID = SyncScope::System,
            Instruction *InsertBefore = nullptr);
   LoadInst(Value *Ptr, const Twine &NameStr, bool isVolatile,
-           unsigned Align, AtomicOrdering Order,
-           SynchronizationScope SynchScope,
+           unsigned Align, AtomicOrdering Order, SyncScope::ID SSID,
            BasicBlock *InsertAtEnd);
   LoadInst(Value *Ptr, const char *NameStr, Instruction *InsertBefore);
   LoadInst(Value *Ptr, const char *NameStr, BasicBlock *InsertAtEnd);
 
@@ -235,34 +229,34 @@ public:
   void setAlignment(unsigned Align);
 
-  /// Returns the ordering effect of this fence.
+  /// Returns the ordering constraint of this load instruction.
   AtomicOrdering getOrdering() const {
     return AtomicOrdering((getSubclassDataFromInstruction() >> 7) & 7);
   }
 
-  /// Set the ordering constraint on this load. May not be Release or
-  /// AcquireRelease.
+  /// Sets the ordering constraint of this load instruction. May not be Release
+  /// or AcquireRelease.
   void setOrdering(AtomicOrdering Ordering) {
     setInstructionSubclassData((getSubclassDataFromInstruction() & ~(7 << 7)) |
                                ((unsigned)Ordering << 7));
   }
 
-  SynchronizationScope getSynchScope() const {
-    return SynchronizationScope((getSubclassDataFromInstruction() >> 6) & 1);
+  /// Returns the synchronization scope ID of this load instruction.
+  SyncScope::ID getSyncScopeID() const {
+    return SSID;
   }
 
-  /// Specify whether this load is ordered with respect to all
-  /// concurrently executing threads, or only with respect to signal handlers
-  /// executing in the same thread.
-  void setSynchScope(SynchronizationScope xthread) {
-    setInstructionSubclassData((getSubclassDataFromInstruction() & ~(1 << 6)) |
-                               (xthread << 6));
+  /// Sets the synchronization scope ID of this load instruction.
+  void setSyncScopeID(SyncScope::ID SSID) {
+    this->SSID = SSID;
   }
 
+  /// Sets the ordering constraint and the synchronization scope ID of this load
+  /// instruction.
   void setAtomic(AtomicOrdering Ordering,
-                 SynchronizationScope SynchScope = CrossThread) {
+                 SyncScope::ID SSID = SyncScope::System) {
     setOrdering(Ordering);
-    setSynchScope(SynchScope);
+    setSyncScopeID(SSID);
   }
 
   bool isSimple() const { return !isAtomic() && !isVolatile(); }
 
@@ -297,6 +291,11 @@ private:
   void setInstructionSubclassData(unsigned short D) {
     Instruction::setInstructionSubclassData(D);
   }
+
+  /// The synchronization scope ID of this load instruction.  Not quite enough
+  /// room in SubClassData for everything, so synchronization scope ID gets its
+  /// own field.
+  SyncScope::ID SSID;
 };
 
 //===----------------------------------------------------------------------===//
 
@@ -325,11 +324,10 @@ public:
             unsigned Align, BasicBlock *InsertAtEnd);
   StoreInst(Value *Val, Value *Ptr, bool isVolatile,
             unsigned Align, AtomicOrdering Order,
-            SynchronizationScope SynchScope = CrossThread,
+            SyncScope::ID SSID = SyncScope::System,
             Instruction *InsertBefore = nullptr);
   StoreInst(Value *Val, Value *Ptr, bool isVolatile,
-            unsigned Align, AtomicOrdering Order,
-            SynchronizationScope SynchScope,
+            unsigned Align, AtomicOrdering Order, SyncScope::ID SSID,
             BasicBlock *InsertAtEnd);
 
   // allocate space for exactly two operands
 
@@ -356,34 +354,34 @@ public:
   void setAlignment(unsigned Align);
 
-  /// Returns the ordering effect of this store.
+  /// Returns the ordering constraint of this store instruction.
   AtomicOrdering getOrdering() const {
     return AtomicOrdering((getSubclassDataFromInstruction() >> 7) & 7);
   }
 
-  /// Set the ordering constraint on this store. May not be Acquire or
-  /// AcquireRelease.
+  /// Sets the ordering constraint of this store instruction. May not be
+  /// Acquire or AcquireRelease.
   void setOrdering(AtomicOrdering Ordering) {
     setInstructionSubclassData((getSubclassDataFromInstruction() & ~(7 << 7)) |
                                ((unsigned)Ordering << 7));
   }
 
-  SynchronizationScope getSynchScope() const {
-    return SynchronizationScope((getSubclassDataFromInstruction() >> 6) & 1);
+  /// Returns the synchronization scope ID of this store instruction.
+  SyncScope::ID getSyncScopeID() const {
+    return SSID;
  }
 
-  /// Specify whether this store instruction is ordered with respect to all
-  /// concurrently executing threads, or only with respect to signal handlers
-  /// executing in the same thread.
-  void setSynchScope(SynchronizationScope xthread) {
-    setInstructionSubclassData((getSubclassDataFromInstruction() & ~(1 << 6)) |
-                               (xthread << 6));
+  /// Sets the synchronization scope ID of this store instruction.
+  void setSyncScopeID(SyncScope::ID SSID) {
+    this->SSID = SSID;
   }
 
+  /// Sets the ordering constraint and the synchronization scope ID of this
+  /// store instruction.
   void setAtomic(AtomicOrdering Ordering,
-                 SynchronizationScope SynchScope = CrossThread) {
+                 SyncScope::ID SSID = SyncScope::System) {
     setOrdering(Ordering);
-    setSynchScope(SynchScope);
+    setSyncScopeID(SSID);
   }
 
   bool isSimple() const { return !isAtomic() && !isVolatile(); }
 
@@ -421,6 +419,11 @@ private:
   void setInstructionSubclassData(unsigned short D) {
     Instruction::setInstructionSubclassData(D);
   }
+
+  /// The synchronization scope ID of this store instruction.  Not quite enough
+  /// room in SubClassData for everything, so synchronization scope ID gets its
+  /// own field.
+  SyncScope::ID SSID;
 };
 
 template <>
 
@@ -435,7 +438,7 @@ DEFINE_TRANSPARENT_OPERAND_ACCESSORS(StoreInst, Value)
 
 /// An instruction for ordering other memory operations.
 class FenceInst : public Instruction {
-  void Init(AtomicOrdering Ordering, SynchronizationScope SynchScope);
+  void Init(AtomicOrdering Ordering, SyncScope::ID SSID);
 
 protected:
   // Note: Instruction needs to be a friend here to call cloneImpl.
 
@@ -447,10 +450,9 @@ public:
   // Ordering may only be Acquire, Release, AcquireRelease, or
   // SequentiallyConsistent.
   FenceInst(LLVMContext &C, AtomicOrdering Ordering,
-            SynchronizationScope SynchScope = CrossThread,
+            SyncScope::ID SSID = SyncScope::System,
             Instruction *InsertBefore = nullptr);
-  FenceInst(LLVMContext &C, AtomicOrdering Ordering,
-            SynchronizationScope SynchScope,
+  FenceInst(LLVMContext &C, AtomicOrdering Ordering, SyncScope::ID SSID,
             BasicBlock *InsertAtEnd);
 
   // allocate space for exactly zero operands
 
@@ -458,28 +460,26 @@ public:
     return User::operator new(s, 0);
   }
 
-  /// Returns the ordering effect of this fence.
+  /// Returns the ordering constraint of this fence instruction.
   AtomicOrdering getOrdering() const {
     return AtomicOrdering(getSubclassDataFromInstruction() >> 1);
   }
 
-  /// Set the ordering constraint on this fence. May only be Acquire, Release,
-  /// AcquireRelease, or SequentiallyConsistent.
+  /// Sets the ordering constraint of this fence instruction. May only be
+  /// Acquire, Release, AcquireRelease, or SequentiallyConsistent.
   void setOrdering(AtomicOrdering Ordering) {
     setInstructionSubclassData((getSubclassDataFromInstruction() & 1) |
                                ((unsigned)Ordering << 1));
   }
 
-  SynchronizationScope getSynchScope() const {
-    return SynchronizationScope(getSubclassDataFromInstruction() & 1);
+  /// Returns the synchronization scope ID of this fence instruction.
+  SyncScope::ID getSyncScopeID() const {
+    return SSID;
   }
 
-  /// Specify whether this fence orders other operations with respect to all
-  /// concurrently executing threads, or only with respect to signal handlers
-  /// executing in the same thread.
-  void setSynchScope(SynchronizationScope xthread) {
-    setInstructionSubclassData((getSubclassDataFromInstruction() & ~1) |
-                               xthread);
+  /// Sets the synchronization scope ID of this fence instruction.
+  void setSyncScopeID(SyncScope::ID SSID) {
+    this->SSID = SSID;
   }
 
   // Methods for support type inquiry through isa, cast, and dyn_cast:
 
@@ -496,6 +496,11 @@ private:
   void setInstructionSubclassData(unsigned short D) {
     Instruction::setInstructionSubclassData(D);
   }
+
+  /// The synchronization scope ID of this fence instruction.  Not quite enough
+  /// room in SubClassData for everything, so synchronization scope ID gets its
+  /// own field.
+  SyncScope::ID SSID;
 };
 
 //===----------------------------------------------------------------------===//
 
@@ -509,7 +514,7 @@ private:
 class AtomicCmpXchgInst : public Instruction {
   void Init(Value *Ptr, Value *Cmp, Value *NewVal,
             AtomicOrdering SuccessOrdering, AtomicOrdering FailureOrdering,
-            SynchronizationScope SynchScope);
+            SyncScope::ID SSID);
 
 protected:
   // Note: Instruction needs to be a friend here to call cloneImpl.
 
@@ -521,13 +526,11 @@ public:
   AtomicCmpXchgInst(Value *Ptr, Value *Cmp, Value *NewVal,
                     AtomicOrdering SuccessOrdering,
                     AtomicOrdering FailureOrdering,
-                    SynchronizationScope SynchScope,
-                    Instruction *InsertBefore = nullptr);
+                    SyncScope::ID SSID, Instruction *InsertBefore = nullptr);
   AtomicCmpXchgInst(Value *Ptr, Value *Cmp, Value *NewVal,
                     AtomicOrdering SuccessOrdering,
                     AtomicOrdering FailureOrdering,
-                    SynchronizationScope SynchScope,
-                    BasicBlock *InsertAtEnd);
+                    SyncScope::ID SSID, BasicBlock *InsertAtEnd);
 
   // allocate space for exactly three operands
   void *operator new(size_t s) {
 
@@ -561,7 +564,12 @@ public:
   /// Transparently provide more efficient getOperand methods.
   DECLARE_TRANSPARENT_OPERAND_ACCESSORS(Value);
 
-  /// Set the ordering constraint on this cmpxchg.
+  /// Returns the success ordering constraint of this cmpxchg instruction.
+  AtomicOrdering getSuccessOrdering() const {
+    return AtomicOrdering((getSubclassDataFromInstruction() >> 2) & 7);
+  }
+
+  /// Sets the success ordering constraint of this cmpxchg instruction.
   void setSuccessOrdering(AtomicOrdering Ordering) {
     assert(Ordering != AtomicOrdering::NotAtomic &&
            "CmpXchg instructions can only be atomic.");
 
@@ -569,6 +577,12 @@ public:
                                ((unsigned)Ordering << 2));
   }
 
+  /// Returns the failure ordering constraint of this cmpxchg instruction.
+  AtomicOrdering getFailureOrdering() const {
+    return AtomicOrdering((getSubclassDataFromInstruction() >> 5) & 7);
+  }
+
+  /// Sets the failure ordering constraint of this cmpxchg instruction.
   void setFailureOrdering(AtomicOrdering Ordering) {
     assert(Ordering != AtomicOrdering::NotAtomic &&
            "CmpXchg instructions can only be atomic.");
 
@@ -576,28 +590,14 @@ public:
                                ((unsigned)Ordering << 5));
   }
 
-  /// Specify whether this cmpxchg is atomic and orders other operations with
-  /// respect to all concurrently executing threads, or only with respect to
-  /// signal handlers executing in the same thread.
-  void setSynchScope(SynchronizationScope SynchScope) {
-    setInstructionSubclassData((getSubclassDataFromInstruction() & ~2) |
-                               (SynchScope << 1));
-  }
-
-  /// Returns the ordering constraint on this cmpxchg.
-  AtomicOrdering getSuccessOrdering() const {
-    return AtomicOrdering((getSubclassDataFromInstruction() >> 2) & 7);
-  }
-
-  /// Returns the ordering constraint on this cmpxchg.
-  AtomicOrdering getFailureOrdering() const {
-    return AtomicOrdering((getSubclassDataFromInstruction() >> 5) & 7);
-  }
-
-  /// Returns whether this cmpxchg is atomic between threads or only within a
-  /// single thread.
-  SynchronizationScope getSynchScope() const {
-    return SynchronizationScope((getSubclassDataFromInstruction() & 2) >> 1);
+  /// Returns the synchronization scope ID of this cmpxchg instruction.
+  SyncScope::ID getSyncScopeID() const {
+    return SSID;
+  }
+
+  /// Sets the synchronization scope ID of this cmpxchg instruction.
+  void setSyncScopeID(SyncScope::ID SSID) {
+    this->SSID = SSID;
   }
 
   Value *getPointerOperand() { return getOperand(0); }
 
@@ -652,6 +652,11 @@ private:
   void setInstructionSubclassData(unsigned short D) {
     Instruction::setInstructionSubclassData(D);
   }
+
+  /// The synchronization scope ID of this cmpxchg instruction.  Not quite
+  /// enough room in SubClassData for everything, so synchronization scope ID
+  /// gets its own field.
+  SyncScope::ID SSID;
 };
 
 template <>
 
@@ -711,10 +716,10 @@ public:
   };
 
   AtomicRMWInst(BinOp Operation, Value *Ptr, Value *Val,
-                AtomicOrdering Ordering, SynchronizationScope SynchScope,
+                AtomicOrdering Ordering, SyncScope::ID SSID,
                 Instruction *InsertBefore = nullptr);
   AtomicRMWInst(BinOp Operation, Value *Ptr, Value *Val,
-                AtomicOrdering Ordering, SynchronizationScope SynchScope,
+                AtomicOrdering Ordering, SyncScope::ID SSID,
                 BasicBlock *InsertAtEnd);
 
   // allocate space for exactly two operands
 
@@ -748,7 +753,12 @@ public:
   /// Transparently provide more efficient getOperand methods.
   DECLARE_TRANSPARENT_OPERAND_ACCESSORS(Value);
 
-  /// Set the ordering constraint on this RMW.
+  /// Returns the ordering constraint of this rmw instruction.
+  AtomicOrdering getOrdering() const {
+    return AtomicOrdering((getSubclassDataFromInstruction() >> 2) & 7);
+  }
+
+  /// Sets the ordering constraint of this rmw instruction.
   void setOrdering(AtomicOrdering Ordering) {
     assert(Ordering != AtomicOrdering::NotAtomic &&
            "atomicrmw instructions can only be atomic.");
 
@@ -756,23 +766,14 @@ public:
                                ((unsigned)Ordering << 2));
   }
 
-  /// Specify whether this RMW orders other operations with respect to all
-  /// concurrently executing threads, or only with respect to signal handlers
-  /// executing in the same thread.
-  void setSynchScope(SynchronizationScope SynchScope) {
-    setInstructionSubclassData((getSubclassDataFromInstruction() & ~2) |
-                               (SynchScope << 1));
-  }
-
-  /// Returns the ordering constraint on this RMW.
-  AtomicOrdering getOrdering() const {
-    return AtomicOrdering((getSubclassDataFromInstruction() >> 2) & 7);
-  }
-
-  /// Returns whether this RMW is atomic between threads or only within a
-  /// single thread.
-  SynchronizationScope getSynchScope() const {
-    return SynchronizationScope((getSubclassDataFromInstruction() & 2) >> 1);
+  /// Returns the synchronization scope ID of this rmw instruction.
+  SyncScope::ID getSyncScopeID() const {
+    return SSID;
+  }
+
+  /// Sets the synchronization scope ID of this rmw instruction.
+  void setSyncScopeID(SyncScope::ID SSID) {
+    this->SSID = SSID;
  }
 
   Value *getPointerOperand() { return getOperand(0); }
 
@@ -797,13 +798,18 @@ public:
 private:
   void Init(BinOp Operation, Value *Ptr, Value *Val,
-            AtomicOrdering Ordering, SynchronizationScope SynchScope);
+            AtomicOrdering Ordering, SyncScope::ID SSID);
 
   // Shadow Instruction::setInstructionSubclassData with a private forwarding
   // method so that subclasses cannot accidentally use it.
   void setInstructionSubclassData(unsigned short D) {
     Instruction::setInstructionSubclassData(D);
   }
+
+  /// The synchronization scope ID of this rmw instruction.  Not quite enough
+  /// room in SubClassData for everything, so synchronization scope ID gets its
+  /// own field.
+  SyncScope::ID SSID;
 };
 
 template <>
@@ -42,6 +42,24 @@ class Output;
 
 } // end namespace yaml
 
+namespace SyncScope {
+
+typedef uint8_t ID;
+
+/// Known synchronization scope IDs, which always have the same value.  All
+/// synchronization scope IDs that LLVM has special knowledge of are listed
+/// here.  Additionally, this scheme allows LLVM to efficiently check for
+/// specific synchronization scope ID without comparing strings.
+enum {
+  /// Synchronized with respect to signal handlers executing in the same thread.
+  SingleThread = 0,
+
+  /// Synchronized with respect to all concurrently executing threads.
+  System = 1
+};
+
+} // end namespace SyncScope
+
 /// This is an important class for using LLVM in a threaded context. It
 /// (opaquely) owns and manages the core "global" data of LLVM's core
 /// infrastructure, including the type and constant uniquing tables.
 
@@ -111,6 +129,16 @@ public:
   /// tag registered with an LLVMContext has an unique ID.
   uint32_t getOperandBundleTagID(StringRef Tag) const;
 
+  /// getOrInsertSyncScopeID - Maps synchronization scope name to
+  /// synchronization scope ID.  Every synchronization scope registered with
+  /// LLVMContext has unique ID except pre-defined ones.
+  SyncScope::ID getOrInsertSyncScopeID(StringRef SSN);
+
+  /// getSyncScopeNames - Populates client supplied SmallVector with
+  /// synchronization scope names registered with LLVMContext.  Synchronization
+  /// scope names are ordered by increasing synchronization scope IDs.
+  void getSyncScopeNames(SmallVectorImpl<StringRef> &SSNs) const;
+
   /// Define the GC for a function
   void setGC(const Function &Fn, std::string GCName);
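Per the comments above, registering the same name twice should yield the same ID, and the pre-defined scopes keep fixed values so known scopes can be checked without string comparisons. A sketch of the assumed behavior:

    #include "llvm/ADT/SmallVector.h"
    #include "llvm/ADT/StringRef.h"
    #include "llvm/IR/LLVMContext.h"
    #include <cassert>

    using namespace llvm;

    void syncScopeIDs(LLVMContext &Ctx) {
      SyncScope::ID A = Ctx.getOrInsertSyncScopeID("agent");
      SyncScope::ID B = Ctx.getOrInsertSyncScopeID("agent");
      assert(A == B && "same name, same ID (assumed)");
      assert(Ctx.getOrInsertSyncScopeID("singlethread") ==
             SyncScope::SingleThread);

      SmallVector<StringRef, 8> Names;
      Ctx.getSyncScopeNames(Names); // ordered by increasing SyncScope::ID
    }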
@@ -542,7 +542,7 @@ lltok::Kind LLLexer::LexIdentifier() {
   KEYWORD(release);
   KEYWORD(acq_rel);
   KEYWORD(seq_cst);
-  KEYWORD(singlethread);
+  KEYWORD(syncscope);
 
   KEYWORD(nnan);
   KEYWORD(ninf);
@@ -1919,20 +1919,42 @@ bool LLParser::parseAllocSizeArguments(unsigned &BaseSizeArg,
 }
 
 /// ParseScopeAndOrdering
-///   if isAtomic: ::= 'singlethread'? AtomicOrdering
+///   if isAtomic: ::= SyncScope? AtomicOrdering
 ///   else: ::=
 ///
 /// This sets Scope and Ordering to the parsed values.
-bool LLParser::ParseScopeAndOrdering(bool isAtomic, SynchronizationScope &Scope,
+bool LLParser::ParseScopeAndOrdering(bool isAtomic, SyncScope::ID &SSID,
                                      AtomicOrdering &Ordering) {
   if (!isAtomic)
     return false;
 
-  Scope = CrossThread;
-  if (EatIfPresent(lltok::kw_singlethread))
-    Scope = SingleThread;
+  return ParseScope(SSID) || ParseOrdering(Ordering);
+}
 
-  return ParseOrdering(Ordering);
+/// ParseScope
+///   ::= syncscope("singlethread" | "<target scope>")?
+///
+/// This sets synchronization scope ID to the ID of the parsed value.
+bool LLParser::ParseScope(SyncScope::ID &SSID) {
+  SSID = SyncScope::System;
+  if (EatIfPresent(lltok::kw_syncscope)) {
+    auto StartParenAt = Lex.getLoc();
+    if (!EatIfPresent(lltok::lparen))
+      return Error(StartParenAt, "Expected '(' in syncscope");
+
+    std::string SSN;
+    auto SSNAt = Lex.getLoc();
+    if (ParseStringConstant(SSN))
+      return Error(SSNAt, "Expected synchronization scope name");
+
+    auto EndParenAt = Lex.getLoc();
+    if (!EatIfPresent(lltok::rparen))
+      return Error(EndParenAt, "Expected ')' in syncscope");
+
+    SSID = Context.getOrInsertSyncScopeID(SSN);
+  }
+
+  return false;
 }
 
 /// ParseOrdering
 
@@ -6100,7 +6122,7 @@ int LLParser::ParseLoad(Instruction *&Inst, PerFunctionState &PFS) {
   bool AteExtraComma = false;
   bool isAtomic = false;
   AtomicOrdering Ordering = AtomicOrdering::NotAtomic;
-  SynchronizationScope Scope = CrossThread;
+  SyncScope::ID SSID = SyncScope::System;
 
   if (Lex.getKind() == lltok::kw_atomic) {
     isAtomic = true;
 
@@ -6118,7 +6140,7 @@ int LLParser::ParseLoad(Instruction *&Inst, PerFunctionState &PFS) {
   if (ParseType(Ty) ||
       ParseToken(lltok::comma, "expected comma after load's type") ||
       ParseTypeAndValue(Val, Loc, PFS) ||
-      ParseScopeAndOrdering(isAtomic, Scope, Ordering) ||
+      ParseScopeAndOrdering(isAtomic, SSID, Ordering) ||
      ParseOptionalCommaAlign(Alignment, AteExtraComma))
     return true;
 
@@ -6134,7 +6156,7 @@ int LLParser::ParseLoad(Instruction *&Inst, PerFunctionState &PFS) {
     return Error(ExplicitTypeLoc,
                  "explicit pointee type doesn't match operand's pointee type");
 
-  Inst = new LoadInst(Ty, Val, "", isVolatile, Alignment, Ordering, Scope);
+  Inst = new LoadInst(Ty, Val, "", isVolatile, Alignment, Ordering, SSID);
   return AteExtraComma ? InstExtraComma : InstNormal;
 }
 
@@ -6149,7 +6171,7 @@ int LLParser::ParseStore(Instruction *&Inst, PerFunctionState &PFS) {
   bool AteExtraComma = false;
   bool isAtomic = false;
   AtomicOrdering Ordering = AtomicOrdering::NotAtomic;
-  SynchronizationScope Scope = CrossThread;
+  SyncScope::ID SSID = SyncScope::System;
 
   if (Lex.getKind() == lltok::kw_atomic) {
     isAtomic = true;
 
@@ -6165,7 +6187,7 @@ int LLParser::ParseStore(Instruction *&Inst, PerFunctionState &PFS) {
   if (ParseTypeAndValue(Val, Loc, PFS) ||
       ParseToken(lltok::comma, "expected ',' after store operand") ||
       ParseTypeAndValue(Ptr, PtrLoc, PFS) ||
-      ParseScopeAndOrdering(isAtomic, Scope, Ordering) ||
+      ParseScopeAndOrdering(isAtomic, SSID, Ordering) ||
       ParseOptionalCommaAlign(Alignment, AteExtraComma))
     return true;
 
@@ -6181,7 +6203,7 @@ int LLParser::ParseStore(Instruction *&Inst, PerFunctionState &PFS) {
       Ordering == AtomicOrdering::AcquireRelease)
     return Error(Loc, "atomic store cannot use Acquire ordering");
 
-  Inst = new StoreInst(Val, Ptr, isVolatile, Alignment, Ordering, Scope);
+  Inst = new StoreInst(Val, Ptr, isVolatile, Alignment, Ordering, SSID);
   return AteExtraComma ? InstExtraComma : InstNormal;
 }
 
@@ -6193,7 +6215,7 @@ int LLParser::ParseCmpXchg(Instruction *&Inst, PerFunctionState &PFS) {
   bool AteExtraComma = false;
   AtomicOrdering SuccessOrdering = AtomicOrdering::NotAtomic;
   AtomicOrdering FailureOrdering = AtomicOrdering::NotAtomic;
-  SynchronizationScope Scope = CrossThread;
+  SyncScope::ID SSID = SyncScope::System;
   bool isVolatile = false;
   bool isWeak = false;
 
@@ -6208,7 +6230,7 @@ int LLParser::ParseCmpXchg(Instruction *&Inst, PerFunctionState &PFS) {
       ParseTypeAndValue(Cmp, CmpLoc, PFS) ||
       ParseToken(lltok::comma, "expected ',' after cmpxchg cmp operand") ||
       ParseTypeAndValue(New, NewLoc, PFS) ||
-      ParseScopeAndOrdering(true /*Always atomic*/, Scope, SuccessOrdering) ||
+      ParseScopeAndOrdering(true /*Always atomic*/, SSID, SuccessOrdering) ||
      ParseOrdering(FailureOrdering))
     return true;
 
@@ -6231,7 +6253,7 @@ int LLParser::ParseCmpXchg(Instruction *&Inst, PerFunctionState &PFS) {
   if (!New->getType()->isFirstClassType())
     return Error(NewLoc, "cmpxchg operand must be a first class value");
   AtomicCmpXchgInst *CXI = new AtomicCmpXchgInst(
-      Ptr, Cmp, New, SuccessOrdering, FailureOrdering, Scope);
+      Ptr, Cmp, New, SuccessOrdering, FailureOrdering, SSID);
   CXI->setVolatile(isVolatile);
   CXI->setWeak(isWeak);
   Inst = CXI;
 
@@ -6245,7 +6267,7 @@ int LLParser::ParseAtomicRMW(Instruction *&Inst, PerFunctionState &PFS) {
   Value *Ptr, *Val; LocTy PtrLoc, ValLoc;
   bool AteExtraComma = false;
   AtomicOrdering Ordering = AtomicOrdering::NotAtomic;
-  SynchronizationScope Scope = CrossThread;
+  SyncScope::ID SSID = SyncScope::System;
   bool isVolatile = false;
   AtomicRMWInst::BinOp Operation;
 
@@ -6271,7 +6293,7 @@ int LLParser::ParseAtomicRMW(Instruction *&Inst, PerFunctionState &PFS) {
   if (ParseTypeAndValue(Ptr, PtrLoc, PFS) ||
       ParseToken(lltok::comma, "expected ',' after atomicrmw address") ||
      ParseTypeAndValue(Val, ValLoc, PFS) ||
-      ParseScopeAndOrdering(true /*Always atomic*/, Scope, Ordering))
+      ParseScopeAndOrdering(true /*Always atomic*/, SSID, Ordering))
     return true;
 
   if (Ordering == AtomicOrdering::Unordered)
 
@@ -6288,7 +6310,7 @@ int LLParser::ParseAtomicRMW(Instruction *&Inst, PerFunctionState &PFS) {
                    " integer");
 
   AtomicRMWInst *RMWI =
-      new AtomicRMWInst(Operation, Ptr, Val, Ordering, Scope);
+      new AtomicRMWInst(Operation, Ptr, Val, Ordering, SSID);
   RMWI->setVolatile(isVolatile);
   Inst = RMWI;
   return AteExtraComma ? InstExtraComma : InstNormal;
 
@@ -6298,8 +6320,8 @@ int LLParser::ParseAtomicRMW(Instruction *&Inst, PerFunctionState &PFS) {
 ///   ::= 'fence' 'singlethread'? AtomicOrdering
 int LLParser::ParseFence(Instruction *&Inst, PerFunctionState &PFS) {
   AtomicOrdering Ordering = AtomicOrdering::NotAtomic;
-  SynchronizationScope Scope = CrossThread;
-  if (ParseScopeAndOrdering(true /*Always atomic*/, Scope, Ordering))
+  SyncScope::ID SSID = SyncScope::System;
+  if (ParseScopeAndOrdering(true /*Always atomic*/, SSID, Ordering))
     return true;
 
   if (Ordering == AtomicOrdering::Unordered)
 
@@ -6307,7 +6329,7 @@ int LLParser::ParseFence(Instruction *&Inst, PerFunctionState &PFS) {
   if (Ordering == AtomicOrdering::Monotonic)
     return TokError("fence cannot be monotonic");
 
-  Inst = new FenceInst(Context, Ordering, Scope);
+  Inst = new FenceInst(Context, Ordering, SSID);
   return InstNormal;
 }
@@ -241,8 +241,9 @@ namespace llvm {
     bool ParseOptionalCallingConv(unsigned &CC);
     bool ParseOptionalAlignment(unsigned &Alignment);
     bool ParseOptionalDerefAttrBytes(lltok::Kind AttrKind, uint64_t &Bytes);
-    bool ParseScopeAndOrdering(bool isAtomic, SynchronizationScope &Scope,
+    bool ParseScopeAndOrdering(bool isAtomic, SyncScope::ID &SSID,
                                AtomicOrdering &Ordering);
+    bool ParseScope(SyncScope::ID &SSID);
     bool ParseOrdering(AtomicOrdering &Ordering);
     bool ParseOptionalStackAlignment(unsigned &Alignment);
     bool ParseOptionalCommaAlign(unsigned &Alignment, bool &AteExtraComma);
@@ -93,7 +93,7 @@ enum Kind {
   kw_release,
   kw_acq_rel,
   kw_seq_cst,
-  kw_singlethread,
+  kw_syncscope,
   kw_nnan,
   kw_ninf,
   kw_nsz,
@@ -513,6 +513,7 @@ class BitcodeReader : public BitcodeReaderBase, public GVMaterializer {
   TBAAVerifier TBAAVerifyHelper;
 
   std::vector<std::string> BundleTags;
+  SmallVector<SyncScope::ID, 8> SSIDs;
 
 public:
   BitcodeReader(BitstreamCursor Stream, StringRef Strtab,
 
@@ -648,6 +649,7 @@ private:
   Error parseTypeTable();
   Error parseTypeTableBody();
   Error parseOperandBundleTags();
+  Error parseSyncScopeNames();
 
   Expected<Value *> recordValue(SmallVectorImpl<uint64_t> &Record,
                                 unsigned NameIndex, Triple &TT);
 
@@ -668,6 +670,8 @@ private:
   Error findFunctionInStream(
       Function *F,
      DenseMap<Function *, uint64_t>::iterator DeferredFunctionInfoIterator);
+
+  SyncScope::ID getDecodedSyncScopeID(unsigned Val);
 };
 
 /// Class to manage reading and parsing function summary index bitcode
 
@@ -998,14 +1002,6 @@ static AtomicOrdering getDecodedOrdering(unsigned Val) {
   }
 }
 
-static SynchronizationScope getDecodedSynchScope(unsigned Val) {
-  switch (Val) {
-  case bitc::SYNCHSCOPE_SINGLETHREAD: return SingleThread;
-  default: // Map unknown scopes to cross-thread.
-  case bitc::SYNCHSCOPE_CROSSTHREAD: return CrossThread;
-  }
-}
-
 static Comdat::SelectionKind getDecodedComdatSelectionKind(unsigned Val) {
   switch (Val) {
   default: // Map unknown selection kinds to any.
 
@@ -1745,6 +1741,44 @@ Error BitcodeReader::parseOperandBundleTags() {
   }
 }
 
+Error BitcodeReader::parseSyncScopeNames() {
+  if (Stream.EnterSubBlock(bitc::SYNC_SCOPE_NAMES_BLOCK_ID))
+    return error("Invalid record");
+
+  if (!SSIDs.empty())
+    return error("Invalid multiple synchronization scope names blocks");
+
+  SmallVector<uint64_t, 64> Record;
+  while (true) {
+    BitstreamEntry Entry = Stream.advanceSkippingSubblocks();
+    switch (Entry.Kind) {
+    case BitstreamEntry::SubBlock: // Handled for us already.
+    case BitstreamEntry::Error:
+      return error("Malformed block");
+    case BitstreamEntry::EndBlock:
+      if (SSIDs.empty())
+        return error("Invalid empty synchronization scope names block");
+      return Error::success();
+    case BitstreamEntry::Record:
+      // The interesting case.
+      break;
+    }
+
+    // Synchronization scope names are implicitly mapped to synchronization
+    // scope IDs by their order.
+
+    if (Stream.readRecord(Entry.ID, Record) != bitc::SYNC_SCOPE_NAME)
+      return error("Invalid record");
+
+    SmallString<16> SSN;
+    if (convertToString(Record, 0, SSN))
+      return error("Invalid record");
+
+    SSIDs.push_back(Context.getOrInsertSyncScopeID(SSN));
+    Record.clear();
+  }
+}
+
 /// Associate a value with its name from the given index in the provided record.
 Expected<Value *> BitcodeReader::recordValue(SmallVectorImpl<uint64_t> &Record,
                                              unsigned NameIndex, Triple &TT) {
 
@@ -3132,6 +3166,10 @@ Error BitcodeReader::parseModule(uint64_t ResumeBit,
       if (Error Err = parseOperandBundleTags())
         return Err;
       break;
+    case bitc::SYNC_SCOPE_NAMES_BLOCK_ID:
+      if (Error Err = parseSyncScopeNames())
+        return Err;
+      break;
     }
     continue;
 
@@ -4204,7 +4242,7 @@ Error BitcodeReader::parseFunctionBody(Function *F) {
      break;
    }
    case bitc::FUNC_CODE_INST_LOADATOMIC: {
-      // LOADATOMIC: [opty, op, align, vol, ordering, synchscope]
+      // LOADATOMIC: [opty, op, align, vol, ordering, ssid]
      unsigned OpNum = 0;
      Value *Op;
      if (getValueTypePair(Record, OpNum, NextValueNo, Op) ||
 
@@ -4226,12 +4264,12 @@ Error BitcodeReader::parseFunctionBody(Function *F) {
        return error("Invalid record");
      if (Ordering != AtomicOrdering::NotAtomic && Record[OpNum] == 0)
        return error("Invalid record");
-      SynchronizationScope SynchScope = getDecodedSynchScope(Record[OpNum + 3]);
+      SyncScope::ID SSID = getDecodedSyncScopeID(Record[OpNum + 3]);
 
      unsigned Align;
      if (Error Err = parseAlignmentValue(Record[OpNum], Align))
        return Err;
-      I = new LoadInst(Op, "", Record[OpNum+1], Align, Ordering, SynchScope);
+      I = new LoadInst(Op, "", Record[OpNum+1], Align, Ordering, SSID);
 
      InstructionList.push_back(I);
      break;
 
@@ -4260,7 +4298,7 @@ Error BitcodeReader::parseFunctionBody(Function *F) {
    }
    case bitc::FUNC_CODE_INST_STOREATOMIC:
    case bitc::FUNC_CODE_INST_STOREATOMIC_OLD: {
-      // STOREATOMIC: [ptrty, ptr, val, align, vol, ordering, synchscope]
+      // STOREATOMIC: [ptrty, ptr, val, align, vol, ordering, ssid]
      unsigned OpNum = 0;
      Value *Val, *Ptr;
      if (getValueTypePair(Record, OpNum, NextValueNo, Ptr) ||
 
@@ -4280,20 +4318,20 @@ Error BitcodeReader::parseFunctionBody(Function *F) {
          Ordering == AtomicOrdering::Acquire ||
          Ordering == AtomicOrdering::AcquireRelease)
        return error("Invalid record");
-      SynchronizationScope SynchScope = getDecodedSynchScope(Record[OpNum + 3]);
+      SyncScope::ID SSID = getDecodedSyncScopeID(Record[OpNum + 3]);
      if (Ordering != AtomicOrdering::NotAtomic && Record[OpNum] == 0)
        return error("Invalid record");
 
      unsigned Align;
      if (Error Err = parseAlignmentValue(Record[OpNum], Align))
        return Err;
-      I = new StoreInst(Val, Ptr, Record[OpNum+1], Align, Ordering, SynchScope);
+      I = new StoreInst(Val, Ptr, Record[OpNum+1], Align, Ordering, SSID);
      InstructionList.push_back(I);
      break;
    }
    case bitc::FUNC_CODE_INST_CMPXCHG_OLD:
    case bitc::FUNC_CODE_INST_CMPXCHG: {
-      // CMPXCHG:[ptrty, ptr, cmp, new, vol, successordering, synchscope,
+      // CMPXCHG:[ptrty, ptr, cmp, new, vol, successordering, ssid,
      //          failureordering?, isweak?]
      unsigned OpNum = 0;
      Value *Ptr, *Cmp, *New;
 
@@ -4310,7 +4348,7 @@ Error BitcodeReader::parseFunctionBody(Function *F) {
      if (SuccessOrdering == AtomicOrdering::NotAtomic ||
          SuccessOrdering == AtomicOrdering::Unordered)
        return error("Invalid record");
-      SynchronizationScope SynchScope = getDecodedSynchScope(Record[OpNum + 2]);
+      SyncScope::ID SSID = getDecodedSyncScopeID(Record[OpNum + 2]);
 
      if (Error Err = typeCheckLoadStoreInst(Cmp->getType(), Ptr->getType()))
        return Err;
 
@@ -4322,7 +4360,7 @@ Error BitcodeReader::parseFunctionBody(Function *F) {
        FailureOrdering = getDecodedOrdering(Record[OpNum + 3]);
 
      I = new AtomicCmpXchgInst(Ptr, Cmp, New, SuccessOrdering, FailureOrdering,
-                                SynchScope);
+                                SSID);
      cast<AtomicCmpXchgInst>(I)->setVolatile(Record[OpNum]);
 
      if (Record.size() < 8) {
 
@@ -4339,7 +4377,7 @@ Error BitcodeReader::parseFunctionBody(Function *F) {
      break;
    }
    case bitc::FUNC_CODE_INST_ATOMICRMW: {
-      // ATOMICRMW:[ptrty, ptr, val, op, vol, ordering, synchscope]
+      // ATOMICRMW:[ptrty, ptr, val, op, vol, ordering, ssid]
      unsigned OpNum = 0;
      Value *Ptr, *Val;
      if (getValueTypePair(Record, OpNum, NextValueNo, Ptr) ||
 
@@ -4356,13 +4394,13 @@ Error BitcodeReader::parseFunctionBody(Function *F) {
      if (Ordering == AtomicOrdering::NotAtomic ||
          Ordering == AtomicOrdering::Unordered)
        return error("Invalid record");
-      SynchronizationScope SynchScope = getDecodedSynchScope(Record[OpNum + 3]);
-      I = new AtomicRMWInst(Operation, Ptr, Val, Ordering, SynchScope);
+      SyncScope::ID SSID = getDecodedSyncScopeID(Record[OpNum + 3]);
+      I = new AtomicRMWInst(Operation, Ptr, Val, Ordering, SSID);
      cast<AtomicRMWInst>(I)->setVolatile(Record[OpNum+1]);
      InstructionList.push_back(I);
      break;
    }
-    case bitc::FUNC_CODE_INST_FENCE: { // FENCE:[ordering, synchscope]
+    case bitc::FUNC_CODE_INST_FENCE: { // FENCE:[ordering, ssid]
      if (2 != Record.size())
        return error("Invalid record");
      AtomicOrdering Ordering = getDecodedOrdering(Record[0]);
 
@@ -4370,8 +4408,8 @@ Error BitcodeReader::parseFunctionBody(Function *F) {
          Ordering == AtomicOrdering::Unordered ||
          Ordering == AtomicOrdering::Monotonic)
        return error("Invalid record");
-      SynchronizationScope SynchScope = getDecodedSynchScope(Record[1]);
-      I = new FenceInst(Context, Ordering, SynchScope);
+      SyncScope::ID SSID = getDecodedSyncScopeID(Record[1]);
+      I = new FenceInst(Context, Ordering, SSID);
      InstructionList.push_back(I);
      break;
    }
 
@@ -4567,6 +4605,14 @@ Error BitcodeReader::findFunctionInStream(
   return Error::success();
 }
 
+SyncScope::ID BitcodeReader::getDecodedSyncScopeID(unsigned Val) {
+  if (Val == SyncScope::SingleThread || Val == SyncScope::System)
+    return SyncScope::ID(Val);
+  if (Val >= SSIDs.size())
+    return SyncScope::System; // Map unknown synchronization scopes to system.
+  return SSIDs[Val];
+}
+
 //===----------------------------------------------------------------------===//
 // GVMaterializer implementation
 //===----------------------------------------------------------------------===//
@ -266,6 +266,7 @@ private:
                      const GlobalObject &GO);
  void writeModuleMetadataKinds();
  void writeOperandBundleTags();
+ void writeSyncScopeNames();
  void writeConstants(unsigned FirstVal, unsigned LastVal, bool isGlobal);
  void writeModuleConstants();
  bool pushValueAndType(const Value *V, unsigned InstID,

@ -316,6 +317,10 @@ private:
    return VE.getValueID(VI.getValue());
  }
  std::map<GlobalValue::GUID, unsigned> &valueIds() { return GUIDToValueIdMap; }
+
+ unsigned getEncodedSyncScopeID(SyncScope::ID SSID) {
+   return unsigned(SSID);
+ }
};

/// Class to manage the bitcode writing for a combined index.

@ -485,14 +490,6 @@ static unsigned getEncodedOrdering(AtomicOrdering Ordering) {
  llvm_unreachable("Invalid ordering");
}

- static unsigned getEncodedSynchScope(SynchronizationScope SynchScope) {
-   switch (SynchScope) {
-   case SingleThread: return bitc::SYNCHSCOPE_SINGLETHREAD;
-   case CrossThread: return bitc::SYNCHSCOPE_CROSSTHREAD;
-   }
-   llvm_unreachable("Invalid synch scope");
- }
-
static void writeStringRecord(BitstreamWriter &Stream, unsigned Code,
                              StringRef Str, unsigned AbbrevToUse) {
  SmallVector<unsigned, 64> Vals;

@ -2042,6 +2039,24 @@ void ModuleBitcodeWriter::writeOperandBundleTags() {
  Stream.ExitBlock();
}

+ void ModuleBitcodeWriter::writeSyncScopeNames() {
+   SmallVector<StringRef, 8> SSNs;
+   M.getContext().getSyncScopeNames(SSNs);
+   if (SSNs.empty())
+     return;
+
+   Stream.EnterSubblock(bitc::SYNC_SCOPE_NAMES_BLOCK_ID, 2);
+
+   SmallVector<uint64_t, 64> Record;
+   for (auto SSN : SSNs) {
+     Record.append(SSN.begin(), SSN.end());
+     Stream.EmitRecord(bitc::SYNC_SCOPE_NAME, Record, 0);
+     Record.clear();
+   }
+
+   Stream.ExitBlock();
+ }
+
static void emitSignedInt64(SmallVectorImpl<uint64_t> &Vals, uint64_t V) {
  if ((int64_t)V >= 0)
    Vals.push_back(V << 1);

@ -2658,7 +2673,7 @@ void ModuleBitcodeWriter::writeInstruction(const Instruction &I,
    Vals.push_back(cast<LoadInst>(I).isVolatile());
    if (cast<LoadInst>(I).isAtomic()) {
      Vals.push_back(getEncodedOrdering(cast<LoadInst>(I).getOrdering()));
-     Vals.push_back(getEncodedSynchScope(cast<LoadInst>(I).getSynchScope()));
+     Vals.push_back(getEncodedSyncScopeID(cast<LoadInst>(I).getSyncScopeID()));
    }
    break;
  case Instruction::Store:

@ -2672,7 +2687,8 @@ void ModuleBitcodeWriter::writeInstruction(const Instruction &I,
    Vals.push_back(cast<StoreInst>(I).isVolatile());
    if (cast<StoreInst>(I).isAtomic()) {
      Vals.push_back(getEncodedOrdering(cast<StoreInst>(I).getOrdering()));
-     Vals.push_back(getEncodedSynchScope(cast<StoreInst>(I).getSynchScope()));
+     Vals.push_back(
+         getEncodedSyncScopeID(cast<StoreInst>(I).getSyncScopeID()));
    }
    break;
  case Instruction::AtomicCmpXchg:

@ -2684,7 +2700,7 @@ void ModuleBitcodeWriter::writeInstruction(const Instruction &I,
    Vals.push_back(
        getEncodedOrdering(cast<AtomicCmpXchgInst>(I).getSuccessOrdering()));
    Vals.push_back(
-       getEncodedSynchScope(cast<AtomicCmpXchgInst>(I).getSynchScope()));
+       getEncodedSyncScopeID(cast<AtomicCmpXchgInst>(I).getSyncScopeID()));
    Vals.push_back(
        getEncodedOrdering(cast<AtomicCmpXchgInst>(I).getFailureOrdering()));
    Vals.push_back(cast<AtomicCmpXchgInst>(I).isWeak());

@ -2698,12 +2714,12 @@ void ModuleBitcodeWriter::writeInstruction(const Instruction &I,
    Vals.push_back(cast<AtomicRMWInst>(I).isVolatile());
    Vals.push_back(getEncodedOrdering(cast<AtomicRMWInst>(I).getOrdering()));
    Vals.push_back(
-       getEncodedSynchScope(cast<AtomicRMWInst>(I).getSynchScope()));
+       getEncodedSyncScopeID(cast<AtomicRMWInst>(I).getSyncScopeID()));
    break;
  case Instruction::Fence:
    Code = bitc::FUNC_CODE_INST_FENCE;
    Vals.push_back(getEncodedOrdering(cast<FenceInst>(I).getOrdering()));
-   Vals.push_back(getEncodedSynchScope(cast<FenceInst>(I).getSynchScope()));
+   Vals.push_back(getEncodedSyncScopeID(cast<FenceInst>(I).getSyncScopeID()));
    break;
  case Instruction::Call: {
    const CallInst &CI = cast<CallInst>(I);

@ -3716,6 +3732,7 @@ void ModuleBitcodeWriter::write() {
    writeUseListBlock(nullptr);

  writeOperandBundleTags();
+ writeSyncScopeNames();

  // Emit function bodies.
  DenseMap<const Function *, uint64_t> FunctionToBitcodeIndex;
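
A note on the encoding above: writeSyncScopeNames emits the registered scope
names in increasing-ID order, so the reader can rebuild the name-to-ID mapping
purely from record position. A minimal sketch of how a client might register a
target scope against this machinery; the scope name "agent" is a hypothetical
example, not part of this patch::

  #include "llvm/IR/LLVMContext.h"
  #include <cassert>
  using namespace llvm;

  void registerScopes(LLVMContext &Ctx) {
    // Target scopes get the next free ID in registration order.
    SyncScope::ID AgentSSID = Ctx.getOrInsertSyncScopeID("agent");
    // The two pre-defined scopes keep their fixed IDs.
    assert(Ctx.getOrInsertSyncScopeID("singlethread") == SyncScope::SingleThread);
    assert(Ctx.getOrInsertSyncScopeID("") == SyncScope::System);
    (void)AgentSSID;
  }
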
@ -361,7 +361,7 @@ LoadInst *AtomicExpand::convertAtomicLoadToIntegerType(LoadInst *LI) {
  auto *NewLI = Builder.CreateLoad(NewAddr);
  NewLI->setAlignment(LI->getAlignment());
  NewLI->setVolatile(LI->isVolatile());
- NewLI->setAtomic(LI->getOrdering(), LI->getSynchScope());
+ NewLI->setAtomic(LI->getOrdering(), LI->getSyncScopeID());
  DEBUG(dbgs() << "Replaced " << *LI << " with " << *NewLI << "\n");

  Value *NewVal = Builder.CreateBitCast(NewLI, LI->getType());

@ -444,7 +444,7 @@ StoreInst *AtomicExpand::convertAtomicStoreToIntegerType(StoreInst *SI) {
  StoreInst *NewSI = Builder.CreateStore(NewVal, NewAddr);
  NewSI->setAlignment(SI->getAlignment());
  NewSI->setVolatile(SI->isVolatile());
- NewSI->setAtomic(SI->getOrdering(), SI->getSynchScope());
+ NewSI->setAtomic(SI->getOrdering(), SI->getSyncScopeID());
  DEBUG(dbgs() << "Replaced " << *SI << " with " << *NewSI << "\n");
  SI->eraseFromParent();
  return NewSI;

@ -801,7 +801,7 @@ void AtomicExpand::expandPartwordCmpXchg(AtomicCmpXchgInst *CI) {
  Value *FullWord_Cmp = Builder.CreateOr(Loaded_MaskOut, Cmp_Shifted);
  AtomicCmpXchgInst *NewCI = Builder.CreateAtomicCmpXchg(
      PMV.AlignedAddr, FullWord_Cmp, FullWord_NewVal, CI->getSuccessOrdering(),
-     CI->getFailureOrdering(), CI->getSynchScope());
+     CI->getFailureOrdering(), CI->getSyncScopeID());
  NewCI->setVolatile(CI->isVolatile());
  // When we're building a strong cmpxchg, we need a loop, so you
  // might think we could use a weak cmpxchg inside. But, using strong

@ -924,7 +924,7 @@ AtomicCmpXchgInst *AtomicExpand::convertCmpXchgToIntegerType(AtomicCmpXchgInst *
  auto *NewCI = Builder.CreateAtomicCmpXchg(NewAddr, NewCmp, NewNewVal,
                                            CI->getSuccessOrdering(),
                                            CI->getFailureOrdering(),
-                                           CI->getSynchScope());
+                                           CI->getSyncScopeID());
  NewCI->setVolatile(CI->isVolatile());
  NewCI->setWeak(CI->isWeak());
  DEBUG(dbgs() << "Replaced " << *CI << " with " << *NewCI << "\n");
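
The recurring pattern in the four hunks above: whenever this pass rewrites an
atomic access, the ordering and the scope ID must be copied together, or the
scope silently widens to the system scope. A hedged sketch of the idiom in
isolation, mirroring the convertAtomicLoadToIntegerType hunk::

  #include "llvm/IR/IRBuilder.h"
  using namespace llvm;

  static LoadInst *cloneAtomicLoad(IRBuilder<> &Builder, LoadInst *LI,
                                   Value *NewAddr) {
    LoadInst *NewLI = Builder.CreateLoad(NewAddr);
    NewLI->setAlignment(LI->getAlignment());
    NewLI->setVolatile(LI->isVolatile());
    // Copy ordering *and* scope; dropping the second argument would
    // default the new load to SyncScope::System.
    NewLI->setAtomic(LI->getOrdering(), LI->getSyncScopeID());
    return NewLI;
  }
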
@ -345,7 +345,7 @@ bool IRTranslator::translateLoad(const User &U, MachineIRBuilder &MIRBuilder) {
      *MF->getMachineMemOperand(MachinePointerInfo(LI.getPointerOperand()),
                                Flags, DL->getTypeStoreSize(LI.getType()),
                                getMemOpAlignment(LI), AAMDNodes(), nullptr,
-                               LI.getSynchScope(), LI.getOrdering()));
+                               LI.getSyncScopeID(), LI.getOrdering()));
  return true;
}

@ -363,7 +363,7 @@ bool IRTranslator::translateStore(const User &U, MachineIRBuilder &MIRBuilder) {
      *MF->getMachineMemOperand(
          MachinePointerInfo(SI.getPointerOperand()), Flags,
          DL->getTypeStoreSize(SI.getValueOperand()->getType()),
-         getMemOpAlignment(SI), AAMDNodes(), nullptr, SI.getSynchScope(),
+         getMemOpAlignment(SI), AAMDNodes(), nullptr, SI.getSyncScopeID(),
          SI.getOrdering()));
  return true;
}

@ -365,6 +365,14 @@ static Cursor maybeLexIRValue(Cursor C, MIToken &Token,
  return lexName(C, Token, MIToken::NamedIRValue, Rule.size(), ErrorCallback);
}

+ static Cursor maybeLexStringConstant(Cursor C, MIToken &Token,
+                                      ErrorCallbackType ErrorCallback) {
+   if (C.peek() != '"')
+     return None;
+   return lexName(C, Token, MIToken::StringConstant, /*PrefixLength=*/0,
+                  ErrorCallback);
+ }
+
static Cursor lexVirtualRegister(Cursor C, MIToken &Token) {
  auto Range = C;
  C.advance(); // Skip '%'

@ -630,6 +638,8 @@ StringRef llvm::lexMIToken(StringRef Source, MIToken &Token,
    return R.remaining();
  if (Cursor R = maybeLexEscapedIRValue(C, Token, ErrorCallback))
    return R.remaining();
+ if (Cursor R = maybeLexStringConstant(C, Token, ErrorCallback))
+   return R.remaining();

  Token.reset(MIToken::Error, C.remaining());
  ErrorCallback(C.location(),

@ -127,7 +127,8 @@ struct MIToken {
    NamedIRValue,
    IRValue,
    QuotedIRValue, // `<constant value>`
-   SubRegisterIndex
+   SubRegisterIndex,
+   StringConstant
  };

private:

@ -229,6 +229,7 @@ public:
  bool parseMemoryOperandFlag(MachineMemOperand::Flags &Flags);
  bool parseMemoryPseudoSourceValue(const PseudoSourceValue *&PSV);
  bool parseMachinePointerInfo(MachinePointerInfo &Dest);
+ bool parseOptionalScope(LLVMContext &Context, SyncScope::ID &SSID);
  bool parseOptionalAtomicOrdering(AtomicOrdering &Order);
  bool parseMachineMemoryOperand(MachineMemOperand *&Dest);

@ -318,6 +319,10 @@ private:
  ///
  /// Return true if the name isn't a name of a bitmask target flag.
  bool getBitmaskTargetFlag(StringRef Name, unsigned &Flag);

+ /// parseStringConstant
+ /// ::= StringConstant
+ bool parseStringConstant(std::string &Result);
};

} // end anonymous namespace

@ -2135,6 +2140,26 @@ bool MIParser::parseMachinePointerInfo(MachinePointerInfo &Dest) {
  return false;
}

+ bool MIParser::parseOptionalScope(LLVMContext &Context,
+                                   SyncScope::ID &SSID) {
+   SSID = SyncScope::System;
+   if (Token.is(MIToken::Identifier) && Token.stringValue() == "syncscope") {
+     lex();
+     if (expectAndConsume(MIToken::lparen))
+       return error("expected '(' in syncscope");
+
+     std::string SSN;
+     if (parseStringConstant(SSN))
+       return true;
+
+     SSID = Context.getOrInsertSyncScopeID(SSN);
+     if (expectAndConsume(MIToken::rparen))
+       return error("expected ')' in syncscope");
+   }
+
+   return false;
+ }
+
bool MIParser::parseOptionalAtomicOrdering(AtomicOrdering &Order) {
  Order = AtomicOrdering::NotAtomic;
  if (Token.isNot(MIToken::Identifier))

@ -2174,12 +2199,10 @@ bool MIParser::parseMachineMemoryOperand(MachineMemOperand *&Dest) {
    Flags |= MachineMemOperand::MOStore;
  lex();

- // Optional "singlethread" scope.
- SynchronizationScope Scope = SynchronizationScope::CrossThread;
- if (Token.is(MIToken::Identifier) && Token.stringValue() == "singlethread") {
-   Scope = SynchronizationScope::SingleThread;
-   lex();
- }
+ // Optional synchronization scope.
+ SyncScope::ID SSID;
+ if (parseOptionalScope(MF.getFunction()->getContext(), SSID))
+   return true;

  // Up to two atomic orderings (cmpxchg provides guarantees on failure).
  AtomicOrdering Order, FailureOrder;

@ -2244,7 +2267,7 @@ bool MIParser::parseMachineMemoryOperand(MachineMemOperand *&Dest) {
  if (expectAndConsume(MIToken::rparen))
    return true;
  Dest = MF.getMachineMemOperand(Ptr, Flags, Size, BaseAlignment, AAInfo, Range,
-                                Scope, Order, FailureOrder);
+                                SSID, Order, FailureOrder);
  return false;
}

@ -2457,6 +2480,14 @@ bool MIParser::getBitmaskTargetFlag(StringRef Name, unsigned &Flag) {
  return false;
}

+ bool MIParser::parseStringConstant(std::string &Result) {
+   if (Token.isNot(MIToken::StringConstant))
+     return error("expected string constant");
+   Result = Token.stringValue();
+   lex();
+   return false;
+ }
+
bool llvm::parseMachineBasicBlockDefinitions(PerFunctionMIParsingState &PFS,
                                             StringRef Src,
                                             SMDiagnostic &Error) {

@ -18,6 +18,7 @@
#include "llvm/ADT/SmallPtrSet.h"
#include "llvm/ADT/SmallVector.h"
#include "llvm/ADT/STLExtras.h"
#include "llvm/ADT/StringExtras.h"
#include "llvm/ADT/StringRef.h"
#include "llvm/ADT/Twine.h"
#include "llvm/CodeGen/GlobalISel/RegisterBank.h"

@ -139,6 +140,8 @@ class MIPrinter {
  ModuleSlotTracker &MST;
  const DenseMap<const uint32_t *, unsigned> &RegisterMaskIds;
  const DenseMap<int, FrameIndexOperand> &StackObjectOperandMapping;
+ /// Synchronization scope names registered with LLVMContext.
+ SmallVector<StringRef, 8> SSNs;

  bool canPredictBranchProbabilities(const MachineBasicBlock &MBB) const;
  bool canPredictSuccessors(const MachineBasicBlock &MBB) const;

@ -162,7 +165,8 @@ public:
  void print(const MachineOperand &Op, const TargetRegisterInfo *TRI,
             unsigned I, bool ShouldPrintRegisterTies,
             LLT TypeToPrint, bool IsDef = false);
- void print(const MachineMemOperand &Op);
+ void print(const LLVMContext &Context, const MachineMemOperand &Op);
+ void printSyncScope(const LLVMContext &Context, SyncScope::ID SSID);

  void print(const MCCFIInstruction &CFI, const TargetRegisterInfo *TRI);
};

@ -731,11 +735,12 @@ void MIPrinter::print(const MachineInstr &MI) {

  if (!MI.memoperands_empty()) {
    OS << " :: ";
+   const LLVMContext &Context = MF->getFunction()->getContext();
    bool NeedComma = false;
    for (const auto *Op : MI.memoperands()) {
      if (NeedComma)
        OS << ", ";
-     print(*Op);
+     print(Context, *Op);
      NeedComma = true;
    }
  }

@ -1031,7 +1036,7 @@ void MIPrinter::print(const MachineOperand &Op, const TargetRegisterInfo *TRI,
  }
}

- void MIPrinter::print(const MachineMemOperand &Op) {
+ void MIPrinter::print(const LLVMContext &Context, const MachineMemOperand &Op) {
  OS << '(';
  // TODO: Print operand's target specific flags.
  if (Op.isVolatile())

@ -1049,8 +1054,7 @@ void MIPrinter::print(const MachineMemOperand &Op) {
    OS << "store ";
  }

- if (Op.getSynchScope() == SynchronizationScope::SingleThread)
-   OS << "singlethread ";
+ printSyncScope(Context, Op.getSyncScopeID());

  if (Op.getOrdering() != AtomicOrdering::NotAtomic)
    OS << toIRString(Op.getOrdering()) << ' ';

@ -1119,6 +1123,23 @@ void MIPrinter::print(const MachineMemOperand &Op) {
  OS << ')';
}

+ void MIPrinter::printSyncScope(const LLVMContext &Context, SyncScope::ID SSID) {
+   switch (SSID) {
+   case SyncScope::System: {
+     break;
+   }
+   default: {
+     if (SSNs.empty())
+       Context.getSyncScopeNames(SSNs);
+
+     OS << "syncscope(\"";
+     PrintEscapedString(SSNs[SSID], OS);
+     OS << "\") ";
+     break;
+   }
+   }
+ }
+
static void printCFIRegister(unsigned DwarfReg, raw_ostream &OS,
                             const TargetRegisterInfo *TRI) {
  int Reg = TRI->getLLVMRegNum(DwarfReg, true);
@ -305,11 +305,11 @@ MachineFunction::DeleteMachineBasicBlock(MachineBasicBlock *MBB) {
MachineMemOperand *MachineFunction::getMachineMemOperand(
    MachinePointerInfo PtrInfo, MachineMemOperand::Flags f, uint64_t s,
    unsigned base_alignment, const AAMDNodes &AAInfo, const MDNode *Ranges,
-   SynchronizationScope SynchScope, AtomicOrdering Ordering,
+   SyncScope::ID SSID, AtomicOrdering Ordering,
    AtomicOrdering FailureOrdering) {
  return new (Allocator)
      MachineMemOperand(PtrInfo, f, s, base_alignment, AAInfo, Ranges,
-                       SynchScope, Ordering, FailureOrdering);
+                       SSID, Ordering, FailureOrdering);
}

MachineMemOperand *

@ -320,13 +320,13 @@ MachineFunction::getMachineMemOperand(const MachineMemOperand *MMO,
        MachineMemOperand(MachinePointerInfo(MMO->getValue(),
                                             MMO->getOffset()+Offset),
                          MMO->getFlags(), Size, MMO->getBaseAlignment(),
-                         AAMDNodes(), nullptr, MMO->getSynchScope(),
+                         AAMDNodes(), nullptr, MMO->getSyncScopeID(),
                          MMO->getOrdering(), MMO->getFailureOrdering());
  return new (Allocator)
      MachineMemOperand(MachinePointerInfo(MMO->getPseudoValue(),
                                           MMO->getOffset()+Offset),
                        MMO->getFlags(), Size, MMO->getBaseAlignment(),
-                       AAMDNodes(), nullptr, MMO->getSynchScope(),
+                       AAMDNodes(), nullptr, MMO->getSyncScopeID(),
                        MMO->getOrdering(), MMO->getFailureOrdering());
}

@ -359,7 +359,7 @@ MachineFunction::extractLoadMemRefs(MachineInstr::mmo_iterator Begin,
            (*I)->getFlags() & ~MachineMemOperand::MOStore,
            (*I)->getSize(), (*I)->getBaseAlignment(),
            (*I)->getAAInfo(), nullptr,
-           (*I)->getSynchScope(), (*I)->getOrdering(),
+           (*I)->getSyncScopeID(), (*I)->getOrdering(),
            (*I)->getFailureOrdering());
        Result[Index] = JustLoad;
      }

@ -393,7 +393,7 @@ MachineFunction::extractStoreMemRefs(MachineInstr::mmo_iterator Begin,
            (*I)->getFlags() & ~MachineMemOperand::MOLoad,
            (*I)->getSize(), (*I)->getBaseAlignment(),
            (*I)->getAAInfo(), nullptr,
-           (*I)->getSynchScope(), (*I)->getOrdering(),
+           (*I)->getSyncScopeID(), (*I)->getOrdering(),
            (*I)->getFailureOrdering());
        Result[Index] = JustStore;
      }

@ -614,7 +614,7 @@ MachineMemOperand::MachineMemOperand(MachinePointerInfo ptrinfo, Flags f,
                                     uint64_t s, unsigned int a,
                                     const AAMDNodes &AAInfo,
                                     const MDNode *Ranges,
-                                    SynchronizationScope SynchScope,
+                                    SyncScope::ID SSID,
                                     AtomicOrdering Ordering,
                                     AtomicOrdering FailureOrdering)
    : PtrInfo(ptrinfo), Size(s), FlagVals(f), BaseAlignLog2(Log2_32(a) + 1),

@ -625,8 +625,8 @@ MachineMemOperand::MachineMemOperand(MachinePointerInfo ptrinfo, Flags f,
  assert(getBaseAlignment() == a && "Alignment is not a power of 2!");
  assert((isLoad() || isStore()) && "Not a load/store!");

- AtomicInfo.SynchScope = static_cast<unsigned>(SynchScope);
- assert(getSynchScope() == SynchScope && "Value truncated");
+ AtomicInfo.SSID = static_cast<unsigned>(SSID);
+ assert(getSyncScopeID() == SSID && "Value truncated");
  AtomicInfo.Ordering = static_cast<unsigned>(Ordering);
  assert(getOrdering() == Ordering && "Value truncated");
  AtomicInfo.FailureOrdering = static_cast<unsigned>(FailureOrdering);
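
With the plumbing above, a machine memory operand carries an arbitrary scope ID
end to end. A hedged sketch of constructing one directly, under the
getMachineMemOperand signature shown in the first hunk; the "agent" scope name
is a hypothetical example::

  #include "llvm/CodeGen/MachineFunction.h"
  #include "llvm/IR/LLVMContext.h"
  using namespace llvm;

  MachineMemOperand *buildScopedMMO(MachineFunction &MF,
                                    MachinePointerInfo PtrInfo) {
    LLVMContext &Ctx = MF.getFunction()->getContext();
    SyncScope::ID SSID = Ctx.getOrInsertSyncScopeID("agent"); // hypothetical
    // A 4-byte, 4-aligned atomic load at "agent" scope; the failure
    // ordering argument keeps its NotAtomic default.
    return MF.getMachineMemOperand(
        PtrInfo, MachineMemOperand::MOLoad, /*s=*/4, /*base_alignment=*/4,
        AAMDNodes(), /*Ranges=*/nullptr, SSID,
        AtomicOrdering::SequentiallyConsistent);
  }
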
@ -5443,7 +5443,7 @@ SDValue SelectionDAG::getAtomicCmpSwap(
    unsigned Opcode, const SDLoc &dl, EVT MemVT, SDVTList VTs, SDValue Chain,
    SDValue Ptr, SDValue Cmp, SDValue Swp, MachinePointerInfo PtrInfo,
    unsigned Alignment, AtomicOrdering SuccessOrdering,
-   AtomicOrdering FailureOrdering, SynchronizationScope SynchScope) {
+   AtomicOrdering FailureOrdering, SyncScope::ID SSID) {
  assert(Opcode == ISD::ATOMIC_CMP_SWAP ||
         Opcode == ISD::ATOMIC_CMP_SWAP_WITH_SUCCESS);
  assert(Cmp.getValueType() == Swp.getValueType() && "Invalid Atomic Op Types");

@ -5459,7 +5459,7 @@ SDValue SelectionDAG::getAtomicCmpSwap(
                           MachineMemOperand::MOStore;
  MachineMemOperand *MMO =
      MF.getMachineMemOperand(PtrInfo, Flags, MemVT.getStoreSize(), Alignment,
-                             AAMDNodes(), nullptr, SynchScope, SuccessOrdering,
+                             AAMDNodes(), nullptr, SSID, SuccessOrdering,
                              FailureOrdering);

  return getAtomicCmpSwap(Opcode, dl, MemVT, VTs, Chain, Ptr, Cmp, Swp, MMO);

@ -5481,7 +5481,7 @@ SDValue SelectionDAG::getAtomic(unsigned Opcode, const SDLoc &dl, EVT MemVT,
                                SDValue Chain, SDValue Ptr, SDValue Val,
                                const Value *PtrVal, unsigned Alignment,
                                AtomicOrdering Ordering,
-                               SynchronizationScope SynchScope) {
+                               SyncScope::ID SSID) {
  if (Alignment == 0) // Ensure that codegen never sees alignment 0
    Alignment = getEVTAlignment(MemVT);

@ -5501,7 +5501,7 @@ SDValue SelectionDAG::getAtomic(unsigned Opcode, const SDLoc &dl, EVT MemVT,
  MachineMemOperand *MMO =
      MF.getMachineMemOperand(MachinePointerInfo(PtrVal), Flags,
                              MemVT.getStoreSize(), Alignment, AAMDNodes(),
-                             nullptr, SynchScope, Ordering);
+                             nullptr, SSID, Ordering);

  return getAtomic(Opcode, dl, MemVT, Chain, Ptr, Val, MMO);
}

@ -3990,7 +3990,7 @@ void SelectionDAGBuilder::visitAtomicCmpXchg(const AtomicCmpXchgInst &I) {
  SDLoc dl = getCurSDLoc();
  AtomicOrdering SuccessOrder = I.getSuccessOrdering();
  AtomicOrdering FailureOrder = I.getFailureOrdering();
- SynchronizationScope Scope = I.getSynchScope();
+ SyncScope::ID SSID = I.getSyncScopeID();

  SDValue InChain = getRoot();

@ -4000,7 +4000,7 @@ void SelectionDAGBuilder::visitAtomicCmpXchg(const AtomicCmpXchgInst &I) {
      ISD::ATOMIC_CMP_SWAP_WITH_SUCCESS, dl, MemVT, VTs, InChain,
      getValue(I.getPointerOperand()), getValue(I.getCompareOperand()),
      getValue(I.getNewValOperand()), MachinePointerInfo(I.getPointerOperand()),
-     /*Alignment=*/ 0, SuccessOrder, FailureOrder, Scope);
+     /*Alignment=*/ 0, SuccessOrder, FailureOrder, SSID);

  SDValue OutChain = L.getValue(2);

@ -4026,7 +4026,7 @@ void SelectionDAGBuilder::visitAtomicRMW(const AtomicRMWInst &I) {
  case AtomicRMWInst::UMin: NT = ISD::ATOMIC_LOAD_UMIN; break;
  }
  AtomicOrdering Order = I.getOrdering();
- SynchronizationScope Scope = I.getSynchScope();
+ SyncScope::ID SSID = I.getSyncScopeID();

  SDValue InChain = getRoot();

@ -4037,7 +4037,7 @@ void SelectionDAGBuilder::visitAtomicRMW(const AtomicRMWInst &I) {
                  getValue(I.getPointerOperand()),
                  getValue(I.getValOperand()),
                  I.getPointerOperand(),
-                 /* Alignment=*/ 0, Order, Scope);
+                 /* Alignment=*/ 0, Order, SSID);

  SDValue OutChain = L.getValue(1);

@ -4052,7 +4052,7 @@ void SelectionDAGBuilder::visitFence(const FenceInst &I) {
  Ops[0] = getRoot();
  Ops[1] = DAG.getConstant((unsigned)I.getOrdering(), dl,
                           TLI.getFenceOperandTy(DAG.getDataLayout()));
- Ops[2] = DAG.getConstant(I.getSynchScope(), dl,
+ Ops[2] = DAG.getConstant(I.getSyncScopeID(), dl,
                           TLI.getFenceOperandTy(DAG.getDataLayout()));
  DAG.setRoot(DAG.getNode(ISD::ATOMIC_FENCE, dl, MVT::Other, Ops));
}

@ -4060,7 +4060,7 @@ void SelectionDAGBuilder::visitFence(const FenceInst &I) {
void SelectionDAGBuilder::visitAtomicLoad(const LoadInst &I) {
  SDLoc dl = getCurSDLoc();
  AtomicOrdering Order = I.getOrdering();
- SynchronizationScope Scope = I.getSynchScope();
+ SyncScope::ID SSID = I.getSyncScopeID();

  SDValue InChain = getRoot();

@ -4078,7 +4078,7 @@ void SelectionDAGBuilder::visitAtomicLoad(const LoadInst &I) {
                            VT.getStoreSize(),
                            I.getAlignment() ? I.getAlignment() :
                                               DAG.getEVTAlignment(VT),
-                           AAMDNodes(), nullptr, Scope, Order);
+                           AAMDNodes(), nullptr, SSID, Order);

  InChain = TLI.prepareVolatileOrAtomicLoad(InChain, dl, DAG);
  SDValue L =

@ -4095,7 +4095,7 @@ void SelectionDAGBuilder::visitAtomicStore(const StoreInst &I) {
  SDLoc dl = getCurSDLoc();

  AtomicOrdering Order = I.getOrdering();
- SynchronizationScope Scope = I.getSynchScope();
+ SyncScope::ID SSID = I.getSyncScopeID();

  SDValue InChain = getRoot();

@ -4112,7 +4112,7 @@ void SelectionDAGBuilder::visitAtomicStore(const StoreInst &I) {
                                  getValue(I.getPointerOperand()),
                                  getValue(I.getValueOperand()),
                                  I.getPointerOperand(), I.getAlignment(),
-                                 Order, Scope);
+                                 Order, SSID);

  DAG.setRoot(OutChain);
}
@ -2119,6 +2119,8 @@ class AssemblyWriter {
  bool ShouldPreserveUseListOrder;
  UseListOrderStack UseListOrders;
  SmallVector<StringRef, 8> MDNames;
+ /// Synchronization scope names registered with LLVMContext.
+ SmallVector<StringRef, 8> SSNs;

public:
  /// Construct an AssemblyWriter with an external SlotTracker

@ -2134,10 +2136,15 @@ public:
  void writeOperand(const Value *Op, bool PrintType);
  void writeParamOperand(const Value *Operand, AttributeSet Attrs);
  void writeOperandBundles(ImmutableCallSite CS);
- void writeAtomic(AtomicOrdering Ordering, SynchronizationScope SynchScope);
- void writeAtomicCmpXchg(AtomicOrdering SuccessOrdering,
+ void writeSyncScope(const LLVMContext &Context,
+                     SyncScope::ID SSID);
+ void writeAtomic(const LLVMContext &Context,
+                  AtomicOrdering Ordering,
+                  SyncScope::ID SSID);
+ void writeAtomicCmpXchg(const LLVMContext &Context,
+                         AtomicOrdering SuccessOrdering,
                          AtomicOrdering FailureOrdering,
-                         SynchronizationScope SynchScope);
+                         SyncScope::ID SSID);

  void writeAllMDNodes();
  void writeMDNode(unsigned Slot, const MDNode *Node);

@ -2199,30 +2206,42 @@ void AssemblyWriter::writeOperand(const Value *Operand, bool PrintType) {
  WriteAsOperandInternal(Out, Operand, &TypePrinter, &Machine, TheModule);
}

- void AssemblyWriter::writeAtomic(AtomicOrdering Ordering,
-                                  SynchronizationScope SynchScope) {
+ void AssemblyWriter::writeSyncScope(const LLVMContext &Context,
+                                     SyncScope::ID SSID) {
+   switch (SSID) {
+   case SyncScope::System: {
+     break;
+   }
+   default: {
+     if (SSNs.empty())
+       Context.getSyncScopeNames(SSNs);
+
+     Out << " syncscope(\"";
+     PrintEscapedString(SSNs[SSID], Out);
+     Out << "\")";
+     break;
+   }
+   }
+ }
+
+ void AssemblyWriter::writeAtomic(const LLVMContext &Context,
+                                  AtomicOrdering Ordering,
+                                  SyncScope::ID SSID) {
  if (Ordering == AtomicOrdering::NotAtomic)
    return;

- switch (SynchScope) {
- case SingleThread: Out << " singlethread"; break;
- case CrossThread: break;
- }
-
+ writeSyncScope(Context, SSID);
  Out << " " << toIRString(Ordering);
}

- void AssemblyWriter::writeAtomicCmpXchg(AtomicOrdering SuccessOrdering,
+ void AssemblyWriter::writeAtomicCmpXchg(const LLVMContext &Context,
+                                         AtomicOrdering SuccessOrdering,
                                          AtomicOrdering FailureOrdering,
-                                         SynchronizationScope SynchScope) {
+                                         SyncScope::ID SSID) {
  assert(SuccessOrdering != AtomicOrdering::NotAtomic &&
         FailureOrdering != AtomicOrdering::NotAtomic);

- switch (SynchScope) {
- case SingleThread: Out << " singlethread"; break;
- case CrossThread: break;
- }
-
+ writeSyncScope(Context, SSID);
  Out << " " << toIRString(SuccessOrdering);
  Out << " " << toIRString(FailureOrdering);
}

@ -3215,21 +3234,22 @@ void AssemblyWriter::printInstruction(const Instruction &I) {
  // Print atomic ordering/alignment for memory operations
  if (const LoadInst *LI = dyn_cast<LoadInst>(&I)) {
    if (LI->isAtomic())
-     writeAtomic(LI->getOrdering(), LI->getSynchScope());
+     writeAtomic(LI->getContext(), LI->getOrdering(), LI->getSyncScopeID());
    if (LI->getAlignment())
      Out << ", align " << LI->getAlignment();
  } else if (const StoreInst *SI = dyn_cast<StoreInst>(&I)) {
    if (SI->isAtomic())
-     writeAtomic(SI->getOrdering(), SI->getSynchScope());
+     writeAtomic(SI->getContext(), SI->getOrdering(), SI->getSyncScopeID());
    if (SI->getAlignment())
      Out << ", align " << SI->getAlignment();
  } else if (const AtomicCmpXchgInst *CXI = dyn_cast<AtomicCmpXchgInst>(&I)) {
-   writeAtomicCmpXchg(CXI->getSuccessOrdering(), CXI->getFailureOrdering(),
-                      CXI->getSynchScope());
+   writeAtomicCmpXchg(CXI->getContext(), CXI->getSuccessOrdering(),
+                      CXI->getFailureOrdering(), CXI->getSyncScopeID());
  } else if (const AtomicRMWInst *RMWI = dyn_cast<AtomicRMWInst>(&I)) {
-   writeAtomic(RMWI->getOrdering(), RMWI->getSynchScope());
+   writeAtomic(RMWI->getContext(), RMWI->getOrdering(),
+               RMWI->getSyncScopeID());
  } else if (const FenceInst *FI = dyn_cast<FenceInst>(&I)) {
-   writeAtomic(FI->getOrdering(), FI->getSynchScope());
+   writeAtomic(FI->getContext(), FI->getOrdering(), FI->getSyncScopeID());
  }

  // Print Metadata info.
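
To see the printer above in action, here is a hedged sketch that builds a fence
carrying a target scope; printing it goes through writeAtomic/writeSyncScope
and yields, e.g., fence syncscope("agent") seq_cst. The "agent" name is a
hypothetical example::

  #include "llvm/IR/IRBuilder.h"
  using namespace llvm;

  void emitScopedFence(IRBuilder<> &Builder) {
    LLVMContext &Ctx = Builder.getContext();
    // Prints as: fence syncscope("agent") seq_cst
    Builder.CreateFence(AtomicOrdering::SequentiallyConsistent,
                        Ctx.getOrInsertSyncScopeID("agent"));
  }
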
@ -2756,11 +2756,14 @@ static LLVMAtomicOrdering mapToLLVMOrdering(AtomicOrdering Ordering) {
  llvm_unreachable("Invalid AtomicOrdering value!");
}

+ // TODO: Should this and other atomic instructions support building with
+ // "syncscope"?
LLVMValueRef LLVMBuildFence(LLVMBuilderRef B, LLVMAtomicOrdering Ordering,
                            LLVMBool isSingleThread, const char *Name) {
  return wrap(
    unwrap(B)->CreateFence(mapFromLLVMOrdering(Ordering),
-                          isSingleThread ? SingleThread : CrossThread,
+                          isSingleThread ? SyncScope::SingleThread
+                                         : SyncScope::System,
                           Name));
}

@ -3042,7 +3045,8 @@ LLVMValueRef LLVMBuildAtomicRMW(LLVMBuilderRef B,LLVMAtomicRMWBinOp op,
  case LLVMAtomicRMWBinOpUMin: intop = AtomicRMWInst::UMin; break;
  }
  return wrap(unwrap(B)->CreateAtomicRMW(intop, unwrap(PTR), unwrap(Val),
-   mapFromLLVMOrdering(ordering), singleThread ? SingleThread : CrossThread));
+   mapFromLLVMOrdering(ordering), singleThread ? SyncScope::SingleThread
+                                               : SyncScope::System));
}

LLVMValueRef LLVMBuildAtomicCmpXchg(LLVMBuilderRef B, LLVMValueRef Ptr,

@ -3054,7 +3058,7 @@ LLVMValueRef LLVMBuildAtomicCmpXchg(LLVMBuilderRef B, LLVMValueRef Ptr,
  return wrap(unwrap(B)->CreateAtomicCmpXchg(unwrap(Ptr), unwrap(Cmp),
    unwrap(New), mapFromLLVMOrdering(SuccessOrdering),
    mapFromLLVMOrdering(FailureOrdering),
-   singleThread ? SingleThread : CrossThread));
+   singleThread ? SyncScope::SingleThread : SyncScope::System));
}

@ -3062,17 +3066,18 @@ LLVMBool LLVMIsAtomicSingleThread(LLVMValueRef AtomicInst) {
  Value *P = unwrap<Value>(AtomicInst);

  if (AtomicRMWInst *I = dyn_cast<AtomicRMWInst>(P))
-   return I->getSynchScope() == SingleThread;
- return cast<AtomicCmpXchgInst>(P)->getSynchScope() == SingleThread;
+   return I->getSyncScopeID() == SyncScope::SingleThread;
+ return cast<AtomicCmpXchgInst>(P)->getSyncScopeID() ==
+            SyncScope::SingleThread;
}

void LLVMSetAtomicSingleThread(LLVMValueRef AtomicInst, LLVMBool NewValue) {
  Value *P = unwrap<Value>(AtomicInst);
- SynchronizationScope Sync = NewValue ? SingleThread : CrossThread;
+ SyncScope::ID SSID = NewValue ? SyncScope::SingleThread : SyncScope::System;

  if (AtomicRMWInst *I = dyn_cast<AtomicRMWInst>(P))
-   return I->setSynchScope(Sync);
- return cast<AtomicCmpXchgInst>(P)->setSynchScope(Sync);
+   return I->setSyncScopeID(SSID);
+ return cast<AtomicCmpXchgInst>(P)->setSyncScopeID(SSID);
}

LLVMAtomicOrdering LLVMGetCmpXchgSuccessOrdering(LLVMValueRef CmpXchgInst) {
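
As the TODO above notes, the C API still exposes only the two pre-defined
scopes through a boolean. A hedged usage sketch under that limitation (valid as
both C and C++)::

  #include "llvm-c/Core.h"

  // Builds: fence seq_cst (system scope). Passing a nonzero isSingleThread
  // instead yields: fence syncscope("singlethread") seq_cst. Arbitrary
  // target scopes are not reachable from this entry point.
  LLVMValueRef makeFence(LLVMBuilderRef B) {
    return LLVMBuildFence(B, LLVMAtomicOrderingSequentiallyConsistent,
                          /*isSingleThread=*/0, "");
  }
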
@ -362,13 +362,13 @@ static bool haveSameSpecialState(const Instruction *I1, const Instruction *I2,
           (LI->getAlignment() == cast<LoadInst>(I2)->getAlignment() ||
            IgnoreAlignment) &&
           LI->getOrdering() == cast<LoadInst>(I2)->getOrdering() &&
-          LI->getSynchScope() == cast<LoadInst>(I2)->getSynchScope();
+          LI->getSyncScopeID() == cast<LoadInst>(I2)->getSyncScopeID();
  if (const StoreInst *SI = dyn_cast<StoreInst>(I1))
    return SI->isVolatile() == cast<StoreInst>(I2)->isVolatile() &&
           (SI->getAlignment() == cast<StoreInst>(I2)->getAlignment() ||
            IgnoreAlignment) &&
           SI->getOrdering() == cast<StoreInst>(I2)->getOrdering() &&
-          SI->getSynchScope() == cast<StoreInst>(I2)->getSynchScope();
+          SI->getSyncScopeID() == cast<StoreInst>(I2)->getSyncScopeID();
  if (const CmpInst *CI = dyn_cast<CmpInst>(I1))
    return CI->getPredicate() == cast<CmpInst>(I2)->getPredicate();
  if (const CallInst *CI = dyn_cast<CallInst>(I1))

@ -386,7 +386,7 @@ static bool haveSameSpecialState(const Instruction *I1, const Instruction *I2,
    return EVI->getIndices() == cast<ExtractValueInst>(I2)->getIndices();
  if (const FenceInst *FI = dyn_cast<FenceInst>(I1))
    return FI->getOrdering() == cast<FenceInst>(I2)->getOrdering() &&
-          FI->getSynchScope() == cast<FenceInst>(I2)->getSynchScope();
+          FI->getSyncScopeID() == cast<FenceInst>(I2)->getSyncScopeID();
  if (const AtomicCmpXchgInst *CXI = dyn_cast<AtomicCmpXchgInst>(I1))
    return CXI->isVolatile() == cast<AtomicCmpXchgInst>(I2)->isVolatile() &&
           CXI->isWeak() == cast<AtomicCmpXchgInst>(I2)->isWeak() &&

@ -394,12 +394,13 @@ static bool haveSameSpecialState(const Instruction *I1, const Instruction *I2,
               cast<AtomicCmpXchgInst>(I2)->getSuccessOrdering() &&
           CXI->getFailureOrdering() ==
               cast<AtomicCmpXchgInst>(I2)->getFailureOrdering() &&
-          CXI->getSynchScope() == cast<AtomicCmpXchgInst>(I2)->getSynchScope();
+          CXI->getSyncScopeID() ==
+              cast<AtomicCmpXchgInst>(I2)->getSyncScopeID();
  if (const AtomicRMWInst *RMWI = dyn_cast<AtomicRMWInst>(I1))
    return RMWI->getOperation() == cast<AtomicRMWInst>(I2)->getOperation() &&
           RMWI->isVolatile() == cast<AtomicRMWInst>(I2)->isVolatile() &&
           RMWI->getOrdering() == cast<AtomicRMWInst>(I2)->getOrdering() &&
-          RMWI->getSynchScope() == cast<AtomicRMWInst>(I2)->getSynchScope();
+          RMWI->getSyncScopeID() == cast<AtomicRMWInst>(I2)->getSyncScopeID();

  return true;
}
@ -1304,34 +1304,34 @@ LoadInst::LoadInst(Value *Ptr, const Twine &Name, bool isVolatile,
LoadInst::LoadInst(Type *Ty, Value *Ptr, const Twine &Name, bool isVolatile,
                   unsigned Align, Instruction *InsertBef)
    : LoadInst(Ty, Ptr, Name, isVolatile, Align, AtomicOrdering::NotAtomic,
-              CrossThread, InsertBef) {}
+              SyncScope::System, InsertBef) {}

LoadInst::LoadInst(Value *Ptr, const Twine &Name, bool isVolatile,
                   unsigned Align, BasicBlock *InsertAE)
    : LoadInst(Ptr, Name, isVolatile, Align, AtomicOrdering::NotAtomic,
-              CrossThread, InsertAE) {}
+              SyncScope::System, InsertAE) {}

LoadInst::LoadInst(Type *Ty, Value *Ptr, const Twine &Name, bool isVolatile,
                   unsigned Align, AtomicOrdering Order,
-                  SynchronizationScope SynchScope, Instruction *InsertBef)
+                  SyncScope::ID SSID, Instruction *InsertBef)
    : UnaryInstruction(Ty, Load, Ptr, InsertBef) {
  assert(Ty == cast<PointerType>(Ptr->getType())->getElementType());
  setVolatile(isVolatile);
  setAlignment(Align);
- setAtomic(Order, SynchScope);
+ setAtomic(Order, SSID);
  AssertOK();
  setName(Name);
}

LoadInst::LoadInst(Value *Ptr, const Twine &Name, bool isVolatile,
                   unsigned Align, AtomicOrdering Order,
-                  SynchronizationScope SynchScope,
+                  SyncScope::ID SSID,
                   BasicBlock *InsertAE)
    : UnaryInstruction(cast<PointerType>(Ptr->getType())->getElementType(),
                       Load, Ptr, InsertAE) {
  setVolatile(isVolatile);
  setAlignment(Align);
- setAtomic(Order, SynchScope);
+ setAtomic(Order, SSID);
  AssertOK();
  setName(Name);
}

@ -1419,16 +1419,16 @@ StoreInst::StoreInst(Value *val, Value *addr, bool isVolatile,
StoreInst::StoreInst(Value *val, Value *addr, bool isVolatile, unsigned Align,
                     Instruction *InsertBefore)
    : StoreInst(val, addr, isVolatile, Align, AtomicOrdering::NotAtomic,
-               CrossThread, InsertBefore) {}
+               SyncScope::System, InsertBefore) {}

StoreInst::StoreInst(Value *val, Value *addr, bool isVolatile, unsigned Align,
                     BasicBlock *InsertAtEnd)
    : StoreInst(val, addr, isVolatile, Align, AtomicOrdering::NotAtomic,
-               CrossThread, InsertAtEnd) {}
+               SyncScope::System, InsertAtEnd) {}

StoreInst::StoreInst(Value *val, Value *addr, bool isVolatile,
                     unsigned Align, AtomicOrdering Order,
-                    SynchronizationScope SynchScope,
+                    SyncScope::ID SSID,
                     Instruction *InsertBefore)
    : Instruction(Type::getVoidTy(val->getContext()), Store,
                  OperandTraits<StoreInst>::op_begin(this),

@ -1438,13 +1438,13 @@ StoreInst::StoreInst(Value *val, Value *addr, bool isVolatile,
  Op<1>() = addr;
  setVolatile(isVolatile);
  setAlignment(Align);
- setAtomic(Order, SynchScope);
+ setAtomic(Order, SSID);
  AssertOK();
}

StoreInst::StoreInst(Value *val, Value *addr, bool isVolatile,
                     unsigned Align, AtomicOrdering Order,
-                    SynchronizationScope SynchScope,
+                    SyncScope::ID SSID,
                     BasicBlock *InsertAtEnd)
    : Instruction(Type::getVoidTy(val->getContext()), Store,
                  OperandTraits<StoreInst>::op_begin(this),

@ -1454,7 +1454,7 @@ StoreInst::StoreInst(Value *val, Value *addr, bool isVolatile,
  Op<1>() = addr;
  setVolatile(isVolatile);
  setAlignment(Align);
- setAtomic(Order, SynchScope);
+ setAtomic(Order, SSID);
  AssertOK();
}

@ -1474,13 +1474,13 @@ void StoreInst::setAlignment(unsigned Align) {
void AtomicCmpXchgInst::Init(Value *Ptr, Value *Cmp, Value *NewVal,
                             AtomicOrdering SuccessOrdering,
                             AtomicOrdering FailureOrdering,
-                            SynchronizationScope SynchScope) {
+                            SyncScope::ID SSID) {
  Op<0>() = Ptr;
  Op<1>() = Cmp;
  Op<2>() = NewVal;
  setSuccessOrdering(SuccessOrdering);
  setFailureOrdering(FailureOrdering);
- setSynchScope(SynchScope);
+ setSyncScopeID(SSID);

  assert(getOperand(0) && getOperand(1) && getOperand(2) &&
         "All operands must be non-null!");

@ -1507,25 +1507,25 @@ void AtomicCmpXchgInst::Init(Value *Ptr, Value *Cmp, Value *NewVal,
AtomicCmpXchgInst::AtomicCmpXchgInst(Value *Ptr, Value *Cmp, Value *NewVal,
                                     AtomicOrdering SuccessOrdering,
                                     AtomicOrdering FailureOrdering,
-                                    SynchronizationScope SynchScope,
+                                    SyncScope::ID SSID,
                                     Instruction *InsertBefore)
    : Instruction(
          StructType::get(Cmp->getType(), Type::getInt1Ty(Cmp->getContext())),
          AtomicCmpXchg, OperandTraits<AtomicCmpXchgInst>::op_begin(this),
          OperandTraits<AtomicCmpXchgInst>::operands(this), InsertBefore) {
- Init(Ptr, Cmp, NewVal, SuccessOrdering, FailureOrdering, SynchScope);
+ Init(Ptr, Cmp, NewVal, SuccessOrdering, FailureOrdering, SSID);
}

AtomicCmpXchgInst::AtomicCmpXchgInst(Value *Ptr, Value *Cmp, Value *NewVal,
                                     AtomicOrdering SuccessOrdering,
                                     AtomicOrdering FailureOrdering,
-                                    SynchronizationScope SynchScope,
+                                    SyncScope::ID SSID,
                                     BasicBlock *InsertAtEnd)
    : Instruction(
          StructType::get(Cmp->getType(), Type::getInt1Ty(Cmp->getContext())),
          AtomicCmpXchg, OperandTraits<AtomicCmpXchgInst>::op_begin(this),
          OperandTraits<AtomicCmpXchgInst>::operands(this), InsertAtEnd) {
- Init(Ptr, Cmp, NewVal, SuccessOrdering, FailureOrdering, SynchScope);
+ Init(Ptr, Cmp, NewVal, SuccessOrdering, FailureOrdering, SSID);
}

//===----------------------------------------------------------------------===//

@ -1534,12 +1534,12 @@ AtomicCmpXchgInst::AtomicCmpXchgInst(Value *Ptr, Value *Cmp, Value *NewVal,

void AtomicRMWInst::Init(BinOp Operation, Value *Ptr, Value *Val,
                         AtomicOrdering Ordering,
-                        SynchronizationScope SynchScope) {
+                        SyncScope::ID SSID) {
  Op<0>() = Ptr;
  Op<1>() = Val;
  setOperation(Operation);
  setOrdering(Ordering);
- setSynchScope(SynchScope);
+ setSyncScopeID(SSID);

  assert(getOperand(0) && getOperand(1) &&
         "All operands must be non-null!");

@ -1554,24 +1554,24 @@ void AtomicRMWInst::Init(BinOp Operation, Value *Ptr, Value *Val,

AtomicRMWInst::AtomicRMWInst(BinOp Operation, Value *Ptr, Value *Val,
                             AtomicOrdering Ordering,
-                            SynchronizationScope SynchScope,
+                            SyncScope::ID SSID,
                             Instruction *InsertBefore)
    : Instruction(Val->getType(), AtomicRMW,
                  OperandTraits<AtomicRMWInst>::op_begin(this),
                  OperandTraits<AtomicRMWInst>::operands(this),
                  InsertBefore) {
- Init(Operation, Ptr, Val, Ordering, SynchScope);
+ Init(Operation, Ptr, Val, Ordering, SSID);
}

AtomicRMWInst::AtomicRMWInst(BinOp Operation, Value *Ptr, Value *Val,
                             AtomicOrdering Ordering,
-                            SynchronizationScope SynchScope,
+                            SyncScope::ID SSID,
                             BasicBlock *InsertAtEnd)
    : Instruction(Val->getType(), AtomicRMW,
                  OperandTraits<AtomicRMWInst>::op_begin(this),
                  OperandTraits<AtomicRMWInst>::operands(this),
                  InsertAtEnd) {
- Init(Operation, Ptr, Val, Ordering, SynchScope);
+ Init(Operation, Ptr, Val, Ordering, SSID);
}

//===----------------------------------------------------------------------===//

@ -1579,19 +1579,19 @@ AtomicRMWInst::AtomicRMWInst(BinOp Operation, Value *Ptr, Value *Val,
//===----------------------------------------------------------------------===//

FenceInst::FenceInst(LLVMContext &C, AtomicOrdering Ordering,
-                    SynchronizationScope SynchScope,
+                    SyncScope::ID SSID,
                     Instruction *InsertBefore)
    : Instruction(Type::getVoidTy(C), Fence, nullptr, 0, InsertBefore) {
  setOrdering(Ordering);
- setSynchScope(SynchScope);
+ setSyncScopeID(SSID);
}

FenceInst::FenceInst(LLVMContext &C, AtomicOrdering Ordering,
-                    SynchronizationScope SynchScope,
+                    SyncScope::ID SSID,
                     BasicBlock *InsertAtEnd)
    : Instruction(Type::getVoidTy(C), Fence, nullptr, 0, InsertAtEnd) {
  setOrdering(Ordering);
- setSynchScope(SynchScope);
+ setSyncScopeID(SSID);
}

//===----------------------------------------------------------------------===//

@ -3795,12 +3795,12 @@ AllocaInst *AllocaInst::cloneImpl() const {

LoadInst *LoadInst::cloneImpl() const {
  return new LoadInst(getOperand(0), Twine(), isVolatile(),
-                     getAlignment(), getOrdering(), getSynchScope());
+                     getAlignment(), getOrdering(), getSyncScopeID());
}

StoreInst *StoreInst::cloneImpl() const {
  return new StoreInst(getOperand(0), getOperand(1), isVolatile(),
-                      getAlignment(), getOrdering(), getSynchScope());
+                      getAlignment(), getOrdering(), getSyncScopeID());
}

@ -3808,7 +3808,7 @@ AtomicCmpXchgInst *AtomicCmpXchgInst::cloneImpl() const {
  AtomicCmpXchgInst *Result =
      new AtomicCmpXchgInst(getOperand(0), getOperand(1), getOperand(2),
                            getSuccessOrdering(), getFailureOrdering(),
-                           getSynchScope());
+                           getSyncScopeID());
  Result->setVolatile(isVolatile());
  Result->setWeak(isWeak());
  return Result;

@ -3816,14 +3816,14 @@ AtomicCmpXchgInst *AtomicCmpXchgInst::cloneImpl() const {

AtomicRMWInst *AtomicRMWInst::cloneImpl() const {
  AtomicRMWInst *Result =
-     new AtomicRMWInst(getOperation(),getOperand(0), getOperand(1),
-                       getOrdering(), getSynchScope());
+     new AtomicRMWInst(getOperation(), getOperand(0), getOperand(1),
+                       getOrdering(), getSyncScopeID());
  Result->setVolatile(isVolatile());
  return Result;
}

FenceInst *FenceInst::cloneImpl() const {
- return new FenceInst(getContext(), getOrdering(), getSynchScope());
+ return new FenceInst(getContext(), getOrdering(), getSyncScopeID());
}

TruncInst *TruncInst::cloneImpl() const {
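
A hedged sketch of the updated constructors in use, creating an atomic load on
a target scope; the "agent" name is an illustrative assumption, not part of
this patch. The constructor signature matches the SyncScope::ID overloads
defined above (and used by GVN further below)::

  #include "llvm/IR/Instructions.h"
  using namespace llvm;

  LoadInst *makeScopedLoad(Value *Ptr, Instruction *InsertBefore) {
    LLVMContext &Ctx = Ptr->getContext();
    // Prints as, e.g.: %val = load atomic i32, i32* %p syncscope("agent") seq_cst, align 4
    return new LoadInst(Ptr, "val", /*isVolatile=*/false, /*Align=*/4,
                        AtomicOrdering::SequentiallyConsistent,
                        Ctx.getOrInsertSyncScopeID("agent"), InsertBefore);
  }
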
@ -81,6 +81,16 @@ LLVMContext::LLVMContext() : pImpl(new LLVMContextImpl(*this)) {
  assert(GCTransitionEntry->second == LLVMContext::OB_gc_transition &&
         "gc-transition operand bundle id drifted!");
  (void)GCTransitionEntry;

+ SyncScope::ID SingleThreadSSID =
+     pImpl->getOrInsertSyncScopeID("singlethread");
+ assert(SingleThreadSSID == SyncScope::SingleThread &&
+        "singlethread synchronization scope ID drifted!");
+
+ SyncScope::ID SystemSSID =
+     pImpl->getOrInsertSyncScopeID("");
+ assert(SystemSSID == SyncScope::System &&
+        "system synchronization scope ID drifted!");
}

LLVMContext::~LLVMContext() { delete pImpl; }

@ -255,6 +265,14 @@ uint32_t LLVMContext::getOperandBundleTagID(StringRef Tag) const {
  return pImpl->getOperandBundleTagID(Tag);
}

+ SyncScope::ID LLVMContext::getOrInsertSyncScopeID(StringRef SSN) {
+   return pImpl->getOrInsertSyncScopeID(SSN);
+ }
+
+ void LLVMContext::getSyncScopeNames(SmallVectorImpl<StringRef> &SSNs) const {
+   pImpl->getSyncScopeNames(SSNs);
+ }
+
void LLVMContext::setGC(const Function &Fn, std::string GCName) {
  auto It = pImpl->GCNames.find(&Fn);
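
The constructor above pins the two pre-defined IDs by registering their names
first, in order. A hedged sketch of the resulting round-trip behaviour; the
"agent" name is again a hypothetical example::

  #include "llvm/ADT/SmallVector.h"
  #include "llvm/IR/LLVMContext.h"
  #include <cassert>
  using namespace llvm;

  void demoScopeInterning(LLVMContext &Ctx) {
    // Repeated registration of the same name returns the same interned ID.
    SyncScope::ID A = Ctx.getOrInsertSyncScopeID("agent");
    assert(A == Ctx.getOrInsertSyncScopeID("agent"));

    // Names come back ordered by ID, so an ID indexes the name table.
    SmallVector<StringRef, 8> Names;
    Ctx.getSyncScopeNames(Names);
    assert(Names[A] == "agent");
  }
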
@ -205,6 +205,20 @@ uint32_t LLVMContextImpl::getOperandBundleTagID(StringRef Tag) const {
  return I->second;
}

+ SyncScope::ID LLVMContextImpl::getOrInsertSyncScopeID(StringRef SSN) {
+   auto NewSSID = SSC.size();
+   assert(NewSSID < std::numeric_limits<SyncScope::ID>::max() &&
+          "Hit the maximum number of synchronization scopes allowed!");
+   return SSC.insert(std::make_pair(SSN, SyncScope::ID(NewSSID))).first->second;
+ }
+
+ void LLVMContextImpl::getSyncScopeNames(
+     SmallVectorImpl<StringRef> &SSNs) const {
+   SSNs.resize(SSC.size());
+   for (const auto &SSE : SSC)
+     SSNs[SSE.second] = SSE.first();
+ }
+
/// Singleton instance of the OptBisect class.
///
/// This singleton is accessed via the LLVMContext::getOptBisect() function. It

@ -1297,6 +1297,20 @@ public:
  void getOperandBundleTags(SmallVectorImpl<StringRef> &Tags) const;
  uint32_t getOperandBundleTagID(StringRef Tag) const;

+ /// A set of interned synchronization scopes. The StringMap maps
+ /// synchronization scope names to their respective synchronization scope
+ /// IDs.
+ StringMap<SyncScope::ID> SSC;
+
+ /// getOrInsertSyncScopeID - Maps synchronization scope name to
+ /// synchronization scope ID. Every synchronization scope registered with
+ /// LLVMContext has unique ID except pre-defined ones.
+ SyncScope::ID getOrInsertSyncScopeID(StringRef SSN);
+
+ /// getSyncScopeNames - Populates client supplied SmallVector with
+ /// synchronization scope names registered with LLVMContext. Synchronization
+ /// scope names are ordered by increasing synchronization scope IDs.
+ void getSyncScopeNames(SmallVectorImpl<StringRef> &SSNs) const;

  /// Maintain the GC name for each function.
  ///
  /// This saves allocating an additional word in Function for programs which

@ -3108,7 +3108,7 @@ void Verifier::visitLoadInst(LoadInst &LI) {
           ElTy, &LI);
    checkAtomicMemAccessSize(ElTy, &LI);
  } else {
-   Assert(LI.getSynchScope() == CrossThread,
+   Assert(LI.getSyncScopeID() == SyncScope::System,
           "Non-atomic load cannot have SynchronizationScope specified", &LI);
  }

@ -3137,7 +3137,7 @@ void Verifier::visitStoreInst(StoreInst &SI) {
           ElTy, &SI);
    checkAtomicMemAccessSize(ElTy, &SI);
  } else {
-   Assert(SI.getSynchScope() == CrossThread,
+   Assert(SI.getSyncScopeID() == SyncScope::System,
           "Non-atomic store cannot have SynchronizationScope specified", &SI);
  }
  visitInstruction(SI);

@ -3398,9 +3398,9 @@ ARMTargetLowering::LowerINTRINSIC_WO_CHAIN(SDValue Op, SelectionDAG &DAG,
static SDValue LowerATOMIC_FENCE(SDValue Op, SelectionDAG &DAG,
                                 const ARMSubtarget *Subtarget) {
  SDLoc dl(Op);
- ConstantSDNode *ScopeN = cast<ConstantSDNode>(Op.getOperand(2));
- auto Scope = static_cast<SynchronizationScope>(ScopeN->getZExtValue());
- if (Scope == SynchronizationScope::SingleThread)
+ ConstantSDNode *SSIDNode = cast<ConstantSDNode>(Op.getOperand(2));
+ auto SSID = static_cast<SyncScope::ID>(SSIDNode->getZExtValue());
+ if (SSID == SyncScope::SingleThread)
    return Op;

  if (!Subtarget->hasDataBarrier()) {

@ -3182,13 +3182,13 @@ SDValue SystemZTargetLowering::lowerATOMIC_FENCE(SDValue Op,
  SDLoc DL(Op);
  AtomicOrdering FenceOrdering = static_cast<AtomicOrdering>(
      cast<ConstantSDNode>(Op.getOperand(1))->getZExtValue());
- SynchronizationScope FenceScope = static_cast<SynchronizationScope>(
+ SyncScope::ID FenceSSID = static_cast<SyncScope::ID>(
      cast<ConstantSDNode>(Op.getOperand(2))->getZExtValue());

  // The only fence that needs an instruction is a sequentially-consistent
  // cross-thread fence.
  if (FenceOrdering == AtomicOrdering::SequentiallyConsistent &&
-     FenceScope == CrossThread) {
+     FenceSSID == SyncScope::System) {
    return SDValue(DAG.getMachineNode(SystemZ::Serialize, DL, MVT::Other,
                                      Op.getOperand(0)),
                   0);

@ -22850,7 +22850,7 @@ X86TargetLowering::lowerIdempotentRMWIntoFencedLoad(AtomicRMWInst *AI) const {
  auto Builder = IRBuilder<>(AI);
  Module *M = Builder.GetInsertBlock()->getParent()->getParent();
- auto SynchScope = AI->getSynchScope();
+ auto SSID = AI->getSyncScopeID();
  // We must restrict the ordering to avoid generating loads with Release or
  // ReleaseAcquire orderings.
  auto Order = AtomicCmpXchgInst::getStrongestFailureOrdering(AI->getOrdering());

@ -22872,7 +22872,7 @@ X86TargetLowering::lowerIdempotentRMWIntoFencedLoad(AtomicRMWInst *AI) const {
  // otherwise, we might be able to be more aggressive on relaxed idempotent
  // rmw. In practice, they do not look useful, so we don't try to be
  // especially clever.
- if (SynchScope == SingleThread)
+ if (SSID == SyncScope::SingleThread)
    // FIXME: we could just insert an X86ISD::MEMBARRIER here, except we are at
    // the IR level, so we must wrap it in an intrinsic.
    return nullptr;

@ -22891,7 +22891,7 @@ X86TargetLowering::lowerIdempotentRMWIntoFencedLoad(AtomicRMWInst *AI) const {
  // Finally we can emit the atomic load.
  LoadInst *Loaded = Builder.CreateAlignedLoad(Ptr,
      AI->getType()->getPrimitiveSizeInBits());
- Loaded->setAtomic(Order, SynchScope);
+ Loaded->setAtomic(Order, SSID);
  AI->replaceAllUsesWith(Loaded);
  AI->eraseFromParent();
  return Loaded;

@ -22902,13 +22902,13 @@ static SDValue LowerATOMIC_FENCE(SDValue Op, const X86Subtarget &Subtarget,
  SDLoc dl(Op);
  AtomicOrdering FenceOrdering = static_cast<AtomicOrdering>(
      cast<ConstantSDNode>(Op.getOperand(1))->getZExtValue());
- SynchronizationScope FenceScope = static_cast<SynchronizationScope>(
+ SyncScope::ID FenceSSID = static_cast<SyncScope::ID>(
      cast<ConstantSDNode>(Op.getOperand(2))->getZExtValue());

  // The only fence that needs an instruction is a sequentially-consistent
  // cross-thread fence.
  if (FenceOrdering == AtomicOrdering::SequentiallyConsistent &&
-     FenceScope == CrossThread) {
+     FenceSSID == SyncScope::System) {
    if (Subtarget.hasMFence())
      return DAG.getNode(X86ISD::MFENCE, dl, MVT::Other, Op.getOperand(0));

@ -837,7 +837,7 @@ OptimizeGlobalAddressOfMalloc(GlobalVariable *GV, CallInst *CI, Type *AllocTy,
    if (StoreInst *SI = dyn_cast<StoreInst>(GV->user_back())) {
      // The global is initialized when the store to it occurs.
      new StoreInst(ConstantInt::getTrue(GV->getContext()), InitBool, false, 0,
-                   SI->getOrdering(), SI->getSynchScope(), SI);
+                   SI->getOrdering(), SI->getSyncScopeID(), SI);
      SI->eraseFromParent();
      continue;
    }

@ -854,7 +854,7 @@ OptimizeGlobalAddressOfMalloc(GlobalVariable *GV, CallInst *CI, Type *AllocTy,
      // Replace the cmp X, 0 with a use of the bool value.
      // Sink the load to where the compare was, if atomic rules allow us to.
      Value *LV = new LoadInst(InitBool, InitBool->getName()+".val", false, 0,
-                              LI->getOrdering(), LI->getSynchScope(),
+                              LI->getOrdering(), LI->getSyncScopeID(),
                               LI->isUnordered() ? (Instruction*)ICI : LI);
      InitBoolUsed = true;
      switch (ICI->getPredicate()) {

@ -1605,7 +1605,7 @@ static bool TryToShrinkGlobalToBoolean(GlobalVariable *GV, Constant *OtherVal) {
        assert(LI->getOperand(0) == GV && "Not a copy!");
        // Insert a new load, to preserve the saved value.
        StoreVal = new LoadInst(NewGV, LI->getName()+".b", false, 0,
-                               LI->getOrdering(), LI->getSynchScope(), LI);
+                               LI->getOrdering(), LI->getSyncScopeID(), LI);
      } else {
        assert((isa<CastInst>(StoredVal) || isa<SelectInst>(StoredVal)) &&
               "This is not a form that we understand!");

@ -1614,12 +1614,12 @@ static bool TryToShrinkGlobalToBoolean(GlobalVariable *GV, Constant *OtherVal) {
        }
      }
      new StoreInst(StoreVal, NewGV, false, 0,
-                   SI->getOrdering(), SI->getSynchScope(), SI);
+                   SI->getOrdering(), SI->getSyncScopeID(), SI);
    } else {
      // Change the load into a load of bool then a select.
      LoadInst *LI = cast<LoadInst>(UI);
      LoadInst *NLI = new LoadInst(NewGV, LI->getName()+".b", false, 0,
-                                  LI->getOrdering(), LI->getSynchScope(), LI);
+                                  LI->getOrdering(), LI->getSyncScopeID(), LI);
      Value *NSI;
      if (IsOneZero)
        NSI = new ZExtInst(NLI, LI->getType(), "", LI);

@ -461,7 +461,7 @@ static LoadInst *combineLoadToNewType(InstCombiner &IC, LoadInst &LI, Type *NewT
  LoadInst *NewLoad = IC.Builder.CreateAlignedLoad(
      IC.Builder.CreateBitCast(Ptr, NewTy->getPointerTo(AS)),
      LI.getAlignment(), LI.isVolatile(), LI.getName() + Suffix);
- NewLoad->setAtomic(LI.getOrdering(), LI.getSynchScope());
+ NewLoad->setAtomic(LI.getOrdering(), LI.getSyncScopeID());
  MDBuilder MDB(NewLoad->getContext());
  for (const auto &MDPair : MD) {
    unsigned ID = MDPair.first;

@ -521,7 +521,7 @@ static StoreInst *combineStoreToNewValue(InstCombiner &IC, StoreInst &SI, Value
  StoreInst *NewStore = IC.Builder.CreateAlignedStore(
      V, IC.Builder.CreateBitCast(Ptr, V->getType()->getPointerTo(AS)),
      SI.getAlignment(), SI.isVolatile());
- NewStore->setAtomic(SI.getOrdering(), SI.getSynchScope());
+ NewStore->setAtomic(SI.getOrdering(), SI.getSyncScopeID());
  for (const auto &MDPair : MD) {
    unsigned ID = MDPair.first;
    MDNode *N = MDPair.second;

@ -1025,9 +1025,9 @@ Instruction *InstCombiner::visitLoadInst(LoadInst &LI) {
                               SI->getOperand(2)->getName()+".val");
      assert(LI.isUnordered() && "implied by above");
      V1->setAlignment(Align);
-     V1->setAtomic(LI.getOrdering(), LI.getSynchScope());
+     V1->setAtomic(LI.getOrdering(), LI.getSyncScopeID());
      V2->setAlignment(Align);
-     V2->setAtomic(LI.getOrdering(), LI.getSynchScope());
+     V2->setAtomic(LI.getOrdering(), LI.getSyncScopeID());
      return SelectInst::Create(SI->getCondition(), V1, V2);
    }

@ -1540,7 +1540,7 @@ bool InstCombiner::SimplifyStoreAtEndOfBlock(StoreInst &SI) {
                                   SI.isVolatile(),
                                   SI.getAlignment(),
                                   SI.getOrdering(),
-                                  SI.getSynchScope());
+                                  SI.getSyncScopeID());
  InsertNewInstBefore(NewSI, *BBI);
  // The debug locations of the original instructions might differ; merge them.
  NewSI->setDebugLoc(DILocation::getMergedLocation(SI.getDebugLoc(),
@ -379,10 +379,11 @@ void ThreadSanitizer::chooseInstructionsToInstrument(
}

static bool isAtomic(Instruction *I) {
+ // TODO: Ask TTI whether synchronization scope is between threads.
  if (LoadInst *LI = dyn_cast<LoadInst>(I))
-   return LI->isAtomic() && LI->getSynchScope() == CrossThread;
+   return LI->isAtomic() && LI->getSyncScopeID() != SyncScope::SingleThread;
  if (StoreInst *SI = dyn_cast<StoreInst>(I))
-   return SI->isAtomic() && SI->getSynchScope() == CrossThread;
+   return SI->isAtomic() && SI->getSyncScopeID() != SyncScope::SingleThread;
  if (isa<AtomicRMWInst>(I))
    return true;
  if (isa<AtomicCmpXchgInst>(I))

@ -676,7 +677,7 @@ bool ThreadSanitizer::instrumentAtomic(Instruction *I, const DataLayout &DL) {
    I->eraseFromParent();
  } else if (FenceInst *FI = dyn_cast<FenceInst>(I)) {
    Value *Args[] = {createOrdering(&IRB, FI->getOrdering())};
-   Function *F = FI->getSynchScope() == SingleThread ?
+   Function *F = FI->getSyncScopeID() == SyncScope::SingleThread ?
        TsanAtomicSignalFence : TsanAtomicThreadFence;
    CallInst *C = CallInst::Create(F, Args);
    ReplaceInstWithInst(I, C);
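
Note the deliberate flip in the predicate above: the old code instrumented only
the cross-thread scope, while the new code treats every scope other than
"singlethread" -- including target-specific ones -- as potentially inter-thread.
A restatement of the check in isolation::

  #include "llvm/IR/Instructions.h"
  using namespace llvm;

  // Conservative: an unknown target scope may still synchronize across
  // threads, so only "singlethread" is exempt from instrumentation.
  static bool mayBeCrossThreadAtomic(const LoadInst *LI) {
    return LI->isAtomic() && LI->getSyncScopeID() != SyncScope::SingleThread;
  }
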
@ -1166,7 +1166,7 @@ bool GVN::PerformLoadPRE(LoadInst *LI, AvailValInBlkVect &ValuesPerBlock,
|
|||
|
||||
auto *NewLoad = new LoadInst(LoadPtr, LI->getName()+".pre",
|
||||
LI->isVolatile(), LI->getAlignment(),
|
||||
LI->getOrdering(), LI->getSynchScope(),
|
||||
LI->getOrdering(), LI->getSyncScopeID(),
|
||||
UnavailablePred->getTerminator());
|
||||
|
||||
// Transfer the old load's AA tags to the new load.
|
||||
|
|
|
@@ -1212,7 +1212,7 @@ bool JumpThreadingPass::SimplifyPartiallyRedundantLoad(LoadInst *LI) {
    LoadInst *NewVal = new LoadInst(
        LoadedPtr->DoPHITranslation(LoadBB, UnavailablePred),
        LI->getName() + ".pr", false, LI->getAlignment(), LI->getOrdering(),
        LI->getSynchScope(), UnavailablePred->getTerminator());
        LI->getSyncScopeID(), UnavailablePred->getTerminator());
    NewVal->setDebugLoc(LI->getDebugLoc());
    if (AATags)
      NewVal->setAAMetadata(AATags);
@@ -2398,7 +2398,7 @@ private:
      LoadInst *NewLI = IRB.CreateAlignedLoad(&NewAI, NewAI.getAlignment(),
                                              LI.isVolatile(), LI.getName());
      if (LI.isVolatile())
        NewLI->setAtomic(LI.getOrdering(), LI.getSynchScope());
        NewLI->setAtomic(LI.getOrdering(), LI.getSyncScopeID());

      // Any !nonnull metadata or !range metadata on the old load is also valid
      // on the new load. This is even true in some cases even when the loads
@@ -2433,7 +2433,7 @@ private:
                                              getSliceAlign(TargetTy),
                                              LI.isVolatile(), LI.getName());
      if (LI.isVolatile())
        NewLI->setAtomic(LI.getOrdering(), LI.getSynchScope());
        NewLI->setAtomic(LI.getOrdering(), LI.getSyncScopeID());

      V = NewLI;
      IsPtrAdjusted = true;
@@ -2576,7 +2576,7 @@ private:
    }
    NewSI->copyMetadata(SI, LLVMContext::MD_mem_parallel_loop_access);
    if (SI.isVolatile())
      NewSI->setAtomic(SI.getOrdering(), SI.getSynchScope());
      NewSI->setAtomic(SI.getOrdering(), SI.getSyncScopeID());
    Pass.DeadInsts.insert(&SI);
    deleteIfTriviallyDead(OldOp);
@@ -513,8 +513,8 @@ int FunctionComparator::cmpOperations(const Instruction *L,
    if (int Res =
            cmpOrderings(LI->getOrdering(), cast<LoadInst>(R)->getOrdering()))
      return Res;
    if (int Res =
            cmpNumbers(LI->getSynchScope(), cast<LoadInst>(R)->getSynchScope()))
    if (int Res = cmpNumbers(LI->getSyncScopeID(),
                             cast<LoadInst>(R)->getSyncScopeID()))
      return Res;
    return cmpRangeMetadata(LI->getMetadata(LLVMContext::MD_range),
                            cast<LoadInst>(R)->getMetadata(LLVMContext::MD_range));
@@ -529,7 +529,8 @@ int FunctionComparator::cmpOperations(const Instruction *L,
    if (int Res =
            cmpOrderings(SI->getOrdering(), cast<StoreInst>(R)->getOrdering()))
      return Res;
    return cmpNumbers(SI->getSynchScope(), cast<StoreInst>(R)->getSynchScope());
    return cmpNumbers(SI->getSyncScopeID(),
                      cast<StoreInst>(R)->getSyncScopeID());
  }
  if (const CmpInst *CI = dyn_cast<CmpInst>(L))
    return cmpNumbers(CI->getPredicate(), cast<CmpInst>(R)->getPredicate());
@@ -584,7 +585,8 @@ int FunctionComparator::cmpOperations(const Instruction *L,
    if (int Res =
            cmpOrderings(FI->getOrdering(), cast<FenceInst>(R)->getOrdering()))
      return Res;
    return cmpNumbers(FI->getSynchScope(), cast<FenceInst>(R)->getSynchScope());
    return cmpNumbers(FI->getSyncScopeID(),
                      cast<FenceInst>(R)->getSyncScopeID());
  }
  if (const AtomicCmpXchgInst *CXI = dyn_cast<AtomicCmpXchgInst>(L)) {
    if (int Res = cmpNumbers(CXI->isVolatile(),
@@ -601,8 +603,8 @@ int FunctionComparator::cmpOperations(const Instruction *L,
            cmpOrderings(CXI->getFailureOrdering(),
                         cast<AtomicCmpXchgInst>(R)->getFailureOrdering()))
      return Res;
    return cmpNumbers(CXI->getSynchScope(),
                      cast<AtomicCmpXchgInst>(R)->getSynchScope());
    return cmpNumbers(CXI->getSyncScopeID(),
                      cast<AtomicCmpXchgInst>(R)->getSyncScopeID());
  }
  if (const AtomicRMWInst *RMWI = dyn_cast<AtomicRMWInst>(L)) {
    if (int Res = cmpNumbers(RMWI->getOperation(),
@@ -614,8 +616,8 @@ int FunctionComparator::cmpOperations(const Instruction *L,
    if (int Res = cmpOrderings(RMWI->getOrdering(),
                               cast<AtomicRMWInst>(R)->getOrdering()))
      return Res;
    return cmpNumbers(RMWI->getSynchScope(),
                      cast<AtomicRMWInst>(R)->getSynchScope());
    return cmpNumbers(RMWI->getSyncScopeID(),
                      cast<AtomicRMWInst>(R)->getSyncScopeID());
  }
  if (const PHINode *PNL = dyn_cast<PHINode>(L)) {
    const PHINode *PNR = cast<PHINode>(R);
@@ -5,14 +5,20 @@
define void @f(i32* %x) {
; CHECK: load atomic i32, i32* %x unordered, align 4
load atomic i32, i32* %x unordered, align 4
; CHECK: load atomic volatile i32, i32* %x singlethread acquire, align 4
load atomic volatile i32, i32* %x singlethread acquire, align 4
; CHECK: load atomic volatile i32, i32* %x syncscope("singlethread") acquire, align 4
load atomic volatile i32, i32* %x syncscope("singlethread") acquire, align 4
; CHECK: load atomic volatile i32, i32* %x syncscope("agent") acquire, align 4
load atomic volatile i32, i32* %x syncscope("agent") acquire, align 4
; CHECK: store atomic i32 3, i32* %x release, align 4
store atomic i32 3, i32* %x release, align 4
; CHECK: store atomic volatile i32 3, i32* %x singlethread monotonic, align 4
store atomic volatile i32 3, i32* %x singlethread monotonic, align 4
; CHECK: cmpxchg i32* %x, i32 1, i32 0 singlethread monotonic monotonic
cmpxchg i32* %x, i32 1, i32 0 singlethread monotonic monotonic
; CHECK: store atomic volatile i32 3, i32* %x syncscope("singlethread") monotonic, align 4
store atomic volatile i32 3, i32* %x syncscope("singlethread") monotonic, align 4
; CHECK: store atomic volatile i32 3, i32* %x syncscope("workgroup") monotonic, align 4
store atomic volatile i32 3, i32* %x syncscope("workgroup") monotonic, align 4
; CHECK: cmpxchg i32* %x, i32 1, i32 0 syncscope("singlethread") monotonic monotonic
cmpxchg i32* %x, i32 1, i32 0 syncscope("singlethread") monotonic monotonic
; CHECK: cmpxchg i32* %x, i32 1, i32 0 syncscope("workitem") monotonic monotonic
cmpxchg i32* %x, i32 1, i32 0 syncscope("workitem") monotonic monotonic
; CHECK: cmpxchg volatile i32* %x, i32 0, i32 1 acq_rel acquire
cmpxchg volatile i32* %x, i32 0, i32 1 acq_rel acquire
; CHECK: cmpxchg i32* %x, i32 42, i32 0 acq_rel monotonic
@@ -23,9 +29,13 @@ define void @f(i32* %x) {
atomicrmw add i32* %x, i32 10 seq_cst
; CHECK: atomicrmw volatile xchg i32* %x, i32 10 monotonic
atomicrmw volatile xchg i32* %x, i32 10 monotonic
; CHECK: fence singlethread release
fence singlethread release
; CHECK: atomicrmw volatile xchg i32* %x, i32 10 syncscope("agent") monotonic
atomicrmw volatile xchg i32* %x, i32 10 syncscope("agent") monotonic
; CHECK: fence syncscope("singlethread") release
fence syncscope("singlethread") release
; CHECK: fence seq_cst
fence seq_cst
; CHECK: fence syncscope("device") seq_cst
fence syncscope("device") seq_cst
ret void
}
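
The syncscope("<name>") strings exercised by this assembler test are produced from C++ through SyncScope::ID values; per the patch summary, the name-to-ID mapping lives on the LLVMContext. A hedged sketch of emitting scoped atomics with IRBuilder, where getOrInsertSyncScopeID is the context lookup this patch describes (treat the exact spelling as an assumption from the patch summary)::

    void emitScopedAtomics(llvm::IRBuilder<> &B, llvm::Value *Ptr,
                           llvm::Value *Val) {
      using namespace llvm;
      // Look up (or register) a target-specific scope name; the resulting ID
      // round-trips through textual IR, MIR and bitcode.
      SyncScope::ID Agent = B.getContext().getOrInsertSyncScopeID("agent");
      B.CreateAtomicRMW(AtomicRMWInst::Xchg, Ptr, Val,
                        AtomicOrdering::Monotonic, Agent);
      B.CreateFence(AtomicOrdering::Release, SyncScope::SingleThread);
      B.CreateFence(AtomicOrdering::SequentiallyConsistent); // system scope
    }
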
@@ -0,0 +1,17 @@
; RUN: llvm-dis -o - %s.bc | FileCheck %s

; Backwards compatibility test: make sure we can process bitcode without
; synchronization scope names encoded in it.

; CHECK: load atomic i32, i32* %x unordered, align 4
; CHECK: load atomic volatile i32, i32* %x syncscope("singlethread") acquire, align 4
; CHECK: store atomic i32 3, i32* %x release, align 4
; CHECK: store atomic volatile i32 3, i32* %x syncscope("singlethread") monotonic, align 4
; CHECK: cmpxchg i32* %x, i32 1, i32 0 syncscope("singlethread") monotonic monotonic
; CHECK: cmpxchg volatile i32* %x, i32 0, i32 1 acq_rel acquire
; CHECK: cmpxchg i32* %x, i32 42, i32 0 acq_rel monotonic
; CHECK: cmpxchg weak i32* %x, i32 13, i32 0 seq_cst monotonic
; CHECK: atomicrmw add i32* %x, i32 10 seq_cst
; CHECK: atomicrmw volatile xchg i32* %x, i32 10 monotonic
; CHECK: fence syncscope("singlethread") release
; CHECK: fence seq_cst
Binary file not shown.
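
Old bitcode carries no SYNC_SCOPE_NAMES_BLOCK, which is why this test checks that the legacy two-scope encoding still round-trips: the reader can map the fixed legacy values onto the pre-defined IDs. A sketch of that mapping, assuming the historical encoding of 0 for single-thread and 1 for cross-thread (helper name illustrative, not the actual reader code)::

    static llvm::SyncScope::ID decodeLegacyScope(unsigned Encoded) {
      // Legacy bitcode: 0 = single thread, 1 = cross thread/system.
      return Encoded == 0 ? llvm::SyncScope::SingleThread
                          : llvm::SyncScope::System;
    }
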
@@ -11,8 +11,8 @@ define void @test_cmpxchg(i32* %addr, i32 %desired, i32 %new) {
cmpxchg weak i32* %addr, i32 %desired, i32 %new acq_rel acquire
; CHECK: cmpxchg weak i32* %addr, i32 %desired, i32 %new acq_rel acquire

cmpxchg weak volatile i32* %addr, i32 %desired, i32 %new singlethread release monotonic
; CHECK: cmpxchg weak volatile i32* %addr, i32 %desired, i32 %new singlethread release monotonic
cmpxchg weak volatile i32* %addr, i32 %desired, i32 %new syncscope("singlethread") release monotonic
; CHECK: cmpxchg weak volatile i32* %addr, i32 %desired, i32 %new syncscope("singlethread") release monotonic

ret void
}
@@ -551,8 +551,8 @@ define void @atomics(i32* %word) {
; CHECK: %cmpxchg.5 = cmpxchg weak i32* %word, i32 0, i32 9 seq_cst monotonic
%cmpxchg.6 = cmpxchg volatile i32* %word, i32 0, i32 10 seq_cst monotonic
; CHECK: %cmpxchg.6 = cmpxchg volatile i32* %word, i32 0, i32 10 seq_cst monotonic
%cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 singlethread seq_cst monotonic
; CHECK: %cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 singlethread seq_cst monotonic
%cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 syncscope("singlethread") seq_cst monotonic
; CHECK: %cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 syncscope("singlethread") seq_cst monotonic
%atomicrmw.xchg = atomicrmw xchg i32* %word, i32 12 monotonic
; CHECK: %atomicrmw.xchg = atomicrmw xchg i32* %word, i32 12 monotonic
%atomicrmw.add = atomicrmw add i32* %word, i32 13 monotonic
@@ -571,33 +571,33 @@ define void @atomics(i32* %word) {
; CHECK: %atomicrmw.max = atomicrmw max i32* %word, i32 19 monotonic
%atomicrmw.min = atomicrmw volatile min i32* %word, i32 20 monotonic
; CHECK: %atomicrmw.min = atomicrmw volatile min i32* %word, i32 20 monotonic
%atomicrmw.umax = atomicrmw umax i32* %word, i32 21 singlethread monotonic
; CHECK: %atomicrmw.umax = atomicrmw umax i32* %word, i32 21 singlethread monotonic
%atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 singlethread monotonic
; CHECK: %atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 singlethread monotonic
%atomicrmw.umax = atomicrmw umax i32* %word, i32 21 syncscope("singlethread") monotonic
; CHECK: %atomicrmw.umax = atomicrmw umax i32* %word, i32 21 syncscope("singlethread") monotonic
%atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 syncscope("singlethread") monotonic
; CHECK: %atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 syncscope("singlethread") monotonic
fence acquire
; CHECK: fence acquire
fence release
; CHECK: fence release
fence acq_rel
; CHECK: fence acq_rel
fence singlethread seq_cst
; CHECK: fence singlethread seq_cst
fence syncscope("singlethread") seq_cst
; CHECK: fence syncscope("singlethread") seq_cst

; XXX: The parser spits out the load type here.
%ld.1 = load atomic i32* %word monotonic, align 4
; CHECK: %ld.1 = load atomic i32, i32* %word monotonic, align 4
%ld.2 = load atomic volatile i32* %word acquire, align 8
; CHECK: %ld.2 = load atomic volatile i32, i32* %word acquire, align 8
%ld.3 = load atomic volatile i32* %word singlethread seq_cst, align 16
; CHECK: %ld.3 = load atomic volatile i32, i32* %word singlethread seq_cst, align 16
%ld.3 = load atomic volatile i32* %word syncscope("singlethread") seq_cst, align 16
; CHECK: %ld.3 = load atomic volatile i32, i32* %word syncscope("singlethread") seq_cst, align 16

store atomic i32 23, i32* %word monotonic, align 4
; CHECK: store atomic i32 23, i32* %word monotonic, align 4
store atomic volatile i32 24, i32* %word monotonic, align 4
; CHECK: store atomic volatile i32 24, i32* %word monotonic, align 4
store atomic volatile i32 25, i32* %word singlethread monotonic, align 4
; CHECK: store atomic volatile i32 25, i32* %word singlethread monotonic, align 4
store atomic volatile i32 25, i32* %word syncscope("singlethread") monotonic, align 4
; CHECK: store atomic volatile i32 25, i32* %word syncscope("singlethread") monotonic, align 4
ret void
}
@@ -596,8 +596,8 @@ define void @atomics(i32* %word) {
; CHECK: %cmpxchg.5 = cmpxchg weak i32* %word, i32 0, i32 9 seq_cst monotonic
%cmpxchg.6 = cmpxchg volatile i32* %word, i32 0, i32 10 seq_cst monotonic
; CHECK: %cmpxchg.6 = cmpxchg volatile i32* %word, i32 0, i32 10 seq_cst monotonic
%cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 singlethread seq_cst monotonic
; CHECK: %cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 singlethread seq_cst monotonic
%cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 syncscope("singlethread") seq_cst monotonic
; CHECK: %cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 syncscope("singlethread") seq_cst monotonic
%atomicrmw.xchg = atomicrmw xchg i32* %word, i32 12 monotonic
; CHECK: %atomicrmw.xchg = atomicrmw xchg i32* %word, i32 12 monotonic
%atomicrmw.add = atomicrmw add i32* %word, i32 13 monotonic
@@ -616,32 +616,32 @@ define void @atomics(i32* %word) {
; CHECK: %atomicrmw.max = atomicrmw max i32* %word, i32 19 monotonic
%atomicrmw.min = atomicrmw volatile min i32* %word, i32 20 monotonic
; CHECK: %atomicrmw.min = atomicrmw volatile min i32* %word, i32 20 monotonic
%atomicrmw.umax = atomicrmw umax i32* %word, i32 21 singlethread monotonic
; CHECK: %atomicrmw.umax = atomicrmw umax i32* %word, i32 21 singlethread monotonic
%atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 singlethread monotonic
; CHECK: %atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 singlethread monotonic
%atomicrmw.umax = atomicrmw umax i32* %word, i32 21 syncscope("singlethread") monotonic
; CHECK: %atomicrmw.umax = atomicrmw umax i32* %word, i32 21 syncscope("singlethread") monotonic
%atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 syncscope("singlethread") monotonic
; CHECK: %atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 syncscope("singlethread") monotonic
fence acquire
; CHECK: fence acquire
fence release
; CHECK: fence release
fence acq_rel
; CHECK: fence acq_rel
fence singlethread seq_cst
; CHECK: fence singlethread seq_cst
fence syncscope("singlethread") seq_cst
; CHECK: fence syncscope("singlethread") seq_cst

%ld.1 = load atomic i32, i32* %word monotonic, align 4
; CHECK: %ld.1 = load atomic i32, i32* %word monotonic, align 4
%ld.2 = load atomic volatile i32, i32* %word acquire, align 8
; CHECK: %ld.2 = load atomic volatile i32, i32* %word acquire, align 8
%ld.3 = load atomic volatile i32, i32* %word singlethread seq_cst, align 16
; CHECK: %ld.3 = load atomic volatile i32, i32* %word singlethread seq_cst, align 16
%ld.3 = load atomic volatile i32, i32* %word syncscope("singlethread") seq_cst, align 16
; CHECK: %ld.3 = load atomic volatile i32, i32* %word syncscope("singlethread") seq_cst, align 16

store atomic i32 23, i32* %word monotonic, align 4
; CHECK: store atomic i32 23, i32* %word monotonic, align 4
store atomic volatile i32 24, i32* %word monotonic, align 4
; CHECK: store atomic volatile i32 24, i32* %word monotonic, align 4
store atomic volatile i32 25, i32* %word singlethread monotonic, align 4
; CHECK: store atomic volatile i32 25, i32* %word singlethread monotonic, align 4
store atomic volatile i32 25, i32* %word syncscope("singlethread") monotonic, align 4
; CHECK: store atomic volatile i32 25, i32* %word syncscope("singlethread") monotonic, align 4
ret void
}
@@ -627,8 +627,8 @@ define void @atomics(i32* %word) {
; CHECK: %cmpxchg.5 = cmpxchg weak i32* %word, i32 0, i32 9 seq_cst monotonic
%cmpxchg.6 = cmpxchg volatile i32* %word, i32 0, i32 10 seq_cst monotonic
; CHECK: %cmpxchg.6 = cmpxchg volatile i32* %word, i32 0, i32 10 seq_cst monotonic
%cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 singlethread seq_cst monotonic
; CHECK: %cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 singlethread seq_cst monotonic
%cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 syncscope("singlethread") seq_cst monotonic
; CHECK: %cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 syncscope("singlethread") seq_cst monotonic
%atomicrmw.xchg = atomicrmw xchg i32* %word, i32 12 monotonic
; CHECK: %atomicrmw.xchg = atomicrmw xchg i32* %word, i32 12 monotonic
%atomicrmw.add = atomicrmw add i32* %word, i32 13 monotonic
@@ -647,32 +647,32 @@ define void @atomics(i32* %word) {
; CHECK: %atomicrmw.max = atomicrmw max i32* %word, i32 19 monotonic
%atomicrmw.min = atomicrmw volatile min i32* %word, i32 20 monotonic
; CHECK: %atomicrmw.min = atomicrmw volatile min i32* %word, i32 20 monotonic
%atomicrmw.umax = atomicrmw umax i32* %word, i32 21 singlethread monotonic
; CHECK: %atomicrmw.umax = atomicrmw umax i32* %word, i32 21 singlethread monotonic
%atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 singlethread monotonic
; CHECK: %atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 singlethread monotonic
%atomicrmw.umax = atomicrmw umax i32* %word, i32 21 syncscope("singlethread") monotonic
; CHECK: %atomicrmw.umax = atomicrmw umax i32* %word, i32 21 syncscope("singlethread") monotonic
%atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 syncscope("singlethread") monotonic
; CHECK: %atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 syncscope("singlethread") monotonic
fence acquire
; CHECK: fence acquire
fence release
; CHECK: fence release
fence acq_rel
; CHECK: fence acq_rel
fence singlethread seq_cst
; CHECK: fence singlethread seq_cst
fence syncscope("singlethread") seq_cst
; CHECK: fence syncscope("singlethread") seq_cst

%ld.1 = load atomic i32, i32* %word monotonic, align 4
; CHECK: %ld.1 = load atomic i32, i32* %word monotonic, align 4
%ld.2 = load atomic volatile i32, i32* %word acquire, align 8
; CHECK: %ld.2 = load atomic volatile i32, i32* %word acquire, align 8
%ld.3 = load atomic volatile i32, i32* %word singlethread seq_cst, align 16
; CHECK: %ld.3 = load atomic volatile i32, i32* %word singlethread seq_cst, align 16
%ld.3 = load atomic volatile i32, i32* %word syncscope("singlethread") seq_cst, align 16
; CHECK: %ld.3 = load atomic volatile i32, i32* %word syncscope("singlethread") seq_cst, align 16

store atomic i32 23, i32* %word monotonic, align 4
; CHECK: store atomic i32 23, i32* %word monotonic, align 4
store atomic volatile i32 24, i32* %word monotonic, align 4
; CHECK: store atomic volatile i32 24, i32* %word monotonic, align 4
store atomic volatile i32 25, i32* %word singlethread monotonic, align 4
; CHECK: store atomic volatile i32 25, i32* %word singlethread monotonic, align 4
store atomic volatile i32 25, i32* %word syncscope("singlethread") monotonic, align 4
; CHECK: store atomic volatile i32 25, i32* %word syncscope("singlethread") monotonic, align 4
ret void
}
@@ -698,8 +698,8 @@ define void @atomics(i32* %word) {
; CHECK: %cmpxchg.5 = cmpxchg weak i32* %word, i32 0, i32 9 seq_cst monotonic
%cmpxchg.6 = cmpxchg volatile i32* %word, i32 0, i32 10 seq_cst monotonic
; CHECK: %cmpxchg.6 = cmpxchg volatile i32* %word, i32 0, i32 10 seq_cst monotonic
%cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 singlethread seq_cst monotonic
; CHECK: %cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 singlethread seq_cst monotonic
%cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 syncscope("singlethread") seq_cst monotonic
; CHECK: %cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 syncscope("singlethread") seq_cst monotonic
%atomicrmw.xchg = atomicrmw xchg i32* %word, i32 12 monotonic
; CHECK: %atomicrmw.xchg = atomicrmw xchg i32* %word, i32 12 monotonic
%atomicrmw.add = atomicrmw add i32* %word, i32 13 monotonic
@@ -718,32 +718,32 @@ define void @atomics(i32* %word) {
; CHECK: %atomicrmw.max = atomicrmw max i32* %word, i32 19 monotonic
%atomicrmw.min = atomicrmw volatile min i32* %word, i32 20 monotonic
; CHECK: %atomicrmw.min = atomicrmw volatile min i32* %word, i32 20 monotonic
%atomicrmw.umax = atomicrmw umax i32* %word, i32 21 singlethread monotonic
; CHECK: %atomicrmw.umax = atomicrmw umax i32* %word, i32 21 singlethread monotonic
%atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 singlethread monotonic
; CHECK: %atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 singlethread monotonic
%atomicrmw.umax = atomicrmw umax i32* %word, i32 21 syncscope("singlethread") monotonic
; CHECK: %atomicrmw.umax = atomicrmw umax i32* %word, i32 21 syncscope("singlethread") monotonic
%atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 syncscope("singlethread") monotonic
; CHECK: %atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 syncscope("singlethread") monotonic
fence acquire
; CHECK: fence acquire
fence release
; CHECK: fence release
fence acq_rel
; CHECK: fence acq_rel
fence singlethread seq_cst
; CHECK: fence singlethread seq_cst
fence syncscope("singlethread") seq_cst
; CHECK: fence syncscope("singlethread") seq_cst

%ld.1 = load atomic i32, i32* %word monotonic, align 4
; CHECK: %ld.1 = load atomic i32, i32* %word monotonic, align 4
%ld.2 = load atomic volatile i32, i32* %word acquire, align 8
; CHECK: %ld.2 = load atomic volatile i32, i32* %word acquire, align 8
%ld.3 = load atomic volatile i32, i32* %word singlethread seq_cst, align 16
; CHECK: %ld.3 = load atomic volatile i32, i32* %word singlethread seq_cst, align 16
%ld.3 = load atomic volatile i32, i32* %word syncscope("singlethread") seq_cst, align 16
; CHECK: %ld.3 = load atomic volatile i32, i32* %word syncscope("singlethread") seq_cst, align 16

store atomic i32 23, i32* %word monotonic, align 4
; CHECK: store atomic i32 23, i32* %word monotonic, align 4
store atomic volatile i32 24, i32* %word monotonic, align 4
; CHECK: store atomic volatile i32 24, i32* %word monotonic, align 4
store atomic volatile i32 25, i32* %word singlethread monotonic, align 4
; CHECK: store atomic volatile i32 25, i32* %word singlethread monotonic, align 4
store atomic volatile i32 25, i32* %word syncscope("singlethread") monotonic, align 4
; CHECK: store atomic volatile i32 25, i32* %word syncscope("singlethread") monotonic, align 4
ret void
}
@@ -698,8 +698,8 @@ define void @atomics(i32* %word) {
; CHECK: %cmpxchg.5 = cmpxchg weak i32* %word, i32 0, i32 9 seq_cst monotonic
%cmpxchg.6 = cmpxchg volatile i32* %word, i32 0, i32 10 seq_cst monotonic
; CHECK: %cmpxchg.6 = cmpxchg volatile i32* %word, i32 0, i32 10 seq_cst monotonic
%cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 singlethread seq_cst monotonic
; CHECK: %cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 singlethread seq_cst monotonic
%cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 syncscope("singlethread") seq_cst monotonic
; CHECK: %cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 syncscope("singlethread") seq_cst monotonic
%atomicrmw.xchg = atomicrmw xchg i32* %word, i32 12 monotonic
; CHECK: %atomicrmw.xchg = atomicrmw xchg i32* %word, i32 12 monotonic
%atomicrmw.add = atomicrmw add i32* %word, i32 13 monotonic
@@ -718,32 +718,32 @@ define void @atomics(i32* %word) {
; CHECK: %atomicrmw.max = atomicrmw max i32* %word, i32 19 monotonic
%atomicrmw.min = atomicrmw volatile min i32* %word, i32 20 monotonic
; CHECK: %atomicrmw.min = atomicrmw volatile min i32* %word, i32 20 monotonic
%atomicrmw.umax = atomicrmw umax i32* %word, i32 21 singlethread monotonic
; CHECK: %atomicrmw.umax = atomicrmw umax i32* %word, i32 21 singlethread monotonic
%atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 singlethread monotonic
; CHECK: %atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 singlethread monotonic
%atomicrmw.umax = atomicrmw umax i32* %word, i32 21 syncscope("singlethread") monotonic
; CHECK: %atomicrmw.umax = atomicrmw umax i32* %word, i32 21 syncscope("singlethread") monotonic
%atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 syncscope("singlethread") monotonic
; CHECK: %atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 syncscope("singlethread") monotonic
fence acquire
; CHECK: fence acquire
fence release
; CHECK: fence release
fence acq_rel
; CHECK: fence acq_rel
fence singlethread seq_cst
; CHECK: fence singlethread seq_cst
fence syncscope("singlethread") seq_cst
; CHECK: fence syncscope("singlethread") seq_cst

%ld.1 = load atomic i32, i32* %word monotonic, align 4
; CHECK: %ld.1 = load atomic i32, i32* %word monotonic, align 4
%ld.2 = load atomic volatile i32, i32* %word acquire, align 8
; CHECK: %ld.2 = load atomic volatile i32, i32* %word acquire, align 8
%ld.3 = load atomic volatile i32, i32* %word singlethread seq_cst, align 16
; CHECK: %ld.3 = load atomic volatile i32, i32* %word singlethread seq_cst, align 16
%ld.3 = load atomic volatile i32, i32* %word syncscope("singlethread") seq_cst, align 16
; CHECK: %ld.3 = load atomic volatile i32, i32* %word syncscope("singlethread") seq_cst, align 16

store atomic i32 23, i32* %word monotonic, align 4
; CHECK: store atomic i32 23, i32* %word monotonic, align 4
store atomic volatile i32 24, i32* %word monotonic, align 4
; CHECK: store atomic volatile i32 24, i32* %word monotonic, align 4
store atomic volatile i32 25, i32* %word singlethread monotonic, align 4
; CHECK: store atomic volatile i32 25, i32* %word singlethread monotonic, align 4
store atomic volatile i32 25, i32* %word syncscope("singlethread") monotonic, align 4
; CHECK: store atomic volatile i32 25, i32* %word syncscope("singlethread") monotonic, align 4
ret void
}
@@ -705,8 +705,8 @@ define void @atomics(i32* %word) {
; CHECK: %cmpxchg.5 = cmpxchg weak i32* %word, i32 0, i32 9 seq_cst monotonic
%cmpxchg.6 = cmpxchg volatile i32* %word, i32 0, i32 10 seq_cst monotonic
; CHECK: %cmpxchg.6 = cmpxchg volatile i32* %word, i32 0, i32 10 seq_cst monotonic
%cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 singlethread seq_cst monotonic
; CHECK: %cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 singlethread seq_cst monotonic
%cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 syncscope("singlethread") seq_cst monotonic
; CHECK: %cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 syncscope("singlethread") seq_cst monotonic
%atomicrmw.xchg = atomicrmw xchg i32* %word, i32 12 monotonic
; CHECK: %atomicrmw.xchg = atomicrmw xchg i32* %word, i32 12 monotonic
%atomicrmw.add = atomicrmw add i32* %word, i32 13 monotonic
@@ -725,32 +725,32 @@ define void @atomics(i32* %word) {
; CHECK: %atomicrmw.max = atomicrmw max i32* %word, i32 19 monotonic
%atomicrmw.min = atomicrmw volatile min i32* %word, i32 20 monotonic
; CHECK: %atomicrmw.min = atomicrmw volatile min i32* %word, i32 20 monotonic
%atomicrmw.umax = atomicrmw umax i32* %word, i32 21 singlethread monotonic
; CHECK: %atomicrmw.umax = atomicrmw umax i32* %word, i32 21 singlethread monotonic
%atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 singlethread monotonic
; CHECK: %atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 singlethread monotonic
%atomicrmw.umax = atomicrmw umax i32* %word, i32 21 syncscope("singlethread") monotonic
; CHECK: %atomicrmw.umax = atomicrmw umax i32* %word, i32 21 syncscope("singlethread") monotonic
%atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 syncscope("singlethread") monotonic
; CHECK: %atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 syncscope("singlethread") monotonic
fence acquire
; CHECK: fence acquire
fence release
; CHECK: fence release
fence acq_rel
; CHECK: fence acq_rel
fence singlethread seq_cst
; CHECK: fence singlethread seq_cst
fence syncscope("singlethread") seq_cst
; CHECK: fence syncscope("singlethread") seq_cst

%ld.1 = load atomic i32, i32* %word monotonic, align 4
; CHECK: %ld.1 = load atomic i32, i32* %word monotonic, align 4
%ld.2 = load atomic volatile i32, i32* %word acquire, align 8
; CHECK: %ld.2 = load atomic volatile i32, i32* %word acquire, align 8
%ld.3 = load atomic volatile i32, i32* %word singlethread seq_cst, align 16
; CHECK: %ld.3 = load atomic volatile i32, i32* %word singlethread seq_cst, align 16
%ld.3 = load atomic volatile i32, i32* %word syncscope("singlethread") seq_cst, align 16
; CHECK: %ld.3 = load atomic volatile i32, i32* %word syncscope("singlethread") seq_cst, align 16

store atomic i32 23, i32* %word monotonic, align 4
; CHECK: store atomic i32 23, i32* %word monotonic, align 4
store atomic volatile i32 24, i32* %word monotonic, align 4
; CHECK: store atomic volatile i32 24, i32* %word monotonic, align 4
store atomic volatile i32 25, i32* %word singlethread monotonic, align 4
; CHECK: store atomic volatile i32 25, i32* %word singlethread monotonic, align 4
store atomic volatile i32 25, i32* %word syncscope("singlethread") monotonic, align 4
; CHECK: store atomic volatile i32 25, i32* %word syncscope("singlethread") monotonic, align 4
ret void
}
@@ -107,29 +107,29 @@ entry:
; CHECK-NEXT: %res8 = load atomic volatile i8, i8* %ptr1 seq_cst, align 1
%res8 = load atomic volatile i8, i8* %ptr1 seq_cst, align 1

; CHECK-NEXT: %res9 = load atomic i8, i8* %ptr1 singlethread unordered, align 1
%res9 = load atomic i8, i8* %ptr1 singlethread unordered, align 1
; CHECK-NEXT: %res9 = load atomic i8, i8* %ptr1 syncscope("singlethread") unordered, align 1
%res9 = load atomic i8, i8* %ptr1 syncscope("singlethread") unordered, align 1

; CHECK-NEXT: %res10 = load atomic i8, i8* %ptr1 singlethread monotonic, align 1
%res10 = load atomic i8, i8* %ptr1 singlethread monotonic, align 1
; CHECK-NEXT: %res10 = load atomic i8, i8* %ptr1 syncscope("singlethread") monotonic, align 1
%res10 = load atomic i8, i8* %ptr1 syncscope("singlethread") monotonic, align 1

; CHECK-NEXT: %res11 = load atomic i8, i8* %ptr1 singlethread acquire, align 1
%res11 = load atomic i8, i8* %ptr1 singlethread acquire, align 1
; CHECK-NEXT: %res11 = load atomic i8, i8* %ptr1 syncscope("singlethread") acquire, align 1
%res11 = load atomic i8, i8* %ptr1 syncscope("singlethread") acquire, align 1

; CHECK-NEXT: %res12 = load atomic i8, i8* %ptr1 singlethread seq_cst, align 1
%res12 = load atomic i8, i8* %ptr1 singlethread seq_cst, align 1
; CHECK-NEXT: %res12 = load atomic i8, i8* %ptr1 syncscope("singlethread") seq_cst, align 1
%res12 = load atomic i8, i8* %ptr1 syncscope("singlethread") seq_cst, align 1

; CHECK-NEXT: %res13 = load atomic volatile i8, i8* %ptr1 singlethread unordered, align 1
%res13 = load atomic volatile i8, i8* %ptr1 singlethread unordered, align 1
; CHECK-NEXT: %res13 = load atomic volatile i8, i8* %ptr1 syncscope("singlethread") unordered, align 1
%res13 = load atomic volatile i8, i8* %ptr1 syncscope("singlethread") unordered, align 1

; CHECK-NEXT: %res14 = load atomic volatile i8, i8* %ptr1 singlethread monotonic, align 1
%res14 = load atomic volatile i8, i8* %ptr1 singlethread monotonic, align 1
; CHECK-NEXT: %res14 = load atomic volatile i8, i8* %ptr1 syncscope("singlethread") monotonic, align 1
%res14 = load atomic volatile i8, i8* %ptr1 syncscope("singlethread") monotonic, align 1

; CHECK-NEXT: %res15 = load atomic volatile i8, i8* %ptr1 singlethread acquire, align 1
%res15 = load atomic volatile i8, i8* %ptr1 singlethread acquire, align 1
; CHECK-NEXT: %res15 = load atomic volatile i8, i8* %ptr1 syncscope("singlethread") acquire, align 1
%res15 = load atomic volatile i8, i8* %ptr1 syncscope("singlethread") acquire, align 1

; CHECK-NEXT: %res16 = load atomic volatile i8, i8* %ptr1 singlethread seq_cst, align 1
%res16 = load atomic volatile i8, i8* %ptr1 singlethread seq_cst, align 1
; CHECK-NEXT: %res16 = load atomic volatile i8, i8* %ptr1 syncscope("singlethread") seq_cst, align 1
%res16 = load atomic volatile i8, i8* %ptr1 syncscope("singlethread") seq_cst, align 1

ret void
}
@@ -193,29 +193,29 @@ entry:
; CHECK-NEXT: store atomic volatile i8 2, i8* %ptr1 seq_cst, align 1
store atomic volatile i8 2, i8* %ptr1 seq_cst, align 1

; CHECK-NEXT: store atomic i8 2, i8* %ptr1 singlethread unordered, align 1
store atomic i8 2, i8* %ptr1 singlethread unordered, align 1
; CHECK-NEXT: store atomic i8 2, i8* %ptr1 syncscope("singlethread") unordered, align 1
store atomic i8 2, i8* %ptr1 syncscope("singlethread") unordered, align 1

; CHECK-NEXT: store atomic i8 2, i8* %ptr1 singlethread monotonic, align 1
store atomic i8 2, i8* %ptr1 singlethread monotonic, align 1
; CHECK-NEXT: store atomic i8 2, i8* %ptr1 syncscope("singlethread") monotonic, align 1
store atomic i8 2, i8* %ptr1 syncscope("singlethread") monotonic, align 1

; CHECK-NEXT: store atomic i8 2, i8* %ptr1 singlethread release, align 1
store atomic i8 2, i8* %ptr1 singlethread release, align 1
; CHECK-NEXT: store atomic i8 2, i8* %ptr1 syncscope("singlethread") release, align 1
store atomic i8 2, i8* %ptr1 syncscope("singlethread") release, align 1

; CHECK-NEXT: store atomic i8 2, i8* %ptr1 singlethread seq_cst, align 1
store atomic i8 2, i8* %ptr1 singlethread seq_cst, align 1
; CHECK-NEXT: store atomic i8 2, i8* %ptr1 syncscope("singlethread") seq_cst, align 1
store atomic i8 2, i8* %ptr1 syncscope("singlethread") seq_cst, align 1

; CHECK-NEXT: store atomic volatile i8 2, i8* %ptr1 singlethread unordered, align 1
store atomic volatile i8 2, i8* %ptr1 singlethread unordered, align 1
; CHECK-NEXT: store atomic volatile i8 2, i8* %ptr1 syncscope("singlethread") unordered, align 1
store atomic volatile i8 2, i8* %ptr1 syncscope("singlethread") unordered, align 1

; CHECK-NEXT: store atomic volatile i8 2, i8* %ptr1 singlethread monotonic, align 1
store atomic volatile i8 2, i8* %ptr1 singlethread monotonic, align 1
; CHECK-NEXT: store atomic volatile i8 2, i8* %ptr1 syncscope("singlethread") monotonic, align 1
store atomic volatile i8 2, i8* %ptr1 syncscope("singlethread") monotonic, align 1

; CHECK-NEXT: store atomic volatile i8 2, i8* %ptr1 singlethread release, align 1
store atomic volatile i8 2, i8* %ptr1 singlethread release, align 1
; CHECK-NEXT: store atomic volatile i8 2, i8* %ptr1 syncscope("singlethread") release, align 1
store atomic volatile i8 2, i8* %ptr1 syncscope("singlethread") release, align 1

; CHECK-NEXT: store atomic volatile i8 2, i8* %ptr1 singlethread seq_cst, align 1
store atomic volatile i8 2, i8* %ptr1 singlethread seq_cst, align 1
; CHECK-NEXT: store atomic volatile i8 2, i8* %ptr1 syncscope("singlethread") seq_cst, align 1
store atomic volatile i8 2, i8* %ptr1 syncscope("singlethread") seq_cst, align 1

ret void
}
@@ -232,13 +232,13 @@ entry:
; CHECK-NEXT: %res2 = extractvalue { i32, i1 } [[TMP]], 0
%res2 = cmpxchg volatile i32* %ptr, i32 %cmp, i32 %new monotonic monotonic

; CHECK-NEXT: [[TMP:%[a-z0-9]+]] = cmpxchg i32* %ptr, i32 %cmp, i32 %new singlethread monotonic monotonic
; CHECK-NEXT: [[TMP:%[a-z0-9]+]] = cmpxchg i32* %ptr, i32 %cmp, i32 %new syncscope("singlethread") monotonic monotonic
; CHECK-NEXT: %res3 = extractvalue { i32, i1 } [[TMP]], 0
%res3 = cmpxchg i32* %ptr, i32 %cmp, i32 %new singlethread monotonic monotonic
%res3 = cmpxchg i32* %ptr, i32 %cmp, i32 %new syncscope("singlethread") monotonic monotonic

; CHECK-NEXT: [[TMP:%[a-z0-9]+]] = cmpxchg volatile i32* %ptr, i32 %cmp, i32 %new singlethread monotonic monotonic
; CHECK-NEXT: [[TMP:%[a-z0-9]+]] = cmpxchg volatile i32* %ptr, i32 %cmp, i32 %new syncscope("singlethread") monotonic monotonic
; CHECK-NEXT: %res4 = extractvalue { i32, i1 } [[TMP]], 0
%res4 = cmpxchg volatile i32* %ptr, i32 %cmp, i32 %new singlethread monotonic monotonic
%res4 = cmpxchg volatile i32* %ptr, i32 %cmp, i32 %new syncscope("singlethread") monotonic monotonic

; CHECK-NEXT: [[TMP:%[a-z0-9]+]] = cmpxchg i32* %ptr, i32 %cmp, i32 %new acquire acquire
@@ -249,13 +249,13 @@ entry:
; CHECK-NEXT: %res6 = extractvalue { i32, i1 } [[TMP]], 0
%res6 = cmpxchg volatile i32* %ptr, i32 %cmp, i32 %new acquire acquire

; CHECK-NEXT: [[TMP:%[a-z0-9]+]] = cmpxchg i32* %ptr, i32 %cmp, i32 %new singlethread acquire acquire
; CHECK-NEXT: [[TMP:%[a-z0-9]+]] = cmpxchg i32* %ptr, i32 %cmp, i32 %new syncscope("singlethread") acquire acquire
; CHECK-NEXT: %res7 = extractvalue { i32, i1 } [[TMP]], 0
%res7 = cmpxchg i32* %ptr, i32 %cmp, i32 %new singlethread acquire acquire
%res7 = cmpxchg i32* %ptr, i32 %cmp, i32 %new syncscope("singlethread") acquire acquire

; CHECK-NEXT: [[TMP:%[a-z0-9]+]] = cmpxchg volatile i32* %ptr, i32 %cmp, i32 %new singlethread acquire acquire
; CHECK-NEXT: [[TMP:%[a-z0-9]+]] = cmpxchg volatile i32* %ptr, i32 %cmp, i32 %new syncscope("singlethread") acquire acquire
; CHECK-NEXT: %res8 = extractvalue { i32, i1 } [[TMP]], 0
%res8 = cmpxchg volatile i32* %ptr, i32 %cmp, i32 %new singlethread acquire acquire
%res8 = cmpxchg volatile i32* %ptr, i32 %cmp, i32 %new syncscope("singlethread") acquire acquire

; CHECK-NEXT: [[TMP:%[a-z0-9]+]] = cmpxchg i32* %ptr, i32 %cmp, i32 %new release monotonic
@@ -266,13 +266,13 @@ entry:
; CHECK-NEXT: %res10 = extractvalue { i32, i1 } [[TMP]], 0
%res10 = cmpxchg volatile i32* %ptr, i32 %cmp, i32 %new release monotonic

; CHECK-NEXT: [[TMP:%[a-z0-9]+]] = cmpxchg i32* %ptr, i32 %cmp, i32 %new singlethread release monotonic
; CHECK-NEXT: [[TMP:%[a-z0-9]+]] = cmpxchg i32* %ptr, i32 %cmp, i32 %new syncscope("singlethread") release monotonic
; CHECK-NEXT: %res11 = extractvalue { i32, i1 } [[TMP]], 0
%res11 = cmpxchg i32* %ptr, i32 %cmp, i32 %new singlethread release monotonic
%res11 = cmpxchg i32* %ptr, i32 %cmp, i32 %new syncscope("singlethread") release monotonic

; CHECK-NEXT: [[TMP:%[a-z0-9]+]] = cmpxchg volatile i32* %ptr, i32 %cmp, i32 %new singlethread release monotonic
; CHECK-NEXT: [[TMP:%[a-z0-9]+]] = cmpxchg volatile i32* %ptr, i32 %cmp, i32 %new syncscope("singlethread") release monotonic
; CHECK-NEXT: %res12 = extractvalue { i32, i1 } [[TMP]], 0
%res12 = cmpxchg volatile i32* %ptr, i32 %cmp, i32 %new singlethread release monotonic
%res12 = cmpxchg volatile i32* %ptr, i32 %cmp, i32 %new syncscope("singlethread") release monotonic

; CHECK-NEXT: [[TMP:%[a-z0-9]+]] = cmpxchg i32* %ptr, i32 %cmp, i32 %new acq_rel acquire
@@ -283,13 +283,13 @@ entry:
; CHECK-NEXT: %res14 = extractvalue { i32, i1 } [[TMP]], 0
%res14 = cmpxchg volatile i32* %ptr, i32 %cmp, i32 %new acq_rel acquire

; CHECK-NEXT: [[TMP:%[a-z0-9]+]] = cmpxchg i32* %ptr, i32 %cmp, i32 %new singlethread acq_rel acquire
; CHECK-NEXT: [[TMP:%[a-z0-9]+]] = cmpxchg i32* %ptr, i32 %cmp, i32 %new syncscope("singlethread") acq_rel acquire
; CHECK-NEXT: %res15 = extractvalue { i32, i1 } [[TMP]], 0
%res15 = cmpxchg i32* %ptr, i32 %cmp, i32 %new singlethread acq_rel acquire
%res15 = cmpxchg i32* %ptr, i32 %cmp, i32 %new syncscope("singlethread") acq_rel acquire

; CHECK-NEXT: [[TMP:%[a-z0-9]+]] = cmpxchg volatile i32* %ptr, i32 %cmp, i32 %new singlethread acq_rel acquire
; CHECK-NEXT: [[TMP:%[a-z0-9]+]] = cmpxchg volatile i32* %ptr, i32 %cmp, i32 %new syncscope("singlethread") acq_rel acquire
; CHECK-NEXT: %res16 = extractvalue { i32, i1 } [[TMP]], 0
%res16 = cmpxchg volatile i32* %ptr, i32 %cmp, i32 %new singlethread acq_rel acquire
%res16 = cmpxchg volatile i32* %ptr, i32 %cmp, i32 %new syncscope("singlethread") acq_rel acquire

; CHECK-NEXT: [[TMP:%[a-z0-9]+]] = cmpxchg i32* %ptr, i32 %cmp, i32 %new seq_cst seq_cst
@@ -300,13 +300,13 @@ entry:
; CHECK-NEXT: %res18 = extractvalue { i32, i1 } [[TMP]], 0
%res18 = cmpxchg volatile i32* %ptr, i32 %cmp, i32 %new seq_cst seq_cst

; CHECK-NEXT: [[TMP:%[a-z0-9]+]] = cmpxchg i32* %ptr, i32 %cmp, i32 %new singlethread seq_cst seq_cst
; CHECK-NEXT: [[TMP:%[a-z0-9]+]] = cmpxchg i32* %ptr, i32 %cmp, i32 %new syncscope("singlethread") seq_cst seq_cst
; CHECK-NEXT: %res19 = extractvalue { i32, i1 } [[TMP]], 0
%res19 = cmpxchg i32* %ptr, i32 %cmp, i32 %new singlethread seq_cst seq_cst
%res19 = cmpxchg i32* %ptr, i32 %cmp, i32 %new syncscope("singlethread") seq_cst seq_cst

; CHECK-NEXT: [[TMP:%[a-z0-9]+]] = cmpxchg volatile i32* %ptr, i32 %cmp, i32 %new singlethread seq_cst seq_cst
; CHECK-NEXT: [[TMP:%[a-z0-9]+]] = cmpxchg volatile i32* %ptr, i32 %cmp, i32 %new syncscope("singlethread") seq_cst seq_cst
; CHECK-NEXT: %res20 = extractvalue { i32, i1 } [[TMP]], 0
%res20 = cmpxchg volatile i32* %ptr, i32 %cmp, i32 %new singlethread seq_cst seq_cst
%res20 = cmpxchg volatile i32* %ptr, i32 %cmp, i32 %new syncscope("singlethread") seq_cst seq_cst

ret void
}
@@ -1328,16 +1328,16 @@ define void @test_load_store_atomics(i8* %addr) {
; CHECK: G_STORE [[V0]](s8), [[ADDR]](p0) :: (store monotonic 1 into %ir.addr)
; CHECK: [[V1:%[0-9]+]](s8) = G_LOAD [[ADDR]](p0) :: (load acquire 1 from %ir.addr)
; CHECK: G_STORE [[V1]](s8), [[ADDR]](p0) :: (store release 1 into %ir.addr)
; CHECK: [[V2:%[0-9]+]](s8) = G_LOAD [[ADDR]](p0) :: (load singlethread seq_cst 1 from %ir.addr)
; CHECK: G_STORE [[V2]](s8), [[ADDR]](p0) :: (store singlethread monotonic 1 into %ir.addr)
; CHECK: [[V2:%[0-9]+]](s8) = G_LOAD [[ADDR]](p0) :: (load syncscope("singlethread") seq_cst 1 from %ir.addr)
; CHECK: G_STORE [[V2]](s8), [[ADDR]](p0) :: (store syncscope("singlethread") monotonic 1 into %ir.addr)
%v0 = load atomic i8, i8* %addr unordered, align 1
store atomic i8 %v0, i8* %addr monotonic, align 1

%v1 = load atomic i8, i8* %addr acquire, align 1
store atomic i8 %v1, i8* %addr release, align 1

%v2 = load atomic i8, i8* %addr singlethread seq_cst, align 1
store atomic i8 %v2, i8* %addr singlethread monotonic, align 1
%v2 = load atomic i8, i8* %addr syncscope("singlethread") seq_cst, align 1
store atomic i8 %v2, i8* %addr syncscope("singlethread") monotonic, align 1

ret void
}
@@ -16,6 +16,6 @@ define void @fence_singlethread() {
; IOS: ; COMPILER BARRIER
; IOS-NOT: dmb

  fence singlethread seq_cst
  fence syncscope("singlethread") seq_cst
  ret void
}
@@ -0,0 +1,19 @@
; RUN: llc -mtriple=amdgcn-amd-amdhsa -mcpu=gfx803 -stop-before=si-debugger-insert-nops < %s | FileCheck --check-prefix=GCN %s

; GCN-LABEL: name: syncscopes
; GCN: FLAT_STORE_DWORD killed %vgpr1_vgpr2, killed %vgpr0, 0, -1, 0, implicit %exec, implicit %flat_scr :: (volatile store syncscope("agent") seq_cst 4 into %ir.agent_out)
; GCN: FLAT_STORE_DWORD killed %vgpr4_vgpr5, killed %vgpr3, 0, -1, 0, implicit %exec, implicit %flat_scr :: (volatile store syncscope("workgroup") seq_cst 4 into %ir.workgroup_out)
; GCN: FLAT_STORE_DWORD killed %vgpr7_vgpr8, killed %vgpr6, 0, -1, 0, implicit %exec, implicit %flat_scr :: (volatile store syncscope("wavefront") seq_cst 4 into %ir.wavefront_out)
define void @syncscopes(
    i32 %agent,
    i32 addrspace(4)* %agent_out,
    i32 %workgroup,
    i32 addrspace(4)* %workgroup_out,
    i32 %wavefront,
    i32 addrspace(4)* %wavefront_out) {
entry:
  store atomic i32 %agent, i32 addrspace(4)* %agent_out syncscope("agent") seq_cst, align 4
  store atomic i32 %workgroup, i32 addrspace(4)* %workgroup_out syncscope("workgroup") seq_cst, align 4
  store atomic i32 %wavefront, i32 addrspace(4)* %wavefront_out syncscope("wavefront") seq_cst, align 4
  ret void
}
@@ -11,6 +11,6 @@ define void @fence_singlethread() {
; CHECK: @ COMPILER BARRIER
; CHECK-NOT: dmb

  fence singlethread seq_cst
  fence syncscope("singlethread") seq_cst
  ret void
}
@@ -14,7 +14,7 @@
# CHECK: %3(s16) = G_LOAD %0(p0) :: (load acquire 2)
# CHECK: G_STORE %3(s16), %0(p0) :: (store release 2)
# CHECK: G_STORE %2(s32), %0(p0) :: (store acq_rel 4)
# CHECK: G_STORE %1(s64), %0(p0) :: (store singlethread seq_cst 8)
# CHECK: G_STORE %1(s64), %0(p0) :: (store syncscope("singlethread") seq_cst 8)
name: atomic_memoperands
body: |
  bb.0:
@@ -25,6 +25,6 @@ body: |
    %3:_(s16) = G_LOAD %0(p0) :: (load acquire 2)
    G_STORE %3(s16), %0(p0) :: (store release 2)
    G_STORE %2(s32), %0(p0) :: (store acq_rel 4)
    G_STORE %1(s64), %0(p0) :: (store singlethread seq_cst 8)
    G_STORE %1(s64), %0(p0) :: (store syncscope("singlethread") seq_cst 8)
    RET_ReallyLR
...
@@ -0,0 +1,98 @@
# RUN: llc -mtriple=amdgcn-amd-amdhsa -mcpu=gfx803 -run-pass=none %s -o - | FileCheck --check-prefix=GCN %s

--- |
  ; ModuleID = '<stdin>'
  source_filename = "<stdin>"
  target datalayout = "e-p:32:32-p1:64:64-p2:64:64-p3:32:32-p4:64:64-p5:32:32-i64:64-v16:16-v24:32-v32:32-v48:64-v96:128-v192:256-v256:256-v512:512-v1024:1024-v2048:2048-n32:64"
  target triple = "amdgcn-amd-amdhsa"

  define void @syncscopes(i32 %agent, i32 addrspace(4)* %agent_out, i32 %workgroup, i32 addrspace(4)* %workgroup_out, i32 %wavefront, i32 addrspace(4)* %wavefront_out) #0 {
  entry:
    store atomic i32 %agent, i32 addrspace(4)* %agent_out syncscope("agent") seq_cst, align 4
    store atomic i32 %workgroup, i32 addrspace(4)* %workgroup_out syncscope("workgroup") seq_cst, align 4
    store atomic i32 %wavefront, i32 addrspace(4)* %wavefront_out syncscope("wavefront") seq_cst, align 4
    ret void
  }

  ; Function Attrs: convergent nounwind
  declare { i1, i64 } @llvm.amdgcn.if(i1) #1

  ; Function Attrs: convergent nounwind
  declare { i1, i64 } @llvm.amdgcn.else(i64) #1

  ; Function Attrs: convergent nounwind readnone
  declare i64 @llvm.amdgcn.break(i64) #2

  ; Function Attrs: convergent nounwind readnone
  declare i64 @llvm.amdgcn.if.break(i1, i64) #2

  ; Function Attrs: convergent nounwind readnone
  declare i64 @llvm.amdgcn.else.break(i64, i64) #2

  ; Function Attrs: convergent nounwind
  declare i1 @llvm.amdgcn.loop(i64) #1

  ; Function Attrs: convergent nounwind
  declare void @llvm.amdgcn.end.cf(i64) #1

  attributes #0 = { "target-cpu"="gfx803" }
  attributes #1 = { convergent nounwind }
  attributes #2 = { convergent nounwind readnone }

# GCN-LABEL: name: syncscopes
# GCN: FLAT_STORE_DWORD killed %vgpr0_vgpr1, killed %vgpr2, 0, -1, 0, implicit %exec, implicit %flat_scr :: (volatile store syncscope("agent") seq_cst 4 into %ir.agent_out)
# GCN: FLAT_STORE_DWORD killed %vgpr0_vgpr1, killed %vgpr2, 0, -1, 0, implicit %exec, implicit %flat_scr :: (volatile store syncscope("workgroup") seq_cst 4 into %ir.workgroup_out)
# GCN: FLAT_STORE_DWORD killed %vgpr0_vgpr1, killed %vgpr2, 0, -1, 0, implicit %exec, implicit %flat_scr :: (volatile store syncscope("wavefront") seq_cst 4 into %ir.wavefront_out)
...
---
name: syncscopes
alignment: 0
exposesReturnsTwice: false
legalized: false
regBankSelected: false
selected: false
tracksRegLiveness: true
liveins:
  - { reg: '%sgpr4_sgpr5' }
frameInfo:
  isFrameAddressTaken: false
  isReturnAddressTaken: false
  hasStackMap: false
  hasPatchPoint: false
  stackSize: 0
  offsetAdjustment: 0
  maxAlignment: 0
  adjustsStack: false
  hasCalls: false
  hasOpaqueSPAdjustment: false
  hasVAStart: false
  hasMustTailInVarArgFunc: false
body: |
  bb.0.entry:
    liveins: %sgpr4_sgpr5

    S_WAITCNT 0
    %sgpr0_sgpr1 = S_LOAD_DWORDX2_IMM %sgpr4_sgpr5, 8, 0 :: (non-temporal dereferenceable invariant load 8 from `i64 addrspace(2)* undef`)
    %sgpr6 = S_LOAD_DWORD_IMM %sgpr4_sgpr5, 0, 0 :: (non-temporal dereferenceable invariant load 4 from `i32 addrspace(2)* undef`)
    %sgpr2_sgpr3 = S_LOAD_DWORDX2_IMM %sgpr4_sgpr5, 24, 0 :: (non-temporal dereferenceable invariant load 8 from `i64 addrspace(2)* undef`)
    %sgpr7 = S_LOAD_DWORD_IMM %sgpr4_sgpr5, 16, 0 :: (non-temporal dereferenceable invariant load 4 from `i32 addrspace(2)* undef`)
    %sgpr8 = S_LOAD_DWORD_IMM %sgpr4_sgpr5, 32, 0 :: (non-temporal dereferenceable invariant load 4 from `i32 addrspace(2)* undef`)
    S_WAITCNT 127
    %vgpr0 = V_MOV_B32_e32 %sgpr0, implicit %exec, implicit-def %vgpr0_vgpr1, implicit %sgpr0_sgpr1
    %sgpr4_sgpr5 = S_LOAD_DWORDX2_IMM killed %sgpr4_sgpr5, 40, 0 :: (non-temporal dereferenceable invariant load 8 from `i64 addrspace(2)* undef`)
    %vgpr1 = V_MOV_B32_e32 killed %sgpr1, implicit %exec, implicit killed %sgpr0_sgpr1, implicit %sgpr0_sgpr1, implicit %exec
    %vgpr2 = V_MOV_B32_e32 killed %sgpr6, implicit %exec, implicit %exec
    FLAT_STORE_DWORD killed %vgpr0_vgpr1, killed %vgpr2, 0, -1, 0, implicit %exec, implicit %flat_scr :: (volatile store syncscope("agent") seq_cst 4 into %ir.agent_out)
    S_WAITCNT 112
    %vgpr0 = V_MOV_B32_e32 %sgpr2, implicit %exec, implicit-def %vgpr0_vgpr1, implicit %sgpr2_sgpr3
    %vgpr1 = V_MOV_B32_e32 killed %sgpr3, implicit %exec, implicit killed %sgpr2_sgpr3, implicit %sgpr2_sgpr3, implicit %exec
    %vgpr2 = V_MOV_B32_e32 killed %sgpr7, implicit %exec, implicit %exec
    FLAT_STORE_DWORD killed %vgpr0_vgpr1, killed %vgpr2, 0, -1, 0, implicit %exec, implicit %flat_scr :: (volatile store syncscope("workgroup") seq_cst 4 into %ir.workgroup_out)
    S_WAITCNT 112
    %vgpr0 = V_MOV_B32_e32 %sgpr4, implicit %exec, implicit-def %vgpr0_vgpr1, implicit %sgpr4_sgpr5
    %vgpr1 = V_MOV_B32_e32 killed %sgpr5, implicit %exec, implicit killed %sgpr4_sgpr5, implicit %sgpr4_sgpr5, implicit %exec
    %vgpr2 = V_MOV_B32_e32 killed %sgpr8, implicit %exec, implicit %exec
    FLAT_STORE_DWORD killed %vgpr0_vgpr1, killed %vgpr2, 0, -1, 0, implicit %exec, implicit %flat_scr :: (volatile store syncscope("wavefront") seq_cst 4 into %ir.wavefront_out)
    S_ENDPGM

...
@@ -1959,7 +1959,7 @@ entry:

define void @atomic_signal_fence_acquire() nounwind uwtable {
entry:
  fence singlethread acquire, !dbg !7
  fence syncscope("singlethread") acquire, !dbg !7
  ret void, !dbg !7
}
; CHECK-LABEL: atomic_signal_fence_acquire
@@ -1975,7 +1975,7 @@ entry:

define void @atomic_signal_fence_release() nounwind uwtable {
entry:
  fence singlethread release, !dbg !7
  fence syncscope("singlethread") release, !dbg !7
  ret void, !dbg !7
}
; CHECK-LABEL: atomic_signal_fence_release
@@ -1991,7 +1991,7 @@ entry:

define void @atomic_signal_fence_acq_rel() nounwind uwtable {
entry:
  fence singlethread acq_rel, !dbg !7
  fence syncscope("singlethread") acq_rel, !dbg !7
  ret void, !dbg !7
}
; CHECK-LABEL: atomic_signal_fence_acq_rel
@@ -2007,7 +2007,7 @@ entry:

define void @atomic_signal_fence_seq_cst() nounwind uwtable {
entry:
  fence singlethread seq_cst, !dbg !7
  fence syncscope("singlethread") seq_cst, !dbg !7
  ret void, !dbg !7
}
; CHECK-LABEL: atomic_signal_fence_seq_cst
@@ -0,0 +1,6 @@
define void @syncscope_1() {
  fence syncscope("agent") seq_cst
  fence syncscope("workgroup") seq_cst
  fence syncscope("wavefront") seq_cst
  ret void
}
@@ -0,0 +1,6 @@
define void @syncscope_2() {
  fence syncscope("image") seq_cst
  fence syncscope("agent") seq_cst
  fence syncscope("workgroup") seq_cst
  ret void
}
@@ -0,0 +1,11 @@
; RUN: llvm-link %S/Inputs/syncscope-1.ll %S/Inputs/syncscope-2.ll -S | FileCheck %s

; CHECK-LABEL: define void @syncscope_1
; CHECK: fence syncscope("agent") seq_cst
; CHECK: fence syncscope("workgroup") seq_cst
; CHECK: fence syncscope("wavefront") seq_cst

; CHECK-LABEL: define void @syncscope_2
; CHECK: fence syncscope("image") seq_cst
; CHECK: fence syncscope("agent") seq_cst
; CHECK: fence syncscope("workgroup") seq_cst
@@ -208,14 +208,14 @@ define void @fence_seq_cst(i32* %P1, i32* %P2) {
  ret void
}

; Can't DSE across a full singlethread fence
; Can't DSE across a full syncscope("singlethread") fence
define void @fence_seq_cst_st(i32* %P1, i32* %P2) {
; CHECK-LABEL: @fence_seq_cst_st(
; CHECK: store
; CHECK: fence singlethread seq_cst
; CHECK: fence syncscope("singlethread") seq_cst
; CHECK: store
  store i32 0, i32* %P1, align 4
  fence singlethread seq_cst
  fence syncscope("singlethread") seq_cst
  store i32 0, i32* %P1, align 4
  ret void
}
@@ -4,7 +4,7 @@

; CHECK-LABEL: define void @tinkywinky
; CHECK-NEXT: fence seq_cst
; CHECK-NEXT: fence singlethread acquire
; CHECK-NEXT: fence syncscope("singlethread") acquire
; CHECK-NEXT: ret void
; CHECK-NEXT: }

@@ -12,21 +12,21 @@ define void @tinkywinky() {
  fence seq_cst
  fence seq_cst
  fence seq_cst
  fence singlethread acquire
  fence singlethread acquire
  fence singlethread acquire
  fence syncscope("singlethread") acquire
  fence syncscope("singlethread") acquire
  fence syncscope("singlethread") acquire
  ret void
}

; CHECK-LABEL: define void @dipsy
; CHECK-NEXT: fence seq_cst
; CHECK-NEXT: fence singlethread seq_cst
; CHECK-NEXT: fence syncscope("singlethread") seq_cst
; CHECK-NEXT: ret void
; CHECK-NEXT: }

define void @dipsy() {
  fence seq_cst
  fence singlethread seq_cst
  fence syncscope("singlethread") seq_cst
  ret void
}
@@ -5,9 +5,9 @@ target triple = "x86_64-unknown-linux-gnu"
define void @test1(i32* ()*) {
entry:
  %1 = call i32* %0() #0
  fence singlethread seq_cst
  fence syncscope("singlethread") seq_cst
  %2 = load i32, i32* %1, align 4
  fence singlethread seq_cst
  fence syncscope("singlethread") seq_cst
  %3 = icmp eq i32 %2, 0
  br i1 %3, label %fail, label %pass

@@ -20,9 +20,9 @@ pass: ; preds = %fail, %top

; CHECK-LABEL: @test1(
; CHECK: %[[call:.*]] = call i32* %0()
; CHECK: fence singlethread seq_cst
; CHECK: fence syncscope("singlethread") seq_cst
; CHECK: load i32, i32* %[[call]], align 4
; CHECK: fence singlethread seq_cst
; CHECK: fence syncscope("singlethread") seq_cst

attributes #0 = { nounwind readnone }
@@ -180,10 +180,11 @@ TEST_F(AliasAnalysisTest, getModRefInfo) {
  auto *VAArg1 = new VAArgInst(Addr, PtrType, "vaarg", BB);
  auto *CmpXChg1 = new AtomicCmpXchgInst(
      Addr, ConstantInt::get(IntType, 0), ConstantInt::get(IntType, 1),
      AtomicOrdering::Monotonic, AtomicOrdering::Monotonic, CrossThread, BB);
      AtomicOrdering::Monotonic, AtomicOrdering::Monotonic,
      SyncScope::System, BB);
  auto *AtomicRMW =
      new AtomicRMWInst(AtomicRMWInst::Xchg, Addr, ConstantInt::get(IntType, 1),
                        AtomicOrdering::Monotonic, CrossThread, BB);
                        AtomicOrdering::Monotonic, SyncScope::System, BB);

  ReturnInst::Create(C, nullptr, BB);
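
As the unit-test update shows, the AtomicCmpXchgInst and AtomicRMWInst constructors now take a SyncScope::ID where they previously took the SynchronizationScope enum. A small sketch of both constructors with the new signatures, reusing the fixture's Addr/IntType/BB names; the single-thread variant is added here for contrast and is not in the original test::

    new llvm::AtomicCmpXchgInst(Addr, llvm::ConstantInt::get(IntType, 0),
                                llvm::ConstantInt::get(IntType, 1),
                                llvm::AtomicOrdering::Monotonic,
                                llvm::AtomicOrdering::Monotonic,
                                llvm::SyncScope::System, BB);      // cross-thread
    new llvm::AtomicRMWInst(llvm::AtomicRMWInst::Xchg, Addr,
                            llvm::ConstantInt::get(IntType, 1),
                            llvm::AtomicOrdering::Monotonic,
                            llvm::SyncScope::SingleThread, BB);    // signal-handler scope
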