iommu/arm-smmu-v3: Document ordering guarantees of command insertion
It turns out that we've always relied on some subtle ordering guarantees when inserting commands into the SMMUv3 command queue. With the recent changes to elide locking when possible, these guarantees become more subtle and even more important. Add a comment documenting the barrier semantics of command insertion so that we don't have to derive the behaviour from scratch each time it comes up on the list.

Signed-off-by: Will Deacon <will@kernel.org>
commit 05cbaf4ddd (parent 2af2e72b18)
@@ -1286,6 +1286,22 @@ static void arm_smmu_cmdq_write_entries(struct arm_smmu_cmdq *cmdq, u64 *cmds,
 	}
 }
 
+/*
+ * This is the actual insertion function, and provides the following
+ * ordering guarantees to callers:
+ *
+ * - There is a dma_wmb() before publishing any commands to the queue.
+ *   This can be relied upon to order prior writes to data structures
+ *   in memory (such as a CD or an STE) before the command.
+ *
+ * - On completion of a CMD_SYNC, there is a control dependency.
+ *   This can be relied upon to order subsequent writes to memory (e.g.
+ *   freeing an IOVA) after completion of the CMD_SYNC.
+ *
+ * - Command insertion is totally ordered, so if two CPUs each race to
+ *   insert their own list of commands then all of the commands from one
+ *   CPU will appear before any of the commands from the other CPU.
+ */
 static int arm_smmu_cmdq_issue_cmdlist(struct arm_smmu_device *smmu,
 				       u64 *cmds, int n, bool sync)
 {