iommu/io-pgtable-arm: Fix race handling in split_blk_unmap()

In removing the pagetable-wide lock, we gained the possibility of the
vanishingly unlikely case where we have a race between two concurrent
unmappers splitting the same block entry. The logic to handle this is
fairly straightforward - whoever loses the race frees their partial
next-level table and instead dereferences the winner's newly-installed
entry in order to fall back to a regular unmap, which intentionally
echoes the pre-existing case of recursively splitting a 1GB block down
to 4KB pages by installing a full table of 2MB blocks first.

Unfortunately, the chump who implemented that logic failed to update the
condition check for that fallback, meaning that if said race occurs at
the last level (where the loser's unmap_idx is valid) then the unmap
won't actually happen. Fix that to properly account for both the race
and recursive cases.
Fixes: 2c3d273eab ("iommu/io-pgtable-arm: Support lockless operation")
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
[will: re-jig control flow to avoid duplicate cmpxchg test]
Signed-off-by: Will Deacon <will.deacon@arm.com>
@@ -574,15 +574,14 @@ static size_t arm_lpae_split_blk_unmap(struct arm_lpae_io_pgtable *data,
 			return 0;
 
 		tablep = iopte_deref(pte, data);
+	} else if (unmap_idx >= 0) {
+		io_pgtable_tlb_add_flush(&data->iop, iova, size, size, true);
+		return size;
 	}
 
-	if (unmap_idx < 0)
-		return __arm_lpae_unmap(data, iova, size, lvl, tablep);
-
-	io_pgtable_tlb_add_flush(&data->iop, iova, size, size, true);
-	return size;
+	return __arm_lpae_unmap(data, iova, size, lvl, tablep);
 }
 
 static size_t __arm_lpae_unmap(struct arm_lpae_io_pgtable *data,
 			       unsigned long iova, size_t size, int lvl,
 			       arm_lpae_iopte *ptep)
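
For context, here is a sketch of how the tail of arm_lpae_split_blk_unmap() reads once this hunk is applied. It is reconstructed from the diff above together with the surrounding code it references (pte, tablep, ptep, blk_pte, cfg, tablesz and unmap_idx are the function's existing locals); the explanatory comments are added here and are not part of the patch itself.

	/*
	 * Try to swap the old block entry for our newly built table.
	 * arm_lpae_install_table() hands back the entry it actually found,
	 * so anything other than blk_pte means a concurrent unmapper won
	 * the race and installed its own table first.
	 */
	pte = arm_lpae_install_table(tablep, ptep, blk_pte, cfg);
	if (pte != blk_pte) {
		/* Race lost: free our partial next-level table... */
		__arm_lpae_free_pages(tablep, tablesz, cfg);
		/* ...and bail out if the winner's entry isn't a table. */
		if (iopte_type(pte, lvl - 1) != ARM_LPAE_PTE_TYPE_TABLE)
			return 0;
		/* Otherwise unmap through the winner's table instead. */
		tablep = iopte_deref(pte, data);
	} else if (unmap_idx >= 0) {
		/*
		 * We installed our own table and already left the target
		 * slot empty, so the unmap is complete at this level.
		 */
		io_pgtable_tlb_add_flush(&data->iop, iova, size, size, true);
		return size;
	}

	/*
	 * Shared fallback for both the race-loser case and the recursive
	 * split case (unmap_idx < 0): descend into tablep and perform a
	 * regular unmap there. The old code gated this call on
	 * unmap_idx < 0 alone, so a loser with a valid unmap_idx reported
	 * size as unmapped without the winner's entry ever being cleared.
	 */
	return __arm_lpae_unmap(data, iova, size, lvl, tablep);
}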