xsk: Clear page contiguity bit when unmapping pool
When an XSK pool gets mapped, xp_check_dma_contiguity() sets bit 0x1
in the stored DMA address of each page whose successor follows it in
ascending order at a 4K stride. The problem is that this bit does not
get cleared before unmapping. As a result, a lot of warnings from
iommu_dma_unmap_page() are seen in dmesg, indicating that lookups by
iommu_iova_to_phys() fail.
Fixes: 2b43470add ("xsk: Introduce AF_XDP buffer allocation API")
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Magnus Karlsson <magnus.karlsson@intel.com>
Link: https://lore.kernel.org/bpf/20220628091848.534803-1-ivan.malov@oktetlabs.ru
commit 512d1999b8 (parent 32df6fe110)
@@ -332,6 +332,7 @@ static void __xp_dma_unmap(struct xsk_dma_map *dma_map, unsigned long attrs)
 	for (i = 0; i < dma_map->dma_pages_cnt; i++) {
 		dma = &dma_map->dma_pages[i];
 		if (*dma) {
+			*dma &= ~XSK_NEXT_PG_CONTIG_MASK;
 			dma_unmap_page_attrs(dma_map->dev, *dma, PAGE_SIZE,
 					     DMA_BIDIRECTIONAL, attrs);
 			*dma = 0;