oradax: convert get_user_pages() --> pin_user_pages()

This code was using get_user_pages_fast(), in a "Case 2" scenario
(DMA/RDMA), using the categorization from [1]. That means that it's
time to convert the get_user_pages_fast() + put_page() calls to
pin_user_pages_fast() + unpin_user_pages() calls.
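
As a general illustration of that conversion (a sketch only, not code from
this patch; the my_pin_page()/my_unpin_page() helpers are hypothetical),
the pattern looks roughly like this:

    #include <linux/errno.h>
    #include <linux/mm.h>

    /* Pin a single user page that will be a DMA target (sketch only). */
    static int my_pin_page(unsigned long uaddr, struct page **page)
    {
    	int ret;

    	/* pin_user_pages_fast() replaces get_user_pages_fast() here. */
    	ret = pin_user_pages_fast(uaddr, 1, FOLL_WRITE, page);
    	return (ret == 1) ? 0 : -EFAULT;
    }

    /* Release the pin; replaces the set_page_dirty() + put_page() pair. */
    static void my_unpin_page(struct page **page, bool device_wrote)
    {
    	unpin_user_pages_dirty_lock(page, 1, device_wrote);
    }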

There is some helpful background in [2]: basically, this is a small
part of fixing a long-standing disconnect between pinning pages and
file systems' use of those pages.

[1] Documentation/core-api/pin_user_pages.rst

[2] "Explicit pinning of user-space pages":
    https://lwn.net/Articles/807108/

Cc: David S. Miller <davem@davemloft.net>
Cc: sparclinux@vger.kernel.org
Signed-off-by: John Hubbard <jhubbard@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
---
 1 file changed, 3 insertions(+), 5 deletions(-)


@@ -410,9 +410,7 @@ static void dax_unlock_pages(struct dax_ctx *ctx, int ccb_index, int nelem)
 
 			if (p) {
 				dax_dbg("freeing page %p", p);
-				if (j == OUT)
-					set_page_dirty(p);
-				put_page(p);
+				unpin_user_pages_dirty_lock(&p, 1, j == OUT);
 				ctx->pages[i][j] = NULL;
 			}
 		}
@@ -425,13 +423,13 @@ static int dax_lock_page(void *va, struct page **p)
 	dax_dbg("uva %p", va);
-	ret = get_user_pages_fast((unsigned long)va, 1, FOLL_WRITE, p);
+	ret = pin_user_pages_fast((unsigned long)va, 1, FOLL_WRITE, p);
 	if (ret == 1) {
 		dax_dbg("locked page %p, for VA %p", *p, va);
 		return 0;
 	}
-	dax_dbg("get_user_pages failed, va=%p, ret=%d", va, ret);
+	dax_dbg("pin_user_pages failed, va=%p, ret=%d", va, ret);
 	return -1;
 }