[PATCH] mm: schedule find_trylock_page() removal

find_trylock_page() is an odd interface in that it doesn't take a reference
like the others.  Now that XFS no longer uses it, and its last remaining
caller actually wants an elevated refcount, opencode that callsite and
schedule find_trylock_page() for removal.
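
A sketch of the open-coded pattern, for reference only (the helper name is
made up here; the calls are the same ones the swapfile hunk below uses):

/*
 * Illustrative sketch, not part of this patch: take a reference with
 * find_get_page(), then try for the page lock ourselves, dropping the
 * reference again if the lock is already held.  Unlike
 * find_trylock_page(), the page comes back locked *and* referenced.
 */
#include <linux/pagemap.h>	/* find_get_page(), page_cache_release() */

static struct page *grab_locked_page(struct address_space *mapping,
                                     unsigned long index)
{
        struct page *page;

        page = find_get_page(mapping, index);   /* takes a reference */
        if (page && TestSetPageLocked(page)) {
                /* someone else holds the lock: give the reference back */
                page_cache_release(page);
                page = NULL;
        }
        return page;    /* locked and referenced, or NULL */
}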

Signed-off-by: Nick Piggin <npiggin@suse.de>
Acked-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
commit 93fac7041f (parent 9bf9e89c3d)
Author:    Nick Piggin, 2006-03-31 02:29:56 -08:00
Committer: Linus Torvalds
3 changed files with 24 additions and 6 deletions

--- a/Documentation/feature-removal-schedule.txt
+++ b/Documentation/feature-removal-schedule.txt
@@ -241,3 +241,15 @@ Why: The USB subsystem has changed a lot over time, and it has been
 Who:    Greg Kroah-Hartman <gregkh@suse.de>
 
 ---------------------------
+
+What:   find_trylock_page
+When:   January 2007
+Why:    The interface no longer has any callers left in the kernel. It
+        is an odd interface (compared with other find_*_page functions), in
+        that it does not take a refcount to the page, only the page lock.
+        It should be replaced with find_get_page or find_lock_page if possible.
+        This feature removal can be reevaluated if users of the interface
+        cannot cleanly use something else.
+Who:    Nick Piggin <npiggin@suse.de>
+
+---------------------------
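
Of the two replacements the entry suggests, find_lock_page() suits callers
that can sleep: it returns the page locked and with a reference already
taken.  A minimal sketch, with illustrative names (not from this patch):

#include <linux/pagemap.h>	/* find_lock_page(), unlock_page() */

/* Sketch only: a sleeping user of the pagecache lookup. */
static void frob_cached_page(struct address_space *mapping,
                             unsigned long index)
{
        struct page *page;

        page = find_lock_page(mapping, index);  /* may sleep on the lock */
        if (!page)
                return;                 /* not in the pagecache */

        /* ... work on the locked, referenced page ... */

        unlock_page(page);
        page_cache_release(page);       /* drop the reference we were given */
}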

--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -72,8 +72,8 @@ extern struct page * find_get_page(struct address_space *mapping,
                                unsigned long index);
 extern struct page * find_lock_page(struct address_space *mapping,
                                unsigned long index);
-extern struct page * find_trylock_page(struct address_space *mapping,
-                               unsigned long index);
+extern __deprecated_for_modules struct page * find_trylock_page(
+                               struct address_space *mapping, unsigned long index);
 extern struct page * find_or_create_page(struct address_space *mapping,
                                unsigned long index, gfp_t gfp_mask);
 unsigned find_get_pages(struct address_space *mapping, pgoff_t start,
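
For context on the annotation above: __deprecated_for_modules boils down to
GCC's deprecated attribute applied only when MODULE is defined, so in-tree
callers keep building quietly while modular code gets a compile-time
warning.  A self-contained sketch of the idea (not the exact kernel
definition, which lives in linux/compiler.h):

/* Sketch of the mechanism only. */
struct page;
struct address_space;

#ifdef MODULE
# define __deprecated_for_modules __attribute__((deprecated))
#else
# define __deprecated_for_modules
#endif

/* A module calling this now gets a deprecation warning at compile time. */
extern __deprecated_for_modules struct page *find_trylock_page(
                struct address_space *mapping, unsigned long index);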

--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -397,18 +397,24 @@ void free_swap_and_cache(swp_entry_t entry)
 
         p = swap_info_get(entry);
         if (p) {
-                if (swap_entry_free(p, swp_offset(entry)) == 1)
-                        page = find_trylock_page(&swapper_space, entry.val);
+                if (swap_entry_free(p, swp_offset(entry)) == 1) {
+                        page = find_get_page(&swapper_space, entry.val);
+                        if (page && unlikely(TestSetPageLocked(page))) {
+                                page_cache_release(page);
+                                page = NULL;
+                        }
+                }
                 spin_unlock(&swap_lock);
         }
         if (page) {
                 int one_user;
 
                 BUG_ON(PagePrivate(page));
-                page_cache_get(page);
                 one_user = (page_count(page) == 2);
                 /* Only cache user (+us), or swap space full? Free it! */
-                if (!PageWriteback(page) && (one_user || vm_swap_full())) {
+                /* Also recheck PageSwapCache after page is locked (above) */
+                if (PageSwapCache(page) && !PageWriteback(page) &&
+                                        (one_user || vm_swap_full())) {
                         delete_from_swap_cache(page);
                         SetPageDirty(page);
                 }