mm, page_owner: don't grab zone->lock for init_pages_in_zone()

init_pages_in_zone() is run under zone->lock, which means a long lock
time and disabled interrupts on large machines.  This is currently not
an issue since it runs early in boot, but a later patch will change
that.

However, like other pfn scanners, we don't actually need zone->lock even
when other cpus are running.  The only potentially dangerous operation
here is reading bogus buddy page owner due to race, and we already know
how to handle that.  The worst that can happen is that we skip some
early allocated pages, which should not affect the debugging power of
page_owner noticeably.

Link: http://lkml.kernel.org/r/20170720134029.25268-4-vbabka@suse.cz
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Yang Shi <yang.shi@linaro.org>
Cc: Laura Abbott <labbott@redhat.com>
Cc: Vinayak Menon <vinmenon@codeaurora.org>
Cc: zhong jiang <zhongjiang@huawei.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
This commit is contained in:
Vlastimil Babka 2017-09-06 16:20:51 -07:00, committed by Linus Torvalds
parent 0fc542b7dd
commit 1090302794
1 changed file with 10 additions and 6 deletions


@@ -562,11 +562,17 @@ static void init_pages_in_zone(pg_data_t *pgdat, struct zone *zone)
 			continue;
 
 		/*
-		 * We are safe to check buddy flag and order, because
-		 * this is init stage and only single thread runs.
+		 * To avoid having to grab zone->lock, be a little
+		 * careful when reading buddy page order. The only
+		 * danger is that we skip too much and potentially miss
+		 * some early allocated pages, which is better than
+		 * heavy lock contention.
 		 */
 		if (PageBuddy(page)) {
-			pfn += (1UL << page_order(page)) - 1;
+			unsigned long order = page_order_unsafe(page);
+
+			if (order > 0 && order < MAX_ORDER)
+				pfn += (1UL << order) - 1;
 			continue;
 		}
 
@@ -585,6 +591,7 @@ static void init_pages_in_zone(pg_data_t *pgdat, struct zone *zone)
 			__set_page_owner_handle(page_ext, early_handle, 0, 0);
 			count++;
 		}
+		cond_resched();
 	}
 
 	pr_info("Node %d, zone %8s: page owner found early allocated %lu pages\n",
@@ -595,15 +602,12 @@ static void init_zones_in_node(pg_data_t *pgdat)
 {
 	struct zone *zone;
 	struct zone *node_zones = pgdat->node_zones;
-	unsigned long flags;
 
 	for (zone = node_zones; zone - node_zones < MAX_NR_ZONES; ++zone) {
 		if (!populated_zone(zone))
 			continue;
 
-		spin_lock_irqsave(&zone->lock, flags);
 		init_pages_in_zone(pgdat, zone);
-		spin_unlock_irqrestore(&zone->lock, flags);
 	}
 }