Tony found that on his setup a memory block size of 512M causes a crash
during boot:
BUG: unable to handle kernel paging request at ffffea0074000020
IP: [<ffffffff81670527>] get_nid_for_pfn+0x17/0x40
PGD 128ffcb067 PUD 128ffc9067 PMD 0
Oops: 0000 [#1] SMP
Modules linked in:
CPU: 0 PID: 1 Comm: swapper/0 Not tainted 4.2.0-rc8 #1
...
Call Trace:
[<ffffffff81453b56>] ? register_mem_sect_under_node+0x66/0xe0
[<ffffffff81453eeb>] register_one_node+0x17b/0x240
[<ffffffff81b1f1ed>] ? pci_iommu_alloc+0x6e/0x6e
[<ffffffff81b1f229>] topology_init+0x3c/0x95
[<ffffffff8100213d>] do_one_initcall+0xcd/0x1f0
The system has a non-contiguous RAM address map:
BIOS-e820: [mem 0x0000001300000000-0x0000001cffffffff] usable
BIOS-e820: [mem 0x0000001d70000000-0x0000001ec7ffefff] usable
BIOS-e820: [mem 0x0000001f00000000-0x0000002bffffffff] usable
BIOS-e820: [mem 0x0000002c18000000-0x0000002d6fffefff] usable
BIOS-e820: [mem 0x0000002e00000000-0x00000039ffffffff] usable
So a memory block can begin with sections that are not present. For
example, with the e820 map above:

memory block : [0x2c00000000, 0x2c20000000) 512M

has its first three sections not present, because usable RAM in that
block only starts at 0x2c18000000.
register_mem_sect_under_node() currently assumes that the first section
of a memory block is present, but the block's section number range
[start_section_nr, end_section_nr] can include sections that are not
present.  On arches that support vmemmap, no memmap (struct page area)
is set up for sections that are not present, so touching their struct
pages faults.  Fix this by skipping the pfn ranges that belong to
absent sections.
Also fix unregister_mem_sect_under_nodes(), which assumed one section
per memory block.
Reported-by: Tony Luck <tony.luck@intel.com>
Tested-by: Tony Luck <tony.luck@intel.com>
Fixes: bdee237c03 ("x86: mm: Use 2GB memory block size on large memory x86-64 systems")
Fixes: 982792c782 ("x86, mm: probe memory block size for generic x86 64bit")
Signed-off-by: Yinghai Lu <yinghai@kernel.org>
Cc: stable@vger.kernel.org #v3.15
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>