x86: fix global_flush_tlb() bug
While we were reviewing pageattr_32/64.c for unification,
Thomas Gleixner noticed the following serious SMP bug in
global_flush_tlb():
down_read(&init_mm.mmap_sem);
list_replace_init(&deferred_pages, &l);
up_read(&init_mm.mmap_sem);
This is SMP-unsafe, because list_replace_init() run on two CPUs in
parallel can corrupt the list.
This bug was introduced about a year ago in the 64-bit tree:
commit ea7322decb
Author: Andi Kleen <ak@suse.de>
Date: Thu Dec 7 02:14:05 2006 +0100
[PATCH] x86-64: Speed and clean up cache flushing in change_page_attr
down_read(&init_mm.mmap_sem);
- dpage = xchg(&deferred_pages, NULL);
+ list_replace_init(&deferred_pages, &l);
up_read(&init_mm.mmap_sem);
The xchg()-based version was SMP-safe, but list_replace_init() is not.
So this "cleanup" introduced a nasty bug.
Why this bug never became prominent is a mystery; it can probably be
explained by the (still) relative obscurity of the x86_64 architecture.
The safe fix for now is to write-lock init_mm.mmap_sem.
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
commit 9a24d04a3c
parent 4fa4d23fa2
@@ -230,9 +230,14 @@ void global_flush_tlb(void)
 	struct page *pg, *next;
 	struct list_head l;
 
-	down_read(&init_mm.mmap_sem);
+	/*
+	 * Write-protect the semaphore, to exclude two contexts
+	 * doing a list_replace_init() call in parallel and to
+	 * exclude new additions to the deferred_pages list:
+	 */
+	down_write(&init_mm.mmap_sem);
 	list_replace_init(&deferred_pages, &l);
-	up_read(&init_mm.mmap_sem);
+	up_write(&init_mm.mmap_sem);
 
 	flush_map(&l);
 